Decixa (beta)
The State of x402 · Issue 01 · April 2026

x402 has 5,500 working APIs. Only 32 are for storing data or sending messages.

A monthly report from Decixa — the search and decision layer for AI agents on top of x402. We track every x402 listing we can find across the major directories, apply an automated quality filter, probe what survives, and rank what works. This report is what we see from that vantage point, made public.

Snapshot: April 25–26, 2026

30,600 tracked: every listing across the major directories
14,456 filtered: set aside by an automated quality filter
9,246 probed: reached the HTTP-402 probe pipeline
5,523 verified live: returned a valid HTTP 402 handshake
01

Why Decixa is publishing this

x402 is the HTTP-402-based payment protocol that lets AI agents pay APIs in USDC, on the spot, without contracts or sign-ups. Coinbase open-sourced it in 2025. Since then, an ecosystem of directories has grown around it — Bazaar (the official Coinbase Developer Platform discovery layer) and a handful of community catalogs.

That ecosystem now lists more than 30,000 endpoints. But listed is not the same as works. And for an agent — which depends on each step succeeding to keep its execution chain intact — a single dead listing is not an inconvenience; it’s a structural failure that cascades through whatever the agent was trying to do. In practice, the agent just stops. So the gap between “listed” and “works” is not cosmetic: it’s the difference between an agent that completes a task and one that dies mid-execution.

We built Decixa precisely because that gap is wide enough to need its own search layer. To do that job, Decixa already tracks every x402 listing we can find, applies an automated quality filter, probes the survivors, and ranks the ones that respond. That gives us a vantage point on the whole ecosystem from end to end.

This is the first of a planned monthly Decixa report sharing that view publicly, with the methodology open and the failure modes named. The headline finding this month: of the 5,523 verified-live endpoints, only 32 will let an agent write — store data or send a message. The rest of this report walks through how we got there, and what the rest of the working pool looks like.

02

How we measure it

We treat x402 listings the way a search engine treats web pages — you don’t index everything you crawl, and you don’t probe everything you index. Each step in the pipeline is a strict filter applied before the next.

30,600 tracked: every listing we found across the major x402 directories
  ↓ apply quality filter
14,456 set aside: metadata makes them poor probe candidates (see § 04)
  ↓ probe what survives
9,246 probed: HTTP GET, classified into 9 outcomes (see § 05)
  ↓ keep what returns 402
5,523 verified live: returned a valid HTTP 402 handshake. This is the working pool.
03

The numbers, in three layers

One numerator. Three denominators. All three are correct — they just answer different questions.

18% · Raw view
Of everything that calls itself x402, what fraction works today? (5,523 / 30,600)

28% · Curated view
Of listings that survived directory review and have a live host, what fraction works? (5,523 / 19,880)

60% · Probed view
Of listings the probe actually tried, what fraction works? (5,523 / 9,246. API-level; the route-level pass rate is 55.5%, see methodology.)

The raw number (18%) is the headline if you’re an analyst comparing ecosystems. The probed number (60%) is the headline if you’re a developer asking “should I bother calling these endpoints?” Both are honest.
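All three views are one division each. A quick sketch, using the snapshot counts from this report, reproduces the headline percentages:

```python
# Snapshot counts from this report (April 2026).
verified = 5_523
denominators = {
    "raw": 30_600,      # everything tracked
    "curated": 19_880,  # approved-and-alive listings
    "probed": 9_246,    # listings that reached the probe pipeline
}

# Percentage of each denominator that is verified live, rounded.
views = {label: round(100 * verified / denom)
         for label, denom in denominators.items()}
print(views)  # {'raw': 18, 'curated': 28, 'probed': 60}
```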

04

The quality filter — what we set aside, and why

Of the 19,880 approved-and-alive listings, 14,456 carry an explicit excluded-reason flag. They were tracked, kept in the directory, but routed away from the probe pipeline because the metadata alone told us probing them wouldn’t be useful.

This is the part of the methodology readers most often misread. We skip these listings because there is a mechanical reason not to probe them — not because we’re behind. Here is the full breakdown.

low_information: 11,804 (81.7%)
name is unusable, description has no provider-specific content, or the classifier couldn't pick a function (confidence < 0.60).

x402_non_compliant: 2,465 (17.1%)
probed at least once, returned something other than HTTP 402. Out of x402 scope by definition.

non_production_host: 76 (0.5%)
hostname matches staging / dev / test / localhost / 127.x / .local.

url_encoded_name: 54 (0.4%)
URL-encoded template parameters (%7Bhash%7D, %7Baddress%7D), or name is an Ethereum address / UUID / IPFS CID.

invalid_name_pattern: 20 (0.1%)
name matches a known broken pattern (encoded-only segments, etc.).

address_in_url: 14 (0.1%)
endpoint path contains a parameterized address slot, not a reusable resource.

invalid_endpoint: 12 (0.1%)
endpoint is an IPv4 literal or a temporary tunneling domain (*.ngrok.io, *.tunnelmole.net).

service_flow_endpoint: 9 (0.1%)
per-session flow URL like /pay/{UUID}, not a reusable endpoint.

template_description: 2 (0.0%)
description is a known template string with no provider-specific content.
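Several of these reasons are pure string checks on the listing metadata. A minimal sketch of three of them — the exact patterns Decixa uses are not published, so `exclusion_reason` and the regexes below are illustrative assumptions, not the production rules:

```python
import re

# Hedged approximations of three metadata heuristics from this section.
NON_PROD = re.compile(r"(^|\.)(staging|dev|test|localhost)(\.|$)|^127\.|\.local$")
TUNNEL = re.compile(r"\.(ngrok\.io|tunnelmole\.net)$")       # temporary tunnels
IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")                # bare IP literal
ENCODED = re.compile(r"%7B.+%7D", re.IGNORECASE)             # %7B...%7D = {...}

def exclusion_reason(host: str, name: str):
    """Return an excluded-reason flag from metadata alone, or None."""
    if NON_PROD.search(host):
        return "non_production_host"
    if IPV4.match(host) or TUNNEL.search(host):
        return "invalid_endpoint"
    if ENCODED.search(name):
        return "url_encoded_name"
    return None
```

Listings that return a reason never reach the probe; only `None` survives to the HTTP stage.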

New this month — provider-level blocklist

Two providers were responsible for 3,384 listings of effectively duplicated functionality this month — the same handful of endpoint types repeated across thousands of auto-generated names. In one case, parameterized over Solana token addresses; in the other, behind a single proxy domain with hex-suffix names. After the second batch in two weeks, both were placed on a manual provider blocklist. Listings from blocklisted providers are now rejected at intake.

We don’t publish provider names — the blocklist is silent — but we do count them: 2 entries as of April 2026, accounting for ~3,400 rejections this month. This is a deliberate human decision, not an automated heuristic.

05

Probe outcomes — what happened to the 9,246 we did probe

After the filter, the 9,246 listings that reach the probe pipeline get classified into one of nine outcomes. Two numbers worth pausing on: phantom domains (7.4%) and non-402 responses (34%). Together, that’s 41.4% of probed listings — listings tell you what was claimed, probes tell you what works.

verified_402 (passed): 5,141 (55.5%)
non_402_response: 3,148 (34.0%)
dns_error (phantom domain): 684 (7.4%)
timeout: 158 (1.7%)
auth_required: 68 (0.7%)
ssl_error: 35 (0.4%)
waf_blocked: 8 (0.1%)
connection_refused: 8 (0.1%)
unknown: 7 (0.1%)

Phantom domains (7.4%): 684 endpoints point to hostnames that no longer resolve via DNS. Listed, indexed, propagated — and then the domain lapsed. Non-402 responses (34%): endpoints respond, but not with the payment-required handshake. Schema drift, unimplemented protocol, or auth gates intercepting before x402 can take over.

06

Capability distribution — what kinds of APIs work?

Of the 5,523 verified-live APIs, here is the distribution by capability — the verb axis of our taxonomy. Grouped: Read 76% / Compute 10% / Write 14%.

Extract (Read): 2,407 (43.6%)
Analyze (Read): 1,222 (22.1%)
Search (Read): 577 (10.4%)
Transact (Write): 511 (9.3%)
Generate (Compute): 334 (6.0%)
Modify (Write): 231 (4.2%)
Transform (Compute): 209 (3.8%)
Store (Write): 17 (0.3%)
Communicate (Write): 15 (0.3%)
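The grouped split quoted above (Read 76 / Compute 10 / Write 14) falls out of the verb table directly:

```python
# Verb -> group mapping and verified-live counts from this section.
GROUP = {"Extract": "Read", "Analyze": "Read", "Search": "Read",
         "Generate": "Compute", "Transform": "Compute",
         "Transact": "Write", "Modify": "Write",
         "Store": "Write", "Communicate": "Write"}

COUNTS = {"Extract": 2407, "Analyze": 1222, "Search": 577,
          "Transact": 511, "Generate": 334, "Modify": 231,
          "Transform": 209, "Store": 17, "Communicate": 15}

total = sum(COUNTS.values())  # 5,523 verified-live APIs

# Roll verbs up into their groups, then express as rounded percentages.
shares: dict[str, int] = {}
for verb, n in COUNTS.items():
    shares[GROUP[verb]] = shares.get(GROUP[verb], 0) + n
shares = {group: round(100 * n / total) for group, n in shares.items()}
print(shares)  # {'Read': 76, 'Write': 14, 'Compute': 10}
```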

Most x402 APIs today read or analyze data and charge per call. That’s the easiest product to ship. It also matches Web 1.0 in shape — the early commercial web was overwhelmingly “fetch this, return that,” and only later did write-side APIs (forms, payments, messaging) catch up.

07

The capability gap — 32 endpoints

Inside the Write category, Transact accounts for almost all of it: 511 endpoints — bridges, on-chain payments, settlement. The rest of the agent-economy write surface — actually changing state in third-party systems — is thin.

17 · Store
Database, key-value, file storage. Across the entire verified ecosystem.

15 · Communicate
SMS, email, push, chat. Across the entire verified ecosystem.

That’s 32 endpoints, total, between two of the most basic things an agent might want to pay for: “remember this” and “tell someone.” If you’re looking for an x402 product to build, that’s where the index is hungry.

This is not a critique of the ecosystem; it’s a description. The directories are full of read APIs because read is what shipped first. Whoever ships the first credible x402 SMS provider or x402 KV store will have a category mostly to themselves.

08

Domain distribution — a parallel view

The capability axis is one way to read the ecosystem. The other is by domain — what kind of subject matter the API is about. We ran a k-means clustering pass over the embedding of every verified-live description and asked Claude to name the clusters. At k=9:

Discovery & Search: 1,049 (19.1%)
On-chain Data Extraction: 954 (17.4%)
Crypto Market Data: 850 (15.5%)
Utility Data APIs: 845 (15.4%)
Crypto Asset Intelligence: 580 (10.6%)
Retail Gift Cards: 424 (7.7%)
Security Risk Scoring: 410 (7.5%)
Data Search APIs: 249 (4.5%)
Social Media Micropayments: 130 (2.4%)

Three of the nine clusters — on-chain extraction, market data, asset intelligence — are crypto-specific, and together they’re 43% of the verified ecosystem. Add the crypto-discovery slice inside the Discovery & Search cluster and the share is closer to 55%. The verb axis says “this ecosystem reads more than it writes.” The domain axis says “this ecosystem reads about crypto more than anything else.” Both are true.

09

Health — of the 5,523 working APIs, how many actually run?

Verified isn’t the same as healthy. We probe each verified endpoint on a schedule and track uptime over the last 7 days plus p95 latency.

Uptime (7-day)
99–100%: 3,975 (72.0%)
95–99%: 921 (16.7%)
80–95%: 2 (0.0%)
50–80%: 141 (2.6%)
<50%: 147 (2.7%)
no measurement: 337 (6.1%)

88.7% of the verified pool maintains ≥95% uptime — production-grade. 2.7% drop below 50%, mostly endpoints in the process of going dark.

P95 latency
<200 ms (fast): 1,336 (24.2%)
200–500 ms (medium): 2,898 (52.5%)
500–1000 ms (slow): 663 (12.0%)
≥1000 ms (very slow): 289 (5.2%)
unknown: 337 (6.1%)

76.7% respond in under 500ms — fine for a single agent step, painful inside a loop. The 5.2% above one second are usually AI-inference endpoints or first-call cold starts on serverless.

10

What this means

For agent developers

There are 5,523 endpoints you can actually call today. But most of them only read. If your agent needs to write, your options are extremely limited. Most agents today are read-heavy by necessity, not by design.

Decixa indexes all of them at api.decixa.ai/api/agent/discover — pass the task as natural language and the search returns ranked endpoints with cost, latency, and verification metadata attached.
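A minimal sketch of calling that endpoint. The path is the one named above; the query-parameter name (`q`) and the shape of the response are illustrative assumptions, not a documented API contract:

```python
from urllib.parse import urlencode

DISCOVER = "https://api.decixa.ai/api/agent/discover"

def discover_url(task: str) -> str:
    """Build a discovery request URL for a natural-language task.
    NOTE: the 'q' parameter name is an assumption, not from the report."""
    return f"{DISCOVER}?{urlencode({'q': task})}"

url = discover_url("send an SMS to a phone number")
# An agent would GET this URL and read ranked endpoints, with cost,
# latency, and verification metadata, from the response.
```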

For providers

If your endpoint is in the 41.4% of probed listings that didn’t return 402, Decixa records the specific reason on the listing’s detail page so you can see what to fix without re-running the probe yourself.

Submit at decixa.ai/submit and the probe re-runs within minutes instead of waiting for the next cycle.

For ecosystem builders

32 endpoints across Store and Communicate, on a base of 5,523. Read one way, that confirms what agent developers already feel: the write side of the ecosystem barely exists yet.

Read the other way: if you’re deciding what to build in x402, this isn’t just a gap, it’s the map. Whatever else shifts in the protocol over the next twelve months, those two columns are going to close — by someone. The shortlist writes itself.

11

Methodology, footnotes, caveats

Snapshot taken April 25–26, 2026. We track 30,600 listings sourced from the major x402 directories — Bazaar (Coinbase CDP) is one of them — plus direct submissions through decixa.ai/submit. Probe results are refreshed on a rolling basis at roughly 1,300 listings per day. The verified-live count is the deduplicated apis-table view, restricted to review_status='approved', payment_req_parsed=true, and is_dead=false.

Pending review (189 listings) is excluded from the headline pass-rates because nothing has been done with them yet. The handful of listings in on_hold (192) are similarly held out of the headline numbers; we use that status for listings that need a one-off decision before they enter the regular pipeline — typically services that resell a third-party API in ways that may not match the original terms.

Two units coexist in this report. The capability and verified-live counts are at the API level; the probe-outcome failure modes are at the route level (an API can expose multiple endpoints). The two views differ by ~5% due to multi-route servers and ongoing sync between the apis and routes tables. Numbers in the narrative are rounded to absorb that.

The data is reproducible. Anyone with the public Bazaar Discovery API and an HTTP probe loop can produce a comparable snapshot. We just put it together and run it monthly.

12

What's next

This is the first of a planned monthly Decixa series. May’s report will track the same numbers month-over-month: pass-rate drift, probe-pipeline coverage, and capability/domain mix. We’ll also follow up on the two open threads from this month — what shows up in the gap categories (Store, Communicate) and how the provider blocklist evolves.

If there’s a question we should answer with this data, reply on Twitter or open an issue at github.com/koki-socialgist/decixa-mcp. We have the index. We can probably check.

Search the verified-live pool

All 5,523 endpoints in this report are searchable through Decixa. Pass natural-language intent, get back ranked endpoints with cost, latency, and verification metadata.