Why Decixa is publishing this
x402 is the HTTP-402-based payment protocol that lets AI agents pay APIs in USDC, on the spot, without contracts or sign-ups. Coinbase open-sourced it in 2025. Since then, an ecosystem of directories has grown around it — Bazaar (the official Coinbase Developer Platform discovery layer) and a handful of community catalogs.
That ecosystem now lists more than 30,000 endpoints. But listed is not the same as works. And for an agent — which depends on each step succeeding to keep its execution chain intact — a single dead listing is not an inconvenience. It’s a structural failure that cascades through whatever the agent was trying to do. In practice, it means the agent just stops. So the gap between “listed” and “works” is not cosmetic; it’s the difference between an agent that completes a task and one that stops mid-execution.
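The cascade is easy to make concrete: an agent's plan is only as strong as its weakest endpoint, so one dead listing halts everything after it. A minimal sketch (names are illustrative, not Decixa's implementation):

```python
def run_chain(steps, call):
    """Run an agent's plan step by step; the chain survives only if every call succeeds.

    `steps` is a list of endpoint URLs; `call` returns True on success.
    One dead listing stops everything after it -- that is the cascade.
    """
    completed = []
    for url in steps:
        if not call(url):
            return completed, url  # halted: partial work, plus the failing endpoint
        completed.append(url)
    return completed, None  # full chain succeeded

# Illustrative: the third endpoint is a dead listing.
alive = {"a.example/quote", "b.example/convert"}
done, failed_at = run_chain(
    ["a.example/quote", "b.example/convert", "dead.example/store"],
    lambda u: u in alive,
)
# done holds the two completed steps; failed_at names the dead listing
```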
We built Decixa precisely because that gap is wide enough to need its own search layer. To do that job, Decixa already tracks every x402 listing we can find, applies an automated quality filter, probes the survivors, and ranks the ones that respond. That gives us a vantage point on the whole ecosystem from end to end.
This is the first of a planned monthly Decixa report sharing that view publicly, with the methodology open and the failure modes named. The headline finding this month: of the 5,523 verified-live endpoints, only 32 will let an agent write — store data or send a message. The rest of this report walks through how we got there, and what the rest of the working pool looks like.
How we measure it
We treat x402 listings the way a search engine treats web pages — you don’t index everything you crawl, and you don’t probe everything you index. Each step in the pipeline is a strict filter applied before the next.
The numbers, in three layers
One numerator. Three denominators. All three are correct — they just answer different questions.
- “Of everything that calls itself x402, what fraction works today?” (raw: 18%)
- “Of listings that survived directory review and have a live host, what fraction works?”
- “Of listings the probe actually tried, what fraction works?” (probed: 60%)

These figures are API-level; the route-level pass rate is 55.5% (see methodology).
The raw number (18%) is the headline if you’re an analyst comparing ecosystems. The probed number (60%) is the headline if you’re a developer asking “should I bother calling these endpoints?” Both are honest.
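Both headline ratios follow directly from the counts reported elsewhere in this report (30,600 tracked, 9,246 probed, 5,523 verified-live); a quick check:

```python
tracked = 30_600       # every listing we can find
probed = 9_246         # listings that reached the probe pipeline
verified_live = 5_523  # listings that passed the probe

raw_rate = verified_live / tracked    # the analyst's denominator
probed_rate = verified_live / probed  # the developer's denominator

print(f"raw: {raw_rate:.0%}, probed: {probed_rate:.0%}")  # raw: 18%, probed: 60%
```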
The quality filter — what we set aside, and why
Of 19,880 approved-and-alive listings, 14,456 carry an explicit excluded-reason flag. They were tracked and kept in the directory, but routed away from the probe pipeline because we could tell from the metadata alone that probing them wouldn’t be useful.
This is the part of the methodology readers most often misread. We skip these listings because there is a mechanical reason to, not because we are behind. Here is the full breakdown.
- Name is unusable, the description has no provider-specific content, or the classifier couldn’t pick a function (confidence < 0.60).
- Probed at least once and returned something other than HTTP 402; out of x402 scope by definition.
- Hostname matches staging / dev / test / localhost / 127.x / .local.
- URL-encoded template parameters (%7Bhash%7D, %7Baddress%7D), or the name is an Ethereum address, UUID, or IPFS CID.
- Name matches a known broken pattern (encoded-only segments, etc.).
- Endpoint path contains a parameterized address slot rather than a reusable resource.
- Endpoint is an IPv4 literal or a temporary tunneling domain (*.ngrok.io, *.tunnelmole.net).
- Per-session flow URL like /pay/{UUID}, not a reusable endpoint.
- Description is a known template string with no provider-specific content.
Two providers were responsible for 3,384 listings of effectively duplicated functionality this month — the same handful of endpoint types repeated across thousands of auto-generated names. In one case, parameterized over Solana token addresses; in the other, behind a single proxy domain with hex-suffix names. After the second batch in two weeks, both were placed on a manual provider blocklist. Listings from blocklisted providers are now rejected at intake.
We don’t publish provider names — the blocklist is silent — but we do count them: 2 entries as of April 2026, accounting for ~3,400 rejections this month. This is a deliberate human decision, not an automated heuristic.
Probe outcomes — what happened to the 9,246 we did probe
After the filter, the 9,246 listings that reach the probe pipeline get classified into one of nine outcomes. Two numbers worth pausing on: phantom domains (7.4%) and non-402 responses (34%). Together, that’s 41.4% of probed listings — listings tell you what was claimed, probes tell you what works.
Phantom domains (7.4%): 684 endpoints point to hostnames that no longer resolve via DNS. Listed, indexed, propagated — and then the domain lapsed. Non-402 responses (34%): endpoints respond, but not with the payment-required handshake. Schema drift, unimplemented protocol, or auth gates intercepting before x402 can take over.
Capability distribution — what kinds of APIs work?
Of the 5,523 verified-live APIs, here is the distribution by capability — the verb axis of our taxonomy. Grouped: Read 76% / Compute 10% / Write 14%.
Most x402 APIs today read or analyze data and charge per call. That’s the easiest product to ship. It also matches Web 1.0 in shape — the early commercial web was overwhelmingly “fetch this, return that,” and only later did write-side APIs (forms, payments, messaging) catch up.
The capability gap — 32 endpoints
Inside the Write category, Transact accounts for almost all of it: 511 endpoints — bridges, on-chain payments, settlement. The rest of the agent-economy write surface — actually changing state in third-party systems — is thin.
- Store: database, key-value, file storage, counted across the entire verified ecosystem.
- Communicate: SMS, email, push, chat, counted across the entire verified ecosystem.
That’s 32 endpoints, total, between two of the most basic things an agent might want to pay for: “remember this” and “tell someone.” If you’re looking for an x402 product to build, that’s where the index is hungry.
This is not a critique of the ecosystem; it’s a description. The directories are full of read APIs because read is what shipped first. Whoever ships the first credible x402 SMS provider or x402 KV store will have a category mostly to themselves.
Domain distribution — a parallel view
The capability axis is one way to read the ecosystem. The other is by domain: what kind of subject matter the API is about. We ran a k-means pass (k=9) over the embeddings of every verified-live description and asked Claude to name the resulting clusters.
Three of the nine clusters — on-chain extraction, market data, asset intelligence — are crypto-specific, and together they account for 43% of the verified ecosystem. Add the Crypto Discovery slice inside cluster 1 and the share is closer to 55%. The verb axis says “this ecosystem reads more than it writes.” The domain axis says “this ecosystem reads about crypto more than anything else.” Both are true.
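The clustering pass itself is standard: embed each description, run k-means at k=9, then label each cluster from its member descriptions. A minimal sketch with scikit-learn (the embedding model and the naming step sit outside the snippet; the vectors here are random stand-ins):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for the real description embeddings (one row per verified-live API).
embeddings = rng.normal(size=(5523, 32))

km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(embeddings)

# Cluster sizes give the domain distribution; the names come from a separate
# labeling pass (the report used Claude on each cluster's member descriptions).
sizes = np.bincount(km.labels_, minlength=9)
shares = sizes / sizes.sum()
```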
Health — of the 5,523 working APIs, how many actually run?
Verified isn’t the same as healthy. We probe each verified endpoint on a schedule and track uptime over the last 7 days plus p95 latency.
88.7% of the verified pool maintains ≥95% uptime — production-grade. 2.7% drop below 50%, mostly endpoints in the process of going dark.
76.7% respond in under 500ms — fine for a single agent step, painful inside a loop. The 5.2% above one second are usually AI-inference endpoints or first-call cold starts on serverless.
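Both health numbers reduce to simple aggregates over the probe history. A sketch of how 7-day uptime and p95 latency fall out of a window of probe records (the record fields are illustrative):

```python
from statistics import quantiles

def health(probes: list[dict]) -> dict:
    """Summarize a 7-day probe window: uptime ratio and p95 latency.

    Each probe record: {"ok": bool, "latency_ms": float}.
    """
    uptime = sum(p["ok"] for p in probes) / len(probes)
    latencies = sorted(p["latency_ms"] for p in probes if p["ok"])
    # p95 = 95th percentile of successful-call latency
    p95 = quantiles(latencies, n=100)[94] if len(latencies) >= 2 else (latencies or [None])[0]
    return {"uptime": uptime, "p95_ms": p95}

# 19 successful probes at 120-138ms, one failure: uptime 95%, p95 near the top.
probes = [{"ok": True, "latency_ms": 120 + i} for i in range(19)] + [{"ok": False, "latency_ms": 0}]
h = health(probes)
```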
What this means
For agent developers
There are 5,523 endpoints you can actually call today. But most of them only read. If your agent needs to write, your options are extremely limited. Most agents today are read-heavy by necessity, not by design.
Decixa indexes all of them at api.decixa.ai/api/agent/discover — pass the task as natural language and the search returns ranked endpoints with cost, latency, and verification metadata attached.
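A call might look like the following; only the path comes from the report, while the `q` parameter name and everything else are assumptions for illustration:

```python
from urllib.parse import urlencode

BASE = "https://api.decixa.ai/api/agent/discover"

def discover_url(task: str) -> str:
    # "q" is an assumed parameter name; the report only specifies that the
    # search takes the task as natural language.
    return f"{BASE}?{urlencode({'q': task})}"

url = discover_url("send an SMS to a phone number, pay per message")
# An agent would GET this URL and read ranked endpoints with cost, latency,
# and verification metadata from the response.
```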
For providers
If your endpoint is in the 41.4% of probed listings that didn’t return 402, Decixa records the specific reason on the listing’s detail page so you can see what to fix without re-running the probe yourself.
Submit at decixa.ai/submit and the probe re-runs within minutes instead of waiting for the next cycle.
For ecosystem builders
32 endpoints across Store and Communicate, on a base of 5,523. Read one way: if you’re shipping an agent that needs to write, your options today are extremely limited.
Read the other way: if you’re deciding what to build in x402, this isn’t just a gap, it’s the map. Whatever else shifts in the protocol over the next twelve months, those two columns are going to close — by someone. The shortlist writes itself.
Methodology, footnotes, caveats
Snapshot taken April 25–26, 2026. We track 30,600 listings sourced from the major x402 directories — Bazaar (Coinbase CDP) is one of them — plus direct submissions through decixa.ai/submit. Probe results are refreshed on a rolling basis at roughly 1,300 listings per day. The verified-live count is the deduplicated apis-table view, restricted to review_status='approved', payment_req_parsed=true, and is_dead=false.
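The verified-live restriction reduces to three column checks; expressed over listing records, with the column names as given above:

```python
def is_verified_live(row: dict) -> bool:
    """The restriction defining the verified-live view in this report."""
    return (
        row["review_status"] == "approved"
        and row["payment_req_parsed"] is True
        and row["is_dead"] is False
    )

assert is_verified_live({"review_status": "approved", "payment_req_parsed": True, "is_dead": False})
assert not is_verified_live({"review_status": "pending", "payment_req_parsed": True, "is_dead": False})
```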
Pending review (189 listings) is excluded from the headline pass-rates because nothing has been done with them yet. The handful of listings in on_hold (192) are similarly held out of the headline numbers; we use that status for listings that need a one-off decision before they enter the regular pipeline — typically services that resell a third-party API in ways that may not match the original terms.
Two units coexist in this report. The capability and verified-live counts are at the API level; the probe-outcome failure modes are at the route level (an API can expose multiple endpoints). The two views differ by ~5% due to multi-route servers and ongoing sync between the apis and routes tables. Numbers in the narrative are rounded to absorb that.
The data is reproducible. Anyone with the public Bazaar Discovery API and an HTTP probe loop can produce a comparable snapshot. We just put it together and run it monthly.
What's next
This is the first of a planned monthly Decixa series. May’s report will track the same numbers month-over-month: pass-rate drift, probe-pipeline coverage, and capability/domain mix. We’ll also follow up on the two open threads from this month — what shows up in the gap categories (Store, Communicate) and how the provider blocklist evolves.
If there’s a question we should answer with this data, reply on Twitter or open an issue at github.com/koki-socialgist/decixa-mcp. We have the index. We can probably check.
Search the verified-live pool
All 5,523 endpoints in this report are searchable through Decixa. Pass natural-language intent, get back ranked endpoints with cost, latency, and verification metadata.