March 2026

State of Agent Compatibility 2026

We scored 1,456 SaaS tools on how well AI agents can actually use them. The average score is 36.5 out of 100. The software industry is not ready for agents.

1,456
Tools scored
36.5
Average score
58
Highest score
2.2%
Have MCP server
Key takeaway

Nine out of ten tools score below 40. No tool on the market scores above 58. The biggest bottleneck isn't pricing or documentation — it's authentication and discoverability. Most tools were built for humans clicking through browsers, not agents calling APIs.

The score distribution

We expected a bell curve. Instead we got a cliff. Nearly 90% of all tools cluster in the 20–39 range, with the single largest group (55.3%) scoring 35–39. Only 151 tools — about one in ten — break above 40.

20–24
69 (4.7%)
25–29
47 (3.2%)
30–34
384 (26.4%)
35–39
805 (55.3%)
40–44
27 (1.9%)
45–49
101 (6.9%)
50–54
1 (0.1%)
55–59
22 (1.5%)

Put differently: if an AI agent picked a random SaaS tool to integrate with, there is an 89.6% chance it would score below 40. The tool would likely have documentation and public pricing — but no way for an agent to authenticate without a human in the loop.

Where tools pass and where they fail

We score tools across three dimensions: Discovery (can an agent find and understand the tool?), Auth (can an agent sign up and authenticate?), and Pricing (is there a tier that works for autonomous agents?). The gap between them tells the story.

Pricing
91.9
Discovery
59.5
Auth
47.3

Pricing is mostly solved. Nearly 80% of tools combine a free tier, public pricing, and usage-based billing. This makes sense — SaaS companies were already trending this way for developers.

Discovery is halfway there. Every tool we scored has public docs and code examples (we filtered for this). But only 68.5% have an OpenAPI spec, and the emerging agent-native standards are nearly absent: just 2.2% have an MCP server, and a mere 0.5% (7 tools) have published an llms.txt file.

Auth is the bottleneck. 72.7% of tools support API key auth, which agents can use. The remaining 27.3% require OAuth flows, SAML, or manual approval, all of which need a human with a browser. Worse, only 10.7% of tools let an agent sign up without hitting a CAPTCHA or phone verification; for the rest, signup blocks agents entirely.
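The distinction matters because API key auth is fully headless. A minimal sketch using Python's standard library (the endpoint and key are placeholders, not a real API):

```python
import urllib.request

# Hypothetical endpoint and key, for illustration only.
API_KEY = "sk_live_example"
req = urllib.request.Request(
    "https://api.example.com/v1/items",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# An agent can construct and send this request with no human involved.
# An OAuth authorization-code flow, by contrast, requires a person to
# approve access in a browser before any token exists.
print(req.get_header("Authorization"))
```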

The biggest gaps

Our scoring checklist tests 19 specific capabilities. The pass rates reveal exactly where the industry falls short.

Almost nobody does

llms.txt
0.5%
MCP server
2.2%
No CAPTCHA
10.7%
No phone verify
10.7%

Almost everybody does

Public docs
100%
Status page
100%
Public pricing
100%
Free tier
79.7%
API key auth
72.7%
OpenAPI spec
68.5%

The gap in one sentence

Most tools have great docs, fair pricing, and some API access. But the agent-native layer — MCP, llms.txt, frictionless auth — barely exists. Only 7 out of 1,456 tools have published an llms.txt file.

The 10 most agent-ready tools

These are the tools that score highest across all dimensions. They tend to be developer-first platforms with strong API cultures. Even the best score just 58 out of 100.

#   Tool         Score  Category
1   Stripe       58     payments
2   Supabase     58     database
3   Airtable     55     database, productivity
4   Algolia      55     search
5   Apify        55     automation
6   Browserbase  55     devtools, ai-infra
7   Cal.com      55     productivity
8   E2B          55     devtools, ai-infra
9   Exa          55     search
10  Firecrawl    55     data-infra

Notice a pattern: 8 of the top 10 have an MCP server. They're developer tools and AI-native platforms that anticipated the agent use case. Traditional enterprise software is nowhere on this list.

The 10 least agent-ready tools

These ten tools are tied at 24 out of 100. They typically require manual signup, lack API key auth, and have no machine-readable documentation. Most are enterprise platforms that assume a human operator.

Tool           Score  Category
360Learning    24     education
AppFolio       24     real-estate
AuditBoard     24     compliance
Bluebeam       24     construction
Catalyst       24     customer-success
Chartbeat      24     analytics
Clari          24     sales
Concord        24     legal
ContractPodAi  24     legal
CrowdStrike    24     security

Agent-readiness by category

Which types of software are most and least ready for agents? The gap between the best and worst categories is 22 points.

Most agent-ready categories

AI infrastructure
46.2
Auth platforms
45.2
Data infrastructure
44.5
Search
43.8
API management
42.6

Least agent-ready categories

Legal
28.1
Property mgmt
27.8
Hospitality
27.5
Real estate
26.4
Risk mgmt
24.0

The pattern is clear: developer-facing tools lead, industry-specific enterprise tools trail. The average AI infrastructure tool (46.2) scores nearly double the average risk management tool (24.0). The categories at the bottom — legal, property management, hospitality — are industries where software was built for specialists, not machines.

MCP: the emerging standard barely anyone supports

Model Context Protocol (MCP) lets AI agents connect to tools through a standardized interface. It's the closest thing we have to a universal agent API. But adoption is vanishingly low.

32
Tools with MCP
2.2%
Adoption rate
+17.5
Score bonus (avg)

Tools with MCP servers average 53.5, compared to 36.1 for those without — a 17.5-point premium. MCP alone doesn't guarantee agent-readiness, but the tools that have adopted it tend to be the same tools that do everything else right: API keys, OpenAPI specs, free tiers, developer docs.
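Concretely, MCP carries JSON-RPC 2.0 messages between the agent and the tool's server. A sketch of the `tools/call` request an agent might send (the tool name and arguments here are invented for illustration, not from any real server):

```python
import json

# MCP transports JSON-RPC 2.0 messages. An agent invokes a tool
# with a "tools/call" request like this one.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_invoice",  # hypothetical tool exposed by the server
        "arguments": {"amount": 1200, "currency": "usd"},
    },
}
print(json.dumps(request, indent=2))
```

The point of the standard is that this shape is the same for every tool: an agent that can speak it once can call Stripe, Supabase, or any other MCP server without bespoke integration code.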

Of the 7 tools that have published an llms.txt file, all 7 also have MCP servers. These are the true early movers: Stripe, Supabase, Mintlify, Tally.so, Hugging Face, Anthropic, and OpenAI.

What actually moves the score

Two factors correlate most strongly with higher scores:

38.6 vs 28.4
Free tier vs. no free tier
37.9 vs 32.8
API key vs. no API key

A free tier adds ~10 points. API key auth adds ~5 points. Together, these two features account for most of the variance in agent-readiness scores. They represent the minimum: can an agent sign up without talking to a human, and can it authenticate without a browser?

The 7 tools with llms.txt

The llms.txt standard is a simple text file that helps AI models understand what a tool does and how to use it. Think of it as robots.txt for the AI era. Adoption is almost nonexistent.
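For the few tools that publish one, the file is a short markdown document served at the site root. A hypothetical example, to make the format concrete (the tool, links, and descriptions are invented):

```
# ExampleTool

> ExampleTool is a hypothetical invoicing API. Authenticate with an
> API key; all endpoints return JSON.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): First request in five minutes
- [API reference](https://example.com/docs/api.md): Endpoints, auth, and error codes
```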

Tool          Score  Also has MCP?
Stripe        58     Yes
Supabase      58     Yes
Mintlify      55     Yes
Tally.so      55     Yes
Hugging Face  48     Yes
Anthropic     42     Yes
OpenAI        42     Yes

Every single tool with llms.txt also has an MCP server. This isn't coincidence — these are the companies that are actively thinking about the agent experience. They're building for the next interface, not just the current one.

Get weekly updates as scores change

We re-score tools as they ship agent features. Subscribe to get notified when the landscape shifts.

Methodology

Every tool in this report was scored by the Agent Native Registry using a 19-point checklist across three dimensions.

Each checklist item is scored as pass/fail, then weighted by dimension. The overall score is a weighted average of all three dimensions. Tools were scored using a combination of automated checks and AI-assisted review (Claude, via the Anthropic API).
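The roll-up described above can be sketched in a few lines. The dimension weights and checklist items below are illustrative placeholders, not the registry's actual weights:

```python
# Pass/fail checks rolled up per dimension, then combined as a
# weighted average. Weights here are hypothetical, for illustration.
WEIGHTS = {"discovery": 0.4, "auth": 0.4, "pricing": 0.2}

def dimension_score(checks: dict[str, bool]) -> float:
    """Percentage of checklist items passed in one dimension."""
    return 100 * sum(checks.values()) / len(checks)

def overall_score(dims: dict[str, dict[str, bool]]) -> float:
    return round(sum(WEIGHTS[d] * dimension_score(c) for d, c in dims.items()), 1)

example = {
    "discovery": {"public_docs": True, "openapi": True, "mcp": False, "llms_txt": False},
    "auth": {"api_key": True, "no_captcha": True, "no_phone": False},
    "pricing": {"public_pricing": True, "free_tier": True},
}
print(overall_score(example))  # → 66.7
```

With these placeholder weights, a tool that aces pricing but misses the agent-native discovery and auth checks lands in the 60s, which mirrors the report's finding that pricing alone cannot carry a score.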

The dataset includes 1,456 SaaS tools across 671 categories. We focused on tools with public documentation and some form of API access. Pure consumer apps and hardware products were excluded.

Raw data and individual tool scores are available in the full directory. Want to check your tool's score? Use our free score checker.

Featured in this report?

We'll do a full audit of your tool for free — all we ask is a share on LinkedIn or X. Email us to claim yours.

This report was researched and written by Alex Ingram at the Agent Native Registry. Data current as of March 2026.