State of Agent Compatibility 2026
We scored 1,456 SaaS tools on how well AI agents can actually use them. The average score is 36 out of 100. The software industry is not ready for agents.
Nine out of ten tools score below 40. No tool on the market scores above 58. The biggest bottleneck isn't pricing or documentation — it's authentication and discoverability. Most tools were built for humans clicking through browsers, not agents calling APIs.
The score distribution
We expected a bell curve. Instead we got a cliff. Nearly 90% of all tools cluster in the 20–39 range, with the single largest group (55.3%) scoring 35–39. Only 151 tools, about one in ten, reach 40 or above.
Put differently: if an AI agent picked a random SaaS tool to integrate with, there is an 89.6% chance it would score below 40. The tool would likely have documentation and public pricing — but no way for an agent to authenticate without a human in the loop.
Where tools pass and where they fail
We score tools across three dimensions: Discovery (can an agent find and understand the tool?), Auth (can an agent sign up and authenticate?), and Pricing (is there a tier that works for autonomous agents?). The gap between them tells the story.
Pricing is mostly solved. 80% of tools have a free tier, public pricing, and usage-based billing. This makes sense — SaaS companies were already trending this way for developers.
Discovery is halfway there. Every tool we scored has public docs and code examples (we filtered for this). But only 68.5% have an OpenAPI spec, and the emerging agent-native standards are nearly absent: just 2.2% have an MCP server, and a mere 0.5% (7 tools) have published an llms.txt file.
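To make the discovery gap concrete, here is a minimal sketch of how an agent might probe a tool for these signals. The base URL is hypothetical, and the paths are conventions rather than guarantees; in particular, there is no published standard for an MCP discovery path, so that check is purely illustrative.

```typescript
const BASE = "https://example.com"; // hypothetical tool's site

// Common locations for machine-readable signals. None are guaranteed;
// the MCP path in particular is illustrative, not a published standard.
const signals: Record<string, string> = {
  "/openapi.json": "OpenAPI spec",
  "/llms.txt": "llms.txt overview",
  "/.well-known/mcp.json": "MCP discovery (hypothetical path)",
};

for (const [path, label] of Object.entries(signals)) {
  const res = await fetch(BASE + path, { method: "HEAD" });
  console.log(`${label}: ${res.ok ? "found" : `missing (HTTP ${res.status})`}`);
}
```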
Auth is the bottleneck. 72.7% support API key auth, which agents can use. But the remaining 27.3% require OAuth flows, SAML, or manual approval — all of which need a human with a browser. 10.7% still use CAPTCHAs, phone verification, or manual signup gates that block agents entirely.
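The difference is easy to see in code. With API key auth, an agent needs exactly one secret and one request. The endpoint and environment variable below are hypothetical; the header shape is typical.

```typescript
// API key auth: one secret, one request, no browser.
// Endpoint and env var are hypothetical; the Authorization header shape is typical.
const res = await fetch("https://api.example.com/v1/projects", {
  headers: { Authorization: `Bearer ${process.env.EXAMPLE_API_KEY}` },
});
if (!res.ok) throw new Error(`request failed: ${res.status}`);
const projects = await res.json();
console.log(projects);

// An OAuth authorization-code flow, by contrast, cannot even start without a
// human: someone has to open a consent screen in a browser and click "Allow"
// before the first access token exists.
```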
The biggest gaps
Our scoring checklist tests 19 specific capabilities. The pass rates reveal exactly where the industry falls short.
[Charts: pass rates per checklist item, grouped into "Almost nobody does" and "Almost everybody does"]
Most tools have great docs, fair pricing, and some API access. But the agent-native layer — MCP, llms.txt, frictionless auth — barely exists. Only 7 out of 1,456 tools have published an llms.txt file.
The 10 most agent-ready tools
These are the tools that score highest across all dimensions. They tend to be developer-first platforms with strong API cultures. Even the best score just 58 out of 100.
| # | Tool | Score | Category |
|---|---|---|---|
| 1 | Stripe | 58 | payments |
| 2 | Supabase | 58 | database |
| 3 | Airtable | 55 | database, productivity |
| 4 | Algolia | 55 | search |
| 5 | Apify | 55 | automation |
| 6 | Browserbase | 55 | devtools, ai-infra |
| 7 | Cal.com | 55 | productivity |
| 8 | E2B | 55 | devtools, ai-infra |
| 9 | Exa | 55 | search |
| 10 | Firecrawl | 55 | data-infra |
Notice a pattern: 8 of the top 10 have an MCP server. They're developer tools and AI-native platforms that anticipated the agent use case. Traditional enterprise software is nowhere on this list.
The 10 least agent-ready tools
All ten of these tools score 24 out of 100, tied at the bottom of the directory. They typically require manual signup, lack API key auth, and offer no machine-readable documentation. Most are enterprise platforms that assume a human operator.
| Tool | Score | Category |
|---|---|---|
| 360Learning | 24 | education |
| AppFolio | 24 | real-estate |
| AuditBoard | 24 | compliance |
| Bluebeam | 24 | construction |
| Catalyst | 24 | customer-success |
| Chartbeat | 24 | analytics |
| Clari | 24 | sales |
| Concord | 24 | legal |
| ContractPodAi | 24 | legal |
| CrowdStrike | 24 | security |
Agent-readiness by category
Which types of software are most and least ready for agents? The gap between the best and worst categories is 22 points.
[Charts: average score by category, "Most agent-ready categories" and "Least agent-ready categories"]
The pattern is clear: developer-facing tools lead, industry-specific enterprise tools trail. AI infrastructure tools (46.2) average nearly twice the score of risk management tools (24.0). The categories at the bottom (legal, property management, hospitality) are industries where software was built for specialists, not machines.
MCP: the emerging standard barely anyone supports
Model Context Protocol (MCP) lets AI agents connect to tools through a standardized interface. It's the closest thing we have to a universal agent API. But adoption is vanishingly low.
Tools with MCP servers average 53.5, compared to 36.1 for those without, a 17.4-point premium. MCP alone doesn't guarantee agent-readiness, but the tools that have adopted it tend to be the same tools that do everything else right: API keys, OpenAPI specs, free tiers, developer docs.
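For readers who haven't seen MCP, here is roughly what the server side looks like, sketched with the official TypeScript SDK (@modelcontextprotocol/sdk) following its documented quickstart pattern. The tool name and behavior are invented for illustration; real servers expose their product's actual capabilities the same way.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A toy MCP server exposing one tool. Real servers register product
// capabilities (search, create-invoice, run-query, ...) the same way.
const server = new McpServer({ name: "example-tool", version: "0.1.0" });

server.tool(
  "lookup_score", // hypothetical tool name, for illustration only
  { toolName: z.string() },
  async ({ toolName }) => ({
    content: [{ type: "text", text: `No score on file for ${toolName}` }],
  })
);

// Agents connect over stdio (or HTTP) and discover the tool automatically.
await server.connect(new StdioServerTransport());
```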
Of the 7 tools that have published an llms.txt file, all 7 also have MCP servers. These are the true early movers: Stripe, Supabase, Mintlify, Tally.so, Hugging Face, Anthropic, and OpenAI.
What actually moves the score
Two factors correlate most strongly with higher scores:
A free tier adds ~10 points. API key auth adds ~5 points. Together, these two features account for most of the variance in agent-readiness scores. They represent the minimum: can an agent sign up without talking to a human, and can it authenticate without a browser?
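As a sketch of how such a premium can be estimated from the dataset: take the difference in mean score between tools that have a feature and tools that don't (a correlation, not a causal "adds"). The record fields here are hypothetical stand-ins, not the registry's actual schema.

```typescript
// Difference in mean score between tools with and without a feature.
// Fields are hypothetical stand-ins for the registry's data.
interface ToolRecord {
  score: number; // 0-100 overall score
  freeTier: boolean;
  apiKeyAuth: boolean;
}

const mean = (xs: number[]) =>
  xs.reduce((a, b) => a + b, 0) / Math.max(xs.length, 1);

function featurePremium(
  tools: ToolRecord[],
  has: (t: ToolRecord) => boolean
): number {
  const withF = tools.filter(has).map((t) => t.score);
  const withoutF = tools.filter((t) => !has(t)).map((t) => t.score);
  return mean(withF) - mean(withoutF);
}

// featurePremium(allTools, (t) => t.freeTier)   // ~10 points in this report
// featurePremium(allTools, (t) => t.apiKeyAuth) // ~5 points
```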
The 7 tools with llms.txt
The llms.txt standard is a simple text file that helps AI models understand what a tool does and how to use it. Think of it as robots.txt for the AI era. Adoption is almost nonexistent.
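Per the llms.txt convention, the file is plain markdown served at the site root: an H1 with the product name, a one-line blockquote summary, and link sections pointing at LLM-friendly docs. A hypothetical example (invented tool and URLs):

```markdown
# ExampleSearch

> Hosted search API. Authenticate with an API key; all endpoints return JSON.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): create a key, run a first query
- [API reference](https://example.com/docs/api.md): endpoints, parameters, error codes

## Optional

- [Changelog](https://example.com/changelog.md): release history
```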
| Tool | Score | Also has MCP? |
|---|---|---|
| Stripe | 58 | Yes |
| Supabase | 58 | Yes |
| Mintlify | 55 | Yes |
| Tally.so | 55 | Yes |
| Hugging Face | 48 | Yes |
| Anthropic | 42 | Yes |
| OpenAI | 42 | Yes |
Every single tool with llms.txt also has an MCP server. This isn't coincidence — these are the companies that are actively thinking about the agent experience. They're building for the next interface, not just the current one.
Methodology
Every tool in this report was scored by the Agent Native Registry using a 19-point checklist across three dimensions.
- Discovery — OpenAPI spec, llms.txt, MCP server, MCP registry listing, public docs, code examples, changelog, status page
- Auth — No CAPTCHA, no phone verification, API key auth, no manual approval, no billing required for signup, no org/company requirement
- Pricing — Free tier available, usage-based pricing, no minimum contract, public pricing page, free tier sufficient for testing
Each checklist item is scored as pass/fail, then weighted by dimension. The overall score is a weighted average of all three dimensions. Tools were scored using a combination of automated checks and AI-assisted review (Claude, via the Anthropic API).
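A minimal sketch of that aggregation, assuming illustrative weights (the registry's actual per-dimension weights aren't listed here):

```typescript
type Dimension = "discovery" | "auth" | "pricing";

// Illustrative weights only; the registry's actual weights are not published here.
const WEIGHTS: Record<Dimension, number> = {
  discovery: 0.4,
  auth: 0.35,
  pricing: 0.25,
};

// Each dimension maps to its checklist items as pass/fail booleans.
function overallScore(results: Record<Dimension, boolean[]>): number {
  let score = 0;
  for (const dim of Object.keys(WEIGHTS) as Dimension[]) {
    const items = results[dim];
    const passRate = items.filter(Boolean).length / items.length;
    score += WEIGHTS[dim] * passRate;
  }
  return Math.round(score * 100); // scale to 0-100
}

// Example: a tool passing 5/8 discovery, 4/6 auth, and 5/5 pricing checks
console.log(
  overallScore({
    discovery: [true, false, false, true, true, true, false, true],
    auth: [true, true, false, true, true, false],
    pricing: [true, true, true, true, true],
  })
); // -> 73
```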
The dataset includes 1,456 SaaS tools across 671 categories. We focused on tools with public documentation and some form of API access. Pure consumer apps and hardware products were excluded.
Raw data and individual tool scores are available in the full directory. Want to check your tool's score? Use our free score checker.
We'll do a full audit of your tool for free — all we ask is a share on LinkedIn or X. Email us to claim yours.
This report was researched and written by Alex Ingram at the Agent Native Registry. Data current as of March 2026.