The Discovery Problem
AI agents have a fundamental problem: they don't know what tools exist.
A human developer browses npm, reads READMEs, evaluates options, and installs packages. An AI agent can't do any of that. It needs machine-readable discovery — a way to find, evaluate, and adopt tools programmatically.
But here's the catch: the AI ecosystem is fragmented. Claude Desktop speaks MCP. Google's agents speak A2A. LangChain agents use Python. OpenAI function calling uses JSON schemas. Every framework has its own way of consuming tools.
That's why JarvisSDK supports five discovery protocols. Not because five is a magic number, but because the ecosystem demands it. Here's how each one works and when to use it.
Protocol 1: REST API (HATEOAS)
Best for: Custom agents, backend services, any HTTP client
The REST API is the universal interface. Any language, any framework, any HTTP client can consume it. Our REST endpoints follow HATEOAS principles — each response includes _links that tell the agent what it can do next.
```http
# Search for tools
GET /api/v1/catalog/search?q=text+processing

# Self-equip based on mission
POST /api/v1/agent/arm
{ "mission": "I need to process CSVs and send Slack notifications" }

# Execute a tool
POST /api/v1/modules/text-toolkit/execute
{ "action": "word_count", "params": { "text": "Hello world" } }
```
The self-equip endpoint (/agent/arm) is unique to JarvisSDK. Instead of browsing a catalog, the agent describes its mission and gets back a curated toolkit ranked by trust score. This is the "agent-native" part — discovery driven by intent, not by browsing.
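The exact shape of the `/agent/arm` response isn't shown above, so the Python sketch below assumes a hypothetical payload with a `modules` list carrying `name` and `trust_score` fields; the point is the consumption pattern — filter by trust, rank, equip:

```python
# Sketch of consuming a self-equip response. The field names "modules",
# "name", and "trust_score" are assumptions, not documented API shapes.

def pick_toolkit(arm_response: dict, min_trust: float = 0.7) -> list[str]:
    """Return module names from a self-equip response, highest trust first."""
    modules = arm_response.get("modules", [])
    trusted = [m for m in modules if m.get("trust_score", 0) >= min_trust]
    trusted.sort(key=lambda m: m["trust_score"], reverse=True)
    return [m["name"] for m in trusted]

# Hypothetical response an agent might receive for the CSV + Slack mission:
response = {
    "modules": [
        {"name": "csv-toolkit", "trust_score": 0.92},
        {"name": "slack-notify", "trust_score": 0.88},
        {"name": "legacy-csv", "trust_score": 0.41},
    ]
}
print(pick_toolkit(response))  # → ['csv-toolkit', 'slack-notify']
```

The low-trust module is dropped before the agent ever sees it — the trust threshold is a client-side choice layered on top of the server's ranking.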
When to use REST: When you're building a custom agent, when you need fine-grained control over the integration, or when your stack doesn't support the other protocols natively.
Protocol 2: MCP (Model Context Protocol)
Best for: Claude Desktop, Cursor, Windsurf, any MCP-compatible client
MCP is Anthropic's protocol for connecting AI models to tools and data sources. It uses JSON-RPC 2.0 over stdio or HTTP, and it's rapidly becoming the standard for desktop AI tools.
```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "text_toolkit__word_count",
    "arguments": { "text": "Hello world" }
  },
  "id": 1
}
```
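Because the envelope is plain JSON-RPC 2.0, building a `tools/call` request needs no MCP library at all; here is a minimal Python sketch (the helper name is ours, not part of any SDK):

```python
import json

def build_tool_call(name: str, arguments: dict, req_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
        "id": req_id,
    })

payload = build_tool_call("text_toolkit__word_count", {"text": "Hello world"})
```

The resulting string can be written to an MCP server's stdin or POSTed over HTTP, depending on the transport the client negotiates.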
Our MCP server exposes all 68 built-in modules as MCP tools. To connect, add JarvisSDK to your client config:
```json
{
  "mcpServers": {
    "jarvis-sdk": {
      "url": "https://jarvissdk.com/api/mcp",
      "headers": { "x-api-key": "jsk_your_key_here" }
    }
  }
}
```
When to use MCP: When your users work in Claude Desktop, Cursor, or Windsurf. MCP is the path of least resistance for desktop AI tool integration.
Protocol 3: A2A (Agent-to-Agent)
Best for: Cross-agent discovery, Google agent ecosystem, multi-agent systems
A2A is Google's protocol for agent-to-agent communication. It's built on Agent Cards — JSON metadata files that describe what an agent can do, hosted at /.well-known/agent.json.
```json
{
  "name": "JarvisSDK",
  "description": "Agent-native module marketplace",
  "url": "https://jarvissdk.com",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "tool-discovery",
      "name": "Tool Discovery",
      "description": "Search 1,300+ tools by keyword, category, or trust score"
    }
  ]
}
```
A2A enables agent-initiated discovery. An agent looking for capabilities can fetch Agent Cards from known service providers and evaluate their skills programmatically. This is how multi-agent systems will negotiate capabilities in the future.
When to use A2A: When you're building multi-agent systems where agents need to discover each other's capabilities dynamically.
Protocol 4: llms.txt
Best for: LLMs that need to understand a service quickly, RAG systems, documentation crawlers
llms.txt is the simplest protocol. It's a plain text file at /api/llms.txt that describes the service in a format optimized for LLM comprehension. No JSON. No schema. Just text that a model can read and understand.
```text
# JarvisSDK
> The marketplace where AI agents shop for themselves.

## What We Do
JarvisSDK is an agent-native module marketplace with 1,300+ tools...

## API Endpoints
- POST /api/v1/modules/{name}/execute — Execute a module action
- POST /api/v1/agent/arm — Self-equip toolkit based on mission
...

## Authentication
All API calls require x-api-key header...
```
This is intentionally low-tech. When an LLM is trying to figure out how to use a service, it doesn't need a formal schema — it needs clear, well-structured documentation it can reason about.
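Low-tech doesn't mean unparseable, though. Because the format leans on `##` headings, a pipeline can slice out just the section it needs before stuffing it into a context window; a minimal Python sketch (the sample document is abbreviated from the example above):

```python
# Sketch: pull one "## <heading>" section out of an llms.txt document,
# e.g. to inject only the relevant part into a prompt.

def section(llms_txt: str, heading: str) -> str:
    """Return the body of one '## <heading>' section, stripped."""
    out, capture = [], False
    for line in llms_txt.splitlines():
        if line.startswith("## "):
            capture = line[3:].strip() == heading
            continue
        if capture:
            out.append(line)
    return "\n".join(out).strip()

doc = """# JarvisSDK
> The marketplace where AI agents shop for themselves.

## Authentication
All API calls require x-api-key header...

## API Endpoints
- POST /api/v1/agent/arm
"""

auth = section(doc, "Authentication")
```

A RAG pipeline might index each section as its own chunk, so a query about authentication retrieves only the `## Authentication` text.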
When to use llms.txt: When you want LLMs to understand your service without formal integration. Great for RAG pipelines, context windows, and "teach the model about this service" use cases.
Protocol 5: OpenAPI 3.1.0
Best for: Code generation, SDK generation, API gateways, formal integration
OpenAPI is the industry standard for machine-readable API specifications. Our OpenAPI spec at /openapi.json describes every endpoint, parameter, and response schema in full detail.
```yaml
paths:
  /api/v1/modules/{name}/execute:
    post:
      summary: Execute a module action
      parameters:
        - name: name
          in: path
          required: true
          schema: { type: string }
      requestBody:
        content:
          application/json:
            schema:
              properties:
                action: { type: string }
                params: { type: object }
```
OpenAPI enables automatic SDK generation, API gateway integration, and formal validation. It's the protocol for when you need precision and completeness.
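As a small illustration of that machine consumption, the sketch below walks a spec's `paths` object and enumerates its operations — the first step any SDK generator or gateway takes. The inline spec fragment is abbreviated from the example above:

```python
# Sketch: enumerate (method, path) operations from an OpenAPI spec dict.
# In practice the dict would come from fetching /openapi.json.

HTTP_METHODS = ("get", "post", "put", "patch", "delete")

def list_operations(spec: dict) -> list[tuple[str, str]]:
    """Return every (METHOD, path) pair declared in the spec."""
    ops = []
    for path, item in spec.get("paths", {}).items():
        for method in HTTP_METHODS:
            if method in item:
                ops.append((method.upper(), path))
    return ops

spec = {
    "paths": {
        "/api/v1/modules/{name}/execute": {
            "post": {"summary": "Execute a module action"}
        },
        "/api/v1/agent/arm": {
            "post": {"summary": "Self-equip toolkit based on mission"}
        },
    }
}

print(list_operations(spec))
```

A generator would then descend into each operation's `parameters` and `requestBody` schemas to emit typed client methods.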
When to use OpenAPI: When you're generating client SDKs, setting up API gateways, or need formal schema validation. Also useful for AI coding assistants (Copilot, Cursor) that can read OpenAPI specs to generate integration code.
Why You Need All Five
The natural question is: why not just pick one? The answer is that each protocol serves a different consumer:
| Consumer | Protocol | Why |
|---|---|---|
| Custom agent (Python/TS) | REST API | Universal, fine-grained control |
| Claude Desktop user | MCP | Native integration, zero config |
| Multi-agent system | A2A | Dynamic discovery, capability negotiation |
| LLM in a chat window | llms.txt | Quick comprehension, no integration needed |
| SDK generator / API gateway | OpenAPI | Formal schema, automated tooling |
Supporting all five means zero friction for any agent, any framework, any use case. The agent that discovers us via A2A can execute tools via REST. The LLM that reads our llms.txt can point its developer to our OpenAPI spec. The Claude Desktop user can connect via MCP and never touch an API.
This is what "agent-native" means. It's not about building for one protocol — it's about meeting every agent where it is.
The Protocol Landscape Is Evolving
Six months from now, this list might be different. New protocols emerge. Existing ones consolidate. The MCP ecosystem is growing fast. A2A is still early. Something entirely new might appear.
That's why we built JarvisSDK's architecture to be protocol-agnostic at the core. All five protocols are thin layers over the same module registry, the same execution engine, the same trust scoring system. Adding a sixth protocol is a single route file, not an architecture change.
The marketplace that wins will be the one that's always accessible, regardless of which protocol your agent speaks.
*JarvisSDK supports REST, MCP, A2A, llms.txt, and OpenAPI out of the box. Start building at jarvissdk.com.*