
MCP Resources and Prompts: The Underused Half of the Protocol

May 6, 2026

Open ten MCP server repos at random. Nine of them register tools and nothing else. The README will list six tool names, the code will be six server.registerTool(...) calls, and the protocol's other two primitives — resources and prompts — won't be mentioned anywhere.

That isn't a bug in the ecosystem. Tools are the most obvious primitive. A tool is a function the agent can call; functions-the-agent-can-call is the thing every "AI plus actions" mental model is already wired for. Resources and prompts are less intuitive, harder to demo, and absent from most quickstarts. So they get skipped.

The cost of skipping them is subtle. A tools-only MCP server treats the LLM as a remote procedure caller. The richer surface — agent reads the data, agent gets a starting point from the user — has to be re-invented inside the tools, usually badly. Adding resources and prompts to a server doesn't unlock new abilities so much as let the existing abilities sit in their natural shape.

This post is a working developer's account of what those two primitives are for, when each one earns its keep, and what the two MCP servers we built — mcp-blog-publisher and flutter-pipeline-mcp — look like once we go back and add them.

Quick refresher on the three primitives

The MCP protocol exposes three things from a server to a host:

  • Tools — actions the agent can invoke. Mutating, parameterized, treated as RPC. publishDraft({ filename }). The agent decides when to call them.
  • Resources — read-only data the host can pull and put in front of the agent as context. file:///drafts/2026-04-26-mcp-pillar.md. The host (or the user, through the host) decides what to read.
  • Prompts — user-facing shortcuts the host can surface in its UI. "Refactor this for readability." "Generate release notes from this branch." Triggered by the user, not the agent.

All three travel over the same JSON-RPC wire and use the same transport. Tools are the half everyone ships. Resources and prompts are the half worth being deliberate about.
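On the wire, the three primitives map to three distinct JSON-RPC methods; the method names below come from the MCP spec, while the params values are made up for illustration:

```typescript
// Each primitive has its own request method; what differs is which actor initiates it.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // agent-initiated
  params: { name: "publishDraft", arguments: { filename: "post-1.md" } },
};

const resourceRead = {
  jsonrpc: "2.0",
  id: 2,
  method: "resources/read", // host-initiated
  params: { uri: "file:///drafts/post-1.md" },
};

const promptGet = {
  jsonrpc: "2.0",
  id: 3,
  method: "prompts/get", // user-initiated, via the host UI
  params: { name: "review-and-publish", arguments: { name: "post-1.md" } },
};
```

Same envelope, same transport; the difference is entirely in who decides to send the request.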

What resources actually are

A resource is read-only context exposed by the server, addressable by URI. The host can list resources, read a resource, or subscribe to changes. The agent itself doesn't call a resource the way it calls a tool — the host fetches the resource and includes it in the model's context.

The line between "tool" and "resource" lands here: if the action is read this and put it in front of the model, it is a resource; if the action is do this thing, possibly mutating something, possibly producing a result that depends on parameters, it is a tool.

That distinction matters in three ways once a server starts to scale.

Caching and freshness become the host's problem instead of the server's. A tool returns a result every time it is called; the host has to invoke it explicitly, the LLM has to decide to invoke it, and there is no native concept of "this hasn't changed since last time." A resource has a URI and a content hash. The host can cache by URI, refresh on the server's notification, and avoid round-tripping for data that hasn't moved. A read-heavy server with a million-row table behind it is a different shape if the agent is calling getRecord(id) ten times versus the host pre-loading three resources.
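The host-side shape this enables is roughly a URI-keyed cache that is invalidated by the server's change notification instead of re-fetched on every read. A toy sketch, not any host's actual implementation — the fetcher and the notification wiring are placeholders:

```typescript
// Toy host-side resource cache: read by URI, invalidate on notification.
type Fetcher = (uri: string) => Promise<string>;

class ResourceCache {
  private cache = new Map<string, string>();
  constructor(private fetch: Fetcher) {}

  async read(uri: string): Promise<string> {
    const hit = this.cache.get(uri);
    if (hit !== undefined) return hit; // no round trip for unchanged data
    const text = await this.fetch(uri);
    this.cache.set(uri, text);
    return text;
  }

  // Called when the server notifies that a resource changed.
  invalidate(uri: string): void {
    this.cache.delete(uri);
  }
}
```

A tool call has no equivalent of `invalidate` — every read is a fresh round trip the agent has to decide to make.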

The user gets a UI affordance. Hosts that support resources (Claude Desktop, Cursor, the Zed integration) typically expose them as attachable items — the user can pick a resource and add it to their conversation. That is a different mental model from "the agent will figure out when to read this." Sometimes the user knows. Letting the user say "look at this draft" by pointing at it is faster, cheaper, and more accurate than letting the agent guess.

The protocol distinguishes intent. When the host shows the agent a resource it has loaded, the agent sees data with provenance — "you are looking at the contents of file:///drafts/post-1.md." When a tool returns the same bytes, the model sees "the result of calling readDraft({ filename: "post-1.md" })." The first framing is closer to how humans think about reading a file; the second is closer to how programs think about it. Models, as it happens, also work better in the first framing for read-only context.

A worked example: drafts as resources

In the pillar post, mcp-blog-publisher exposed three tools: listDrafts, readDraft, and publishDraft. The first two are read-only, have no side effects, and take no parameter beyond, at most, a filename. They are resources pretending to be tools, because tools are the path of least resistance.

Refactored, the server should:

  • Expose every Markdown file in blog-drafts/ as a resource with URI file:///<absolute-path>.
  • Expose a resource template (file:///drafts/{name}) so the host can resolve names to URIs without enumerating.
  • Keep publishDraft as a tool, because publishing mutates state and takes a real parameter.

The handler shape with the SDK:

typescript
import path from "node:path";
import fs from "node:fs/promises";
import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

server.registerResource(
  "drafts",
  new ResourceTemplate("file:///drafts/{name}", { list: undefined }),
  {
    title: "Blog drafts",
    description: "Markdown drafts under /blog-drafts",
    mimeType: "text/markdown",
  },
  async (uri, { name }) => {
    const target = path.resolve(DRAFTS_ROOT, String(name));
    // Traversal guard: the resolved path must stay under DRAFTS_ROOT.
    // Comparing against DRAFTS_ROOT + path.sep also rejects sibling
    // directories that merely share the prefix (e.g. /blog-drafts-old).
    if (!target.startsWith(DRAFTS_ROOT + path.sep)) throw new Error("path escape");
    const text = await fs.readFile(target, "utf8");
    return {
      contents: [{ uri: uri.href, text, mimeType: "text/markdown" }],
    };
  },
);

The path-traversal guard from the security post is still here — the schema and the handler together are the boundary, and "this is a resource not a tool" doesn't change that.

What the host does with this changes everything. Claude Desktop's resource picker now lists every draft. The user can drop a draft into the conversation directly. The agent doesn't have to invoke readDraft and burn a tool-call round trip; the contents are already in the context window, attributed to the URI. The LLM sees less RPC noise and more grounded text.

For flutter-pipeline-mcp, the equivalent move is exposing pubspec.yaml and the latest test report as resources. The agent needs both routinely; making them resources means the host can attach them on session start instead of the agent issuing two tool calls every time it wants to know which Flutter version this project is on.

When resources are the wrong call

Not every read is a resource. Three signals that you should keep something as a tool:

It takes parameters that aren't a path. searchDrafts({ query }) is a tool. The result depends on a parameter the user is unlikely to compose by hand into a URI, and the agent is the right caller because the agent is the one who knows the query.

The result is expensive enough that pre-fetching is wasteful. A resource the host might fetch eagerly should be cheap to produce. Anything that runs a query against a 50GB table on every read is better gated behind a tool the agent calls only when it has decided the answer is worth paying for.

The "data" is actually a computation. getCurrentTime() is a tool, not a resource. So is flutterAnalyze({ projectPath }) — the result is the output of a process that runs on demand, not a stable artifact with a URI. Treating computations as resources is a category error and the host's caching will do the wrong thing.

A useful test we have started using: can a human user reasonably say "I want this thing in front of the model" and point to it without writing a query? If yes, resource. If no, tool.

What prompts actually are

A prompt, in MCP, is a server-defined template that the host surfaces to the user. The user picks the prompt from a menu (slash command, UI dropdown, palette), optionally fills in arguments, and the host materializes it into a starting message for the agent.

The two-line summary: prompts are user-invokable, not agent-invokable. Nothing the agent does triggers a prompt. The user does. The agent only sees the resulting message.

This is the part that surprises people. A prompt in MCP is not a "system prompt" you set on the server side to influence the agent's behavior. It is a user-facing affordance — a labeled, parameterized, server-supplied template the host shows in its UI as a selectable shortcut.

Why would a server expose those?

Because a server tends to know what its tools are for better than the user does. The author of mcp-blog-publisher knows what a typical "publish my next draft" workflow looks like. Encoding that as a prompt — "Write release notes for the most recent draft and publish it" — saves the user from typing the workflow every time, and saves the agent from inferring the workflow from a less-specific user message.

The mental model worth landing on: a tool is what the agent invokes; a prompt is what the user invokes; a resource is what the host attaches. Three primitives, three different actors.

A worked example: prompts for the publisher

mcp-blog-publisher has tools (publishDraft) and now resources (drafts). What it gains from prompts:

typescript
import { z } from "zod";
server.registerPrompt(
  "review-and-publish",
  {
    title: "Review and publish a draft",
    description:
      "Walks through a draft, suggests edits, and publishes once approved",
    argsSchema: { name: z.string().describe("Draft filename") },
  },
  async ({ name }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Read the draft at /drafts/${name}. Identify any factual claims that need a citation, any sections that drift in voice, and any obvious typos. Suggest edits inline. Once I confirm, call publishDraft({ filename: "${name}" }).`,
        },
      },
    ],
  }),
);

What this does for the user, in Claude Desktop:

  • A new entry appears in the slash-command menu: /review-and-publish.
  • Selecting it prompts for the draft name (or, with a list callback, the user gets a dropdown).
  • The host sends the templated message to the agent.
  • The agent reads the draft (resource), drafts suggestions, waits for confirmation, then calls publishDraft (tool).
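The dropdown mentioned in the second bullet is driven by a completion callback, and the filtering it needs is just a prefix match over the known draft names. The pure part is below; the SDK wiring via a completable wrapper is sketched in a comment because that import path and signature are assumptions about the TypeScript SDK:

```typescript
// Prefix-filter draft names for the host's argument dropdown.
export function completeDraftName(names: string[], prefix: string): string[] {
  return names.filter((n) => n.startsWith(prefix)).sort();
}

// Wiring sketch (import path and signature assumed):
// import { completable } from "@modelcontextprotocol/sdk/server/completable.js";
// argsSchema: {
//   name: completable(z.string(), (value) => completeDraftName(draftNames, value)),
// }
```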

The server has now taught the host what a sensible workflow with these tools looks like. Without prompts, every user has to compose that workflow themselves, badly, repeatedly. With prompts, the workflow is part of the server's published interface.

This is the part of the MCP protocol that closes the loop between "I built a tool" and "I shipped a usable feature." Tools alone are a bag of verbs. Prompts are sentences.

Patterns we have started using

A few prompt patterns that keep showing up across servers:

The setup prompt. A short prompt that pre-loads the agent with context — relevant resources, an explanation of the conventions, a starting question. Use this when the workflow has a "step zero" the user keeps forgetting. For flutter-pipeline-mcp: a start-debug-session prompt that attaches pubspec.yaml, the last test failure, and a starter sentence — "You are debugging a failing Flutter test. The pubspec is attached. The most recent test report is attached. Start by reading the test, then the related source file, then propose a fix."

The destructive-action prompt. A prompt that wraps a dangerous tool with explicit confirmation language. "You are about to delete a draft permanently. List the file's title and first paragraph, ask the user to confirm, and only then call `deleteDraft`." This is a prompt-shaped substitute for the "confirm before destructive" tool-description pattern from the security post — they compose well together.

The report-shaped prompt. A prompt whose body asks for a structured summary the user wants regularly. "Summarize the last 24 hours of CI results from `flutter_test_report`. Group by test file. Highlight regressions." Saves the user from reinventing the same prompt every Monday.
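The setup prompt can carry its own attachments: MCP prompt messages may embed resource contents directly alongside text. A sketch of what a start-debug-session prompt could return — the URIs and file names are illustrative, and the content shapes follow the spec's embedded-resource type:

```typescript
// Build the start-debug-session prompt result: two embedded resources plus
// the starter instruction, in the MCP prompt-message shape.
export function startDebugSession(pubspecText: string, reportText: string) {
  const attach = (uri: string, mimeType: string, text: string) => ({
    role: "user" as const,
    content: { type: "resource" as const, resource: { uri, mimeType, text } },
  });
  return {
    messages: [
      attach("file:///project/pubspec.yaml", "text/yaml", pubspecText),
      attach("file:///project/test-report.json", "application/json", reportText),
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: "You are debugging a failing Flutter test. The pubspec and the most recent test report are attached. Start by reading the test, then the related source file, then propose a fix.",
        },
      },
    ],
  };
}
```

The host materializes all three messages at once, so the agent starts with the data in context instead of spending its first two turns fetching it.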

The common thread: prompts are where you encode the way you want the agent to behave with these specific tools. Not at the model level, not at the system-prompt level — at the per-server level, where it can travel with the server and be picked from a UI.

Where this lands in the bigger picture

A useful frame for resources and prompts together: a tools-only MCP server is a service. A server with all three primitives is a workspace. The first one answers "what can the agent do?". The second one answers "what should the agent be working on, with what data, and how?".

Most projects we have looked at end up wanting the workspace shape eventually. The progression usually goes:

  1. Ship tools. Get value.
  2. Notice the agent is making the same read calls every time. Convert those to resources.
  3. Notice users are typing the same workflow prompts every time. Encode them as prompts.
  4. Realize the server is now meaningfully reusable across team members because the intent is in the protocol, not in the user's head.

The amount of code involved is small. The mental shift is bigger. Start treating MCP as a protocol with three primitives instead of one, and a whole class of "I wish my agent worked the way I want" frustrations stops happening.

Where this fits in the series

This post sits next to the pillar and the tool-design post — once you are comfortable with tools, resources and prompts are the natural next move. The security post applies just as much: a resource handler needs the same path-traversal guard as a tool handler, and a prompt that wraps a destructive tool inherits the destructive tool's threat model.

If you are ready to build a host that consumes all three primitives — including on a phone, against an on-device model — the Flutter capstone is where this series is heading next.

Tools alone are a bag of verbs. Resources turn the agent into a reader. Prompts turn the user into a director. The servers that ship all three feel different to work with, in a way that is hard to articulate until you have used one.
