MCP overview

How Buzzabout exposes itself as an MCP server, what tools are available, and how to authenticate.

The Buzzabout MCP server exposes the same primitives as the REST API as a set of tools that any Model Context Protocol client can call — Claude Desktop, Claude on the web, custom agents, or your own integration.

Transport

https://api.buzzabout.ai/mcp/

Streamable HTTP transport (the modern MCP transport — no separate SSE endpoint). One URL handles tool listing, tool calls, and OAuth discovery.

Trailing slash is required

Use https://api.buzzabout.ai/mcp/ (with trailing slash). The unslashed /mcp returns a 307 redirect that strips the request body in many MCP clients, which surfaces as silent connection failures or empty tool lists. Configure your client with the trailing slash up front.
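Since a misconfigured URL fails silently, it is worth normalising it before it reaches the client. A minimal sketch (the helper name is illustrative, not part of any SDK):

```python
def normalize_mcp_url(url: str) -> str:
    """Ensure the MCP endpoint URL ends with a trailing slash.

    The unslashed /mcp returns a 307 redirect that drops the POST body
    in many MCP clients, so normalise before configuring the client.
    """
    return url if url.endswith("/") else url + "/"
```

Running this over user-supplied config turns the redirect failure mode into a non-issue.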

Connecting

Point your MCP client at the URL above. Because the transport is streamable HTTP, no separate SSE endpoint or message URL needs to be configured — the client discovers tools, calls them, and performs OAuth against the one endpoint.
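In practice your MCP client library does all of this for you, but for debugging it can help to see the wire format. A sketch of what a tools/list message looks like under standard MCP JSON-RPC framing — note that a real session begins with an initialize exchange first; this only shows how a single message is framed, without sending it:

```python
import json

MCP_URL = "https://api.buzzabout.ai/mcp/"  # trailing slash required

def build_tools_list_request(request_id: int = 1) -> tuple[dict, str]:
    """Build headers and a JSON-RPC body for an MCP tools/list call.

    Streamable HTTP clients POST JSON-RPC messages to the single endpoint;
    the Accept header advertises both plain JSON and SSE responses.
    """
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }
    body = json.dumps({"jsonrpc": "2.0", "id": request_id, "method": "tools/list"})
    return headers, body
```

POSTing this body to MCP_URL with those headers is what your client does behind the scenes when it lists the buzzabout__* tools.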

Authentication

Two paths:

  • x-api-key — the same key as the REST API, checked first on every request. Best for headless agents.
  • OAuth 2.1 — interactive clients (Claude Desktop, Claude web) exchange an authorisation code for a JWT. The server issues the JWT on the user's behalf.

The Authorization: Bearer <jwt> and x-api-key paths share the same User extraction — every tool sees the same authenticated user regardless of how the request authenticated.

One exception: buzzabout__ask

buzzabout__ask impersonates a user against the AI assistant chat backend, so it requires the OAuth/JWT path. API-key callers get a structured forbidden error. All ten other tools work fine with either auth method.

What's exposed

Eleven tools, namespaced buzzabout__* to avoid collisions in hosts that connect multiple MCP servers.

Tool selection model

Tools split cleanly into three groups:

  1. Thin REST-mirror tools: buzzabout__list_datasets, buzzabout__get_dataset, buzzabout__create_dataset, buzzabout__create_dataset_run, buzzabout__get_dataset_run, buzzabout__create_audience_dataset, buzzabout__create_audience_dataset_run, buzzabout__get_audience_dataset_run, buzzabout__list_mentions, buzzabout__list_audience_profiles.
  2. Chat tool: buzzabout__ask delegates to the AI assistant for conversational analysis.
  3. Deliberate omissions: no PATCH / DELETE tools (admin actions stay in the web app), and no list_audience_datasets / list_*_runs tools; buzzabout__ask covers those use cases conversationally.

Async runs

Tools that kick off async work (buzzabout__create_dataset_run, buzzabout__create_audience_dataset_run) return immediately:

{
  "run_id": "dr_01H...",
  "dataset_id": "ds_01H...",
  "status": "pending",
  "next_step": "Call buzzabout__get_dataset_run with this run_id to poll completion.",
  "created_at": "2026-05-01T12:00:30Z"
}

The host LLM polls buzzabout__get_dataset_run / buzzabout__get_audience_dataset_run until status.type is completed or failed.
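When you are writing your own integration rather than letting a host LLM drive, that polling loop can be sketched as below. get_run is a hypothetical stand-in for however your client invokes buzzabout__get_dataset_run (or the audience variant):

```python
import time

TERMINAL_STATES = {"completed", "failed"}

def poll_run(get_run, run_id: str, interval: float = 2.0,
             timeout: float = 300.0) -> dict:
    """Poll a run-status tool until status.type reaches a terminal state.

    get_run is any callable that takes a run_id and returns the run dict,
    e.g. a wrapper around a buzzabout__get_dataset_run tool call.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = get_run(run_id)
        if run["status"]["type"] in TERMINAL_STATES:
            return run
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} not terminal after {timeout}s")
```

A fixed interval is fine for short runs; for long-running dataset builds, exponential backoff would reduce wasted calls.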

When to use MCP vs the API

Use MCP when:

  • It's interactive — a person talking to Claude.
  • The host LLM does the orchestration.
  • You want buzzabout__ask to drive the workflow.

Use REST when:

  • It's batch — a scheduled job or cron-driven sync.
  • You're writing the orchestration.
  • You only need primitive CRUD.

Both surfaces are backed by the same primitives, so hybrid setups (e.g. schedule the run via REST, ask Claude to summarise via MCP) work naturally.
