# Quickstart
Five minutes from a fresh API key to your first dataset and mentions list. By the end you'll have an API key, a dataset populated with real social-media posts, and a list of mentions you can work with.
## Prerequisites
- A Buzzabout account (sign up — the free tier is enough for this walkthrough).
- A terminal with `curl`, or any HTTP client you like.
## Walkthrough
### Get an API key
Open Settings → API keys in the web app and click New key. Copy the value (it starts with `bz_live_`) somewhere safe; you'll only see it once.

```shell
export BUZZABOUT_KEY="bz_live_..."
```

See authentication for the full key lifecycle: rotation, revocation, and MCP OAuth.
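If you'd rather call the API from Python than `curl`, every request carries the same `x-api-key` header. A minimal sketch using only the standard library; the `build_request` helper is our own, not part of any Buzzabout SDK:

```python
import json
import os

API_BASE = "https://api.buzzabout.ai/v1"

def build_request(method, path, key, body=None):
    """Assemble the method, URL, headers, and encoded body for an authenticated call."""
    headers = {"x-api-key": key}
    data = None
    if body is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(body).encode()
    return method, API_BASE + path, headers, data

# Build (but don't send) the create-dataset call from the next step.
method, url, headers, data = build_request(
    "POST", "/datasets", os.environ.get("BUZZABOUT_KEY", ""), {"name": "cold brew"}
)
```

Pass the four pieces to `urllib.request` or `requests` to actually send the call.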
### Create a dataset
A dataset is a named container for the mentions you'll collect.
```shell
curl -X POST https://api.buzzabout.ai/v1/datasets \
  -H "x-api-key: $BUZZABOUT_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "name": "cold brew" }'
```

```json
{
  "status": "success",
  "data": {
    "id": "ds_01H...",
    "name": "cold brew",
    "created_at": "2026-05-01T12:00:00Z"
  }
}
```

### Trigger a dataset run
A run is what actually collects posts from social platforms. The call is asynchronous: it returns `202 Accepted` immediately with a run in the `pending` state.
```shell
curl -X POST https://api.buzzabout.ai/v1/datasets/ds_01H.../runs \
  -H "x-api-key: $BUZZABOUT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "search_query": {
      "type": "prompt",
      "sources": ["reddit", "tiktok"],
      "search_query": "cold brew coffee"
    },
    "count": 200
  }'
```

```json
{
  "status": "success",
  "data": {
    "id": "dr_01H...",
    "dataset_id": "ds_01H...",
    "status": { "type": "pending", "steps": [] },
    "created_at": "2026-05-01T12:00:30Z"
  }
}
```

### Poll until completed
```shell
curl https://api.buzzabout.ai/v1/datasets/ds_01H.../runs/dr_01H... \
  -H "x-api-key: $BUZZABOUT_KEY"
```

Keep polling until `data.status.type` is `completed`. A 200-post run typically takes 1–3 minutes.
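Scripted polling is just a loop with a sleep. A sketch in Python, where `fetch_run` stands in for however you issue the GET above (the helper name, retry interval, and timeout are our choices, not part of the API; production code should also stop on whatever error states the API reports):

```python
import time

def wait_for_run(fetch_run, interval_s=10, timeout_s=600):
    """Call fetch_run() until data.status.type is 'completed', then return the run."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        run = fetch_run()["data"]
        if run["status"]["type"] == "completed":
            return run
        time.sleep(interval_s)
    raise TimeoutError("run did not complete within the timeout")
```

With a 200-post run usually finishing in 1–3 minutes, a 10-second interval keeps the request volume modest.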
```json
{
  "status": "success",
  "data": {
    "id": "dr_01H...",
    "dataset_id": "ds_01H...",
    "status": {
      "type": "completed",
      "steps": [
        { "name": "scraping", "completed_at": 1714564890 },
        { "name": "analysis", "completed_at": 1714565010 }
      ]
    },
    "params": { "...": "..." },
    "mentions_count": 200,
    "created_at": "2026-05-01T12:00:30Z",
    "updated_at": "2026-05-01T12:03:30Z"
  }
}
```

### List mentions
Mentions are global: `POST /v1/mentions` returns all the mentions across every dataset you own. Pass `dataset_ids` to scope the search.
```shell
curl -X POST https://api.buzzabout.ai/v1/mentions \
  -H "x-api-key: $BUZZABOUT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "dataset_ids": ["ds_01H..."],
    "limit": 5,
    "sort": "engagement_rate",
    "order": "desc"
  }'
```

```json
{
  "status": "success",
  "data": [
    {
      "source": "reddit",
      "id": "post_01H...",
      "author": { "title": "u/sipdaily", "...": "..." },
      "text": "Nobody tells you that nitro cold brew tastes...",
      "url": "https://www.reddit.com/r/coffee/comments/...",
      "num_views": 12400,
      "num_likes": 248,
      "engagement_rate": "0.020",
      "datasets": [{ "id": "ds_01H...", "name": "cold brew" }],
      "...": "..."
    }
  ],
  "has_next": true,
  "cursor": "eyJzb3J0X3ZhbHVlIjogIjAuMDIwIiwgImlkIjogInBvc3RfMDFIIn0"
}
```

### Using an LLM client?
If you're integrating with Claude or ChatGPT instead of writing HTTP yourself, skip ahead to MCP / Connect Claude. The same workflow runs as a sequence of MCP tool calls.
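Because the mentions response carries `has_next` and a `cursor`, fetching more than one page is a loop that feeds each cursor back into the next request. A hedged sketch: `fetch_page` stands in for the `POST /v1/mentions` call, and passing the cursor back as a `cursor` field in the request body is an assumption based on the response shape; check the mentions reference before relying on it:

```python
def all_mentions(fetch_page, base_params):
    """Accumulate mentions across pages by feeding each response's cursor back in."""
    params = dict(base_params)
    mentions = []
    while True:
        resp = fetch_page(params)
        mentions.extend(resp["data"])
        if not resp.get("has_next"):
            return mentions
        params["cursor"] = resp["cursor"]  # assumed request field name
```

The same shape works for any cursor-paginated endpoint: the server owns the sort position, and the client only echoes the opaque token back.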
## Next steps
- Run your first analysis — end-to-end walkthrough including audience scraping and the AI assistant.
- Authentication — production-grade key management, rotation, and OAuth for MCP.
- API / Endpoints / Datasets — the full reference for the calls we just made.