Documentation Index
Fetch the complete documentation index at: https://api-docs.tiro.ooo/llms.txt
Use this file to discover all available pages before exploring further.
If you’re building a Claude Code, Cursor, or ChatGPT agent that needs Tiro data, this page tells you when to call which surface and how to keep your context window light.
The mental model
MCP returns to context. CLI returns to disk.
When you call MCP get_note_transcript, the entire transcript text — typically 5–50 KB per note — is injected into your conversation context window. Every subsequent turn carries that weight until compaction or summarization.
When you call tiro notes transcript --output transcript.md, the same content lands on disk. stdout returns a single metadata line:
```json
{"ok":true,"data":{"saved":"./transcript.md","size":12450,"format":"md","guid":"note-a8f2c1…","paragraphCount":42,"segmentCount":127}}
```
Per transcript, that's 5–50 KB of body collapsed to roughly 50 tokens of metadata (path + counts). Pull 100 transcripts and your context still carries only the metadata lines; read each file only when you need it.
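The metadata ack can be consumed with jq before deciding whether to read the file; a minimal sketch, using the example line above as canned input (in practice this is the single line tiro prints to stdout):

```shell
# Canned --output ack in the shape shown above.
ack='{"ok":true,"data":{"saved":"./transcript.md","size":12450,"format":"md","guid":"note-a8f2c1…","paragraphCount":42,"segmentCount":127}}'

# Keep only the path and size in context; the body stays on disk.
path=$(echo "$ack" | jq -r '.data.saved')
size=$(echo "$ack" | jq -r '.data.size')
echo "saved $size bytes to $path"
```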
| You need to… | Use | Notes |
|---|---|---|
| Find notes by topic | MCP search_notes | hydrated with primary documents; lives in context |
| Pull 50+ notes to disk | CLI tiro notes search "…" --json > meetings.jsonl | NDJSON pipe; trivial to slice with jq |
| Read a verbatim quote | MCP get_note_transcript | small subset only — token-heavy |
| Save a full transcript for later | CLI tiro notes transcript <guid> --output <path> | content on disk; metadata in stdout |
| Browse meetings by date | CLI tiro notes list --since 7d --json | lightweight — metadata only |
| Read MCP-shape JSON without an MCP host | CLI tiro notes transcript <guid> --format json | byte-for-byte match with MCP get_note_transcript |
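The "trivial to slice with jq" claim above can be made concrete; a sketch with canned NDJSON (field names beyond guid are illustrative, so check your actual output shape):

```shell
# Sample NDJSON as search --json might emit it: one object per line,
# with a trailing cursor line that has no guid.
cat > /tmp/meetings.jsonl <<'EOF'
{"guid":"note-1","title":"Acme kickoff"}
{"guid":"note-2","title":"Internal standup"}
{"_cursor":"abc123"}
EOF

# Slice out guids; the cursor line is filtered because it lacks .guid.
jq -r 'select(.guid != null) | .guid' /tmp/meetings.jsonl
```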
When to call MCP vs CLI
```
Are you in an MCP-aware client (Claude Desktop, Cursor, Claude Code)?
│
├─ NO → CLI is your only option. Use --json everywhere.
│
└─ YES → Will the result fit comfortably in your context window?
   (Rule of thumb: <5 KB total, single-shot reasoning)
   │
   ├─ YES → MCP. Schema-typed call, multi-turn friendly.
   │
   └─ NO → CLI with --output. Disk holds the content,
           context holds the path. Reach for the Read tool
           on demand.
```
Worked example — 30 days of “Acme Corp” meetings
You want to draft a client-specific quarterly summary. You need every meeting where Acme came up in the last 30 days, with full transcripts.
```shell
# 1. Search → metadata + hydrated documents (NDJSON)
tiro notes search "Acme Corp" --since 30d --json > /tmp/acme.jsonl
# stdout: ~12 lines of NDJSON, ~3 KB total

# 2. Pull guids without loading bodies (skip the trailing cursor line,
#    which has no .guid)
jq -r '.guid // empty' /tmp/acme.jsonl > /tmp/guids.txt

# 3. Download each transcript to disk (only paths return to context)
mkdir -p ./out
xargs -I{} tiro notes transcript {} --output ./out/{}.md < /tmp/guids.txt
```
Cumulative context cost: ≈ 80 tokens (3 stdout lines + 12 metadata acks). 12 meeting transcripts now live in ./out/ — read them with the Read tool one at a time, only when you need exact wording.
Compare to MCP-only: get_note_transcript × 12 calls would inject 60–600 KB into context (12 transcripts × 5–50 KB each). Most of that is dead weight — you only need 2–3 of the transcripts in detail.
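Once the transcripts are on disk, triage is cheap: grep across the files to find the few worth reading in full, without loading any body into context. A sketch (file contents and the search term are illustrative stand-ins for the downloads from step 3):

```shell
mkdir -p ./out
# Stand-in transcripts; in practice these come from step 3 above.
printf 'Discussed Acme pricing renewal.\n' > ./out/note-1.md
printf 'Standup: no client topics.\n' > ./out/note-2.md

# List only the files that mention the topic; read those on demand.
grep -l 'pricing' ./out/*.md
```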
Reading errors as JSON
Every error from the CLI follows a stable envelope:
```json
{
  "ok": false,
  "error": {
    "code": "auth_required",
    "message": "Not authenticated. Run `tiro auth login` to sign in, or set TIRO_TOKEN env var.",
    "suggestion": "tiro auth login",
    "errorType": "auth_required",
    "httpStatus": null,
    "requestId": null
  }
}
```
Stable fields:
| Field | Use |
|---|---|
| `error.code` | machine-readable identifier — branch on this |
| `error.errorType` | coarse-grained category (`unauthorized`, `not_found`, `bad_request`, …) |
| `error.suggestion` | the exact next command to run for auto-recovery |
| `error.httpStatus` | the upstream HTTP status (when applicable) |
| `error.requestId` | upstream `x-request-id` for support escalation |
error.message is human-readable and may change wording across releases — don’t pattern-match against it.
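Branching on `error.code` might look like the following sketch, using a stubbed envelope as input (the `not_found` branch is illustrative; only `auth_required` appears in the example above):

```shell
# Stubbed error envelope in the shape shown above.
resp='{"ok":false,"error":{"code":"auth_required","suggestion":"tiro auth login"}}'

if [ "$(echo "$resp" | jq -r '.ok')" = "false" ]; then
  code=$(echo "$resp" | jq -r '.error.code')
  case "$code" in
    auth_required) fix=$(echo "$resp" | jq -r '.error.suggestion') ;; # run this, then retry
    not_found)     fix="skip" ;;   # bad guid: drop it and continue
    *)             fix="abort" ;;  # unknown: surface to the user
  esac
fi
echo "next: $fix"
```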
Exit codes
| Code | Meaning | Agent action |
|---|---|---|
| 0 | success | continue |
| 1 | generic error | inspect `error.code` |
| 2 | usage error (bad flag, invalid date, missing required arg) | fix the call; do not retry blindly |
| 4 | auth required | run `tiro auth login`, then retry |
| 64 | EX_USAGE | same as 2 |
| 65 | EX_DATAERR | response failed schema validation; surface to user |
| 78 | EX_CONFIG | no token in env or keychain; ask the user to authenticate or supply TIRO_TOKEN |
Quick auth-recovery loop in shell:
```shell
output=$(tiro notes list --json 2>&1)
exit_code=$?
if [ "$exit_code" -eq 4 ] || [ "$exit_code" -eq 78 ]; then
  tiro auth login
  output=$(tiro notes list --json)
fi
echo "$output"
```
Output guarantees
- `--json` is NDJSON for streams — `list` and `search` emit one JSON object per line. Pagination cursors arrive as a final `{"_cursor": "…"}` line.
- `--output <path>` writes atomically — temp file + rename, never partial.
- TTY auto-detection — pretty in interactive shells, JSON when piped or redirected. Force either with `--pretty` / `--json`.
- `tiro notes transcript --format json` matches MCP `get_note_transcript` — same field names, same nesting, same speaker-segment structure. Reuse your existing parser.
- Tokens are never echoed — `auth status` only shows the first 4 chars; logs even at `--verbose` redact the rest.
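The trailing `{"_cursor": "…"}` line can drive a pagination loop. A sketch with a canned page; note that the flag for passing a cursor back to the CLI is an assumption here, so check `tiro notes list --help` for the real name:

```shell
# One page of NDJSON: data lines first, cursor line last.
cat > /tmp/page1.jsonl <<'EOF'
{"guid":"note-1"}
{"guid":"note-2"}
{"_cursor":"next-abc"}
EOF

# Split data lines from the cursor in one pass.
cursor=$(jq -r 'select(._cursor != null) | ._cursor' /tmp/page1.jsonl)
jq -c 'select(._cursor == null)' /tmp/page1.jsonl > /tmp/all.jsonl

echo "next cursor: $cursor"
# If $cursor is non-empty, request the next page with it
# (flag name is an assumption): tiro notes list --json --cursor "$cursor"
```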
Stable contract — what won’t break across patch releases
- `error.code` values
- `error.errorType` values
- Exit codes
- NDJSON line shape for `list`/`search`
- The MCP-shape JSON returned by `tiro notes transcript --format json`
- The metadata-line shape returned by `--output` operations
Anything else (pretty output, error messages, verbose log format) is best-effort and may change.
Links