Single-prompt agents
A single-prompt agent is one system prompt plus a model and a voice. It is the right shape for roughly 80% of conversational use cases (concierge, qualification, support triage, in-product copilots) because it gives the model room to handle the long tail without you maintaining a state machine.
When to reach for it
- The conversation is open-ended (the user can ask anything in scope).
- You want the model to reason its way through novel paths instead of branching on every utterance.
- You can express the policy in language ("never quote prices over $10k", "always confirm an email").
- You expect the prompt to be edited and versioned by humans, not generated by code.
Anatomy of a Saaya prompt
A great Saaya prompt has four blocks, in this order: identity, goal, constraints, and closers. Identity sets voice and persona; goal names what success looks like; constraints encode policy; closers force the agent to land the conversation somewhere concrete.
await saaya.agents.create({
name: "Maya, Pipeline Concierge",
prompt: `
# Identity
You're Maya. You speak in warm, direct sentences with the occasional dry aside.
You sound like the smartest friend on the founding team, not a help-desk script.
# Goal
Qualify inbound interest for Saaya. End every discovery call with a calendar invite
or an explicit "not now, here's why."
# Constraints
- Never quote pricing over \$10k/month; escalate to a human instead.
- If the caller asks for a feature that does not exist, say so plainly. Do not invent.
- Always confirm name, company, and best email before booking.
# Closers
Before ending, ask: "What's the next step that would feel useful?"
Then propose one: book time, send a doc, or schedule a callback.
`,
voice: { provider: "elevenlabs", voiceId: "rachel" },
llm: { provider: "anthropic", model: "claude-opus-4-7" },
channels: ["voice", "chat", "whatsapp"],
});

What Saaya adds for free
The prompt above runs on voice, chat, and WhatsApp without further work. Saaya layers in: turn-taking on voice (we cut the model off when the user starts talking), barge-in on TTS, automatic citation footnotes when a knowledge base is attached, and channel-aware response shaping (shorter on voice, formatted on chat).
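Channel-aware shaping happens inside Saaya, but the idea is simple enough to sketch. The helper below is an illustrative approximation, not the real implementation: the function name, the two-sentence voice limit, and the WhatsApp header-stripping heuristic are all assumptions.

```typescript
type Channel = "voice" | "chat" | "whatsapp";

// Illustrative sketch of channel-aware response shaping: one model reply,
// trimmed and reformatted per channel. Heuristics here are assumptions.
function shapeResponse(reply: string, channel: Channel): string {
  if (channel === "voice") {
    // Voice: short and speakable -- strip markdown, keep the first two sentences.
    const plain = reply.replace(/[*_#`]/g, "");
    const sentences = plain.match(/[^.!?]+[.!?]+/g) ?? [plain];
    return sentences.slice(0, 2).map((s) => s.trim()).join(" ");
  }
  // Chat keeps full formatting; WhatsApp drops markdown headers it can't render.
  return channel === "whatsapp" ? reply.replace(/^#+\s*/gm, "") : reply;
}
```

The point is that the prompt stays channel-agnostic; the shaping layer decides how much of the reply each surface gets.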
Iterating on the prompt
Edit the prompt either in code (every `agents.update` mints a new draft version) or in the dashboard; the surface is the same. Use the Session viewer to find the conversations where the agent went off-script, copy three or four turns, and paste them into your prompt as anti-examples.
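One way to keep that loop mechanical is a small helper that folds off-script turns into the prompt. This is a hypothetical sketch: the `# Anti-examples` block name and the `addAntiExamples` helper are assumptions, not part of the Saaya SDK.

```typescript
// Hypothetical helper: append (or refresh) an anti-examples block at the
// end of a prompt, built from turns copied out of the Session viewer.
function addAntiExamples(prompt: string, badTurns: string[]): string {
  const block = [
    "# Anti-examples",
    "Never respond like this:",
    ...badTurns.map((turn) => `- "${turn}"`),
  ].join("\n");
  // Replace an existing block so repeated edits don't stack duplicates.
  return prompt.includes("# Anti-examples")
    ? prompt.replace(/# Anti-examples[\s\S]*$/, block)
    : `${prompt.trim()}\n\n${block}\n`;
}
```

Passing the result to `agents.update` then mints a new draft version, so each round of anti-examples is diffable against the last.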
Graduating to a flow