LLMs
The brain. Pick by reasoning depth, latency, and cost. Cheap and fast for triage; deep and careful for closing or clinical conversations.
Saaya is provider-agnostic by design. Swap the brain, the voice, the avatar, the channel — independently, per agent or per session. No rewrites. No lock-in.
A new model drops every two months. With Saaya, you flip a config flag — your prompts, tools, and integrations all keep working.
Route cheap, fast models to triage and deep, careful models to closing. Per-agent and per-session routing is built in.
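As a sketch of what per-session routing might look like (the `Stage` labels, model names, and function are illustrative, not Saaya's actual API):

```python
from enum import Enum

class Stage(Enum):
    TRIAGE = "triage"
    CLOSING = "closing"

# Illustrative cost-routing table: a cheap, fast model for triage,
# a deeper model for closing. Model names are examples only.
ROUTES = {
    Stage.TRIAGE: "claude-haiku",
    Stage.CLOSING: "claude-opus",
}

def pick_model(stage: Stage) -> str:
    """Return the model ID configured for this conversation stage."""
    return ROUTES[stage]
```

The same table could be keyed per agent or per session; the point is that routing is data, not code.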
Set primary and fallback providers per agent. Outages don't take down your support line. Audit logs keep you honest.
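A minimal sketch of primary/fallback behavior with an audit trail, assuming a provider call that raises on outage (the provider names, `call_provider`, and logger name are hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("saaya.audit")  # hypothetical audit logger

def call_provider(name: str, prompt: str) -> str:
    # Stand-in for a real provider SDK call; simulates the primary
    # being down so the fallback path is exercised.
    if name == "primary":
        raise ConnectionError("provider outage")
    return f"{name}: reply to {prompt!r}"

def complete(prompt: str, providers=("primary", "fallback")) -> str:
    """Try each configured provider in order, logging every attempt."""
    last_err = None
    for name in providers:
        try:
            reply = call_provider(name, prompt)
            log.info("provider=%s status=ok", name)
            return reply
        except Exception as err:
            log.warning("provider=%s status=failed error=%s", name, err)
            last_err = err
    raise RuntimeError("all providers failed") from last_err
```

Every attempt, success or failure, lands in the log, which is what makes the audit trail useful after an outage.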
LLM: The brain. Pick by reasoning depth, latency, and cost. Cheap and fast for triage; deep and careful for closing or clinical conversations.
STT: How the agent hears you. Quality and language coverage matter most — ASR errors compound through the rest of the pipeline.
TTS: How the agent sounds. Choose by warmth, accent, and language. Voice is half the trust — pick deliberately.
Avatar: When the conversation is on video, the face matters as much as the voice. Match expression to dialogue, lip-sync, and bandwidth budget.
Channels: How the agent shows up. Real phone numbers, WhatsApp Business, web chat — same agent, every channel.
Each agent definition has a config flag for LLM, STT, TTS, and avatar. Change the value and redeploy — that's the entire swap. No code changes, no test rewrites. Useful for cost-routing (Haiku for triage, Opus for complex) or for falling back when a provider has an outage.
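A sketch of what an agent definition's provider flags could look like; the field names and provider IDs here are illustrative, not Saaya's actual schema:

```python
# Hypothetical agent definition: one flag per swappable component.
agent = {
    "name": "support-triage",
    "llm": "anthropic/claude-haiku",   # the brain
    "stt": "deepgram/nova",            # how it hears
    "tts": "elevenlabs/turbo",         # how it sounds
    "avatar": None,                    # audio-only agent
}

# Swapping the brain is a one-line change; prompts, tools,
# and integrations are untouched.
agent["llm"] = "anthropic/claude-opus"
```

Because each component is just a value in the definition, the same mechanism covers cost-routing and outage fallback: change the value, redeploy.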
Free tier ships with the most popular providers preconfigured. Bring your own keys whenever you're ready.