Changelog

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new:

Feel free to reply to this email directly to share your feedback.

1. Dashboard Overhaul

We’ve completed a full dashboard overhaul.

  • A more user-friendly and intuitive agent builder.
  • Added more functionality to the history tab.
  • Overhauled the billing portal.
  • A cleaner and more consistent UI.

Go to dashboard

2. Warm Transfer

We’ve added a warm transfer feature.

If you need to provide background information and hand off the call to the next agent, this feature allows you to set up a prompt or static message for smooth transitions.

Watch the tutorial

3. Disable Transcript Formatting

We’ve added a toggle to disable transcript formatting. This can help resolve the ASR (Automatic Speech Recognition) errors we recently discovered:

  • Phone numbers were being misinterpreted as timestamps.
  • Repeated ("double") digits were not being accurately captured.

If you encounter issues related to number transcription, try out this toggle.

4. Cal.com Custom Fields

If you’ve added custom fields in Cal.com, you can now use them in Retell.

When using Cal.com functions, you can instruct the agent to collect specific information, and it will automatically display the collected data in the booking event.

Watch the tutorial

Bug fixes and Improvements

  • Boosted keywords
  • Custom function timeout
  • Maintaining Consistent Voice Tonality

Launching Two Programs (for Partners Looking to Collaborate with Retell AI)

Affiliate Program

Launching by the end of next week!

Partner Program

Apply now to join our Partner Program.

Coming soon:

  1. Integrate with Real-time API
  2. RAG: Add your custom knowledge base.
  3. Organization Management: Invite teammates to join your organization.
  4. Usage Dashboard: Charts for Daily Usage

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new.

Feel free to reply to this email directly to share your feedback.

1. More Languages

We now support more languages:

  • zh-CN (Chinese - China)
  • ru-RU (Russian - Russia)
  • it-IT (Italian - Italy)
  • ko-KR (Korean - Korea)
  • nl-NL (Dutch - Netherlands)
  • pl-PL (Polish - Poland)
  • tr-TR (Turkish - Turkey)
  • vi-VN (Vietnamese - Vietnam)

Simply change the language in the settings panel on the agent creation page.

2. Max Call Duration

You can now set the maximum duration for calls in minutes to prevent spam.

3. Extended Voicemail Detection

You can set the duration for detecting voicemail. In some B2B use cases, there are welcome messages before going to voicemail. Setting a longer voicemail detection time can solve this issue.

4. LLM Temperature

You can adjust the LLM Temperature to get more varied results. The default setting is more deterministic and provides better function call results.

5. Agent Voice Volume

You can now control the volume of the agent’s voice.

Important Notification

  • We will fully shut down the Audio WebSocket and V1 call APIs on 12/31/2024.

Bug fixes and Improvements

  • DTMF issue is now fixed.
  • Failed inbound calls will now have an entry in history.
  • Addressed inaudible speech.
  • Added retry for inbound dynamic variable webhook.
  • Added timeout for Retell webhook.
  • Fixed a bug where custom voice was accidentally overwritten.
  • Improved voicemail detection performance.

Community Videos

How to use Dynamic Variables in Retell VoiceAI Automation

Rish - AI Business Automation

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new.

Feel free to reply to this email directly to share your feedback.

1. DTMF (Navigate IVR)

Guide the voice agent through IVR systems with button presses (e.g., “Press 1 to reach support”).

See Doc

2. Voicemail Handling

  • You can now set up a voicemail message that plays when a call reaches voicemail.
  • Dashboard only; not yet available via API.

3. SIP Trunking Integration

You can now integrate Retell AI with your telephony providers, using your own phone numbers (e.g., Twilio, Vonage). This works with both Retell LLM and Custom LLM.

Integration options:

  • Elastic SIP trunking
  • Dial to SIP Endpoint

See Doc

4. Multilingual Agent (English & Spanish)

You can now create a multilingual agent that speaks both English and Spanish in the same call.

See Doc

5. Pronunciation

You can also control how certain words are pronounced. This is useful when you want to make sure certain uncommon words are pronounced correctly.

See Doc

6. Voice model selector

We’ve added new settings for voice model selection:

  • Turbo v2.5: Fast multilingual model
  • Turbo v2: Fast English-only model with pronunciation tag support
  • Multilingual v2: Rich emotional expression with natural accents.

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new.

Feel free to reply to this email directly to share your feedback.

1. Audio Infrastructure Update

We have upgraded our audio infrastructure to WebRTC, moving away from the original websocket-based system. This change ensures better scalability and reliability:

  • Web Calls: All web calls are now on WebRTC.
  • Phone Calls: Migration to WebRTC is in progress, pending resolution of some SIP blockers.

2. Call API V2

We've introduced Call API V2, which separates phone call and web call objects and includes several field and API changes:

See Doc

3. Concurrency Enhancements

  • Default Limits: The default concurrency limit for all users has been increased to 20.
  • Concurrency API: A new API to check your current concurrency and limit is now available here.

4. Separate Inbound and Outbound Agents

  • Agent Separation: Our APIs now support separate inbound and outbound agents, with the option to disable either as needed.
  • Nickname Field: Easily find specific numbers with the addition of a nickname field for better organization.

5. Bug Fix and Reliability Improvement

  • Enhanced all modules with a smarter retry and failover mechanism.
  • Resolved issues with audio choppiness and looping.
  • Corrected the display of function call results in the LLM playground.
  • Addressed the scrolling issue in the history tab.

6. Usage Limits

In response to abuse and misuse of our platform, we've added the following usage limits:

  • Scam Detection: Implemented to safeguard users.
  • Call Length Limit: Maximum of 1 hour.
  • Token Length Limit: Maximum of 8192 tokens for Retell LLM. For multi-state prompts, this includes the longest state plus the general prompt.
  • Please contact us if you need exceptions.
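The multi-state token limit above can be sketched as a quick budget check: the longest state prompt plus the general prompt must fit within 8,192 tokens. The whitespace-based token count below is a rough stand-in for a real tokenizer, used only to illustrate the arithmetic.

```python
# Sketch of the documented Retell LLM token limit for multi-state prompts:
# the general prompt plus the longest single state prompt must stay within
# 8192 tokens. Word count is a crude proxy for tokens, for illustration only.
TOKEN_LIMIT = 8192

def rough_tokens(text: str) -> int:
    """Crude token estimate (word count); swap in a real tokenizer in practice."""
    return len(text.split())

def within_limit(general_prompt: str, state_prompts: list) -> bool:
    """Check the documented budget: general prompt + longest state prompt."""
    longest_state = max((rough_tokens(p) for p in state_prompts), default=0)
    return rough_tokens(general_prompt) + longest_state <= TOKEN_LIMIT
```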

Coming soon:

  1. SIP: Migration of phone call traffic.
  2. DTMF Support: Enhanced functionality for phone calls.
  3. Integration Flexibility: Easier ways to bring your own Twilio and integrate with other telephony providers.
  4. Batch Calling API: Streamline multiple calls through our upcoming API.

Community video

Here are the videos that our community has created! Feel free to check them out. If you've made videos for Retell AI, please let us know, and we'll feature them in our newsletter. Keep the creativity flowing!

Retell AI Virtual AirBnb Property Manager

GrayTeck

Como FUNCIONA La mejor IA de VOZ en ESPAÑOL | Retell AI

Jorge Tomas

End to End Ai Voice Ordering w/ Retell AI

Do NOT Annoy The Brandnew AI Flight Instructor

Swiss001

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new.

Feel free to reply to this email directly to share your feedback.

1. SOC 2 Type 1 Certification

We've obtained the Vanta SOC 2 Type 1 certification and are currently awaiting the SOC 2 Type 2 certification.

2. Debugging mode

Click on "Test LLM" to enter debugging mode. It works with both single prompts and stateful multi-prompt agents. Now, you can test the LLM without speaking. You can create, store, and edit the conversation.

Pro tip:
For multi-state prompt agents, you can change the starting point to a middle state and test from there.

3. TTS Fallback

Your stability is our top priority. We've added the capability to specify a fallback for TTS. In case of an outage with one provider, your agent can use another voice from a different provider.

4. GPT-4o and pricing

The OpenAI GPT-4o LLM is now available on Retell. The voice interface API has not been released yet, but we plan to integrate it as soon as it becomes available. Stay tuned!

GPT-4o is optional, priced at $0.10 per minute.

See Pricing

5. Add Pronunciation Input

You can now guide the model to pronounce a word, name, or phrase in a specific way. For example: { "word": "actually", "alphabet": "ipa", "phoneme": "ˈæktʃuəli" }.

This feature is currently available only via the API but will soon be added to the dashboard.

See Doc
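A pronunciation entry like the example above can be assembled as a small dictionary. The three field names come from the example in this note; the validation helper and the surrounding payload shape are illustrative, not the exact API schema.

```python
# A pronunciation entry using the fields shown in this note. How the entry
# nests inside the full agent payload is an assumption; see the docs.
pronunciation_entry = {
    "word": "actually",      # the word to override
    "alphabet": "ipa",       # phonetic alphabet used for the phoneme
    "phoneme": "ˈæktʃuəli",  # target pronunciation
}

def validate_entry(entry: dict) -> bool:
    """Check that a pronunciation entry carries all three required fields."""
    return all(entry.get(key) for key in ("word", "alphabet", "phoneme"))
```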

6. Normalize for speech option

Normalize parts of the text (numbers, currency, dates, etc.) to their spoken form for more consistent speech synthesis.

See Doc

7. End Call After User Stays Silent

You can now end the call automatically if the user stays silent for a set period after the agent speaks.

The minimum value allowed is 10,000 ms (10 s). By default, this is set to 600,000 ms (10 min).

See Doc
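The minimum and default above can be captured in a small helper when building your agent config. This is a sketch under the numbers stated in this note; the helper name and clamping behavior are ours, not part of the API.

```python
from typing import Optional

MIN_SILENCE_MS = 10_000       # minimum allowed: 10 s (from this note)
DEFAULT_SILENCE_MS = 600_000  # default: 10 min (from this note)

def resolve_silence_timeout(requested_ms: Optional[int]) -> int:
    """Pick the end-call silence timeout, enforcing the documented minimum.

    If no value is requested, fall back to the documented default.
    """
    if requested_ms is None:
        return DEFAULT_SILENCE_MS
    return max(requested_ms, MIN_SILENCE_MS)
```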

8. Miscellaneous Updates

  1. Voice Lists API: You can now get available voices via API. API References
  2. Ambient Sound: New ambient sound for the call center and ambient sound volume control.
  3. Asterisks Fix: We noticed that OpenAI models recently started generating asterisks, causing some problems. We have applied a fix to stop this.
  4. SDK Updated

Article

Retell AI lets companies build ‘voice agents’ to answer phone calls

TechCrunch

Community video

Here are the videos that our community has created! Feel free to check them out. If you've made videos for Retell AI, please let us know, and we'll feature them in our newsletter. Keep the creativity flowing!

Lead Qualification with Voice AI | How to integrate with Gohighlevel

Sanava

End to End Ai Voice Ordering w/ Retell AI

GrayTeck

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new 👇. Feel free to reply to this email directly to share your feedback.

1. Enhanced Call Monitoring

Call Analysis: We've introduced metrics like Call Completion Status, Task Completion Status, User Sentiment, Average End-to-End Latency, and Network Latency for comprehensive monitoring. You can access these directly on the dashboard or through API.

Disconnection Reason Tracking: Get insights into call issues with the addition of "Disconnection Reason" in the dashboard and "get-call" object. For more details, refer to our Error Code Table.

Function Call Tracking: Transcripts now include function call results, offering a seamless view of when and what outcomes were triggered. Available in the dashboard and get-call API. Custom LLM users can send the tool call invocation and tool call result events to pass function-calling results to us, so the results appear in the woven transcript and the dashboard shows when each function was triggered.

2. New Features

Reminder Settings: You can now configure reminder settings to define the duration of silence before an agent follows up with a response. Learn more.

Backchanneling: Backchannel is the ability for the agent to make small noises like “uh-huh”, “I see”, etc. during user speech, to improve engagement of the call. You can set whether to enable it, how often it triggers, what words are used. Learn more.

“Read Numbers Slowly”: Optimize the reading of numbers (or anything else) by making sure it is read slowly and clearly. How to Read Slowly.

Metadata Event for Custom LLM: Pass data from your backend to the frontend during a call with the new metadata event. See API reference.

3. Major Upgrade to Python Custom LLM (Important)

Improved async OpenAI performance for better latency and stability. Highly recommended for existing Python Custom LLM users to upgrade to the latest version.

See Doc

4. Webhook Security

Improved webhook security with the signature "verify" function in the new SDK. Find a code example in the custom LLM demo repositories and in the documentation.

Additionally, the webhook includes a temporary recording for users who opt out of storage; please note that this recording will expire in 10 minutes.

See Doc
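Conceptually, webhook signature verification recomputes a keyed hash of the raw request body and compares it against the signature header. The HMAC-SHA256 scheme and function names below are an illustrative assumption, not Retell's exact algorithm; use the SDK's built-in verify function in practice.

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature: str, api_key: str) -> bool:
    """Illustrative webhook check: HMAC-SHA256 over the raw request body.

    The exact scheme Retell uses may differ; prefer the SDK's verify helper.
    """
    expected = hmac.new(api_key.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```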

This week’s video

We’ve got a shout out in the latest episode of Y Combinator’s podcast Lightcone.

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

The Retell AI team 💛

Dear Retell Community,

We're thrilled to share some exciting updates to our platform! Here’s what’s new 👇. Feel free to reply to this email directly to share your feedback on these updates!

1. Retell LLM Updates

LLM Model Options: Choose between GPT-3.5-turbo and GPT-4-turbo, with additional models coming soon. Available through both our API and dashboard.

Interruption Sensitivity Slider: Adjust how easily users can interrupt the agent. This feature is now accessible in our API and dashboard.

2. Pricing Updates

We've updated our pricing structure to be clearer and more modular.

Conversation voice engine API

- With OpenAI / Deepgram voices ($0.08/min)

- With Elevenlabs voices ($0.10/min)

LLM Agent

- Retell LLM - GPT 3.5 ($0.02/min )

- Retell LLM - GPT 4.0 ($0.2/min )

- Custom LLM (No charge)

Telephony

- Retell Twilio ($0.01/min )

- Custom Twilio (No charge)

See Detail

3. Monitoring & Debugging Tools

Dashboard Updates: The history tab now includes a public log, essential for debugging and understanding your agent's current state, tool interactions, and more.

Enhanced API Responses: Our get-call API now provides latency tracking for LLM and websocket roundtrip times.

See Doc

4. Security Features

Ensure the authenticity of requests with our new IP verification feature. Authorized Retell server IPs are: 13.248.202.14, 3.33.169.178.
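On the receiving side, this amounts to an allowlist check on the request's source IP. The two addresses are the ones listed above; the helper itself is a hypothetical sketch for your webhook server, not Retell code.

```python
# The authorized Retell server IPs listed in this note.
RETELL_SERVER_IPS = {"13.248.202.14", "3.33.169.178"}

def is_retell_request(remote_ip: str) -> bool:
    """Return True if the request's source IP is an authorized Retell server."""
    return remote_ip in RETELL_SERVER_IPS
```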

5. Other Improvements

Enhancements for Custom LLM Users

  • You can now turn off interruption for each response (no_interruption_allowed in doc)
  • Ability to let agent interrupt / speak when no response is needed (doc)
  • Config event to control whether to enable reconnection and whether to send call details at the beginning of the call
  • New ping-pong events and reconnection mechanism in the LLM websocket: reconnects if the connection is lost, and tracks server roundtrip latency (available in the get-call API)

Web Call Frontend Upgrades

  • Frontend SDK now contains a lot more events that are helpful for animation
    • "audio": real time audio played in the system
    • "agentStartTalking", "agentStopTalking": track whether agent is speaking, not applicable when ambient sound is used
  • "enable_audio_alignment" option to get audio buffer and text alignment in the frontend (not yet supported in the frontend SDK).

SDK improvement: Our updated SDK maintains backward compatibility, ensuring smooth transitions and consistent performance.

🌟 This week’s Demo: Introducing Retell LLM

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

Retell AI team 💛

Dear Retell Community,

We've been closely listening to your feedback and identifying the most significant friction points in building powerful voice AI agents that interact and take actions like humans. As a result, we're thrilled to announce the beta launch of Retell LLM, a unique framework for LLM agents that's optimized for real-time voice conversations and action executions.

1️⃣ Retell LLM (Beta)

Low Latency, Conversational LLM with Reliable Function Calls

Experience lightning-fast voice AI with an average end-to-end latency of just 800 ms, mirroring the performance featured in the South Bay Dental Office demo on our website. Our LLM has been fine-tuned for conciseness and a conversational tone, making it perfect for voice-based interactions. It is also engineered to reliably initiate function calls.

Single-Prompt vs. Stateful Multi-Prompt Agents

We provide two options for creating an agent. The Single-Prompt Agent is ideal for straightforward tasks that require a brief input. For scenarios where the agent's prompt is lengthy and the tasks are too complex for a single input to be effective, the Stateful Multi-Prompt Agent is recommended. This approach divides the prompt into various states, each with its own prompt, linked by conditional edges.
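A stateful multi-prompt agent can be pictured as a small graph: each state carries its own prompt, and conditional edges name the next state. The state names, field names, and validation helper below are all hypothetical illustrations of the idea, not the exact API schema.

```python
# Hypothetical stateful multi-prompt agent: states with their own prompts,
# linked by conditional edges. The config shape is illustrative only.
states = {
    "greeting": {
        "prompt": "Greet the caller and ask how you can help.",
        "edges": [{"condition": "caller wants to book", "to": "booking"}],
    },
    "booking": {
        "prompt": "Collect a preferred date and time, then confirm.",
        "edges": [{"condition": "booking confirmed", "to": "goodbye"}],
    },
    "goodbye": {"prompt": "Thank the caller and end the call.", "edges": []},
}

def validate_states(graph: dict) -> bool:
    """Every conditional edge must point to a state defined in the graph."""
    return all(
        edge["to"] in graph
        for state in graph.values()
        for edge in state["edges"]
    )
```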

User-Friendly UI for Agent Creation and API for Programmatic Agent Creation

Our dashboard allows you to quickly create an LLM agent using prompts and the drag-and-drop functionality for stateful multi-prompt agents. You can seamlessly build, test, and deploy agents into production using our dashboard or achieve the same programmatically via our API.

Pre-defined Tool Calling Abilities such as Call Transfer, Ending Calls, and Appointment Booking

Leverage our pre-defined tool calling capabilities, including ending calls, transferring calls, checking calendar availability (via Cal.com), and booking appointments (via Cal.com), to easily build real-world actions. We also offer support for custom tools for more tailored actions.

Maintaining Continuous Interaction During Actions That Take Longer

To address delays in actions that require more time to complete, you can activate this feature. It enables the agent to maintain a conversation with the user throughout the duration of the function call. This ensures the voice AI agent keeps the interaction smooth and avoids awkward silences, even when function calls take longer.

2️⃣ SDK Update v3.4.0 Announcement

Please note, the previous SDK version will be phased out in 60 days. We encourage you to transition to the latest SDK version.

3️⃣ Status Page

Stay informed with system status on our new status page.

Status Page

4️⃣ Public Log in get-call API

To streamline your troubleshooting process, we've introduced a public log within our get-call API. This new feature aids in quicker issue resolution and smoother integration, detailed further at the link below.

See doc

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

Dear Retell Community,

We're excited to announce some major updates to our services. Here's the latest:

1️⃣ More Affordable Premium Voices

Thanks to recent cost reductions in our premium voice service, we're excited to pass these savings on to our customers. We're pleased to announce a new, lower price for our premium voice service—now just $0.12 per minute, down from $0.17. Enterprise pricing will also see similar reductions (please contact us at founders@retellai.com for more information).

Please note: The adjusted pricing will take effect from March 1st, and billing will be charged at the end of this month.

2️⃣ Customizable Dashboard Settings

Gain more control over your voice output with new dashboard settings.

  • Ambient Sound
  • Responsiveness
  • Voice Speed
  • Voice Temperature
  • Backchanneling

Tailor your voice interactions to suit your precise needs and preferences for a truly personalized experience.

3️⃣ Secure Webhooks for Enhanced Security

Boost your communication security with our new webhook signatures. This feature enables you to confirm that any received webhook genuinely comes from Retell, providing an additional layer of protection.

See doc

4️⃣ Launch of Multilingual Support

We're excited to announce the launch of our multilingual version, now supporting German, Spanish, Hindi, Portuguese, and Japanese. Access and set your preferred language through our dashboard.

While this feature is currently available via API, we're working on extending support to our SDKs shortly.

See doc

5️⃣ Opt-Out of Transcripts and Recordings Storage

Based on user feedback, we've introduced an opt-out option for storing transcripts and recordings. This feature, available in our API and the Playground, gives you more control over your data and privacy.

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

Dear Retell Community,

We are excited to share several updates and new features with you. Our goal is to continually improve our offerings to better meet your needs. Here's what's new:

1️⃣ Enterprise Plan and Discounts Now Available

We're excited to announce the availability of our discounted enterprise tiered pricing. For more information on that, please contact our team at founders@retellai.com.

2️⃣ Enhanced Conversation with Lower Latency

We've launched improvements to further reduce latency (by approximately 30%). Try our demo on the website again and experience the magical speed.

3️⃣ New Agent Control Parameters

We've introduced additional control parameters for agents for greater customization and control. Including:

  • Responsiveness: Adjust how responsive your agent is to utterances.
  • Voice Speed: Control the speech rate of your agent to be faster or slower.
  • Boost Keywords: Prioritize specific keywords for speech recognition.

These parameters have been added to our API. Documentation is being updated, and we are also working on incorporating these features into the SDKs. For more details, visit Create Agent API Reference.

4️⃣ New Call Control Parameter: - end_call_after_silence_ms

This parameter enables the automatic termination of calls following a specified duration of user inactivity. It's designed to streamline operations and improve efficiency.

5️⃣ Word-Level Timestamps in Transcripts

To enhance the utility of our transcripts, we are now including word-level timestamps. This feature is pending documentation updates, so stay tuned for more information at Audio WebSocket API Reference.

6️⃣ [Auto-reconnection] Web Call Updates - Client JS SDK 1.3.0

For users utilizing web calls, our latest client JavaScript SDK (version 1.3.0) now supports auto-reconnection of the socket in case of network disconnections. This ensures a more reliable and uninterrupted service.

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

Dear Retell Community,

We're thrilled to announce a series of updates and improvements to our platform, designed to enhance your experience and offer you more versatility and control. Here's what's new:

1️⃣ Domain changed

Please note that our domain has changed. Make sure to update your bookmarks and records to stay connected with us seamlessly.

2️⃣ New TTS provider: Deepgram

We've introduced Deepgram as our new TTS provider. Explore it on the Dashboard and find your favorite voice! The price is still $0.10/minute ($6/hour).

Also, we've added more voice choices from 11labs, ensuring more stable and diverse voice options for your projects.

3️⃣ New Control Parameters: Voice Temperature

Gain control over the stability and variability of your voice output, allowing for more tailored and dynamic audio experiences.

See doc

4️⃣ New Agent ability: Back channeling

Enhance interactions with the ability for the agent to backchannel, using phrases like "yeah" and "uh-huh" to express interest and engagement during conversations.

See doc

5️⃣ Python Backend Demo Now in FastAPI

By popular demand, our Python backend demo has transitioned to FastAPI. It includes Twilio integration and a simple function calling example, providing a more robust and user-friendly experience.

See demo

6️⃣ New Version of Web Frontend SDK

Our updated web frontend SDK makes integration easier and improves performance, allowing you to access live transcripts directly on your web frontend.

See SDK

7️⃣ Improved Performance in Noisy Environments

Our product now offers improved performance even in noisy settings, ensuring your voice interactions remain clear and uninterrupted.

We're excited for you to experience these updates and hope they significantly enhance your projects and workflows.

As always, we value your feedback and are here to support you in leveraging these new features to their fullest potential.

Best regards,

Retell AI team 💛

Important Update:

1️⃣ New pricing tier released

Dear Retell Community,

We are thrilled to announce a new and significantly more affordable pricing tier featuring OpenAI's TTS. Effective immediately, you can take advantage of our state-of-the-art voice conversation API with OpenAI TTS at the new rate of $0.10 per minute.

This adjustment reflects our commitment to providing you with exceptional value and enhancing your voice interaction experience.

We believe this new pricing will make our product more accessible and allow you to leverage our technology for a wider range of applications.

2️⃣ SDK updated

We've updated our SDK, so upgrade your Retell SDK to stay current.
- https://www.npmjs.com/package/retell-sdk
- https://pypi.org/project/retell-sdk/

We added a frontend js SDK to abstract away the details of capturing mic and setting up playback.
- https://www.npmjs.com/package/retell-client-js-sdk

We've updated our documentation at https://docs.re-tell.ai/guide/intro to help with integration.

See SDK documentation

3️⃣ Open-source demo repo

We've open-sourced the LLM and Twilio code that powers our dashboard as demos:

Node.js demo:

https://github.com/adam-team/retell-backend-node-demo

Python demo:

GitHub - adam-team/python-backend-demo

We open sourced the web frontend demo:

React demo using SDK :

GitHub - adam-team/retell-frontend-reactjs-demo

React demo using native JS:

GitHub - adam-team/retell-frontend-reactjs-native-demo

See Opensource demo repo doc

Thank you for your understanding and continued support.

Best regards,

Retell AI Team 💛

🛠️ Important Update:

API Changes & New Features

Dear Retell Community,

In our quest to deliver a human-level conversation experience, we've made a strategic decision to refocus our efforts on voice conversation quality, while scaling back on certain other nice-to-haves. The current API will be phased out after this Wednesday at 12:00 PM. We warmly invite you to adopt our new API, designed to continue providing you with a magical AI conversation experience long-term.

🌟 Key Changes:

  • LLM Open Sourced: Our LLM will no longer be included in the API. Instead, use the "Custom LLM" feature to integrate your own LLM into the conversation pipeline. Our LLM will remain accessible on the dashboard for demo purposes.
  • Twilio and Phone Call Features Open Sourced: These features are removed from the API but remain accessible on the dashboard for demo purposes.
  • Custom LLM Integration: Our API now exclusively supports the integration of your own LLM via a websocket, requiring a specified websocket URL for agent creation.
  • SDK Updates: We're updating our Node.js SDK to align with these changes, with the Python SDK update to follow soon.
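With Custom LLM integration, Retell connects to the websocket URL you specify at agent creation, streams transcript updates, and expects your server to reply with the agent's next utterance. The message handler below is a hypothetical sketch of that exchange; the field names are illustrative, not the exact protocol (see the docs for the real event shapes).

```python
# Hypothetical handler for one Custom LLM websocket message: given a request
# carrying the live transcript, return the agent's reply. Field names are
# illustrative; a real server would plug this into its websocket loop.
def build_response(request: dict) -> dict:
    transcript = request.get("transcript", [])
    # Find the most recent user turn, if any.
    last_user_turn = next(
        (turn["content"] for turn in reversed(transcript) if turn["role"] == "user"),
        "",
    )
    # A real server would call your own LLM here; we return a canned reply.
    content = "Could you tell me more?" if last_user_turn else "Hello! How can I help?"
    return {
        "response_id": request.get("response_id", 0),
        "content": content,
        "content_complete": True,
    }
```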

🌟 New Features:

  • LIVE Transcript Feature: Leverage real-time transcription for more informed LLM responses.
  • Open Sourced Repositories: Gain more customizability with our open-sourced LLM voice agent implementation and Twilio and phone call features.
  • Reduced Pricing: Enjoy our service at a 15% discount, now priced at $0.17 per minute.

We understand that this transition may require adjustments in your current setup, and we are here to support you through this change. Please feel free to reach out to us for any assistance or further information regarding the new API.

Book a meeting with founders

Thank you for your understanding and continued support.

Best regards,

Retell AI Team 💛

Introduce Our Latest Feature:

Function Calling

Dear Retell Community,

We're excited to share a groundbreaking update that's set to transform how you interact with our platform - the launch of our newest feature: Function Calling.

Have you ever wished your agents could access real-time data, like the current time or weather updates? Or effortlessly connect with your server to schedule appointments or set up meetings? With Function Calling, these capabilities are now at your fingertips.

Seamless Integration in Two Simple Steps:
Implementing Function Calling is straightforward and user-friendly:

1. Endpoint Setup: Create an endpoint to handle function calls. This API is your tool to perform a multitude of actions - from retrieving up-to-the-minute data to seamless server communication.

2. Function Definition in Your Agent: Integrate this Post API into your agent setup, unlocking a whole new world of interactivity and responsiveness. Your agents are now more dynamic, intelligent, and adaptable than ever.
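The endpoint from step 1 boils down to a dispatcher: it receives a function name and arguments, runs the matching action, and returns the result for the agent to speak. The function names and payload shape below are hypothetical; expose the handler as a POST route in whatever web framework you use.

```python
from datetime import datetime, timezone

# Hypothetical function-call dispatcher for the endpoint in step 1. The
# function names and response shape are illustrative, not the exact schema.
def handle_function_call(name: str, arguments: dict) -> dict:
    if name == "get_current_time":
        # Real-time data example: report the current UTC time.
        return {"result": datetime.now(timezone.utc).isoformat()}
    if name == "book_meeting":
        # A real handler would talk to your calendar backend here.
        return {"result": f"Meeting booked for {arguments.get('time', 'TBD')}"}
    return {"error": f"unknown function: {name}"}
```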

To ensure you benefit from this feature and other improvements, please update your Python SDK to version 1.0.0 or your node.js SDK to version 2.2.4.

For a detailed understanding of how Function Calling can revolutionize your experience with Retell, we encourage you to delve into our comprehensive documentation.

See Documentation

Thank you for your continued support. We are eager to hear your feedback and experiences with this new feature.

Warm regards,
Team Retell AI

Retell AI (YC W24) – API for Building Voice AI Agents That Interact Like Humans

tl;dr: You can now use our API to build human-like voice AI agents with a few lines of code. We've achieved response times averaging 800ms, reaching the level of human interactions.

Problem — Building human-like voice AI agents is hard

  • Human-like interaction is hard: Building a voice AI agent that can talk seems easy (stitching together speech-to-text, LLM, and text-to-speech), but making it interact like humans is hard. In human conversation, one has to respond fast and handle all kinds of situations (handling interruptions, knowing when it's your turn to speak, etc).
  • Long development time: Developers building voice products spend hundreds of hours focusing on the voice conversation experience alone.
  • Quality often falls short: Current voice AI products often have long response latency (>3s), do not know when it is appropriate to speak (they might interrupt too often or respond too slowly), and do not handle user interruptions well (AI and human talking over each other), and the list goes on.

Solution — API for building human-like voice AI agents with minimal code

  • Human-Like Conversations: With our API, you can build a superior voice conversation experience with just a few lines of code. We've achieved response times averaging 800ms, reaching the level of human interactions. We also handle interruptions and provide smart turn-taking determination to achieve seamless conversational exchanges (see our demo).
Retell AI Demo
  • Simple Agent Creation: You can quickly build versatile agents using just a prompt. For use cases where you want full control over how to respond to users, while still enjoying our industry-leading low latency and natural conversation, you may use our Custom LLM feature.
  • Web and Phone Call: We support both phone calls (inbound and outbound phone calls) and web calls. Integration into your existing products requires just a few lines of code.

Who is this product meant for?

Anyone building a voice experience can benefit from using Retell AI. Whether you're developing AI call agents, AI coaching apps, or AI lifelike companions, we are your trusted ally in creating the best conversational experience for your users.

How to get started?

Have questions?

  • You can email us at founders@re-tell.ai
  • We'll understand your use cases, guide you through our product, and help you quickly implement your voice AI use case.