Stay up to date with Retell AI
Retell AI Changelogs
Latest Features (Call Analysis, Backchanneling) And Python Custom LLM Update
May 2, 2024

Dear Retell Community,

We're thrilled to share some exciting new features and updates on the platform. Here’s what’s new 👇. Feel free to reply to this email directly to share your feedback.

1. Enhanced Call Monitoring


Call Analysis: We've introduced metrics like Call Completion Status, Task Completion Status, User Sentiment, Average End-to-End Latency, and Network Latency for comprehensive monitoring. You can access these directly on the dashboard or through the API.

Disconnection Reason Tracking: Get insights into call issues with the addition of "Disconnection Reason" in the dashboard and "get-call" object. For more details, refer to our Error Code Table.

Function Call Tracking: Transcripts now include function call results, giving you a clear view of when functions were triggered and what they returned. Available in the dashboard and the get-call API. Custom LLM users can send us the tool call invocation and tool call result events so that function calling results are woven into the transcript and the dashboard shows when your functions are triggered.
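As an illustration, here is a minimal sketch of pulling the new monitoring fields out of a get-call response. The field names used below (call_analysis, disconnection_reason, and the keys inside them) are assumptions for illustration; check the API reference for the exact schema.

```python
# Illustrative sketch: summarizing the new monitoring fields from a
# get-call response dict. Field names are assumptions, not the exact schema.

def summarize_call(call: dict) -> dict:
    """Collect the monitoring metrics from a get-call response."""
    analysis = call.get("call_analysis", {})
    return {
        "completion": analysis.get("call_completion_status"),
        "task": analysis.get("task_completion_status"),
        "sentiment": analysis.get("user_sentiment"),
        "e2e_latency_ms": analysis.get("average_end_to_end_latency"),
        "disconnection_reason": call.get("disconnection_reason"),
    }

sample = {
    "call_analysis": {
        "call_completion_status": "complete",
        "user_sentiment": "positive",
    },
    "disconnection_reason": "user_hangup",
}
print(summarize_call(sample)["sentiment"])  # positive
```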

2. New Features

Reminder Settings: You can now configure reminder settings to define the duration of silence before an agent follows up with a response. Learn more.

Backchanneling: Backchanneling is the agent's ability to make small acknowledgments like “uh-huh” and “I see” during user speech, improving the engagement of the call. You can set whether it is enabled, how often it triggers, and which words are used. Learn more.
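A sketch of what an agent-update payload for these settings might look like. The parameter names (enable_backchannel, backchannel_frequency, backchannel_words) follow the dashboard labels but are assumptions here; confirm them against the API reference.

```python
# Hypothetical update-agent payload enabling backchanneling.
# Parameter names are assumptions based on the dashboard labels.

agent_update = {
    "enable_backchannel": True,
    "backchannel_frequency": 0.8,  # how often it triggers (0 to 1)
    "backchannel_words": ["uh-huh", "I see", "right"],
}

def validate_backchannel(cfg: dict) -> bool:
    """Basic sanity checks before sending the payload."""
    return (
        isinstance(cfg.get("enable_backchannel"), bool)
        and 0.0 <= cfg.get("backchannel_frequency", 0) <= 1.0
        and all(isinstance(w, str) for w in cfg.get("backchannel_words", []))
    )

print(validate_backchannel(agent_update))  # True
```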

“Read Numbers Slowly”: Optimize the reading of numbers (or any other content) by ensuring it is read slowly and clearly. How to Read Slowly.

Metadata Event for Custom LLM: Pass data from your backend to the frontend during a call with the new metadata event. See API reference.

3. Major Upgrade to Python Custom LLM (Important)

Improved async OpenAI performance for better latency and stability. We highly recommend that existing Python Custom LLM users upgrade to the latest version.

See Doc

4. Webhook Security

Improved webhook security with the signature "verify" function in the new SDK. Find a code example in the custom LLM demo repositories and in the documentation.

Additionally, for users who opt out of storage, the webhook includes a temporary recording; please note that it expires after 10 minutes.

See Doc
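The new SDK ships a ready-made signature "verify" helper, which you should prefer. For a sense of what such verification does, here is an illustrative HMAC-SHA256 sketch; the signing scheme and key usage shown are assumptions, not the exact SDK implementation.

```python
# Illustrative webhook-signature verification using HMAC-SHA256.
# This is a generic sketch; use the SDK's built-in verify function in practice.
import hashlib
import hmac

def verify_signature(payload: bytes, api_key: str, signature: str) -> bool:
    expected = hmac.new(api_key.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

body = b'{"event": "call_ended", "call_id": "abc"}'
sig = hmac.new(b"test-key", body, hashlib.sha256).hexdigest()
print(verify_signature(body, "test-key", sig))  # True
```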

This week’s video

We’ve got a shout-out in the latest episode of Y Combinator’s Lightcone podcast.

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

The Retell AI team 💛

New LLM Model Options, Modular Pricing, And More
April 19, 2024

Dear Retell Community,

We're thrilled to share some exciting updates to our platform! Here’s what’s new 👇. Feel free to reply to this email directly to share your feedback on these updates!

1. Retell LLM Updates

LLM Model Options: Choose between GPT-3.5-turbo and GPT-4-turbo, with additional models coming soon. Available through both our API and dashboard.

Interruption Sensitivity Slider: Adjust how easily users can interrupt the agent. This feature is now accessible in our API and dashboard.

2. Pricing Updates

We've updated our pricing structure to be clearer and more modular.

Conversation voice engine API

- With OpenAI / Deepgram voices ($0.08/min)

- With Elevenlabs voices ($0.10/min)

LLM Agent

- Retell LLM - GPT-3.5 ($0.02/min)

- Retell LLM - GPT-4 ($0.20/min)

- Custom LLM (No charge)

Telephony

- Retell Twilio ($0.01/min)

- Custom Twilio (No charge)

See Detail
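As a worked example of the modular pricing, the per-minute cost is simply the sum of the components you use, with the rates copied from the list above:

```python
# Worked example: total per-minute cost under the modular pricing above.
# Rates are taken from the price list in this announcement.

VOICE = {"openai_or_deepgram": 0.08, "elevenlabs": 0.10}
LLM = {"retell_gpt_3_5": 0.02, "retell_gpt_4": 0.20, "custom": 0.0}
TELEPHONY = {"retell_twilio": 0.01, "custom_twilio": 0.0}

def per_minute_cost(voice: str, llm: str, telephony: str) -> float:
    return round(VOICE[voice] + LLM[llm] + TELEPHONY[telephony], 4)

# ElevenLabs voice + Retell GPT-3.5 + Retell Twilio:
print(per_minute_cost("elevenlabs", "retell_gpt_3_5", "retell_twilio"))  # 0.13
```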

3. Monitoring & Debugging Tools

Dashboard Updates: The history tab now includes a public log, essential for debugging and understanding your agent's current state, tool interactions, and more.

Enhanced API Responses: Our get-call API now provides latency tracking for LLM and websocket roundtrip times.

See Doc

4. Security Features

Ensure the authenticity of requests with our new IP verification feature. Authorized Retell server IPs are:
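A minimal sketch of what IP verification looks like on your side: accept a request only when its source IP is on the published allowlist. The addresses below are placeholders; substitute the authorized IPs from our documentation.

```python
# Minimal IP-allowlist check for incoming requests.
# The addresses here are placeholders, not real Retell server IPs.
import ipaddress

ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # placeholder values

def is_authorized(source_ip: str) -> bool:
    try:
        ip = str(ipaddress.ip_address(source_ip))  # validates and normalizes
    except ValueError:
        return False
    return ip in ALLOWED_IPS

print(is_authorized("203.0.113.10"))  # True
print(is_authorized("198.51.100.7"))  # False
```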

5. Other Improvements

Enhancements for Custom LLM Users

  • You can now turn off interruption for each response (no_interruption_allowed in the doc)
  • The agent can now interrupt / speak when no response is needed (doc)
  • A new config event controls whether reconnection is enabled and whether call details are sent at the beginning of the call
  • New ping-pong events and a reconnection mechanism in the LLM websocket: the websocket reconnects if the connection is lost, and server roundtrip latency is tracked (available in the get-call API)
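To make the new events concrete, here is a sketch of how a custom LLM websocket server might answer them. The event names and shapes below are assumptions based on the notes above; confirm them against the custom LLM protocol documentation.

```python
# Sketch of a custom LLM server answering the new config and ping-pong
# events. Event names and payload shapes are assumptions for illustration.

def handle_event(event: dict):
    """Return the reply event for an incoming websocket event, if any."""
    kind = event.get("interaction_type")
    if kind == "ping_pong":
        # Echo the timestamp so server roundtrip latency can be measured.
        return {"response_type": "ping_pong", "timestamp": event["timestamp"]}
    if kind == "call_details":
        return None  # informational only; no reply needed
    return None

# Hypothetical config event opting in to reconnection and call details:
config_event = {"response_type": "config",
                "config": {"auto_reconnect": True, "call_details": True}}

reply = handle_event({"interaction_type": "ping_pong", "timestamp": 1700000000})
print(reply["response_type"])  # ping_pong
```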

Web Call Frontend Upgrades

  • The frontend SDK now emits many more events that are helpful for animation
    • "audio": real-time audio played in the system
    • "agentStartTalking", "agentStopTalking": track whether the agent is speaking (not applicable when ambient sound is used)
  • "enable_audio_alignment" option to get audio buffer and text alignment in the frontend (not yet supported in the frontend SDK)

SDK improvement: Our updated SDK maintains backward compatibility, ensuring smooth transitions and consistent performance.

🌟 This week’s Demo: Introducing Retell LLM

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

Retell AI team 💛

Introducing Retell LLM Beta
April 8, 2024

Dear Retell Community,

We've been closely listening to your feedback and identifying the most significant friction points in building powerful voice AI agents that interact and take actions like humans. As a result, we're thrilled to announce the beta launch of Retell LLM, a unique framework for LLM agents that's optimized for real-time voice conversations and action executions.

1️⃣ Retell LLM (Beta)

Low Latency, Conversational LLM with Reliable Function Calls

Experience lightning-fast voice AI with an average end-to-end latency of just 800ms with our LLM, mirroring the performance featured in the South Bay Dental Office demo on our website. Our LLM has been fine-tuned for conciseness and a conversational tone, making it perfect for voice-based interactions. It is also engineered to reliably initiate function calls.

Single-Prompt vs. Stateful Multi-Prompt Agents

We provide two options for creating an agent. The Single-Prompt Agent is ideal for straightforward tasks that require a brief input. For scenarios where the agent's prompt is lengthy and the tasks are too complex for a single input to be effective, the Stateful Multi-Prompt Agent is recommended. This approach divides the prompt into various states, each with its own prompt, linked by conditional edges.

User-Friendly UI for Agent Creation and API for Programmatic Agent Creation

Our dashboard allows you to quickly create an LLM agent using prompts and the drag-and-drop functionality for stateful multi-prompt agents. You can seamlessly build, test, and deploy agents into production using our dashboard or achieve the same programmatically via our API.

Pre-defined Tool Calling Abilities such as Call Transfer, Ending Calls, and Appointment Booking

Leverage our pre-defined tool calling capabilities, including ending calls, transferring calls, checking calendar availability (via Cal.com), and booking appointments (via Cal.com), to easily build real-world actions. We also offer support for custom tools for more tailored actions.

Maintaining Continuous Interaction During Actions That Take Longer

To address delays in actions that require more time to complete, you can activate this feature. It enables the agent to maintain a conversation with the user throughout the duration of the function call. This ensures the voice AI agent keeps the interaction smooth and avoids awkward silences, even when function calls take longer.

2️⃣ SDK Update v3.4.0 Announcement

Please note, the previous SDK version will be phased out in 60 days. We encourage you to transition to the latest SDK version.

3️⃣ Status Page

Stay informed with system status on our new status page.

Status Page

4️⃣ Public Log in get-call API

To streamline your troubleshooting process, we've introduced a public log within our get-call API. This new feature aids in quicker issue resolution and smoother integration, detailed further at the link below.

See doc

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

Latest Updates From Retell: Lower Prices, Enhanced Security, And Multilingual Support
March 12, 2024

Dear Retell Community,

We're excited to announce some major updates to our services. Here's the latest:

1️⃣ More Affordable Premium Voices

Thanks to recent cost reductions in our premium voice service, we're excited to pass these savings on to our customers. We're pleased to announce a new, lower price for our premium voice service—now just $0.12 per minute, down from $0.17. Enterprise pricing will also see similar reductions (please contact us at founders@retellai.com for more information).

Please note: The adjusted pricing will take effect from March 1st, and billing will be charged at the end of this month.

2️⃣ Customizable Dashboard Settings

Gain more control over your voice output with new dashboard settings.

  • Ambient Sound
  • Responsiveness
  • Voice Speed
  • Voice Temperature
  • Backchanneling

Tailor your voice interactions to suit your precise needs and preferences for a truly personalized experience.

3️⃣ Secure Webhooks for Enhanced Security

Boost your communication security with our new webhook signatures. This feature enables you to confirm that any received webhook genuinely comes from Retell, providing an additional layer of protection.

See doc

4️⃣ Launch of Multilingual Support

We're excited to announce the launch of our multilingual version, now supporting German, Spanish, Hindi, Portuguese, and Japanese. Access and set your preferred language through our dashboard.

While this feature is currently available via API, we're working on extending support to our SDKs shortly.

See doc

5️⃣ Opt-Out of Transcripts and Recordings Storage

Based on user feedback, we've introduced an opt-out option for storing transcripts and recordings. This feature, available in our API and the Playground, gives you more control over your data and privacy.

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

Enhancements and New Features Release
March 4, 2024

Dear Retell Community,

We are excited to share several updates and new features with you. Our goal is to continually improve our offerings to better meet your needs. Here's what's new:

1️⃣ Enterprise Plan and Discounts Now Available

We're excited to announce the availability of our discounted enterprise tiered pricing. For more information on that, please contact our team at founders@retellai.com.

2️⃣ Enhanced Conversation with Lower Latency

We've launched improvements to further reduce latency (by approximately 30%). Try our demo on the website again and experience the magical speed.

3️⃣ New Agent Control Parameters

We've introduced additional agent control parameters for greater customization and control, including:

  • Responsiveness: Adjust how responsive your agent is to utterances.
  • Voice Speed: Control the speech rate of your agent to be faster or slower.
  • Boost Keywords: Prioritize specific keywords for speech recognition.

These parameters have been added to our API. Documentation is being updated, and we are also working on incorporating these features into the SDKs. For more details, visit Create Agent API Reference.

4️⃣ New Call Control Parameter: end_call_after_silence_ms

This parameter enables the automatic termination of calls following a specified duration of user inactivity. It's designed to streamline operations and improve efficiency.
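Putting the new parameters together, a create-agent payload might look like the sketch below. The values are illustrative and the exact parameter names should be checked against the Create Agent API Reference.

```python
# Hypothetical create/update-agent payload combining the new control
# parameters from this announcement. Values are illustrative only.

agent_payload = {
    "responsiveness": 0.8,                # how eagerly the agent replies
    "voice_speed": 1.0,                   # 1.0 = default speech rate
    "boost_keywords": ["Retell", "websocket"],
    "end_call_after_silence_ms": 30_000,  # hang up after 30s of user silence
}

seconds = agent_payload["end_call_after_silence_ms"] // 1000
print("ends call after", seconds, "s of user silence")  # 30
```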

5️⃣ Word-Level Timestamps in Transcripts

To enhance the utility of our transcripts, we are now including word-level timestamps. This feature is pending documentation updates, so stay tuned for more information at Audio WebSocket API Reference.

6️⃣ [Auto-reconnection] Web Call Updates - Client JS SDK 1.3.0

For users utilizing web calls, our latest client JavaScript SDK (version 1.3.0) now supports auto-reconnection of the socket in case of network disconnections. This ensures a more reliable and uninterrupted service.

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

Important Update: New TTS Options, Enhanced Features & More
February 20, 2024

Dear Retell Community,

We're thrilled to announce a series of updates and improvements to our platform, designed to enhance your experience and offer you more versatility and control. Here's what's new:

1️⃣ Domain changed

Please note that our domain has changed. Make sure to update your bookmarks and records to stay connected with us seamlessly.

2️⃣ New TTS provider: Deepgram

We've introduced Deepgram as a new TTS provider. Explore it on the dashboard and discover your favorite voice! The price is still $0.10/minute ($6/hour).

Also, we've added more voice choices from ElevenLabs, ensuring more stable and diverse voice options for your projects.

3️⃣ New Control Parameters: Voice Temperature

Gain control over the stability and variability of your voice output, allowing for more tailored and dynamic audio experiences.

See doc

4️⃣ New Agent Ability: Backchanneling

Enhance interactions with the ability for the agent to backchannel, using phrases like "yeah" and "uh-huh" to express interest and engagement during conversations.

See doc

5️⃣ Python Backend Demo Now in FastAPI

By popular demand, our Python backend demo has transitioned to FastAPI. It includes Twilio integration and a simple function calling example, providing a more robust and user-friendly experience.

See demo

6️⃣ New Version of Web Frontend SDK

Our updated web frontend SDK makes integration easier and improves performance, allowing you to access live transcripts directly on your web frontend.

See SDK

7️⃣ Improved Performance in Noisy Environments

Our product now offers improved performance even in noisy settings, ensuring your voice interactions remain clear and uninterrupted.

We're excited for you to experience these updates and hope they significantly enhance your projects and workflows.

As always, we value your feedback and are here to support you in leveraging these new features to their fullest potential.

Best regards,

Retell AI team 💛

More Affordable Pricing Tier And SDK Updates
February 6, 2024

1️⃣ New pricing tier released

Dear Retell Community,

We are thrilled to announce a new and significantly more affordable pricing tier featuring OpenAI's TTS. Effective immediately, you can take advantage of our state-of-the-art voice conversation API with OpenAI TTS at the new rate of $0.10 per minute.

This adjustment reflects our commitment to providing you with exceptional value and enhancing your voice interaction experience.

We believe this new pricing will make our product more accessible and allow you to leverage our technology for a wider range of applications.

2️⃣ SDK updated

We've updated our SDK, so upgrade your Retell SDK to stay in the loop:
- https://www.npmjs.com/package/retell-sdk
- https://pypi.org/project/retell-sdk/

We've added a frontend JS SDK that abstracts away the details of capturing the mic and setting up playback:
- https://www.npmjs.com/package/retell-client-js-sdk

We've updated our documentation at https://docs.re-tell.ai/guide/intro to help you integrate.

See SDK documentation

3️⃣ Open-source demo repo

We open-sourced the LLM and Twilio code that powers our dashboard as demos:

Node.js demo:


Python demo:

GitHub - adam-team/python-backend-demo

We also open-sourced web frontend demos:

React demo using SDK :

GitHub - adam-team/retell-frontend-reactjs-demo

React demo using native JS:

GitHub - adam-team/retell-frontend-reactjs-native-demo

See open-source demo repo doc

Thank you for your understanding and continued support.

Best regards,

Retell AI Team 💛

Important Update: API Changes & New Features
January 31, 2024

Dear Retell Community,

In our quest to deliver a human-level conversation experience, we've made a strategic decision to refocus our efforts on voice conversation quality, while scaling back on certain other nice-to-haves. The current API will be phased out after this Wednesday at 12:00 PM. We warmly invite you to adopt our new API, designed to continue providing you with a magical AI conversation experience long-term.

🌟 Key Changes:

  • LLM Open Sourced: Our LLM will no longer be included in the API. Instead, use the "Custom LLM" feature to integrate your own LLM into the conversation pipeline. Our LLM will remain accessible on the dashboard for demo purposes.
  • Twilio and Phone Call Features Open Sourced: These features are removed from the API but remain accessible on the dashboard for demo purposes.
  • Custom LLM Integration: Our API now exclusively supports the integration of your own LLM via a websocket, requiring a specified websocket URL for agent creation.
  • SDK Updates: We're updating our Node.js SDK to align with these changes, with the Python SDK update to follow soon.

🌟 New Features:

  • LIVE Transcript Feature: Leverage real-time transcription for more informed LLM responses.
  • Open Sourced Repositories: Gain more customizability with our open-sourced LLM voice agent implementation and Twilio and phone call features.
  • Reduced Pricing: Enjoy our service at a 15% discount, now priced at $0.17 per minute.

We understand that this transition may require adjustments in your current setup, and we are here to support you through this change. Please feel free to reach out to us for any assistance or further information regarding the new API.

Book a meeting with founders

Thank you for your understanding and continued support.

Best regards,

Retell AI Team 💛

Introducing Our Latest Feature: Function Calling
January 23, 2024

Dear Retell Community,

We're excited to share a groundbreaking update that's set to transform how you interact with our platform: the launch of our newest feature, Function Calling.

Have you ever wished your agents could access real-time data, like the current time or weather updates? Or effortlessly connect with your server to schedule appointments or set up meetings? With Function Calling, these capabilities are now at your fingertips.

Seamless Integration in Two Simple Steps:
Implementing Function Calling is straightforward and user-friendly:

1. Endpoint Setup: Create an endpoint to handle function calls. This API is your tool to perform a multitude of actions - from retrieving up-to-the-minute data to seamless server communication.

2. Function Definition in Your Agent: Integrate this Post API into your agent setup, unlocking a whole new world of interactivity and responsiveness. Your agents are now more dynamic, intelligent, and adaptable than ever.
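The two steps above can be sketched as a small dispatcher: the endpoint receives a function-call request and routes it to your implementation. The request/response shapes and function names below are illustrative, not the exact Retell schema; see the Function Calling documentation for the real contract.

```python
# Illustrative endpoint-side handler for a function call. The request and
# response shapes here are assumptions, not the exact Retell schema.
import datetime

def handle_function_call(request: dict) -> dict:
    """Dispatch a function-call request to the matching implementation."""
    name = request.get("name")
    args = request.get("arguments", {})
    if name == "get_current_time":
        now = datetime.datetime.now(datetime.timezone.utc)
        return {"result": now.strftime("%H:%M UTC")}
    if name == "book_appointment":
        # In a real server this would call your scheduling backend.
        return {"result": f"Booked {args.get('slot', 'a slot')}"}
    return {"error": f"unknown function: {name}"}

print(handle_function_call({"name": "book_appointment",
                            "arguments": {"slot": "Tuesday 3pm"}})["result"])
```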

To ensure you benefit from this feature and other improvements, please update your Python SDK to version 1.0.0 or your Node.js SDK to version 2.2.4.

For a detailed understanding of how Function Calling can revolutionize your experience with Retell, we encourage you to delve into our comprehensive documentation.

See Documentation

Thank you for your continued support. We are eager to hear your feedback and experiences with this new feature.

Warm regards,
Team Retell AI

Conversational Voice API - It's Finally Here.
January 18, 2024

Retell AI (YC W24) – API for Building Voice AI Agents That Interact Like Humans

tl;dr: You can now use our API to build human-like voice AI agents with a few lines of code. We've achieved response times averaging 800ms, reaching the level of human interactions.

Problem — Building human-like voice AI agents is hard

  • Human-like interaction is hard: Building a voice AI agent that can talk seems easy (stitching together speech-to-text, LLM, and text-to-speech), but making it interact like humans is hard. In human conversation, one has to respond fast and handle all kinds of situations (handling interruptions, knowing when it's your turn to speak, etc).
  • Long development time: Developers building voice products spend hundreds of hours focusing on the voice conversation experience alone.
  • Quality often falls short: Current voice AI products often have long response latency (>3s), do not know when to speak appropriately (they might interrupt too much or respond too slowly), and do not handle user interruptions well (AI and human talking over each other), and the list goes on.

Solution — API for building human-like voice AI agents with minimal code

  • Human-Like Conversations: With our API, you can build a superior voice conversation experience with just a few lines of code. We've achieved response times averaging 800ms, reaching the level of human interactions. We also handle interruptions and provide smart turn-taking determination to achieve seamless conversational exchanges (see our demo).
Retell AI Demo
  • Simple Agent Creation: You can quickly build versatile agents using just a prompt. For use cases where you want full control over how to respond to users, while still enjoying our industry-leading low latency and natural conversation, you may use our Custom LLM feature.
  • Web and Phone Call: We support both phone calls (inbound and outbound phone calls) and web calls. Integration into your existing products requires just a few lines of code.

Who is this product meant for?

Anyone building a voice experience can benefit from using Retell AI. Whether you're developing AI call agents, AI coaching apps, or AI lifelike companions, we are your trusted ally in creating the best conversational experience for your users.

How to get started?

Have questions?

  • You can email us at founders@re-tell.ai
  • We'll understand your use cases, guide you through our product, and help you quickly implement your voice AI use case.