New LLM Model Options, Modular Pricing, And More

Dear Retell Community,

We're thrilled to share some exciting updates to our platform! Here's what's new 👇. Feel free to reply to this email directly to share your feedback on these updates!

1. Retell LLM Updates

LLM Model Options: Choose between GPT-3.5-turbo and GPT-4-turbo, with additional models coming soon. Available through both our API and dashboard.

Interruption Sensitivity Slider: Adjust how easily users can interrupt the agent. This feature is now accessible in our API and dashboard.
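Both options can be set when configuring an agent through the API. A minimal sketch of what such a configuration payload might look like; the field names `model` and `interruption_sensitivity` are assumptions for illustration, not the confirmed API schema, so check the API docs for the exact shape:

```python
import json

# Hypothetical agent configuration payload; key names are assumptions,
# not the documented Retell API schema.
agent_config = {
    "model": "gpt-4-turbo",           # or "gpt-3.5-turbo"
    "interruption_sensitivity": 0.7,  # assumed range: 0.0 (hard to interrupt) to 1.0 (easy)
}

# Serialize for an HTTP request body.
payload = json.dumps(agent_config)
```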

2. Pricing Updates

We've updated our pricing structure to be clearer and more modular.

Conversation voice engine API

- With OpenAI / Deepgram voices ($0.08/min)

- With ElevenLabs voices ($0.10/min)

LLM Agent

- Retell LLM - GPT-3.5 ($0.02/min)

- Retell LLM - GPT-4 ($0.20/min)

- Custom LLM (No charge)


Telephony

- Retell Twilio ($0.01/min)

- Custom Twilio (No charge)

See details
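Because the pricing is modular, your per-minute cost is the sum of one rate from each category. A small sketch of that arithmetic, using the rates listed above:

```python
# Per-minute rates (USD) taken from the pricing list above.
VOICE_ENGINE = {"openai_deepgram": 0.08, "elevenlabs": 0.10}
LLM_AGENT = {"retell_gpt35": 0.02, "retell_gpt4": 0.20, "custom": 0.0}
TELEPHONY = {"retell_twilio": 0.01, "custom_twilio": 0.0}

def cost_per_minute(voice: str, llm: str, telephony: str) -> float:
    """Total per-minute cost: one component from each pricing category."""
    return VOICE_ENGINE[voice] + LLM_AGENT[llm] + TELEPHONY[telephony]

# Example: ElevenLabs voice + Retell GPT-3.5 + Retell Twilio
print(round(cost_per_minute("elevenlabs", "retell_gpt35", "retell_twilio"), 2))  # 0.13
```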

3. Monitoring & Debugging Tools

Dashboard Updates: The history tab now includes a public log, essential for debugging and understanding your agent's current state, tool interactions, and more.

Enhanced API Responses: Our get-call API now provides latency tracking for LLM response and WebSocket round-trip times.
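As an illustration, latency data in a get-call response could be inspected like this. The response fragment and its key names below are assumptions for the sketch, not the documented schema; see the API docs for the real field names:

```python
# Illustrative get-call response fragment; keys are assumed, not the
# documented schema.
call = {
    "call_id": "abc123",
    "latency": {
        "llm_ms": [420, 510, 390],              # per-turn LLM response times
        "websocket_roundtrip_ms": [55, 62, 48],  # per-message round trips
    },
}

llm_times = call["latency"]["llm_ms"]
avg_llm_latency = sum(llm_times) / len(llm_times)
worst_roundtrip = max(call["latency"]["websocket_roundtrip_ms"])
```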

See docs

4. Security Features

Ensure the authenticity of requests with our new IP verification feature. Authorized Retell server IPs are:
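One way to use the allowlist is a simple check on each inbound webhook's source IP. A minimal sketch, using placeholder addresses since the real authorized IPs come from the announcement above:

```python
import ipaddress

# Placeholder addresses (TEST-NET-3 range) - substitute the authorized
# Retell server IPs published above.
AUTHORIZED_IPS = {"203.0.113.10", "203.0.113.11"}

def is_authorized(source_ip: str) -> bool:
    """Accept a request only if it originates from an allowed IP."""
    try:
        # Validate and normalize the address before comparing.
        normalized = str(ipaddress.ip_address(source_ip))
    except ValueError:
        return False
    return normalized in AUTHORIZED_IPS
```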

5. Other Improvements

Enhancements for Custom LLM Users

Web Call Frontend Upgrades

SDK improvement: Our updated SDK maintains backward compatibility, ensuring smooth transitions and consistent performance.

🌟 This week's Demo: Introducing Retell LLM

Thank you for being part of our community. We look forward to your feedback on these updates and are excited to see how you leverage these new features!

Retell AI team 💛