The short answer: Yes, the TCPA covers your AI voice agent. Every outbound AI call to a U.S. cell phone needs prior express consent before you dial. Penalties run $500 to $1,500 per call with no aggregate cap. Class-action exposure is the bigger risk, not the FCC. The 2025 and 2026 settlement docket proves it.
This playbook is written for the people building, buying, and operating AI outbound programs in 2026. Heads of growth, RevOps leads, contact center directors, voice AI product managers, and the in-house counsel sitting next to them. It assumes you already know what an AI voice agent does. What it does not assume is that you have read the four FCC orders, two NPRMs, one Eleventh Circuit reversal, and one Fifth Circuit decision that have rewritten this area of law since the start of 2024.
The legal questions are settled in some places, unsettled in others, and changing fast in a third group. The compliance posture you set in mid-2025 is probably already wrong somewhere in your stack. This document tells you where.
Yes. Settled by the FCC on February 8, 2024. AI-generated voices are "artificial or prerecorded voice" under the TCPA. The legal status of the voice depends on how it is produced, not how human it sounds.
The FCC's Declaratory Ruling was unambiguous: any technology that generates a human-sounding voice, including real-time conversational AI, voice cloning, and large-language-model-driven agents, falls within the existing restrictions of 47 U.S.C. § 227(b). The FCC explicitly noted that the statute "does not allow for any carve out of technologies that purport to provide the equivalent of a live agent." That language was directed at exactly the kind of operators who hoped a sufficiently lifelike voice would escape the rules.
The August 2024 Notice of Proposed Rulemaking goes further. It defines an "AI-generated call" formally and proposes mandatory in-call AI disclosure plus consent language that specifically references AI use. As of April 2026 the rule is not final, and the current FCC under Chairman Carr has signaled a lighter regulatory posture. State attorneys general and class-action plaintiffs are not waiting for the rule to finalize. They are using the existing TCPA framework, and they are winning.
Statutory damages start at $500 per call. Class actions in 2025 and 2026 settled in the $5M to $20M range. The Mortgage One case filed in February 2026 shows how vendor-chain liability reaches operators who outsource AI calling. The math punishes scale.
The current 2026 docket tells the story better than any abstract risk analysis. Gen Digital, the parent of Norton and LifeLock, agreed to a $9.95 million settlement in January 2026 for prerecorded voice calls placed to people who were never customers. Hy Cite Enterprises (Royal Prestige) settled for $4.75 million in early 2026 over a similar fact pattern, with class members eligible for $600 to $1,000 each. AbleTo, partnered with Aetna, received preliminary approval on a class settlement in February 2026 for voicemails placed without consent.
The case the industry should be watching closely is Lamb v. Mortgage One Funding, filed February 24, 2026 in the Eastern District of Michigan. Mortgage One used an AI voice agent to cold-call consumers about cash-out refinancing. The proposed class definition reaches every consumer who received an artificial-voice call from Mortgage One "or from any of the company's vendors, lead generators, or agents." That construction is the part that matters. The plaintiff is making explicit what the FCC has long held implicitly: the entity on whose behalf the calls are made bears liability, regardless of which downstream vendor pressed dial. If you are buying AI calling from a third party and assuming the third party owns the compliance risk, Lamb is the case that proves you wrong.
Older settlements set the upper bound. The QuoteWizard $19 million settlement became the reference point for what happens when a company cannot trace consent through its vendor chain. TCPA class-action filings have surged, with one industry tracker reporting filings up 95% year over year and aggregate verdicts exceeding $925 million across the docket.
Two tiers, with one major regional carve-out. Marketing AI calls require Prior Express Written Consent (PEWC) in 47 states. Informational AI calls require Prior Express Consent (PEC), which can be oral. In Texas, Louisiana, and Mississippi, oral consent now satisfies the statute for marketing too, thanks to a February 2026 Fifth Circuit ruling.
PEWC is the heightened standard. It applies to any AI marketing call placed to a wireless number nationwide and to any prerecorded call to a residential landline. The consumer must affirmatively sign (electronic signatures under the E-SIGN Act qualify) a disclosure that names your specific business, identifies the phone number being authorized, makes clear that consent is not a condition of any purchase, and increasingly references the use of AI-generated voices.
The "marketing" definition is wider than most teams realize. Any call whose purpose includes encouraging the purchase, rental, or investment in goods or services qualifies, even if no transaction occurs on the call. An "account check-in" that pivots to an upsell is marketing. A "free benefits review" call that ends with a quote is marketing. The FCC and the courts assess purpose, not the opening sentence.
PEC is the lighter standard, available for informational and transactional AI calls: appointment reminders, delivery notifications, account alerts, fraud warnings, prescription pickups, shipping updates. No signed writing is required. Voluntarily providing a phone number in a transaction-related context, such as a patient handing over a cell number on an intake form, satisfies PEC for related communications. The same number does not satisfy PEWC for unrelated marketing later.
This distinction is where most enterprise programs unintentionally cross the line. A clinic that captures cell numbers for appointment reminders has PEC. The same clinic that uses those numbers to promote a new aesthetic-medicine service line has stepped outside PEC and into PEWC territory without separate authorization.
The Fifth Circuit decided Bradford v. Sovereign Pest Control of Texas, Inc. on February 25, 2026, holding that the TCPA's text only requires "prior express consent," not "prior express written consent," for any artificial-voice call. The panel applied the Supreme Court's Loper Bright framework and concluded that the FCC overstepped its statutory authority in 2012 when it created the heightened written-consent requirement for telemarketing.
Practically, this means an oral "yes" given by a consumer who voluntarily provided their phone number now defeats a TCPA claim in Texas, Louisiana, and Mississippi. It does not change anything in the other 47 states, where federal courts continue to apply the FCC's written-consent rule. State law is not displaced either: Florida, for instance, still requires AI-specific written consent regardless of any federal interpretation. A Texas company calling a Florida resident is bound by Florida's standard.
The defensive value of Bradford is real but narrow. It strengthens the position of operators who capture verbal consent during inbound calls and log it cleanly. It does not authorize anyone to abandon written consent capture for nationwide outbound programs.
The exemption your sales team thinks exists usually does not. The Established Business Relationship rule, the "warm lead" theory, and most variants of "but they're our customer" do not authorize AI calls. Real exemptions exist for emergency communications, narrow healthcare carve-outs, and certain debt collection scenarios. Everything else needs consent.
No. This is the single most expensive misunderstanding in the AI outbound playbook. EBR exempts you from the National Do Not Call Registry restrictions: a manual call from a sales rep to a past customer remains lawful even if the customer's number is on the DNC list. EBR does not exempt the call from the AI consent requirement. The artificial voice itself triggers the consent obligation.
The result is counterintuitive. Your live SDR can dial a 16-month-old customer on the DNC list under EBR. Your AI agent cannot dial the same person without separate consent. The voice is what the law cares about. Treating an active customer database as fair game for AI outbound has been the entry point for several recent class actions, including Gen Digital, where consumers on the call list had no LifeLock or Norton accounts at all.
Yes, with structure. A web form submission, content download, or demo request constitutes voluntary provision of a phone number in a transactional context. That gives you PEC for related informational outreach: confirming the demo, qualifying the lead, following up on the specific inquiry. It does not give you PEWC for unrelated marketing.
The clean architecture is to pair every form with a dual consent disclosure: one checkbox for transactional follow-up (PEC), one separate checkbox for ongoing marketing communications including AI voice (PEWC). Both unchecked by default. This converts inbound leads into a defensible outbound list immediately, without retrofitting consent later.
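The dual-checkbox capture can be expressed as a small data model. The sketch below is illustrative only: the field names and helper function are assumptions, not a standard schema or any specific platform's API. The point it demonstrates is structural: both checkboxes default to unchecked, and a consent record exists only for an affirmative tick.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical consent record; fields mirror what the article says a record must
# capture: timestamp, channel, exact disclosure language, and the affirmative act.
@dataclass
class ConsentRecord:
    phone: str
    consent_type: str            # "PEC" (transactional) or "PEWC" (marketing incl. AI voice)
    disclosure_text: str         # exact language the consumer saw
    captured_at: datetime
    source: str                  # e.g. "demo-request-form"
    ip_address: Optional[str] = None

def capture_form_consents(phone: str, disclosure_pec: str, disclosure_pewc: str,
                          transactional_checked: bool = False,
                          marketing_checked: bool = False,
                          source: str = "web-form",
                          ip_address: Optional[str] = None) -> list[ConsentRecord]:
    """Both checkboxes default to unchecked; only an affirmative tick creates a record."""
    now = datetime.now(timezone.utc)
    records = []
    if transactional_checked:
        records.append(ConsentRecord(phone, "PEC", disclosure_pec, now, source, ip_address))
    if marketing_checked:
        records.append(ConsentRecord(phone, "PEWC", disclosure_pewc, now, source, ip_address))
    return records
```

A submission with neither box ticked produces no consent record at all, which is exactly what a defensible default looks like.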
In the Fifth Circuit, yes, after Bradford. Outside the Fifth Circuit, no. A handed business card and a verbal "call me" create oral consent, which now satisfies the statute in Texas, Louisiana, and Mississippi. In the other 47 states, you need to convert that interest into written consent before an AI agent dials. The practical mechanism is a same-day follow-up email with an opt-in link. The badge scan, timestamp, and click should be retained for at least the TCPA's four-year statute of limitations; most defense counsel recommend seven years.
First-party collection (your business calling about a debt owed to you) generally falls outside the marketing consent framework because the calls are informational. Third-party collection inherits the same exemption from TCPA marketing rules, but the FDCPA imposes its own conduct rules: mandatory mini-Miranda warnings, restricted calling hours stricter than TCPA, frequency caps, and consumer-validation requirements. AI agents in collection workflows operate in PEC territory but must be programmed for the FDCPA layer. Skip a mini-Miranda and the call is unlawful regardless of consent status.
Genuine medical communications, including appointment reminders, prescription pickup notifications, lab-result alerts, and emergency-care messaging, have specific FCC exemptions developed over years of HIPAA-adjacent rulemaking. The exemptions are narrow and protect the consumer's interest in receiving the call. They do not extend to wellness-program marketing, premium upsells, or service-line promotion, all of which require separate PEWC.
The phrase "warm cold list" has no legal meaning. A consumer who downloaded your competitor's whitepaper, fits your ICP, or appears in a third-party intent database has not consented to receive your AI calls. Co-registration is the gray zone where most operators get burned: lead-gen partners frequently sell consents that name "and our partners" or "selected marketing affiliates." Courts have grown skeptical of these constructions, particularly where the consumer cannot reasonably identify the buyer at the moment of consent.
The defensible architecture treats every co-reg lead as a candidate for re-consent before AI dial. An automated email or SMS opt-in flow re-establishes a clean record. The cost of the friction is small relative to the cost of one class certification.
Federally, not yet. In several states, yes. As a defensive posture, always. Texas requires disclosure within 30 seconds. California, Florida, Colorado, Illinois, and Utah have variants. The FCC's pending rule will likely make in-call AI disclosure mandatory federally within 12 to 24 months.
The TCPA's existing rules require any artificial-voice call to identify the calling entity by name and provide a contact telephone or address at the start of the call. That language has been on the books since 1991 and predates AI. The pending NPRM would add a specific AI-identification requirement: a clear, plain-language disclosure that the call uses AI-generated voice technology, delivered at the opening of the call. Public comments closed in late 2024. Finalization timing remains uncertain under the current Commission.
Texas SB 140, effective September 2024, requires AI voice technology to be disclosed within the first 30 seconds of a call and prohibits voice cloning of identifiable persons without consent. Texas's broader TRAIGA (HB 149), effective January 1, 2026, requires state agencies and certain regulated industries to disclose AI interaction without using dark patterns.
California's AB 489 prohibits AI from falsely claiming healthcare credentials and requires disclosure when AI communicates with patients. California's SB 1001 (the bot disclosure law) has covered commercial transactions and political messaging since 2019. Florida requires written consent that explicitly references AI use. Colorado, Illinois, and Utah each have variants of disclosure or transparency obligations layered on top of federal rules. The EU AI Act imposes parallel requirements on any company calling EU residents from anywhere in the world.
A single sentence in the first 30 seconds satisfies most jurisdictions simultaneously: "This is an AI assistant calling from [Company] on a recorded line. Is this a good time to talk?" That sentence covers Texas's 30-second rule, California's bot disclosure obligation, the FCC's likely future rule, the EU AI Act's transparency requirement, and most enterprise procurement language requiring AI disclosure. It also tends to improve, not degrade, consumer engagement. Recent contact-center research consistently finds that disclosed AI calls outperform undisclosed ones on completion rate and CSAT, because callers appreciate not being deceived.
UK AI calling sits under PECR administered by the ICO, the UK GDPR, and Ofcom's nuisance-call rules. PECR splits calls into "live" (a human or human-equivalent) and "automated" (recorded or pre-recorded). AI voice agents fall in a gray zone the regulator has not formally resolved. ICO informal guidance suggests that an AI capable of genuine two-way conversation, with human handoff available, may be treated more like a live call than an automated one. The defensive posture is to assume automated, which requires prior consent for marketing and disclosure of the calling company.
Live calls to UK businesses are permitted without prior consent unless the business is on the Corporate Telephone Preference Service. Live calls to UK consumers require screening against the Telephone Preference Service. ICO fines for PECR violations have historically been capped at £500,000, but draft legislation under the Data (Use and Access) Act 2025 raises the cap to GDPR-level penalties of up to £17.5 million or 4% of global turnover.
Consumers can now revoke consent through any reasonable means. STOP, QUIT, REVOKE, OPT OUT, CANCEL, UNSUBSCRIBE, and END are all per se reasonable. So is "please stop calling," "remove me," or any free-form sentence expressing the same intent. Honor the request across all channels within 10 business days. The "revoke-all" extension to all unrelated communications is delayed but coming.
The FCC's April 2025 Revocation of Consent Rule eliminated keyword-only opt-out systems. If a consumer expresses intent to stop receiving messages in plain English, that is a valid revocation regardless of whether you trained your system to recognize the specific word they used. AI voice agents have a structural advantage here over SMS workflows because natural-language understanding catches free-form requests SMS keyword filters miss. The advantage only counts if the agent is programmed to log the revocation as a structured event and propagate it to suppression in real time.
Three operational requirements bind every program: recognize revocation expressed in any reasonable form, not just a fixed keyword list; honor it across every channel within 10 business days at the latest; and log each revocation as a structured event that propagates to suppression immediately.
The cost of getting opt-out propagation wrong is documented. The Cider US Holding case, filed in April 2025, settled within 30 days for $5.95 million after the platform failed to recognize "please cease" as a valid revocation. Albertsons settled a similar matter for $5.9 million almost as quickly. Plaintiffs' firms have built a repeatable template: enroll in an SMS program, send a non-keyword opt-out, let messages accumulate for proof, file. The voice equivalent is now in motion.
Six elements separate a defensible deployment from a class-action target. Documented consent for every dialed number. Real-time suppression. AI disclosure in the opening seconds. Calling-hour and frequency discipline. Free-form opt-out recognition with cross-system propagation. Audit-ready recordkeeping retained for at least four years.
Every outbound dial should originate from a consent record query, not a contact-list lookup. Before the agent connects, the system should verify a current, valid consent exists for that specific number, for that specific call type (marketing or informational), in that specific consumer's state of residence. Numbers without a current consent record should fail the dial attempt automatically. The record itself needs to capture the timestamp, the consent capture channel, the exact disclosure language the consumer saw or heard, the consumer's affirmative action (checkbox click, electronic signature, recorded verbal "yes"), and an IP address or device identifier where applicable.
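The consent-first dial flow described above can be sketched as a precall gate. This is a minimal illustration under stated assumptions: the dict-backed store and field names (`consent_type`, `state`, `captured_at`, `revoked`) are hypothetical, not any vendor's schema, and a production system would query a database and apply per-state consent-validity rules rather than a single age cutoff.

```python
from datetime import datetime, timedelta, timezone

# Assumption: consent older than the four-year limitations window is treated as stale.
MAX_AGE = timedelta(days=4 * 365)

def consent_gate(consent_store: dict, phone: str, call_type: str, state: str):
    """Fail the dial unless a current, valid consent record exists for this
    number, this call type, and this consumer's state."""
    for rec in consent_store.get(phone, []):
        if rec.get("revoked"):
            continue
        if rec["consent_type"] != call_type:   # "PEWC" for marketing, "PEC" for informational
            continue
        if rec["state"] != state:              # consent evaluated under the consumer's state
            continue
        if datetime.now(timezone.utc) - rec["captured_at"] > MAX_AGE:
            continue
        return True, rec                       # dial allowed; cite this record at dial time
    return False, None                         # no valid consent: the dial attempt fails
```

The key property is fail-closed behavior: a number with no matching record never reaches the dialer, and an allowed dial carries a citation to the exact record relied upon.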
National DNC scrubbing must happen at least every 31 days under federal rules, but for AI outbound, real-time suppression is the only architecture that survives scrutiny. Internal DNC adds, opt-outs from any channel, and revocations captured during a previous call must propagate to the dialer in seconds, not overnight. Modern outbound platforms expose this as a precall HTTP request returning an allow/deny decision in under 50 milliseconds, which is the only way to scale to thousands of concurrent calls without race conditions.
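The real-time suppression layer reduces to a constant-time membership check fed by every channel. The in-memory class below is a sketch, not a production design: a real deployment would sit behind the precall HTTP endpoint the article describes and back onto a shared store (Redis or similar is an assumption, not a named vendor architecture).

```python
import time

class SuppressionService:
    """In-memory sketch of a real-time suppression layer."""

    def __init__(self):
        self._suppressed: dict[str, float] = {}   # phone -> epoch seconds of the opt-out

    def suppress(self, phone: str) -> None:
        # Called from ANY channel: voice revocation, SMS STOP, email unsubscribe,
        # internal DNC add. Propagation is immediate, not batched overnight.
        self._suppressed[phone] = time.time()

    def allow(self, phone: str) -> bool:
        # O(1) allow/deny decision, cheap enough for a sub-50 ms precall check
        # across thousands of concurrent calls.
        return phone not in self._suppressed
```

Because `suppress` takes effect on the very next `allow` call, an opt-out captured mid-conversation blocks any dial attempt already queued behind it.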
Place the AI disclosure in the agent's opening turn. Not on slide 30 of a script. Not after the consumer asks. The first or second sentence the agent delivers should identify the calling company, identify the call as AI-generated, and offer the consumer the chance to continue or end. This satisfies the strictest current state requirements and positions the program for the federal rule that is almost certainly coming.
The TCPA limits calls to between 8 a.m. and 9 p.m. local time of the called party. For a multi-state campaign, the dialer needs to compute local time per number using current time zone data, not a single dialer time zone. Several states layer additional restrictions: Connecticut bars calls before 9 a.m. under SB 1058, Florida prohibits calls before 8 a.m. or after 8 p.m., and Virginia's SB 1339 extends opt-out honor periods to 10 years. Frequency limits exist at the state level too: most jurisdictions treat three or more calls in a short window to a non-engaging number as evidence of "willful" conduct, which trebles damages.
The agent's natural-language layer must classify revocation intent independent of keyword matching. "Take me off your list," "I'm not interested in any of this," "stop calling me" all need to register as global opt-outs and write to suppression in real time. The post-call event log should mark the revocation timestamp, the exact transcript line, and confirm that suppression propagation completed across all linked channels.
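A crude version of that classification can be shown with the FCC's per se keywords plus a free-form pattern layer. This is deliberately simplistic: a production agent classifies intent with its NLU/LLM layer rather than regexes, and the phrase list below is a small illustrative sample, not exhaustive. The sketch only demonstrates that detection must not depend on exact keyword matches.

```python
import re

# Per-se reasonable opt-out keywords named in the FCC's 2025 revocation rule.
KEYWORDS = {"stop", "quit", "revoke", "opt out", "cancel", "unsubscribe", "end"}

# Crude free-form layer; a real agent uses natural-language understanding instead.
FREE_FORM = re.compile(
    r"(please )?(stop|quit|cease) (calling|contacting|texting)"
    r"|take me off (your|the) list"
    r"|remove me"
    r"|not interested in any of this",
    re.IGNORECASE)

def detect_revocation(utterance: str) -> bool:
    """True if the consumer's utterance should register as a global opt-out."""
    text = utterance.lower().strip()
    return text in KEYWORDS or FREE_FORM.search(text) is not None
```

In the live system, a positive detection should emit a structured event (timestamp, transcript line, propagation confirmation) rather than just a boolean, per the logging requirement above.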
Every call should produce a structured artifact: timestamp, called party, agent identity, opening disclosure transcript, full conversation transcript, sentiment markers, any revocation events, and a citation to the consent record relied upon at dial time. Recordings should be encrypted at rest with access logs. Retention should match the four-year TCPA statute of limitations at minimum and seven years for industries with overlapping FDCPA or HIPAA obligations. When a class-action complaint lands, your ability to produce the consent record and the call transcript within hours is the difference between a six-figure settlement and a discovery fight that lasts two years.
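The per-call artifact can be modeled as a single structured record. The dataclass below is an assumption-laden sketch (field names are illustrative, and real systems add encryption, access logging, and retention policy on top); it shows the minimum shape that lets you answer a complaint within hours.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative per-call audit record; not a standard or vendor schema.
@dataclass
class CallArtifact:
    called_number: str
    agent_id: str
    started_at: datetime
    opening_disclosure: str                       # exact transcript of the AI disclosure
    transcript: list[str] = field(default_factory=list)
    revocation_at: Optional[datetime] = None      # set if the consumer opted out mid-call
    revocation_line: Optional[str] = None         # the exact transcript line
    consent_record_id: str = ""                   # citation to the consent relied on at dial time

    def is_audit_ready(self) -> bool:
        """Minimum bar: the disclosure was captured and a consent citation exists."""
        return bool(self.opening_disclosure) and bool(self.consent_record_id)
```

A record that fails `is_audit_ready` at call close is the one that turns into a two-year discovery fight later, so it should alarm at write time, not at subpoena time.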
Treat compliance as a procurement filter, not a feature. A platform that cannot answer specific compliance questions in writing should not enter your shortlist, regardless of latency benchmarks or per-minute pricing.
The questions that matter look different from the demo. Ask whether HIPAA coverage is provided through a self-service Business Associate Agreement portal or requires a sales call to negotiate (the difference is weeks of procurement time). Ask whether SOC 2 Type II reports are current and available under NDA, or whether the vendor will only confirm "in progress." Ask how the platform handles PII redaction in stored transcripts. Ask whether consent records can be queried as a precall API check or only logged after the fact. Ask how revocation events propagate to your other systems and on what latency. Ask for the data retention defaults and whether they can be configured per industry.
The vendors that built compliance into the architecture from day one will answer these questions in a sentence each. The vendors that bolted it on later will answer with a 40-slide deck.
Retell AI was designed against this checklist. Self-service BAA portal for HIPAA, SOC 2 Type I and Type II reports under NDA, GDPR coverage, configurable PII redaction, precall consent verification via webhook, and structured opt-out events propagated through post-call analysis. Customers handling regulated workflows, including Medical Data Systems, use the platform for inbound triage and high-volume collections under the same compliance posture. The reason to mention this is practical: most evaluation cycles for AI voice end in week six, when procurement realizes the vendor cannot produce the documentation the lawyers asked for in week one.
Three regulatory developments will move the floor. An FCC final rule on AI calls, the January 2027 activation of the revoke-all standard, and continued state-by-state fragmentation of AI-specific obligations.
The FCC's NPRM on AI-generated calls is the highest-stakes pending item. A final rule is expected by Q4 2026 or Q1 2027, though current FCC leadership has signaled deregulatory priorities that could push it longer. The most likely outcome includes an explicit AI-identification requirement at call open, modified consent language requirements that name AI specifically during capture, and a formal definition of "AI-generated call" broad enough to cover both real-time generation and pre-recorded synthetic voice. Sophisticated operators are building to the proposed standard now rather than waiting.
The revoke-all rule activation on January 31, 2027 is binary and irreversible. The 47 C.F.R. § 64.1200(a)(10) language treats any opt-out received in response to one type of communication as an opt-out from all unrelated robocall and robotext traffic from the same caller. For enterprises operating multiple brands, business units, or product lines under a single corporate parent, the rule forces consolidation of suppression infrastructure across silos that currently run independently. This is a 2026 build, not a 2027 build.
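The consolidation that revoke-all forces can be sketched as a parent-level suppression index: an opt-out received by any brand suppresses the number for every sibling brand under the same corporate parent. The class and the brand/parent names below are hypothetical; a real build would layer this onto shared suppression infrastructure rather than an in-memory map.

```python
from collections import defaultdict

class ParentSuppression:
    """Sketch of cross-brand suppression for the 2027 revoke-all standard."""

    def __init__(self, brands_by_parent: dict[str, set[str]]):
        self._brands_by_parent = brands_by_parent
        self._suppressed = defaultdict(set)       # parent -> suppressed numbers

    def record_opt_out(self, brand: str, phone: str) -> None:
        # One opt-out, received by any brand, covers all sibling brands.
        for parent, brands in self._brands_by_parent.items():
            if brand in brands:
                self._suppressed[parent].add(phone)

    def allow(self, brand: str, phone: str) -> bool:
        for parent, brands in self._brands_by_parent.items():
            if brand in brands and phone in self._suppressed[parent]:
                return False
        return True
```

The design point is that suppression is keyed at the parent, not the brand, which is exactly the silo consolidation the rule demands.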
State law fragmentation is the slowest-moving but most expensive trend. Colorado's revised AI Act framework, due for legislative action before its June 30, 2026 effective date, may classify high-volume AI voice systems as "high-risk AI" with documentation, impact-assessment, and bias-testing obligations. The federal preemption Executive Order issued by the current administration in early 2026 is being challenged in multiple federal courts and may not survive review. The fragmented landscape is the new operating environment. Plan compliance investment around the strictest applicable jurisdiction, never the most lenient.
Not without prior consent specific to your business. The FCC's February 2024 ruling pulled AI-generated voices into the TCPA's "artificial or prerecorded voice" framework, which means cold dialing without documented consent exposes you to $500 to $1,500 in statutory damages per call with no aggregate cap. The narrow exemptions for emergencies, certain healthcare communications, and informational calls do not authorize marketing AI cold calls. Operationally, "AI cold calling" should mean AI-driven warm outreach to consented leads, not unconsented cold dialing. The Lamb v. Mortgage One case filed in February 2026 demonstrates how quickly cold dialing exposure becomes class-wide.
Yes, for every U.S. cell phone number that will receive an AI marketing call. Bought lists rarely carry transferable consent. Co-registration consents that name "and partners" generically have become hard to defend in court. The only safe foundation for a 50,000-record AI campaign is a per-number consent record you can produce on demand, sourced from your own opt-in flows or from a partner whose consent language explicitly named your business. If you cannot produce that documentation for any number on the list, treat that number as cold and exclude it from the campaign.
Currently required by several states, almost certain to become federal, and a strong defensive posture in every case. Texas requires disclosure within the first 30 seconds. California, Florida, Colorado, and Utah have related obligations. The pending FCC rule would extend mandatory disclosure federally. Beyond legal compliance, transparent AI disclosure tends to improve completion rates and CSAT, because consumers respond worse to discovering deception mid-call than to knowing the truth at the start.
AI cold calling is legal in the UK but tightly regulated under PECR, UK GDPR, Ofcom rules, and the EU AI Act for any call to EU residents. Required practices include prior consent for automated marketing calls, screening against the Telephone Preference Service and Corporate Telephone Preference Service, identifying the calling company, disclosing AI use, honoring opt-outs, and conducting Data Protection Impact Assessments before launch. ICO enforcement is the primary risk, and the cap on PECR fines is rising from £500,000 to GDPR-level penalties under recent legislative changes.
No. EBR exempts you from the National Do Not Call Registry for manual calls, not from the AI consent requirement. The artificial voice itself triggers the consent obligation regardless of relationship. Treating your customer database as a fair-game list for AI outbound is one of the most expensive misunderstandings in the playbook. Consent capture has to happen at signup, renewal, or any subsequent inbound interaction. Without it, the AI agent cannot dial the customer lawfully.
It validated oral consent as sufficient for AI marketing calls in three states. The Fifth Circuit ruled in February 2026 that the TCPA itself requires only "prior express consent," not the heightened written consent the FCC has demanded since 2012. The decision binds Texas, Louisiana, and Mississippi. In those states, a documented oral "yes" is now an affirmative defense to a TCPA claim. The other 47 states continue to apply the FCC's written-consent standard. For nationwide programs, the operating instruction is unchanged: capture written consent.
They fall under the lighter PEC tier, which permits voluntary number provision as consent. Informational calls about appointments, deliveries, fraud alerts, and account status fall under PEC because they serve the consumer's interest. AI agents handling these calls can operate without separate written consent, provided the consumer voluntarily gave the number in a related transaction. The risk is scope creep: an "appointment reminder" call that pivots to a wellness-program pitch crosses into marketing and now requires PEWC.
Federal rule: 10 business days. Defensible practice: minutes. The 10-day window is the legal floor, not the operational target. Recent class actions have settled in the millions because plaintiffs were able to document continued contact after a clear opt-out. The 2025 Cider US Holding settlement ($5.95M in under 30 days) and the Albertsons settlement ($5.9M) both involved gaps between opt-out receipt and suppression list propagation. Any AI agent that catches a verbal opt-out must write the suppression in real time and propagate across the dialer, CRM, SMS platform, email service, and partner data feeds.
No. The entity on whose behalf the calls are made bears liability, regardless of which vendor pressed dial. This is the critical implication of the Lamb v. Mortgage One class definition, which sweeps in calls placed by "vendors, lead generators, or agents." If you outsource AI calling, you remain liable for the vendor's compliance failures. Vendor selection is therefore part of compliance posture, not separate from it. Diligence on the vendor's consent capture, suppression infrastructure, and audit trail capabilities is mandatory before a single call goes out under your name.
Statutory damages of $500 to $1,500 per call with no aggregate cap, plus the class certification multiplier. A 50,000-record campaign run without consent carries theoretical exposure of $25 million to $75 million. Most cases settle well below those numbers, but the 2026 docket establishes a clear range: $4.75M (Hy Cite), $5.9M (Albertsons), $5.95M (Cider US Holding), $9.95M (Gen Digital), $19M (QuoteWizard). Class certification dramatically changes the math. Practical floor for a non-compliant campaign that draws plaintiff attention: low-seven figures. Practical ceiling: nine figures.
Treat compliance as the chassis, not the brake. The companies running the largest AI outbound programs in the US are not the ones with the loosest interpretation of TCPA. They are the ones with the most rigorous documentation. A well-built consent layer makes everything downstream faster: better connect rates because numbers are pre-suppressed, better answer rates because branded calling is in place, better conversion because consumers know what they agreed to, and better defense if a complaint ever lands.
Audit your consent records first. Then your suppression infrastructure. Then your disclosure language. Then pilot the AI layer. Inverting that sequence is how the 2026 settlement docket got built.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. Telecommunications and AI regulations vary by jurisdiction and change frequently. Consult qualified counsel for guidance specific to your business.