Hello, scammer, I am your nightmare.
Meet Daisy, a kind-looking grandmother. While she doesn’t have a physical form, she’s an expert AI anti-fraud specialist.
Unlike traditional anti-fraud campaigns, Daisy doesn’t simply remind you to be cautious about scams. Instead, she tackles the problem at its source by engaging scammers in casual conversation, gradually wearing down their patience.
Free lunches may not fall from the sky, but scammers will gladly dangle bait.
The purpose behind creating Daisy was to use scammers’ own persistence to waste their time, reducing their opportunities to bait other potential victims. As the saying goes, true anti-fraud victory isn’t about defeating scammers—it’s about making scamming unprofitable.
Rather Than Teaching One Person to Avoid Scams, Better to Waste a Scammer’s Time
Daisy was developed by O2, a British telecommunications company.
Alongside Vodafone, EE, and Three, O2 is one of Britain’s four major telecom operators. According to O2’s data, 67% of British people express concerns about becoming scam victims, with a quarter experiencing various degrees of scam attempts weekly.
O2 was frustrated by the rampant scammer activities.
So they came up with an ingenious solution: creating an AI grandmother named Daisy who actively engages with scammers in conversations lasting up to 40 minutes.
The approach is somewhat self-sacrificing yet practical: every minute a scammer spends tangled up with Daisy is a minute not spent defrauding someone else.
Daisy is designed as a kind grandmother, reflecting reality where elderly people are often the primary targets of scammers.
In the official demo, Daisy chats with scammers about everyday topics, vividly sharing her hobbies—like knitting sweaters and talking about her lovely cats.
The goal is simple: keep the call going as long as possible.
When asked for sensitive information such as bank account numbers or passwords, the AI grandmother invents fictitious details, so she always has an answer but it never leads anywhere, essentially spinning out elaborate tales.
O2’s head of fraud, Murray Mackenzie, cuts to the heart of the matter, explaining that Daisy’s effectiveness lies in completely reversing the dynamics with scammers. “By keeping them on the line, we’ve cleverly beaten and surpassed them at their own ‘game.’”
In fact, similar tools existed before Daisy.
According to The Guardian, a team at Macquarie University in Australia developed an AI chatbot called Apate specifically designed to converse with telecom scammers.
When Australian police identify and intercept scam calls, they transfer them to this “victim bot.”
Apate’s creator, Professor Dali Kaafar, has shared the project’s origin story.
While picnicking with his two children, he received a scam call and strung the caller along with jokes. That got him thinking: meaningless chatter could be used deliberately to waste scammers’ time and keep them from deceiving more people.
“I thought, the purpose is to deceive the deceivers, to waste their time so they can’t call others.”
Apate has hundreds of different identities, each with distinct accents and personalities—some feign confusion, others speak repetitively like elderly people, and some have quick tempers. This variety helps prevent scammers from recognizing patterns.
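The persona-rotation idea can be sketched in a few lines. The personas below are invented for illustration (the Macquarie team hasn’t published Apate’s actual roster), but the mechanism, assigning a different character to each incoming call so scammers can’t learn one fixed pattern, is the same:

```python
import random

# Toy sketch of Apate-style persona rotation. These three personas are
# made up for illustration; a real system would have hundreds, each with
# its own accent and voice.
PERSONAS = [
    {"name": "confused retiree", "tic": "Sorry dear, could you say that again?"},
    {"name": "repetitive talker", "tic": "As I was saying... as I was saying..."},
    {"name": "short-tempered", "tic": "Get to the point, I haven't got all day!"},
]

def pick_persona(seed=None):
    """Choose a persona for an incoming call.

    A fresh random pick per call prevents scammers from fingerprinting
    one fixed bot; the optional seed just makes the sketch reproducible.
    """
    rng = random.Random(seed)
    return rng.choice(PERSONAS)

persona = pick_persona()
print(persona["name"], "-", persona["tic"])
```

In practice the chosen persona would also drive the voice model and response style for the rest of that call.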
In essence, rather than teaching one person to avoid scams, it’s better to waste a scammer’s time.
As Technology Upgrades, Human Judgment Remains the Ultimate Defense
AI’s rapid development provided the perfect opportunity for Daisy’s creation.
Built on a customized Large Language Model (LLM), Daisy operates similarly to Google Gemini Live or early versions of ChatGPT Voice.
Specifically, the AI transcribes the caller’s voice, then this AI grandmother adjusts her response style and content based on her preset character personality, finally converting it back to her AI voice:
- Voice-to-Text Transcription: The process of converting captured voice to text is called “transcription.” This allows Daisy to understand what callers are saying and prepare appropriate responses.
- Response Generation: Daisy uses a custom LLM to generate responses. This model not only understands text but generates appropriate answers based on context and preset character personality layers.
- Character Personality Layer: This refers to how Daisy adjusts response style and content according to preset personality traits, making responses match her character—like being warm and patient like a grandmother.
- Text-to-Speech: Generated text responses need to be converted back to speech for phone communication. This “text-to-speech” (TTS) process uses a custom AI model.
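The four stages above chain into a single call-handling loop. O2 hasn’t published Daisy’s implementation, so the sketch below stubs every stage with hypothetical functions; it only shows the pipeline shape (speech-to-text, then the LLM with its persona layer, then text-to-speech):

```python
# Minimal sketch of a Daisy-style turn: STT -> LLM + persona -> TTS.
# All function bodies are stubs; real systems would call speech and
# language models here.

PERSONA = ("You are Daisy, a warm, patient grandmother. Ramble about "
           "knitting and your cats, never reveal real data, and keep "
           "the caller talking as long as possible.")

def transcribe(audio: bytes) -> str:
    """Stage 1 - speech-to-text. Stub: pretend the audio is already text."""
    return audio.decode("utf-8")

def generate_reply(persona: str, transcript: str) -> str:
    """Stages 2+3 - LLM response shaped by the character-personality layer.

    Stub: a real system would prompt an LLM with `persona` + transcript.
    """
    if "bank" in transcript.lower() or "password" in transcript.lower():
        # Sensitive request: answer with harmless fiction, never real data.
        return "Oh dear, let me find my glasses... was it my cat's name?"
    return "Did I mention I'm knitting a lovely cardigan for my cat Fluffy?"

def synthesize(text: str) -> bytes:
    """Stage 4 - text-to-speech. Stub: return bytes instead of audio."""
    return text.encode("utf-8")

def handle_turn(audio_in: bytes) -> bytes:
    """One conversational turn through the whole pipeline."""
    transcript = transcribe(audio_in)
    reply = generate_reply(PERSONA, transcript)
    return synthesize(reply)

print(handle_turn(b"Please confirm your bank password.").decode())
```

The key design point is that the persona layer sits between transcription and synthesis: the same LLM could power any character just by swapping the persona prompt.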
During training, Daisy received help from YouTube creator Jim Browning.
With over 4 million subscribers, Browning is one of YouTube’s most influential anti-scam activists. His videos not only demonstrate scam techniques but reveal how the entire scam industry operates through detailed technical analysis and examples.
After all, your enemies always understand you best—scammers included.
The entire process of Daisy’s conversations with scammers—from listening to voice, transcribing text, generating responses, to converting text back to speech—happens in real-time with minimal delay.
However, Daisy isn’t directly available to O2 users.
The reason is simple: O2 worries users might use it for revenge against scammers. Their internal research shows 71% of Britons have considered retaliating against those who scammed them or their loved ones.
According to Techspot, Daisy’s number has been seeded into the “easy target” number lists that scammers commonly trade, and she can field calls 24/7 without human intervention.
But here’s the thing: There are now two types of scammers—those who don’t understand AI, and those who do.
Recently, startup Bland AI claimed on X that with large model support, their product would become the world’s fastest conversational AI, able to communicate without “hallucinations.”
Additionally, Bland AI bots feature extremely natural generative voice and multiple language switching capabilities.
Beyond skillfully mimicking human conversational tones and pauses, Bland AI’s bots, in WIRED’s testing, would even deny being AI and lie about their identity.
APPSO previously used AI to create some zero-cost but very realistic digital twins.
With AI technology enhancement, these tools can easily be used by fraudsters to deceive us.
Ironically, we might end up with AI scammers on one end of the phone and AI Daisy on the other, engaged in an endless loop of deception and counter-deception. Those in the know understand how deep this rabbit hole goes.
Even before AI, automated outbound-calling systems were already quite sophisticated.
Service providers could quickly build a management platform on request: users configured scripts, templates, phone lines, and other parameters through a simple interface, imported contact lists, and launched automatic outbound calls.
Chinese phone manufacturers have also put considerable effort into defense.
Most Chinese handsets, for instance, can identify and block fake base stations, and users can enable “online identification of unknown numbers” in the call-blocking settings to more accurately flag and block harassment calls.
Scamming is essentially a filtering game: scammers don’t mind failing 999 times, because they only need to succeed once.
For the victim, that one success can wipe out everything.
While scam methods may vary endlessly, this official anti-fraud formula never gets old:
Person (stranger) + Communication (any tool where you can’t meet face-to-face) + Request (money transfer, anything involving money) = Scam.
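The formula is just a three-way conjunction, which makes it easy to express as a toy check (the field names are invented for illustration, not from any real anti-fraud API):

```python
# Toy encoding of the anti-fraud formula:
# stranger + remote channel + money request = scam red flag.

def looks_like_scam(is_stranger: bool,
                    remote_channel: bool,
                    asks_for_money: bool) -> bool:
    """All three conditions together are the classic warning sign."""
    return is_stranger and remote_channel and asks_for_money

# An unknown caller asking for a transfer over the phone trips all three.
print(looks_like_scam(True, True, True))    # → True
# A face-to-face request from a stranger is missing the remote channel.
print(looks_like_scam(True, False, True))   # → False
```

Real scams rarely announce all three conditions at once, of course; the point of the formula is that whenever all three do line up, you should stop and verify.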
In the AI era, both scammers and anti-fraud technology are upgrading, but human judgment remains the ultimate defense line.