What this archive covers
Incident pages include summaries, structured metadata, and source links for events involving AI-generated deception, synthetic identity misuse, and trust breakdowns online.
Realz documents public incidents involving deepfakes, voice cloning, synthetic media, impersonation, and digital identity abuse.
This archive is designed to make emerging trust risks easier to understand, easier to track, and easier to reference over time.
Deepfake and impersonation incidents are not just media problems. They affect safety, fraud exposure, verification, and how people decide what to trust.
Browse incidents by title and context, open each record for details, and use the linked sources for further review.
A structured archive of publicly reported incidents related to synthetic media, AI impersonation, and digital identity abuse.
The Berwyn Police Department warned residents about a scam in which callers use artificial intelligence to clone voices and impersonate loved ones or trusted officials. Scammers reportedly call victims urging them to send money or share personal information; police advise hanging up, calling the impersonated person using a known number to verify, avoiding urgent or secret payment requests, and reporting suspected scams to the Berwyn Police Department.
Topics: Voice Cloning, AI Impersonation, Manipulated Identity, Synthetic Media, Authenticity Harm, Digital Identity Abuse
In Lawrence, Kansas, police responded after a family member received a phone call in which the caller ID showed the mother’s number and a voice that sounded like the mother said a man had a gun to her head and was demanding money. The caller’s brother received the same call, and the mother was not answering her phone. Officers used a shared location app to track the mother’s phone, followed its moving location to a vehicle, and conducted a high-risk vehicle stop; the occupants complied and were not involved. The report states the incident involved AI voice cloning and caller ID spoofing, and notes that many similar cases originate overseas and often go unsolved.
Topics: Voice Cloning, AI Impersonation, Digital Identity Abuse, Authenticity Harm
The FBI warned that unknown actors have used AI voice cloning and other synthetic-media tools to impersonate senior U.S. government officials and to extract sensitive or classified information or conduct scams. The bureau said the campaign uses SMS to initiate contact, moves victims to encrypted messaging apps (Signal, WhatsApp, Telegram) or voice calls, and then employs familiar talking points—including offers of meetings with high-ranking officials or board nominations—to obtain passport photos, access to contact lists, wire transfers, or other sensitive data.
Topics: Deepfake, AI Impersonation, Synthetic Media, Manipulated Identity, Digital Identity Abuse, Authenticity Harm
On July 12, a Hillsborough County, Florida woman (identified as Sharon) received a panicked phone call that sounded like her daughter, who claimed to have been arrested after a car crash. A caller claiming to be a lawyer then demanded $15,000 for bail; Sharon withdrew the money and handed it to a person who arrived to collect it. The family later confirmed the real daughter was at work and the call was fraudulent; Sharon and her husband lost the $15,000.
Topics: Voice Cloning, AI Impersonation, Synthetic Media, Manipulated Identity, Authenticity Harm, Digital Identity Abuse