Rajya Sabha MP and philanthropist Sudha Murty has become the latest high-profile victim of a sophisticated deepfake scam, with fraudsters using artificial intelligence to create a synthetic video of her promoting a fraudulent investment scheme.
AI-Generated Video Promises Unrealistic Returns
In the viral video circulating on social media platforms, a digitally manipulated version of Sudha Murty is heard endorsing an investment plan and promising extravagant returns of "20 to 30 times" the initial amount. The deepfake content appears convincingly real, leveraging Murty's trusted public image to lure potential victims.
Taking immediate cognizance of the situation, Murty issued a stern warning to the public about "AI and the cunning minds" behind such deceptive technology. She expressed deep concern about her likeness being misused for financial fraud.
"I am really concerned about fake messages using my face and my voice to promote investments promising 20 or 30 times returns," Murty told PTI. "This is all fake and driven by AI and a cunning mind behind it."
Murty's Clear Warning and Advice to Public
The founder-chairperson of Infosys Foundation categorically stated that she never discusses or promotes investments publicly. She urged citizens to treat any such content as entirely fraudulent.
"I will never talk about investments anywhere, anytime," she asserted. "If you see my face or hear my voice promoting investments, do not believe it."
Offering practical advice, the MP emphasized careful financial decision-making: "This is hard-earned money - please think carefully, verify with a bank or trusted source, and only then decide." She recommended personal analysis and verification with reliable institutions before committing funds to any scheme.
Not the First Cyber Attack on Murty
This deepfake incident follows an earlier cyber fraud attempt targeting Murty in September this year. According to police complaints, scammers placed a hoax call that displayed as 'Telecom Dept' on Truecaller, with the caller posing as a ministry employee.
The caller falsely claimed that her mobile services would be disconnected due to alleged misuse and that obscene content was being broadcast from her number. The individual, described as rude and operating under a fake identity, attempted to extract sensitive personal information. Murty has initiated legal action over this separate incident.
How to Identify and Avoid Scams
These incidents highlight the evolving tactics of cyber criminals. Here are key indicators of potential scams:
- Sense of False Authority: Scammers often impersonate representatives of popular brands, businesses, or government agencies.
- Unsolicited Contact: Random calls or messages from companies or government bodies should raise immediate suspicion.
- Created Urgency: Fraudsters manufacture false deadlines or emergencies to pressure quick compliance.
- Poor Communication: Scam emails and messages frequently contain spelling errors, grammatical mistakes, and strange subject lines.
- Deepfake Deception: Criminals may use AI to impersonate loved ones in fake emergencies. Establishing safe words or asking verification questions can expose them.
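Several of the indicators above (unrealistic return promises, false urgency, impersonated authorities) are simple enough to express as text patterns. Below is a minimal, purely illustrative Python sketch of a rule-based message screener; the pattern names and keyword lists are hypothetical examples, not a production filter, and no such tool is described in the reporting above.

```python
import re

# Hypothetical keyword patterns for the scam indicators listed above.
# These are illustrative only; real screening would need far broader
# coverage and would still miss well-crafted scams.
SCAM_PATTERNS = {
    "unrealistic_returns": re.compile(
        r"\b\d+\s*(?:to\s*\d+\s*)?(?:x|times)\s*(?:returns?|profit)", re.I),
    "false_urgency": re.compile(
        r"\b(act now|last chance|within 24 hours|immediately"
        r"|account.*(suspended|disconnected))\b", re.I),
    "authority_claim": re.compile(
        r"\b(telecom dept|ministry|government official|bank officer)\b", re.I),
}

def scam_indicators(message: str) -> list[str]:
    """Return the names of scam indicators detected in a message."""
    return [name for name, pattern in SCAM_PATTERNS.items()
            if pattern.search(message)]

msg = ("Telecom Dept: your number will be disconnected immediately. "
       "Invest today for 20 to 30 times returns!")
print(scam_indicators(msg))
# → ['unrealistic_returns', 'false_urgency', 'authority_claim']
```

Automated checks like this can only flag the crudest attempts; as Murty's advice makes clear, independent verification with a bank or trusted institution remains the reliable safeguard.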
The rise of accessible AI tools has made deepfake scams increasingly common, targeting both celebrities and ordinary citizens. Authorities advise heightened vigilance, independent verification of any unusual financial offers, and immediate reporting of suspicious content to cybercrime cells.