A crypto founder had his laptop compromised when he joined what appeared to be a Microsoft Teams call with Pierre Kaklamanos, a Cardano Foundation contact he had spoken with before.
When “Pierre” reached out about Atrium and sent a Teams invite, nothing looked out of place. On the call, the face and voice matched what he remembered, and two other apparent foundation members were present.
When the call lagged and dropped him, a prompt told him his Teams software was out of date and needed to be reinstalled through Terminal. He ran the command, then shut the laptop off because the battery was dying, a step that, in retrospect, limited the damage.
He describes himself as “quite technically savvy,” which is precisely the point: the attack worked not because the victim was careless, but because the context felt legitimate.
Social engineers have always relied on familiarity, and executing that at scale once required either a compromised account or weeks of text-based rapport-building.
The video call was the authentication layer, the thing victims learned to trust, and replicating it is now within reach.
Fake update
Microsoft documented campaigns in February and March 2026 in which malicious files masqueraded as workplace apps, such as msteams.exe and zoomworkspace.clientsetup.exe, with phishing lures that mimicked legitimate Teams and Zoom meeting workflows.
In a separate warning, Microsoft described “ClickFix”-style prompts targeting macOS users, instructing them to paste commands into Terminal and targeting browser passwords, crypto wallets, cloud credentials, and developer keys.
The fake Teams update fits both patterns simultaneously.
Google Cloud’s Mandiant unit described a crypto-focused intrusion built on the same structure: a compromised Telegram account, a spoofed Zoom meeting, what witnesses described as a deepfake-style executive video, and troubleshooting commands that launched the infection.
Mandiant said it could not independently verify which AI model, if any, generated the video, but confirmed the group used fake meetings and AI tools during social engineering.
On Apr. 24, the real Pierre Kaklamanos posted on X saying his Telegram had been hacked and that someone was impersonating him, along with “a few other people in the industry this week.”
He told followers to avoid clicking links or booking meetings through the account and to verify contact through LinkedIn direct messages.
By then, the founder had already messaged the account suggesting they switch to Google Meet. Whoever controlled Pierre’s Telegram account replied that he had gotten busy and asked to reschedule, with the attacker still managing the persona once the call ended.
That exchange turns the incident from an isolated embarrassment into the signal of a live campaign: the method is active, the account compromise is the entry point, and the relationship history is the weapon.
| Stage | What the victim saw | Why it looked legitimate | What the attacker was likely trying to achieve |
|---|---|---|---|
| Initial outreach | “Pierre” reached out about Atrium and suggested a call | The victim had spoken with Pierre before, including on video | Reopen an existing trust relationship instead of starting from a cold approach |
| Meeting setup | A Microsoft Teams invite for the next day | Teams is a normal business workflow and the topic was plausible | Move the target into a controlled environment that felt routine |
| Live call | Familiar face, familiar voice, plus two other apparent Cardano Foundation members | The social context matched the victim’s memory of prior interactions | Lower suspicion and make the call itself feel like verification |
| Call disruption | Lagging, instability, then getting kicked out | Technical glitches are common in video calls | Create frustration and set up the fake “fix” as a normal troubleshooting step |
| Fake update prompt | A message saying Teams was out of date and needed reinstalling through Terminal | Software update prompts are familiar, and the user rarely used Teams | Get the victim to execute a malicious command directly |
| Command execution | The victim ran the command, then shut down the laptop because the battery was dying | The workflow still felt like a routine app fix at that moment | Launch the infection chain and gain access to credentials or device data |
| Post-call follow-up | The victim suggested switching to Google Meet; the attacker said he got busy and asked to reschedule | The persona continued behaving like a real contact after the failed attempt | Keep the relationship alive for another attempt and avoid immediate suspicion |
Why generative media changes the threat surface
The founder said he now believes the call may have involved AI-generated or manipulated video. Forensic confirmation of the specific tools is lacking, but OpenAI’s own safety documentation speaks to what has become possible.
OpenAI launched its 4o image generation model on Mar. 25, describing it as capable of “precise, accurate, photorealistic outputs,” and released the ChatGPT Images 2.0 System Card on Apr. 21.
The firm stated that the model’s “heightened realism” could, absent safeguards, enable more convincing deepfakes of real people, places, or events. One of the leading AI labs has now put on record that its own image model raises the ceiling on what a convincing fake can look like.
The World Economic Forum said in January 2026 that generative AI lowers the barrier to phishing while raising its credibility, through realistic deepfake audio and video that can evade both detection systems and human scrutiny.
INTERPOL declared financial fraud one of the world’s most severe and rapidly evolving transnational crimes in March 2026, identifying deepfake videos, audio, and chatbots as tools that make impersonation of trusted people easier to carry out at scale.
Chainalysis estimated that crypto scams and fraud reached $17 billion in 2025, with impersonation scams up 1,400% year over year and AI-enabled scams generating 4.5 times as much revenue as traditional methods.

Crypto attracts this class of attack because it combines high-value targets, fast settlement rails, and an informal communications culture in which Telegram introductions and ad hoc video calls between founders are routine.
Mandiant documented that the group behind the crypto Zoom intrusion targeted software firms, developers, venture firms, and executives across payments, brokerage, staking, and wallet infrastructure.
Mandiant noted that the victim’s data could be used to seed future social engineering, with each compromise generating material for the next.
Two paths forward
Zoom announced on Apr. 17 a partnership to add real-time human verification to meetings, a “Verified Human” badge, and a “Deep Face Waiting Room,” treating participant authenticity as a product problem.
Gartner predicts that by 2027, 50% of enterprises will invest in disinformation-security products or TrustOps strategies, up from less than 5% today.
In the bull case, that buildout reaches critical mass quickly enough that attackers must defeat multiple independent trust layers to complete a conversion, and the economics of impersonation campaigns deteriorate.
In the bear case, the timeline compresses before defenses do. Gartner warned that AI agents may halve the time required to exploit account takeovers by 2027, narrowing the window for human hesitation or security team intervention.
Deloitte estimated that generative AI-enabled fraud losses in the US alone could climb from roughly $12 billion in 2023 to $40 billion by 2027.
| Scenario | What changes | What stays vulnerable | Implication for crypto firms |
|---|---|---|---|
| Bull case | Verification tools spread quickly: human-verification badges, liveness checks, stronger internal trust rails, and more formal approval workflows | Informal founder-to-founder chats, legacy messaging habits, and ad hoc scheduling still create openings | Attackers face more friction and lower conversion rates because they must defeat several trust layers instead of one |
| Bear case | AI-generated impersonation improves faster than defenses are adopted; fake meetings and fake troubleshooting become standard playbooks | Public-facing executives, Telegram-based outreach, video-first verification habits, and staff under time pressure | Relationship hijacking becomes routine, and each compromise creates material for the next scam |
| What success looks like | Sensitive requests get verified across separate channels, with known numbers, shared passphrases, hardware keys, or pre-agreed internal systems | Social pressure, urgency, and trust in familiar faces and voices cannot be fully removed | Firms reduce the chance that one spoofed call can lead directly to compromise |
| What failure looks like | Teams rely on the call itself as proof of identity, even as deepfake and impersonation tools improve | Video remains persuasive even when it is no longer reliable as authentication | Crypto organizations become easier to target because executives are both high-value victims and reusable lure assets |
Every public-facing crypto executive becomes both a target and a lure asset, a source of voice recordings, video clips, and relationship graphs that attackers can deploy against the next victim.
Zoom is building liveness checks into meetings, and Microsoft is documenting attack chains that impersonate its own software. The FBI has warned that malicious actors are already using AI-generated voice and text to impersonate trusted contacts, advising against assuming a message is authentic simply because it appears to come from a known person.
Verification now requires independent rails, such as a known phone number, a hardware key, a shared passphrase established before any meeting, or a pre-agreed internal channel that no attacker has accessed.
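As a minimal illustration of the shared-passphrase rail described above, the sketch below shows a challenge-response check built on a secret agreed in person before any meeting. The function names and flow are hypothetical, not part of any product mentioned in this article: one party sends a random challenge over a separate channel (a known phone number, for example), and the other reads back a short code that only someone holding the passphrase can compute.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Generate a random challenge to send over a second, independent channel."""
    return secrets.token_hex(8)

def response_code(shared_passphrase: str, challenge: str) -> str:
    """Derive a short code both parties can compute from the pre-shared passphrase."""
    digest = hmac.new(shared_passphrase.encode(), challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

def verify(shared_passphrase: str, challenge: str, code: str) -> bool:
    """Check the code read back on the call, using a constant-time comparison."""
    return hmac.compare_digest(response_code(shared_passphrase, challenge), code)

# Example: the challenge travels by SMS to a known number; the code is
# spoken on the video call. A deepfaked participant who never received
# the passphrase cannot produce a valid code.
challenge = make_challenge()
code = response_code("passphrase agreed in person", challenge)
assert verify("passphrase agreed in person", challenge, code)
```

The point of the sketch is the separation of channels: the secret never crosses the meeting platform itself, so compromising the call, or the contact’s messaging account, is not enough to pass verification.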
The post AI scams in crypto approach breaking point – OpenAI’s new image model shows why they could get worse appeared first on CryptoSlate.