Advice for your family

Seeing (and hearing) is no longer believing.
The stats are alarming: Fraud involving deepfakes has surged by 3,000% recently, with projected losses reaching a staggering $40 billion by 2027.
In the “old days” (2-3 years ago), generating a single AI image took 30 seconds. Today, AI systems can generate hundreds of images per second, life-like video, full voice clones—enabling convincing, deceptive deepfakes on the fly.
NVIDIA CEO Jensen Huang puts it bluntly: “Every single pixel will be generated soon. Not rendered: generated.”
Deepfakes are being weaponized for financial fraud and election manipulation, undermining the very concept of truth itself. The old adage “seeing is believing” simply no longer applies.
So, what can you do? While technology races forward at Moore’s Law speed, our human defenses need to keep pace. Here are three powerful strategies to protect yourself:
#1. Establish a Secret Verification Code
Create a pre-arranged password with family members and close contacts. When faced with unexpected requests for money or sensitive information, request this verification code before taking action. This simple, old-school approach provides a powerful defense against even the most sophisticated AI voice clones. When your “son” calls in distress asking for emergency cash, your code word becomes your shield.
#2. Question Every Unusual Request
Approach sudden demands for money, personal information, or urgent action with healthy skepticism. If something feels off about a call, video, or message—trust that instinct. Today’s AI avatars have become remarkably convincing, but they typically pressure you toward impulsive decisions. Remember: legitimate requests can withstand the scrutiny of verification and time. As Berkeley professor Hany Farid points out, deepfakes create a world where “plausible deniability” becomes all too common.
#3. Look for the Digital Glitches
While deepfake technology improves daily, telltale signs often remain. Watch for unnatural eye movements, robotic speech patterns, or pixel inconsistencies, particularly around facial features in videos. Audio deepfakes may contain unusual background noise or inconsistent voice modulation. The golden rule remains: if an interaction seems suspiciously perfect or oddly imperfect, proceed with extreme caution.
The Bright Side of Deepfakes
Recording Your Parents/Grandparents
Here’s another piece of family advice for your consideration. (Note: despite the risks, deepfake technology isn’t inherently evil. Like any powerful innovation, it offers remarkable potential benefits alongside serious dangers.)
If you are lucky enough to have your parents and/or grandparents still alive and of sound mind, consider sitting each of them down for a 2- or 3-hour recording session. Aim your smartphone at them and start recording their story. Capturing their video, voice, and story will enable you someday (if you choose) to bring them back as a fully interactive AI avatar that will allow your great-great-grandkids to know and interact with them.
The future of this technology remains unwritten. As I’ve often said, “The world’s biggest problems are the world’s biggest business opportunities.” Entrepreneurs working alongside governments will build new infrastructures and countermeasures—similar to how the banking industry developed systems to prevent currency counterfeiting.
Until then, vigilance remains our most powerful defense against scammers who are targeting both your wallet and your trust.
The next time your “grandmother” urgently asks for money, pause, verify, and outsmart the deception.
Until next time,
Peter