We’ve written a lot about AI – here and here and here – and it’s mostly been about the benefits. AI is improving physician diagnostics and helping identify better therapies more quickly. Innovative AI solutions for aging in place, for example, are helping make the hospital of the future, in many respects, the home.
But AI can also be dangerous. As we reported here, it’s becoming a feature of more and more scams that target the elderly.
Perhaps the most dangerous of these scams relies on “deepfake” technology. You need to know what it is, and how to protect yourself.
As reported by the National Cybersecurity Alliance, “Deepfakes are artificial intelligence-generated videos or audio clips that make it appear as though someone is saying or doing something they never did … Deepfakes can be used to defame individuals and commit fraud. For example, if your vocal identity and sensitive information got into the wrong hands, a cybercriminal could use deepfaked audio to contact your bank.”
You don’t have to be using any AI product yourself to become a victim. The technology can be used to scrape your data (such as videos, photos, and voice recordings) from websites like social media platforms. Criminals can then create a deepfake of you.
How can you protect yourself?
The article offers a number of excellent suggestions. (Read the full story for more details.)
1. Be extremely cautious about what information you share online. Adjust the settings of a social media platform “so that only trusted people can see what you share.”
2. Take full advantage of websites’ privacy settings to control who can see your personal information and content.
3. Use a digital watermark on all your photos. “This can discourage deepfake creators from using your content since it makes their efforts more traceable.” There is a wide range of watermark tools online, many of them free. Here is just one example. (If you’re comfortable with a little code, the first sketch after this list shows a do-it-yourself approach.)
4. Use multi-factor authentication for all your accounts. Examples include a facial scan, a one-time code texted to your phone, or a standalone authenticator app.
5. Make sure your passwords are long and strong. The article recommends at least 16 characters and a mix of uppercase letters, lowercase letters, numbers, and special characters. Many password managers, such as NordPass and RoboForm (both of which have free versions), will help you store them. (The second sketch after this list shows what generating such a password looks like.)
6. Keep your software up to date. Older versions may have vulnerabilities that the criminals have figured out how to exploit. It’s best to turn on automatic updates so you don’t have to keep checking.
7. Be wary of inbound emails, direct messages, texts, or phone calls if you’re not 100% sure of the source. Do not click on links from unknown senders. Be especially wary of messages that claim to come from a government agency and demand immediate action.
8. Report any deepfake incidents. Let the platform hosting the content know, and also report it to federal law enforcement (for example, through the FBI’s Internet Crime Complaint Center at ic3.gov).
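To illustrate suggestion 3: a visible watermark can be added to a photo in a few lines of Python using the free Pillow imaging library. This is only a minimal sketch; the file names and the watermark text are placeholders you would replace with your own.

```python
# A minimal visible-watermark sketch using the Pillow library (pip install Pillow).
# "photo.jpg" and the watermark text are placeholders - substitute your own.
from PIL import Image, ImageDraw, ImageFont

img = Image.open("photo.jpg").convert("RGBA")

# Draw the text on a transparent overlay so the watermark is semi-transparent.
overlay = Image.new("RGBA", img.size, (255, 255, 255, 0))
draw = ImageDraw.Draw(overlay)
font = ImageFont.load_default()
draw.text((10, img.height - 20), "(c) Jane Doe", font=font,
          fill=(255, 255, 255, 140))  # last value = opacity, 0-255

# Merge the overlay onto the photo and save a regular JPEG copy.
watermarked = Image.alpha_composite(img, overlay).convert("RGB")
watermarked.save("photo_watermarked.jpg")
```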
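And to illustrate suggestion 5: the sketch below generates a password meeting the article’s criteria (at least 16 characters, mixing uppercase letters, lowercase letters, numbers, and special characters). It uses Python’s built-in secrets module; the function name and the particular set of special characters are our own choices, since the article doesn’t specify them.

```python
# A sketch of a password generator meeting the article's criteria, using
# Python's standard-library secrets module (cryptographically secure).
import secrets
import string

def generate_password(length: int = 16) -> str:
    classes = [
        string.ascii_uppercase,  # A-Z
        string.ascii_lowercase,  # a-z
        string.digits,           # 0-9
        "!@#$%^&*-_=+",          # special characters (our own choice)
    ]
    # Start with one character from each class so every class is represented,
    # then fill the remaining positions from the combined pool.
    chars = [secrets.choice(c) for c in classes]
    pool = "".join(classes)
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    # Shuffle so the guaranteed characters aren't always in the same spots.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())  # output varies on every run
```

A password manager like the ones named above does essentially this for you, and stores the result so you never have to remember it.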
Fraudsters obtaining your personal information, videos, and other images and using AI tools to impersonate you isn’t the only danger. There’s another big risk: they can create deepfake video calls that impersonate your family members, or authority figures you know, and mislead you into divulging sensitive information or sending money.
As reported here, this technique is becoming increasingly prevalent.
“Deepfake videos are synthetically generated or manipulated multimedia content that can portray individuals saying or doing things they never actually did. Powered by artificial intelligence and machine learning, these videos have reached a level of sophistication where it can be incredibly difficult to distinguish real footage from the fake.”
According to the FTC, there was a 70% increase in scam attempts using some form of “synthetic media” compared to the previous year. Victims over the age of 60 account for a disproportionate share of the financial losses, “often running into the tens of thousands … One notable case involved a scam where fraudsters used a deepfake video of a son in distress to convince an elderly parent to wire a significant sum of money for bail. In another instance, a deepfake of a CEO was used to request fraudulent wire transfers from company accounts.”
One solution, says the article, is Knowledge-Based Verification (KBV): asking personal questions that only the real person could answer and that a fraudster wouldn’t know, such as a key biographical detail like the name of their first pet or the model of their first car. Alternatively, your family can agree on a “safe” word or phrase that only its members know, and that you ask for at the outset of any video call.
This is a fast-moving topic and we will be reporting on it often.