
Technology is awesome when it works. We depend on it for so many things that, when it doesn’t work as expected, it can cause major problems for businesses. The latest technology that can be used for both good and evil is artificial intelligence (AI).

Deepfake technology is a form of AI that can manipulate or generate video, audio, or images. It has progressed over the past few years to the point that it is accessible to anyone and is so realistic that it can be very difficult to distinguish real from fake.

News outlets have recently reported on malicious uses of deepfakes, such as political manipulation and the generation of sexually explicit images of a pop superstar.

More concerning for businesses are uses of the technology that can facilitate fraud. Businesses need to continually review their security practices and procedures to ensure they keep pace with changing technology.

A recently reported scam used deepfake technology to defraud a company of HK$200 million (approximately US$25.6 million). As reported in news sources, deepfake technology was used to simulate a multi-person video conference in which every participant except the victim was fabricated. The scammers were able to use publicly available video footage to create the deepfakes. Of particular concern is that an employee in the finance department of the unnamed company received what the employee believed to be a phishing message. Likely following company protocol, the employee attempted to verify the requested monetary transaction before completing it. Unfortunately, the employee "verified" 15 transfers with scammers who were using deepfake technology to impersonate the chief financial officer and other employees.

Sometimes picking up the phone and calling a known number is a sufficient check to confirm wire instructions or a payment request. However, as voice cloning and SIM swaps continue to proliferate and deepfake video technology can now convincingly impersonate multiple people simultaneously on a video call, businesses need to remain cautious about both internal and external checks. Fortunately, every time technology is used for evil, the forces of good find ways to combat fraudsters. But it is a continual cycle, which means that businesses need to be prepared.

Miller Nash’s privacy & data security team can assist with reviewing current policies and procedures and can help respond to data security incidents and fraudulent monetary transfers. Give us a call if you have questions about artificial intelligence and protecting your business.

This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.
