Deepfakes and voice as the next data breach #Cybersecurity - The Entrepreneurial Way with A.I.


Monday, October 21, 2019


#HackerNews

Deepfake technology, which uses deep learning to create fake or altered video and audio content, continues to pose a major threat to businesses, consumers, and society as a whole.

In the lead up to the 2020 U.S. presidential election, government officials have expressed concerns about potential deepfake attacks to spread misinformation, and evidence suggests that while this technology is advancing rapidly, governments and tech companies are still ill-prepared to detect and combat it.

Deepfakes caught in the wild

We’ve seen how quickly deepfake videos can catch on, with tools like social media allowing them to spread like wildfire. Recent examples have included an altered video of House Speaker Nancy Pelosi slurring her words, as well as footage of Facebook’s Mark Zuckerberg giving a speech on the power of big data, actor Bill Hader doing an impression of Tom Cruise, and actress Jennifer Lawrence giving a speech with Steve Buscemi’s face.

Not all of these deepfake videos had malicious intent, but they show how prevalent and mainstream deepfakes are becoming, and how easily bad actors could leverage the technology to perpetrate crimes.

Prominent figures like these are easy to target because they have so much public content available online that can be repurposed for deepfakes, but as this technology continues to advance, it won’t be long before criminals have the tools to expand their targets beyond world leaders and celebrities. Cybersecurity companies have already seen successful deepfake audio attacks on businesses, suggesting the next big target for deepfake attacks could really be anyone.

As Senator Marco Rubio put it, “In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long range missiles… and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.” He’s not wrong: all it takes is a YouTube video, or a voice sample recorded during a phone call, for an attacker to alter.

Deepfakes could open deep pockets

Incentivized by large potential gains, today’s fraudsters relentlessly invest time in gathering intelligence on intended victims and studying the paper trail of their target firm before initiating their attacks. It’s not only the number of attempted attacks that is climbing; the losses are climbing too.

Voice fraud can be costly to businesses and consumers alike, with phone-based ID theft costing U.S. consumers approximately $12 billion every year. With continued advances in voice and artificial intelligence technology, we expect these attacks will only continue to increase and improve.

Voice is the next data breach

As cybercriminals continue to evolve their tactics and identify additional channels to target, I anticipate that voice will be the next major data breach. Companies need to build defenses against the technology before it gets too unwieldy to contain. In fact, over the past five years, Pindrop has seen a massive uptick in synthetic audio attacks.

Understanding the real dangers of synthetic audio, we set out to better understand and detect these attacks. We developed our own audio deepfakes using the voices of popular world leaders and celebrities that are regularly in the media (e.g. President Barack Obama and Ellen DeGeneres).

We found that, while synthetic audio can sound normal to the human ear, it cannot yet reproduce the natural speed and frequency patterns of human speech. With that Achilles heel identified, we learned that AI and voice biometric technologies can analyze audio to successfully differentiate real from synthetic speech.
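The article does not describe the detector itself, but the general idea of flagging audio whose spectral statistics fall outside the range of natural speech can be illustrated with a toy example. The sketch below (purely illustrative; the signals, the spectral-flatness feature, and the comparison are assumptions, not Pindrop's method) shows how a simple frequency-domain statistic separates an unnaturally "clean" tone from a signal with natural variation:

```python
import numpy as np

def spectral_flatness(signal, eps=1e-12):
    """Geometric mean over arithmetic mean of the power spectrum.
    Values near 1 mean noise-like (flat) audio; values near 0 mean
    strongly tonal audio with energy packed into a few frequencies."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 + eps
    return np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)

# Toy stand-ins: an unnaturally pure tone vs. the same tone with
# added noise mimicking the natural variation of recorded speech.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
tonal = np.sin(2 * np.pi * 220 * t)              # suspiciously pure
noisy = tonal + 0.5 * rng.standard_normal(8000)  # natural variation

score_tonal = spectral_flatness(tonal)
score_noisy = spectral_flatness(noisy)
print(f"pure tone: {score_tonal:.4f}, noisy speech-like: {score_noisy:.4f}")
```

A real system would extract many such features over short frames and feed them to a trained classifier; the point here is only that measurable spectral properties differ between natural and overly regular audio.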

Derailing deepfakes

Tools to detect and combat deepfakes and synthetic audio are available to businesses, but unfortunately there is not yet anything to prevent these attacks from happening in the first place. Until more progress is made, consumers should think twice before they share potentially fake content on social media, just as businesses should before responding to suspicious requests from customers.

If something seems off, do your own research to verify sources before accepting the content as fact. To keep up with increasingly sophisticated bad actors, it will be important for businesses and consumers alike to be vigilant to protect themselves from misinformation and fraud spread by deepfake technology and synthetic audio attacks.

Security

via https://www.aiupnow.com

Help Net Security, Khareem Sudlow