When Seeing Isn't Enough To Believe

from STRATFOR
In 1896, cinema pioneers Auguste and Louis Lumière debuted a short film showing a train pulling into a station. Audience members reportedly fled the Paris theater in terror, afraid they would be run down. Though the train on screen could do them no harm, of course, the experience - and the danger - nevertheless felt real. Today, filmgoers are a savvier lot, well-versed in the many feats that camera tricks and postproduction can accomplish. But a certain threat still lurks in the deceptions of the cinema - only now the dangers are often more real than the objects and actors depicted on screen.
On Dec. 11, Vice's Motherboard ran a story discussing how advances in artificial intelligence have made it possible to digitally superimpose celebrities' faces onto other actors' bodies with an algorithm. The article focused on the technology's use for making fake celebrity porn videos, but the trick has come in handy for mainstream movies as well: Think Carrie Fisher and Peter Cushing in a recent "Star Wars" installment, 2016's "Rogue One." With this innovation, the possibilities in the mainstream and adult film industries are practically endless, especially when it is combined with programs such as Adobe Voco or Lyrebird, which can build a vocal library from a recorded sample of someone's voice. Using these tools in tandem, filmmakers of all kinds can make any person, or at least a convincing digital replica, say and do whatever they want for whatever end - be it entertainment or something more insidious.
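For the technically curious, the face-swap systems the Motherboard story describes generally rest on a simple autoencoder trick: a single shared encoder learns a compact representation of any face, while a separate decoder is trained for each identity, so routing one person's encoded face through the other person's decoder renders the swap. The sketch below, in PyTorch, uses toy layer sizes and random tensors standing in for aligned face crops - every name and dimension here is an illustrative assumption, not the code of any particular tool:

```python
import torch
import torch.nn as nn

# Shared-encoder / per-identity-decoder autoencoder: the classic
# "deepfake" face-swap layout. The encoder learns identity-agnostic
# face structure; each decoder learns to render one person's face.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's face crops
decoder_b = Decoder()  # trained only on person B's face crops

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random tensors stand in for aligned 64x64 face crops of each person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # a real run trains for many thousands of steps
    opt.zero_grad()
    # During training, each identity is reconstructed through its OWN decoder.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# At inference time, routing A's latent code through B's decoder renders
# B's face with A's pose and expression: the "swap."
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The shared encoder is the crucial design choice: because both identities must pass through the same bottleneck, it tends to capture pose, lighting and expression rather than identity, which is exactly what makes the cross-decoding step produce a convincing swap.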
Taking Spoofing to the Next Level
Fraudsters have long been on the cutting edge of technology and have eagerly latched on to emerging tools in pursuit of ill-gotten gains. Early on, they harnessed the power to "spoof" email - to forge the sender address so that a message appears to come from someone else - to wreak havoc on the internet. Scammers and hackers have used spoofing to spread spam, launch phishing campaigns and conduct more sophisticated spear-phishing attacks as well. By spoofing an account, for example, a cybercriminal could send fraudulent emails appearing to come from a CEO, CFO or company president, ordering an employee to wire money to, say, a new and highly confidential business partner. Scammers and hackers have also taken advantage of caller ID spoofing to fool prospective marks, not only for telemarketing purposes but also for virtual kidnappings and "vishing," or voice phishing, attacks.
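The reason email spoofing works at all is structural: SMTP, the protocol that carries mail, does not verify that the From header matches the account actually sending the message - any client can write whatever it likes there. A minimal Python illustration follows; the addresses are hypothetical and the message is only printed, never sent, and countermeasures such as SPF, DKIM and DMARC exist precisely because receiving servers cannot trust this header:

```python
from email.message import EmailMessage

# SMTP does not authenticate the From header: the sending client sets
# it freely, and only the receiving server's SPF/DKIM/DMARC checks on
# the claimed domain can flag the forgery.
msg = EmailMessage()
msg["From"] = "ceo@victim-corp.example"       # forged: any string is accepted
msg["To"] = "controller@victim-corp.example"
msg["Subject"] = "Urgent - confidential wire transfer"
msg.set_content(
    "Please process the attached payment instructions today.\n"
    "This concerns the new partnership; keep it strictly confidential."
)

# Printed rather than sent; a real sender would hand the message to an
# SMTP relay, which would pass the forged header along unchallenged.
print(msg)
```

This is also why the CEO-fraud emails described above so often succeed: nothing in the message itself betrays the forgery, so detection falls entirely to the recipient's mail server and the recipient's own suspicion.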
Imagine what a fraudster could do with advancing AI technology. With a spoofed telephone number and a sample of an executive's voice, a criminal could leave a convincing voicemail instructing an employee to look out for a (spoofed) email containing an important attachment or confidential instructions for an urgent funds transfer. Hearing an authentic-sounding message from the boss may help overcome the employee's suspicions and persuade him or her to heed the email's instructions or open the malicious attachment.
The criminal applications of the new AI technology don't stop at CEO fraud, though. Imagine a case where a high-net-worth individual is out of the country on a well-publicized trip. A scammer could use the same spoofing tactics to call the person's estate manager or personal assistant and leave a voicemail with instructions to, say, let the new alarm technician - a paparazzo in disguise - into the house for maintenance or to give the "service tech" the keys to the Ferrari. The technique could come in handy for a wide array of criminal plots, from plain old theft to murder; a virtual kidnapper could send authentic-looking and -sounding ransom videos without even abducting a victim.
Artifice in Intelligence
Though the technology's implications for crime are unsettling, perhaps the broadest potential for these new AI tools lies in the realm of intelligence. Russian intelligence agencies have relied on forgery to solicit information since at least the early 20th century. The Soviets took the tactic, along with other active measures such as blackmail, to new heights during the Cold War, forging a range of items including government documents, personal letters and journals. Soviet intelligence agencies such as the KGB and the GRU also worked to disseminate propaganda and disinformation, cultivating ties with journalists around the world to help peddle their false stories to a global audience - sometimes to devastating effect. Claims that the United States coordinated the seizure of the Grand Mosque of Mecca in 1979, for instance, led to widespread anti-American protests and prompted the looting and burning of the U.S. Embassy in Islamabad, Pakistan. Similarly, rumors that Americans were traveling to Central America in the 1980s to harvest organs from children inspired attacks against U.S. tourists in Guatemala. Long before "fake news" became a widely used term, the Soviets had mastered the medium.
Today, propagating falsehoods is easier than ever. Whereas Soviet intelligence agencies depended on sympathetic or otherwise cooperative journalists and media outlets to push their disinformation to the masses, their Russian counterparts can now do that themselves thanks to the internet. The Russian government has a staggering presence on social media, including a vast number of bot accounts that distribute disinformation aimed at influencing elections in Europe and the United States. Western intelligence agencies and nongovernmental organizations alike have documented the Kremlin's online exploits, from hacking to the posting of politically charged memes and inflammatory messages. In the run-up to the U.S. presidential vote in 2016, Moscow used these techniques to rile voters on both the left and the right.
A World of Opportunities for Chaos
Now add to this repertoire the new audio and video tools coming into use. These AI programs could revolutionize disinformation, enabling Russian intelligence operatives to sow seeds of chaos like never before. Looking for a compromising video of President Donald Trump that matches the description of the one in the Steele dossier? The tools could gin one up in a few hours. Need an audio recording of a purported cellphone intercept in which a government minister asks an American CIA station chief for a bribe? Coming right up! Even if the final products weren't flawless, and even if authorities eventually debunked them as counterfeits, a significant proportion of the population would still believe them to be authentic. Popular mistrust of government agencies and the media would lead some viewers to accept the specious recordings as valid, especially if they reinforced the audience's preconceived opinions and biases. Some people, after all, still insist that a viral video clip showing a chimpanzee shooting at West African soldiers - a marketing stunt to promote the 2011 movie "Rise of the Planet of the Apes" - depicts a real incident.
Of course, Russia's intelligence agencies aren't the only ones that will explore and exploit the possibilities the new AI video and audio programs have to offer. Intelligence outfits around the world have a long tradition of hiring illusionists to help teach their operatives how to use misdirection and other chicanery to fool their observers. The certainty of "I saw it with my own eyes" is difficult to shake, even if you know trickery may have been involved in what you witnessed firsthand. As a result, when spoofed video and audio recordings of politicians and other influential figures start emerging - and they will - they will cause just the kind of confusion their creators crave. The new techniques will make the methods used to influence the elections of 2016 and 2017 look trivial by comparison.