Your private dwelling. Your own little oasis. Your sanctuary.
Researchers can now send secret audio instructions undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant.
BERKELEY, Calif. — Many people have grown accustomed to talking to their smart devices, asking them to read a text, play a song or set an alarm. But someone else might be secretly talking to them, too.
So for two years now... researchers have demonstrated they too can send "hidden commands" that are UNDETECTABLE to the human ear directly to your wifi-connected creepy home spy system.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant.
Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites.
A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.
This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.
“We wanted to see if we could make it even more stealthy,” said Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors.
[Read more on what Alexa can hear when brought into your home]
Yup, it's already happening
These deceptions illustrate how artificial intelligence — even as it is making great strides — can still be tricked and manipulated.
Computers can be fooled into identifying an airplane as a cat just by changing a few pixels of a digital image, while researchers can make a self-driving car swerve or speed up simply by pasting small stickers on road signs and confusing the vehicle’s computer vision system.

With audio attacks, the researchers are exploiting the gap between human and machine speech recognition. Speech recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases. By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.
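The principle described above can be illustrated with a deliberately simplified sketch. This is not the researchers' actual technique (which optimizes perturbations against real neural speech models); the template "recognizer," the command names, and every function here are invented for illustration. The idea is the same in miniature: nudge a signal with small changes until the machine's transcription flips, while the overall signal stays close to the original.

```python
# Toy model of an adversarial audio perturbation. A trivial "recognizer"
# classifies a signal by its nearest command template; we add small steps
# toward a target template until the recognized command flips, while the
# per-sample change stays small relative to the signal.

TEMPLATES = {
    "play music": [1.0, 0.0, 1.0, 0.0],
    "open website": [0.9, 0.2, 0.8, 0.3],
}

def recognize(signal):
    # Classify by smallest squared distance to a known command template.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda cmd: dist(signal, TEMPLATES[cmd]))

def adversarial_perturbation(signal, target, step=0.01, max_iters=200):
    # Nudge the signal toward the target template in tiny increments,
    # stopping as soon as the recognizer's output flips.
    goal = TEMPLATES[target]
    perturbed = list(signal)
    for _ in range(max_iters):
        if recognize(perturbed) == target:
            break
        perturbed = [p + step * (g - p) for p, g in zip(perturbed, goal)]
    return perturbed

original = [1.0, 0.0, 1.0, 0.0]   # recognized as "play music"
attacked = adversarial_perturbation(original, "open website")
```

Real attacks work against far richer models, but the asymmetry is the same: a change too small for a listener to notice can be exactly the change that retargets the machine's transcription.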
Yet many people leave their smartphones unlocked, and, at least for now, voice recognition systems are notoriously easy to fool.

There is already a history of smart devices being exploited for commercial gains through spoken commands.
Last year, Burger King caused a stir with an online ad that purposely asked, “O.K., Google, what is the Whopper burger?” Android devices with voice-enabled search would respond by reading from the Whopper’s Wikipedia page. The ad was canceled after viewers started editing the Wikipedia page to comic effect.
A few months later, the animated series South Park followed up with an entire episode built around voice commands that caused viewers’ voice-recognition assistants to parrot adolescent obscenities.
There is no American law against broadcasting subliminal messages to humans, let alone machines. The Federal Communications Commission discourages the practice as “counter to the public interest,” and the Television Code of the National Association of Broadcasters bans “transmitting messages below the threshold of normal awareness.” Neither says anything about subliminal stimuli for smart devices.
"There is no American law against broadcasting subliminal messages to humans, let alone machines"
Courts have ruled that subliminal messages may constitute an invasion of privacy, but the law has not extended the concept of privacy to machines.

Now the technology is racing even further ahead of the law. Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated by using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s responses, either.
It is perfectly fine, then, to target both you and an AI device that is privy to your life, bank, security and well-being with subliminals...
Of course, read the rest, at the opening link.
From earlier today:
Ironically, the creepy AI taking commands from elsewhere walks hand in hand with the "indoor generation" and its problematic existence.