Security researchers designed apps to spy on users of Amazon Echo and Google Home smart speakers, uncovering vulnerabilities that they say could be exploited by other app developers, according to BBC News.
Berlin-based Security Research Labs (SRL) designed eight apps, billed as offering horoscopes and random number generators. Once the apps were approved by Amazon and Google, SRL modified them to record and collect audio data, even after users had tried to turn the app off and had received a “goodbye” message from the device. The software kept running for a short time afterward, and if certain phrases were heard, a transcript was sent to SRL.
The research reveals that app developers, not just manufacturers like Amazon and Google, can listen in on users without their knowledge. The findings also suggest that such eavesdropping features could be added through updates after an app has already been approved. Google and Amazon review software when it’s first submitted, but later updates haven’t been subject to the same scrutiny, according to a report from Liliputing.
Once they had exposed the vulnerabilities, SRL notified Amazon and Google, which have since blocked the apps. Google also said it’s “putting additional mechanisms in place to prevent these issues from occurring in the future,” in a statement released to Ars Technica. Amazon says it has stopped apps from recording transcripts after users give the speaker the “stop” command.
“Smart spies undermine the assumption that voice apps are only active as long as they are in dialogue with the user,” SRL chief scientist Karsten Nohl told BBC News.
Nohl said that the light on the smart speakers remained on while the apps were picking up audio, and recommended that users watch these lights closely to tell whether the device is listening. In another variation, apps asked users to state their passwords, a further red flag, according to Nohl. Amazon says it has now prevented apps from asking for user passwords.
Fabian Bräunlein, a senior security consultant at SRL, told Ars Technica:
“It was always clear that those voice assistants have privacy implications—with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes. We now show that, not only the manufacturers, but… also hackers can abuse those voice assistants to intrude on someone’s privacy.”
Photo by Asivechowdhury [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)]