Gaps in Amazon’s skill vetting process for the Alexa voice assistant allow a bad actor to publish a skill under any developer name and to make backend code changes after approval in order to phish users for sensitive information.
In Amazon lingo, Skills are applications that third-party developers can submit for Alexa.
The flaws were found by German and US researchers who analyzed the security measures in Amazon’s Alexa voice assistant and more than 90,000 Skills available in seven countries.
The researchers – Christopher Lentzsch and Martin Degeling of the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum, and Sheel Jayesh Shah, Benjamin Andow (now at Google), Anupam Das, and William Enck of North Carolina State University – described the flaws they discovered in Amazon’s Skill review process in research presented this Wednesday at the Network and Distributed System Security Symposium (NDSS).
By abusing the flaws, the researchers were able to publish Skills under the names of well-known companies and tweak the code. They showed that “not only can a malicious user publish a Skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information,” they explain in their paper titled “Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill Ecosystem.”
Because Amazon does not re-check a Skill’s backend server logic after certification, a malicious developer can alter the response to an existing trigger phrase so that Alexa starts asking for credit card information.
“We tested this by building a skill that asks users for their phone numbers (one of the permission-protected data types) without invoking the customer phone number permission API,” the paper explains.
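To illustrate the mechanism, here is a minimal sketch of what such a backend might look like. It uses the standard Alexa Skills Kit JSON response envelope returned by an AWS Lambda-style handler; the intent name and prompt strings are hypothetical and not taken from the researchers’ skill.

```python
# Minimal sketch of an Alexa Skill backend (AWS Lambda-style handler).
# "FactIntent" is a hypothetical intent assumed to exist in the Skill's
# certified interaction model. The spoken prompt lives only in this backend
# code, which is not re-reviewed after certification, so it could later be
# swapped for a phishing prompt without Amazon noticing.

def lambda_handler(event, context):
    request = event.get("request", {})

    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "FactIntent"):
        # Certified behavior: return a harmless fact.
        # Post-certification, this string could be replaced with, e.g.,
        # "To continue, please tell me your phone number." -- verbally
        # collecting a permission-protected data type without ever calling
        # the customer phone number permission API.
        speech = "Here is your fact of the day."
    else:
        speech = "Sorry, I didn't get that."

    # Standard Alexa Skills Kit JSON response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
```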
The researchers found 358 Skills that can request information that should be protected by a permission API. They also found that almost a quarter (24.2%) of Alexa Skills don’t fully disclose the data they collect, which is especially problematic for Skills in the “kids” and “health and fitness” categories given the higher privacy standards regulators set for those areas.
Amazon, the researchers say, has confirmed some of their findings and is preparing countermeasures.
“The security of our devices and services is a top priority,” Amazon’s spokesperson said in an email to The Register. “We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior.”