Dangerous skills: Understanding and mitigating security risks of voice-controlled third-party functions on virtual personal assistant systems

Nan Zhang, Xianghang Mi, Xuan Feng, Xiaofeng Wang, Yuan Tian, Feng Qian

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

125 Scopus citations

Abstract

Virtual personal assistants (VPA) (e.g., Amazon Alexa and Google Assistant) today mostly rely on the voice channel to communicate with their users, a channel known to be vulnerable because it lacks proper authentication (from the user to the VPA). A new authentication challenge, from the VPA service to the user, has emerged with the rapid growth of the VPA ecosystem, which allows a third party to publish a function (called a skill) for the service and can therefore be exploited to spread malicious skills to a large audience during their interactions with smart speakers like Amazon Echo and Google Home. In this paper, we report a study showing that such remote, large-scale attacks are indeed realistic. We discovered two new attacks: voice squatting, in which the adversary exploits the way a skill is invoked (e.g., "open capital one") by registering a malicious skill with a similarly pronounced name (e.g., "capital won") or a paraphrased name (e.g., "capital one please") to hijack the voice command meant for a legitimate skill (e.g., "capital one"); and voice masquerading, in which a malicious skill impersonates the VPA service or a legitimate skill during the user's conversation with the service to steal her personal information. These attacks target the way VPAs work and users' misconceptions about their functionality, and our experiments (including user studies and real-world deployments) on Amazon Echo and Google Home show that they pose a realistic threat. The significance of our findings has already been acknowledged by Amazon and Google, and is further evidenced by the risky skills our new squatting detector found on the Alexa and Google skill markets. We further developed a technique that automatically detects an ongoing masquerading attack and demonstrated its efficacy.
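The squatting attacks described in the abstract hinge on how close two invocation names sound, or on one name merely paraphrasing another with filler words. The sketch below is not the paper's detector; it is a minimal, hypothetical Python illustration of the idea: flag a candidate invocation name as suspicious when it reduces to a protected name after stripping filler words, or when it is textually very similar to it. The filler-word list and the 0.85 threshold are assumptions chosen for illustration.

```python
import difflib
import re

# Hypothetical filler words an attacker might add to paraphrase a
# legitimate invocation name (e.g., "capital one please").
FILLER_WORDS = {"please", "app", "skill", "my", "the", "open"}

def normalize(name: str) -> str:
    """Lowercase, drop punctuation, and collapse to plain tokens."""
    return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()

def strip_fillers(name: str) -> str:
    """Remove filler words so 'capital one please' reduces to 'capital one'."""
    tokens = [t for t in normalize(name).split() if t not in FILLER_WORDS]
    return " ".join(tokens)

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; a rough stand-in for a
    phonetic comparison of the two names."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def is_squatting_candidate(candidate: str, protected: str,
                           threshold: float = 0.85) -> bool:
    """Flag the candidate if it paraphrases or closely resembles the
    protected invocation name."""
    if strip_fillers(candidate) == strip_fillers(protected):
        return True  # paraphrase squatting ("capital one please")
    return similarity(candidate, protected) >= threshold  # similar-sounding name

if __name__ == "__main__":
    protected_name = "capital one"
    for candidate in ["capital won", "capital one please", "weather report"]:
        print(candidate, "->", is_squatting_candidate(candidate, protected_name))
```

The character-level comparison is only a stand-in: pairs like "capital one" and "capital won" are close in spelling here, but a more faithful check would compare pronunciations (phoneme sequences), since squatting names are chosen to sound alike rather than to be spelled alike.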

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE Symposium on Security and Privacy, SP 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1381-1396
Number of pages: 16
ISBN (Electronic): 9781538666609
DOIs
State: Published - May 2019
Externally published: Yes
Event: 40th IEEE Symposium on Security and Privacy, SP 2019 - San Francisco, United States
Duration: May 19, 2019 - May 23, 2019

Publication series

Name: Proceedings - IEEE Symposium on Security and Privacy
Volume: 2019-May
ISSN (Print): 1081-6011

Conference

Conference: 40th IEEE Symposium on Security and Privacy, SP 2019
Country/Territory: United States
City: San Francisco
Period: 5/19/19 - 5/23/19

Bibliographical note

Funding Information:
We thank our shepherd Franziska Roesner and anonymous reviewers for their comments and help in preparing the final version of the paper. This project is supported in part by the NSF 1408874, 1527141, 1618493, 1618898, 1801432 and ARO W911NF1610127.

Publisher Copyright:
© 2019 IEEE.

Keywords

  • Attack and defense
  • IoT security
  • Security and privacy
  • Voice assistant security
