This submission includes the artifacts of our paper, \emph{Scrutinizing Privacy Policy Compliance of Virtual Personal Assistant Apps}.
Research Papers
Wed 12 Oct 2022 11:00 - 11:20 at Ballroom C East - Technical Session 9 - Security and Privacy. Chair(s): Wei Yang

Various virtual personal assistant (VPA) services, e.g., Amazon Alexa and Google Assistant, have become increasingly popular in recent years. This can be partly attributed to the flourishing ecosystems centered around them: third-party developers can create VPA applications (\emph{VPA apps} for short), e.g., Amazon Alexa skills and Google Assistant Actions, which are then released to app stores and become easily accessible to end users through their smart devices.
Like their mobile counterparts, VPA apps are accompanied by a privacy policy document that informs users of the apps' data collection, use, retention, and sharing practices. Privacy policies are legal documents that are usually lengthy and complex, making them difficult for users to comprehend. Developers may exploit this situation by intentionally or unintentionally failing to comply with their own policies.
In this work, we conduct the first systematic study of the privacy policy compliance of VPA apps. We develop \emph{Skipper}, which targets the VPA apps (i.e., \emph{skills}) of Amazon Alexa, the most popular VPA service. \emph{Skipper} automatically derives a skill's \emph{declared privacy profile} by analyzing its privacy policy document with natural language processing (NLP) and machine learning techniques. It then conducts black-box testing to generate the skill's \emph{behavioral privacy profile} and checks the consistency between the two profiles. We conduct a large-scale audit of all 61,505 skills available in the Amazon Alexa store, and \emph{Skipper} finds that the vast majority of skills suffer from privacy policy noncompliance. Our work reveals the \emph{status quo} of privacy policy compliance in contemporary VPA apps. We expect our findings to alert app developers and users, and to encourage VPA app store operators to put regulations on privacy policy compliance in place.
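The final step of this pipeline, comparing the declared profile against the behavioral one, can be illustrated with a minimal sketch. The representation of a profile as a set of data-type labels and all names below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of a declared-vs-behavioral consistency check.
# A profile is modeled as a set of data-type labels; real profiles in
# the paper are richer, so this is only a conceptual illustration.

def check_compliance(declared: set, behavioral: set) -> set:
    """Return the data types the skill collects but never declares.

    An empty result means the behavioral profile is consistent with
    the declared one; any leftover labels indicate noncompliance.
    """
    return behavioral - declared


# Example: the policy declares "name" and "email", but testing reveals
# that the skill also requests "location".
declared_profile = {"name", "email"}
behavioral_profile = {"name", "email", "location"}

undeclared = check_compliance(declared_profile, behavioral_profile)
if undeclared:
    print(f"Noncompliant: undeclared collection of {sorted(undeclared)}")
```

Running this sketch flags `location` as an undeclared collection, which is the kind of inconsistency the audit reports at scale.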
Research Papers
Wed 12 Oct 2022 11:10 - 11:30 at Banquet A - Technical Session 10 - Testing I. Chair(s): Gordon Fraser

Virtual personal assistant (VPA) services, e.g., Amazon Alexa and Google Assistant, have become increasingly popular in recent years. Users interact with them through voice-based apps, e.g., Amazon Alexa skills and Google Assistant actions. Unlike desktop and mobile apps, which have a visible and intuitive graphical user interface (GUI) to facilitate interaction, VPA apps convey information purely verbally through the voice user interface (VUI), which is limited by its invisibility, single interaction mode, and high demand on user attention. This may lead to various problems in the usability and correctness of VPA apps.
In this work, we propose a model-based framework named Vitas for VUI testing of VPA apps. Vitas interacts with the app's VUI and, during the testing process, retrieves semantic information from voice feedback via natural language processing. It incrementally constructs a finite state machine (FSM) model of the app, using a weighted exploration strategy guided by key factors such as the coverage of app functionality. We conduct large-scale testing on 41,581 VPA apps (i.e., skills) of Amazon Alexa, the most popular VPA service, and find that 51.29% of them have weaknesses, largely suffering from problems such as unexpected exits/starts and privacy violations. Our work reveals the immaturity of the VUI designs and implementations in VPA apps, and sheds light on the improvement of several crucial aspects of VPA apps.
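The incremental FSM construction with weighted exploration can be sketched as follows. The class, its data layout, and the weighting rule (states with more untried utterances get proportionally higher weight) are simplifying assumptions for illustration, not the framework's actual code:

```python
import random


class FSMExplorer:
    """Sketch of incremental FSM construction with weighted exploration.

    The FSM maps state -> {utterance: next_state}. When choosing what to
    try next, states with more unexplored utterances are weighted higher,
    loosely mirroring a functionality-coverage-guided strategy.
    """

    def __init__(self, seed: int = 0):
        self.fsm = {}          # state -> {utterance: next_state}
        self.unexplored = {}   # state -> set of utterances not yet tried
        self.rng = random.Random(seed)

    def add_state(self, state, utterances):
        """Register a newly discovered state and its candidate utterances."""
        if state not in self.fsm:
            self.fsm[state] = {}
            self.unexplored[state] = set(utterances)

    def next_action(self):
        """Pick a (state, utterance) pair to try, or None when exhausted."""
        frontier = [(s, u) for s, u in self.unexplored.items() if u]
        if not frontier:
            return None
        states, pools = zip(*frontier)
        state = self.rng.choices(states, weights=[len(p) for p in pools])[0]
        utterance = self.unexplored[state].pop()
        return state, utterance

    def record(self, state, utterance, next_state, next_utterances=()):
        """Record an observed transition and extend the model incrementally."""
        self.fsm[state][utterance] = next_state
        self.add_state(next_state, next_utterances)
```

A typical loop would call `next_action()`, send the utterance to the app under test, parse the voice feedback to identify the resulting state and its follow-up utterances, then feed both back through `record()`.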