Application phishing attacks are rooted in users' inability to distinguish legitimate applications from malicious ones. Previous work has shown that personalized security indicators can help users detect application phishing attacks on mobile platforms. A personalized security indicator is a visual secret shared between the user and a security-sensitive application (e.g., mobile banking). The user sets up the indicator when the application is started for the first time. Later on, the application displays the indicator to authenticate itself to the user. Despite their potential, no previous work has addressed the problem of how to securely set up a personalized security indicator -- a procedure that can itself be the target of phishing attacks. In this paper, we propose a setup scheme for personalized security indicators. Our solution allows a user to identify the legitimate application at the time she sets up the indicator, even in the presence of malicious applications. We implement and evaluate a prototype of the proposed solution for the Android platform. We also provide the results of a small-scale user study aimed at evaluating the usability and security of our solution.
{"title":"Hardened Setup of Personalized Security Indicators to Counter Phishing Attacks in Mobile Banking","authors":"Claudio Marforio, Ramya Jayaram Masti, Claudio Soriente, Kari Kostiainen, Srdjan Capkun","doi":"10.1145/2994459.2994462","DOIUrl":"https://doi.org/10.1145/2994459.2994462","url":null,"abstract":"Application phishing attacks are rooted in users inability to distinguish legitimate applications from malicious ones. Previous work has shown that personalized security indicators can help users in detecting application phishing attacks in mobile platforms. A personalized security indicator is a visual secret, shared between the user and a security-sensitive application (e.g., mobile banking). The user sets up the indicator when the application is started for the first time. Later on, the application displays the indicator to authenticate itself to the user. Despite their potential, no previous work has addressed the problem of how to securely setup a personalized security indicator -- a procedure that can itself be the target of phishing attacks. In this paper, we propose a setup scheme for personalized security indicators. Our solution allows a user to identify the legitimate application at the time she sets up the indicator, even in the presence of malicious applications. We implement and evaluate a prototype of the proposed solution for the Android platform. We also provide the results of a small-scale user study aimed at evaluating the usability and security of our solution.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124575668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With their high penetration rate and relatively good clock accuracy, smartphones are replacing watches in several market segments. Modern smartphones have more than one clock source to complement each other: NITZ (Network Identity and Time Zone), NTP (Network Time Protocol), and GNSS (Global Navigation Satellite System), including GPS. NITZ information is delivered by the cellular core network and carries the network name and clock information. NTP provides a facility to synchronize the clock with a time server. Among these clock sources, only NITZ and NTP are updated without user interaction, as location services require manual activation. In this paper, we analyze security aspects of these clock sources and their impact on the security features of modern smartphones. In particular, we investigate NITZ and NTP procedures over cellular networks (2G, 3G, and 4G) and Wi-Fi, respectively. Furthermore, we analyze several European, Asian, and American cellular networks from a NITZ perspective. We identify three classes of vulnerabilities: specification issues in a cellular protocol, configuration issues in cellular network deployments, and implementation issues in different mobile OSes. We demonstrate how an attacker with a low-cost setup can spoof NITZ and NTP messages to mount denial-of-service attacks. Finally, we propose methods for securely synchronizing the clock on smartphones.
{"title":"White Rabbit in Mobile: Effect of Unsecured Clock Source in Smartphones","authors":"Shinjo Park, Altaf Shaik, Ravishankar Borgaonkar, Jean-Pierre Seifert","doi":"10.1145/2994459.2994465","DOIUrl":"https://doi.org/10.1145/2994459.2994465","url":null,"abstract":"With its high penetration rate and relatively good clock accuracy, smartphones are replacing watches in several market segments. Modern smartphones have more than one clock source to complement each other: NITZ (Network Identity and Time Zone), NTP (Network Time Protocol), and GNSS (Global Navigation Satellite System) including GPS. NITZ information is delivered by the cellular core network, indicating the network name and clock information. NTP provides a facility to synchronize the clock with a time server. Among these clock sources, only NITZ and NTP are updated without user interaction, as location services require manual activation. In this paper, we analyze security aspects of these clock sources and their impact on security features of modern smartphones. In particular, we investigate NITZ and NTP procedures over cellular networks (2G, 3G and 4G) and Wi-Fi communication respectively. Furthermore, we analyze several European, Asian, and American cellular networks from NITZ perspective. We identify three classes of vulnerabilities: specification issues in a cellular protocol, configurational issues in cellular network deployments, and implementation issues in different mobile OS's. We demonstrate how an attacker with low cost setup can spoof NITZ and NTP messages to cause Denial of Service attacks. Finally, we propose methods for securely synchronizing the clock on smartphones.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127204747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Telegram is a popular messaging app that supports end-to-end encrypted communication. In spring 2015, we performed an audit of Telegram's Android source code. This short paper summarizes our findings. Our main discovery is that the symmetric encryption scheme used in Telegram -- known as MTProto -- is not IND-CCA secure, since it is possible to turn any ciphertext into a different ciphertext that decrypts to the same message. We stress that this is a theoretical attack on the definition of security, and we do not see any way of turning the attack into a full plaintext-recovery attack. At the same time, we see no reason why one should use a less secure encryption scheme when more secure (and at least as efficient) solutions exist. The take-home message (once again) is that well-studied, provably secure encryption schemes that achieve strong definitions of security (e.g., authenticated encryption) are to be preferred to home-brewed encryption schemes.
{"title":"On the CCA (in)Security of MTProto","authors":"J. Jakobsen, Claudio Orlandi","doi":"10.1145/2994459.2994468","DOIUrl":"https://doi.org/10.1145/2994459.2994468","url":null,"abstract":"Telegram is a popular messaging app which supports end-to-end encrypted communication. In Spring 2015 we performed an audit of Telegram's Android source code. This short paper summarizes our findings. Our main discovery is that the symmetric encryption scheme used in Telegram -- known as MTProto -- is not IND-CCA secure, since it is possible to turn any ciphertext into a different ciphertext that decrypts to the same message. We stress that this is a theoretical attack on the definition of security and we do not see any way of turning the attack into a full plaintext-recovery attack. At the same time, we see no reason why one should use a less secure encryption scheme when more secure (and at least as efficient) solutions exist. The take-home message (once again) is that well-studied, provably secure encryption schemes that achieve strong definitions of security (e.g., authenticated-encryption) are to be preferred to home-brewed encryption schemes.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133770717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we study a new type of fraudulent activity on the Android platform: usage fraud. Unlike previous mobile fraud that targets advertisers or mobile users, usage fraud was invented to boost usage statistics on third-party analytics platforms such as Google Analytics in order to cheat investors. To understand the business model and infrastructure employed by the fraudsters, we infiltrated two underground services, Laicaimao and Anzhibao. We gained a number of insights in the process, including their use of emulators and manipulation of user identifiers. In addition, we evaluated the efficacy of the existing fraud services and the state of defenses on 8 popular analytics platforms. Our results indicate that the fraud services are indeed capable of crafting valid-looking usage numbers and that basic checks are missing on the analytics platforms. We conclude with several recommendations and call for contributions from the community to fight this new type of fraud.
{"title":"What You See Isn't Always What You Get: A Measurement Study of Usage Fraud on Android Apps","authors":"W. Liu, Yueqian Zhang, Zhou Li, Haixin Duan","doi":"10.1145/2994459.2994472","DOIUrl":"https://doi.org/10.1145/2994459.2994472","url":null,"abstract":"We studied a new type of fraudulent activities, usage fraud, on Android platform in this paper. Different from previous fraud on mobile platforms targeting advertisers or mobile users, usage fraud was invented to boost usage statistics on third-party analytics platforms like Google Analytics to cheat investors. To understand the business model and infrastructures employed by the fraudsters, we infiltrated two underground services, Laicaimao and Anzhibao. A number of insights have been gained during this course, including the use of emulators and manipulation of user identifiers. In addition, we evaluated the efficacy of the existing fraud services and the defense status quo on 8 popular analytics platforms. Our result indicates that the fraud services are indeed capable of crafting valid usage numbers and the basic checks are missed by analytics platforms. We give several recommendations in the end and call for the contribution from the community to fight against this new type of fraud.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124079822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Competition among app developers has caused app stores to be permeated with many groups of general-purpose apps that are functionally similar. Examples are the many flashlight or alarm clock apps to choose from. Within groups of functionally-similar apps, however, permission usage by individual apps sometimes varies widely. Although (run-time) permission warnings inform users of the sensitive access required by apps, many users continue to ignore these warnings due to conditioning or a lack of understanding. Thus, users may inadvertently expose themselves to additional privacy and security risks by installing a more permission-hungry app when there was a functionally-similar alternative that used fewer permissions. We study the variation in permission usage across 50,000 Google Play Store search results, obtained from 2,500 searches that each yield a group of 20 functionally-similar apps. Using fine-grained contextual analysis of permission usage within groups of apps, we identified over 3,400 (potentially) over-privileged apps, approximately 7% of the studied dataset. We implement our contextual permission analysis framework as a tool, called SecuRank, and release it to the general public in the form of an Android app and website. SecuRank allows users to audit their list of installed apps to determine whether any of them can be replaced with a functionally-similar alternative that requires less sensitive access to their device. By running SecuRank on the entire Google Play Store, we discovered that up to 50% of apps can be replaced with a preferable alternative, with free apps and very popular apps more likely to have such alternatives.
{"title":"SecuRank: Starving Permission-Hungry Apps Using Contextual Permission Analysis","authors":"Vincent F. Taylor, I. Martinovic","doi":"10.1145/2994459.2994474","DOIUrl":"https://doi.org/10.1145/2994459.2994474","url":null,"abstract":"Competition among app developers has caused app stores to be permeated with many groups of general-purpose apps that are functionally-similar. Examples are the many flashlight or alarm clock apps to choose from. Within groups of functionally-similar apps, however, permission usage by individual apps sometimes varies widely. Although (run-time) permission warnings inform users of the sensitive access required by apps, many users continue to ignore these warnings due to conditioning or a lack of understanding. Thus, users may inadvertently expose themselves to additional privacy and security risks by installing a more permission-hungry app when there was a functionally-similar alternative that used less permissions. We study the variation in permission usage across 50,000 Google Play Store search results for 2500 searches each yielding a group of 20 functionally-similar apps. Using fine-grained contextual analysis of permission usage within groups of apps, we identified over 3400 (potentially) over-privileged apps, approximately 7% of the studied dataset. We implement our contextual permission analysis framework as a tool, called SecuRank, and release it to the general public in the form of an Android app and website. SecuRank allows users to audit their list of installed apps to determine whether any of them can be replaced with a functionally-similar alternative that requires less sensitive access to their device. By running SecuRank on the entire Google Play Store, we discovered that up to 50% of apps can be replaced with preferable alternative, with free apps and very popular apps more likely to have such alternatives.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"385 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115488829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For more than a decade, Trusted Execution Environments (TEEs), found primarily in mobile phones and tablets, have been used to implement operator and third-party secure services like payment clients, electronic identities, rights management, and device-local attestation. For many years, ARM TrustZone A (TM) (TZA) primitives were more or less the only available hardware mechanism to build a TEE, but in recent years alternative hardware security solutions have emerged for the same general purpose --- some are more tailored to the upcoming IoT device market, whereas we now also have hardware that can potentially bring TEEs into the cloud infrastructure. In my talk I will introduce the contemporary TEE as it is being deployed in today's devices, but one focal point of the presentation is a functional comparison between the hardware support provided by TZA and the recently released and deployed Intel SGX (TM) and ARM TrustZone M (TM) architectures. Each solution has its relative strengths and drawbacks that reflect its main deployment purpose, and as a result, the software stack that completes the TEE environment has to adapt significantly to each individual hardware platform. The final part of the talk will present a few tests and research prototypes in which we have gone beyond the TEE as it is typically set up today -- e.g., exploring problems that emerge in a cloud environment with migrating workloads, as well as policy enforcement in IoT devices.
{"title":"Hardware Isolation for Trusted Execution","authors":"Jan-Erik Ekberg","doi":"10.1145/2994459.2994460","DOIUrl":"https://doi.org/10.1145/2994459.2994460","url":null,"abstract":"For more than a decade, Trusted Execution Environments (TEEs), found primarily in mobile phone and tablets, have been used to implement operator and third-party secure services like payment clients, electronic identities, rights management and device-local attestation. For many years, ARM TrustZone A (TM) (TZA) primitives were more or less the only available hardware mechanism to build a TEE, but in recent years alternative hardware security solutions have emerged for the same general purpose --- some are more tailored to the upcoming IoT device market whereas we also now have hardware that potentially can bring TEEs into the cloud infrastructure. In my talk I will introduce the contemporary TEE as is being deployed in today's devices, but one focal point of the presentation is on a functional comparison between the hardware support provided by TZA and the recently released and deployed Intel SGX(TM) and ARM TrustZone M (TM) architectures. Each solution has its relative strengths and drawbacks that reflects its main deployment purpose, and as a result, the software stack that completes the TEE environment will have to significantly adapt to each individual hardware platform. The final part of the talk will present a few conducted tests and research prototypes where we have gone beyond the TEE as it typically is set up today -- e.g. exploring problems emerging in a cloud environment with migrating workloads as well as policy enforcement in IoT devices.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115382755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cameras have become nearly ubiquitous with the rise of smartphones and laptops. New wearable devices, such as Google Glass, focus directly on using live video data to enable augmented reality and contextually enabled services. However, granting applications full access to video data exposes more information than is necessary for their functionality, introducing privacy risks. We propose a privilege-separation architecture for visual recognizer applications that encourages modularization and least privilege---separating the recognizer logic, sandboxing it to restrict filesystem and network access, and restricting what it can extract from the raw video data. We designed and implemented a prototype that separates the recognizer and application modules and evaluated our architecture on a set of 17 computer-vision applications. Our experiments show that our prototype incurs low overhead for each of these applications, reduces some of the privacy risks associated with these applications, and in some cases can actually increase performance due to increased parallelism and concurrency.
{"title":"Securing Recognizers for Rich Video Applications","authors":"Christopher Thompson, D. Wagner","doi":"10.1145/2994459.2994461","DOIUrl":"https://doi.org/10.1145/2994459.2994461","url":null,"abstract":"Cameras have become nearly ubiquitous with the rise of smartphones and laptops. New wearable devices, such as Google Glass, focus directly on using live video data to enable augmented reality and contextually enabled services. However, granting applications full access to video data exposes more information than is necessary for their functionality, introducing privacy risks. We propose a privilege-separation architecture for visual recognizer applications that encourages modularization and least privilege---separating the recognizer logic, sandboxing it to restrict filesystem and network access, and restricting what it can extract from the raw video data. We designed and implemented a prototype that separates the recognizer and application modules and evaluated our architecture on a set of 17 computer-vision applications. Our experiments show that our prototype incurs low overhead for each of these applications, reduces some of the privacy risks associated with these applications, and in some cases can actually increase the performance due to increased parallelism and concurrency.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115541359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work we present Picasso: a lightweight device class fingerprinting protocol that allows a server to verify the software and hardware stack of a mobile or desktop client. As an example, Picasso can distinguish traffic sent by an authentic iPhone running Safari on iOS from an emulator or desktop client spoofing the same configuration. Our fingerprinting scheme builds on unpredictable yet stable noise introduced by a client's browser, operating system, and graphical stack when rendering HTML5 canvases. Our algorithm is resistant to replay and includes a hardware-bound proof of work that forces a client to expend a configurable amount of CPU and memory to solve challenges. We demonstrate that Picasso can distinguish 52 million Android, iOS, Windows, and OS X clients running a diversity of browsers with 100% accuracy. We discuss applications of Picasso in abuse fighting, including protecting the Play Store and other mobile app marketplaces from inorganic interactions, and identifying login attempts to user accounts from previously unseen device classes.
{"title":"Picasso: Lightweight Device Class Fingerprinting for Web Clients","authors":"Elie Bursztein, Artem Malyshev, Tadek Pietraszek, Kurt Thomas","doi":"10.1145/2994459.2994467","DOIUrl":"https://doi.org/10.1145/2994459.2994467","url":null,"abstract":"In this work we present Picasso: a lightweight device class fingerprinting protocol that allows a server to verify the software and hardware stack of a mobile or desktop client. As an example, Picasso can distinguish between traffic sent by an authentic iPhone running Safari on iOS from an emulator or desktop client spoofing the same configuration. Our fingerprinting scheme builds on unpredictable yet stable noise introduced by a client's browser, operating system, and graphical stack when rendering HTML5 canvases. Our algorithm is resistant to replay and includes a hardware-bound proof of work that forces a client to expend a configurable amount of CPU and memory to solve challenges. We demonstrate that Picasso can distinguish 52 million Android, iOS, Windows, and OSX clients running a diversity of browsers with 100% accuracy. We discuss applications of Picasso in abuse fighting, including protecting the Play Store or other mobile app marketplaces from inorganic interactions; or identifying login attempts to user accounts from previously unseen device classes.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125146455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
App-based systems are typically supported by marketplaces that provide easy discovery and installation of third-party apps. To mitigate risks to user privacy, many app systems use permissions to control apps' access to user data. It then falls to users to decide which apps to install and how to manage their permissions, which many users lack the expertise to do in a meaningful way. Marketplaces are ideally positioned to inform users about privacy, but they do not take advantage of this. This lack of privacy guidance makes it difficult for users to make informed privacy decisions. We present both an app marketplace and a permission management assistant that incorporate privacy information as a key element, in the form of permission ratings. We discuss gathering this rating information from both human and automated sources, presenting the ratings in a way that users can understand, and using this information to promote privacy-respecting apps and help users manage permissions.
{"title":"On a (Per)Mission: Building Privacy Into the App Marketplace","authors":"Hannah Quay-de la Vallee, Paige Selby, S. Krishnamurthi","doi":"10.1145/2994459.2994466","DOIUrl":"https://doi.org/10.1145/2994459.2994466","url":null,"abstract":"App-based systems are typically supported by marketplaces that provide easy discovery and installation of third-party apps. To mitigate risks to user privacy, many app systems use permissions to control apps' access to user data. It then falls to users to decide which apps to install and how to manage their permissions, which many users lack the expertise to do in a meaningful way. Marketplaces are ideally positioned to inform users about privacy, but they do not take advantage of this. This lack of privacy guidance makes it difficult for users to make informed privacy decisions. We present both an app marketplace and a permission management assistant that incorporate privacy information as a key element, in the form of permission ratings. We discuss gathering this rating information from both human and automated sources, presenting the ratings in a way that users can understand, and using this information to promote privacy-respecting apps and help users manage permissions.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123794459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Google Cloud Messaging (GCM) is a widely-used and reliable mechanism that helps developers build more efficient Android applications; in particular, it enables sending push notifications to an application only when new information is available for it on its servers. For this reason, GCM is now used by more than 60% of the most popular Android applications. On the other hand, such a mechanism is also exploited by attackers to facilitate their malicious activities, e.g., to abuse the functionality of advertisement libraries in adware, or to command and control bot clients. However, to our knowledge, the extent to which GCM is used in malicious Android applications (badware, for short) has never been evaluated before. In this paper, we not only investigate the aforementioned issue, but also show how traces of GCM flows in Android applications can be exploited to improve Android badware detection. To this end, we first extend FlowDroid to extract GCM flows from Android applications. Then, we embed those flows in a vector space and train different machine-learning algorithms to detect badware that uses GCM to perform malicious activities. We demonstrate that combining different classifiers trained on the flows originating from GCM services allows us to improve the detection rate by up to 2.4%, while decreasing the false positive rate by 1.9%, and, more interestingly, to correctly detect 14 never-before-seen badware applications.
{"title":"Detecting Misuse of Google Cloud Messaging in Android Badware","authors":"Mansour Ahmadi, B. Biggio, Steven Arzt, Davide Ariu, G. Giacinto","doi":"10.1145/2994459.2994469","DOIUrl":"https://doi.org/10.1145/2994459.2994469","url":null,"abstract":"Google Cloud Messaging (GCM) is a widely-used and reliable mechanism that helps developers to build more efficient Android applications; in particular, it enables sending push notifications to an application only when new information is available for it on its servers. For this reason, GCM is now used by more than 60% among the most popular Android applications. On the other hand, such a mechanism is also exploited by attackers to facilitate their malicious activities; e.g., to abuse functionality of advertisement libraries in adware, or to command and control bot clients. However, to our knowledge, the extent to which GCM is used in malicious Android applications (badware, for short) has never been evaluated before. In this paper, we do not only aim to investigate the aforementioned issue, but also to show how traces of GCM flows in Android applications can be exploited to improve Android badware detection. To this end, we first extend Flowdroid to extract GCM flows from Android applications. Then, we embed those flows in a vector space, and train different machine-learning algorithms to detect badware that use GCM to perform malicious activities. We demonstrate that combining different classifiers trained on the flows originated from GCM services allows us to improve the detection rate up to 2.4%, while decreasing the false positive rate by 1.9%, and, more interestingly, to correctly detect 14 never-before-seen badware applications.","PeriodicalId":420892,"journal":{"name":"Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121447947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}