Title: Keynote: Privacy and Trust: Friend or Foe
Author: Ling Liu
DOI: 10.1145/3139531.3139537 (https://doi.org/10.1145/3139531.3139537)
Venue: Proceedings of the 2017 Workshop on Women in Cyber Security
Published: 2017-10-30
Citations: 0
Abstract
The Internet of Things (IoT) and Big Data have fueled the development of fully distributed computational architectures for future cyber systems, from data analytics and machine learning (ML) to artificial intelligence (AI). Trust and privacy have become two vital and necessary measures for the distributed management of IoT-powered big data learning systems and services. However, these two measures have been studied independently in computer science, social science, and law. Trust is widely considered a critical measure of the correctness, predictability, and resiliency (with respect to reliability and security) of software systems, be they big data systems, IoT systems, machine learning systems, or artificial intelligence systems. Privacy, on the other hand, is commonly recognized as a personalization measure for imposing control over how data is captured, accessed, and analyzed, and over how analytic results from ML models and AI systems should be released and shared. Broadly speaking, in human society we rely on three types of trust in our everyday work and life to achieve peace of mind: (1) verifiable belief-driven trust, (2) statistical-evidence-based trust, and (3) complex systemwide cognitive trust. Interestingly, privacy has been a more controversial subject. On one hand, privacy is an important built-in dimension of trust, deeply rooted in human society and a highly valued virtue in Western civilization. Even though different human beings may have diverse levels of privacy sensitivity, we all trust that our privacy is respected in our social and professional environments, including at home, at work, and in social commons. Privacy is thus a perfect example of this three-fold trust: belief-driven, statistically evidenced, and complex cognitive. On the other hand, many view privacy (and privacy protection) as antagonistic to trust, and one is often asked to demonstrate trust at the cost of giving up privacy.
Are Privacy and Trust friend or foe? This keynote will share my view on this question from multiple perspectives. I conjecture that the answer can fundamentally change the way we conduct research on privacy and trust in the next generation of big-data-enhanced cyber learning systems, from data mining and machine learning to artificial intelligence.