"Advancing non-convex and constrained learning: challenges and opportunities" by Tianbao Yang. AI Matters 5(3), pp. 29-39 (December 2019). DOI: 10.1145/3362077.3362085
As data grow more complex and applications of machine learning (ML) algorithms for decision-making broaden and diversify, traditional ML methods that minimize an unconstrained or simply constrained convex objective are becoming increasingly unsatisfactory. To address this challenge, recent ML research has sparked a paradigm shift in learning predictive models toward non-convex learning and heavily constrained learning. Non-Convex Learning (NCL) refers to a family of learning methods that involve optimizing non-convex objectives. Heavily Constrained Learning (HCL) refers to a family of learning methods whose constraints (e.g., data-dependent functional constraints, non-convex constraints) are much more complicated than the simple norm constraints of conventional learning. This paradigm shift has already produced many promising outcomes: (i) non-convex deep learning has brought breakthroughs in learning representations from large-scale structured data (e.g., images, speech) (LeCun, Bengio, & Hinton, 2015; Krizhevsky, Sutskever, & Hinton, 2012; Amodei et al., 2016; Deng & Liu, 2018); (ii) non-convex regularizers (e.g., for enforcing sparsity or low rank) can be more effective than their convex counterparts for learning high-dimensional structured models (C.-H. Zhang & Zhang, 2012; J. Fan & Li, 2001; C.-H. Zhang, 2010; T. Zhang, 2010); (iii) constrained learning is being used to learn predictive models that satisfy constraints respecting social norms such as fairness (B. E. Woodworth, Gunasekar, Ohannessian, & Srebro, 2017; Hardt, Price, Srebro, et al., 2016; Zafar, Valera, Gomez Rodriguez, & Gummadi, 2017; A. Agarwal, Beygelzimer, Dudík, Langford, & Wallach, 2018), improving interpretability (Gupta et al., 2016; Canini, Cotter, Gupta, Fard, & Pfeifer, 2016; You, Ding, Canini, Pfeifer, & Gupta, 2017), and enhancing robustness (Globerson & Roweis, 2006a; Sra, Nowozin, & Wright, 2011; T. Yang, Mahdavi, Jin, Zhang, & Zhou, 2012). Despite their great promise, these new learning paradigms also bring emerging challenges in designing computationally efficient algorithms for big data and analyzing their statistical properties.
"The intersection of ethics and AI" by Annie Zhou. AI Matters 5(3), pp. 64-69 (December 2019). DOI: 10.1145/3362077.3362087
Artificial intelligence is a rapidly advancing field with the potential to revolutionize health care, transportation, and national security. Although the technology has been ubiquitous in everyday life for some time, the advent of self-driving cars and smart home devices has propelled a discussion of the associated ethical risks and responsibilities. Since the use of AI can have significant impacts on people, it is essential to establish a set of ethical values to follow when developing and deploying AI.
"Considerations for AI fairness for people with disabilities" by Shari Trewin, Sara H. Basson, Michael J. Muller, Stacy M. Branham, J. Treviranus, D. Gruen, Daniell Hebert, Natalia Lyckowski, and Erich Manser. AI Matters 5(3), pp. 40-63 (December 2019). DOI: 10.1145/3362077.3362086
In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to affect the fair treatment of people with disabilities both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas (employment, education, public safety, and healthcare) identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We then discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.
"AI education matters" by T. Neller. AI Matters 5(2), pp. 8-10 (August 2019). DOI: 10.1145/3340470.3340474
In this column, we briefly describe a rich dataset with many opportunities for interesting data science and machine learning assignments and research projects, we take up a simple question, and we offer code illustrating use of the dataset in pursuit of answers to the question.
"Welcome to AI matters 5(2)" by A. McGovern and Iolanda Leite. AI Matters 5(2), p. 3 (August 2019). DOI: 10.1145/3340470.3340471
Welcome to the second issue of the fifth volume of the AI Matters Newsletter. We have exciting news from SIGAI Vice-Chair Sanmay Das: "We are delighted to announce that the first ever ACM SIGAI Industry Award for Excellence in Artificial Intelligence will be awarded to the Decision Service created by the Real World Reinforcement Learning Team from Microsoft! The award will be presented at IJCAI 2019. For more on the award and the team that received it, please see https://sigai.acm.org/awards/industry_award.html."
"Beyond transparency" by Janelle Berscheid and F. Roewer-Després. AI Matters 5(2), pp. 13-22 (August 2019). DOI: 10.1145/3340470.3340476
Transparency in decision-making AI systems can only become actionable in practice when all stakeholders share responsibility for validating outcomes. We propose a three-party regulatory framework that incentivizes collaborative development in the AI ecosystem and guarantees that fairness and accountability are not merely afterthoughts in high-impact domains.
"AI policy matters" by L. Medsker. AI Matters 5(2), pp. 11-12 (August 2019). DOI: 10.1145/3340470.3340475
AI Policy is a regular column in AI Matters featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog (https://sigai.acm.org/aimatters/blog/). We welcome everyone to make blog comments so we can develop a rich knowledge base of information and ideas representing the SIGAI members.
"Crosswords" by A. Botea. AI Matters 5(2), p. 46 (August 2019). DOI: 10.1145/3340470.3340480
Across: 1) French city by the Strait of Dover 6) Go through a printed paper 12) A point in time 13) Protected with a concrete defense 14) A.C. ___, from Saved by the Bell 16) Seas at a raised level 17) ___ Braxton, American singer 18) Undesired spot 20) Gear tooth 21) Be indebted 22) Mine in France 23) Ireland in the local language 24) Buckingham guard attire (2 wds.) 26) Summation circuit 27) Crying out loud 29) Follow up actively (2 wds.) 32) Said more formally 36) Nominated as a fellow 37) Continuous pain 38) Output of a mining procedure 39) Verb invoked with ability 40) Legal argument 41) Old tourist attraction 42) Unpleasant experience 44) Clothes area 46) ___ Wonder from the world of music 47) Person hired to help 48) Unexcitingly 49) Present on the list of requirements
"What metrics should we use to measure commercial AI?" by C. Hughes and Tracey Hughes. AI Matters 5(2), pp. 41-45 (August 2019). DOI: 10.1145/3340470.3340479
In AI Matters Volume 4, Issues 2 and 4, we raised the notion of a possible AI Cosmology, in part in response to the "AI Hype Cycle" that we are currently experiencing. We posited that our current machine learning and big data era represents but one peak among several previous peaks in AI research, each peak with its accompanying hype cycle. We associated each peak with an epoch in a possible AI Cosmology, and briefly explored the logic machines, cybernetics, and expert systems epochs. One objective of identifying these epochs was to establish that we have been here before: in territory where some application of AI research finds substantial commercial success, closely followed by AI fever and hype. The public's expectations are heightened, only to end in disillusionment when the applications fall short. While it can be a challenge even for AI researchers, educators, and practitioners to know where the reality ends and the hype begins, the layperson is often in an impossible position, at the mercy of pop culture, marketing, and advertising campaigns. We suggested that an AI Cosmology might help us identify a single standard model for AI that could be the foundation for a common shared understanding of what AI is and what it is not: a tool to help the layperson understand where AI has been, where it is going, and where it cannot go, and a basic road map to help the general public navigate the pitfalls of AI hype.
"Context-conscious fairness in using machine learning to make decisions" by M. S. Lee. AI Matters 5(2), pp. 23-29 (August 2019). DOI: 10.1145/3340470.3340477
The increasing adoption of machine learning to inform decisions in employment, pricing, and criminal justice has raised concerns that algorithms may perpetuate historical and societal discrimination. Academics have responded by introducing numerous definitions of "fairness" with corresponding mathematical formalisations, proposed as one-size-fits-all, universal conditions. This paper explores three of these definitions and demonstrates their embedded ethical values and contextual limitations, using credit risk evaluation as an example use case. I propose a new approach, context-conscious fairness, that takes into account two main trade-offs: between aggregate benefit and inequity, and between accuracy and interpretability. Fairness is not a notion with an absolute, binary measurement; the target outcomes and their trade-offs must be specified with respect to the relevant domain context.