An Object Detection and Scaling Model for Plastic Waste Sorting
A. Padalkar, Pramod Pathak, Paul Stynes
DOI: 10.4108/eai.20-11-2021.2314204
Plastic waste sorting involves separating plastic into its individual plastic types. This research proposes an Object Detection and Scaling Model for plastic waste sorting that detects four types of plastic using the WaDaBa dataset. It compares two Object Detection and Scaling Models, Scaled-YOLOv4 and EfficientDet. Results demonstrate that Scaled-YOLOv4-CSP outperforms the state of the art, a Colour-Histogram-based Canny-Edge-Gaussian Filter, by 21% in accuracy.
Towards Functional Safety Compliance of Recurrent Neural Networks
D. Bacciu, Antonio Carta, Daniele Di Sarli, C. Gallicchio, Vincenzo Lomonaco, Salvatore Petroni
DOI: 10.4108/eai.20-11-2021.2314139
Deploying autonomous driving systems confronts the automotive industry with novel challenges. One of the most critical aspects, which can severely compromise deployment, is functional safety. The ISO 26262 standard provides guidelines for ensuring the functional safety of road vehicles; however, it is not suited to developing artificial-intelligence-based systems such as those built on Recurrent Neural Networks (RNNs). To address this issue, this paper proposes a new methodology composed of three steps. The first step evaluates the robustness of the RNN against input perturbations. Next, a suitable set of safety measures is defined according to the model's robustness, with less robust models requiring stronger mitigation. Finally, the functionality of the entire system is tested extensively according to Safety Of The Intended Functionality (SOTIF) guidelines, providing quantitative results on the occurrence of unsafe scenarios and evaluating appropriate Safety Performance Indicators.
René Laloux's vision of Ecotopian AI: Exploring the Ecosystemic AI through Fantastic Planet
Amar Singh, Shipra Tholia
DOI: 10.4108/eai.20-11-2021.2314145
Some recent experiments with AI, such as MIT's psychic AI Norman, Microsoft's Tay chatbot turning Nazi, Amazon's 2016 racial fiasco with Prime subscribers, and many others, have exposed the vulnerability of developing AI solely on the basis of human experiences. Such development serves only anthropogenic causes (gendered and racially motivated ones at that), neglecting the interests of other species. Ecosystemic artificial intelligence offers an alternative approach in which AI interacts with and learns from a broad community of species; learning in this way, AI adapts itself, privileging the coherence and unity that an ecosystem demands. René Laloux's animated film Fantastic Planet (1973) centres on this ecosystemic interaction of AI. The film highlights the positive changes that engagement with AI can bring to subdued communities, engendering harmony, and Laloux's conception of AI carries the idea that it can help assimilate marginalized groups into the mainstream by empowering them. This paper examines the situations the film brings forth, which are vital to understanding our present relationship to the earth and our role moving into the future.
The Ethics of Early Crisis Detection - Big Data, AI, and Algorithms in the German Military
Lea Buchhorn
DOI: 10.4108/eai.20-11-2021.2314263
Technological developments have influenced, and will continue to influence, our everyday lives. One of them, AI, promises many benefits in fields such as medicine, agriculture, and the military. On the other hand, AI advancement brings multifaceted risks and challenges, such as data privacy concerns, opaque decision-making, and discrimination against groups or individuals. AI and Big Data have gained ever greater importance in military operations across the globe, and the German military has been trialling different approaches to AI-based early crisis detection applications. However, the more insight is gained into AI and the harm that human errors in algorithm design can cause, the more ethical concerns arise. This paper therefore investigates the ethical challenges the German military faces while testing, and attempting to implement, AI-based early crisis detection systems.
Representational bias in expression and annotation of emotions in audiovisual databases
William Saakyan, Olya Hakobyan, Hanna Drimalla
DOI: 10.4108/eai.20-11-2021.2314203
Emotion recognition models can be confounded by representation bias, in which populations with certain gender, age, or ethnoracial characteristics are insufficiently represented in the training data. This may result in erroneous predictions with personally relevant consequences in sensitive contexts. We systematically examined 130 emotion datasets (audio, visual, and audio-visual) and found that age and ethnoracial background are the most affected dimensions, while gender is largely balanced in emotion datasets. The observed disparities between age and ethnoracial groups are compounded by scarce and inconsistent reporting of demographic information. Finally, we observed a lack of information about the annotators of emotion datasets, another potential source of bias.
Two-Person Mutual Action Recognition Using Joint Dynamics and Coordinate Transformation
Shian-Yu Chiu, Kun-Ru Wu, Y. Tseng
DOI: 10.4108/eai.20-11-2021.2314154
Skeleton-based action recognition has attracted considerable attention in computer vision. Recognising mutual interactions between two people relies on extracting discriminative features that capture the details of the interaction. In this work, we propose two vectors that encode joint dynamics and spatial interaction information. The proposed model handles sequential data remarkably well, and experimental results demonstrate that it outperforms state-of-the-art approaches with much lower overhead.
Ageism in AI: new forms of age discrimination in the era of algorithms and artificial intelligence
J. Stypińska
DOI: 10.4108/eai.20-11-2021.2314200
Scholars of fairness and ethics in AI have successfully and critically identified discriminatory outcomes pertaining to the social categories of gender and race. This scrutiny of fairness, important to the debate on AI for social good, has nonetheless paid insufficient attention to the critical category of age, and the aging population has been largely neglected in the turn to digitality and AI. Ageism in AI can manifest in five interconnected forms: (1) age biases in algorithms and datasets, (2) age stereotypes, prejudices, and ideologies among actors in AI, (3) the invisibility of old age in discourses on AI, (4) discriminatory effects of the use of AI technology on different age groups, and (5) the exclusion of older adults as users of AI technology, services, and products. The paper provides illustrations of each of these forms of ageism in AI.
Informed Digital Consent for Use of AI Systems Grounded in a Model of Sexual Consent
Emmie Hine
DOI: 10.4108/eai.20-11-2021.2314136
Artificial intelligence (AI) systems shape our infospheres, mediating our interactions and defining what information we have access to. This poses a tremendous threat to individual autonomy and impacts society, both online and offline. Users are often unaware of the potential impacts of using these systems, and companies that utilise them are not incentivised to adequately inform their users of those impacts. Forms of digital design ethics, including pro-ethical design and tolerant paternalism, have been proposed to help protect user autonomy, but are not sufficient to ensure that users are educated enough to make informed decisions. In this paper, I use sexual consent as defined by American universities to outline and propose ways to implement a model of “informed digital consent” that would ensure that users are well-informed so that their autonomy is not only respected, but enhanced.
The Ethics of Sustainability for Artificial Intelligence
A. Owe, S. Baum
DOI: 10.4108/eai.20-11-2021.2314105
Sustainability is widely considered a good thing and is therefore a matter of ethical significance. This paper analyzes the ethical dimensions of existing work on AI and sustainability, finding that most of it focuses on sustaining the environment for human benefit. The paper calls for a conception of sustainability that is not human-centric and that extends into the distant future, especially for advanced future AI as a technology that could advance expansion beyond Earth.