To operate nuclear power plants (NPPs) safely and efficiently, signals from sensors must be valid and accurate. Signals convey the current situation and status of the system to the operators and systems that use them as inputs. Faulty signals can therefore degrade the performance of both control systems and operators in emergency situations, as past accidents at NPPs have shown. Moreover, with the growing interest in autonomous and automatic control, the integrity and reliability of input signals have become essential for successful control. This study proposes an algorithm for restoring faulty signals under emergency situations using a deep convolutional generative adversarial network (DCGAN), which generates new data from random noise using two networks (i.e., a generator and a discriminator). To restore a faulty signal, the algorithm receives the faulty signal as input and generates a normal signal from a pre-trained distribution of normal signals. This study also suggests optimization steps to improve the performance of the algorithm. The optimization consists of two steps: 1) selection of optimal inputs, and 2) determination of the hyper-parameters for the DCGAN. The data for implementation and optimization were collected using a Compact Nuclear Simulator (CNS) developed by the Korea Atomic Energy Research Institute (KAERI). To reflect the characteristics of actual signals in NPPs, Gaussian noise with a 5% standard deviation was also added to the data.
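The noise-injection step can be sketched in a minimal form; the sine-wave stand-in signal and the interpretation of "5% standard deviation" as 5% of the signal's own standard deviation are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def add_measurement_noise(signal, rel_std=0.05, seed=0):
    """Add zero-mean Gaussian noise whose standard deviation is a
    fraction (here 5%) of the signal's own standard deviation."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, rel_std * np.std(signal), size=signal.shape)
    return signal + noise

clean = np.sin(np.linspace(0, 10, 1000))   # stand-in for a simulator signal
noisy = add_measurement_noise(clean)
```

The same augmentation would be applied to each CNS signal channel before training, so the network sees inputs with sensor-like measurement noise.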
Reference: "Faulty Signal Restoration Algorithm in the Emergency Situation Using Deep Learning Methods," Younhee Choi and Jonghyun Kim, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1001454.
In the early design stage of a tire pattern, it is very useful to predict the noise level associated with the pattern. An artificial neural network (ANN) was recently used to develop a model for predicting tire pattern noise. The ANN used a supervised training method that extracts features by applying Gaussian curve fitting to the tread profile spectrum of the tire pattern and uses them as the input of the ANN. This method requires laser scanning of the pattern of a real tire, but in early design there is no real tire. In this study, a convolutional neural network (CNN) to predict tire pattern noise was developed based on a non-supervised training method. Two learning algorithms, stochastic gradient descent (SGD) and RMSProp, were studied in the CNN model to compare their learning performance, and RMSProp was selected for the CNN model. In this case, a pattern image of the tire to be designed was used as the input of the CNN. The CNN to predict tire pattern noise was developed, and its utility in the early design stage was discussed. In the study, pattern noise for 28 tires was measured in an anechoic chamber and their pattern images were scanned. For the training of the ANN and CNN, pattern noise and pattern images for 24 tires were used. The trained ANN and CNN were each validated with the 4 tires not used for training. Finally, both networks were successfully developed and validated for the prediction of tire pattern noise. The trained CNN can predict pattern noise for a tire to be designed in the early design stage using only the drawing image of the tire, whilst the ANN can predict pattern noise for a real tire in the development stage.
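The two optimizer update rules compared in the study can be illustrated on a toy problem; the quadratic loss and all parameter values below are hypothetical stand-ins for the actual CNN training.

```python
import numpy as np

def sgd_step(w, grad, state, lr=0.01):
    # Plain stochastic gradient descent: one global learning rate.
    return w - lr * grad, state

def rmsprop_step(w, grad, state, lr=0.01, rho=0.9, eps=1e-8):
    # RMSProp keeps a running average of squared gradients and
    # scales each weight's step by it.
    state = rho * state + (1 - rho) * grad ** 2
    return w - lr * grad / (np.sqrt(state) + eps), state

def minimize(step_fn, w0, grad_fn, iters=500):
    w, state = w0.copy(), np.zeros_like(w0)
    for _ in range(iters):
        w, state = step_fn(w, grad_fn(w), state)
    return w

# Toy quadratic loss 0.5 * sum(a_i * w_i^2) with badly scaled curvature,
# the kind of setting where RMSProp's per-coordinate scaling helps.
curvature = np.array([100.0, 1.0])
grad_fn = lambda w: curvature * w
w0 = np.array([1.0, 1.0])
w_sgd = minimize(sgd_step, w0, grad_fn)
w_rms = minimize(rmsprop_step, w0, grad_fn)
```

Running both from the same starting point makes the comparison concrete: both reach the minimum here, but RMSProp's adaptive scaling is less sensitive to the mismatched curvature, which is one reason such a comparison is worth making before committing to an optimizer.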
Reference: "Pattern noise prediction using Artificial Neural Network," Sang Kwon Lee, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1001465.
J. Pîzarro, Byron Vásquez, Willan Steven Mendieta Molina, Remigio Hurtado
One application of artificial intelligence is the prediction of diseases, including hepatitis. Hepatitis has been a recurring disease over the years, seriously affecting the population and accounting for some 125,000 deaths per year. This process of inflammation and damage to the organ affects its performance, as well as the functioning of the other organs in the body. In this work, we analyze the variables and their influence on the target variable, and we present results from a predictive model. We propose a predictive analysis model that incorporates artificial neural networks, and we compare this method with other classification-oriented models such as support vector machines (SVM) and genetic algorithms. We frame our method as a classification problem. It requires a prior process of data preparation and exploratory analysis to identify the variables or factors that directly influence this type of disease. In this way, we can identify the variables that intervene in the development of the disease and that affect the liver or its correct functioning, causing discomfort to the human body as well as complications such as liver failure or liver cancer. Our model is structured in the following steps: first, data extraction is performed from the machine learning repository of the University of California, Irvine (UCI). These data then go through a variable transformation process. Subsequently, the model is trained and optimized as a neural network. The optimization (fine-tuning) is performed in three phases: compilation hyper-parameter optimization, neural network layer density optimization, and finally dropout regularization optimization. Finally, the results are visualized and analyzed. We used a data set of patient medical records whose variables include age, sex, gender, hemoglobin, and others. We found factors related either directly or indirectly to the disease. The results of the model are presented according to the quality measures Recall, Precision, and MAE. This research opens the door to new challenges, such as new applications within the field of medicine, not only focused on the liver but extending to other organs of the human body in order to avoid possible risks or future complications. It should be noted that applications of artificial neural networks are constantly evolving; improved models such as random forests and ensemble algorithms show great potential both in biomedical engineering and in the analysis of different types of medical images.
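The quality measures reported above can be computed as in this minimal sketch; the label vectors are hypothetical, and MAE is taken over the same binary labels for illustration.

```python
def precision_recall_mae(y_true, y_pred):
    """Binary-classification precision and recall, plus mean absolute error.
    Labels are 1 (disease present) and 0 (absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    return precision, recall, mae

# Hypothetical predictions for five patients.
p, r, m = precision_recall_mae([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

Reporting precision and recall together matters here because a screening model should miss few true hepatitis cases (high recall) without flooding clinicians with false alarms (high precision).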
Reference: "Hepatitis predictive analysis model through deep learning using neural networks based on patient history," J. Pîzarro, Byron Vásquez, Willan Steven Mendieta Molina, and Remigio Hurtado, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1001449.
There is a lack of data for building the various models that feed an artificial intelligence system capable of discovering patterns of behavior in a data set. Because of this lack, such systems are often not fed with data sets large enough to support learning. We therefore present a synthetic database, parameterized with restrictions on the characteristics of graphomotor and language elements, from which a set of combinations is developed to serve as the model for the AI. This process yielded 777,600 combinations; applying a first filter with the corresponding restrictions left 77,304 valid combinations, and a second filter with the remaining restrictions left 57,672 valid combinations for the generation of the synthetic database that feeds the AI. We conclude that generating synthetic data makes it possible to create data more or less similar to real data, as required, thereby ensuring sufficient quantity and removing the dependence on real or original data.
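The two-stage filtering of a combination space can be sketched with a much smaller, hypothetical parameter set; the feature names and restrictions below are invented for illustration (the study itself starts from 777,600 combinations).

```python
from itertools import product

# Hypothetical graphomotor/language parameter space, much smaller than
# the study's real one.
space = {
    "stroke": ["short", "medium", "long"],
    "pressure": ["light", "firm"],
    "direction": ["cw", "ccw"],
    "level": [1, 2, 3],
}

def enumerate_profiles(space):
    """Cartesian product of all parameter values, one dict per combination."""
    keys = list(space)
    return [dict(zip(keys, vals)) for vals in product(*space.values())]

# First filter: an invented structural restriction.
def filter1(c):
    return not (c["stroke"] == "short" and c["pressure"] == "firm")

# Second filter: an invented restriction applied to the survivors.
def filter2(c):
    return not (c["direction"] == "ccw" and c["level"] == 3)

all_combos = enumerate_profiles(space)
stage1 = [c for c in all_combos if filter1(c)]
stage2 = [c for c in stage1 if filter2(c)]
```

The design choice mirrors the paper: enumerate the full space once, then shrink it in stages, so each restriction set can be audited independently by comparing the counts before and after it.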
Reference: "Proposal for the Generation of Profiles using a Synthetic Database," Andres Viscaino-Quito and L. Serpa-Andrade, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1001462.
David Ricardo Castillo Salazar, L. Lanzarini, Hector Fernando Gomez Alvarado, José Varela-Aldás
Dementia is a brain disorder that affects older individuals' ability to carry out their daily activities, as in the case of neurological diseases. The main objective of this study is to automatically classify the mood of an Alzheimer's patient into one of the following categories: wandering, nervous, depressed, disoriented, bored, or normal, using videos obtained in nursing homes for the elderly in the canton of Ambato, Ecuador. We worked with a population of 39 people of both sexes who were diagnosed with Alzheimer's and whose ages ranged between 75 and 89 years. The methods used are pose detection, feature extraction, and pose classification, achieved with neural networks, the walk classifier, and the Levenshtein distance metric. As a result, a sequence of moods is generated, which establishes a relationship between the software and the human expert for the expected effect. We conclude that artificial vision software allows the mood states of Alzheimer's patients to be recognized from pose changes over time.
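The Levenshtein distance used to relate the software's mood sequence to the human expert's can be sketched as follows; the example sequences are hypothetical.

```python
def levenshtein(a, b):
    """Edit distance between two sequences: the minimum number of
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical mood sequences: the software detected one extra state.
software = ["normal", "bored", "nervous", "wandering"]
expert = ["normal", "nervous", "wandering"]
d = levenshtein(software, expert)
```

A small distance means the automatically generated mood sequence closely tracks the expert's annotation, which is exactly the agreement the study measures.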
Reference: "Artificial vision system to detect the mood of an Alzheimer's patient," David Ricardo Castillo Salazar, L. Lanzarini, Hector Fernando Gomez Alvarado, and José Varela-Aldás, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1001445.
Data is not free of biases, and AI systems based on that data are not either. What can be done to minimize the risk of building systems that perpetuate the biases that exist in society and in data? In our paper we explore the possibilities within the User-Centered Design process and Design Thinking for lowering the risk of preserving imbalances or gaps in data and models. But looking at the design process is not enough: decision makers and the development and design teams, specifically their composition, their awareness of discrimination risks, and their decisions about involving potential users and non-users, collecting data, and testing the application, also play a major role in implementing systems with the fewest biases possible.
Reference: "Lowering the risk of bias in AI applications," Jj Link, Helena Dadakou, and Anne Elisabeth Krüger, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1003286.
This paper provides a comprehensive analysis of deepfakes, focusing on their creation, generation, and detection. Deepfakes are realistic fabricated videos, images, or audio generated using artificial intelligence algorithms. While initially seen as a source of entertainment and commercial applications, the negative social consequences of deepfakes have become apparent. They are misused for creating adult content, blackmailing individuals, and spreading misinformation, leading to a decline in trust and to potential societal harms. The paper also discusses the importance of legislation in regulating the use of deepfakes and explores techniques for their detection, including machine learning and natural language processing. Understanding deepfakes is essential to addressing their ethical and legal implications in today's digital landscape.
Reference: "Understanding Deepfakes: A Comprehensive Analysis of Creation, Generation, and Detection," S. Alanazi and Seemal Asif, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1003290.
Heiko Fischer, Sven Seidenstricker, Thomas Berger, T. Holopainen
Digitalization is a driving force for innovation in the business-to-business (B2B) environment and profoundly changes the way companies do business. It affects the entire value chain of a company and can be used to automate human tasks. For instance, previous research indicates that 40% of all sales tasks can be automated; thus, the digital transformation of sales has the potential to improve a firm's performance. Depending on its development level, digitalization in sales can assist or even replace numerous sales tasks. Therefore, using digital solutions in sales can be seen as an essential trigger of competitive advantage. Recent developments in research and practice have revealed that artificial intelligence (AI) in particular has gained increasing attention in the sales domain. A challenging issue is how AI affects the sales process and how it can be applied meaningfully in B2B sales. Thus, our paper aims to investigate how AI can be used along the sales process and how it can improve sales practices. To explore this, we conduct a systematic literature review in scientific databases such as Business Source Premier, Science Direct, Emerald, Springer Online Library, Wiley Online Library, and Google Scholar, supplementing the findings with a qualitative research approach. Analyzing this literature on digital transformation in sales, we find that the application and benefits of AI depend on the step of the sales process. For this reason, we review B2B sales process models, compare them, and choose a reference model for the evaluation of AI in B2B sales. Moreover, we present common definitions of AI and show how this technology is usually applied in B2B sales. Afterward, we map use cases of AI onto the sales process. For each step, we present use cases in detail and explain their benefits for sales. For instance, we find that tasks with traditionally high human involvement are especially challenging to automate. In particular, in complex sales situations the human salesperson cannot be entirely replaced by digital technologies, while routine tasks can be carried out with their help. Summing up, our paper analyzes different viewpoints of the sales process in the digital sales literature, presents the application of AI along the sales cycle, and closes with a discussion, a conclusion, and recommendations for practice and academia.
Reference: "Artificial intelligence in B2B sales: Impact on the sales process," Heiko Fischer, Sven Seidenstricker, Thomas Berger, and T. Holopainen, Artificial Intelligence and Social Computing, doi:10.54941/ahfe1001456.
Datasets for hand gesture recognition are now an important aspect of machine learning, and many have been created for this purpose. Notable examples include the Modified National Institute of Standards and Technology (MNIST) dataset, the Common Objects in Context (COCO) dataset, and the Canadian Institute For Advanced Research (CIFAR-10) dataset, which underpin well-known models such as LeNet-5, AlexNet, and GoogLeNet, as well as the American Sign Language Lexicon Video Dataset and the 2D Static Hand Gesture Colour Image Dataset for ASL Gestures. However, there is no dataset for Kenyan Sign Language (KSL). This paper proposes the creation of a KSL hand gesture recognition dataset. The dataset is intended to have two parts: one for static hand gestures and one for dynamic hand gestures. For the dynamic gestures, short videos of the KSL alphabet a to z and the numbers 0 to 10 will be considered; likewise, for the static gestures, the KSL alphabet a to z will be considered. It is anticipated that this dataset will be vital for creating sign language hand gesture recognition systems, not only for Kenyan Sign Language but for other sign languages as well, because transfer learning can be applied when implementing sign language systems with neural network models.
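One possible way to lay out the proposed two-part dataset is a simple manifest; the directory layout and file naming below are assumptions for illustration, not part of the proposal.

```python
import string

def build_manifest():
    """Manifest for a two-part KSL dataset: static images for the
    letters a-z, and short videos for a-z plus the numbers 0-10."""
    letters = list(string.ascii_lowercase)
    numbers = [str(n) for n in range(11)]   # 0..10 inclusive
    manifest = [{"split": "static", "label": lab,
                 "path": f"ksl/static/{lab}.png"} for lab in letters]
    manifest += [{"split": "dynamic", "label": lab,
                  "path": f"ksl/dynamic/{lab}.mp4"} for lab in letters + numbers]
    return manifest

m = build_manifest()   # 26 static entries plus 37 dynamic entries
```

Keeping one labeled record per sample makes the dataset easy to split for training and evaluation, and the static/dynamic flag lets image and video pipelines consume the same manifest.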
Title: Towards Kenyan Sign Language Hand Gesture Recognition Dataset. Authors: C. Nyaga, R. Wario. DOI: 10.54941/ahfe1003281.
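The transfer-learning claim above — that models trained on large sign-language corpora can bootstrap recognizers for a new, smaller KSL dataset — can be sketched in miniature. The following is purely illustrative (the abstract names no architecture or code): a frozen random projection stands in for a pretrained feature extractor, and only a small classification head is fine-tuned on toy "KSL" data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen random projection.
# In a real system this would be a CNN trained on a large gesture
# corpus (e.g. an ASL dataset); here it is purely illustrative.
W_frozen = rng.normal(size=(64, 32)) / 8.0

def extract_features(x):
    # Frozen backbone: these weights are never updated during fine-tuning.
    return np.maximum(x @ W_frozen, 0.0)  # ReLU

# Toy "KSL" data: 26 static-letter classes, 64-dim flattened inputs.
n, n_classes = 260, 26
X = rng.normal(size=(n, 64))
y = rng.integers(0, n_classes, size=n)

# Trainable classification head, fine-tuned on the new (small) dataset.
W_head = np.zeros((32, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

feats = extract_features(X)
onehot = np.eye(n_classes)[y]
losses = []
for _ in range(200):
    p = softmax(feats @ W_head)
    losses.append(-np.log(p[np.arange(n), y] + 1e-12).mean())
    grad = feats.T @ (p - onehot) / n   # cross-entropy gradient w.r.t. head
    W_head -= 0.1 * grad                # only the head is updated

print(f"training loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The design point is that only `W_head` is trained: the small KSL dataset only has to fit a linear classifier over features learned elsewhere, which is why transfer across sign languages is plausible.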
Abraham Sanders, Mara Schwartz, Albert Ling Sheng Chang, Shannon Briggs, J. Braasch, Dakuo Wang, Mei Si, T. Strzalkowski
Efficient evaluation of dialogue agents is a major problem in conversational AI, with current research still relying largely on human studies for method validation. Recently, there has been a trend toward the use of automatic self-play and bot-bot evaluation as an approximation for human ratings of conversational systems. Such methods promise to alleviate the time and financial costs associated with human evaluation, and currently proposed methods show moderate to strong correlation with human judgements. In this study, we further investigate the fitness of end-to-end self-play and bot-bot interaction for dialogue system evaluation. Specifically, we perform a human study to confirm self-play evaluations of a recently proposed agent that implements a GPT-2 based response generator on the Persuasion For Good charity solicitation task. This agent leverages Progression Function (PF) models to predict the evolving acceptability of an ongoing dialogue and uses dialogue rollouts to proactively simulate how candidate responses may impact the future success of the conversation. The agent was evaluated in an automatic self-play setting, using automatic metrics to estimate sentiment and intent to donate in each simulated dialogue. This evaluation indicated that sentiment and intent to donate were higher (p < 0.05) across dialogues involving the progression-aware agents with rollouts, compared to a baseline agent with no rollout-based planning mechanism. To validate the use of self-play in this setting, we follow up by conducting a human evaluation of this same agent on a range of factors including convincingness, aggression, competence, confidence, friendliness, and task utility on the same Persuasion For Good solicitation task.
Results show that human users agree with previously reported automatic self-play results with respect to agent sentiment, specifically showing improvement in friendliness and confidence in the experimental condition; however, we also discover that for the same agent, humans reported a lower desire to use it in the future compared to the baseline. We perform a qualitative sentiment analysis of participant feedback to explore possible reasons for this, and discuss implications for self-play and bot-bot interaction as a general framework for evaluating conversational systems.
Title: Towards a Proper Evaluation of Automated Conversational Systems. DOI: 10.54941/ahfe1003276.
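The rollout-based planning described in this abstract — simulating how each candidate response may play out and scoring the simulated futures with a progression function — can be sketched as follows. Every name here (the candidate list, the toy user simulator, the scoring heuristic) is a hypothetical stand-in, not the paper's implementation; in the actual agent the generator is GPT-2 based and the PF is a learned model.

```python
import random

random.seed(0)

# Hypothetical stand-ins for the components described in the abstract:
# a response generator (fixed candidate list), a user simulator, and a
# progression function (PF) that scores how well the dialogue is going.
CANDIDATES = ["Would you consider donating $1?",
              "Every dollar helps a child in need.",
              "No pressure, but donations are matched today."]

def simulate_user_reply(history):
    # Toy user model: reacts slightly more positively to some phrasings.
    return "sure" if "helps" in history[-1] and random.random() < 0.7 else "maybe"

def progression_score(history):
    # Toy PF: fraction of user turns signalling agreement.
    replies = [t for t in history if t in ("sure", "maybe")]
    return sum(r == "sure" for r in replies) / max(len(replies), 1)

def choose_response(history, n_rollouts=20, depth=2):
    """Pick the candidate whose simulated futures score best under the PF."""
    best, best_score = None, -1.0
    for cand in CANDIDATES:
        total = 0.0
        for _ in range(n_rollouts):
            h = history + [cand]
            for _ in range(depth):
                h.append(simulate_user_reply(h))   # simulated user turn
                h.append(random.choice(CANDIDATES))  # simulated agent turn
            total += progression_score(h)
        if total / n_rollouts > best_score:
            best, best_score = cand, total / n_rollouts
    return best

print(choose_response(["Hi! I'm raising money for charity."]))
```

The key design choice mirrored here is that candidates are ranked not by their immediate quality but by the expected PF score of the conversations they lead to, averaged over several simulated continuations.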