"Deep aspect extraction and classification for opinion mining in e-commerce applications using convolutional neural network feature extraction followed by long short term memory attention model" by Kamal Sharbatian and Mohammad Hossein Moattar. Applied AI letters 4(3), 2023-08-09. DOI: 10.1002/ail2.86.

Users of e-commerce websites review different aspects of a product in the comment section. This research proposes an approach for opinion aspect extraction and recognition in selling systems, using opinions from the Digikala website (www.Digikala.com), an Iranian e-commerce company. The framework is language-independent and adjustable to other languages. After the necessary text processing and preparation steps, the presence of an aspect in an opinion is determined using deep learning: the model combines a Convolutional Neural Network (CNN) with a long short-term memory (LSTM) network. The CNN is well suited to extracting latent features from the data, while the LSTM, owing to its memory and attention mechanism, can detect latent temporal relationships among the words of a text. The approach is evaluated on six classes of opinion aspects, achieving 70% accuracy, 60% precision, and 85% recall. On these criteria the proposed model compares favourably with CNN, Naive Bayes, and SVM baselines.
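The CNN feature-extraction stage of such a model can be illustrated with a minimal numpy sketch: filters slide over a sequence of word embeddings, and max-over-time pooling keeps the strongest response per filter. This is a generic illustration of the technique, not the paper's implementation; the embeddings, filter shapes, and values are invented for the example. In a full CNN-LSTM model, the sequence of convolutional feature maps (rather than the pooled vector) would feed the LSTM and attention layers.

```python
import numpy as np

def cnn_features(emb, kernels):
    # emb: (seq_len, dim) word embeddings; kernels: (n_filters, width, dim).
    # Slide each filter over the sequence, then max-pool over time,
    # yielding one latent feature per filter.
    n_f, w, _ = kernels.shape
    maps = np.array([
        [np.sum(emb[t:t + w] * kernels[f]) for t in range(len(emb) - w + 1)]
        for f in range(n_f)
    ])
    return maps.max(axis=1)

# Toy 4-word sequence with 2-dimensional embeddings and one bigram filter
# that matches the pattern at position 0.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 0.0]])
kernels = np.array([[[1.0, 0.0], [0.0, 1.0]]])
print(cnn_features(emb, kernels))  # [2.]
```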
"Predicting mobile money transaction fraud using machine learning algorithms" by Mark E. Lokanan. Applied AI letters 4(2), 2023-07-12. DOI: 10.1002/ail2.85.

The ease with which mobile money facilitates cross-border payments presents a global threat to law enforcement in the fight against money laundering and terrorist financing. This paper uses machine learning classifiers to predict transactions flagged as fraudulent in mobile money transfers. The data were obtained from real-time transactions that simulate a well-known mobile transfer fraud scheme. Logistic regression serves as the baseline model and is compared with ensemble and gradient-descent models. The logistic regression model showed reasonable performance but did not match the other models; across all measures, the random forest classifier performed best. The amount of money transferred emerged as the top feature for predicting money laundering transactions. These findings suggest that further research is needed to enhance the logistic regression model, and that the random forest classifier should be explored as a tool for law enforcement and financial institutions to detect money laundering in mobile money transfers.
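The modelling pipeline described above, a logistic regression baseline compared against a random forest whose feature importances surface the transfer amount, can be sketched with scikit-learn on synthetic data. The features, the toy flagging rule, and all numbers are invented for illustration and do not reflect the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
amount = rng.exponential(1000, n)      # transfer amount (strong signal)
hour = rng.integers(0, 24, n)          # time of day (irrelevant here)
fraud = (amount > 3000).astype(int)    # toy rule: large transfers flagged

X = np.c_[amount, hour]
Xtr, Xte, ytr, yte = train_test_split(X, fraud, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(Xtr, ytr)  # baseline model
forest = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

# The forest matches or beats the baseline, and its feature importances
# rank the amount (feature 0) as the top predictor.
print(forest.score(Xte, yte) >= baseline.score(Xte, yte))   # True
print(int(np.argmax(forest.feature_importances_)))          # 0
```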
"Automated patent classification for crop protection via domain adaptation" by Dimitrios Christofidellis, Marzena Maria Lehmann, Torsten Luksch, Marco Stenta, and Matteo Manica. Applied AI letters 4(1), 2023-02-15. DOI: 10.1002/ail2.80.

Patents show how technology evolves over time in most scientific fields. The best way to use this valuable knowledge base is efficient and effective information retrieval and search for related prior art. Patent classification, that is, assigning a patent to one or more predefined categories, is a fundamental step towards synthesizing the information content of an invention. To this end, Transformer-based architectures, especially those derived from the BERT family, have already been proposed in the literature and have shown remarkable results, setting a new state of the art for the classification task. Here, we study how domain adaptation can push the performance boundaries in patent classification by rigorously evaluating and implementing a collection of recent transfer learning techniques, for example, domain-adaptive pretraining and adapters. Our analysis shows how these advances enable state-of-the-art models with increased precision, recall, and F1-score. We base our evaluation both on standard patent classification datasets derived from patent office-defined code hierarchies and on more practical real-world use cases with labels from the agrochemical industrial domain. The application of these domain-adapted techniques to patent classification in a multilingual setting is also examined and evaluated.
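One of the transfer-learning techniques named above, bottleneck adapters, can be sketched in a few lines of numpy. This is a generic illustration of the adapter idea (down-project, nonlinearity, up-project, residual connection), with invented shapes and initialisation, not the paper's architecture.

```python
import numpy as np

def adapter(h, W_down, W_up):
    # Bottleneck adapter: a small trainable module inserted into a frozen
    # pretrained Transformer layer. During domain adaptation only W_down
    # and W_up are updated; the backbone weights stay fixed.
    z = np.maximum(h @ W_down, 0.0)  # down-projection + ReLU
    return h + z @ W_up              # up-projection + residual connection

d, r = 8, 2                          # hidden size, bottleneck size
h = np.ones((1, d))
rng = np.random.default_rng(0)
W_down = rng.normal(size=(d, r))
# Zero-initialised up-projection: the adapter starts as the identity,
# so inserting it cannot degrade the pretrained model at step 0.
out = adapter(h, W_down, np.zeros((r, d)))
print(np.allclose(out, h))  # True
```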
"Enhanced recognition of adolescents with schizophrenia and a computational contrast of their neuroanatomy with healthy patients using brainwave signals" by Ejay Nsugbe. Applied AI letters 4(1), 2023-01-12. DOI: 10.1002/ail2.79.

Schizophrenia is a psychiatric disorder prevalent in individuals around the world. It is typically diagnosed through interview-style questioning of the patient combined with a review of their medical record, but these methods have been widely criticised as subjective between psychiatrists and largely unreplicable. Schizophrenia also occurs in adolescents, who are said to be even more challenging to diagnose, partly because delusions can be mistaken for childhood fantasies and because methods established for adult patients are applied to adolescents. This work investigates the use of electroencephalography (EEG) signals acquired from adolescent patients aged 10-14 years, together with signal processing methods and machine learning modelling, for the diagnosis of adolescent schizophrenia. The machine learning results showed that linear discriminant analysis (LDA) and fine K-nearest neighbour (KNN) produced the best recognition results among the easily and less easily interpretable models, respectively. Additionally, a computational method was applied to contrast the neuroanatomical activation patterns in the brains of schizophrenic and healthy adolescents; the activation patterns of the healthy adolescents showed greater consistency than those of the schizophrenic group.
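A typical ingredient of such a pipeline, relative spectral band power extracted from an EEG channel, can be sketched as follows. The sampling rate and signal are synthetic, and this is a generic EEG feature commonly fed to classifiers such as LDA or KNN, not necessarily one of the features used in the paper.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    # Fraction of total spectral power falling in [lo, hi) Hz.
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].sum() / psd.sum()

fs = 128
t = np.arange(fs) / fs                    # one second of samples
alpha = np.sin(2 * np.pi * 10 * t)        # pure 10 Hz tone: alpha band
print(band_power(alpha, fs, 8, 13))       # ≈ 1.0 (all power in 8-13 Hz)
```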
"Twin neural network regression" by Sebastian Johann Wetzel, Kevin Ryczko, Roger Gordon Melko, and Isaac Tamblyn. Applied AI letters 3(4), 2022-10-04. DOI: 10.1002/ail2.78.

We introduce twin neural network regression (TNNR). This method predicts differences between the target values of two different data points rather than the targets themselves. The solution of a traditional regression problem is then obtained by averaging over an ensemble of all predicted differences between the targets of an unseen data point and all training data points. Whereas ensembles are normally costly to produce, TNNR intrinsically creates an ensemble of predictions twice the size of the training set while training only a single neural network. Since ensembles have been shown to be more accurate than single models, this property naturally transfers to TNNR. We show that TNNR can match or exceed the accuracy of other state-of-the-art methods on a range of data sets. Furthermore, TNNR is constrained by self-consistency conditions, and we find that the violation of these conditions provides a signal for prediction uncertainty.
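The core idea, learning pairwise target differences and averaging them over all training anchors into an ensemble prediction, can be sketched with a linear difference model standing in for the twin network. The toy data and target function are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(20, 1))
y_train = 3.0 * X_train[:, 0] + 0.5        # noiseless linear toy target

# Build all ordered training pairs (i, j) and fit a linear difference
# model F(x_i, x_j) ≈ y_i - y_j on the feature x_i - x_j (a stand-in
# for the twin neural network).
i, j = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
dX = X_train[i.ravel()] - X_train[j.ravel()]
dy = y_train[i.ravel()] - y_train[j.ravel()]
w, *_ = np.linalg.lstsq(dX, dy, rcond=None)

def predict(x):
    # Ensemble: average y_j + F(x, x_j) over every training anchor j.
    diffs = (x - X_train) @ w      # predicted y(x) - y_j for all j
    return float(np.mean(y_train + diffs))

print(predict(np.array([0.2])))    # ≈ 3*0.2 + 0.5 = 1.1
```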
"Evaluating perceptual and semantic interpretability of saliency methods: A case study of melanoma" by Harshit Bokadia, Scott Cheng-Hsin Yang, Zhaobin Li, Tomas Folke, and Patrick Shafto. Applied AI letters 3(3), 2022-09-13. DOI: 10.1002/ail2.77.

To be useful, XAI explanations have to be faithful to the AI system they seek to elucidate and also interpretable to the people who engage with them. Multiple algorithmic methods exist for assessing faithfulness, but not for interpretability, which is typically assessed only through expensive user studies. Here we propose two complementary metrics to algorithmically evaluate the interpretability of saliency map explanations. The first assesses perceptual interpretability by quantifying the visual coherence of the saliency map. The second assesses semantic interpretability by capturing the degree of overlap between the saliency map and textbook features, the features human experts use to make a classification. We use a melanoma dataset and a deep neural network classifier as a case study to explore how our two interpretability metrics relate to each other and to a faithfulness metric. Across six commonly used saliency methods, we find that none achieves high scores on all three metrics for all test images, but that different methods perform well in different regions of the data distribution. This variation between methods can be leveraged to consistently achieve high interpretability and faithfulness by using our metrics to inform saliency mask selection on a case-by-case basis. Our interpretability metrics provide a new way to evaluate saliency-based explanations and allow for the adaptive combination of saliency-based explanation methods.
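An overlap measure of the kind described, scoring how much of a saliency map's hottest region coincides with expert-annotated "textbook" features, can be sketched as follows. This is an illustrative stand-in (top-k binarisation plus intersection-over-union), not the paper's exact metric, and the threshold choice is an assumption.

```python
import numpy as np

def semantic_overlap(saliency, expert_mask, top_k=0.1):
    # Binarise the saliency map at its top-k fraction of pixels, then
    # measure intersection-over-union with the expert feature mask.
    thresh = np.quantile(saliency, 1.0 - top_k)
    hot = saliency >= thresh
    inter = np.logical_and(hot, expert_mask).sum()
    union = np.logical_or(hot, expert_mask).sum()
    return inter / union if union else 0.0

saliency = np.zeros((10, 10))
saliency[0, :] = 1.0                     # the 10 hottest pixels
expert = np.zeros((10, 10), dtype=bool)
expert[0, :] = True                      # expert marks the same region
print(semantic_overlap(saliency, expert))  # 1.0 (perfect overlap)
```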
"Applying machine learning for large scale field calibration of low-cost PM2.5 and PM10 air pollution sensors" by Priscilla Adong, Engineer Bainomugisha, Deo Okure, and Richard Sserunjogi. Applied AI letters 3(3), 2022-07-31. DOI: 10.1002/ail2.76.

Low-cost air quality monitoring networks can increase the availability of high-resolution monitoring to inform analytic and evidence-informed approaches to managing air quality. This is particularly relevant in low- and middle-income settings, where access to traditional reference-grade monitoring networks remains a challenge. However, low-cost air quality sensors are affected by ambient conditions, which can lead to over- or underestimation of pollution concentrations, so they require field calibration to improve their accuracy and reliability. In this paper, we demonstrate the feasibility of using machine learning methods for large-scale calibration of AirQo sensors, low-cost particulate matter (PM) sensors custom-designed for and deployed in Sub-Saharan urban settings. The performance of various machine learning methods (k-nearest neighbours, support vector regression, multivariate linear regression, ridge regression, lasso regression, elastic net regression, XGBoost, multilayer perceptron, random forest, and gradient boosting) is assessed by comparing model-corrected PM with collocated reference concentrations from a Beta Attenuation Monitor (BAM). Random forest and lasso regression were superior for PM2.5 and PM10 calibration, respectively. The random forest model decreased the RMSE of raw data from 18.6 μg/m³ to 7.2 μg/m³ at an average BAM PM2.5 concentration of 37.8 μg/m³, while the lasso regression model decreased the RMSE from 13.4 μg/m³ to 7.9 μg/m³ at an average BAM PM10 concentration of 51.1 μg/m³. We validate our models through cross-unit and cross-site validation, allowing analysis of the AirQo devices' consistency. The resulting calibration models were deployed to the entire monitoring network of over 120 AirQo devices, demonstrating the use of machine learning systems to address practical challenges in a developing-world setting.
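The calibration step can be sketched with the simplest model in the paper's comparison, a linear regression from raw sensor readings to collocated reference values. The sensor bias model and all numbers are synthetic; in the paper, random forest and lasso models replace this fitting step.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pm = rng.uniform(10, 80, 200)                  # BAM reference, μg/m³
raw = 1.4 * true_pm + 5 + rng.normal(0, 2, 200)    # biased low-cost reading

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Fit a linear correction raw -> reference by ordinary least squares.
A = np.c_[raw, np.ones_like(raw)]
coef, *_ = np.linalg.lstsq(A, true_pm, rcond=None)
corrected = A @ coef

# Calibration removes the systematic bias, shrinking the RMSE.
print(rmse(raw, true_pm) > rmse(corrected, true_pm))  # True
```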
"Deep learning to predict power output from respiratory inductive plethysmography data" by Erik Johannes B. L. G Husom, Pierre Bernabé, and Sagar Sen. Applied AI letters 3(2), 2022-03-17. DOI: 10.1002/ail2.65.

Power output is one of the most accurate measures of exercise intensity during outdoor endurance sports, since it records the actual effect of the work performed by the muscles over time. However, power meters are expensive and limited to activity forms, such as cycling, where sensors can be embedded in the propulsion system. We investigate using breathing to estimate power output during exercise, in order to create a portable method for tracking physical effort that is universally applicable across activity forms. Breathing can be quantified through respiratory inductive plethysmography (RIP), which records the movement of the rib cage and abdomen caused by breathing and provides a portable, non-invasive way to measure it. RIP signals, heart rate, and power output were recorded during an N-of-1 study of a person performing a set of workouts on a stationary bike. The recorded data were used to build predictive models with deep learning algorithms. A convolutional neural network (CNN) trained on features derived from the RIP signals and heart rate obtained a mean absolute percentage error (MAPE) of 0.20 (i.e., 20% average error). The model showed promising capability in estimating correct power levels and reacting to changes in power output, but its accuracy remains significantly lower than that of cycling power meters.
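The reported error metric, MAPE, is defined as the mean of the absolute errors relative to the true values; a MAPE of 0.20 therefore means predictions deviate from the true power by 20% on average. The numbers below are illustrative.

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error: average of |error| / |true value|.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

# Both predictions are off by 20% of the true value, so MAPE is 0.20.
print(mape([100.0, 200.0], [120.0, 160.0]))  # 0.2
```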
"Qualitative Investigation in Explainable Artificial Intelligence: Further Insight from Social Science" by Adam J. Johs, Denise E. Agosto, and Rosina O. Weber. Applied AI letters, 2022-01-17. DOI: 10.1002/ail2.64.