Pub Date: 2023-12-01 | Epub Date: 2023-10-10 | DOI: 10.3390/ai4040044
Laith R Sultan, Allison Haertter, Maryam Al-Hasani, George Demiris, Theodore W Cary, Yale Tung-Chen, Chandra M Sehgal
With the coronavirus disease 2019 (COVID-19) pandemic, there is an increasing demand for remote monitoring technologies that reduce patient and provider exposure. One field with growing potential is teleguided ultrasound, in which telemedicine and point-of-care ultrasound (POCUS) merge to create this new scope. Teleguided POCUS can minimize staff exposure while preserving patient safety and oversight during bedside procedures. In this paper, we propose the use of teleguided POCUS, supported by AI technologies, for the remote monitoring of COVID-19 patients by inexperienced personnel, including self-monitoring by the patients themselves. Our hypothesis is that AI technologies can facilitate the remote monitoring of COVID-19 patients through the use of POCUS devices, even when operated by individuals without formal medical training. In pursuit of this goal, we performed a pilot analysis to evaluate the performance of users with different clinical backgrounds using a computer-based system for COVID-19 detection from lung ultrasound. The purpose of the analysis was to highlight the potential of the proposed AI technology for improving diagnostic performance, especially for users with less experience.
"Can Artificial Intelligence Aid Diagnosis by Teleguided Point-of-Care Ultrasound? A Pilot Study for Evaluating a Novel Computer Algorithm for COVID-19 Diagnosis Using Lung Ultrasound." AI (Basel, Switzerland). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10623579/pdf/
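As a sketch of the kind of comparison such a pilot analysis involves, the snippet below computes standard diagnostic metrics (sensitivity, specificity, accuracy) for a reader against a reference standard. All names and values here are hypothetical illustrations, not data from the study.

```python
# Minimal sketch: summarizing a reader's diagnostic performance against a
# reference standard, as might be done when comparing users of different
# experience levels. All numbers below are illustrative, not the study's data.

def diagnostic_performance(predictions, ground_truth):
    """Return sensitivity, specificity, and accuracy for binary labels
    (1 = COVID-19 positive on lung ultrasound, 0 = negative)."""
    tp = sum(1 for p, g in zip(predictions, ground_truth) if p == 1 and g == 1)
    tn = sum(1 for p, g in zip(predictions, ground_truth) if p == 0 and g == 0)
    fp = sum(1 for p, g in zip(predictions, ground_truth) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(predictions, ground_truth) if p == 0 and g == 1)
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "accuracy": (tp + tn) / len(ground_truth),
    }

# Hypothetical readings by a novice user, with and without AI assistance.
truth        = [1, 1, 1, 1, 0, 0, 0, 0]
novice_alone = [1, 0, 1, 0, 0, 1, 0, 0]
novice_ai    = [1, 1, 1, 0, 0, 0, 0, 0]

print(diagnostic_performance(novice_alone, truth))
print(diagnostic_performance(novice_ai, truth))
```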
Vagelis Plevris, George Papazafeiropoulos, Alejandro Jiménez Rios
In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of “Original” problems that cannot be found online, while Set B includes “Published” problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots’ answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. 
This is likely due to Bard’s direct access to the internet, unlike the ChatGPT chatbots, which, due to their designs, do not have external communication capabilities.
"Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard". AI (Basel, Switzerland). DOI: 10.3390/ai4040048. Published 2023-10-24.
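The scoring procedure described above (three attempts per question, scored for correctness and checked for consistency) can be sketched as follows; the chatbot names and answers are invented for illustration.

```python
# Illustrative sketch of the evaluation scheme: each chatbot answers each
# question three times; attempts are scored for correctness and checked for
# consistency. The bot names and answers below are made up.

def score_attempts(attempts, correct_answer):
    """Score repeated answers to one question.
    Returns (number of correct attempts, whether all attempts agree)."""
    n_correct = sum(1 for a in attempts if a == correct_answer)
    consistent = len(set(attempts)) == 1
    return n_correct, consistent

# Hypothetical results for one arithmetic question whose answer is 42.
runs = {
    "chatbot_A": ["42", "42", "42"],
    "chatbot_B": ["42", "41", "42"],
    "chatbot_C": ["40", "41", "43"],
}
for name, attempts in runs.items():
    n_correct, consistent = score_attempts(attempts, "42")
    print(f"{name}: {n_correct}/3 correct, consistent={consistent}")
```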
Deep learning is employed in many applications, such as computer vision, natural language processing, robotics, and recommender systems. Large and complex neural networks achieve high accuracy; however, they adversely affect many aspects of deep learning performance, such as training time, latency, throughput, energy consumption, and memory usage in the training and inference stages. To address these challenges, various optimization techniques and frameworks have been developed for the efficient performance of deep learning models in the training and inference stages. Although optimization techniques such as quantization have been studied thoroughly in the past, less work has been done to study the performance of the frameworks that provide quantization techniques. In this paper, we have used different performance metrics to study various quantization frameworks, including TensorFlow automatic mixed precision and TensorRT. These metrics include training time and memory utilization in the training stage, along with latency and throughput for graphics processing units (GPUs) in the inference stage. We have applied the automatic mixed precision (AMP) technique during the training stage using the TensorFlow framework, while for inference we have utilized the TensorRT framework for post-training quantization through the TensorFlow TensorRT (TF-TRT) application programming interface (API). We performed model profiling for different deep learning models, datasets, image sizes, and batch sizes for both the training and inference stages, the results of which can help developers and researchers devise and deploy efficient deep learning models for GPUs.
"Deep Learning Performance Characterization on GPUs for Various Quantization Frameworks" — Muhammad Ali Shafique, Arslan Munir, Joonho Kong. AI (Basel, Switzerland). DOI: 10.3390/ai4040047. Published 2023-10-18.
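As an illustration of the inference-stage metrics discussed above, the sketch below turns per-batch wall-clock timings into mean latency, an approximate p99 latency, and throughput. The timing values are synthetic stand-ins, not measurements from the paper's profiling runs.

```python
# Sketch: summarizing per-batch inference timings (seconds) into the
# latency/throughput metrics used to compare quantization frameworks.
# Timing values below are synthetic, for illustration only.
import statistics

def inference_metrics(batch_times_s, batch_size):
    """Summarize per-batch inference times into latency and throughput."""
    mean_latency = statistics.mean(batch_times_s)
    # Approximate p99 via nearest-rank on the sorted timings.
    p99_latency = sorted(batch_times_s)[int(0.99 * (len(batch_times_s) - 1))]
    throughput = batch_size / mean_latency  # images per second
    return {"mean_latency_s": mean_latency,
            "p99_latency_s": p99_latency,
            "throughput_ips": throughput}

# Synthetic timings, e.g., an FP32 baseline vs. a quantized engine.
fp32_times = [0.040, 0.041, 0.039, 0.040, 0.042]
int8_times = [0.012, 0.013, 0.012, 0.011, 0.012]

print(inference_metrics(fp32_times, batch_size=32))
print(inference_metrics(int8_times, batch_size=32))
```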
The current endeavor of moving AI ethics from theory to practice can frequently be observed in academia and industry, and it marks a major achievement in the theoretical understanding of responsible AI. Its practical application, however, currently poses challenges, as mechanisms for translating the proposed principles into easily feasible actions are often considered unclear and not ready for practice. In particular, practitioners often highlight the lack of uniform, standardized approaches aligned with regulatory provisions as a major obstacle to the practical realization of AI governance. To address these challenges, we propose shifting the focus from solely the trustworthiness of AI products to the perceived trustworthiness of the development process, by introducing a concept for a trustworthy development process for AI systems. We derive this process from a semi-systematic literature analysis of common AI governance documents, identifying the most prominent measures for operationalizing responsible AI and comparing them to the implications for AI providers of EU-centered regulatory frameworks. Assessing the resulting process along derived characteristics of trustworthy processes shows that, although clarity is often mentioned as a major drawback and many AI providers tend to wait for finalized regulations before reacting, the summarized landscape of proposed AI governance mechanisms can already cover many of the binding and non-binding demands circulating around similar activities to address fundamental risks. Furthermore, while many factors of procedural trustworthiness are already fulfilled, limitations remain, particularly due to the vagueness of currently proposed measures, calling for measures to be detailed based on use cases and the system's context.
"From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems" — Ellen Hohma, Christoph Lütge. AI (Basel, Switzerland). DOI: 10.3390/ai4040046. Published 2023-10-13.
Artificial intelligence (AI) is transforming the mortgage market at every stage of the value chain. In this paper, we examine the potential for the mortgage industry to leverage AI to overcome the historical and systemic barriers to homeownership for members of Black, Brown, and lower-income communities. We begin by proposing societal, ethical, legal, and practical criteria that should be considered in the development and implementation of AI models. Based on this framework, we discuss the applications of AI that are transforming the mortgage market, including digital marketing, the inclusion of non-traditional “big data” in credit scoring algorithms, AI property valuation, and loan underwriting models. We conclude that although the current AI models may reflect the same biases that have existed historically in the mortgage market, opportunities exist for proactive, responsible AI model development designed to remove the systemic barriers to mortgage credit access.
"Algorithms for All: Can AI in the Mortgage Market Expand Access to Homeownership?" — Vanessa G. Perry, Kirsten Martin, Ann Schnare. AI (Basel, Switzerland). DOI: 10.3390/ai4040045. Published 2023-10-11.
Eryn Rigley, Adriane Chapman, Christine Evers, Will McNeill
As AI deployment has broadened, so too has awareness of the ethical implications and problems that may ensue from this deployment. In response, groups across multiple domains have issued AI ethics standards that rely on vague, high-level principles to find consensus. One such high-level principle that is common across the AI landscape is ‘human-centredness’, though it is often applied without due investigation into its merits and limitations and without a clear, common definition. This paper undertakes a scoping review of AI ethics standards to examine the commitment to ‘human-centredness’ and how this commitment interacts with other ethical concerns, namely, concerns for nonhuman animals and environmental wellbeing. We found that human-centred AI ethics standards tend to prioritise humans over nonhumans more so than nonhuman-centred standards do. A critical analysis of our findings suggests that a commitment to human-centredness within AI ethics standards accords with the definition of anthropocentrism in moral philosophy: that humans have, at least, more intrinsic moral value than nonhumans. We consider some of the limitations of anthropocentric AI ethics, which include permitting harm to the environment and animals and undermining the stability of ecosystems.
"Anthropocentrism and Environmental Wellbeing in AI Ethics Standards: A Scoping Review and Discussion". AI (Basel, Switzerland). DOI: 10.3390/ai4040043. Published 2023-10-08.
Leilasadat Mirghaderi, Monika Sziron, Elisabeth Hildt
There is an ever-increasing application of digital platforms that utilize artificial intelligence (AI) in our daily lives. In this context, the matters of transparency and accountability remain major concerns that are yet to be effectively addressed. The aim of this paper is to identify the zones of non-transparency in the context of digital platforms and to provide recommendations for addressing transparency issues on digital platforms. First, by surveying the literature, reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and drawing on AI ethics, we identify zones of non-transparency in the context of digital platforms. Second, after identifying the zones of non-transparency, we go beyond a mere summary of the existing literature and provide our perspective on how to address the concerns raised. Based on our survey of the literature, we find that three major zones of non-transparency exist in digital platforms: a lack of transparency with regard to who contributes to platforms; a lack of transparency with regard to who works behind platforms, the contributions of those workers, and the working conditions of digital workers; and a lack of transparency with regard to how algorithms are developed and governed. Considering the abundance of high-level principles in the literature that cannot be easily operationalized, this is an attempt to bridge the gap between principles and operationalization.
"Ethics and Transparency Issues in Digital Platforms: An Overview". AI (Basel, Switzerland). DOI: 10.3390/ai4040042. Published 2023-09-28.
Ioannis D. Apostolopoulos, Mpesi Tzani, Sokratis I. Aznaouridis
Fruit quality is a critical factor in the produce industry, affecting producers, distributors, consumers, and the economy. High-quality fruits are more appealing, nutritious, and safe, boosting consumer satisfaction and revenue for producers. Artificial intelligence can aid in assessing fruit quality from images. This paper presents a general machine learning model for assessing fruit quality using deep image features. The model leverages the learning capabilities of vision transformers (ViT), a recent and successful family of networks for image classification. The ViT model is built and trained on a combination of various fruit datasets and taught to distinguish between good and rotten fruit images based on their visual appearance rather than predefined quality attributes. The general model demonstrated impressive results in accurately identifying the quality of various fruits, such as apples (99.50% accuracy), cucumbers (99%), grapes (100%), kakis (99.50%), oranges (99.50%), papayas (98%), peaches (98%), tomatoes (99.50%), and watermelons (98%).
However, it showed slightly lower performance in identifying guavas (97%), lemons (97%), limes (97.50%), mangoes (97.50%), pears (97%), and pomegranates (97%).
"A General Machine Learning Model for Assessing Fruit Quality Using Deep Image Features". AI (Basel, Switzerland). DOI: 10.3390/ai4040041. Published 2023-09-27.
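The per-fruit accuracies reported above could be tabulated from model predictions along these lines; the records below are illustrative, not the paper's data.

```python
# Sketch: computing per-fruit accuracy from (fruit, predicted, true) records,
# with labels 1 = good and 0 = rotten. All records below are made up.
from collections import defaultdict

def per_class_accuracy(records):
    """records: iterable of (fruit_name, predicted_label, true_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for fruit, pred, true in records:
        total[fruit] += 1
        correct[fruit] += int(pred == true)
    return {fruit: correct[fruit] / total[fruit] for fruit in total}

records = [
    ("apple", 1, 1), ("apple", 0, 0), ("apple", 1, 1), ("apple", 1, 0),
    ("grape", 1, 1), ("grape", 0, 0),
]
print(per_class_accuracy(records))  # {'apple': 0.75, 'grape': 1.0}
```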
M. S. Shyam Sunder, Vinay Anand Tikkiwal, Arun Kumar, Bhishma Tyagi
Aerosols play a crucial role in the climate system through direct and indirect effects, such as scattering and absorbing radiant energy. They also have adverse effects on visibility and human health. Humans are exposed to fine PM2.5, which has adverse health impacts related to cardiovascular and respiratory diseases. Long-term trends in PM concentrations are influenced by emissions and meteorological variations, while meteorological factors primarily drive short-term variations. Factors such as vegetation cover, relative humidity, temperature, and wind speed affect the variation in surface PM2.5 concentrations. Machine learning has proved to be a good predictor of air quality. This study focuses on predicting PM2.5 with these parameters as inputs for spatial and temporal information. The work analyzes in situ observations of PM2.5 over Singapore for seven years (2014–2021) at five locations, and these datasets are used for the spatial prediction of PM2.5. The study aims to provide a novel framework for temporal prediction using Random Forest (RF), Gradient Boosting (GB) regression, and the Tree-based Pipeline Optimization Tool (TP), an AutoML approach based on a genetic-algorithm meta-heuristic. TP produced reasonable Global Performance Index (GPI) values; the highest GPI value was 7.4 in August 2016, and the lowest was −0.6 in June 2019. This indicates the positive performance of the TP model; even its negative values are smaller than those of the other models, denoting less pessimistic predictions. The outcomes are explained with eXplainable Artificial Intelligence (XAI) techniques, which help investigate the fidelity of the feature importance of the machine learning models and extract information regarding the rhythmic shift of the PM2.5 pattern.
Title: "Unveiling the Transparency of Prediction Models for Spatial PM2.5 over Singapore: Comparison of Different Machine Learning Approaches with eXplainable Artificial Intelligence"
Pub Date: 2023-09-27 DOI: 10.3390/ai4040040, AI (Basel, Switzerland), Journal Article
Wenbin Li, Hakim Hacid, Ebtesam Almazrouei, Merouane Debbah
The union of Edge Computing (EC) and Artificial Intelligence (AI) has brought forward the Edge AI concept, which provides intelligent solutions close to the end-user environment for privacy preservation, low-latency to real-time performance, and resource optimization. Machine Learning (ML), the most advanced branch of AI in recent years, has shown encouraging results and applications in the edge environment. Nevertheless, edge-powered ML solutions are more complex to realize due to the joint constraints of the edge computing and AI domains, and the corresponding solutions must be efficient and well adapted in technologies such as data processing, model compression, distributed inference, and advanced learning paradigms that meet Edge ML requirements. Although Edge ML has garnered a great deal of attention in both the academic and industrial communities, a complete survey of existing Edge ML technologies that provides a common understanding of the concept is still lacking. To address this gap, this paper provides a comprehensive taxonomy and a systematic review of Edge ML techniques, focusing on the soft computing aspects of existing paradigms and techniques. We start by identifying the Edge ML requirements driven by the joint constraints. We then extensively survey more than twenty paradigms and techniques, along with their representative work, covering two main parts: edge inference and edge learning. In particular, we analyze how each technique fits into Edge ML by meeting a subset of the identified requirements. We also summarize Edge ML frameworks and open issues to shed light on future directions for Edge ML.
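One of the compression techniques such a survey covers, post-training quantization, can be sketched in a few lines. The example below is illustrative only, not from the paper: symmetric per-tensor int8 quantization of a weight vector, among the simplest Edge ML model-compression steps.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: map float weights to int8
    values using a single per-tensor scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each quantized value fits in one byte instead of a 4-byte float,
# and the round-trip error is bounded by half the quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production toolchains add per-channel scales, zero-points for asymmetric ranges, and calibration data, but the accuracy-for-footprint trade-off shown here is the core idea.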
Title: "A Comprehensive Review and a Taxonomy of Edge Machine Learning: Requirements, Paradigms, and Techniques"
Pub Date: 2023-09-13 DOI: 10.3390/ai4030039, AI (Basel, Switzerland), Journal Article