Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia2202336
Nurul Izzati Saleh, Mohamad Taha Ijab
The service sector is a key focus of the Fourth Industrial Revolution (IR4.0), a digital revolution that affects all industries. A key component of IR4.0 is the introduction and uptake of new technologies by organizations, including artificial intelligence (AI), big data analytics, the Internet of Things (IoT), cloud computing, augmented reality, simulation, cybersecurity, systems integration, additive manufacturing, and robotics and autonomous systems. According to research, 59% of businesses with expertise in big data and IoT also employ AI technologies. By developing, adopting, and integrating technology solutions into the workforce and industries, industry participants can raise their readiness and their use of these technologies, thereby increasing productivity growth. According to a survey of the literature, Malaysia in particular still shows a low to medium degree of industry readiness for IR4.0. The purpose of this paper is to conduct a systematic literature review to understand the IR4.0 readiness models discussed in the literature, the driving and impeding forces behind IR4.0 readiness, and the use of self-evaluation tools by industry participants to gauge their own IR4.0 readiness level. Six prominent online databases were used in the review: Scopus, Emerald Insight, IEEE, Springer, Web of Science, and Science Direct. After rigorous screening, 55 of the 10,428 initially retrieved articles were selected based on the study's inclusion and exclusion criteria. The review finds that readiness models are frequently framed around a variety of theories and theoretical constructs, including information systems success models, acceptance theories, and pertinent maturity and readiness theories. Several factors frequently play a dual role, acting as both driving and inhibiting influences: funding, infrastructure, regulation, skills and competency, technology, and commitment. Based on the synthesized literature, this study proposes an IR4.0 Readiness and Implementation Framework for industry. The framework seeks to help industry participants deploy IR4.0 in stages and gradually increase their IR4.0 readiness levels.
Title: Industrial Revolution 4.0 (IR4.0) Readiness Among Industry Players: A Systematic Literature Review
Journal: Artificial intelligence and applications (Commerce, Calif.)
Pub Date: 2023-01-01 | DOI: 10.5121/csit.2023.131803
Takanori Asano, Yoshiaki Yasumura
In this paper, for the task of estimating the depth map of a scene from a single RGB image, we present an estimation method using a deep learning model that incorporates size perspective (size constancy cues). By utilizing size perspective, the proposed method aims to address a core difficulty of depth estimation: the limited correlation between the information inherent to objects in RGB images (such as shape and color) and their corresponding depths. The proposed method consists of two deep learning models, a size-perspective model and a depth estimation model. The size-perspective model plays a role analogous to size perspective, estimating an approximate depth for each object in the image based on the size of the object's bounding box and its actual size. From these rough depth estimation (pre-depth estimation) results, a depth image representing the rough depth of each object (the pre-depth image) is generated and input together with the RGB image into the depth estimation model. The pre-depth image serves as a hint for depth estimation and improves the performance of the depth estimation model. With the proposed method, depth inputs for the depth estimation model can be obtained beforehand without using any device other than a monocular camera. The proposed method improves accuracy when the image contains objects that the object detection model can detect. In experiments on an original indoor scene dataset, the proposed method demonstrated improved accuracy compared to the method without pre-depth images.
Title: Monocular Depth Estimation Using a Deep Learning Model with Pre-Depth Estimation based on Size Perspective
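The size-perspective (pre-depth) step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the focal length, the catalogue of typical object sizes, and all function names are assumptions, using the pinhole relation depth ≈ focal_length_px × real_height / bbox_height_px.

```python
# Illustrative sketch: rough per-object depth from bounding-box size,
# rasterized into a "pre-depth" image (hypothetical names and values).
import numpy as np

# Hypothetical catalogue of typical object heights in metres.
TYPICAL_HEIGHT_M = {"chair": 0.9, "door": 2.0, "monitor": 0.45}

def pre_depth_image(detections, image_shape, focal_px=1000.0):
    """detections: list of (label, x0, y0, x1, y1) boxes in pixels."""
    depth = np.zeros(image_shape, dtype=np.float32)  # 0 marks unknown depth
    for label, x0, y0, x1, y1 in detections:
        h_px = max(y1 - y0, 1)
        z = focal_px * TYPICAL_HEIGHT_M[label] / h_px  # pinhole relation
        depth[y0:y1, x0:x1] = z  # fill the box with its rough depth
    return depth

img = pre_depth_image([("door", 10, 0, 60, 200), ("chair", 100, 120, 160, 210)],
                      image_shape=(240, 320))
print(round(float(img[100, 30]), 2))  # door at ~1000 * 2.0 / 200 = 10.0 m
```

A real pipeline would feed this array, stacked with the RGB image, into the depth estimation network.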
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia3202975
Leendert A. Remmelzwaal
Wildfires pose a significant threat to human lives, property, and the environment. Rapid response during a fire's early stages is critical to minimizing damage and danger. Traditional wildfire detection methods often rely on reports from bystanders, leading to delays in response times and the possibility of fires growing out of control. In this paper, we ask the question: “Can AI object detection improve wildfire detection and response times?” We present an innovative early fire detection system that leverages state-of-the-art hardware, artificial intelligence (AI)-powered object detection, and seamless integration with emergency services to significantly improve wildfire detection and response times. Our system employs high-definition panoramic cameras, solar-powered energy sources, and a sophisticated communication infrastructure to monitor vast landscapes in real time. The AI model at the core of the system analyzes images captured by the cameras every 60 seconds, identifying early smoke patterns indicative of fires and promptly notifying the fire department. We detail the system architecture, AI model framework, training process, and results obtained during testing and validation. The system demonstrates its effectiveness in detecting and reporting fires, reducing response times, and improving emergency services coordination. We have demonstrated that AI object detection can be an invaluable tool in the ongoing battle against wildfires, ultimately saving lives, property, and the environment.
Title: An AI-based Early Fire Detection System Utilizing HD Cameras and Real-time Image Analysis
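The 60-second analysis loop described above can be sketched as follows. The capture, detection, and notification callables here are mock stand-ins, not the system's actual interfaces; only the polling structure reflects the abstract.

```python
# Minimal sketch of a periodic capture-analyze-notify loop.
# capture_frame, detect_smoke, and notify are placeholders for the camera
# feed, the AI object-detection model, and the fire-department alert.
import time

def monitor(capture_frame, detect_smoke, notify, interval_s=60, max_cycles=3):
    """Poll the camera every `interval_s` seconds; alert on detection."""
    alerts = 0
    for _ in range(max_cycles):          # bounded here for demonstration
        frame = capture_frame()
        if detect_smoke(frame):          # model inference on one panorama
            notify(frame)                # e.g. push an alert downstream
            alerts += 1
        time.sleep(0)  # a real deployment would sleep(interval_s)
    return alerts

# Mock run: the second of three frames "contains smoke".
frames = iter([0, 1, 0])
n = monitor(lambda: next(frames), lambda f: f == 1, lambda f: None)
print(n)  # 1
```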
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia2202303
Ejay Nsugbe
Male fertility appears to be declining, prompting the need for more effective and accessible means of assessment. Artificial intelligence methods have proven effective at predicting semen quality from a questionnaire-based information source comprising factors from the medical literature that have been observed to influence semen quality. Prior work has applied supervised learning to the prediction of semen quality, but since supervised learning hinges on the provision of data class labels, it depends on an external intelligence intervention, which can translate into further costs and resources in practical settings. In contrast, unsupervised learning methods partition data into clusters and groups based on an objective function, do not rely on the provision of class labels, and can allow for a fully automated prediction platform. In this paper, we apply three unsupervised learning models with different architectures, namely the Gaussian mixture model (GMM), K-means, and spectral clustering (SC), alongside low-dimensional embedding methods that include the sparse autoencoder (SAE), principal component analysis (PCA), and robust PCA. The best results were obtained with a combination of the SAE and the SC algorithm, likely due to SC's nonspecific and arbitrary cluster-shape assumption. Further work would involve exploring unsupervised learning algorithms with frameworks similar to SC, to investigate the extent to which the various clusters can be learned with maximal accuracy.
Title: Toward a Self-Supervised Architecture for Semen Quality Prediction Using Environmental and Lifestyle Factors
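A minimal sketch of the label-free clustering idea above, using plain k-means (the simplest of the three models compared) on synthetic two-blob data. This is illustrative only, not the study's code or data; samples are partitioned purely by the distance-based objective, with no class labels.

```python
# Tiny k-means from scratch: assign each point to its nearest center,
# then move each center to the mean of its assigned points.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new_centers = []
        for j in range(k):
            pts = X[labels == j]
            # keep the old center if a cluster happens to empty out
            new_centers.append(pts.mean(0) if len(pts) else centers[j])
        centers = np.array(new_centers)
    return labels

# Two well-separated synthetic "clusters" stand in for questionnaire features.
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]], float)
labels = kmeans(X, k=2)
print(labels[0] != labels[3])  # True: the two blobs land in different clusters
```

Spectral clustering differs in that it clusters an eigenvector embedding of a similarity graph, which is what frees it from the roughly spherical cluster-shape assumption made here.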
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia3202743
Eduardo Teófilo-Salvador, P. Ambrocio-Cruz, Margarita Rosado-Solís
Currently, technological development has exponentially fostered a growing collection of dispersed and diversified information. In galaxy interaction studies, it is important to identify and recognize the parameters in the process and the tools and computational codes available, so as to select the appropriate one depending on the availability of data. The objective was to characterize the parameters, techniques, and methods developed, as well as the computational codes for numerical simulation. From the bibliography, we reviewed how various authors have studied interaction, the presence of gas, and star formation, and then reviewed the computer codes, with their requirements and benefits, to analyze and compare initial and boundary conditions. Using images, a CNN method programmed in Python was applied to identify differences and their possible accuracy. SPH codes offer more robust algorithms, invariance, simple implementation, and flexible geometries, but do not parameterize artificial viscosities, discontinuous solutions, and instabilities. AMR codes avoid artificial viscosity, resolve discontinuities, and suppress instabilities, but come with complex implementation, mesh details, and resolution problems, and they are not scalable. Indirect methods are necessary to infer some properties or to perform preliminary iterations. The availability of observable data governs the behavior of possible numerical simulations, along with access to tools such as a supercomputer; the errors generated can be adjusted, compared, or verified according to the techniques and methods shown in this study. Notably, some codes that are not so well known stand out as those currently most applied.
Title: Methodological Characterization and Computational Codes in the Simulation of Interacting Galaxies
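The SPH codes discussed above rest on kernel-weighted sums over particles. As a textbook sketch (standard SPH, not taken from any specific code in the review), the density at each particle is estimated with the cubic spline kernel:

```python
# SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h),
# with the standard Monaghan cubic spline kernel in 3D (support radius 2h).
import numpy as np

def cubic_spline_W(r, h):
    """Cubic spline smoothing kernel, 3D normalization 1 / (pi h^3)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def density(positions, masses, h):
    """Pairwise kernel-weighted mass sum (O(n^2); real codes use neighbor lists)."""
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    return (masses[None, :] * cubic_spline_W(d, h)).sum(axis=1)

pos = np.array([[0.0, 0, 0], [0.5, 0, 0], [1.0, 0, 0]])
rho = density(pos, np.ones(3), h=0.5)
print(rho.shape)  # (3,)
```

Because the kernel is smooth, no artificial viscosity is built into the density estimate itself; SPH codes add it separately in the momentum equation to capture shocks, which is the tuning burden noted above.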
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia3202549
Srikanta Pal, Ayush Roy, Palaiahnakote Shivakumara, Umapada Pal
The use of drones and unmanned aerial vehicles has significantly increased in various real-world applications such as monitoring illegal car parking, tracing vehicles, controlling traffic jams, and chasing vehicles. However, accurate detection of license plate numbers in drone images becomes complex and challenging due to variations in height distances and oblique angles during image capturing, unlike most existing methods that focus on normal images for text/license plate number detection. To address this issue, this work proposes a new model for License Plate Number Detection in Drone Images using Swin Transformer. The Swin Transformer is chosen due to its special properties such as higher accuracy, efficiency, and fewer computations, making it suitable for license plate number/text detection in drone images. To further improve the performance of the proposed model under adverse conditions such as degradations, poor quality, and occlusion, the proposed work incorporates a Maximally Stable Extremal Regions (MSER) based Regional Proposal Network (RPN) to represent text data in the images. Experimental results on both normal license plates and drone images demonstrate the superior performance of the proposed model over state-of-the-art methods.
Title: Adapting a Swin Transformer for License Plate Number and Text Detection in Drone Images
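For context on how detection results such as these are typically scored against ground truth, a minimal intersection-over-union helper (illustrative, not from the paper) looks like this:

```python
# Intersection over union of two axis-aligned boxes given as (x0, y0, x1, y1).
# A detection usually counts as correct when IoU with a ground-truth plate
# exceeds a threshold such as 0.5.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix1 - ix0, 0) * max(iy1 - iy0, 0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 = 0.142857...
```

The oblique viewing angles from drones are exactly what shrink this overlap for axis-aligned boxes, which is one reason tilted-plate detection is harder than the frontal case.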
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021406
Mengyao Kang, Jiawei Zhao, Farnaz Farid
Machine learning-based prediction models have the potential to revamp various industries, and one such promising area is healthcare. This study demonstrates the potential impact of machine learning in healthcare, particularly in managing patients with Chronic Obstructive Pulmonary Disease (COPD). The experimental results showcase the remarkable performance of machine learning models, surpassing doctors' predictions for COPD patients. Among the evaluated models, the Gradient Boosted Decision Tree classifier emerges as the top performer, displaying exceptional classification accuracy, precision, recall, and F1-Score compared to doctors' experience. Notably, the comparison between the best machine learning model and doctors' predictions reveals an interesting pattern: machine learning models tend to be more conservative, resulting in an increased probability of patient recovery.
Title: Implications of Classification Models for Patients with Chronic Obstructive Pulmonary Disease
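The evaluation metrics named above (accuracy, precision, recall, F1-score) all derive from a binary confusion matrix. A small self-contained sketch, with synthetic labels rather than the study's patient data:

```python
# Binary classification metrics from true/predicted label lists (1 = positive).
def metrics(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

acc, prec, rec, f1 = metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(acc)  # 0.6; precision = recall = F1 = 2/3 here
```

Comparing these four numbers for the model versus the clinicians' predictions is exactly the kind of comparison the study reports.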
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia2202359
Xing-Ming Long, Yu-Jie Chen, Jing Zhou
Augmented reality (AR), an advanced information and communication technology, has attracted widespread attention in physics teaching for its potential to reduce equipment costs, enhance comprehension of abstract concepts, and counteract students' diminishing interest. Although fruitful applications have been created with AR frameworks on standard devices such as head-mounted displays, image detection-based frameworks can rarely integrate virtual assets with real objects while also supporting inputs of invisible elements, which makes it difficult to keep the augmented information evolving with a physical quantity, such as current, in an AR-based physics experiment. In this paper, an open AR framework with simulation-based assets triggered by user-defined inputs is proposed to allow physics teachers to create their own AR teaching materials. Taking the teaching of electric-thermal effects through thermoelectric cooler-based thermal management of a power device as an example, the proposed framework is illustrated through three key tasks: (1) simple and accurate generation of assets from finite element simulation; (2) low-cost and convenient measurement of physical quantities by a microcomputer unit with Bluetooth connectivity; and (3) real-time, adaptable control of the simulation-based assets by the measured data using a thread-based Python script. Experimental results show that students are excited by the AR application, which they interact with by actually varying the current, and that physics teachers find it easy to design and deploy the simulation-based framework with user-defined inputs to create their own AR learning materials and spark growing student interest.
Title: Development of AR Experiment on Electric-Thermal Effect by Open Framework with Simulation-Based Asset and User-Defined Input
Authors: Xing-Ming Long, Yu-Jie Chen, Jing Zhou
DOI: 10.47852/bonviewaia2202359 (https://doi.org/10.47852/bonviewaia2202359)
Journal: Artificial intelligence and applications (Commerce, Calif.), published 2023-01-01
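The third task above, real-time control of simulation-based assets from measured data via a thread-based Python script, can be sketched roughly as follows. The class name, the polling design, and the mapping from a measured current to a precomputed finite-element frame are illustrative assumptions; the paper's actual implementation is not reproduced here.

```python
import threading
import time

class AssetController:
    """Polls a measurement source (e.g. current read over Bluetooth serial)
    on a background thread and keeps the latest simulation-based asset
    frame in sync with the measured value."""

    def __init__(self, read_measurement, frame_for):
        self._read = read_measurement   # callable returning the latest measured value
        self._frame_for = frame_for     # maps a measured value to a precomputed FEM frame
        self._lock = threading.Lock()
        self._frame = None
        self._stop = threading.Event()
        self._thread = None

    def _loop(self, period_s):
        while not self._stop.is_set():
            value = self._read()
            with self._lock:
                self._frame = self._frame_for(value)
            time.sleep(period_s)

    def start(self, period_s=0.1):
        self._thread = threading.Thread(
            target=self._loop, args=(period_s,), daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def latest_frame(self):
        """Called from the AR render loop to fetch the current asset frame."""
        with self._lock:
            return self._frame
```

In an AR render loop, `latest_frame()` would be queried each frame, so the displayed thermal field tracks the measured current without blocking rendering.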
Pub Date: 2023-01-01, DOI: 10.47852/bonviewaia2202524
Sandipan Choudhuri, Suli Adeniye, Arunabha Sen
In this work, we address a realistic case of unsupervised domain adaptation in which the source label set subsumes that of the target. Relaxing the identical-label-set assumption of the standard closed-set variant introduces the challenging obstacle of negative transfer, which can mislead the learning process away from the intended target classification objective. To counteract this issue, we propose a novel framework for the partial domain adaptation setup that enforces domain- and category-level alignments through optimization of intra- and inter-class distances, uncertainty suppression on classifier predictions, and target supervision with adaptive consensus-based sample filtering. We aim to modify the latent space arrangement so that samples from identical classes reside in close proximity while those from distinct classes are well separated, in a domain-agnostic fashion. In addition, the proposed model addresses the challenging issue of uncertainty propagation by employing a complement entropy objective that requires the incorrect classes to have uniformly distributed low prediction probabilities. Target supervision is ensured by a robust technique for adaptive pseudo-label generation using a nonparametric classifier: supervision is permitted only from target samples whose prediction probabilities exceed an adaptive threshold. We conduct experiments on a range of partial domain adaptation tasks over two benchmark datasets to thoroughly assess the proposed model's performance against state-of-the-art methods. In addition, we performed an ablation study to validate the necessity of the incorporated modules and highlight their contribution to the proposed framework. The experimental findings demonstrate the superior performance of the proposed model compared to the benchmarks.
Title: Distribution Alignment Using Complement Entropy Objective and Adaptive Consensus-Based Label Refinement For Partial Domain Adaptation
Authors: Sandipan Choudhuri, Suli Adeniye, Arunabha Sen
DOI: 10.47852/bonviewaia2202524 (https://doi.org/10.47852/bonviewaia2202524)
Journal: Artificial intelligence and applications (Commerce, Calif.), published 2023-01-01
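The complement entropy idea mentioned in the abstract can be illustrated numerically: renormalize each sample's predicted probabilities over the incorrect classes and score their entropy, which is maximal exactly when the wrong classes share a uniform low-probability profile. This is a generic sketch of the idea, not the authors' exact formulation.

```python
import numpy as np

def complement_entropy(probs, labels):
    """Mean entropy of the renormalized prediction over the *incorrect*
    classes, normalized to [0, 1]. A value of 1 means the wrong classes
    receive a perfectly uniform share of the leftover probability mass;
    maximizing this quantity suppresses confident wrong predictions."""
    n, k = probs.shape
    # Mask out the true class for each sample, keeping the k-1 others.
    mask = np.ones_like(probs, dtype=bool)
    mask[np.arange(n), labels] = False
    comp = probs[mask].reshape(n, k - 1)
    # Renormalize the complement distribution per sample.
    comp = comp / np.clip(comp.sum(axis=1, keepdims=True), 1e-12, None)
    ent = -(comp * np.log(np.clip(comp, 1e-12, None))).sum(axis=1)
    return ent.mean() / np.log(k - 1)
```

With three classes and a true label of 0, predictions `[0.7, 0.15, 0.15]` score 1.0 (uniform complement), while `[0.7, 0.3, 0.0]` score 0.0 (all leftover mass piled on one wrong class).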
Pub Date: 2022-10-29, DOI: 10.5121/csit.2022.121811
Fatema Nafa, Ryan Kanoff
Considering the current state of the COVID-19 pandemic, vaccine research and production are more important than ever. Antibodies recognize epitopes, the immunogenic regions of an antigen, in a highly specific manner to trigger an immune response. Predicting such locations is extremely difficult, yet they have substantial implications for complex humoral immunogenicity pathways. This paper presents a machine learning epitope prediction model. The study builds several models to test the accuracy of B-cell epitope prediction based solely on protein features, with the goal of establishing a quantitative comparison of the accuracy of three machine learning models: XGBoost, CatBoost, and LightGBM. The XGBoost and LightGBM models achieved similar accuracy, while the CatBoost model had the highest accuracy at 82%. Although this accuracy is not high enough to be considered reliable, it warrants further research on the subject.
Title: Machine Learning based to Predict B-Cell Epitope Region Utilizing Protein Features
Authors: Fatema Nafa, Ryan Kanoff
DOI: 10.5121/csit.2022.121811 (https://doi.org/10.5121/csit.2022.121811)
Journal: Artificial intelligence and applications (Commerce, Calif.), published 2022-10-29
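The comparison workflow described in the abstract, training gradient-boosted tree models on protein features and scoring holdout accuracy, can be sketched as below. XGBoost, CatBoost, and LightGBM all expose a similar fit/predict interface; scikit-learn's GradientBoostingClassifier stands in here, and the synthetic features are placeholders, since the paper's dataset and hyperparameters are not given in this abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-residue protein features (e.g. hydrophobicity,
# relative surface accessibility); labels mark epitope vs. non-epitope regions.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"holdout accuracy: {acc:.2f}")
```

Swapping in `xgboost.XGBClassifier`, `catboost.CatBoostClassifier`, or `lightgbm.LGBMClassifier` at the `model =` line reproduces the three-way comparison the paper reports.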