Title: A Robust SLIC Based Approach for Segmentation using Canny Edge Detector
Authors: S. Pal, Ayush Roy, P. Shivakumara, U. Pal
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021196

Accurate image segmentation in a noisy environment is complex and challenging. Unlike existing state-of-the-art methods that use superpixels for successful segmentation, we propose a new approach for noise-robust SLIC (Simple Linear Iterative Clustering) segmentation that incorporates a Canny edge detector. By leveraging Canny edge information, the proposed method modifies the pixel intensity distance measure to overcome the boundary adherence challenge. Furthermore, we adopt a selective approach to updating cluster centers, focusing on pixels that contribute less to the noise. Extensive experiments on synthetic noisy images demonstrate the effectiveness of our approach: it significantly improves SLIC's performance in noisy image segmentation and boundary adherence, making it a promising technique for vision processing tasks.
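As a rough illustration of the distance measure the abstract refers to, the sketch below shows standard SLIC's combined color-spatial distance together with a hypothetical edge-based penalty. The function name, parameters, and the specific edge weighting are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

# Sketch of SLIC's combined color-spatial distance plus a hypothetical
# edge-based penalty (illustrative; not the paper's exact formulation).
def slic_distance(pix_lab, center_lab, pix_xy, center_xy,
                  S=10.0, m=10.0, on_edge=False, edge_weight=2.0):
    """S: grid interval between cluster centers; m: compactness weight."""
    d_color = np.linalg.norm(pix_lab - center_lab)   # CIELAB distance
    d_space = np.linalg.norm(pix_xy - center_xy)     # pixel-grid distance
    d = np.sqrt(d_color**2 + (d_space / S) ** 2 * m**2)
    # Assumed modification: penalize assignments that cross a Canny edge,
    # so superpixels adhere better to object boundaries under noise.
    return d * edge_weight if on_edge else d

print(slic_distance(np.zeros(3), np.zeros(3),
                    np.array([0.0, 0.0]), np.array([3.0, 4.0])))  # 5.0
```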
Title: Plato's Philosophy and Cloud Computing System with a Cognitive Approach
Authors: S. Seker, T. Akinci, Ahmet Ozturk
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021298

This study presents Plato's metaphysical world under the interpretation of today's cloud computing technology. In this sense, the world of ideas in Plato's philosophy is defined as a cloud system, and some cognitive descriptions are made in this direction. Hence, in this study, ancient philosophy is described as a foundation of today's technology. Plato's philosophy is also interpreted as a technological reflection of philosophy and the human thinking system.
Title: Playing Blackjack Using Computer Vision
Authors: Anas Akkar, Sam Cregan, Justin Cassens, Maame Araba Vander-Pallen, Tauheed Khan Mohd
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia3202962

The field of computer vision is rapidly evolving, with a focus on analyzing, manipulating, and understanding images at a sophisticated level. The primary objective of the discipline is to interpret visual input from cameras and use that knowledge to control computer or robotic systems or to generate more informative and visually appealing images. Its potential applications are wide-ranging and include video surveillance, biometrics, automotive systems, photography, movie production, web search, medicine, augmented reality gaming, and novel user interfaces, among many others. This paper outlines how computer vision technology can be utilized to achieve a winning outcome in the game of Blackjack. Blackjack has long captivated enthusiasts and players worldwide, and one area of particular interest is the development of a winning strategy that maximizes the player's chances of success. With the advent of sophisticated computer algorithms and machine learning techniques, there is enormous potential for research in this area. This paper explores game-winning strategies for Blackjack, with a particular focus on utilizing advanced analytical methods to identify optimal plays. By analyzing large data sets and leveraging the power of predictive modeling, we aim to create a robust and reliable framework for achieving consistent success in this popular casino game. We believe that this research avenue holds enormous promise for unlocking new insights into the game of Blackjack and developing a more comprehensive understanding of its intricacies.
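The "optimal plays" such a framework would output can be illustrated with a toy fragment of standard Blackjack basic strategy for hard totals. This lookup is general Blackjack knowledge, not the strategy derived in the paper:

```python
# Toy fragment of standard Blackjack basic strategy for hard totals
# (general Blackjack knowledge, not the paper's learned strategy).
def hard_total_action(player_total, dealer_upcard):
    """Return 'hit' or 'stand' for a hard hand; dealer_upcard is 2-11."""
    if player_total >= 17:
        return "stand"
    if 13 <= player_total <= 16:        # stand against a weak dealer card
        return "stand" if dealer_upcard <= 6 else "hit"
    if player_total == 12:
        return "stand" if 4 <= dealer_upcard <= 6 else "hit"
    return "hit"                        # 11 or less: always take a card

print(hard_total_action(16, 10))  # hit
print(hard_total_action(13, 5))   # stand
```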
Title: A Task Performance and Fitness Predictive Model Based on Neuro-Fuzzy Modeling
Authors: Femi Johnson, O. Adebukola, O. Ojo, Adejimi Alaba, Opakunle Victor
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021010

Recruiters' decisions in selecting candidates for specific job roles depend not only on physical attributes and academic qualifications but also on the fitness of candidates for the specified tasks. In this paper, we propose and develop a simple neuro-fuzzy-based task performance and fitness model for candidate selection. This is accomplished by obtaining samples of task performance-related data of employees in various firms from Kaggle (an online database). The data were preprocessed and divided into 60%, 20%, and 20% portions for training, validating, and testing the developed neuro-fuzzy-based task performance model, respectively. The most significant factors influencing workers' performance and fitness ratings were selected from the database using the Principal Components Analysis (PCA) ranking technique. The effectiveness of the proposed model was assessed and found to yield an accuracy of 0.997, a Root Mean Square Error (RMSE) of 0.08, and a Mean Absolute Error (MAE) of 0.042.
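The 60/20/20 split and PCA-based feature ranking described above can be sketched as follows. The synthetic matrix `X` is a stand-in for the Kaggle employee-performance records, which are not reproduced here:

```python
import numpy as np

# Sketch of a 60/20/20 train/validation/test split and a PCA-style
# ranking of features by explained variance. The synthetic matrix X
# stands in for the Kaggle employee-performance records.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))              # 500 records, 8 features

idx = rng.permutation(len(X))
train, val, test = np.split(idx, [int(0.6 * len(X)), int(0.8 * len(X))])

# PCA via eigendecomposition of the training covariance matrix;
# eigenvalues rank components by the variance they explain.
cov = np.cov(X[train], rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]    # descending order
explained = eigvals / eigvals.sum()

print(len(train), len(val), len(test))     # 300 100 100
```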
Title: Navigating Applied Artificial Intelligence (AI) in the Digital Era: How Smart Buildings and Smart Cities Become the Key to Sustainability
Authors: B. Weber-Lewerenz, M. Traverso
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021063

This paper aims to understand the critical path of digital transformation in construction by investigating major drivers of technical innovation, e.g., in smart cities. Despite newly available technologies and increasing societal pressure, environmental pressure, and data complexity, the sector lacks both the will to innovate and qualified personnel. The study identifies the potential of innovation and the pillars of sustainability to define ways to responsibly use data-driven, smart technologies in smart cities throughout their holistic life cycles. A mix of expert interview surveys and structured literature analysis forms the basis for examining the status quo and innovative approaches, enabling a critical investigation of limitations and of human, societal, and environmental impacts. The study's findings offer orientation in navigating innovation toward resilient, agile ecosystems with the dynamic ability to adapt to a changing environment, to grow with that change, and to achieve the Sustainable Development Goals (SDGs) through the preservation and upgrading of buildings instead of new construction. The key challenge for sustainable technical innovation is to exploit human and societal potential. The study identifies the lack of research in this field and inadequate education as the most significant limitations and critically evaluates how a disruptive culture of thinking may enable the sustainable design of smart cities. This study is unique in that it develops a comprehensive, transparent Corporate Digital Responsibility (CDR) Policy Framework and provides orientation for assuming ethical, societal, and environmental responsibility as part of creating resilient, agile environments.
Title: Data Mining Techniques for Web Mining: A Survey
Authors: Mehdi Gheisari, Hooman Hamidpour, Yang Liu, Peyman Saedi, Arif Raza, Ahmad Jalili, Hamidreza Rokhsati, Rashid Amin
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia2202290

Data mining (DM) is the computational process of searching, extracting, and analyzing patterns in large data sets, drawing on methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. Its primary goal is to extract information from a raw data set and transform it into a structure suitable for further use. An evolving branch of DM is web mining (WM), which applies DM and related routines to discover and extract information from web records and services automatically; that is, WM's purpose is to obtain valuable data from the World Wide Web. Given its importance, a survey of DM techniques in WM is necessary, as performed in this paper.
Title: Cloud Gaming Approach To Learn Programming Concepts
Authors: Daniyal Baig, Waseem Akram, H. Burhan ul Haq, Muhammad Asif
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021378

Computer science and programming subjects can be overwhelming for new students, presenting them with significant challenges. As programming is considered one of the most important and complex subjects to grasp, it calls for a fresh teaching methodology that makes the learning process more enjoyable and accessible. One approach that has gained traction is the integration of gaming elements, which not only makes programming more engaging but also enhances understanding and retention. In our research, we adopted an innovative educational strategy that utilized a Role-Playing Game (RPG) centered on programming concepts. The aim of the research is to create an interactive and enjoyable learning experience for students by leveraging the immersive nature of gaming. The RPG provides a platform for students to actively participate in programming challenges, applying their knowledge and skills to complete tasks and advance through the game. Our teaching methodology focuses on embedding programming concepts within the game's missions and quests. Additionally, we considered students' overall experience and engagement throughout the study; by capturing both objective and subjective measures, we gained insights into the impact of our teaching methodology on student learning outcomes and on students' overall perception of the educational experience. In the RPG, each student must complete a series of tasks within the game to advance to the next mission. The sequential nature of the tasks ensures a structured learning process, gradually introducing new concepts and challenges. The game mechanics provide an immersive environment in which students play different missions, answer questions, and learn programming. Through our research, we aim to present a compelling teaching methodology that effectively addresses the challenges new students face in learning computer science and programming. By harnessing the power of gaming, we strive to make programming more accessible, enjoyable, and engaging, ultimately empowering students to become proficient programmers. The evaluation of student performance, task accomplishment, and overall experience provides valuable insights into the effectiveness and potential impact of this innovative approach.
Title: KELL: A Kernel-Embedded Local Learning for Data-Intensive Modeling
Authors: Changtong Luo
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia32021381

Kernel methods are widely used in machine learning. They introduce a nonlinear transformation to achieve a linearization effect: using linear methods to solve nonlinear problems. However, typical kernel methods such as Gaussian process regression suffer from a memory consumption issue in data-intensive modeling: the memory required by the algorithms increases rapidly as the data grow, limiting their applicability. Localized methods can split the training data into batches and greatly reduce the amount of data used at each step, effectively alleviating the memory pressure. This paper combines the two approaches by embedding kernel functions into local learning methods and optimizing the algorithm parameters, including the local factors and model orders. The result is the kernel-embedded local learning (KELL) method. Numerical studies show that, compared with kernel methods such as Gaussian process regression, KELL can significantly reduce memory requirements for complex nonlinear models, and compared with other non-kernel methods, KELL demonstrates higher prediction accuracy.
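A minimal sketch of the general idea behind kernel-embedded local learning: fit a small kernel model on the nearest neighbours of each query instead of building one global N x N Gram matrix. All names and parameter choices here are illustrative assumptions, not the paper's algorithm (kernel ridge stands in for the local model):

```python
import numpy as np

# Sketch of the local-learning idea: fit a small kernel ridge model on
# the k nearest training points of each query, instead of building one
# global N x N Gram matrix. Names and parameters are illustrative.
def rbf(a, b, length=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length**2))

def local_kernel_predict(Xtr, ytr, xq, k=20, lam=1e-3):
    near = np.argsort(((Xtr - xq) ** 2).sum(1))[:k]   # k nearest points
    Xl, yl = Xtr[near], ytr[near]
    K = rbf(Xl, Xl)                                   # k x k, not N x N
    alpha = np.linalg.solve(K + lam * np.eye(k), yl)  # ridge weights
    return rbf(xq[None, :], Xl) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0])
pred = local_kernel_predict(X, y, np.array([1.0]))    # approximates sin(1)
```

Memory use is governed by the k x k local Gram matrix, so it stays constant as the number of training points grows, which is the trade-off the abstract describes.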
Title: A Hybrid Conjugate Gradient Algorithm for Nonlinear System of Equations through Conjugacy Condition
Authors: A. Yusuf, Abdullahi Adamu Kiri, Lukman Lawal
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia3202448

For the purpose of solving a large-scale system of nonlinear equations, a hybrid conjugate gradient algorithm is introduced in this paper, based on the convex combination of the β_k^FR (Fletcher-Reeves) and β_k^PRP (Polak-Ribière-Polyak) parameters. This is made possible by incorporating the conjugacy condition together with the proposed conjugate gradient search direction. Furthermore, a significant property of the method is that, through a non-monotone type line search, it yields a descent search direction. Under appropriate conditions, the algorithm's global convergence is established. Finally, results from numerical tests on a set of benchmark test problems indicate that the method is more effective and robust than some existing methods.
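The convex-combination construction can be sketched as follows. Here the mixing weight theta is left as a free parameter, whereas the paper derives it from the conjugacy condition; the formulas for the two classical parameters are standard:

```python
import numpy as np

# Sketch of a hybrid conjugate-gradient direction using a convex
# combination of the Fletcher-Reeves and Polak-Ribiere-Polyak
# parameters. theta is a free mixing weight in [0, 1] here; the paper
# instead derives it from the conjugacy condition.
def hybrid_beta(g_new, g_old, theta=0.5):
    beta_fr = (g_new @ g_new) / (g_old @ g_old)             # Fletcher-Reeves
    beta_prp = (g_new @ (g_new - g_old)) / (g_old @ g_old)  # Polak-Ribiere-Polyak
    return theta * beta_fr + (1.0 - theta) * beta_prp

def hybrid_cg_direction(g_new, g_old, d_old, theta=0.5):
    # Steepest descent plus a beta-weighted share of the old direction.
    return -g_new + hybrid_beta(g_new, g_old, theta) * d_old

g = np.array([1.0, 0.0])
print(hybrid_beta(g, g))  # 0.5 (beta_FR = 1, beta_PRP = 0)
```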
Title: Applications of Artificial Intelligence in Automatic Detection of Epileptic Seizures Using EEG Signals: A Review
Authors: Sani Saminu, Guizhi Xu, Shuai Zhang, Isselmou Ab El Kader, Hajara Abdulkarim Aliyu, Adamu Halilu Jabire, Yusuf Kola Ahmed, Mohammed Jajere Adamu
Pub Date: 2023-01-01 | DOI: 10.47852/bonviewaia2202297

Correctly interpreting an electroencephalography (EEG) signal with high accuracy is a tedious and time-consuming task that may take several years of manual training, owing to the signal's complex, noisy, non-stationary, and nonlinear nature. To handle the vast amount of data and to meet the requirements for developing low-cost, high-speed, low-complexity smart Internet of Medical Things (IoMT) computer-aided detection (CAD) devices, artificial intelligence (AI) techniques, comprising machine learning and deep learning, play a vital role. Over the years, machine learning techniques have been developed to detect and classify epileptic seizures, while deep learning techniques were until recently applied mainly in areas such as image processing and computer vision. Lately, however, several research studies have turned their attention to exploring the efficacy of deep learning in overcoming some of the challenges associated with conventional automatic seizure detection techniques. This paper reviews and investigates the fundamentals, applications, and progress of AI-based techniques applied in CAD systems for epileptic seizure detection and characterisation. Such work would help realise smart wireless wearable medical devices, enabling patients to monitor seizures before their occurrence and helping doctors to diagnose and treat them. The review reveals that recent applications of deep learning algorithms improve the realisation and implementation of mobile health in clinical environments.