Detecting people in sprinting motion using HPRDenoise: Point cloud denoising with hidden point removal
Taku Itami, Yuki Takeyama, Sota Akamine, Jun Yoneyema, Sebastien Ibarboure
LiDARs are used in applications such as self-driving vehicles and robotics to sense the environment. However, LiDARs do not provide instantaneous images, and they generate noise that adds to measurement errors. This noise, often referred to as motion blur, a phenomenon also observed in other imaging sensors, reduces sensing accuracy for moving objects. This study introduces HPRDenoise, a noise-reduction method based on hidden point removal, specifically designed to reduce motion blur during sprinting motion. The method exploits the occlusion produced by a fixed-position LiDAR. Unlike most existing denoising algorithms, we propose a comprehensive approach that filters points from a point cloud without resorting to supervised learning. The number of correct frames and the accuracy were compared for raw point clouds, ScoreDenoise (the state-of-the-art method for random point-cloud denoising), and HPRDenoise (ours). Accuracy is defined as the ratio of the number of correct frames to the total number of frames. Experimental results demonstrate that the detection accuracy of point clouds processed with HPRDenoise is 72.73%, exceeding that of the conventional methods.
{"title":"Detecting people in sprinting motion using HPRDenoise: Point cloud denoising with hidden point removal","authors":"Taku Itami, Yuki Takeyama, Sota Akamine, Jun Yoneyema, Sebastien Ibarboure","doi":"10.32629/jai.v7i5.1634","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1634","url":null,"abstract":"LiDARs are utilized in various applications, such as self-driving vehicles and robotics, to aid in sensing the environment. However, LiDARs do not provide instantaneous images and they generate noise, adding to measurement errors. This noise, often referred to as motion blur phenomenon also observed in other imaging sensors results in decreased sensing accuracy for moving objects. This study introduces HPRDenoise, a noise reduction method based on hidden point removal, specifically designed to reduce motion blur during sprinting motion. This method capitalizes on the occlusion produced by a fixed-position LiDAR. We propose a comprehensive denoising approach to filter points from a point cloud without resorting to supervised learning, unlike most existing denoising algorithms. The number of correct frames and accuracy were compared for Raw, ScoreDenoise, which is the state-of-the-art method for random point cloud denoising, and HPRDenoise (Ours). Accuracy is defined as the ratio of the number of correct frames to the total number of frames. Experimental results demonstrate that the detection accuracy of point clouds processed with HPRDenoise is 72.73%, achieving better accuracy than those using conventional methods.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"1 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140746254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Multi-Layer Security Framework (AMLSF) for real-time applications in smart city networks
M. S. Ram, R. Anandan
This study introduces the Adaptive Multi-Layer Security Framework (AMLSF), a novel approach designed for real-time applications in smart city networks, addressing the current challenges in security systems. AMLSF innovatively incorporates machine learning algorithms for dynamic adjustment of security protocols based on real-time threat analysis and device behavior patterns. This approach marks a significant shift from static security measures, offering an adaptive encryption mechanism that scales according to application criticality and device mobility. Our methodology integrates hierarchical key management with real-time adaptability, further enhanced by an advanced rekeying strategy sensitive to device mobility and communication overhead. The paper’s findings reveal a substantial improvement in security efficiency. AMLSF outperforms existing models in encryption strength, rekeying time, communication overhead, and computational time by significant margins. Notably, AMLSF demonstrates an adaptability increase of over 30% compared to traditional models, with encryption strength and computational time efficiency improving by approximately 25%. These results underscore AMLSF’s capability in delivering robust, dynamic security without sacrificing performance. The achievements of AMLSF are significant, indicating a promising direction for smart city security frameworks. Its ability to adapt in real-time to various security needs, coupled with its performance efficiency, positions AMLSF as a superior choice for smart city networks facing diverse and evolving security threats. This framework sets a new benchmark in smart city security, paving the way for future developments in this rapidly advancing field.
{"title":"Adaptive Multi-Layer Security Framework (AMLSF) for real-time applications in smart city networks","authors":"M. S. Ram, R. Anandan","doi":"10.32629/jai.v7i5.1370","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1370","url":null,"abstract":"This study introduces the Adaptive Multi-Layer Security Framework (AMLSF), a novel approach designed for real-time applications in smart city networks, addressing the current challenges in security systems. AMLSF innovatively incorporates machine learning algorithms for dynamic adjustment of security protocols based on real-time threat analysis and device behavior patterns. This approach marks a significant shift from static security measures, offering an adaptive encryption mechanism that scales according to application criticality and device mobility. Our methodology integrates hierarchical key management with real-time adaptability, further enhanced by an advanced rekeying strategy sensitive to device mobility and communication overhead. The paper’s findings reveal a substantial improvement in security efficiency. AMLSF outperforms existing models in encryption strength, rekeying time, communication overhead, and computational time by significant margins. Notably, AMLSF demonstrates an adaptability increase of over 30% compared to traditional models, with encryption strength and computational time efficiency improving by approximately 25%. These results underscore AMLSF’s capability in delivering robust, dynamic security without sacrificing performance. The achievements of AMLSF are significant, indicating a promising direction for smart city security frameworks. Its ability to adapt in real-time to various security needs, coupled with its performance efficiency, positions AMLSF as a superior choice for smart city networks facing diverse and evolving security threats. This framework sets a new benchmark in smart city security, paving the way for future developments in this rapidly advancing field.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"71 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140747420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effective speech recognition for healthcare industry using phonetic system
Gulbakshee J. Dharmale, Dipti D. Patil, Tanaya Ganguly, Nitin Shekapure
Automatic speech recognition (ASR) helps meet today’s healthcare demands, such as flexibility in patient care, efficiency, and complete medical records. ASR allows more effective use and combination of process-management devices and systems, and because speech interaction is contactless, it can be integrated seamlessly into an existing hardware environment. This paper presents a phonetic system implemented to improve automatic speech recognition accuracy and performance. The system captures input speech through a microphone and processes it to recognize the spoken words. It then passes the resulting text to the HMM classifier, which compares the occurrence of each recognized word against a probability map. The word with the highest probability of occurrence is selected and substituted for the recognized word; this process is carried out for the entire recognized text. The phonetic system directly captures and translates speech to text, providing an 8% improvement in system accuracy. A smart text-independent multi-lingual (STIM) SMS system was developed using the phonetic system, allowing users to convert their voice into text and send messages. The STIM SMS system can offer a compelling alternative to the traditional keyboard.
{"title":"Effective speech recognition for healthcare industry using phonetic system","authors":"Gulbakshee J. Dharmale, Dipti D. Patil, Tanaya Ganguly, Nitin Shekapure","doi":"10.32629/jai.v7i5.1019","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1019","url":null,"abstract":"The automatic speech recognition helps to achieve today’s demands such as flexibility in patient care, efficiency, medical records. ASR allows more effective use and combination of process management devices and systems. Because speech interaction is contactless, they can be seamlessly combined into a current hardware environment. This paper presents the phonetic system that implemented to improve the automatic speech recognition with higher accuracy for increasing performance. The system obtains input speech by a mic then works on the tried speech to recognize the spoken word. After that, it passes the ensuing text to the HMM classifier. The HMM classifier compares occurrence of the accredited word with probability map. The word with the highest probability of occurrence gets selected. It then substitutes accredited word with this utterance; this process is carried out for the entire accredited text. The phonetic system directly obtains and translates speech to text by providing 8% improvement in the accuracy of the system. Smart text independent multi-lingual SMS system is developed using phonetic system, which allows the user to convert their voice into text and send message. STIM SMS system can offer a very spirited substitute to traditional keyboard.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"23 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140753811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating multisensory information fusion and interaction technologies in smart healthcare systems
Ajay Thatere, Ashish Jirapure, M. Chawhan, A. Meshram, Prateek Verma
The advent of intelligent medical systems has heralded a new era in healthcare, promising enhanced diagnostic accuracy, treatment efficacy, and personalized patient care. Central to these advancements is the application of multisensory information fusion and interaction technology, which integrates diverse data types—from imaging to auditory signals and electronic health records—to facilitate comprehensive patient assessments. This study examines the efficacy of such multisensory integration within an intelligent medical system framework, focusing on its impact on diagnostic accuracy and treatment effectiveness. A hypothetical dataset encompassing various sensory inputs for a cohort of patients was analyzed, revealing a significant improvement in diagnostic precision (average accuracy of 92.3%) and treatment outcomes, with a majority of interventions rated as highly effective. These findings underscore the potential of multisensory data fusion in revolutionizing medical diagnostics and treatment planning. Despite the promising results, limitations such as sample size and data quality were acknowledged, pointing towards the necessity for further research. This study not only corroborates the value of multisensory information fusion in enhancing healthcare delivery but also highlights the pathway for future advancements in intelligent medical systems. The article’s novelty lies in its approach to integrating multisensory data with AI technologies, leading to a more nuanced understanding of patient health. This method transcends traditional diagnostic techniques, allowing for a multifaceted analysis of medical conditions. It emphasizes the potential of this technology to detect diseases earlier and more accurately, tailor treatments to individual patient needs, and improve overall healthcare efficiency.
{"title":"Integrating multisensory information fusion and interaction technologies in smart healthcare systems","authors":"Ajay Thatere, Ashish Jirapure, M. Chawhan, A. Meshram, Prateek Verma","doi":"10.32629/jai.v7i5.1564","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1564","url":null,"abstract":"The advent of intelligent medical systems has heralded a new era in healthcare, promising enhanced diagnostic accuracy, treatment efficacy, and personalized patient care. Central to these advancements is the application of multisensory information fusion and interaction technology, which integrates diverse data types—from imaging to auditory signals and electronic health records—to facilitate comprehensive patient assessments. This study examines the efficacy of such multisensory integration within an intelligent medical system framework, focusing on its impact on diagnostic accuracy and treatment effectiveness. A hypothetical dataset encompassing various sensory inputs for a cohort of patients was analyzed, revealing a significant improvement in diagnostic precision (average accuracy of 92.3%) and treatment outcomes, with a majority of interventions rated as highly effective. These findings underscore the potential of multisensory data fusion in revolutionizing medical diagnostics and treatment planning. Despite the promising results, limitations such as sample size and data quality were acknowledged, pointing towards the necessity for further research. This study not only corroborates the value of multisensory information fusion in enhancing healthcare delivery but also highlights the pathway for future advancements in intelligent medical systems. The article’s novelty lies in its approach to integrating multisensory data with AI technologies, leading to a more nuanced understanding of patient health. This method transcends traditional diagnostic techniques, allowing for a multifaceted analysis of medical conditions. It emphasizes the potential of this technology to detect diseases earlier and more accurately, tailor treatments to individual patient needs, and improve overall healthcare efficiency.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"47 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140760291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An investigation to identify the factors that cause failure in English essay, precis, and composition papers in CSS exams
Kiran Gul, Waheed Shahzad, Ali Raza, Essam Said Hanandeh, R. A. Zitar, Khaled Aldiabat, R. Shboul, L. Abualigah
This study examines why candidates in Pakistan fail the English Essay and the Precis and Composition sections of the Central Superior Services (CSS) examinations, the prestigious and difficult tests taken by candidates for various civil service positions. The study aims to identify the difficulties candidates face in these particular sections and to investigate methods for enhancing their English language ability. A mixed-methods strategy was used to collect both quantitative and qualitative data. Candidates who had taken and failed the English Essay and Precis and Composition papers received a survey form and responded according to their experience. In addition, we conducted semi-structured interviews with successful CSS candidates currently serving as officials, such as Deputy Commissioners, Assistant Commissioners, Assistant Superintendents of Police, and Deputy Superintendents of Police. Insights into the causes of failure and the experiences of successful candidates were sought from both data sources. The findings highlight several key factors contributing to failure: lack of comprehension and understanding, grammatical errors, inadequate organization, poor handwriting, insufficient practice, lack of originality, difficulty adapting to essay prompts and precis passages, failure to understand and address the purpose, insufficient development of ideas, failure to reach the required word count, neglect of proofreading and revision, poor written expression, weak introductions and conclusions in essays, a tough paper pattern, and an outdated curriculum. Participants reported struggling to express their ideas coherently, limited language skills, challenges in managing time effectively, lack of understanding of proper precis structure, inadequate subject expertise, lack of training and resources, weak analytical and critical-thinking abilities, inadequate exam preparation, poor grammar, exam phobia, and limited vocabulary as potential factors contributing to failure.
{"title":"An investigation to identify the factors that cause failure in English essay, precis, and composition papers in CSS exams","authors":"Kiran Gul, Waheed Shahzad, Ali Raza, Essam Said Hanandeh, R. A. Zitar, Khaled Aldiabat, R. Shboul, L. Abualigah","doi":"10.32629/jai.v7i5.1254","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1254","url":null,"abstract":"The research study aims to examine why candidates in Pakistan failed the English Essay, Precis, and Composition sections of the Central Superior Services (CSS) tests. Those candidates chosen for various civil service positions take the prestigious and difficult CSS exam. The study aims to discover candidates’ difficulties in these particular CSS exam sections and investigate methods for enhancing their English language ability. A mixed-methods strategy is used in the research process to collect both quantitative and qualitative data. Participants in the CSS exam who once took the English Essay, Precis, and Composition papers and got fail in it received a survey form to respond according to their experience. Other than this, we also conducted semi-structured interviews with CSS test winners currently working as officials, such as Deputy Commissioners, Assistant Commissioners, Assistant Superintendents of Police, and Deputy Superintendents of Police. Insights into the causes of failure and the experiences of successful candidates are sought after from both data sources. The research findings highlighted several key factors contributing to failure in English Essays, Precis, and Composition papers. These factors include lack of comprehension and understanding, grammatical errors, inadequate organization, poor handwriting, insufficient practice, lack of originality, difficulty in adapting to essay prompts and precis passages, poor organization, failure to understand and address the purpose, insufficient development of ideas, failure to reach the required word count, grammatical mistakes, neglecting proofreading and revision, poor writing expression, and weak induction and conclusion in essays, tough paper pattern old formatted curriculum. Participants reported struggling to express their ideas coherently, having limited language skills, facing challenges in managing time effectively, lacking proper precis structure understanding, inadequate expertise in the subject, lack of training and resources, lack of analytical and critical thinking abilities, inadequate exam preparation, time management issues, poor grammar abilities, exam phobia, and limited vocabulary as potential factors contributing to failure.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"28 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140234906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DFDTA-MultiAtt: Multi-attention based deep learning ensemble fusion network for drug target affinity prediction
Balanand Jha, Akshay Deepak, Vikash Kumar, Gopalakrishnan Krishnasamy
An essential step in the drug development process is the accurate detection of drug-target interactions (DTI). The importance of binding-affinity values in understanding protein-ligand interactions was previously disregarded, and DTI prediction was treated only as a binary classification problem. In this regard, we introduce the DFDTA-MultiAtt model, which predicts drug-target binding affinity in two stages using structural and sequential information. In the first step of the first stage, features are extracted from sequence data using a bi-directional long short-term memory (Bi-LSTM) architecture together with a multi-attention module and a dilated convolutional neural network (dilated CNN); in the second step, features are learned from the structure representation, again using a dilated CNN. In the second stage, an ensemble learning model predicts the binding affinity. The proposed model produces greater overall accuracy than contemporary state-of-the-art methods, improving the concordance index (CI) by 0.006 on the Davis dataset and reducing the mean squared error (MSE) by 0.174 on the KIBA dataset.
{"title":"DFDTA-MultiAtt: Multi-attention based deep learning ensemble fusion network for drug target affinity prediction","authors":"Balanand Jha, Akshay Deepak, Vikash Kumar, Gopalakrishnan Krishnasamy","doi":"10.32629/jai.v7i5.851","DOIUrl":"https://doi.org/10.32629/jai.v7i5.851","url":null,"abstract":"An essential step in the drug development process is the accurate detection of drug-target interactions (DTI). The importance of binding affinity values in understanding protein-ligand interactions was previously disregarded, and DTI prediction was only seen as a binary classification problem. In this regard, we introduced the DFDTA-MultiAtt model for predicting the drug target binding affinity in two stages using the structural and sequential information. The first step of the first stage involves retrieving features from sequence data using a bi-directional long short term memory (Bi-LSTM) architecture together with a multi-attention module and dilated convolutional neural network (dilated-CNN) architecture, and the second step features are learnt from structure representation once again using a dilated-CNN. To predict the binding affinity, the second stage uses an ensemble learning model. The proposed model also produces findings with a greater overall accuracy when compared to contemporary state-of-the-art methods. The model generates an enormous +0.006 concordance index (CI) score on the Davis dataset and reduces the mean square error (MSE) by 0.174 on the KIBA dataset.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"2 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140243612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic review of critical thinking for Engineering Students in Chinese classrooms
Zhiying Liu, Sook Jhee Yoon, Maoxing Zheng
As a result of fast technical breakthroughs, globalization, a customer-centric emphasis, and team-based design techniques, 21st-century workplace expectations for engineers have evolved. These changes require that engineering graduates possess highly developed critical-thinking abilities in order to work at a high level in the engineering field. Critical thinking has been introduced as a fundamental ability in the new Skills Framework. Countries around the globe have taken steps to foster the development of critical-thinking skills in their citizens, and researchers from a variety of fields conduct critical-thinking research. However, comprehensive research on the teaching and learning of critical thinking in the Chinese setting is scarce. This study examines the research literature on critical thinking in Chinese classrooms in order to discover which theories and research methodologies are applied in critical-thinking research. By searching the CNKI and Web of Science databases, 63 Chinese and English publications were identified using the PRISMA model. The analysis demonstrates that critical-thinking research in Chinese schools lacks theoretical applications. Meanwhile, three distinct research methodologies are used, although quantitative approaches account for the most papers. The findings suggest that anyone interested in studying critical thinking should be familiar with its theory. In addition, researchers must use a range of study methodologies to guarantee that the results give information beyond summaries of critical thinking. Finally, Chinese researchers on critical thinking need greater exposure to qualitative data sources in order to improve their data-gathering procedures.
{"title":"A systematic review of critical thinking for Engineering Students in Chinese classrooms","authors":"Zhiying Liu, Sook Jhee Yoon, Maoxing Zheng","doi":"10.32629/jai.v7i5.916","DOIUrl":"https://doi.org/10.32629/jai.v7i5.916","url":null,"abstract":"As a result of fast technical breakthroughs, globalization, a customer-centric emphasis, and team-based design techniques, 21st century workplace expectations for engineers have evolved. These changes need that engineering graduates possess highly developed critical thinking abilities in order to work at a high level in the engineering field. Critical thinking has been introduced as a fundamental ability in the new Skills Framework. Countries from all over the globe have taken steps to foster the development of critical thinking skills in their citizens, and researchers from a variety of fields pay attention to and conduct critical thinking research. However, comprehensive research on the teaching and learning of critical thinking in the Chinese setting is scarce. This study examines the research literature on critical thinking in Chinese classrooms in order to discover which theories and research methodologies are applied in critical thinking research. By scanning the CNKI and Web of Science databases, 63 Chinese and English publications were discovered using the PRISMA model. The analysis demonstrates that Chinese schools lack theoretical applications of critical thinking research. In the meanwhile, three distinct research techniques are used, however quantitative research approaches have the most papers. According to research, anyone interested in studying critical thinking should be familiar with its theory. In addition, researchers must use a range of study methodologies to guarantee that the results give information beyond summaries of critical thinking. Finally, Chinese researchers on critical thinking need greater exposure to qualitative data sources in order to modify their data gathering procedures.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"39 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140241801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond pixels and ciphers: Navigating the advancements and challenges in visual cryptography
Prema Bhushan Sahane, Gayathri M., S. A. Bagal, P. Sambhare, Satish Billewar, Kirti Borhade, John Blesswin, Selva Mary
Visual cryptography (VC) has emerged as a pivotal solution for secure information transmission, leveraging its unique capability to encrypt images in a user-friendly and accessible manner. This survey paper provides an in-depth analysis of various VC methods, highlighting their distinct encryption and decryption techniques, applicability, and security levels. The study delves into the technical specifications of each VC type, offering insights into secret image formats, the number of secret images used, types of shares, pixel expansion, and complexity. Significant attention is given to the practical applications of VC, ranging from secure document verification and anti-counterfeiting measures to digital watermarking and online data protection. The paper also identifies key challenges in the field, such as image quality retention post-decryption, computational efficiency, and scalability. Future prospects of VC are explored, particularly its potential integration with emerging technologies like AI and blockchain. This survey aims to provide a comprehensive understanding of VC’s current state, its diverse applications, and the future possibilities, making it a valuable resource for researchers and practitioners in the field of data security and cryptography.
{"title":"Beyond pixels and ciphers: Navigating the advancements and challenges in visual cryptography","authors":"Prema Bhushan Sahane, Gayathri M., S. A. Bagal, P. Sambhare, Satish Billewar, Kirti Borhade, John Blesswin, Selva Mary","doi":"10.32629/jai.v7i5.1525","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1525","url":null,"abstract":"Visual cryptography (VC) has emerged as a pivotal solution for secure information transmission, leveraging its unique capability to encrypt images in a user-friendly and accessible manner. This survey paper provides an in-depth analysis of various VC methods, highlighting their distinct encryption and decryption techniques, applicability, and security levels. The study delves into the technical specifications of each VC type, offering insights into secret image formats, the number of secret images used, types of shares, pixel expansion, and complexity. Significant attention is given to the practical applications of VC, ranging from secure document verification and anti-counterfeiting measures to digital watermarking and online data protection. The paper also identifies key challenges in the field, such as image quality retention post-decryption, computational efficiency, and scalability. Future prospects of VC are explored, particularly its potential integration with emerging technologies like AI and blockchain. This survey aims to provide a comprehensive understanding of VC’s current state, its diverse applications, and the future possibilities, making it a valuable resource for researchers and practitioners in the field of data security and cryptography.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"16 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140244314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigation and the development of learning analytics dashboard in open and distance learning using big data mining
Yuhua Yang, Norriza Binti Hussin, Maoxing Zheng, Dan Wang
The main aim of this study is to provide universities and other academic institutions with a way of examining and predicting student performance. The credibility and accuracy of the model were examined by comparing its predicted results with observed values, and educational data mining techniques were used to create student profiles. Information-gain weighting, classification analysis, decision trees, and rule induction were used in this study. The results show that students' academic performance varied according to criteria such as academic structure, faculty, mode of enrolment, and gender. To determine the relative importance of variables, the information-gain technique was applied after generating rule-induction parameters and uncovering hidden rules in the data. Using data mining techniques, we can obtain both guidelines to instruct students and information to help us identify them.
{"title":"Investigation and the development of learning analytics dashboard in open and distance learning using big data mining","authors":"Yuhua Yang, Norriza Binti Hussin, Maoxing Zheng, Dan Wang","doi":"10.32629/jai.v7i5.919","DOIUrl":"https://doi.org/10.32629/jai.v7i5.919","url":null,"abstract":"The main aim of this study is to provide universities with a way of examining and predicting student performance. The fundamental aim and purpose of this study is to help academic institutions to analyse and predict student performance. The credibility and accuracy of the model was examined by comparing the predicted results of the model with the observed values. And educational data mining techniques were used to create student profiles. Weighted gain, classification analysis, decision tree and rule induction were used in this study. The results of the study showed that the level of students' academic performance varied according to criteria such as academic structure, faculty, mode of enrolment and gender. In order to determine the relative importance of variables, the information weight gain technique was used after generating rule induction parameters and hidden rules between data. Using data mining techniques, we can obtain both guidelines to instruct students and information to help us identify them.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"77 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140241737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning for sustainable agriculture: Weed classification model to optimize herbicide application
Indu Malik, A. Baghel, Harshit Bhardwaj
Herbicides, chemical substances designed to eliminate weeds, are widely used in agriculture to eradicate unwanted plants and enhance crop productivity, despite their adverse impacts on both human health and the environment. This study constructs a neural network classifier employing a convolutional neural network (CNN) built with Keras to categorize images with their corresponding labels. The paper introduces two distinct neural networks: a basic neural network and a hybrid CNN implemented with Keras. Both networks were trained and tested, yielding an accuracy of 30% for the basic neural network, whereas the hybrid network achieves 97% accuracy. Consequently, this model significantly reduces the need for herbicide spraying over crops such as fruits, vegetables, and sugarcane, aiming to safeguard humans, animals, birds, and the environment from the detrimental effects of harmful chemicals. As the high-level API of the TensorFlow framework, Keras furnishes a user-friendly and highly efficient interface for addressing machine learning (ML) challenges, particularly in contemporary deep learning. Covering all facets of the machine learning process, from data manipulation to hyperparameter tuning to deployment, Keras was designed to enable rapid experimentation.
{"title":"Deep learning for sustainable agriculture: Weed classification model to optimize herbicide application","authors":"Indu Malik, A. Baghel, Harshit Bhardwaj","doi":"10.32629/jai.v7i5.1403","DOIUrl":"https://doi.org/10.32629/jai.v7i5.1403","url":null,"abstract":"Herbicides, chemical substances designed to eliminate weeds, find widespread use in agriculture to eradicate unwanted plants and enhance crop productivity, despite their adverse impacts on both human health and the environment. The study involves the construction of a neural network classifier employing a Convolutional Neural Network (CNN) through Keras to categorize images with corresponding labels. This research paper introduces two distinct neural networks: a basic neural network and a hybrid variant combining CNN with Keras. Both networks undergo training and testing, yielding an accuracy of 30% for the basic neural network, whereas the hybrid neural network achieves an impressive 97% accuracy. Consequently, this model significantly diminishes the need for herbicide spraying over crops such as fruits, vegetables, and sugarcane, aiming to safeguard humans, animals, birds, and the environment from the detrimental effects of harmful chemicals. Functioning as the elevated API within the TensorFlow framework, Keras furnishes a user-friendly and immensely efficient interface tailored to address machine learning (ML) challenges, particularly in the realm of contemporary deep learning. Encompassing all facets of the machine learning process, from data manipulation to fine-tuning hyper parameters to deployment, Keras was meticulously crafted to expedite rapid experimentation.","PeriodicalId":508223,"journal":{"name":"Journal of Autonomous Intelligence","volume":"2011 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140246371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}