Design of Personalized Recommendation and Sharing Management System for Science and Technology Achievements based on WEBSOCKET Technology
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140968
Shan Zuo, Kai Xiao, Taitian Mao
Scientific research, as the backbone of a nation's innovation-driven development, is becoming increasingly crucial to contemporary society. The rapid growth of information technology and its adoption in scientific research are driving the globalization of research. Small research groups, however, still lack a platform to showcase and share their accomplishments. To integrate scientific research information and apply personalised recommendation technology that suggests achievements of interest to users based on their historical behaviour data, this study proposes a personalised recommendation and sharing management system for scientific and technological achievements built on the Ruby on Rails framework. In testing, the system achieved a 299 ms request response time, a maximum request resource size of 1 KB, and a 20 ms data transfer time. Additionally, the study's user-based collaborative filtering recommendation algorithm reached an accuracy of 41% with the nearest-neighbour parameter set to 50, a recommendation list of 10 items, and a training-set proportion of 0.7, which essentially satisfies the system requirements. In conclusion, the proposed personalised recommendation and sharing management system can essentially meet the needs of small research teams to communicate and share scientific accomplishments, and realises the sharing of scientific achievements.
{"title":"Design of Personalized Recommendation and Sharing Management System for Science and Technology Achievements based on WEBSOCKET Technology","authors":"Shan Zuo, Kai Xiao, Taitian Mao","doi":"10.14569/ijacsa.2023.0140968","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140968","url":null,"abstract":"Scientific research is becoming more and more crucial to contemporary society as the backbone of the nation's innovation-driven development. The rapid growth of information technology and the rise of information technology in scientific research both contribute to the globalization of scientific research. Small research groups still don't have a place to showcase and share their accomplishments, though. In order to integrate scientific research information and combine personalised recommendation technology to suggest developments of interest to users through their historical behaviour data, the study proposes a personalised recommendation and sharing management system for scientific and technological achievements based on the Ruby on Rails framework. According to the testing results, the system had a 299ms request response time, a maximum 1KB request resource size, and a 20ms data transfer time. Additionally, the study's user-based collaborative filtering recommendation algorithm has an accuracy rate of 41% when the nearest neighbor parameter is set to 50, there are 10 information suggestions, and there are 0.7 training sets, which essentially satisfies the system criteria. In conclusion, the research suggested that a personalised recommendation and sharing management system for scientific and technological accomplishments can essentially satisfy the needs of small research teams to communicate and share scientific accomplishments, as well as realise the sharing of scientific achievements.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135956281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Historical Building 3D Reconstruction for a Virtual Reality-based Documentation
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140985
Ahmad Zainul Fanani, Arry Maulana Syarif
An innovative preservation approach is proposed to document historical buildings as 3D models and present them virtually. The approach was applied to the Lawang Sewu building, an architectural masterpiece that is part of Indonesian history. Virtual Reality (VR) technology was used to create a Lawang Sewu VR application that allows users to walk around the building virtually. A new 3D reconstruction method was proposed in which photo, video, and miniature documentation, together with notes collected from observations, served as the main reference; architectural record data was used where information could not be obtained from the main reference. The proposed method relies on traditional techniques at both the data acquisition and 3D modelling stages. Poly modelling was chosen for the 3D reconstruction because of its ease and flexibility in controlling the polygon count of 3D models and its suitability for repetitive spatial typologies such as the Lawang Sewu building. After textures were applied, the 3D model was imported into the VR editor. In addition to running on the desktop platform, a head-mounted display (HMD), which supports an immersive experience, was also chosen to run the Lawang Sewu VR. An evaluation measuring the similarity of the 3D model to the original building and the immersiveness experienced by users shows good results.
{"title":"Historical Building 3D Reconstruction for a Virtual Reality-based Documentation","authors":"Ahmad Zainul Fanani, Arry Maulana Syarif","doi":"10.14569/ijacsa.2023.0140985","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140985","url":null,"abstract":"An innovative preservation approach was proposed to document historical buildings in 3D model, and to present it virtually. The approach was applied to the Lawang Sewu building, one of the architectural masterpieces that is part of Indonesian history. Virtual Reality (VR) technology was used to create a Lawang Sewu VR application program that allows users to virtually walk around the building. A new method for 3D reconstruction was proposed, where data of photo, video and miniature documentation, as well as notes collected from observations were used as the main reference. Meanwhile, architectural record data was used in cases where information cannot be obtained through the main reference. The proposed method focuses on traditional techniques, both at the data acquisition and 3D modelling stages. Poly modelling techniques were chosen for 3D reconstruction. The poly modelling technique was chosen based on its ease and flexibility in controlling the number of polys in 3D models, and was suitable to be applied for repetitive spatial typologies, such as the Lawang Sewu building. After given textures, the 3D model was sent to the VR editor. In addition of running on the desktop platform, Head Mounted Device (HMD) that supports the creation of an immersive experience, was also chosen to run the Lawang Sewu VR. The evaluation carried out to measure the level of similarity of the 3D model to the original building and the sensation of an immersive experience felt by the user shows good achievements.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135956285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced Plagiarism Detection Through Advanced Natural Language Processing and E-BERT Framework of the Smith-Waterman Algorithm
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140944
Franciskus Antonius, Myagmarsuren Orosoo, Aanandha Saravanan K, Indrajit Patra, Prema S
Plagiarism's pervasiveness across fields, including academia and research, has made effective detection extremely difficult, and increasingly sophisticated plagiarism strategies render traditional approaches ineffective. Assessing plagiarism requires examining syntactic, lexical, semantic, and structural facets. In contrast to traditional string-matching techniques, this investigation adopts a Natural Language Processing (NLP) framework. The preprocessing phase comprises a series of steps that refine the raw text data. The core of the methodology is the integration of two distinct metrics within the Encoder Representation from Transformers (E-BERT) approach, enabling a granular exploration of textual similarity. Combining deep and shallow NLP approaches uncovers underlying layers of meaning in the text. The results show that deep NLP is remarkably proficient at promptly identifying substantial revisions. Integral to this approach is the use of the Smith-Waterman algorithm and an English-Spanish dictionary, which contribute to the selection of optimal attributes. Comparative evaluations against alternative models employing distinct encoding methodologies, with logistic regression as the classifier, underscore the effectiveness of the proposed implementation. Extensive experimentation substantiates the system's performance, with a 99.5% accuracy rate in detecting instances of plagiarism. This research advances the domain of plagiarism detection, offering effective and sophisticated methods to combat the growing spectre of unoriginal content.
{"title":"Enhanced Plagiarism Detection Through Advanced Natural Language Processing and E-BERT Framework of the Smith-Waterman Algorithm","authors":"Franciskus Antonius, Myagmarsuren Orosoo, Aanandha Saravanan K, Indrajit Patra, Prema S","doi":"10.14569/ijacsa.2023.0140944","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140944","url":null,"abstract":"Effective detection has been extremely difficult due to plagiarism's pervasiveness throughout a variety of fields, including academia and research. Increasingly complex plagiarism detection strategies are being used by people, making traditional approaches ineffective. The assessment of plagiarism involves a comprehensive examination encompassing syntactic, lexical, semantic, and structural facets. In contrast to traditional string-matching techniques, this investigation adopts a sophisticated Natural Language Processing (NLP) framework. The preprocessing phase entails a series of intricate steps ultimately refining the raw text data. The crux of this methodology lies in the integration of two distinct metrics within the Encoder Representation from Transformers (E-BERT) approach, effectively facilitating a granular exploration of textual similarity. Within the realm of NLP, the amalgamation of Deep and Shallow approaches serves as a lens to delve into the intricate nuances of the text, uncovering underlying layers of meaning. The discerning outcomes of this research unveil the remarkable proficiency of Deep NLP in promptly identifying substantial revisions. Integral to this innovation is the novel utilization of the Waterman algorithm and an English-Spanish dictionary, which contribute to the selection of optimal attributes. Comparative evaluations against alternative models employing distinct encoding methodologies, along with logistic regression as a classifier underscore the potency of the proposed implementation. The culmination of extensive experimentation substantiates the system's prowess, boasting an impressive 99.5% accuracy rate in extracting instances of plagiarism. This research serves as a pivotal advancement in the domain of plagiarism detection, ushering in effective and sophisticated methods to combat the growing spectre of unoriginal content.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135956534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Diabetic Retinopathy Detection Through Machine Learning with Restricted Boltzmann Machines
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140961
Venkateswara Rao Naramala, B. Anjanee Kumar, Vuda Sreenivasa Rao, Annapurna Mishra, Shaikh Abdul Hannan, Yousef A. Baker El-Ebiary, R. Manikandan
Diabetes is a potentially sight-threatening condition that can lead to blindness if left undetected. Timely diagnosis of diabetic retinopathy, a persistent eye ailment, is critical to prevent irreversible vision loss. However, the traditional method of diagnosing diabetic retinopathy, retinal examination by ophthalmologists, is labor-intensive and time-consuming. Additionally, early identification of glaucoma, indicated by the Cup-to-Disc Ratio (CDR), is vital to prevent vision impairment, yet its subtle initial symptoms make timely detection challenging. This research addresses these diagnostic challenges with machine learning and deep learning techniques, introducing Restricted Boltzmann Machines (RBM) to the domain. By extracting and analyzing multiple features from retinal images, the proposed model categorizes anomalies and automates the diagnostic process. The investigation further employs a U-Net model for optic segmentation and the Squirrel Search Algorithm (SSA) to fine-tune RBM hyperparameters for optimal performance. Experimental evaluation on the RIM-ONE DL dataset demonstrates the efficacy of the proposed methodology. Results are comprehensively compared against previous prediction models on accuracy, cross-validation, and Receiver Operating Characteristic (ROC) metrics; the proposed model achieves 99.2% accuracy on the RIM-ONE DL dataset. By bridging the gap between automated diagnosis and ophthalmological practice, this research contributes significantly to the medical field. The model's robust performance and superior accuracy offer a promising avenue to support healthcare professionals in their decision-making, ultimately improving the quality of care for patients with retinal anomalies.
{"title":"Enhancing Diabetic Retinopathy Detection Through Machine Learning with Restricted Boltzmann Machines","authors":"Venkateswara Rao Naramala, B. Anjanee Kumar, Vuda Sreenivasa Rao, Annapurna Mishra, Shaikh Abdul Hannan, Yousef A.Baker El-Ebiary, R. Manikandan","doi":"10.14569/ijacsa.2023.0140961","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140961","url":null,"abstract":"Diabetes is a potentially sight-threatening condition that can lead to blindness if left undetected. Timely diagnosis of diabetic retinopathy, a persistent eye ailment, is critical to prevent irreversible vision loss. However, the traditional method of diagnosing diabetic retinopathy through retinal testing by ophthalmologists is labor-intensive and time-consuming. Additionally, early identification of glaucoma, indicated by the Cup-to-Disc Ratio (CDR), is vital to prevent vision impairment, yet its subtle initial symptoms make timely detection challenging. This research addresses these diagnostic challenges by leveraging machine learning and deep learning techniques. In particular, the study introduces the application of Restricted Boltzmann Machines (RBM) to the domain. By extracting and analyzing multiple features from retinal images, the proposed model aims to accurately categorize anomalies and automate the diagnostic process. The investigation further advances with the utilization of a U-network model for optic segmentation and employs the Squirrel Search Algorithm (SSA) to fine-tune RBM hyperparameters for optimal performance. The experimental evaluation conducted on the RIM-ONE DL dataset demonstrates the efficacy of the proposed methodology. A comprehensive comparison of results against previous prediction models is carried out, assessing accuracy, cross-validation, and Receiver Operating Characteristic (ROC) metrics. Remarkably, the proposed model achieves an accuracy value of 99.2% on the RIM-ONE DL dataset. By bridging the gap between automated diagnosis and ophthalmological practice, this research contributes significantly to the medical field. The model's robust performance and superior accuracy offer a promising avenue to support healthcare professionals in enhancing their decision-making processes, ultimately improving the quality of care for patients with retinal anomalies.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135956542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motor Imagery EEG Signals Marginal Time Coherence Analysis for Brain-Computer Interface
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140888
Md. Sujan Ali, Jannatul Ferdous
The synchronization of neural activity in the human brain is of great significance for coordinating its various cognitive functions, and it varies over time and with frequency. The activity is measured through brain signals such as the electroencephalogram (EEG). This research measures the time-frequency (TF) synchronization among several EEG channels using an efficient approach. Most frequently, the short-time Fourier transform (STFT) and the wavelet transform (WT) are used to measure TF coherence, but the information these model-based methods provide in the TF domain is insufficient. The proposed synchrosqueezing transform (SST)-based TF representation is a data-adaptive approach that resolves this problem, enabling more accurate estimation and better tracking of TF components. The SST generates a sharply defined TF depiction thanks to its data adaptivity and frequency-reassignment capabilities. Furthermore, a non-identical smoothing operator is used to smooth the TF coherence, which enhances the statistical consistency of the neural synchronization estimate. Experiments are run on both simulated and real EEG data. The outcomes show that the proposed SST-based method performs significantly better than the traditional approaches above; coherences computed with it clearly distinguish between different forms of motor imagery movement. TF coherence can thus be used to measure the interdependencies of neural activities.
{"title":"Motor Imagery EEG Signals Marginal Time Coherence Analysis for Brain-Computer Interface","authors":"Md. Sujan Ali, Jannatul Ferdous","doi":"10.14569/ijacsa.2023.0140888","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140888","url":null,"abstract":"—The synchronization of neural activity in the human brain has great significance for coordinating its various cognitive functions. It changes throughout time and in response to frequency. The activity is measured in terms of brain signals, like an electroencephalogram (EEG). The time-frequency (TF) synchronization among several EEG channels is measured in this research using an efficient approach. Most frequently, the windowed Fourier transforms-short-time Fourier transform (STFT), as well as wavelet transform (WT), and are used to measure the TF coherence. The information provided by these model-based methods in the TF domain is insufficient. The proposed synchro squeezing transform (SST)-based TF representation is a data-adaptive approach for resolving the problem of the traditional one. It enables more perfect estimation and better tracking of TF components. The SST generates a clearly defined TF depiction because of its data flexibility and frequency reassignment capabilities. Furthermore, a non-identical smoothing operator is used to smooth the TF coherence, which enhances the statistical consistency of neural synchronization. The experiment is run using both simulated and actual EEG data. The outcomes show that the suggested SST-dependent system performs significantly better than the previously mentioned traditional approaches. As a result, the coherences dependent on the suggested approach clearly distinguish between various forms of motor imagery movement. The TF coherence can be used to measure the interdependencies of neural activities.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"14 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89148871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating OpenAI’s ChatGPT Potentials in Generating Chatbot's Dialogue for English as a Foreign Language Learning
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140607
J. Young, M. Shishido
A lack of practice opportunities is a significant hurdle for English as a Foreign Language (EFL) students during their learning journey. Previous studies have explored chatbots as learning partners to address this issue. However, the success of a chatbot implementation depends on the quality of its reference dialogue content, and research focusing on this subject is still limited. Typically, human experts create suitable dialogue materials for students to ensure quality. Research on using artificial intelligence (AI) to generate dialogue practice materials is relatively limited, given that existing AI systems may produce incoherent output. This research investigates leveraging OpenAI's ChatGPT, an AI system known for coherent output, to generate reference dialogues for an EFL chatbot system. The study assesses the effectiveness of ChatGPT in generating high-quality dialogue materials suitable for EFL students. Employing multiple readability metrics, we analyze the suitability of ChatGPT-generated dialogue materials and determine the target audience that can benefit most. Our findings indicate that ChatGPT's dialogues are well suited to students at Common European Framework of Reference for Languages (CEFR) level A2 (elementary): the dialogues are easily comprehensible, enabling students at this level to grasp most of the vocabulary used. Furthermore, a substantial portion of the dialogues intended for CEFR B1 (intermediate) provides ample stimulation for learning new words. The integration of AI-powered chatbots in EFL education shows promise in overcoming these limitations and providing valuable learning resources to students.
{"title":"Investigating OpenAI’s ChatGPT Potentials in Generating Chatbot's Dialogue for English as a Foreign Language Learning","authors":"J. Young, M. Shishido","doi":"10.14569/ijacsa.2023.0140607","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140607","url":null,"abstract":"—Lack of opportunities is a significant hurdle for English as a Foreign Language (EFL) for students during their learning journey. Previous studies have explored the use of chatbots as learning partners to address this issue. However, the success of chatbot implementation depends on the quality of the reference dialogue content, yet research focusing on this subject is still limited. Typically, human experts are involved in creating suitable dialogue materials for students to ensure the quality of such content. Research attempting to utilize artificial intelligence (AI) technologies for generating dialogue practice materials is relatively limited, given the constraints of existing AI systems that may produce incoherent output. This research investigates the potential of leveraging OpenAI's ChatGPT, an AI system known for producing coherent output, to generate reference dialogues for an EFL chatbot system. The study aims to assess the effectiveness of ChatGPT in generating high-quality dialogue materials suitable for EFL students. By employing multiple readability metrics, we analyze the suitability of ChatGPT-generated dialogue materials and determine the target audience that can benefit the most. Our findings indicate that ChatGPT's dialogues are well-suited for students at the Common European Framework of Reference for Languages (CEFR) level A2 (elementary level). These dialogues are easily comprehensible, enabling students at this level to grasp most of the vocabulary used. Furthermore, a substantial portion of the dialogues intended for CEFR B1 (intermediate level) provides ample stimulation for learning new words. The integration of AI-powered chatbots in EFL education shows promise in overcoming limitations and providing valuable learning resources to students.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"26 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89176538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Brain-Computer Interface (BCI) applications, accurate control relies heavily on the classification accuracy and efficiency of motor imagery electroencephalogram (EEG) signals. However, mutual interference between multi-channel signals, inter-individual variability, and channel noise all challenge motor imagery EEG classification. To address these problems, this paper proposes an Adaptive Channel Selection algorithm aimed at optimizing classification accuracy and Information Translate Rate (ITR). First, C3, C4, and Cz are selected as key channels based on neurophysiological evidence and extensive experimental studies. Next, the channel selection is fine-tuned using spatial location and absolute Pearson correlation coefficients: by analyzing the relationship between each EEG channel and the key channels, the most relevant channel combination is determined for each subject, reducing confounding information and improving classification accuracy. The method is validated on the SHU Dataset and the PhysioNet Dataset, with a Graph ResNet classification model extracting features from the selected channel combinations using deep learning techniques. Experimental results show that average classification accuracy improves by 5.36% and 9.19%, and the Information Translate Rate improves by 29.24% and 26.75%, respectively, compared to a single channel combination.
{"title":"An Adaptive Channel Selection and Graph ResNet Based Algorithm for Motor Imagery Classification","authors":"Yongquan Xia, Jianhua Dong, Duan Li, Kuan-Ching Li, J. Nan, Ruyun Xu","doi":"10.14569/ijacsa.2023.0140525","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140525","url":null,"abstract":"—In Brain-Computer interface (BCI) applications, achieving accurate control relies heavily on the classification accuracy and efficiency of motor imagery electroencephalogram (EEG) signals. However, factors such as mutual interference between multi-channel signals, inter-individual variability, and noise interference in the channels pose challenges to motor imagery EEG signal classification. To address these problems, this paper proposes an Adaptive Channel Selection algorithm aimed at optimizing classification accuracy and Information Translate Rate (ITR). First, C3, C4, and Cz are selected as key channels based on neurophysiological evidence and extensive experimental studies. Next, the channel selection is fine-tuned using spatial location and absolute Pearson correlation coefficients. By analyzing the relationship between EEG channels and key channels, the most relevant channel combination is determined for each subject, reducing confounding information and improving classification accuracy. To validate the method, the SHU Dataset and the PhysioNet Dataset are used in experiments. The Graph ResNet classification model is employed to extract features from the selected channel combinations using deep learning techniques. Experimental results show that the average classification accuracy is improved by 5.36% and 9.19%, and the Information Translate Rate is improved by 29.24% and 26.75%, respectively, compared to a single channel combination.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"8 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87544500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Tuna Swarm-based U-EfficientNet: Skin Lesion Image Segmentation by Improved Tuna Swarm Optimization
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140595
Khaja Raoufuddin Ahmed, S. A. Jalil, S. Usman
Skin cancers have been on an upward trend, with melanoma the most severe type. A growing body of work uses digital camera images for computer-aided examination of suspected skin lesions for cancer. Because of distracting elements such as lighting fluctuations and surface light reflections, these images are typically difficult to interpret. Segmenting the lesion area from healthy skin is a crucial step in cancer diagnosis. Hence, this research introduces an optimized deep learning approach to skin lesion segmentation in which EfficientNet is integrated with U-Net to enhance segmentation accuracy. In addition, Improved Tuna Swarm Optimization (ITSO) adjusts the modifiable parameters of the U-EfficientNet to minimize information loss during the learning phase. The proposed ITSU-EfficientNet achieves an Accuracy of 0.94, a Mean Square Error (MSE) of 0.06, a Precision of 0.94, a Recall of 0.94, an IoU of 0.92, and a Dice Coefficient of 0.94.
{"title":"Improved Tuna Swarm-based U-EfficientNet: Skin Lesion Image Segmentation by Improved Tuna Swarm Optimization","authors":"Khaja Raoufuddin Ahmed, S. A. Jalil, S. Usman","doi":"10.14569/ijacsa.2023.0140595","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140595","url":null,"abstract":"—Skin cancers have been on an upward trend, with melanoma being the most severe type. A growing body of investigation is employing digital camera images to computer-aided examine suspected skin lesions for cancer. Due to the presence of distracting elements including lighting fluctuations and surface light reflections, interpretation of these images is typically difficult. Segmenting the area of the lesion from healthy skin is a crucial step in the diagnosis of cancer. Hence, in this research an optimized deep learning approach is introduced for the skin lesion segmentation. For this, the EfficientNet is integrated with the UNet for enhancing the segmentation accuracy. Also, the Improved Tuna Swarm Optimization (ITSO) is utilized for adjusting the modifiable parameters of the U-EfficientNet to minimize the information loss during the learning phase. The proposed ITSU-EfficientNet is assessed based on various evaluation measures like Accuracy, Mean Square Error (MSE), Precision, Recall, IoU, and Dice Coefficient and acquired the values are 0.94, 0.06, 0.94, 0.94, 0.92 and 0.94 respectively.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"116 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87689184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Method for Myocardial Image Classification using Data Augmentation
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140695
Qingyong Zhu
Myocarditis is an important public health concern since it can cause heart failure and sudden death. It can be diagnosed with magnetic resonance imaging (MRI) of the heart, a non-invasive imaging technology with the potential for operator bias. This study provides a deep learning-based model for myocarditis detection using cardiac MR (CMR) images to support medical professionals. The proposed architecture comprises a convolutional neural network (CNN), a fully-connected decision layer, a generative adversarial network (GAN)-based algorithm for data augmentation, an enhanced differential evolution (DE) algorithm for pre-training the weights, and a reinforcement learning (RL)-based method for training. We present a new GAN-based method of employing generated images for data augmentation to improve the classification performance of the CNN. Unbalanced data is one of the most significant classification issues: negative samples outnumber positive ones, degrading system performance. To solve this issue, we offer an RL-based training method that learns minority-class examples with attention. We also tackle challenges associated with the training step, which typically relies on gradient-based techniques that are sensitive to initialization. To initialize the backpropagation process, we present an improved DE technique that leverages a clustering-based mutation operator; it identifies a successful cluster for DE and applies an original updating strategy to produce candidate solutions. We assess the suggested model on the Z-Alizadeh Sani myocarditis dataset and show that it outperforms other methods.

Keywords: Myocarditis; generative adversarial network; data augmentation; differential evolution
{"title":"A Novel Method for Myocardial Image Classification using Data Augmentation","authors":"Qingyong Zhu","doi":"10.14569/ijacsa.2023.0140695","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140695","url":null,"abstract":"Myocarditis is an important public health concern since it can cause heart failure and abrupt death. It can be diagnosed with magnetic resonance imaging (MRI) of the heart, a non-invasive imaging technology with the potential for operator bias. The study provides a deep learning-based model for myocarditis detection using CMR images to support medical professionals. The proposed architecture comprises a convolutional neural network (CNN), a fully-connected decision layer, a generative adversarial network (GAN)-based algorithm for data augmentation, an enhanced DE for pre-training weights, and a reinforcement learning-based method for training. We present a new method of employing produced images for data augmentation based on GAN to improve the classification performance of the provided CNN. Unbalanced data is one of the most significant classification issues, as negative samples are more than positive, decimating system performance. To solve this issue, we offer an RL-based training method that learns minority class examples with attention. In addition, we tackle the challenges associated with the training step, which typically relies on gradient-based techniques for the learning process; however, these methods often face issues like sensitivity to initialization. To start the BP process, we present an improved differential evolution (DE) technique that leverages a clustering-based mutation operator. It recognizes a successful cluster for DE and applies an original updating strategy to produce potential solutions. We assess our suggested model on the Z-Alizadeh Sani myocarditis dataset and show that it outperforms other methods. Keywords—Myocarditis; generative adversarial network; data augmentation; differential evolution","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"5 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87696579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud Service Composition using Firefly Optimization Algorithm and Fuzzy Logic
Pub Date: 2023-01-01 | DOI: 10.14569/ijacsa.2023.0140383
Wenzhi Wang, Zhanqiao Liu
Cloud computing involves the dynamic provision of virtualized, scalable resources over the Internet as services. A cloud environment may deliver different services with the same functionality but different non-functional properties in response to customer requests, and these may need to be combined to satisfy a customer's complex requirements. Recent research has focused on composing unique, loosely-coupled services into a preferred system: an optimized composite service combines existing single, simple services into an optimal whole, thereby improving quality of service (QoS). In recent years, cloud computing has driven the rapid proliferation of multi-provision cloud service compositions, in which cloud service providers offer multiple services simultaneously. Service composition fulfils a variety of user needs across scenarios. In a multi-cloud environment, a composite request (service request) requires atomic services (service candidates) located in multiple clouds, and service composition combines these atomic services into a single service. Since cloud services are growing rapidly and their QoS varies widely, finding the necessary services and composing them with quality assurances is an increasingly challenging technical task. This paper presents a method that uses the firefly optimization algorithm (FOA) and fuzzy logic to balance multiple QoS factors and satisfy service composition constraints. Experimental results show that the proposed method outperforms previous ones in response time, availability, and energy consumption.
{"title":"Cloud Service Composition using Firefly Optimization Algorithm and Fuzzy Logic","authors":"Wenzhi Wang, Zhanqiao Liu","doi":"10.14569/ijacsa.2023.0140383","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140383","url":null,"abstract":"—Cloud computing involves the dynamic provision of virtualized and scalable resources over the Internet as services. Different types of services with the same functionality but different non-functionality features may be delivered in a cloud environment in response to customer requests, which may need to be combined to satisfy the customer's complex requirements. Recent research has focused on combining unique and loosely-coupled services into a preferred system. An optimized composite service consists of formerly existing single and simple services combined to provide an optimal composite service, thereby improving the quality of service (QoS). In recent years, cloud computing has driven the rapid proliferation of multi-provision cloud service compositions, in which cloud service providers can provide multiple services simultaneously. Service composition fulfils a variety of user needs in a variety of scenarios. The composite request (service request) in a multi-cloud environment requires atomic services (service candidates) located in multiple clouds. Service composition combines atomic services from multiple clouds into a single service. Since cloud services are rapidly growing and their Quality of Service (QoS) is widely varying, finding the necessary services and composing them with quality assurances is an increasingly challenging technical task. This paper presents a method that uses the firefly optimization algorithm (FOA) and fuzzy logic to balance multiple QoS factors and satisfy service composition constraints. Experimental results prove that the proposed method outperforms previous ones in terms of response time, availability, and energy consumption.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"47 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89688781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}