VDRNet19: a dense residual deep learning model using stochastic gradient descent with momentum optimizer based on VGG-structure for classifying dementia
Pub Date: 2024-08-23 | DOI: 10.1007/s41870-024-02103-6
M. Pandiyarajan, R. S. Valarmathi
Dementia is a syndrome caused by various disorders and conditions that affect the brain, producing a gradual decline in neurological function that is commonly observed in older individuals. In our research the disease is categorized into three stages: mild dementia (MD), non-dementia (ND), and very mild dementia (VMD). Magnetic Resonance Imaging (MRI) scans of the brain are used for diagnosing dementia. In this research, a dense residual deep learning model using a stochastic gradient descent with momentum optimizer based on the VGG structure for classifying dementia (VDRNet19) is proposed, which can detect all three stages of dementia. The proposed model is trained and tested with the Open Access Series of Imaging Studies (OASIS) dataset. In this work, the Contrast Limited Adaptive Histogram Equalization (CLAHE) image enhancement method is employed to preprocess the raw images for analysis. To confront the class imbalance in the dataset, augmentation techniques are used, yielding a balanced dataset of 1941 images across the three classes. Initially, six existing models (DenseNet201, VGG19, ResNet152, AlzheimerNet [13], MobileNetV2, and an ensemble of pretrained networks) were trained and tested, attaining test accuracies of 93.84%, 92.42%, 91.1%, 89.73%, 87.67%, and 94.86% respectively. DenseNet201, VGG19, and ResNet152 yield the highest accuracies and form the backbone of the proposed model's design. VDRNet19, using stochastic gradient descent with momentum as the optimizer and a learning rate of 0.01, achieves the highest testing accuracy of 97.26%. This study compared the six pre-trained models alongside the proposed model in terms of performance metrics to determine whether the VDRNet19 model excels in classifying the three classes, and an ablation study was conducted to validate the chosen hyperparameters. Results indicate that the proposed model surpasses traditional methods in classifying dementia stages from brain MRI scan images.
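As an illustration of the preprocessing and training choices named above, the sketch below applies OpenCV's CLAHE to a grayscale MRI slice and configures SGD with momentum at the reported learning rate of 0.01. The clip limit, tile size, momentum value, and file path are assumptions for illustration, not settings reported in the abstract.

    import cv2
    import tensorflow as tf

    # CLAHE enhancement of a raw MRI slice. clipLimit and tileGridSize are
    # illustrative defaults, not values reported in the paper; the file
    # path is hypothetical.
    img = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)

    # SGD with momentum at the reported learning rate of 0.01;
    # momentum=0.9 is an assumed value.
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)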
{"title":"VDRNet19: a dense residual deep learning model using stochastic gradient descent with momentum optimizer based on VGG-structure for classifying dementia","authors":"M. Pandiyarajan, R. S. Valarmathi","doi":"10.1007/s41870-024-02103-6","DOIUrl":"https://doi.org/10.1007/s41870-024-02103-6","url":null,"abstract":"<p>Dementia disease is a syndrome caused by various disorders and conditions that affect the brain which causes gradual decline in neurological function commonly observed in older individuals. The disease is categorized into three stages in our research: Mild dementia (MD), Non-dementia (ND) and very mild dementia (VMD). Magnetic Resonance Imaging (MRI) scan of the brain is used for diagnosing dementia. In this research, a dense residual deep learning model using stochastic gradient descent with momentum optimizer based on VGG-structure for classifying dementia (VDRNet19) is proposed, which can detect all three stages of dementia The proposed model is trained and tested with the Open Access Series of Imaging and Studies (OASIS) dataset. In this work, the Contrast Limited Adaptive Histogram Equalization (CLAHE) image enhancement method is employed to preprocess the raw for analysis. In order to confront the imbalance in dataset, augmentation techniques are used. As a result, a balanced dataset comprising a total of 1941 images across the three classes are obtained. Initially, six existing models including DenseNet201, VGG19, ResNet152, AlzheimerNet [13], MobileNetV2 and ensemble of pretrained networks were trained and tested to attain 93.84%, 92.42%, 91.1%, 89.73%, 87.67% and 94.86% of test accuracies respectively. DenseNet201, VGG19, ResNet152 yields the highest accuracy, which is the backbone to design the proposed model. VDRNet19 using optimizer as stochastic gradient descent with momentum, 0.01 as learning rate, achieves the highest testing accuracy of 97.26%. This study compared six pre-trained models alongside the proposed model in terms of performance metrics to determine if the VDRNet19 model excels in classifying the three classes. An ablation study was conducted to validate the chosen hyperparameters. Results indicate that the proposed model surpasses traditional methods in classifying dementia stages from brain MRI scan images.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing accuracy and efficiency in real-time people counting with cascaded object detection
Pub Date: 2024-08-23 | DOI: 10.1007/s41870-024-02153-w
M. Raviraja Holla, D. Suma, M. Darshan Holla
Growing concerns about public safety have driven the demand for real-time surveillance, particularly in monitoring systems like people counters. Traditional methods that rely heavily on facial detection face challenges due to the complex nature of facial features. This paper presents an innovative people counting system known for its robustness, utilizing holistic bodily characteristics for improved detection and tallying. The system achieves exceptional performance through advanced computer vision techniques, with accuracy and precision rates of 100% under ideal conditions. Even in challenging visual conditions, it maintains an overall accuracy of 98.42% and a precision of 97.51%. Comprehensive analyses, including violin plots and heatmaps, support this performance. Additionally, by assessing accuracy and execution time against the number of cascading stages, we highlight the significant advantages of our approach. Experimentation with the TUD-Pedestrian dataset demonstrates an accuracy of 94.2%, and evaluation on the UCFCC dataset further proves the effectiveness of our approach in handling diverse scenarios, showcasing its robustness in real-world crowd counting applications. Compared to benchmark approaches, our proposed system demonstrates real-time precision and efficiency.
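The abstract reports results against the number of cascading stages but does not name the detector; as a hedged sketch, the snippet below counts whole bodies with OpenCV's pretrained full-body Haar cascade, with illustrative detectMultiScale parameters rather than the paper's configuration.

    import cv2

    # Count people in a frame with a pretrained full-body Haar cascade.
    # The cascade file ships with OpenCV; scaleFactor and minNeighbors
    # are common defaults, assumed here for illustration.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_fullbody.xml")

    def count_people(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        bodies = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=3)
        return len(bodies)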
{"title":"Optimizing accuracy and efficiency in real-time people counting with cascaded object detection","authors":"M. Raviraja Holla, D. Suma, M. Darshan Holla","doi":"10.1007/s41870-024-02153-w","DOIUrl":"https://doi.org/10.1007/s41870-024-02153-w","url":null,"abstract":"<p>Growing concerns about public safety have driven the demand for real-time surveillance, particularly in monitoring systems like people counters. Traditional methods heavily reliant on facial detection face challenges due to the complex nature of facial features. This paper presents an innovative people counting system known for its robustness, utilizing holistic bodily characteristics for improved detection and tallying. This system achieves exceptional performance through advanced computer vision techniques, with a flawless accuracy and precision rate of 100% under ideal conditions. Even in challenging visual conditions, it maintains an impressive overall accuracy of 98.42% and a precision of 97.51%. Comprehensive analyses, including violin plot and heatmaps, support this outstanding performance. Additionally, by assessing accuracy and execution time concerning the number of cascading stages, we highlight the significant advantages of our approach. Experimentation with the TUD-Pedestrian dataset demonstrates an accuracy of 94.2%. Evaluation using the UCFCC dataset further proves the effectiveness of our approach in handling diverse scenarios, showcasing its robustness in real-world crowd counting applications. Compared to benchmark approaches, our proposed system demonstrates real-time precision and efficiency.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"69 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weight factor and priority-based virtual machine load balancing model for cloud computing
Pub Date: 2024-08-23 | DOI: 10.1007/s41870-024-02119-y
E. Suganthi, F. Kurus Malai Selvi
Cloud computing enables individuals and businesses to buy services as needed. Numerous services are available through the paradigm, including easily accessible online services, platforms for deploying applications, and storage. One major problem in the cloud is load balancing (LB): it is difficult to guarantee application performance against Quality of Service (QoS) measurements and to adhere to the Service Level Agreement (SLA) document that cloud providers commit to with businesses. Equitable workload distribution among servers is a challenge for cloud providers. By effectively using virtual machines' (VMs) resources, an effective load-balancing approach should maximize resource utilization and guarantee high user satisfaction. This paper proposes an efficient load-balancing model for cloud computing using a weight factor and priority-based approach, which efficiently allocates each VM to a Physical Machine (PM). The main objective of this approach is to maintain QoS while reducing power usage, resource waste, and migration overhead. Based on the resources (CPU, RAM, bandwidth), the PM's current condition is computed using the suggested PM load identification algorithm, which relies on the resource weight factor. The priority-based VM allocation model then determines the ideal solution for selecting the suitable PM for each VM. The recommended method is simulated using the CloudSim toolkit, and performance in terms of energy consumption (EC) and SLA violations is assessed using the PlanetLab workload. Ultimately, the experimental findings demonstrate that the suggested algorithm significantly reduces SLA violations (SLAV) and energy usage compared to existing approaches.
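A minimal sketch of how a weight-factor load score and priority-based placement could look; the weights, the 0.8 headroom threshold, and the resource fields are illustrative assumptions rather than the paper's formula, which the abstract does not reproduce.

    from dataclasses import dataclass

    @dataclass
    class PM:
        cpu_used: float
        cpu_cap: float
        ram_used: float
        ram_cap: float
        bw_used: float
        bw_cap: float

    def load_score(pm, w=(0.5, 0.3, 0.2)):
        # Weighted utilisation across CPU, RAM, and bandwidth;
        # the weights are assumed values.
        return (w[0] * pm.cpu_used / pm.cpu_cap
                + w[1] * pm.ram_used / pm.ram_cap
                + w[2] * pm.bw_used / pm.bw_cap)

    def place_vm(pms):
        # Prefer the least-loaded PM that still has headroom, keeping
        # utilisation balanced across hosts.
        candidates = [p for p in pms if load_score(p) < 0.8] or pms
        return min(candidates, key=load_score)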
{"title":"Weight factor and priority-based virtual machine load balancing model for cloud computing","authors":"E. Suganthi, F. Kurus Malai Selvi","doi":"10.1007/s41870-024-02119-y","DOIUrl":"https://doi.org/10.1007/s41870-024-02119-y","url":null,"abstract":"<p>Cloud computing enables individuals and businesses to buy services as needed. Numerous services are available through the paradigm, including online services that are easily accessible, platforms for deploying applications, and storage. One major problem in the cloud is load balancing (LB), making it difficult to guarantee application performance to the Quality of Service (QoS) measurement and adhere to the Service Level Agreement (SLA) document as cloud providers require of businesses. Equitable workload distribution among servers is a challenge for cloud providers. By effectively using virtual machines' (VMs) resources, an effective load-balancing approach should maximize and guarantee high user satisfaction. This research paper proposes an efficient load-balancing model for cloud computing using a weight factor and priority-based approach. This approach efficiently allocates the VM to the Physical Machine (PM). The main objective of this approach is to maintain QoS while reducing power usage, resource waste, and migration overhead. Based on the resources (CPU, RAM, Bandwidth), the PM current condition is computed using the suggested PM load identification algorithm based on the resource weight factor. The priority-based VM allocation model determines the ideal solution for selecting the suitable PM for the VM. The recommended method is simulated using the Cloudsim toolbox, and performance in terms of EC and SLA breaches is assessed using the PlanetLab workload. Ultimately, the experimental findings demonstrate that the suggested algorithm significantly improves SLAV and energy usage compared to existing approaches.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly detection in cyber-physical systems using actuator state transition model
Pub Date: 2024-08-23 | DOI: 10.1007/s41870-024-02128-x
Rajneesh Kumar Pandey, Tanmoy Kanti Das
Cyber-physical systems (CPS) are vulnerable to cyber attacks that disrupt the operations of the associated physical process. Sensors are deployed in a CPS to observe its functioning, and control systems like actuators, Remote Terminal Units (RTUs), programmable logic controllers (PLCs), etc., are used to change the state of the CPS. Abnormal state transitions due to a cyber attack or natural fault may not be detected by a traditional Intrusion Detection System (IDS). Behavior-specification-based IDS, which employs the laws of physics to detect intrusions, may be helpful in this context. However, specifying acceptable behaviors based on the laws of physics for all the installed control systems of a complex CPS like a smart grid or a water treatment plant is a challenging task. Here, we employ a data-driven strategy to model the behavior of each control system installed in a CPS, and later use the models to predict the acceptable states of all the control systems. We utilize an AI-based classifier to model control systems such as actuators. Subsequently, we juxtapose the actual states of the actuators with their predicted states, examining how this combination correlates with the overall state of the CPS to identify anomalies. Typically, there should be a strong correlation between predicted and actual states, making the Hamming distance between them a crucial factor in our experimentation. To establish the relationship between controller states and CPS states, we employ a novel deep neural network-based approach for classification. Experimental validation of our approach leverages data from a water treatment testbed, where we achieve superior performance compared to state-of-the-art methods, with an F1-score of 0.96.
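A minimal sketch of the Hamming-distance check described above, assuming binary actuator state vectors; the threshold is an illustrative value, not the paper's.

    # Compare predicted actuator states (from the per-actuator classifiers)
    # with observed states, and flag a large mismatch as anomalous.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def is_anomalous(predicted, actual, threshold=2):
        # A strong divergence between predicted and observed states
        # suggests an attack or a natural fault.
        return hamming(predicted, actual) > threshold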
{"title":"Anomaly detection in cyber-physical systems using actuator state transition model","authors":"Rajneesh Kumar Pandey, Tanmoy Kanti Das","doi":"10.1007/s41870-024-02128-x","DOIUrl":"https://doi.org/10.1007/s41870-024-02128-x","url":null,"abstract":"<p>Cyber-physical systems (CPS) are vulnerable to cyber attacks which disrupt the operations of the associated physical process. Sensors are deployed in CPS to observe its functioning and control systems like actuators, Remote Terminal Units (RTU), programmable logic controllers (PLC), etc., are used to change the state of the CPS. Any abnormal state transitions due to cyber attack or natural fault may not be detected by the traditional Intrusion Detection System (IDS). Behavior specification-based IDS, which employs laws of physics to detect the intrusion, may be helpful in this context. However, specifying acceptable behaviors based on the laws of physics for all the installed control systems for a complex CPS like a smart grid, water treatment plant, etc., is a challenging task. Here, we employ a data-driven strategy to model the behavior of each control system installed in a CPS. Later, we use the models to predict the acceptable states of all the control systems. We utilize an AI-based classifier to model control systems such as actuators. Subsequently, we juxtapose the actual states of the actuators with their predicted states, examining how this combination correlates with the overall state of the CPS to identify anomalies. Typically, there should be a strong correlation between predicted and actual states, making the Hamming distance between them a crucial factor in our experimentation. To establish the relationship between controller states and CPS states, we employ a novel deep neural network-based approach for classification. Experimental validation of our approach leverages data from a water treatment testbed, where we achieve superior performance compared to the most state-of-the-art methods, achieving a <i>F1-score</i> of <b>0</b>.<b>96</b>.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meta-styled CNNs: boosting robustness through adaptive learning and style transfer
Pub Date: 2024-08-21 | DOI: 10.1007/s41870-024-02150-z
Arun Prasad Jaganathan
Recent studies reveal that standard Convolutional Neural Networks (CNNs) struggle when the training data is corrupted, leading to significant performance drops on noisy inputs. Real-world data, influenced by various sources of noise such as sensor inaccuracies, weather fluctuations, lighting variations, and obstructions, exacerbates this challenge substantially. To address this limitation, various studies have proposed employing style transfer on the training data. However, the precise impact of different style transfer parameter settings on the resulting model's robustness remains unexplored. In this study, we therefore systematically investigated various magnitudes of style transfer applied to the training data, assessing their effectiveness in enhancing model robustness. Our findings indicate that the most substantial improvement in robustness occurs when applying style transfer with maximum magnitude to the training data. Furthermore, we examined the significance of the composition of the dataset from which the styles are derived. Our results demonstrate that a limited subset of just 64 diverse, randomly selected styles is adequate to achieve the desired performance generalization even under corrupted testing conditions. Instead of uniformly selecting styles from the dataset, we then developed a probability distribution for selection. Notably, styles with higher selection probabilities exhibit qualitatively distinct characteristics compared to those with lower probabilities, suggesting a discernible impact on the model's robustness. Utilizing style transfer with the styles of maximum likelihood according to the learned distribution led to a 1.4% increase in mean performance under corruption compared to using an equivalent number of randomly chosen styles.
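A brief sketch of probability-weighted style selection, assuming a precomputed style bank and learned selection probabilities; `style_bank`, `probs`, and the seed are placeholders, while k=64 mirrors the subset size reported above.

    import numpy as np

    # Draw a style subset from a learned selection distribution rather
    # than uniformly. The paper learns the distribution; it is not
    # reproduced here.
    rng = np.random.default_rng(0)

    def sample_styles(style_bank, probs, k=64):
        probs = np.asarray(probs, dtype=float)
        probs = probs / probs.sum()  # normalise to a valid distribution
        idx = rng.choice(len(style_bank), size=k, replace=False, p=probs)
        return [style_bank[i] for i in idx]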
{"title":"Meta-styled CNNs: boosting robustness through adaptive learning and style transfer","authors":"Arun Prasad Jaganathan","doi":"10.1007/s41870-024-02150-z","DOIUrl":"https://doi.org/10.1007/s41870-024-02150-z","url":null,"abstract":"<p>Recent studies reveal that standard Convolutional Neural Networks (CNNs)—conventionally struggle—when the training data is corrupted, leading to significant performance drops with noisy inputs. Therefore, real-world data, influenced by various sources of noise like sensor inaccuracies, weather fluctuations, lighting variations, and obstructions, exacerbates this challenge substantially. To address this limitation—employing style transfer on the training data has been proposed by various studies. However, the precise impact of different style transfer parameter settings on the resulting model’s robustness remains unexplored. Therefore, in this study, we systematically investigated various magnitudes of style transfer applied to the training data, assessing their effectiveness in enhancing model robustness. Our findings indicate that the most substantial improvement in robustness occurs when applying style transfer with maximum magnitude to the training data. Furthermore, we examined the significance of the dataset’s composition from which the styles are derived. Our results demonstrate that utilizing a limited subset of just 64 diverse, randomly selected styles is adequate to observe desired performance generalization even under corrupted testing conditions. Therefore, instead of uniformly selecting styles from the dataset, we developed a probability distribution for selection. Notably, styles with higher selection probabilities exhibit qualitatively distinct characteristics compared to those with lower probabilities, suggesting a discernible impact on the model’s robustness. Utilizing style transfer with styles having maximum likelihood according to the learned distribution led to a 1.4% increase in mean performance under corruption compared to using an equivalent number of randomly chosen styles.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power-saving actionable recommendation system to minimize battery drainage in smartphones
Pub Date: 2024-08-20 | DOI: 10.1007/s41870-024-02111-6
Yusuf Awad, Islam Hegazy, El-Sayed M. El-Horbaty
The issue of smartphone battery drainage is a common and widespread concern faced by numerous users. The problem arises from the convergence of various factors, foremost among them intensive active usage, the concurrent operation of numerous background applications, elevated screen brightness levels, persistently poor network connectivity, and heavy demands on the device's hardware components. Mitigating this problem requires a strategic approach: reducing the processes running in the background, calibrating an optimal screen brightness, and disabling idle or underutilized sensors and hardware components. Achieving an effective balance in managing these multifaceted aspects is vital for enhancing device efficiency, reducing battery drainage, and ultimately optimizing the overall usability of smartphones. In this research, we present an innovative recommendation engine designed to empower users with actionable recommendations: actions to be taken in the system variable settings and in interaction with the smartphone that will minimize battery drainage. Through rigorous testing in real-world scenarios, our recommendation engine has demonstrated tangible success, extending daily smartphone usage by an average of approximately 3.5 hours, underscoring its practical efficacy and its potential for substantial impact on user experience and device longevity.
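Since the abstract does not disclose the engine's rules, the sketch below shows one way such actionable recommendations could be derived from device state; all thresholds and field names are hypothetical.

    # Hypothetical rule-based recommendations from a snapshot of device
    # state; the actual engine's rules and inputs are not given above.
    def recommend(state):
        tips = []
        if state.get("brightness", 0.0) > 0.7:
            tips.append("Lower screen brightness to an optimal level")
        if state.get("background_apps", 0) > 5:
            tips.append("Close unused background applications")
        if state.get("gps_idle", False):
            tips.append("Disable idle sensors such as GPS")
        if state.get("signal_quality", 1.0) < 0.3:
            tips.append("Prefer Wi-Fi when cellular coverage is poor")
        return tips

    print(recommend({"brightness": 0.9, "background_apps": 8,
                     "gps_idle": True}))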
{"title":"Power-saving actionable recommendation system to minimize battery drainage in smartphones","authors":"Yusuf Awad, Islam Hegazy, El-Sayed M. El-Horbaty","doi":"10.1007/s41870-024-02111-6","DOIUrl":"https://doi.org/10.1007/s41870-024-02111-6","url":null,"abstract":"<p>The issue of smartphone battery drainage is a common and widespread concern faced by numerous users. This problem arises due to the convergence of various factors, foremost among them being intensive active usage, the concurrent operation of numerous background applications, elevated screen brightness levels, persistent bad network connectivity, and the increased requirements on the device’s hardware elements. Mitigating this problem requires a strategic approach to reduce the processes running in the background, calibrate an optimal screen brightness, and disable idle or underutilized sensors and hardware components. Achieving an effective balance in managing these multifaceted aspects is vital for enhancing device efficiency, reducing battery drainage, and ultimately optimizing the overall usability of smartphones. In the context of this research, we present an innovative recommendation engine designed to empower users with actionable recommendations. These recommendations are actions to be taken in the system variable settings and interaction with the smartphone that will minimize battery drainage. Through rigorous testing in real-world scenarios, our recommendation engine has demonstrated tangible success, yielding an approximately daily smartphone usage extension of an average of 3.5 h in real-world testing, thus underscoring its practical efficacy and potential for substantial impact on user experience and device longevity.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User-authenticated IoMT security model using blockchain authorization with data indexing and analysis
Pub Date: 2024-08-20 | DOI: 10.1007/s41870-024-02151-y
Y. Jani, P. Raajan
The Internet of Medical Things (IoMT) could significantly enhance conventional Healthcare (HC) services. To secure HC data, numerous data preservation and authentication techniques have been developed, but they could not effectively address security concerns and failed to retrieve data in minimal time. Thus, this work proposes a user-authenticated security framework with blockchain-based authorization using an encoded access policy and a smart contract with data indexing. First, to book an appointment, the patient registers and logs in to the server. After consultation, the data is sensed and converted into a cipher, which is encrypted and uploaded to the hospital cloud server. The data's location is indexed in an American Standard Code for Information Interchange (ASCII) binary indexed tree. In the meantime, a smart contract is created based on the consultation details, which is converted to a hashcode and stored in the blockchain. Afterward, an encoded access policy is created from the patient's and doctor's data attributes. When the doctor logs in to the server, a smart contract is created and converted to a hash code. Based on the smart contract and the encoded policy, blockchain-based authorization is performed. After verification, the data is retrieved with the help of the indexed tree. Lastly, to provide a prescription, the attributes of the decrypted data are analyzed using Sigmoid Swish Long Short-Term Memory (SS-LSTM). In the experimental assessment, the proposed mechanism's performance is proven with superior outcomes.
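A minimal sketch of the contract-hashing step, assuming a SHA-256 hashcode over an ASCII-serialized consultation record; the record fields are hypothetical and the paper's contract schema is not reproduced.

    import hashlib
    import json

    # Serialize the consultation record to ASCII and derive the hashcode
    # that would be stored on-chain. SHA-256 is an assumed hash choice.
    def contract_hash(record):
        payload = json.dumps(record, sort_keys=True).encode("ascii")
        return hashlib.sha256(payload).hexdigest()

    h = contract_hash({"patient_id": "P001", "doctor_id": "D042",
                       "consultation": "2024-08-20"})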
{"title":"User-authenticated IoMT security model using blockchain authorization with data indexing and analysis","authors":"Y. Jani, P. Raajan","doi":"10.1007/s41870-024-02151-y","DOIUrl":"https://doi.org/10.1007/s41870-024-02151-y","url":null,"abstract":"<p>The Internet of Medical Things (IoMT) could significantly enhance conventional Healthcare (HC) services. To secure HC data, numerous data preservation and authentication techniques were developed. But, they could not effectively address security concerns and failed to retrieve data in minimal time. Thus, this work proposes a user Authenticated Security framework with blockchain-based authorization using an encoded access policy and smart contract with data indexing. Primarily, to book an appointment, the patient registers and login to the server. After consultation, the data is sensed and converted into cipher. This cipher is encrypted and uploaded to the hospital cloud server. In American Standard Code for Information Interchange(ASCII) binary Indexed Tree, the data’s location is indexed. In the meantime, a smart contract is created grounded on consultation details, which are converted to hashcode and stored in the blockchain. Afterward, by utilizing the patient’s and doctor’s data attributes, an encoded access policy is created. Now, the doctor login to the server, and a smart contract is created, which is converted to hash code. Grounded on the smart contract and encoded policy, blockchain-based authorization is performed. After verifying, the data is retrieved with the help of the indexed tree. Lastly, to provide a prescription, the attributes of the decrypted data are analyzed using Sigmoid Swish Long short-term memory (SS-LSTM). In experimental assessment, the proposed mechanism’s performance is proven with superior outcomes.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"71 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biomedical named entity recognition through improved balanced undersampling for addressing class imbalance and preserving contextual information
Pub Date: 2024-08-20 | DOI: 10.1007/s41870-024-02137-w
S. M. Archana, Jay Prakash
Biomedical Named Entity Recognition (Bio-NER) identifies and categorises the named entities in biomedical text data, such as diseases, chemicals, proteins, and genes. Since most biomedical data originates from the real world, the majority of data instances do not pertain to the specific named entity of interest. This imbalance adversely impacts the performance of machine-learning-based Bio-NER models, as their learning objective is usually dominated by the majority of non-entity tokens. Various undersampling techniques have been introduced to address this issue. Balanced Undersampling (BUS) is one approach that operates at the sentence level to enhance Bio-NER. However, BUS fails to preserve contextual information during the undersampling procedure. To overcome this limitation, we introduce an improved Balanced Undersampling method (iBUS) for Bio-NER. During the undersampling process, iBUS considers the importance of individual instances and generates a balanced dataset while retaining essential instances. To validate the effectiveness of the proposed method against competitive methods, we perform experiments using the NCBI disease dataset and the CHEMDNER and BC5CDR chemical datasets. The experimental results demonstrate the superiority of the proposed method in terms of F1 score compared to competitive approaches.
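A hedged sketch of importance-aware sentence undersampling in the spirit of iBUS: entity-bearing sentences are kept, and entity-free sentences are ranked by overlap with entity contexts before subsampling. The overlap heuristic and keep ratio are illustrative stand-ins for the paper's importance criterion.

    # Each sentence is a (tokens, labels) pair; "O" marks non-entity tokens.
    def undersample(sentences, keep_ratio=0.3):
        positives = [s for s in sentences if any(l != "O" for l in s[1])]
        negatives = [s for s in sentences if all(l == "O" for l in s[1])]
        # Vocabulary seen around entities, used as a crude context signal.
        context = {t for toks, _ in positives for t in toks}
        # Keep the entity-free sentences that share the most context words,
        # so contextual information survives the undersampling.
        negatives.sort(key=lambda s: -len(set(s[0]) & context))
        kept = negatives[: int(len(negatives) * keep_ratio)]
        return positives + kept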
{"title":"Biomedical named entity recognition through improved balanced undersampling for addressing class imbalance and preserving contextual information","authors":"S. M. Archana, Jay Prakash","doi":"10.1007/s41870-024-02137-w","DOIUrl":"https://doi.org/10.1007/s41870-024-02137-w","url":null,"abstract":"<p>Biomedical Named Entity Recognition (Bio-NER) identifies and categorises the named entities of biomedical text data such as disease, chemical, protein, and gene. Since most of the biomedical data originates from the real world, the majority of data instances do not pertain to the specific named entity of interest. Therefore, this imbalance of data adversely impacts the performance of Bio-NER using machine learning models, as their learning objective is usually dominated by the majority of non-entity tokens. Various undersampling techniques have been introduced to address this issue. Balanced Undersampling (BUS) is one of the approaches which operates at the sentence level to enhance biomedical NER (Bio-NER). However, BUS lacks in preserving contextual information during the undersampling procedure. To overcome this limitation, we introduce an improved Balanced Undersampling method (iBUS) for Bio-NER. During the undersampling process, iBUS considers the importance of individual instances and generates a balanced dataset while retaining essential instances. To validate the effectiveness of the proposed method over competitive methods, we perform experiments using the NCBI disease dataset, CHEMDNER, and BC5CDR chemical datasets. The experimental results demonstrate the superiority of the proposed method in terms of the F1 score compared to competitive approaches.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sentiment-aware drug recommendations with a focus on symptom-condition mapping
Pub Date: 2024-08-19 | DOI: 10.1007/s41870-024-02091-7
E. Anbazhagan, E. Sophiya, R. Prasanna Kumar
The adoption of digital health records and the rise of online medical forums have resulted in massive volumes of unstructured healthcare data. Most of the data used by traditional drug recommendation systems comes from patient Electronic Health Records (EHR) and from the subjective feedback and experiences included in patient evaluations. Nevertheless, current systems based on sentiment analysis fail to consider symptom-based diagnosis, while research proposing graph models does not account for patient satisfaction and health history, even though some patients have specific needs. To address the drawbacks of existing drug recommendation systems, this study suggests a novel approach that combines symptom-disease mapping with sentiment analysis of patient reviews. In Phase I, machine learning classifiers make symptom-based predictions about probable medical conditions; in Phase II, patient reviews relevant to the predicted condition are filtered before being fed into sequence networks and machine learning models. This method generates probabilities for suggesting particular drugs by evaluating sentiments and incorporating review ratings. The ensemble model reaches a performance score of up to 99.25% in Phase I, and the sentiment analyser achieves an accuracy of 99.45% in Phase II. The performance of the model was evaluated on accuracy, Receiver Operating Characteristic (ROC) Area Under Curve (AUC) score, sensitivity, and selectivity. The proposed system helps recommend the optimal drug for any type of symptom sample available in the database.
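A high-level sketch of the two-phase flow: Phase I predicts a condition from symptoms, Phase II filters and scores reviews for that condition. `condition_clf` and `sentiment_model` are placeholder objects with assumed predict/score interfaces; the paper's ensemble classifier and sequence networks are not reproduced here.

    def recommend_drugs(symptoms, reviews, condition_clf, sentiment_model,
                        top_k=3):
        # Phase I: predict the probable condition from the symptom vector.
        condition = condition_clf.predict([symptoms])[0]
        # Phase II: keep only reviews about the predicted condition.
        relevant = [r for r in reviews if r["condition"] == condition]
        scores = {}
        for r in relevant:
            # Blend review sentiment with the review's star rating.
            s = sentiment_model.score(r["text"]) * r["rating"]
            scores[r["drug"]] = scores.get(r["drug"], 0.0) + s
        return sorted(scores, key=scores.get, reverse=True)[:top_k]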
{"title":"Sentiment-aware drug recommendations with a focus on symptom-condition mapping","authors":"E. Anbazhagan, E. Sophiya, R. Prasanna Kumar","doi":"10.1007/s41870-024-02091-7","DOIUrl":"https://doi.org/10.1007/s41870-024-02091-7","url":null,"abstract":"<p>The adoption of digital health records and the rise of online medical forums resulted in massive volumes of unstructured healthcare data. Most of the data used by traditional drug recommendation systems is obtained from patient Electronic Health Records (EHR) and subjective feedback and experiences included in patient evaluations. Nevertheless, the current systems based on sentiment analysis fail consider Symptom based diagnosis whereas researches that proposes Graph models doesn’t not include patient satisfaction and Health History as some has specific needs. To address the draw backs of existing drug recommendation systems, this study suggests a novel approach that combines symptom-disease mapping with sentiment analysis of patient reviews. The primary objective of the research is to utilize machine learning classifiers to make symptom-based predictions about probable medical conditions as Phase I. Then, before being fed into sequence network and machine learning models, patient reviews that are relevant to the predicted condition are filtered as Phase II. This method generates probabilities for suggesting certain drugs by evaluating sentiments and incorporating review ratings. With a Performance score of Ensemble Model up to 99.25% in Phase I and accuracy of 99.45% for sentiment analyser in Phase II. The performance of the model was evaluated based on accuracy, Receiver Operating Characteristic Curve (ROC)-Area Under Curve (AUC) score, sensitivity, selectivity. The proposed system helps in recommending the optimal drug for any type of symptom samples which is available in database.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging CNN and principal component analysis for dynamic variance control in audio compression
Pub Date: 2024-08-18 | DOI: 10.1007/s41870-024-02155-8
Asish Debnath, Uttam Kr. Mondal
This study addresses challenges arising from large audio file storage needs and rising network bandwidth demands. In this paper, a novel audio codec design is proposed, integrating audio sample segregation, user-input variance-controlled principal component analysis (PCA), and a Convolutional Neural Network (CNN). PCA computes sample variance feature vectors, extracts principal components, and determines compression rates. The method leverages PCA and CNN to compress audio efficiently, yielding high-quality reconstructed audio. Experimental results show that increasing the number of PCA components generally improves PSNR values, while decreasing them may reduce the compression ratio (CR), MSE, and other error metrics. The simulation results are analyzed and compared to other existing lossless audio encoding schemes on various statistical and robustness measures.
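A brief sketch of the variance-controlled PCA stage using scikit-learn, where a fractional n_components keeps just enough components to reach a target explained variance, acting as the user-supplied control described above; the 512-sample frame length and the 0.95 target are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    def compress_frames(audio, frame_len=512, var_target=0.95):
        # Segregate the signal into fixed-length frames (rows).
        n = len(audio) // frame_len
        frames = audio[: n * frame_len].reshape(n, frame_len)
        # A float n_components retains components up to the requested
        # fraction of explained variance.
        pca = PCA(n_components=var_target)
        coded = pca.fit_transform(frames)        # compressed representation
        restored = pca.inverse_transform(coded)  # reconstruction
        return coded, restored.reshape(-1)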
{"title":"Leveraging CNN and principal component analysis for dynamic variance control in audio compression","authors":"Asish Debnath, Uttam Kr. Mondal","doi":"10.1007/s41870-024-02155-8","DOIUrl":"https://doi.org/10.1007/s41870-024-02155-8","url":null,"abstract":"<p>This study addresses challenges arising from large audio file storage needs and rising network bandwidth demands. In this paper, a novel audio codec design is proposed, integrating audio sample segregation, user input variance controlled principal component analysis (PCA), and Convolutional Neural Network (CNN). PCA computes sample variance feature vectors, extracts principal components, and determines compression rates. This method leverages PCA and CNN to compress audio efficiently, yielding high-quality reconstructed audio. Experimental results show that increasing PCA components generally improves PSNR values, while decreasing components may reduce CR, MSE, and other error metrics. The simulation results are analyzed and compared to other existing lossless audio encoding schemes with various statistical and robustness features.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}