Fake and propaganda images detection using automated adaptive gaining sharing knowledge algorithm with DenseNet121
Pub Date : 2024-07-13  DOI: 10.1007/s12652-024-04829-4
A. Muthukumar, M. Thanga Raj, R. Ramalakshmi, A. Meena, P. Kaleeswari
Recent advances in natural-language and image generation offer yet another tool for swaying public opinion on social media. The term “deep fake” originates from deep learning technology, which can seamlessly insert a person into digital media. Artificial Intelligence (AI) techniques are a crucial component of deep fakes, and generative models strongly reinforce advances in language modeling for content generation. Owing to low-cost computing infrastructure, sophisticated tools, and readily available content on social media, deep fakes propagate misinformation and hoaxes, making it simple to produce content that spreads fear and confusion. Distinguishing authentic from fraudulent content can therefore be challenging. This study presents a novel automated approach for the identification of deep fakes, based on Adaptive Gaining Sharing Knowledge (AGSK) and the DenseNet121 architecture. During pre-processing, sensitive data variance and noise are removed from the image. CapsuleNet then extracts the feature vectors, from which the deep fake is identified by DenseNet121, whose hyper-parameters are optimized with the AGSK model. The results of the proposed deepfake image recognition model show that it is reliable and effective, reducing the threat posed by propaganda and defamation. The model achieves a detection accuracy of 98%, exceeding other state-of-the-art models.
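The abstract says the DenseNet121 hyper-parameters are tuned by AGSK, a population-based optimizer, but gives no update rules. The sketch below is a loose, hypothetical illustration of population-based hyper-parameter search in that spirit only: candidates "gain" knowledge by moving toward the current best and "share" through random perturbation. `toy_val_accuracy` is an invented stand-in for actually training and validating the network; none of the names or constants come from the paper.

```python
import random

def toy_val_accuracy(lr, depth):
    # Hypothetical stand-in for training DenseNet121 and measuring
    # validation accuracy; this toy surface peaks at lr=0.01, depth=121.
    return 1.0 - abs(depth - 121) / 200 - abs(lr - 0.01) * 10

def population_search(pop_size=8, generations=30, seed=1):
    rng = random.Random(seed)
    # Each individual is one candidate hyper-parameter pair (lr, depth).
    pop = [(rng.uniform(1e-4, 0.1), rng.randint(50, 200)) for _ in range(pop_size)]
    best = max(pop, key=lambda h: toy_val_accuracy(*h))
    for _ in range(generations):
        new_pop = []
        for lr, depth in pop:
            # "Gain" knowledge by moving halfway toward the current best,
            # "share" by adding a small random perturbation.
            lr2 = lr + 0.5 * (best[0] - lr) + rng.gauss(0, 0.005)
            d2 = depth + round(0.5 * (best[1] - depth)) + rng.randint(-3, 3)
            new_pop.append((max(lr2, 1e-5), max(d2, 1)))
        pop = new_pop
        best = max(pop + [best], key=lambda h: toy_val_accuracy(*h))
    return best

best_lr, best_depth = population_search()
```

The monotone `best` plus small perturbations give a simple hill-climbing dynamic; the real AGSK algorithm adapts its gaining/sharing ratios, which this sketch omits.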
On weighted threshold moment estimation of uncertain differential equations with applications in interbank rates analysis
Pub Date : 2024-07-12  DOI: 10.1007/s12652-024-04828-5
Jiajia Wang, Helin Gong, Anshui Li
Uncertainty theory is a branch of mathematics for modeling belief degrees. Within its framework, an uncertain variable represents quantities with uncertainty, and an uncertain process models the evolution of uncertain quantities. An uncertain differential equation is a differential equation involving uncertain processes; it has been successfully applied in many disciplines such as finance, optimal control, differential games, and epidemic spread, and has become the main tool for dealing with dynamic uncertain systems. A key issue in this research area is estimating the parameters involved from observed data, which is relatively difficult when the structures of the corresponding terms in the equations are not known in advance. To address this problem, this paper proposes a nonparametric estimation technique, weighted threshold moment estimation, for homogeneous uncertain differential equations when no prior information about the terms is available. Numerical examples demonstrate the feasibility and efficiency of the method, highlighted by an empirical study of the Shanghai Interbank Offered Rate in China. The paper concludes with final remarks and recommendations for future research.
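To make the estimation idea concrete: for the simplest homogeneous equation dX_t = mu dt + sigma dC_t, classical moment estimation matches the empirical moments of the normalised increments to those of a standard normal uncertain variable (mean 0, second moment 1). The sketch below shows that simpler parametric version, not the paper's weighted threshold, nonparametric estimator; sampling the Liu-process increment with Gaussian draws is a modelling shortcut for illustration only.

```python
import math
import random

def simulate_homogeneous_ude(mu, sigma, dt, n, seed=0):
    # Toy simulation of increments of dX = mu*dt + sigma*dC. The Liu
    # increment is drawn here as a Gaussian sample scaled by dt, purely
    # so the estimator below has data to work with.
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(x[-1] + mu * dt + sigma * dt * rng.gauss(0, 1))
    return x

def moment_estimate(x, dt):
    # Match the first two empirical moments of the normalised residuals
    # (dX - mu*dt)/dt against mean 0 and second moment 1.
    dx = [b - a for a, b in zip(x, x[1:])]
    mu_hat = sum(dx) / (len(dx) * dt)
    resid = [(d - mu_hat * dt) / dt for d in dx]
    sigma_hat = math.sqrt(sum(r * r for r in resid) / len(resid))
    return mu_hat, sigma_hat

path = simulate_homogeneous_ude(mu=2.0, sigma=0.5, dt=0.01, n=2000)
mu_hat, sigma_hat = moment_estimate(path, dt=0.01)
```

The paper's contribution is precisely that it avoids assuming the drift/diffusion structure used above and instead weights threshold-based moment conditions.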
Weed detection in precision agriculture: leveraging encoder-decoder models for semantic segmentation
Pub Date : 2024-07-12  DOI: 10.1007/s12652-024-04832-9
Shreya Thiagarajan, A. Vijayalakshmi, G. Hannah Grace
Precision agriculture uses data gathered from various sources to improve yields and the effectiveness of crop management techniques such as fertiliser application, irrigation management, and pesticide application. Reducing the use of agrochemicals is a key step towards more sustainable agriculture, and weed management robots, which can perform tasks like selective spraying or mechanical weed elimination, contribute to this objective. For these robots to function, a trustworthy crop/weed classification system that can accurately recognise and classify crops and weeds is required. In this paper, we explore various deep learning models for achieving reliable segmentation results in less training time. Using semantic segmentation models, we classify every pixel of an image into different categories. The models are based on an encoder-decoder architecture, in which feature maps are extracted during encoding and spatial information is recovered during decoding. We examine the segmentation output on a beans dataset containing different weeds, collected under highly distinct environmental conditions: cloudy, rainy, dawn, evening, full sun, and shadow.
A transformer-based Urdu image caption generation
Pub Date : 2024-07-02  DOI: 10.1007/s12652-024-04824-9
Muhammad Hadi, Iqra Safder, Hajra Waheed, Farooq Zaman, Naif Radi Aljohani, Raheel Nawaz, Saeed Ul Hassan, Raheem Sarwar
Image caption generation has emerged as a remarkable development at the intersection of Natural Language Processing (NLP) and Computer Vision (CV), presenting unique challenges for low-resource languages such as Urdu. Limited research on basic Urdu language understanding necessitates further exploration in this domain. In this study, we propose three Seq2Seq-based architectures specifically tailored for Urdu image caption generation, leveraging transformer models to generate captions in Urdu, a significantly more challenging task than in English. To facilitate training and evaluation, we created an Urdu-translated subset of the Flickr8k dataset containing images of dogs in action accompanied by corresponding Urdu captions. Our models used three different architectures: a Convolutional Neural Network (CNN) + Long Short-Term Memory (LSTM) with soft attention employing word2vec embeddings, CNN+Transformer, and ViT+RoBERTa. Experimental results demonstrate that our proposed model outperforms existing state-of-the-art approaches, achieving scores of 86 BLEU-1 and 90 BERT-F1. The generated Urdu image captions are syntactically, contextually, and semantically correct. Our study highlights the inherent challenges of retraining models on low-resource languages, and our findings underline the potential of pre-trained models for developing NLP and CV applications in low-resource settings.
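The BLEU-1 score reported above is, at sentence level, a clipped unigram precision scaled by a brevity penalty. A minimal sketch of that computation (the paper's corpus-level scoring and any smoothing may differ):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    # Clipped unigram precision: each candidate word counts at most as
    # often as it appears in the reference.
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("the cat the cat on the mat", "the cat sat on the mat")
```

Here "the" appears three times in the candidate but only twice in the reference, so it is clipped to 2, giving 5 matched unigrams out of 7.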
Advancing mental health predictions through sleep posture analysis: a stacking ensemble learning approach
Pub Date : 2024-07-01  DOI: 10.1007/s12652-024-04827-6
Muhammad Nouman, Sui Yang Khoo, M. A. Parvez Mahmud, Abbas Z. Kouzani
Sleep posture is closely related to sleep quality and can offer insights into an individual’s health; this correlation can potentially aid the early detection of mental health disorders such as depression and anxiety. Current research focuses on embedding pressure sensors in bedsheets, attaching accelerometers to a subject’s chest, and installing cameras in bedrooms for sleep posture monitoring, but such solutions sacrifice either the user’s sleep comfort or privacy. This study explores the effectiveness of contactless ultra-wideband (UWB) sensors for sleep posture monitoring, employing a UWB dataset composed of measurements from 12 volunteers during sleep. A stacking ensemble learning method with two levels of learning is introduced for monitoring sleep postural transitions. At the base-learner level, six transfer learning models (VGG16, ResNet50V2, MobileNet50V2, DenseNet121, VGG19, and ResNet101V2) are trained on the training dataset for initial predictions. Logistic regression is then employed as a meta-learner, trained on the base-learner predictions to obtain the final sleep postural transitions. In addition, a sleep posture monitoring algorithm is presented that gives accurate statistics of total sleep postural transitions. Extensive experiments achieve a highest accuracy of 86.7% for the classification of sleep postural transitions, and time-series data augmentation improves the accuracy by 13%. The privacy-preserving sleep monitoring solution presented in this paper holds promise for applications in mental health research.
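The second stacking level described above takes each base model's predicted probability as an input feature and fits logistic regression on top. A minimal, self-contained sketch of that meta-learner (plain gradient descent on synthetic base-model outputs; the paper's six CNN base learners are replaced here by one invented informative model and one noise model):

```python
import math
import random

def train_meta_learner(base_preds, labels, lr=0.5, epochs=2000):
    # Logistic-regression meta-learner over base-model probabilities,
    # fitted with batch gradient descent on the cross-entropy loss.
    n_models = len(base_preds[0])
    w, b = [0.0] * n_models, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_models, 0.0
        for x, y in zip(base_preds, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(n_models):
                gw[i] += (p - y) * x[i]
            gb += p - y
        w = [wi - lr * gi / len(labels) for wi, gi in zip(w, gw)]
        b -= lr * gb / len(labels)
    return w, b

def meta_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

rng = random.Random(0)
labels = [rng.randint(0, 1) for _ in range(200)]
# Base model 1 is informative (0.9 for positives, 0.1 for negatives);
# base model 2 is pure noise. The meta-learner should weight model 1.
base_preds = [[0.8 * y + 0.1, rng.random()] for y in labels]
w, b = train_meta_learner(base_preds, labels)
acc = sum(meta_predict(w, b, x) == y
          for x, y in zip(base_preds, labels)) / len(labels)
```

The design choice stacking exploits is that the meta-learner can learn per-model reliability instead of, say, a fixed majority vote.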
Dog behaviors identification model using ensemble convolutional neural long short-term memory networks
Pub Date : 2024-06-27  DOI: 10.1007/s12652-024-04822-x
Eman I. Abd El-Latif, Mohamed El-dosuky, Ashraf Darwish, Aboul Ella Hassanien
This paper presents a new model based on Convolutional Neural Networks (CNN) with a Long Short-Term Memory (LSTM) network and an ensemble technique for identifying seven different dog behaviors. The model uses data collected from two sensors attached to the dog’s back and neck. First, undefined tasks are removed and the Synthetic Minority Oversampling Technique (SMOTE) is applied to address the imbalanced data problem. Then, the CNN-LSTM and ensemble classifier are adapted to identify the various dog behaviors. Finally, two experiments evaluate the model: the first uses the original (imbalanced) data, while the second uses the balanced dataset. The model identifies the seven dog behaviors with an accuracy of 96.73%, a sensitivity of 96.76%, a specificity of 96.73%, and an F1 score of 96.73%. The SMOTE data balancing strategy thus not only overcomes the imbalanced data problem but also significantly improves minority class accuracy. Additionally, the model is tested against cutting-edge algorithms, and the outcomes demonstrate its superior performance.
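SMOTE, used above for balancing, synthesises new minority samples by interpolating between a minority sample and one of its k nearest minority neighbours. A minimal pure-Python sketch of that interpolation step (the dataset values are invented; real pipelines typically use a library implementation):

```python
import random

def smote(minority, n_synthetic, k=3, seed=42):
    # Minimal SMOTE sketch: pick a minority sample, pick one of its k
    # nearest minority neighbours, and interpolate at a random fraction.
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist2(x, m))[:k]
        n = rng.choice(neighbours)
        u = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append([xi + u * (ni - xi) for xi, ni in zip(x, n)])
    return synthetic

minority = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [1.1, 1.3]]
new_points = smote(minority, n_synthetic=6)
```

Because every synthetic point lies on a segment between two real minority samples, oversampling enriches the minority region without duplicating rows exactly.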
Identification and diagnosis of cervical cancer using a hybrid feature selection approach with the bayesian optimization-based optimized catboost classification algorithm
Pub Date : 2024-06-21  DOI: 10.1007/s12652-024-04825-8
Joy Dhar, Souvik Roy
Cervical cancer is among the most prevalent illnesses affecting women globally. Since it is highly preventable, early diagnosis is the most effective plan to lessen its global burden. However, because of limited awareness, a shortage of access to medical centers, and costly screening schemes worldwide, especially in emerging nations, vulnerable populations cannot undergo the test regularly. A clinical screening analysis is therefore needed to diagnose cervical cancer early, support doctors in treating it, prevent its spread to women’s other organs, and save lives. This paper introduces a novel hybrid approach to these problems: a hybrid feature selection approach with Bayesian optimization-based optimized CatBoost (HFS-OCB) to diagnose and predict cervical cancer risk. Genetic algorithm and mutual information approaches form the hybrid feature selection (HFS) method, which selects the most significant features from the input dataset. The paper also employs a novel Bayesian optimization-based hyperparameter optimization approach, the optimized CatBoost (OCB) method, to provide optimal hyperparameters for the CatBoost algorithm, which classifies cervical cancer risk. Two real-world, publicly available cervical cancer datasets are used to evaluate and verify the approach’s performance, assessed with a 20-fold cross-validation strategy and standard performance evaluation metrics. The outcomes imply that the risk of developing cervical cancer can be efficiently predicted using the suggested HFS-OCB method. Compared with the other machine learning algorithms, the suggested approach is more capable, reliable, scalable, and effective.
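One half of the hybrid feature selection above is a mutual-information filter: features are scored by how much information they share with the class label. A minimal sketch for discrete features (the crafted data is illustrative, not from the paper's datasets):

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    # Discrete mutual information I(X;Y) in nats:
    # sum over (x, y) of p(x,y) * log(p(x,y) / (p(x) * p(y))).
    n = len(labels)
    pxy = Counter(zip(feature, labels))
    px, py = Counter(feature), Counter(labels)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
    return mi

labels = [0, 0, 1, 1] * 25
informative = labels[:]        # perfectly predicts the label
uninformative = [0, 1] * 50    # statistically independent of the label
scores = {"informative": mutual_information(informative, labels),
          "uninformative": mutual_information(uninformative, labels)}
```

Ranking by this score and keeping the top features gives the filter stage; the genetic-algorithm stage then searches over feature subsets, which a single per-feature score cannot do.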
Lightweight and privacy-preserving device-to-device authentication to enable secure transitive communication in IoT-based smart healthcare systems
Pub Date : 2024-06-21  DOI: 10.1007/s12652-024-04810-1
Sangjukta Das, Maheshwari Prasad Singh, Suyel Namasudra
Internet of Things (IoT) devices are often authenticated directly by the gateways within the network. In complex and large systems, however, IoT devices may be connected to the gateway through another device in the network; in such a scenario, the new device should be authenticated with the gateway through the intermediate device. To address this issue, this paper proposes an authentication process for IoT-enabled healthcare systems that performs privacy-preserving mutual authentication between the gateway and an IoT device through intermediate devices already authenticated by the gateway. The approach relies on the session key established during gateway-intermediate device authentication. To keep the system lightweight and efficient, it employs lightweight cryptographic operations such as XOR, concatenation, and hash functions. The approach goes beyond traditional device-to-device authentication, allowing authentication to propagate across multiple devices or nodes in the network, and it establishes a secure session between an authorized device and a gateway, preventing unauthorized devices from accessing healthcare systems. The security of the protocol is validated through a thorough analysis using the AVISPA tool, and its performance is evaluated against existing schemes, demonstrating significantly lower communication and computation costs.
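The abstract names the primitive toolkit (XOR, concatenation, hash) but not the message flow. The sketch below is a generic challenge-response built from only those primitives, to show why they suffice for lightweight mutual authentication; it is not the paper's protocol, and the key `K` stands in for the session key from the prior gateway-intermediate authentication.

```python
import hashlib
import os

def h(*parts):
    # Hash of the concatenation of all parts.
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical shared secret from the earlier gateway/intermediate auth.
K = h(b"session-key-from-prior-auth")

def device_respond(K, gw_nonce):
    # The device masks its nonce with a keyed keystream (XOR) and binds
    # both nonces to the key with a hash -- no heavyweight crypto.
    dev_nonce = os.urandom(32)
    masked = xor(dev_nonce, h(K, gw_nonce))
    proof = h(K, gw_nonce, dev_nonce)
    return masked, proof, dev_nonce

def gateway_verify(K, gw_nonce, masked, proof):
    dev_nonce = xor(masked, h(K, gw_nonce))  # unmask with the same keystream
    if h(K, gw_nonce, dev_nonce) != proof:
        return None  # authentication failed
    return h(K, gw_nonce, dev_nonce, b"session")  # fresh session key

gw_nonce = os.urandom(32)
masked, proof, dev_nonce = device_respond(K, gw_nonce)
sk_gateway = gateway_verify(K, gw_nonce, masked, proof)
sk_device = h(K, gw_nonce, dev_nonce, b"session")
```

Both sides derive the same fresh session key only if they hold the same `K` and saw the same nonces, which is the property the transitive scheme needs to extend hop by hop.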
Pub Date : 2024-06-21DOI: 10.1007/s12652-024-04826-7
Thomas Dolmark, Osama Sohaib, Ghassan Beydoun, Firouzeh Taghikhah
The importance of knowledge for organizational success is widely recognized, leading managers to leverage knowledge actively. Within knowledge transfer, the Absorptive Capacity (ACAP) of Knowledge Recipients (KRs) emerges as an unresolved barrier. ACAP is the dynamic capability to absorb knowledge; at the organizational level, it can surpass the simple aggregation of individual ACAP. However, more research is needed on individual-level ACAP and its implications for bridging the gap between individual and organizational knowledge transfer. To address this gap, this study employs Agent-Based Modeling (ABM) as a simulation method to replicate individual ACAP within an organization, facilitating the examination of knowledge transfer dynamics. ABM allows for detailed analysis of interactions between individual KRs and the organizational environment, revealing how uninterrupted time and other factors influence knowledge absorption. The study's implications are that ABM provides specific insights into how individual ACAP affects organizational learning and performance, underscoring the importance of uninterrupted time for KRs to achieve optimal knowledge exploitation and highlighting the need for organizational practices and policies that foster environments conducive to knowledge absorption.
{"title":"Agent-based modelling of individual absorptive capacity for effective knowledge transfer","authors":"Thomas Dolmark, Osama Sohaib, Ghassan Beydoun, Firouzeh Taghikhah","doi":"10.1007/s12652-024-04826-7","DOIUrl":"https://doi.org/10.1007/s12652-024-04826-7","url":null,"abstract":"<p>The importance of knowledge for organizational success is widely recognized, leading managers to leverage knowledge actively. Within knowledge transfer, the Absorptive Capacity (ACAP) of Knowledge Recipients (KRs) emerges as an unresolved barrier. ACAP is the dynamic capability to absorb knowledge; at the organizational level, it can surpass the simple aggregation of individual ACAP. However, more research is needed on individual-level ACAP and its implications for bridging the gap between individual and organizational knowledge transfer. To address this gap, this study employs Agent-Based Modeling (ABM) as a simulation method to replicate individual ACAP within an organization, facilitating the examination of knowledge transfer dynamics. ABM allows for detailed analysis of interactions between individual KRs and the organizational environment, revealing how uninterrupted time and other factors influence knowledge absorption. 
The study's implications are that ABM provides specific insights into how individual ACAP affects organizational learning and performance, underscoring the importance of uninterrupted time for KRs to achieve optimal knowledge exploitation and highlighting the need for organizational practices and policies that foster environments conducive to knowledge absorption.</p>","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141510484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
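The abstract above describes an agent-based model in which individual absorptive capacity and uninterrupted time drive knowledge absorption. The following is a minimal illustrative sketch of that idea, not the authors' model — the agent count, the update rule, and the halving penalty for interruptions are all assumptions made for the example:

```python
import random


class KnowledgeRecipient:
    """Agent with an individual absorptive capacity (ACAP) in (0, 1)."""

    def __init__(self, acap: float):
        self.acap = acap
        self.knowledge = 0.0

    def step(self, source_level: float, interrupted: bool) -> None:
        # Assumed rule: each round, absorb a fraction of the remaining gap
        # to the knowledge source; an interruption halves the effective rate.
        rate = self.acap * (0.5 if interrupted else 1.0)
        self.knowledge += rate * (source_level - self.knowledge)


def simulate(n_agents: int, steps: int, interrupt_prob: float,
             seed: int = 0) -> float:
    """Mean knowledge level across agents after `steps` rounds of transfer."""
    rng = random.Random(seed)
    agents = [KnowledgeRecipient(rng.uniform(0.1, 0.9)) for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:
            agent.step(source_level=1.0,
                       interrupted=rng.random() < interrupt_prob)
    return sum(a.knowledge for a in agents) / n_agents


# More uninterrupted time -> higher average absorbed knowledge.
quiet = simulate(n_agents=50, steps=10, interrupt_prob=0.1)
busy = simulate(n_agents=50, steps=10, interrupt_prob=0.8)
assert quiet > busy
```

Even this toy version reproduces the abstract's qualitative claim: with identical agents and random seed, raising the interruption probability lowers the organization's mean absorbed knowledge, which is the kind of individual-to-organizational effect ABM makes easy to probe.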
Pub Date : 2024-06-15DOI: 10.1007/s12652-024-04812-z
P. Theepalakshmi, U. Srinivasulu Reddy
{"title":"Finding the transcription factor binding locations using novel algorithm segmentation to filtration (S2F)","authors":"P. Theepalakshmi, U. Srinivasulu Reddy","doi":"10.1007/s12652-024-04812-z","DOIUrl":"https://doi.org/10.1007/s12652-024-04812-z","url":null,"abstract":"","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141336658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}