Mobile devices support a wide range of applications irrespective of their configuration. There is a need to make mobile applications executable on mobile devices without compromising battery life. Computational offloading is widely preferred for optimizing mobile applications, as it helps overcome the severity of the scarce resources that constrain mobile devices. In offloading, the crucial issues are which part of the application to offload, to which processor, and at what available bandwidth. Because the subtasks of a mobile application are interdependent, efficient execution requires assessing whether wireless network conditions are favorable before taking the offloading decision. Broadly, applications in mobile cloud computing are either delay sensitive or delay tolerant. For delay-sensitive applications, completion time has the highest priority, whereas for delay-tolerant applications the offloading decision can be taken depending on network conditions. Sometimes computation time on a cloud server is low but communication time is high, which ultimately yields inefficient offloading results. To address this issue, we propose heuristic-based level-wise task offloading (HTLO). It considers computation time, communication time, and the maximum energy available on the mobile device when taking the offloading decision. For the simulation study, a mobile application is modeled as a directed graph and all tasks are executed on the basis of their levels. The overall results of the proposed heuristic approach are compared with the state-of-the-art K-M LARAC algorithm and show improvements in execution time, communication time, mobile-device energy consumption, and total energy consumption.
{"title":"Constraints Based Heuristic Approach for Task Offloading In Mobile Cloud Computing","authors":"R. Kumari, S. Kaushal","doi":"10.17762/ITII.V8I1.74","DOIUrl":"https://doi.org/10.17762/ITII.V8I1.74","url":null,"abstract":"Mobile devices are supporting a wide range of applications irrespective of their configuration. There is a need to make the mobile applications executable on mobile devices without concern of battery life. For optimizing mobile applications computational offloading is highly preferred. It helps to overcome the severity of scarce resources constraint mobile devices. In offloading, which part of the application to be offloaded, on which processor and what is available bandwidth rate are the main crucial issues. As subtasks of mobile applications are interdependent, efficient execution of application requires research of favorable wireless network conditions before to take the offloading decision. Broadly in mobile cloud computing the applications is either delay sensitive or delay tolerant. For delay sensitive applications completion time has the highest priority whereas for delay tolerant type of applications depending on the network conditions decision of offloading can be taken. Sometimes, computation time on a cloud server is less but it consumes high communication time which ultimately gives inefficient offloading results. To address this issue, we have proposed a heuristic based level wise task offloading (HTLO). It includes computation time, communication time and maximum energy available on the mobile device to take the decision of offloading. For simulation study, a mobile application is considered as a directed graph and all the tasks are executed on the basis of their levels. 
The overall results of the proposed heuristic approach are compared with state-of-the-art K-M LARAC algorithm and results show the improvement in execution time, communication time, mobile device energy consumption and total energy consumption.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"359 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80202128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
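The level-wise decision described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's HTLO algorithm: the task fields, device and cloud speeds, and power figures below are assumed placeholder values.

```python
# Hypothetical sketch of a level-wise offloading decision in the spirit
# of HTLO. Task fields, speeds, and power figures are assumed values,
# not the paper's model.

def offload_decision(task, bandwidth_mbps, energy_left_j,
                     mobile_speed=1.0, cloud_speed=5.0, active_power_w=0.9):
    """Decide 'local' or 'cloud' for one task."""
    local_time = task["cycles"] / mobile_speed
    local_energy = local_time * active_power_w
    comm_time = task["data_mb"] * 8 / bandwidth_mbps      # upload delay
    cloud_time = comm_time + task["cycles"] / cloud_speed
    if local_energy > energy_left_j:      # device cannot afford a local run
        return "cloud"
    # Communication time can dominate: offload only if it still finishes sooner.
    return "cloud" if cloud_time < local_time else "local"

def schedule_by_levels(levels, bandwidth_mbps, energy_left_j,
                       mobile_speed=1.0, active_power_w=0.9):
    """Walk the task graph level by level; tasks within a level are independent."""
    plan = []
    for level in levels:
        for task in level:
            choice = offload_decision(task, bandwidth_mbps, energy_left_j)
            if choice == "local":   # deduct the energy spent running locally
                energy_left_j -= (task["cycles"] / mobile_speed) * active_power_w
            plan.append((task["id"], choice))
    return plan
```

A compute-heavy task with little data tends to be offloaded, while a data-heavy task on a slow link stays local, which is exactly the communication-versus-computation trade-off the abstract highlights.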
The significant number of fatalities and serious injuries caused by traffic accidents around the world is a worrying problem, and developing nations typically bear a heavier burden of casualties. Developing a model to forecast the likelihood of accidents is therefore extremely difficult; however, machine learning is one of the significant techniques for forecasting the severity of such events. The main goal of the proposed work is to automate accident detection by evaluating severity levels and filtering a set of influential factors that could cause a road accident, with the data generated using IoT. SMOTE is applied to address data imbalance and ensure that the dataset is balanced. The balanced dataset is then used to build a framework comprising five machine learning algorithms and one stacking algorithm. Finally, a study is conducted using variables such as weather conditions and the varying degrees of severity that can play a role in the occurrence of traffic accidents. According to the experimental analysis, the random forest model achieved higher accuracy than any of the other models used, reaching 74%.
{"title":"An IOT based Accident Severity Prediction Mechanism using Machine Learning","authors":"Aditya Verma","doi":"10.17762/itii.v7i3.811","DOIUrl":"https://doi.org/10.17762/itii.v7i3.811","url":null,"abstract":"The significant number of fatalities and serious injuries caused by traffic accidents around the world is a worrying problem. Developing nations typically bear a heavier weight of casualties. As a result, developing a model to forecast the likelihood of accidents is extremely difficult. However, the application of machine learning algorithms is one of the significant techniques to forecast the seriousness of such events. As a result, the main goal of the suggested thesis is to automate the process of accident detection by evaluating the severity levels and filtering a set of influential factors that could cause a road accident and generating them using IoT. SMOTE's theoretical notions are put into practice in order to address data imbalance and to ensure that the dataset is balanced. In a later step, the dataset is put to use in the process of building a framework that is constructed from five machine learning algorithms and one stacking algorithm. In the final step of the process, a study is conducted using variables such as the state of the weather and the varying degrees of severity that can have a role in the occurrence of traffic accidents. 
According to the findings of the experimental analysis that was carried out as part of the research project, the random forest model generated a higher level of accuracy than any of the other models that were put into use, achieving 74%.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73194657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
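The balancing step this work relies on, SMOTE, creates synthetic minority-class samples by interpolating between a sample and one of its nearest minority neighbours. A minimal pure-Python sketch of that core idea follows; the feature vectors, neighbour count, and seed are illustrative assumptions, not the paper's configuration.

```python
import random

def smote_oversample(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating from a
    random base point toward one of its k nearest minority neighbours
    (the core SMOTE idea)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # position along the base -> neighbour segment
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Each synthetic point is a convex combination of two real minority points, so the oversampled class stays inside the region the original samples span.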
The Internet of Things (IoT) has emerged as a promising technology to revolutionize healthcare by transforming the way medical services are delivered, improving patient outcomes, and reducing costs. This paper presents an overview of the challenges and opportunities associated with IoT adoption in healthcare, emphasizing its potential to enhance patient care and streamline medical processes. It highlights the crucial role of IoT in transforming healthcare systems and the need for multidisciplinary collaboration among stakeholders to ensure the successful implementation of IoT in healthcare.
{"title":"Iot In Healthcare: Challenges and Opportunities for Improved Patient Outcomes","authors":"Saumitra Chattopadhyay","doi":"10.17762/itii.v7i3.813","DOIUrl":"https://doi.org/10.17762/itii.v7i3.813","url":null,"abstract":"The Internet of Things (IoT) has emerged as a promising technology to revolutionize healthcare by transforming the way medical services are delivered, improving patient outcomes, and reducing costs. IoT-enabled devices and systems offer immense potential for enhancing patient outcomes, improving healthcare delivery, and reducing costs. This paper presents an overview of the challenges and opportunities associated with IoT adoption in healthcare, emphasizing its potential to enhance patient care and streamline medical processes. This paper highlights the crucial role of IoT in transforming healthcare systems and emphasizes the need for multidisciplinary collaboration among stakeholders to ensure the successful implementation of IoT in healthcare.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78241932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The growing global population and the increasing demand for food have led to a pressing need for sustainable agricultural practices. To address this challenge, we present an AI-Based Precision and Intelligent Farming System that leverages state-of-the-art machine learning techniques to optimize resource utilization and crop yields. This study demonstrates the integration of various data sources such as satellite imagery, IoT sensors, and historical data to develop a comprehensive and adaptive system for precision agriculture. Our approach employs deep learning models, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, to analyze and predict crop health, growth, and potential yield. Furthermore, we propose a reinforcement learning-based decision-making module for effective irrigation, fertilization, and pest control management. The proposed system is extensively evaluated on real-world datasets, showing significant improvements in crop yield, water efficiency, and overall sustainability compared to traditional farming methods. Our findings suggest that the AI-Based Precision and Intelligent Farming System has the potential to revolutionize agriculture and contribute to global food security while minimizing environmental impacts.
{"title":"AI Based Precision and Intelligent Farming System","authors":"Samir Rana","doi":"10.17762/itii.v7i3.809","DOIUrl":"https://doi.org/10.17762/itii.v7i3.809","url":null,"abstract":"The growing global population and the increasing demand for food have led to a pressing need for sustainable agricultural practices. To address this challenge, we present an AI-Based Precision and Intelligent Farming System that leverages state-of-the-art machine learning techniques to optimize resource utilization and crop yields. This study demonstrates the integration of various data sources such as satellite imagery, IoT sensors, and historical data to develop a comprehensive and adaptive system for precision agriculture. Our approach employs deep learning models, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, to analyze and predict crop health, growth, and potential yield. Furthermore, we propose a reinforcement learning-based decision-making module for effective irrigation, fertilization, and pest control management. The proposed system is extensively evaluated on real-world datasets, showing significant improvements in crop yield, water efficiency, and overall sustainability compared to traditional farming methods. 
Our findings suggest that the AI-Based Precision and Intelligent Farming System has the potential to revolutionize agriculture and contribute to global food security while minimizing environmental impacts.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87712290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
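The reinforcement-learning decision module mentioned in the abstract can be illustrated with a toy tabular Q-learning irrigation controller. The states, actions, transition rules, and reward numbers below are invented for illustration and are not the paper's model, which would derive rewards from sensed crop data.

```python
import random

def train_policy(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Learn an irrigation policy over coarse soil-moisture states with
    epsilon-greedy tabular Q-learning (illustrative toy environment)."""
    rng = random.Random(seed)
    states, actions = ["dry", "ok", "wet"], ["skip", "irrigate"]
    q = {(s, a): 0.0 for s in states for a in actions}

    def step(s, a):
        # Hand-made transitions/rewards: watering dry soil helps,
        # watering wet soil hurts, moderate moisture is ideal.
        if s == "dry":
            return ("ok", 1.0) if a == "irrigate" else ("dry", -1.0)
        if s == "ok":
            return ("wet", -0.5) if a == "irrigate" else ("ok", 0.5)
        return ("wet", -1.0) if a == "irrigate" else ("ok", 0.0)  # s == "wet"

    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(10):
            if rng.random() < eps:                      # explore
                a = rng.choice(actions)
            else:                                       # exploit
                a = max(actions, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions)
                                  - q[(s, a)])
            s = s2
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in states}
```

Under these toy rewards the learned greedy policy irrigates dry soil and skips wet soil, mirroring how an RL module would manage irrigation decisions.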
Infectious diseases of plants pose a substantial danger to the world's food supply and the agricultural industry. Early detection and classification of diseases that affect plant leaves is essential for minimizing crop loss and creating disease management measures that are both efficient and effective. In this research, we present an approach that utilizes advanced machine learning techniques to classify plant diseases accurately and efficiently. We first provide a detailed analysis of several machine learning techniques, including deep learning with convolutional neural networks (CNNs), K-nearest neighbors (KNN), and support vector machines (SVMs). Next, we provide an overview of a methodology for preprocessing the leaf images, which includes image enhancement, segmentation, and feature extraction. We then apply the machine learning algorithms to a large, diverse dataset of plant-leaf images with varying degrees of disease severity and compare their performance. Our findings provide evidence that the proposed method correctly recognizes and categorizes plant diseases that affect leaf tissue; in terms of accuracy, precision, and recall, the deep learning models, in particular CNNs, perform significantly better than classical machine learning techniques. In addition, we investigate methods to enhance model interpretability and provide insights into the primary factors that contribute to classification accuracy.
{"title":"Utilizing Machine Learning Techniques for Plant-Leaf Diseases Classification","authors":"I. Kumar","doi":"10.17762/itii.v7i3.810","DOIUrl":"https://doi.org/10.17762/itii.v7i3.810","url":null,"abstract":"Infectious diseases of plants provide a substantial danger to the world's food supply and the agricultural industry. The early detection and classification of diseases that affect plant leaves is essential for minimizing crop loss and creating disease management measures that are both efficient and effective. For the purpose of this research, we present a unique approach that utilizes advanced machine learning techniques in order to classify plant diseases in a manner that is both accurate and efficient. In the first part of this multi-part series, we will begin by providing a detailed analysis of several different machine learning techniques, such as deep learning, convolutional neural networks (CNNs), and K-nearest neighbor (KNN), support vector machines (SVMs). Next, we provide an overview of a methodology for preprocessing the leaf images, which includes the addition of enhancements to the images, segmentation of the images, and the extraction of features. Next, we apply various machine learning algorithms to a large, diverse dataset of plant-leaf images that have varying degrees of disease severity and compare the performance of these algorithms as they are implemented on the dataset. Our findings provide evidence that the method being proposed is successful in correctly recognizing and categorizing plant diseases that affect leaf tissue. In terms of accuracy, precision, and recall, the models that are based on deep learning, in particular CNNs, perform significantly better than classical machine learning techniques. 
In addition, we investigate various methods to enhance the interpretability of the model and provide insights into the primary factors that contribute to the accuracy of categorization.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82193823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
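One of the classical baselines named in the abstract, K-nearest neighbors, is simple enough to sketch directly. The two-dimensional leaf descriptors used below (for example, mean green intensity and lesion-area ratio) are hypothetical stand-ins for whatever features the preprocessing stage actually extracts.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labelled neighbours; train is a list of (features, label) pairs."""
    by_dist = sorted(train, key=lambda fl:
                     sum((a - b) ** 2 for a, b in zip(fl[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical descriptors: (mean_green, lesion_area_ratio).
training_set = [
    ((0.80, 0.05), "healthy"), ((0.75, 0.10), "healthy"),
    ((0.70, 0.08), "healthy"), ((0.30, 0.60), "blight"),
    ((0.35, 0.55), "blight"), ((0.25, 0.65), "blight"),
]
```

A CNN replaces the hand-crafted features with learned ones, which is why the abstract reports it outperforming this kind of baseline.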
The Internet of Things (IoT) has revolutionized various aspects of our daily lives, particularly in the healthcare sector. The integration of IoT with wearable devices has opened up new possibilities for healthcare monitoring, enabling the continuous tracking of patients' physiological parameters and promoting personalized medical care. This systematic review examines the current landscape of IoT-enabled wearable devices for healthcare monitoring, their potential applications, and the associated challenges. We conducted a thorough literature search to identify the most relevant and recent studies on IoT-enabled wearable devices for healthcare monitoring. Several devices were discussed, including smartwatches, fitness trackers, wearable electrocardiogram (ECG) monitors, continuous glucose monitoring systems, and smart patches for vital sign monitoring. These wearables offer numerous advantages, such as real-time monitoring, improved patient adherence, early detection of potential health issues, and enhanced patient-physician communication. The review also explores the potential drawbacks and challenges of implementing IoT-enabled wearable devices in healthcare, such as data privacy concerns, device interoperability, and the need for standardized data collection and analysis methods. Moreover, we discuss potential solutions and future research directions to overcome these challenges and promote the widespread adoption of IoT-enabled wearables for healthcare monitoring. In conclusion, IoT-enabled wearable devices have the potential to transform the healthcare sector by facilitating remote patient monitoring, improving treatment outcomes, and reducing healthcare costs. However, addressing the existing challenges and incorporating user feedback in the design and development process is essential for the successful integration of IoT-enabled wearables into the healthcare ecosystem.
{"title":"IoT-Enabled Healthcare Monitoring: A Systematic Review of Wearable Devices","authors":"H. Sivaraman","doi":"10.17762/itii.v7i3.815","DOIUrl":"https://doi.org/10.17762/itii.v7i3.815","url":null,"abstract":"The Internet of Things (IoT) has revolutionized various aspects of our daily lives, particularly in the healthcare sector. The integration of IoT with wearable devices has opened up new possibilities for healthcare monitoring, enabling the continuous tracking of patients' physiological parameters and promoting personalized medical care. This systematic review examines the current landscape of IoT-enabled wearable devices for healthcare monitoring, their potential applications, and the associated challenges. We conducted a thorough literature search to identify the most relevant and recent studies on IoT-enabled wearable devices for healthcare monitoring. Several devices were discussed, including smartwatches, fitness trackers, wearable electrocardiogram (ECG) monitors, continuous glucose monitoring systems, and smart patches for vital sign monitoring. These wearables offer numerous advantages, such as real-time monitoring, improved patient adherence, early detection of potential health issues, and enhanced patient-physician communication. The review also explores the potential drawbacks and challenges of implementing IoT-enabled wearable devices in healthcare, such as data privacy concerns, device interoperability, and the need for standardized data collection and analysis methods. Moreover, we discuss potential solutions and future research directions to overcome these challenges and promote the widespread adoption of IoT-enabled wearables for healthcare monitoring. In conclusion, IoT-enabled wearable devices have the potential to transform the healthcare sector by facilitating remote patient monitoring, improving treatment outcomes, and reducing healthcare costs. 
However, addressing the existing challenges and incorporating user feedback in the design and development process is essential for the successful integration of IoT-enabled wearables into the healthcare ecosystem.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86828505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid advancement of technology has led to the development of the Internet of Things (IoT), which is revolutionizing various sectors, including healthcare. Smart healthcare systems, powered by IoT, have the potential to significantly improve medical diagnostics and treatment, thus enhancing patient outcomes and reducing healthcare costs. This paper aims to analyze the impact of IoT on medical diagnostics and treatment, focusing on three key areas: remote patient monitoring, telemedicine, and artificial intelligence (AI) in diagnostics. Remote patient monitoring allows for real-time data collection and analysis of patient health, enabling healthcare professionals to make informed decisions and provide prompt interventions. IoT devices, such as wearable sensors and smart medical equipment, facilitate the continuous monitoring of vital signs and symptoms, leading to timely detection of abnormalities and improved disease management. Telemedicine, enabled by IoT, allows healthcare providers to virtually consult with patients, reducing the need for in-person visits and expanding access to medical care, especially for individuals in remote or underserved areas. This technology enhances patient-provider communication, fosters a more personalized approach to medicine, and increases the efficiency of healthcare services. Finally, AI-powered diagnostic tools, integrated with IoT devices, can process and analyze large volumes of data to identify patterns and correlations, leading to more accurate and efficient diagnoses. These systems can also aid in treatment planning and decision-making, resulting in improved patient care and outcomes.
{"title":"Smart Healthcare Systems: The Impact of IoT on Medical Diagnostics and Treatment","authors":"Manika Manwal","doi":"10.17762/itii.v7i3.814","DOIUrl":"https://doi.org/10.17762/itii.v7i3.814","url":null,"abstract":"AbstractThe rapid advancement of technology has led to the development of the Internet of Things (IoT), which is revolutionizing various sectors, including healthcare. Smart healthcare systems, powered by IoT, have the potential to significantly improve medical diagnostics and treatment, thus enhancing patient outcomes and reducing healthcare costs. This paper aims to analyze the impact of IoT on medical diagnostics and treatment, focusing on three key areas: remote patient monitoring, telemedicine, and artificial intelligence (AI) in diagnostics. Remote patient monitoring allows for real-time data collection and analysis of patient health, enabling healthcare professionals to make informed decisions and provide prompt interventions. IoT devices, such as wearable sensors and smart medical equipment, facilitate the continuous monitoring of vital signs and symptoms, leading to timely detection of abnormalities and improved disease management. Telemedicine, enabled by IoT, allows healthcare providers to virtually consult with patients, reducing the need for in-person visits and expanding access to medical care, especially for individuals in remote or underserved areas. This technology enhances patient-provider communication, fosters a more personalized approach to medicine, and increases the efficiency of healthcare services. Finally, AI-powered diagnostic tools, integrated with IoT devices, can process and analyze large volumes of data to identify patterns and correlations, leading to more accurate and efficient diagnoses. 
These systems can also aid in treatment planning and decision-making, resulting in improved patient care and outcomes.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83163594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phishing is an attack that typically combines foundations of social engineering with ever-evolving technical approaches; its targeted variant is known as spear phishing. By masking a phony site to work as if it were the real one, the attacker induces the user to divulge personal details such as passwords and bank account credentials. As a result, conducting an exhaustive study of previous attacks is essential in order to adequately prepare ourselves to avoid becoming victims of such dangers. The purpose of this article is to provide better knowledge of the working principles of such threats, to promote the development of future anti-phishing measures, and to briefly discuss prior and ongoing attacks. The fundamental purpose of the presented work is to raise awareness and teach people how to protect themselves from attacks of this kind. In addition, this review aims to assist policy makers and software developers in making sound decisions and in helping create an environment free of such threats.
{"title":"A Review on Anti-Phishing Framework","authors":"Preeti Chaudhary","doi":"10.17762/itii.v7i3.812","DOIUrl":"https://doi.org/10.17762/itii.v7i3.812","url":null,"abstract":"Phishing is an assault that is typically carried out by combining foundations of social engineering with ever-evolving technical approaches. Phishing is also known as spear phishing. This masking of the phony site to work as if it were the real one induces the user to divulge their personal details such as the passwords and bank accounts associated with such accounts. As a result, in modern times, conducting an exhaustive study on previous attacks is obligatory in order to adequately prepare ourselves to avoid becoming victims of such dangers. The purpose of this study article is to provide a better knowledge on the working principles of such threats, to promote the development of anti-phishing measures in the future, and to provide a brief discussion on prior and ongoing attacks. The fundamental purpose of the work that is being given is to raise people's levels of awareness and teach them on how to protect themselves from attacks of this kind. In addition, the purpose of this assessment is to offer assistance to policy makers and software developers so that they may arrive at the best decisions and help create an environment free of viruses.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83901754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the goal of providing the greatest possible benefit to the patient, integrative medicine combines conventional and evidence-based alternative medications and treatments. Herb-drug interactions (HDIs) pose a significant barrier to this goal. Since HDIs may have either positive or negative effects, and can even be fatal, a comprehensive knowledge of HDI outcomes is crucial for the effective integration of conventional and alternative medical practices. In this article, we provide a concise overview of HDIs, highlighting the interplay between drug-metabolising enzymes and transporters while discussing the many kinds of HDIs and the tools and methods for studying and predicting them. Future perspectives are also discussed, with an emphasis on the endogenous participants in these interactions and on methods for predicting drug-disease-herb interactions to achieve the desired results.
{"title":"Pharmacognosy Basics for Understanding Herbal Drug Interactions Commonly Used for Sustained Home Remedies","authors":"A. Dhyani","doi":"10.17762/itii.v7i2.808","DOIUrl":"https://doi.org/10.17762/itii.v7i2.808","url":null,"abstract":"With the goal of providing the most possible benefit to the patient, integrative medicine involves combining conventional and evidence-based alternative medications and treatments. Herb-drug interactions (HDIs) provide a significant barrier to the same. Since these HDIs may have either positive or negative effects—even be fatal—a comprehensive knowledge of HDI outcomes is crucial for the effective integration of conventional and alternative medical practises. In this article, we provide a concise overview of HDIs, highlighting the interplays between drug metabolising enzymes and transporters while discussing the many kinds of HDIs and the tools/methods for studying and predicting HDIs. Future perspectives are also discussed in this paper, with an emphasis on the endogenous participants in the interplays and methods for predicting drug-disease-herb interactions to achieve the desired results. \u0000 ","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"41 4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89242266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stress is a common issue in modern society and can lead to various health problems when left unaddressed. Accurate stress detection is therefore crucial for providing effective interventions and improving overall well-being. This study presents the implementation of a Long Short-Term Memory (LSTM) model to detect stress from electroencephalogram (EEG) signals. EEG signals were collected from a sample of participants while they performed stress-inducing tasks and control tasks. The data were pre-processed using filtering and artifact-removal techniques to ensure quality and reliability. Relevant features, such as spectral power and coherence, were then extracted from the pre-processed EEG signals and served as inputs to the LSTM model. A deep learning architecture was developed, incorporating LSTM layers and other components to optimize the model's performance, and the model was trained and validated on the available dataset. The results showed that the LSTM model outperformed the comparison algorithms in accuracy, sensitivity, and specificity, and that it detected stress robustly across tasks and EEG channels. These findings suggest that LSTM-based models have the potential to serve as effective tools for stress detection in real-life scenarios and can contribute to the development of more personalized stress-management interventions. Future research should focus on refining the model and exploring its applicability in different populations and settings.
Jain, Ayushi. "Implementation of Long Short-Term Memory (LSTM) Model for Stress Detection Using EEG Signal." Information Technology in Industry 7(2), 2019-08-31. doi:10.17762/itii.v7i2.803
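The feature-extraction step the abstract describes (spectral power per frequency band, computed from each pre-processed EEG epoch before it is fed to the LSTM) can be sketched as follows. This is a minimal illustration, not the authors' code: the sampling rate, band boundaries, and synthetic test signal are assumptions for the example.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within the frequency `band` (lo, hi) in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Synthetic 1-second EEG epoch: a 10 Hz alpha rhythm plus low-amplitude noise.
fs = 256  # assumed sampling rate (Hz)
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

# One power value per standard EEG band; stacked over channels and epochs,
# these form the feature vectors that a recurrent model would consume.
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(epoch, fs, b) for name, b in bands.items()}
# Alpha dominates here because the synthetic rhythm sits at 10 Hz.
```

In a full pipeline, such per-band (and coherence) features would be computed over a sliding window so the LSTM receives a temporal sequence of feature vectors rather than raw samples.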