Pub Date: 2024-08-22 DOI: 10.1016/j.measen.2024.101296
Suplab Kanti Podder, Debabrata Samanta, Blerta Prevalla Etemi
Aims
The research examines how IoT contributes to workspace optimization, using occupancy sensors to streamline office layouts, improve energy efficiency, and enhance the overall work environment in smart cities in India.
Subject and methods
In the present study, both descriptive and exploratory research designs were implemented, and the respondents included experts, HR analysts, and regular employees of service organizations. The independent and dependent variables were identified, and multiple regression analysis was performed on the data using SPSS software.
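As an illustration of the analysis step described above, the sketch below fits a multiple regression model to synthetic survey-style data; the variable names and data are hypothetical, not the study's actual instrument, and statsmodels stands in for SPSS.

```python
# Minimal multiple-regression sketch (hypothetical variable names and synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # e.g. survey responses from experts, HR analysts and employees

# Illustrative independent variables (Likert-style scores) and a dependent variable.
df = pd.DataFrame({
    "iot_adoption": rng.integers(1, 6, n),
    "hr_analytics_use": rng.integers(1, 6, n),
    "resource_monitoring": rng.integers(1, 6, n),
})
df["sustainable_practice"] = (
    0.4 * df["iot_adoption"] + 0.3 * df["hr_analytics_use"]
    + 0.2 * df["resource_monitoring"] + rng.normal(0, 0.5, n)
)

X = sm.add_constant(df[["iot_adoption", "hr_analytics_use", "resource_monitoring"]])
model = sm.OLS(df["sustainable_practice"], X).fit()
print(model.summary())  # coefficients, R-squared, p-values, as SPSS would report
```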
Results
The results of the research indicate a positive response to technological upgrades in HR practices in modern organizations. HR analytics interconnects with IoT applications, facilitating better resource utilization and monitoring.
Conclusion
The study concludes by presenting a comprehensive framework for HR professionals to effectively integrate IoT into their analytics practices, emphasizing the need for collaboration, communication, and the establishment of clear privacy policies.
{"title":"Impact of Internet of Things (IoT) applications on HR analytics and sustainable business practices in smart city","authors":"Suplab Kanti Podder , Debabrata Samanta , Blerta Prevalla Etemi","doi":"10.1016/j.measen.2024.101296","DOIUrl":"10.1016/j.measen.2024.101296","url":null,"abstract":"<div><h3>Aims</h3><p>The research discovers how IoT contributes to workspace optimization, utilizing occupancy sensors to streamline office layouts, improve energy efficiency, and enhance the overall work environment in smart cities in India.</p></div><div><h3>Subject and methods</h3><p>In the present research study, both descriptive and exploratory research design were implemented and respondents include the Experts, HR Analysts and Regular Employees of Services organizations. The independent and dependent variables were identified and multiple regression analysis was executed for data analysis using SPSS software.</p></div><div><h3>Results</h3><p>The results or outcomes of the research summarizes the positive response of technological upgradation in HR practices in modern organizations. HR Analytics interconnect with applications of IoT that facilitates for better resource utilization and monitoring system.</p></div><div><h3>Conclusion</h3><p>The study concludes by presenting a comprehensive framework for HR professionals to effectively integrate IoT into their analytics practices, emphasizing the need for collaboration, communication, and the establishment of clear privacy policies.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101296"},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002721/pdfft?md5=32ce6ea58fbae3fba5fb960256e5a81c&pid=1-s2.0-S2665917424002721-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142077263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-19 DOI: 10.1016/j.measen.2024.101293
Luke Evans, Ian Ashton, Brian Sellar
Recommended practice for quantifying the energy resource at a tidal energy site requires the use of multiple instruments deployed across the site. However, these instruments work by emitting an acoustic pulse, and instruments operating at the same time can interfere with each other through a process known as 'cross-talk'. It is important to understand the impact of cross-talk on measurements and how it can be managed, whether through data processing or suitable positioning of devices. The ReDAPT project conducted a measurement campaign using two Acoustic Doppler Current Profilers (ADCPs) placed upstream of an operational tidal turbine. This aimed to assess the 'in-line' instrument placement guidelines from IEC 62600-200 for Power Performance Assessment (PPA) in real-world conditions; consequently, the results hold potential to support arguments for expanding these zones or adjusting their general dimensions. Despite adherence to industry standards and best-practice Quality Control (QC) checks to eliminate unreliable data, interference appeared in both concurrently measuring ADCPs, at different time stamps, in approximately 15 % of the returned data. This work identified, for the first time, interference throughout the campaign and quantified its impact on resource estimates. A method to remove data anomalies caused by interference between closely positioned ADCPs has been developed and demonstrated, resulting in a 7 % variation in estimated Annual Energy Production (AEP). The algorithm removed approximately 90 % of the corrupted measurements. Moving forward, multi-sensor deployments could use the algorithm described to double-check for interference within their data sets, although care should be taken to avoid interference in the first place by choosing a suitable deployment layout.
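The paper's anomaly-removal algorithm is not reproduced here, but the sketch below illustrates the general idea of flagging cross-talk-like spikes in an ADCP velocity series by comparing each sample against a rolling median; the window length, threshold, and synthetic signal are assumptions.

```python
# Illustrative despiking filter for an ADCP velocity time series (not the ReDAPT algorithm).
import numpy as np
import pandas as pd

def flag_interference(velocity: pd.Series, window: int = 21, n_mad: float = 4.0) -> pd.Series:
    """Return a boolean mask marking samples suspected of acoustic cross-talk."""
    med = velocity.rolling(window, center=True, min_periods=1).median()
    mad = (velocity - med).abs().rolling(window, center=True, min_periods=1).median()
    return (velocity - med).abs() > n_mad * (1.4826 * mad + 1e-9)

# Synthetic example: a tidal signal with injected spikes standing in for cross-talk.
t = np.arange(0, 3600, 2.0)                       # 2 s ensembles over one hour
u = 1.5 * np.sin(2 * np.pi * t / 44700) + np.random.normal(0, 0.05, t.size)
u[::300] += 2.0                                   # corrupted ensembles
mask = flag_interference(pd.Series(u))
print(f"flagged {mask.mean():.1%} of samples")    # the retained data would feed the AEP estimate
```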
{"title":"On the utility of partially corrupted flow measurement data arising from adjacent acoustic Doppler current profilers for energy yield assessment","authors":"Luke Evans , Ian Ashton , Brian Sellar","doi":"10.1016/j.measen.2024.101293","DOIUrl":"10.1016/j.measen.2024.101293","url":null,"abstract":"<div><p>Recommended practice for quantifying the energy resource at a tidal energy site requires the use of multiple instruments deployed across the site. However, the instruments used work by emitting an acoustic pulse and instruments working at the same time have the potential to interfere with each other through a process known as ’cross-talk’. It is important to understand the impact of cross-talk on measurements and how this can be managed and through data processing or suitable positioning of devices. The ReDAPT project conducted a measurement campaign using two Acoustic Doppler Current Profilers (ADCPs) placed upstream of an operational tidal turbine. This aimed to assess the ’in-line’ instrument placement guidelines from IEC 62600-200 for Power Performance Assessment (PPA) in real-world conditions. Consequently, the results within hold potential to support arguments for expanding these zones or adjusting their general dimensions. Despite adhering to industry standards and best practices to eliminate unreliable data in the Quality Control (QC) checks, in both concurrently measuring ADCPs at different time stamps in approximately 15 % of the returned data. This work identified for the first time interference throughout the campaign and quantified subsequent impact on estimates. A method to remove data anomalies caused by interference between closely positioned ADCPs has been developed and demonstrated, resulting in a 7 % variation in estimated Annual Energy Production (AEP). The algorithm effectively removed approximately 90 % of the corrupted measurements. Moving forward, multi-sensor deployments could use the algorithm described to double-check for interference within the data sets, although care should be taken to avoid this by choosing a suitable layout for deployment.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101293"},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002691/pdfft?md5=30e40f2957963f548db06be49a880498&pid=1-s2.0-S2665917424002691-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142040502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Businesses that want to benefit from cloud computing must choose a Cloud Service Provider (CSP). Cost, performance, reliability, security, and SLAs must be evaluated during the decision process, and CSP assessment is difficult because of uncertainty and erroneous data. Fuzzy logic combined with the firefly optimization technique is proposed in this paper to achieve optimal results based on diverse components. The proposed methodology uses consumer, service-provider, and public reviews based on these three elements, and the ratings of these components are used to analyze efficiency. Experiments show that plain fuzzy logic is inferior to optimized fuzzy logic. The Firefly Optimized Fuzzy Decision Support System (DSS) is compared against non-optimized fuzzy decision-making systems and standard optimization methods. The results show that the proposed model is better at selecting the best CSP across many parameters and at managing assessment uncertainty. Fuzzy logic and optimization methods provide more nuanced and precise decision-making that accounts for subjective assessments and ambiguous information, so businesses can make informed choices and ensure that their CSP requirements are satisfied. The Firefly Optimized Fuzzy DSS thus offers a new perspective on cloud service provider selection by merging fuzzy logic with optimization; its ability to handle imprecise evaluations and ambiguity makes it well suited to the complex decision-making involved in CSP selection. This work contributes to building decision support systems for choosing a cloud service provider and has substantial implications for firms and organizations seeking effective cloud computing solutions. The approach is tested experimentally on real-world CSP datasets.
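A minimal sketch of the firefly-guided weighting idea, under assumed inputs: hypothetical CSP scores on five criteria and a reference ranking stand in for the paper's consumer, provider, and public reviews, and a basic firefly search tunes the aggregation weights (the actual fuzzy rule base and objective are not given in the abstract).

```python
# Compact firefly search over criterion weights for CSP ranking (illustrative setup only).
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical CSP scores on [cost, performance, reliability, security, SLA].
csp_scores = np.array([
    [0.7, 0.8, 0.6, 0.9, 0.7],
    [0.9, 0.6, 0.7, 0.6, 0.8],
    [0.6, 0.9, 0.8, 0.7, 0.6],
])
target_rank = np.array([2, 0, 1])   # assumed reference ordering, e.g. from expert reviews

def objective(w):
    w = np.abs(w) / (np.abs(w).sum() + 1e-9)   # normalised weights
    rank = np.argsort(-csp_scores @ w)          # weighted aggregation -> ranking
    return np.sum(rank != target_rank)          # ranking mismatch to minimise

def firefly(n_fireflies=15, n_iter=100, beta0=1.0, gamma=1.0, alpha=0.2):
    pos = rng.random((n_fireflies, csp_scores.shape[1]))
    light = np.array([objective(p) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:          # move firefly i toward brighter firefly j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(pos.shape[1]) - 0.5)
                    light[i] = objective(pos[i])
    best = pos[np.argmin(light)]
    return np.abs(best) / np.abs(best).sum()

print("tuned weights:", np.round(firefly(), 3))
```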
{"title":"Optimizing cloud service provider selection with firefly-guided fuzzy decision support system for smart cities","authors":"Surjeet Dalal , Ajay Kumar , Umesh Kumar Lilhore , Neeraj Dahiya , Vivek Jaglan , Uma Rani","doi":"10.1016/j.measen.2024.101294","DOIUrl":"10.1016/j.measen.2024.101294","url":null,"abstract":"<div><p>Businesses that want to benefit from cloud computing must choose a Cloud Service Provider (CSP). Cost, performance, Reliability, security, and SLAs must be evaluated during the decision process. CSP assessment is tough because of uncertainties and erroneous data. Fuzzy logic and the firefly optimization technique have been proposed in this paper to achieve optimal results based on diverse components. The proposed methodology uses consumer, service provider, and public reviews based on the three elements. These components' ratings can be used to analyze efficiency. Simple fuzzy logic is inferior to optimized fuzzy logic, according to experiments. The Firefly Optimized Fuzzy DSS is compared against non-optimized fuzzy decision-making systems and standard optimization methods. The results show that the proposed model is better for selecting the best CSP based on many parameters and managing assessment uncertainty. Fuzzy logic and optimization methods provide more nuanced and precise decision-making that accounts for subjective assessments and confusing facts. Businesses can make informed choices and ensure their CSP needs are satisfied with the approach. Finally, the Firefly Optimized Fuzzy Decision Support System offers a new perspective on cloud service provider selection by merging fuzzy logic with optimization. The system's ability to handle poor evaluations and ambiguity makes it ideal for CSP selection's complex decision-making process. This paper helps build decision support systems for choosing a cloud service provider and has substantial implications for firms seeking successful cloud computing solutions. This research work's conclusions have major implications for corporations and organizations searching for the finest cloud service providers. CSP-related real-world datasets are tested experimentally.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101294"},"PeriodicalIF":0.0,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002708/pdfft?md5=8b34cc351a34cf6a7ca30aa30d9fc402&pid=1-s2.0-S2665917424002708-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142007061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-15 DOI: 10.1016/j.measen.2024.101295
P.S. Smitha, G. Balaarunesh, C. Sruthi Nath, Aminta Sabatini S
Early detection and classification of brain tumors are crucial for patient survival. This study proposes a comprehensive deep learning approach for early brain tumor classification using medical imaging data. A diverse dataset encompassing various tumor types, stages, and healthy brain images is utilized. Preprocessing techniques like augmentation and normalization enhance data robustness. A convolutional neural network (CNN) architecture serves as the primary model, leveraging transfer learning from pre-trained models to extract relevant features even with limited data. The training process optimizes hyperparameters to prevent overfitting, and performance is evaluated using metrics like accuracy, precision, recall, F1 score, confusion matrices, and ROC curves on a separate test set. Focusing on early detection, the model explores predicting tumor growth trajectories and identifying subtle pre-tumor patterns, aligning with expert diagnoses and boosting real-world applicability. Ethical and regulatory guidelines are adhered to in data handling. Continuous improvement involves updating the model with new data and monitoring its clinical performance. This research contributes to advancing early tumor classification methods, potentially improving patient outcomes and treatment strategies.
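A minimal sketch of the transfer-learning setup described above; the framework, base network (EfficientNetB0), input size, and class list are assumptions, since the abstract does not specify them.

```python
# Transfer-learning classifier sketch: frozen pre-trained backbone + small classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # illustrative, e.g. glioma, meningioma, pituitary, healthy

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained features for the first training phase

model = models.Sequential([
    layers.RandomFlip("horizontal"),          # augmentation
    layers.RandomRotation(0.05),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                      # regularisation against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # image datasets not shown here
```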
{"title":"Classification of brain tumor using deep learning at early stage","authors":"P.S. Smitha, G. Balaarunesh, C. Sruthi Nath, Aminta Sabatini S","doi":"10.1016/j.measen.2024.101295","DOIUrl":"10.1016/j.measen.2024.101295","url":null,"abstract":"<div><p>Early detection and classification of brain tumors are crucial for patient survival. This study proposes a comprehensive deep learning approach for early brain tumor classification using medical imaging data. A diverse dataset encompassing various tumor types, stages, and healthy brain images is utilized. Preprocessing techniques like augmentation and normalization enhance data robustness. A convolutional neural network (CNN) architecture serves as the primary model, leveraging transfer learning from pre-trained models to extract relevant features even with limited data. The training process optimizes hyperparameters to prevent overfitting, and performance is evaluated using metrics like accuracy, precision, recall, F1 score, confusion matrices, and ROC curves on a separate test set. Focusing on early detection, the model explores predicting tumor growth trajectories and identifying subtle pre-tumor patterns, aligning with expert diagnoses and boosting real-world applicability. Ethical and regulatory guidelines are adhered to in data handling. Continuous improvement involves updating the model with new data and monitoring its clinical performance. This research contributes to advancing early tumor classification methods, potentially improving patient outcomes and treatment strategies.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101295"},"PeriodicalIF":0.0,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266591742400271X/pdfft?md5=929ac12d03164a03ac2027a69f6b0393&pid=1-s2.0-S266591742400271X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-11 DOI: 10.1016/j.measen.2024.101290
Tanushri Jaiswal, D.C. Jhariya, Mridu Sahu
Over the past few years, there has been renewed emphasis on understanding shifts in land cover and their implications for a range of environmental factors. This investigation analyzes how changes in land surface temperature (LST), the normalized difference vegetation index (NDVI), the normalized difference built-up index (NDBI), and alterations in land cover intersect within the lower Kharun catchment area. The primary datasets used in this study are Landsat 7 and Landsat 8 scenes from 2001 and 2021. These missions, part of the Landsat program managed by the United States Geological Survey (USGS), provide essential Earth-observation data through multispectral and thermal sensors designed to detect thermal radiation emitted from the Earth's surface; when these bands are properly processed, they enable accurate temperature measurements. Visual interpretation was conducted on the images, categorizing them into five land-cover classes: vegetation, open land, settlement, waterbodies, and cultivation. Spectral indices such as NDVI and NDBI were then calculated, and LST was derived using a single-channel algorithm. Subsequently, correlation analysis was used to explore the mutual relationships among the spatial distributions of these parameters. Over the period from 2001 to 2021, the most significant changes in land use were observed in settlement and cultivation, which increased by 6.92 and 6.23 sq. km, respectively; conversely, open land, vegetation, and waterbodies decreased by 7.13, 5.56, and 0.46 sq. km, respectively. The spatial patterns of LST, NDBI, and NDVI exhibited corresponding variations following the changes in land cover. The observed alterations in LST, NDBI, and NDVI appear to be driven primarily by the expansion of built-up areas: as built-up area increases, both NDBI and LST values typically rise.
Furthermore, the correlation between LST and NDVI was negative, indicating an inverse relationship between these parameters, whereas the correlation between LST and NDBI was positive, indicating a direct relationship. Overall, these findings highlight the complex interactions between changing land cover and environmental parameters, underscoring the importance of understanding these relationships for effective land management and environmental monitoring.
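For reference, the two spectral indices named above have standard definitions; the sketch below writes them out for Landsat 8 band numbers (NIR = band 5, Red = band 4, SWIR1 = band 6) and computes pixel-wise correlations against a toy LST raster that mirrors the sign pattern reported in the study (the arrays here are synthetic stand-ins for clipped catchment scenes).

```python
# NDVI = (NIR - Red) / (NIR + Red); NDBI = (SWIR1 - NIR) / (SWIR1 + NIR).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-10)

def ndbi(swir1: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (swir1 - nir) / (swir1 + nir + 1e-10)

# Toy reflectance rasters standing in for Landsat surface-reflectance bands.
rng = np.random.default_rng(0)
red, nir, swir1 = (rng.uniform(0.05, 0.6, (100, 100)) for _ in range(3))
lst = 300 + 15 * ndbi(swir1, nir) - 10 * ndvi(nir, red) + rng.normal(0, 0.5, (100, 100))

# Pixel-wise correlations: negative with NDVI, positive with NDBI.
print("corr(LST, NDVI):", np.corrcoef(lst.ravel(), ndvi(nir, red).ravel())[0, 1])
print("corr(LST, NDBI):", np.corrcoef(lst.ravel(), ndbi(swir1, nir).ravel())[0, 1])
```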
{"title":"Variability in land surface temperature concerning escalating urban development using thermal data of andsat sensor: A case study of Lower Kharun Catchment, Chhattisgarh, India","authors":"Tanushri Jaiswal , D.C. Jhariya , Mridu Sahu","doi":"10.1016/j.measen.2024.101290","DOIUrl":"10.1016/j.measen.2024.101290","url":null,"abstract":"<div><p>Over the past few years, there has been a revitalized emphasis on comprehending the shifts in land cover and their implications for a range of environmental factors. This investigation seeks to analyze how changes in land surface temperatures (LST), normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), and alterations in land cover intersect within the lower Kharun catchment area. The primary dataset utilized in this study is for 2001 and 2021 Landsat 7 and 8, part of the Landsat program managed by the United States Geological Survey (USGS), offer essential Earth observation data using their multispectral and thermal sensors which are designed to detect thermal radiation emitted from the Earth's surface. When these bands are properly processed, they enable accurate temperature measurements. Visual interpretation was conducted on these images, categorizing them into five specific classes of land cover these were vegetation, open land, settlement, waterbodies, and cultivation. Following this, spectral indices like NDVI and NDBI were calculated, and LST was derived using a single-channel algorithm. Subsequently, correlation analysis was utilized to explore the interconnectedness or mutual relationship among the spatial distribution of these parameters. Over the period from 2001 to 2021, the most significant changes in land use were observed in the settlement area and cultivation, which increased by 6.92 and 6.23 sq. km, respectively. Conversely, open land, vegetation, and waterbodies experienced decreases of 7.13, 5.56, and 0.46 sq. km, respectively. The patterns in which LST, NDBI, and NDVI are distributed, exhibited corresponding variations following changes in land cover. The observed alterations in LST, NDBI, and NDVI are believed to be primarily influenced by the expansion of built-up areas. A noticeable association suggests that as built-up areas increase, both NDBI and LST values typically rise.</p><p>Furthermore, a correlation observed between LST with NDVI was negative, suggesting an inverse relationship between these parameters. On the other hand, the correlation of LST with NDBI observed was positive, indicating that these parameters exhibit a direct relationship. 
Overall, these findings seem to be complex and highlight the interactions between changing land cover and environmental parameters, underscoring the importance of understanding these relationships for effective land management and environmental monitoring.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101290"},"PeriodicalIF":0.0,"publicationDate":"2024-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002666/pdfft?md5=29ae31cc445a9b532c6467aef57413c7&pid=1-s2.0-S2665917424002666-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141997440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-08 DOI: 10.1016/j.measen.2024.101291
S. Jayachitra, M. Balasubramani, Abdullah Mohammed Kaleem, Jayavarapu Karthik, G. Keerthiga, R. Mythili
Feature selection is a major challenge in data mining, involving a complex search procedure to acquire a relevant feature subset. The effectiveness of classification approaches is highly sensitive to data dimensionality, and higher dimensionality introduces numerous problems such as higher computational cost and overfitting. Feature selection is the key means of mitigating these problems: the aim is to minimize the number of features by eliminating noisy, insignificant, and redundant features from the original data. Metaheuristic algorithms attain excellent performance on this kind of problem. In this paper, a grading-based binary salp swarm optimization is proposed to solve such problems with lower computational time, where the grading system maintains the balance between exploitation and exploration. The proposed method is examined on ten benchmark real-world datasets. The comparative results show the promising performance of the proposed method, which surpasses other optimization approaches on the evaluation measures investigated.
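The grading/ranking mechanism proposed in the paper is not reproduced here, but the following sketch shows the skeleton of a binary salp swarm feature-selection loop it builds on: a sigmoid transfer function binarizes salp positions, and a k-NN wrapper accuracy on a stand-in scikit-learn dataset serves as fitness.

```python
# Bare-bones binary salp swarm feature selection (standard SSA; no grading scheme).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(5), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.mean()            # small penalty on subset size

n_salps, n_iter, dim = 10, 20, X.shape[1]
pos = rng.random((n_salps, dim))
best_mask, best_fit = None, -np.inf

for t in range(1, n_iter + 1):
    c1 = 2 * np.exp(-(4 * t / n_iter) ** 2)    # SSA coefficient balancing exploration/exploitation
    # Sigmoid transfer function turns continuous positions into binary feature masks.
    masks = (1 / (1 + np.exp(-10 * (pos - 0.5))) > rng.random(pos.shape)).astype(int)
    fits = np.array([fitness(m) for m in masks])
    if fits.max() > best_fit:
        best_fit, best_mask = fits.max(), masks[fits.argmax()].copy()
    food = pos[fits.argmax()]                  # leader salp moves around the food source
    pos[0] = food + c1 * (rng.random(dim) - 0.5)
    for i in range(1, n_salps):                # follower salps chain behind the leader
        pos[i] = 0.5 * (pos[i] + pos[i - 1])
    pos = np.clip(pos, 0, 1)

print(f"selected {best_mask.sum()}/{dim} features, CV accuracy ~ {best_fit:.3f}")
```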
{"title":"An efficient ranking based binary salp swarm optimization for feature selection in high dimensional datasets","authors":"S. Jayachitra , M. Balasubramani , Abdullah Mohammed Kaleem , Jayavarapu Karthik , G. Keerthiga , R. Mythili","doi":"10.1016/j.measen.2024.101291","DOIUrl":"10.1016/j.measen.2024.101291","url":null,"abstract":"<div><p>Feature selection is a major challenge in data mining which involves complex searching procedure to acquire relevant feature subset. The effectiveness of classification approaches is greatly susceptible to data dimensionality. The Higher dimensionality intricate numerous problems like higher computational costs and over fitting problem. The essential key factor to mitigate the problem is feature selection. The main motive is to minimize the number of features through eliminating noisy, insignificant, and redundant features from the original data. The Metaheuristic algorithm attains excellent performance for solving this kind of problems. In this paper, the grading based binary salp swarm optimization has been proposed to solve various complex problems with lesser computational time. The grading system has been used to maintain the balance among exploitation and exploration. The proposed method is examined using ten benchmark real datasets. The comparative result exhibits the promising performance of our proposed method and surpasses with other optimization interms of investigating evaluation measures.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101291"},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002678/pdfft?md5=be2cd62c32373bba12f02c48a5a0f31c&pid=1-s2.0-S2665917424002678-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-08 DOI: 10.1016/j.measen.2024.101292
Lin Wang
In order to meet the requirements of urban construction and the country's further urbanization, the author proposes an industrial 5.1 air quality monitoring system to develop smart city infrastructure. A wireless network of lighting (streetlight) nodes is used to solve the cost and positioning-accuracy issues of perception nodes covering a large area, achieving real-time alarms on urban air quality status and the location where pollution occurs. The design also adopts the latest monitoring-system concepts combined with cloud platform interfaces, breaking the closed design of traditional IoT systems and enabling better utilization of air quality data. The test results indicate that the communication distance of the CC2530 can be maintained at around 70 m under normal power, while the spacing between urban street lights is approximately 30 m, which fully meets the project requirements. After two days of testing, the system alarm function and the other functions ran normally.
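A minimal sketch of the node-side alarm logic described above; the PM2.5 threshold, node metadata, and the publish_to_cloud hook are hypothetical stand-ins for the CC2530/ZigBee firmware and cloud-platform interface, which are not shown in the abstract.

```python
# Threshold-based alarm with location tagging, as a streetlight node might perform it.
from dataclasses import dataclass

PM25_ALARM_UGM3 = 75.0   # assumed alarm threshold, not taken from the paper

@dataclass
class LampNode:
    node_id: str
    lat: float
    lon: float

def publish_to_cloud(payload: dict) -> None:
    """Stand-in for the cloud-platform interface (e.g. an HTTP or MQTT upload)."""
    print("ALARM ->", payload)

def check_sample(node: LampNode, pm25: float) -> None:
    if pm25 > PM25_ALARM_UGM3:
        publish_to_cloud({
            "node": node.node_id,
            "location": (node.lat, node.lon),   # streetlight position approximates pollution location
            "pm2_5": pm25,
        })

check_sample(LampNode("lamp-042", 30.59, 114.30), pm25=92.0)
```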
Conclusion
The key parts of the system have been tested and simulated, and ideal results have been obtained.
{"title":"Design industrial 5.1 air quality monitoring system and develop smart city infrastructure","authors":"Lin Wang","doi":"10.1016/j.measen.2024.101292","DOIUrl":"10.1016/j.measen.2024.101292","url":null,"abstract":"<div><p>In order to meet the requirements of urban construction and further urbanization of the country, the author proposes an industrial 5.1 air quality monitoring system to develop smart city infrastructure. The author utilizes a wireless network of lighting nodes to solve the cost and positioning accuracy issues of perception nodes covering a large area, achieving real-time alarm of urban air quality status and location of pollution occurrence. The author also adopted the latest design concept of monitoring systems combined with cloud platform interfaces, breaking the closed design of traditional IoT systems and enabling better utilization of air quality data. The test results indicate that: The communication distance of CC2530 can be maintained at around 70m under normal power, while the spacing between urban street lights is approximately 30m, which fully meets the project requirements. After two days of testing, the system alarm function and various functions are running normally.</p></div><div><h3>Conclusion</h3><p>The key parts of the system have been tested and simulated, and ideal results have been obtained.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101292"},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266591742400268X/pdfft?md5=d32ce49958c8de75ea04ae778878cc5a&pid=1-s2.0-S266591742400268X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141991064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-30 DOI: 10.1016/j.measen.2024.101289
S. Jeevitha, J. Joel, N. Sathish Kumar, K. Immanuvel Arokia James
Myocardial infarction, otherwise known as heart attack, occurs when blood flow to a part of the heart decreases or stops, which in turn damages the heart muscle. Abnormalities in cardiac arrhythmia are predicted using standard 12-lead electrocardiography (ECG) signals, which also detect posterior myocardial infarction (PMI). The QRS complex is the merged output of several graphical deflections seen on a typical electrocardiogram. The main purpose of this paper is to monitor and analyze, in particular, the R-peak upward deflections of the QRS complex. The ECG signal is denoised with a Butterworth filter. The denoised signals are used to detect R peaks, and image plotting is performed by segmentation. R-peak images are then used to classify myocardial infarction (MI) abnormalities with a CNN image-processing technique. The publicly available PTB diagnostic dataset is used to classify the abnormalities in PMI. The detected R peaks can guide cardiologists in advancing percutaneous coronary intervention treatment. Prediction is performed using a probability-weighted average method, and the troponin level is calculated to evaluate a person's health condition, which further supports close prediction of diseases and abnormalities. In the experimental analysis of the proposed probability-weighted average method on troponin level (PWAMT), the accuracy of the ensemble model was found to be 86 %. The algorithm took 250-300 s to execute and display the prediction results.
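A minimal sketch of the denoise-then-detect step described above: a Butterworth band-pass filter followed by R-peak picking with SciPy (the cut-off frequencies, sampling rate, and synthetic trace are assumptions; the paper works on the PTB diagnostic dataset).

```python
# Butterworth band-pass denoising followed by R-peak detection.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 500.0                                   # sampling rate in Hz (assumed)

def bandpass(sig, low=0.5, high=40.0, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)               # zero-phase filtering preserves peak timing

# Synthetic ECG-like trace: a spike train standing in for QRS, plus baseline wander and noise.
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[(t * 1.2 % 1) < 0.01] = 1.0              # ~72 bpm spike train
ecg += 0.2 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)

clean = bandpass(ecg)
r_peaks, _ = find_peaks(clean, height=0.4 * clean.max(), distance=int(0.4 * fs))
print(f"detected {r_peaks.size} R-peaks in 10 s (~{r_peaks.size * 6} bpm)")
```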
{"title":"Analysis of abnormalities in cardiac arrhythmia based on 12 - LEAD electrocardiography","authors":"S. Jeevitha , J. Joel , N. Sathish Kumar , K. Immanuvel Arokia James","doi":"10.1016/j.measen.2024.101289","DOIUrl":"10.1016/j.measen.2024.101289","url":null,"abstract":"<div><p>Myocardial Infarction otherwise called heart attack occurs in human beings when blood flow decreases or stops to a part of the heart which in turn damages the heart muscle. Prediction of abnormalities in cardio arrhythmia disease is done by using standard 12-lead Electrocardiography (ECG) signals, which also detects Posterior Myocardial Infarction (PMI). The QRS complex is the merged output of different parts of graphical deflection seen on a typical Electro Cardio Gram (Electrocardiography). The main purpose of the paper is to monitor and analyze particularly the R<sub>peak</sub> upward deflections from the QRS complex. Denoising the ECG signal is done by butter worth filter. The denoised signals are used to detect R peaks and image plotting is done by segmentation. R peak images are used to classify the abnormalities in Myocardial Infarction (MI) with the help of the CNN image processing technique. The publicly available PTB diagnostic dataset is used to classify the abnormalities in PMI. The detection of the R peaks is used to guide Cardiologists must advance the Percutaneous Coronary Intervention treatment. Prediction has been done using probability weighted average method. Troponin level has been calculated to evaluate a person's health condition which also supports in close prediction of diseases and abnormalities. From experimental analysis of proposed Probability weighted average method in troponin level (PWAMT), the accuracy scores in the ensemble model were found to be 86 % respectively. The running of algorithm took 250 s–300 s to execute the program and display the prediction results.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101289"},"PeriodicalIF":0.0,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002654/pdfft?md5=412d0ffe4d9ebe77f3d4bb7a239045a5&pid=1-s2.0-S2665917424002654-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141961436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-30 DOI: 10.1016/j.measen.2024.101286
Yuanrui Hong
In order to clarify the quantitative relationship between grid parameters and the measurement errors of gateway energy meters, and to accurately predict their dynamic measurement errors, the author proposes a nonlinear model of gateway energy meter measurement error. First, the NARX prediction model is elaborated to clarify the basic structure of the nonlinear model; then the process of modeling the measurement errors is proposed; finally, through testing, the main power-grid parameters that affect measurement error and the optimal structure of the model are identified. The experimental results indicate that, in a comparison between the true measurement errors of two electricity meters and the errors calculated by the nonlinear estimators, the Hammerstein-Wiener estimator fits the true measurement error curve most closely, with fitting degrees of 82.21 % and 85.38 % for the 0.2S and 0.5S electricity meters, respectively. The prediction fit of the NARX model based on the Hammerstein-Wiener nonlinear estimator reaches about 81 % under different load conditions.
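A rough sketch of the NARX idea described above: the measurement error at time t is regressed on lagged grid parameters and lagged errors. A small MLP on synthetic data stands in here; the paper's identified Hammerstein-Wiener estimator and its optimal structure are not reproduced.

```python
# NARX-style error predictor: y(t) regressed on lagged exogenous inputs and lagged outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lags = 2000, 3
voltage = 230 + rng.normal(0, 2, n)                       # synthetic grid parameters
current = 5 + np.abs(rng.normal(0, 1, n))
error = 0.002 * (voltage - 230) - 0.005 * (current - 5) + rng.normal(0, 0.01, n)

# Assemble [u(t-1..t-lags), y(t-1..t-lags)] -> y(t) training pairs.
X, y = [], []
for t in range(lags, n):
    feats = np.concatenate([voltage[t - lags:t], current[t - lags:t], error[t - lags:t]])
    X.append(feats)
    y.append(error[t])
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("fit R^2 on training data:", round(model.score(X, y), 3))
```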
Conclusion
The model determined by this method can accurately predict the dynamic measurement error of the energy meter, and the research results have positive significance for improving the efficiency of gateway energy meter calibration and for identifying gateway energy meter faults.
{"title":"Nonlinear modeling of measurement errors in gateway energy meters","authors":"Yuanrui Hong","doi":"10.1016/j.measen.2024.101286","DOIUrl":"10.1016/j.measen.2024.101286","url":null,"abstract":"<div><p>In order to clarify the quantitative relationship between grid parameters and measurement errors of gateway energy meters, and accurately predict the dynamic measurement errors of gateway energy meters, the author proposes a nonlinear modeling of measurement errors of gateway energy meters. Firstly, elaborate on the NARX prediction model to clarify the basic structure of the nonlinear model; Then propose the process of modeling measurement errors; Finally, through testing, identify the main power grid parameters that affect measurement errors and the optimal structure of the model. The experimental results indicate that: The comparison between the true measurement error of two electricity meters and the measurement error calculated by the nonlinear estimator shows that the Hammerstein Weiner estimator has the highest fitting degree to the true measurement error curve, with fitting degrees of 82.21 % and 85.38 % for the measurement errors of 0.2S and 0.5S electricity meters, respectively. The prediction fit of the NRAX model based on the Hammerstein Weiner nonlinear estimator reaches about 81 % under different load conditions.</p></div><div><h3>Conclusion</h3><p>The model determined by this method can accurately predict the dynamic measurement error of the energy meter, and the research results have positive significance for improving the efficiency of gate energy meter calibration and identifying gate energy meter faults.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"35 ","pages":"Article 101286"},"PeriodicalIF":0.0,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002629/pdfft?md5=d2dd4dececeef66d16da713391b576e8&pid=1-s2.0-S2665917424002629-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141993877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}