Maria Carmela Groccia, Rosita Guido, Domenico Conforti, Corrado Pelaia, Giuseppe Armentaro, Alfredo Francesco Toscani, Sofia Miceli, Elena Succurro, Marta Letizia Hribal, Angela Sciacqua
Chronic heart failure (CHF) is a clinical syndrome characterised by symptoms and signs due to structural and/or functional abnormalities of the heart. CHF confers a risk of cardiovascular deterioration events, which cause recurrent hospitalisations and high mortality rates. Early prediction of these events is very important to limit serious consequences, improve the quality of care, and reduce the associated burden. CHF is a progressive condition in which patients may remain asymptomatic before the onset of symptoms, as observed in heart failure with a preserved ejection fraction; the early detection of underlying causes is therefore critical for treatment optimisation and prognosis improvement. To develop models that predict cardiovascular deterioration events in patients with chronic heart failure, a real dataset was constructed and a knowledge discovery task was implemented in this study. The dataset is imbalanced, as is common in real-world applications. This posed a challenge, because learning from imbalanced data tends to be overwhelmed by the abundance of majority-class instances. To address this issue, a pipeline was developed specifically to handle imbalanced data, and different predictive models were developed and compared. To enhance sensitivity and other performance metrics, we employed multiple approaches: data resampling, cost-sensitive methods, and a hybrid method combining both techniques. These approaches were used to assess the predictive capabilities of the models and their effectiveness in handling imbalanced data, with the aim of identifying the most effective strategies for improving model performance in real scenarios with imbalanced datasets. The best model for predicting cardiovascular events achieved a mean sensitivity of 65%, a mean specificity of 55%, and a mean area under the curve of 0.71. The results show that cost-sensitive models combined with over/under-sampling approaches are effective for the meaningful prediction of cardiovascular events in CHF patients.
Title: Cost-Sensitive Models to Predict Risk of Cardiovascular Events in Patients with Chronic Heart Failure. Information (Switzerland), 2023-10-03. DOI: 10.3390/info14100542.
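As a rough illustration of the hybrid strategy described above, the sketch below chains minority over-sampling and majority under-sampling with a cost-sensitive (class-weighted) classifier using scikit-learn and imbalanced-learn. The synthetic data, sampling ratios, and choice of random forest are assumptions for demonstration, not the paper's actual pipeline.

```python
# Hybrid imbalance handling: resampling + cost-sensitive learning.
# All numbers below are illustrative assumptions, not the paper's settings.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Imbalanced stand-in dataset: ~90% majority (no event), ~10% minority (event)
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

pipeline = Pipeline([
    ("over", SMOTE(sampling_strategy=0.5, random_state=0)),               # synthesise minority samples
    ("under", RandomUnderSampler(sampling_strategy=0.8, random_state=0)), # trim the majority class
    ("clf", RandomForestClassifier(class_weight="balanced",               # cost-sensitive step
                                   random_state=0)),
])

scores = cross_validate(pipeline, X, y, cv=5, scoring=("recall", "roc_auc"))
print(f"mean sensitivity: {scores['test_recall'].mean():.2f}")
print(f"mean AUC: {scores['test_roc_auc'].mean():.2f}")
```

The imblearn Pipeline applies the resampling steps only during fitting, so the cross-validated metrics are computed on untouched folds.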
Power systems are among the systems most susceptible to failures, which are most frequently caused by transmission line faults; transmission line failures account for 85% of all power system malfunctions. Over the last decade, numerous fault detection methods have been developed to ensure the reliability and stability of power systems. This paper presents a hybrid detection method based on the idea of redundancy. Because the continuous wavelet transform by itself does not effectively extract fault features for small defects, the stationary wavelet transform is employed to assist in their detection. Exploiting its ability to decompose the signal into high- and low-frequency components, an undecimated reconstruction is performed using the algebraic summation operation (ASO). This approach creates redundancy, which is useful for extracting the features of small defects and makes faulty parts more evident; the redundancy ratio's contribution to the original signal is approximately 36%. After this redundant signal reconstruction, a continuous wavelet transform extracts the fault characteristics far more easily in the time-scale (frequency) domain. Finally, the proposed technique is demonstrated to be an efficient fault detection and identification tool for power systems. Using this advanced signal processing technique supports early fault detection, which is central to predictive maintenance and provides more reliable operating conditions.
Title: Signal Processing Application Based on a Hybrid Wavelet Transform to Fault Detection and Identification in Power System. Authors: Yasmin Nasser Mohamed, Serhat Seker, Tahir Cetin Akinci. Information (Switzerland), 2023-10-03. DOI: 10.3390/info14100540.
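A minimal sketch of the redundancy-based workflow outlined above, using PyWavelets: a stationary wavelet transform decomposes the signal, the subbands are recombined by algebraic summation, and a continuous wavelet transform localises the fault. The wavelet choice, decomposition level, test signal, and the exact form of the ASO are assumptions, since the paper's configuration is not reproduced here.

```python
# SWT decomposition -> algebraic summation (ASO) -> CWT, as a hedged sketch.
import numpy as np
import pywt

fs = 1000                                   # assumed sampling frequency (Hz)
t = np.arange(1024) / fs                    # 1024 samples: divisible by 2**level, as swt requires
signal = np.sin(2 * np.pi * 50 * t)
signal[500:505] += 0.05                     # small synthetic defect

# Stationary (undecimated) wavelet transform: no downsampling, so every
# subband keeps the original length
coeffs = pywt.swt(signal, wavelet="db4", level=3)   # [(cA3, cD3), ..., (cA1, cD1)]
cA_deepest = coeffs[0][0]                           # low-frequency approximation
details = np.sum([cD for _, cD in coeffs], axis=0)  # high-frequency detail bands

# ASO interpreted here as an algebraic sum of the subbands; the summation
# preserves the redundancy that makes small defects more evident
redundant = cA_deepest + details

# CWT on the redundant signal to expose the fault in the time-scale domain
scales = np.arange(1, 64)
cwt_coeffs, freqs = pywt.cwt(redundant, scales, "morl", sampling_period=1 / fs)
print(cwt_coeffs.shape)                     # (63, 1024): scales x time
```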
Pummy Dhiman, Anupam Bonkra, Amandeep Kaur, Yonis Gulzar, Yasir Hamid, Mohammad Shuaib Mir, Arjumand Bano Soomro, Osman Elwasila
Recent developments in IoT, big data, fog and edge networks, and AI technologies have had a profound impact on a number of industries, including medicine. The use of AI for therapeutic purposes has been hampered by its inexplicability. Explainable Artificial Intelligence (XAI), a revolutionary movement, has arisen to address this constraint. By explaining decision-making and prediction outputs, XAI seeks to improve the explicability of standard AI models. In this study, we examined global developments in empirical XAI research in the medical field. The bibliometric analysis tools VOSviewer and Biblioshiny were used to examine 171 open-access publications from the Scopus database (2019–2022). Our findings point to several prospects for growth in this area, notably in areas of medicine such as diagnostic imaging. With 109 research articles applying XAI to healthcare classification, prediction, and diagnosis, the USA leads the world in research output. With 88 citations, IEEE Access has the greatest number of publications of all the journals. Our extensive survey covers a range of XAI applications in healthcare, such as diagnosis, therapy, prevention, and palliation, and offers helpful insights for researchers interested in this field. This report provides a direction for future healthcare industry research endeavors.
Title: Healthcare Trust Evolution with Explainable Artificial Intelligence: Bibliometric Analysis. Information (Switzerland), 2023-10-03. DOI: 10.3390/info14100541.
One of the WHO’s strategies to reduce road traffic injuries and fatalities is to enhance vehicle safety, and driving fatigue detection can contribute to this goal. Our previous study developed an ECG-based driving fatigue detection framework with AdaBoost, producing a high cross-validated accuracy of 98.82% and a testing accuracy of 81.82%; however, that study considered neither the driver’s cognitive state related to fatigue nor redundant features in the classification model. In this paper, we propose improvements to the feature extraction and feature selection phases of the driving fatigue detection framework. For feature extraction, we employ heart rate fragmentation to extract non-linear features that capture the driver’s cognitive status. These features are combined with features obtained from heart rate variability analysis in the time, frequency, and non-linear domains. For feature selection, we employ mutual information to filter out redundant features. To find the number of selected features with the best model performance, we carried out 28 experiments, pairing 7 candidate feature-subset sizes (out of 58 features) with 4 ensemble learning algorithms. The results show that the random forest algorithm with 44 selected features produced the best model performance: a testing accuracy of 95.45% and a cross-validated accuracy of 98.65%.
Title: ECG-Based Driving Fatigue Detection using Heart Rate Variability Analysis with Mutual Information. Authors: Junartho Halomoan, Kalamullah Ramli, Dodi Sudiana, Teddy Surya Gunawan, Muhammad Salman. Information (Switzerland), 2023-10-02. DOI: 10.3390/info14100539.
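The feature-selection step lends itself to a short sketch: mutual information scores each candidate feature against the fatigue label, the top k features are kept, and a random forest is trained on the reduced set. The synthetic features below are placeholders for the paper's 58 HRV/HRF features; k = 44 mirrors the best configuration reported. Note that SelectKBest ranks features individually by relevance to the label, which is one common way to apply mutual information for filtering.

```python
# Mutual-information feature selection feeding a random forest.
# The data are synthetic stand-ins for the 58 HRV/HRF features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=500, n_features=58, n_informative=20,
                           random_state=0)  # labels: fatigued vs. alert (assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = Pipeline([
    ("select", SelectKBest(score_func=mutual_info_classif, k=44)),  # keep 44 features
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(X_train, y_train)
print(f"testing accuracy: {model.score(X_test, y_test):.4f}")
```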
Ioanna Anastasaki, George Drosatos, George Pavlidis, Konstantinos Rantos
Immersive technologies are revolutionary advancements that offer users unparalleled experiences of immersion in a virtual world, or a mixed world of virtual and real elements. In such technologies, user privacy, security, and anonymity are paramount, as users often share private and sensitive information; user authentication is therefore a critical requirement in these environments. This paper presents a systematic literature review of recently published research on immersive technology-based user authentication mechanisms. After the literature search conducted in September 2023 using Scopus, the selection process identified 36 research publications for further analysis. The analysis revealed three major types of authentication related to immersive technologies, consistent with previous works: knowledge-based, biometric, and multi-factor methods. The reviewed papers are categorized into these groups, and the methods they use are scrutinized. To the best of our knowledge, this is the first systematic literature review to provide a comprehensive consolidation of immersive technologies for user authentication in virtual, augmented, and mixed reality.
Title: User Authentication Mechanisms Based on Immersive Technologies: A Systematic Review. Information (Switzerland), 2023-10-02. DOI: 10.3390/info14100538.
Farzad Nikfam, Raffaele Casaburi, Alberto Marchisio, Maurizio Martina, Muhammad Shafique
Machine learning (ML) is widely used today, especially through deep neural networks (DNNs); however, increasing computational load and resource requirements have led to cloud-based solutions. To address this problem, a new generation of networks called spiking neural networks (SNNs) has emerged, which mimic the behavior of the human brain to improve efficiency and reduce energy consumption. These networks often process large amounts of sensitive information, such as confidential data, and thus privacy issues arise. Homomorphic encryption (HE) offers a solution, allowing calculations to be performed on encrypted data without decrypting it. This research compares traditional DNNs and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme. LeNet-5 and AlexNet, two widely used convolutional architectures, serve as the basis for both the DNN and SNN models, and the networks are trained and compared on the FashionMNIST dataset. The results show that SNNs using HE achieve up to 40% higher accuracy than DNNs for low values of the plaintext modulus t, although their execution time is longer due to their time-coding nature with multiple time steps.
Title: A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks. Information (Switzerland), 2023-10-01. DOI: 10.3390/info14100537.
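To make the BFV mechanics concrete, here is a tiny sketch using the Pyfhel library (an assumption; the paper's implementation may use a different BFV backend). The parameters n and t_bits are illustrative; t is the plaintext modulus whose size the abstract identifies as the accuracy-relevant knob.

```python
# BFV arithmetic on encrypted integers with Pyfhel (assumed backend).
import numpy as np
from Pyfhel import Pyfhel

HE = Pyfhel()
HE.contextGen(scheme="bfv", n=2**14, t_bits=20)  # t: plaintext modulus (illustrative size)
HE.keyGen()
HE.relinKeyGen()

x = HE.encryptInt(np.array([7], dtype=np.int64))   # e.g., an input activation
w = HE.encryptInt(np.array([3], dtype=np.int64))   # e.g., a weight

y = x * w     # ciphertext-ciphertext multiplication
~y            # relinearize in place to keep ciphertext size bounded
y += x        # homomorphic addition

print(HE.decryptInt(y)[0])  # 7 * 3 + 7 = 28, computed without ever decrypting
```

A larger t enlarges the plaintext space but accelerates noise growth, which is the trade-off behind the accuracy differences the paper reports at low t.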
Thomas Schiller, Bruce Caulkins, Annie S. Wu, Sean Mondesire
Internet of Things (IoT) devices are common in today’s computer networks. These devices can be computationally powerful, yet prone to cybersecurity exploitation. To remedy these growing security weaknesses, this work proposes a new artificial intelligence method that makes IoT networks safer through autonomous, swarm-based cybersecurity penetration testing. The introduced Particle Swarm Optimization (PSO) penetration testing technique is compared against traditional linear and queue-based approaches for finding vulnerabilities in smart homes and IoT networks. To evaluate the effectiveness of the PSO approach, a network simulator is used to simulate smart home networks at two scales: a small home network and a large, commercial-sized network. These experiments demonstrate that the swarm-based algorithms detect vulnerabilities significantly faster than the linear algorithms. The findings support the case that autonomous, swarm-based penetration testing could be used to render IoT networks more secure in the future. This approach is applicable to private households with smart home networks, settings within the Industrial Internet of Things (IIoT), and military environments.
Title: Security Awareness in Smart Homes and Internet of Things Networks through Swarm-Based Cybersecurity Penetration Testing. Information (Switzerland), 2023-09-30. DOI: 10.3390/info14100536.
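The core of the swarm approach can be sketched in a few lines of plain PSO. Everything below, including the 2D (host, port) search space, the fitness function, and the coefficient values, is a hypothetical stand-in, since the paper's simulator and scoring are not reproduced here.

```python
# Plain PSO searching a toy network space for the "most vulnerable" point.
import numpy as np

rng = np.random.default_rng(0)

def vulnerability_score(pos):
    # Hypothetical fitness: higher near a planted weak host at (70, 22)
    return -np.linalg.norm(pos - np.array([70.0, 22.0]), axis=-1)

n_particles, n_iters = 20, 50
w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration coefficients
pos = rng.uniform(0, 100, size=(n_particles, 2))  # particle = (host, port) candidate
vel = np.zeros_like(pos)
pbest = pos.copy()                                # per-particle best positions
pbest_val = vulnerability_score(pos)
gbest = pbest[np.argmax(pbest_val)]               # swarm-wide best position

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = vulnerability_score(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("most vulnerable candidate found:", gbest.round(1))
```

The swarm concentrates probes around promising regions instead of sweeping hosts linearly, which is what yields the speed-up the paper measures.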
Sicong Liu, Qingcheng Fan, Chunjiang Zhao, Shuqin Li
Animal resources are significant to human survival and development and to the balance of ecosystems. Automated multi-animal object detection is critical in animal research, conservation, and ecosystem monitoring. Our objective is to design a model that mitigates the challenges posed by the large number of parameters and computations required by existing animal object detection methods. To pursue this goal, we developed a backbone network with enhanced representative capabilities that combines the foundational structure of the Transformer model with the Large Selective Kernel (LSK) module, known for its wide receptive field. To further reduce the number of parameters and computations, we incorporated a channel pruning technique based on Fisher information to eliminate channels of lower importance. Building on these designs, we constructed RTAD, a real-time animal object detection model based on a Large Selective Kernel and channel pruning. The model was evaluated on AP-10K, a public animal dataset with 50 annotated categories. The results demonstrate that our model has almost half the parameters of YOLOv8-s yet surpasses it by 6.2 AP. Our model provides a new solution for real-time animal object detection.
Title: RTAD: A Real-Time Animal Object Detection Model Based on a Large Selective Kernel and Channel Pruning. Information (Switzerland), 2023-09-30. DOI: 10.3390/info14100535.
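A sketch of Fisher-information channel scoring in PyTorch, assuming importance is estimated from the accumulated squared gradients of each BatchNorm scale (an empirical-Fisher proxy; the paper's exact formulation may differ). The least-important channels are then suppressed.

```python
# Fisher-style channel importance scoring and pruning (proxy formulation).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
criterion = nn.CrossEntropyLoss()

# Accumulate squared gradients (empirical Fisher) over a few batches
fisher = torch.zeros(16)
for _ in range(8):
    x = torch.randn(4, 3, 32, 32)       # stand-in data
    y = torch.randint(0, 10, (4,))
    model.zero_grad()
    criterion(model(x), y).backward()
    bn = model[1]
    fisher += bn.weight.grad.detach() ** 2

# Prune the k least important channels by zeroing their BN scale and shift
k = 4
prune_idx = torch.argsort(fisher)[:k]
with torch.no_grad():
    model[1].weight[prune_idx] = 0.0
    model[1].bias[prune_idx] = 0.0
print("pruned channels:", prune_idx.tolist())
```

In practice the zeroed channels would be physically removed (and the following layers reshaped) to realize the parameter and FLOP savings.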
This work describes a methodology for sound event detection in domestic environments; efficient solutions to this task can support the autonomous living of the elderly. The methodology addresses the “Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE)” 2023, and more specifically Task 4a, “Sound event detection of domestic activities”. This task involves detecting 10 common events in domestic environments within 10 s sound clips, where an event may have arbitrary duration within the clip. The main components of the methodology are: data augmentation on the mel-spectrograms that represent the sound clips; feature extraction by passing the spectrograms through a frequency-dynamic convolution network, with an extra attention module in sequence with each convolution; concatenation of these features with BEATs embeddings; and a BiGRU for sequence modeling. A mean teacher model is also employed to leverage unlabeled data. This research focuses on the effects of data augmentation techniques, feature extraction models, and self-supervised learning. The main contribution is the proposed feature extraction model, which applies weighted attention on frequency in each convolution, combined in sequence with a local attention module adopted from computer vision. The proposed system achieves promising and robust performance.
Title: Sound Event Detection in Domestic Environment Using Frequency-Dynamic Convolution and Local Attention. Authors: Grigorios-Aris Cheimariotis, Nikolaos Mitianoudis. Information (Switzerland), 2023-09-30. DOI: 10.3390/info14100534.
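Of the listed components, the mean-teacher step is easy to isolate: the teacher is an exponential moving average (EMA) of the student and supplies targets for a consistency loss on unlabeled clips. The module choice and decay value below are illustrative assumptions.

```python
# Mean-teacher EMA update (illustrative; architecture and decay assumed).
import torch
import torch.nn as nn

def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.999):
    """teacher <- decay * teacher + (1 - decay) * student, parameter-wise."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

# Simplified stand-in for the BiGRU sequence-modeling stage
student = nn.GRU(input_size=128, hidden_size=64, bidirectional=True)
teacher = nn.GRU(input_size=128, hidden_size=64, bidirectional=True)
teacher.load_state_dict(student.state_dict())  # start identical

# Called once after every optimizer step on the student:
ema_update(teacher, student)
```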
With the widespread adoption of blockchain platforms across various decentralized applications, smart contract vulnerabilities are continuously growing and evolving. Consequently, a failure to adapt conventional vulnerability analysis methods results in unforeseen effects caused by overlooked classes of vulnerabilities. Current methods have difficulty dealing with multifaceted intrusions, which calls for more robust approaches. Moreover, overdependence on environment-defined parameters in the contract execution logic binds the contract to the manipulation of such parameters and is itself perceived as a security vulnerability, and several vulnerability analysis tools have been shown to be insufficient to effectively identify certain types of vulnerability. In this paper, we perform a domain-specific evaluation of state-of-the-art vulnerability detection tools for smart contracts, where a domain is a particular area of knowledge, expertise, or industry. We adopt a perspective specific to energy contracts, drawing on logical and language-dependent features to advance the structural and procedural comprehension of these contracts. The goal is to reach a greater degree of abstraction and navigate the complexities of decentralized applications by determining their domains. In particular, we analyze code embeddings of energy smart contracts and characterize their vulnerabilities in transactive energy systems. We conclude that energy contracts can be affected by a relatively large number of defects. It also appears that the detection accuracy of the tools varies by domain, suggesting that security flaws may be domain-specific. As a result, in some domains, many vulnerabilities may be overlooked by existing analysis tools. Additionally, the overall impact of a specific vulnerability can differ significantly between domains, making its mitigation a priority subject to business logic. More effort should therefore be directed towards the reliable and accurate detection of existing and new types of vulnerability from a domain-specific point of view.
Title: Evaluation of Smart Contract Vulnerability Analysis Tools: A Domain-Specific Perspective. Authors: Bahareh Lashkari, Petr Musilek. Information (Switzerland), 2023-09-29. DOI: 10.3390/info14100533.
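The per-domain comparison reduces to scoring each tool's findings against ground truth within each domain; the toy harness below illustrates the bookkeeping. Tool names and counts are entirely hypothetical.

```python
# Toy per-domain detection-rate comparison (all names and numbers hypothetical).
from collections import defaultdict

# (domain, tool) -> (vulnerabilities detected, vulnerabilities known)
findings = {
    ("energy", "tool_a"): (12, 20),
    ("energy", "tool_b"): (7, 20),
    ("defi", "tool_a"): (30, 35),
    ("defi", "tool_b"): (28, 35),
}

recall = defaultdict(dict)
for (domain, tool), (detected, known) in findings.items():
    recall[domain][tool] = detected / known

for domain, by_tool in sorted(recall.items()):
    print(domain, {tool: f"{r:.0%}" for tool, r in by_tool.items()})
# Rates that diverge across domains are the kind of evidence behind the
# conclusion that tool blind spots are domain-specific.
```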