Pub Date: 2023-10-30 | DOI: 10.3390/computers12110220
Loay Hassan, Adel Saleh, Vivek Kumar Singh, Domenec Puig, Mohamed Abdel-Nasser
"Detecting Breast Tumors in Tomosynthesis Images Utilizing Deep Learning-Based Dynamic Ensemble Approach"
Digital breast tomosynthesis (DBT) stands out as a highly robust screening technique capable of enhancing the rate at which breast cancer is detected, and it addresses certain limitations inherent to mammography. Nonetheless, manually examining the numerous DBT slices per case is notably time-intensive. To address this, computer-aided detection (CAD) systems based on deep learning have emerged, aiming to automatically identify breast tumors within DBT images. However, current CAD systems are hindered by a variety of challenges, including the diversity of breast density and the varied shapes, sizes, and locations of breast lesions. To counteract these limitations, we propose a novel method for detecting breast tumors in DBT images that relies on a dynamic ensemble technique together with robust individual breast tumor detectors (IBTDs). The dynamic ensemble technique utilizes a deep neural network to select the optimal IBTD for detecting breast tumors, based on the characteristics of the input DBT image. The individual breast tumor detectors build on resilient deep-learning architectures and two new data augmentation strategies introduced in this study, namely channel replication and channel concatenation. These augmentation methods are employed to overcome the scarcity of available data and to simulate diverse scenarios of variation in breast density and in the shapes, sizes, and locations of breast lesions, thereby enhancing the detection capability of each IBTD. We evaluate the proposed method against two state-of-the-art ensemble techniques, non-maximum suppression (NMS) and weighted boxes fusion (WBF), and find that the proposed ensemble method achieves the best results, with an F1-score of 84.96% on a publicly accessible DBT dataset. When evaluated on other modalities such as breast mammography, the proposed method consistently attains superior tumor detection outcomes.
Pub Date: 2023-10-27 | DOI: 10.3390/computers12110218
Anastasios Nikolakopoulos, Matilde Julian Segui, Andreu Belsa Pellicer, Michalis Kefalogiannis, Christos-Antonios Gizelis, Achilleas Marinakis, Konstantinos Nestorakis, Theodora Varvarigou
"BigDaM: Efficient Big Data Management and Interoperability Middleware for Seaports as Critical Infrastructures"
Over the last few years, the European Union (EU) has placed significant emphasis on the interoperability of critical infrastructures (CIs). Ports are among the main CI transportation infrastructures. The control systems managing such infrastructures are constantly evolving and handle diverse sets of people, data, and processes. Additionally, interdependencies among different infrastructures can lead to discrepancies in data models that propagate and intensify across interconnected systems. This article introduces “BigDaM”, a Big Data Management framework for critical infrastructures. It is a cutting-edge data model that adheres to the latest technological standards and aims to consolidate APIs and services within highly complex CI infrastructures. Our approach takes a bottom-up perspective, treating each service interconnection as an autonomous entity that must align with the proposed common vocabulary and data model. By injecting strict guidelines into the service/component development lifecycle, we explicitly promote interoperability among the services within critical infrastructure ecosystems. This approach facilitates the exchange and reuse of data from a shared repository among developers, small and medium-sized enterprises (SMEs), and large vendors. Business challenges have also been taken into account in order to link the generated data assets of CIs with the business world. The complete framework has been tested in major EU ports, which are part of the CI transportation sector. The performance evaluation and test results are also analyzed, highlighting the capabilities of the proposed approach.
Pub Date: 2023-10-27 | DOI: 10.3390/computers12110219
Sadiqa Jafari, Yung-Cheol Byun
"A CNN-GRU Approach to the Accurate Prediction of Batteries’ Remaining Useful Life from Charging Profiles"
Predicting the remaining useful life (RUL) is a pivotal step in ensuring the reliability of lithium-ion batteries (LIBs). In order to enhance the precision and stability of battery RUL prediction, this study introduces a hybrid deep learning model that integrates convolutional neural network (CNN) and gated recurrent unit (GRU) architectures. Our primary goal is to significantly improve the accuracy of RUL predictions for LIBs. The model extracts intricate features from a diverse array of data sources, including voltage (V), current (I), temperature (T), and capacity. Within this architecture, parallel CNN layers process each input feature individually, enabling the extraction of highly pertinent information from multi-channel charging profiles. We subjected our model to rigorous evaluations across three distinct scenarios to validate its effectiveness. Compared to LSTM, GRU, and CNN-LSTM models, our CNN-GRU model shows a marked reduction in root mean square error, mean square error, mean absolute error, and mean absolute percentage error. These results affirm the predictive capabilities of the CNN-GRU model, which harnesses the strengths of both CNNs and GRU networks to achieve higher prediction accuracy. This study draws upon NASA data to underscore the strong predictive performance of the CNN-GRU model in estimating the RUL of LIBs.
Pub Date: 2023-10-24 | DOI: 10.3390/computers12110217
Ryosuke Nakamoto, Brendan Flanagan, Taisei Yamauchi, Yiling Dai, Kyosuke Takami, Hiroaki Ogata
"Enhancing Automated Scoring of Math Self-Explanation Quality Using LLM-Generated Datasets: A Semi-Supervised Approach"
In the realm of mathematics education, self-explanation stands as a crucial learning mechanism, allowing learners to articulate their comprehension of intricate mathematical concepts and strategies. As digital learning platforms grow in prominence, there are mounting opportunities to collect and utilize mathematical self-explanations. However, these opportunities are met with challenges in automated evaluation. Automatic scoring of mathematical self-explanations is crucial for preprocessing tasks, including the categorization of learner responses, the identification of common misconceptions, and the creation of tailored feedback and model solutions. Nevertheless, this task is hindered by the dearth of ample sample sets. Our research introduces a semi-supervised technique that uses a large language model (LLM), specifically a Japanese-language variant, to enrich datasets for the automated scoring of mathematical self-explanations. We rigorously evaluated the quality of self-explanations across five datasets, ranging from human-evaluated originals to ones devoid of original content. Our results show that combining LLM-based explanations with mathematical material significantly improves the model’s accuracy. Interestingly, there is an optimal limit to how much synthetic self-explanation data can benefit the system; exceeding this limit does not further improve outcomes. This study thus highlights the need for careful consideration when integrating synthetic data into solutions, especially within the mathematics discipline.
Pub Date: 2023-10-23 | DOI: 10.3390/computers12100215
A. B. M. S. U. Doulah, Mirza Rasheduzzaman, Faed Ahmed Arnob, Farhana Sarker, Nipa Roy, Md. Anwar Ullah, Khondaker A. Mamun
"Application of Augmented Reality Interventions for Children with Autism Spectrum Disorder (ASD): A Systematic Review"
Over the past 10 years, the use of augmented reality (AR) applications to assist individuals with special needs such as intellectual disabilities, autism spectrum disorder (ASD), and physical disabilities has become more widespread. The beneficial features of AR for individuals with autism have driven a large amount of research into using this technology to address autism-related impairments. This study aims to evaluate the effectiveness of AR in rehabilitating and training individuals with ASD through a systematic review using the PRISMA methodology. A comprehensive search of relevant databases was conducted, and 25 articles were selected for further investigation after being filtered based on inclusion criteria. The studies focused on areas such as social interaction, emotion recognition, cooperation, learning, cognitive skills, and living skills. The results showed that AR intervention was most effective in improving individuals’ social skills, followed by learning, behavioral, and living skills. This systematic review provides guidance for future research by highlighting the limitations in current research designs, control groups, sample sizes, and assessment and feedback methods. The findings indicate that augmented reality could be a useful and practical tool for supporting individuals with ASD in daily life activities and promoting their social interactions.
Pub Date: 2023-10-23 | DOI: 10.3390/computers12100216
Amal Naitali, Mohammed Ridouani, Fatima Salahdine, Naima Kaabouch
"Deepfake Attacks: Generation, Detection, Datasets, Challenges, and Research Directions"
Recent years have seen a substantial increase in interest in deepfakes, a fast-developing field at the nexus of artificial intelligence and multimedia. These artificial media creations, made possible by deep learning algorithms, allow for the manipulation and creation of digital content that is extremely realistic and challenging to distinguish from authentic content. Deepfakes can be used for entertainment, education, and research; however, they pose a range of significant problems across various domains, such as misinformation, political manipulation, propaganda, reputational damage, and fraud. This survey paper provides a general understanding of deepfakes and their creation; it also presents an overview of state-of-the-art detection techniques, existing datasets curated for deepfake research, and associated challenges and future research trends. By synthesizing existing knowledge and research, this survey aims to facilitate further advancements in deepfake detection and mitigation strategies, ultimately fostering a safer and more trustworthy digital environment.
Pub Date: 2023-10-21 | DOI: 10.3390/computers12100214
Ingrid A. Buckley, Eduardo B. Fernandez
"Dependability Patterns: A Survey"
Patterns embody the experience and knowledge of designers and are effective ways to improve nonfunctional aspects of software systems. Although there are several catalogs and surveys of security patterns, there is no catalog or general survey of dependability patterns. Our survey presents an enumeration of dependability patterns, which include fault tolerance, reliability, safety, and availability patterns. After defining classification groups and showing basic pattern relationships, we give references to the publications where these patterns were introduced and enumerate their intents. Another objective is to evaluate these patterns to see whether their descriptions are appropriate for a possible catalog, which would make them useful to developers and researchers. We find that most of them need remodeling because they use ad hoc templates or no templates. We consider some models from which patterns can be derived, as well as methodologies that incorporate the use of patterns to build dependable software systems. We also provide directions for future research.
Pub Date: 2023-10-21 | DOI: 10.3390/computers12100213
Mihai-Virgil Nichita, Maria-Alexandra Paun, Vladimir-Alexandru Paun, Viorel-Puiu Paun
"The SARS-CoV-2 Virus Detection with the Help of Artificial Intelligence (AI) and Monitoring the Disease Using Fractal Analysis"
This paper introduces an AI model designed for the diagnosis and monitoring of the SARS-CoV-2 virus. The artificial intelligence (AI) model, founded on machine learning concepts, was created to identify, monitor, and predict the clinical evolution of patients infected with the CoV-2 virus. The deep learning (DL) process (a subset of AI) is specifically trained to identify patterns and provide automated information to healthcare professionals. The AI algorithm is based on the fractal analysis of chest CT images, which serves as a practical guide for detecting the virus and establishing the degree of lung infection. Pulmonary CT images from a free public source were used to develop the AI algorithms for COVID-19 observation/recognition, with or without access to coherent medical data. The box-counting procedure was used to determine the fractal parameters: the value of the fractal dimension and the value of lacunarity. When an infection is confirmed, the analysed image is used as input data for a program that measures the degree of health impairment/damage using fractal analysis. Computed tomography image scans are only the starting point of a correctly established diagnosis. A dedicated software framework was used to process all the collected details. With the trained AI model, a maximum accuracy of 98.1% was obtained. This advanced procedure holds considerable potential for the development of a comprehensive medical solution for pulmonary disease evaluation.
Pub Date: 2023-10-20 | DOI: 10.3390/computers12100212
Alan Huang, Justie Su-Tzu Juan
"L-PRNU: Low-Complexity Privacy-Preserving PRNU-Based Camera Attribution Scheme"
A personal camera fingerprint can be created from images posted on social media by using Photo Response Non-Uniformity (PRNU) noise, and this fingerprint can then be used to determine whether an unknown picture was taken by the same person’s camera. Social media has become ubiquitous in recent years, and many of us regularly share photos of our daily lives online. However, because a PRNU-based camera fingerprint is so easy to create, privacy leakage has become a serious concern. To address this issue, a security scheme based on Boneh–Goh–Nissim (BGN) encryption was proposed in 2021. While effective, BGN encryption incurs a high run-time computational overhead due to its power computations. We therefore devised a new scheme that employs polynomial encryption and pixel confusion, resulting in a computation time more than ten times faster than BGN encryption. This also removes the previous method’s need to send only critical pixels to a Third-Party Expert. Furthermore, our scheme does not require decryption, as polynomial encryption and pixel confusion do not alter the correlation value. Consequently, the presented scheme surpasses previous methods in both theoretical analysis and experimental performance, being faster and more capable.
Pub Date: 2023-10-20 | DOI: 10.3390/computers12100211
Konstantinos Charmanas, Konstantinos Georgiou, Nikolaos Mittas, Lefteris Angelis
"Classifying the Main Technology Clusters and Assignees of Home Automation Networks Using Patent Classifications"
Home automation technologies are a vital part of modern life, as they provide convenience in otherwise mundane and repetitive tasks. In recent years, given the development of the Internet of Things (IoT) and artificial intelligence (AI) sectors, these technologies have seen a tremendous rise, both in the methodologies utilized and in their industrial impact. Hence, many organizations and companies are securing commercial rights by patenting such technologies. In this study, we analyze 8482 home automation patents from the United States Patent and Trademark Office (USPTO) to extract thematic clusters and to distinguish those that drive the market from those that have declined over the course of time. Moreover, we identify the prevalent competitors per cluster and analyze the results with respect to their market impact and objectives. The key findings indicate that home automation networks encompass a variety of technological areas and organizations with diverse interests.