A Support Vector Machine based approach for plagiarism detection in Python code submissions in undergraduate settings
Pub Date: 2024-06-13 | DOI: 10.3389/fcomp.2024.1393723
Nandini Gandhi, Kaushik Gopalan, Prajish Prasad
Mechanisms for plagiarism detection play a crucial role in maintaining academic integrity, both penalizing wrongdoing and serving as a preemptive deterrent. This manuscript proposes a customized algorithm tailored to detecting source code plagiarism in the Python programming language. Our approach combines textual and syntactic techniques, employing a support vector machine (SVM) to combine various indicators of similarity and calculate the resulting similarity scores. The algorithm was trained and tested on code submissions for 4 coding problems each from 45 volunteers; 15 of these were original submissions while the other 30 were plagiarized samples. The submissions for two of the questions were used for training and the other two for testing, using the leave-p-out cross-validation strategy to avoid overfitting. We compare the performance of the proposed method with two widely used tools, MOSS and JPlag, and find that the proposed method yields a small but significant improvement in accuracy over JPlag, while significantly outperforming MOSS in flagging plagiarized samples.
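To make the combination step concrete, here is a minimal Python sketch, not the authors' implementation, of fusing a textual and a syntactic similarity indicator with an SVM; the pair data, feature set, and kernel choice are illustrative assumptions.

```python
# A minimal sketch of combining pairwise similarity indicators with an SVM.
# Textual similarity compares raw source text; syntactic similarity compares
# linearized AST node-type sequences. All training pairs below are toy data.
import ast
import difflib
from sklearn.svm import SVC

def textual_similarity(code_a: str, code_b: str) -> float:
    """Character-level similarity of the raw source texts."""
    return difflib.SequenceMatcher(None, code_a, code_b).ratio()

def syntactic_similarity(code_a: str, code_b: str) -> float:
    """Similarity of the AST node-type sequences, ignoring identifiers."""
    nodes_a = [type(n).__name__ for n in ast.walk(ast.parse(code_a))]
    nodes_b = [type(n).__name__ for n in ast.walk(ast.parse(code_b))]
    return difflib.SequenceMatcher(None, nodes_a, nodes_b).ratio()

# Each example is a pair of submissions; the label marks a plagiarized pair.
pairs = [("x = 1\nprint(x)", "y = 1\nprint(y)"), ("x = 1", "def f():\n    pass")]
labels = [1, 0]  # 1 = plagiarized, 0 = independent

features = [[textual_similarity(a, b), syntactic_similarity(a, b)] for a, b in pairs]
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict([[0.9, 0.95]]))  # score a new, highly similar pair
```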
{"title":"A Support Vector Machine based approach for plagiarism detection in Python code submissions in undergraduate settings","authors":"Nandini Gandhi, Kaushik Gopalan, Prajish Prasad","doi":"10.3389/fcomp.2024.1393723","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1393723","url":null,"abstract":"Mechanisms for plagiarism detection play a crucial role in maintaining academic integrity, acting both to penalize wrongdoing while also serving as a preemptive deterrent for bad behavior. This manuscript proposes a customized plagiarism detection algorithm tailored to detect source code plagiarism in the Python programming language. Our approach combines textual and syntactic techniques, employing a support vector machine (SVM) to effectively combine various indicators of similarity and calculate the resulting similarity scores. The algorithm was trained and tested using a sample of code submissions of 4 coding problems each from 45 volunteers; 15 of these were original submissions while the other 30 were plagiarized samples. The submissions of two of the questions was used for training and the other two for testing-using the leave-p-out cross-validation strategy to avoid overfitting. We compare the performance of the proposed method with two widely used tools-MOSS and JPlag—and find that the proposed method results in a small but significant improvement in accuracy compared to JPlag, while significantly outperforming MOSS in flagging plagiarized samples.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141346952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Working with agile and crowd: human factors identified from the industry
Pub Date: 2024-06-12 | DOI: 10.3389/fcomp.2024.1400750
Shamaila Qayyum, Salma Imtiaz, Huma Hayat Khan, Ahmad Almadhor, Vincent Karovic
Crowdsourcing software development (CSSD) is an emerging technique in software development that helps utilize the diversified skills of people from across the world. Like all emerging techniques, CSSD has its own benefits and challenges, and unique challenges arise when CSSD is combined with Agile methodology, because many characteristics of CSSD differ from Agile principles: CSSD is a distributed approach where workers are unknown to each other, whereas Agile emphasizes team cohesion and is mostly suited to colocated teams. Many organizations are now combining CSSD and Agile methodologies, yet there is limited understanding of the implications of this integration. It is crucial to emphasize the human factors at play when implementing Agile alongside CSSD: how teams interact, communicate, and adapt within these frameworks. By recognizing these dynamics, organizations can better navigate the complexities of integrating CSSD and Agile, ultimately fostering more efficient and collaborative development processes. This study explores the human factors involved in the integration of CSSD with Agile by identifying the challenges practitioners face when they follow Agile with CSSD and the strategies they adopt. The study contributes an in-depth understanding of CSSD integrated with Agile and surfaces practitioner challenges not previously documented. The identified challenges are grouped into six categories: trust-related, coordination and communication, organizational, task-related, project-related, and general challenges. Strategies for each category of challenges are also identified. The list of challenges and strategies can inform further research on CSSD and Agile integration, and practitioners can follow these strategies to reduce the impact of the challenges they face when performing CSSD alongside Agile.
{"title":"Working with agile and crowd: human factors identified from the industry","authors":"Shamaila Qayyum, Salma Imtiaz, Huma Hayat Khan, Ahmad Almadhor, Vincent Karovic","doi":"10.3389/fcomp.2024.1400750","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1400750","url":null,"abstract":"Crowdsourcing software development (CSSD) is an emerging technique in software development. It helps utilize the diversified skills of people from across the world. Similar to all emerging techniques, CSSD has its own benefits and challenges. Some unique challenges arise when CSSD is used with Agile methodology. This is because many characteristics of CSSD differ from Agile principles. CSSD is a distributed approach where workers are unknown to each other, whereas Agile advocates teamness and is mostly suitable for colocated teams. Many organizations are now combining crowdsourcing software development (CSSD) and Agile methodologies, yet there is limited understanding on the implications of this integration. It is crucial to emphasize the human factors at play when implementing Agile alongside CSSD. This involves considering how teams interact, communicate, and adapt within these frameworks. By recognizing these dynamics, organizations can better navigate the complexities of integrating CSSD and Agile, ultimately fostering more efficient and collaborative development processes.This study aimed to explore the human factors involved in the integration of CSSD with Agile, by identifying the challenges that practitioners face when they follow Agile with CSSD and the strategies they follow. The study contributes by providing an in-depth understanding of a new approach, CSSD, integrated with Agile. The study also explores the challenges faced by practitioners that are not already enlisted.These identified challenges are grouped into six different categories, which are trust-related challenges, coordination and communication challenges, organizational challenges, task-related challenges, project-related challenges, and some general challenges. Strategies for each of these categories of challenges are also identified. The list of challenges and strategies identified in this study can be helpful in further research on CSSD and Agile integration. The practitioners can also follow these strategies to reduce the impact of challenges they face while they perform CSSD along with Agile.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141355224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-efficient, low-latency, and non-contact eye blink detection with capacitive sensing
Pub Date: 2024-06-11 | DOI: 10.3389/fcomp.2024.1394397
Mengxi Liu, Sizhen Bian, Zimin Zhao, Bo Zhou, P. Lukowicz
This work describes a novel non-contact, wearable, real-time eye blink detection solution based on capacitive sensing technology. A custom-built prototype employing low-cost, low-power capacitive sensors was integrated into standard glasses, with a copper tape electrode affixed to the frame. An eye blink induces a variation in capacitance between the electrode and the eyelid, generating a distinctive capacitance-related signal from which blink activity can be accurately identified. The effectiveness and reliability of the proposed solution were evaluated across five distinct scenarios involving eight participants. Using a user-dependent detection method with a customized predefined threshold value, an average precision of 92% and a recall of 94% were achieved. An efficient user-independent model based on a two-bit-precision decision tree was also applied, yielding an average precision of 80% and an average recall of 81%. These results demonstrate the potential of the proposed technology for real-world applications requiring precise and unobtrusive eye blink detection.
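As a rough illustration of the user-dependent, threshold-based method, the following Python sketch flags blinks when a capacitance trace deviates from its baseline by more than a per-user threshold; the synthetic signal, threshold value, and refractory window are assumptions, not the paper's data.

```python
# Illustrative sketch: flag a blink when the capacitance signal deviates from
# its baseline by more than a per-user threshold. Toy data throughout.
import numpy as np

def detect_blinks(signal: np.ndarray, threshold: float, refractory: int = 10) -> list:
    """Return sample indices where a blink is detected.

    refractory suppresses re-triggering while the eyelid is still moving.
    """
    baseline = np.median(signal)
    events, last = [], -refractory
    for i, v in enumerate(signal):
        if abs(v - baseline) > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic capacitance trace: flat baseline with two blink-like dips.
sig = np.ones(100)
sig[20:24] -= 0.5
sig[60:64] -= 0.6
print(detect_blinks(sig, threshold=0.3))  # -> [20, 60]
```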
{"title":"Energy-efficient, low-latency, and non-contact eye blink detection with capacitive sensing","authors":"Mengxi Liu, Sizhen Bian, Zimin Zhao, Bo Zhou, P. Lukowicz","doi":"10.3389/fcomp.2024.1394397","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1394397","url":null,"abstract":"This work described a novel non-contact, wearable, real-time eye blink detection solution based on capacitive sensing technology. A custom-built prototype employing low-cost and low-power consumption capacitive sensors was integrated into standard glasses, with a copper tape electrode affixed to the frame. The blink of an eye induces a variation in capacitance between the electrode and the eyelid, thereby generating a distinctive capacitance-related signal. By analyzing this signal, eye blink activity can be accurately identified. The effectiveness and reliability of the proposed solution were evaluated through five distinct scenarios involving eight participants. Utilizing a user-dependent detection method with a customized predefined threshold value, an average precision of 92% and a recall of 94% were achieved. Furthermore, an efficient user-independent model based on the two-bit precision decision tree was further applied, yielding an average precision of 80% and an average recall of 81%. These results demonstrate the potential of the proposed technology for real-world applications requiring precise and unobtrusive eye blink detection.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141357849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experimenting with D-Wave quantum annealers on prime factorization problems
Pub Date: 2024-06-11 | DOI: 10.3389/fcomp.2024.1335369
Jingwen Ding, Giuseppe Spallitta, Roberto Sebastiani
This paper builds on a paper we published very recently, in which we proposed a novel approach to prime factorization (PF) by quantum annealing, where 8,219,999 = 32,749 × 251 was the highest prime product we were able to factorize; to the best of our knowledge, this is the largest number ever factorized by means of a quantum device. The series of annealing experiments that led us to these results, however, did not follow a straight-line path; rather, they involved a convoluted trial-and-error process, full of failed or partially failed attempts and backtracks, which only in the end drove us to the successful annealing strategies. In this paper, we delve into the reasoning behind our experimental decisions and give an account of some of the attempts we made before conceiving the final strategies that allowed us to achieve our results. This includes a number of ideas, techniques, and strategies we investigated which, although they turned out to be inferior to those we eventually adopted, may provide insights to a more specialized audience of D-Wave users and practitioners. In particular, we report the following insights: (i) different initialization techniques affect performance, among which flux biases are effective when targeting locally-structured embeddings; (ii) chain strengths have a lower impact in locally-structured embeddings than in problems relying on global embeddings; (iii) there is a trade-off between broken chains and excited CFAs, suggesting an incremental annealing-offset remedy applied per module rather than per single qubit. By sharing the details of our experiences, we aim to provide insights into the evolving landscape of quantum annealing and help people access and effectively use D-Wave quantum annealers.
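For readers who want to experiment with the knobs mentioned above, the hedged sketch below shows where chain strength and annealing time enter D-Wave's Ocean SDK; the toy Ising model merely stands in for one module of the authors' multiplier encoding, and running it requires a D-Wave API token and QPU access.

```python
# A hedged sketch of the kind of parameter tuning the paper discusses, using
# D-Wave's Ocean SDK. The BQM is a toy stand-in, not the authors' encoding.
import dimod
from dwave.system import DWaveSampler, EmbeddingComposite

# Toy Ising problem standing in for one module of a multiplier circuit.
bqm = dimod.BinaryQuadraticModel.from_ising({"a": -1}, {("a", "b"): 1})

sampler = EmbeddingComposite(DWaveSampler())
# chain_strength binds the physical qubits representing one logical variable;
# the paper finds its impact smaller for locally-structured embeddings.
sampleset = sampler.sample(bqm, num_reads=1000, chain_strength=2.0,
                           annealing_time=20)
print(sampleset.first.sample, sampleset.first.energy)
```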
{"title":"Experimenting with D-Wave quantum annealers on prime factorization problems","authors":"Jingwen Ding, Giuseppe Spallitta, Roberto Sebastiani","doi":"10.3389/fcomp.2024.1335369","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1335369","url":null,"abstract":"This paper builds on top of a paper we have published very recently, in which we have proposed a novel approach to prime factorization (PF) by quantum annealing, where 8, 219, 999 = 32, 749 × 251 was the highest prime product we were able to factorize—which, to the best of our knowledge is the largest number which was ever factorized by means of a quantum device. The series of annealing experiments which led us to these results, however, did not follow a straight-line path; rather, they involved a convoluted trial-and-error process, full of failed or partially-failed attempts and backtracks, which only in the end drove us to find the successful annealing strategies. In this paper, we delve into the reasoning behind our experimental decisions and provide an account of some of the attempts we have taken before conceiving the final strategies that allowed us to achieve the results. This involves also a bunch of ideas, techniques, and strategies we investigated which, although turned out to be inferior wrt. those we adopted in the end, may instead provide insights to a more-specialized audience of D-Wave users and practitioners. In particular, we show the following insights: (i) different initialization techniques affect performances, among which flux biases are effective when targeting locally-structured embeddings; (ii) chain strengths have a lower impact in locally-structured embeddings compared to problem relying on global embeddings; (iii) there is a trade-off between broken chain and excited CFAs, suggesting an incremental annealing offset remedy approach based on the modules instead of single qubits. Thus, by sharing the details of our experiences, we aim to provide insights into the evolving landscape of quantum annealing, and help people access and effectively use D-Wave quantum annealers.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141360502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unveiling machine learning strategies and considerations in intrusion detection systems: a comprehensive survey
Pub Date: 2024-06-10 | DOI: 10.3389/fcomp.2024.1387354
Ali Hussein Ali, Maha Charfeddine, Boudour Ammar, Bassem Ben Hamed, Faisal Albalwy, Abdulrahman Alqarafi, Amir Hussain
The advancement of communication and internet technology has brought risks to network security, and Intrusion Detection Systems (IDS) were developed to combat malicious network attacks. However, IDSs still struggle with accuracy, false alarms, and detecting novel intrusions. Organizations are therefore incorporating Machine Learning (ML) and Deep Learning (DL) algorithms into IDS for more accurate attack detection. This paper provides an overview of IDS, including their classes and methods, the attacks they detect, and the datasets, metrics, and performance indicators used. A thorough examination of recent publications on IDS-based solutions is conducted, evaluating their strengths and weaknesses, together with a discussion of their potential implications, research challenges, and new trends. We believe this comprehensive review covers the most recent advances and developments in ML- and DL-based IDS, and that it facilitates future research into the potential of emerging Artificial Intelligence (AI) to address the growing complexity of cybersecurity challenges.
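Since the survey repeatedly refers to metrics and performance indicators, the short sketch below computes the usual IDS figures, detection rate, false alarm rate, and accuracy, from a confusion matrix; the labels are synthetic toy data, not any benchmark dataset.

```python
# Standard IDS performance indicators computed from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1, 1, 0]  # 1 = attack, 0 = benign (toy data)
y_pred = [0, 1, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
detection_rate = tp / (tp + fn)    # a.k.a. recall / true positive rate
false_alarm_rate = fp / (fp + tn)  # false positive rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"DR={detection_rate:.2f} FAR={false_alarm_rate:.2f} ACC={accuracy:.2f}")
```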
{"title":"Unveiling machine learning strategies and considerations in intrusion detection systems: a comprehensive survey","authors":"Ali Hussein Ali, Maha Charfeddine, Boudour Ammar, Bassem Ben Hamed, Faisal Albalwy, Abdulrahman Alqarafi, Amir Hussain","doi":"10.3389/fcomp.2024.1387354","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1387354","url":null,"abstract":"The advancement of communication and internet technology has brought risks to network security. Thus, Intrusion Detection Systems (IDS) was developed to combat malicious network attacks. However, IDSs still struggle with accuracy, false alarms, and detecting new intrusions. Therefore, organizations are using Machine Learning (ML) and Deep Learning (DL) algorithms in IDS for more accurate attack detection. This paper provides an overview of IDS, including its classes and methods, the detected attacks as well as the dataset, metrics, and performance indicators used. A thorough examination of recent publications on IDS-based solutions is conducted, evaluating their strengths and weaknesses, as well as a discussion of their potential implications, research challenges, and new trends. We believe that this comprehensive review paper covers the most recent advances and developments in ML and DL-based IDS, and also facilitates future research into the potential of emerging Artificial Intelligence (AI) to address the growing complexity of cybersecurity challenges.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141362890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy Markov model for the reliability analysis of hybrid microgrids
Pub Date: 2024-06-10 | DOI: 10.3389/fcomp.2024.1406086
Kunjabihari Swain, Murthy Cherukuri, Indu Sekhar Samanta, Abhilash Pati, Jayant Giri, Amrutanshu Panigrahi, Hong Qin, Saurav Mallik
This research presents a process for analyzing a hybrid microgrid's dependability using a fuzzy Markov model. The analysis begins with the various microgrid components, such as wind power systems, solar photovoltaic (PV) systems, and battery storage systems; the states induced by component failures are represented using a state-space model. The research then proposes a hybrid microgrid reliability model that analyzes the data using a Markov process. Estimating reliability metrics for the microgrid is difficult because the available data are both limited and imprecise, which is why the study takes these uncertainties into account to make the reliability estimates more realistic. The importance of each microgrid component to overall availability is evaluated using fuzzy sets and reliability assessments. The model is exercised through numerical analysis and the outcomes are examined in detail; the computed overall availability of the hybrid microgrid is 0.99999.
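A minimal sketch of the underlying mechanics follows, assuming a standard two-state (up/down) Markov component with steady-state availability A = μ/(λ + μ) and a triangular fuzzy failure rate evaluated by alpha-cuts; all rates are illustrative, not the paper's data.

```python
# Steady-state availability of a two-state Markov component, evaluated over a
# triangular fuzzy interval for the failure rate via alpha-cuts. Toy rates.
def availability(failure_rate: float, repair_rate: float) -> float:
    """A = mu / (lambda + mu) for a single repairable component."""
    return repair_rate / (failure_rate + repair_rate)

def fuzzy_availability(lam_low, lam_mid, lam_high, mu, alpha):
    """Alpha-cut of availability for a triangular fuzzy failure rate."""
    lam_min = lam_low + alpha * (lam_mid - lam_low)
    lam_max = lam_high - alpha * (lam_high - lam_mid)
    # Availability decreases as the failure rate grows, so the interval flips.
    return availability(lam_max, mu), availability(lam_min, mu)

# Example: a PV subsystem with an uncertain failure rate (per hour), mu = 0.1.
print(fuzzy_availability(1e-4, 2e-4, 4e-4, mu=0.1, alpha=0.5))
```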
{"title":"Fuzzy Markov model for the reliability analysis of hybrid microgrids","authors":"Kunjabihari Swain, Murthy Cherukuri, Indu Sekhar Samanta, Abhilash Pati, Jayant Giri, Amrutanshu Panigrahi, Hong Qin, Saurav Mallik","doi":"10.3389/fcomp.2024.1406086","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1406086","url":null,"abstract":"This research presents a process for analyzing a hybrid microgrid's dependability using a fuzzy Markov model. The research initiated an analysis of the various microgrid components, such as wind power systems, solar photovoltaic (PV) systems, and battery storage systems. The states that are induced by component failures are represented using a state-space model. The research continues by suggesting a hybrid microgrid reliability model that analyzes data using a Markov process. Problems arise when trying to estimate reliability metrics for the microgrid using data that is both restricted and imprecise. This is why the study takes uncertainties into account to make microgrid reliability estimations more realistic. The importance of microgrid components concerning their overall availability is evaluated using fuzzy sets and reliability assessments. The study uses numerical analysis and then carefully considers the outcomes. The overall availability of hybrid microgrids is 0.99999.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141361935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing offensive cyber operation planner's development: exploring tailored training paths and framework evolution
Pub Date: 2024-06-07 | DOI: 10.3389/fcomp.2024.1400360
Marko Arik, Ricardo Gregorio Lugo, Rain Ottis, Adrian Nicholas Venables
This study investigates Offensive Cyber Operations (OCO) planner development, focusing on the need for tailored training paths and the continuous evolution of frameworks. As the complexity of global challenges and security threats grows, OCO planners play a pivotal role in operationalising and executing operations effectively. The research used a qualitative case study approach, combining literature reviews with interviews of OCO military professionals, to explore OCO planners' competencies and training frameworks at the operational level. The interviews emphasize the need for comprehensive training, trust, and standardized training pathways in OCO planning, with real-time exposure being the most effective preparation for practical planning. The literature review highlights key OCO training options, including Cyber Range Integration, cognitive architectures, and Persistent Cyber Training Environment platforms, and underlines the role of educational initiatives, industry contributions, and practical experience in developing OCO expertise. The discussion reinforces these findings and emphasizes the need for a dual skill set and a structured training path for OCO planners, with real-time exposure through exercises and courses the most effective route to becoming a practical OCO planner.
{"title":"Optimizing offensive cyber operation planner‘s development: exploring tailored training paths and framework evolution","authors":"Marko Arik, Ricardo Gregorio Lugo, Rain Ottis, Adrian Nicholas Venables","doi":"10.3389/fcomp.2024.1400360","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1400360","url":null,"abstract":"This study aims to investigate Offensive Cyber Operations (OCO) planner development, focusing on addressing the need for tailored training paths and the continuous evolution of frameworks. As the complexity of global challenges and security threats grows, OCO planners play a pivotal role in operationalising and executing operations effectively. The research utilized a qualitative case study approach, combining literature reviews and interviews with OCO military professionals, to explore OCO planners' competencies and training frameworks at the operational level. Interviews emphasize the need for comprehensive training, trust, and standardized training pathways in OCO planning, with real-time exposure being the most effective approach for practical planning. The literature review highlights key OCO training options, including Cyber Range Integration, cognitive architectures, and Persistent Cyber Training Environment platforms. It emphasizes educational initiatives, industry contributions, and practical experience in developing expertise in OCO. Discussions highlight the importance of Cyber Range Integration, educational initiatives, and practical experience in OCO. It emphasizes the need for a dual skill set and a structured training path for OCO planners. Real-time exposure through exercises and courses is the most effective approach to becoming a practical OCO planner.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141374490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A methodological approach for data standardization and management of Open Data portals for scientific research groups: a case study on mobile and ubiquitous ecosystems
Pub Date: 2024-06-07 | DOI: 10.3389/fcomp.2024.1420709
Vladimir Villarreal, L. Muñoz, Joseph González, Jesús Fontecha, C. Dobrescu, Mel Nielsen, Dimas Concepcion, Marco Rodríguez
In the scientific research domain, the Open Science movement stands as a fundamental pillar for advancing knowledge and innovation globally. This article presents the design and implementation of the GITCE Open Data Ecosystem (GITCE-ODE) Research Data Management System (RDMS), developed by the Research Group on Emerging Computational Technologies (GITCE) at the Technological University of Panama, as a platform for the long-term storage, publication, and dissemination of research products. The architecture of the GITCE-ODE RDMS encompasses the entire data engineering lifecycle, facilitating information processing stages such as extraction, transformation, and loading (ETL), as well as the management and analysis of diverse datasets and metadata. Compliance with the FAIR principles ensures that published data and products are Findable, Accessible, Interoperable, and Reusable, promoting automation in the discovery and reuse of digital resources. Key considerations of the web portal include file format standardization, data categorization, treatment of semantic context, and organization of resources to ensure efficient management and administration of open research data. Through this platform, GITCE aims to foster collaboration, transparency, and accessibility in scientific research, contributing to the ongoing advancement of knowledge transfer and innovation.
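As a hypothetical illustration of the transformation stage such a portal's ETL lifecycle implies, the sketch below normalizes a raw record into a dataset entry carrying FAIR-oriented metadata; all field names and defaults are assumptions, not the GITCE-ODE schema.

```python
# Hypothetical ETL "transform" step: standardize formats and attach metadata
# that keeps a record findable (identifier, keywords) and reusable (license,
# date issued). Field names are illustrative assumptions.
import json
from datetime import date

def transform(raw: dict) -> dict:
    return {
        "identifier": raw["id"],
        "title": raw["title"].strip(),
        "keywords": sorted(k.lower() for k in raw.get("tags", [])),
        "license": raw.get("license", "CC-BY-4.0"),
        "issued": date.today().isoformat(),
        "format": "text/csv",
    }

record = transform({"id": "gitce-0001", "title": " Sensor traces ",
                    "tags": ["Ubiquitous", "mobile"]})
print(json.dumps(record, indent=2))
```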
{"title":"A methodological approach for data standardization and management of Open Data portals for scientific research groups: a case study on mobile and ubiquitous ecosystems","authors":"Vladimir Villarreal, L. Muñoz, Joseph González, Jesús Fontecha, C. Dobrescu, Mel Nielsen, Dimas Concepcion, Marco Rodríguez","doi":"10.3389/fcomp.2024.1420709","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1420709","url":null,"abstract":"In the scientific research domain, the Open Science movement stands as a fundamental pillar for advancing knowledge and innovation globally. This article presents the design and implementation of the GITCE Open Data Ecosystem (GITCE-ODE) Research Data Management System (RDMS), developed by the Research Group on Emerging Computational Technologies (GITCE) at the Technological University of Panama, as a platform for the long-term storage, publication, and dissemination of research products.The architecture of the GITCE-ODE RDMS encompasses the entire data engineering lifecycle, facilitating information processing stages such as extraction, transformation, loading (ETL), as well as the management and analysis of diverse datasets and metadata.Compliance with the FAIR principles ensures that published data and products are Findable, Accessible, Interoperable, and Reusable, promoting automation in the discovery and reuse of digital resources. Key considerations of the web portal include file format standardization, data categorization, treatment of semantic context, and organization of resources to ensure efficient management and administration of open research data.Through this platform, GITCE aims to foster collaboration, transparency, and accessibility in scientific research, contributing to the ongoing advancement of knowledge transfer and innovation.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141374563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benchmarking quantum annealing with maximum cardinality matching problems
Pub Date: 2024-06-05 | DOI: 10.3389/fcomp.2024.1286057
D. Vert, M. Willsch, Berat Yenilen, Renaud Sirdey, Stéphane Louise, K. Michielsen
We benchmark Quantum Annealing (QA) vs. Simulated Annealing (SA) with a focus on the impact of embedding problems onto the different topologies of the D-Wave quantum annealers. The problems we study are specially designed instances of the maximum cardinality matching problem that are easy to solve classically but difficult for SA and, as we find experimentally, not easy for QA either. In addition to using several D-Wave processors, we simulate the QA process by numerically solving the time-dependent Schrödinger equation. We find that the embedded problems can be significantly more difficult than the unembedded problems, and that some parameters, such as the chain strength, strongly affect whether the optimal solution is found. Finding a good embedding and optimal parameter values can therefore improve the results considerably. Interestingly, although SA succeeds on the unembedded problems, the SA results for the embedded version scale quite poorly compared with what we can achieve on the D-Wave quantum annealers.
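To show the problem family being benchmarked, the sketch below encodes a small maximum cardinality matching instance as a QUBO, rewarding chosen edges and penalizing pairs of chosen edges that share a vertex, and solves it exactly with dimod; the paper's instances, embeddings, and penalty settings differ.

```python
# Maximum cardinality matching as a QUBO: one binary variable per edge,
# -1 reward for including an edge, a penalty for two edges sharing a vertex.
import itertools
import dimod

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle; a maximum matching has 2 edges
penalty = 2.0

qubo = {(e, e): -1.0 for e in edges}       # linear terms: reward chosen edges
for e1, e2 in itertools.combinations(edges, 2):
    if set(e1) & set(e2):                  # edges share a vertex: forbid both
        qubo[(e1, e2)] = penalty

best = dimod.ExactSolver().sample_qubo(qubo).first
matching = [e for e, used in best.sample.items() if used]
print(matching)  # e.g. [(0, 1), (2, 3)]
```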
{"title":"Benchmarking quantum annealing with maximum cardinality matching problems","authors":"D. Vert, M. Willsch, Berat Yenilen, Renaud Sirdey, Stéphane Louise, K. Michielsen","doi":"10.3389/fcomp.2024.1286057","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1286057","url":null,"abstract":"We benchmark Quantum Annealing (QA) vs. Simulated Annealing (SA) with a focus on the impact of the embedding of problems onto the different topologies of the D-Wave quantum annealers. The series of problems we study are especially designed instances of the maximum cardinality matching problem that are easy to solve classically but difficult for SA and, as found experimentally, not easy for QA either. In addition to using several D-Wave processors, we simulate the QA process by numerically solving the time-dependent Schrödinger equation. We find that the embedded problems can be significantly more difficult than the unembedded problems, and some parameters, such as the chain strength, can be very impactful for finding the optimal solution. Thus, finding a good embedding and optimal parameter values can improve the results considerably. Interestingly, we find that although SA succeeds for the unembedded problems, the SA results obtained for the embedded version scale quite poorly in comparison with what we can achieve on the D-Wave quantum annealers.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141381866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and simulation of a new QCA-based low-power universal gate
Pub Date: 2024-06-05 | DOI: 10.3389/fcomp.2024.1373906
Hamidreza Sadrarhami, S. M. Zanjani, M. Dolatshahi, Behrang Barekatain
Quantum-dot Cellular Automata (QCA) is recognized in electronics for its low power consumption and high-density capabilities, emerging as a potential substitute for CMOS technology. Gate Diffusion Input (GDI) is an innovative approach for enhancing power efficiency and spatial optimization in digital circuits. This study introduces an advanced four-input Improved Gate Diffusion Input (IGDI) design for QCA technology as a universal gate. A key feature of the proposed 10-cell block is the absence of cross-wiring, which significantly enhances the circuit's operational efficiency. Its universal nature allows various logic gates to be realized merely by altering input values, without any structural redesign. The proposed design shows notable advances over prior models, including a 17% reduction in cell count, a 29% decrease in total energy usage, and a 44% reduction in average energy loss. The IGDI design efficiently executes 21 combinational and various sequential functions. Simulations in 18 nm technology, accompanied by energy consumption analyses, demonstrate the design's superior performance compared to existing models in key areas such as multiplexers, comparators, and memory circuits, alongside a significant reduction in cell count.
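QCA logic is conventionally built from three-input majority gates, where fixing one input to 0 or 1 yields AND or OR; the toy truth-table sketch below illustrates how a single block can realize different gates purely by reassigning input values, which is the reconfiguration idea behind the IGDI cell (the mapping is illustrative, not the paper's four-input design).

```python
# Majority-gate view of QCA reconfigurability: M(a, b, c) = 1 when at least
# two inputs are 1. Fixing c = 0 gives AND; fixing c = 1 gives OR; an inverter
# stage then yields NAND/NOR, all from the same block.
from itertools import product

def maj(a: int, b: int, c: int) -> int:
    return int(a + b + c >= 2)

for a, b in product((0, 1), repeat=2):
    and_out = maj(a, b, 0)   # fixed-0 input -> AND
    or_out = maj(a, b, 1)    # fixed-1 input -> OR
    nand_out = 1 - and_out   # inversion via a QCA inverter stage
    print(a, b, "AND:", and_out, "OR:", or_out, "NAND:", nand_out)
```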
{"title":"Design and simulation of a new QCA-based low-power universal gate","authors":"Hamidreza Sadrarhami, S. M. Zanjani, M. Dolatshahi, Behrang Barekatain","doi":"10.3389/fcomp.2024.1373906","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1373906","url":null,"abstract":"Quantum-dot Cellular Automata (QCA) is recognized in electronics for its low power consumption and high-density capabilities, emerging as a potential substitute for CMOS technology. GDI (Gate Diffusion Input) technology is featured as an innovative approach for enhancing power efficiency and spatial optimization in digital circuits. This study introduces an advanced four-input Improved Gate Diffusion Input (IGDI) design specifically for QCA technology as a universal gate. A key feature of the proposed 10-cell block is the absence of cross-wiring, which significantly enhances the circuit’s operational efficiency. Its universal cell nature allows for the carrying out of various logical gates by merely altering input values, without necessitating any structural redesign. The proposed design showcases notable advancements over prior models, including a reduced cell count by 17%, a 29% decrease in total energy usage, and a 44% reduction in average energy loss. This innovative IGDI design efficiently executes 21 combinational and various sequential functions. Simulations in 18 nm technology, accompanied by energy consumption analyses, demonstrate this design’s superior performance compared to existing models in key areas such as multiplexers, comparators, and memory circuits, alongside a significant reduction in cell count.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141384400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}