BEAPI: A tool for bounded exhaustive input generation from APIs
Mariano Politano, Valeria Bengolea, Facundo Molina, Nazareno Aguirre, Marcelo Frias, Pablo Ponzio
Pub Date: 2024-06-05 | DOI: 10.1016/j.scico.2024.103153
Bounded exhaustive testing is a very effective bug-finding technique that tests a given program on all valid inputs within a bound provided by the developer. Existing bounded exhaustive testing techniques require the developer to provide a precise specification of the valid inputs. Such specifications are rarely present in the software under test, and writing them can be costly and challenging.
To address this situation we propose BEAPI, a tool that, given a Java class under test, generates a bounded exhaustive set of objects of the class using only the class' methods, with no specification required. BEAPI builds sequences of calls to methods from the class' public API and executes them to generate inputs, applying very effective pruning techniques to keep generation efficient (a sketch of this scheme follows).
We experimentally assessed BEAPI on several case studies from the literature and showed that it performs comparably to the best existing specification-based bounded exhaustive generation tool (Korat), without requiring a specification of the valid inputs.
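To make the generation scheme concrete, below is a minimal sketch of bounded exhaustive generation driven purely by a class' public API, with state-based pruning: a breadth-first search over method-call sequences that discards any sequence whose resulting object state has already been seen. This is our own illustration rather than BEAPI's actual implementation; the toy BoundedSet class, the bound of three calls, and the scope of argument values are assumptions made for the example.

    import itertools
    from copy import deepcopy

    class BoundedSet:
        """Toy class under test: a set implemented over a list."""
        def __init__(self):
            self.elems = []

        def add(self, x):
            if x not in self.elems:
                self.elems.append(x)

        def remove(self, x):
            if x in self.elems:
                self.elems.remove(x)

        def canonical(self):
            # Canonical representation of the object's state, used for pruning:
            # two call sequences reaching the same state are interchangeable.
            return tuple(sorted(self.elems))

    def bounded_exhaustive(max_calls=3, scope=(0, 1, 2)):
        """BFS over API call sequences, pruning states seen before."""
        seen = {BoundedSet().canonical()}
        frontier = [BoundedSet()]
        for _ in range(max_calls):
            next_frontier = []
            for obj in frontier:
                for method, arg in itertools.product(("add", "remove"), scope):
                    candidate = deepcopy(obj)
                    getattr(candidate, method)(arg)
                    state = candidate.canonical()
                    if state not in seen:   # pruning: skip redundant sequences
                        seen.add(state)
                        next_frontier.append(candidate)
            frontier = next_frontier
        return seen

    print(sorted(bounded_exhaustive()))  # all set states reachable within the bounds

The pruning step is what keeps the search tractable: without the canonical-state check, the number of call sequences grows exponentially with the bound even though most sequences produce objects already generated.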
{"title":"BEAPI: A tool for bounded exhaustive input generation from APIs","authors":"Mariano Politano , Valeria Bengolea , Facundo Molina , Nazareno Aguirre , Marcelo Frias , Pablo Ponzio","doi":"10.1016/j.scico.2024.103153","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103153","url":null,"abstract":"<div><p>Bounded exhaustive testing is a very effective technique for bug finding, which proposes to test a given program under all valid bounded inputs, for a bound provided by the developer. Existing bounded exhaustive testing techniques require the developer to provide a precise specification of the valid inputs. Such specifications are rarely present as part of the software under test, and writing them can be costly and challenging.</p><p>To address this situation we propose BEAPI, a tool that given a Java class under test, generates a bounded exhaustive set of objects of the class solely employing the methods of the class, without the need for a specification. BEAPI creates sequences of calls to methods from the class' public API, and executes them to generate inputs. BEAPI implements very effective pruning techniques that allow it to generate inputs efficiently.</p><p>We experimentally assessed BEAPI in several case studies from the literature, and showed that it performs comparably to the best existing specification-based bounded exhaustive generation tool (Korat), without requiring a specification of the valid inputs.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103153"},"PeriodicalIF":1.3,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141294635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel program analysis on path ranges
Jan Haltermann, Marie-Christine Jakobs, Cedric Richter, Heike Wehrheim
Pub Date: 2024-05-31 | DOI: 10.1016/j.scico.2024.103154
Symbolic execution is a software verification technique that runs programs on symbolic inputs to check for bugs. Ranged symbolic execution performs symbolic execution on program parts, so-called path ranges, in parallel. This parallelism accelerates verification and hence scales to larger programs.
In this paper, we discuss a generalization of ranged symbolic execution to arbitrary program analyses. More specifically, we present a verification approach that splits programs into path ranges and then runs arbitrary analyses on the ranges in parallel. In particular, our approach allows different analyses to run on different program parts. We have implemented this generalization on top of the tool CPAchecker and evaluated it on programs from the SV-COMP benchmark. Our evaluation shows that verification can benefit from parallelizing the verification task, but also needs a form of work stealing (between analyses) to become efficient; the sketch below illustrates the scheme.
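The following is a minimal sketch of the execution scheme, our own illustration rather than the CPAchecker-based implementation: path ranges go into a shared queue, several analyses run in parallel as worker threads, and an idle worker pulls ("steals") the next pending range instead of sitting idle. The range identifiers and the analyze stub are placeholders.

    import queue
    import threading

    # Stand-in path ranges: in ranged symbolic execution these would be
    # the program paths lying between two bounding test inputs.
    work = queue.Queue()
    for i in range(8):
        work.put(f"range-{i}")

    results = []
    results_lock = threading.Lock()

    def analyze(range_id, analysis_name):
        # Placeholder for running one analysis (e.g. symbolic execution,
        # value analysis) on one path range.
        return f"{analysis_name} verified {range_id}"

    def worker(analysis_name):
        while True:
            try:
                r = work.get_nowait()   # take the next pending range
            except queue.Empty:
                return                  # no work left; this worker terminates
            outcome = analyze(r, analysis_name)
            with results_lock:
                results.append(outcome)

    # Different analyses may run on different ranges, as in the paper.
    threads = [threading.Thread(target=worker, args=(name,))
               for name in ("symbolic-execution", "value-analysis")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("\n".join(sorted(results)))

The shared queue approximates the work stealing the paper calls for: whichever analysis finishes its range first immediately picks up the next one, so no worker idles while ranges remain.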
{"title":"Parallel program analysis on path ranges","authors":"Jan Haltermann , Marie-Christine Jakobs , Cedric Richter , Heike Wehrheim","doi":"10.1016/j.scico.2024.103154","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103154","url":null,"abstract":"<div><p>Symbolic execution is a software verification technique symbolically running programs and thereby checking for bugs. Ranged symbolic execution performs symbolic execution on program parts, so-called <em>path ranges</em>, in parallel. Due to the parallelism, verification is accelerated and hence scales to larger programs.</p><p>In this paper, we discuss a generalization of ranged symbolic execution to arbitrary program analyses. More specifically, we present a verification approach that splits programs into path ranges and then runs arbitrary analyses on the ranges in parallel. Our approach in particular allows to run <em>different</em> analyses on different program parts. We have implemented this generalization on top of the tool <span>CPAchecker</span> and evaluated it on programs from the SV-COMP benchmark. Our evaluation shows that verification can benefit from the parallelization of the verification task, but also needs a form of work stealing (between analyses) to become efficient.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103154"},"PeriodicalIF":1.3,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0167642324000777/pdfft?md5=c9721851a6e6fced1e9f8337cb568046&pid=1-s2.0-S0167642324000777-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141294633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taming shared mutable states of operating systems in Rust
Pub Date: 2024-05-27 | DOI: 10.1016/j.scico.2024.103152
Jaemin Hong, Sunghwan Shim, Sanguk Park, Tae Woo Kim, Jungwoo Kim, Junsoo Lee, Sukyoung Ryu, Jeehoon Kang
Operating systems (OSs) suffer from pervasive memory bugs. Their primary source is shared mutable states, which are crucial to low-level control and efficiency. The safety of shared mutable states is not guaranteed by C/C++, in which legacy OSs are typically written. Recently, researchers have adopted Rust in OS development to implement clean-slate OSs with fewer memory bugs. Rust's type system ensures the safety of shared mutable states that follow the "aliasing XOR mutability" discipline. With the success of Rust in clean-slate OSs, the industry has become interested in rewriting legacy OSs in Rust. However, one of the most significant obstacles to this goal is shared mutable states that are aliased AND mutable (A&M). While they are essential to the performance of legacy OSs, Rust does not guarantee their safety. Instead, programmers have identified A&M states following the same reasoning principle, dubbed an A&M pattern, and implemented its modular abstraction to facilitate safety reasoning. This paper investigates modular abstractions for A&M patterns in legacy OSs. We present modular abstractions for six A&M patterns in the xv6 OS. Our investigation of Linux and clean-slate Rust OSs shows that the patterns are practical, as all of them are utilized in Linux, and that the abstractions are original, as none of them are found in the Rust OSs. Using the abstractions, we implemented xv6_Rust, a complete rewrite of xv6 in Rust. The abstractions incur no run-time overhead compared to xv6, while reducing the reasoning cost of xv6_Rust to the level of the clean-slate Rust OSs.
{"title":"Taming shared mutable states of operating systems in Rust","authors":"Jaemin Hong , Sunghwan Shim , Sanguk Park , Tae Woo Kim , Jungwoo Kim , Junsoo Lee , Sukyoung Ryu , Jeehoon Kang","doi":"10.1016/j.scico.2024.103152","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103152","url":null,"abstract":"<div><p>Operating systems (OSs) suffer from pervasive memory bugs. Their primary source is shared mutable states, crucial to low-level control and efficiency. The safety of shared mutable states is not guaranteed by C/C++, in which legacy OSs are typically written. Recently, researchers have adopted Rust into OS development to implement clean-slate OSs with fewer memory bugs. Rust ensures the safety of shared mutable states that follow the “aliasing XOR mutability” discipline via its type system. With the success of Rust in clean-slate OSs, the industry has become interested in rewriting legacy OSs in Rust. However, one of the most significant obstacles to this goal is shared mutable states that are <em>aliased AND mutable</em> (A&M). While they are essential to the performance of legacy OSs, Rust does not guarantee their safety. Instead, programmers have identified A&M states with the same reasoning principle dubbed an <em>A&M pattern</em> and implemented its modular abstraction to facilitate safety reasoning. This paper investigates modular abstractions for A&M patterns in legacy OSs. We present modular abstractions for six A&M patterns in the xv6 OS. Our investigation of Linux and clean-slate Rust OSs shows that the patterns are practical, as all of them are utilized in Linux, and the abstractions are original, as none of them are found in the Rust OSs. Using the abstractions, we implemented xv6<span><math><msub><mrow></mrow><mrow><mi>R</mi><mi>u</mi><mi>s</mi><mi>t</mi></mrow></msub></math></span>, a complete rewrite of xv6 in Rust. The abstractions incur no run-time overhead compared to xv6 while reducing the reasoning cost of xv6<span><math><msub><mrow></mrow><mrow><mi>R</mi><mi>u</mi><mi>s</mi><mi>t</mi></mrow></msub></math></span> to the level of the clean-slate Rust OSs.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103152"},"PeriodicalIF":1.3,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141294634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving and comparing performance of machine learning classifiers optimized by swarm intelligent algorithms for code smell detection
Pub Date: 2024-05-15 | DOI: 10.1016/j.scico.2024.103140
Shivani Jain, Anju Saha
In complex systems, the maintenance phase engenders code smells due to incessant shifts in requirements and designs, stringent timelines, and developers' relative inexperience. While not conventionally classified as errors, code smells signify flawed design structures that lead to future bugs and errors. They inflate the software budget and eventually make the system hard to maintain or completely obsolete. To mitigate these challenges, practitioners must detect and refactor code smells. However, the theoretical interpretation of smell definitions and the intelligent establishment of threshold values pose a significant conundrum. Supervised machine learning emerges as a potent strategy to address these problems and alleviate the dependence on expert intervention. The learning mechanism of these algorithms can be refined through data pre-processing and hyperparameter tuning, but selecting the best hyperparameter values is tedious and normally requires an expert. This study introduces a paradigm that fuses twelve swarm-based, meta-heuristic algorithms with two machine learning classifiers, optimizing their hyperparameters, eliminating the need for an expert, and automating the entire code smell detection process (a sketch of the idea follows). With this approach, the highest post-optimization accuracy, precision, recall, F-measure, and ROC-AUC values are 99.09%, 99.20%, 99.09%, 98.06%, and 100%, respectively. The most remarkable upsurge is 35.9% in accuracy, 53.79% in precision, 35.90% in recall, 44.73% in F-measure, and 36.28% in ROC-AUC. Artificial Bee Colony, Grey Wolf, and Salp Swarm Optimizer are the top-performing swarm-intelligent algorithms, and God Class and Data Class are the most readily detectable smells with optimized classifiers. Statistical tests corroborate the profound impact of employing swarm-based algorithms to optimize machine learning classifiers. This seamless integration enhances classifier performance, automates code smell detection, and offers a robust solution to a persistent software engineering challenge.
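The sketch below illustrates the general idea of swarm-based hyperparameter tuning, using particle swarm optimization as a representative swarm method (the paper's top performers were Artificial Bee Colony, Grey Wolf, and Salp Swarm Optimizer; the exact setup differs). The cv_accuracy function is a synthetic stand-in for the cross-validated accuracy a real classifier would achieve on code smell data, and the hyperparameter names (C, gamma) are assumptions for the example.

    import random

    def cv_accuracy(c, gamma):
        # Stand-in for cross-validated classifier accuracy; in the paper this
        # would train and evaluate an actual classifier on code smell data.
        return 1.0 - ((c - 3.0) ** 2 + (gamma - 0.5) ** 2)

    def pso(n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5):
        bounds = [(0.1, 10.0), (0.01, 1.0)]          # search space: (C, gamma)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(n_particles)]
        vel = [[0.0, 0.0] for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                  # each particle's best position
        pbest_val = [cv_accuracy(*p) for p in pos]
        g = max(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(2):
                    r1, r2 = random.random(), random.random()
                    # Velocity update: inertia + pull toward personal and global bests.
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    lo, hi = bounds[d]
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                val = cv_accuracy(*pos[i])
                if val > pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val > gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    best, acc = pso()
    print(f"best (C, gamma) = {best}, estimated accuracy = {acc:.4f}")

The swarm replaces the expert: instead of hand-picking hyperparameters, candidate settings move through the search space guided by the best values found so far, which is what automates the detection pipeline end to end.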
{"title":"Improving and comparing performance of machine learning classifiers optimized by swarm intelligent algorithms for code smell detection","authors":"Shivani Jain, Anju Saha","doi":"10.1016/j.scico.2024.103140","DOIUrl":"10.1016/j.scico.2024.103140","url":null,"abstract":"<div><p>In complex systems, the maintenance phase engenders the emergence of code smells due to incessant shifts in requirements and designs, stringent timelines, and the developer's relative inexperience. While not conventionally classified as errors, code smells inherently signify flawed design structures that lead to future bugs and errors. It increases the software budget and eventually makes the system hard to maintain or completely obsolete. To mitigate these challenges, practitioners must detect and refactor code smells. However, the theoretical interpretation of smell definitions and intelligent establishment of threshold values pose a significant conundrum. Supervised machine learning emerges as a potent strategy to address these problems and alleviate the dependence on expert intervention. The learning mechanism of these algorithms can be refined through data pre-processing and hyperparameter tuning. Selecting the best values for hyperparameters can be tedious and requires an expert. This study introduces an innovative paradigm that fuses twelve swarm-based, meta-heuristic algorithms with two machine learning classifiers, optimizing their hyperparameters, eliminating the need for an expert, and automating the entire code smell detection process. Through this synergistic approach, the highest post-optimization accuracy, precision, recall, F-measure, and ROC-AUC values are 99.09%, 99.20%, 99.09%, 98.06%, and 100%, respectively. The most remarkable upsurge is 35.9% in accuracy, 53.79% in precision, 35.90% in recall, 44.73% in F-measure, and 36.28% in ROC-AUC. Artificial Bee Colony, Grey Wolf, and Salp Swarm Optimizer are the top-performing swarm-intelligent algorithms. God and Data Class are the most readily detectable smells with optimized classifiers. Statistical tests underscore the profound impact of employing swarm-based algorithms to optimize machine learning classifiers, corroborated by statistical tests. This seamless integration enhances classifier performance, automates code smell detection, and offers a robust solution to a persistent software engineering challenge.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103140"},"PeriodicalIF":1.3,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141053981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TerGEC: A graph enhanced contrastive approach for program termination analysis
Pub Date: 2024-05-15 | DOI: 10.1016/j.scico.2024.103141
Shuo Liu, Jacky Wai Keung, Zhen Yang, Yihan Liao, Yishu Li
Context
Programs with non-terminating behavior induce various bugs, such as denial-of-service vulnerabilities and memory exhaustion. Hence, the ability to detect non-terminating programs before software deployment is crucial. Existing detection methods are either execution-based or deep-learning-based. Despite great advances, their limitations are evident: the former requires complex sandbox environments for execution, while the latter lacks fine-grained analysis.
Objective
To overcome the above limitations, this paper proposes a graph-enhanced contrastive approach, TerGEC, which combines inter-class and intra-class semantics to carry out a more fine-grained analysis while avoiding program execution during detection.
Methods
In detail, TerGEC analyzes program behaviors from Abstract Syntax Trees (ASTs), thereby capturing intra-class semantics both syntactically and lexically. It also incorporates contrastive learning to learn the discrepancy between terminating and non-terminating program behaviors, thereby acquiring inter-class semantics. In addition, graph augmentation is designed to improve robustness. TerGEC further employs a weighted contrastive loss and a focal loss to alleviate the class-imbalance problem during non-termination detection (see the sketch below). Consequently, the whole detection process is handled at a finer granularity, and execution is avoided thanks to the deep-learning-based design.
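A minimal, framework-free sketch of the two loss ingredients named above, under illustrative formulations that may differ from the paper's exact definitions: focal loss down-weights easy examples so training focuses on hard ones, and a class-weighted pairwise contrastive loss pulls same-label embeddings together while pushing different-label ones apart. All parameter values here are assumptions for the example.

    import math

    def focal_loss(p, y, alpha=0.75, gamma=2.0):
        """Binary focal loss for one prediction p = P(non-termination).
        alpha up-weights the rare class; gamma damps easy examples."""
        pt = p if y == 1 else 1.0 - p
        a = alpha if y == 1 else 1.0 - alpha
        return -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))

    def weighted_contrastive_loss(z1, z2, same_label, weight=1.0, margin=1.0):
        """Pairwise contrastive loss on two embeddings (lists of floats):
        pull same-label pairs together, push different-label pairs apart."""
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(z1, z2)))
        if same_label:
            return weight * d ** 2
        return weight * max(0.0, margin - d) ** 2

    # A hard positive (true non-termination predicted at only 0.6) contributes
    # far more loss than an easy negative predicted at 0.05.
    print(focal_loss(0.6, 1), focal_loss(0.05, 0))
    print(weighted_contrastive_loss([0.1, 0.9], [0.2, 0.8],
                                    same_label=True, weight=2.0))

Raising the pair weight for the minority (non-terminating) class is one simple way such losses counter class imbalance: errors on rare examples cost more, so the model cannot ignore them.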
Results
We evaluate TerGEC on five datasets covering both Python and C. Extensive experiments demonstrate that TerGEC achieves the best overall performance, outperforming state-of-the-art baselines by 8.20% in mAP and 17.07% in AUC on average across all datasets.
Conclusion
TerGEC detects non-terminating programs with high precision, showing that the combination of inter-class and intra-class learning, along with our proposed class-imbalance solutions, is significantly effective in practice.
{"title":"TerGEC: A graph enhanced contrastive approach for program termination analysis","authors":"Shuo Liu , Jacky Wai Keung , Zhen Yang , Yihan Liao , Yishu Li","doi":"10.1016/j.scico.2024.103141","DOIUrl":"10.1016/j.scico.2024.103141","url":null,"abstract":"<div><h3>Context</h3><p>Programs with non-termination behavior induce various bugs, such as denial-of-service vulnerability and memory exhaustion. Hence the ability to detect non-termination programs before software deployment is crucial. Existing detection methods are either execution-based or deep learning-based. Despite great advances, their limitations are evident. The former requires complex sandbox environments for execution, while the latter lacks fine-grained analysis.</p></div><div><h3>Objective</h3><p>To overcome the above limitations, this paper proposes a graph-enhanced contrastive approach, namely TerGEC, which combines both inter-class and intra-class semantics to carry out a more fine-grained analysis and exempt execution during the detection process.</p></div><div><h3>Methods</h3><p>In detail, TerGEC analyzes behaviors of programs from Abstract Syntax Trees (ASTs), thereby capturing intra-class semantics both syntactically and lexically. Besides, it incorporates contrastive learning to learn the discrepancy between program behaviors of termination and non-termination, thereby acquiring inter-class semantics. In addition, graph augmentation is designed to improve the robustness. Weighted contrastive loss and focal loss are also equipped in TerGEC to alleviate the classes-imbalance problem during the non-termination detection. Consequently, the whole detection process can be handled more fine-grained, and the execution can also be exempted due to the nature of deep learning.</p></div><div><h3>Results</h3><p>We evaluate TerGEC on five datasets of both Python and C languages. Extensive experiments demonstrate TerGEC achieves the best performance overall. Among all experimented datasets, TerGEC outperforms state-of-the-art baselines by 8.20% in terms of mAP and by 17.07% in terms of AUC on average.</p></div><div><h3>Conclusion</h3><p>TerGEC is capable of detecting non-terminating programs with high precision, showing that the combination of inter-class and intra-class learning, along with our proposed classes-imbalance solutions, is significantly effective in practice.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103141"},"PeriodicalIF":1.3,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141028873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Refining a design thinking-based requirements elicitation process: Insights from a focus group
Pub Date: 2024-05-12 | DOI: 10.1016/j.scico.2024.103137
Ezequiel Kahan, Marcela Genero, Alejandro Oliveros
Requirements elicitation processes face a series of challenges and limitations in terms of business process focus, system transparency, and dealing with the complexity that results from interdependence. The Design Thinking approach, which focuses on people and on understanding the context of problems, can help address them. For this reason, a requirements elicitation process based on Design Thinking has been defined, consisting of three activities: Empathise, Synthesise, and Ideate. To refine this process, a focus group discussion with experts was conducted. The experts provided feedback, specifically on the role of empathy in the process, its domain of application, and its activities. The focus group results confirm the usefulness of the process and yield a series of lessons learned that allowed us to continue refining it. This paper presents the process, the main characteristics and results of the focus group, and the refined process.
{"title":"Refining a design thinking-based requirements elicitation process: Insights from a focus group","authors":"Ezequiel Kahan , Marcela Genero , Alejandro Oliveros","doi":"10.1016/j.scico.2024.103137","DOIUrl":"10.1016/j.scico.2024.103137","url":null,"abstract":"<div><p>Requirements elicitation processes have a series of challenges and limitations in terms of business process focus, system transparency, and dealing with the complexity resulting from interdependence. The Design Thinking approach, which focuses on people and on understanding the context of problems, can contribute to solving them. For this reason, a requirements elicitation process based on Design Thinking has been defined, consisting of three activities: Empathise, Synthesise, and Ideate. For refining this process, a focus group discussion with experts was conducted. The experts provided feedback, specifically on the role of empathy in the process, its domain of application and activities. The results analysed from the focus group confirm the usefulness of the process and generate a series of lessons learned that allowed us to continue refining it. This paper presents the cited process, the main characteristics and results of the focus group and the refined process.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103137"},"PeriodicalIF":1.3,"publicationDate":"2024-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141050019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of fuzzy Delphi technique to identify analytical lenses for determining the preparation of free and open source software projects for user experience maturity
Pub Date: 2024-05-11 | DOI: 10.1016/j.scico.2024.103136
Phesto P. Namayala, Tabu S. Kondo
User eXperience (UX) significantly influences the success of free and open source software (FOSS) projects and is measured using UX capability maturity models (UXCMMs). Every organization desires higher levels of UX maturity; reaching them, however, requires upfront preparation and process quality control.
Harmonizing the processes and analytical lenses for determining preparation for UX maturity remains challenging, and studies that create them are limited; in practice, the analysis is ad hoc and based on the actors' will and experiences. This study proposes and validates such analytical lenses.
Findings show that UX experts agreed that the lenses could be used, with a consensus percentage of 81%, a threshold value d = 0.112, and crisp values greater than the α-cut of 0.5 (the sketch below illustrates this arithmetic). On validation, 47.57% of stakeholders agreed, and 52.43% strongly agreed, that the lenses were relevant. The results help evaluate the status quo and change culture and policies toward ideal preparation. Two areas are suggested for future research.
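For concreteness, the following sketch shows the acceptance arithmetic of one common fuzzy Delphi formulation; it is our reconstruction, not necessarily the paper's exact variant, and the expert ratings are hypothetical. Each rating is a triangular fuzzy number (l, m, u), d measures each rating's distance from the group average, consensus is the share of experts within a threshold on d, and the defuzzified (crisp) value must exceed the α-cut of 0.5.

    import math

    # Hypothetical expert ratings for one analytical lens, as triangular
    # fuzzy numbers (l, m, u) mapped from a Likert scale.
    ratings = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.5, 0.7, 0.9),
               (0.7, 0.9, 1.0), (0.5, 0.7, 0.9)]

    # Group average fuzzy number, component-wise.
    avg = tuple(sum(r[i] for r in ratings) / len(ratings) for i in range(3))

    def distance(a, b):
        """Vertex distance between two triangular fuzzy numbers."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

    threshold = 0.2   # a commonly used acceptance threshold on d
    alpha_cut = 0.5
    d_values = [distance(r, avg) for r in ratings]
    consensus = 100.0 * sum(d <= threshold for d in d_values) / len(ratings)
    crisp = sum(avg) / 3.0          # simple centroid defuzzification

    accepted = consensus >= 75.0 and crisp > alpha_cut
    print(f"d values: {[round(d, 3) for d in d_values]}")
    print(f"consensus: {consensus:.1f}%, crisp: {crisp:.3f}, accepted: {accepted}")

Under this reading, the paper's figures mean the lenses cleared all three gates: 81% consensus, an average distance d = 0.112 below the usual threshold, and crisp values above the 0.5 α-cut.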
{"title":"Application of fuzzy Delphi technique to identify analytical lenses for determining the preparation of free and open source software projects for user experience maturity","authors":"Phesto P. Namayala , Tabu S. Kondo","doi":"10.1016/j.scico.2024.103136","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103136","url":null,"abstract":"<div><p>User eXperience (UX) significantly influences the success of free and open source software (FOSS) projects and is measured using UX capability maturity models (UXCMMs). Every organization desires higher levels of UX maturity; however, it requires upfront preparations and process quality control.</p><p>Harmonizing processes and analytical lenses for determining preparation for UX maturity are still challenging, and studies to create them are limited. The analysis is ad hoc and based on the actors’ will and experiences. This study proposes and validates analytical lenses.</p><p>Findings show that UX experts agreed that the lenses could be used with a consensus percentage of 81 %, the threshold value (d) = 0.112, and crisp values greater than α-cut = 0.5. On validation, 47.57 % of stakeholders agreed, and 52.43 % strongly agreed they were relevant. Results help evaluate the status quo and change culture and policies toward ideal preparation. Two areas are suggested for future research.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103136"},"PeriodicalIF":1.3,"publicationDate":"2024-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140906819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orchestration for quantum services: The power of load balancing across multiple service providers
Pub Date: 2024-05-09 | DOI: 10.1016/j.scico.2024.103139
Jaime Alvarado-Valiente, Javier Romero-Álvarez, Enrique Moguel, Jose García-Alonso, Juan M. Murillo
Quantum computing plays a crucial role in solving complex problems for which classical supercomputers require an impractical amount of time. This emerging paradigm has the potential to revolutionize various fields such as cryptography, chemistry, and finance, making it a highly relevant area of research and development. Major companies such as Google, Amazon, IBM, and Microsoft, along with prestigious research institutions such as Oxford and MIT, are investing significant efforts into advancing this technology. However, the lack of a standardized approach among providers makes it hard for developers to access and utilize quantum computing resources effectively. In this study, we propose a quantum orchestrator designed to facilitate the orchestration and execution of quantum circuits across multiple quantum service providers. The proposed solution aims to simplify the process for developers and facilitate the execution of quantum tasks using resources offered by different providers (a sketch of the idea follows). The proposal is validated by implementing the orchestrator for Amazon Braket and IBM Quantum; it supports both quantum and classical developers in defining, configuring, and executing circuits independently of the selected provider.
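The following is a minimal sketch of the orchestration idea: each provider sits behind a common adapter interface, and the orchestrator dispatches each circuit to the least-loaded provider. This is our illustration, not the authors' implementation; the adapter classes are hypothetical stubs, not calls into the real Amazon Braket or IBM Quantum SDKs, and the circuit string and queue counts are made up for the example.

    from abc import ABC, abstractmethod

    class ProviderAdapter(ABC):
        """Common interface hiding provider-specific APIs (hypothetical)."""
        name: str

        @abstractmethod
        def pending_jobs(self) -> int: ...

        @abstractmethod
        def submit(self, circuit: str) -> str: ...

    class BraketAdapter(ProviderAdapter):       # stub, not the real SDK
        name = "Amazon Braket"
        def pending_jobs(self): return 4
        def submit(self, circuit): return f"{self.name} accepted: {circuit}"

    class IBMQuantumAdapter(ProviderAdapter):   # stub, not the real SDK
        name = "IBM Quantum"
        def pending_jobs(self): return 1
        def submit(self, circuit): return f"{self.name} accepted: {circuit}"

    class Orchestrator:
        """Dispatches each circuit to whichever provider is least loaded."""
        def __init__(self, providers):
            self.providers = providers

        def run(self, circuit: str) -> str:
            target = min(self.providers, key=lambda p: p.pending_jobs())
            return target.submit(circuit)

    orchestrator = Orchestrator([BraketAdapter(), IBMQuantumAdapter()])
    print(orchestrator.run("H 0; CX 0 1; MEASURE"))  # goes to the less busy provider

The adapter layer is what lets developers stay provider-independent: circuits are defined once against the common interface, and the load-balancing policy (here, fewest pending jobs) decides where each task actually runs.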
{"title":"Orchestration for quantum services: The power of load balancing across multiple service providers","authors":"Jaime Alvarado-Valiente, Javier Romero-Álvarez, Enrique Moguel, Jose García-Alonso, Juan M. Murillo","doi":"10.1016/j.scico.2024.103139","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103139","url":null,"abstract":"<div><p>Quantum computing plays a crucial role in solving complex problems for which classical supercomputers require an impractical amount of time. This emerging paradigm has the potential to revolutionize various fields such as cryptography, chemistry, and finance, making it a highly relevant area of research and development. Major companies such as Google, Amazon, IBM, and Microsoft, along with prestigious research institutions such as Oxford and MIT, are investing significant efforts into advancing this technology. However, the lack of a standardized approach among different providers poses challenges for developers to effectively access and utilize quantum computing resources. In this study, we propose a quantum orchestrator that is designed to facilitate the orchestration and execution of quantum circuits across multiple quantum service providers. The proposed solution aims to simplify the process for developers and facilitate the execution of quantum tasks using resources offered by different providers. The proposal is validated with the implementation of the proposed orchestrator for Amazon Braket and IBM Quantum. It can support both quantum and classical developers in defining, configuring, and executing circuits independently of the selected provider.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103139"},"PeriodicalIF":1.3,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0167642324000625/pdfft?md5=965e55d13e89cbb8a0e04346b111f55c&pid=1-s2.0-S0167642324000625-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140947833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing novice and competent programmers' problem-solving behaviors using an automated evaluation system
Pub Date: 2024-05-09 | DOI: 10.1016/j.scico.2024.103138
Yung-Ting Chuang, Hsin-Yu Chang
Background and Context
In today's tech-driven world, programming courses are crucial, yet teaching programming is challenging and leads to high student failure rates. Understanding student learning patterns is key, but there is a lack of research on tools that automatically collect and analyze interaction data for insights into student performance and behaviors.
Objectives
This study compares the problem-solving behaviors of novice and competent programmers during coding tests, identifying patterns and exploring their relationship with program correctness.
Method
We built an online system with programming challenges to collect behavior data from novice and competent programmers. The system analyzed the data using various metrics to explore relationships between behavior and program correctness (a simplified illustration follows).
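As an illustration of one such metric-to-correctness analysis, the sketch below computes the Pearson correlation between syntax-error counts and correctness scores. The data and the metric choice are hypothetical, not the study's actual instrumentation.

    import math

    # Hypothetical per-student data: (syntax errors during the test,
    # final program correctness score in [0, 1]).
    records = [(12, 0.40), (3, 0.90), (8, 0.55), (1, 0.95), (6, 0.70), (10, 0.50)]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    errors = [r[0] for r in records]
    scores = [r[1] for r in records]
    # A strongly negative r mirrors the finding that fewer syntax errors
    # go together with higher program correctness.
    print(f"r = {pearson(errors, scores):.3f}")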
Findings
The analysis showed distinct problem-solving behavior patterns. Competent programmers made fewer syntax errors, spent less time fixing bugs, and achieved higher program correctness; novices made more syntax errors and spent more time fixing coding errors. Both groups used tabs for code structure, but competent programmers introduced unfamiliar variables more often and commented on them afterward. Emphasizing iterative revisions and active engagement enhances problem-solving skills and programming proficiency, and radar charts are effective for identifying areas of improvement in teaching programming. Behavior and program correctness were positively correlated for competent programmers but not for novices.
Implications
The findings have implications for programming education. Radar charts help teachers identify course areas needing improvement, novices can learn from competent programmers' behaviors, and instructors should encourage continuous skill improvement through revision and engagement. The unfamiliar programming aspects identified offer insights for targeted learning.
{"title":"Analyzing novice and competent programmers' problem-solving behaviors using an automated evaluation system","authors":"Yung-Ting Chuang, Hsin-Yu Chang","doi":"10.1016/j.scico.2024.103138","DOIUrl":"10.1016/j.scico.2024.103138","url":null,"abstract":"<div><h3>Background and Context</h3><p>In today's tech-driven world, programming courses are crucial. Yet, teaching programming is challenging, leading to high student failure rates. Understanding student learning patterns is key, but there's a lack of research utilizing tools to automatically collect and analyze interaction data for insights into student performance and behaviors.</p></div><div><h3>Objectives</h3><p>Study aims to compare problem-solving behaviors of novice and competent programmers during coding tests, identifying patterns and exploring relationships with program correctness.</p></div><div><h3>Method</h3><p>We built an online system with programming challenges to collect behavior data from novice and competent programmers. Our system analyzed data using various metrics to explore behavior-program correctness relationships.</p></div><div><h3>Findings</h3><p>Analysis showed distinct problem-solving behavior patterns. Competent programmers had fewer syntax errors, spent less time fixing bugs, and had higher program correctness. Novices made more syntax errors and spent more time fixing coding errors. Both groups used tabs for code structure, but competent programmers introduced unfamiliar variables more often and commented on them afterward. Emphasizing iterative revisions and active engagement enhances problem-solving skills and programming proficiency. Radar charts are effective for identifying improvement areas in teaching programming. The relationship between behavior and program correctness was positively correlated for competent programmers but not novices.</p></div><div><h3>Implications</h3><p>Study findings have implications for programming education. Radar charts help teachers identify course improvement areas. Novices can learn from competent programmers' behavior. Instructors should encourage continuous skill improvement through revisions and engagement. Identified unfamiliar programming aspects offer insights for targeted learning.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103138"},"PeriodicalIF":1.3,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141043264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}