
Journal of Computer Languages: Latest Publications

Application of Normalized Systems Theory to pure functional code to achieve sustainability of web front-end applications
IF 1.8 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-08-15 · DOI: 10.1016/j.cola.2025.101346
Jan Slifka, Robert Pergl
Modern web front-end applications frequently encounter challenges in maintaining long-term stability as they evolve to accommodate new requirements. This growing complexity often leads to diminishing maintainability and, in some cases, costly rewrites. To address this issue, we propose a methodology that integrates Normalized Systems Theory (NST), which provides the structural foundations for stable software, with functional programming (FP) principles to construct inherently evolvable front-end systems. Our approach is implemented and evaluated using Elm, a statically typed, purely functional language designed for web front-end development. By aligning Elm’s design patterns with NST theorems, we establish a framework for building systems that are modular, maintainable, and resilient to change. We validate the efficacy of this methodology through a case study of a production-grade Elm application, demonstrating notable improvements in evolvability and system sustainability. While our implementation focuses on Elm, the underlying principles extend to other functional technologies, offering a broadly applicable strategy for achieving long-term stability in web front-end architecture.
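The Elm design patterns the abstract refers to center on the Model-Update-View architecture, in which all state transitions flow through a pure update function. As a rough, hedged sketch of that pattern (the actual case study is written in Elm; all names here are illustrative, not from the paper):

```python
from dataclasses import dataclass, replace
from typing import Union

# Illustrative Python rendering of Elm's Model-Update-View pattern:
# immutable state, explicit messages, and pure update/view functions.

@dataclass(frozen=True)
class Model:
    count: int = 0

@dataclass(frozen=True)
class Increment:
    pass

@dataclass(frozen=True)
class SetCount:
    value: int = 0

Msg = Union[Increment, SetCount]

def update(msg: Msg, model: Model) -> Model:
    """Pure update: a new state is derived, never mutated in place."""
    if isinstance(msg, Increment):
        return replace(model, count=model.count + 1)
    if isinstance(msg, SetCount):
        return replace(model, count=msg.value)
    return model

def view(model: Model) -> str:
    """Pure view: the rendering is a function of the state only."""
    return f"count = {model.count}"
```

Because every state change is funneled through one pure function over immutable data, each message handler stays a small, separately evolvable unit, which is the kind of modular structure NST's stability theorems ask for.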
Citations: 0
Improving LLM-based code completion using LR parsing
IF 1.8 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-29 · DOI: 10.1016/j.cola.2025.101352
Md Monir Ahammod Bin Atique, Hyeon-Ah Moon, Isao Sasano, Kwanghoon Choi
Code completion is a crucial feature in modern IDEs, improving programming efficiency. Traditional systems rely on prefix filtering and static ranking but often overwhelm users with lengthy, alphabetically sorted lists. Recent research has introduced LR-parsing-based approaches that derive completion candidates from language syntax and compute their ranks using open-source programs; however, these methods only suggest structural candidates, requiring manual refinement into complete code. To address this, we propose a hybrid method that integrates LR parsing with LLMs to enhance accuracy and usability. Our approach derives structural candidates via LR parsing and refines them into textual code suggestions with an LLM, referencing a database of ranked candidates drawn from open-source programs. This combines the syntactic precision of LR parsing with the generative capabilities of LLMs. This study examines whether LLMs benefit from LR structural candidates in code completion: by comparing completions generated with and without these candidates, we assess their impact. Building on prior research, we also explore how leveraging top-ranked structural candidates can effectively enhance the precision of LLM-based code completion. We also demonstrate our method through VSCode extensions for Microsoft Small Basic and C. As a language-agnostic solution, our system applies to any language with a defined LR grammar. Our findings suggest that integrating LR parsing with LLM-based completion improves both accuracy and usability, paving the way for more effective code completion in modern IDEs.
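To make the idea of "structural candidates derived from language syntax" concrete, here is a toy Python sketch: a hand-built SLR(1) action table for the tiny grammar E -> E '+' 'n' | 'n', where the terminals the automaton can accept after a token prefix serve as completion candidates. The grammar, table, and names are all illustrative assumptions, and the paper's ranking and LLM-refinement steps are omitted.

```python
# Toy SLR(1) table for the grammar  E -> E '+' 'n' | 'n' .
# ACTION[state][terminal] is ('shift', next_state),
# ('reduce', lhs, rhs_len), or ('accept',).
ACTION = {
    0: {'n': ('shift', 1)},
    1: {'+': ('reduce', 'E', 1), '$': ('reduce', 'E', 1)},
    2: {'+': ('shift', 3), '$': ('accept',)},
    3: {'n': ('shift', 4)},
    4: {'+': ('reduce', 'E', 3), '$': ('reduce', 'E', 3)},
}
GOTO = {0: {'E': 2}}

def parse_prefix(tokens):
    """Run the LR automaton over a token prefix; return the state stack."""
    stack = [0]
    for tok in tokens:
        while True:
            act = ACTION[stack[-1]].get(tok)
            if act is None:
                raise SyntaxError(f'unexpected {tok!r}')
            if act[0] == 'shift':
                stack.append(act[1])
                break
            # reduce: pop the right-hand side, then goto on the left-hand side,
            # and retry the same lookahead token
            _, lhs, rhs_len = act
            del stack[len(stack) - rhs_len:]
            stack.append(GOTO[stack[-1]][lhs])
    return stack

def next_candidates(tokens):
    """Terminals the grammar allows after the prefix: in this sketch,
    these play the role of the structural completion candidates."""
    stack = parse_prefix(tokens)
    return sorted(t for t in ACTION[stack[-1]] if t != '$')
```

In the paper's pipeline, candidates like these would then be ranked against a database mined from open-source programs and handed to an LLM to be fleshed out into concrete code text.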
Citations: 0
Evaluating quantized Large Language Models for code generation on low-resource language benchmarks
IF 1.8 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-29 · DOI: 10.1016/j.cola.2025.101351
Enkhbold Nyamsuren
Democratization of AI, making AI accessible and usable for everyone, is an important topic within the broader issue of the digital divide. This issue is especially relevant to Large Language Models (LLMs), which are becoming increasingly popular as AI co-pilots but suffer from a lack of accessibility due to their high computational demand. In this study, we evaluate whether LLM quantization is a viable approach toward enabling LLMs on generic consumer devices. The study assesses the performance of five quantized code LLMs in Lua and Python code generation tasks. All code LLMs had approximately 7 billion parameters and were deployed on a generic CPU-only consumer laptop. To evaluate the impact of quantization, the models were tested at 2-, 4-, and 8-bit integer precisions. Pass@1 and pass@10 evaluations were done at variable temperatures and token sampling rates. Along with tasks such as question answering, text summarization, and text generation, programming is one of the popular applications of AI co-pilots. Furthermore, code generation is a high-precision task, which makes it a suitable benchmark for evaluating and comparing quantized models for everyday use by individuals. Lua was chosen as a low-resource language to avoid models’ biases toward high-resource languages; its performance is contrasted against Python, chosen as a high-resource language. The results suggest that models quantized at 4-bit integer precision offer the best trade-off between performance and model size, and can be comfortably deployed on an average laptop without a dedicated GPU. The findings suggest that lower quantization precision degrades performance more in low-resource languages than in high-resource languages, while also hinting that quantizing from full precision down to integer precision has a larger effect on performance in the high-resource language. Models quantized at 8-bit integer precision require more inference effort that does not effectively translate into better performance. While quantization indeed increases the accessibility of smaller LLMs with 7 billion parameters, these LLMs demonstrate overall low performance (less than 50%) on high-precision, low-resource tasks such as Lua code generation. While accessibility is improved, usability is still not at the practical level of foundational LLMs such as GPT-4o or Llama 3.1 with 405 billion parameters. Additionally, in most failed instances, the models excel at generating code that is free of syntax errors but fails unit tests or has runtime issues. This means that any generated code requires extensive testing, which may negate any potential efficiency boost delivered by these smaller coding models.
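The pass@1 and pass@10 figures mentioned above are conventionally computed with the unbiased estimator of Chen et al. (the Codex paper); the abstract does not state which estimator this study uses, so treating it as the standard one is an assumption. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generations of which c pass
    the tests, is a passing sample:  pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n = 10 generations of which c = 5 pass, pass@1 is 0.5, while pass@10 is 1.0 as soon as a single generation passes.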
Citations: 0
Novice modelers’ subjective comprehension and interaction with token-animated process models
IF 1.8 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-26 · DOI: 10.1016/j.cola.2025.101350
Ilia Maslov, Stephan Poelmans, Yves Wautelet, Frederik Gailly
Process modeling is fundamental for effective (business) process management. Comprehension of process models by novice modelers and the effective integration of learning technologies present crucial challenges that can be addressed through the application of visualization, animation, and simulation techniques. In this study, we examine the experiences and perceptions of novice modelers deploying token-animated process models, drawing upon data from 119 college students specializing in business management and business engineering who answered comprehension questions based on these models. We concentrate on investigating perceived understanding through the utilization of Technology Adoption Model (TAM) constructs, employing Partial Least Squares to validate an extended research model based on TAM. We additionally analyze qualitative data from respondents' answers to open questions to extract codes and themes that complete the research model's findings. The results confirm that token-animated process models are useful and preferred as a learning technique. Tokens enhance cognitive facilitation by incorporating visualization, animation, and simulation functionalities, resulting in improved objective and perceived comprehension. We extend the comprehension determinants of process models with perceived enjoyment and show that emotional states are also important in the utilization of tokens for teaching purposes. Over 80% of participants reported a clear preference for using token-animated process models, confirming high levels of student acceptance. Our study also identified recommendations for enhancement and potential limitations associated with the use of animated tokens in education. Further theoretical and practical implications are finally discussed.
Citations: 0
TestLoter: A logic-driven framework for automated unit test generation and error repair using large language models
IF 1.8 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-23 · DOI: 10.1016/j.cola.2025.101348
Ruofan Yang, Xianghua Xu, Ran Wang
Automated unit test generation is a critical technique for improving software quality and development efficiency. However, traditional methods often produce test cases with poor business consistency, while approaches based on large language models face two major challenges: a high error rate in generated tests and insufficient code coverage. To address these issues, this paper proposes TestLoter, a logic-driven test generation framework. The core contributions of TestLoter are twofold. First, by integrating the structured analysis capabilities of white-box testing with the functional validation characteristics of black-box testing, we design a logic-driven test generation chain-of-thought that enables deep semantic analysis of code. Second, we establish a hierarchical repair mechanism to systematically correct errors in generated test cases, significantly enhancing the correctness of the test code. Experimental results on nine open-source projects covering various domains, such as data processing and utility libraries, demonstrate that TestLoter achieves 83.6% line coverage and 78% branch coverage. Our approach outperforms both LLM-based methods and traditional search-based software testing techniques in terms of coverage, while also reducing the number of errors in the generated unit test code.
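The hierarchical repair idea can be sketched as a generate-then-repair loop that checks the generated test at the syntax layer first and the runtime layer second. This is a loose sketch only: `generate` and `repair` stand in for the framework's LLM calls, and the real mechanism layers further repairs (e.g. assertion- and coverage-level) that are not shown.

```python
# Minimal sketch of a layered generate-then-repair loop, in the spirit of
# a hierarchical repair mechanism. `generate` and `repair` are hypothetical
# callables standing in for LLM interactions.

def run_test_code(code: str):
    """Return None on success, or the exception the test code raises."""
    try:
        compiled = compile(code, '<generated-test>', 'exec')  # syntax layer
        exec(compiled, {})                                    # runtime layer
    except Exception as exc:
        return exc
    return None

def generate_with_repair(generate, repair, max_rounds: int = 3):
    """Ask `generate` for a test, then give `repair` up to `max_rounds`
    attempts to fix whatever error the previous version raised."""
    code = generate()
    for _ in range(max_rounds):
        error = run_test_code(code)
        if error is None:
            return code
        code = repair(code, error)
    return None  # still failing after the repair budget is spent
```

Separating the error detection (which layer failed, and with what exception) from the repair call is what lets each layer feed a targeted error description back to the model.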
Citations: 0
A methodology for empirical complexity analysis based on Newton’s polynomial interpolation
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-21 · DOI: 10.1016/j.cola.2025.101347
Rafael Fontes Sumitani, Lucas Victor da Silva Costa, Frederico F. Campos, Fernando Magno Quintão Pereira
A cost model is a function that describes how often each part of a program runs depending on its inputs. Cost models can be derived automatically via the observation of counters: instrumentation that tracks the execution of program operations. This paper defines Newton Counters: counters that can be described by a polynomial ranging over a single program input variable whose value can be read in constant time. Additionally, it shows that Newton Counters are prevalent in actual codes. Motivated by this observation, the paper introduces a methodology to derive cost models automatically. Said methodology combines static code analyses with interpolation via Newton’s divided-difference method. This approach is currently available as a tool, Merlin. The effectiveness of this tool is demonstrated on 949 executable C programs taken from the Jotai collection, and on genann, a neural network library.
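The interpolation step named above can be sketched generically: fit Newton's divided-difference polynomial to observed (input value, counter value) pairs and evaluate it to predict the counter at unseen inputs. This is a textbook implementation of the numerical method only, not the Merlin tool or its static-analysis pairing.

```python
# Newton's divided-difference interpolation over observed counter values.

def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., computed
    in place column by column over the divided-difference table."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial at x, Horner-style."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result
```

For instance, a counter inside a triangular loop executes n(n+1)/2 times; interpolating the observations (1, 1), (2, 3), (3, 6), (4, 10) recovers that quadratic exactly, so evaluating at n = 10 predicts 55 executions.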
Citations: 0
Mind the gap: The missing features of the tools to support user studies in software engineering
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-17 · DOI: 10.1016/j.cola.2025.101345
Lázaro Costa, Susana Barbosa, Jácome Cunha
User studies are paramount for advancing research in software engineering, particularly when evaluating tools and techniques involving programmers. However, researchers face several barriers when performing them despite the existence of supporting tools. We base our study on a set of tools and researcher-reported barriers identified in prior work on user studies in software engineering. In this work, we study how existing tools and their features cope with previously identified barriers. Moreover, we propose new features for the barriers that lack support. We validated our proposal with 102 researchers, achieving statistically significant positive support for all but one feature. We study the current gap between tools and barriers, using features as the bridge. We show there is a significant lack of support for several barriers, as some have no single tool to support them.
Citations: 0
Advanced LPeg techniques: A dual case study approach
IF 1.7 CAS Tier 3 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-06-24 DOI: 10.1016/j.cola.2025.101343
Zixuan Zhu
This paper presents advanced optimization techniques for Lua Parsing Expression Grammars (LPeg) through two complementary case studies: a high-performance JSON parser and a sophisticated Glob-to-LPeg pattern converter. We demonstrate how strategic grammar construction can dramatically improve parsing performance without modifying the underlying LPeg library. For the JSON parser, we implement substitution capture and table construction optimization to reduce memory allocation overhead and improve object processing. For the Glob converter, we introduce segment-boundary separation, implement Cox’s flattened search strategy, and develop optimized braced condition handling to prevent exponential backtracking. Comprehensive benchmarks demonstrate that our JSON parser achieves processing speeds up to 125 MB/s on complex documents, consistently outperforming dkjson and showing competitive results against rxi_json across most test cases. Our Glob-to-LPeg converter exhibits 14%–92% better performance than Bun.Glob and runs 3–14 times faster than Minimatch across diverse pattern matching scenarios. This research provides practical optimization techniques for LPeg-based parsers, contributing valuable strategies to the text processing ecosystem.
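The exponential backtracking that the Glob converter guards against is easiest to see outside LPeg. The sketch below is a Python illustration of the idea behind Cox's flattened search (not the paper's Lua/LPeg implementation): instead of recursing at every `*`, the matcher keeps a single restart point and advances it one character at a time, so matching never blows up exponentially. All names here are illustrative.

```python
def glob_match(pattern: str, name: str) -> bool:
    """Match a glob pattern ('*' and '?') against name without recursive
    backtracking: one saved restart point per '*' (Cox's flattened search)."""
    px = nx = 0            # cursors into pattern and name
    next_px = next_nx = 0  # restart point recorded at the most recent '*'
    while px < len(pattern) or nx < len(name):
        if px < len(pattern):
            c = pattern[px]
            if c == '?':
                if nx < len(name):
                    px, nx = px + 1, nx + 1
                    continue
            elif c == '*':
                # On a later mismatch, retry here with the star absorbing
                # one more character of the name.
                next_px, next_nx = px, nx + 1
                px += 1
                continue
            elif nx < len(name) and name[nx] == c:
                px, nx = px + 1, nx + 1
                continue
        # Mismatch: back up to the last '*', if it can still absorb a char.
        if 0 < next_nx <= len(name):
            px, nx = next_px, next_nx
            continue
        return False
    return True
```

Because the restart point only ever moves forward, the loop is bounded by `len(pattern) * len(name)` steps, whereas a naive recursive matcher can take exponential time on patterns like `a*a*a*…b`.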
Journal of Computer Languages, vol. 84, Article 101343 (2025).
Citations: 0
Supporting learners in the transition from block-based to text-based programming, a systematic review
IF 1.7 CAS Tier 3 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-06-18 DOI: 10.1016/j.cola.2025.101342
Glenn Strong, Nina Bresnihan, Brendan Tangney
This paper describes a systematic review of the approaches being taken to providing support to learners as they transition from block-based programming environments to text-based ones. It identifies and analyses the literature in the area, identifies the themes which are common across the different approaches being used, and determines gaps in the literature. With the widespread use of block-based programming environments in introductory programming education, the question of how to support learners in the transition to text-based environments has received much attention. The contribution of this paper is to analyse and characterise the approaches being taken to support learners by considering the question: what approaches have been developed to facilitate the transition from block-based programming to text-based programming for learners? To answer this, a systematic literature review was undertaken, combining manual and automatic searches to identify work in the field. A thematic analysis of the literature found eight themes covering technical and non-technical approaches to supporting transition, prompting a set of recommendations for gaps to be addressed in future development in the field.
Journal of Computer Languages, vol. 84, Article 101342 (2025).
Citations: 0
Investigating the energy consumption of C++ and Java solutions mined from a programming contest site
IF 1.7 CAS Tier 3 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-06-16 DOI: 10.1016/j.cola.2025.101341
Sérgio Queiroz de Medeiros, Marcelo Borges Nogueira, Gustavo Quezado
The current concern about global warming has led to an increasing interest in the energy efficiency of computer applications. Assuming power is constant, the general trend is that faster programs consume less energy, thus optimizing a program for speed would also improve its energy efficiency.
We investigate this tendency in a set of C++ and Java solutions mined from Code Submission Evaluation System (CSES), a popular programming competition site, where each solution must give the correct answer under a given time limit. In such context, we can consider that all correct solutions for a problem were written with a speed concern, but not with energy efficiency in mind.
We selected 15 problems from CSES and for each of them we mined at least 30 C++ and Java solutions, evaluating the time and energy efficiency of each solution on at least two different machines. In our scenario, with a great diversity of programming styles, execution speeds, and memory usage, we confirmed the general trend: faster programs consume less energy. Moreover, we were able to use ordinary least squares to fit, with good precision, a linear function relating a program's energy consumption to its execution time, as well as to automatically identify programs with abnormal energy consumption.
We also calculated the energy consumption profile of sets of random C++ solutions for these 15 CSES problems, and we tried to associate each set with its corresponding CSES problem by using the energy consumption profiles previously computed for each one of them. By using this approach, we could restrict, for each set of random C++ solutions, the classification task to a subset of 7 CSES problems, a reduction of more than 50% in the search space.
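The fitting step described above can be sketched as follows. The measurements here are synthetic stand-ins for the mined solutions (the ~15 W power level and the injected anomaly are invented for illustration, not the paper's data): ordinary least squares recovers the linear energy-time relation, and a solution with a large residual surfaces as an abnormal energy consumer.

```python
import numpy as np

# Synthetic measurements: energy ≈ power × time (≈15 W plus a 0.5 J
# baseline), with one program consuming abnormally much energy.
time_s = np.linspace(0.1, 2.0, 40)       # execution times of 40 solutions
energy_j = 15.0 * time_s + 0.5           # energy under roughly constant power
energy_j[30] += 6.0                      # the "abnormal" solution

# Ordinary least squares fit: energy = slope * time + intercept
A = np.column_stack([time_s, np.ones_like(time_s)])
(slope, intercept), *_ = np.linalg.lstsq(A, energy_j, rcond=None)

# Flag solutions whose residual is far from the fitted line.
resid = energy_j - A @ np.array([slope, intercept])
outliers = np.flatnonzero(np.abs(resid) > 3 * resid.std())
```

With enough points, a single anomaly barely perturbs the fit, so a simple 3-sigma residual cut isolates it; on real measurements a robust threshold (e.g. MAD-based) would be safer.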
Journal of Computer Languages, vol. 84, Article 101341 (2025).
Citations: 0