Who are the best contributors? Designing a multimodal science communication interface based on the ECM, TAM and the Taguchi methods
Pub Date : 2024-08-29 | DOI: 10.1016/j.csi.2024.103921
Mengjun Huang, Yue Luo, Jiwei He, Ling Zhen, Lianfan Wu, Yang Zhang
Science communication conducted through mobile devices and applications is an efficient and widespread phenomenon that requires communicators and design practitioners to develop suitable design elements and strategies for such platforms. The effective application of multimodal or multisensory design in interfaces provides users with rich experiences. However, guiding recommendations for user interface design in the citizen science community are lacking. This study investigated the factors affecting users’ perceptions of and behavioral intentions toward multimodal science communication interface designs and identified the optimal combinations of these factors. Through a focus group, we defined three design dimensions of a science communication interface: visual, auditory, and haptic. An online experiment involving 916 participants was then conducted, integrating the technology acceptance model, the expectation–confirmation model, and the Taguchi method to examine the most influential hierarchical combinations in each dimension. The results indicated that interface design combinations that focus primarily on visual elements, with auditory and haptic elements as secondary, can serve as effective tools for science communication. Moreover, layout, color tones, vibration intensity, and sound volume significantly affected users’ perceptions and behavioral intentions. As one of the few studies using the Taguchi method to explore the design of science communication interfaces, the present findings enrich multimodal theory from the perspectives of design and communication, highlighting its value in science communication. This paper also provides insights into how to select and combine multimodal design elements in science communication interfaces, demonstrating the potential of such designs to affect user perception, satisfaction, confirmation, and continued usage intention.
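To make the Taguchi analysis concrete, the sketch below computes larger-is-better signal-to-noise (S/N) ratios over a standard L9(3^4) orthogonal array covering the four factors the study found significant (layout, color tone, sound volume, vibration intensity); the level assignments and user ratings are hypothetical placeholders, not the paper's data.

import math

# Hypothetical L9(3^4) orthogonal array: each row assigns a level (0-2)
# to the four interface factors (layout, color tone, volume, vibration).
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
factors = ["layout", "color_tone", "volume", "vibration"]

# Hypothetical mean user ratings (e.g., behavioral-intention scores), two replicates per run.
ratings = [[4.1, 3.9], [4.4, 4.2], [3.2, 3.5], [4.8, 4.6], [3.9, 4.0],
           [4.3, 4.1], [3.6, 3.4], [4.0, 4.2], [4.5, 4.4]]

def sn_larger_is_better(ys):
    # S/N = -10 * log10(mean(1 / y^2)): higher is better.
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

sn = [sn_larger_is_better(ys) for ys in ratings]

# Average S/N per factor level; the spread (max - min) ranks each factor's influence.
for f, name in enumerate(factors):
    level_means = []
    for level in range(3):
        vals = [sn[r] for r, row in enumerate(L9) if row[f] == level]
        level_means.append(sum(vals) / len(vals))
    print(name, [round(m, 2) for m in level_means],
          "range:", round(max(level_means) - min(level_means), 2))

The factor whose level means span the widest S/N range is the most influential, which is how the method produces a hierarchy of design elements.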
{"title":"Who are the best contributors? Designing a multimodal science communication interface based on the ECM, TAM and the Taguchi methods","authors":"Mengjun Huang , Yue Luo , Jiwei He , Ling Zhen , Lianfan Wu , Yang Zhang","doi":"10.1016/j.csi.2024.103921","DOIUrl":"10.1016/j.csi.2024.103921","url":null,"abstract":"<div><p>Science communication conducted through mobile devices and mobile applications is an efficient and widespread phenomenon that requires communicators and design practitioners to further develop suitable design elements and strategies for such platforms. The effective application of multimodal or multisensory design in interfaces provides users with rich experiences. However, there is a lack of guiding recommendations for user interface design in the citizen science community. This study investigated factors affecting users’ perceptions and behavioral intentions toward multimodal scientific communication interface designs and identified the optimal combinations of such factors for such designs. Through a focus group, we defined three design dimensions of a science communication interface: visual, auditory, and haptic. An online experiment involving 916 participants was then conducted and integrated the technology acceptance model, expectation–confirmation model, and Taguchi method to examine the hierarchical combinations with the greatest influence in each dimension. The results indicated that interface design combinations primarily focusing on visual elements, with auditory and haptic as secondary elements, can serve as effective tools for science communication. Moreover, layout, color tones, vibration intensity, and sound volume significantly affected users’ perceptions and behavioral intentions. As one of the few studies using the Taguchi method to explore the design of science communication interfaces, the present findings enrich the multimodal theory from the perspectives of design and communication, highlighting its value in science communication. This paper simultaneously provides insights into how to select and combine multimodal design elements in science communication interfaces, demonstrating the potential of such designs to affect a user perception, satisfaction, confirmation, and continued usage intention.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103921"},"PeriodicalIF":4.1,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142151244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A cellular automata based secured reversible data hiding scheme for dual images using bit-reversal permutation technique
Pub Date : 2024-08-26 | DOI: 10.1016/j.csi.2024.103919
Kankana Datta, Biswapati Jana, Mamata Dalui Chakraborty
With the rapid development of advanced communication technology, protecting confidential data during transmission through public channels has become a challenging issue. In this context, a data hiding scheme must ensure reversibility, robustness against various malicious attacks, and unaltered visual quality even after embedding a high volume of secret data. To meet these requirements, Cellular Automata and the Bit-Reversal Permutation technique are applied to dual images: distributing the secret information across two stego images enhances the robustness of the suggested scheme, since the secret is hard to extract without both stego images simultaneously. The proposed scheme balances visual quality, security, and embedding capacity, as is essential for ensuring innocuous communication. The experimental results and a comparison with state-of-the-art methods establish that the proposed scheme ensures a high degree of robustness against different malicious attacks. This approach may benefit private- and public-sector practitioners and government agencies seeking to protect valuable multimedia secrets from adversarial cyber attacks.
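The bit-reversal permutation at the scheme's core is easy to illustrate in isolation (the cellular-automata component and the actual embedding rule are beyond this sketch): reversing the bit pattern of each position index scatters consecutive secret bits across a pixel block, and the permutation is its own inverse, which supports reversible extraction.

def bit_reverse(index: int, width: int) -> int:
    # Reverse the lowest `width` bits of `index`, e.g. 0b0011 -> 0b1100.
    out = 0
    for _ in range(width):
        out = (out << 1) | (index & 1)
        index >>= 1
    return out

# Permute the pixel order of a (hypothetical) 16-pixel block before embedding:
width = 4  # 2**4 = 16 positions
perm = [bit_reverse(i, width) for i in range(2 ** width)]
print(perm)  # [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]

# The permutation is an involution: applying it twice restores the order,
# which is what makes extraction (and reversibility) straightforward.
assert all(bit_reverse(bit_reverse(i, width), width) == i for i in range(16))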
{"title":"A cellular automata based secured reversible data hiding scheme for dual images using bit-reversal permutation technique","authors":"Kankana Datta , Biswapati Jana , Mamata Dalui Chakraborty","doi":"10.1016/j.csi.2024.103919","DOIUrl":"10.1016/j.csi.2024.103919","url":null,"abstract":"<div><p>With the rapid development of advanced communication technology, protection of confidential data during transmission through public channel has become a challenging issue. In this context, the design of a data hiding scheme needs to ensure reversibility, robustness against various malicious attacks, and unaltered visual quality even after embedding high amount of secret data. To meet the above requirements, Cellular Automata along with Bit-Reversal Permutation technique have been utilized on dual-image with the target to enhance the robustness of suggested scheme due to distribution of secret information within two stego images which is hard to extract without both stego simultaneously. The proposed scheme makes a trade-off among visual quality, security and embedding capacity as essential for ensuring innocuous communication. The experimental results and comparison with the state-of-art methods establishes that the proposed scheme ensures high degree of robustness against different venomous attacks. This approach may be beneficial to private and public sector practitioners and government agencies to protect valuable multimedia secret data from adversarial cyber attacks.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103919"},"PeriodicalIF":4.1,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142087777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authentication communication by using visualization cryptography for UAV networks
Pub Date : 2024-08-18 | DOI: 10.1016/j.csi.2024.103918
Aqeel Thamer Jawad, Rihab Maaloul, Lamia Chaari
Utilizing unmanned aerial vehicles (UAVs) to support V2X requirements leverages their versatility and line-of-sight communication. Our prior work explored a multi-agent learning approach for resource optimization, maximizing task offloading while maintaining QoS. This paper focuses on securing UAV communication, particularly authentication; traditional authentication methods are often unsuitable due to UAVs' resource limitations. We propose a novel authentication mechanism for a single ground control station (GCS) interacting with multiple UAVs across flight sessions. The system utilizes a key generation method based on chaotic maps to create a unique flight session key for each pre-defined flight plan. These keys, along with the flight plans, are registered in a secure database. During flight, the GCS verifies UAV identity by employing the flight session key and the corresponding flight plan for message authentication. This approach reduces computational and communication overhead compared to traditional certificate exchanges and asymmetric cryptography, which are energy-intensive for UAVs. While not a comprehensive security solution, this method provides an initial layer of protection for the UAV network.
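As a rough illustration of chaotic-map key generation (the exact map, parameters, and key schedule of the proposed mechanism are not specified in this abstract), the sketch below iterates a logistic map and hashes the trajectory together with a flight plan to derive a per-session key; the seed and flight-plan encoding are hypothetical, and a real deployment would need a securely agreed seed.

import hashlib

def logistic_session_key(seed: float, flight_plan: bytes, rounds: int = 256) -> bytes:
    # Iterate the logistic map x <- r*x*(1-x) (chaotic for r ~ 3.99) and
    # mix the trajectory with the flight plan to derive a per-flight key.
    r, x = 3.99, seed
    stream = bytearray()
    for _ in range(rounds):
        x = r * x * (1 - x)
        stream.append(int(x * 256) & 0xFF)
    return hashlib.sha256(bytes(stream) + flight_plan).digest()

key = logistic_session_key(0.61803, b"plan-042:waypoints=...")
print(key.hex())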
{"title":"Authentication communication by using visualization cryptography for UAV networks","authors":"Aqeel Thamer Jawad , Rihab Maaloul , Lamia Chaari","doi":"10.1016/j.csi.2024.103918","DOIUrl":"10.1016/j.csi.2024.103918","url":null,"abstract":"<div><p>Utilizing unmanned aerial vehicles (UAVs) to support V2X requirements leverages their versatility and line-of-sight communication. Our prior work explored a multi-agent learning approach for resource optimization, maximizing task offloading while maintaining QoS. This paper focuses on securing UAV communication, particularly authentication. Traditional methods are often unsuitable due to UAV limitations. We propose a novel authentication mechanism for a single ground control station (GCS) interacting with multiple UAVs across flight sessions. The system utilizes a key generation method based on chaotic maps to create unique flight session keys for each pre-defined flight plan. These keys, along with flight plans, are registered in a secure database. During flight, the GCS verifies UAV identity by employing the flight session key and corresponding flight plan for message authentication. This approach reduces computational and communication overhead compared to traditional certificate exchanges and asymmetric cryptography, which are energy-intensive for UAVs. While not a comprehensive security solution, this method provides an initial layer of protection for the UAV network.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103918"},"PeriodicalIF":4.1,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142002235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large language models for code completion: A systematic literature review
Pub Date : 2024-08-18 | DOI: 10.1016/j.csi.2024.103917
Rasha Ahmad Husein, Hala Aburajouh, Cagatay Catal
Code completion serves as a fundamental aspect of modern software development, improving developers' coding processes. Integrating code completion tools into an Integrated Development Environment (IDE) or code editor enhances the coding process and boosts productivity by reducing errors, speeding up code writing, and lowering cognitive load. This is achieved by predicting subsequent tokens, such as keywords, variable names, types, function names, operators, and more. Different techniques can achieve code completion, and recent research has focused on Deep Learning methods, particularly Large Language Models (LLMs) based on the Transformer architecture. While several research papers have addressed the use of LLMs for code completion, these studies are fragmented, and there is no systematic overview of the field. Therefore, we performed a Systematic Literature Review (SLR) to investigate how LLMs have been applied to code completion so far. We formulated several research questions addressing how LLMs have been integrated into code completion-related tasks and assessing their efficacy in this context. To achieve this, we retrieved 244 papers from scientific databases using auto-search and specific keywords, finally selecting 23 primary studies based on an SLR methodology for in-depth analysis. This SLR categorizes the granularity levels of code completion achieved by utilizing LLMs in IDEs, explores the existing issues in current code completion systems and how LLMs address these challenges, and examines the pre-training and fine-tuning methods employed. Additionally, this study identifies open research problems and outlines future research directions. Our analysis reveals that LLMs significantly enhance code completion performance across several programming languages and contexts, and their capability to predict relevant code snippets based on context and partial input boosts developer productivity substantially.
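For readers unfamiliar with the task itself, this is what token-level code completion with a causal LLM looks like using the Hugging Face transformers library; the model name below is just one small, publicly available code model, not one the reviewed studies necessarily used.

# A minimal sketch of next-token code completion with a causal LLM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"  # example small code model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "def binary_search(arr, target):\n    low, high = 0, len(arr) - 1\n"
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding of the next 48 tokens: the model predicts the keywords,
# identifiers, and operators that plausibly continue the function body.
output = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))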
{"title":"Large language models for code completion: A systematic literature review","authors":"Rasha Ahmad Husein , Hala Aburajouh , Cagatay Catal","doi":"10.1016/j.csi.2024.103917","DOIUrl":"10.1016/j.csi.2024.103917","url":null,"abstract":"<div><p>Code completion serves as a fundamental aspect of modern software development, improving developers' coding processes. Integrating code completion tools into an Integrated Development Environment (IDE) or code editor enhances the coding process and boosts productivity by reducing errors and speeding up code writing while reducing cognitive load. This is achieved by predicting subsequent tokens, such as keywords, variable names, types, function names, operators, and more. Different techniques can achieve code completion, and recent research has focused on Deep Learning methods, particularly Large Language Models (LLMs) utilizing Transformer algorithms. While several research papers have focused on the use of LLMs for code completion, these studies are fragmented, and there is no systematic overview of the use of LLMs for code completion. Therefore, we aimed to perform a Systematic Literature Review (SLR) study to investigate how LLMs have been applied for code completion so far. We have formulated several research questions to address how LLMs have been integrated for code completion-related tasks and to assess the efficacy of these LLMs in the context of code completion. To achieve this, we retrieved 244 papers from scientific databases using auto-search and specific keywords, finally selecting 23 primary studies based on an SLR methodology for in-depth analysis. This SLR study categorizes the granularity levels of code completion achieved by utilizing LLMs in IDEs, explores the existing issues in current code completion systems, how LLMs address these challenges, and the pre-training and fine-tuning methods employed. Additionally, this study identifies open research problems and outlines future research directions. Our analysis reveals that LLMs significantly enhance code completion performance across several programming languages and contexts, and their capability to predict relevant code snippets based on context and partial input boosts developer productivity substantially.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103917"},"PeriodicalIF":4.1,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0920548924000862/pdfft?md5=8314175bcf57b70427dd5c869ea42978&pid=1-s2.0-S0920548924000862-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142076551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Best of two worlds: Efficient, usable and auditable biometric ABC on the blockchain
Pub Date : 2024-08-13 | DOI: 10.1016/j.csi.2024.103916
Neyire Deniz Sarier
In García-Rodríguez et al. 2024, two generic constructions for biometric-based non-transferable Attribute Based Credentials (biometric ABC) are presented, which offer different trade-offs between efficiency and trust assumptions. In this paper, we focus on the second scheme, denoted as BioABC-ZK, which tries to remove the strong (and unrealistic) trust assumption on the Reader R, and we show that BioABC-ZK has a security flaw for a colluding R and Verifier V. Besides, BioABC-ZK lacks GDPR compliance, which requires secure processing of biometrics, for instance in the form of Fuzzy Extractors, as opposed to (i) storing the reference biometric template a_Bio in the user's mobile phone and (ii) processing biometrics using an external untrusted R, whose foreign manufacturers are unlikely to adjust their products to the GDPR.
The contributions of this paper are threefold. First, we review efficient biometric ABC schemes to identify their privacy-by-design criteria. In view of these principles, we propose a new architecture for the biometric ABC of Sarier 2021 by adapting the recently introduced core/helper setting. Briefly, a user in our modified setting is composed of a constrained core device (a SIM card) inside a helper device (a smartphone with dual SIM and face recognition), which, as opposed to García-Rodríguez et al. 2024, does not need to store a_Bio. This way, the new design provides Identity Privacy without the need for an external R and/or dedicated hardware per user, such as a biometric smart card reader or a tamper-proof smart card as in current hardware-bound credential systems. Besides, the new system maintains minimal hardware requirements on the SIM card, which is only responsible for storing the ABC and helper data; this results in easy adoption and usability without losing efficiency if the deep face fuzzy vault and our modified ABC scheme are employed together. As a result, a total overhead of 500 ms over a showing of a comparable non-biometric ABC is obtained, instead of the 2.1 s of García-Rodríguez et al. 2024, in addition to the removal of computationally expensive pairings. Finally, unlike García-Rodríguez et al. 2024, auditing is achieved via Blockchain instead of the user proving the actual biometric matching in zero-knowledge, in order to reveal malicious behavior of R and V.
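Because Fuzzy Extractors carry much of the GDPR argument, a toy code-offset construction may help intuition: the helper data is the XOR of an error-corrected random key with the biometric bits, so a sufficiently close re-reading recovers the same key while the template itself is never stored. The 5x repetition code below stands in for a real error-correcting code (the paper relies on a deep face fuzzy vault instead), and all parameters are illustrative.

import secrets, hashlib

REP = 5  # repetition factor of the toy error-correcting code

def _encode(bits):  # repeat each bit REP times
    return [b for bit in bits for b in [bit] * REP]

def _decode(bits):  # majority vote per group of REP
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def gen(w):
    # w: biometric template as a bit list. Returns (key, public helper data).
    k = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    helper = [c ^ b for c, b in zip(_encode(k), w)]
    return hashlib.sha256(bytes(k)).digest(), helper

def rep(w_noisy, helper):
    k = _decode([h ^ b for h, b in zip(helper, w_noisy)])
    return hashlib.sha256(bytes(k)).digest()

w = [secrets.randbelow(2) for _ in range(40)]   # enrollment reading
w2 = w[:]; w2[3] ^= 1; w2[17] ^= 1              # noisy re-reading (2 bit flips)
key, helper = gen(w)
assert rep(w2, helper) == key                   # same key despite the noise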
A theory on human factors in DevOps adoption
Pub Date : 2024-08-12 | DOI: 10.1016/j.csi.2024.103907
Juanjo Pérez-Sánchez, Saima Rafi, Juan Manuel Carrillo de Gea, Joaquín Nicolás Ros, José Luis Fernández Alemán
Context:
DevOps is a software engineering paradigm that enables faster deliveries and higher quality products. However, DevOps adoption is a complex process that is still insufficiently supported by research. In addition, human factors are the main difficulty in successful DevOps adoption, yet very few studies address this topic.
Objective:
This paper addresses two research gaps identified in the literature, namely: (1) the characterization of DevOps from the perspective of human factors, i.e. the description of DevOps' human characteristics to better define it, and (2) the identification and analysis of the effect of human factors on the adoption of DevOps.
Method:
We employed a hybrid methodology that included a Systematic Mapping Study followed by the application of a clustering technique. A questionnaire for DevOps practitioners (n = 15) was employed as an evaluation method.
Results:
A total of 59 human factors related to DevOps were identified, described, and synthesized. The results were used to build a theory on DevOps human factors.
Conclusion:
The main contribution of this paper is a theory proposal regarding human factors in DevOps adoption. The evaluation results show that almost every human factor identified in the mapping study was found relevant in DevOps adoption. The results of the study represent an extension of DevOps characterization and a first approximation to human factors in DevOps adoption.
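As an illustration of the clustering step in such a hybrid methodology (the paper's concrete technique and factor list are not given in this abstract), the sketch below groups human factors by practitioner ratings with k-means; the factor names, the ratings, and the choice of k are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

factors = ["communication", "trust", "motivation", "resistance to change",
           "collaboration", "leadership support"]
# Rows: factors; columns: relevance ratings from 15 hypothetical practitioners (1-5).
rng = np.random.default_rng(7)
ratings = rng.integers(1, 6, size=(len(factors), 15)).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratings)
for label in range(3):
    members = [f for f, l in zip(factors, kmeans.labels_) if l == label]
    print(f"cluster {label}: {members}")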
{"title":"A theory on human factors in DevOps adoption","authors":"Juanjo Pérez-Sánchez , Saima Rafi , Juan Manuel Carrillo de Gea , Joaquín Nicolás Ros , José Luis Fernández Alemán","doi":"10.1016/j.csi.2024.103907","DOIUrl":"10.1016/j.csi.2024.103907","url":null,"abstract":"<div><h3>Context:</h3><p>DevOps is a software engineering paradigm that enables faster deliveries and higher quality products. However, DevOps adoption is a complex process that is still insufficiently supported by research. In addition, human factors are the main difficulty for a successful DevOps adoption, although very few studies address this topic.</p></div><div><h3>Objective:</h3><p>This paper addresses two research gaps identified in literature, namely: (1) the characterization of DevOps from the perspective of human factors, i.e. the description of DevOps’ human characteristics to better define it, and (2) the identification and analysis of human factors’ effect in the adoption of DevOps.</p></div><div><h3>Method:</h3><p>We employed a hybrid methodology that included a Systematic Mapping Study followed by the application of a clustering technique. A questionnaire for DevOps practitioners (<span><math><mrow><mi>n</mi><mo>=</mo><mn>15</mn></mrow></math></span>) was employed as an evaluation method.</p></div><div><h3>Results:</h3><p>A total of 59 human factors related to DevOps were identified, described, and synthesized. The results were used to build a theory on DevOps human factors.</p></div><div><h3>Conclusion:</h3><p>The main contribution of this paper is a theory proposal regarding human factors in DevOps adoption. The evaluation results show that almost every human factor identified in the mapping study was found relevant in DevOps adoption. The results of the study represent an extension of DevOps characterization and a first approximation to human factors in DevOps adoption.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103907"},"PeriodicalIF":4.1,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S092054892400076X/pdfft?md5=8a197a8035fc3bc559533baac7028e12&pid=1-s2.0-S092054892400076X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141993505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-efficient neural network training through runtime layer freezing, model quantization, and early stopping
Pub Date : 2024-08-09 | DOI: 10.1016/j.csi.2024.103906
Álvaro Domingo Reguero, Silverio Martínez-Fernández, Roberto Verdecchia
Background:
In the last years, neural networks have been massively adopted by industry and research in a wide variety of contexts. Neural network milestones are generally reached by scaling up computation, completely disregarding the carbon footprint required for the associated computations. This trend has become unsustainable given the ever-growing use of deep learning, and could cause irreversible damage to the environment of our planet if it is not addressed soon.
Objective:
In this study, we aim to analyze not only the effects of different energy saving methods for neural networks but also the effects of the moment of intervention, and what makes certain moments optimal.
Method:
We developed a novel dataset by training convolutional neural networks on 12 different computer vision datasets and applying runtime decisions regarding layer freezing, model quantization, and early stopping at different epochs in each run. We then fit an auto-regressive prediction model on the collected data that is capable of predicting the accuracy and energy consumption achieved in future epochs for the different methods. The predictions of accuracy and energy are used to estimate the optimal training path.
Results:
Following the predictions of the model can save 56.5% of the energy consumed while also increasing validation accuracy by 2.38% by avoiding overfitting. The prediction model can predict the validation accuracy with an 8.4% error, the energy consumed with a 14.3% error, and the trade-off between both with an 8.9% error.
Conclusions:
This prediction model could potentially be used by the training algorithm to decide which methods to apply to the model, and at what moment, in order to maximize the accuracy-energy trade-off.
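A minimal sketch of the auto-regressive idea, assuming a plain least-squares AR(2) fit per metric (the paper's actual model and data are richer): forecast validation accuracy and energy a few epochs ahead, then stop when the predicted accuracy gain per additional watt-hour drops below a threshold. All histories below are made up.

import numpy as np

def ar_forecast(series, lags=2, horizon=5):
    # Least-squares AR(lags) fit, then roll the model forward `horizon` steps.
    y = np.asarray(series, dtype=float)
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)]
                        + [np.ones(len(y) - lags)])
    coef, *_ = np.linalg.lstsq(X, y[lags:], rcond=None)
    hist = list(y)
    for _ in range(horizon):
        hist.append(coef[:lags] @ hist[-lags:] + coef[-1])
    return hist[len(y):]

# Hypothetical per-epoch histories after 8 epochs of training.
val_acc = [0.42, 0.55, 0.63, 0.68, 0.71, 0.73, 0.745, 0.75]
energy_wh = [12.0, 11.8, 11.9, 12.1, 12.0, 12.2, 12.1, 12.3]  # per epoch

acc_pred = ar_forecast(val_acc)
eng_pred = np.cumsum(ar_forecast(energy_wh)) + sum(energy_wh)
# Stop when the predicted accuracy gain per additional Wh falls below a threshold.
for step, (a, e) in enumerate(zip(acc_pred, eng_pred), start=1):
    gain = a - val_acc[-1]
    print(f"+{step} epochs: predicted acc {a:.3f}, total energy {e:.1f} Wh, "
          f"gain/Wh {gain / (e - sum(energy_wh)):.5f}")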
{"title":"Energy-efficient neural network training through runtime layer freezing, model quantization, and early stopping","authors":"Álvaro Domingo Reguero , Silverio Martínez-Fernández , Roberto Verdecchia","doi":"10.1016/j.csi.2024.103906","DOIUrl":"10.1016/j.csi.2024.103906","url":null,"abstract":"<div><h3>Background:</h3><p>In the last years, neural networks have been massively adopted by industry and research in a wide variety of contexts. Neural network milestones are generally reached by scaling up computation, completely disregarding the carbon footprint required for the associated computations. This trend has become unsustainable given the ever-growing use of deep learning, and could cause irreversible damage to the environment of our planet if it is not addressed soon.</p></div><div><h3>Objective:</h3><p>In this study, we aim to analyze not only the effects of different energy saving methods for neural networks but also the effects of the moment of intervention, and what makes certain moments optimal.</p></div><div><h3>Method:</h3><p>We developed a novel dataset by training convolutional neural networks in 12 different computer vision datasets and applying runtime decisions regarding layer freezing, model quantization and early stopping at different epochs in each run. We then fit an auto-regressive prediction model on the data collected capable to predict the accuracy and energy consumption achieved on future epochs for different methods. The predictions on accuracy and energy are used to estimate the optimal training path.</p></div><div><h3>Results:</h3><p>Following the predictions of the model can save 56.5% of energy consumed while also increasing validation accuracy by 2.38% by avoiding overfitting.The prediction model developed can predict the validation accuracy with a 8.4% of error, the energy consumed with a 14.3% of error and the trade-off between both with a 8.9% of error.</p></div><div><h3>Conclusions:</h3><p>This prediction model could potentially be used by the training algorithm to decide which methods apply to the model and at what moment in order to maximize the accuracy-energy trade-off.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103906"},"PeriodicalIF":4.1,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0920548924000758/pdfft?md5=9fe0b023bddfc875c825b0c53e63af06&pid=1-s2.0-S0920548924000758-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142087778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncharted dimensions, gaps, and future trends of serious games in software engineering
Pub Date : 2024-08-08 | DOI: 10.1016/j.csi.2024.103915
Manal Kharbouch, Aurora Vizcaino, José Alberto García-Berná, Félix García, Ambrosio Toval, Oscar Pedreira, Ali Idri, José Luis Fernández-Alemán
Objective
Serious Games (SGs) are a rising trend in Software Engineering (SE) education. Since this topic is still immature and further research has been encouraged, it is important to investigate how SGs are integrated into SE education. In this line, this study explores the landscape of SGs in SE education, focusing on their categorization according to the SWEBOK areas and Bloom's levels they address, extracting their key elements, mechanics, and dynamics, exploring in depth their most portrayed player profiles, identifying what makes SGs successful in this field, and finally addressing the resulting challenges in the realm of SE education.
Methodology
A systematic search was conducted across prominent databases: Science Direct, IEEE Xplore, ACM, Scopus, and Wiley. Initially, 125 papers met our inclusion criteria, of which 46 remained after rigorous full-text review. Utilizing snowball sampling, we added 28 additional studies, resulting in a total of 74 selected papers for comprehensive analysis.
Results
Among the selected papers, which spanned from the early 2000s to May 2021, a notable increase in publications on SGs in SE was observed, particularly since 2010. The majority of these studies focused on validation research (60 %), followed by solution proposals (17.56 %) and evaluation research (13.51 %). Publication channels predominantly included conferences (79.73 %), underscoring the emerging nature of SGs in SE research, with a smaller proportion appearing in journal articles (20.27 %). Specific focus areas within SE, such as Software Engineering Management (33.78 %) and SE Professional Practice (13.51 %), received significant attention, while others, like SE Models and Methods, showed minimal representation. Furthermore, SGs were found to effectively target higher-order cognitive skills based on Bloom's Taxonomy, with notable implementations of game dynamics such as Teams and Realism to enhance learning experiences. Despite these advancements, there remains a predominant focus on player profiles like Achievers (48.64 %) and Players (47.30 %), suggesting potential gaps in addressing a broader spectrum of learner types within SGs designed for SE education.
Conclusion
This study underscores the evolving role of SGs in SE education, emphasizing the need for diverse approaches to enhance engagement and educational outcomes. Future research should focus on optimizing SG potential across educational and industrial settings by expanding publication visibility, integrating artificial intelligence (AI), and conducting comprehensive evaluations of SGs tailored to SE contexts.
{"title":"Uncharted dimensions, gaps, and future trends of serious games in software engineering","authors":"Manal Kharbouch , Aurora Vizcaino , José Alberto García-Berná , Félix García , Ambrosio Toval , Oscar Pedreira , Ali Idri , José Luis Fernández-Alemán","doi":"10.1016/j.csi.2024.103915","DOIUrl":"10.1016/j.csi.2024.103915","url":null,"abstract":"<div><h3>Objective</h3><p>Serious Games (SG) are a rising trend in Software Engineering (SE) education, for this reason, and since this topic is still immature and further research was encouraged, it is important to investigate how SGs are integrated into SE education. In this line, this study explores the landscape of SGs in SE) education, focusing on their categorization according to their addressed SWEBOK areas and Bloom's levels, extracted their key elements, mechanics and dynamics, exploring in depth their most portrayed player profiles, finding what makes them successful SGs in this field, and last addressing their resulting challenges in the realm of SE education.</p></div><div><h3>Methodology</h3><p>A systematic search was conducted across prominent databases: Science Direct, IEEE Xplore, ACM, Scopus, and Wiley. Initially, 125 papers met our initial inclusion criteria, from which 46 remained after rigorous full-text review. Utilizing snowball sampling, we added 28 additional studies, resulting in a total of 74 selected papers for comprehensive analysis.</p></div><div><h3>Results</h3><p>Among the selected papers, which spanned from the early 2000s to May 2021, a notable increase in publications on SGs in SE was observed, particularly since 2010. The majority of these studies focused on validation research (60 %), followed by solution proposals (17.56 %) and evaluation research (13.51 %). Publication channels predominantly included conferences (79.73 %), underscoring the emerging nature of SGs in SE research, with a smaller proportion appearing in journal articles (20.27 %). Specific focus areas within SE, such as Software Engineering Management (33.78 %) and SE Professional Practice (13.51 %), received significant attention, while others, like SE Models and Methods, showed minimal representation. Furthermore, SGs were found to effectively target higher-order cognitive skills based on Bloom's Taxonomy, with notable implementations of game dynamics such as Teams and Realism to enhance learning experiences. Despite these advancements, there remains a predominant focus on player profiles like Achievers (48.64 %) and Players (47.30 %), suggesting potential gaps in addressing a broader spectrum of learner types within SGs designed for SE education.</p></div><div><h3>Conclusion</h3><p>This study underscores the evolving role of SGs in SE education, emphasizing the need for diverse approaches to enhance engagement and educational outcomes. 
Future research should focus on optimizing SG potential across educational and industrial settings by expanding publication visibility, integrating artificial intelligence (AI), and conducting comprehensive evaluations of SGs tailored to SE contexts.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103915"},"PeriodicalIF":4.1,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0920548924000849/pdfft?md5=aabc12b8df17cadc4147fd2af50e915d&pid=1-s2.0-S0920548924000849-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142122606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prototype, method, and experiment for evaluating usability of smart home user interfaces
Pub Date : 2024-08-03 | DOI: 10.1016/j.csi.2024.103903
Renat Faizrakhmanov, Mohammad Reza Bahrami, Alexey Platunov
Despite the widespread adoption of smart devices and home automation systems, users still face usability challenges, and evaluating the usability of smart home systems is not straightforward. Our paper focuses on building a smart home prototype and on selecting and evaluating software user interfaces (UIs). We chose a smart home heating system as the basis, but the principles are applicable to other applications. To discuss smart home UIs, it is necessary to understand the components, implementation scenarios, configuration, and operation of smart home systems, so we devote considerable attention to these topics in the paper. Our smart home system utilizes a Raspberry Pi computer, temperature sensors, servo drives, an Android-based smartphone, and other necessary hardware. We installed and configured open-source software and third-party services on the experimental setup and provide a brief description of each selected software component. We used the Home Assistant operating system, its mobile application, the voice assistant Alice, and a Telegram chatbot. We describe a methodology for evaluating user interfaces in smart homes and report an experiment assessing their usability. The experiment results indicate which interface is more user-friendly and identify the drawbacks of each. In conclusion, we discuss UI features that can improve usability and available options for building a smart home based on capabilities.
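The abstract does not reproduce the evaluation instrument; a common choice for this kind of multi-interface comparison is the System Usability Scale (SUS), and as a sketch under that assumption, the standard SUS scoring works as follows.

def sus_score(responses):
    # responses: ten 1-5 Likert answers, in standard SUS item order.
    # Odd-numbered items contribute (answer - 1), even-numbered (5 - answer);
    # the sum is scaled by 2.5 onto a 0-100 scale.
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical answers from one participant for two of the interfaces:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # mobile app -> 85.0
print(sus_score([3, 3, 4, 2, 3, 3, 4, 3, 3, 2]))  # chatbot    -> 60.0

Averaging such scores per interface across participants gives a simple, comparable usability ranking of the mobile app, voice assistant, and chatbot.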
Research on the construction and mapping model of knowledge organization system driven by standards
Pub Date : 2024-08-03 | DOI: 10.1016/j.csi.2024.103905
Jingshu Yuan, Kexin Zhai, Hongxin Li, Man Yuan
With the rapid development of artificial intelligence and enterprise digital transformation, the standardized organization, storage, and management of semantic knowledge in computers has become a focus of current research. As the core theory of knowledge system construction, knowledge organization (KO) provides theoretical support for the study of semantic knowledge organization and representation, and the knowledge organization system (KOS) is its most important tool for semantic organization. Many scholars have carried out theory-based research on KOS from different perspectives, providing directions for its sustainable development. However, most of these studies focus on individual aspects of KOS and remain in a "scattered" state, lacking a systematic analysis of the basic principles of KOS construction and semantic organization grounded in theory and international standards. Therefore, this paper first constructs KOS theoretical models for the conceptual world and the computer world through a comprehensive study of multi-disciplinary basic theories such as semantics, logic, and system theory, and of international standards such as ISO 1087:2019, ISO 25964:2013, and ISO 11179:2023, and traces the iterative construction, organization, and mapping process from "concept" in the conceptual world to "metadata" knowledge and semantics in the computer world; semantic organization based on metadata is thus realized in the computer. Second, on this basis, and in order to realize an ontology representation of domain knowledge, an ontology construction method based on MDR metadata is proposed. Finally, taking the semantic organization and ontology construction of the Epicentre model in the petroleum field as an example, the feasibility of the ideas and methods proposed in this paper is verified. The model and method proposed here are independent of the specific type of KOS, making them innovative and universal. The methodology is also applicable to conceptual system modeling, metadata standard construction, and data model modeling in other fields.
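As a rough sketch of ontology construction from MDR-style (ISO 11179) metadata, mapping object classes to OWL classes and data element properties to datatype properties, the snippet below uses rdflib; the Epicentre-flavored names and records are illustrative placeholders, not the paper's actual model.

from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/petroleum#")
g = Graph()
g.bind("ex", EX)

# Hypothetical MDR-style records: (object class, property, datatype) per data element.
data_elements = [
    ("Well", "totalDepth", "xsd:double"),
    ("Well", "spudDate", "xsd:date"),
    ("Reservoir", "porosity", "xsd:double"),
]

for obj_class, prop, dtype in data_elements:
    g.add((EX[obj_class], RDF.type, OWL.Class))            # object class -> OWL class
    g.add((EX[prop], RDF.type, OWL.DatatypeProperty))      # data element -> property
    g.add((EX[prop], RDFS.domain, EX[obj_class]))
    g.add((EX[prop], RDFS.comment, Literal(f"range: {dtype}")))

print(g.serialize(format="turtle"))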
{"title":"Research on the construction and mapping model of knowledge organization system driven by standards","authors":"Jingshu Yuan, Kexin Zhai, Hongxin Li, Man Yuan","doi":"10.1016/j.csi.2024.103905","DOIUrl":"10.1016/j.csi.2024.103905","url":null,"abstract":"<div><p>With the rapid development of artificial intelligence and enterprise digital transformation, the standardization organization, storage and management of semantic knowledge in computers have become the current research focus. As the core theory of knowledge system construction, knowledge organization (KO) provides theoretical support for the study of semantic knowledge organization and representation, among which knowledge organization system (KOS) is the important tool of semantic organization. At present, many scholars have carried out research from different perspectives of KOS based on theory, which provides the direction for the sustainable development of KOS. However, most of these studies focus on some aspects of KOS, which are in a \"scattered\" state, lacking systematic analysis of the basic principles of KOS construction and semantic organization based on theories and international standards. Therefore, this paper firstly constructs KOS theoretical models in the conceptual world and computer world respectively through a comprehensive study of multi-disciplinary basic theories such as semantics, logic, system theory, and international standards such as ISO 1087:2019, ISO 25964:2013, and ISO 11179:2023, and traces the iterative construction, organization and mapping process from \"concept\" in the conceptual world to \"metadata\" knowledge and semantics in the computer world. The semantic organization based on metadata is realized in computer. Secondly, on this basis, in order to realize ontology representation of domain knowledge, the ontology construction method based on MDR metadata is proposed. Finally, taking the semantic organization and ontology construction of Epicentre model in petroleum field as an example, the feasibility of the ideas and methods proposed in this paper is verified. The model and method proposed in this paper is independent of the specific type of KOS, so it is innovative and universal. The methodology is also applicable to other fields of conceptual system modeling, metadata standard construction, and data model modeling.</p></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103905"},"PeriodicalIF":4.1,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141941207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}