Grammar-obeying program synthesis: A novel approach using large language models and many-objective genetic programming
Pub Date : 2024-11-14 | DOI: 10.1016/j.csi.2024.103938 | Computer Standards & Interfaces, vol. 92, Article 103938
Ning Tao , Anthony Ventresque , Vivek Nallur , Takfarinas Saber
Program synthesis is an important challenge that has attracted significant research interest, especially in recent years with advancements in Large Language Models (LLMs). Although LLMs have demonstrated success in program synthesis, there remains a lack of trust in the generated code due to documented risks (e.g., code with known and risky vulnerabilities). It is therefore important to restrict the search space and avoid bad programs. In this work, pre-defined restricted Backus–Naur Form (BNF) grammars considered ‘safe’ are utilised, and the focus is on identifying the most effective technique for grammar-obeying program synthesis, where the generated code must be correct and conform to the predefined grammar. It is shown that while LLMs perform well in generating correct programs, they often fail to produce code that adheres to the grammar. To address this, a novel Similarity-Based Many-Objective Grammar Guided Genetic Programming (SBMaOG3P) approach is proposed, leveraging the programs generated by LLMs in two ways: (i) as seeds following a grammar mapping process and (ii) as targets for similarity-measure objectives. Experiments on a well-known and widely used program synthesis dataset indicate that the proposed approach improves the rate of grammar-obeying program synthesis compared to various LLMs and to state-of-the-art Grammar-Guided Genetic Programming (G3P). Additionally, the proposed approach significantly improves the best fitness value achieved per run on 21 out of 28 problems compared to G3P.
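As a rough illustration of the similarity-objective idea, the Python sketch below scores a candidate program on a functional objective (fraction of failed test cases) plus one distance objective per LLM-generated target. The `solve()` convention, the exec-based test harness, and the use of difflib's ratio as the similarity measure are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: many-objective fitness combining test-case failures with
# similarity to LLM-generated target programs. Names and the similarity
# measure are assumptions, not the paper's actual code.
from difflib import SequenceMatcher

def functional_fitness(candidate_src: str, test_cases) -> float:
    """Fraction of test cases the candidate fails (0.0 is best)."""
    failures = 0
    for inputs, expected in test_cases:
        env = {}
        try:
            exec(candidate_src, env)            # candidate defines solve()
            if env["solve"](*inputs) != expected:
                failures += 1
        except Exception:
            failures += 1
    return failures / len(test_cases)

def similarity_objective(candidate_src: str, llm_target_src: str) -> float:
    """Distance to one LLM-generated target program (0.0 = identical)."""
    return 1.0 - SequenceMatcher(None, candidate_src, llm_target_src).ratio()

def many_objective_fitness(candidate_src, test_cases, llm_targets):
    """One functional objective plus one similarity objective per target."""
    objs = [functional_fitness(candidate_src, test_cases)]
    objs += [similarity_objective(candidate_src, t) for t in llm_targets]
    return objs
```

A many-objective selection scheme (e.g., NSGA-III-style ranking) would then compare candidates on these objective vectors rather than a single scalar fitness.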
{"title":"Grammar-obeying program synthesis: A novel approach using large language models and many-objective genetic programming","authors":"Ning Tao , Anthony Ventresque , Vivek Nallur , Takfarinas Saber","doi":"10.1016/j.csi.2024.103938","DOIUrl":"10.1016/j.csi.2024.103938","url":null,"abstract":"<div><div>Program synthesis is an important challenge that has attracted significant research interest, especially in recent years with advancements in Large Language Models (LLMs). Although LLMs have demonstrated success in program synthesis, there remains a lack of trust in the generated code due to documented risks (e.g., code with known and risky vulnerabilities). Therefore, it is important to restrict the search space and avoid bad programs. In this work, pre-defined restricted Backus–Naur Form (BNF) grammars are utilised, which are considered ‘safe’, and the focus is on identifying the most effective technique for <em>grammar-obeying program synthesis</em>, where the generated code must be correct and conform to the predefined grammar. It is shown that while LLMs perform well in generating correct programs, they often fail to produce code that adheres to the grammar. To address this, a novel Similarity-Based Many-Objective Grammar Guided Genetic Programming (SBMaOG3P) approach is proposed, leveraging the programs generated by LLMs in two ways: (i) as seeds following a grammar mapping process and (ii) as targets for similarity measure objectives. Experiments on a well-known and widely used program synthesis dataset indicate that the proposed approach successfully improves the rate of grammar-obeying program synthesis compared to various LLMs and the state-of-the-art Grammar-Guided Genetic Programming. Additionally, the proposed approach significantly improved the solution in terms of the best fitness value of each run for 21 out of 28 problems compared to G3P.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103938"},"PeriodicalIF":4.1,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142663154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LAMB: An open-source software framework to create artificial intelligence assistants deployed and integrated into learning management systems
Pub Date : 2024-10-30 | DOI: 10.1016/j.csi.2024.103940 | Computer Standards & Interfaces, vol. 92, Article 103940
Marc Alier , Juanan Pereira , Francisco José García-Peñalvo , Maria Jose Casañ , Jose Cabré
This paper presents LAMB (Learning Assistant Manager and Builder), an innovative open-source software framework designed to create AI-powered Learning Assistants tailored for integration into learning management systems (LMSs). LAMB addresses critical gaps in existing educational AI solutions by providing a framework designed specifically for the unique requirements of the education sector. It introduces novel features, including a modular architecture for seamless integration of AI assistants into existing LMS platforms and an intuitive interface that lets educators create custom AI assistants without coding skills. Unlike existing AI tools in education, LAMB provides a comprehensive framework that addresses privacy concerns, ensures alignment with institutional policies, and promotes the use of authoritative sources. LAMB leverages large language models and associated generative artificial intelligence technologies to create generative intelligent learning assistants that enhance educational experiences by providing personalized learning support based on clear directions and authoritative sources of information. Key features of LAMB include its modular architecture, which supports prompt engineering, retrieval-augmented generation, and the creation of extensive knowledge bases from diverse educational content, including video sources. The development and deployment of LAMB were iteratively refined using a minimum viable product approach, exemplified by the “Macroeconomics Study Coach” learning assistant, which effectively integrated lecture transcriptions and other course materials to support student inquiries. Initial validations in various educational settings demonstrate the potential of learning assistants created with LAMB to enhance teaching methodologies, increase student engagement, and provide personalized learning experiences. The system's usability, scalability, security, and interoperability with existing LMS platforms make it a robust solution for integrating artificial intelligence into educational environments. LAMB's open-source nature encourages collaboration and innovation among educators, researchers, and developers, fostering a community dedicated to advancing the role of artificial intelligence in education. This paper outlines the system architecture, implementation details, use cases, and the significant benefits and challenges encountered, offering valuable insights for future developments in artificial intelligence assistants for any sector.
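To make the retrieval-augmented generation step concrete, here is a minimal sketch of the pattern such assistants rely on: embed course-material chunks, retrieve those closest to a student question, and assemble a prompt grounded in authoritative sources. The cosine retrieval, vector representation, and prompt template are placeholder assumptions, not LAMB's actual API.

```python
# Minimal RAG sketch (illustrative, not LAMB's implementation): retrieve
# the top-k course-material chunks for a query and build a grounded prompt.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, knowledge_base, k=3):
    """knowledge_base: list of (chunk_text, chunk_vec) from course content."""
    ranked = sorted(knowledge_base, key=lambda kb: cosine(query_vec, kb[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, chunks, instructions):
    """Constrain the assistant to the retrieved authoritative material."""
    context = "\n---\n".join(chunks)
    return (f"{instructions}\n\nAnswer ONLY from the course material below.\n\n"
            f"Course material:\n{context}\n\nStudent question: {question}")
```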
{"title":"LAMB: An open-source software framework to create artificial intelligence assistants deployed and integrated into learning management systems","authors":"Marc Alier , Juanan Pereira , Francisco José García-Peñalvo , Maria Jose Casañ , Jose Cabré","doi":"10.1016/j.csi.2024.103940","DOIUrl":"10.1016/j.csi.2024.103940","url":null,"abstract":"<div><div>This paper presents LAMB (Learning Assistant Manager and Builder), an innovative open-source software framework designed to create AI-powered Learning Assistants tailored for integration into learning management systems. LAMB addresses critical gaps in existing educational AI solutions by providing a framework specifically designed for the unique requirements of the education sector. It introduces novel features, including a modular architecture for seamless integration of AI assistants into existing LMS platforms and an intuitive interface for educators to create custom AI assistants without coding skills. Unlike existing AI tools in education, LAMB provides a comprehensive framework that addresses privacy concerns, ensures alignment with institutional policies, and promotes using authoritative sources. LAMB leverages the capabilities of large language models and associated generative artificial intelligence technologies to create generative intelligent learning assistants that enhance educational experiences by providing personalized learning support based on clear directions and authoritative fonts of information. Key features of LAMB include its modular architecture, which supports prompt engineering, retrieval-augmented generation, and the creation of extensive knowledge bases from diverse educational content, including video sources. The development and deployment of LAMB were iteratively refined using a minimum viable product approach, exemplified by the learning assistant: “Macroeconomics Study Coach,” which effectively integrated lecture transcriptions and other course materials to support student inquiries. Initial validations in various educational settings demonstrate the potential that learning assistants created with LAMB have to enhance teaching methodologies, increase student engagement, and provide personalized learning experiences. The system's usability, scalability, security, and interoperability with existing LMS platforms make it a robust solution for integrating artificial intelligence into educational environments. LAMB's open-source nature encourages collaboration and innovation among educators, researchers, and developers, fostering a community dedicated to advancing the role of artificial intelligence in education. This paper outlines the system architecture, implementation details, use cases, and the significant benefits and challenges encountered, offering valuable insights for future developments in artificial intelligence assistants for any sector.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103940"},"PeriodicalIF":4.1,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142663153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lightweight finger multimodal recognition model based on detail optimization and perceptual compensation embedding
Pub Date : 2024-10-23 | DOI: 10.1016/j.csi.2024.103937 | Computer Standards & Interfaces, vol. 92, Article 103937
Zishuo Guo, Hui Ma, Ao Li
Multimodal biometric recognition has attracted wide scholarly attention because it offers higher security and stability than single-modal recognition, but the additional parameters and computational cost pose challenges for lightweight model deployment. To support a wider range of application scenarios, this paper proposes DPNet, a lightweight model that performs multimodal recognition on fingerprint and finger-vein images using a double-branch lightweight feature extraction structure that combines detail optimization with perception compensation. The detail-optimization branch uses multi-scale dimensionality-reduction filtering to obtain low-redundancy detail information and applies a depth-extension operation to enhance the generalization of detail features. The perception-compensation branch expands the model's perceptual field of view through lightweight spatial location queries and global information attention. In addition, the paper designs a perceptual feature embedding method that embeds perception-compensation information via importance adjustment to improve the consistency of the embedded features. An ABFM fusion module is proposed to perform multi-level, lightweight, deeply interactive fusion of the extracted finger-modality features from the global level down to spatial regions, improving both the depth and the utilization of feature fusion. The model's recognition performance and lightweight advantages are verified on three multimodal datasets. Experimental results show that the proposed model achieves the best lightweight efficiency and recognition performance among all compared methods on every dataset.
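The PyTorch sketch below shows the general shape of such a double-branch extractor: a detail branch with multi-scale stride-reduction filtering, and a perception branch that produces a global attention descriptor used to re-weight the detail features before fusion. Layer sizes, the sigmoid gating, and the fusion rule are illustrative guesses; DPNet's actual layers and its ABFM module differ.

```python
# Schematic two-branch extractor in the spirit of DPNet (illustrative only).
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, num_classes=100):
        super().__init__()
        # Detail branch: multi-scale dimensionality-reduction filtering.
        self.detail = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        # Perception branch: cheap global context pooled to one descriptor.
        self.perceive = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=4, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, finger_image):
        d = self.detail(finger_image)                  # B x 32 x H x W
        w = torch.sigmoid(self.perceive(finger_image)) # B x 32 x 1 x 1 gate
        fused = (d * w).mean(dim=(2, 3))               # attention-weighted pool
        return self.head(fused)

# Example: logits = TwoBranchNet()(torch.randn(2, 1, 64, 64))
```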
{"title":"A lightweight finger multimodal recognition model based on detail optimization and perceptual compensation embedding","authors":"Zishuo Guo, Hui Ma, Ao Li","doi":"10.1016/j.csi.2024.103937","DOIUrl":"10.1016/j.csi.2024.103937","url":null,"abstract":"<div><div>Multimodal biometric recognition technology has attracted the attention of many scholars due to its higher security and stability than single-modal recognition, but its additional parameter quantity and computational cost have brought challenges to the lightweight deployment of the model. In order to meet the needs of a wider range of application scenarios, this paper proposes a lightweight model DPNet using fingerprint and finger vein images for multimodal recognition, which adopts a double-branch lightweight feature extraction structure combining detail optimization and perception compensation. Among them, the detail extraction optimization branch uses multi-scale dimensionality reduction filtering to obtain low-redundant detail information, and combines the depth extension operation to enhance the generalization ability of detail features. The perception compensation branch expands and compensates the model's perceptual field of view through lightweight spatial location query and global information attention. In addition, this paper designs a perceptual feature embedding method to embed perceptual compensation information in the way of importance adjustment to improve the consistency of embedded features. The ABFM fusion module is proposed to carry out multi-level lightweight and deep interactive fusion of the extracted finger modal features from the global to the spatial region, so as to improve the degree and utilization rate of feature fusion. In this paper, the model recognition performance and lightweight advantages are verified on three multimodal datasets. Experimental results show that the proposed model achieves the most advanced lightweight effect and recognition performance in the experimental comparison of all datasets.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103937"},"PeriodicalIF":4.1,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing a behavioural cybersecurity strategy: A five-step approach for organisations
Pub Date : 2024-10-22 | DOI: 10.1016/j.csi.2024.103939 | Computer Standards & Interfaces, vol. 92, Article 103939
Tommy van Steen
With cybercriminals paying increased attention to human error as an attack vector, organisations need to develop strategies that address behavioural risks if they want to keep their organisation secure. The traditional focus on awareness campaigns does not seem suitable for this goal, and other avenues of applying the behavioural sciences to this field need to be explored. This paper outlines a five-step approach to developing a behavioural cybersecurity strategy that addresses this issue. The five steps consist of first deciding whether a solely technical solution is feasible, before turning to nudging and affordances, cybersecurity training, and behavioural change campaigns for specific behaviours. The final step is to develop and implement a feedback loop used to assess the effectiveness of the strategy and inform organisations about the next steps to take. Beyond outlining the five-step approach, a research agenda is discussed aimed at strengthening each of the five steps and helping organisations become more cybersecure by implementing a behavioural cybersecurity strategy.
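A compact way to read the five steps is as a decision flow. The sketch below encodes that flow in Python; the predicates (`technical_solution_feasible`, `requires_skills`, `needs_motivation`) are placeholders standing in for an organisation's own risk assessment, not a prescribed method.

```python
# Illustrative encoding of the five-step decision flow described above.
from dataclasses import dataclass

@dataclass
class Behaviour:
    technical_solution_feasible: bool
    requires_skills: bool
    needs_motivation: bool

def plan_interventions(b: Behaviour) -> list:
    plan = []
    if b.technical_solution_feasible:                  # Step 1: engineer it away
        plan.append("technical control; no behaviour change required")
    else:
        plan.append("nudges and affordances")          # Step 2
        if b.requires_skills:
            plan.append("cybersecurity training")      # Step 3
        if b.needs_motivation:
            plan.append("behaviour-change campaign for the specific behaviour")  # Step 4
    plan.append("feedback loop: measure effectiveness, evaluate, adjust")        # Step 5
    return plan

print(plan_interventions(Behaviour(False, True, False)))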
{"title":"Developing a behavioural cybersecurity strategy: A five-step approach for organisations","authors":"Tommy van Steen","doi":"10.1016/j.csi.2024.103939","DOIUrl":"10.1016/j.csi.2024.103939","url":null,"abstract":"<div><div>With cybercriminals’ increased attention for human error as attack vector, organisations need to develop strategies to address behavioural risks if they want to keep their organisation secure. The traditional focus on awareness campaigns does not seem suitable for this goal and other avenues of applying the behavioural sciences to this field need to be explored. This paper outlines a five-step approach to developing a behavioural cybersecurity strategy to address this issue. The five steps consist of first deciding whether a solely technical solution is feasible before turning to nudging and affordances, cybersecurity training, and behavioural change campaigns for specific behaviours. The final step is to develop and implement a feedback loop that is used to assess the effectiveness of the strategy and inform organisations about next steps that can be taken. Beyond outlining the five-step approach, a research agenda is discussed aimed at strengthening each of the five steps and helping organisations in becoming more cybersecure by implementing a behavioural cybersecurity strategy.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103939"},"PeriodicalIF":4.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A traceable and revocable decentralized attribute-based encryption scheme with fully hidden access policy for cloud-based smart healthcare
Pub Date : 2024-10-19 | DOI: 10.1016/j.csi.2024.103936 | Computer Standards & Interfaces, vol. 92, Article 103936
Yue Dai , Lulu Xue , Bo Yang , Tao Wang , Kejia Zhang
Smart healthcare is an emerging technology that enables interaction between patients and medical personnel, medical institutions, and medical devices using advanced Internet of Things (IoT) technologies. It has attracted significant attention from researchers because of the convenience of storing and sharing electronic medical records (EMRs) in the cloud. Given that a patient’s EMR contains sensitive personal information, it must be encrypted before being uploaded to the cloud. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been proposed as a solution for data confidentiality and fine-grained access control, preventing private personal data from being manipulated without explicit authorization. However, most CP-ABE schemes use a centralized mechanism, which may lead to performance bottlenecks and single-point-of-failure issues, and they are also at risk of key abuse and privacy breaches in smart healthcare applications. To this end, this paper investigates a traceable and revocable decentralized attribute-based encryption scheme with a fully hidden access policy (TR-HP-DABE). Firstly, to overcome user privacy leakage and single points of failure, a fully hidden access policy is established across multiple attribute authorities. Secondly, to prevent key abuse, TR-HP-DABE achieves tracing and revocation of malicious users using Key Encryption Key (KEK) trees and partial ciphertext updates. Furthermore, online/offline encryption and verifiable outsourced decryption are applied to improve efficiency in practical smart healthcare. Our analysis proves the security and traceability of TR-HP-DABE, and performance evaluation shows it to be more efficient than several existing representative schemes.
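To illustrate the KEK-tree revocation mechanism, the sketch below computes a complete-subtree cover: each user holds the KEKs on its leaf-to-root path, and the ciphertext-update key is wrapped only under KEK nodes whose subtrees contain no revoked user. The heap-style indexing and the complete-subtree method are common KEK-tree conventions assumed here for illustration, not necessarily the exact construction in TR-HP-DABE.

```python
# Toy complete-subtree cover over a binary KEK tree (illustrative).
def path_to_root(leaf, n_leaves):
    """KEK node indices from a user's leaf up to the root (heap layout, root=1)."""
    node = leaf + n_leaves            # leaves occupy indices n .. 2n-1
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

def cover_set(revoked, n_leaves):
    """Minimal KEK nodes covering every non-revoked leaf."""
    blocked = set()                   # nodes lying on some revoked leaf's path
    for leaf in revoked:
        blocked.update(path_to_root(leaf, n_leaves))
    cover = []
    def walk(node):
        if node not in blocked:       # whole subtree is non-revoked: cover it
            cover.append(node)
        elif node < n_leaves:         # blocked internal node: recurse
            walk(2 * node)
            walk(2 * node + 1)
        # blocked leaf = revoked user: excluded from the cover
    walk(1)
    return cover

# Example: 8 users, users 2 and 5 revoked. Wrapping the update key under
# each KEK in the cover lets only non-revoked users decrypt the update.
print(cover_set({2, 5}, 8))           # e.g. [4, 11, 12, 7]
```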
{"title":"A traceable and revocable decentralized attribute-based encryption scheme with fully hidden access policy for cloud-based smart healthcare","authors":"Yue Dai , Lulu Xue , Bo Yang , Tao Wang , Kejia Zhang","doi":"10.1016/j.csi.2024.103936","DOIUrl":"10.1016/j.csi.2024.103936","url":null,"abstract":"<div><div>Smart healthcare is an emerging technology for enabling interaction between patients and medical personnel, medical institutions, and medical devices utilizing advanced Internet of Things (IoT) technologies. It has attracted significant attention from researchers because of the convenience of storing and sharing electronic medical records (EMRs) in the cloud. Given that a patient’s EMR contains sensitive individual information, it must be encrypted before uploading it to the cloud. As a solution for data confidentiality and fine-grained access control, the Ciphertext Policy Attribute-Based Encryption (CP-ABE) technique is proposed, which helps manipulate private personal data without explicit authorization. However, most CP-ABE schemes use a centralized mechanism which may lead to performance bottlenecks and single-point-of-failure issues. They will also be at risk of key abuse and privacy breaches in smart healthcare applications. To this end, in this paper, we investigate a traceable and revocable decentralized attribute-based encryption scheme with a fully hidden access policy (TR-HP-DABE). Firstly, to overcome the issues of user privacy leakage and single-point-of-failure, a fully hidden access policy is established for multiple attribute authorities. Secondly, to prevent key abuse, the proposed TR-HP-DABE can achieve the tracking and revocation of malicious users by using Key Encryption Key (KEK) trees and updating the partial ciphertext. Furthermore, the online/offline encryption and verifiable outsourced decryption are applied to improve its efficiency in practical smart healthcare. According to our analysis, the security and traceability of TR-HP-DABE can be proved. Finally, the performance evaluation of TR-HP-DABE is more effective than some existing typical ones.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103936"},"PeriodicalIF":4.1,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MARISMA: A modern and context-aware framework for assessing and managing information cybersecurity risks
Pub Date : 2024-10-10 | DOI: 10.1016/j.csi.2024.103935 | Computer Standards & Interfaces, vol. 92, Article 103935
Luis E. Sánchez , Antonio Santos-Olmo , David G. Rosado , Carlos Blanco , Manuel A. Serrano , Haralambos Mouratidis , Eduardo Fernández-Medina
In a globalised world dependent on information technology, ensuring adequate protection of an organisation’s information assets has become a decisive factor in the longevity of its operations. This is especially important when these organisations are critical infrastructures that provide essential services to nations and their citizens. However, to protect these assets, we must first be able to understand the risks to which they are subject and how to manage them properly. To understand and manage such risks, we first need to acknowledge that organisations have changed: they now rely increasingly on information assets, which in many cases are shared with other organisations. Such reliance and interconnectivity mean that risks are constantly changing, they are dynamic, and potential mitigation does not rely on the organisation’s own controls alone, but also on the controls put in place by the organisations with which it shares those assets. Taking the above requirements as essential, we have reviewed the state of the art and concluded that current risk analysis and management systems are unable to meet all the needs inherent in this dynamic and evolving risk environment. This gap requires novel approaches that draw on the foundations of risk management but are adapted to the new challenges.
This article fills this gap in the literature with the introduction of MARISMA, a novel security risk analysis and management framework. MARISMA is oriented towards dynamic and adaptive risk management, considering external factors such as associative risks between organisations. MARISMA also contributes to the state of the art through newly developed mechanisms for knowledge reuse and dynamic learning. An important advantage of MARISMA is that the connections between its elements make it possible to reduce the subjectivity inherent in classical risk analysis systems, generating suggestions that allow perceived security risks to be translated into real security risks. The framework consists of a reusable meta-pattern of elements and their interdependencies, a supporting method that guides the entire process, and a cloud-based tool that automates data management and risk methods. MARISMA has been applied in many companies from different countries and sectors (government, maritime, energy, and pharmaceutical); in this paper, we demonstrate its applicability through a real-world case study involving a company in the technology sector.
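As a loose illustration of the associative-risk idea, the sketch below blends an asset's locally mitigated risk with the risk implied by the weakest controls of partner organisations that share the asset. The formula and the `share` weight are invented for illustration and are not MARISMA's actual model.

```python
# Hedged sketch of dynamic, associative risk aggregation (illustrative).
def effective_risk(likelihood, impact, own_control, partner_controls, share=0.5):
    """likelihood, impact, and control effectiveness all lie in [0, 1]."""
    local = likelihood * impact * (1.0 - own_control)
    if partner_controls:
        weakest = min(partner_controls)            # weakest partner dominates
        associative = likelihood * impact * (1.0 - weakest)
    else:
        associative = 0.0
    return (1.0 - share) * local + share * associative

# A shared asset stays risky when a partner has weak controls, even if the
# organisation's own control effectiveness is high:
print(effective_risk(0.6, 0.8, own_control=0.9, partner_controls=[0.3]))
```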
{"title":"MARISMA: A modern and context-aware framework for assessing and managing information cybersecurity risks","authors":"Luis E. Sánchez , Antonio Santos-Olmo , David G. Rosado , Carlos Blanco , Manuel A. Serrano , Haralambos Mouratidis , Eduardo Fernández-Medina","doi":"10.1016/j.csi.2024.103935","DOIUrl":"10.1016/j.csi.2024.103935","url":null,"abstract":"<div><div>In a globalised world dependent on information technology, ensuring adequate protection of an organisation’s information assets has become a decisive factor for the longevity of the organisation’s operation. This is especially important when these organisations are critical infrastructures that provide essential services to nations and their citizens. However, to protect these assets, we must first be able to understand the risks to which they are subject and how to manage them properly. To understand and manage such the risks, we need first to acknowledge that organisations have changed, and they now have an increasing reliance on information assets, which in many cases are shared with other organisations. Such reliance and interconnectivity means that risks are constantly changing, they are dynamic, and potential mitigation does not just rely on the organisation’s own controls, but also on the controls put in place by the organisations with which it shares those assets. Taking the above requirements as essential, we have reviewed the state of the art, and we have concluded that current risk analysis and management systems are unable to meet all the needs inherent in this dynamic and evolving risk environment. This gap in the state of the art requires novel approaches that draw on the foundations of risk management, but they are adapted to the new challenges.</div><div>This article fulfils this gap in the literature with the introduction of MARISMA, a novel security risk analysis and management framework. MARISMA is oriented towards dynamic and adaptive risk management, considering external factors such as associative risks between organisations. MARISMA also contributes to the state of the art through newly developed mechanisms for knowledge reuse and dynamic learning. An important advantage of MARISMA is the connections between its elements that make it possible to reduce the subjectivity inherent in classical risk analysis systems, thereby generating suggestions that allow the translation of perceived security risks into real security risks. The framework comprises a reusable meta-pattern comprising different elements and their interdependencies, a supporting method that guides the entire process, and a cloud-based tool that automates data management and risk methods. MARISMA has been applied to many companies from different countries and sectors (government, maritime, energy, and pharmaceutical). 
In this paper, we demonstrate its applicability through its application to a real world case study involving a company in the technology sector.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103935"},"PeriodicalIF":4.1,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing usability/user experience heuristics to evaluate e-assessments administered to children
Pub Date : 2024-10-09 | DOI: 10.1016/j.csi.2024.103933 | Computer Standards & Interfaces, vol. 92, Article 103933
Florence Lehnert , Sophie Doublet , Gavin Sim
The use of electronic assessments (e-assessments) has increased, particularly among elementary-school-aged children. Paper-based assessments are frequently converted into digital formats for efficiency gains, with little thought given to their user experience (UX) and usability. Individual differences, particularly among young children, can prevent test-takers from completing assessment tasks that are not designed to match their needs and abilities. Consequently, studies have raised concerns about the generalizability and fairness of e-assessments. Although heuristic evaluation is a standard method for evaluating and enhancing the efficacy of a product with respect to a set of guidelines, more information is needed about its added value when designing e-assessments for children. This paper synthesizes heuristics on the basis of the literature and expert judgments to accommodate children's abilities when interacting with e-assessment platforms. We present a final set of 10 heuristics, validated and refined through a heuristic evaluation workshop and 24 expert surveys. The results indicate that the derived heuristics can help evaluate the UX- and usability-related aspects of e-assessments with 6- to 12-year-old children. Moreover, the paper proposes recommendations for a framework for developing usability/UX heuristics that researchers can use to develop domain-specific heuristics in the future.
{"title":"Designing usability/user experience heuristics to evaluate e-assessments administered to children","authors":"Florence Lehnert , Sophie Doublet , Gavin Sim","doi":"10.1016/j.csi.2024.103933","DOIUrl":"10.1016/j.csi.2024.103933","url":null,"abstract":"<div><div>The application of electronic assessments (e-assessments) has increased, particularly among elementary-school-aged children. Paper-based assessments are frequently converted into digital formats for efficiency gains, with little thought given to their user experience (UX) and usability. Individual differences, particularly among young children, can inhibit test-takers from completing the assessment tasks that are not designed to match their needs and abilities. Consequently, studies have raised concerns about the generalizability and fairness of e-assessments. Whereas heuristic evaluation is a standard method for evaluating and enhancing the efficacy of a product with respect to a set of guidelines, more information is needed about its added value when designing e-assessments for children. This paper synthesizes heuristics on the basis of the literature and expert judgments to accommodate children's abilities for interacting with e-assessment platforms. We present a final set of 10 heuristics, validated and refined by applying a heuristic evaluation workshop and collecting 24 expert surveys. The results indicate that the derived heuristics can help evaluate the UX and usability-related aspects of e-assessments with 6- to 12-year-old children. Moreover, the present paper proposes recommendations for a framework for developing usability/UX heuristics that can be used to help researchers develop domain-specific heuristics in the future.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103933"},"PeriodicalIF":4.1,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142532400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis of multiple-input multiple-output orthogonal frequency division multiplexing system using arithmetic optimization algorithm
Pub Date : 2024-10-06 | DOI: 10.1016/j.csi.2024.103934 | Computer Standards & Interfaces, vol. 92, Article 103934
Deepa R , Karthick R , Jayaraj Velusamy , Senthilkumar R
This research aims to optimize interference mitigation and improve system performance metrics such as bit error rate, inter-carrier interference (ICI), and inter-symbol interference (ISI) by integrating the Redundant Discrete Wavelet Transform (RDWT) with the Arithmetic Optimization Algorithm (AOA), thereby increasing the spectral efficiency of MIMO-OFDM systems for ultra-high data rate (UHDR) transmission in 5G networks. The most important contribution of this study is the innovative combination of RDWT and AOA, which effectively addresses the downsampling issues in DWT-OFDM systems and significantly improves both error rates and data rates in high-speed wireless communication. Fifth-generation wireless networks require transmission at ultra-high data rates, which necessitates reducing ISI and ICI; multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is employed to achieve UHDR. DWT-OFDM (discrete wavelet transform-based OFDM) increases bandwidth and orthogonality, but downsampling degrades system performance. The RDWT is therefore proposed to eliminate the downsampling complexities. Simulation results demonstrate that RDWT effectively lowers bit error rates, ICI, and ISI by increasing the carrier-to-interference power ratio (CIR). The AOA is used to optimize the ICI-cancellation weights, further enhancing spectral efficiency. The proposed method is implemented in MATLAB and achieves notable performance gains: up to 82.95% lower error rates and 39.88% higher data rates compared to existing methods.
Conclusion
The integration of RDWT with AOA represents a significant advancement in enhancing the spectral efficiency of MIMO-OFDM systems for UHDR transmission in 5G networks. The proposed method not only enhances system performance but also lays a foundation for future developments in high-speed wireless communication by addressing downsampling issues and optimizing interference mitigation.
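For readers unfamiliar with the AOA used above, the sketch below shows its core loop (an explore/exploit schedule driven by the Math Optimizer Accelerated and probability functions, with multiply/divide exploration and add/subtract exploitation, following Abualigah et al., 2021) applied to a toy stand-in for the residual-ICI objective. All hyper-parameters and the objective are illustrative, not the paper's system model.

```python
# Simplified AOA minimising a placeholder residual-ICI objective (illustrative).
import random

def residual_ici(weights):
    """Stand-in objective: deviation of weights from an ideal response."""
    return sum((w - 1.0 / (i + 1)) ** 2 for i, w in enumerate(weights))

def aoa(dim=4, pop=20, iters=200, lb=-2.0, ub=2.0, mu=0.5, alpha=5.0):
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=residual_ici)
    for t in range(1, iters + 1):
        moa = 0.2 + t * (0.8 / iters)             # explore -> exploit schedule
        mop = 1.0 - (t / iters) ** (1.0 / alpha)  # math-optimizer probability
        span = (ub - lb) * mu + lb
        for i in range(pop):
            new = []
            for j in range(dim):
                r1, r2, r3 = random.random(), random.random(), random.random()
                if r1 > moa:                      # exploration: divide / multiply
                    v = (best[j] / (mop + 1e-9) * span if r2 > 0.5
                         else best[j] * mop * span)
                else:                             # exploitation: subtract / add
                    v = (best[j] - mop * span if r3 > 0.5
                         else best[j] + mop * span)
                new.append(min(max(v, lb), ub))   # clamp to the search bounds
            if residual_ici(new) < residual_ici(X[i]):
                X[i] = new                        # greedy replacement
        best = min(X + [best], key=residual_ici)
    return best

print(aoa())  # optimised ICI-cancellation weights for the toy objective
```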
{"title":"Performance analysis of multiple-input multiple-output orthogonal frequency division multiplexing system using arithmetic optimization algorithm","authors":"Deepa R , Karthick R , Jayaraj Velusamy , Senthilkumar R","doi":"10.1016/j.csi.2024.103934","DOIUrl":"10.1016/j.csi.2024.103934","url":null,"abstract":"<div><div>This research aims to optimize the interference mitigation and improve system performance metrics, such as bit error rates, inter-carrier interference (ICI), and inter-symbol interference (ISI), by integrating the Redundant Discrete Wavelet Transform (RDWT) with the Arithmetic Optimization Algorithm (AOA). This will increase the spectral efficiency of MIMO<img>OFDM systems for ultra-high data rate (UHDR) transmission in 5 G networks. The most important contribution of this study is the innovative combination of RDWT and AOA, which effectively addresses the down sampling issues in DWT-OFDM systems and significantly improves both error rates and data rates in high-speed wireless communication. Fifth-generation wireless networks require transmission at ultra-high data rates, which necessitates reducing ISI and ICI. Multiple-input multiple-output orthogonal frequency division multiplexing (MIMO<img>OFDM) is employed to achieve the UHDR. The bandwidth and orthogonality of DWT-OFDM (discrete wavelet transform-based OFDM) are increased; however system performance is degraded due to down sampling. The redundant discrete wavelet transform (RDWT) is proposed for eliminating down sampling complexities. Simulation results demonstrate that RDWT effectively lowers bit error rates, ICI, and ISI by increasing the carrier-to-interference power ratio (CIR). The Arithmetic Optimization Algorithm is used to optimize ICI cancellation weights, further enhancing spectrum efficiency. The proposed method is executed in MATLAB and achieves notable performance gains: up to 82.95 % lower error rates and 39.88 % higher data rates compared to the existing methods.</div></div><div><h3>Conclusion</h3><div>The integration of RDWT with AOA represents a significant advancement in enhancing the spectral efficiency of MIMO<img>OFDM systems for UHDR transmission in 5 G networks. The proposed method not only enhances system performance but also lays a foundation for future developments in high-speed wireless communication by addressing down sampling issues and optimizing interference mitigation.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103934"},"PeriodicalIF":4.1,"publicationDate":"2024-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142442192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel secure privacy-preserving data sharing model with deep-based key generation on the blockchain network in the cloud
Pub Date : 2024-09-29 | DOI: 10.1016/j.csi.2024.103932 | Computer Standards & Interfaces, vol. 92, Article 103932
Samuel B , Kasturi K
Cloud computing is an emerging technology in which a Cloud Service Provider (CSP), a third-party organization, provides effective data storage and facilities to a large client base. Storing information in the cloud lets users access it without needing direct knowledge of how the infrastructure is distributed and managed. The primary objective is to develop a novel, secure, privacy-preserving data-sharing model that uses deep-learning-based key generation on a blockchain in the cloud, with data communication carried out among multiple entities. The research develops a collaborative cloud data-sharing method whose authentication scheme for cloud security is built on blockchain and smart contracts; initialization, registration, key generation, authentication of data sharing, and validation are carried out. The proposed data-sharing model includes a revenue distribution model that depends on Multiple Services (MS) models to improve multiple cloud services. Security parameters, namely passwords, hashing functions, key interpolation, and encryption, are used to preserve data privacy, and SpinalNet is used to generate the keys. The devised SpinalNet_Genkey achieved 45.001 MB memory usage, 0.002 revenue, and a 0.003 s computation cost.
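To make the key-generation and authentication flow concrete, here is a minimal runnable sketch in which a salted PBKDF2 derivation stands in for the paper's SpinalNet-based key generator; the entity names and the two-step flow are illustrative simplifications, not the scheme itself.

```python
# Sketch of the password-based register/authenticate flow (illustrative;
# PBKDF2 substitutes for the paper's SpinalNet key generation).
import hashlib
import hmac
import secrets

def register(password: str):
    """Derive a per-user key from a password and a fresh random salt."""
    salt = secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key          # in the scheme, sharing is mediated on-chain

def authenticate(password: str, salt: bytes, expected_key: bytes) -> bool:
    """Re-derive the key and compare in constant time."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(key, expected_key)

salt, key = register("patient-secret")
assert authenticate("patient-secret", salt, key)
assert not authenticate("wrong-guess", salt, key)
```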
{"title":"A novel secure privacy-preserving data sharing model with deep-based key generation on the blockchain network in the cloud","authors":"Samuel B , Kasturi K","doi":"10.1016/j.csi.2024.103932","DOIUrl":"10.1016/j.csi.2024.103932","url":null,"abstract":"<div><div>Cloud computing is currently emerging as a developing technology in which a Cloud Service Provider (CSP) is a third-party organization that provides effective storage of data and facilities to a large client base. Saving information in a cloud offers users the satisfaction of accessing it without the need for direct knowledge of the distribution and management of an infrastructure. The primary objective is to develop a novel, secure, and privacy-preserving data-sharing model that utilizes deep-based key generation on blockchain in the cloud. Data communication is done using multiple entities. The research aims to develop a collaborative data-sharing method in the cloud for the authentication scheme for cloud security on blockchain and smart contracts. Initialization, registration, key generation, authentication of data sharing, and validation are carried out here. The proposed data-sharing model involves a revenue distribution model that depends on Multiple Services (MS) models to improve multiple cloud services. The security parameters namely passwords, hashing functions, key interpolation, and encryption are used for preserving the Data privacy and here the SpinalNet is used for generating keys. Furthermore, the devised SpinalNet_Genkey obtained a value of 45.001 MB, 0.002, and 0.003 sec for memory usage, revenue, and computation cost.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103932"},"PeriodicalIF":4.1,"publicationDate":"2024-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating deep learning and data fusion for advanced keystroke dynamics authentication
Pub Date : 2024-09-28 | DOI: 10.1016/j.csi.2024.103931 | Computer Standards & Interfaces, vol. 92, Article 103931
Arnoldas Budžys, Olga Kurasova, Viktor Medvedev
To enhance user authentication protocols, especially in critical infrastructures vulnerable to complex cyberthreats, we present an advanced approach that integrates a deep-learning-based model with data fusion techniques for analyzing keystroke dynamics. With the growing need for robust security measures, especially in critical infrastructure environments, traditional authentication mechanisms often fail to cope with advanced threats. Our approach focuses on the unique behavioral biometric characteristics of keystrokes, which offer promising opportunities to improve user authentication processes. We have developed a data-fusion-based methodology that combines the unique features of keystroke dynamics with deep learning techniques to improve user authentication systems. The proposed methodology not only captures the complex behavioral biometrics inherent in keystroke dynamics but also addresses the challenges posed by varying password lengths and typing styles. We conducted extensive experiments on several fixed-text datasets, including the Carnegie Mellon University dataset, the KeyRecs dataset, and the GREYC-NISLAB dataset, totalling approximately 54,000 password records. Comprehensive experiments on datasets with different password lengths show that our approach is scalable and accurate for user authentication, significantly improving the security of critical infrastructure. By using interpolation-based data fusion to standardize keystroke data to a uniform length and employing a Siamese neural network with a triplet loss function, a best equal error rate of 0.13281 was achieved on unseen fused data. The integration of deep learning and data fusion generalizes effectively to different user profiles, demonstrating adaptability and accuracy in authenticating users across scenarios. These findings are crucial for improving security in sensitive applications, ranging from accessing personal devices to protecting critical infrastructure.
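The two ingredients combine naturally, as the sketch below shows: variable-length keystroke-timing vectors are resampled onto a fixed grid by interpolation, and a triplet loss pulls same-user embeddings together while pushing impostors apart. The margin, target length, and the use of raw resampled vectors in place of the trained Siamese embeddings are illustrative assumptions.

```python
# Sketch: interpolation-based length standardization plus a triplet loss
# (illustrative stand-ins for the paper's fusion and Siamese embedding).
import numpy as np

def to_uniform_length(timings, target_len=32):
    """Resample a variable-length timing vector onto a fixed grid."""
    src = np.linspace(0.0, 1.0, num=len(timings))
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(dst, src, np.asarray(timings, dtype=float))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull same-user samples together, push impostors apart by a margin."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

same_user_a = to_uniform_length([0.11, 0.09, 0.14, 0.10, 0.12])
same_user_b = to_uniform_length([0.12, 0.10, 0.13, 0.11, 0.12, 0.10])
impostor = to_uniform_length([0.21, 0.30, 0.19, 0.25])
print(triplet_loss(same_user_a, same_user_b, impostor))
```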
{"title":"Integrating deep learning and data fusion for advanced keystroke dynamics authentication","authors":"Arnoldas Budžys, Olga Kurasova, Viktor Medvedev","doi":"10.1016/j.csi.2024.103931","DOIUrl":"10.1016/j.csi.2024.103931","url":null,"abstract":"<div><div>By enhancing user authentication protocols, especially in critical infrastructures vulnerable to complex cyberthreats, we present an advanced approach that integrates a deep learning-based model and data fusion techniques applied to analyze keystroke dynamics. With the growing need for robust security measures, especially in critical infrastructure environments, traditional authentication mechanisms often fail to cope with advanced threats. Our approach focuses on the unique behavioral biometric characteristics of keystrokes, which offers promising opportunities to improve user authentication processes. We have developed a data fusion-based methodology that utilizes the unique features of keystroke dynamics combined with deep learning techniques to improve user authentication systems. Using the capabilities of data fusion and deep learning, the proposed methodology not only captures the complex behavioral biometrics inherent in keystroke dynamics but also addresses the challenges posed by varying password lengths and typing styles. We conducted extensive experiments on several fixed-text datasets, including the Carnegie Mellon University dataset, the KeyRecs dataset, and the GREYC-NISLAB dataset, with a total of approximately 54,000 password records. Comprehensive experiments on various datasets with different password lengths have shown that our approach is scalable and accurate for user authentication, which significantly improves the security of critical infrastructure. By using interpolation-based data fusion techniques to standardize the keystroke data to a uniform length and employing a Siamese neural network with a triplet loss function, the best equal error rate of 0.13281 was achieved for the unseen fused data. The integration of deep learning and data fusion effectively generalizes to different user profiles, demonstrating its adaptability and accuracy in authenticating users in different scenarios. The findings are crucial for improving security in sensitive applications, ranging from accessing personal devices to protecting critical infrastructure.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103931"},"PeriodicalIF":4.1,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}