
Latest publications in Software: Practice and Experience

Organizations' readiness for insider attacks: A process-oriented approach
Pub Date : 2024-03-14 DOI: 10.1002/spe.3327
Azzah A. AlGhamdi, Mahmood Niazi, Mohammad Alshayeb, Sajjad Mahmood
Context: Organizations constantly strive to protect their assets from outsider attacks by implementing various security controls, such as data encryption algorithms, intrusion detection software, firewalls, and antivirus programs. Unfortunately, attackers strike not only from outside the organization but also from within. Such internal attacks are called insider attacks or threats, and the people responsible for them are insider attackers or insider threat agents. Insider attacks pose more significant risks and can result in greater organizational losses than outsider attacks. Thus, every organization should be vigilant regarding such attackers to protect its valuable resources from harm. Finding solutions to protect organizations from such attacks is critical. Despite the importance of this topic, little research has been conducted on providing solutions to mitigate insider attacks. Objective: This study aims to develop an organizational readiness model to assess an organization's readiness for insider attacks. Method: We conducted a multivocal literature review to identify practices that can be used to assess organizations' readiness against insider attacks. These practices were grouped into different knowledge areas of insider attacks for organizations. The insider attack readiness model was developed using the identified best practices and knowledge areas: compliance, top management, human resources, and technical. Results: The model was evaluated at two levels: academic and real-world environments. The evaluation results show that the proposed model can identify organizations' readiness against insider attacks. Conclusion: The proposed model can guide organizations toward a secure environment against insider attacks.
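The abstract does not describe an implementation; as a purely illustrative sketch of how a readiness model over the four knowledge areas named above (compliance, top management, human resources, and technical) might be scored, the snippet below averages hypothetical per-practice ratings. The practices, weights, and 0-4 rating scale are assumptions, not the authors' model.

```python
# Illustrative sketch only: the four knowledge areas come from the abstract;
# the individual practices and the 0-4 rating scale are hypothetical.

KNOWLEDGE_AREAS = {
    "compliance": ["security policy enforced", "periodic audits"],
    "top management": ["insider-threat budget", "incident escalation path"],
    "human resources": ["background checks", "security awareness training"],
    "technical": ["least-privilege access", "activity monitoring"],
}

def area_score(ratings: dict[str, int]) -> float:
    """Average the 0-4 ratings given to the practices of one knowledge area, normalized to [0, 1]."""
    return sum(ratings.values()) / (4 * len(ratings))

def readiness(assessment: dict[str, dict[str, int]]) -> float:
    """Overall readiness as the mean of the per-area scores."""
    return sum(area_score(r) for r in assessment.values()) / len(assessment)

if __name__ == "__main__":
    example = {
        area: {practice: 3 for practice in practices}
        for area, practices in KNOWLEDGE_AREAS.items()
    }
    print(f"Overall readiness: {readiness(example):.2f}")  # 0.75 when every practice is rated 3
```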
Citations: 0
Kulla-RIV: A composing model with integrity verification for efficient and reliable data processing services
Pub Date : 2024-03-12 DOI: 10.1002/spe.3328
Hugo G. Reyes‐Anastacio, Jose L. Gonzalez‐Compeán, Victor J. Sosa‐Sosa, Ricardo Marcelín‐Jiménez, Miguel Morales‐Sandoval
This article presents the design and implementation of a reliable computing, virtual container-based model with integrity verification for data processing strategies, named the reliability and integrity verification (RIV) scheme. It has been integrated into a system construction model as well as existing workflow engines (e.g., Kulla and Makeflow) for composing in-memory systems. In the RIV scheme, the reliability (R) component is in charge of providing an implicit fault tolerance mechanism for the processes of data acquisition and storage that take place in a data processing system. The integrity verification (IV) component is in charge of ensuring that data transmitted/received between two processing stages are correct and are not modified during the transmission process. To show the feasibility of using the RIV scheme, real-world applications were created using different distributed and parallel systems to solve use cases of satellite and medical imagery processing. This evaluation revealed encouraging results: solutions that assumed the cost (overhead) of using the RIV scheme, for example, Kulla (the Kulla-RIV solution), achieve better response times than solutions without the RIV scheme (e.g., Makeflow), which remain exposed to the risks caused by the lack of RIV strategies.
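The RIV implementation itself is not given in the abstract. As a rough sketch of the integrity-verification idea (confirming that data passed between two processing stages was not modified in transit), the snippet below attaches a SHA-256 digest at the sending stage and re-checks it at the receiving stage; the function names and digest choice are assumptions, not the authors' code.

```python
import hashlib
import json

# Minimal sketch of stage-to-stage integrity verification (not the authors' RIV code):
# the sending stage attaches a SHA-256 digest, the receiving stage recomputes and compares.

def send_stage(payload: dict) -> dict:
    """Serialize the payload and attach its digest before handing it to the next stage."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"body": body, "digest": hashlib.sha256(body).hexdigest()}

def receive_stage(message: dict) -> dict:
    """Verify the digest; refuse to process data that changed in transit."""
    if hashlib.sha256(message["body"]).hexdigest() != message["digest"]:
        raise ValueError("integrity check failed between processing stages")
    return json.loads(message["body"])

if __name__ == "__main__":
    msg = send_stage({"tile": "satellite_042", "band": 3})
    print(receive_stage(msg))  # round-trips only if the payload is unmodified
```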
Citations: 0
Full-mesh VPN performance evaluation for a secure edge-cloud continuum
Pub Date : 2024-03-11 DOI: 10.1002/spe.3329
Vojdan Kjorveziroski, Cristina Bernad, Katja Gilly, Sonja Filiposka
The recent introduction of full-mesh virtual private network (VPN) solutions that offer near-native performance, coupled with modern encryption algorithms and easy scalability thanks to a central control plane, has strong potential to enable the implementation of a seamless edge-cloud continuum. To test the performance of existing solutions in this domain, we present a framework consisting of both essential and optional features that full-mesh VPN solutions need to support before they can be used for interconnecting geographically dispersed compute nodes. We then apply this framework to existing offerings and select three VPN solutions for further tests: Headscale, Netbird, and ZeroTier. We evaluate their features in the context of establishing an underlay network on top of which a Kubernetes overlay network can be created. We test pod-to-pod TCP and UDP throughput as well as Kubernetes application programming interface (API) response times in multiple scenarios, accounting for adverse network conditions such as packet loss or packet delay. Based on the obtained measurement results, and through analysis of the underlying strengths and weaknesses of the individual implementations, we draw conclusions on the preferred VPN solution depending on the use case at hand, striking a balance between usability and performance.
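As a loose illustration of the kind of measurement the evaluation describes, the sketch below times repeated requests against a Kubernetes API endpoint and reports latency percentiles; adverse conditions such as packet loss or delay would be injected separately at the network layer. The endpoint URL and token are placeholders, and this is not the benchmark harness used in the paper.

```python
import statistics
import time

import requests

# Illustrative latency probe, not the paper's benchmark harness: it times repeated GET
# requests against a Kubernetes API endpoint and reports percentiles. The URL and token
# below are hypothetical placeholders for a real cluster.

API_URL = "https://kubernetes.example.internal:6443/healthz"   # hypothetical endpoint
TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"                    # hypothetical credential

def probe(n: int = 50) -> list[float]:
    """Return n response times (in milliseconds) for the configured endpoint."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(API_URL, headers=headers, verify=False, timeout=5)  # self-signed certs are common here
        samples.append((time.perf_counter() - start) * 1000)
    return samples

if __name__ == "__main__":
    times = probe()
    print(f"median={statistics.median(times):.1f} ms, "
          f"p95={statistics.quantiles(times, n=20)[18]:.1f} ms")
```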
Citations: 0
Enabling continuous deployment techniques for quantum services
Pub Date : 2024-03-06 DOI: 10.1002/spe.3326
Javier Romero-Álvarez, Jaime Alvarado-Valiente, Enrique Moguel, Jose Garcia-Alonso, Juan M. Murillo
Early advances in quantum computing have provided new opportunities to tackle intricate problems in diverse areas such as cryptography, optimization, and simulation. However, current methodologies employed in quantum computing often require, among other things, a broad understanding of quantum hardware and low-level programming languages, posing challenges to software developers in effectively creating and implementing quantum services. This study advocates the adoption of software engineering principles in quantum computing, thereby establishing a higher level of hardware abstraction that allows developers to focus on application development. With this proposal, developers can design and deploy quantum services with less effort, similar to the facilitation provided by service-oriented computing for the development of conventional software services. This study introduces a continuous deployment strategy adapted to the development of quantum services that covers the creation and deployment of such services. For this purpose, an extension of the OpenAPI specification is proposed, which allows the generation of services that implement quantum algorithms. The proposal was validated through the creation of an application programming interface with diverse quantum algorithm implementations and evaluated, with positive results, through a survey of developers and students who were introduced to the tool.
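The article's OpenAPI extension and code generator are not reproduced here. As a loose sketch of what a generated quantum service endpoint could look like, the snippet below wraps a circuit run behind an HTTP route; the Flask route, the run_circuit stand-in, and the request schema are hypothetical placeholders rather than the authors' generated code.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_circuit(circuit_name: str, shots: int) -> dict:
    """Hypothetical stand-in for dispatching a named circuit to a quantum backend."""
    # A real generated service would call a quantum SDK or vendor runtime here.
    return {"circuit": circuit_name, "shots": shots,
            "counts": {"00": shots // 2, "11": shots - shots // 2}}

@app.route("/circuits/<circuit_name>/run", methods=["POST"])
def run(circuit_name: str):
    # The request schema (a JSON body with a "shots" field) is an assumption for illustration.
    shots = int(request.get_json(force=True).get("shots", 1024))
    return jsonify(run_circuit(circuit_name, shots))

if __name__ == "__main__":
    app.run(port=8080)
```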
Citations: 0
State-of-the-practice in quality assurance in Java-based open source software development
Pub Date : 2024-03-05 DOI: 10.1002/spe.3321
Ali Khatami, Andy Zaidman
Summary: To ensure the quality of software systems, software engineers can make use of a variety of quality assurance approaches, for example, software testing, modern code review, automated static analysis, and build automation. Each of these quality assurance practices has been studied in depth in isolation, but there is a clear knowledge gap in our understanding of how these approaches are used in conjunction, or not. In our study, we broadly investigate whether and how these quality assurance approaches are used in conjunction in the development of 1454 popular open source software projects on GitHub. Our study indicates that projects typically do not follow all quality assurance practices together with high intensity. In fact, we only observe weak correlation among some quality assurance practices. In general, our study provides a deeper understanding of how existing quality assurance approaches are currently being used in Java-based open source software development. In addition, we specifically zoom in on the more mature projects in our dataset; generally, we observe that more mature projects apply the quality assurance practices more intensely, with more focus on their ASAT usage and code reviewing, but no strong change in their CI usage.
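As a minimal illustration of how correlation between quality assurance practices can be quantified from per-project intensity metrics, the sketch below uses Spearman's rank correlation; the metric names and toy values are invented and are not the study's dataset.

```python
from scipy.stats import spearmanr

# Toy per-project intensity metrics (invented values, not the study's data):
# fraction of PRs reviewed, ASAT warnings addressed, and test-to-code ratio.
code_review_intensity = [0.90, 0.40, 0.75, 0.10, 0.60]
asat_intensity        = [0.50, 0.30, 0.80, 0.20, 0.40]
testing_intensity     = [0.70, 0.20, 0.65, 0.15, 0.55]

rho, p = spearmanr(code_review_intensity, testing_intensity)
print(f"review vs. testing: rho={rho:.2f} (p={p:.3f})")

rho, p = spearmanr(asat_intensity, code_review_intensity)
print(f"ASAT vs. review:    rho={rho:.2f} (p={p:.3f})")
```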
Citations: 0
Integrating FMI and ML/AI models on the open-source digital twin framework OpenTwins
Pub Date : 2024-03-05 DOI: 10.1002/spe.3322
Sergio Infante, Cristian Martín, Julia Robles, Bartolomé Rubio, Manuel Díaz, Rafael González Perea, Pilar Montesinos, Emilio Camacho Poyato
The realm of digital twins is experiencing rapid growth and presents a wealth of opportunities for Industry 4.0. In conjunction with traditional simulation methods, digital twins offer a diverse range of possibilities. However, many existing tools in the domain of open-source digital twins concentrate on specific use cases and do not provide a versatile framework. In contrast, the open-source digital twin framework OpenTwins aims to provide a versatile framework that can be applied to a wide range of digital twin applications. In this article, we introduce a redefinition of the original OpenTwins platform that enables the management of custom simulation services and of FMI simulation services (FMI being one of the most widely used simulation standards in the industry), as well as their coexistence with machine learning models, which together enable the definition of next-generation digital twins. Thanks to this integration, digital twins that better reflect reality can be developed through hybrid models, in which simulation data can compensate for the scarcity of machine learning data, and so forth. As part of this project, a simulation model developed with the hydraulic software Epanet was validated in OpenTwins, in addition to an FMI simulation service. The hydraulic model was implemented and tested in an agricultural use case in collaboration with the University of Córdoba, Spain. A machine learning model has also been developed to assess the behavior of an FMI simulation.
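As a rough sketch of the hybrid simulation-plus-ML idea described above, the snippet below runs an FMU with the third-party FMPy package and fits a simple data-driven surrogate to its output; the FMU file name, the output variable, and the surrogate choice are assumptions, not artifacts from the article.

```python
import numpy as np
from fmpy import simulate_fmu              # third-party FMPy package (assumed available)
from sklearn.linear_model import LinearRegression

# Rough sketch of the hybrid idea: run an FMI simulation and use its output to train a
# data-driven surrogate. "pressure_network.fmu" and the variable name "outlet_pressure"
# are hypothetical; they are not artifacts from the article.

result = simulate_fmu("pressure_network.fmu", start_time=0.0, stop_time=3600.0)
time = np.asarray(result["time"])
pressure = np.asarray(result["outlet_pressure"])   # hypothetical FMU output variable

# Train a simple surrogate that predicts the simulated output from time alone.
model = LinearRegression().fit(time.reshape(-1, 1), pressure)
print("surrogate prediction at t=1800 s:", model.predict([[1800.0]])[0])
```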
Citations: 0
A data model for enabling deep learning practices on discovery services of cyber-physical systems
Pub Date : 2024-03-04 DOI: 10.1002/spe.3325
Juan Alberto Llopis, Antonio Jesús Fernández-García, Javier Criado, Luis Iribarne, Antonio Corral
The W3C Web of Things (WoT) is a leading technology that facilitates dynamic information management in the Internet of Things (IoT). In most IoT scenarios, devices and their associated information change continuously, generating a large amount of data. Hence, to correctly use the information and data generated by different devices, a new perspective on managing and ensuring data quality is recommended. Applying Data Science techniques to create the data model can help to manage and ensure data quality by creating a common schema that can be reused in future projects, as well as producing recommendations to facilitate Service Discovery. In addition, because devices change dynamically over time or under specific circumstances, the data model created must be sufficiently abstract to add new instances and to support new requirements that devices should incorporate. The use of models helps to raise the abstraction level, adapting it to the continuous changes of devices by defining instances associated with the data model. This paper proposes two data models: one for Cyber-Physical Systems (CPS) to define device information fetched by a Discovery Service, and another for applying Deep Learning to natural language problems through a Transformer approach. The latter matches user queries expressed as natural language sentences with WoT devices or services. These data models expand the Thing Description model to help find similar CPSs by giving a confidence level to each CPS based on features such as security and the number of times the device was accessed. The results show how the proposed models support the search process of CPSs in syntactic and natural language searches. Furthermore, the four levels of the FAIR principles are validated for the proposed data models, thus ensuring the data's transparency, reproducibility, and reusability.
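The paper's Transformer-based matcher is not shown in the abstract. As a minimal sketch of the underlying idea (embedding a natural-language query and Thing Description summaries in the same vector space and ranking devices by cosine similarity), the snippet below uses the sentence-transformers library; the model name and example descriptions are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Minimal sketch of query-to-device matching by sentence embeddings; the paper's own
# Transformer pipeline is not shown in the abstract. The model choice and the example
# Thing Description summaries below are illustrative assumptions.

descriptions = [
    "Temperature sensor in greenhouse 3, reports Celsius readings every minute",
    "Smart lock on the main laboratory door, supports remote unlock",
    "Soil moisture probe for field A, battery powered",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
query = "which device can tell me how warm the greenhouse is?"

scores = util.cos_sim(model.encode(query), model.encode(descriptions))[0]
best = int(scores.argmax())
print(f"best match (confidence {float(scores[best]):.2f}): {descriptions[best]}")
```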
Citations: 0
A review of the application of virtual and augmented reality in physical and occupational therapy
Pub Date : 2024-03-02 DOI: 10.1002/spe.3323
Agrawal Luckykumar Dwarkadas, Viswanath Talasila, Rama Krishna Challa, Srinivasa K G
This paper presents a review, conducted across five bibliographic databases, of the application of virtual reality (VR) and augmented reality (AR) in physical and occupational therapy (POT). The literature review addresses five research questions and two sub-research questions. A total of 36 relevant studies were selected based on the defined keywords and inclusion-exclusion criteria. The primary motivations for applying VR and AR in POT are that they are accurate, involve higher patient participation, and require less therapy recovery time. The standard software tool used is the Unity 3D game engine, and the common device used is the Oculus Rift HMD. The applications of VR and AR span different VR environments and AR content used in POT. Post-stroke rehabilitation, rehabilitation exercises, pain management, mental and behavioral disorders, and autism in children are the main aspects addressed through the VR and AR environments. The literature indicates that questionnaires, interviews, and observation are the primary instruments for measuring therapy effectiveness. The findings show positive results such as reduced treatment time, nervousness, pain, and hospitalization period, more enjoyable and encouraging therapy, and improved quality of life, supporting the application of VR and AR in POT. This review will be relevant to researchers, VR and AR application designers, doctors, and patients using VR and AR in POT. Further research is recommended that addresses more participants in clinical trials, adds new VR environments and AR content to VR and AR applications, includes follow-up sessions, and increases training sessions when applying VR and AR in POT.
Citations: 0
Fly: Femtolet-based edge-cloud framework for crop yield prediction using bidirectional long short-term memory
Pub Date : 2024-02-29 DOI: 10.1002/spe.3324
Tanushree Dey, Somnath Bera, Bachchu Paul, Debashis De, Anwesha Mukherjee, Rajkumar Buyya
Crop yield prediction is a crucial area in agriculture that has a large impact on the economy of a country. This article proposes a crop yield prediction framework based on the Internet of Things and edge computing. We use a fifth-generation network device referred to as a femtolet as the edge device. The femtolet is a small cell base station with high storage and high processing ability. The sensor nodes collect soil and environmental data, and the collected data are then sent to the femtolet through the microcontrollers. The femtolet retrieves weather-related data from the cloud and then processes the sensor data and weather-related data using Bi-LSTM. After processing the data, the femtolet sends the generated results to the cloud. The user can access the results from the cloud to predict the suitable crop for his or her land. It is observed that the suggested framework provides better accuracy, precision, recall, and F1-score compared to state-of-the-art crop yield prediction frameworks. It is also demonstrated that the use of the femtolet reduces latency by approximately 25% compared to the conventional edge-cloud framework.
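The Bi-LSTM architecture is not detailed in the abstract. The sketch below shows a minimal Keras bidirectional LSTM classifier over fused sensor-and-weather sequences, consistent with the classification metrics reported; the sequence length, feature count, number of crop classes, and layer sizes are assumptions, not the architecture used in the article.

```python
import numpy as np
import tensorflow as tf

# Illustrative Bi-LSTM classifier over fused sensor-and-weather sequences; the sequence
# length, feature count, number of crop classes, and layer sizes are all assumptions.

TIMESTEPS, FEATURES, N_CROPS = 30, 6, 5

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CROPS, activation="softmax"),   # suitable-crop prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data, only to show the expected tensor shapes.
x = np.random.rand(128, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, N_CROPS, size=(128,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0))
```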
Citations: 0
Data privacy protection model based on blockchain in mobile edge computing
Pub Date : 2024-02-23 DOI: 10.1002/spe.3315
Junhua Wu, Xiangmei Bu, Guangshun Li, Guangwei Tian
Mobile edge computing (MEC) technology is widely used for real-time and bandwidth-intensive services, but its underlying heterogeneous architecture may lead to a variety of security and privacy issues. Blockchain provides novel solutions for data security and privacy protection in MEC. However, the scalability of traditional blockchains makes it difficult to meet the requirements of real-time data processing, and the consensus mechanism is not suitable for resource-constrained devices. Moreover, the access control of MEC data needs to be further improved. To address these problems, a data privacy protection model based on a sharding blockchain and access control is designed in this paper. First, a privacy-preserving platform based on a sharding blockchain is designed. A reputation calculation and an improved Proof-of-Work (PoW) consensus mechanism are proposed to accommodate resource-constrained edge devices. An incentive mechanism with rewards and punishments is designed to constrain node behavior. A reward allocation algorithm is proposed to encourage nodes to contribute actively in order to obtain more rewards. Second, an access control strategy using ciphertext-policy attribute-based encryption (CP-ABE) and RSA is designed. A smart contract is deployed to implement the automatic access control function. The InterPlanetary File System is introduced to alleviate the blockchain storage burden. Finally, we analyze the security of the proposed privacy protection model and report statistics on the gas consumed by the access control policy. The experimental results show that the proposed data privacy protection model achieves fine-grained control of access rights and has higher throughput and security than a traditional blockchain.
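The reward allocation algorithm is not given in the abstract. As one plain illustration of a reputation-weighted split with punishment for flagged nodes, the sketch below divides a reward pool in proportion to reputation; the reputation scale and penalty rule are invented for illustration and are not the paper's algorithm.

```python
# Illustrative reputation-weighted reward split (not the paper's algorithm):
# honest nodes share the pool in proportion to reputation; flagged nodes are
# excluded and lose a fixed fraction of reputation. Scales and penalties are assumptions.

def allocate_rewards(nodes: dict[str, float], flagged: set[str],
                     pool: float, penalty: float = 0.2) -> dict[str, float]:
    """Return per-node rewards and apply a reputation penalty to flagged nodes."""
    for name in flagged:
        nodes[name] = max(0.0, nodes[name] * (1 - penalty))   # punish misbehavior
    honest = {n: r for n, r in nodes.items() if n not in flagged}
    total = sum(honest.values()) or 1.0
    return {n: pool * r / total for n, r in honest.items()}

if __name__ == "__main__":
    reputations = {"edge-a": 0.9, "edge-b": 0.6, "edge-c": 0.3}
    print(allocate_rewards(reputations, flagged={"edge-c"}, pool=100.0))
```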
Citations: 0