
Latest publications: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Big Data Pipeline Scheduling and Adaptation on the Computing Continuum
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00181
Dragi Kimovski, C. Bauer, Narges Mehran, R.-C. Prodan
The Computing Continuum, covering Cloud, Fog, and Edge systems, promises to provide on-demand resource-as-a-service for Internet applications with diverse requirements, ranging from extremely low latency to high-performance processing. However, significant challenges remain in automating the resource management of Big Data pipelines across the Computing Continuum. Resource management and adaptation for Big Data pipelines across the Computing Continuum require significant research effort, because current data processing pipelines are dynamic while traditional resource management strategies are static, leading to inefficient pipeline scheduling and overly complex process deployment. To address these needs, we propose a scheduling and adaptation approach, implemented as a software tool, that lowers the technological barriers to managing Big Data pipelines over the Computing Continuum. The approach separates static scheduling from run-time execution, empowering domain experts with little infrastructure and software knowledge to take an active part in Big Data pipeline adaptation. We conduct a feasibility study using a digital healthcare use case to validate our approach, and illustrate concrete scenarios by demonstrating how the scheduling and adaptation tool and its implementation automate the lifecycle management of a remote patient monitoring, treatment, and care pipeline.
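The separation of static scheduling from run-time adaptation can be illustrated with a small sketch. This is a hypothetical toy model, not the authors' tool: the layer names, latency figures, and function names are all illustrative assumptions.

```python
# Toy sketch of the two phases described above: a static scheduler assigns
# each pipeline step to a Computing Continuum layer before execution, and a
# runtime adapter moves a step one layer closer to the edge when its latency
# bound is violated. All names and thresholds are illustrative.

LAYERS = {"edge": 5, "fog": 25, "cloud": 100}  # assumed round-trip latency (ms)
ORDER = ["cloud", "fog", "edge"]               # from most central to closest

def static_schedule(steps):
    """Pick the most central layer that still meets each step's latency bound."""
    plan = {}
    for name, max_latency_ms in steps:
        for layer in ORDER:
            if LAYERS[layer] <= max_latency_ms:
                plan[name] = layer
                break
        else:
            plan[name] = "edge"  # nothing meets the bound: use the closest layer
    return plan

def adapt(plan, observed_ms, steps):
    """Runtime adaptation: move a step one layer closer if it misses its bound."""
    bounds = dict(steps)
    for step, latency in observed_ms.items():
        if latency > bounds[step] and plan[step] != "edge":
            plan[step] = ORDER[ORDER.index(plan[step]) + 1]
    return plan

steps = [("ingest", 10), ("train", 200), ("alert", 5)]
plan = static_schedule(steps)
# plan == {"ingest": "edge", "train": "cloud", "alert": "edge"}
```

The point of the split is that `static_schedule` runs once, offline, while `adapt` runs repeatedly against observed metrics without the domain expert touching the infrastructure.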
Citations: 0
Efficient Dual Batch Size Deep Learning for Distributed Parameter Server Systems
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00110
Kuan-Wei Lu, Pangfeng Liu, Ding-Yong Hong, Jan-Jan Wu
Distributed machine learning is essential for training deep learning models with large amounts of data and many parameters. Current research on distributed machine learning focuses on using more hardware devices and powerful computing units for fast training. Consequently, model training prefers a larger batch size to accelerate the training speed. However, large-batch training often suffers from poor accuracy due to weak generalization ability. Researchers have devised many sophisticated methods to address this accuracy issue, but these methods usually involve complex mechanisms that make training more difficult. In addition, powerful training hardware for large batch sizes is expensive, and not all researchers can afford it. We propose a dual batch size learning scheme to address the batch size issue. We use the maximum batch size our hardware supports for the best training efficiency we can afford, and additionally introduce a smaller batch size during training to improve the model's generalization ability. Using two different batch sizes in the same training run reduces the testing loss and yields good generalization ability, with only a slight increase in training time. We implement our dual batch size learning scheme and conduct experiments. By increasing the training time by 5%, we can reduce the loss from 1.429 to 1.246 in some cases; by appropriately adjusting the percentage of large and small batch sizes, we can increase the accuracy by 2.8% in some cases. With a 10% increase in training time, we can reduce the loss from 1.429 to 1.193, and after moderately adjusting the number of large and small batches used by the GPUs, the accuracy can increase by 2.9%. Using two different batch sizes in the same training introduces two complications. First, the data processing speeds for the two batch sizes differ, so we must assign the data proportionally to maximize the overall processing speed. Second, since the smaller batches see fewer data due to the overall processing-speed consideration, we proportionally adjust their contribution to the global weight update in the parameter server, using the ratio of data between the small and large batches. Experimental results indicate that this contribution adjustment increases the final accuracy by another 0.9%.
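The contribution adjustment can be sketched as follows. This is an illustrative sketch, not the paper's code: it assumes the parameter server weights each worker's gradient by its share of the data processed before applying the global update.

```python
# Illustrative sketch of the contribution adjustment: gradients from the
# small-batch worker are scaled by the fraction of data it processed relative
# to the large-batch worker before the parameter server applies the update.
import numpy as np

def global_update(weights, grad_large, grad_small, n_large, n_small, lr=0.1):
    """Weight each worker's gradient by its share of the processed data."""
    total = n_large + n_small
    combined = (n_large / total) * grad_large + (n_small / total) * grad_small
    return weights - lr * combined

w = np.zeros(3)
g_big = np.array([1.0, 1.0, 1.0])    # gradient from the large-batch worker
g_small = np.array([3.0, 3.0, 3.0])  # gradient from the small-batch worker
w = global_update(w, g_big, g_small, n_large=96, n_small=32)
# combined = 0.75*1 + 0.25*3 = 1.5 per component, so w = [-0.15, -0.15, -0.15]
```

Without the scaling, the small-batch gradient would contribute as much as the large-batch one despite being computed from a quarter of the data.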
Citations: 0
A Secure and Efficient Fine-Grained Deletion Approach over Encrypted Data
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00176
K. Lavania, Gaurang Gupta, D. Kumar
Documents are a common method of storing information and one of the most conventional forms of expressing ideas. Cloud servers store a user's documents, together with those of thousands of other users, in place of physical storage devices. Indexes corresponding to the documents are also stored at the cloud server so that users can retrieve documents of interest. The index includes keywords, the identities of the documents in which each keyword appears, and Term Frequency-Inverse Document Frequency (TF-IDF) values that reflect the keywords' relevance scores over the dataset. Currently, there are no efficient methods for deleting keywords from millions of documents on cloud servers without compromising user privacy. Most existing approaches use divide-and-conquer style algorithms that split a larger problem into sub-problems and then combine the results, and they do not focus on fine-grained deletion. This work achieves fine-grained deletion of keywords by keeping the size of the TF-IDF matrix constant after processing a deletion query, which comprises the keywords to be deleted. The experimental results of the proposed approach confirm that the precision of ranked search remains very high after deletion, without recalculating the TF-IDF matrix.
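A minimal sketch of the idea, under the assumption that "keeping the TF-IDF matrix size constant" means zeroing out the rows of deleted keywords rather than rebuilding and re-weighting the whole index; the vocabulary and values below are invented for illustration.

```python
# Hypothetical sketch: fine-grained deletion zeroes the deleted keywords'
# rows, so the matrix shape (and the rest of the index) is untouched and no
# TF-IDF recalculation is needed.
import numpy as np

keywords = ["cloud", "index", "privacy", "server"]
# rows = keywords, columns = documents; entries are TF-IDF scores
tfidf = np.array([
    [0.5, 0.0, 0.2],
    [0.1, 0.4, 0.0],
    [0.0, 0.3, 0.6],
    [0.2, 0.2, 0.2],
])

def delete_keywords(matrix, vocab, to_delete):
    """Zero the rows of the deleted keywords; keep the matrix shape constant."""
    out = matrix.copy()
    for kw in to_delete:
        out[vocab.index(kw), :] = 0.0
    return out

pruned = delete_keywords(tfidf, keywords, ["index"])
assert pruned.shape == tfidf.shape  # no recalculation, same dimensions
```

Because only the affected rows change, ranked search over the remaining keywords uses exactly the same scores as before the deletion.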
Citations: 0
A Software Architecture for Developing Distributed Games that Teach Coding and Algorithmic Thinking
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00023
Nearchos Paspallis, Nicos Kasenides, Andriani Piki
This paper presents an architecture for building multiplayer games that aim to teach coding skills and promote algorithmic thinking. The main requirements for the architecture are to enable quick and affordable development and deployment, support commodity client devices, and enable multiplayer, competitive gameplay. Through an evaluation case study, we show how the proposed architecture meets these requirements. At its core, it realizes a distributed model extending the client-server paradigm, in which multiple players can train independently and then compete in a multiplayer mode using a shared, cloud-based server. While the architecture is validated with a specific maze-themed case study game, we argue that the main principles of this approach can be reused across a wider range of multiplayer educational games.
Citations: 1
A Novel Passive L1/L2 Edge Loop Detection Observing MAC addresses of L3 Core Switches
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00160
Motoyuki Ohmori
An L1 and/or L2 loop in a network may cause congestion and communication failures. It is therefore important to detect a loop quickly and accurately and to locate its origin so that the loop can be eliminated. To address this issue, this paper proposes a novel passive loop detection on the edge ports of edge switches where end users' routers or terminals are accommodated. The basic idea of the proposed detection is based on the observation that the MAC address of an L3 core switch should never be observed on an edge port. The MAC address is observed via always-accepting MAC address authentication, which can be deployed easily. The proposed detection can therefore accurately locate the edge port where a loop has formed, and avoids failing to notify a network operator of a loop. In addition, the proposed detection places a lower load on an edge switch than existing active detection methods. Our evaluations on a real campus network show that the proposed method can detect loops even where existing methods cannot.
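The detection rule above reduces to a simple membership check, sketched here as a toy model. The switch names, port names, and MAC addresses are invented for illustration; this is not the paper's implementation.

```python
# Hedged sketch of the rule: if a core-switch MAC address is ever learned on
# an edge port (reported here via the always-accepting MAC authentication
# step), a loop is suspected at that port.

CORE_SWITCH_MACS = {"00:11:22:33:44:55", "00:11:22:33:44:56"}  # illustrative

def check_edge_port(switch, port, learned_mac):
    """Return a loop alert when a core-switch MAC shows up on an edge port."""
    if learned_mac in CORE_SWITCH_MACS:
        return f"loop suspected at {switch} port {port}"
    return None

alert = check_edge_port("edge-sw-07", "Gi0/12", "00:11:22:33:44:55")
```

Because the check only inspects MAC addresses that the authentication step already reports, it adds no probing traffic, which is what makes the method passive.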
Citations: 0
Colored Petri Net Reusing for Service Function Chaining Validation
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00243
Zhenyu Liu, Xuanyu Lou, Yajun Cui, Ying Zhao, Hua Li
With the development of software-defined networking and network function virtualization, network operators can deploy service function chains (SFCs) more flexibly than before to provide network security services according to the security requirements of business systems. At present, most research on verifying the correctness of an SFC checks whether the logical sequence of service functions (SFs) in the SFC is correct before deployment; there is less research on verifying correctness after SFC deployment. Therefore, this paper proposes a method that uses Colored Petri Nets (CPNs) to establish a verification model offline and to verify whether each SF in the SFC is deployed correctly after online deployment. After the SFC deployment is completed, the relevant information is obtained online and fed into the established model for verification. The experimental results show that the SFC correctness verification method proposed in this paper can effectively verify whether each SF in a deployed SFC is deployed correctly. In this process, the correctness of each SF model is verified using the SF models in the model library, and model reuse techniques are discussed preliminarily.
Citations: 0
Data Driven Learning activities within a Digital Learning Environment to study the specialized language of Mathematics
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00032
E. Corino, C. Fissore, M. Marchisio
In teaching, it has become increasingly important to use didactic approaches in which students are active protagonists of their own learning. These approaches can often be supported by technologies, which also enable students to acquire digital skills and provide them with immediate, interactive feedback. In this paper we present recent research activities based on Data Driven Learning methodologies within a Digital Learning Environment integrated with an automatic formative assessment system, proposing activities on the specialized language of Mathematics. Mathematics has always been one of the school disciplines in which students of all grades encounter the greatest difficulties, and numerous studies in Mathematics education have shown that the causes of disciplinary learning difficulties lie in the acquisition, understanding, and management of its language for specific purposes. The research activity involved 4 classes from two Italian secondary schools, for a total of 80 grade-11 students and their teachers. In this paper we study the impact this type of activity has had on students by analyzing their responses to the final satisfaction questionnaire.
Citations: 0
An Agile Framework for Security Requirements: A Preliminary Investigation
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00076
S. Reddivari
Requirements engineering (RE) is a crucial component of a successful software development process. Software engineers routinely embed non-functional requirements (NFRs) such as performance, maintainability, and modifiability into a new software system. However, security, a crucial NFR, is often ignored in the software development process. In this paper we address the importance of security as an NFR in software development. To that end, we propose a lightweight, novel agile framework for analyzing security requirements. We evaluate the proposed framework through a qualitative analysis and determine how it is useful to requirements analysts.
Citations: 0
Broaden Multidisciplinary Data Science Research by an Innovative Cyberinfrastructure Platform
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00074
Dan Lo, Kai Qian, Yong Shi, H. Shahriar, Chung Ng
Data science, machine learning, and distributed computational models have evolved dramatically over the last decade. Cloud and cluster computing is full-fledged and ready for processing big data, and data-driven research and decision-making have become the trend in multiple disciplines. However, very few organizations have experienced the full impact or competitive advantage of their advanced data analytics initiatives despite significant investments in data science and machine learning. A number of issues cause this phenomenon, including the difficulty of maintaining and configuring a cluster, complex transitions from one platform to another, sophisticated programming interfaces to machine learning libraries, network congestion, and, most importantly, a lack of well-trained personnel to sanitize and analyze data. We propose a flexible heterogeneous computing cluster built from off-the-shelf computers with a Blockly programming interface for multidisciplinary users such as cybersecurity analysts, biologists, geologists, musicians, and choreographers.
Citations: 0
A Baseline for Early Classification of Time Series in An Open World
Pub Date : 2022-06-01 DOI: 10.1109/COMPSAC54236.2022.00055
Junwei Lv, Xuegang Hu
Early classification of time series aims to accurately predict the class label of a time series as early as possible, which is significant but challenging in many time-sensitive applications. Existing early classification methods hold a basic closed-world assumption that the classifier must have seen the classes of all test samples. However, new samples that do not belong to any trained class may appear in the real world. In this paper, we first address early classification in an open world and design two detectors to identify which known or unknown class a sample belongs to. Specifically, based on the observed data, an early known-class detector determines the known-class confidence, and an early unknown-class detector determines the unknown-class confidence according to the Minimum Reliable Length (MRL) and the Weibull distribution of each class. Experimental results on real-world datasets demonstrate that the proposed model can identify samples of unknown and known classes accurately and early.
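The abstract names only the ingredients of the two detectors (an MRL gate and a per-class Weibull model), not their exact form. As a rough illustration of the idea, the sketch below gates any decision behind the MRL and derives an unknown-class confidence from the Weibull CDF of a sample's distance to its nearest class prototype. Everything here is an assumption for illustration: the distance-to-prototype score, the pre-fitted shape/scale parameters, and all names (`classify_early`, `unknown_threshold`, etc.) are ours, not the paper's.

```python
import math

def weibull_cdf(x, shape, scale):
    """CDF of the Weibull distribution: P(D <= x) for D ~ Weibull(shape, scale)."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-((x / scale) ** shape))

def classify_early(prefix, class_means, weibull_params, mrl, unknown_threshold=0.95):
    """Return 'wait', 'unknown', or a class label for an observed prefix.

    prefix         -- values of the incoming time series observed so far
    class_means    -- {label: prototype series, at least as long as the prefix}
    weibull_params -- {label: (shape, scale)} fitted to within-class distances
    mrl            -- minimum reliable length before any decision is made
    """
    if len(prefix) < mrl:
        return "wait"  # too early to decide reliably

    # Euclidean distance from the prefix to each class prototype,
    # truncated to the observed length.
    dists = {}
    for label, mean in class_means.items():
        m = mean[:len(prefix)]
        dists[label] = math.sqrt(sum((a - b) ** 2 for a, b in zip(prefix, m)))
    best = min(dists, key=dists.get)

    # Unknown-class confidence: fraction of within-class distances that
    # the observed distance already exceeds under the fitted Weibull.
    shape, scale = weibull_params[best]
    unknown_conf = weibull_cdf(dists[best], shape, scale)
    if unknown_conf > unknown_threshold:
        return "unknown"
    return best
```

For example, with prototypes `{"A": [0.0]*8, "B": [5.0]*8}` and Weibull parameters `(2.0, 1.0)` for both classes, a short prefix returns `"wait"`, a prefix near class A's prototype returns `"A"`, and a prefix far from every prototype returns `"unknown"`.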
Citations: 0
2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)