
Latest publications in IT-Information Technology

Wildfire prediction for California using and comparing Spatio-Temporal Knowledge Graphs
Q2 Computer Science Pub Date : 2023-11-09 DOI: 10.1515/itit-2023-0061
Martin Böckling, Heiko Paulheim, Sarah Detzler
Abstract The frequency of wildfires increases yearly and poses a constant threat to the environment and human beings. Different factors, for example the infrastructure surrounding an area (e.g., campfire sites or power lines), contribute to the occurrence of wildfires. In this paper, we propose using a Spatio-Temporal Knowledge Graph (STKG) based on OpenStreetMap (OSM) data for modeling such infrastructure. Based on that knowledge graph, we use the RDF2vec approach to create embeddings for predicting wildfires, and we align the different vector spaces generated at each temporal step by partial rotation. In an experimental study, we determine the effect of the surrounding infrastructure by comparing different data composition strategies, which involve a prediction based on tabular data, a combination of tabular data and embeddings, and solely embeddings. We show that incorporating the STKG increases the prediction quality of wildfires.
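To make the data composition strategies concrete, the following sketch contrasts tabular features, graph embeddings, and their combination on synthetic data. The feature names, the 32-dimensional embeddings and the random-forest classifier are illustrative assumptions; this is not the authors' actual STKG pipeline or dataset.

```python
# Minimal sketch: comparing tabular wildfire features, RDF2vec-style
# embeddings, and their combination. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_cells = 1000

# Tabular features per grid cell (e.g., temperature, wind speed, humidity).
tabular = rng.normal(size=(n_cells, 3))

# Stand-in for RDF2vec embeddings of surrounding OSM infrastructure;
# in the paper these come from a spatio-temporal knowledge graph.
embeddings = rng.normal(size=(n_cells, 32))

# Binary wildfire label; made to depend on both kinds of features so the
# comparison is meaningful on this toy data.
y = ((tabular[:, 0] + embeddings[:, 0]) > 0).astype(int)

# The three data composition strategies compared in the abstract.
strategies = {
    "tabular only": tabular,
    "embeddings only": embeddings,
    "tabular + embeddings": np.hstack([tabular, embeddings]),
}

for name, X in strategies.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(name, f1_score(y_te, clf.predict(X_te)))
```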
Citations: 0
Machine learning in AI Factories – five theses for developing, managing and maintaining data-driven artificial intelligence at large scale
Q2 Computer Science Pub Date : 2023-11-09 DOI: 10.1515/itit-2023-0028
Wolfgang Hildesheim, Taras Holoyad, Thomas Schmid
Abstract The use of artificial intelligence (AI) is today’s dominating technological trend across all industries. With the maturing of deep learning and other data-driven techniques, AI has over the last decade become an essential component for an increasing number of products and services. In parallel to this development, technological advances have been accelerating the production of novel AI models from large-scale datasets. This global phenomenon has been driving the need for an efficient industrialized approach to develop, manage and maintain AI models at large scale. Such an approach is provided by the state-of-the-art operational concept termed AI Factory, which refers to an infrastructure for AI models and implements the idea of AI as a Service (AIaaS). Moreover, it ensures performance, transparency and reproducibility of AI models at any point in the continuous AI development process. This concept, however, requires not only new technologies and architectures but also new job roles. Here, we discuss current trends, outline requirements and identify success factors for AI Factories. We conclude with recommendations for their successful use in practice as well as perspectives on future developments.
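As a loose illustration of the reproducibility and transparency requirements mentioned above, the sketch below shows a minimal model-registry record of the kind an AI Factory could maintain; all field names are assumptions rather than anything prescribed in the article.

```python
# Illustrative sketch only: a minimal registry record that ties a deployed
# model to the exact data snapshot and code revision that produced it.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    version: str
    training_data_hash: str          # ties the model to an exact dataset snapshot
    code_commit: str                 # ties it to an exact training-code revision
    metrics: dict = field(default_factory=dict)
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ModelRecord(
    name="churn-predictor",          # hypothetical model name
    version="1.4.0",
    training_data_hash="sha256:placeholder",
    code_commit="a1b2c3d",
    metrics={"auc": 0.91},
)
print(asdict(record))
```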
Citations: 0
Machine learning applications
Q2 Computer Science Pub Date : 2023-11-06 DOI: 10.1515/itit-2023-0109
Natanael Arndt, Paul Molitor, Ricardo Usbeck
{"title":"Machine learning applications","authors":"Natanael Arndt, Paul Molitor, Ricardo Usbeck","doi":"10.1515/itit-2023-0109","DOIUrl":"https://doi.org/10.1515/itit-2023-0109","url":null,"abstract":"","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135585270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine learning in sensor identification for industrial systems
Q2 Computer Science Pub Date : 2023-10-09 DOI: 10.1515/itit-2023-0051
Lucas Weber, Richard Lenz
Abstract This paper explores the potential and limitations of machine learning for sensor signal identification in complex industrial systems. The objective is a tool to assist engineers in finding the correct inputs to digital twins and simulations from a set of unlabeled sensor signals. A naive end-to-end machine learning approach is usually not applicable to this task, as it would require many comparable industrial systems to learn from. We present a semi-structured approach that uses observations from the manual classification of time series and combines different algorithms to partition the set of signals into smaller groups of signals that share common characteristics. Using a real-world dataset from several power plants, we evaluate our solution for scaling-invariant measurement identification and functional relationship inference using change-point correlations.
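A minimal sketch of the general idea of grouping unlabeled signals by their change-point behaviour is given below; the rolling-window jump heuristic, thresholds and synthetic signals are assumptions and do not reproduce the authors' method.

```python
# Minimal sketch: group sensor signals by correlating their change-point patterns.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
t = np.arange(500)

# Three synthetic signals: two share a regime change at t=250, one changes at t=400.
s1 = np.where(t < 250, 0.0, 5.0) + rng.normal(scale=0.5, size=t.size)
s2 = np.where(t < 250, 10.0, 2.0) + rng.normal(scale=0.5, size=t.size)
s3 = np.where(t < 400, 1.0, 6.0) + rng.normal(scale=0.5, size=t.size)
signals = np.vstack([s1, s2, s3])

def change_indicator(x, window=20, thresh=2.0):
    """Mark positions where the mean of the next `window` samples differs
    from the mean of the previous `window` samples by more than `thresh`."""
    kernel = np.concatenate([np.full(window, 1.0 / window),
                             np.full(window, -1.0 / window)])
    diff = np.abs(np.convolve(x, kernel, mode="valid"))
    ind = np.zeros(x.size)
    ind[window - 1 : window - 1 + diff.size] = (diff > thresh).astype(float)
    return ind

indicators = np.vstack([change_indicator(s) for s in signals])

# Correlate the change patterns and cluster signals that change together.
corr = np.corrcoef(indicators)
condensed = (1.0 - corr)[np.triu_indices(len(signals), k=1)]
groups = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")
print("signal groups:", groups)   # s1 and s2 end up in the same group
```

In practice a dedicated change-point detection method would likely replace the rolling-window heuristic; the correlation and clustering steps stay the same.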
Citations: 0
Machine learning and cyber security
IF 0.9 Q2 Computer Science Pub Date : 2023-09-04 DOI: 10.1515/itit-2023-0050
S. Karius, Mandy Knöchel, Sascha Heße, Tim Reiprich
Abstract Cyber security has gained a significant amount of perceived importance in discussions of the risks and challenges that lie ahead in the field of information technology. A recent increase in high-profile incidents involving some form of cyber criminality has raised awareness of threats that were formerly often hidden from public perception, e.g., openly carried out attacks against critical infrastructure that accompany traditional forms of warfare and extend them into cyberspace. Add to that the very personal experience of everyday social engineering attacks, which are cast out like a fishing net on a large scale to catch anyone not careful enough to double-check a suspicious email. But as the threat level rises and the attacks become even more sophisticated, so do the methods to mitigate (or at least recognize) them. Of central importance here are methods from the field of machine learning (ML). This article provides a comprehensive overview of applied ML methods in cyber security, illustrates the importance of ML for cyber security, and discusses issues and methods for generating good datasets for the training phase of ML methods used in cyber security. This includes our own work on network traffic classification, the collection of real-world attacks using honeypot systems, and the use of ML to generate artificial network traffic.
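As a toy illustration of one of the topics mentioned, network traffic classification, the following sketch trains a classifier on a handful of flow-level features; the feature names, the synthetic flows and the labelling rule are assumptions, not the authors' setup or data.

```python
# Illustrative sketch: classifying network flows as benign or malicious
# from simple flow-level features on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 2000

flows = pd.DataFrame({
    "duration_s": rng.exponential(2.0, n),
    "bytes_sent": rng.lognormal(8, 1, n),
    "packets":    rng.poisson(20, n),
    "dst_port":   rng.choice([80, 443, 22, 8080, 31337], n),
})
# Toy labelling rule standing in for ground truth, e.g., from honeypot captures.
labels = ((flows["dst_port"] == 31337) | (flows["bytes_sent"] > 20000)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, flows, labels, cv=5, scoring="f1")
print("F1 per fold:", np.round(scores, 3))
```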
Citations: 0
Machine learning in computational literary studies
IF 0.9 Q2 Computer Science Pub Date : 2023-08-25 DOI: 10.1515/itit-2023-0041
Hans Ole Hatzel, Haimo Stiemer, Chris Biemann, Evelyn Gius
Abstract In this article, we provide an overview of machine learning as it is applied in computational literary studies, the field of computational analysis of literary texts and literature related phenomena. We survey a number of scientific publications for the machine learning methodology the scholars used and explain concepts of machine learning and natural language processing while discussing our findings. We establish that besides transformer-based language models, researchers still make frequent use of more traditional, feature-based machine learning approaches; possible reasons for this are to be found in the challenging application of modern methods to the literature domain and in the more transparent nature of traditional approaches. We shed light on how machine learning-based approaches are integrated into a research process, which often proceeds primarily from the non-quantitative, interpretative approaches of non-digital literary studies. Finally, we conclude that the application of large language models in the computational literary studies domain may simplify the application of machine learning methodology going forward, if adequate approaches for the analysis of literary texts are found.
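The sketch below illustrates what a traditional, feature-based approach of the kind the survey refers to can look like: TF-IDF features with a linear classifier on toy text snippets. It is a generic example, not taken from any of the surveyed studies.

```python
# Minimal sketch of a feature-based text classifier (e.g., for genre or
# authorship attribution). Texts and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "It was a dark and stormy night; the rain fell in torrents.",
    "The ship heaved upon the grey sea as the storm gathered.",
    "I compute the gradient of the loss with respect to the weights.",
    "The algorithm converges after a few hundred iterations.",
]
labels = ["narrative", "narrative", "technical", "technical"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Thunder rolled over the hills as the travellers pressed on."]))
```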
Citations: 0
Artificial intelligence for molecular communication
IF 0.9 Q2 Computer Science Pub Date : 2023-08-03 DOI: 10.1515/itit-2023-0029
Max Bartunik, J. Kirchner, Oliver Keszöcze
Abstract Molecular communication is a novel approach for data transmission between miniaturised devices, especially in contexts where electrical signals are to be avoided. The communication is based on sending molecules (or other particles) at nanoscale through a typically fluid channel instead of the “classical” approach of sending electrons over a wire. Molecular communication devices have a large potential in future medical applications as they offer an alternative to antenna-based transmission systems that may not be applicable due to size, temperature, or radiation constraints. The communication is achieved by transforming a digital signal into concentrations of molecules that represent the signal. These molecules are then detected at the other end of the communication channel and transformed back into a digital signal. Accurately modeling the transmission channel is often not possible, which may be due to a lack of data or time-varying parameters of the channel (e.g., the movements of a person wearing a medical device). This makes the process of demodulating the signal (i.e., signal classification) very difficult. Many approaches for demodulation have been discussed in the literature, with one particular approach having tremendous success – artificial neural networks. These artificial networks imitate the decision process in the human brain and are capable of reliably classifying even rather noisy input data. Training such a network relies on a large set of training data. As molecular communication as a technology is still in its early development phase, this data is not always readily available. In this paper, we discuss neural network-based demodulation approaches relying on synthetic simulation data based on theoretical channel models, as well as works that base their network on actual measurements produced by a prototype test bed. In this work, we give a general overview of the field of molecular communication, discuss the challenges in the demodulation process of transmitted signals, and present approaches to these challenges that are based on artificial neural networks.
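The following sketch shows the basic idea of neural-network demodulation on synthetic data: noisy concentration pulses are sampled per symbol and classified back into bits. The channel response, noise level and network size are assumptions, not the prototype test bed described by the authors.

```python
# Minimal sketch: demodulating a binary molecular communication signal with
# a small neural network; the diffusion channel is crudely approximated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_symbols, samples_per_symbol = 4000, 16
t = np.linspace(0.1, 4.0, samples_per_symbol)

# Crude stand-in for a diffusion channel response (fast rise, slow decay).
pulse = t * np.exp(-t)
pulse /= pulse.max()

bits = rng.integers(0, 2, n_symbols)
X = bits[:, None] * pulse[None, :] + rng.normal(scale=0.4,
                                                size=(n_symbols, samples_per_symbol))

X_tr, X_te, y_tr, y_te = train_test_split(X, bits, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("bit error rate:", 1.0 - net.score(X_te, y_te))
```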
Citations: 1
Machine learning in run-time control of multicore processor systems
IF 0.9 Q2 Computer Science Pub Date : 2023-08-02 DOI: 10.1515/itit-2023-0056
F. Maurer, Moritz Thoma, A. Surhonne, Bryan Donyanavard, A. Herkersdorf
Abstract Modern embedded and cyber-physical applications consist of critical and non-critical tasks co-located on multiprocessor systems on chip (MPSoCs). Co-location of tasks results in contention for shared resources, causing interference on the interconnect, processing units, storage, etc. Hence, machine learning-based resource managers must keep even non-critical tasks within certain constraints to ensure proper execution of critical tasks. In this paper we demonstrate and evaluate countermeasures based on backup policies that enable rule-based reinforcement learning to enforce constraints. Detailed experiments reveal the CPUs’ performance degradation caused by different designs, as well as their effectiveness in preventing constraint violations. Further, we exploit the interpretability of our approach to improve the resource manager’s operation by adding designers’ experience to the rule set.
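A minimal sketch of the shielding idea, a backup policy overriding a learned policy whenever an action would violate a constraint, is shown below; the toy environment, the utilisation model and the plain Q-learning loop are assumptions, not the authors' MPSoC setup.

```python
# Illustrative sketch: a learned policy proposes an action, a backup policy
# overrides it if the action would violate a toy utilisation constraint.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 8, 3       # e.g., load levels x {lower, keep, raise} allocation
UTIL_BUDGET = 0.8                # constraint: utilisation must stay below this

Q = np.zeros((n_states, n_actions))

def predicted_utilisation(state, action):
    """Toy model of the utilisation an action would lead to."""
    return 0.1 * state + 0.25 * action

def backup_policy(state):
    """Conservative fallback: always lower the allocation."""
    return 0

def safe_action(state, proposed):
    """Apply the proposed action only if it keeps the constraint satisfied."""
    if predicted_utilisation(state, proposed) > UTIL_BUDGET:
        return backup_policy(state)
    return proposed

# Toy Q-learning loop with the shield in place.
alpha, gamma, eps = 0.1, 0.9, 0.2
state = 0
for step in range(5000):
    greedy = int(np.argmax(Q[state]))
    proposed = rng.integers(n_actions) if rng.random() < eps else greedy
    action = safe_action(state, proposed)
    util = predicted_utilisation(state, action)
    reward = util - 10.0 * (util > UTIL_BUDGET)     # throughput minus violation penalty
    next_state = min(n_states - 1, max(0, state + action - 1))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned greedy actions per state:", Q.argmax(axis=1))
```

The shield keeps constraint handling outside the learned policy, so the learning component can stay simple and its decisions remain auditable.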
Citations: 0
Design of an IoPT approach to create a cocktail robot station with robotic arm: components, interfaces and control
IF 0.9 Q2 Computer Science Pub Date : 2023-06-01 DOI: 10.1515/itit-2023-0006
Nataliia Klievtsova, M. Fuschlberger
Abstract Successful and profitable companies in the 21st century have to integrate and adapt modern technologies to reach productivity and sustainability goals. While large enterprises have the resources (human, knowledge, and technology) to incorporate automation hardware and the required computational means to support automation, for small and medium enterprises (SMEs) finding the necessary resources is much more difficult. Even though the benefits of such modifications are obvious (e.g., gaining new clients, improving current process performance, or breaking into new markets), SMEs often lack the human resources and knowledge to implement automation. With the development of the Internet of Things (IoT), devices, sensors and platforms are becoming more affordable and available for companies and businesses that are not traditionally connected with IT technologies, the industrial sector, etc. In order to simplify automation for SMEs, simple and standardized integration procedures and best-practice examples are important. In this paper we propose the concept and design of a smart bar system based on the Internet of Processes and Things (IoPT) concept, which is able to prepare and serve drinks to clients based on smart features.
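Purely as an illustration of what a simple interface to such a station might look like, the sketch below serialises a drink order as a JSON payload; the message format, field names and topic name are invented for this example and are not taken from the article.

```python
# Hypothetical example: one way a controller could describe a drink order
# to an IoPT-connected bar station, e.g., as a JSON payload on an MQTT topic.
import json
from dataclasses import dataclass, asdict

@dataclass
class PourStep:
    ingredient: str
    amount_ml: int

@dataclass
class DrinkOrder:
    order_id: str
    recipe: list          # ordered pour steps for the robotic arm
    shake: bool = False

order = DrinkOrder(
    order_id="order-0042",
    recipe=[asdict(PourStep("gin", 40)), asdict(PourStep("tonic", 120))],
    shake=False,
)
payload = json.dumps(asdict(order))
print(payload)  # would be published to, e.g., a topic like "bar/station1/orders"
```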
Citations: 0
IoT-enriched event log generation and quality analytics: a case study
IF 0.9 Q2 Computer Science Pub Date : 2023-06-01 DOI: 10.1515/itit-2022-0077
J. Grüger, Lukas Malburg, Ralph Bergmann
Abstract Modern technologies such as the Internet of Things (IoT) are becoming increasingly important in various fields, including business process management (BPM) research. An important area of research in BPM is process mining, which can be used to analyze event logs e.g., to check the conformance of running processes. However, the data ingested in IoT environments often contain data quality issues (DQIs) due to system complexity and sensor heterogeneity, among other factors. To date, however, there has been little work on IoT event logs, DQIs occurring in them, and how to handle them. In this case study, we generate an IoT event log, perform a structured data quality analysis, and describe how we addressed the problems we encountered in pre-processing.
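The sketch below illustrates the flavour of such data quality checks on a small, made-up IoT-enriched event log held in pandas; the columns and the specific checks are assumptions, not the case study's actual log or DQI catalogue.

```python
# Minimal sketch: a few basic data quality probes on an IoT-enriched event
# log with case, activity, timestamp and a sensor reading column.
import pandas as pd

log = pd.DataFrame({
    "case_id":   ["c1", "c1", "c1", "c2", "c2"],
    "activity":  ["pick", "transport", "place", "pick", "place"],
    "timestamp": pd.to_datetime([
        "2023-05-01 10:00:00", "2023-05-01 10:05:00", "2023-05-01 10:03:00",
        "2023-05-01 11:00:00", None,
    ]),
    "temp_c":    [21.5, 21.7, 21.6, 22.0, 22.1],
})

# DQI 1: missing timestamps.
print("events with missing timestamp:", int(log["timestamp"].isna().sum()))

# DQI 2: duplicate events (same case, activity and timestamp).
print("duplicate events:", int(log.duplicated(["case_id", "activity", "timestamp"]).sum()))

# DQI 3: timestamps that go backwards within a case (ordering issues).
out_of_order = (
    log.dropna(subset=["timestamp"])
       .groupby("case_id")["timestamp"]
       .apply(lambda s: int((s.diff() < pd.Timedelta(0)).sum()))
)
print("out-of-order events per case:")
print(out_of_order)
```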
Citations: 1