Title: Criteria to consider in a decision model for collaborative robot (cobot) adoption: A literature review
Authors: A. Silva, A. Simões, Renata Blanc
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976113
Published in: 2022 IEEE 20th International Conference on Industrial Informatics (INDIN)
Abstract: Collaborative robots are increasingly used by manufacturing companies due to their potential to help companies cope with market volatility. Before introducing this technology, companies face a decision phase in which they determine the feasibility of the investment. Decision models for cobot adoption can assist decision-makers in this task, but they require prior identification of decision criteria. Since the existing literature has overlooked this issue, this study aims to provide a list of decision criteria that can be considered in the cobot adoption decision process. These criteria were identified through a literature review of the benefits, advantages, and disadvantages of cobot adoption. Results show that flexibility, competitiveness, ergonomics, quality, safety, space, mobility, ease of programming, technical features, human-robot collaboration, and productivity are important aspects to consider when deciding whether to invest in cobots. The findings of this study provide a better understanding of the decision process for cobot adoption by listing decision criteria along with some indicators, which is an important input for the design of a decision-making process.
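A list of criteria like the one above typically feeds a multi-criteria scoring step. The following is a minimal sketch of such a weighted-sum evaluation, not the paper's model: the criterion names come from the abstract, while the weights and 1-5 scores are hypothetical placeholders a decision-maker would supply.

```python
# Weighted-sum utility over the decision criteria identified in the review.
# Weights and scores below are illustrative, not from the paper.

CRITERIA = [
    "flexibility", "competitiveness", "ergonomics", "quality", "safety",
    "space", "mobility", "ease_of_programming", "technical_features",
    "human_robot_collaboration", "productivity",
]

def adoption_score(weights: dict, scores: dict) -> float:
    """Weighted average: higher means cobot adoption looks more attractive."""
    total_w = sum(weights[c] for c in CRITERIA)
    return sum(weights[c] * scores[c] for c in CRITERIA) / total_w

# Example: equal weights, every criterion rated 4 out of 5.
weights = {c: 1.0 for c in CRITERIA}
scores = {c: 4.0 for c in CRITERIA}
print(adoption_score(weights, scores))  # 4.0
```

In practice the weights would come from stakeholder elicitation (e.g., pairwise comparison), which is where a structured decision model adds value over ad-hoc judgment.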
Title: Configuration of Parallel Real-Time Applications on Multi-Core Processors
Authors: Mohammad Samadi Gharajeh, Tiago Carvalho, L. M. Pinho
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976163
Abstract: Parallel programming models (e.g., OpenMP) are increasingly used to improve the performance of real-time applications on modern processors. Nevertheless, these processors have complex architectures, making it very difficult to understand their timing behavior. The main limitation of most existing work is that it applies static timing analysis to simpler models, or measurement-based analysis on traditional platforms (e.g., single-core) or only to sequential algorithms. How to provide an efficient configuration for allocating a parallel program to the computing units of the processor is still an open challenge. This paper studies the problem of performing timing analysis on complex multi-core platforms, outlining a methodology to understand an application's timing behavior and guide the configuration of the platform. As an example, the paper uses an OpenMP-based program of the Heat benchmark on an NVIDIA Jetson AGX Xavier. The main objectives are to analyze the execution time of OpenMP tasks, determine the best configuration of OpenMP directives, identify critical tasks, and discuss the predictability of the system/application. A Linux perf-based measurement tool, which has been extended by our team, is used to measure each task across multiple executions in terms of total CPU cycles, the number of cache accesses, and the number of cache misses at different cache levels (L1, L2, and L3). The evaluation uses the performance metrics measured by our tool to study the predictability of the system/application.
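The measurement step described above produces per-task samples over many executions. As an illustration only (the paper's extended Linux perf tool is not reproduced here), the sketch below shows how such samples might be aggregated into observed worst-case and average statistics, which is the kind of summary a predictability analysis needs; the sample numbers are invented.

```python
# Aggregate per-task measurements (CPU cycles, cache misses) collected
# across multiple executions into worst-case / average statistics.
from statistics import mean

def summarize(runs):
    """runs: list of dicts {task: {"cycles": int, "cache_misses": int}},
    one dict per execution."""
    summary = {}
    for task in runs[0]:
        cycles = [r[task]["cycles"] for r in runs]
        misses = [r[task]["cache_misses"] for r in runs]
        summary[task] = {
            "wc_cycles": max(cycles),       # observed worst case, not a safe WCET bound
            "avg_cycles": mean(cycles),
            "wc_cache_misses": max(misses),
        }
    return summary

runs = [
    {"task_a": {"cycles": 1000, "cache_misses": 40}},
    {"task_a": {"cycles": 1200, "cache_misses": 55}},
]
print(summarize(runs)["task_a"]["wc_cycles"])  # 1200
```

Note the comment in the code: a measured maximum is only an observed worst case; deriving guaranteed bounds from measurements is exactly the hard part on complex multi-core platforms.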
Title: Control of Battery Storage Systems in Residential Grids: Model-based vs. Data-Driven Approaches
Authors: S. Sajjadi, N. Bazmohammadi, A. Amani, M. Jalili, J. Guerrero, Xinghuo Yu
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976136
Abstract: In this paper, control of Battery Storage Systems (BSS) in power distribution grids with residential consumers, as well as prosumers equipped with rooftop photovoltaic (PV) solar panels and Electric Vehicles (EVs), is addressed. Different features of these Distributed Energy Resources (DERs), such as intermittent behaviour and the mismatch between the time of maximum generation and the time of maximum demand, have caused several issues for electricity distributors in delivering high-quality power. Smart control and scheduling of BSS and EVs is a promising approach to protect the grid against excess power injection from prosumers during daytime while still preserving the benefit household owners obtain from their DERs. In this context, the performance of model-based controllers such as model predictive controllers (MPC) is compared with model-free data-driven controllers (DDC) under different complex scenarios that may occur in a distribution grid. The control objective is to minimize the difference between the net power exchanged with the main grid and the estimated average net load of prosumers. Our study on real consumption data from about 40 residential consumers/prosumers in Victoria, Australia, demonstrates the strength of data-driven control approaches in dealing with the complex environment of power distribution grids in the presence of DERs.
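The control objective stated above can be illustrated with a deliberately simple one-step rule: choose a battery power that keeps the net exchange with the main grid close to the estimated average net load. This greedy sketch is not the paper's MPC or data-driven controller; state-of-charge limits and efficiencies are omitted, and the power limit is a hypothetical parameter.

```python
# Greedy one-step battery dispatch toward the tracking objective
# |net exchange - estimated average net load| -> min. Illustrative only.

def battery_setpoint(net_load, avg_net_load, p_max):
    """Positive = discharge toward the grid/home, negative = charge.

    net_load: household net load this step (consumption minus PV generation), kW.
    avg_net_load: estimated average net load to track, kW.
    p_max: symmetric battery power limit, kW.
    """
    desired = net_load - avg_net_load          # deficit to cover (or surplus to absorb)
    return max(-p_max, min(p_max, desired))    # clip to the battery's capability

# Midday PV surplus: net load -3 kW, average 1 kW -> charge at the 2.5 kW limit.
print(battery_setpoint(-3.0, 1.0, 2.5))  # -2.5
```

An MPC controller would optimize this decision over a horizon using a battery model, while a data-driven controller learns the mapping from measurements; both target the same tracking objective.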
Title: Partial Domain Intelligent Diagnosis Method for Rotor-Bearing System Based on Deep Learning
Authors: Xiaoyue Liu, Cong Peng
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976132
Abstract: Recently, deep transfer learning (TL) has successfully addressed the problem of fault diagnosis under variable operating conditions. Existing methods assume by default that the source and target domains share the same label space, and address the distribution discrepancy between different working conditions by aligning their feature distributions. However, in practical industry settings, it is unlikely that the health conditions of the target-domain data are consistent with those of the source domain. Therefore, industrial applications usually face the more difficult challenge of partial domain diagnosis scenarios. In this paper, a deep partial domain adaptation network based on a balanced alignment constraint strategy is proposed to realize cross-domain diagnosis. The proposed method combines balanced augmentation and subdomain alignment, which effectively facilitates the positive transfer of shared categories. Meanwhile, conditional entropy minimization is introduced to encourage high-confidence predictions on target-domain samples. Experimental results on a rolling bearing dataset verify the effectiveness and feasibility of the proposed method in handling practical partial domain fault diagnosis problems.
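The conditional-entropy term mentioned above is the mean Shannon entropy of the model's class-probability predictions on unlabeled target samples; minimizing it pushes predictions toward high confidence. A pure-Python sketch of just that term (the paper's network and loss weighting are not reproduced):

```python
# Mean Shannon entropy of softmax outputs over a batch of target samples.
# Lower entropy = more confident predictions; the training loss would
# minimize this quantity alongside the alignment terms.
import math

def conditional_entropy(probs):
    """probs: list of per-sample class-probability lists (each sums to 1)."""
    h = 0.0
    for p in probs:
        h -= sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / len(probs)

confident = [[0.99, 0.01], [0.98, 0.02]]
uncertain = [[0.5, 0.5], [0.5, 0.5]]
print(conditional_entropy(confident) < conditional_entropy(uncertain))  # True
```

For a binary prediction, the entropy ranges from 0 (fully confident) up to ln 2 ≈ 0.693 (maximally uncertain), so the gradient of this loss drives target predictions away from the decision boundary.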
Title: Fundamental Quantitative Investment Theory and Technical System Based On Multi-Factor Models
Authors: Li Zhao, Nathee Naktnasukanjn, Lei Mu, Haichuan Liu, Heping Pan
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976124
Abstract: Along with the continuous development of capital markets and intelligent finance technologies, quantitative investment is entering its most critical and challenging area: fundamental quantitative investment. So far, quantitative investment has focused on the automation of technical analysis and trading, while fundamental investment has remained largely discretionary. This paper provides an overview of quantitative investment and fundamental investment, working toward a fundamental quantitative investment theory and technical system based on multi-factor models. We start by reviewing the relevant literature on modern financial quantitative investment and fundamental investment. Then we cover the theoretical basis and development of multi-factor models and their applications to stock selection, involving linear and non-linear relationships, machine learning, deep learning with neural networks, random forests, and Support Vector Machines (SVMs). We explore the frontiers of fundamental quantitative investment and shed light on future research prospects.
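At its simplest, the multi-factor stock-selection idea surveyed above scores each stock as a weighted sum of its factor exposures and ranks the universe. The sketch below shows that linear case only; factor names, loadings, and exposures are hypothetical, and the paper also covers the non-linear and machine-learning variants.

```python
# Linear multi-factor scoring and ranking. All numbers are illustrative.

def factor_score(exposures, loadings):
    """exposures: {factor: standardized exposure}; loadings: {factor: weight}."""
    return sum(loadings[f] * exposures.get(f, 0.0) for f in loadings)

loadings = {"value": 0.4, "momentum": 0.35, "quality": 0.25}
universe = {
    "AAA": {"value": 1.2, "momentum": 0.5, "quality": 0.8},
    "BBB": {"value": -0.3, "momentum": 1.4, "quality": 0.2},
}
ranked = sorted(universe,
                key=lambda s: factor_score(universe[s], loadings),
                reverse=True)
print(ranked[0])  # AAA
```

Fundamental quantitative investment replaces or augments price-derived factors with fundamentals (earnings quality, growth, balance-sheet strength), but the scoring-and-ranking skeleton stays the same.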
Title: A General Intelligent Portfolio Theory with Strength Investing and Sector Rotation in Stock Markets
Authors: Heping Pan
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976180
Abstract: This paper walks through the mathematical evolution of modern portfolio theory and multi-factor models, and advances a General Intelligent Portfolio Theory with underlying applications in stock markets. Following up on the earlier form of the Intelligent Portfolio Theory, the new generalization extends in three dimensions: 1) three forms of intelligent portfolios: multi-asset multi-strategy, multi-strategy multi-asset, and multi-trader; 2) strength investing with momentum rotation as an engine driving dynamic re-selection of assets, strategies, or traders; 3) sector rotation in stock markets as a main form of strength investing and as a paradigm shift from diversification in portfolio theory. Applications to Chinese stock markets and international index futures are demonstrated, with nontrivial performance achieved in tests on historical data.
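The momentum-rotation engine described in dimension 2) can be sketched in a few lines: each period, re-rank the candidates (sectors, strategies, or traders) by trailing momentum and hold the strongest. The lookback window, top-k choice, and price series below are illustrative, not values from the paper.

```python
# Momentum rotation: periodically re-select the strongest candidates.

def momentum(prices, window):
    """Trailing return over the last `window` periods."""
    return prices[-1] / prices[-window - 1] - 1.0

def rotate(sector_prices, window=3, top_k=1):
    """Pick the top_k sectors by trailing momentum."""
    ranked = sorted(sector_prices,
                    key=lambda s: momentum(sector_prices[s], window),
                    reverse=True)
    return ranked[:top_k]

sector_prices = {
    "energy": [100, 101, 103, 108],   # +8% over the window
    "tech":   [100, 102, 101, 100],   # flat over the window
}
print(rotate(sector_prices))  # ['energy']
```

This is the sense in which rotation departs from diversification: capital is concentrated in whatever currently shows strength rather than spread statically across all sectors.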
Title: Less is More: Bitcoin Volatility Forecast Using Feature Selection and Deep Learning Models
Authors: Haiping Wang, Xing Zhou
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976100
Abstract: Utilizing a large set of variables that includes transaction information, public attention, blockchain information, macroeconomic variables, and technical indicators, we compare different deep learning models with baseline methods, such as statistical and machine learning models, on Bitcoin volatility forecasting. We find that the feature selection approach strongly affects model performance. The results show that a simple Long Short-Term Memory (LSTM) model outperforms other models when using an individual feature selection method.
Title: Technology-Independent Demonstrator for Testing Industry 4.0 Solutions
Authors: Alejandro López, Lucas Sakurada, Paulo Leitão, O. Casquero, E. Estévez, F. D. L. Prieta, M. Marcos
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976144
Abstract: Cyber-Physical Systems (CPS) are expected to be the main participants in Industry 4.0 (I4.0) solutions. In recent years, many authors have focused their efforts on proposals for the design and implementation of CPS based on different digital technologies. However, the comparative evaluation of these I4.0 solutions is complex, since there is no uniform criterion for defining the test scenarios and the metrics to assess them. This paper presents a technology-independent CPS demonstrator for benchmarking I4.0 solutions. To that end, a set of testing scenarios, Key Performance Indicators, and services were defined considering the available automation-cell setup. The proposed demonstrator has been used to test an I4.0 solution based on a Multi-Agent System (MAS) approach.
Title: Orthoimage Super-Resolution via Deep Convolutional Neural Networks
Authors: V. Berezovsky, Yunfeng Bai, Ivan Sharshov, R. Aleshko, K. Shoshina, I. Vasendina
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976074
Abstract: Using high-resolution (HR) images collected from UAVs, aircraft, or satellites is a research hotspot in the field of forest area analysis. In practice, HR images are available for only a small number of regions, while for the rest the maximum density varies around 1 px/m. HR image reconstruction is a well-known problem in computer vision. Recently, deep learning algorithms have achieved great success in image processing, so we have introduced them into the processing of orthoimages. At the same time, we noticed that orthoimages generally contain colorful blocks of different sizes. Taking this feature into account, we did not apply the classical algorithms directly, but made some improvements. Experiments show that the effect of the proposed method is equivalent to that of classical algorithms while saving significant time at the preprocessing stage. An approach to forest area analysis, including image segmentation and tree species classification, is proposed. The results of numerical calculations are presented.
Title: Network Calculus-based Routing and Scheduling in Software-defined Industrial Internet of Things
Authors: Luyue Ji, Wen-Ruey Wu, Chaojie Gu, Jichao Bi, Shibo He, Zhiguo Shi
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976177
Abstract: With the emergence of Industry 5.0, it is important to enable efficient cooperation between humans and machines in the Industrial Internet of Things (IIoT). However, achieving real-time and reliable transmission of data flows originating from time-sensitive applications in IIoT remains an open challenge. In this paper, we propose a three-layer software-defined IIoT (SDIIoT) architecture to enable multiple industrial services and flexible network configuration. In particular, when network services change frequently in SDIIoT, the delay of the control plane has a great influence on the end-to-end delay of data flows. To address this issue, we model two different service curves of OpenFlow switches to adapt to dynamic network status, based on Network Calculus (NC). To improve resource efficiency and environmental friendliness, we minimize the total worst-case network cost under strict resource constraints and transmission requirements using a joint flow routing and scheduling algorithm (JFRSA). Our numerical simulation results demonstrate the effectiveness and efficiency of our solution.
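As background for the Network Calculus machinery used above: a flow constrained by a token-bucket arrival curve α(t) = σ + ρt, served by a rate-latency service curve β(t) = R(t − T)⁺, has the classic worst-case delay bound D = T + σ/R (valid when ρ ≤ R). The sketch below computes only this textbook bound; the numbers are illustrative and the paper's specific OpenFlow service curves and JFRSA optimization are not reproduced.

```python
# Classic NC worst-case delay bound for a (sigma, rho)-constrained flow
# through a rate-latency server with rate R and latency T.

def delay_bound(sigma, rho, R, T):
    """D = T + sigma / R, valid only when the sustained rate fits the server."""
    if rho > R:
        raise ValueError("flow rate exceeds service rate: unbounded backlog")
    return T + sigma / R

# Burst 2 Mb, sustained 5 Mb/s, switch serving 10 Mb/s after 1 ms latency.
print(delay_bound(sigma=2.0, rho=5.0, R=10.0, T=0.001))  # 0.201
```

Bounds of this shape are what a routing/scheduling optimizer can sum along candidate paths to certify end-to-end worst-case delays, which is why modeling the switches' service curves accurately matters.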