Title: Dynamic Task Offloading Approach for Task Delay Reduction in the IoT-enabled Fog Computing Systems
Pub Date: 2022-07-25 | DOI: 10.1109/indin51773.2022.9976147
Hoa Tran-Dang, Dong-Seong Kim
Fog computing systems (FCS) have been widely integrated into IoT-based applications to improve quality of service (QoS), such as low service response delay, by performing task computation near the task-generating sources (i.e., IoT devices) on behalf of remote cloud servers. However, achieving this delay reduction remains challenging for offloading strategies owing to the limited resources of fog devices. In addition, a high rate of task requests combined with heavy tasks (i.e., large task sizes) may cause a severe imbalance in the workload distribution among heterogeneous fog devices. To cope with this situation, this paper proposes a dynamic task offloading (DTO) approach that uses the resource states of fog devices to derive the task offloading policy dynamically. Accordingly, a task can be executed either by a single fog device or by multiple fog devices through parallel computation of subtasks, reducing the task execution delay. Extensive simulation analysis shows that, compared with existing solutions, the proposed approach significantly reduces the average delay in systems with a high rate of service requests and a heterogeneous fog environment.
{"title":"Dynamic Task Offloading Approach for Task Delay Reduction in the IoT-enabled Fog Computing Systems","authors":"Hoa Tran-Dang, Dong-Seong Kim","doi":"10.1109/indin51773.2022.9976147","DOIUrl":"https://doi.org/10.1109/indin51773.2022.9976147","url":null,"abstract":"Fog computing systems (FCS) have been widely integrated in the IoT-based applications aiming to improve the quality of services (QoS) such as low response service delay by performing the task computation nearby the task generation sources (i.e., IoT devices) on behalf of remote cloud servers. However, to achieve the objective of delay reduction remains challenging for offloading strategies due to the resource limitation of fog devices. In addition, a high rate of task requests combined with heavy tasks (i.e., large task size) may cause a high imbalance of workload distribution among the heterogeneous fog devices. To cope with the situation, this paper proposes a dynamic task offloading (DTO) approach, which is based on the resource states of fog devices to derive the task offloading policy dynamically. Accordingly, a task can be executed by either a single fog or multiple fog devices through parallel computation of subtasks to reduce the task execution delay. Through the extensive simulation analysis, the proposed approaches show potential advantages in reducing the average delay significantly in the systems with high rate of service requests and heterogeneous fog environment compared with the existing solutions.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114604675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Reinforcement learning approach to implementation of individual controllers in data centre control system
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976179
Y. Berezovskaya, Chen-Wei Yang, V. Vyatkin
Contemporary data centres consume electricity on an industrial scale and require control to improve energy efficiency and maintain high availability. This article proposes the concept and structure of a framework supporting the development and validation of multi-agent control for energy-efficient data centres. The framework comprises two subsystems: the modelling toolbox and the controlling toolbox. This work focuses on an essential component of the controlling toolbox: the individual controller. A reinforcement learning approach is applied to the implementation of the controllers. The server fan controller, named the SF agent, is implemented on top of the framework infrastructure using reinforcement learning, and its energy-saving capability is demonstrated.
{"title":"Reinforcement learning approach to implementation of individual controllers in data centre control system","authors":"Y. Berezovskaya, Chen-Wei Yang, V. Vyatkin","doi":"10.1109/INDIN51773.2022.9976179","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976179","url":null,"abstract":"Contemporary data centres consume electricity on an industrial scale and require control to improve energy efficiency and maintain high availability. The article proposes an idea and structure of the framework supporting development and validation of the multi-agent control for the energy-efficient data centre. The framework comprises two subsystems: the modelling toolbox and the controlling toolbox. This work focuses on such essential components of the controlling toolbox, as an individual controller. The reinforcement learning approach is applied to the controllers’ implementation. The server fan controller, named SF agent, is implemented based on the framework infrastructure and reinforcement learning approach. The agent’s capability of energy-saving is demonstrated.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121729507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Multi-Agent Deep Reinforcement Learning For Real-World Traffic Signal Controls - A Case Study
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976109
Maxim Friesen, Tian Tan, J. Jasperneite, Jie Wang
Increasing traffic congestion leads to significant costs, with poorly configured signaled intersections a common bottleneck and root cause. Traditional traffic signal control (TSC) systems employ rule-based or heuristic methods to decide signal timings, while adaptive TSC solutions use traffic-actuated control logic to adapt to real-time traffic changes. However, such systems are expensive to deploy and often not flexible enough to adequately handle the volatility of today's traffic dynamics. More recently, this problem has become a frontier topic in deep reinforcement learning (DRL), enabling multi-agent DRL approaches that can operate in environments with several agents, such as traffic systems with multiple signaled intersections. However, many of these proposed approaches were validated only on artificial traffic grids. This paper presents a case study in which real-world traffic data from the town of Lemgo in Germany is used to create a realistic road model in VISSIM. A multi-agent DRL setup, comprising multiple independent deep Q-networks, is applied to the simulated traffic network. Traditional rule-based signal controls, modeled in LISA+ and currently deployed at the studied intersections, are integrated into the traffic model and serve as a performance baseline. The performance evaluation indicates a significant reduction of traffic congestion when using the RL-based signal control policy instead of the conventional TSC approach with LISA+. Consequently, this paper reinforces the applicability of RL concepts to TSC engineering by employing a highly realistic traffic model.
{"title":"Multi-Agent Deep Reinforcement Learning For Real-World Traffic Signal Controls - A Case Study","authors":"Maxim Friesen, Tian Tan, J. Jasperneite, Jie Wang","doi":"10.1109/INDIN51773.2022.9976109","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976109","url":null,"abstract":"Increasing traffic congestion leads to significant costs, whereby poorly configured signaled intersections are a common bottleneck and root cause. Traditional traffic signal control (TSC) systems employ rule-based or heuristic methods to decide signal timings, while adaptive TSC solutions utilize a traffic-actuated control logic to increase their adaptability to real-time traffic changes. However, such systems are expensive to deploy and are often not flexible enough to adequately adapt to the volatility of today’s traffic dynamics. More recently, this problem became a frontier topic in the domain of deep reinforcement learning (DRL) and enabled the development of multi-agent DRL approaches that can operate in environments with several agents present, such as traffic systems with multiple signaled intersections. However, many of these proposed approaches were validated using artificial traffic grids. This paper presents a case study, where real-world traffic data from the town of Lemgo in Germany is used to create a realistic road model within VISSIM. A multi-agent DRL setup, comprising multiple independent deep Q-networks, is applied to the simulated traffic network. Traditional rule-based signal controls, modeled in LISA+ and currently employed in the real world at the studied intersections, are integrated into the traffic model and serve as a performance baseline. The performance evaluation indicates a significant reduction of traffic congestion when using the RL-based signal control policy over the conventional TSC approach with LISA+. Consequently, this paper reinforces the applicability of RL concepts in the domain of TSC engineering by employing a highly realistic traffic model.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123905730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Reinforcement Learning based Optimal Tracking Control for Hypersonic Flight Vehicle: A Model Free Approach
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976071
Xiaoxiang Hu, Kejun Dong, Teng-Chieh Yang, Bing Xiao
This paper discusses the tracking control of a hypersonic flight vehicle (HFV) whose nonlinear model is assumed to be completely unknown. The problem is challenging because of the missing prior knowledge, but it is also closer to reality, since an exact model of an HFV is difficult to obtain. A reinforcement learning (RL) based optimal controller is proposed for the tracking control of the HFV. A model-based RL algorithm is first proposed; then, based on it, a model-free algorithm is constructed. To relax the environmental conditions, neural networks (NNs) are adopted to approximate the critic and the actor, and a greedy-policy-based update law for the NNs is derived. The presented RL-based control strategy is applied to the nonlinear HFV model to show its effectiveness.
{"title":"Reinforcement Learning based Optimal Tracking Control for Hypersonic Flight Vehicle: A Model Free Approach","authors":"Xiaoxiang Hu, Kejun Dong, Teng-Chieh Yang, Bing Xiao","doi":"10.1109/INDIN51773.2022.9976071","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976071","url":null,"abstract":"The tracking control of hypersonic flight vehicle (HFV) is discussed in this paper, and the nonlinear model of HFV is assumed to be completely unknown. This problem is surely challenging because of the missing prior knowledge, but is more closer to reality since the exact mode of HFV is difficult to be obtained. A reinforcement learning (RL) based optimal controller is proposed for the tracking control of HFV. A model based RL algorithm is firstly proposed and then, based on this algorithm, a model free algorithm is constructed. For relaxing the environmental conditions, neural network (NN) is adopted for the approximation of Critic and Actor, and then a Greedy Policy based updated learning law for NN is derived. The presented RL based control strategy is carried on the nonlinear model of HFV to show its effectiveness.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122642068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Graph Attention Network for Financial Aspect-based Sentiment Classification with Contrastive Learning
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976125
Zhenhuan Huang, Guansheng Wu, Xiang Qian, Baochang Zhang
Aspect-based Sentiment Classification (ASC) is a challenging task in Natural Language Processing (NLP) and is especially important for fields that require detailed analysis, such as finance. It aims to identify the sentiment polarity of specific aspects in sentences. Besides tweets and posts directly related to finance, news from domains such as restaurants and e-commerce may also indirectly affect a company's stock price. Previous approaches mostly adopted attention-based neural network models to implicitly connect aspects with opinion words for better aspect representations. However, due to the complexity of language and the presence of multiple aspects in a single sentence, these models often confuse the connections. To tackle this problem, we propose a model named GAS-CL, which encodes syntactic structure into aspect representations and refines them with a contrastive loss. Experiments on several datasets confirm that our approach yields better aspect representations and achieves a significant improvement.
{"title":"Graph Attention Network for Financial Aspect-based Sentiment Classification with Contrastive Learning","authors":"Zhenhuan Huang, Guansheng Wu, Xiang Qian, Baochang Zhang","doi":"10.1109/INDIN51773.2022.9976125","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976125","url":null,"abstract":"Aspect-based Sentiment Classification (ASC) task is a challenge in Natural Language Processing (NLP) and is especially important for fields that require detailed analysis like finance. It aims to identify the sentiment polarity of specific aspects in sentences. In addition to tweets and posts directly related to finance, news from such as restaurants and e-commerce may also indirectly affect its stock prices. In previous approaches, attention-based neural network models were mostly adopted to implicitly connect aspects with opinion words for better aspect representations. However, due to the complexity of language and the presence of multiple aspects in a single sentence, these existing models often confuse connections. To tackle this problem, we propose a model named GAS-CL which encodes syntactical structure into aspect representations and refines it with a contrastive loss. Experiments on several datasets confirm that our approach can have better aspect representations and achieve a significant improvement.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114800672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Fundamental Quantitative Investment Theory and Technical System Based On Multi-Factor Models
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976124
Li Zhao, Nathee Naktnasukanjn, Lei Mu, Haichuan Liu, Heping Pan
Along with the continuous development of capital markets and intelligent finance technologies, quantitative investment is entering its most critical and challenging area: fundamental quantitative investment. So far, quantitative investment has focused on the automation of technical analysis and trading, while fundamental investment has remained largely discretionary. This paper provides an overview of quantitative investment and fundamental investment, working towards a fundamental quantitative investment theory and technical system based on multi-factor models. We start by reviewing relevant literature on modern financial quantitative investment and fundamental investment. Then we cover the theoretical basis and development of multi-factor models and their applications to stock selection, involving linear and non-linear relationships, machine learning, deep learning with neural networks, random forests, and Support Vector Machines (SVMs). We explore the frontiers of fundamental quantitative investment and shed light on future research prospects.
{"title":"Fundamental Quantitative Investment Theory and Technical System Based On Multi-Factor Models","authors":"Li Zhao, Nathee Naktnasukanjn, Lei Mu, Haichuan Liu, Heping Pan","doi":"10.1109/INDIN51773.2022.9976124","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976124","url":null,"abstract":"Along with the continuous development of capital markets and intelligent finance technologies, quantitative investment is entering into the most critical and challenging area – fundamental quantitative investment. So far, quantitative investment has been focused on automation of technical analysis and trading, while fundamental investment has been large discretionary. This paper provides an overview of quantitative investment and fundamental investment towards a fundamental quantitative investment theory and technical system based on multi-factor models. We start with reviewing relevant literature on modern financial quantitative investment and fundamental investment. Then we cover the theoretical basis and development of multi-factor models and their applications for stock selection, involving linear and non-linear relationships, machine learning, deep learning with neural networks, random forests, and Support Vector Machines (SVMs). We explore the frontiers of fundamental quantitative investment and shed light on the future research prospects.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128078209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Learning-based Automatic Report Generation for Scheduling Performance in Time-Sensitive Networking
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976085
Lingzhi Li, Qimin Xu, Yanzhou Zhang, Lei Xu, Yingxiu Chen, Cailian Chen
As global industrial upgrading demands higher reliability and real-time performance from data communication, Time-Sensitive Networking (TSN) has been widely studied. Although many TSN scheduling algorithms have been designed, there is no standardized post-scheduling analysis report or comprehensive evaluation of scheduling performance. This paper presents a complete automatic report generation system to analyze scheduling performance. To standardize the various data in TSN-based manufacturing, a uniform auto-generated report model is defined based on the Open Platform Communications Unified Architecture (OPC UA). A learning-based performance evaluation (LPE) method is established to comprehensively analyze the performance of TSN scheduling. In LPE, the analytic hierarchy process (AHP) and the entropy weight method (EWM) are adopted to objectively optimize the weight distribution of the performance indexes, and a convolutional neural network (CNN) is used to obtain the final evaluation result rapidly. Simulations show that the training time of the evaluation method is significantly reduced compared with previous evaluation methods.
{"title":"Learning-based Automatic Report Generation for Scheduling Performance in Time-Sensitive Networking","authors":"Lingzhi Li, Qimin Xu, Yanzhou Zhang, Lei Xu, Yingxiu Chen, Cailian Chen","doi":"10.1109/INDIN51773.2022.9976085","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976085","url":null,"abstract":"As the global industrial upgrading requires higher reliability and real-time performance of data communication, Time-sensitive Networking (TSN) has been widely studied. Al-though many TSN scheduling algorithms are designed, there is no standardized analysis report after scheduling and comprehensive scheduling performance evaluation. This paper presents a complete automatic report generation system to analyze the scheduling performance. To standardize various data in TSN-based manufacturing, a uniform auto-generated report model is defined based on the Open Platform Communication Unified Architecture (OPC UA). A learning-based performance evaluation (LPE) method is established to comprehensively analyze the performance of TSN scheduling. In LPE, analytical hierarchy process (AHP) and entropy weight method (EWM) is adopted to optimize the weight distribution of performance indexes objectively, and convolutional neural network (CNN) is used to get the final evaluation result rapidly. Compared with the previous evaluation methods, simulations show the training time of the evaluation method is significantly reduced.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129209665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Technology-Independent Demonstrator for Testing Industry 4.0 Solutions
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976144
Alejandro López, Lucas Sakurada, Paulo Leitão, O. Casquero, E. Estévez, F. D. L. Prieta, M. Marcos
Cyber-Physical Systems (CPS) are expected to be the main participants in Industry 4.0 (I4.0) solutions. In recent years, many authors have focused their efforts on proposals for the design and implementation of CPS based on different digital technologies. However, the comparative evaluation of these I4.0 solutions is complex, since there is no uniform criterion for defining the test scenarios and the metrics used to assess them. This paper presents a technology-independent CPS demonstrator for benchmarking I4.0 solutions. To that end, a set of test scenarios, Key Performance Indicators (KPIs), and services was defined considering the available automation cell setup. The proposed demonstrator has been used to test an I4.0 solution based on a Multi-agent System (MAS) approach.
{"title":"Technology-Independent Demonstrator for Testing Industry 4.0 Solutions","authors":"Alejandro López, Lucas Sakurada, Paulo Leitão, O. Casquero, E. Estévez, F. D. L. Prieta, M. Marcos","doi":"10.1109/INDIN51773.2022.9976144","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976144","url":null,"abstract":"Cyber-Physical Systems (CPS) are devoted to be the main participants in Industry 4.0 (I4.0) solutions. In recent years, many authors have focused their efforts on making proposals for the design and implementation of CPS based on different digital technologies. However, the comparative evaluation of these I4.0 solutions is complex, since there is no uniform criterion when it comes to defining the test scenarios and the metrics to assess them. This paper presents a technology-independent CPS demonstrator for benchmarking I4.0 solutions. To that end, a set of testing scenarios, Key Performance Indicators and services were defined considering the available automation cells setup. The proposed demonstrator has been used to test an I4.0 solution based on a Multi-agent Systems (MAS) approach.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125736430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Asset Movement Forcasting with the Implied Volatility Surface Analysis Based on SABR Model
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976114
Shao-Jun Xu, Hongxin Huan, Y. Qi, Guoxiang Guo, J. Yen
In the financial field, predicting the future price of an asset has always been a hot topic. There are two main existing methods. One is to model the trend of asset prices directly; this method inevitably lags at the inflection points of the asset price series. The other is to mine market opinion from the internet to predict the future direction of prices; the challenge with this approach is that unstructured data are difficult to process and analyze. Therefore, we propose a method for asset movement prediction based on the SABR model [3]. On the one hand, the market's expectation of asset trends implied in options can be used to solve the lag problem; on the other hand, options data are easy to process and analyze. In this article, we use a neural network model to capture the market's view of the future trend of assets hidden in the stochastic volatility surface generated by the stochastic volatility model, and we establish a mapping to asset prices. The results show that our method can effectively eliminate the lag in price prediction and improve prediction accuracy.
{"title":"Asset Movement Forcasting with the Implied Volatility Surface Analysis Based on SABR Model","authors":"Shao-Jun Xu, Hongxin Huan, Y. Qi, Guoxiang Guo, J. Yen","doi":"10.1109/INDIN51773.2022.9976114","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976114","url":null,"abstract":"In financial field, predicting the future price of an asset has always been a hot topic. There are mainly two existing methods: One is to model the trend of asset prices in price prediction. Therefore, this method inevitably has a lag at the inflection point of the asset sequence. The other is to mine market opinion information from the internet to predict the future direction of prices. The challenge with this approach is that unstructured data processing and analysis is difficult. Therefore, we propose a method for asset movement prediction based on SABR [3] model. On the one hand, the market’s prediction of asset trends implied in options can be used to solve the hysteresis problem. On the other hand, options data is easy to process and analyze. In this article, we try to use a neural network model to capture the market’s view of the future trend of assets hidden in the stochastic volatility surface generated by the stochastic volatility model and establish a mapping relationship with asset prices. The results show that our methods can effectively eliminate the lag of price prediction and improve the accuracy of the prediction.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123609643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Image Processing Based Implied Volatility Surface Analysis for Asset movement Forecasting
Pub Date: 2022-07-25 | DOI: 10.1109/INDIN51773.2022.9976175
Y. Qi, Guoxiang Guo, Yang Wang, Jerome Yen
Nowadays, people are paying growing attention to market movements. With increasing demand for market sentiment analysis and risk management, advanced investment tools are needed to assist high-frequency trading activities. Machine learning, as a fast-growing tool, provides a new perspective on complex problems. Although financial data carries diverse information and is usually regarded as hard to condense into one unified representation, our research fuses image processing methods with high-frequency, implied-volatility-based market sentiment analysis. We implement real-time processing of market data and propose an innovative idea: applying machine learning to regress the market price from two-dimensional discrete financial data treated as images. The proposed method shows satisfying performance on a tick-level S&P 500 option dataset containing around 1.5 million trading records. To further improve the financial image classification and represent the momentum of the implied volatility surface, we also introduce the speed and acceleration of the image sequences. Overall, we reach 61.23% accuracy for implied volatility image classification, and 63.22% and 65.52% accuracy for financial images when velocity and acceleration are considered, respectively.
{"title":"Image Processing Based Implied Volatility Surface Analysis for Asset movement Forecasting","authors":"Y. Qi, Guoxiang Guo, Yang Wang, Jerome Yen","doi":"10.1109/INDIN51773.2022.9976175","DOIUrl":"https://doi.org/10.1109/INDIN51773.2022.9976175","url":null,"abstract":"Nowadays, people are showing growing attention to the market movements. With more demand for market sentiment analysis and risk management, advanced investment tools are needed to assist the high frequency trading activities. Machine learning as a fast-growing tool provides people a new perspective to handle complex problems. Although financial data contains various information and is usually regarded as hard to concentrate into one unified dimension, our research aims to fuse the image processing method with the high frequency implied-volatility-based market sentiment analysis. In this way, our research implemented the real-time processing of the market data and proposes an innovative idea, applying the machine learning method to regress the market price using the two-dimensional discrete financial data, which is traditionally viewed as images. The proposed method shows satisfying performance in testing with tick-level S&P500 option dataset containing around 1.5 million trading record. To go further with the improvement of the economic image classification and represent the momentum factors of the implied volatility surface images, we also introduce the speed and acceleration of sequence images. Overall, we have reached 61.23% accuracy for implied volatility image classification, and 63.22% & 65.52% accuracy for financial image considering velocity and acceleration.","PeriodicalId":359190,"journal":{"name":"2022 IEEE 20th International Conference on Industrial Informatics (INDIN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125276662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}