{"title":"Reinforcement Learning applied to Network Synchronization Systems","authors":"Alessandro Destro, G. Giorgi","doi":"10.1109/MN55117.2022.9887533","DOIUrl":null,"url":null,"abstract":"The design of suitable clock servo is a well-known problem in the context of network-based synchronization systems. Several approaches can be found in the current literature, typically based on PI-controllers or Kalman filtering. These methods require a thorough knowledge of the environment, i.e. clock model, stability parameters, temperature variations, network traffic load, traffic profile and so on. This a-priori knowledge is required to optimize the servo parameters, such as PI constants or transition matrices in a Kalman filter. In this paper we propose instead a clock servo based on the recent Reinforcement Learning approach. In this case a self-learning algorithm based on a deep-Q network learns how to synchronize a local clock only from experience and by exploiting a limited set of predefined actions. Encouraging preliminary results reported in this paper represent a first step to explore the potentiality of the reinforcement learning in synchronization systems typically characterized by an initial lack of knowledge or by a great environmental variability.","PeriodicalId":148281,"journal":{"name":"2022 IEEE International Symposium on Measurements & Networking (M&N)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Measurements & Networking (M&N)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MN55117.2022.9887533","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The design of a suitable clock servo is a well-known problem in the context of network-based synchronization systems. Several approaches can be found in the current literature, typically based on PI controllers or Kalman filtering. These methods require thorough knowledge of the environment, i.e., the clock model, stability parameters, temperature variations, network traffic load, traffic profile, and so on. This a priori knowledge is needed to optimize the servo parameters, such as the PI constants or the transition matrices of a Kalman filter. In this paper we instead propose a clock servo based on reinforcement learning. A self-learning algorithm based on a deep Q-network learns how to synchronize a local clock purely from experience, by exploiting a limited set of predefined actions. The encouraging preliminary results reported in this paper represent a first step toward exploring the potential of reinforcement learning in synchronization systems, which are typically characterized by an initial lack of knowledge or by large environmental variability.
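The paper itself does not include code; the following is a minimal sketch of how a deep Q-network clock servo along these lines could be structured. It assumes a toy clock model (`ToyClockEnv`), an illustrative discrete action set of frequency corrections (`ACTIONS`), a reward that penalizes the residual time offset, and a simplified DQN update without a target network. All names, noise levels, and hyperparameters here are hypothetical and are not taken from the paper.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

# Predefined, discrete set of frequency-correction actions (values are illustrative).
ACTIONS = np.array([-1e-6, -1e-7, 0.0, 1e-7, 1e-6])


class QNet(nn.Module):
    """Small MLP mapping a 2-D state (offset, drift estimate) to one Q-value per action."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class ToyClockEnv:
    """Very rough stand-in for a slave clock: the offset accumulates drift plus noise."""
    def __init__(self):
        self.offset, self.drift = 0.0, 5e-7

    def reset(self):
        self.offset, self.drift = 0.0, 5e-7
        return np.array([self.offset, self.drift], dtype=np.float32)

    def step(self, action_idx):
        self.drift += np.random.normal(0.0, 1e-8)         # oscillator instability
        self.offset += self.drift + ACTIONS[action_idx]   # apply the chosen correction
        self.offset += np.random.normal(0.0, 1e-7)        # timestamping / PDV noise
        reward = -abs(self.offset) * 1e6                  # penalize residual offset (in us)
        return np.array([self.offset, self.drift], dtype=np.float32), reward


def train(episodes=50, steps=200, gamma=0.99, eps=0.1, batch=64):
    env, qnet = ToyClockEnv(), QNet(len(ACTIONS))
    opt = optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)                         # experience replay memory
    for ep in range(episodes):
        s = env.reset()
        for _ in range(steps):
            if random.random() < eps:                     # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                with torch.no_grad():
                    a = int(qnet(torch.as_tensor(s)).argmax())
            s2, r = env.step(a)
            buffer.append((s, a, r, s2))
            s = s2
            if len(buffer) >= batch:                      # one gradient step per env step
                sb, ab, rb, s2b = map(np.array, zip(*random.sample(buffer, batch)))
                q = qnet(torch.as_tensor(sb)).gather(
                    1, torch.as_tensor(ab, dtype=torch.int64).unsqueeze(1)).squeeze(1)
                with torch.no_grad():                     # bootstrap target (no target net in this sketch)
                    target = torch.as_tensor(rb, dtype=torch.float32) \
                             + gamma * qnet(torch.as_tensor(s2b)).max(1).values
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad(); loss.backward(); opt.step()
        print(f"episode {ep}: final |offset| = {abs(env.offset):.2e} s")


if __name__ == "__main__":
    train()
```

The intent of the sketch is only to make the abstract's idea concrete: the agent never sees an explicit clock or network model; it observes the synchronization error, picks one of a few predefined corrections, and improves its policy from the resulting reward, which mirrors the "learning only from experience" claim of the paper.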