Polar code performance with Doppler shifts and reflections in Rayleigh fading for Industrial channels
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180204
Y. Samarawickrama, V. Cionca
Industry 4.0 has created a strong pull for wireless communications. Industrial applications have tight communication constraints, placing them in the class of Ultra Reliable, Low Latency Communication (URLLC). Polar codes have recently become a primary contender for satisfying URLLC requirements. Their performance is heavily dependent on the channel state, and with industrial environments presenting extreme conditions and highly dynamic radio channels, obtaining high reliability from polar codes is challenging. Pilot Assisted Transmission (PAT) allows channel estimation and can improve the reliability of polar codes in fading channels. However, a detailed analysis of the impact of the channel dynamics and the PAT scheme on polar code performance is not available. This paper models the industrial radio channel as a Rayleigh channel affected by Doppler shift and delay spread. We evaluate the channel estimation and Bit Error Rate improvements that can be achieved using PAT with a variable pilot interval. We detail the behaviour of polar codes subjected to Doppler shift and delay spread. Finally, we investigate the trade-off between reliability and maximum achievable data rate based on PAT interval and code rate. The existence of a trade-off indicates scope for optimization of PAT parameters depending on channel conditions.
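The interaction between pilot interval, channel dynamics and reliability can be reproduced at small scale. The following is a minimal numpy sketch, not the authors' simulation: it estimates a time-varying Rayleigh channel from periodic pilots and measures the uncoded Bit Error Rate as the pilot interval grows (the polar code itself and the delay spread are omitted, and all parameter values are illustrative).

```python
# Minimal sketch: pilot-assisted channel estimation over a time-varying
# Rayleigh fading channel (uncoded BPSK; the polar code itself is omitted).
# Pilot interval, Doppler correlation and SNR values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_fading(n, fd_ts, rng):
    """Crude first-order AR approximation of a time-correlated Rayleigh channel.
    fd_ts = normalised Doppler (Doppler shift x symbol period)."""
    rho = np.clip(1.0 - (np.pi * fd_ts) ** 2, 0.0, 1.0)   # symbol-to-symbol correlation
    h = np.empty(n, dtype=complex)
    h[0] = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    for k in range(1, n):
        w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        h[k] = rho * h[k - 1] + np.sqrt(1 - rho ** 2) * w
    return h

def ber_with_pat(n_sym=20000, pilot_interval=8, snr_db=10.0, fd_ts=0.01):
    bits = rng.integers(0, 2, n_sym)
    x = 1.0 - 2.0 * bits                        # BPSK mapping: 0 -> +1, 1 -> -1
    pilots = np.arange(0, n_sym, pilot_interval)
    x[pilots] = 1.0                             # known pilot symbols
    h = rayleigh_fading(n_sym, fd_ts, rng)
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
    y = h * x + noise_std * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    # Least-squares channel estimate at pilots, linear interpolation in between
    h_hat_p = y[pilots] / x[pilots]
    h_hat = (np.interp(np.arange(n_sym), pilots, h_hat_p.real)
             + 1j * np.interp(np.arange(n_sym), pilots, h_hat_p.imag))
    data = np.setdiff1d(np.arange(n_sym), pilots)
    bits_hat = ((y[data] * np.conj(h_hat[data])).real < 0).astype(int)
    return np.mean(bits_hat != bits[data])

# Larger pilot intervals raise the data rate but degrade tracking of a fast channel.
for interval in (4, 8, 16, 32):
    print(interval, ber_with_pat(pilot_interval=interval))
```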
{"title":"Polar code performance with Doppler shifts and reflections in Rayleigh fading for Industrial channels","authors":"Y. Samarawickrama, V. Cionca","doi":"10.1109/ISSC49989.2020.9180204","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180204","url":null,"abstract":"Industry 4.0 has created a strong pull for wireless communications. Industrial applications have tight communication constraints putting them in the class of Ultra Reliable, Low Latency Communication (URLLC). Polar codes have recently become a primary contender for satisfying URLLC requirements. Their performance is heavily dependent on the channel state and with industrial environments presenting extreme conditions with highly dynamic radio channels, obtaining high reliability from polar codes is challenging. Pilot Assisted Transmission allows channel estimation and can improve the reliability of polar codes in fading channels. However a detailed analysis of the impact of the channel dynamics and PAT scheme on the polar code performance is not available. This paper models the industrial radio channel as a Rayleigh channel affected by Doppler shift and delay spread. We evaluate the channel estimation and Bit Error Rate improvements that can be achieved using PAT with variable pilot interval. We detail the behaviour of polar codes subjected to Doppler shift and delay spread. Finally, we investigate the trade-off between reliability and maximum achievable data rate based on PAT interval and code rate. The existence of a trade-off indicates scope for optimization of PAT parameters depending on channel conditions.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129150538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative Augmented Dataset and Annotation Frameworks for Artificial Intelligence (GADAFAI)
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180200
P. Corcoran, Hossein Javidnia, Joseph Lemley, Viktor Varkarakis
Recent advances in Artificial Intelligence (AI), particularly in the field of computer vision, have been driven by the availability of large public datasets. However, as AI begins to move into embedded devices, there will be a growing need for tools to acquire and re-acquire datasets from specific sensing systems to train new device models. In this paper, a roadmap is introduced for a data-acquisition framework that can build the large synthetic datasets required to train AI systems from small seed datasets. A key element in justifying such a framework is validation of the generated dataset, and example results are shown from preliminary work on biometric (facial) datasets.
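The paper does not prescribe a particular validation metric; as one hedged illustration, a Fréchet-style distance between feature statistics of the seed dataset and the generated dataset could be used to check that the synthetic data stays close to the seed distribution (the feature arrays below are placeholders, not results from the paper).

```python
# Illustrative sketch of one possible dataset-validation step: a Frechet-style
# distance between feature statistics of a small seed dataset and a large
# generated dataset. The feature arrays are synthetic placeholders.
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b).real      # matrix square root of the covariance product
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(0)
seed_feats = rng.normal(0.0, 1.0, size=(200, 64))         # e.g. embeddings of seed faces
synthetic_feats = rng.normal(0.1, 1.1, size=(5000, 64))   # embeddings of generated faces
print(frechet_distance(seed_feats, synthetic_feats))      # smaller = closer to the seed distribution
```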
{"title":"Generative Augmented Dataset and Annotation Frameworks for Artificial Intelligence (GADAFAI)","authors":"P. Corcoran, Hossein Javidnia, Joseph Lemley, Viktor Varkarakis","doi":"10.1109/ISSC49989.2020.9180200","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180200","url":null,"abstract":"Recent Advances in Artificial Intelligence (AI), particularly in the field of compute vision, have been driven by the availability of large public datasets. However, as AI begins to move into embedded devices there will be a growing need for tools to acquire and re-acquire datasets from specific sensing systems to train new device models. In this paper, a roadmap in introduced for a data-acquisition framework that can build the large synthetic datasets required to train AI systems from small seed datasets. A key element to justify such a framework is the validation of the generated dataset and example results are shown from preliminary work on biometric (facial) datasets.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121566841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Countermeasure Knowledge for Intrusion Response Systems
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180198
Kieran Hughes, K. Mclaughlin, S. Sezer
Significant advancements in Intrusion Detection Systems have led to improved alerts. However, Intrusion Response Systems, which aim to respond to these alerts automatically, are a research area that is not yet advanced enough to benefit from full automation. In Security Operations Centres, analysts can implement countermeasures using knowledge and past experience to adapt to new attacks. Attempts at automated Intrusion Response Systems fall short when a new attack occurs for which the system has no specific knowledge or effective countermeasure to apply, sometimes resorting to overkill countermeasures such as restarting services or blocking ports and IPs. In this paper, a countermeasure standard is proposed which enables countermeasure intelligence sharing, and automated countermeasure adoption and execution by an Intrusion Response System. An attack scenario is created on an emulated network using the Common Open Research Emulator, where an insider attack attempts to exploit a buffer overflow on an Exim mail server. Experiments demonstrate that an Intrusion Response System with dynamic countermeasure knowledge can stop attacks that would otherwise succeed against a static, predefined countermeasure approach.
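The countermeasure standard itself is defined in the paper; the sketch below is only a hypothetical illustration of what a shareable, machine-readable countermeasure record and its selection by an Intrusion Response System might look like. Field names, actions and identifiers are assumptions, not the proposed standard.

```python
# Hypothetical illustration of a shareable countermeasure record and how an
# Intrusion Response System might select one for an incoming alert.
# Field names, actions and identifiers are assumptions, not the paper's standard.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class Countermeasure:
    attack_id: str                                       # attack/vulnerability the measure targets
    description: str
    actions: List[str] = field(default_factory=list)     # ordered response steps
    min_severity: int = 5                                 # only apply above this alert severity

def select_countermeasure(alert: dict, catalogue: List[Countermeasure]) -> Optional[Countermeasure]:
    """Return the first shared countermeasure matching the alert, if any."""
    for cm in catalogue:
        if cm.attack_id == alert.get("attack_id") and alert.get("severity", 0) >= cm.min_severity:
            return cm
    return None   # no match: escalate to an analyst instead of a blanket port/IP block

catalogue = [
    Countermeasure(
        attack_id="exim-buffer-overflow",                 # hypothetical identifier
        description="Contain an Exim buffer-overflow exploitation attempt",
        actions=["snapshot exim process state", "apply exim hotfix", "restart exim service"],
        min_severity=7,
    )
]

alert = {"attack_id": "exim-buffer-overflow", "severity": 9, "src_host": "10.0.0.23"}
chosen = select_countermeasure(alert, catalogue)
print(json.dumps(asdict(chosen), indent=2) if chosen else "escalate to analyst")
```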
{"title":"Dynamic Countermeasure Knowledge for Intrusion Response Systems","authors":"Kieran Hughes, K. Mclaughlin, S. Sezer","doi":"10.1109/ISSC49989.2020.9180198","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180198","url":null,"abstract":"Significant advancements in Intrusion Detection Systems has led to improved alerts. However, Intrusion Response Systems which aim to automatically respond to these alerts, is a research area which is not yet advanced enough to benefit from full automation. In Security Operations Centres, analysts can implement countermeasures using knowledge and past experience to adapt to new attacks. Attempts at automated Intrusion Response Systems fall short when a new attack occurs to which the system has no specific knowledge or effective countermeasure to apply, even leading to overkill countermeasures such as restarting services and blocking ports or IPs. In this paper, a countermeasure standard is proposed which enables countermeasure intelligence sharing, automated countermeasure adoption and execution by an Intrusion Response System. An attack scenario is created on an emulated network using the Common Open Research Emulator, where an insider attack attempts to exploit a buffer overflow on an Exim mail server. Experiments demonstrate that an Intrusion Response System with dynamic countermeasure knowledge can stop attacks that would otherwise succeed with a static predefined countermeasure approach.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132424713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Supervised Machine Learning Techniques for Channel Identification in Wireless Sensor Networks
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180209
George D. O’Mahony, Philip J. Harris, Colin C. Murphy
Knowledge of the wireless channel is pivotal for wireless communication links, but it varies for multiple reasons. The radio spectrum changes with the number of connected devices, demand, packet size and the services in operation, while fading levels, obstacles, path losses and spurious malicious or non-malicious interference fluctuate in the physical environment. Typically, these channels fall into the time-series class of data science problems, as the primary data points are measured over a period of time. In wireless sensor networks, which regularly provide the device-to-access-point communication links in Internet of Things applications, determining the wireless channel in operation permits channel access. Generally, a clear channel assessment is performed to determine whether a wireless transmission can be executed, an approach with limitations. In this study, received in-phase (I) and quadrature-phase (Q) samples are collected from the wireless channel using a software-defined radio (SDR) based procedure and analyzed directly using Python and MATLAB. Features are extracted from the probability density function and statistical analysis of the received I/Q samples and used as training data for the two chosen machine learning methods. Data is collected and produced over wires, to avoid interfering with other networks, using SDRs and Raspberry Pi embedded devices that utilize available open-source libraries. Data is examined for the signal-free (noise), legitimate signal (ZigBee) and jamming signal (continuous wave) cases in a live laboratory environment. Support Vector Machine and Random Forest models are each designed and compared as channel identifiers for these signal types.
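The classification stage can be illustrated with a short scikit-learn sketch: statistical features are computed from I/Q samples and fed to SVM and Random Forest identifiers. The synthetic I/Q generator below merely stands in for the SDR captures described above, and the feature set is illustrative rather than the one used in the paper.

```python
# Minimal sketch of the classification stage: statistical features from I/Q
# samples, then SVM and Random Forest channel identifiers. The I/Q generator is
# a stand-in for the SDR capture; the features and classes are illustrative.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def synth_iq(kind, n=1024):
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    if kind == "noise":                      # signal-free channel
        return noise
    if kind == "zigbee":                     # stand-in for a QPSK-like legitimate burst
        sym = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), n)
        return 0.8 * sym + 0.3 * noise
    t = np.arange(n)                         # continuous-wave jammer
    return np.exp(1j * 2 * np.pi * 0.1 * t) + 0.3 * noise

def features(iq):
    mag = np.abs(iq)
    return [mag.mean(), mag.std(), stats.skew(mag), stats.kurtosis(mag),
            np.var(iq.real), np.var(iq.imag)]

labels = {"noise": 0, "zigbee": 1, "cw_jam": 2}
X = np.array([features(synth_iq(k)) for k in labels for _ in range(300)])
y = np.repeat(list(labels.values()), 300)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for model in (SVC(kernel="rbf", C=1.0), RandomForestClassifier(n_estimators=100)):
    model.fit(Xtr, ytr)
    print(type(model).__name__, model.score(Xte, yte))
```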
{"title":"Investigating Supervised Machine Learning Techniques for Channel Identification in Wireless Sensor Networks","authors":"George D. O’Mahony, Philip J. Harris, Colin C. Murphy","doi":"10.1109/ISSC49989.2020.9180209","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180209","url":null,"abstract":"Knowledge of the wireless channel is pivotal for wireless communication links but varies for multiple reasons. The radio spectrum changes due to the number of connected devices, demand, packet size or services in operation, while fading levels, obstacles, path losses, and spurious (non-)malicious interference fluctuate in the physical environment. Typically, these channels are applicable to the time series class of data science problems, as the primary data points are measured over a period. In the case of wireless sensor networks, which regularly provide the device to access point communication links in Internet of Things applications, determining the wireless channel in operation permits channel access. Generally, a clear channel assessment is performed to determine whether a wireless transmission can be executed, which is an approach containing limitations. In this study, received in-phase (I) and quadrature-phase (Q) samples are collected from the wireless channel using a software-defined radio (SDR) based procedure and directly analyzed using python and Matlab. Features are extracted from the probability density function and statistical analysis of the received I/Q samples and used as the training data for the two chosen machine learning methods. Data is collected and produced over wires, to avoid interfering with other networks, using SDRs and Raspberry Pi embedded devices, which utilize available open-source libraries. Data is examined for the signal-free (noise), legitimate signal (ZigBee) and jamming signal (continuous wave) cases in a live laboratory environment. Support vector machine and Random Forest models are each designed and compared as channel identifiers for these signal types.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131565493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyber-security considerations for domestic-level automated demand-response systems utilizing public-key infrastructure and ISO/IEC 20922
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180208
John Hastings, D. Laverty, A. Jahic, D. Morrow, P. Brogan
In this paper, the authors present MQTT (ISO/IEC 20922), coupled with a Public-Key Infrastructure (PKI), as highly suited to the secure and timely delivery of the command and control messages required in a low-latency Automated Demand Response (ADR) system that makes use of domestic-level electrical loads connected to the Internet. Several use cases for ADR are introduced and relevant security considerations are discussed, further emphasizing the suitability of the proposed infrastructure. The authors then describe their testbed platform for testing ADR functionality, and finally discuss the next steps towards bringing these technologies to maturity.
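As a rough illustration of the proposed transport (not the authors' testbed code), an ADR command can be published over MQTT with mutual TLS using PKI-issued certificates. The broker address, topic, payload fields and certificate paths below are placeholders, and the paho-mqtt 1.x client API is assumed.

```python
# Sketch of an ADR command publisher using MQTT (ISO/IEC 20922) over TLS with
# X.509 client certificates issued by a PKI. Broker address, topic and
# certificate paths are placeholders, not values from the paper's testbed.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="adr-controller")   # paho-mqtt 1.x constructor;
                                                    # 2.x also needs a CallbackAPIVersion argument
client.tls_set(ca_certs="ca.crt",                   # PKI root of trust
               certfile="controller.crt",           # controller's certificate
               keyfile="controller.key")
client.connect("broker.example.org", 8883)          # MQTT-over-TLS port

command = {"load_id": "heatpump-42", "action": "shed", "duration_s": 900}
client.publish("adr/commands/heatpump-42", json.dumps(command), qos=1)
client.disconnect()
```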
{"title":"Cyber-security considerations for domestic-level automated demand-response systems utilizing public-key infrastructure and ISO/IEC 20922","authors":"John Hastings, D. Laverty, A. Jahic, D. Morrow, P. Brogan","doi":"10.1109/ISSC49989.2020.9180208","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180208","url":null,"abstract":"In this paper, the Authors present MQTT (ISO/IEC 20922), coupled with Public-key Infrastructure (PKI) as being highly suited to the secure and timely delivery of the command and control messages required in a low-latency Automated Demand Response (ADR) system which makes use of domestic-level electrical loads connected to the Internet. Several use cases for ADR are introduced, and relevant security considerations are discussed; further emphasizing the suitability of the proposed infrastructure. The authors then describe their testbed platform for testing ADR functionality, and finally discuss the next steps towards getting these kinds of technologies to the next stage.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127953016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-step ahead wind power forecasting for Ireland using an ensemble of VMD-ELM models
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180155
J. M. González-Sopeña, V. Pakrashi, Bidisha Ghosh
Accurate wind power forecasts are a key tool for the correct operation of the grid and the energy trading market, particularly in regions with a large wind resource such as Ireland, where wind energy comprises a large share of the electricity generated. In this paper, a multi-step-ahead wind power forecasting ensemble based on variational mode decomposition (VMD) and extreme learning machines (ELM) is applied to Irish wind farms. Data from two wind farms in different locations are used to show the suitability of the model for Ireland. The results show that the full ensemble of models provides more reliable and robust forecasts across several prediction horizons, with an improvement of between 7% and 22% with respect to a single model. Additionally, the ensemble shows a low systematic error regardless of the prediction horizon.
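The base learner of such an ensemble, the extreme learning machine, is simple enough to sketch in a few lines of numpy. The example below assumes the VMD step has already produced a mode series; the data is synthetic and the hyperparameters are illustrative, not those of the paper.

```python
# Minimal numpy sketch of an extreme learning machine (ELM) regressor, the base
# learner of a VMD-ELM ensemble. The VMD modes are assumed precomputed; the
# series below is synthetic and the hyperparameters are illustrative.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden-layer output
        self.beta = np.linalg.pinv(H) @ y                             # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy example: predict the next value of a (synthetic) mode series from 6 lags.
rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000)
lags = 6
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
split = int(0.8 * len(X))
model = ELMRegressor(n_hidden=64).fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"test RMSE: {rmse:.4f}")
```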
{"title":"Multi-step ahead wind power forecasting for Ireland using an ensemble of VMD-ELM models","authors":"J. M. González-Sopeña, V. Pakrashi, Bidisha Ghosh","doi":"10.1109/ISSC49989.2020.9180155","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180155","url":null,"abstract":"Accurate wind power forecasts are a key tool for the correct operation of the grid and the energy trading market, particularly in regions with a large wind resource as Ireland, where wind energy comprises a large share of the electricity generated. A multi-step ahead wind power forecasting ensemble of models based on variational mode decomposition and extreme learning machines is employed in this paper to be applied for Irish wind farms. Data from two wind farms placed in different locations are used to show the suitability of the model for Ireland. The results show that the use of this full ensemble of models provides more reliable and robust forecasts for several prediction horizons and an improvement between 7% and 22% with respect to a single model. Additionally, the ensemble shows a low systematic error regardless of the prediction horizon.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116087362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Implementation of APTs on PTP Time Synchronisation Networks
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180157
Waleed Alghamdi, M. Schukat
The Precision Time Protocol (PTP) is essential for many time-sensitive and time-aware applications. However, it was never designed for security, and despite various approaches to harden this protocol against manipulation, it is still prone to cyber-attacks. Advanced Persistent Threats (APTs) are of particular concern, as they may stealthily, and over extended periods of time, manipulate computer clocks that rely on the accurate functioning of this protocol. Simulating such attacks is difficult, as it requires firmware manipulation of network and PTP infrastructure components. Therefore, this paper proposes and demonstrates a programmable Man-in-the-Middle (pMitM) and a programmable injector (pInj) device that allow the implementation of a variety of attacks, enabling security researchers to quantify the impact of APTs on time synchronisation.
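A small worked example (not the pMitM/pInj firmware) shows why an asymmetric delay injected by a man-in-the-middle skews the standard PTP offset computation and therefore the victim's clock; the timestamps and delay values are illustrative.

```python
# Worked example: how an asymmetric delay injected by a man-in-the-middle skews
# the standard two-step PTP offset calculation. Values are illustrative.
def ptp_offset(t1, t2, t3, t4):
    """Offset of slave clock from master, from one Sync/Delay_Req exchange."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

# Symmetric 50 us path delay, slave clock perfectly aligned:
t1, path = 0.0, 50e-6
t2 = t1 + path                 # Sync arrives at slave
t3 = t2 + 10e-6                # slave sends Delay_Req shortly after
t4 = t3 + path                 # Delay_Req arrives at master
print(ptp_offset(t1, t2, t3, t4))          # 0.0: clocks appear synchronised

# The attacker silently adds 200 us to master->slave traffic only:
attack = 200e-6
t2_a = t1 + path + attack
t3_a = t2_a + 10e-6
t4_a = t3_a + path
print(ptp_offset(t1, t2_a, t3_a, t4_a))    # +100 us: half the injected asymmetry
```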
{"title":"Practical Implementation of APTs on PTP Time Synchronisation Networks","authors":"Waleed Alghamdi, M. Schukat","doi":"10.1109/ISSC49989.2020.9180157","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180157","url":null,"abstract":"The Precision Time Protocol is essential for many time-sensitive and time-aware applications. However, it was never designed for security, and despite various approaches to harden this protocol against manipulation, it is still prone to cyber-attacks. Here Advanced Persistent Threats (APT) are of particular concern, as they may stealthily and over extended periods of time manipulate computer clocks that rely on the accurate functioning of this protocol. Simulating such attacks is difficult, as it requires firmware manipulation of network and PTP infrastructure components. Therefore, this paper proposes and demonstrates a programmable Man-in-the-Middle (pMitM) and a programmable injector (pInj) device that allow the implementation of a variety of attacks, enabling security researchers to quantify the impact of APTs on time synchronisation.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114508401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methodology for Building Synthetic Datasets with Virtual Humans
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180188
Shubhajit Basak, Hossein Javidnia, Faisal Khan, R. Mcdonnell, M. Schukat
Recent advances in deep learning methods have increased the performance of face detection and recognition systems. The accuracy of these models relies on the range of variation provided in the training data. Creating a dataset that represents all variations of real-world faces is not feasible as the control over the quality of the data decreases with the size of the dataset. Repeatability of data is another challenge as it is not possible to exactly recreate ‘real-world’ acquisition conditions outside of the laboratory. In this work, we explore a framework to synthetically generate facial data to be used as part of a toolchain to generate very large facial datasets with a high degree of control over facial and environmental variations. Such large datasets can be used for improved, targeted training of deep neural networks. In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities, providing full control over image variations such as pose, illumination, and background.
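As a hedged illustration of the kind of variation sweep such a toolchain could drive, the sketch below enumerates pose, illumination and background combinations per identity. `render_face` is a hypothetical stand-in for the 3D morphable model renderer, and the parameter grids are assumptions rather than the settings used in the paper.

```python
# Illustrative variation sweep: each synthetic identity rendered under controlled
# pose, illumination and background combinations. `render_face` is a hypothetical
# stand-in for the 3D-morphable-model renderer; the grids are assumptions.
import itertools

identities = [f"id_{i:03d}" for i in range(100)]
yaw_deg = [-45, -20, 0, 20, 45]
pitch_deg = [-15, 0, 15]
illumination = ["ambient", "left_key", "right_key", "backlit"]
backgrounds = ["office", "outdoor", "plain"]

def render_face(identity, yaw, pitch, light, background):
    # Placeholder: the real toolchain would invoke the renderer here and write an
    # image plus its ground-truth annotation (identity, pose, illumination, ...).
    return f"{identity}_y{yaw}_p{pitch}_{light}_{background}.png"

jobs = [render_face(i, y, p, l, b)
        for i, y, p, l, b in itertools.product(identities, yaw_deg, pitch_deg,
                                                illumination, backgrounds)]
print(len(jobs), "images, e.g.", jobs[0])   # 100 x 5 x 3 x 4 x 3 = 18000 renders
```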
{"title":"Methodology for Building Synthetic Datasets with Virtual Humans","authors":"Shubhajit Basak, Hossein Javidnia, Faisal Khan, R. Mcdonnell, M. Schukat","doi":"10.1109/ISSC49989.2020.9180188","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180188","url":null,"abstract":"Recent advances in deep learning methods have increased the performance of face detection and recognition systems. The accuracy of these models relies on the range of variation provided in the training data. Creating a dataset that represents all variations of real-world faces is not feasible as the control over the quality of the data decreases with the size of the dataset. Repeatability of data is another challenge as it is not possible to exactly recreate ‘real-world’ acquisition conditions outside of the laboratory. In this work, we explore a framework to synthetically generate facial data to be used as part of a toolchain to generate very large facial datasets with a high degree of control over facial and environmental variations. Such large datasets can be used for improved, targeted training of deep neural networks. In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities, providing full control over image variations such as pose, illumination, and background.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127141344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduced Complexity Approach for Uplink Rate Trajectory Prediction in Mobile Networks
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180156
G. Nikolov, M. Kuhn, A. Mcgibney, Bernd-Ludwig Wenning
This paper presents a novel data rate prediction scheme. By combining online data rate estimation techniques with Long Short-Term Memory (LSTM) Neural Networks (NN), we are able to forecast the near-future behaviour of the mobile channel. The prediction scheme is evaluated on data sets obtained from private and commercial mobile networks. By utilizing Dense-Sparse-Dense (DSD) training in conjunction with weight rounding, we reduce the model size by a factor of 7.36 and its complexity by 57% without any loss in accuracy. Such an approach is especially attractive for low-end embedded hardware solutions where memory and processing power are limited.
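The two compression ideas, magnitude-based sparsification (the sparse phase of DSD training) and weight rounding, can be sketched on a single weight matrix. The snippet below omits the LSTM and the retraining passes, and its figures are illustrative, not the reported 7.36x and 57% results.

```python
# Minimal sketch of the compression ideas mentioned above: magnitude-based
# sparsification (the "sparse" phase of Dense-Sparse-Dense training) and weight
# rounding to a coarse grid. The LSTM and retraining are omitted.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256))       # one dense weight matrix

def sparsify(weights, sparsity=0.7):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask                   # mask is reused during the re-dense phase

def round_weights(weights, step=2 ** -7):
    """Round surviving weights to a coarse grid so they compress well."""
    return np.round(weights / step) * step

W_sparse, mask = sparsify(W, sparsity=0.7)
W_small = round_weights(W_sparse)

print(f"non-zero weights kept: {mask.mean():.0%}")           # ~30% of parameters
print(f"distinct weight values: {np.unique(W_small).size}")  # small codebook after rounding
```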
{"title":"Reduced Complexity Approach for Uplink Rate Trajectory Prediction in Mobile Networks","authors":"G. Nikolov, M. Kuhn, A. Mcgibney, Bernd-Ludwig Wenning","doi":"10.1109/ISSC49989.2020.9180156","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180156","url":null,"abstract":"This paper presents a novel data rate prediction scheme. By combining online data rate estimation techniques with Long Short-Term Memory (LSTM) Neural Networks (NN), we are able to forecast the near future behaviour of the mobile channel. The prediction scheme is evaluated on data sets obtained from private and commercial mobile networks. By utilizing a Dense-Sparse-Dense (DSD) training in conjunction with weight rounding we reduce the size by a factor of 7.36 and complexity by 57% without any loss in accuracy of the model. Such an approach is especially attractive for low-end embedded-based hardware solutions where memory and processing power are limited.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115052691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementing Pattern Recognition and Matching techniques to automatically detect standardized functional tests from wearable technology
Pub Date: 2020-06-01 | DOI: 10.1109/ISSC49989.2020.9180174
Vini Vijayan, Nigel McKelvey, J. Condell, P. Gardiner, J. Connolly
Wearable sensor technology is often used in healthcare environments for the monitoring, diagnosis and recovery of patients. Wearable sensors can be used to detect movement throughout the measurement of standardized functional tests, which are considered part of the assessment criteria for Activities of Daily Living (ADL). The volume of data collected by sensors for long-term assessment of ambulatory movement can be very large, since each record may contain detailed 3-D sensor information. Extracting recorded movement data corresponding to standardized functional tests from an entire data set is complex and time consuming. This paper examines whether standardized functional tests can be automatically detected from long-term data collected by wearable technology devices using Artificial Intelligence (AI) techniques. The current research work is aligned with clinical trial data generated by patients suffering from Axial Spondyloarthritis (axSpA). These datasets contain Inertial Measurement Unit (IMU) values corresponding to individual patient functional tests for axSpA. Rotation angles for each functional test are plotted against time. Individual movements that form part of a functional test are constructed for training and testing the AI system. Individual movement patterns are split into training and testing inputs, which are used to train the Neural Network (NN) system and to estimate its overall prediction accuracy. The NN model is trained so that the learned system can predict new functional test patterns from the trained data; predictions are compared with the expected data set to report the accuracy of prediction. Once the semi-supervised learning phase of the AI system has been completed with an adequate amount of data, it is capable of automatically detecting gait and posture changes of patients at home.
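The pattern-recognition step can be illustrated with a small scikit-learn sketch in which fixed-length windows of rotation angles are classified as functional-test movements or ordinary activity. The data below is synthetic and stands in for the clinical axSpA recordings, and the window length and network size are assumptions.

```python
# Minimal sketch of the pattern-matching idea: fixed-length windows of rotation
# angles are labelled as functional-test movement vs. ordinary activity, and a
# small neural network learns to recognise them. The data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
window = 100                                   # samples per movement window

def synthetic_window(is_test_movement):
    t = np.linspace(0, 1, window)
    if is_test_movement:                       # smooth flexion-like rotation arc
        return 60 * np.sin(np.pi * t) + 3 * rng.standard_normal(window)
    return 10 * rng.standard_normal(window).cumsum() / window   # low-amplitude everyday motion

X = np.array([synthetic_window(i % 2 == 0) for i in range(600)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(600)])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(Xtr, ytr)
print("window classification accuracy:", clf.score(Xte, yte))
```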
{"title":"Implementing Pattern Recognition and Matching techniques to automatically detect standardized functional tests from wearable technology","authors":"Vini Vijayan, Nigel McKelvey, J. Condell, P. Gardiner, J. Connolly","doi":"10.1109/ISSC49989.2020.9180174","DOIUrl":"https://doi.org/10.1109/ISSC49989.2020.9180174","url":null,"abstract":"Wearable sensor technology is often used in healthcare environments for monitoring, diagnosis and recovery of patients. Wearable sensors can be used to detect movement throughout measurement of standardized functional tests, which are considered part of the assessment criteria for Activities of Daily Living (ADL). The volume of data collected by sensors for long term assessment of ambulatory movement can be very large in tuple size since they may contain detailed 3-D sensor information. Extracting recorded movement data corresponding to standardized functional tests from an entire data set is complex and time consuming. This paper examines whether standardized functional tests can be automatically detected from long term data collected by wearable technology devices using Artificial Intelligence (AI) techniques. The current research work is aligned with clinical trial data generated by patients who are suffering from Axial Spondylo Arthritis (axSpA). These datasets contain Inertial Measurement Unit (IMU) values corresponding to individual patient functional tests for axSpA. Rotation angles with respect to each functional test are plotted against time. Individual movements that form part of a functional test are constructed for training and testing the AI system. Individual movement patterns are split into training and testing data inputs and are used to train the Neural Network (NN) system and to estimate overall prediction accuracy of the NN system. NN model is trained in such a way that the learned system can predict new functional test patterns with respect to the trained data and it is compared with expected data set and returned the accuracy of prediction. Once the semi supervised learning phase of AI system has successfully finished with adequate amount of data, it is capable for automatically detect gait and posture changes of patients at home.","PeriodicalId":351013,"journal":{"name":"2020 31st Irish Signals and Systems Conference (ISSC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116513503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}