Fog manufacturing combines Fog and Cloud computing in a manufacturing network to provide efficient data analytics and support real-time decision-making. Detecting anomalies, including imbalanced computational workloads and cyber-attacks, is critical to ensuring reliable and responsive computation services. However, such anomalies often co-occur with dynamic offloading events, in which computation tasks are migrated from heavily occupied Fog nodes to less-occupied ones to reduce the overall computation latency and improve throughput. These co-occurrences jointly affect system behavior, which makes anomaly detection inaccurate. We propose a qualitative and quantitative (QQ) control chart that monitors system anomalies by identifying changes in the relationships among monitored runtime metrics (quantitative variables) in the presence of dynamic offloading (a qualitative variable), using a risk-adjusted monitoring framework. Both a simulation study and a Fog manufacturing case study show the advantage of the proposed method over existing approaches under the influence of dynamic offloading.
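The paper's QQ control chart itself is not reproduced here, but the risk-adjusted idea it builds on — adjust each runtime metric for the expected effect of the qualitative offloading indicator, then monitor the residuals — can be sketched as follows. The function names, the linear adjustment model, and the 3-sigma Shewhart limits are illustrative assumptions, not the authors' method:

```python
import numpy as np

def risk_adjusted_residuals(metric, offloading, beta0, beta1):
    """Residuals of a runtime metric after removing the expected
    effect of the qualitative offloading indicator (0/1)."""
    expected = beta0 + beta1 * offloading
    return metric - expected

def shewhart_alarm(residuals, sigma, k=3.0):
    """Flag samples whose adjusted residual exceeds k-sigma limits."""
    return np.abs(residuals) > k * sigma

# Synthetic example: offloading legitimately raises mean latency by 2.0.
rng = np.random.default_rng(0)
offload = rng.integers(0, 2, size=200)
latency = 10.0 + 2.0 * offload + rng.normal(0, 0.5, size=200)
latency[150] += 5.0  # inject an anomaly unrelated to offloading

res = risk_adjusted_residuals(latency, offload, beta0=10.0, beta1=2.0)
alarms = shewhart_alarm(res, sigma=0.5)
print(alarms[150], int(alarms.sum()))
```

Without the adjustment, ordinary offloading events would inflate the residual variance and mask the injected anomaly; with it, only the genuine shift trips the limit.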
{"title":"Monitoring Runtime Metrics of Fog Manufacturing via a Qualitative and Quantitative (QQ) Control Chart","authors":"Yifu Li, Lening Wang, Dongyoon Lee, R. Jin","doi":"10.1145/3501262","DOIUrl":"https://doi.org/10.1145/3501262","url":null,"abstract":"Fog manufacturing combines Fog and Cloud computing in a manufacturing network to provide efficient data analytics and support real-time decision-making. Detecting anomalies, including imbalanced computational workloads and cyber-attacks, is critical to ensure reliable and responsive computation services. However, such anomalies often concur with dynamic offloading events where computation tasks are migrated from well-occupied Fog nodes to less-occupied ones to reduce the overall computation time latency and improve the throughput. Such concurrences jointly affect the system behaviors, which makes anomaly detection inaccurate. We propose a qualitative and quantitative (QQ) control chart to monitor system anomalies through identifying the changes of monitored runtime metric relationship (quantitative variables) under the presence of dynamic offloading (qualitative variable) using a risk-adjusted monitoring framework. Both the simulation and Fog manufacturing case studies show the advantage of the proposed method compared with the existing literature under the dynamic offloading influence.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2022-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80086891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Huanle Zhang, M. Uddin, F. Hao, S. Mukherjee, P. Mohapatra
An efficient onboarding process is a pivotal step in provisioning IoT devices and granting them access to the network infrastructure. However, the current process for onboarding IoT devices is time-consuming and labor-intensive, which makes it vulnerable to human error and security risks. To streamline onboarding, we need a mechanism that reliably associates each digital identity with its physical device. We design an onboarding mechanism called MAIDE to fill this technical gap. MAIDE is an Augmented Reality (AR)-facilitated app that systematically selects multiple measurement locations, calculates the measurement time for each location, and guides the user through the measurement process. The app also uses an optimized voting-based algorithm to derive the device-to-ID mapping from the measurement data. The method requires no modification to existing IoT devices or infrastructure and can be applied to all major wireless protocols, such as BLE and WiFi. Our extensive experiments show that MAIDE achieves high device-to-ID mapping accuracy. For example, to distinguish two ceiling-mounted devices 4 feet apart in a typical enterprise environment, MAIDE achieves ~95% accuracy with 5 seconds of Received Signal Strength (RSS) data per measurement location.
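The voting idea can be illustrated with a toy sketch: each measurement location votes for the candidate position whose expected RSS best matches a device's measured RSS, and the majority vote decides the device-to-ID mapping. The names (`vote_mapping`, `left_fixture`, ...) and the nearest-RSS voting rule are illustrative assumptions, not MAIDE's actual algorithm:

```python
from collections import Counter

def vote_mapping(measured, predicted):
    """measured: {device_id: [rss per measurement location]};
    predicted: {candidate_position: [expected rss per location]}.
    Each location votes for the position whose expected RSS is
    closest to the device's measured RSS; the majority vote wins."""
    mapping = {}
    for dev, rss in measured.items():
        votes = Counter()
        for loc in range(len(rss)):
            best = min(predicted, key=lambda p: abs(predicted[p][loc] - rss[loc]))
            votes[best] += 1
        mapping[dev] = votes.most_common(1)[0][0]
    return mapping

# Hypothetical RSS readings (dBm) at three measurement locations.
measured = {"dev_A": [-42, -55, -61], "dev_B": [-58, -47, -52]}
predicted = {"left_fixture": [-40, -56, -60], "right_fixture": [-57, -45, -53]}
print(vote_mapping(measured, predicted))
```

Voting across several locations makes the mapping robust to a single noisy RSS reading, which is presumably why the paper measures at multiple app-selected locations.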
{"title":"MAIDE: Augmented Reality (AR)-facilitated Mobile System for Onboarding of Internet of Things (IoT) Devices at Ease","authors":"Huanle Zhang, M. Uddin, F. Hao, S. Mukherjee, P. Mohapatra","doi":"10.1145/3506667","DOIUrl":"https://doi.org/10.1145/3506667","url":null,"abstract":"Having an efficient onboarding process is a pivotal step to utilize and provision the IoT devices for accessing the network infrastructure. However, the current process to onboard IoT devices is time-consuming and labor-intensive, which makes the process vulnerable to human errors and security risks. In order to have a streamlined onboarding process, we need a mechanism to reliably associate each digital identity with each physical device. We design an onboarding mechanism called MAIDE to fill this technical gap. MAIDE is an Augmented Reality (AR)-facilitated app that systematically selects multiple measurement locations, calculates measurement time for each location and guides the user through the measurement process. The app also uses an optimized voting-based algorithm to derive the device-to-ID mapping based on measurement data. This method does not require any modification to existing IoT devices or the infrastructure and can be applied to all major wireless protocols such as BLE, and WiFi. Our extensive experiments show that MAIDE achieves high device-to-ID mapping accuracy. 
For example, to distinguish two devices on a ceiling in a typical enterprise environment, MAIDE achieves ~95% accuracy by measuring 5 seconds of Received Signal Strength (RSS) data for each measurement location when the devices are 4 feet apart.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2022-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74096679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benazir Neha, S. K. Panda, P. Sahu, Kshira Sagar Sahoo, A. Gandomi
Osmotic computing, in association with related computing paradigms (cloud, fog, and edge), has emerged as a promising solution for handling the bulk of security-critical and latency-sensitive data generated by digital devices. It is a growing research domain that studies the deployment, migration, and optimization of applications, in the form of microservices, across cloud/edge infrastructure. It delivers dynamically tailored microservices in technology-centric environments by exploiting edge and cloud platforms. Osmotic computing promotes digital transformation and benefits transportation, smart cities, education, and healthcare. In this article, we present a comprehensive analysis of osmotic computing through a systematic literature review. To ensure a high-quality review, we conducted an advanced search across numerous digital libraries to extract related studies. The search identified 99 studies, from which 29 relevant studies were selected for thorough review. We summarize the applications of osmotic computing based on their key features and, from these observations, outline the research challenges for applications in this field. Finally, we discuss the security issues in osmotic computing that have been resolved and those that remain open.
{"title":"A Systematic Review on Osmotic Computing","authors":"Benazir Neha, S. K. Panda, P. Sahu, Kshira Sagar Sahoo, A. Gandomi","doi":"10.1145/3488247","DOIUrl":"https://doi.org/10.1145/3488247","url":null,"abstract":"Osmotic computing in association with related computing paradigms (cloud, fog, and edge) emerges as a promising solution for handling bulk of security-critical as well as latency-sensitive data generated by the digital devices. It is a growing research domain that studies deployment, migration, and optimization of applications in the form of microservices across cloud/edge infrastructure. It presents dynamically tailored microservices in technology-centric environments by exploiting edge and cloud platforms. Osmotic computing promotes digital transformation and furnishes benefits to transportation, smart cities, education, and healthcare. In this article, we present a comprehensive analysis of osmotic computing through a systematic literature review approach. To ensure high-quality review, we conduct an advanced search on numerous digital libraries to extracting related studies. The advanced search strategy identifies 99 studies, from which 29 relevant studies are selected for a thorough review. We present a summary of applications in osmotic computing build on their key features. On the basis of the observations, we outline the research challenges for the applications in this research field. 
Finally, we discuss the security issues resolved and unresolved in osmotic computing.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2022-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75887380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tiago C. S. Xavier, Flávia Coimbra Delicato, Paulo F. Pires, Cláudio L. Amorim, Wei Li, Albert Y. Zomaya
In the Internet of Things (IoT), the computing resources available in the cloud are often unable to meet the latency constraints of time-critical applications due to the large distance between the cloud and the data sources (IoT devices). Adopting edge computing can help the cloud deliver services that meet time-critical application requirements. However, it is challenging to meet IoT application demands while using resources smartly to reduce energy consumption at the edge of the network. In this context, we propose a fully distributed resource allocation algorithm for the IoT-edge-cloud environment that (i) increases infrastructure resource usage by promoting collaboration between edge nodes, (ii) supports heterogeneous applications with generic requirements, and (iii) reduces application latency and increases the energy efficiency of the edge. We compare our algorithm with a non-collaborative vertical offloading approach and with a horizontal approach based on edge collaboration. Simulation results show that the proposed algorithm reduces the end-to-end latency of IoT application requests by 49.95%, increases edge node utilization by 95.35%, and improves energy efficiency, in terms of edge node power consumption, by 92.63% compared with the best performances of the vertical and collaborative approaches.
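The abstract does not detail the allocation algorithm, but the trade-off it targets — serve a task at the lowest-latency edge node with spare capacity, collaborating horizontally before falling back to the cloud — can be sketched in a deliberately simplified form. The data model and the greedy placement rule below are assumptions for illustration only, not the paper's algorithm:

```python
def place_task(load, nodes, cloud_latency):
    """Greedy placement: cheapest feasible edge node, else the cloud.
    nodes: {name: {"free": spare capacity, "lat": latency per load unit}}."""
    feasible = {n: v["lat"] * load for n, v in nodes.items() if v["free"] >= load}
    if feasible:
        best = min(feasible, key=feasible.get)
        nodes[best]["free"] -= load  # reserve capacity on the chosen node
        return best, feasible[best]
    return "cloud", cloud_latency

nodes = {"edge1": {"free": 2, "lat": 3.0}, "edge2": {"free": 5, "lat": 4.0}}
p1 = place_task(4, nodes, cloud_latency=25.0)  # edge1 lacks capacity
p2 = place_task(4, nodes, cloud_latency=25.0)  # edge2 now full too
print(p1, p2)
```

Even this toy version shows why horizontal collaboration helps: a second edge node absorbs a task its neighbor cannot, avoiding the much higher cloud latency until all edge capacity is exhausted.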
{"title":"Managing Heterogeneous and Time-Sensitive IoT Applications through Collaborative and Energy-Aware Resource Allocation","authors":"Tiago C. S. Xavier, Flávia Coimbra Delicato, Paulo F. Pires, Cláudio L. Amorim, Wei Li, Albert Y. Zomaya","doi":"10.1145/3488248","DOIUrl":"https://doi.org/10.1145/3488248","url":null,"abstract":"In the Internet of Things (IoT) environment, the computing resources available in the cloud are often unable to meet the latency constraints of time critical applications due to the large distance between the cloud and data sources (IoT devices). The adoption of edge computing can help the cloud deliver services that meet time critical application requirements. However, it is challenging to meet the IoT application demands while using the resources smartly to reduce energy consumption at the edge of the network. In this context, we propose a fully distributed resource allocation algorithm for the IoT-edge-cloud environment, which (i) increases the infrastructure resource usage by promoting the collaboration between edge nodes, (ii) supports the heterogeneity and generic requirements of applications, and (iii) reduces the application latency and increases the energy efficiency of the edge. We compare our algorithm with a non-collaborative vertical offloading and with a horizontal approach based on edge collaboration. 
Results of simulations showed that the proposed algorithm is able to reduce 49.95% of the IoT application request end-to-end latency, increase 95.35% of the edge node utilization, and enhance the energy efficiency in terms of the edge node power consumption by 92.63% in comparison to the best performances of vertical and collaboration approaches.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2022-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90728269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Brauner, M. Dalibor, M. Jarke, Ike Kunze, I. Koren, G. Lakemeyer, M. Liebenberg, Judith Michael, J. Pennekamp, C. Quix, Bernhard Rumpe, Wil M.P. van der Aalst, Klaus Wehrle, A. Wortmann, M. Ziefle
The Industrial Internet of Things (IIoT) promises significant improvements for the manufacturing industry by facilitating the integration of manufacturing systems through Digital Twins. However, ecological and economic demands also require a cross-domain linkage of multiple scientific perspectives from material sciences, engineering, operations, business, and ergonomics, as optimization opportunities can be derived from any of these perspectives. To extend the IIoT to a true Internet of Production, two concepts are required: first, a complex, interrelated network of Digital Shadows that combine domain-specific models with data-driven AI methods; and second, the integration of a large number of research labs, engineering sites, and production sites into a World Wide Lab, which offers controlled exchange of selected, innovation-relevant data even across company boundaries. In this article, we define the underlying Computer Science challenges implied by these novel concepts in four layers: smart human interfaces provide access to information generated by model-integrated AI; given the large variety of manufacturing data, new data modeling techniques should enable efficient management of Digital Shadows, supported by an interconnected infrastructure. Based on a detailed analysis of these challenges, we derive a systematized research roadmap to make the vision of the Internet of Production a reality.
{"title":"A Computer Science Perspective on Digital Transformation in Production","authors":"P. Brauner, M. Dalibor, M. Jarke, Ike Kunze, I. Koren, G. Lakemeyer, M. Liebenberg, Judith Michael, J. Pennekamp, C. Quix, Bernhard Rumpe, Wil M.P. van der Aalst, Klaus Wehrle, A. Wortmann, M. Ziefle","doi":"10.1145/3502265","DOIUrl":"https://doi.org/10.1145/3502265","url":null,"abstract":"The Industrial Internet-of-Things (IIoT) promises significant improvements for the manufacturing industry by facilitating the integration of manufacturing systems by Digital Twins. However, ecological and economic demands also require a cross-domain linkage of multiple scientific perspectives from material sciences, engineering, operations, business, and ergonomics, as optimization opportunities can be derived from any of these perspectives. To extend the IIoT to a true Internet of Production, two concepts are required: first, a complex, interrelated network of Digital Shadows which combine domain-specific models with data-driven AI methods; and second, the integration of a large number of research labs, engineering, and production sites as a World Wide Lab which offers controlled exchange of selected, innovation-relevant data even across company boundaries. In this article, we define the underlying Computer Science challenges implied by these novel concepts in four layers: Smart human interfaces provide access to information that has been generated by model-integrated AI. Given the large variety of manufacturing data, new data modeling techniques should enable efficient management of Digital Shadows, which is supported by an interconnected infrastructure. 
Based on a detailed analysis of these challenges, we derive a systematized research roadmap to make the vision of the Internet of Production a reality.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2022-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81046170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many real-world scenarios, machine learning models fall short in prediction performance because data characteristics change between the source domain used for training and the target domain used for testing. Extensive research has addressed this problem with Domain Adaptation (DA), which learns domain-invariant features. However, existing advances for time series remain limited to hard parameter sharing (HPS) between the source and target models and the use of a domain adaptation objective function. To address these challenges, we propose a soft parameter sharing (SPS) DA architecture with representation learning that models the relation between the parameters of the source and target models as non-linear and uses the squared Maximum Mean Discrepancy (MMD) as the adaptation loss function. The proposed architecture advances the state of the art for time series in the context of activity recognition, as well as in fields with other modalities where SPS has been limited to a linear relation. An additional contribution of our work is a study that demonstrates the strengths and limitations of HPS versus SPS. Experimental results show the success of the method in three domain adaptation cases of multivariate time-series activity recognition with different users and sensors.
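The adaptation loss named in the abstract, the squared MMD, has a standard empirical form, and soft parameter sharing amounts to penalizing the distance between source and target weights rather than tying them (as HPS does). A minimal sketch — the RBF kernel, its bandwidth, and the biased estimator are common choices, not necessarily the paper's:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """RBF kernel matrix between rows of x (n, d) and y (m, d)."""
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-gamma * (d ** 2).sum(-1))

def squared_mmd(xs, xt, gamma=1.0):
    """Biased empirical estimate of the squared Maximum Mean Discrepancy."""
    return (rbf(xs, xs, gamma).mean() + rbf(xt, xt, gamma).mean()
            - 2 * rbf(xs, xt, gamma).mean())

def sps_penalty(w_src, w_tgt):
    """Soft parameter sharing: penalize the distance between source and
    target weights instead of forcing them to be identical (HPS)."""
    return ((w_src - w_tgt) ** 2).sum()

rng = np.random.default_rng(1)
same = squared_mmd(rng.normal(0, 1, (100, 4)), rng.normal(0, 1, (100, 4)))
shifted = squared_mmd(rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4)))
print(same < shifted)  # a shifted target domain yields a larger MMD
```

Minimizing `squared_mmd` over learned representations pulls the two domains' feature distributions together, while `sps_penalty` keeps the target model close to, but not identical with, the source model.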
{"title":"Domain Adaptation with Representation Learning and Nonlinear Relation for Time Series","authors":"A. Hussein, Hazem Hajj","doi":"10.1145/3502905","DOIUrl":"https://doi.org/10.1145/3502905","url":null,"abstract":"In many real-world scenarios, machine learning models fall short in prediction performance due to data characteristics changing from training on one source domain to testing on a target domain. There has been extensive research to address this problem with Domain Adaptation (DA) for learning domain invariant features. However, when considering advances for time series, those methods remain limited to the use of hard parameter sharing (HPS) between source and target models, and the use of domain adaptation objective function. To address these challenges, we propose a soft parameter sharing (SPS) DA architecture with representation learning while modeling the relation as non-linear between parameters of source and target models and modeling the adaptation loss function as the squared Maximum Mean Discrepancy (MMD). The proposed architecture advances the state-of-the-art for time series in the context of activity recognition and in fields with other modalities, where SPS has been limited to a linear relation. An additional contribution of our work is to provide a study that demonstrates the strengths and limitations of HPS versus SPS. 
Experiment results showed the success of the method in three domain adaptation cases of multivariate time series activity recognition with different users and sensors.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2022-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89505762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mikhail Fomichev, L. F. Abanto-Leon, Maximilian Stiegler, Alejandro Molina, Jakob Link, M. Hollick
Context-based copresence detection schemes are a necessary prerequisite for building secure and usable authentication systems in the Internet of Things (IoT). Such schemes allow one device to verify the proximity of another device without user assistance by utilizing their physical context (e.g., audio). State-of-the-art copresence detection schemes suffer from two major limitations: (1) they cannot accurately detect copresence in low-entropy contexts (e.g., an empty room with few events occurring) and insufficiently separated environments (e.g., adjacent rooms); (2) they require devices to have common sensors (e.g., microphones) to capture context, making them impractical on devices with heterogeneous sensors. We address these limitations by proposing Next2You, a novel copresence detection scheme utilizing channel state information (CSI). In particular, we leverage magnitude and phase values from a range of subcarriers specifying a Wi-Fi channel to capture a robust wireless context created when devices communicate. We implement Next2You on off-the-shelf smartphones, relying only on ubiquitous Wi-Fi chipsets, and evaluate it on over 95 hours of CSI measurements collected in five real-world scenarios. Next2You achieves error rates below 4%, maintaining accurate copresence detection both in low-entropy contexts and in insufficiently separated environments. We also demonstrate that Next2You works reliably in real time and is robust to various attacks.
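A heavily simplified illustration of CSI-based copresence: two devices sharing a physical channel should observe correlated per-subcarrier magnitude profiles, while devices in separate rooms should not. The correlation test, threshold, and synthetic channel below are illustrative assumptions, not Next2You's actual classifier:

```python
import numpy as np

def copresent(csi_a, csi_b, threshold=0.8):
    """Declare copresence when the CSI magnitude profiles observed by
    the two devices are strongly correlated across subcarriers."""
    mag_a, mag_b = np.abs(csi_a), np.abs(csi_b)
    r = np.corrcoef(mag_a, mag_b)[0, 1]
    return r >= threshold, r

# Synthetic 64-subcarrier channel: nearby devices see the same
# frequency-selective shape plus noise; a far device sees another shape.
x = np.linspace(0, 2 * np.pi, 64)
channel = 1.0 + 0.5 * np.sin(x)
near = channel + 0.02 * np.random.default_rng(2).normal(size=64)
far = 1.0 + 0.5 * np.sin(x + np.pi)  # e.g., an adjacent room

dec_near, _ = copresent(channel, near)
dec_far, _ = copresent(channel, far)
print(dec_near, dec_far)
```

Because the multipath channel decorrelates over short distances, this kind of wireless context remains discriminative even in visually or acoustically low-entropy rooms, which is the property the paper exploits.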
{"title":"Next2You: Robust Copresence Detection Based on Channel State Information","authors":"Mikhail Fomichev, L. F. Abanto-Leon, Maximilian Stiegler, Alejandro Molina, Jakob Link, M. Hollick","doi":"10.1145/3491244","DOIUrl":"https://doi.org/10.1145/3491244","url":null,"abstract":"Context-based copresence detection schemes are a necessary prerequisite to building secure and usable authentication systems in the Internet of Things (IoT). Such schemes allow one device to verify proximity of another device without user assistance utilizing their physical context (e.g., audio). The state-of-the-art copresence detection schemes suffer from two major limitations: (1) They cannot accurately detect copresence in low-entropy context (e.g., empty room with few events occurring) and insufficiently separated environments (e.g., adjacent rooms), (2) They require devices to have common sensors (e.g., microphones) to capture context, making them impractical on devices with heterogeneous sensors. We address these limitations, proposing Next2You, a novel copresence detection scheme utilizing channel state information (CSI). In particular, we leverage magnitude and phase values from a range of subcarriers specifying a Wi-Fi channel to capture a robust wireless context created when devices communicate. We implement Next2You on off-the-shelf smartphones relying only on ubiquitous Wi-Fi chipsets and evaluate it based on over 95 hours of CSI measurements that we collect in five real-world scenarios. Next2You achieves error rates below 4%, maintaining accurate copresence detection both in low-entropy context and insufficiently separated environments. 
We also demonstrate the capability of Next2You to work reliably in real-time and its robustness to various attacks.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75436572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chang-Yang Lin, Hamzeh Khazaei, Andrew Walenstein, A. Malton
Embedded sensors and smart devices have turned the environments around us into smart spaces that can automatically evolve, depending on the needs of users, and adapt to new conditions. While smart spaces are beneficial and desirable in many respects, they can be compromised in ways that violate privacy or security, or that render the whole environment a hostile space in which regular tasks can no longer be accomplished. Ensuring the security of smart spaces is in fact very challenging due to device heterogeneity, a vast attack surface, and device resource limitations. The key objective of this study is to minimize the manual work involved in enforcing the security of smart spaces by leveraging the autonomic computing paradigm for the management of IoT environments. More specifically, we strive to build an autonomic manager that can monitor the smart space continuously, analyze the context, plan and execute countermeasures to maintain the desired level of security, and reduce the liability and risks of security breaches. We follow the microservice architecture pattern and propose a generic ontology named Secure Smart Space Ontology (SSSO) for describing dynamic contextual information in security-enhanced smart spaces. Based on SSSO, we build a four-layer autonomic security manager that continuously monitors the managed spaces, analyzes contextual information and events, and automatically plans and implements adaptive security policies. As an evaluation focusing on a current BlackBerry customer problem, we deployed the proposed autonomic security manager to maintain the security of a smart conference room with 32 devices and 66 services. The high performance of the proposed solution was also evaluated on a large-scale deployment with over 1.8 million triples.
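The monitor-analyze-plan-execute cycle described above can be sketched minimally. The toy policy (quarantine unpatched devices) and every name in this snippet are hypothetical stand-ins for the SSSO-driven, ontology-based analysis the paper actually performs:

```python
def monitor(space):
    """Collect current context: each device's reported state."""
    return {d["name"]: d["patched"] for d in space["devices"]}

def analyze(context):
    """Flag devices violating the desired security policy."""
    return [name for name, patched in context.items() if not patched]

def plan(violations):
    """Choose one countermeasure per violation (quarantine, here)."""
    return [("quarantine", name) for name in violations]

def execute(space, actions):
    """Apply the planned countermeasures to the managed space."""
    for _, name in actions:
        for d in space["devices"]:
            if d["name"] == name:
                d["quarantined"] = True

space = {"devices": [
    {"name": "camera", "patched": True, "quarantined": False},
    {"name": "thermostat", "patched": False, "quarantined": False},
]}
actions = plan(analyze(monitor(space)))
execute(space, actions)
print(actions)
```

Running this loop continuously, rather than once, is what makes the manager autonomic: each pass re-reads the context, so a newly added or newly compromised device is handled without manual intervention.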
{"title":"Autonomic Security Management for IoT Smart Spaces","authors":"Chang-Yang Lin, Hamzeh Khazaei, Andrew Walenstein, A. Malton","doi":"10.1145/3466696","DOIUrl":"https://doi.org/10.1145/3466696","url":null,"abstract":"Embedded sensors and smart devices have turned the environments around us into smart spaces that could automatically evolve, depending on the needs of users, and adapt to the new conditions. While smart spaces are beneficial and desired in many aspects, they could be compromised and expose privacy, security, or render the whole environment a hostile space in which regular tasks cannot be accomplished anymore. In fact, ensuring the security of smart spaces is a very challenging task due to the heterogeneity of devices, vast attack surface, and device resource limitations. The key objective of this study is to minimize the manual work in enforcing the security of smart spaces by leveraging the autonomic computing paradigm in the management of IoT environments. More specifically, we strive to build an autonomic manager that can monitor the smart space continuously, analyze the context, plan and execute countermeasures to maintain the desired level of security, and reduce liability and risks of security breaches. We follow the microservice architecture pattern and propose a generic ontology named Secure Smart Space Ontology (SSSO) for describing dynamic contextual information in security-enhanced smart spaces. Based on SSSO, we build an autonomic security manager with four layers that continuously monitors the managed spaces, analyzes contextual information and events, and automatically plans and implements adaptive security policies. As the evaluation, focusing on a current BlackBerry customer problem, we deployed the proposed autonomic security manager to maintain the security of a smart conference room with 32 devices and 66 services. 
The high performance of the proposed solution was also evaluated on a large-scale deployment with over 1.8 million triples.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80647608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotional cognitive ability is a key technical indicator of how friendly an interaction is, so this research explores robots with human-like emotional cognition. After discussing the prospects of 5G technology and cognitive robots, the study focuses on cognitive robots. Because human-like analysis logic is difficult for an emotional cognitive robot to imitate, this study divides the robot's information processing into three levels, by analogy with human information processing: cognitive algorithm, feature extraction, and information collection. In addition, a multi-scale rectangular histogram of oriented gradients and a robust principal component analysis algorithm are used for facial expression recognition. For pictures in which humans intuitively perceive smiles within sad emotions, the method in this study yields the following emotion proportions: calmness 0%, sadness 15.78%, fear 0%, happiness 76.53%, disgust 7.69%, anger 0%, and astonishment 0%. For micro-expressions in which humans intuitively perceive negative emotions such as surprise and fear, the method yields: calmness 32.34%, sadness 34.07%, fear 6.79%, happiness 0%, disgust 0%, anger 13.91%, and astonishment 15.89%. The algorithm explored in this study can therefore recognize emotions accurately. These results show that the method intuitively reflects the proportions of human expressions, and that the recognition methods based on facial expressions and micro-expressions achieve good recognition performance, in line with human intuitive experience.
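A rectangular histogram of oriented gradients reduces to accumulating gradient orientations weighted by gradient magnitude, optionally over several image scales. This sketch shows that building block only; the bin count, the scale set, the normalization, and all names are assumptions, not the paper's configuration:

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Magnitude-weighted histogram of gradient orientations --
    the building block of a rectangular HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s else hist

def multiscale_descriptor(image, scales=(1, 2), bins=8):
    """Concatenate histograms from progressively downsampled copies,
    a crude form of the multi-scale idea."""
    return np.concatenate(
        [orientation_histogram(image[::s, ::s], bins) for s in scales])

img = np.tile(np.arange(16), (16, 1))  # horizontal ramp: one edge direction
desc = multiscale_descriptor(img)
# All gradient energy falls into the first orientation bin at each scale.
print(desc.shape, float(desc[0]))
```

A classifier (here, the paper pairs such features with robust PCA) then operates on these fixed-length descriptors rather than on raw pixels.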
{"title":"Cognitive Robotics on 5G Networks","authors":"Zhihan Lv, Liang Qiao, Qingjun Wang","doi":"10.1145/3414842","DOIUrl":"https://doi.org/10.1145/3414842","url":null,"abstract":"Emotional cognitive ability is a key technical indicator of the friendliness of human-robot interaction. Therefore, this research aims to explore robots with human-like emotional cognition. After discussing the prospects of 5G technology and cognitive robots, the study takes cognitive robots as its main direction. Because the human-like analysis logic of emotional cognitive robots is difficult to imitate, this study divides a robot's information processing into three levels, by comparison with human information processing: cognitive algorithm, feature extraction, and information collection. In addition, a multi-scale rectangular histogram of oriented gradients is used for facial expression recognition, and a robust principal component analysis (RPCA) algorithm is used for micro-expression recognition. In pictures where humans intuitively perceive smiles within sad emotions, the emotion proportions obtained by the proposed method are as follows: calmness 0%, sadness 15.78%, fear 0%, happiness 76.53%, disgust 7.69%, anger 0%, and astonishment 0%. In the recognition of micro-expressions, where humans intuitively perceive negative emotions such as surprise and fear, the proportions obtained by the proposed method are as follows: calmness 32.34%, sadness 34.07%, fear 6.79%, happiness 0%, disgust 0%, anger 13.91%, and astonishment 15.89%. Therefore, the algorithm explored in this study can recognize emotions accurately. These results show that the proposed method intuitively reflects the proportions of human expressions, and that the recognition methods based on facial expressions and micro-expressions achieve good recognition performance, consistent with human intuitive experience.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78772670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
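The RPCA step mentioned in the abstract above separates an observation matrix into a low-rank part plus a sparse part, which is why it suits isolating expression-related deviations from a stable facial baseline. As a minimal, hypothetical sketch of that decomposition (not the paper's implementation; the `rpca` function name, the fixed penalty `mu`, and the ADMM-style updates are all assumptions), principal component pursuit can be written as:

```python
import numpy as np

def shrink(X, tau):
    # element-wise soft-thresholding (shrinkage) operator
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # singular value thresholding: shrink the spectrum of X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, max_iter=1000, tol=1e-7):
    """Decompose M ≈ L + S with L low-rank and S sparse (principal component pursuit)."""
    mu = M.size / (4.0 * np.sum(np.abs(M)) + 1e-12)   # standard penalty initialization
    lam = 1.0 / np.sqrt(max(M.shape))                 # sparsity trade-off parameter
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                              # dual (Lagrange multiplier) matrix
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)          # sparse update
        resid = M - L - S
        Y = Y + mu * resid                            # dual ascent step
        if np.linalg.norm(resid) / (np.linalg.norm(M) + 1e-12) < tol:
            break
    return L, S
```

On a synthetic matrix built as a rank-1 component plus a few large sparse spikes, this sketch recovers both parts to within a small relative error; a multi-scale HOG front end would then operate on the recovered components rather than the raw frames.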
Peining Zhen, Hai-Bao Chen, Yuan Cheng, Zhigang Ji, Bin Liu, Hao Yu
Mobile devices usually suffer from limited computation and storage resources, which seriously hinders them from running deep neural network applications. In this article, we introduce a deeply tensor-compressed long short-term memory (LSTM) neural network for fast video-based facial expression recognition on mobile devices. First, a spatio-temporal facial expression recognition LSTM model is built by extracting time-series feature maps from facial clips. The LSTM-based spatio-temporal model is then deeply compressed by means of quantization and tensorization for mobile device implementation. On the Extended Cohn-Kanade (CK+), MMI, and Acted Facial Expressions in the Wild 7.0 datasets, experimental results show that the proposed method achieves 97.96%, 97.33%, and 55.60% classification accuracy, respectively, while compressing the network model size by up to 221× and reducing training time per epoch by 60%. Our work is further implemented on the RK3399Pro mobile device with a Neural Process Engine. With the leveraged compression methods, the on-board latency of the feature extractor and the LSTM predictor is reduced by 30.20× and 6.62×, respectively. Furthermore, the spatio-temporal model costs only 57.19 MB of DRAM and 5.67 W of power when running on the board.
{"title":"Fast Video Facial Expression Recognition by a Deeply Tensor-Compressed LSTM Neural Network for Mobile Devices","authors":"Peining Zhen, Hai-Bao Chen, Yuan Cheng, Zhigang Ji, Bin Liu, Hao Yu","doi":"10.1145/3464941","DOIUrl":"https://doi.org/10.1145/3464941","url":null,"abstract":"Mobile devices usually suffer from limited computation and storage resources, which seriously hinders them from running deep neural network applications. In this article, we introduce a deeply tensor-compressed long short-term memory (LSTM) neural network for fast video-based facial expression recognition on mobile devices. First, a spatio-temporal facial expression recognition LSTM model is built by extracting time-series feature maps from facial clips. The LSTM-based spatio-temporal model is then deeply compressed by means of quantization and tensorization for mobile device implementation. On the Extended Cohn-Kanade (CK+), MMI, and Acted Facial Expressions in the Wild 7.0 datasets, experimental results show that the proposed method achieves 97.96%, 97.33%, and 55.60% classification accuracy, respectively, while compressing the network model size by up to 221× and reducing training time per epoch by 60%. Our work is further implemented on the RK3399Pro mobile device with a Neural Process Engine. With the leveraged compression methods, the on-board latency of the feature extractor and the LSTM predictor is reduced by 30.20× and 6.62×, respectively. Furthermore, the spatio-temporal model costs only 57.19 MB of DRAM and 5.67 W of power when running on the board.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75133616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
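The tensorization in the abstract above uses tensor-train LSTM cells, whose details are more involved than can be shown briefly; as a hedged illustration of the compression arithmetic only, the sketch below combines a rank-r factorization (standing in for tensorization) with int8 quantization and reports the resulting size ratio. Every function name and parameter here is an assumption for illustration, not the authors' code:

```python
import numpy as np

def low_rank_factor(W, r):
    """Truncated-SVD factorization W ≈ A @ B with rank r (stand-in for tensorization)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]   # shape (m, r), singular values folded into the left factor
    B = Vt[:r, :]          # shape (r, n)
    return A, B

def quantize_int8(W):
    """Uniform symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.max(np.abs(W)) / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

def compressed_ratio(m, n, r):
    """Dense float32 bytes vs. int8 factorized bytes (the two scale scalars are negligible)."""
    return (m * n * 4) / ((m * r + r * n) * 1)

# Example: one 1024x1024 LSTM weight block factorized at an assumed rank of 8
W = np.random.default_rng(1).standard_normal((1024, 1024)).astype(np.float32)
A, B = low_rank_factor(W, 8)
qA, sA = quantize_int8(A)
qB, sB = quantize_int8(B)
# lossy reconstruction: dequantize both factors, then multiply
W_hat = (qA.astype(np.float32) * sA) @ (qB.astype(np.float32) * sB)
print(round(compressed_ratio(1024, 1024, 8)))  # prints 256
```

The ratio compounds multiplicatively (rank reduction × bit-width reduction), which is how headline factors in the hundreds, such as the 221× reported above, become reachable; the accuracy cost depends on how much genuine low-rank structure the trained weights have.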