Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8992970
Afshan Ejaz, Maria Rahim, S. Khoja
The use of gestures to interact with technology is gaining popularity because gestures are not only easy to use but also easy to learn and remember. Moreover, gestures are natural: humans use them in daily life to communicate and interact with each other, so they do not place a great cognitive load on the mind. Older adults, however, have lower cognitive capabilities than younger adults, with reduced learnability and memorability. To address this problem, we analyzed the impact of gesture usage on the cognitive load of older adults and how this cognitive load affects the acceptability of those gestures. In addition, we compared different types of gestures to understand which are more readily accepted by older adults. The gesture types included single-finger, multiple-finger, bimanual, metaphoric, complex, and simple gestures. To compare the usability, affordance, acceptability, and cognitive load of these gestures, we developed seven hypotheses. After operationalizing the variables of these hypotheses, we conducted an experiment with older adults. The results showed that gestures mapped to a metaphor imposed a lower cognitive load than gestures that were not. The results also showed that single-finger gestures performed better than multiple-finger gestures; however, one-handed gestures did not perform better than bimanual gestures. Finally, the results showed that gestures with higher cognitive load had a lower acceptance rate among older adults.
Title: The Effect of Cognitive Load on Gesture Acceptability of Older Adults in Mobile Application
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993044
Soo-Yeon Ji, C. Kamhoua, Nandi O. Leslie, D. Jeong
Understanding network activities has become a critical task in network security due to the rapid growth of the Internet and mobile device usage. To protect computing infrastructures and personal data from network intruders and attacks, identifying abnormal activities is essential. Extracting features from network traffic data is an essential step because it affects how accurately those activities can be identified. Although researchers have proposed several approaches, they have mainly focused on finding the best possible technique to detect abnormal network activities; only a few studies have considered feature extraction techniques. In this paper, we introduce a new approach in which an integrative feature set for identifying abnormal network activities is determined using the wavelet transform. Instead of extracting features attribute by attribute, the approach uses the information from all attributes to extract features and to design a reliable learning model that detects abnormal activities while reducing false positives. Two machine learning techniques, Logistic Regression (LR) and Naive Bayes, are used to show the effectiveness of the approach, and a visualization method is used to illustrate it. We found that the proposed approach yields better performance with less computational time in detecting abnormal network activities.
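As an illustration of the wavelet idea this abstract describes, the sketch below extracts multi-level wavelet energies from a traffic-attribute series. It is a minimal sketch using a hand-rolled Haar transform on synthetic per-interval counts; the paper's actual feature construction and attributes are not reproduced here.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: fluctuation
    return a, d

def wavelet_energy_features(x, levels=3):
    """Summarize a traffic-attribute series by the energy of its
    detail coefficients at each level, plus the energy of the
    final approximation."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(a ** 2)))
    return feats

# Smooth vs. bursty synthetic series (e.g. bytes per interval)
smooth = np.ones(64) * 100.0
bursty = np.ones(64) * 100.0
bursty[::8] += 500.0  # periodic spikes, as an anomaly might produce

f_smooth = wavelet_energy_features(smooth)
f_bursty = wavelet_energy_features(bursty)
```

A classifier such as LR or Naive Bayes would then be trained on these per-flow feature vectors; the bursty series shows non-zero detail energy while the smooth one does not.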
Title: An Effective Approach to Classify Abnormal Network Traffic Activities using Wavelet Transform
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993008
A. R. Pratama, A. Lazovik, Marco Aiello
Indoor occupancy provides information about human presence in closed spaces, most notably office and residential buildings. This information is useful for reducing unnecessary energy usage, such as consumption in unoccupied spaces or waste from unnecessarily active appliances. We present an empirical experiment on office occupancy detection using simple office sensors: generic power meters and mobile phones. First, we classify beacon signals received by mobile phones into a room location; a workspace map is assumed to be available to facilitate the mapping between room locations and the occupancy state of users' workspaces. Second, we infer the individual occupancy state from the aggregated electricity consumption of occupant-related devices (i.e., monitors) in shared offices. The latter solution keeps cost and intrusiveness low compared to deploying a power meter for each device or user. We experiment in a work environment with two shared offices, a personal office, and a social corner, involving five volunteers. Given the acquired data, three techniques, based on machine learning, optimization, and a probabilistic approach, are implemented and compared. The results indicate that localization and occupancy detection based on beaconing works best for three of the five volunteers, reaching 95% F-measure. Occupancy inference based on aggregated power consumption performs well for four of the volunteers when using Decision Tree classification, reaching more than 90% F-measure. Fusing the two modalities gives a positive result for all five volunteers, ranging from 92% to 99% F-measure.
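To make the two modalities concrete, here is a minimal sketch of power-based occupancy counting and a simple presence fusion rule. The wattage thresholds and the OR-fusion are illustrative assumptions; the paper's actual models (machine learning, optimization, probabilistic) are not reproduced.

```python
# Assumed per-monitor power draw; not measured values from the paper.
IDLE_WATTS_PER_MONITOR = 5.0
ACTIVE_WATTS_PER_MONITOR = 25.0

def occupancy_from_power(aggregate_watts, n_monitors):
    """Estimate how many workspaces in a shared office are active,
    given one aggregated power reading covering all monitors."""
    idle = n_monitors * IDLE_WATTS_PER_MONITOR
    extra = max(0.0, aggregate_watts - idle)
    per_active = ACTIVE_WATTS_PER_MONITOR - IDLE_WATTS_PER_MONITOR
    return min(n_monitors, round(extra / per_active))

def fuse(beacon_present, power_present):
    """OR-fusion of the two modalities: flag a user present if
    either beacon localization or the power reading says so
    (favors recall over precision)."""
    return beacon_present or power_present
```

For example, a 55 W reading from three monitors (15 W idle baseline plus two active monitors at 20 W extra each) yields an estimate of two occupied workspaces.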
Title: Office Multi-Occupancy Detection using BLE Beacons and Power Meters
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993029
C. Kwan, David Gribben, T. Tran
Data collected in the compressive measurement domain can reduce data storage and transmission costs. In this paper, we summarize new results in human target tracking and classification performed directly on compressive measurements. Two deep learning algorithms are applied: You Only Look Once (YOLO) for object detection and tracking, and a residual network (ResNet) for human classification. Extensive experiments using low-quality, long-range optical videos from the SENSIAC database show that the proposed approach is promising.
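The phrase "compressive measurement domain" can be illustrated with a toy sensing step: a frame is projected through a short, fixed measurement operator, and downstream detection consumes the short vector without reconstructing the frame. The random Gaussian operator and the 4:1 ratio below are common compressive-sensing choices, assumed here; the paper's actual sensing scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 32x32 "video frame" (SENSIAC imagery itself is not used here)
frame = rng.random((32, 32))
x = frame.ravel()          # n = 1024 pixel values
n = x.size
m = n // 4                 # keep only 25% as many measurements

# Random Gaussian measurement matrix Phi, one common choice in
# compressive sensing; rows are normalized by sqrt(m).
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                # the compressive measurements

# A tracker/classifier working "directly in the measurement domain"
# consumes y (length m) and never reconstructs the full frame,
# which is where the storage and transmission savings come from.
```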
Title: Tracking and Classification of Multiple Human Objects Directly in Compressive Measurement Domain for Low Quality Optical Videos
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993076
A. Ghazo, Ratnesh Kumar
SCADA/ICS (Supervisory Control and Data Acquisition / Industrial Control Systems) networks are becoming targets of advanced, multi-faceted attacks, and attack graphs have been proposed to model complex attack scenarios that exploit interdependence among existing atomic vulnerabilities, stitching together the attack paths that might compromise a system-level security property. While such analysis enables security administrators to establish appropriate security measures, practical constraints on time and cost limit their ability to address all system vulnerabilities at once. In this paper, we propose an approach that identifies label cuts to automatically determine a set of critical attacks that, when blocked, guarantee system security. We use the Strongly Connected Components (SCCs) of the given attack graph to generate an abstracted version of it, a tree over the SCCs, and then apply an iterative backward search over this tree to identify the set of backward-reachable SCCs, along with their outgoing edges and labels, yielding a cut with a minimum number of labels that forms the critical-attacks set. We also report the implementation and validation of the proposed algorithm on a real-world case study: a SCADA network for a water-treatment cyber-physical system.
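The notion of a minimum label cut can be demonstrated on a toy labeled attack graph. The sketch below brute-forces the smallest label set whose removal disconnects the goal; the node names and exploit labels are invented for illustration, and the paper's SCC-tree backward search exists precisely to avoid this exponential enumeration.

```python
from itertools import combinations

# Toy attack graph: (src, dst, label); labels name atomic exploits.
edges = [
    ("start", "a", "phish"),
    ("start", "b", "scan"),
    ("a", "goal", "cve1"),
    ("b", "goal", "cve1"),
    ("b", "goal", "cve2"),
]

def reachable(blocked_labels):
    """Can 'goal' still be reached from 'start' once every edge
    carrying a blocked label is removed?"""
    adj = {}
    for s, d, lab in edges:
        if lab not in blocked_labels:
            adj.setdefault(s, []).append(d)
    stack, seen = ["start"], {"start"}
    while stack:
        node = stack.pop()
        if node == "goal":
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def min_label_cut():
    """Smallest set of labels (critical attacks) whose blocking
    disconnects the goal. Brute force, for clarity only."""
    labels = sorted({lab for _, _, lab in edges})
    for k in range(len(labels) + 1):
        for cut in combinations(labels, k):
            if not reachable(set(cut)):
                return set(cut)
```

Here blocking the two CVE exploits severs every path into the goal, so the critical-attacks set has size two even though four exploit labels exist.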
Title: Identification of Critical-Attacks Set in an Attack-Graph
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993105
Chuck Easttom
Cybersecurity is a comparatively new discipline, related to computer science, electrical engineering, and similar subjects. As a newer discipline, it lacks some of the tools found in more established fields. For example, many engineering disciplines have modeling languages of their own: software engineering uses the Unified Modeling Language (UML), and systems engineering uses the Systems Modeling Language (SysML). Cybersecurity engineering lacks such a generalized modeling language, and the profession would be enhanced by a security-specific one. This paper describes such a modeling language. The model is described in sufficient detail to be actionable and applicable; suggestions for future work are also provided.
Title: SecML: A Proposed Modeling Language for CyberSecurity
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8992944
Andrew Camphouse, L. Ngalamou
The modern home is becoming more and more reliant on connected devices. Nearly every device from the refrigerator to the thermostat is capable of connecting to the internet and communicating with other devices. Information security is discussed as it relates to smart home and Internet of Things devices. Several examples of exploits discovered in popular smart home hardware are discussed. A network security and monitoring appliance for the home is proposed as a possible solution.
Title: Securing a Connected Home
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993089
Debjyoti Sinha, M. El-Sharkawy
In the fields of computer, mobile, and embedded vision, Convolutional Neural Networks (CNNs) are deep learning models that play a significant role in object detection and recognition. MobileNet is one such efficient, lightweight model, but deploying such architectures on resource-constrained microcontroller units is challenging due to limited memory, energy, and power. Moreover, a model's overall accuracy generally decreases when its size and total number of parameters are reduced by methods such as pruning or deep compression. This paper proposes three hybrid MobileNet architectures with improved accuracy, reduced size, fewer layers, lower average computation time, and much less overfitting compared to the baseline MobileNet v1. The motivation for these models is a variant of the existing MobileNet that is easily deployable on memory-constrained MCUs. We name the smallest model (9.9 MB) Thin MobileNet. We increase accuracy by replacing the standard non-linear activation function ReLU with Drop Activation and by introducing the Random Erasing regularization technique in place of dropout. Model size is reduced by using separable convolutions instead of the depthwise separable convolutions used in the baseline MobileNet. Finally, we make the model shallower by eliminating a few unnecessary layers without a drop in accuracy. The experimental results are based on training the models on the CIFAR-10 dataset.
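As background for the size reductions discussed here, the classic MobileNet parameter arithmetic can be checked directly: a depthwise separable block replaces one k x k convolution with a k x k depthwise pass plus a 1x1 pointwise pass. This sketch covers only that well-known comparison; the paper's own "separable convolution" layer definition is not reproduced.

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """MobileNet v1 block: k x k depthwise filter per input channel,
    followed by a 1x1 pointwise convolution to mix channels."""
    return k * k * c_in + c_in * c_out
```

For a typical 3x3 layer with 32 input and 64 output channels, the standard convolution needs 18,432 weights while the depthwise separable version needs 2,336 — roughly an 8x reduction, which is why these blocks suit memory-constrained MCUs.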
Title: Thin MobileNet: An Enhanced MobileNet Architecture
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993067
Anurag Thantharate, C. Beard, S. Marupaduga
In Fifth Generation (5G) wireless cellular networks, smartphone battery efficiency and optimal utilization of power have become matters of utmost importance. Battery and power pose significant challenges because today's smartphones are equipped with advanced network features and systems that draw considerable power simultaneously to make decisions and transfer information between devices and the network while providing the best user experience. Furthermore, to meet the demands of increased data capacity and data rates, and to provide the best quality of service, energy-efficient architectures must be adopted. This paper presents system-level architectural changes to both the User Equipment (UE) and network elements, along with a proposal to modify control signaling within Radio Resource Control (RRC) messages based on smartphone battery level. Additionally, we present real-world 5G mmWave field results showing the impact of varying RF conditions on device battery life, and we propose methods to allocate network resources optimally and improve energy efficiency by modifying radio-layer parameters between devices and base stations. Without these proposed architecture-level and system-level algorithm changes, realizing optimal and consistent 5G speeds will be nearly impossible.
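The idea of tying radio-layer parameters to the reported battery level could be sketched as a simple policy table. Every parameter name, threshold, and value below is an invented illustration, not a value from this paper or from the 3GPP RRC specification.

```python
def radio_params_for_battery(battery_pct):
    """Illustrative policy: pick UE radio-layer settings from the
    reported battery level, trading peak throughput for energy at
    low charge (longer DRX sleep cycles, fewer MIMO layers, and
    dropping power-hungry mmWave carriers when nearly empty)."""
    if battery_pct > 60:
        return {"drx_cycle_ms": 40, "max_mimo_layers": 4, "mmwave": True}
    if battery_pct > 20:
        return {"drx_cycle_ms": 80, "max_mimo_layers": 2, "mmwave": True}
    return {"drx_cycle_ms": 160, "max_mimo_layers": 1, "mmwave": False}
```

In the paper's proposal the network would learn the battery level through modified RRC signaling and apply a policy of this general shape on the base-station side.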
Title: An Approach to Optimize Device Power Performance Towards Energy Efficient Next Generation 5G Networks
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Pub Date: 2019-10-01 | DOI: 10.1109/UEMCON47517.2019.8993036
O. Ajayi, O. Igbe, T. Saadawi
Intrusion detection systems (IDS) are one of the most effective ways of detecting malicious traffic in computer networks. Although an IDS identifies malicious activities in a network, it can have difficulty detecting distributed or coordinated attacks because it has only a single vantage point. To combat this problem, cooperative intrusion detection was proposed: nodes exchange attack features or signatures so that an attack previously detected by one node in the system can be recognized by the others. Exchanging attack features is necessary because the zero-day attacks (attacks without known signatures) experienced in different locations are not the same. Although this approach enhances the ability of a single IDS to respond to attacks previously identified by cooperating nodes, malicious activities such as fake data injection, data manipulation or deletion, and data-consistency problems threaten it. In this paper, we propose a solution that leverages blockchain's distributed technology, tamper resistance, and data immutability to detect and prevent malicious activities and to solve the data-consistency problems facing cooperative intrusion detection. Focusing on the extraction, storage, and distribution stages, we develop a blockchain-based solution that securely extracts features or signatures, adds an extra verification step, distributes the storage of these signatures and features, and secures data sharing. We evaluate the system's response time and its resistance to feature/signature injection. The results show that the proposed solution protects stored attack features and signatures against malicious data injection, manipulation, or deletion, and has low latency.
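The tamper-evidence property that makes a blockchain attractive for signature distribution can be shown with a minimal hash-chained ledger. This sketch omits consensus, peers, and the consortium layer entirely; the Snort-style rule strings are illustrative, not from the paper.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (excluding its own 'hash' field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    data = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def add_signature(chain, signature):
    """Append an attack signature as a new block linked to the
    previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev": prev, "signature": signature}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    """Tampering with any stored signature breaks the hash links,
    so cooperating nodes can detect manipulated shared data."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_signature(chain, 'alert tcp any any -> any 445 (msg:"SMB exploit";)')
add_signature(chain, 'alert udp any any -> any 53 (msg:"DNS tunnel";)')
```

A node receiving this chain re-runs `verify` before trusting any signature; altering a stored rule invalidates that block's hash and every link after it.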
Title: Consortium Blockchain-Based Architecture for Cyber-attack Signatures and Features Distribution
Published in: 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)