Pub Date: 2020-09-01 | DOI: 10.1109/SMARTCOMP50058.2020.00066
Samaneh Zolfaghari, E. Khodabandehloo, Daniele Riboni
The rapid increase of the senior population in our societies calls for innovative tools for the early detection of symptoms of cognitive decline. To this aim, several methods have recently been proposed that exploit Internet of Things data and artificial intelligence techniques to recognize abnormal behaviors. In particular, the analysis of position traces may enable early detection of cognitive decline. However, indoor movement analysis introduces several challenges: indoor movements are constrained by the shape of the environment and by the presence of obstacles, and are affected by variability in activity execution. In this paper, we propose a novel method to identify abnormal indoor movement patterns that may indicate cognitive decline according to well-known clinical models. Our method relies on trajectory segmentation, visual feature extraction from trajectory segments, and vision-based deep learning on the edge. To avoid privacy issues, we rely on indoor localization technologies that do not use cameras. Preliminary experimental results on a real-world dataset gathered from cognitively healthy persons and people with dementia show that this research direction is promising.
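The core trick of turning position traces into visual features for a CNN can be sketched by rasterizing each trajectory segment onto a small occupancy grid. This is only an illustrative guess at the idea, not the paper's actual pipeline; the grid size and normalization are hypothetical choices.

```python
import numpy as np

def rasterize_segment(points, grid_size=16):
    """Render a trajectory segment (list of (x, y) in [0, 1)) as a 2D
    occupancy grid, a hypothetical stand-in for the paper's visual features."""
    img = np.zeros((grid_size, grid_size), dtype=np.float32)
    for x, y in points:
        i = min(int(y * grid_size), grid_size - 1)
        j = min(int(x * grid_size), grid_size - 1)
        img[i, j] += 1.0                    # accumulate visit counts per cell
    return img / max(img.max(), 1e-9)       # normalize to [0, 1]

# A short indoor trace: a diagonal walk across the room.
trace = [(t / 10, t / 10) for t in range(10)]
img = rasterize_segment(trace)
print(img.shape)
```

A vision model would then consume such grids exactly as it would any single-channel image, which is what makes "vision-based deep learning on the edge" applicable without cameras.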
Title: Towards Vision-based Analysis of Indoor Trajectories for Cognitive Assessment
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date: 2020-09-01 | DOI: 10.1109/SMARTCOMP50058.2020.00041
Naima Khan, Nirmalya Roy
Water contamination has been a critical issue in many countries, including the USA. Physical, chemical, biological, and radiological substances can cause this contamination. Drinking water systems are allowed to contain chlorine, calcium, lead, arsenic, etc., up to certain levels. Instruments and paper sensors exist to measure the quantity of minerals in water, but they are expensive and not always convenient for easily determining whether a sample is safe to drink. Different minerals in water react to heat heterogeneously: some minerals (e.g., arsenic) remain in the water in noticeable amounts even after it reaches the boiling point. A cheaper and easier process is therefore needed to examine the quality of drinking water samples from different sources. With this in mind, we experimented with water samples from different places in the USA, including artificially prepared samples mixed with different impurities, and compared their heating properties against a sample of certified safe drinking water. We collected thermal images at 10-second intervals while the hot water samples cooled from the boiling point to room temperature. We extracted features for each water sample with a model combining convolutional and recurrent neural networks, and classified the samples by the type of added impurity and by the source from which they were collected. We also computed the feature distances of these water samples from the safe water sample. Our proposed framework can differentiate features of different impurities added to water samples and detects different categories of impurities with an average accuracy of 70%.
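The premise that impurities change how water cools can be illustrated with Newton's law of cooling, T(t) = T_env + (T0 - T_env)e^(-kt), where an impurity would shift the cooling constant k; the distance between a sample's cooling curve and a reference safe-water curve then acts as a crude feature distance. All constants below are hypothetical, and this is a toy stand-in for the paper's learned CNN/RNN features.

```python
import math

def cooling_curve(k, t_env=25.0, t0=100.0, steps=30, dt=10.0):
    """Newton's law of cooling sampled every dt seconds (10 s, as in the paper)."""
    return [t_env + (t0 - t_env) * math.exp(-k * i * dt) for i in range(steps)]

def curve_distance(a, b):
    """Euclidean distance between two cooling curves, a toy feature distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

safe = cooling_curve(k=0.010)     # hypothetical reference safe-water sample
sample = cooling_curve(k=0.008)   # hypothetical impure sample that cools slower
print(round(curve_distance(safe, sample), 2))
```

A classifier would see two such samples as separable precisely because their curves diverge over the cooling period.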
Title: Water Quality Assessment with Thermal Images
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
In this paper we develop a deep learning model to distinguish dust from cloud and surface using satellite remote sensing image data. The occurrence of dust storms is increasing along with global climate change, especially in arid and semi-arid regions. Originating from the soil, dust acts as a type of aerosol that causes significant impacts on the environment and human health. The dust and cloud labels used in this paper are from the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) satellite, and the radiometric channels and geometric parameters from the VIIRS (Visible Infrared Imaging Radiometer Suite) sensor serve as features for our model. We trained and tested our deep learning model using 10,000 samples from March 2012. The developed model has five hidden layers with 512 neurons in each layer, and its classification accuracy on the test set is 71.1%. In addition, we performed a shuffling procedure to identify the importance of features, calculated as the increase in prediction error after permuting a feature's values. We also developed a method based on a genetic algorithm to find the best subset of features for dust detection. The results show that the genetic algorithm can select a subset of features whose performance is comparable to that of a model using all features. Both the shuffling procedure and the genetic algorithm identify geometric information as important for detecting mineral dust. The chosen subset will improve the computational efficiency of dust detection and help improve physics-based methods.
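The shuffling procedure described here is standard permutation feature importance: permute one feature's column, re-evaluate, and record the increase in error. A minimal sketch on a synthetic regression task (the model and data below are stand-ins, not the paper's network or satellite features):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic target: feature 0 matters a lot, feature 1 a little, feature 2 not at all.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a least-squares linear model as a stand-in predictor.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
base_err = np.mean((X @ w - y) ** 2)

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # shuffle one feature's values
    importances.append(np.mean((Xp @ w - y) ** 2) - base_err)  # error increase

print([round(v, 3) for v in importances])
```

Features whose permutation barely moves the error (like feature 2 here) are the ones a genetic-algorithm subset search would likewise be free to drop.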
Pub Date: 2020-09-01 | DOI: 10.1109/SMARTCOMP50058.2020.00045
Authors: Ping Hou, Pei Guo, Peng Wu, Jianwu Wang, A. Gangopadhyay, Zhibo Zhang
Title: A Deep Learning Model for Detecting Dust in Earth's Atmosphere from Satellite Remote Sensing Data
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date: 2020-06-03 | DOI: 10.1109/SMARTCOMP50058.2020.00064
V. Seethi, Pratool Bharti
In recent years, there has been a surge in ubiquitous technologies such as smartwatches and fitness trackers that can track human physical activity effortlessly. These devices enable ordinary citizens to track their physical fitness and encourage them to lead a healthy lifestyle. Among various exercises, walking and running are the most common ones people do in everyday life, whether commuting, exercising, or doing household chores. Done at the right intensity, walking and running are sufficient to help individuals reach their fitness and weight-loss goals. It is therefore important to measure walking/running speed, both to estimate the calories burned and to protect against the risk of soreness, injury, and burnout. Existing wearable technologies use the GPS sensor to measure speed, which is highly energy-inefficient and does not work well indoors. In this paper, we design, implement, and evaluate a convolutional neural network based algorithm that leverages accelerometer and gyroscope data from a wrist-worn device to detect speed with high precision. Data from 15 participants were collected while they walked or ran at different speeds on a treadmill. Our speed detection algorithm achieved 4.2% and 9.8% MAPE (Mean Absolute Percentage Error) using a 70-15-15 train-test-evaluation split and leave-one-out cross-validation, respectively.
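The reported metric, MAPE, is simply the mean of per-sample absolute errors expressed as a percentage of the true value. A quick sketch (the speeds and predictions below are made up for illustration):

```python
def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, the metric reported in the paper."""
    return 100.0 * sum(abs(p - t) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical treadmill speeds (km/h) and model predictions.
true_speeds = [4.0, 6.0, 8.0, 10.0]
pred_speeds = [4.2, 5.8, 8.4, 9.6]

print(round(mape(true_speeds, pred_speeds), 2))  # -> 4.33
```

Note that MAPE weights errors relative to the true speed, so the same absolute error counts more at walking pace than at running pace.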
Title: CNN-based Speed Detection Algorithm for Walking and Running using Wrist-worn Wearable Sensors
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date: 2020-04-24 | DOI: 10.1109/SMARTCOMP50058.2020.00065
Tibor Schneider, Xiaying Wang, Michael Hersche, L. Cavigelli, L. Benini
Motor-Imagery Brain-Machine Interfaces (MI-BMIs) promise direct and accessible communication between human brains and machines by analyzing brain activity recorded with electroencephalography (EEG). Latency, reliability, and privacy constraints make it unsuitable to offload the computation to the cloud, and practical use cases demand a wearable, battery-operated device with low average power consumption for long-term use. Recently, sophisticated algorithms, in particular deep learning models, have emerged for classifying EEG signals. While reaching outstanding accuracy, these models often exceed the limitations of edge devices due to their memory and computational requirements. In this paper, we demonstrate algorithmic and implementation optimizations for EEGNet, a compact Convolutional Neural Network (CNN) suitable for many BMI paradigms. We quantize weights and activations to 8-bit fixed point with a negligible accuracy loss of 0.4% on 4-class MI, and present an energy-efficient, hardware-aware implementation on the Mr. Wolf parallel ultra-low-power (PULP) System-on-Chip (SoC) by utilizing its custom RISC-V ISA extensions and 8-core compute cluster. With our proposed optimization steps, we obtain an overall speedup of 64× and a reduction of up to 85% in memory footprint with respect to a single-core, layer-wise baseline implementation. Our implementation takes only 5.82 ms and consumes 0.627 mJ per inference. At 21.0 GMAC/s/W, it is 256× more energy-efficient than an EEGNet implementation on an ARM Cortex-M7 (0.082 GMAC/s/W).
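The 8-bit quantization step can be sketched as symmetric linear quantization: derive a scale from the tensor's maximum magnitude, round to int8, and dequantize when needed. This is a simplified illustration; the paper's exact fixed-point scheme (per-layer scales, activation handling) may differ.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization to 8-bit, a simplified sketch of
    fixed-point weight quantization (not the paper's exact scheme)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([-0.8, -0.1, 0.0, 0.4, 0.8], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.max(np.abs(w - w_hat)))  # worst-case rounding error, bounded by scale/2
```

Storing `q` instead of `w` cuts the memory footprint by 4× versus float32, and integer arithmetic is what lets an 8-core PULP cluster exploit its SIMD/ISA extensions.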
Title: Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain-Machine Interfaces
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date: 2020-04-24 | DOI: 10.1109/SMARTCOMP50058.2020.00028
Gevorg Yeghikyan, Felix L. Opolka, M. Nanni, B. Lepri, P. Lio’
A fundamental problem of interest to policy makers, urban planners, and other stakeholders involved in urban development is assessing the impact of planning and construction activities on mobility flows. This is a challenging task due to the different spatial, temporal, social, and economic factors influencing urban mobility flows. These flows, along with the influencing factors, can be modelled as attributed graphs with both node and edge features characterising locations in a city and the various types of relationships between them. In this paper, we address the problem of assessing origin-destination (OD) car flows between a location of interest and every other location in a city, given their features and the structural characteristics of the graph. We propose three neural network architectures, including graph neural networks (GNN), and conduct a systematic comparison between the proposed methods and state-of-the-art spatial interaction models, their modifications, and machine learning approaches. The objective of the paper is to address the practical problem of estimating potential flow between an urban project location and other locations in the city, where the features of the project location are known in advance. We evaluate the performance of the models on a regression task using a custom data set of attributed car OD flows in London. We also visualise the model performance by showing the spatial distribution of flow residuals across London.
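The classical spatial interaction baseline the paper compares against is the gravity model: flow between two locations grows with their "masses" (e.g., population or activity level) and decays with distance. A minimal sketch, with hypothetical masses and distances:

```python
def gravity_flow(m_origin, m_dest, distance, k=1.0, beta=2.0):
    """Classical gravity model of spatial interaction: flow is proportional
    to the product of location masses and decays with distance**beta."""
    return k * m_origin * m_dest / distance ** beta

# Hypothetical location masses and distances (km).
print(gravity_flow(1000, 2000, 5.0))   # a nearby pair of locations
print(gravity_flow(1000, 2000, 10.0))  # same pair at twice the distance
```

Graph neural networks generalize this by learning the interaction from node and edge features instead of fixing the power-law form, which is precisely the comparison the paper sets up.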
Title: Learning Mobility Flows from Urban Features with Spatial Interaction Models and Neural Networks
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date: 2020-04-10 | DOI: 10.1109/SMARTCOMP50058.2020.00026
Afiya Ayman, Michael Wilbur, Amutheezan Sivagnanam, Philip Pugliese, A. Dubey, Aron Laszka
Due to increasing concerns about environmental impact, operating costs, and energy security, public transit agencies are seeking to reduce their fuel use by employing electric vehicles (EVs). However, because of the high upfront cost of EVs, most agencies can afford only mixed fleets of internal-combustion and electric vehicles. Making the best use of these mixed fleets is a challenge, since optimizing the assignment of vehicles to transit routes, scheduling charging, and similar decisions require accurate predictions of electricity and fuel use. Recent advances in sensor-based technologies, data analytics, and machine learning make it possible to remedy this situation; however, to the best of our knowledge, no existing framework integrates all relevant data into a route-level prediction model for public transit. In this paper, we present a novel framework for the data-driven prediction of route-level energy use for mixed-vehicle transit fleets, which we evaluate using data collected from the bus fleet of CARTA, the public transit authority of Chattanooga, TN. We present a data collection and storage framework that captures system-level data, including traffic and weather conditions, as well as high-frequency vehicle-level data, including location traces and fuel or electricity use. We present domain-specific methods and algorithms for integrating and cleansing data from various sources, including street and elevation maps. Finally, we train and evaluate machine learning models, including deep neural networks, decision trees, and linear regression, on our integrated dataset. Our results show that neural networks provide accurate estimates, while other models can help us discover relations between energy use and factors such as road and weather conditions.
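The linear-regression baseline they mention can be sketched as a least-squares fit of energy use against route-level features. The features, coefficients, and data below are entirely synthetic stand-ins for the CARTA dataset; the point is only the shape of the task.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
distance_km = rng.uniform(2, 20, n)    # hypothetical route lengths
avg_speed = rng.uniform(10, 50, n)     # hypothetical average speeds (km/h)
# Synthetic ground truth: energy grows with distance, shrinks with speed.
energy = 1.5 * distance_km - 0.05 * avg_speed + rng.normal(scale=0.2, size=n)

X = np.column_stack([distance_km, avg_speed, np.ones(n)])  # intercept column
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
print(np.round(coef, 2))  # recovers roughly [1.5, -0.05, ~0]
```

Interpretable coefficients like these are what let simpler models "discover relations between energy use and factors such as road and weather conditions," even when a neural network predicts more accurately.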
Title: Data-Driven Prediction of Route-Level Energy Use for Mixed-Vehicle Transit Fleets
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date: 2020-03-16 | DOI: 10.1109/SMARTCOMP50058.2020.00069
M. A. U. Alam, Dhawal Kapadia
Veteran mental health is a significant national problem, as a large number of veterans are returning from the recent war in Iraq and the continued military presence in Afghanistan. While significant existing work has investigated Twitter-post-based Post-Traumatic Stress Disorder (PTSD) assessment using black-box machine learning techniques, these frameworks cannot be trusted by clinicians due to their lack of clinical explainability. To obtain the trust of clinicians, we explore the big question: can Twitter posts provide enough information to fill out the clinical PTSD assessment surveys that clinicians have traditionally trusted? To answer this question, we propose LAXARY (Linguistic Analysis-based Explainable Inquiry), a novel Explainable Artificial Intelligence (XAI) model to detect and represent the PTSD assessment of Twitter users using a modified Linguistic Inquiry and Word Count (LIWC) analysis. First, we employ clinically validated survey tools to collect clinical PTSD assessment data from real Twitter users and develop a PTSD Linguistic Dictionary from the survey results. Then, we use the PTSD Linguistic Dictionary with a machine learning model to fill out the survey tools and thereby detect the PTSD status and intensity of the corresponding Twitter users. Our experimental evaluation on 210 clinically validated veteran Twitter users yields promising accuracy for both PTSD classification and intensity estimation. We also evaluate the reliability and validity of our PTSD Linguistic Dictionary.
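LIWC-style analysis boils down to counting what fraction of a user's tokens fall into each dictionary category. A toy sketch; the two-category dictionary below is entirely hypothetical, whereas the paper builds its real PTSD Linguistic Dictionary from clinically validated survey results.

```python
# A toy, entirely hypothetical dictionary for illustration only.
ptsd_dictionary = {
    "anxiety": {"worried", "nervous", "afraid"},
    "sleep": {"insomnia", "nightmare", "sleepless"},
}

def category_scores(posts, dictionary):
    """Fraction of tokens matching each dictionary category (LIWC-style)."""
    tokens = [w.lower().strip(".,!?") for post in posts for w in post.split()]
    return {cat: sum(t in words for t in tokens) / len(tokens)
            for cat, words in dictionary.items()}

posts = ["Nervous and sleepless again.", "Another nightmare last night."]
print(category_scores(posts, ptsd_dictionary))
```

Because each score traces back to specific words in specific categories, a clinician can inspect exactly why a survey item was filled out a certain way, which is the explainability argument the paper makes.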
Title: LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment
Published in: 2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Pub Date : 2020-02-06DOI: 10.1109/SMARTCOMP50058.2020.00038
Pooja Gupta, Volkan Dedeoglu, K. Najeebullah, S. Kanhere, R. Jurdak
Personal IoT data is a new economic asset that individuals can trade to generate revenue on emerging data marketplaces. Typically, marketplaces are centralized systems that raise concerns about privacy, single points of failure, and lack of transparency, and that rely on trusted intermediaries to ensure fairness. Furthermore, battery-operated IoT devices limit the amount of IoT data that can be traded in real-time, which affects buyer/seller satisfaction and hence the sustainability and usability of such a marketplace. This work proposes to utilize blockchain technology to realize a trusted and transparent decentralized marketplace, with contract compliance, for trading IoT data streams generated in real-time by battery-operated IoT devices. The contribution of this paper is two-fold: (1) we propose an autonomous blockchain-based marketplace equipped with essential functionalities such as an agreement framework, a pricing model, and a rating mechanism, creating an effective marketplace framework without involving a mediator; (2) we propose a mechanism for the selection and allocation of buyers' demands on sellers' devices under quality and battery constraints. We present a proof-of-concept implementation in Ethereum to demonstrate the feasibility of the framework. We investigate the impact of buyers' demand on the battery drainage of the IoT devices under different scenarios through extensive simulations. Our results show that this approach is viable and benefits both seller and buyer, creating a sustainable marketplace model for trading IoT data in real-time from battery-powered IoT devices.
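The selection-and-allocation idea under battery constraints can be sketched as a simple greedy rule: serve the highest-priced demands first, assigning each to a device that has enough remaining energy. This is a minimal sketch under assumed energy costs and prices, not the paper's actual mechanism or its quality constraints.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    battery_mj: float          # remaining energy budget (illustrative units)
    cost_per_sample_mj: float  # energy to sense and transmit one sample
    allocated: list = field(default_factory=list)

def allocate(demands, devices):
    """Greedily assign each demand (buyer, samples, price), highest price
    first, to the feasible device with the most remaining battery;
    demands no device can serve are skipped."""
    revenue = 0.0
    for buyer, samples, price in sorted(demands, key=lambda d: -d[2]):
        cost = lambda dv: samples * dv.cost_per_sample_mj
        feasible = [dv for dv in devices if dv.battery_mj >= cost(dv)]
        if not feasible:
            continue
        dev = max(feasible, key=lambda dv: dv.battery_mj)
        dev.battery_mj -= cost(dev)
        dev.allocated.append(buyer)
        revenue += price
    return revenue

devices = [Device("d1", 100.0, 0.5), Device("d2", 40.0, 0.5)]
revenue = allocate([("b1", 120, 10.0), ("b2", 60, 8.0), ("b3", 500, 20.0)], devices)
```

Note that "b3" is dropped because serving it would drain more energy than any device has, which mirrors the battery-drainage trade-off the simulations study.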
{"title":"Energy-aware Demand Selection and Allocation for Real-time IoT Data Trading","authors":"Pooja Gupta, Volkan Dedeoglu, K. Najeebullah, S. Kanhere, R. Jurdak","doi":"10.1109/SMARTCOMP50058.2020.00038","DOIUrl":"https://doi.org/10.1109/SMARTCOMP50058.2020.00038","url":null,"abstract":"Personal IoT data is a new economic asset that individuals can trade to generate revenue on the emerging data marketplaces. Typically, marketplaces are centralized systems that raise concerns of privacy, single point of failure, little transparency and involve trusted intermediaries to be fair. Furthermore, the battery-operated IoT devices limit the amount of IoT data to be traded in real-time that affects buyer/seller satisfaction and hence, impacting the sustainability and usability of such a marketplace. This work proposes to utilize blockchain technology to realize a trusted and transparent decentralized marketplace for contract compliance for trading IoT data streams generated by battery-operated IoT devices in real-time. The contribution of this paper is two-fold: (1) we propose an autonomous blockchain-based marketplace equipped with essential functionalities such as agreement framework, pricing model and rating mechanism to create an effective marketplace framework without involving a mediator, (2) we propose a mechanism for selection and allocation of buyers' demands on seller's devices under quality and battery constraints. We present a proof-of-concept implementation in Ethereum to demonstrate the feasibility of the framework. We investigated the impact of buyer's demand on the battery drainage of the IoT devices under different scenarios through extensive simulations. 
Our results show that this approach is viable and benefits the seller and buyer for creating a sustainable marketplace model for trading IoT data in real-time from battery-powered IoT devices.","PeriodicalId":346827,"journal":{"name":"2020 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117020667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-11-05DOI: 10.1109/SMARTCOMP50058.2020.00042
Nathaniel Hudson, Hana Khamfroush, Brent Harrison, Adam Craig
Smart cities are a growing paradigm in the design of systems that interact with one another, empowered by data and technology, for informed and efficient decision making about a city's resources. The diffusion of information to citizens in a smart city will rely on social trends and smart advertisement. Online social networks (OSNs) are prominent and increasingly important platforms to spread information, observe social trends, and advertise new products. To maximize the benefits of such platforms in sharing information, many groups invest in maximizing the expected number of clicks as a proxy for these platforms' performance. As such, the study of click-through rate (CTR) prediction for advertisements, in environments like online social media, is of much interest. Prior works build machine learning (ML) models using user-specific data to classify whether a user will click on an advertisement or not. In our work, we consider a large set of Facebook advertisement data (with no user data) and categorize targeted interests into thematic groups we call conceptual nodes. ML models are trained on the advertisement data to perform CTR prediction over conceptual node combinations. We then cast the problem of finding the optimal combination of conceptual nodes as an optimization problem: given a budget $k$, we seek the combination of conceptual nodes that maximizes the CTR. We discuss the hardness, and possible NP-hardness, of this optimization problem. Then, we propose a greedy algorithm and a genetic algorithm to find near-optimal combinations of conceptual nodes in polynomial time, with the genetic algorithm nearly matching the optimal solution. We observe that simple ML models can exhibit high Pearson correlation coefficients between click predictions and real click values. Additionally, we find that the conceptual nodes of "politics", "celebrity", and "organization" are notably more influential than the other considered conceptual nodes.
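The greedy variant of the budget-$k$ selection described above can be sketched as follows: starting from the empty set, repeatedly add the conceptual node that most increases the predicted CTR. The node names and CTR values below are illustrative assumptions, and the scoring function merely stands in for the trained ML model.

```python
# Hypothetical learned CTR contributions for node combinations.
PAIR_CTR = {
    frozenset({"politics"}): 0.04, frozenset({"celebrity"}): 0.05,
    frozenset({"sports"}): 0.02, frozenset({"organization"}): 0.03,
    frozenset({"politics", "celebrity"}): 0.09,
}

def predicted_ctr(nodes: frozenset) -> float:
    """Stand-in for the ML model: look up known combinations,
    otherwise fall back to summing the singleton contributions."""
    if nodes in PAIR_CTR:
        return PAIR_CTR[nodes]
    return sum(PAIR_CTR.get(frozenset({n}), 0.0) for n in nodes)

def greedy_select(candidates: set, k: int) -> frozenset:
    """Greedily add, one at a time, the node whose inclusion
    yields the highest predicted CTR, until the budget k is spent."""
    chosen = frozenset()
    for _ in range(k):
        remaining = candidates - chosen
        if not remaining:
            break
        best = max(remaining, key=lambda n: predicted_ctr(chosen | {n}))
        chosen |= {best}
    return chosen

picked = greedy_select({"politics", "celebrity", "sports", "organization"}, 2)
```

Each greedy step evaluates the model once per remaining candidate, so the whole procedure runs in polynomial time, in contrast to exhaustively scoring all size-$k$ combinations.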
{"title":"Smart Advertisement for Maximal Clicks in Online Social Networks Without User Data","authors":"Nathaniel Hudson, Hana Khamfroush, Brent Harrison, Adam Craig","doi":"10.1109/SMARTCOMP50058.2020.00042","DOIUrl":"https://doi.org/10.1109/SMARTCOMP50058.2020.00042","url":null,"abstract":"Smart cities are a growing paradigm in the design of systems that interact with one another for informed and efficient decision making, empowered by data and technology, of resources in a city. The diffusion of information to citizens in a smart city will rely on social trends and smart advertisement. Online social networks (OSNs) are prominent and increasingly important platforms to spread information, observe social trends, and advertise new products. To maximize the benefits of such platforms in sharing information, many groups invest in finding ways to maximize the expected number of clicks as a proxy of these platform's performance. As such, the study of click-through rate (CTR) prediction of advertisements, in environments like online social media, is of much interest. Prior works build machine learning (ML) using user-specific data to classify whether a user will click on an advertisement or not. For our work, we consider a large set of Facebook advertisement data (with no user data) and categorize targeted interests into thematic groups we call conceptual nodes. ML models are trained using the advertisement data to perform CTR prediction with conceptual node combinations. We then cast the problem of finding the optimal combination of conceptual nodes as an optimization problem. Given a certain budget $k$, we are interested in finding the optimal combination of conceptual nodes that maximize the CTR. We discuss the hardness and possible NP-hardness of the optimization problem. 
Then, we propose a greedy algorithm and a genetic algorithm to find near-optimal combinations of conceptual nodes in polynomial time, with the genetic algorithm nearly matching the optimal solution. We observe that simple ML models can exhibit the high Pearson correlation coefficients w.r.t. click predictions and real click values. Additionally, we find that the conceptual nodes of “politics”, “celebrity”, and “organization” are notably more influential than other considered conceptual nodes.","PeriodicalId":346827,"journal":{"name":"2020 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121844622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}