A Lightweight Approach to Detect the Low/High Rate IP Spoofed Cloud DDoS Attacks
Neha Agrawal, S. Tapaswi. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.25

In cloud computing, Distributed Denial-of-Service (DDoS) attacks broadly take two forms. Attackers use the Internet Protocol (IP) spoofing technique to disguise the source's identity when launching a DDoS attack, which makes detection both crucial and challenging. This paper proposes an adaptive, lightweight approach that accurately detects low-rate and high-rate spoofed DDoS attack traffic. The approach is implemented in a closed cloud environment. Experimental results show that it detects internal and external low/high-rate spoofed DDoS attacks with 99.3% accuracy and provides better overall performance.

A Distributed Cloud Service for the Resolution of SAT
Yanik Ngoko, D. Trystram, C. Cérin. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.9

In this paper, we introduce a new parallel and distributed algorithm for solving the satisfiability problem. The proposed algorithm is based on an algorithm portfolio and is intended for servicing requests in a distributed cloud. The core of our contribution is the modeling of the optimal resource-sharing schedule in parallel executions and the design of heuristics for its approximation. For this purpose, we reformulate a computational problem introduced in prior work. The main assumption is that the optimal resource sharing can be learned from traces collected from past executions on a representative set of instances. We show that this learning can be formalized as a set coverage problem, and we propose approximation and dynamic programming algorithms, based on classical greedy algorithms for the maximum coverage problem, to solve it. Finally, we conduct an experimental evaluation comparing the performance of the proposed algorithms. The results show that some algorithms become more competitive once the trade-off between their quality and the runtime required to compute them is taken into account.

Cyber Security Body of Knowledge
Evon M. O. Abu-Taieh. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.23

The cyber world is ever-changing, and cyber security is of the utmost importance: it touches the lives of everyone in the cyber world, including researchers, students, businesses, academia, and novice users. This paper suggests a body of knowledge that incorporates the views of both academia and practitioners. The research attempts to lay the groundwork and a framework for a cyber security body of knowledge, allowing practitioners and academics to confront the lack of standardization. Furthermore, the paper attempts to bridge the gap between these different audiences, a gap so broad that even the spelling of the term "cyber security" is not agreed upon. The suggested body of knowledge may not be perfect, yet it is a step forward.

A Hidden Markov Model-Based Map-Matching Approach for Low-Sampling-Rate GPS Trajectories
Yu-Ling Hsueh, Ho-Chian Chen, Wei-Jie Huang. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.52

Map matching is the process of matching a series of recorded geographic coordinates (e.g., a GPS trajectory) to a road network. Due to GPS positioning errors and sampling constraints, the data collected by GPS devices are imprecise, and a user's location cannot always be shown correctly on the map. Unfortunately, most current map-matching algorithms consider only the distance between the GPS points and the road segments, the topology of the road network, and the speed constraints of the road segments when determining the matching results. In this paper, we propose a spatio-temporal map-matching algorithm (STD-matching) for low-sampling-rate GPS trajectories. STD-matching considers spatial features, such as the distance information and topology of the road network, the speed constraints of the road network, and the real-time moving direction, which reflects the user's movement. In our experiments, we compare STD-matching with three existing algorithms, ST-matching, stMM, and HMM-RCM, on a real data set. The experimental results show that STD-matching outperforms the three existing algorithms in terms of matching accuracy.

Reducing Imbalance Ratio in MapReduce
Hsing-Lung Chen, Y. Shen. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.54

To speed up processing, MapReduce invokes many mappers and reducers concurrently. Each mapper sends its intermediate map-outputs to reducers according to the keys of the data. For big data exhibiting data skew, some partitions receive a huge amount of data, so the reducers assigned to them need more time to process their partitions, which increases the total execution time. This paper proposes a balanced partition method that divides the intermediate map-outputs evenly. The method runs a preprocessing MapReduce job (mapper1 and reducer1) from which the partitioner is derived. Mapper1 counts key frequencies efficiently using a trie data structure. Based on all the key frequencies, reducer1 derives sub-partitions from cut-points and distributes these sub-partitions evenly across partitions. The cut-points and the resulting mapping table are then used by every mapper of the application MapReduce job to partition the intermediate map-outputs evenly, thereby reducing the execution time.

A Recommendation-Based Parameter Tuning Approach for Hadoop
Lin Cai, Yong Qi, Jingwei Li. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.41

We have now entered the big data era. Hadoop, one of the most popular big data processing platforms, has many parameters that relate closely to the utilization of resources such as CPU and memory. Tuning these parameters is thus an important way to improve Hadoop's resource utilization. However, tuning parameters manually is impractical because the time cost of tuning is too high, so parameters must be configured automatically and quickly. Previous auto-tuning methods often take a long time to reach the optimal configuration, which reduces the overall resource efficiency of the cluster. In this paper, we propose mrEtalon, an adaptive tuning framework that recommends a near-optimal configuration for a new job in a short time. mrEtalon maintains a configuration repository of candidate configurations and a collaborative-filtering-based recommendation engine that accelerates parameter optimization. We have deployed mrEtalon in our experimental cluster, and the results demonstrate that, for a new MapReduce application, mrEtalon reduces the recommendation time to 20-30% of that of previous methods while keeping nearly the same recommendation quality.

Analysis of Influential Factors in Secondary PM2.5 by K-Medoids and Correlation Coefficient
Jui-Hung Chang, Chien-Yuan Tseng, Hung-Hsi Chiang, Ren-Hung Hwang. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.34

Many factors influence PM2.5, and reducing PM2.5 emissions is a subject of international concern. In recent years, studies have indicated that one source of secondary PM2.5 is the complex chemical reaction between NH3 and air pollutants (VOCs, particulate matter, NOx, SOx). The Committee on Agriculture of the FAO indicates that 64% of surface NH3 emissions derive from livestock raising, which motivates this study to examine the following two subjects based on Open Government Data. Subject 1 calculates the effect of the controlled air pollutants (VOCs, particulate matter, NOx, SOx) and of the quantity of livestock (e.g., pigs and chickens) raised near air monitoring stations on the annual mean of PM2.5. Subject 2 uses Apache Spark as the cloud computing platform: the air monitoring stations are geographically clustered by K-medoids, and the Spearman correlation coefficient between pollution sources and PM2.5 is calculated for each cluster. The experimental results show that monitoring stations with more air pollutants and livestock raised nearby have higher annual mean PM2.5 concentrations. The results are expected to help government bodies make environmental decisions and to encourage plants and livestock farms to install air monitors for analyzing air quality data. Our ultimate goals are to improve the environment and to reduce both PM2.5 emissions and the probability of cardiovascular disease.

Building Microclouds at the Network Edge with the Cloudy Platform
Felix Freitag, R. P. Centelles, L. Navarro. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.49

Edge computing enables new types of services that operate at the network edge, with important use cases in pervasive computing, ambient intelligence, and the Internet of Things (IoT). In this demo paper we present microclouds deployed at the network edge in the Guifi.net community network, leveraging an open, extensible platform called Cloudy. The demonstration focuses on the following aspects: the usage of Cloudy for end users, the services Cloudy provides for building microclouds, and application scenarios for IoT data management within microclouds.

Using ANN to Analyze the Correlation Between Tourism-Related Hot Words and Tourist Numbers: A Case Study in Japan
Jui-Hung Chang, Chien-Yuan Tseng, Ren-Hung Hwang, Mingcao Ma. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.27

Google's search engine has recorded the popularity of a great number of tourism-related hot words. Before an overseas trip, many people search the Internet for the four dimensions of tourism: food, fashion, accommodation, and transportation. Exploring the correlation between the popularity trends of tourism-related hot words and the number of tourists visiting a particular destination is therefore a potentially valuable research area for the tourist industry. This study counted the occurrence frequency of words related to Japanese tourism in the Google search engine and in tourism articles on electronic news websites. With these data, it calculated the Pearson correlation coefficient against the number of Taiwanese tourists visiting Japan n months later. Additionally, a deep learning (artificial neural network) model was built to examine the relationship between the popularity scores of tourism-related hot words and the interval of the number of Taiwanese tourists in Japan. The results show that the popularity of tourism-related hot words on Google is highly correlated with the number of Taiwanese tourists visiting Japan.

A Mobile Cloud-Based Biofeedback Platform for Evaluating Medication Response
Yu-Zheng Lai, Chih-Hua Tai, Yue-Shan Chang, Kuo-Hsuan Chung. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). DOI: 10.1109/SC2.2017.35

In recent years, biofeedback has been widely applied to the diagnosis and treatment of various diseases, and there is increasing research exploiting ICT (Information and Communication Technology), such as cloud technology, to support diagnosis and treatment. How to use mobile cloud technology to assist diagnosis, record treatment status, and infer outcomes is therefore an important issue. In this paper, we propose a mobile cloud platform and framework for patients with mental illness that evaluates medication response through the collection, integration, and fusion of a variety of biofeedback information, so that physicians can know the patient's situation. Physiological data, including heart rate variability and brain waves, are collected through wearable sensors, and psychological data are collected through a monthly mood chart. The physiological and psychological biofeedback data are fused to show the medication response after the patient takes a medication. An app implementing the framework has been developed to demonstrate its effectiveness.
