Cooperative Adaptive Cruise Control (CACC) for human-driven and autonomous self-driving vehicles aims to achieve actively safe driving that avoids accidents and traffic jams by exchanging road traffic information (e.g., traffic flow, traffic density, velocity variation) among neighboring vehicles. In CACC, however, a butterfly effect arises when vehicles brake asynchronously, which easily leads to backward shockwaves that are difficult to remove. Several critical issues must be addressed in CACC, including 1) the difficulty of adaptively controlling inter-vehicle distances and vehicle speeds among neighboring vehicles, 2) susceptibility to the butterfly effect, and 3) unstable traffic flow. To address these issues, this paper proposes a Mobile Edge Computing-based vehicular cloud of Cooperative Adaptive Driving (CAD) approach that efficiently avoids shockwaves in platoon driving. Numerical results demonstrate that the CAD approach outperforms the compared approaches in the number of shockwaves, average vehicle velocity, and average travel time. Additionally, the adaptive platoon length is determined according to the traffic information gathered from the global and local clouds.
{"title":"Mobile Edge Computing-Based Vehicular Cloud of Cooperative Adaptive Driving for Platooning Autonomous Self Driving","authors":"Ren-Hung Huang, Ben-Jye Chang, Yueh-Lin Tsai, Ying-Hsin Liang","doi":"10.1109/SC2.2017.13","DOIUrl":"https://doi.org/10.1109/SC2.2017.13","url":null,"abstract":"The Cooperative Adaptive Cruise Control (CACC) for Human and Autonomous Self-Driving aims to achieve active safe driving that avoids vehicle accidents or traffic jam by exchanging the road traffic information (e.g., traffic flow, traffic density, velocity variation, etc.) among neighbor vehicles. However, in CACC, the butterfly effect is happened while exhibiting asynchronous brakes that easily lead to backward shockwaves and difficult to be removed. Several critical issues should be addressed in CACC, including: 1) difficult to adaptively control the inter-vehicle distances among neighbor vehicles and the vehicle speed, 2) suffering from the butterfly effect, 3) unstable vehicle traffic flow, etc. For addressing above issues in CACC, this paper proposes the Mobile Edge Computing-based vehicular cloud of Cooperative Adaptive Driving (CAD) approach to avoid shockwaves efficiently in platoon driving. Numerical results demonstrate that CAD approach outperforms the compared approaches in number of shockwaves, average vehicle velocity, and average travel time. Additionally, the adaptive platoon length is determined according to the traffic information gathered from the global and local clouds.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130769253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context: Software developers face complex, connected, and large software projects. The development of such systems involves design decisions that directly impact the quality of the software. For early decision making, software developers can use model-based prediction approaches for (non-)functional quality properties. Unfortunately, the accuracy of these approaches is challenged by newly introduced hardware features such as multiple cores within a single CPU (multicores) and their dependence on shared memory and other shared resources. Objectives: Our goal is to understand whether and how existing model-based performance prediction approaches face this challenge. We plan to use the gained insights as a foundation for enriching existing prediction approaches with capabilities to predict systems running on multicores. Methods: We perform a Systematic Literature Review (SLR) to identify current model-based prediction approaches in the context of multicores. Results: Our SLR covers the software engineering, embedded systems, High Performance Computing, and Software Performance Engineering domains, for which we examined 34 sources in detail. We found various performance prediction approaches that try to increase prediction accuracy for multicore systems by incorporating shared-memory designs into the prediction models. Conclusion: However, our results show that these memory design models are only in an initial phase. Further research is needed to improve the cache, memory, and memory-bandwidth models as well as to include auto-tuner support.
{"title":"Parallelization, Modeling, and Performance Prediction in the Multi-/Many Core Area: A Systematic Literature Review","authors":"Markus Frank, Marcus Hilbrich, Sebastian Lehrig, Steffen Becker","doi":"10.1109/SC2.2017.15","DOIUrl":"https://doi.org/10.1109/SC2.2017.15","url":null,"abstract":"Context: Software developers face complex, connected, and large software projects. The development of such systems involves design decisions that directly impact the quality of the software. For an early decision making, software developers can use model-based prediction approaches for (non-)functional quality properties. Unfortunately, the accuracy of these approaches is challenged by newly introduced hardware features like multiple cores within a single CPU (multicores) and their dependence on shared memory and other shared resources. Objectives: Our goal is to understand whether and how existing model-based performance prediction approaches face this challenge. We plan to use gained insights as foundation for enriching existing prediction approaches with capabilities to predict systems running on multicores. Methods: We perform a Systematic Literature Review (SLR) to identify current model-based prediction approaches in the context of multicores. Results: Our SLR covers the software engineering, embedded systems, High Performance Computing, and Software Performance Engineering domains for which we examined 34 sources in detail. We found various performance prediction approaches which tries to increase prediction accuracy for multicore systems by including shared memory designs to the prediction models. Conclusion: However, our results show that the memory designs models are only in an initial phase. Further research has to be done to improve cache, memory, and memory bandwidth model as well as to include auto tuner support.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121652932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the era of Industry 4.0, we seek to create a smart factory environment in which everything is connected and well coordinated. Smart factories will also be connected to cloud services and/or all kinds of partners outside the boundary of the factory to achieve even better efficiency. However, network connectivity also brings threats along with the promise of better efficiency and makes smart factories more vulnerable to intruders. There have already been security incidents, such as the infection of Iran's nuclear facilities by the Stuxnet virus and the destruction of a German steel mill by hackers in 2014. To protect smart factories from such threats, traditional means of intrusion detection used on the Internet could be applied, but they must be refined and adapted to the context of Industry 4.0. For example, network traffic in a smart factory might be more uniform and predictable than traffic on the Internet, but far less anomaly should be tolerated, as the traffic is usually mission critical and an intrusion causes much greater loss. The most widely used signature-based intrusion detection systems, which come with large libraries of signatures for known attacks, have proved very useful but cannot detect unknown attacks. We therefore turn to supervised data mining algorithms, which help detect intrusions that share properties with known attacks but do not necessarily fully match the signatures in the library. In this study, a simulated smart factory environment was built and a series of attacks was implemented. A neural network and decision trees were used to classify the traffic generated in this simulated environment. From the experiments we conclude that, for the data set we used, the decision tree performed better than the neural network for detecting intrusions, providing higher accuracy, a lower false negative rate, and faster model building time.
{"title":"Using Data Mining Methods to Detect Simulated Intrusions on a Modbus Network","authors":"Szu-Chuang Li, Yennun Huang, Bo-Chen Tai, Chi Lin","doi":"10.1109/SC2.2017.29","DOIUrl":"https://doi.org/10.1109/SC2.2017.29","url":null,"abstract":"In the era of Industry 4.0 we seek to create a smart factory environment in which everything is connected and well coordinated. Smart factories will also be connected to cloud service and/or all kinds of partners outside the boundary of the factory to achieve even better efficiency. However network connectivity also brings threats along with the promise of better efficiency, and makes Smart factories more vulnerable to intruders. There were already security incidents such as Iran's nuclear facilities' infection by the Stuxnet virus and German's steel mill destroyed by hackers in 2014. To protect smart factories from such threats traditional means of intrusion detection on the Internet could be used, but we must also refine them and have them adapted to the context of Industry 4.0. For example, network traffic in a smart factory might be more uniformed and predictable compared to the traffic on the Internet, but one should tolerate much less anomaly as the traffic is usually mission critical, and will cause much more loss once intrusion happens. The most widely used signature-based intrusion detection systems come with a large library of signatures that contains known attack have been proved to be very useful, but without the ability to detect unknown attack. We turn to supervised data mining algorithms to detect intrusions, which will help us to detect intrusions with similar properties with known attacks but not necessarily fully match the signatures in the library. In this study a simulated smart factory environment was built and a series of attacks were implemented. Neural network and decision trees were used to classify the traffic generated from this simulated environment. From the experiments we conclude that for the data set we used, decision tree performed better than neural network for detecting intrusion as it provides better accuracy, lower false negative rate and faster model building time.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130165786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data has emerged as a promising technology for handling huge and specialized data. Processing Big data involves selecting the appropriate services and resources, given the variety of services offered by different cloud providers. Such selection is difficult, especially when a set of Big data requirements must be met. In this paper, we propose a dynamic cloud service selection scheme that assesses Big data requirements, dynamically maps them to the most available cloud services, and then recommends the best-matching services to fulfill different Big data processing requests. Our selection is conducted in two stages: 1) the first relies on a Big data task profile that efficiently captures the task's requirements, maps them to QoS parameters, and then classifies the cloud providers that best satisfy these requirements; 2) the second uses the list of providers selected in stage 1 to further select the appropriate cloud services that fulfill the overall Big data task requirements. We extend the Analytic Hierarchy Process (AHP)-based ranking mechanism to cope with the problem of multi-criteria selection. We conduct a set of experiments using a simulated cloud setup to evaluate our selection scheme as well as the extended AHP against other selection techniques. The results show that our selection approach outperforms the others and efficiently selects the appropriate cloud services that guarantee the Big data task's QoS requirements.
{"title":"Quality Profile-Based Cloud Service Selection for Fulfilling Big Data Processing Requirements","authors":"M. Serhani, Hadeel T. El Kassabi, Ikbal Taleb","doi":"10.1109/SC2.2017.30","DOIUrl":"https://doi.org/10.1109/SC2.2017.30","url":null,"abstract":"Big data has emerged as promising technology to handle huge and special data. Processing Big data involves selecting the appropriate services and resources thanks to the variety of services offered by different Cloud providers. Such selection is difficult, especially if a set of Big data requirements should be met. In this paper, we propose a dynamic cloud service selection scheme that assess Big data requirements, dynamically map these to the most available cloud services, and then recommend the best match services that fulfill different Big data processing requests. Our selection is conducted in two stages: 1) relies on a Big data task profile that efficiently capture Big data task's requirements and map them to QoS parameters, and then classify cloud providers that best satisfy these requirements, 2) uses the list of selected providers from stage 1 to further select the appropriate Cloud services to fulfill the overall Big Data task requirements. We extend the Analytic Hierarchy Process (AHP) based ranking mechanism to cope with the problem of multi-criteria selection. We conduct a set of experiments using simulated cloud setup to evaluate our selection scheme as well as the extended AHP against other selection techniques. The results show that our selection approach outperforms the others and select efficiently the appropriate cloud services that guarantee Big data task's QoS requirements.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131787766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ExpanStor is proposed to provide high security and reliability in multi-cloud storage. Compared with existing multi-cloud storage systems, ExpanStor has the following distinctive features and advantages. First, ExpanStor uses a client-server architecture to support multi-device, multi-user use cases, and the combination of a local database and a remote database for storing file metadata avoids a single point of failure. Second, ExpanStor supports LDPC codes to provide a high level of security and reliability with high efficiency. Third, a dynamic distributor is proposed to place data dynamically, so that higher reliability and a more even distribution can be achieved as more Cloud Storage Providers become available.
{"title":"ExpanStor: Multiple Cloud Storage with Dynamic Data Distribution","authors":"Yongmei Wei, Fengmin Chen, D. C. Sheng","doi":"10.1109/SC2.2017.20","DOIUrl":"https://doi.org/10.1109/SC2.2017.20","url":null,"abstract":"ExpanStor is proposed to provide high security and reliability in multi-cloud storage. Compared with the existing multi-cloud storages, expanStor have the following distinctive features and advantages. Firstly, expanStor uses client-server architecture to realize multiple-devices, multiple-user use case. The combination of local database and remote database storing the metadata of the files avoids the single point of failure. Secondly, expanStor supports LDPC codes to provide high level of security and reliability with high efficiency. Thirdly, a dynamic distributor is proposed to place the data dynamically so that higher reliability and even distribution can be achieved when there are more available Cloud Storage Providers.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123494842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Hybrid Cloud, a cloud deployment model that integrates public and private clouds, has recently been gaining considerable attention. The key technology for building a Hybrid Cloud environment is the integration of different types of clouds. In this paper, the concept of a Hybrid CSB and a method for solving the cloud integration problem are proposed. We present a structure for recommending services based on patterns derived from users' requirements. In addition, we propose a method for integrating the recommended services using an integration script and a script generation process. The recommendation and integration method proposed in this study is expected to serve as an underlying technology that facilitates the transition to the Hybrid Cloud environment.
{"title":"Pattern-Based Cloud Service Recommendation and Integration for Hybrid Cloud","authors":"Joonseok Park, Dong Yun, Ungsoo Kim, Keunhyuk Yeom","doi":"10.1109/SC2.2017.40","DOIUrl":"https://doi.org/10.1109/SC2.2017.40","url":null,"abstract":"Hybrid Cloud, which is a cloud deployment model that integrates the public cloud and private cloud, is gaining considerable attention recently. The key technology for building a Hybrid Cloud environment involves the integration of different types of clouds. In this paper, the concept of Hybrid CSB and a method to solve the cloud integration problem are proposed. We present a structure for recommending a service based on a pattern according to users' requirements. In addition, we propose a method for integrating the recommended services with an integration script and a script generation process. The recommendation and integration method proposed in this study is expected to be used as an underlying technology to facilitate the transition to the Hybrid Cloud environment.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123699065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The number of crowd computing applications is growing rapidly; however, they currently lack unification and interoperability, as each platform usually has its own model of tasks, resources, and the computation process. We aim at developing a unifying ontology-driven platform that supports the deployment of various human-based applications. The key features of the proposed human-computer cloud platform are ontologies and digital contracts. Ontological mechanisms (the ability to precisely define semantics and use inference to find related terms) are employed to find and allocate the human resources required by software applications, whereas digital contracts are leveraged to achieve the predictability required by cloud users (application developers). The paper describes the major principles behind the platform.
{"title":"Platform-as-a-Service for Human-Based Applications: Ontology-Driven Approach","authors":"A. Smirnov, A. Ponomarev, T. Levashova, N. Shilov","doi":"10.1109/SC2.2017.31","DOIUrl":"https://doi.org/10.1109/SC2.2017.31","url":null,"abstract":"The number of crowd computing applications is rapidly growing; however, they currently lack unification and interoperability as each platform usually has its own model of tasks, resources and computation process. We aim at the development of a unifying ontology-driven platform that would support deployment of various human-based applications. Key features of the proposed human-computer cloud platform are ontologies and digital contracts. Ontological mechanisms (ability to precisely define semantics and use inference to find related terms) are employed to find and allocate human resources required by software applications. Whereas digital contracts are leveraged to achieve predictability required by cloud users (application developers). The paper describes major principles behind the platform.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117352779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Server clustering is a cost-effective solution for increasing service capacity and system reliability. It also offers greater scalability for handling the growing and huge volume of service demands. Nowadays, cloud platforms take advantage of virtualization technology and virtualize their physical hosts. In this study, we explore the issues involved in implementing server clusters based on virtual machines (VMs), including architectures and load distribution algorithms. We use Linux Virtual Server (LVS) to design several kinds of VM-based server clusters with different architectures, i.e., the Single VM Cluster (SVMC), Hierarchical Multiple VM Clusters (HVMC), and Distributed Multiple VM Clusters (MVMC). To provide better load balance among the real servers in a cluster, load distribution algorithms originally developed for server clusters should be redesigned or adapted to VM-based clusters. We therefore propose two load distribution algorithms, named Virtual Machine Least Connections (VMLC) and Virtual Machine Weighted Least Connections (VMWLC). These algorithms consider not only the server load but also the differences between physical machines (PMs) and VMs when balancing server loads. A practical implementation on Linux and experimental results show that VM clusters with the single architecture (SVMC) or the hierarchical architecture (HVMC) achieve significantly higher performance than the distributed VM cluster (MVMC), which consists of multiple VM clusters with a DNS to spread the load across them. The proposed load distribution algorithms outperform Weighted Least Connections (WLC), which does not distinguish PMs from VMs.
{"title":"Design and Implementation of Scalable and Load-Balanced Virtual Machine Clusters","authors":"Jia-Hong Chang, Hui-Sheng Cheng, Mei-Ling Chiang","doi":"10.1109/SC2.2017.14","DOIUrl":"https://doi.org/10.1109/SC2.2017.14","url":null,"abstract":"Server clustering is a cost-effective solution to increase the service capacity and system reliability. It also gives greater scalability for handling the growing and huge amount of service demands. Nowadays, cloud platforms take advantage of virtualization technology and make their actual hosts virtualized. In this study, we explore the issues of implementing server clusters based on virtual machines (VM), including architectures and load distribution algorithms. We utilize Linux Virtual Server (LVS) to design several kinds of VM-based server clusters with different architectures, i.e. Single VM Cluster (SVMC), Hierarchical Multiple VM Clusters (HVMC), and Distributed Multiple VM Clusters (MVMC). In order to provide better load balance among real servers in the cluster, load distribution algorithms originally developed for the server clusters should be redesigned or adapted to VM-based clusters. Therefore, we further propose two kinds of load distribution algorithms named Virtual Machine Least Connections (VMLC) and Virtual Machine Weighted Least Connections (VMWLC). These algorithms not only consider the server loading, but also take into account the difference between physical machines (PMs) and VMs to balance the server loads. Practical implementation on Linux and experimental results show that VM clusters with the single architecture (i.e. SVMC) or the hierarchical architecture (i.e. HVMC) obtain significantly higher performance than the distributed VM cluster (i.e. MVMC) that consists of multiple VM clusters with a DNS to spread the load to VM clusters. The proposed load distribution algorithms outperform the Weighted Least Connections (WLC) which does not distinguish PMs from VMs.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134561245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daily news published on the web can generally be classified into various categories, such as social, politics, entertainment, and so on. These classifications help users find the information they want; if the classification is wrong, users cannot accurately grasp the context. How to accurately classify daily news is therefore becoming an important issue. In this paper, we propose a method to enhance the effectiveness of news classification. We utilize the term frequencies appearing in a variety of classified historical news to train a weighting of each term for each category, and then classify test news based on these weightings. We propose a framework and an algorithm for training the term weightings. The training data, more than 3500 Chinese news articles, were collected from UDN and LTN, two major electronic news portals in Taiwan. Based on the weighting mechanism, we conduct experiments to evaluate the effectiveness of the algorithm. The test data are 170 Chinese news articles collected from Google. The results show that the traditional manual classification method has an error rate of up to 13%.
{"title":"Enhancing Classification Effectiveness of Chinese News Based on Term Frequency","authors":"Tzu-Yi Chan, Yue-Shan Chang","doi":"10.1109/SC2.2017.26","DOIUrl":"https://doi.org/10.1109/SC2.2017.26","url":null,"abstract":"For the daily news published on the web, in general, they can be classified into various categories, such as social, politics, entertainment, and so on. These classifications motivate users to watch the desired information. If the classification is wrong, user cannot catch accurately context. How to accurately classify the daily news is becoming an important issue. In this paper, we will propose a method to enhance the effectiveness of news classification. We will utilize the term frequency appeared in variety of classified historical news to training the weighting of each category of each term. And then classify the test news based on the weighting. We propose a framework and an algorithm to training the weighting of each term. The training data, which are over 3500 Chinese news, are collected from UDN and LTN, which are two major electrical news portals in Taiwan. Based on the weighting mechanism, we conduct some experiments to evaluate the effectiveness of the algorithm. The test data are 170 Chinese news, which are collected from Google. The result shows that the traditional manually classification method has up to 13% error classification.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133738468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensor networks (WSNs) play a significant role in monitoring physical or environmental conditions at different locations. A sensor node may die if it runs out of energy. Sensor nodes consume substantial energy in processing sensing data; however, few studies focus on this data processing, because the sensor nodes need to store the processing protocol in their ROM. A software-defined networking (SDN) architecture can solve many issues in WSNs, such as changes in network structure and the insertion of new network applications that need not be implemented at the time the sensor nodes are deployed. This paper proposes a new method to increase network performance. We design the flow table according to the limitations of WSN applications to ensure that all sensing data can meet the requirements of each application.
{"title":"An Energy-Efficient SDN-Based Data Collection Strategy for Wireless Sensor Networks","authors":"Wen-Hwa Liao, Ssu-Chi Kuai","doi":"10.1109/SC2.2017.21","DOIUrl":"https://doi.org/10.1109/SC2.2017.21","url":null,"abstract":"Wireless sensor networks (WSNs) play a significant role in monitoring the physical or environmental conditions at different locations. Each sensor node may die if it runs out of energy. The sensor nodes consume substantial energy in processing sensing data; however, few studies focus on the data processing because the sensor nodes need to store the process protocol in their ROMs. A software-defined networking (SDN) structure can solve many issues in WSNs, such as a change in network structure and the insertion of new network applications that do not need to be implemented at the time of deployment of sensor nodes. This paper proposes a new method to increase the network prohormones. We design the flow table according to the limitations of WSN applications to ensure that all sensing data can meet the requirements of each application.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"64 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133869474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}