Resource Allocation Algorithms for LTE over Wi-Fi Spectrum
Li-Ju Chen, Guan-Wei Chang, Hung-Ta Pai, Lei Yen, Hsin-Piao Lin
DOI: 10.1109/ICS.2016.0144
As 4G services became more widely available, the number of 4G users grew greatly and base stations began to suffer under heavy load. Fortunately, 3GPP proposed Carrier Aggregation (CA), a method that aggregates component carriers (CCs) to increase the bandwidth to 100 MHz. To avoid low efficiency for cell-edge users, we position Wi-Fi stations around the cell edges and use the 5 GHz unlicensed band together with CA to achieve a high transmission rate. How to allocate these resources to user equipment (UE), however, becomes a significant issue. Given the above, the goal of this work is to optimize network performance. To achieve this, we design a smart resource allocation scheme built on optimization methods such as the Genetic Algorithm (GA), covering different frequency bands (intra- or inter-band CA) in the Orthogonal Frequency Division Multiple Access (OFDMA) downlink (DL) of Long-Term Evolution-Advanced (LTE-A). We evaluate two algorithms, GA and an Improved GA; in both, the simulation is conducted every transmission time interval (TTI) and lasts for 100 TTIs. The Improved GA converges roughly 20% faster than the baseline GA.
{"title":"Resource Allocation Algorithms for LTE over Wi-Fi Spectrum","authors":"Li-Ju Chen, Guan-Wei Chang, Hung-Ta Pai, Lei Yen, Hsin-Piao Lin","doi":"10.1109/ICS.2016.0144","DOIUrl":"https://doi.org/10.1109/ICS.2016.0144","url":null,"abstract":"As 4G services became more widely available, the number of 4G users has increased greatly, and the services began to suffer from the huge loads for base stations. Fortunately, 3GPP proposed Carrier Aggregation (CA), a method to aggregate component carriers (CCs) and increase the bandwidth to 100 MHz. In order to avoid low efficiency for a cell edge user, we position Wi-Fi stations around the cell edges and utilize the 5 GHz unlicensed band along with CA to achieve a high transmission rate. However, the method for allocating these resources to user equipment (UE) became a significant issue. Given the facts above, the goal of this thesis is to provide a solution for network performance optimization. To achieve this, we design a smart resource allocation scheme with the help of optimization schemes, such as Genetic Algorithm (GA), under different frequency bands (intra-or inter-band CA) and in the Orthogonal Frequency Division Multiple Access (OFDMA) system for the downlink (DL) of Long-Term Evolution-Advanced (LTE-A). In these two algorithms, the simulation is conducted every transmission time interval (TTI) and lasts for 100 TTIs. Improved GA can enhance convergence by 20% over GA, the Improved GA base.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114005002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Categorizing and Recommending API Usage Patterns Based on Degree Centralities and Pattern Distances
Shin-Jie Lee, Wu-Chen Su, C. Huang, Jie-Lin You
DOI: 10.1109/ICS.2016.0120

Although efforts have been made to discover and search API usage patterns, how to categorize them and recommend follow-up patterns remains largely unexplored. This paper advances the state of the art with two methods: first, categories of usage patterns are identified automatically by a proposed degree centrality-based clustering algorithm; second, follow-up usage patterns of an adopted pattern are recommended based on a proposed metric of the distance between patterns. In the experimental evaluation, pattern categorization achieved a precision of 85.4% at a recall of 83%, and pattern recommendation correctly predicted the follow-up patterns actually used by programmers about half of the time.
{"title":"Categorizing and Recommending API Usage Patterns Based on Degree Centralities and Pattern Distances","authors":"Shin-Jie Lee, Wu-Chen Su, C. Huang, Jie-Lin You","doi":"10.1109/ICS.2016.0120","DOIUrl":"https://doi.org/10.1109/ICS.2016.0120","url":null,"abstract":"Although efforts have been made on discovering and searching API usage patterns, how to categorize and recommend follow-up API usage patterns is still largely unexplored. This paper advances the state-of-the-art by proposing two methods for categorizing and recommending API usage patterns: first, categories of the usage patterns are automatically identified based on a proposed degree centrality-based clustering algorithm, and second, follow-up usage patterns of an adopted pattern are recommended based on a proposed metric of measuring distances between patterns. In the experimental evaluations, the patterns categorization can achieve 85.4% precision rate with 83% recall rate. The patterns recommendation had approximately half a chance of correctly predicting the follow-up patterns that were actually used by the programmers.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122549285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Idempotent Task Cache System for Handling Intermediate Data Skew in MapReduce on Cloud Computing
Tzu-Chi Huang, Kuo-Chih Chu, Jia-Hui Lin, C. Shieh
DOI: 10.1109/ICS.2016.0111
MapReduce systems have gradually become a popular platform for developing cloud applications, with MapReduce as the de facto standard programming model. However, a MapReduce system may suffer from intermediate data skew, which degrades performance: input data is unpredictable, and the application's Map function may generate very different quantities of intermediate data depending on its algorithm. A MapReduce system can use the Idempotent Task Cache System (ITCS) proposed in this paper to handle intermediate data skew, avoiding its negative performance impact by using caches to skip the heavy workload of processing skewed intermediate data in certain Reduce tasks. In experiments with several popular applications, ITCS not only alleviates the performance penalties of intermediate data skew but also greatly outperforms a native MapReduce system running without it.
{"title":"Idempotent Task Cache System for Handling Intermediate Data Skew in MapReduce on Cloud Computing","authors":"Tzu-Chi Huang, Kuo-Chih Chu, Jia-Hui Lin, C. Shieh","doi":"10.1109/ICS.2016.0111","DOIUrl":"https://doi.org/10.1109/ICS.2016.0111","url":null,"abstract":"A MapReduce system gradually becomes a popular platform for developing cloud applications while MapReduce is the de facto standard programming model of the applications. However, a MapReduce system may suffer intermediate data skew to degrade performances because input data is unpredictable and the Map function of the application may generate different quantities of intermediate data according to the application algorithm. A MapReduce system can use the Idempotent Task Cache System (ITCS) proposed in this paper to handle intermediate data skew. A MapReduce system can avoid negative performance impacts of intermediate data skew with ITCS by using caches to skip the high workload of processing skewed intermediate data in certain Reduce tasks. In experiments, a MapReduce system is tested with several popular applications to prove that ITCS not only alleviates performance penalties when intermediate data skew happens, but also greatly outperforms native MapReduce systems without any help of ITCS.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122730651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MITC Viz: Visual Analytics for Man-in-the-Cloud Threats Awareness
Chiun-How Kao, Jyun-Han Dai, R. Ko, Yu-Ting Kuang, Chi-Ping Lai, Ching-Hao Mao
DOI: 10.1109/ICS.2016.0068
Common file synchronization services (such as Google Drive and Dropbox) can be abused as infrastructure for command and control (C&C) and data exfiltration, in so-called Man-in-the-Cloud (MITC) attacks. Because MITC attacks require no exploits and merely re-configure these services into attack tools, they are not easily detected by common security measures. In this study, we propose an Interactive Visualization Threats Explorer that lets analysts intuitively become aware of potential cloud threats hidden in the data, significantly improving analysis effectiveness. Drill-down and quick-response visual analytics give cloud administrators full, deep views of the relationships between cloud resources and user behavior. In addition, a Collaborative Risk Estimator that considers users' social and business-workflow behavior further improves analysis performance: by learning from each user's past behavior and social-network relations, its behavior models are continuously rolled up to adapt to changes in the enterprise environment. Analysts can quickly spot high-risk access behavior among abnormal cloud resource accesses and drill down into the unusual patterns. To illustrate the effectiveness of this approach, we present example explorations on two real-world data sets for detecting and understanding potential Advanced Persistent Threats in progress.
{"title":"MITC Viz: Visual Analytics for Man-in-the-Cloud Threats Awareness","authors":"Chiun-How Kao, Jyun-Han Dai, R. Ko, Yu-Ting Kuang, Chi-Ping Lai, Ching-Hao Mao","doi":"10.1109/ICS.2016.0068","DOIUrl":"https://doi.org/10.1109/ICS.2016.0068","url":null,"abstract":"Several common file synchronization services (such as GoogleDrive, Dropbox and so on) are employed as infrastructure for being used by command and control(C&C) and data exfiltration, saying Man-in-the-Cloud (MITC) attacks. MITC is not easily detected by common security measures result in without using any exploits, and re-configuration of these services can easily turn them into an attack tool. In this study, we propose Interactive Visualization Threats Explorer that can be with intuition to aware the potential cloud threats hiding in data and eventually improve the analyzing effectiveness significantly. Drill-down and quick response visualization analytics provides cloud administrators full and deep views between cloud resources and users behavior. In addition, Collaborative Risk Estimator which considers users social and business workflow behavior enhance analysis performance. By learning from past behavior of an individual user and social network relations, rolling up behavior models to continue adapt enterprise environment changes. Analyst can quickly aware high risk access behavior locality from abnormal cloud resource access and drill-down the unusual patterns and access behavior. To illustrate the effectiveness of this approach, we present example explorations on two real-world data sets for the detection and understanding of potential Advanced Persistent Threats in progress.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123978769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constructing ECA Rule for IoT Application through a Novel S2RG Process: The Exemplary ECA Rules for Smarter Energy Applications
Yu-Tso Chen, Ching-Chung Chen, Hao-Yun Chang, Hsin-Shan Lin, Hsuan-Ting Chang
DOI: 10.1109/ICS.2016.0114

As the development of the Internet of Things (IoT) receives increasing emphasis, how to combine interconnected sensors and devices with situation-aware systems to smarten living applications has attracted great attention. In environments of multiple connected units, an event-condition-action (ECA) scheme is frequently used to provide automatic operation according to defined rules. Prior work has often adopted the ECA approach in IC design but rarely applied it to human-centered IoT applications; one of the main obstacles is the difficulty of deriving ECA rules directly from human-centered application scenarios. This paper proposes a scenario-triggered, state-based rule generation (S2RG) process that offers a complete yet simplified way to construct ECA rules for human-centered IoT applications. Using the S2RG process, the paper also demonstrates ECA rules for two exemplary power-saving scenarios in smarter energy applications.
{"title":"Constructing ECA Rule for IoT Application through a Novel S2RG Process: The Exemplary ECA Rules for Smarter Energy Applications","authors":"Yu-Tso Chen, Ching-Chung Chen, Hao-Yun Chang, Hsin-Shan Lin, Hsuan-Ting Chang","doi":"10.1109/ICS.2016.0114","DOIUrl":"https://doi.org/10.1109/ICS.2016.0114","url":null,"abstract":"As the development of Internet of Things (IoT) is increasingly emphasized, how to adopt interconnected sensors and devices in conjunction with situation-awareness based system to smarten living applications has paid great attentions. On a multiple-units connected environment, a scheme of event-condition-action (ECA) is frequently used to serve automatic operation functions in the light of defined rules. According to the previous works, the ECA approach was often adopted in IC designs but less applied in advancing human-centered IoT applications, one of the main problems is the difficulty in directly generating ECA rules from human-centered application scenarios. This paper proposes a scenario-triggered, state-based rule generation (shortly S2RG) process which enables a complete but simplified way to conduct ECA rule for human-centered IoT applications. With the operation of the presented S2RG process, this paper also demonstrates ECA rules for two exemplary power saving scenarios for smarter energy applications.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129533236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rescuing Algorithm for Link-List Wireless Network with Wormhole Mechanism
J. Chiu, Hong-Wei Chiu, Ting-Tung Tsou
DOI: 10.1109/ICS.2016.0101

As the concept of the smart city has gradually taken hold in recent years, wireless routing for Low-Rate Wireless Personal Area Networks has become an important research topic for the data collection networks that manage and gather sensing data across a city. Chiu and Chen proposed an adaptive link-list routing algorithm with a wormhole mechanism: a low-collision wireless protocol suited to data collection systems such as intelligent street lighting, smart meters, and smart appliances. However, that algorithm can enter unstable states due to environmental interference and some inappropriate aspects of the protocol design. In this paper, we propose a rescuing algorithm for the link-list wireless network with the wormhole mechanism that overcomes several problems of the link-list network: node loss, path-optimized construction, and acknowledgment packet collisions. We use the ns-3 network simulator to verify the efficacy of the rescuing algorithm. The results show that it solves the routing problems of the link-list network and helps the network transfer data quickly, yielding a stable and fast data collection network system.
{"title":"Rescuing Algorithm for Link-List Wireless Network with Wormhole Mechanism","authors":"J. Chiu, Hong-Wei Chiu, Ting-Tung Tsou","doi":"10.1109/ICS.2016.0101","DOIUrl":"https://doi.org/10.1109/ICS.2016.0101","url":null,"abstract":"Due to the concept of smart city being gradually prevailed in recent years, the wireless routing algorithm for Low-Rate Wireless Personal Area Networks is an important research for data collecting network, which is used for managing and collecting sensing data in the city. Chiu and Chen proposed an adaptive link-list routing algorithm with wormhole mechanism. The algorithm is a low collision wireless protocol, which is suitable for data collection systems such as intelligent street lighting, smart meters and smart appliances. In the algorithm some unstable status may be occurred due to the environmental interference and the inappropriate design of the protocol. In this paper, we proposed a rescuing algorithm for link-list wireless network with wormhole mechanism to overcome some problems in the link-list network, such as the node losing problem, the path-optimized construction problem and the acknowledge packet confliction problem. We use the network simulation-3 to verify the efficacy of the rescuing algorithm. The results prove that the recuing algorithm can solve the routing problem of the link-list network and help the link-list network to transfer the data quickly. The results show that the link-list network with the rescuing algorithm can build a stable and rapid data collecting network system.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122262719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource Trade-Offs for Java Applications in the Cloud
K. Chow, Pranita Maldikar, Khun Ban
DOI: 10.1109/ICS.2016.0113

Java applications form an important class of workloads in the data center and in the cloud. They may perform better when more memory is available for the heap, since less time is spent in garbage collection. Conversely, when ample CPU is available but memory is tight, such applications may do well with a smaller heap, because the extra CPU can absorb the cost of more frequent garbage collections. In the cloud, the amount of available resources may vary from time to time. This paper investigates an approach based on statistical design of experiments and performance data analytics to make CPU-memory trade-offs that increase datacenter efficiency in the cloud.
{"title":"Resource Trade-Offs for Java Applications in the Cloud","authors":"K. Chow, Pranita Maldikar, Khun Ban","doi":"10.1109/ICS.2016.0113","DOIUrl":"https://doi.org/10.1109/ICS.2016.0113","url":null,"abstract":"Java applications form an important class of applications running in the data center and in the cloud. They may perform better when more memory can be used in the heap, as the time spent in garbage collections is reduced. However, when ample CPU is available and memory is tight, such Java applications may do well with a smaller heap as it can absorb the cost of more garbage collections. In the cloud, the amount of resources available may vary from time to time. This paper investigates an approach based on the statistical design of experiments and performance data analytics to make resource trade-offs, between CPU and memory, to increase datacenter efficiency in the cloud.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124034988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linear Epitope Prediction for Grouper Iridovirus Antigens
Tao-Chuan Shih, Tun-Wen Pai, Li-Ping Ho, H. Chou
DOI: 10.1109/ICS.2016.0019

The main goal of this study is to predict common and exclusive linear epitopes from two different grouper iridovirus protein sequences and to apply them to vaccine design. The prediction mechanism integrates previously developed linear/conformational epitope prediction systems, the structure prediction system Phyre2, and sequence-structure alignment tools. The two predicted iridovirus protein structures were aligned by a structure alignment system to identify virtual structural variations. Predicted linear epitopes that show variant geometrical conformations and are located on the protein surface can be taken as exclusive epitope candidates; conversely, conserved linear epitopes located on the surface with high antigenicity can be considered common linear epitopes for vaccine design. By combining the sequence and structural alignment results with surface structure validation, two conserved segments and one partially conserved segment were found suitable for design as linear epitopes shared by the two iridoviruses. In addition, each grouper iridovirus sequence possesses one unique segment that can serve as an exclusive linear epitope for that virus. All predicted linear epitopes will be evaluated in suitable biological experiments for further verification.
{"title":"Linear Epitope Prediction for Grouper Iridovirus Antigens","authors":"Tao-Chuan Shih, Tun-Wen Pai, Li-Ping Ho, H. Chou","doi":"10.1109/ICS.2016.0019","DOIUrl":"https://doi.org/10.1109/ICS.2016.0019","url":null,"abstract":"The main goal of this study is to predict common and exclusive linear epitopes from two different grouper iridovirus protein sequences and launch their applications to vaccine design. The prediction mechanism is essentially based on integrating previously developed linear/conformational epitope prediction systems, the structural prediction system (Phyre2), and the sequencestructure alignment tools. The predicted two protein structures of iridovirus were aligned by a structure alignment system for identifying virtual structural variations. If the predicted linear epitopes appeared to be the variant geometrical conformations and located on protein surface, they could be assumed as exclusive epitope candidates. Inversely, the conserved linear epitopes located on surface with high antigenicity could be considered as common linear epitopes for vaccine design. Through combining both sequence and structural alignment results and surface structure validation, two conserved segments and one partial conserved segment were found suitable for designing as linear epitopes for the two different iridoviruses. In addition, both grouper iridovirus sequences possess one unique segment respectively, and which can be considered as exclusive liner epitope for each iridovirus. All these predicted linear epitopes would be evaluated by suitable biological experiments for further verification.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114907351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach to Text Steganography Based on Search in Internet
Shangwei Shi, Yining Qi, Yongfeng Huang
DOI: 10.1109/ICS.2016.0052

With the widespread use of the Internet, and of search engines in particular, people can now easily browse the Web through its network of URLs. In this study, we carefully analyze features of webpages on the Internet and propose a search-based text steganography model. The model rests on a hypothesis: given the huge amount of data on the Internet, a sender can find an existing webpage that already contains all the information needed to describe a secret message, so the sender no longer needs to modify a webpage to use it as cover data. Is this hypothesis reasonable? This paper proves that such an ideal webpage exists, under certain assumptions, from the perspectives of information theory and practice. We also design a steganography framework based on searching for a webpage containing the secret message to be sent. Experimental results show that the proposed method provides a high embedding capacity and good imperceptibility.
{"title":"An Approach to Text Steganography Based on Search in Internet","authors":"Shangwei Shi, Yining Qi, Yongfeng Huang","doi":"10.1109/ICS.2016.0052","DOIUrl":"https://doi.org/10.1109/ICS.2016.0052","url":null,"abstract":"With the widespread use of Internet especially search engine, people nowadays can easily browse the Web through network of URL. In this study, features of webpages on Internet have been analyzed carefully and a search-based text steganography model has been proposed. The model is based on a hypothesis that features of huge amount data on Internet can make secret message sender find a webpage that contains all the information to describe a secret message. So that the sender no longer needs to modify the webpage as cover data. But is the hypothesis reasonable? Therefore, this paper proofs mainly that such an ideal webpage will exist under some assumptions from the perspective of information theory and practice respectively. Meanwhile, the steganography framework based on searching a webpage containing the sent secret message is designed. Experi-mental results show that the proposed method provides a high embedding capacity and has good imperceptibility.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130338625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Complexities of the Incremental Bottleneck and Bottleneck Terminal Steiner Tree Problems
Yen Hung Chen
DOI: 10.1109/ICS.2016.0010

Given a graph G = (V, E) with non-negative edge lengths and a subset R ⊂ V, a Steiner tree for R in G is an acyclic subgraph of G interconnecting all vertices in R, and a terminal Steiner tree is a Steiner tree in G in which every vertex of R is a leaf. A bottleneck edge of a Steiner tree is an edge of largest length in the tree. The bottleneck Steiner tree problem (BSTP) (respectively, the bottleneck terminal Steiner tree problem (BTSTP)) asks for a Steiner tree (respectively, a terminal Steiner tree) for R in G that minimizes the length of its bottleneck edge. For any tree T, let lenb(T) denote the length of a bottleneck edge of T, and let Topt(G, BSTP) and Topt(G, BTSTP) denote optimal solutions of the BSTP and the BTSTP in G, respectively. Given a graph G = (V, E) with non-negative edge lengths, a subset E0 ⊂ E, the number h = |E \ E0|, and a subset R ⊂ V, the incremental bottleneck Steiner tree problem (respectively, the incremental bottleneck terminal Steiner tree problem) is to find a sequence of edge sets {E0 ⊂ E1 ⊂ E2 ⊂ … ⊂ Eh = E} with |Ei \ Ei-1| = 1 such that Σ_{i=1}^{h} lenb(Topt(Gi, BSTP)) (respectively, Σ_{i=1}^{h} lenb(Topt(Gi, BTSTP))) is minimized, where Gi = (V, Ei). In this paper, we prove that the incremental bottleneck Steiner tree problem is NP-hard. We then show that no polynomial-time approximation algorithm achieves a performance ratio of (1 - ε) × ln |R| for any 0 < ε < 1, under standard complexity-theoretic assumptions.
{"title":"On the Complexities of the Incremental Bottleneck and Bottleneck Terminal Steiner Tree Problems","authors":"Yen Hung Chen","doi":"10.1109/ICS.2016.0010","DOIUrl":"https://doi.org/10.1109/ICS.2016.0010","url":null,"abstract":"Given a graph G = (V, E) with non-negative edge lengths, a subset R ⊂ V, a Steiner tree for R in G is an acyclic subgraph of G interconnecting all vertices in R and a terminal Steiner tree is defined to be a Steiner tree in G with all the vertices of R as its leaves. A bottleneck edge of a Steiner tree is an edge with the largest length in the Steiner tree. The bottleneck Steiner tree problem (BSTP) (respectively, the bottleneck terminal Steiner tree problem (BTSTP)) is to find a Steiner tree (respectively, a terminal Steiner tree) for R in G with minimum length of a bottleneck edge. For any arbitrary tree T, lenb(T) denotes the length of a bottleneck edge in T. Let Topt(G, BSTP) and Topt(G, BTSTP) denote the optimal solutions for the BSTP and the BTSTP in G, respectively. Given a graph G = (V, E) with non-negative edge lengths, a subset E0 ⊂ E, a number h = |E E0|, and a subset R ⊂ V, the incremental bottleneck Steiner tree problem (respectively, the incremental bottleneck terminal Steiner tree problem) is to find a sequence of edge sets {E0 ⊂ E1 ⊂ E2 ⊂ … ⊂ Eh = E} with |EiEi-1| = 1 such that Σh i=1 lenb(Topt(Gi, BSTP)) (respectively, Σh i=1 lenb(Topt(Gi, BTSTP))) is minimized, where Gi = (V, Ei). In this paper, we prove that the incremental bottleneck Steiner tree problem is NP-hard. Then we show that there is no polynomial time approximation algorithm achieving a performance ratio of (1-ε) × ln |R|, 0","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132515575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}