
Latest Publications from the 2016 International Computer Symposium (ICS)

Using Reinforcement Learning to Achieve Two Wheeled Self Balancing Control
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0029
Ching-Lung Chang, Shih-Yu Chang
The non-linear, unstable dynamics of the two-wheeled self-balancing robot have made it a popular research subject within the past decade. This paper outlines the design of a two-wheeled robot with a self-balancing control system based on Reinforcement Learning. The robot was built on the BeagleBone Black platform and, along with the motor, was equipped with an accelerometer and a gyroscope. Using the Q-Learning method, the motor command is adjusted according to the tilt angle and angular velocity at each given time to return the robot to balance. The experimental results show that, with this reinforcement learning method, the robot can quickly return to a balanced state from any tilt angle.
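The Q-learning loop described above can be sketched as follows. This is an illustrative toy model, not the authors' implementation: the simplified pendulum dynamics, discretization bins, reward shaping, and hyperparameters are all assumptions.

```python
import random

# Toy sketch: tabular Q-learning that stabilizes a linearized inverted pendulum.
# State = (binned tilt angle, binned angular velocity); actions = motor torques.
N_ANGLE, N_VEL, N_ACT = 9, 9, 3
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1      # learning rate, discount, exploration
TORQUES = (-1.0, 0.0, 1.0)              # motor commands: reverse, idle, forward
DT, G = 0.02, 9.8

def bin_of(x, lo, hi, n):
    """Clamp x into [lo, hi] and map it to one of n discrete bins."""
    x = max(lo, min(hi, x))
    return min(n - 1, int((x - lo) / (hi - lo) * n))

def step(angle, vel, torque):
    """One Euler step of a toy pendulum: gravity tips it over, torque corrects."""
    acc = G * angle - 5.0 * torque      # linearized around the upright position
    vel += acc * DT
    angle += vel * DT
    return angle, vel

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[[0.0] * N_ACT for _ in range(N_VEL)] for _ in range(N_ANGLE)]
    for _ in range(episodes):
        angle, vel = rng.uniform(-0.1, 0.1), 0.0
        for _ in range(200):
            s = (bin_of(angle, -0.5, 0.5, N_ANGLE), bin_of(vel, -2, 2, N_VEL))
            if rng.random() < EPS:      # epsilon-greedy action selection
                a = rng.randrange(N_ACT)
            else:
                a = max(range(N_ACT), key=lambda k: Q[s[0]][s[1]][k])
            angle, vel = step(angle, vel, TORQUES[a])
            done = abs(angle) > 0.5     # fell over
            r = -1.0 if done else 1.0 - abs(angle)
            s2 = (bin_of(angle, -0.5, 0.5, N_ANGLE), bin_of(vel, -2, 2, N_VEL))
            best_next = 0.0 if done else max(Q[s2[0]][s2[1]])
            Q[s[0]][s[1]][a] += ALPHA * (r + GAMMA * best_next - Q[s[0]][s[1]][a])
            if done:
                break
    return Q
```

On the real robot the state would come from the fused accelerometer/gyroscope reading rather than a simulated step function.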
Citations: 5
On the Complexities of the Incremental Bottleneck and Bottleneck Terminal Steiner Tree Problems
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0010
Yen Hung Chen
Given a graph G = (V, E) with non-negative edge lengths and a subset R ⊂ V, a Steiner tree for R in G is an acyclic subgraph of G interconnecting all vertices in R, and a terminal Steiner tree is defined to be a Steiner tree in G with all the vertices of R as its leaves. A bottleneck edge of a Steiner tree is an edge of largest length in the tree. The bottleneck Steiner tree problem (BSTP) (respectively, the bottleneck terminal Steiner tree problem (BTSTP)) is to find a Steiner tree (respectively, a terminal Steiner tree) for R in G that minimizes the length of its bottleneck edge. For any tree T, lenb(T) denotes the length of a bottleneck edge in T. Let Topt(G, BSTP) and Topt(G, BTSTP) denote the optimal solutions for the BSTP and the BTSTP in G, respectively. Given a graph G = (V, E) with non-negative edge lengths, a subset E0 ⊂ E, a number h = |E \ E0|, and a subset R ⊂ V, the incremental bottleneck Steiner tree problem (respectively, the incremental bottleneck terminal Steiner tree problem) is to find a sequence of edge sets {E0 ⊂ E1 ⊂ E2 ⊂ … ⊂ Eh = E} with |Ei \ Ei-1| = 1 such that Σ_{i=1}^{h} lenb(Topt(Gi, BSTP)) (respectively, Σ_{i=1}^{h} lenb(Topt(Gi, BTSTP))) is minimized, where Gi = (V, Ei). In this paper, we prove that the incremental bottleneck Steiner tree problem is NP-hard. Then we show that there is no polynomial time approximation algorithm achieving a performance ratio of (1-ε) × ln |R|, 0
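As background for the objective above, the plain (non-incremental) BSTP can be solved by a Kruskal-style sweep: add edges in increasing order of length and stop as soon as the terminal set R becomes connected; the last length considered is the optimal bottleneck. A minimal sketch (edge-list representation and function name are our own):

```python
def bottleneck_steiner(n, edges, R):
    """Minimum possible bottleneck-edge length lenb of a Steiner tree for
    terminal set R in a graph with vertices 0..n-1 and edges (u, v, length).
    Kruskal-style sweep with union-find over edges sorted by length."""
    R = list(R)
    if len(R) <= 1:
        return 0                      # a single terminal needs no edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
        if all(find(r) == find(R[0]) for r in R):
            return w                  # first length at which R is connected
    return None                       # terminals not all connected in G
```

The hardness results in the paper concern the incremental variant (and the terminal variant), not this basic sweep.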
Citations: 0
Enhanced Gaussian Mixture Model for Indoor Positioning Accuracy
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0099
C. Tseng, Jing-Shyang Yen
The Received Signal Strength Indicator (RSSI) is used in indoor positioning to estimate the distance between an object and a base station. However, acquiring accurate RSSI values is challenging because wireless interference factors, such as multipath fading, make the RSSI values of the same object fluctuate over time. Therefore, instead of using a single reading, RSSI acquisition collects a set of values from which the most representative RSSI is derived. For this purpose, we propose an Enhanced Gaussian Mixture Model (EGMM) to derive a more precise RSSI and thus improve indoor positioning accuracy. EGMM extends the Gaussian Mixture Model (GMM) by applying the Akaike information criterion (AIC) to determine the best number of components K, so that the GMM divides the RSSI values into K sets representing signals from different paths. EGMM then identifies the most appropriate set of RSSI values from which to derive a more precise RSSI, improving the accuracy of indoor positioning. Our EGMM solution performs well in an open indoor space. The experiment was conducted with iBeacon devices; the average error distance of EGMM is about 64% of that produced by existing Gaussian filtering. The average positioning error of EGMM is about 0.48 meter, which is adequate for indoor positioning.
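The AIC-driven choice of K can be illustrated with a small self-contained sketch. This is a toy 1-D EM fit of our own, not the paper's EGMM; a K-component 1-D mixture is counted as having 3K - 1 free parameters (K means, K standard deviations, K - 1 weights).

```python
import math

def gmm_em(xs, k, iters=60):
    """Fit a k-component 1-D Gaussian mixture to xs by EM.
    Returns (log-likelihood, number of free parameters) for AIC scoring."""
    lo, hi = min(xs), max(xs)
    mus = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]  # spread-out init
    sigmas = [max(1e-3, (hi - lo) / (2 * k))] * k
    pis = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        resp, ll = [], 0.0
        for x in xs:                                          # E-step
            ps = [pis[j] / (sigmas[j] * math.sqrt(2 * math.pi))
                  * math.exp(-(x - mus[j]) ** 2 / (2 * sigmas[j] ** 2))
                  for j in range(k)]
            tot = sum(ps) or 1e-300
            resp.append([p / tot for p in ps])
            ll += math.log(tot)
        for j in range(k):                                    # M-step
            nj = sum(r[j] for r in resp) or 1e-12
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(1e-3, math.sqrt(var))
            pis[j] = nj / len(xs)
    return ll, 3 * k - 1

def best_k_by_aic(xs, kmax=4):
    """Pick the K minimizing AIC = 2p - 2 ln L (smaller is better)."""
    best, best_aic = 1, float("inf")
    for k in range(1, kmax + 1):
        ll, p = gmm_em(xs, k)
        aic = 2 * p - 2 * ll
        if aic < best_aic:
            best, best_aic = k, aic
    return best
```

On RSSI samples drawn from two propagation paths, the AIC penalty is easily outweighed by the likelihood gain of a second component, so K = 2 is selected.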
Citations: 3
Linear Epitope Prediction for Grouper Iridovirus Antigens
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0019
Tao-Chuan Shih, Tun-Wen Pai, Li-Ping Ho, H. Chou
The main goal of this study is to predict common and exclusive linear epitopes from two different grouper iridovirus protein sequences and apply them to vaccine design. The prediction mechanism integrates previously developed linear/conformational epitope prediction systems, a structural prediction system (Phyre2), and sequence-structure alignment tools. The two predicted iridovirus protein structures were aligned by a structure alignment system to identify virtual structural variations. If a predicted linear epitope showed a variant geometrical conformation and was located on the protein surface, it was taken as an exclusive epitope candidate. Conversely, conserved linear epitopes located on the surface with high antigenicity can be considered common linear epitopes for vaccine design. By combining sequence and structural alignment results with surface structure validation, two conserved segments and one partially conserved segment were found suitable for design as linear epitopes for the two iridoviruses. In addition, each grouper iridovirus sequence possesses one unique segment, which can be considered an exclusive linear epitope for that iridovirus. All of these predicted linear epitopes will be evaluated in suitable biological experiments for further verification.
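As a loose illustration of the sequence side of this pipeline (the structural alignment and surface/antigenicity checks are beyond a short sketch), shared and unique k-mers between two protein sequences give first-pass candidates for common and exclusive segments. The sequences, k-mer length, and function names below are our own assumptions:

```python
def common_segments(seq_a, seq_b, k=8):
    """k-mers present in both protein sequences -- candidate conserved
    segments for common linear epitopes (surface checks omitted)."""
    kmers_a = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
    kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    return sorted(kmers_a & kmers_b)

def exclusive_segments(seq_a, seq_b, k=8):
    """k-mers unique to seq_a relative to seq_b -- candidate segments
    for epitopes exclusive to the first virus."""
    kmers_a = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
    kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    return sorted(kmers_a - kmers_b)
```

Real epitope prediction would then filter these candidates by predicted surface exposure and antigenicity, as the abstract describes.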
Citations: 0
MITC Viz: Visual Analytics for Man-in-the-Cloud Threats Awareness
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0068
Chiun-How Kao, Jyun-Han Dai, R. Ko, Yu-Ting Kuang, Chi-Ping Lai, Ching-Hao Mao
Several common file synchronization services (such as Google Drive and Dropbox) can be employed as infrastructure for command and control (C&C) and data exfiltration, in so-called Man-in-the-Cloud (MITC) attacks. Because MITC attacks use no exploits, they are not easily detected by common security measures, and simply re-configuring these services can turn them into an attack tool. In this study, we propose an Interactive Visualization Threats Explorer that gives analysts intuitive awareness of potential cloud threats hiding in the data and significantly improves analysis effectiveness. Drill-down and quick-response visualization analytics provide cloud administrators with full and deep views of the relations between cloud resources and user behavior. In addition, a Collaborative Risk Estimator that considers users' social and business-workflow behavior enhances analysis performance. By learning from each user's past behavior and social network relations, the behavior models are rolled up to continuously adapt to changes in the enterprise environment. Analysts can quickly recognize the locality of high-risk access behavior from abnormal cloud resource accesses and drill down into the unusual patterns and access behavior. To illustrate the effectiveness of this approach, we present example explorations on two real-world data sets for detecting and understanding potential Advanced Persistent Threats in progress.
Citations: 4
Resource Trade-Offs for Java Applications in the Cloud
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0113
K. Chow, Pranita Maldikar, Khun Ban
Java applications form an important class of applications running in the data center and in the cloud. They may perform better when more memory can be used for the heap, as the time spent in garbage collection is reduced. However, when ample CPU is available and memory is tight, such Java applications may do well with a smaller heap, since the extra CPU can absorb the cost of more frequent garbage collections. In the cloud, the amount of available resources may vary over time. This paper investigates an approach based on the statistical design of experiments and performance data analytics for making trade-offs between CPU and memory resources to increase data center efficiency in the cloud.
Citations: 0
Rescuing Algorithm for Link-List Wireless Network with Wormhole Mechanism
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0101
J. Chiu, Hong-Wei Chiu, Ting-Tung Tsou
As the concept of the smart city has gradually prevailed in recent years, wireless routing algorithms for Low-Rate Wireless Personal Area Networks have become an important research topic for data-collecting networks, which are used for managing and collecting sensing data in the city. Chiu and Chen proposed an adaptive link-list routing algorithm with a wormhole mechanism. The algorithm is a low-collision wireless protocol suitable for data collection systems such as intelligent street lighting, smart meters, and smart appliances. However, unstable states may occur in that algorithm due to environmental interference and inappropriate aspects of the protocol design. In this paper, we propose a rescuing algorithm for the link-list wireless network with a wormhole mechanism to overcome several problems in the link-list network: the node-loss problem, the path-optimized construction problem, and the acknowledgment-packet conflict problem. We use the ns-3 network simulator to verify the efficacy of the rescuing algorithm. The results prove that the rescuing algorithm can solve the routing problem of the link-list network and help the link-list network transfer data quickly, and show that a link-list network with the rescuing algorithm can build a stable and rapid data-collecting network system.
Citations: 0
An Approach to Text Steganography Based on Search in Internet
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0052
Shangwei Shi, Yining Qi, Yongfeng Huang
With the widespread use of the Internet, and of search engines in particular, people can now easily browse the Web through networks of URLs. In this study, the features of webpages on the Internet are analyzed carefully and a search-based text steganography model is proposed. The model is based on the hypothesis that, given the huge amount of data on the Internet, a secret-message sender can find a webpage that already contains all the information needed to describe the secret message, so that the sender no longer needs to modify the webpage used as cover data. But is this hypothesis reasonable? This paper proves that such an ideal webpage exists under certain assumptions, from the perspectives of information theory and practice respectively. Meanwhile, a steganography framework based on searching for a webpage containing the secret message is designed. Experimental results show that the proposed method provides a high embedding capacity and good imperceptibility.
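The core idea of cover selection without cover modification can be illustrated with a minimal sketch: instead of altering the page, the sender transmits only a page reference plus the positions of the message words inside it. This word-index scheme is our own simplification of the idea, not the paper's framework:

```python
def embed(cover_words, message_words):
    """Map each message word to an index in the (unmodified) cover text.
    The sender transmits only the page reference plus these indices."""
    indices = []
    for word in message_words:
        try:
            indices.append(cover_words.index(word))
        except ValueError:
            return None   # this page cannot describe the message; search on
    return indices

def extract(cover_words, indices):
    """The receiver fetches the same page and reads the words back out."""
    return [cover_words[i] for i in indices]
```

A real search-based system would rank candidate pages (e.g. via a search engine) until `embed` succeeds, which is exactly where the paper's existence hypothesis matters.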
Citations: 21
Resource Allocation Algorithms for LTE over Wi-Fi Spectrum
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0144
Li-Ju Chen, Guan-Wei Chang, Hung-Ta Pai, Lei Yen, Hsin-Piao Lin
As 4G services became more widely available, the number of 4G users increased greatly, and base stations began to suffer under the huge load. Fortunately, 3GPP proposed Carrier Aggregation (CA), a method to aggregate component carriers (CCs) and increase the bandwidth to 100 MHz. In order to avoid low efficiency for cell-edge users, we position Wi-Fi stations around the cell edges and utilize the 5 GHz unlicensed band along with CA to achieve a high transmission rate. However, how to allocate these resources to user equipment (UE) becomes a significant issue. Given the facts above, the goal of this work is to provide a solution for network performance optimization. To achieve this, we design a smart resource allocation scheme with the help of optimization schemes such as the Genetic Algorithm (GA), under different frequency bands (intra- or inter-band CA) and in the Orthogonal Frequency Division Multiple Access (OFDMA) system for the downlink (DL) of Long-Term Evolution-Advanced (LTE-A). In both algorithms, the simulation is conducted every transmission time interval (TTI) and lasts for 100 TTIs. The Improved GA enhances convergence by 20% over the baseline GA.
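A GA for this kind of allocation can be sketched as follows, assuming a toy sum-rate fitness over a hypothetical `rates[u][b]` matrix (achievable rate of UE u on resource block b). The chromosome encoding, operators, and parameters are illustrative and not the paper's scheme, which also spans CA bands and per-TTI scheduling:

```python
import random

def ga_allocate(rates, n_blocks, pop=30, gens=40, seed=1):
    """Toy GA: a chromosome assigns each resource block to one UE;
    fitness is total downlink throughput (fairness ignored here)."""
    rng = random.Random(seed)
    n_ues = len(rates)

    def fitness(chrom):
        return sum(rates[u][b] for b, u in enumerate(chrom))

    # Random initial population of block-to-UE assignments.
    popn = [[rng.randrange(n_ues) for _ in range(n_blocks)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[:pop // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)   # parents
            cut = rng.randrange(1, n_blocks)
            child = a[:cut] + b[cut:]     # one-point crossover
            if rng.random() < 0.2:        # mutation: reassign one block
                child[rng.randrange(n_blocks)] = rng.randrange(n_ues)
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)
```

In the paper's setting the fitness would additionally reflect CA constraints and per-UE channel quality per TTI; the skeleton (selection, crossover, mutation) stays the same.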
Citations: 3
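The abstract above describes a GA-based allocation scheme only at a high level. As a hedged illustration of the general idea (not the paper's actual model), the sketch below evolves an assignment of UEs to component carriers, with a toy fitness that rewards per-UE rate and penalizes overloaded carriers; the rate table, penalty, and all parameters are invented for demonstration.

```python
# Toy GA sketch for CC-to-UE allocation. Each chromosome assigns one
# component carrier (CC) to each UE; fitness sums the per-UE rate,
# discounted by how many UEs share the same carrier. Illustrative only.
import random

random.seed(0)

N_UES, N_CCS = 8, 3
# Hypothetical achievable rate (Mbps) of each UE on each CC.
RATE = [[random.uniform(5, 50) for _ in range(N_CCS)] for _ in range(N_UES)]

def fitness(chrom):
    load = [chrom.count(cc) for cc in range(N_CCS)]
    return sum(RATE[ue][cc] / load[cc] for ue, cc in enumerate(chrom))

def evolve(pop_size=30, generations=40, p_mut=0.1):
    pop = [[random.randrange(N_CCS) for _ in range(N_UES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_UES)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_UES):            # per-gene mutation
                if random.random() < p_mut:
                    child[i] = random.randrange(N_CCS)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 1))
```

The paper's improved GA presumably refines steps such as selection or mutation to speed up convergence; this sketch shows only the baseline loop.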
Categorizing and Recommending API Usage Patterns Based on Degree Centralities and Pattern Distances
Pub Date : 2016-12-01 DOI: 10.1109/ICS.2016.0120
Shin-Jie Lee, Wu-Chen Su, C. Huang, Jie-Lin You
Although efforts have been made on discovering and searching API usage patterns, how to categorize and recommend follow-up API usage patterns is still largely unexplored. This paper advances the state of the art by proposing two methods for categorizing and recommending API usage patterns: first, categories of usage patterns are automatically identified by a proposed degree centrality-based clustering algorithm; second, follow-up usage patterns of an adopted pattern are recommended based on a proposed metric for measuring distances between patterns. In the experimental evaluations, the pattern categorization achieves a precision of 85.4% with a recall of 83%. The pattern recommendation correctly predicted the follow-up patterns actually used by the programmers approximately half the time.
Citations: 1
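The abstract above names degree centrality-based clustering but not its details. As a hedged sketch of the general approach (the distance metric and threshold are assumptions, not the paper's definitions), the snippet below links patterns whose Jaccard distance over API-call sets falls under a threshold, seeds each category with the remaining pattern of highest degree, and places that pattern's neighbours in the same category.

```python
# Sketch of degree-centrality-based categorization of API usage patterns.
# Jaccard distance over API-call sets stands in for the paper's pattern
# distance; the threshold 0.5 is an arbitrary illustrative choice.

def jaccard_distance(a, b):
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def categorize(patterns, threshold=0.5):
    n = len(patterns)
    # Patterns are adjacent when their distance is within the threshold.
    adj = {i: {j for j in range(n) if j != i
               and jaccard_distance(patterns[i], patterns[j]) <= threshold}
           for i in range(n)}
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # Seed each category with the remaining node of highest degree.
        seed = max(unassigned, key=lambda i: len(adj[i] & unassigned))
        members = {seed} | (adj[seed] & unassigned)
        clusters.append(sorted(members))
        unassigned -= members
    return clusters

patterns = [
    ["File.open", "File.read", "File.close"],
    ["File.open", "File.write", "File.close"],
    ["Socket.connect", "Socket.send", "Socket.close"],
]
print(categorize(patterns))  # → [[0, 1], [2]]
```

The two file-handling patterns share two of four calls (distance 0.5) and fall into one category, while the socket pattern, sharing nothing, forms its own; the same pairwise distances could also drive the paper's follow-up recommendation step.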