
Latest publications: 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW)

Non-volatile Memory Driver for Applying Automated Tiered Storage with Fast Memory and Slow Flash Storage
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00029
Kazuichi Oe, T. Nanri
Automated tiered storage with fast memory and slow flash storage (ATSMF) is a hybrid storage system located between non-volatile memories (NVMs) and solid state drives (SSDs). ATSMF aims to reduce the average response time of input-output (IO) accesses by migrating concentrated IO access areas from SSD to NVM. However, the current ATSMF implementation cannot reduce the average response time sufficiently because of a bottleneck in the Linux brd driver, which is used as the NVM access driver: its response time is more than ten times the memory access latency. To reduce the average response time sufficiently, we developed a block-level driver for NVM called the "two-mode (2M) memory driver." The 2M memory driver provides both a map IO access mode and a direct IO access mode to reduce the response time while maintaining compatibility with the Linux device-mapper framework. The direct IO access mode has a drastically lower response time than the Linux brd driver because the ATSMF driver can invoke the IO access function of the 2M memory driver directly. Experimental results also indicate that ATSMF using the 2M memory driver achieves lower IO access response times than ATSMF using the Linux brd driver in most cases.
Citations: 2
Discovering New Malware Families Using a Linguistic-Based Macros Detection Method
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00085
Hiroya Miura, M. Mimura, Hidema Tanaka
In recent years, the number of targeted email attacks using malicious macros has been increasing. Malicious macros are malware written in Visual Basic for Applications. Since the source code of malicious macros is often highly obfuscated, it contains many obfuscated words such as random numbers or strings. Today, new malware families are frequently discovered. To detect unseen malicious macros, previous work proposed a method using natural language processing techniques. The proposed method separates a macro's source code into words and detects malicious macros based on word appearance frequency. This method can detect unseen malicious macros; however, the unseen macros might belong to known malware families, and the mechanism and effectiveness of the method are not clear. In particular, detecting new malware families is a top priority. Hence, this paper reveals the mechanism and effectiveness of this method for detecting new malware families. Our experiments show that using only malicious macros for feature extraction and consolidating obfuscated words into a single word were effective. We confirmed this method could discover 89% of new malware families.
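The word-frequency approach the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the tokenization heuristic, the `<OBF>` placeholder token, and the `score` function are assumptions for illustration only.

```python
import re
from collections import Counter

OBFUSCATED = "<OBF>"  # single placeholder token that consolidates obfuscated words

def tokenize(source: str, min_len: int = 20) -> list:
    """Split macro source into word tokens, consolidating obfuscated ones."""
    words = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\d+", source)
    out = []
    for w in words:
        # Heuristic: bare numbers and very long random-looking identifiers
        # are treated as obfuscated and mapped to one shared token.
        if w.isdigit() or len(w) >= min_len:
            out.append(OBFUSCATED)
        else:
            out.append(w.lower())
    return out

def score(source: str, malicious_vocab: Counter) -> float:
    """Fraction of tokens that also appear in known-malicious macro vocabulary."""
    tokens = tokenize(source)
    hits = sum(1 for t in tokens if malicious_vocab[t] > 0)
    return hits / max(len(tokens), 1)
```

A macro whose token distribution overlaps heavily with the malicious vocabulary would then score close to 1.0, while benign code scores near 0.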
Citations: 7
Improving Apache Spark's Cache Mechanism with LRC-Based Method Using Bloom Filter
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00096
Hideo Inagaki, Ryota Kawashima, H. Matsuo
Memory-and-Disk caching is a common caching mechanism for temporary output in Apache Spark. However, it causes performance degradation when memory usage reaches its limit because of Spark's LRU (Least Recently Used) based cache management. Existing studies have proposed replacing the LRU-based cache mechanism with an LRC (Least Reference Count) based one, since the reference count is a more accurate indicator of the likelihood of future data access. However, frequently used partitions cannot be determined precisely because Spark accesses all partitions for user-driven RDD operations, even partitions that do not contain the necessary data. In this paper, we propose a cache management method that keeps only the necessary partitions in memory by introducing a Bloom filter into the existing methods. The Bloom filter prevents unnecessary partitions from being processed, because each partition is checked for whether it contains the required data. Furthermore, frequently used partitions can be properly determined by measuring the reference counts of partitions. We implemented two architecture types, a driver-side Bloom filter and an executor-side Bloom filter, to find the optimal placement of the filter. Evaluation results showed that the execution time of the driver-side implementation was reduced by 89% in a filter-test benchmark compared with the LRC-based method.
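The combination the abstract describes could be sketched as follows: an LRC cache that evicts the partition with the fewest remaining references, fronted by a per-partition Bloom filter so lookups for keys a partition cannot contain are skipped. The `LRCCache` and `BloomFilter` classes are hypothetical illustrations, not the paper's Spark implementation.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k deterministic hash probes into an m-bit array."""
    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k, self.bits = m, k, 0

    def _probes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits |= 1 << p

    def might_contain(self, item) -> bool:
        # No false negatives; rare false positives are acceptable here.
        return all(self.bits >> p & 1 for p in self._probes(item))

class LRCCache:
    """Least-Reference-Count cache: evict the partition with the fewest
    remaining references; skip partitions whose Bloom filter says the
    requested key cannot be present."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.refcount = {}   # partition id -> remaining reference count
        self.filters = {}    # partition id -> BloomFilter over its keys

    def put(self, pid, keys, refs):
        if len(self.refcount) >= self.capacity:
            victim = min(self.refcount, key=self.refcount.get)  # lowest LRC
            del self.refcount[victim]
            del self.filters[victim]
        bf = BloomFilter()
        for k in keys:
            bf.add(k)
        self.refcount[pid] = refs
        self.filters[pid] = bf

    def lookup(self, pid, key) -> bool:
        # Only partitions that are cached AND may hold the key are processed.
        if pid not in self.filters or not self.filters[pid].might_contain(key):
            return False
        self.refcount[pid] = max(self.refcount[pid] - 1, 0)
        return True
```

The design choice mirrors the abstract's argument: the reference count decides *what* to evict, while the Bloom filter decides *whether* a partition needs to be touched at all.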
Citations: 0
Employing Genetic Algorithm and Particle Filtering as an Alternative for Indoor Device Positioning
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00009
Guilherme David Branco, J. Bordim
Radio signals may contribute to seamless interactions with physical objects, providing means to guide users from their position to a particular object within a room or store, for instance. To achieve this goal, a mechanism is needed that allows users to identify and locate objects of interest. Trilateration, fingerprinting, and particle filters are usually employed as mechanisms for position estimation in indoor environments. This paper explores the use of Genetic Algorithms (GA) combined with a Particle Filter (PF) as an alternative for estimating indoor object positions. The proposed scheme, named EPF (Evolutionary Particle Filter), has been compared to a plain particle filter and to trilateration. Simulation results show that the proposed EPF improves positioning accuracy by 1.5 cm (10%) and 30 cm (300%) over the particle filter and trilateration, respectively.
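The GA-plus-PF idea can be sketched as an evolutionary particle filter that scores candidate positions by how well they explain measured distances to known anchors, keeps an elite fraction, and refills the population with mutated copies of elites. This is a toy illustration; the `estimate_position` function, its parameters, and the elite/mutation scheme are assumptions, not the paper's EPF.

```python
import math
import random

def estimate_position(anchors, measured, n=500, iters=30, elite=0.2, sigma=0.05):
    """Estimate (x, y) from distances `measured` to the given `anchors`.

    Evolutionary particle filter sketch: particles with the lowest
    distance-residual error survive; the rest are replaced by Gaussian
    mutations of randomly chosen elites (the GA-style step).
    """
    particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n)]

    def error(p):
        # Sum of squared residuals between candidate-to-anchor distances
        # and the measured distances.
        return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, measured))

    for _ in range(iters):
        particles.sort(key=error)
        elites = particles[: int(n * elite)]
        particles = elites + [
            (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in (random.choice(elites) for _ in range(n - len(elites)))
        ]
    return min(particles, key=error)
```

With three non-collinear anchors the residual has a unique minimum at the true position, so the population collapses onto it after a few generations.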
Citations: 0
Towards Large Scale Packet Capture and Network Flow Analysis on Hadoop
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00043
M. Z. N. L. Saavedra, W. E. Yu
Network traffic continues to grow yearly at a compounded rate. However, network traffic is still being analyzed on vertically scaled machines, which do not scale as well as distributed computing platforms. Hadoop's horizontally scalable ecosystem provides a better environment for processing network captures stored in packet capture (PCAP) files. This paper proposes a framework called hcap for analyzing PCAPs on Hadoop, inspired by Réseaux IP Européens' (RIPE's) existing hadoop-pcap library but built completely from the ground up. The hcap framework improves several aspects of the hadoop-pcap library, namely protocol, error, and log handling. Results show that, while other methods still outperform hcap, it not only performs better than hadoop-pcap by 15% in scan queries and 18% in join queries, but is also more tolerant of broken PCAP entries, which reduces preprocessing time and data loss, while speeding up the conversion process used in other methods by 85%.
Citations: 4
Suppressing Chain Size of Blockchain-Based Information Sharing for Swarm Robotic Systems
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00102
Y. Nishida, Kosuke Kaneko, Subodh Sharma, K. Sakurai
Swarm robotics is a research field in which a group of autonomous robots executes tasks through cooperative work. Sharing information among robots is a central function for optimal performance of the system. Given that the swarm network structure constantly changes as robots move, it becomes difficult to guarantee information sharing among all swarm members. In this work, we propose an approach for information sharing in swarm robotic systems using Blockchain technology. The distributed ledger of Blockchain technology has the potential to solve the information-sharing problem and to easily synchronize the robots' states. However, because a Blockchain persistently keeps past transactions, the growth of its chain size is one of the serious issues in managing Blockchain technology. In this paper, we introduce a methodology for sharing information among autonomous robots and demonstrate through experiments how differences in the data size recorded in the blockchain affect the chain size. As a result, compared with our previous approach, we succeeded in suppressing the increase in chain size with the proposed approach; the growth in chain size was reduced by about 73.0% when each node repeatedly shared about 2.8 KB of image data 100 times.
Citations: 12
Neural Cryptography Based on the Topology Evolving Neural Networks
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00091
Yuetong Zhu, Danilo Vasconcellos Vargas, K. Sakurai
Modern cryptographic schemes are developed based on mathematical theory. Recent works show a new direction for cryptography based on neural networks: instead of learning a specific algorithm, a cryptographic scheme is generated automatically. While one kind of neural network has been used to achieve such a scheme, whether the idea of neural cryptography can be realized with other neural network architectures is unknown. In this paper, we make use of this property to create a neural cryptography scheme on a new topology-evolving neural network architecture called the Spectrum-diverse unified neuroevolution architecture. First, experiments are conducted to verify that the Spectrum-diverse unified neuroevolution architecture is able to achieve automatic encryption and decryption. Subsequently, we conduct experiments to achieve a neural symmetric cryptosystem using adversarial training.
Citations: 11
A Cache Replacement Policy with Considering Global Fluctuations of Priority Values
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00077
J. Tada
In high-associativity caches, the hardware overhead of the cache replacement policy becomes a problem. To avoid this problem, the Adaptive Demotion Policy (ADP) was proposed. The ADP focuses on priority value demotion at a cache miss, and it can achieve higher performance than conventional cache replacement policies. The ADP can be implemented with small hardware resources, and its priority value update logic can be implemented at a small hardware cost. The ADP can suit various applications through appropriate selection of its insertion, promotion, and selection policies. If suitable policies can be selected dynamically for the running application, the performance of the cache replacement policy will increase. To achieve this dynamic selection, this paper focuses on the global fluctuations of the priority values. First, the cache is partitioned into several partitions. At every cache access, the total of the priority values in each partition is calculated. At every set interval, the fluctuations of the total priority values in all partitions are checked, and this information is used to detect the behavior of the application. This paper adapts this mechanism to the ADP; the adapted cache replacement policy is called the ADP-G. The performance evaluation shows that the ADP-G achieves MPKI reductions and IPC improvements compared to the LRU policy, the RRIP policy, and the ADP.
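The interval-based monitoring the abstract describes might be sketched as follows. The `PartitionMonitor` class and its threshold-based change test are assumptions for illustration; the abstract does not specify the exact detection rule used by the ADP-G.

```python
class PartitionMonitor:
    """Track per-partition totals of priority values and, at each sampling
    interval, flag partitions whose total fluctuated by more than `threshold`
    since the previous interval (a proxy for an application phase change)."""

    def __init__(self, n_partitions: int, threshold: int):
        self.totals = [0] * n_partitions   # running total per partition
        self.last = [0] * n_partitions     # totals at the previous interval
        self.threshold = threshold

    def on_access(self, partition: int, delta: int):
        # Called on every cache access with the change in priority value.
        self.totals[partition] += delta

    def sample(self):
        # Called at every set interval; returns partitions whose total
        # priority fluctuated enough to suggest a behaviour change.
        changed = [i for i, (t, l) in enumerate(zip(self.totals, self.last))
                   if abs(t - l) > self.threshold]
        self.last = list(self.totals)
        return changed
```

A policy selector could then switch the insertion, promotion, or selection policy only for the flagged partitions, keeping the hardware cost of the update logic small.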
Citations: 2
An Trace-Driven Performance Prediction Method for Exploring NoC Design Optimization
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00042
Naoya Niwa, Tomohiro Totoki, Hiroki Matsutani, M. Koibuchi, H. Amano
Performance prediction for a NoC-based Chip Multi-Processor (CMP) is one of the main design concerns. Generally, there is a trade-off between accuracy and time overhead in the performance prediction of computer systems. In particular, the time overhead grows proportionally or exponentially with the number of cores when using a cycle-accurate full-system simulator such as gem5. In this study, we propose an accurate and scalable method to predict the influence of NoC design parameters on performance. Our method counts the number of execution cycles when employing the target NoC, based on the statistics of a one-time execution of a full-system simulation using a fully connected NoC. To evaluate the accuracy and execution time overhead, we use randomly generated allocations of processors on a 3D-mesh NoC topology. The Mean Absolute Percentage Error of the estimated cycles is about 4.7%, and the Maximum Absolute Percentage Error is about 8.5%.
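The two error metrics reported in the abstract have standard definitions, written out here for reference (this is not code from the paper):

```python
def mape(actual, predicted):
    # Mean Absolute Percentage Error over paired cycle counts, in percent.
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def max_ape(actual, predicted):
    # Maximum Absolute Percentage Error: the single worst prediction, in percent.
    return 100.0 * max(abs(a - p) / a for a, p in zip(actual, predicted))
```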
Citation count: 1
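The accuracy figures reported for this method are percentage-error metrics over estimated versus simulated cycle counts. A minimal sketch of how Mean Absolute Percentage Error (MAPE) and Maximum Absolute Percentage Error are computed; the sample cycle counts below are hypothetical illustrations, not data from the paper:

```python
# Sketch of the error metrics used to evaluate the trace-driven estimator
# against cycle-accurate full-system simulation (e.g., gem5) results.

def mape(actual, predicted):
    """Mean Absolute Percentage Error over paired cycle counts."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return 100.0 * sum(errors) / len(errors)

def max_ape(actual, predicted):
    """Maximum Absolute Percentage Error over paired cycle counts."""
    return 100.0 * max(abs(a - p) / a for a, p in zip(actual, predicted))

# Hypothetical numbers: simulated cycles from gem5, estimated cycles
# from the one-time-trace method.
simulated = [1_000_000, 2_500_000, 800_000]
estimated = [1_040_000, 2_430_000, 830_000]

print(f"MAPE:   {mape(simulated, estimated):.2f}%")
print(f"MaxAPE: {max_ape(simulated, estimated):.2f}%")
```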
Application of Machine Learning Techniques on Prediction of Future Processor Performance
Pub Date : 2018-11-01 DOI: 10.1109/CANDARW.2018.00044
Goktug Inal, Gürhan Küçük
Today, processors utilize many datapath resources with various sizes. In this study, we focus on single thread microprocessors, and apply machine learning techniques to predict processors' future performance trend by collecting and processing processor statistics. This type of a performance prediction can be useful for many ongoing computer architecture research topics. Today, these studies mostly rely on history- and threshold-based prediction schemes, which collect statistics and decide on new resource configurations depending on the results of those threshold conditions at runtime. The proposed offline training-based machine learning methodology is an orthogonal technique, which may further improve the performance of such existing algorithms. We show that our neural network based prediction mechanism achieves around 70% accuracy for predicting performance trend (gain or loss in the near future) of applications. This is a noticeably better result compared to accuracy results obtained by naïve history-based prediction models.
{"title":"Application of Machine Learning Techniques on Prediction of Future Processor Performance","authors":"Goktug Inal, Gürhan Küçük","doi":"10.1109/CANDARW.2018.00044","DOIUrl":"https://doi.org/10.1109/CANDARW.2018.00044","url":null,"abstract":"Today, processors utilize many datapath resources with various sizes. In this study, we focus on single thread microprocessors, and apply machine learning techniques to predict processors' future performance trend by collecting and processing processor statistics. This type of a performance prediction can be useful for many ongoing computer architecture research topics. Today, these studies mostly rely on history-and threshold-based prediction schemes, which collect statistics and decide on new resource configurations depending on the results of those threshold conditions at runtime. The proposed offline training-based machine learning methodology is an orthogonal technique, which may further improve the performance of such existing algorithms. We show that our neural network based prediction mechanism achieves around 70% accuracy for predicting performance trend (gain or loss in the near future) of applications. This is a noticeably better result compared to accuracy results obtained by naïve history based prediction models.","PeriodicalId":329439,"journal":{"name":"2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130499728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0
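The "history- and threshold-based prediction schemes" that this abstract contrasts with its neural-network approach can be illustrated with a minimal sketch. The interval IPC values and the 5% threshold below are hypothetical illustrations, not parameters from the paper:

```python
# Minimal sketch of a naive history/threshold-based trend predictor:
# compare the latest per-interval IPC sample against the average of the
# preceding window, and flag a trend when the relative change crosses
# a fixed threshold.

def predict_trend(ipc_history, threshold=0.05):
    """Predict 'gain', 'loss', or 'stable' for the next interval."""
    *window, latest = ipc_history
    baseline = sum(window) / len(window)      # average of past intervals
    change = (latest - baseline) / baseline   # relative change vs. history
    if change > threshold:
        return "gain"
    if change < -threshold:
        return "loss"
    return "stable"

print(predict_trend([1.20, 1.18, 1.22, 1.35]))  # latest well above average
print(predict_trend([1.20, 1.18, 1.22, 1.19]))  # within the threshold band
```

An offline-trained model, as the paper proposes, would replace the fixed threshold rule with a learned function of many such statistics.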
Journal
2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW)