Dynamic uncertainty factors in complex environments, such as changing flight conditions, moving obstacles, and other sudden threats, make real-time path planning for Unmanned Aerial Vehicles (UAVs) challenging. This paper proposes a dynamic-environment model together with a pheromone diffusion ant colony optimization (PDACO) method to solve real-time UAV path planning in a dynamic environment. The translational-obstacle and random-obstacle methods efficiently simulate the dynamic environment. PDACO exploits the pheromone diffusion characteristics of an ant colony, diffusing pheromones to adjacent paths after each iteration and thus expanding the guidance range of the pheromones. When the environment changes, pheromone diffusion allows new paths to be planned quickly and accelerates the convergence of the algorithm. Simulation results show that the dynamic environment model accords with actual conditions. Compared with four baseline algorithms, PDACO enables the UAV to find a new path with shorter path length and less computing time when the environment changes. The proposed method is feasible and effective.
{"title":"Dynamic Path Planning of UAV Based on Pheromone Diffusion Ant Colony Algorithm","authors":"Bin Zhou, Yan Guo, Ning Li, Cuntao Liu","doi":"10.1145/3507971.3507975","DOIUrl":"https://doi.org/10.1145/3507971.3507975","url":null,"abstract":"Due to the dynamic uncertainty factors in a complex environment, such as flight conditions, movable obstacles and other sudden threats. It is a challenge to realize the real-time path planning of Unmanned Aerial Vehicles (UAV). In this paper, the method is proposed with a model of the dynamic environment and a method of pheromone diffusion ant colony optimization (PDACO) to solve the real-time path planning of UAV in a dynamic environment. The translational obstacle method and the random obstacle method can efficiently simulate the dynamic environment. PDACO takes advantage of pheromone diffusion characteristics in an ant colony, and diffuses the pheromones to adjacent paths after each iteration, thus expanding the guidance range of pheromones. When the environment changes, the pheromone diffusion method can quickly plan new paths and accelerate the convergence of the algorithm. Simulation results show that the dynamic environment model accords with the actual situation. Compared with four algorithms, PDACO ensures that the UAV can optimize a new path with shorter path length and computing time when environment changes. The proposed method is feasible and effective.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114020372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
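The abstract's core idea, diffusing pheromone from each path cell to its neighbours after every iteration, can be sketched as follows. This is a minimal illustration assuming a grid map and a 4-neighbour diffusion with rate `diff_rate`; the paper's exact diffusion rule is not given here.

```python
# Illustrative pheromone diffusion on a grid map: after each iteration, every
# cell keeps (1 - diff_rate) of its pheromone and spreads the rest evenly over
# its in-bounds 4-neighbours, widening the guidance range of good paths.

def diffuse_pheromone(tau, diff_rate=0.1):
    """tau: 2D list of pheromone levels. Returns the diffused grid."""
    rows, cols = len(tau), len(tau[0])
    new = [[tau[r][c] * (1.0 - diff_rate) for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            share = tau[r][c] * diff_rate
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs = [(i, j) for i, j in nbrs if 0 <= i < rows and 0 <= j < cols]
            for i, j in nbrs:
                new[i][j] += share / len(nbrs)
    return new
```

Because each cell redistributes its share only among valid neighbours, the total pheromone is conserved; when an obstacle moves, cells near a previously good path already carry guidance, which is what speeds up re-planning.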
Muhammad Hamzah, Diyanatul Husna, Fransiskus Astha Ekadiyanto, K. Purnama, Afif Nurul Hidayati, Mauridhi Hery Purnomo, Supeno Mardi Susiki Nugroho, Reza Fuad Rachmadi, I. Nurtanio, Anak Agung Ratna
Health care technology is continually improved to meet the basic human need for health, but it must also maintain the security and privacy of the huge amounts of data being created and distributed. Telemedicine built on telecommunication technology offers convenience and lower cost, but it remains vulnerable to cyber attacks, threatening the privacy of patients’ data. One approach to securing the data is to apply access control to it. An immutable blockchain can help enforce access control so that it cannot be violated. We propose a system that implements the blockchain on Ethereum with a React web application as its interface. Data management and access control are provided through Ethereum smart contracts, and the access control requires different role-based permissions to upload and access patients’ data. The system works successfully, with an average role-verification time of 1.8033 seconds per session. The privacy of patients’ data is ensured because only the patients themselves are allowed to keep the IDs of their own data.
{"title":"Access Control Design Based on User Role Type in Telemedicine System Using Ethereum Blockchain","authors":"Muhammad Hamzah, Diyanatul Husna, Fransiskus Astha Ekadiyanto, K. Purnama, Afif Nurul Hidayati, Mauridhi Hery Purnomo, Supeno Mardi Susiki Nugroho, Reza Fuad Rachmadi, I. Nurtanio, Anak Agung Ratna","doi":"10.1145/3507971.3507991","DOIUrl":"https://doi.org/10.1145/3507971.3507991","url":null,"abstract":"To fulfill health as basic human needs, health care services technology is always improved but also must maintain security and privacy of the data due to huge amount of data is created and distributed. Telemedicine with telecommunication technology offers convenience and cheaper cost, but still is vulnerable to cyber attacks, making it a threat for patients’ data's privacy. One approach that can be used to secure the data is applying access control to the data. Immutable blockchain can helps to enforce access control so it cannot be violated. We propose a system which implements Ethereum for the blockchain and React web application for the interface of the system. Data management and the access control are provided through Ethereum smart contracts, and the access control requires different role-based permission to upload and access the patients’ data. The system works successfully with average time taken to verify the role is 1.8033 seconds per session. The privacy of the patients’ data is ensured because only the patients that are allowed to keep the ID of their own data.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"46 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114021162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
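The paper implements its role-based permission check as an Ethereum smart contract; the logic can be modelled in a few lines. The sketch below is a hypothetical Python rendering of such a check — the role names, actions, and method names are illustrative, not the paper's actual contract API.

```python
# Hypothetical model of a role-based access check as the paper's smart
# contract might enforce it: each address has one role, and each action on a
# medical record is permitted only to specific roles.

class RecordAccessControl:
    ROLES = {"patient", "doctor", "admin"}
    PERMISSIONS = {
        "upload": {"doctor"},
        "read": {"doctor", "patient"},
        "grant": {"patient"},  # only the patient shares the ID of their data
    }

    def __init__(self):
        self.roles = {}  # address -> role

    def register(self, address, role):
        if role not in self.ROLES:
            raise ValueError(f"unknown role: {role}")
        self.roles[address] = role

    def is_allowed(self, address, action):
        role = self.roles.get(address)
        return role is not None and role in self.PERMISSIONS.get(action, set())
```

On-chain, the same check would typically live in a Solidity modifier so that every state-changing call verifies the caller's role before touching the record mapping.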
This article applies several common speech noise reduction methods to Morse signal noise reduction and compares and analyzes them. The paper introduces the Morse signal and the necessity of noise reduction, and describes the principles, processes, advantages, and disadvantages of the different methods: traditional approaches such as band-pass filtering and spectral subtraction, and noise reduction based on U-Net image segmentation. The effects of the methods are analyzed by comparing the time-domain and frequency-domain diagrams of the same Morse signal before and after noise reduction. It is concluded that noise reduction based on U-Net image segmentation has the best effect.
{"title":"Analysis of Morse Signal Noise Reduction Methods","authors":"Yiran Wang, Xiangdong You, Cuiling Tang","doi":"10.1145/3507971.3508003","DOIUrl":"https://doi.org/10.1145/3507971.3508003","url":null,"abstract":"This article applies several common speech noise reduction methods to Morse signal noise reduction, and compares and analyzes these several noise reduction methods. The paper introduces the Morse signal and the necessity of noise reduction and the principles, processes, advantages and disadvantages of different noise reduction methods such as band-pass filtering, spectral subtraction and other traditional methods, and noise reduction based on U-Net image segmentation. The effects of noise reduction methods are analyzed by the comparison of time-domain diagram and frequency-domain diagram of the same Morse signal before and after noise reduction. Finally, it is concluded that the noise reduction based on U-Net image segmentation has a better effect.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"231 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115906983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
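Spectral subtraction, one of the traditional methods the article compares, can be sketched compactly: estimate the noise magnitude spectrum from a noise-only segment and subtract it from the noisy signal's spectrum, keeping the noisy phase. The floor parameter below is an illustrative choice, not the article's.

```python
import numpy as np

# Minimal spectral-subtraction sketch: subtract an estimated noise magnitude
# spectrum from the noisy spectrum, clamp to a small spectral floor instead of
# allowing negative magnitudes, and resynthesise with the noisy phase.

def spectral_subtract(noisy, noise_est, floor=0.01):
    """Denoise `noisy` given a noise-only segment `noise_est` of equal length."""
    spec = np.fft.rfft(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_est))
    mag = np.abs(spec) - noise_mag                 # subtract noise magnitude
    mag = np.maximum(mag, floor * np.abs(spec))    # spectral floor
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(noisy))
```

For a Morse signal this works well because the keyed tone concentrates energy in a narrow band, so the broadband noise estimate removes most of the interference elsewhere in the spectrum; the residual "musical noise" is the weakness the U-Net approach avoids.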
Kaijie Han, Min Nie, Guang Yang, J. Wang, Ting Gao, Feng Tian, Jingqi Li
In order to improve communication performance during long-distance transmission of quantum states, this paper establishes a quantum relay model between the sender and the receiver to achieve long-distance communication. Specifically, we propose a quantum state entanglement purification process based on a fishing strategy by analyzing the quantum repeater model, and then build a wide area network architecture on this basis. Numerical simulation results show that the bit error rate of quantum information transmission decreases as the fidelity of entanglement purification increases, and that the throughput increases with the probability of successful entanglement purification. Therefore, quantum repeaters can effectively improve the transmission performance of quantum states and provide a theoretical basis for future practical secure quantum wide area network communication.
{"title":"Quantum Satellite Wide Area Network Construction and Performance Simulation Based on Quantum Repeater","authors":"Kaijie Han, Min Nie, Guang Yang, J. Wang, Ting Gao, Feng Tian, Jingqi Li","doi":"10.1145/3507971.3507998","DOIUrl":"https://doi.org/10.1145/3507971.3507998","url":null,"abstract":"∗In order to improve the communication performance during longdistance transmission of quantum states, this paper establishes a quantum relay model between the sender and the receiver to achieve long-distance communication. Specifically, we propose a quantum state entanglement purification process based on fishing strategy by analyzing the model of quantum repeater and then build a wide area network architecture on this basis. Finally, numerical simulation results represent that the bit error rate of quantum information transmission decreases with the fidelity of entanglement purification increases and the throughput rate increases with the probability of successful entanglement purification increases. Therefore, the use of quantum repeater can effectively improve the transmission performance of quantum states as well as provide a theoretical basis for future practical secure quantum wide area network communication.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117196986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sequential recommender models have become a research hotspot in the field of recommender systems due to their excellent ability to describe users’ dynamic preferences, and deep-learning-based sequential recommenders have achieved state-of-the-art results. However, with the increasing number of users and items, the traditional item embedding table may consume a huge amount of memory, making the model difficult to deploy on resource-limited devices. In this paper, we propose a multi-dimensional quantization encoding (MDQE) method to resolve this issue. MDQE consists of two compression techniques. We first divide items into several groups according to their interaction frequency and assign a different embedding dimension to each group, constructing multi-dimensional group-wise embedding tables. Then, we use mapping matrices to transform the group-wise embedding tables into quantized codebooks for further compression. Experiments on three real-world datasets demonstrate that MDQE achieves up to a 13.86x compression ratio with negligible accuracy loss during inference.
{"title":"Compressing Embedding Table via Multi-dimensional Quantization Encoding for Sequential Recommender Model","authors":"Feng Wang, Miaomiao Dai, Xudong Li, Liquan Pan","doi":"10.1145/3507971.3508010","DOIUrl":"https://doi.org/10.1145/3507971.3508010","url":null,"abstract":"Sequential recommender models have become a research hotspot in the field of current recommender systems due to its excellent ability to describe users’ dynamic preferences. Sequential recommender models based on deep learning have achieved state-of-the-art results. However, with the increasing number of users and items, the traditional item embedding table may consume a huge amount of memory so that the model may be more difficult to deploy to resource-limited devices. In this paper, we propose a multi-dimensional quantization encoding(MDQE) method to resolve this issue. MDQE mainly consists of two compression techniques. We first divide items into several groups according to the interaction frequency of items and assign different dimensions to each group to construct multi-dimensional group-wise embedding tables. Then, we use mapping matrices to transform the multi-dimensional group-wise embedding tables into quantized codebooks for further compressing. The experiments on three real-world datasets demonstrate that the proposed MDQE can achieve up to 13.86x compression ratio with negligible accuracy loss during inference.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115263097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
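The first MDQE step — frequency-based grouping with per-group embedding dimensions mapped into a shared dimension — can be sketched as below. The two-way split, the group dimensions, and the random projection are illustrative assumptions; the paper's grouping rule and codebook construction are more elaborate.

```python
import numpy as np

# Simplified sketch of group-wise multi-dimensional embeddings: frequent items
# get a wide embedding table, rare items a narrow one, and a per-group mapping
# matrix projects both into a shared dimension so downstream layers are unchanged.

def build_group_embeddings(freqs, group_dims=(32, 8), shared_dim=32, seed=0):
    """freqs: per-item interaction counts. Returns (n_items, shared_dim) embeddings."""
    rng = np.random.default_rng(seed)
    order = np.argsort(freqs)[::-1]     # most frequent items first
    n = len(freqs)
    split = n // 2                      # top half -> wide table, rest -> narrow
    emb = np.zeros((n, shared_dim))
    for group, dim in zip((order[:split], order[split:]), group_dims):
        table = rng.standard_normal((len(group), dim)) * 0.01  # group-wise table
        proj = rng.standard_normal((dim, shared_dim))          # mapping matrix
        emb[group] = table @ proj
    return emb
```

The memory saving comes from the tables, which are what a trained model stores: in this toy split, half the items keep only 8 parameters each instead of 32, before any quantization of the codebooks.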
Fuzzing has become one of the most widely used techniques for discovering software vulnerabilities thanks to its effectiveness. However, even state-of-the-art fuzzers are not very efficient at identifying promising seeds. Coverage-guided fuzzers like American Fuzzy Lop (AFL) usually employ a single criterion to evaluate seed quality, which may pass over promising seeds. To overcome this problem, we design a potential seed selection scheme called MOAFL. The key idea is to measure seed potential using multiple objectives and to prioritize promising seeds that are more likely to generate interesting seeds via mutation. More specifically, MOAFL leverages lightweight swarm intelligence techniques, namely Multi-Objective Particle Swarm Optimization (MOPSO), to handle multi-criteria seed selection, which allows MOAFL to choose promising seeds effectively. We implement this scheme on top of AFL, and our evaluations on the LAVA-M dataset and seven popular real-world programs demonstrate that MOAFL significantly increases code coverage over AFL.
{"title":"MOAFL: Potential Seed Selection with Multi-Objective Particle Swarm Optimization","authors":"Jinman Jiang, Rui Ma, Xiajing Wang, Jinyuan He, Donghai Tian, Jiating Li","doi":"10.1145/3507971.3507977","DOIUrl":"https://doi.org/10.1145/3507971.3507977","url":null,"abstract":"Fuzzing has become one of the most widely used technology for discovering software vulnerabilities thanks to its effectiveness. However, even the state-of-the-art fuzzers are not very efficient at identifying promising seeds. Coverage-guided fuzzers like American Fuzzy Lop (AFL) usually employ single criterion to evaluate the quality of seeds that may pass up potential seeds. To overcome this problem, we design a potential seed selection scheme, called MOAFL. The key idea is to measure seed potential utilizing multiple objectives and prioritize promising seeds that are more likely to generate interesting seeds via mutation. More specifically, MOAFL leverages lightweight swarm intelligence techniques like Multi-Objective Particle Swarm Optimization (MOPSO) to handle multi-criteria seed selection, which allows MOAFL to choose promising seeds effectively. We implement this scheme based on AFL and our evaluations on LAVA-M dataset and 7 popular real-world programs demonstrate that MOAFL significantly increases the code coverage over AFL.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116252882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
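Multi-criteria seed selection hinges on Pareto dominance: a seed is kept when no other seed beats it on every objective at once. The sketch below illustrates that core comparison; the three objectives named in the docstring and the ranking itself are illustrative placeholders, not AFL's actual metrics or the full MOPSO update.

```python
# Illustrative Pareto-based seed ranking in the spirit of multi-objective seed
# selection: retain the seeds not dominated by any other seed.

def dominates(a, b):
    """True if score vector `a` Pareto-dominates `b` (all objectives maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(seeds):
    """seeds: {name: (coverage, 1/exec_time, path_depth)} -> non-dominated names."""
    names = list(seeds)
    return [n for n in names
            if not any(dominates(seeds[m], seeds[n]) for m in names if m != n)]
```

A single-criterion fuzzer would discard a slow seed with deep coverage; under Pareto ranking that seed survives as long as nothing beats it on both speed and coverage, which is exactly the "passed-over seeds" problem the scheme targets.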
To solve the problem of secure access across the flexible boundary of the new-type electric power system, this paper designs a cross-system identity authentication (CSIA) system that helps the power grid and systems on both the generation and demand sides build a foundation of trust for further data sharing. The system supports a variety of cryptography-based unified identity authentication schemes, such as PKI-based, IBC-based, and blockchain-based identity authentication, and is designed to meet the high concurrency requirements of identity authentication. To verify the function of the CSIA system, this paper also simulates the system workflow. The test results show that the system can effectively provide a cross-system identity authentication service for systems and devices built on different identity authentication technologies.
{"title":"Design of cross-system identity authentication system suitable for new-type electric power system","authors":"Meng Xu, Feng Zhai, Longhao Liu, Xiaobing Liang","doi":"10.1145/3507971.3507992","DOIUrl":"https://doi.org/10.1145/3507971.3507992","url":null,"abstract":"In order to solve the flexibility boundary security access problem of new-type electric power system, this paper has designed a cross-system identity authentication system which can help power grid and systems both on the generation side and demand side to build a foundation of trust for further data sharing. This system can realize a variety of cryptograph-based unified identity authentication such as PKI-based, IBC-based, blockchain-based identity authentication, etc. This system is also designed to meet high concurrency requirements of identity authentication. In order to verify the fucntion of the CSIA system, this paper has also make the system workflow simulated. The test results showed that the system can provide a cross-system identity authentication service effectively for systems and devices based on different identity authentication technology.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127346068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The limited resources and complex application environments of Internet of Things (IoT) devices make them vulnerable to cyberattacks and natural disasters, so improving the robustness of the IoT topology becomes a critical issue. Existing research on IoT topology robustness mostly uses heuristic algorithms, whose high computational cost cannot meet the needs of topology optimization in low-latency IoT scenarios. To solve this problem, this paper proposes an intelligent robustness optimization method for the IoT using graph neural networks (TRO-GNN). The method first uses a graph neural network to learn, from a data set, the evolution characteristics from an initial IoT topology to a highly robust topology; the output of the graph neural network is then transformed into an effective predicted topology using the designed robustness search strategy. The experimental results show that TRO-GNN effectively improves the robustness of scale-free IoT topologies against malicious attacks at low computational cost.
{"title":"An Intelligent Robustness Optimization Method for Internet of Things Using Graph Neural Networks","authors":"Yabin Peng, Caixia Liu, Shuxin Liu, Kai Wang","doi":"10.1145/3507971.3508000","DOIUrl":"https://doi.org/10.1145/3507971.3508000","url":null,"abstract":"The limited resources and complex application environment of Internet of Things (IoT) devices, making them vulnerable to cyberattacks and natural disasters. Thus, how to improve the robustness of the IoT topology becomes a critical issue. Existing research on the robustness of IoT topology mostly uses heuristic algorithms, and the high computational cost cannot meet the needs of topology optimization in low-latency IoT scenarios. To solve this problem, this paper proposes an intelligent robustness optimization method for IoT using graph neural networks (TRO-GNN). The method first uses the graph neural network to extract the evolution characteristics from the initial IoT topology to the highly robust topology from the data set, and then the output of the graph neural network is transformed into an effective predicted topology by using the designed robustness search strategy. The experimental results show that TRO-GNN effectively improve the robustness of scale-free IoT topology against malicious attacks, and the computational cost is low.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115050274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A joint framework is proposed for the 3D trajectory design of an unmanned aerial vehicle (UAV) acting as a flying base station under time-varying scenarios of user mobility and changing communication request probabilities. The 3D trajectory design problem is formulated to maximize throughput during the UAV’s flying period while satisfying the rate requirements of all ground users (GUEs). Specifically, we consider that GUEs change their positions and communication request probabilities at each time slot; the UAV needs to predict these changes so that it can design its 3D trajectory in advance to achieve the optimization target. To solve this problem, an echo state network (ESN) based prediction algorithm is first proposed for predicting the positions and communication request probabilities of the GUEs. Based on these predictions, a deep reinforcement learning (DRL) method is then invoked to find the optimal UAV deployment locations in each time slot. The proposed method 1) uses the ESN-based predictions as part of the DRL agent’s state; 2) designs the action and reward for the DRL agent to learn the environment and its dynamics; and 3) derives the optimal strategy under the guidance of a double deep Q network (DDQN). The simulation results show that with the proposed algorithm the UAV can dynamically adjust its trajectory to adapt to time-varying scenarios, achieving throughput gains of about 10.68%.
{"title":"3D Trajectory Design of UAV Based on Deep Reinforcement Learning in Time-varying Scenes","authors":"Qingya Li, Li Guo, Chao Dong, Xidong Mu","doi":"10.1145/3507971.3507982","DOIUrl":"https://doi.org/10.1145/3507971.3507982","url":null,"abstract":"A joint framework is proposed for the 3D trajectory design of an unmanned aerial vehicle (UAV) as an flying base station under the time-varying scenarios of users’ mobility and communication request probability changes. The problem of 3D trajectory design is formulated for maximizing the throughput during a UAV’s flying period while satisfying the rate requirement of all ground users (GUEs). Specifically, we consider that GUEs change their positions and communication request probabilities at each time slot; the UAV needs to predict these changes so that it can design its 3D trajectory in advance to achieve the optimization target. In an effort to solve this pertinent problem, an echo state network (ESN) based prediction algorithm is first proposed for predicting the positions and communication request probabilities of GUEs. Based on these predictions, a Deep Reinforcement Learning (DRL) method is then invoked for finding the optimal deployment locations of UAV in each time slots. The proposed method 1) uses ESN based predictions to represent a part of DRL agent’s state; 2) designs the action and reward for DRL agent to learn the environment and its dynamics; 3) makes optimal strategy under the guidance of a double deep Q network (DDQN). The simulation results show that the UAV can dynamically adjust its trajectory to adapt to time-varying scenarios through our proposed algorithm and throughput gains of about 10.68% are achieved.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121028605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
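The ESN prediction step rests on a simple mechanism: a fixed random reservoir is driven by the input sequence, and only a linear readout is trained (here by ridge regression). The sketch below shows that mechanism on a 1-D sequence; the reservoir size, spectral radius, and readout are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Tiny echo state network sketch: fixed random input and reservoir weights
# (reservoir scaled to a spectral radius < 1 for the echo state property),
# with a ridge-regression readout trained on collected reservoir states.

class ESN:
    def __init__(self, n_in, n_res=50, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.standard_normal((n_res, n_res))
        self.w = w * (rho / np.max(np.abs(np.linalg.eigvals(w))))
        self.x = np.zeros(n_res)

    def step(self, u):
        """Advance the reservoir by one input sample and return its state."""
        self.x = np.tanh(self.w_in @ u + self.w @ self.x)
        return self.x

    def fit_readout(self, states, targets, ridge=1e-6):
        S = np.asarray(states)
        self.w_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                                     S.T @ np.asarray(targets))

    def predict(self, states):
        return np.asarray(states) @ self.w_out
```

In the framework above, one such predictor per GUE (fed past positions and request probabilities) supplies the forecast that becomes part of the DDQN agent's state.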
Jianying Liu, Liandong Chen, Rui Shi, J. Xu, AN Liu
Current distant supervised relation extraction algorithms based on neural networks mostly use long short-term memory networks and convolutional neural networks, which cannot capture long-distance features of sentences. This paper proposes a distant supervised relation extraction model based on a hierarchical attention mechanism, which uses a self-attention mechanism to compute features between words and a sentence-level soft-attention mechanism to aggregate sentence features. Compared with previous methods, the proposed model better captures sentence features and improves sentence relation classification. On the NYT-10 dataset, compared with the PCNN_ATT algorithm, the P@100, P@200, and P@300 indicators increase by 4.8%, 4.9%, and 2.3%, respectively, and the AUC increases by 1.1%.
{"title":"Distant Supervised Relation Extraction with Hierarchical Attention Mechanism","authors":"Jianying Liu, Liandong Chen, Rui Shi, J. Xu, AN Liu","doi":"10.1145/3507971.3507980","DOIUrl":"https://doi.org/10.1145/3507971.3507980","url":null,"abstract":"Current distant supervised relation extraction algorithms based on Neural Networks mostly use long short-term memory networks and convolutional neural networks, which cannot capture long-distance features of sentences. This paper proposes a distant supervised relation extraction model based on hierarchical attention mechanism, which uses self-attention mechanism to calculate features between words, and sentence-level soft-attention mechanism to extract dimensionality of sentence features. Compared with the previous method, the proposed model can better capture sentence features and improve the effect of sentence relation classification. On the dataset NYT-10, compared with the PCNN_ATT algorithm, the P@100, P@200, and P@300 indicators increase by 4.8%, 4.9% and 2.3%, respectively, and the AUC indicator increases by 1.1%.","PeriodicalId":439757,"journal":{"name":"Proceedings of the 7th International Conference on Communication and Information Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127796463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
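The two attention levels described above can be sketched with plain matrix operations: token-level self-attention inside each sentence, then a soft attention that weights the sentence vectors of one entity-pair bag. Dimensions and the relation query vector below are toy assumptions, not the trained model's parameters.

```python
import numpy as np

# Minimal two-level attention sketch: self-attention relates every word to
# every other word regardless of distance (which LSTMs/CNNs struggle with),
# and a sentence-level soft attention aggregates the bag of sentences.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """tokens: (n, d) word vectors -> (n, d) contextualised vectors."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)   # word-to-word compatibility
    return softmax(scores, axis=1) @ tokens

def bag_representation(sentences, query):
    """Weight each sentence vector by its match with a relation query vector."""
    sent_vecs = np.stack([self_attention(s).mean(axis=0) for s in sentences])
    weights = softmax(sent_vecs @ query)      # sentence-level soft attention
    return weights @ sent_vecs
```

The soft attention is what makes the model robust to distant supervision's label noise: sentences that poorly match the relation query receive low weight in the bag representation fed to the classifier.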