
Latest Publications: 2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)

A Noise-Assisted Polar Code Attempt Decoding Algorithm
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386557
He Zhanquan, L. Xiaoqing, W. Shusheng, Qian YueZhen
Aiming at the characteristics of Polar codes, this paper proposes a noise-assisted attempt decoding algorithm, based on an analysis of frame error rate and bit error rate, to improve Polar code decoding performance. The basic principle of the algorithm is that when an encoded data frame is transmitted through the channel, the signal-to-noise ratio is degraded. When the received signal exceeds the decoder's capability and cannot be decoded normally, auxiliary man-made noise is added so that the received signal has a lower decision probability. Adding low-power man-made noise further randomizes the low-reliability region of the original sampled signal (samples close to zero for BPSK modulation), while high-reliability samples have a smaller bit-flip probability. Under this condition, decoding is attempted: if the check on the decoded data passes, decoding is finished; otherwise, new man-made noise is generated for the next attempt, until decoding succeeds or the maximum number of attempts is reached. The algorithm is a post-compensation processing method for existing algorithms and plays an incremental role.
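To make the retry mechanism concrete, here is a minimal Python sketch of the attempt-decoding loop the abstract describes, assuming a soft-decision polar decoder and a CRC check supplied as placeholder callables (`decode` and `crc_ok` are illustrative names, not from the paper):

```python
import numpy as np

def noise_assisted_decode(llr, decode, crc_ok, max_attempts=16, noise_std=0.1, rng=None):
    """Retry decoding with low-power artificial noise added to the received soft values.

    llr          -- soft channel values (e.g. BPSK LLRs) for one frame
    decode       -- baseline polar decoder: llr -> bit estimates (placeholder)
    crc_ok       -- CRC / parity check on the decoded bits (placeholder)
    max_attempts -- upper bound on retries, as described in the abstract
    """
    rng = rng or np.random.default_rng()
    bits = decode(llr)
    if crc_ok(bits):                       # first, try the unmodified received signal
        return bits, 0
    for attempt in range(1, max_attempts + 1):
        perturbed = llr + rng.normal(0.0, noise_std, size=llr.shape)
        # low-power noise mostly re-randomizes samples near zero (low reliability),
        # while high-magnitude (reliable) samples rarely change sign
        bits = decode(perturbed)
        if crc_ok(bits):
            return bits, attempt
    return bits, max_attempts              # give up after the maximum number of attempts
```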
Citations: 0
Surgical Action and Instrument Detection Based on Multiscale Information Fusion
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386349
Wenting Xu, Ruiguo Liu, Weifeng Zhang, Z. Chao, F. Jia
The detection of surgical actions and instruments plays a very important role in computer-assisted endoscopic surgery. However, organ deformation and the narrow surgical field increase the difficulty of the task, and the detection of surgical actions and instruments therefore remains unsolved. In this paper, we propose a multiscale fusion feature pyramid network (MSF-FPN) to merge low-level and high-level semantic information. First, the feature maps are effectively aggregated in the initial layer of the pyramid network; the feature information is then cross-transmitted and diverges in the middle layer. Finally, a strong semantic feature map is obtained at the output layer. Experiments verify that the average precision of the proposed MSF-FPN on the public endoscopic surgeon action detection (ESAD) dataset is increased by 2.9% and 1.5% compared with the general FPN and the path aggregation network (PANet), respectively, and the average precision on the proposed cataract-based object detection (COD) dataset is increased by 4.3% and 2.6%, respectively.
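As an illustration of multiscale feature fusion (not the exact MSF-FPN architecture, whose layer layout the abstract does not specify), a minimal PyTorch sketch of an FPN-style neck that merges low- and high-level feature maps might look like this; the channel sizes and number of pyramid levels are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusion(nn.Module):
    """Toy FPN-style neck that fuses low- and high-level backbone features."""
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, feats):                      # feats: [fine, ..., coarse]
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # top-down pathway: upsample each coarser map and add it to the next finer one
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return [sm(p) for sm, p in zip(self.smooth, laterals)]

# usage with dummy backbone features (batch size 1)
feats = [torch.randn(1, c, s, s) for c, s in ((256, 64), (512, 32), (1024, 16))]
fused = MultiscaleFusion()(feats)                  # three 256-channel maps at 64/32/16
```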
Citations: 1
A Design of Volunteer Computing System Based on Blockchain
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386703
Boxuan Shan
In recent years, the range of blockchain applications has become more and more extensive. This paper proposes a volunteer computing system design based on the blockchain. The system takes advantage of the decentralized, persistent, and auditable characteristics of the blockchain to solve the scalability and single-point-of-failure problems of the traditional centralized C/S framework, and provides a degree of traceability for volunteer computations. Increased scalability means that the system can accommodate more computing jobs and participants. Solving the single-point-of-failure problem means that the volunteer computing system can provide researchers with longer-term and more stable computing resources. Traceability means that anyone can inspect the blockchain to see which volunteers have participated in each computing job in history and whether the results provided by each volunteer were adopted. At the same time, the persistence and auditability of the blockchain ensure the authenticity of this history. This paper studies volunteer computing under the C/S framework as well as the blockchain, proposes a blockchain-based framework for volunteer computing, and then discusses its pros and cons in terms of feasibility, scalability, security, authenticity, and traceability.
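A minimal sketch of the hash-linked ledger that underlies the traceability and auditability claims, assuming volunteer job assignments and result hashes are stored as plain records (the field names are illustrative, not the paper's data model):

```python
import hashlib, json, time
from dataclasses import dataclass, field

@dataclass
class Block:
    """One block recording volunteer-computing job assignments and results."""
    index: int
    prev_hash: str
    records: list            # e.g. [{"job": "...", "volunteer": "...", "result_hash": "..."}]
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        payload = json.dumps({"index": self.index, "prev": self.prev_hash,
                              "records": self.records, "ts": self.timestamp},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, records):
    """Append a new block whose prev_hash links it to the current chain tip."""
    prev = chain[-1].hash() if chain else "0" * 64
    chain.append(Block(index=len(chain), prev_hash=prev, records=records))

def verify(chain) -> bool:
    """Anyone can replay the chain to audit which volunteer did which job."""
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))
```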
Citations: 2
SRWGANTV: Image Super-Resolution Through Wasserstein Generative Adversarial Networks with Total Variational Regularization
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386518
Jun Shao, Liang Chen, Yi Wu
The study of generative adversarial networks (GANs) has greatly promoted research on the single image super-resolution (SISR) problem. SRGAN was the first to apply a GAN to SISR reconstruction and achieved good results, but it sacrifices fidelity. At the same time, it is well known that GANs are difficult to train, and improper training easily ruins the SISR results. Recently, the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) has been proposed to alleviate these issues with a relatively simple training process, at the expense of some model performance. However, we find that applying WGAN-GP to SISR still suffers from training instability, leading to failure to obtain a good SR result. To address this problem, we present an image super-resolution framework based on an enhanced WGAN (SRWGAN-TV). We introduce a total variational (TV) regularization term into the loss function of the WGAN. The TV regularization term stabilizes network training and improves the quality of the generated images. Experimental results on public datasets show that the proposed method achieves superior performance in both quantitative and qualitative measurements.
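A hedged sketch of how a total variation term can be added to a WGAN-style generator loss; the loss weights and the exact composition are assumptions for illustration, not the values reported for SRWGAN-TV:

```python
import torch
import torch.nn.functional as F

def total_variation(img):
    """Anisotropic total-variation penalty on a batch of images shaped (N, C, H, W)."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

def generator_loss(critic, sr, hr, adv_weight=1e-3, tv_weight=2e-8):
    """WGAN-style generator loss with an added TV regularization term."""
    content = F.mse_loss(sr, hr)          # pixel-wise fidelity term
    adversarial = -critic(sr).mean()      # WGAN generator objective (raise critic score)
    return content + adv_weight * adversarial + tv_weight * total_variation(sr)
```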
Citations: 3
A Convolutional Neural Network with Background Exclusion for Crowd Counting in Non-uniform Population Distribution Scenes
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386369
Xinfeng Zhang, Lisha Zuo, Baoqing Yang, Shuhan Chen, Bin Li
Crowd counting in public places is an issue of wide concern in the fields of public safety, activity planning, and space design. Current crowd counting methods are mainly aimed at scenes in which the crowd fills the whole image, and they are hard to apply in practice because the actual crowd is non-uniformly distributed in the scene. The complex background caused by a non-uniform population distribution affects the accuracy of crowd counting. Therefore, we propose a convolutional neural network with background exclusion for crowd counting. First, we divide the image into blocks and use a residual network to determine whether each block contains crowd, eliminating cluttered background areas and avoiding background interference with the count. Second, we use dilated convolution and asymmetric convolution to estimate the crowd density map of the image blocks that contain crowd. Finally, the density maps of all crowd areas are integrated to obtain the crowd count for the whole scene. We collect images of more general scenes, in which the crowd occupies only part of the image, and construct the Non-uniformly Distributed Crowd (NDC 2020) dataset. We conduct experiments on the ShanghaiTech datasets and the NDC 2020 dataset. The results show that our method is superior to existing crowd counting methods in scenes with a non-uniform population distribution.
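A minimal sketch of the block-wise background-exclusion pipeline, with the residual-network block classifier and the density estimator supplied as placeholder callables:

```python
import numpy as np

def count_crowd(image, contains_crowd, density_map, block=64):
    """Block-wise crowd counting with background exclusion.

    contains_crowd(patch) -> bool   (placeholder for the residual-network classifier)
    density_map(patch)    -> 2-D array of per-pixel density (placeholder estimator)
    Only blocks classified as crowd contribute to the final count, which is the
    background-exclusion idea described in the abstract.
    """
    h, w = image.shape[:2]
    total = 0.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = image[y:y + block, x:x + block]
            if contains_crowd(patch):          # skip cluttered background blocks
                total += float(np.sum(density_map(patch)))
    return total
```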
Citations: 0
Application of Yolo on Mask Detection Task
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386366
Ren Liu, Ziang Ren
2020 was a year marked by the COVID-19 pandemic, an event that disrupted many aspects of normal life. An important part of reducing the impact of the pandemic is controlling its spread, and studies have shown that wearing masks is one effective way to reduce the transmission of COVID-19. Strict mask-wearing policies, however, have met with practical difficulty as well as public attention: we cannot hope to manually check whether everyone on a street is wearing a mask properly. Existing technology for automating mask checking applies deep learning models to real-time surveillance camera footage. The currently dominant method for real-time mask detection uses Mask R-CNN with a ResNet backbone. While it gives good detection results, this method is computationally intensive, and its efficiency in real-time face mask detection is not ideal. Our research proposes a new approach to mask detection that replaces Mask R-CNN with the more efficient YOLO model to increase the processing speed of real-time mask detection without compromising accuracy. In addition, given the small volume and extreme class imbalance of mask detection datasets, we adopt a recent advance in few-shot visual classification, Simple CNAPS, to improve classification performance.
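As a sketch of the detection post-processing such a system needs (not the authors' YOLO configuration), the following filters a placeholder detector's mask/no-mask outputs with confidence thresholding and greedy non-maximum suppression:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def check_frame(frame, detect, conf_thr=0.5, iou_thr=0.45):
    """Flag faces detected without a mask in one surveillance frame.

    detect(frame) -> list of (box, score, label) with label in {"mask", "no_mask"};
    this is a placeholder for a YOLO-style detector, not the authors' model.
    """
    dets = sorted((d for d in detect(frame) if d[1] >= conf_thr),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score, label in dets:          # greedy non-maximum suppression
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score, label))
    return [d for d in kept if d[2] == "no_mask"]
```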
Citations: 27
Video Quality Assessment by Sparse Representation and Dynamic Atom Classification
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386597
Zihui Zhang, Zongyao Hu
Observing that not all dictionary atoms are closely related to the degradation of the visual signal, we design a distortion-sensitivity-guided Dynamic Atom Classification (DAC) strategy to separate the distorted signal. We then propose a novel DAC-based full-reference video quality assessment (VQA) method. The method includes two parts: spatial quality evaluation and temporal quality evaluation. Spatially, we train a distortion-aware dictionary, obtain sparse representations of video patches, and dynamically classify the activated dictionary atoms. Each frame is separated into difference and basic components, and spatial similarity is aggregated from the component similarities. Temporally, we compute the gradient similarity of frame differences to capture motion information. The experimental results indicate the effectiveness of the proposed algorithm compared with state-of-the-art VQA methods.
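A small sketch of the temporal term, computing the gradient similarity of frame differences between the reference and distorted videos; the stability constant and the mean pooling are assumptions, not the paper's exact formulation:

```python
import numpy as np

def gradient_similarity(prev_ref, cur_ref, prev_dist, cur_dist, c=1e-3):
    """Gradient similarity of frame differences, as a simple temporal-quality cue."""
    to_f = lambda a: np.asarray(a, dtype=np.float64)

    def grad_mag(frame):
        gy, gx = np.gradient(frame)            # 2-D gradient of a grayscale frame
        return np.hypot(gx, gy)

    g_ref = grad_mag(to_f(cur_ref) - to_f(prev_ref))     # motion in the reference video
    g_dist = grad_mag(to_f(cur_dist) - to_f(prev_dist))  # motion in the distorted video
    sim = (2 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
    return float(sim.mean())                   # pooled temporal similarity score
```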
Citations: 0
Research on RFID Virtual Tag Location Algorithm Based on Monte Carlo
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386426
Tingliang Guan, Ding Wang, Yixin Su
To address the large positioning error and poor boundary performance of traditional RFID virtual-tag positioning algorithms, this paper compares the influence of four different interpolation methods for virtual reference tag signal strength on positioning accuracy and proposes an RFID virtual-tag positioning algorithm based on Monte Carlo sampling. The algorithm uses dynamic particles in place of the traditional static reference tags, introduces particle swarm optimization to update the Monte Carlo sample particle swarm, assigns different weights to the sampled particles based on the signal strength difference between each particle and the tag to be located, and finally completes the localization of the unknown tag through Monte Carlo resampling. Simulation results show that, compared with traditional virtual-tag positioning algorithms, the algorithm effectively improves the accuracy and stability of the RFID positioning system.
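A simplified sketch of Monte Carlo localization from RSSI readings (sample, weight by signal-strength difference, resample); the particle swarm optimization update described in the abstract is omitted, and the path-loss model is a placeholder:

```python
import numpy as np

def monte_carlo_locate(readers, target_rssi, rssi_model, n_particles=2000,
                       n_iters=20, area=(0.0, 10.0), rng=None):
    """Monte Carlo localization of an RFID tag from its RSSI at several readers.

    readers     -- (M, 2) array of reader positions
    target_rssi -- (M,) RSSI of the unknown tag at each reader
    rssi_model  -- rssi_model(distances) -> predicted RSSI (placeholder path-loss model)
    """
    rng = rng or np.random.default_rng()
    lo, hi = area
    particles = rng.uniform(lo, hi, size=(n_particles, 2))   # dynamic "virtual tags"
    for _ in range(n_iters):
        d = np.linalg.norm(particles[:, None, :] - readers[None, :, :], axis=2)
        err = np.linalg.norm(rssi_model(d) - target_rssi, axis=1)
        w = np.exp(-err)                       # smaller RSSI difference -> larger weight
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # Monte Carlo resampling
        particles = particles[idx] + rng.normal(0, 0.05, size=particles.shape)
    return particles.mean(axis=0)              # cloud center as the position estimate
```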
Citations: 2
A Synchronization Optimization Technique for OpenMP
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386475
Zhaochu Deng, Jianjiang Li, Jie Lin
In recent years, even though chip density can still increase, it has become difficult to raise the clock frequency, and performance improvements for single processors may be close to their limits, so multi-core processors have come onto the scene. To make full use of multi-core platforms, we must find the inherent parallelism of programs and write programs that can execute in parallel. The OpenMP standard is widely used in parallel programming because of its good portability and ease of use. OpenMP programs generated by parallelizing compilers, and OpenMP programs that exploit only simple parallelism, are typically under-optimized. In OpenMP parallel execution, synchronization control is one of the main overheads, and unnecessary barrier synchronization reduces the performance of a parallel program. This paper discusses an optimization technique for OpenMP programs. First, the parallel regions of an OpenMP program are merged and expanded to reduce the overhead of switching between parallel and serial execution, which also makes the subsequent optimization steps easier. Then the implicit synchronization in the OpenMP program is made explicit. Finally, data-dependence analysis is carried out on the context of each synchronization point, and unnecessary synchronizations are deleted. The test programs are evaluated using running time as the performance metric. The experimental results show that the proposed optimization strategy correctly reconstructs the parallel regions and reduces their execution overhead; it reduces the number of redundant synchronizations and effectively improves the performance of OpenMP programs.
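A toy, variable-level illustration of the final step, deciding which implicit barriers between consecutive parallel loops are redundant via a data-dependence test; a real compiler pass would reason about array sections and loop schedules, and the dictionary representation here is purely illustrative:

```python
def removable_barriers(loops):
    """Decide which barriers between consecutive parallel loops are redundant.

    loops -- list of dicts like {"name": "L1", "reads": {"a"}, "writes": {"b"}}
    A barrier can be dropped (e.g. with nowait) only when no flow, anti, or
    output dependence exists between the two loops it separates.
    """
    removable = []
    for prev, nxt in zip(loops, loops[1:]):
        flow = prev["writes"] & nxt["reads"]       # true (flow) dependence
        anti = prev["reads"] & nxt["writes"]       # anti dependence
        output = prev["writes"] & nxt["writes"]    # output dependence
        if not (flow or anti or output):
            removable.append((prev["name"], nxt["name"]))
    return removable

# usage: the barrier between L1 and L2 is redundant, the one between L2 and L3 is not
loops = [{"name": "L1", "reads": {"a"}, "writes": {"b"}},
         {"name": "L2", "reads": {"c"}, "writes": {"d"}},
         {"name": "L3", "reads": {"d"}, "writes": {"e"}}]
print(removable_barriers(loops))   # [('L1', 'L2')]
```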
Citations: 2
Two Stage Operation Optimization of Active Distribution Network Based on IMOHS Algorithm
Pub Date : 2021-01-05 DOI: 10.1109/ICCRD51685.2021.9386491
Xueshun Ye, Kaiyuan He, Tianyuan Kang, Muke Bai
To solve the optimal dispatching problem among sources, the network, and flexible loads, and to improve the safe and stable operation of the distribution network, this paper proposes a two-stage multi-objective control method for distribution networks. The first stage is day-ahead scheduling, whose control objectives are source-load balance and minimizing the longest feeder path in the distribution network. The second stage is hourly control, which uses static voltage stability margin and active power loss as control objectives to effectively improve the coordination, safety, and economy of the distribution network. An improved multi-objective harmony search algorithm is proposed to solve this two-stage multi-objective optimization model. Finally, the effectiveness of the proposed two-stage multi-objective control method is verified on an IEEE 33-bus system with distributed generation.
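A basic, single-objective harmony search sketch to illustrate the metaheuristic the improved multi-objective algorithm builds on; the Pareto-archive extension and the distribution-network model itself are not reproduced here:

```python
import numpy as np

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   n_iters=500, rng=None):
    """Basic harmony search over a box-constrained decision vector (minimization)."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    memory = rng.uniform(lo, hi, size=(hms, dim))          # harmony memory
    scores = np.array([objective(x) for x in memory])
    for _ in range(n_iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                        # consider harmony memory
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                     # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                          # random re-initialization
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        s = objective(new)
        worst = int(np.argmax(scores))
        if s < scores[worst]:                              # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = int(np.argmin(scores))
    return memory[best], scores[best]
```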
Citations: 2