
Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision: Latest Publications

Computation Offloading for Better Real-Time Technical Market Analysis on Mobile Devices
Gufeng Shen
Computation offloading is a forward-looking technique that has not yet been deployed at large scale, but it is a useful tool for the growing computing requirements of mobile devices. Trading apps such as TradingView and Futu tend either to provide full functionality for running real-time scripts, such as technical-indicator variants or autonomous trading strategies, which dramatically increases the scale of computation, or to provide only limited functionality. Current solutions either degrade the responsiveness of the mobile device or rely on cloud computing, which introduces more latency than 5G Mobile Edge Computing (MEC) units. This paper proposes a comparison between computing locally and computing on MEC units, together with a method to evaluate the acceleration rate of offloaded computation. The results indicate a suitable criterion for offloading computation to MEC units and show that, in some situations, real-time scripts can be processed on the fog layer. It can be concluded that the proposed method reduces the latency of the whole trading system.
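The local-versus-MEC comparison and the offloaded acceleration rate mentioned above can be illustrated with a minimal latency model. This is only a sketch under assumed names and a simplified formulation (local time = CPU cycles / local clock; offloaded time = uplink transfer + MEC execution); it is not the paper's actual evaluation method.

```python
def local_latency(cycles: float, f_local_hz: float) -> float:
    """Time to run the script on the handset: CPU cycles / local clock speed."""
    return cycles / f_local_hz

def offload_latency(cycles: float, f_mec_hz: float,
                    data_bits: float, uplink_bps: float) -> float:
    """Time to ship the input over the 5G uplink plus execution on the MEC unit."""
    return data_bits / uplink_bps + cycles / f_mec_hz

def acceleration_rate(cycles, f_local_hz, f_mec_hz, data_bits, uplink_bps):
    """Speed-up of offloading relative to local execution (>1 favours the MEC unit)."""
    return local_latency(cycles, f_local_hz) / offload_latency(
        cycles, f_mec_hz, data_bits, uplink_bps)

# Example: a 2 GHz phone vs. a 10 GHz-equivalent MEC node over a 100 Mbit/s uplink.
rate = acceleration_rate(cycles=5e9, f_local_hz=2e9, f_mec_hz=1e10,
                         data_bits=8e6, uplink_bps=1e8)
print(f"offload if rate > 1: rate = {rate:.2f}")
```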
{"title":"Computation Offloading for Better Real-Time Technical Market Analysis on Mobile Devices","authors":"Gufeng Shen","doi":"10.1145/3469951.3469964","DOIUrl":"https://doi.org/10.1145/3469951.3469964","url":null,"abstract":"∗Computation offloading is currently future-oriented, which has not been large-range deployed. However, it is a useful tool for the growing computing requirements for mobile devices. Now trading apps, such as TradingView and Futu, tend to provide either the full functionality to run real-time scripts like variants of technical, or autonomous trading strategies, turning out to increase computation scale dramatically or providing just limited functionalities. Current solutions either degrade responsibility of the mobile devices or use cloud computing, which produces more latency compared to using 5GMobile Edge Computing (MEC) units. This paper proposes a novel comparison of computing locally (or on MEC units) and a method to evaluate the offloaded acceleration rate. The result shows the suitable measure to offload computation to MEC units. In addition, it also shows that it is possible to process real-time scripts on the fog layer in some situations. It can be concluded that the proposed method reduces the latency of the whole trading system.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"41 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114093128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on UAV Signal Classification Algorithm Based on Deep Learning
Yunsong Zhao
With the continuous development of Unmanned Aerial Vehicle (UAV) technology and its industry, UAV detection and recognition technologies have attracted the attention of researchers. This paper focuses on the defects and deficiencies of traditional radar-based, visual, and acoustic UAV detection technologies. Considering that a UAV's own radio communication signal can be used for detection, a UAV signal classification method based on deep learning is proposed. The algorithm extracts the characteristic patterns of UAV communication signals so as to classify the target. The experimental results show that the average recognition rate over UAV types is 95% in the test, and the recognition rate for most UAV types exceeds 98%. In addition, the classification rate for UAV flight attitudes reaches more than 95%. Therefore, it can be concluded that the classification algorithm designed in this paper can effectively meet the needs of UAV detection and recognition in real scenes.
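The abstract does not describe the network architecture, so the following is only a hedged illustration of what a deep-learning classifier over fixed-length radio (I/Q) samples could look like; the PyTorch model, the two-channel input convention, and the class count are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SignalCNN(nn.Module):
    """Toy 1-D CNN over raw I/Q samples (2 input channels); not the paper's model."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # global pooling -> fixed-size vector
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 2, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

# A batch of 4 signals, 1024 samples each, as 2 real channels (I and Q).
logits = SignalCNN(n_classes=8)(torch.randn(4, 2, 1024))
print(logits.shape)                             # torch.Size([4, 8])
```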
{"title":"Research on UAV Signal Classification Algorithm Based on Deep Learning","authors":"Yunsong Zhao","doi":"10.1145/3469951.3469956","DOIUrl":"https://doi.org/10.1145/3469951.3469956","url":null,"abstract":"∗With the continuous development of Unmanned Aerial Vehicle (UAV) technology and its industry, the detection and recognition technology of UAV have attracted the attention of researchers. In this paper, the author focuses on the defects and deficiencies of traditional radar, visual and acoustic UAV detection technology. Considering that the UAV’s own radio communication signal can be used for detection, a UAV signal classification method based on deep learning is proposed. This algorithm can extract the characteristics of UAV Communication Law, so as to achieve the target classification. The experimental results show that the average recognition rate of UAV is 95% in the test, and the recognition rate of most types of UAVs is more than 98%. In addition, the classification rate for the flight attitudes of UAVs can reach more than 95%. Therefore, it can be concluded that the classification algorithm designed in this paper can effectively meet the needs of UAV detection and recognition in the actual scene.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115443710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clothing Image Retrieval Based on Parts Detection and Segmentation
Qiubo Huang, X. Han, Ting Lu, Guohua Liu
With the rapid development of e-commerce, more and more users are buying clothes through the Internet, and "image search" for clothing images has become a popular research direction. Current "image search" technology relies mainly on features extracted from the whole image and cannot focus on the parts of a garment; moreover, the background of a clothing image is generally complex, leading to low retrieval accuracy. We therefore propose a retrieval method based on clothing image detection and segmentation. First, Mask R-CNN is used to detect and segment the image to obtain the garment body, the collar parts, the sleeve category, and the pocket positions. VGG16 is then used to extract 512-dimensional features from the garment body and the collar parts. Based on this information, the similarity between the query garment and each garment in the database is computed one by one, by weighting the cosine similarity of the 512-dimensional body and collar features together with the similarity of the sleeves and pockets. The search results are presented to the user in descending order of similarity. The experimental results show that the method can focus on the whole garment as well as its parts, enabling retrieval based on garment style. It also allows users to adjust the weight of each part and returns the search results that best meet their individual needs.
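The weighted part-based similarity described above can be sketched as follows. The weights, the dictionary layout, and the helper names (cosine, garment_similarity) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (e.g. 512-d VGG16 features)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def garment_similarity(query, candidate, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted combination of body/collar feature similarity and sleeve/pocket agreement.

    `query` and `candidate` are dicts with keys:
      'body', 'collar'   : 512-d numpy vectors (from the segmented parts)
      'sleeve', 'pockets': categorical labels (sleeve type, pocket count)
    The weights are illustrative and user-adjustable, as the abstract describes.
    """
    w_body, w_collar, w_sleeve, w_pocket = weights
    score = w_body * cosine(query['body'], candidate['body'])
    score += w_collar * cosine(query['collar'], candidate['collar'])
    score += w_sleeve * (1.0 if query['sleeve'] == candidate['sleeve'] else 0.0)
    score += w_pocket * (1.0 if query['pockets'] == candidate['pockets'] else 0.0)
    return score

# Rank a small synthetic database by similarity to the query, highest first.
rng = np.random.default_rng(0)
query = {'body': rng.normal(size=512), 'collar': rng.normal(size=512),
         'sleeve': 'long', 'pockets': 2}
db = [{'body': rng.normal(size=512), 'collar': rng.normal(size=512),
       'sleeve': 'long', 'pockets': 2} for _ in range(5)]
ranked = sorted(db, key=lambda g: garment_similarity(query, g), reverse=True)
print(round(garment_similarity(query, ranked[0]), 3))
```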
{"title":"Clothing Image Retrieval Based on Parts Detection and Segmentation","authors":"Qiubo Huang, X. Han, Ting Lu, Guohua Liu","doi":"10.1145/3469951.3469961","DOIUrl":"https://doi.org/10.1145/3469951.3469961","url":null,"abstract":"With the rapid development of E-commerce, more and more users are buying clothes through the Internet, and \"image search\" for clothing images has become a popular research direction. The current \"image search\" technology mainly relies on the results of feature extraction of the whole image, but cannot focus on the parts of the clothing, and the background of the clothing image is generally complex, resulting in low accuracy of clothing image retrieval, so we propose a retrieval method based on clothing image detection and segmentation. Firstly, Mask R-CNN is used to detect and segment the image to get the information of garment body, collar parts, sleeve category and pocket positions, then VGG16 is used to extract 512-dimensional features from the garment body and collar parts, based on this information, the similarity between the garment to be retrieved and the garment in the database is calculated one by one. We calculate the similarity by weighting the cosine similarity of 512-dimensional features of the garment body and collar, as well as the similarity of the sleeves and pockets. The search results are presented to the user according to the descending order of similarity. The experimental results show that the method can focus on the whole garment as well as their parts, thus enabling retrieval based on garment style. It also allows users to adjust the weights of each part and can return the search results that best meet their individual needs","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123967839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Multilevel Thresholding Approach for Acne Detection in Medical Treatment
Nguyen Pham Nguyen Xuan, Tham Tran Thi, Thang DO Minh, Duy Tran Ngoc Bao
In the quantitative assessment of treatment success, the automatic detection of acne pixels in digital color images would be helpful. In this paper, we propose an automatic acne detection method based on image processing of facial photographs taken with a smartphone. In this approach, the RGB image is transformed into various color spaces according to the differences between the features of each acne lesion type. The method uses the a* channel of the CIELab color space to detect inflammatory acne (papules and pustules), and the S channel of the HSV color space to detect non-inflammatory acne (whiteheads and blackheads). A multi-level threshold is then applied for acne extraction and blob detection. The effectiveness of the proposed procedure is shown by experimental results: by combining several color spaces, we detect four types of acne lesions (whiteheads, blackheads, papules, pustules) across different skin colors and different smartphones. The result shows a recall of about 85.71% in detecting the different acne types at a reasonable processing time. This is the premise for helping doctors assess the level of acne on a patient's face in an effective and time-saving way.
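A minimal sketch of the color-space conversions and thresholding described above, assuming OpenCV; the threshold levels are placeholder values, not the paper's tuned multi-level thresholds, and the input filename is hypothetical.

```python
import cv2
import numpy as np

def acne_masks(bgr: np.ndarray):
    """Channel extraction and thresholding in the spirit of the abstract.

    The a* channel of CIELab highlights reddish (inflammatory) lesions and the
    S channel of HSV highlights comedones; the threshold values here are
    illustrative placeholders.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    a_channel = lab[:, :, 1]          # redness
    s_channel = hsv[:, :, 1]          # saturation

    # Binary thresholding on each channel (placeholder levels).
    _, inflammatory = cv2.threshold(a_channel, 150, 255, cv2.THRESH_BINARY)
    _, comedones = cv2.threshold(s_channel, 90, 255, cv2.THRESH_BINARY)

    # Simple blob extraction via connected components on the inflammatory mask.
    n_labels, labels = cv2.connectedComponents(inflammatory)
    return inflammatory, comedones, n_labels - 1   # exclude the background label

if __name__ == "__main__":
    img = cv2.imread("face.jpg")                   # hypothetical input image
    infl, come, n = acne_masks(img)
    print(f"{n} candidate inflammatory blobs")
```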
{"title":"A Multilevel Thresholding Approach for Acne Detection in Medical Treatment","authors":"Nguyen Pham Nguyen Xuan, Tham Tran Thi, Thang DO Minh, Duy Tran Ngoc Bao","doi":"10.1145/3469951.3469955","DOIUrl":"https://doi.org/10.1145/3469951.3469955","url":null,"abstract":"In the quantitative assessment on the success of treatment, the automatic detection of acne pixels from digital color images would be helpful. In this paper, we proposed an automatic acne detection method through the processing of facial images taken by the smartphone based on the image processing. In this approach, the RGB image is transformed into various color spaces based on the differences between features of each acne lesion type. This method has been used the a* channel of the CIELab color space to detect the inflammatory acne (papules and pustules). The S channel of HSV color space was used to detect the non-inflammatory acne (whiteheads and blackheads). A multi-level threshold is then used to make acne extraction and blob detection. The effectiveness of the proposed procedure is shown by experimental results. We showed the possibility of detecting 4 types of acne lesions (whiteheads, blackheads, papules, pustules) with different skin colors and different smartphones in this experiment by applying a combination of several color spaces. The result shows a recall of about 85.71% in detecting different acne types at a reasonable processing time. This is the remise to help doctors to assess the level of acne on the patient's face in an effective and time-saving way.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125743233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Improving Car Point-Cloud Tracking Via Detection Updates
Yashar Deldjoo, Tommaso Di Noia, Eugenio Di Sciascio, Gaetano Pernisco, V. Renó, E. Stella
Most autonomous driving applications leverage RGB images of the surrounding environment, which contain useful appearance features but come at a cost in terms of geometric features. On the other hand, 3D point clouds generated by LIDAR sensors provide richer geometric 3D information with high accuracy and robustness, but at a loss of appearance features. Regardless of the adopted technology, object tracking in autonomous driving scenarios suffers from the so-called error drift that accumulates when detecting objects over time and frames. This work investigates the car tracking problem in an urban scenario, leveraging 3D point clouds. In particular, our goal is to mitigate the typical error drift that characterizes classic tracking algorithms, and to this aim we propose a system able to reduce the drift error through detection updates. An extensive experimental evaluation on the KITTI dataset shows the improvement of our solution's performance compared to state-of-the-art approaches.
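As a rough illustration of "tracking via detection updates", the sketch below runs a toy constant-velocity predictor over object centroids and blends the prediction back toward a detection whenever one is available; the blending gain and the centroid-only state are assumptions, and this is not the paper's tracker.

```python
import numpy as np

class CentroidTrack:
    """Toy constant-velocity tracker over 3-D object centroids.

    Prediction alone accumulates drift; whenever a fresh detection is available
    the state is blended back toward it, which is the general idea of correcting
    a track with detection updates. The gain value is an assumption.
    """
    def __init__(self, position, gain=0.6):
        self.position = np.asarray(position, dtype=float)   # (x, y, z) centroid
        self.velocity = np.zeros(3)
        self.gain = gain                                     # trust in detections

    def predict(self):
        self.position = self.position + self.velocity
        return self.position

    def update(self, detection):
        """Correct the predicted state with a detected centroid, if one exists."""
        if detection is not None:
            residual = np.asarray(detection, dtype=float) - self.position
            self.position = self.position + self.gain * residual
            self.velocity = self.velocity + self.gain * residual
        return self.position

# One predict/update cycle per frame; frames without detections just predict.
track = CentroidTrack([10.0, 2.0, 0.0])
for det in [[10.5, 2.1, 0.0], None, [11.6, 2.3, 0.0]]:
    track.predict()
    print(track.update(det))
```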
{"title":"Towards Improving Car Point-Cloud Tracking Via Detection Updates","authors":"Yashar Deldjoo, Tommaso Di Noia, Eugenio Di Sciascio, Gaetano Pernisco, V. Renó, E. Stella","doi":"10.1145/3469951.3469957","DOIUrl":"https://doi.org/10.1145/3469951.3469957","url":null,"abstract":"Most autonomous driving applications leverage RGB images representing the surrounding environment that contain useful appearance features but with a cost in terms of geometric features. On the other side, 3D point clouds generated by LIDAR sensors can provide more geometric 3D information with high accuracy and robustness but with a loss on appearance features. Regardless of the adopted technology, object tracking in autonomous driving scenarios suffers from the so-called error drift in detecting objects over time/frames. This work investigates the car tracking problem in an urban scenario, leveraging 3D point clouds. In particular, we have set our goal to mitigate the typical error drift that characterizes the classic tracking algorithm and, to this aim, proposed a system able to reduce the drift error by detection. An extensive experimental evaluation on the KITTI dataset shows the improvement in our solution's performance compared to state-of-the-art approaches.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123003926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Two Stream Pose Guided Network for Vehicle Re-identification
Saifullah Tumrani, Parivish Parivish, A. Khan, Wazir Ali
Vehicle re-identification is the task of finding images of the same vehicle from different views across a surveillance camera network, which is a very beneficial yet challenging task. Huge intra-class differences and small inter-class differences make this task hard to tackle. Appearance-based information is utilized in this paper to cope with the vehicle re-identification problem: we propose a deep learning technique that incorporates vehicle poses generated by a pose estimation network together with visual information. Given a query image, the two-stream network generates a feature embedding by concatenating the pose features from the pose network with the appearance features. Extensive experiments are conducted on two benchmark datasets for vehicle re-identification, VeRi-776 and VehicleID. The experimental results support the competitiveness of the proposed method against recent state-of-the-art methods.
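The two-stream concatenation idea can be sketched as follows, assuming PyTorch; the encoder shapes, feature dimensions, and the LazyLinear stand-ins for real backbones are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoStreamEmbedding(nn.Module):
    """Sketch of a two-stream embedding: appearance and pose features are
    produced by separate encoders and concatenated into one descriptor."""
    def __init__(self, app_dim=512, pose_dim=128, out_dim=256):
        super().__init__()
        self.appearance = nn.Sequential(           # stand-in for a CNN backbone
            nn.Flatten(), nn.LazyLinear(app_dim), nn.ReLU())
        self.pose = nn.Sequential(                 # encodes keypoint coordinates
            nn.Flatten(), nn.LazyLinear(pose_dim), nn.ReLU())
        self.head = nn.Linear(app_dim + pose_dim, out_dim)

    def forward(self, image, keypoints):
        fused = torch.cat([self.appearance(image), self.pose(keypoints)], dim=1)
        return self.head(fused)                    # embedding used for re-ID matching

model = TwoStreamEmbedding()
emb = model(torch.randn(2, 3, 128, 128),           # a pair of vehicle crops
            torch.randn(2, 20, 2))                 # 20 assumed 2-D keypoints each
print(emb.shape)                                   # torch.Size([2, 256])
```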
{"title":"Two Stream Pose Guided Network for Vehicle Re-identification","authors":"Saifullah Tumrani, Parivish Parivish, A. Khan, Wazir Ali","doi":"10.1145/3469951.3469954","DOIUrl":"https://doi.org/10.1145/3469951.3469954","url":null,"abstract":"Vehicle Re-Identification is the task of finding images of the same vehicle with different views across a surveillance camera network, which is a very beneficial yet challenging task. Huge intra-class differences and small inter-class difference makes this task hard to tackle. Appearance-based information is utilized in this paper to cope with vehicle re-identification problem; we have proposed a deep learning technique by incorporating poses of vehicles generated by pose estimation network and visual information. When query image is given, the two-stream network generates a feature embedding by concatenating pose feature from pose network. Extensive experiments are done on two of the benchmark datasets of vehicle re-identification VeRi-776 and VehicleID. Experimental results are supporting the competitiveness of the proposed method with recent state-of-the-art methods.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123215852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Reducing the Annotation Cost of Whole Slide Histology Images using Active Learning
Xu Jin, Hong An, Jue Wang, Ke Wen, Zheng Wu
Histopathology serves as the gold standard for tumor diagnosis. Whole slide scanners have made computer vision-based methods available for pathologists to locate regions of high diagnostic significance. An essential step of whole slide image (WSI) diagnosis is the segmentation of the tumor region by generating a tumor probability heatmap. Most WSI diagnosis methods use patch-based classifiers or segmentation models, both of which require a large set of training patches from annotated WSIs, and annotating WSIs is time-consuming and laborious. Active learning can suggest the most informative unlabeled data for annotation, but traditional active learning methods are not directly applicable to WSIs. Meanwhile, unannotated WSIs contain rich information that can be further exploited by self-supervised learning. By utilizing unannotated data alongside active learning, we propose a self-supervised active learning framework for tumor region segmentation of WSIs. The proposed method is evaluated on the publicly available CAMELYON dataset and achieves satisfactory performance using only 3% of the annotated data.
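A generic uncertainty-sampling loop gives the flavor of active learning on pre-extracted patch features; this is not the paper's self-supervised framework, and the classifier, entropy criterion, and synthetic data below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per sample; higher means more informative to label."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def active_learning_loop(X_pool, y_pool, n_rounds=5, batch=20, seed_size=20):
    """Minimal uncertainty-sampling loop over pre-extracted patch features:
    train on the labelled set, score the unlabelled pool by entropy, and move
    the most uncertain patches into the labelled set each round."""
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(X_pool), size=seed_size, replace=False))
    unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labelled], y_pool[labelled])
        scores = entropy(clf.predict_proba(X_pool[unlabelled]))
        picked = np.argsort(scores)[-batch:]              # most uncertain patches
        newly = [unlabelled[i] for i in picked]
        labelled += newly                                  # "annotate" them
        unlabelled = [i for i in unlabelled if i not in newly]
    return labelled

# Synthetic stand-in for patch features and labels.
X = np.random.randn(500, 64)
y = (X[:, 0] > 0).astype(int)
print(len(active_learning_loop(X, y)), "patches labelled in total")
```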
{"title":"Reducing the Annotation Cost of Whole Slide Histology Images using Active Learning","authors":"Xu Jin, Hong An, Jue Wang, Ke Wen, Zheng Wu","doi":"10.1145/3469951.3469960","DOIUrl":"https://doi.org/10.1145/3469951.3469960","url":null,"abstract":"Histopathology serves as the gold standard for tumor diagnosis. Whole slide scanners have made computer vision-based methods available for pathologists to locate regions of high diagnostic significance. An essential step of whole slide image (WSI) diagnosis is the segmentation of the tumor region by generating a tumor probability heatmap. Most WSI diagnosis methods use patch-based classifiers or segmentation models, they both require a large set of training patches from annotated WSIs. Annotating WSIs is time-consuming and laborious. Active learning is a method that can suggest the most informative unlabeled data for annotation, but traditional active learning methods are not directly applicable for WSIs. Meanwhile, unannotated WSIs also contain rich information that can be further exploited by self-supervised learning. By utilizing unannotated data alongside active learning, we proposed a self-supervised active learning framework for tumor region segmentation of WSIs. The proposed method is evaluated on the public available CAMELYON dataset and achieved satisfying performance using 3% of the annotated data.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123294822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A Real-Time Single-Shot Multi-Face Detection, Landmark Localization, and Gender Classification
T. Shen, D. Wang, Kayton Wai Keung Cheung, M. C. Chan, King Hung Chiu, Yiu Kei Li
Face detection and gender classification by deep neural networks find application in areas such as video surveillance, customized advertisement, and human-computer interaction. This paper presents a real-time single-shot multi-face gender detector based on a convolutional neural network (CNN). The proposed method not only detects faces but also classifies the gender of persons in the wild, that is, in images with high variability in pose, illumination, and occlusion. To train and evaluate the model, a new annotated set of face images is created. Our experimental results show that the proposed method achieves excellent performance in terms of speed and accuracy.
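A single-shot multi-task head can be sketched as follows: every location of a shared feature map predicts a face box, landmarks, a face/background score, and a gender score. The channel counts, anchor count, and five-landmark convention are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Illustrative single-shot head over a shared feature map: per anchor it
    regresses a face box and 5 landmarks and classifies face/background and
    gender. All sizes here are assumptions for the sketch."""
    def __init__(self, in_ch=256, n_anchors=2):
        super().__init__()
        self.box = nn.Conv2d(in_ch, n_anchors * 4, kernel_size=1)         # x, y, w, h
        self.landmarks = nn.Conv2d(in_ch, n_anchors * 10, kernel_size=1)  # 5 (x, y) points
        self.face_cls = nn.Conv2d(in_ch, n_anchors * 2, kernel_size=1)    # face / background
        self.gender = nn.Conv2d(in_ch, n_anchors * 2, kernel_size=1)      # gender logits

    def forward(self, fmap):
        return (self.box(fmap), self.landmarks(fmap),
                self.face_cls(fmap), self.gender(fmap))

# A 20x20 feature map from some backbone; each output keeps the spatial grid.
outs = MultiTaskHead()(torch.randn(1, 256, 20, 20))
print([o.shape for o in outs])
```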
{"title":"A Real-Time Single-Shot Multi-Face Detection, Landmark Localization, and Gender Classification","authors":"T. Shen, D. Wang, Kayton Wai Keung Cheung, M. C. Chan, King Hung Chiu, Yiu Kei Li","doi":"10.1145/3469951.3469952","DOIUrl":"https://doi.org/10.1145/3469951.3469952","url":null,"abstract":"Face detection and gender classification by Deep Neural Networks can find application in areas such as video surveillance, customized advertisement, and human-computer interaction. This paper presents a real-time single-shot multi-face gender detector based on Convolutional neural network (CNN). The proposed method not only detects face but also classifies the gender of persons in the wild, meaning in images with a high variability in pose, illumination, and occlusion. To train and evaluate the results, a new annotated set of face images is created. Our experimental results show that the proposed method achieves excellent performance in term of speed and accuracy.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116319306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Integration of Machine Learning with MEC for Intelligent Applications
Zhou Ye
In recent years, telecom operators and large companies have been eager to obtain value from the edge of the network, and the priority of cloud computing has shifted from the center to the edge. In addition, with the comprehensive deployment of 5G base stations (BS), the number of 5G users has increased substantially, and these users expect a better experience of high bandwidth and low latency. Thus, Mobile Edge Computing (MEC) came into being. MEC brings computing capability from the center of the mobile network to its edge: requests and data from user equipment (UE) are analyzed and processed at the edge without being uploaded to the cloud center, which reduces latency efficiently. Moreover, with the help of machine learning, MEC can achieve better performance. This paper studies the advantages of MEC itself, the integration of machine learning with MEC, and the intelligent applications they will bring. It first discusses the concept and architecture of MEC and lists its advantages; it then introduces the improvements that integrating machine learning brings to MEC and the intelligent applications that employ these technologies; finally, it discusses the deficiencies and future research trends of MEC. It can be concluded that MEC improves performance in terms of speed, security and privacy, energy saving, and reliability, and that integrating machine learning with MEC can provide better resource management and offloading decisions.
{"title":"Integration of Machine Learning with MEC for Intelligent Applications","authors":"Zhou Ye","doi":"10.1145/3469951.3469966","DOIUrl":"https://doi.org/10.1145/3469951.3469966","url":null,"abstract":"∗In recent years, telecom operators and large companies are eager to obtain value from the edge of the network, and the priority of cloud computing has been transferred from the center to the edge. In addition, with the comprehensive deployment of 5G base station (BS), the number of 5G users has been largely increased. For 5G users, they expect to have a better experience of high bandwidth and low latency. Thus, the Mobile Edge Computing (MEC) came into being. MEC brings the capability from the center to the edge of the mobile network. Requests and data of User equipment (UE) has been underlined in MEC. These requests and data will be analyzed and disposed at the edge without being uploaded to the cloud center, which diminishes the latency efficiently. Besides, with the help of machine learning, MEC can show a better performance. This paper is aimed at studying superiorities of MEC itself and integration of machine learning with MEC, and intelligent applications they will bring. This paper first discusses the concept and architecture of MEC, then the advantages of MEC are listed. Next, the improvements of integration of machine learning with MEC and the intelligent applications which employ these technologies will be introduced. Finally, the deficiencies and future research trend of MEC will be discussed. After that, conclusion can be drought that MEC augment the performance of speed, security and privacy, energy saving and reliability. Furthermore, integration of machine learning with MEC can provide better resource management and offloading decision.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115837346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of Feature Selection Methods on Arrhythmia Dataset
Liu Ziheng
Cardiac arrhythmia is a common sign of heart disease. In modern society, heart disease remains one of the main diseases threatening human health, and medical instruments collect related attributes to support better diagnostic prediction. This paper applies different feature selection methods, including filters and wrappers, combined with machine learning methods (SVM, Naive Bayes, Random Forest, C4.5) on the arrhythmia dataset to compare their performance. Results show that both filters and wrappers perform well, while filters cost less time. Among them, Random Forest with the wrapper method achieves the highest accuracy.
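The filter-versus-wrapper comparison can be illustrated with scikit-learn, using SelectKBest as the filter and RFE around a Random Forest as the wrapper; the synthetic data, the number of selected features, and the cross-validation setup are illustrative assumptions, not the paper's experimental protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the high-dimensional arrhythmia data.
X, y = make_classification(n_samples=400, n_features=100, n_informative=15,
                           random_state=0)

# Filter: rank features by an ANOVA F-score, independently of the classifier.
filter_pipe = make_pipeline(SelectKBest(score_func=f_classif, k=30),
                            RandomForestClassifier(random_state=0))

# Wrapper: recursively eliminate features using the classifier's own importances.
wrapper_pipe = make_pipeline(RFE(RandomForestClassifier(random_state=0),
                                 n_features_to_select=30, step=5),
                             RandomForestClassifier(random_state=0))

for name, pipe in [("filter (SelectKBest)", filter_pipe),
                   ("wrapper (RFE)", wrapper_pipe)]:
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:22s} accuracy = {acc:.3f}")
```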
{"title":"Comparison of Feature Selection Methods on Arrhythmia Dataset","authors":"Liu Ziheng","doi":"10.1145/3469951.3469963","DOIUrl":"https://doi.org/10.1145/3469951.3469963","url":null,"abstract":"Cardiac arrhythmia is a common sign of heart disease. In modern society, heart disease is always one of the main diseases threatening human health. Medical instruments collect related attributes to make better diagnosis prediction of the disease. This paper applies different feature selection methods including filters and wrappers combining with machine learning methods (SVM, Naive Bayes, Random Forest, C4.5) on the arrhythmia dataset to compare their performances. Results show that filters and wrappers perform both well while filters cost less time. Among them, random forest with the wrapper method has the highest accuracy.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123342859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0