
Latest Publications in Computer Science Research Notes

Stylized Sketch Generation using Convolutional Networks
Pub Date: 2019 DOI: 10.24132/csrn.2019.2901.1.5
Mayur Hemani, Abhishek Sinha, Balaji Krishnamurthy
The task of synthesizing sketches from photographs has been pursued with image-processing methods and supervised-learning approaches. The former lack flexibility, and the latter require large quantities of ground-truth data, which is hard to obtain because of the manual effort involved. We present a convolutional neural network based framework for sketch generation that does not require ground-truth data for training and produces sketches in various styles. The method combines simple analytic loss functions that correspond to characteristics of the sketch. The network is trained and evaluated on human face images. Several stylized variations of sketches are obtained by varying the parameters of the loss functions. The paper also discusses the implicit abstraction afforded by the deep convolutional network approach, which results in high-quality sketch output.
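The individual loss terms are not spelled out in the abstract, so the following is only a minimal sketch of the idea of combining simple analytic losses for ground-truth-free sketch generation: an edge-fidelity term plus a stroke-sparsity term, written in PyTorch. The function names and the weights `w_edge` and `w_sparse` are illustrative assumptions; varying such weights is the sense in which different sketch styles can be obtained from one framework.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # img: (N, 1, H, W) grayscale tensor with values in [0, 1]
    kx = img.new_tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def sketch_loss(sketch, photo, w_edge=1.0, w_sparse=0.1):
    # Edge-fidelity term: strokes of the generated sketch should follow photo edges.
    edge_term = F.l1_loss(sobel_edges(sketch), sobel_edges(photo))
    # Stroke-sparsity term: most of the sketch should stay close to white (1.0);
    # raising w_sparse yields sparser, lighter strokes, i.e. a different style.
    sparse_term = (1.0 - sketch).abs().mean()
    return w_edge * edge_term + w_sparse * sparse_term
```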
{"title":"Stylized Sketch Generation using Convolutional Networks","authors":"Mayur Hemani, Abhishek Sinha, Balaji Krishnamurthy","doi":"10.24132/csrn.2019.2901.1.5","DOIUrl":"https://doi.org/10.24132/csrn.2019.2901.1.5","url":null,"abstract":"The task of synthesizing sketches from photographs has been pursued with image processing methods and supervised learning based approaches. The former lack flexibility and the latter require large quantities of ground-truth data which is hard to obtain because of the manual effort required. We present a convolutional neural network based framework for sketch generation that does not require ground-truth data for training and produces various styles of sketches. The method combines simple analytic loss functions that correspond to characteristics of the sketch. The network is trained on and evaluated for human face images. Several stylized variations of sketches are obtained by varying the parameters of the loss functions. The paper also discusses the implicit abstraction afforded by the deep convolutional network approach which results in high quality sketch output.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115647003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CNN Approaches for Dorsal Hand Vein Based Identification
Pub Date: 2019 DOI: 10.24132/csrn.2019.2902.2.7
Szidónia Lefkovits, László Lefkovits, L. Szilágyi
In this paper we present a dorsal hand vein recognition method based on convolutional neural networks (CNN). We implemented two CNNs trained end-to-end and compared them to the most important state-of-the-art deep learning architectures (AlexNet, VGG, ResNet and SqueezeNet). We applied transfer learning and fine-tuning techniques for the purpose of dorsal hand vein-based identification. The experiments carried out studied the accuracy and training behaviour of these network architectures. The system was trained and evaluated on the best-known database in this field, the NCUT, which contains low-resolution, low-contrast images. Therefore, different pre-processing steps were required, leading us to investigate the influence of a series of image quality enhancement methods such as Gaussian smoothing, inhomogeneity correction, contrast-limited adaptive histogram equalization, ordinal image encoding, and coarse vein segmentation based on geometrical considerations. The results show high recognition accuracy for almost every such CNN-based setup.
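As an illustration of the pre-processing chain listed above, the snippet below chains Gaussian smoothing and contrast-limited adaptive histogram equalization using stock OpenCV calls. The function name and the parameter values (kernel size, `clip_limit`, `tile_grid`) are assumptions, not the settings used in the paper.

```python
import cv2

def preprocess_vein_image(path, clip_limit=2.0, tile_grid=(8, 8)):
    # Load a low-contrast NCUT-style image as 8-bit grayscale.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Gaussian smoothing to suppress sensor noise.
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)
    # Contrast-limited adaptive histogram equalization (CLAHE)
    # to enhance the vein pattern locally.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(smoothed)
```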
{"title":"CNN Approaches for Dorsal Hand Vein Based Identification","authors":"Szidónia Lefkovits, László Lefkovits, L. Szilágyi","doi":"10.24132/csrn.2019.2902.2.7","DOIUrl":"https://doi.org/10.24132/csrn.2019.2902.2.7","url":null,"abstract":"In this paper we present a dorsal hand vein recognition method based on convolutional neural networks (CNN). We implemented and compared two CNNs trained from end-to-end to the most important state-of-the-art deep learning architectures (AlexNet, VGG, ResNet and SqueezeNet). We applied the transfer learning and finetuning techniques for the purpose of dorsal hand vein-based identification. The experiments carried out studied the accuracy and training behaviour of these network architectures. The system was trained and evaluated on the best-known database in this field, the NCUT, which contains low resolution, low contrast images. Therefore, different pre-processing steps were required, leading us to investigate the influence of a series of image quality enhancement methods such as Gaussian smoothing, inhomogeneity correction, contrast limited adaptive histogram equalization, ordinal image encoding, and coarse vein segmentation based on geometricalconsiderations. The results show high recognition accuracy for almost every such CNN-based setup.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117033378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Compressed Exposure Sequences for HDR Imaging
Pub Date: 2019 DOI: 10.24132/csrn.2019.2901.1.17
S. Sekmen, A. Akyüz
High dynamic range (HDR) imaging techniques allow photographers to capture the luminance distribution in the real world as it is, freeing them from the limitations of capture and display devices. One common approach for creating HDR images is the multiple exposures technique (MET). This technique is preferred by many photographers as multiple exposures can be captured with off-the-shelf digital cameras and later combined into an HDR image. In this study, we propose a storage scheme that simplifies the maintenance and usability of such sequences. In our scheme, multiple exposures are stored inside a single JPEG file, with the main image representing a user-selected reference exposure. The other exposures are not stored directly; instead, their differences with respect to each other and to the reference are stored in compressed form in the metadata section of the same file. This allows a significant reduction in file size without impacting quality. If necessary, the original exposures can be reconstructed from this single JPEG file, which in turn can be used in a standard HDR workflow.
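A minimal sketch of the difference-plus-compression idea described above, using NumPy and zlib on raw pixel arrays rather than the paper's actual JPEG-metadata embedding; the function names and the int16 difference encoding are illustrative assumptions.

```python
import numpy as np
import zlib

def encode_exposures(reference, exposures):
    # Keep only the differences to the reference exposure, compressed;
    # the paper stores such data in the metadata section of a single JPEG.
    blobs = []
    for exposure in exposures:
        diff = exposure.astype(np.int16) - reference.astype(np.int16)
        blobs.append(zlib.compress(diff.tobytes()))
    return blobs

def decode_exposure(reference, blob):
    # Reconstruct one original exposure from the reference and its stored difference.
    diff = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(reference.shape)
    return np.clip(reference.astype(np.int16) + diff, 0, 255).astype(np.uint8)
```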
{"title":"Compressed Exposure Sequences for HDR Imaging","authors":"S. Sekmen, A. Akyüz","doi":"10.24132/csrn.2019.2901.1.17","DOIUrl":"https://doi.org/10.24132/csrn.2019.2901.1.17","url":null,"abstract":"High dynamic range (HDR) imaging techniques allow photographers to capture the luminance distribution in the real-world as it is, freeing them from the limitations of capture and display devices. One common approach for creating HDR images is the multiple exposures technique (MET). This technique is preferred by many photographers as multiple exposures can be captured with off-the-shelf digital cameras and later combined into an HDR image. In this study, we propose a storage scheme in order to simplify the maintenance and usability of such sequences. In our scheme, multiple exposures are stored inside a single JPEG file with the main image representing a user-selected reference exposure. Other exposures are not directly stored, but rather their differences with each other and the reference is stored in a compressed manner in the metadata section of the same file. This allows a significant reduction in file size without impacting quality. If necessary the original exposures can be reconstructed from this single JPEG file, which in turn can be used in a standard HDR workflow.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124025486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Immersive Analytics Sensemaking on Different Platforms
Pub Date: 2019 DOI: 10.24132/csrn.2019.2902.2.9
Sebastian Blum, Gokhan Cetin, W. Stuerzlinger
In this work we investigated sensemaking activities on different immersive platforms. We observed users during a classification task on a very large wall-display system (experiment I) and in a modern Virtual Reality headset (experiment II). In experiment II, we also evaluated a condition with a VR headset whose field of view was extended through a sparse peripheral display. We evaluated the results across the two studies by analyzing quantitative and qualitative data, such as task completion time, number of classifications, strategies followed, and shape of clusters. The results showed differences in user behavior between the two immersive platforms, i.e., the very large display wall and the VR headset. Even though the quantitative data showed no significant differences, qualitatively, users used additional strategies on the wall-display, which hints at a deeper level of sensemaking compared to the VR headset. The qualitative and quantitative results of the comparison between VR headsets do not indicate that users perform differently with a VR headset with an extended field of view.
{"title":"Immersive Analytics Sensemaking on Different Platforms","authors":"Sebastian Blum, Gokhan Cetin, W. Stuerzlinger","doi":"10.24132/csrn.2019.2902.2.9","DOIUrl":"https://doi.org/10.24132/csrn.2019.2902.2.9","url":null,"abstract":"In this work we investigated sensemaking activities on different immersive platforms. We observed user s during a classification task on a very large wall-display system (experiment I) and in a modern Virtual Reality headset (experiment II). In experiment II, we also evaluated a condition with a VR headset with an extended field of view, through a sparse peripheral display. We evaluated the results across the two studies by analyzing quantitative and qualitative data, such as task completion time, number of classifications, followed strategies, and shape of clusters. The results showed differences in user behaviors between the different immersive platforms, i.e., the very large display wall and the VR headset. Even though quantitative data showed no significant differences, qualitatively, users used additional strategies on the wall-display, which hints at a deeper level of sensemaking compared to a VR Headset. The qualitative and quantitative results of the comparison between VR Headsets do not indicate that users perform differently with a VR Headset with an extended field of view.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122583538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real Time Pedestrian and Object Detection and Tracking-based Deep Learning. Application to Drone Visual Tracking
Pub Date: 2019 DOI: 10.24132/csrn.2019.2902.2.5
R. Khemmar, M. Gouveia, B. Decoux, J. Ertaud
This work aims to show new approaches in embedded vision dedicated to object detection and tracking for drone visual control. Object/pedestrian detection has been carried out through two methods: 1. A classical image-processing approach, based on improved Histogram of Oriented Gradients (HOG) and Deformable Part Model (DPM) detection and pattern recognition methods. In this step, we present our improved HOG/DPM approach allowing the detection of a target object in real time. The developed approach allows us not only to detect the object (pedestrian) but also to estimate the distance between the target and the drone. 2. An object/pedestrian detection approach based on deep learning. The target position estimation has been carried out within the image analysis. After this, the system sends instructions to the drone engine in order to correct its position and to track the target. For this visual servoing, we have applied our improved HOG approach and implemented two kinds of PID controllers. The platform has been validated under different scenarios by comparing measured data to ground-truth data given by the drone GPS. Several tests carried out at the ESIGELEC car park and Rouen city center validate the developed platform.
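For orientation only, the sketch below pairs OpenCV's stock HOG people detector (not the improved HOG/DPM variant developed in this work) with a minimal PID controller of the kind used for the visual-servoing step; the class, gains, and parameter values are placeholder assumptions.

```python
import cv2

class PID:
    """Minimal PID controller used to steer the drone toward the target center."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def detect_pedestrians(frame):
    # Stock OpenCV HOG + linear-SVM people detector, used here as a stand-in
    # for the improved HOG/DPM detector described in the paper.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return boxes  # list of (x, y, w, h) bounding boxes
```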
{"title":"Real Time Pedestrian and Object Detection and Tracking-based Deep Learning. Application to Drone Visual Tracking","authors":"R. Khemmar, M. Gouveia, B. Decoux, J. Ertaud","doi":"10.24132/csrn.2019.2902.2.5","DOIUrl":"https://doi.org/10.24132/csrn.2019.2902.2.5","url":null,"abstract":"This work aims to show the new approaches in embedded vision dedicated to object detection and tracking for drone visual control. Object/Pedestrian detection has been carried out through two methods: 1. Classical image processing approach through \u0000improved Histogram Oriented Gradient (HOG) and Deformable Part Model (DPM) based detection and pattern recognition methods. In this step, we present our improved HOG/DPM approach allowing the detection of a target object in real time. The developed \u0000approach allows us not only to detect the object (pedestrian) but also to estimates the distance between the target and the drone. 2. Object/Pedestrian detection-based Deep Learning approach. The target position estimation has been carried out within image \u0000analysis. After this, the system sends instruction to the drone engine in order to correct its position and to track target. For this visual servoing, we have applied our improved HOG approach and implemented two kinds of PID controllers. The platform has been \u0000validated under different scenarios by comparing measured data to ground truth data given by the drone GPS. Several tests which were ca1rried out at ESIGELEC car park and Rouen city center validate the developed platform.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125440009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Porting A Visual Inertial SLAM Algorithm To Android Devices
Pub Date: 2019 DOI: 10.24132/csrn.2019.2902.2.2
Jannis Möller, Benjamin Meyer, M. Eisemann
Simultaneous Localization and Mapping (SLAM) aims to identify the current position of an agent and to map its surroundings at the same time. Visual inertial SLAM algorithms use input from visual and motion sensors for this task. Since modern smartphones are equipped with both of the needed sensors, using VI-SLAM applications becomes feasible, with Augmented Reality being one of the most promising application areas. Android, having the largest market share of all mobile operating systems, is of special interest as the target platform. For iOS there already exists a high-quality open-source implementation for VI-SLAM: the framework VINS-Mobile. In this work we discuss what steps are necessary for porting it to the Android operating system. We provide a practical guide to the main challenge: the correct calibration of device-specific parameters for any Android smartphone. We present our results using the Samsung Galaxy S7 and show further improvement possibilities.
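The device-specific parameters mainly concern the camera intrinsics and distortion coefficients (plus camera-to-IMU extrinsics and time offset, which are not covered here). A generic OpenCV chessboard calibration such as the sketch below can provide the intrinsics for a VINS-style configuration; the function name, board size, and square size are assumptions, not the paper's procedure.

```python
import glob
import cv2
import numpy as np

def calibrate_intrinsics(image_glob, board_size=(9, 6), square_size=0.024):
    # Estimate camera intrinsics from chessboard photos taken with the phone.
    # board_size counts inner corners; square_size is the edge length in meters.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size
    obj_points, img_points, image_size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # Returns the camera matrix and distortion coefficients for the config file.
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return camera_matrix, dist_coeffs
```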
{"title":"Porting A Visual Inertial SLAM Algorithm To Android Devices","authors":"Jannis Möller, Benjamin Meyer, M. Eisemann","doi":"10.24132/csrn.2019.2902.2.2","DOIUrl":"https://doi.org/10.24132/csrn.2019.2902.2.2","url":null,"abstract":"Simultaneous Localization and Mapping aims to identify the current position of an agent and to map his surroundings at the same time. Visual inertial SLAM algorithms use input from visual and motion sensors for this task. Since modern smartphones are equipped with both needed sensors, using VI-SLAM applications becomes feasible, with Augmented Reality being one of the most promising application areas. Android, having the largest market share of all mobile operating systems, is of special interest as the target platform. For iOS there already exists a high-quality open source implementation for VI-SLAM: The framework VINS-Mobile. In this work we discuss what steps are necessary for porting it to the Android operating system. We provide a practical guide to the main challenge: The correct calibration of device specific parameters for any Android smartphone. We present our results using the Samsung Galaxy S7 and show further improvement possibilities.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124816167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Collateral effects of the Kalman Filter on the Throughput of a Head-Tracker for Mobile Devices
Pub Date: 2019 DOI: 10.24132/csrn.2019.2901.1.14
Maria Francesca Roig-Maimó, Ramon Mas-Sansó
We have developed an image-based head-tracker interface for mobile devices that uses the information from the front camera to detect and track the user's nose position and translate its movements into a pointing metaphor for the device. However, as already noted in the literature, the measurement errors of the motion tracking lead to noticeable jittering of the perceived motion. To counterbalance this unpleasant and unwanted behavior, we have applied a Kalman filter to smooth the obtained positions. In this paper we focus on the effect that the use of a Kalman filter can have on the throughput of the interface. Throughput is the human-performance measure proposed by ISO 9241-411 for evaluating the efficiency and effectiveness of non-keyboard input devices. The smoothness and precision improvements that the Kalman filter brings to the tracking of the cursor are subjectively evident. However, its effects on the ISO throughput have to be measured objectively to get an estimate of the benefits and drawbacks of applying a Kalman filter to a pointing device.
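For reference, below is a minimal constant-velocity Kalman filter of the kind used to smooth a tracked nose position, together with the ISO 9241-411 throughput formula (effective index of difficulty divided by movement time). The noise values, frame rate, and function names are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter for smoothing nose positions.

    State is [x, y, vx, vy]; process and measurement noise are illustrative."""
    def __init__(self, dt=1 / 30.0, process_var=1e-2, meas_var=4.0):
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def update(self, z):
        # Predict with the constant-velocity model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured nose position z = (u, v) in pixels.
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # smoothed (u, v)

def throughput(effective_distance, effective_width, movement_time):
    # ISO 9241-411 throughput: effective index of difficulty over movement time (bits/s).
    return np.log2(effective_distance / effective_width + 1.0) / movement_time
```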
{"title":"Collateral effects of the Kalman Filter on the Throughput of a Head-Tracker for Mobile Devices","authors":"Maria Francesca Roig-Maimó, Ramon Mas-Sansó","doi":"10.24132/csrn.2019.2901.1.14","DOIUrl":"https://doi.org/10.24132/csrn.2019.2901.1.14","url":null,"abstract":"We have developed an image-based head-tracker interface for mobile devices that uses the information of the front camera to detect and track the user’s nose position and translate its movements into a pointing metaphor to the device. However, as already noted in the literature, the measurement errors of the motion tracking leads to a noticeable jittering of the perceived motion. To counterbalance this unpleasant and unwanted behavior, we have applied a Kalman filter to smooth the obtained positions. In this paper we focus on the effect that the use of a Kalman filter can have on the throughput of the interface. Throughput is the human performance measure proposed by the ISO 9241-411 for evaluating the efficiency and effectiveness of non-keyboard input devices. The softness and precision improvements that the Kalman filter infers in the tracking of the cursor are subjectively evident. However, its effects on the ISO’s throughput have to be measured objectively to get an estimation of the benefits and drawbacks of applying a Kalman filter to a pointing device.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124682336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving facial attraction in videos
Pub Date: 2019 DOI: 10.24132/csrn.2019.2902.2.8
Maycon Prado Rocha Silva, J. M. D. Martino
The face plays an important role both socially and culturally and has been extensively studied, especially in investigations of perception. It is accepted that an attractive face tends to draw and keep the attention of the observer for a longer time. Drawing and keeping attention is an important issue that can be beneficial in a variety of applications, including advertising, journalism, and education. In this article, we present a fully automated process to improve the attractiveness of faces in images and video. Our approach automatically identifies points of interest on the face, measures the distances between them, and uses classifiers to search a database of reference face images deemed attractive, in order to identify the pattern of points of interest best suited to improving attractiveness. The modified points of interest are projected in real time onto a three-dimensional face mesh to support a consistent transformation of the face across a video sequence. In addition to the geometric transformation, the texture is also automatically smoothed through a smoothing mask and a weighted sum of textures. The process as a whole enables attractiveness to be improved not only in images but also in videos in real time.
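The texture step described above amounts to a weighted sum of the original image and a smoothed version inside a skin mask. The sketch below illustrates this with a bilateral filter in OpenCV; the filter parameters and the `strength` weight are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def smooth_skin(image, mask, strength=0.6):
    # image: BGR uint8 frame; mask: float32 in [0, 1], 1 where smoothing applies.
    smoothed = cv2.bilateralFilter(image, 9, 75, 75)  # edge-preserving smoothing
    weight = cv2.merge([mask, mask, mask]) * strength
    # Weighted sum of the original and smoothed textures inside the mask.
    blended = image.astype(np.float32) * (1.0 - weight) + smoothed.astype(np.float32) * weight
    return np.clip(blended, 0, 255).astype(np.uint8)
```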
{"title":"Improving facial attraction in videos","authors":"Maycon Prado Rocha Silva, J. M. D. Martino","doi":"10.24132/csrn.2019.2902.2.8","DOIUrl":"https://doi.org/10.24132/csrn.2019.2902.2.8","url":null,"abstract":"The face plays an important role both socially and culturally and has been extensively studied especially in investigations on perception. It is accepted that an attractive face tends to draw and keep the attention of the observer for a longer time. Drawing and keeping the attention is an important issue that can be beneficial in a variety of applications, including advertising, journalism, and education. In this article, we present a fully automated process to improve the attractiveness of faces in images and video. Our approach automatically identifies points of interest on the face and measures the distances between them, fusing the use of classifiers searches the database of reference face images deemed to be attractive to identify the pattern of points of interest more adequate to improve the attractiveness. The modified points of interest are projected in real-time onto a three-dimensional face mesh to support the consistent transformation of the face in a video sequence. In addition to the geometric transformation, texture is also automatically smoothed through a smoothing mask and weighted sum of textures. The process as a whole enables the improving of attractiveness not only in images but also in videos in real time.","PeriodicalId":322214,"journal":{"name":"Computer Science Research Notes","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125031083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0