
Latest Publications in Photography, Mobile, and Immersive Imaging

Light Field Perception Enhancement for Integral Displays
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-269
Basel Salahieh, Yi Wu, Oscar Nestares
Citations: 0
VCX: An industry initiative to create an objective camera module evaluation for mobile devices
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-172
Dietmar Wueller, Uwe Artmann, V. Rao, G. Reif, J. Kramer, Fabian Knauf
Due to fast-evolving technologies and the increasing importance of social media, the camera is one of the most important components of today's mobile phones. Nowadays, smartphones are taking over a big share of the compact camera market. A simple reason for this might be revealed by the famous quote: "The best camera is the one that's with you". But with the vast choice of devices and great promises from manufacturers, there is a demand to characterize image quality and performance in very simple terms in order to provide information that helps in choosing the best-suited device. The existing evaluation systems are either not entirely objective or are still under development and have not yet reached a useful level. Therefore the industry itself has gotten together and created a new objective quality evaluation system named Valued Camera eXperience (VCX). It is designed to reflect the user experience regarding the image quality and performance of a camera in a mobile device. Members of the initiative so far are: Apple, Huawei, Image Engineering, LG, Mediatec, Nomicam, Oppo, TCL, Vivo, and Vodafone. Introduction: Why another mobile camera evaluation standard? In fact, the basis for VCX existed well before CPIQ or DxOMark. In the early 2000s, Vodafone, one of the main carriers in Europe, looked into the quality of the cellphones it bundled with its contracts. One of the most important parts of these phones was, and still is, the camera. So Vodafone decided to define KPIs (key performance indicators) based on ISO standards to assess the quality of cell phone camera modules. To define the KPIs, Vodafone needed to get a feeling for camera performance and consulted Image Engineering for guidance and help with testing. In 2013, Vodafone decided to take the KPIs to the next level. Cameras in cell phones had outgrown the former KPIs and many new technologies had been implemented. An update was therefore needed, and Vodafone asked Image Engineering to update the physical measurements in order to get a complete picture of camera performance. In the background, Vodafone worked on converting the physical measurements into an objective quality rating system, at that time called Vodafone Camera eXperience. In 2015 the system was updated according to the latest ISO standards, and in 2016 Vodafone and Image Engineering decided that, due to a lack of resources within Vodafone, Image Engineering should make the system public and move it forward under the neutral name Valued Camera eXperience. This was done at Photokina in Cologne in September 2016. The feedback and interest from the industry were so positive that in late 2016 the idea was born to make this an open industry standard managed by the industry. So in March 2017 a conference was held in Duesseldorf and the decision was made to found a non-profit organization named VCX-Forum e.V. Today VCX-Forum e.V. has X members that decide on the path forward in the future. Figure 1: The VCX roadmap. VCX is based on five principles that ensure the results can be mapped to real-life experience: 1. VCX measurements shall ensure the out-of-the-box experience. 2. VCX shall remain 100% objective. 3. VCX shall be open and transparent. 4. VCX shall engage/use independent imaging labs for testing. 5. VCX shall seek continuous improvement. Principle 1 (VCX measurements shall ensure the out-of-the-box experience): this principle states that devices under test shall be obtained from unbiased, uncontaminated sources, i.e. sampled at random from stores that sell the device under test. This ensures that no special samples or customized hardware/software from vendors are accepted; results are obtained from devices as released to the market. Devices are tested with the default camera application and settings (except for the flash test cases). If a need arises, due to market forces and/or user demand, to publish device results under the VCX umbrella, such results are clearly marked as "provisional". Principle 2 (VCX shall remain 100% objective): the complete process of how a score is created from the measurements is based on an objective analysis of the device under test followed by a fixed and
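The conversion from physical measurements into a single rating can be pictured as a weighted aggregation of normalized KPI results. The sketch below only illustrates that general idea in Python; the KPI names, reference values, weights, and normalization rule are hypothetical placeholders and are not the actual VCX scoring formula.

```python
def weighted_score(measurements, references, weights):
    """All arguments are dicts keyed by KPI name; returns a score from 0 to 100."""
    total = sum(weights.values())
    score = 0.0
    for kpi, value in measurements.items():
        normalized = min(value / references[kpi], 1.0)   # 1.0 means the reference level is met
        score += weights[kpi] / total * normalized
    return 100.0 * score

# Hypothetical KPIs and weights, purely for illustration.
print(weighted_score(
    measurements={"resolution_lp_ph": 1800, "dynamic_range_stops": 9.5},
    references={"resolution_lp_ph": 2000, "dynamic_range_stops": 12.0},
    weights={"resolution_lp_ph": 0.6, "dynamic_range_stops": 0.4},
))
```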
Citations: 2
Relative Impact of Key Rendering Parameters on Perceived Quality of VR Imagery Captured by the Facebook Surround 360 Camera
Pub Date : 2018-01-28 DOI: 10.2352/issn.2470-1173.2018.05.pmii-183
Nora Pfund, Nitin Sampat, J. Viggiano
High-quality 360° capture for cinematic VR is a relatively new and rapidly evolving technology. The field demands very high-quality, distortion-free 360° capture, which is not possible with cameras that depend on fish-eye lenses to capture a 360° field of view. The Facebook Surround 360 Camera, one of the few "players" in this space, is an open-source licensed design that Facebook has released for anyone who chooses to build it from off-the-shelf components and generate 8K stereo output using open-source licensed rendering software. However, the components are expensive and the system itself is extremely demanding in terms of computer hardware and software. Because of this, there have been very few implementations of this design and virtually no real deployment in the field. We have implemented the system, based on Facebook's design, and have been testing and deploying it in various situations, even generating short video clips. We have discovered in our recent experience that high-quality 360° capture comes with its own set of new challenges. As an example, even the most fundamental tools of photography like "exposure" become difficult, because one is always faced with ultra-high dynamic range scenes (one camera is pointing directly at the sun while others may be pointing into a dark shadow). The conventional imaging pipeline is further complicated by the fact that the stitching software has different effects on various aspects of the calibration or pipeline optimization. Most of our focus to date has been on optimizing the imaging pipeline and improving the quality of the output for viewing in an Oculus Rift headset. We designed a controlled experiment to study five key parameters in the rendering pipeline: black level, neutral balance, color correction matrix (CCM), geometric calibration, and vignetting. By varying all of these parameters in a combinatorial manner, we were able to assess their relative impact on the perceived image quality of the output. Our results thus far indicate that the output image quality is greatly influenced by the black level of the individual cameras (the Facebook camera comprises 17 cameras whose outputs need to be stitched to obtain a 360° view). Neutral balance is the least sensitive. The results we obtain from accurately calculating and applying the CCM for each individual camera remain puzzling; we obtained improved results by using the average of the matrices for all cameras. Future work includes evaluating the effects of geometric calibration and vignetting on quality.
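As a point of reference for the CCM comparison above, the sketch below shows, under stated assumptions, how a 3x3 color correction matrix is typically applied to linear RGB data and how per-camera matrices could be averaged into one shared matrix. The matrix values and image data are illustrative placeholders, not the Surround 360 pipeline itself.

```python
import numpy as np

def apply_ccm(linear_rgb, ccm):
    """Apply a 3x3 color correction matrix to an HxWx3 linear RGB image."""
    h, w, _ = linear_rgb.shape
    corrected = linear_rgb.reshape(-1, 3) @ ccm.T   # per-pixel matrix multiply
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)

rng = np.random.default_rng(0)
num_cameras = 17                                    # the rig described above uses 17 cameras
per_camera_ccms = [np.eye(3) + 0.05 * rng.standard_normal((3, 3)) for _ in range(num_cameras)]
shared_ccm = np.mean(per_camera_ccms, axis=0)       # one averaged matrix for the whole rig

frame = rng.random((8, 8, 3))                       # stand-in for one camera's linear image
out_individual = apply_ccm(frame, per_camera_ccms[0])
out_averaged = apply_ccm(frame, shared_ccm)
```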
Citations: 0
Lessons from design, construction, and use of various multicameras
Pub Date : 2018-01-28 DOI: 10.2352/issn.2470-1173.2018.05.pmii-182
H. Dietz, C. Demaree, P. Eberhart, Chelsea Kuball, J. Wu
A multicamera, array camera, cluster camera, or “supercamera” incorporates two or more component cameras in a single system that functions as a camera with superior performance or special capabilities. Many camera arrays have been built by many organizations, yet creating an effective multicamera has not become significantly easier. This paper attempts to provide some useful insights toward simplifying the design, construction, and use of multicameras. Nine multicameras our group built for diverse purposes between 1999 and 2017 are described in some detail, including four built during Summer 2017 using some of the proposed simplifications.
Citations: 1
An Automatic Tuning Method for Camera Denoising and Sharpening based on a Perception Model
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-442
Weijuan Xi, Huanzhao Zeng, Jonathan B. Phillips
Citations: 2
Optimizing Image Acquisition Systems for Autonomous Driving
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-161
H. Blasinski, J. Farrell, Trisha Lian, Zhenyi Liu, B. Wandell
Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant to, or may even interfere with, requirements for automotive, medical, and other applications. The remarkable capability of the imaging industry to create lens and sensor designs for specific applications has been demonstrated in the mobile computing market. We might expect that the industry can further innovate if we specify the requirements for other markets. This paper explains an approach to developing image system designs that meet the task requirements for autonomous vehicle applications. It is impractical to build a large number of image acquisition systems and evaluate each of them with real driving data; therefore, we assembled a simulation environment to provide guidance at an early stage. The open-source and freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into the sensor pixel responses, accounting for a large number of sensor parameters. This enables the user to apply different types of image processing pipelines to generate images that are used to train and test convolutional networks used in autonomous driving. We use the simulation environment to assess performance for different cameras and networks. Introduction: The market for image sensors in autonomous vehicles can be divided into two segments. Some image sensor data are presented as images to the passengers, such as views from behind the car rendered as the driver backs up. Other image sensor data are used by the computational algorithms that guide the vehicle; the output from these sensors is never rendered for the human eye. It is reasonable to expect that the optical design, sensor parameters, and image processing pipeline for these two systems will differ. Mobile imaging applications for consumer photography dominate the market, driving the industry toward sensors with very small pixels (1 micron), a large number of pixels, a Bayer color filter array, and an infrared cutoff filter. There is a nascent market for image sensors for autonomous-vehicle decision-system applications, and the most desirable features for such applications are not yet settled. The current offerings include sensors with larger pixels, a color filter array that comprises one quarter red filters and three quarters clear filters, and no infrared cutoff filter (e.g. ON Semiconductor; Omnivision). The requirements for optical properties, such as depth of field effects, may also differ between consumer photography and autonomous vehicles. Consumer photography values narrow depth of field images (bokeh), while autonomous driving values large depth of field to support Lens
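To make the irradiance-to-pixel-response step concrete, here is a minimal Python sketch of a camera sensor simulation: photon (shot) noise, quantum efficiency, read noise, full-well clipping, and quantization. The parameter names and values are assumptions chosen for illustration; this is not the isetcam API or the authors' implementation.

```python
import numpy as np

def simulate_sensor(irradiance_photon_rate, exposure_s=0.01, qe=0.6,
                    read_noise_e=2.0, full_well_e=6000, bits=10, seed=0):
    """irradiance_photon_rate: HxW mean photon arrival rate (photons/pixel/second)."""
    rng = np.random.default_rng(seed)
    mean_photons = irradiance_photon_rate * exposure_s
    electrons = rng.poisson(mean_photons * qe).astype(float)       # shot noise and quantum efficiency
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)    # additive read noise
    electrons = np.clip(electrons, 0.0, full_well_e)               # full-well saturation
    digital = np.round(electrons / full_well_e * (2 ** bits - 1))  # quantize to digital numbers
    return digital.astype(np.uint16)

pixels = simulate_sensor(np.full((4, 4), 5.0e5))   # toy 4x4 patch of uniform photon flux
```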
Citations: 25
Overview of State-of-the-Art Algorithms for Stack-Based High-Dynamic Range (HDR) Imaging
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-311
P. Sen
Modern digital cameras have very limited dynamic range, which makes them unable to capture the full range of illumination in natural scenes. Since this prevents them from accurately photographing visible detail, researchers have spent the last two decades developing algorithms for high-dynamic-range (HDR) imaging that can capture a wider range of illumination and therefore allow us to reconstruct richer images of natural scenes. The most practical of these methods are stack-based approaches, which take a set of images at different exposure levels and then merge them together to form the final HDR result. However, these algorithms produce ghost-like artifacts when the scene has motion or the camera is not perfectly static. In this paper, we present an overview of state-of-the-art deghosting algorithms for stack-based HDR imaging and discuss some of the tradeoffs of each.
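For context on what merging the stack means in the static case (before any deghosting), the following is a minimal sketch of a standard weighted exposure merge, assuming linear images and known exposure times. It is not one of the deghosting algorithms surveyed in the paper.

```python
import numpy as np

def merge_exposure_stack(images, exposure_times):
    """images: list of HxWxC linear arrays in [0, 1]; exposure_times: seconds."""
    acc = np.zeros_like(images[0], dtype=float)
    weight_sum = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, down-weight clipped or noisy pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)                 # per-exposure radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)

# Example: three exposures bracketed one stop apart.
stack = [np.random.rand(4, 6, 3) for _ in range(3)]
hdr = merge_exposure_stack(stack, [1 / 500, 1 / 250, 1 / 125])
```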
Citations: 6
Towards Perceptual Evaluation of Six Degrees of Freedom Virtual Reality Rendering from Stacked OmniStereo Representation
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-352
Jayant Thatte, B. Girod
Allowing viewers to explore virtual reality in a head-mounted display with six degrees of freedom (6-DoF) greatly enhances the associated immersion and comfort. It makes the experience more compelling compared to a fixed-viewpoint 2-DoF rendering produced by conventional algorithms using data from a stationary camera rig. In this work, we use subjective testing to study the relative importance of, and the interaction between, motion parallax and binocular disparity as depth cues that shape the perception of 3D environments by human viewers. Additionally, we use the recorded head trajectories to estimate the distribution of the head movements of a sedentary viewer exploring a virtual environment with 6-DoF. Finally, we demonstrate a real-time virtual reality rendering system that uses a Stacked OmniStereo intermediary representation to provide a 6-DoF viewing experience by utilizing data from a stationary camera rig. We outline the challenges involved in developing such a system and discuss the limitations of our approach. Introduction: Cinematic virtual reality is a subfield of virtual reality (VR) that deals with live-action or natural environments captured using a camera system, in contrast to computer-generated scenes rendered from synthetic 3D models. With the advent of modern camera rigs, ever-faster compute capability, and a new generation of head-mounted displays (HMDs), cinematic VR is well poised to enter the mainstream market. However, the lack of an underlying 3D scene model makes it significantly more challenging to render accurate motion parallax in natural VR scenes. As a result, all the live-action VR content available today is rendered from a fixed vantage point, disregarding any positional information from the HMD. The resulting mismatch in perceived motion between the visual and vestibular systems gives rise to significant discomfort including nausea, headache, and disorientation [1][2]. Additionally, motion parallax is an important depth cue [3], and rendering VR content without motion parallax also makes the experience less immersive. Furthermore, since the axis of head rotation does not pass through the eyes, head rotation even from a fixed position leads to a small translation of the eyes and therefore cannot be accurately modelled using pure rotation. The following are the key contributions of our work. 1. We present a subjective study aimed at understanding the contributions of motion parallax and binocular stereopsis to perceptual quality of experience in VR. 2. We use the recorded head trajectories of the study participants to estimate the distribution of the head movements of a sedentary viewer immersed in a 6-DoF virtual environment. 3. We demonstrate a real-time VR rendering system that provides a 6-DoF viewing experience. The rest of the paper is organized as follows. The following section gives an overview of the related work. The next three sections detail the three contributions of our work: the results of the subjective tests, the estimated head-movement distribution, and the proposed real-time rendering system. The final two sections outline future work and the conclusions, respectively.
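One plausible way to summarize recorded head trajectories into a head-movement distribution is to pool the position samples and report their mean and covariance. The sketch below illustrates that idea in Python under assumed data shapes (a list of Nx3 position arrays in meters); it is not the authors' analysis code.

```python
import numpy as np

def head_position_statistics(trajectories):
    """trajectories: list of Nx3 arrays of head positions (meters) over time."""
    samples = np.concatenate(trajectories, axis=0)   # pool all recordings
    mean = samples.mean(axis=0)                      # average head position
    cov = np.cov(samples, rowvar=False)              # spread of translation along x, y, z
    return mean, cov

# Toy example: two short recordings of small movements around the rig center.
rng = np.random.default_rng(1)
recordings = [rng.normal(0.0, 0.05, size=(200, 3)) for _ in range(2)]
mu, sigma = head_position_statistics(recordings)
```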
Citations: 15
Image Systems Simulation for 360° Camera Rigs
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-353
Trisha Lian, J. Farrell, B. Wandell
Camera arrays are used to acquire the 360° surround video data presented on 3D immersive displays. The design of these arrays involves a large number of decisions, ranging from the placement and orientation of the cameras to the choice of lenses and sensors. We implemented an open-source software environment (iset360) to support engineers designing and evaluating camera arrays for virtual and augmented reality applications. The software uses physically based ray tracing to simulate a 3D virtual spectral scene and traces these rays through multi-element spherical lenses to calculate the irradiance at the imaging sensor. The software then simulates imaging sensors to predict the captured images. The sensor data can be processed to produce the stereo and monoscopic 360° panoramas commonly used in virtual reality applications. By simulating the entire capture pipeline, we can visualize how changes in the system components influence system performance. We demonstrate the use of the software by simulating a variety of different camera rigs, including the Facebook Surround360, the GoPro Odyssey, the GoPro Omni, and the Samsung Gear 360. Introduction: Head-mounted visual displays can provide a compelling and immersive experience of a three-dimensional scene. Because the experience can be very impactful, there is a great deal of interest in developing applications ranging from clinical medicine and behavioral change to entertainment, education, and experience-sharing [1][2]. In some applications, computer graphics is used to generate content, providing a realistic, but not real, experience (e.g., video games). In other applications, the content is acquired from a real event (e.g., sports, concerts, news, or a family gathering) using camera arrays (rigs) and subsequent extensive image processing that capture and render the environment (Figure 1). The design of these rigs involves many different engineering decisions, including the selection of lenses, sensors, and camera positions. In addition to the rig, there are many choices of how to store and process the acquired content. For example, data from multiple cameras are often transformed into a stereo pair of 360° panoramas [3] by stitching together images captured by multiple cameras. Based on the user's head position and orientation, data are extracted from the panorama and rendered on a head-mounted display. There is no single quality-limiting element of this system, and moreover, interactions between the hardware and software design choices limit how well metrics of individual components predict overall system quality. To create a good experience, we must be able to assess the combination of hardware and software components that comprise the entire system. Building and testing a complete rig is costly and slow; hence, it can be useful to obtain guidance about design choices by using simulation. Figure 1: Overview of the hardware and software components that combine in a camera rig application for an immersive head-mounted display. (A) The simulation includes a 3D spectral scene, a definition of the camera rig, and specifications of the individual cameras; this simulation produces a set of image outputs. (B) These images are then processed by a series of software algorithms; here we show a pipeline that generates an intermediate panoramic representation, along with a viewport calculation that renders the image according to the user's head position. Simulation of the system: This paper describes software tools that simulate a controlled three-dimensional real-world scene and the image acquisition system in order to generate the images produced by specific hardware choices. These images are the input to the stitching and rendering algorithms. The simulation enables engineers to explore how different design choices affect the entire imaging system, including the real scene, the hardware components, and the post-processing algorithms. Software implementation: The iset360 software models the image acquisition pipeline of a 360° camera and consists of MATLAB and C++ components. The simulation software is freely available in three repositories within the ISET GitHub project (https://github.com/ISET): iset360, iset3d, and isetcam. Figures 2 and 3 summarize the initial stages of the workflow. The first part of the code creates realistic 3D scenes and computes the expected sensor irradiance given a lens description. To do this, we start with 3D virtual scenes built using 3D modeling software (e.g., Blender or Maya). The scene is converted into a PBRT-compatible format [4], implemented in C++. PBRT is a quantitative computer graphics tool that we use to compute the sensor irradiance as light propagates from the 3D scene, through the lens, and onto the sensor surface. We have augmented the PBRT code to return multispectral images, model lens diffraction, and simulate light fields [5].
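As a small illustration of the geometry behind the monoscopic 360° panoramas mentioned above, the sketch below maps equirectangular pixel coordinates to unit ray directions on the sphere, which is the sampling step a stitcher or viewport renderer needs. It is written in Python with assumed conventions (y up, z forward) and is not part of the iset360 MATLAB/C++ code.

```python
import numpy as np

def equirect_ray_directions(width, height):
    """Return an HxWx3 array of unit ray directions, one per panorama pixel."""
    u = (np.arange(width) + 0.5) / width           # horizontal position in [0, 1)
    v = (np.arange(height) + 0.5) / height         # vertical position in [0, 1)
    lon = (u - 0.5) * 2.0 * np.pi                  # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi                        # latitude in [-pi/2, pi/2]
    lon, lat = np.meshgrid(lon, lat)               # shape (height, width)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

rays = equirect_ray_directions(512, 256)           # directions for a small panorama
```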
Citations: 6
Multispectral, high dynamic range, time domain continuous imaging
Pub Date : 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-409
H. Dietz, P. Eberhart, C. Demaree
Citations: 1