Pub Date: 2024-01-01 | DOI: 10.2352/EI.2024.36.11.HVEI-214
Alex D Hwang, Jaehyun Jung, Alex Bowers, Eli Peli
Avoiding person-to-person collisions is critical for patients with visual field loss, and any intervention claiming to improve their safety should empirically demonstrate its efficacy. To design a VR mobility testing platform that presents multiple pedestrians, the distinction between colliding and non-colliding pedestrians must be clearly defined. We measured nine normally sighted subjects' collision envelopes (CE; an egocentric boundary distinguishing collision from non-collision) and found that the CE changes with the approaching pedestrian's bearing angle and speed. When scripting person-to-person collision events for the VR mobility testing platform, non-colliding pedestrians should therefore not enter the CE.
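The envelope test above can be sketched geometrically. All shapes and radii here are hypothetical placeholders, not the authors' measured envelopes: treat the CE as a bearing-dependent radius around the walker and flag any pedestrian whose predicted closest approach falls inside it.

```python
import math

def ce_radius(bearing_deg, front=1.2, side=0.6):
    """Hypothetical elliptical collision envelope: larger straight ahead
    (0 deg) than to the sides (+/-90 deg). Radii in meters are illustrative."""
    b = math.radians(bearing_deg)
    # Radius of an ellipse (semi-axes front, side) along bearing b.
    return (front * side) / math.hypot(side * math.cos(b), front * math.sin(b))

def is_colliding(bearing_deg, miss_distance_m):
    """Classify a pedestrian as 'colliding' if the predicted closest
    approach falls inside the envelope at that bearing."""
    return miss_distance_m < ce_radius(bearing_deg)

# Head-on approach passing within 0.5 m enters the CE; a pedestrian
# passing 1.0 m to the side (90 deg bearing) stays outside it.
assert is_colliding(0, 0.5) and not is_colliding(90, 1.0)
```

Under this sketch, scripting "non-colliding" pedestrians reduces to constraining their trajectories so the closest approach stays outside `ce_radius` at every bearing.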
Title: Egocentric Boundaries on Distinguishing Colliding and Non-Colliding Pedestrians while Walking in a Virtual Environment. IS&T International Symposium on Electronic Imaging, vol. 36, pp. 2141-2148. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883473/pdf/
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.2.sda-b02
Andrew J. Woods, Nicolas S. Holliman, Takashi Kawai, Bjorn Sommer
This manuscript serves as an introduction to the conference proceedings for the 34th annual Stereoscopic Displays and Applications conference and also provides an overview of the conference.
Title: 34th Annual Stereoscopic Displays and Applications Conference - Introduction. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.7.image-278
Yang Cai, Mel Siegel
We present a head-mounted holographic display system for thermographic image overlay, biometric sensing, and wireless telemetry. The system is lightweight and reconfigurable for multiple field applications, including object contour detection and enhancement, breathing-rate detection, and telemetry over a mobile phone for peer-to-peer communication and an incident-command dashboard. Due to the limited computing power of the embedded system, we developed lightweight image-processing algorithms for edge detection and breathing-rate detection, as well as an image-compression codec. The system can be integrated into a helmet or personal protective equipment such as a face shield or goggles. It can be applied to firefighting, medical emergency response, and other first-response operations. Finally, we present a case study of "Cold Trailing" for forest fire containment.
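As an illustration of the kind of lightweight edge detection an embedded system can afford, here is a generic Sobel gradient-magnitude sketch in pure Python; it is a stand-in, not the paper's own algorithm, which is not reproduced in this listing.

```python
def sobel_edges(img, thresh=128):
    """Minimal Sobel edge map for a grayscale image given as a list of
    rows of 0-255 ints. Border pixels are left at 0. Illustrative only."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            # Cheap L1 magnitude avoids a square root on embedded targets.
            out[y][x] = 255 if abs(gx) + abs(gy) >= thresh else 0
    return out

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 255, 255] for _ in range(4)]
edges = sobel_edges(img)
```

The L1 magnitude and integer-only arithmetic are typical concessions to the limited compute budget the abstract describes.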
Title: Wearable multispectral imaging and telemetry at edge. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.2.sda-a02
Abstract: The Stereoscopic Displays and Applications Conference (SD&A) focuses on developments covering the entire stereoscopic 3D imaging pipeline from capture, processing, and display to perception. The conference brings together practitioners and researchers from industry and academia to facilitate an exchange of current information on stereoscopic imaging topics. The highly popular conference demonstration session provides authors with a perfect additional opportunity to showcase their work. The long-running SD&A 3D Theater Session provides conference attendees with a wonderful opportunity to see how 3D content is being created and exhibited around the world. Publishing your work at SD&A offers excellent exposure: across all publication outlets, SD&A has the highest proportion of papers in the top 100 cited papers in the stereoscopic imaging field (Google Scholar, May 2013).
Title: Stereoscopic Displays and Applications XXXIV Conference Overview and Papers Program. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.3.mobmu-351
Yuriy Reznik, Nabajeet Barman
In recent years, we have seen significant progress in advanced image upscaling techniques, sometimes called super-resolution, ML-based, or AI-based upscaling. Such algorithms are now available not only in the form of specialized software but also in the drivers and SDKs supplied with modern graphics cards; the upscaling functions in the NVIDIA Maxine SDK are one recent example. However, to take advantage of this functionality in video streaming applications, one needs to (a) quantify the impact of super-resolution techniques on perceived visual quality, (b) implement video rendering that incorporates super-resolution upscaling, and (c) implement new bitrate+resolution adaptation algorithms in streaming players, enabling such players to deliver better quality of experience, better efficiency (e.g., reduced bandwidth usage), or both. Towards this end, in this paper we propose several techniques that may be helpful to the implementation community. First, we offer a model quantifying the impact of super-resolution upscaling on perceived quality. Our model is based on the Westerink-Roufs model connecting the true resolution of images/videos to perceived quality, with several additional parameters added to allow tuning to specific implementations of super-resolution techniques. We verify this model against several recent datasets, including MOS scores measured for several conventional upscaling and super-resolution algorithms. Then, we propose an improved adaptation logic for video streaming players that considers video bitrates, encoded video resolutions, player size, and the upscaling method. This improved logic relies on our modified Westerink-Roufs model to predict perceived quality and suggests the renditions that would deliver the best quality for given display and upscaling-method characteristics.
Finally, we study the impacts of the proposed techniques and show that they can deliver practically appreciable results in terms of the expected QoE improvements and bandwidth savings.
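The adaptation logic described above can be sketched as follows. The saturating quality function is an illustrative stand-in for the paper's fitted Westerink-Roufs variant, `sr_gain` is an assumed per-upscaler parameter, and the rendition ladder values are hypothetical.

```python
import math

def perceived_quality(encoded_height, display_height, sr_gain=1.0):
    """Toy saturating quality model: quality rises with effective
    resolution and caps at the display's own limit. sr_gain models how
    much extra detail a super-resolution upscaler recovers (assumed,
    fitted per upscaler in the paper's approach)."""
    effective = min(encoded_height * sr_gain, display_height)
    return 1.0 - math.exp(-3.0 * effective / display_height)

def pick_rendition(renditions, bandwidth_bps, display_height, sr_gain=1.0):
    """Pick the feasible (bitrate_bps, height) rendition with the best
    predicted quality; on ties, prefer the lower bitrate."""
    feasible = [r for r in renditions if r[0] <= bandwidth_bps] or [min(renditions)]
    return max(feasible,
               key=lambda r: (perceived_quality(r[1], display_height, sr_gain), -r[0]))

ladder = [(1_000_000, 360), (2_500_000, 720), (6_000_000, 1080)]
# Without super-resolution, the 720p rendition wins at 3 Mbps on a
# 1080-line display; with an assumed 3x-gain upscaler the 360p rendition
# predicts the same capped quality, so the lower bitrate is chosen.
```

This mirrors the claimed benefit: with super-resolution at the client, a player can drop to a lower-bitrate rendition without a predicted loss in quality, saving bandwidth.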
Title: Improving the performance of web-streaming by super-resolution upscaling techniques. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.4.mwsf-376
Xiyang Luo, Michael Goebel, Elnaz Barshan, Feng Yang
In this work, we present an efficient multi-bit deep image watermarking method that is cover-agnostic yet robust to geometric distortions such as translation and scaling, as well as to other distortions such as JPEG compression and noise. Our design consists of a lightweight watermark encoder jointly trained with a deep-neural-network-based decoder, which lets us retain the efficiency of the encoder while fully utilizing the power of a deep neural network. Because the watermark encoder is independent of the image content, the generated watermarks are universally applicable to different cover images and can be pre-generated for further efficiency. To offer robustness to geometric transformations, we introduce a learned model that predicts the scale and offset of watermarked images. Experiments show that our method outperforms comparably efficient watermarking methods by a large margin.
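To illustrate the cover-agnostic idea (the watermark is generated from the payload bits alone, so it can be pre-computed before any cover image exists), here is a classical spread-spectrum sketch; it is not the paper's learned encoder/decoder, and all parameters are illustrative.

```python
import random

def make_watermark(bits, shape, strength=2.0, seed=7):
    """Pre-generate a watermark from the payload bits alone. Each bit
    gets a zero-sum +/-1 carrier over its own band of rows, so carriers
    are mutually orthogonal and insensitive to constant image offsets."""
    h, w = shape
    rng = random.Random(seed)
    band = h // len(bits)                      # rows per payload bit
    carriers, wm = [], [[0.0] * w for _ in range(h)]
    for i, bit in enumerate(bits):
        n = band * w
        flat = [1] * (n // 2) + [-1] * (n - n // 2)
        rng.shuffle(flat)                      # zero-sum pseudo-random carrier
        carrier = [flat[r * w:(r + 1) * w] for r in range(band)]
        carriers.append((i * band, carrier))
        sign = 1 if bit else -1
        for r in range(band):
            for x in range(w):
                wm[i * band + r][x] = sign * strength * carrier[r][x]
    return wm, carriers

def embed(cover, wm):
    """Add the pre-generated pattern to any cover image."""
    return [[cover[y][x] + wm[y][x] for x in range(len(cover[0]))]
            for y in range(len(cover))]

def decode(image, carriers):
    """Correlate each band with its carrier; positive correlation -> bit 1."""
    bits = []
    for y0, carrier in carriers:
        corr = sum(image[y0 + r][x] * carrier[r][x]
                   for r in range(len(carrier)) for x in range(len(carrier[0])))
        bits.append(1 if corr > 0 else 0)
    return bits

bits = [1, 0, 1, 1]
wm, carriers = make_watermark(bits, (16, 16))
cover = [[128] * 16 for _ in range(16)]   # any pre-existing cover works
```

The paper replaces the fixed carriers with a trained lightweight encoder and the correlator with a deep decoder, plus a learned scale/offset predictor for geometric robustness; this sketch only shows why cover-independence permits pre-generation.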
Title: LECA: A learned approach for efficient cover-agnostic watermarking. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.8.iqsp-314
Shubham Ravindra Alai, Radhesh Bhat
An image signal processor (ISP) transforms a sensor's raw image into an RGB image for use in computer or human vision applications. An ISP is composed of functional blocks, each of which contributes uniquely to making the image best suited to the target application. Each block has several hyperparameters, and each hyperparameter must be tuned (usually manually, by experts, in an iterative manner) to achieve the target image quality. Tuning becomes challenging and increasingly iterative in low to very low light, where the sensor preserves limited detail and the ISP parameters must balance the amount of detail recovered against noise, sharpness, contrast, etc. Extracting maximum information from the image usually requires raising the ISO gain, which in turn degrades noise and color accuracy. Moreover, the number of ISP parameters to tune is so large that considering all of them under such low-light conditions to arrive at the best possible settings is impractical. To address the challenges of manual tuning, especially in low light, we implemented an automatic hyperparameter-optimization model that tunes low-lux images so that they are perceptually equivalent to high-lux images. The image-quality (IQ) validation experiments were carried out under challenging low-light conditions and scenarios using Qualcomm's Spectra ISP simulator with a 13 MP OV sensor, and the performance of the automatically tuned IQ is compared with manually tuned IQ for human-vision use cases. The experimental results show that, with evolutionary algorithms and local optimization, the ISP parameters can be optimized so that, without using any KPI metrics, a low-lux image (or an image captured with a different ISP) can be perceptually improved to be equivalent to a high-lux or well-tuned reference image.
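The reference-based evolutionary tuning loop can be sketched with a toy two-parameter "ISP" (digital gain and gamma) and a plain pixel-difference score standing in for the perceptual comparison. All names, parameters, and values here are illustrative, not Qualcomm's tuning pipeline.

```python
import random

def toy_isp(raw, gain, gamma):
    """Toy one-channel ISP stage: digital gain, clip, then gamma.
    Stands in for the real multi-block pipeline being tuned."""
    return [min(1.0, p * gain) ** gamma for p in raw]

def score(params, raw, reference):
    """Reference-based fitness: mean absolute distance between the tuned
    output and a well-tuned (high-lux) reference image."""
    out = toy_isp(raw, *params)
    return sum(abs(a - b) for a, b in zip(out, reference)) / len(out)

def evolve(raw, reference, pop=20, gens=40, seed=3):
    """Minimal (mu + lambda) evolution strategy over the ISP parameters:
    keep the best quarter, mutate each parent into three children."""
    rng = random.Random(seed)
    population = [(rng.uniform(1, 8), rng.uniform(0.3, 1.5)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: score(p, raw, reference))
        parents = population[:pop // 4]
        children = [(max(0.1, g + rng.gauss(0, 0.2)),
                     max(0.1, y + rng.gauss(0, 0.05)))
                    for g, y in parents for _ in range(3)]
        population = parents + children
    return min(population, key=lambda p: score(p, raw, reference))

# Low-lux "capture": the reference scene attenuated 4x. Under this toy
# model the optimizer should recover roughly gain=4, gamma=1.
scene = [i / 31 for i in range(32)]
raw = [p / 4 for p in scene]
best = evolve(raw, scene)
```

The paper pairs a global evolutionary search of this kind with local optimization and a perceptual (rather than pixel-wise) comparison to the reference; the loop structure is the same.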
Title: Optimization of ISP parameters for low light conditions using a non-linear reference based approach. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.11.hpci-a11
Abstract: In recent years, the rapid development of imaging systems and the growth of compute-intensive imaging algorithms have led to a strong demand for High Performance Computing (HPC) for efficient image processing. However, the two communities, imaging and HPC, have largely remained separate, with little synergy. This conference focuses on research topics that converge HPC and imaging research, with an emphasis on advanced HPC facilities and techniques for imaging systems/algorithms and applications. In addition, the conference provides a unique platform that brings imaging and HPC people together to discuss emerging research topics and techniques that benefit both communities. Papers are solicited on all aspects of research, development, and application of high-performance computing or efficient computing algorithms and systems for imaging applications.
Title: High Performance Computing for Imaging 2023 Overview and Papers Program. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.7.image-a07
Abstract: Recent progress at the intersection of deep learning and imaging has created a new wave of interest in imaging and multimedia analytics topics, from social media sharing to augmented reality, from food and nutrition to health surveillance, from remote sensing and agriculture to wildlife and environment monitoring. Compared to many subjects in traditional imaging, these topics are more multi-disciplinary in nature. This conference will provide a forum for researchers and engineers from various related areas, both academic and industrial, to exchange ideas and share research results in this rapidly evolving field.
Title: Imaging and Multimedia Analytics at the Edge 2023 Conference Overview and Papers Program. IS&T International Symposium on Electronic Imaging.
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.6.iss-a06
Abstract: Solid-state optical sensors and solid-state cameras have established themselves as the imaging systems of choice for many demanding professional uses, such as automotive, space, medical, scientific, and industrial applications. Their advantages of low power, low noise, high resolution, high geometric fidelity, broad spectral sensitivity, and extremely high quantum efficiency have led to a number of revolutionary uses. ISS focuses on image sensing for consumer, industrial, medical, and scientific applications, as well as embedded image processing and pipeline tuning for these camera systems. This conference serves to bring together researchers, scientists, and engineers working in these fields, and provides the opportunity for quick publication of their work. Topics can include, but are not limited to, research and applications in image sensors and detectors, camera/sensor characterization, ISP pipelines and tuning, image artifact correction and removal, image reconstruction, color calibration, image enhancement, HDR imaging, light-field imaging, multi-frame processing, computational photography, 3D imaging, 360/cinematic VR cameras, camera image quality evaluation and metrics, novel imaging applications, imaging system design, and deep learning applications in imaging.
Title: Imaging Sensors and Systems 2023 Conference Overview and Papers Program. IS&T International Symposium on Electronic Imaging.