Pub Date: 2021-03-01; DOI: 10.2352/j.imagingsci.technol.2021.65.2.020503
G. Geleijnse, M. Hakkesteegt, J. D. Groot, R. M. Metselaar
Abstract ENT flexible endoscopes are important for nasopharyngoscopy examinations. However, hospitals do not apply standardized tests to assess the image quality of the endoscopes they use or consider purchasing. The authors evaluated the Rez Checker Target Nano Matte, a test chart designed by Image Science Associates for this purpose. The target was placed in a custom setup that positions the endoscope tip at a distance of 3.0 cm. The primary metrics, including the opto-electronic conversion function, noise, modulation transfer function, and color fidelity, were measured with a custom MATLAB script for the Pentax VNL9-CP, Olympus ENF-V4, and Xion XN HD flexible endoscopes. Three units of each model were measured ten times. In addition, 38 Pentax VNL9-CP endoscopes in regular clinical use were measured to validate the method for quality control. They found that the Rez Checker Target Nano Matte can be used to reliably assess image performance metrics for both procurement and quality control. It seems plausible that better image quality also improves the diagnostic accuracy of the ENT specialist, but this remains to be established.
Title: Measuring Image Quality of ENT Chip-on-tip Endoscopes
Journal of Imaging Science and Technology, 65(2), 20503-1-20503-7
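A chart-based color fidelity check of the kind described above typically reduces to comparing measured patch values against the chart's reference values in CIELAB. A minimal sketch, assuming CIE76 as the difference formula and using hypothetical patch values (the paper's own metric and chart data may differ):

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance in CIELAB (the CIE76 colour difference)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def mean_color_error(measured, reference):
    """Average Delta E over corresponding chart patches."""
    return sum(delta_e_cie76(m, r) for m, r in zip(measured, reference)) / len(measured)

# Hypothetical patch values: (L*, a*, b*) chart reference vs. endoscope measurement.
reference = [(50.0, 0.0, 0.0), (60.0, 20.0, -10.0)]
measured  = [(52.0, 1.0, -1.0), (58.0, 18.0, -8.0)]
print(round(mean_color_error(measured, reference), 2))  # 2.96
```

A lower mean Delta E indicates better color fidelity; a per-endoscope value tracked over time would serve the quality-control use case described above.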
Abstract In this article, a passive haptic learning method for Taiwanese Braille writing was developed for visually impaired individuals through an effective, user-friendly learning strategy. The system was designed to be portable and low cost by applying the concept of passive haptic learning. It comprises a pair of gloves with which visually impaired people can study Braille writing, together with a Braille writing teaching system for learning and practice. Depending on the learning content, the corresponding vibration motors on the glove fingertips vibrate to reproduce the Taiwanese Braille input gestures, and the user feels tactile vibration feedback at the fingertips; the teaching system additionally provides corresponding auditory feedback. After receiving a series of tactile vibration cues, users' finger muscles can memorize the corresponding Braille input gestures through passive haptic learning. In the practice mode, the teaching system randomly selects practice content and announces it aurally; users must then input the corresponding Braille codes using the Braille writing input module. This mode further reinforces users' memorization of the correct Braille codes for Mandarin characters.
Title: Passive Haptic Learning of Taiwanese Braille Writing for Visually Impaired Individuals
Authors: C. Chou, Yi-Zeng Hsieh, Shih-Syun Lin, Tao-Jen Yang, Wei-An Chen, Yung-Long Chu, Hong-Lin Chang
Pub Date: 2021-03-01; DOI: 10.2352/j.imagingsci.technol.2021.65.2.020402
Journal of Imaging Science and Technology, 65(2), 20402-1-20402-9
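The glove-side logic described above, driving fingertip motors according to the dots of a Braille cell, can be sketched as a simple lookup. The motor names and the dot-to-finger assignment below are illustrative assumptions, not the paper's actual hardware mapping:

```python
# Map a six-dot Braille cell to fingertip motor activations.
# Dot numbering follows the standard 2x3 Braille cell (dots 1-3 left
# column, dots 4-6 right column); finger assignment is hypothetical.
MOTOR_FOR_DOT = {1: "L_index", 2: "L_middle", 3: "L_ring",
                 4: "R_index", 5: "R_middle", 6: "R_ring"}

def vibration_sequence(dots):
    """Return the ordered list of motors to pulse for one Braille cell."""
    return [MOTOR_FOR_DOT[d] for d in sorted(dots)]

print(vibration_sequence({1, 4, 5}))  # ['L_index', 'R_index', 'R_middle']
```

In a passive-learning loop, the teaching system would replay such sequences repeatedly while the wearer attends to other tasks.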
Pub Date: 2021-03-01; DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2021.65.2.020504
B. Shin, Jeong-Kweon Seo
Abstract In this study, the authors generate panoramic images using feature-based registration of drone-based aerial thermal images. For drone aerial images, angular distortion caused by instability in the shooting altitude degrades stitching performance. Furthermore, in thermal aerial images the same objects photographed at the same time may have different colors owing to relative temperature, which makes stitching still harder. Applying the scale-invariant feature transform descriptor, they propose an a posteriori outlier rejection scheme to estimate the hypothesis of the mapping function for stitching consecutive thermal aerial images. By extending the method of optimal choice of initial candidate inliers (OCICI) with an a posteriori outlier rejection scheme based on cross-correlation, the authors obtain elaborate stitching of thermal aerial images. The proposed method is numerically verified by comparing it with other post-outlier-rejection treatments employing OCICI. In addition, after Poisson blending using the finite difference method, the stitching performance is compared with benchmark software such as the MATLAB toolbox, OpenCV, Autopano Giga, Hugin, and PTGui.
Title: A Posteriori Outlier Rejection Approach Owing to the Well-ordering Property of a Sample Consensus Method for the Stitching of Drone-based Thermal Aerial Images
Journal of Imaging Science and Technology, 65(2), 20504-1-20504-15
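The cross-correlation test at the heart of such an a posteriori rejection step can be sketched as follows: candidate matches whose local image patches correlate poorly are discarded. This is a generic normalized cross-correlation on flattened patches, given as an illustration rather than the authors' exact formulation:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length flattened patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def reject_outliers(matches, threshold=0.8):
    """Keep only candidate matches (patch_a, patch_b) whose patches
    correlate above the threshold; the rest are treated as outliers."""
    return [m for m in matches if ncc(m[0], m[1]) >= threshold]
```

In the stitching context above, each candidate SIFT correspondence would contribute one patch pair, and the surviving inliers would feed the homography estimate.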
Pub Date: 2021-03-01; DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2021.65.2.020403
Yu-Xiang Zhao, Yi-Zeng Hsieh, Shih-Syun Lin
Abstract With advances in technology, photo booths equipped with automatic capturing systems have gradually replaced the identification (ID) photo services provided by photography studios, enabling consumers to save considerable time and money. Common automatic capturing systems employ text and voice instructions to guide users in capturing their ID photos; however, the results may not conform to ID photo specifications. To address this issue, this study proposes an ID photo capturing algorithm that automatically detects facial contours and adjusts the size of captured images. The authors adopted a deep learning method (You Only Look Once) to detect the face and applied a semi-automatic facial-landmark annotation technique to locate the lip and chin regions within the facial region. In the experiments, subjects were seated at various distances and heights to test the performance of the proposed algorithm. The experimental results show that the proposed algorithm can effectively and accurately capture ID photos that satisfy the required specifications.
Title: The Development of an Identification Photo Booth System based on a Deep Learning Automatic Image Capturing Method
Journal of Imaging Science and Technology, 65(2), 20403-1-20403-10
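Once the chin and crown positions are known from the detected landmarks, sizing the crop so the head fills a fixed fraction of a specification-compliant frame is plain geometry. A sketch under assumed parameters; the 70% head fraction and the 35x45 mm aspect ratio are illustrative, not the paper's specification:

```python
def id_crop_box(chin_y, crown_y, face_cx, head_frac=0.7, aspect=7 / 9):
    """Crop box (left, top, right, bottom) in pixel coordinates, sized so
    the head occupies head_frac of the photo height and the frame has the
    given width/height aspect (7/9 corresponds to a 35x45 mm photo)."""
    head_h = chin_y - crown_y
    crop_h = head_h / head_frac          # total frame height
    crop_w = crop_h * aspect             # frame width from aspect ratio
    top = crown_y - (crop_h - head_h) / 2  # equal margin above and below
    left = face_cx - crop_w / 2          # centre horizontally on the face
    return (left, top, left + crop_w, top + crop_h)
```

A real system would clamp the box to the sensor frame and reposition the subject when clamping is needed.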
Pub Date: 2021-01-01; DOI: 10.2352/j.imagingsci.technol.2021.65.1.010501
Yuan JiaYong, Longchen Ma, Maoyi Tian, Lu Xiushan
Abstract The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which reduces the density of the collected scanning points and hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data and image data into the matching of feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
Title: Registration and Fusion of UAV LiDAR System Sequence Images and Laser Point Clouds
Journal of Imaging Science and Technology, 65(1)
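The first step above, producing an intensity image from the point cloud, amounts to rasterizing point intensities onto a grid. A minimal sketch in which a sparse dict-of-cells stands in for a full raster; a real ULS pipeline would also handle projection, occlusion, and interpolation of empty cells:

```python
def intensity_image(points, cell=1.0):
    """Rasterize (x, y, intensity) points into a sparse grid 'image' by
    averaging the intensities of all points falling in each cell."""
    sums, counts = {}, {}
    for x, y, i in points:
        key = (int(x // cell), int(y // cell))   # grid cell index
        sums[key] = sums.get(key, 0.0) + i
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Feature points detected on such an intensity raster can then be matched against the optical image, as the abstract describes.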
Pub Date: 2020-11-01; DOI: 10.2352/j.imagingsci.technol.2020.64.6.060402
Lei Zhang, Linna Ji, Hualong Jiang, Fengbao Yang, Xiaoxia Wang
Abstract Multi-modal image fusion can describe the features of a scene more accurately than a single image. Because of the different imaging mechanisms, the differences between multi-modal images are large, which leads to poor contrast in the fused images. Therefore, a simple and effective spatial-domain fusion algorithm based on variable-parameter fractional difference enhancement is proposed. Exploiting the characteristics of fractional difference enhancement, a variable-parameter fractional difference is introduced and the multi-modal images are enhanced repeatedly, yielding multiple enhanced images. A correlation coefficient constrains the number of enhancement cycles. In addition, an energy contrast is used to extract the contrast features of the image, and the tangent function is used to obtain the fusion weight, producing multiple contrast-enhanced initial fused images. Finally, a weighted average yields the final fused image. Experimental results demonstrate that the proposed fusion algorithm effectively preserves the contrast features between images and improves the quality of the fused images.
Title: Multi-modal Image Fusion Algorithm based on Variable Parameter Fractional Difference Enhancement
Journal of Imaging Science and Technology, 64(6), 60402-1-60402-12
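The weighting idea, deriving per-pixel fusion weights from a contrast measure passed through a tangent function, can be sketched on flat pixel lists. The mid-grey contrast proxy and the tan scaling below are illustrative assumptions; the paper's energy contrast is more elaborate:

```python
import math

def fuse(a, b):
    """Weighted-average fusion of two images (flat lists of values in
    [0, 1]); weights come from a tangent-shaped contrast response."""
    out = []
    for pa, pb in zip(a, b):
        # Contrast proxy (assumption): absolute deviation from mid-grey.
        ca, cb = abs(pa - 0.5), abs(pb - 0.5)
        # Tangent-shaped weighting: higher-contrast pixels dominate.
        wa = math.tan(ca * math.pi / 4)
        wb = math.tan(cb * math.pi / 4)
        s = wa + wb
        out.append((wa * pa + wb * pb) / s if s else (pa + pb) / 2)
    return out
```

The tangent grows superlinearly on [0, pi/4), so strongly contrasted pixels receive disproportionately large weights, which is the contrast-preserving behaviour the abstract aims for.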
Pub Date: 2020-09-01; DOI: 10.2352/j.imagingsci.technol.2020.64.5.050401
Anan Tanwilaisiri, P. Kajondecha
Abstract A fused deposition modeling (FDM) printer and a paste extrusion system were integrated, and supercapacitor samples were fabricated using a combination of the two three-dimensional (3D) printing techniques. FDM provided a simple method for creating the frame of electric double-layer capacitor (EDLC) samples, while the paste extrusion system made it possible to deposit the different materials that complete the EDLC's functions. Combining these two 3D printing methods yielded one continuous manufacturing process with high manufacturing accuracy. Different materials were used to build the current collectors and electrodes: silver and carbon conductive paints served as current collector materials, and electrode materials based on activated carbon (AC), carbon conductive paint, and their combination were prepared as three different slurries and deposited to form the electrodes of the EDLC samples. The results showed that silver conductive paint was a suitable current collector material, and carbon conductive paint mixed with AC was highly effective as an electrode material for supercapacitors.
Title: Three-dimensional Printing of Supercapacitors based on Different Electrodes
Journal of Imaging Science and Technology, 64(5), 50401-1-50401-10
Pub Date: 2020-09-01; DOI: 10.2352/j.imagingsci.technol.2020.64.5.050402
P. Jonglearttrakull, K. Fushinobu, M. Kadonaga
Abstract The evaporation rate of a droplet has been explained in relation to the thickness of the boundary layer and the conditions near the droplet's surface, but experimental results remain very limited. This study investigates the boundary-layer thickness of an ethanol-water mixture droplet and its effect on the evaporation rate using Z-type Schlieren visualization. Single and double droplets are tested and compared to identify the effect of the second droplet on the average and instantaneous evaporation rates. The double droplet's lifetime is found to be longer than the single droplet's. The formation of a larger vapor region above the droplet indicates a higher instantaneous evaporation rate. The boundary-layer thickness is found to increase with ethanol concentration. Furthermore, a larger vapor distribution area is observed at higher ethanol concentration, which explains the faster evaporation rate in that case.
Title: Effects of the Thickness of Boundary Layer on Droplet's Evaporation Rate
Journal of Imaging Science and Technology, 64(5)
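Droplet lifetimes of the kind compared above are often interpreted through the classical d-squared law, d(t)^2 = d0^2 - K t, where K is the evaporation-rate constant. A sketch with arbitrary numbers; for the ethanol-water mixtures studied here, K would in fact vary over the droplet's lifetime as the more volatile component depletes:

```python
def droplet_lifetime(d0, k):
    """d-squared law lifetime: time for d(t)^2 = d0^2 - K*t to reach zero
    (d0 in mm, K in mm^2/s, result in s)."""
    return d0 ** 2 / k

def diameter_at(t, d0, k):
    """Instantaneous diameter under the d-squared law (0 once evaporated)."""
    rem = d0 ** 2 - k * t
    return rem ** 0.5 if rem > 0 else 0.0
```

Deviations of measured lifetimes from this law are one way to quantify the boundary-layer and neighbour-droplet effects the study visualizes.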
Abstract For mass production, multiple color halftoning screen printing (MCHSP) can be considered an alternative textile printing technology when vivid color gradation is needed and the cost of digital printing is a concern. MCHSP uses the same equipment as traditional screen printing to print overlapping multi-color gradations under halftoning patterns by applying dedicated treatments for color separation and calibration. To ensure color quality, equipment calibration and tone curve compensation are required to compensate for the variables arising from equipment setup and heterogeneous fabrics. In this research, the authors present a tone curve compensation procedure that eliminates the discrepancy arising from heterogeneous fabrics. Experimental results based on 55 samples of 44 different fabrics show the effectiveness of the compensation and reveal the distribution of the average compensation percentage across fabrics.
Title: Tone Curve Compensation of Multiple Color Halftoning Screen Printing for Heterogeneous Fabrics
Authors: Chao-Lung Yang, Chih-Hao Chien, Yen-Ping Lin, Chi-Hsun Chien
Pub Date: 2020-09-01; DOI: 10.2352/j.imagingsci.technol.2020.64.5.050406
Journal of Imaging Science and Technology, 64(5)
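Tone curve compensation of this kind is typically implemented by inverting the measured tone response: for each target tone, find the input value that actually produces it on a given fabric. A linear-interpolation sketch; the measured curve in the test, showing dot gain, is hypothetical and the paper's procedure may differ:

```python
def compensation_curve(inputs, measured, targets):
    """For each target tone, linearly interpolate the input value that the
    press must receive so the printed result hits that tone. `inputs` and
    `measured` are matching, monotonically increasing tone samples."""
    out = []
    for t in targets:
        for (x0, y0), (x1, y1) in zip(zip(inputs, measured),
                                      zip(inputs[1:], measured[1:])):
            if y0 <= t <= y1:
                # Invert the segment: which input yields measured tone t?
                out.append(x0 if y1 == y0
                           else x0 + (x1 - x0) * (t - y0) / (y1 - y0))
                break
        else:
            # Target outside the measured range: clamp to the nearest end.
            out.append(inputs[-1] if t > measured[-1] else inputs[0])
    return out
```

Per-fabric curves built this way would directly yield the "average compensation percentage" distribution the abstract reports.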
Pub Date: 2020-09-01; DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.5.050403
María Cristina Rodríguez-Rivero, J. Philpott, Alex B. Hann, J. L. Harries, Rónán Daly
Abstract Continuous inkjet printing relies on steering charged droplets accurately to the surface by using electric fields. A vital component is the set of deflecting electrodes within the printhead, which create these fields. Unwanted deposition of ink on the electrodes, known as build-up, is a concern for operators because this modifies the applied electric field, affects long-term reliability, and requires manual intervention. However, this has not been widely reported or explored. Here, the authors report a laser-based high-speed visualization technique to observe build-up and show that it stems from small satellite droplets that break off from the main printed drops. They characterize the material build-up and reveal its nanoscale particulate nature. Combining the tracking with characterization allows quantifying the charge-to-mass ratio of these droplets. This study provides a route to understanding the build-up phenomenon, and it will enable optimization of printing conditions and printing reliability.
Title: Deflecting the Issue: The Origin of Nanoscale Material Build-up in Continuous Inkjet Printing
Journal of Imaging Science and Technology, 64(5)
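The charge-to-mass quantification mentioned above follows from constant-field kinematics: a droplet of speed v crossing deflection plates of length L in a field E is deflected by y = (q/m) E L^2 / (2 v^2) while between the plates. Inverting for q/m (SI units; the sample numbers in the test are illustrative, not from the paper):

```python
def charge_to_mass(deflection, field, plate_len, speed):
    """Invert y = (q/m) * E * L**2 / (2 * v**2) for q/m in C/kg.
    deflection y in m, field E in V/m, plate_len L in m, speed v in m/s."""
    return 2.0 * deflection * speed ** 2 / (field * plate_len ** 2)
```

Applied to the tracked satellite droplets, a measured in-plate deflection, droplet speed, and field strength give the charge-to-mass ratio directly; drift after the plates would add a further, separately modelled displacement.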