Fabrication of 3D Temperature Sensor Using Magnetostrictive Inkjet Printhead
Pub Date: 2020-09-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.5.050405
Young-Woo Park, M. Noh
Abstract Recently, three-dimensional (3D) printing has attracted much attention for creating and manufacturing objects of arbitrary shape. For the first time, in this work, we present the fabrication of a low-cost, inkjet-printed 3D temperature sensor on a 3D-shaped thermoplastic substrate suitable for packaging, flexible electronics, and other printed applications. The design, fabrication, and testing of the 3D printed temperature sensor are presented. The sensor pattern is designed using a computer-aided design program and fabricated by drop-on-demand inkjet printing with a magnetostrictive inkjet printhead at room temperature. The pattern is printed using commercially available conductive silver nanoparticle ink at a moving speed of 90 mm/min. The inkjet-printed temperature sensor exhibits good electrical properties, including good sensitivity and linearity. The results indicate that 3D inkjet printing technology may have great potential for applications in sensor fabrication.
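A printed metal-trace sensor of this kind is typically characterized by fitting resistance against temperature: sensitivity is the slope of the fit, and linearity can be summarized by the coefficient of determination. The sketch below illustrates that routine with hypothetical resistance readings (the abstract reports no raw data), assuming a roughly linear response R(T) = R0(1 + alpha(T - T0)).

```python
import numpy as np

# Hypothetical resistance readings (ohm) of a printed silver trace at several
# temperatures (deg C); illustrative values only, not the paper's data.
temps = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
resistance = np.array([100.0, 103.9, 107.8, 111.7, 115.6])

# Linear fit R(T) = slope * T + intercept
slope, intercept = np.polyfit(temps, resistance, 1)
r0 = slope * temps[0] + intercept   # resistance at the reference temperature
alpha = slope / r0                  # temperature coefficient of resistance (1/degC)

# Coefficient of determination as a simple linearity measure
pred = slope * temps + intercept
ss_res = np.sum((resistance - pred) ** 2)
ss_tot = np.sum((resistance - resistance.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"sensitivity = {slope:.4f} ohm/degC, TCR = {alpha:.2e} /degC, R^2 = {r_squared:.5f}")
```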
{"title":"Fabrication of 3D Temperature Sensor Using Magnetostrictive Inkjet Printhead","authors":"Young-Woo Park, M. Noh","doi":"10.2352/j.imagingsci.technol.2020.64.5.050405","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.5.050405","url":null,"abstract":"Abstract Recently, the three-dimensional (3D) printing technique has attracted much attention for creating objects of arbitrary shape and manufacturing. For the first time, in this work, we present the fabrication of an inkjet printed low-cost 3D temperature sensor on a 3D-shaped\u0000 thermoplastic substrate suitable for packaging, flexible electronics, and other printed applications. The design, fabrication, and testing of a 3D printed temperature sensor are presented. The sensor pattern is designed using a computer-aided design program and fabricated by drop-on-demand\u0000 inkjet printing using a magnetostrictive inkjet printhead at room temperature. The sensor pattern is printed using commercially available conductive silver nanoparticle ink. A moving speed of 90 mm/min is chosen to print the sensor pattern. The inkjet printed temperature sensor is demonstrated,\u0000 and it is characterized by good electrical properties, exhibiting good sensitivity and linearity. The results indicate that 3D inkjet printing technology may have great potential for applications in sensor fabrication.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45930771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of Geometric Features on Color Similarity Perception of Displayed 3D Tablets
Pub Date: 2020-09-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.5.050404
Jiangping Yuan, Hua Li, Baohui Xu, G. Chen
Abstract To explore the effects of geometric features on the color similarity perception of displayed three-dimensional (3D) tablets designed with color 3D modeling techniques or printed with color 3D printing techniques, two subjective similarity scaling tasks were conducted for color tablets with four shape features (circular, oval, triangular-columnar, and rounded-cuboid shapes) and four notch features (straight V, straight U, crisscross V, and crisscross U shapes) displayed on a calibrated monitor, using the nine-level category judgement method. Invited observers were asked to sort all displayed samples into tablet groups across six surface colors (aqua blue, bright green, pink, orange yellow, bright red, and silvery white), and all perceived similarity values were recorded and compared with those of the original samples. The results showed that the similarity perception of the tested tablets was not appreciably affected by the given shape and notch features, and that it should be judged by a flexible interval rather than a fixed color difference. This research provides practical insight into the visualization of color similarity perception for displayed personalized tablets to advance precision medicine by 3D printing.
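The "fixed color difference" the abstract argues against is conventionally a CIELAB distance. A minimal sketch of that baseline metric (CIE76 delta-E*ab) follows; the Lab coordinates are hypothetical stand-ins, since real values would come from measurements of the calibrated monitor.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Hypothetical CIELAB coordinates (L*, a*, b*) for two tablet surface colors.
aqua_blue = (62.0, -20.0, -25.0)
pink = (75.0, 25.0, 5.0)
print(f"dE*ab = {delta_e_ab(aqua_blue, pink):.2f}")
```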
{"title":"Impact of Geometric Features on Color Similarity Perception of Displayed 3D Tablets","authors":"Jiangping Yuan, Hua Li, Baohui Xu, G. Chen","doi":"10.2352/j.imagingsci.technol.2020.64.5.050404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.5.050404","url":null,"abstract":"Abstract To explore the effects of geometric features on the color similarity perception of displayed three-dimensional (3D) tablets designed by color 3D modeling techniques or printed by color 3D printing techniques, two subjective similarity scaling tasks were conducted\u0000 for color tablets with four shape features (circular, oval, triangular-columnar, and rounded-cuboid shapes) and four notch features (straight V, straight U, crisscross V, and crisscross U shapes) displayed on a calibrated monitor using the nine-level category judgement method. Invited observers\u0000 were asked to assort all displayed samples into tablet groups using six surface colors (aqua blue, bright green, pink, orange yellow, bright red, and silvery white), and all perceived similarity values were recorded and compared to original samples successively. The results showed that the\u0000 similarity perception of tested tablets was inapparently affected by the given shape features and notch features, and it should be judged by a flexible interval rather than by a fixed color difference. This research provides practical insight into the visualization of color similarity perception\u0000 for displayed personalized tablets to advance precision medicine by 3D printing.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49180094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Identification Algorithm of Deep Compensation Transformation Matrix based on Main Component Feature Dimensionality Reduction
Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040408
Jiaqi Guo
Abstract In order to reconstruct and identify three-dimensional (3D) images, an image identification algorithm based on a deep learning compensation transformation matrix with principal component feature dimensionality reduction is proposed. It comprises line matching built on point matching, 3D reconstruction integrating points and lines, parallelized automatic differentiation applied to bundle adjustment, parallelized solution of positive definite matrix systems applied to bundle adjustment, and an improved classifier based on a deep compensation transformation matrix. The performance and reconstruction effect of the algorithm are verified on the INRIA database, and the accuracy and success rates are compared with those of L1APG, VTD, CT, MT, etc. The results show that random transformation and re-sampling of samples during training can improve the performance of the classifier prediction algorithm while keeping training time short. The reconstructed image obtained by the algorithm has a low correlation with the original image, with high number of pixels change rate (NPCR) and unified average changing intensity (UACI) values and low peak signal-to-noise ratio (PSNR) values; the image reconstruction effect is good and offers an advantage in image capacity. Compared with other algorithms, the proposed algorithm has certain advantages in accuracy and success rate, with stable performance and good robustness. It can therefore be concluded that image recognition based on principal component feature dimensionality reduction provides a good recognition effect, which is of guiding significance for research in the image recognition field.
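NPCR and UACI have standard definitions: the fraction of pixels that differ between two images, and the mean absolute intensity change normalized by 255. A small sketch for 8-bit grayscale images, assuming the paper uses these conventional formulas:

```python
import numpy as np

def npcr_uaci(img1, img2):
    """NPCR and UACI (both in percent) between two equal-size 8-bit images."""
    a = img1.astype(np.float64)
    b = img2.astype(np.float64)
    npcr = 100.0 * np.mean(a != b)                 # fraction of changed pixels
    uaci = 100.0 * np.mean(np.abs(a - b) / 255.0)  # mean normalized intensity change
    return npcr, uaci

# Illustrative usage with random images.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
y = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(npcr_uaci(x, y))
```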
{"title":"Image Identification Algorithm of Deep Compensation Transformation Matrix based on Main Component Feature Dimensionality Reduction","authors":"Jiaqi Guo","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040408","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040408","url":null,"abstract":"Abstract In order to reconstruct and identify three-dimensional (3D) images, an image identification algorithm based on a deep learning compensation transformation matrix of main component feature dimensionality reduction is proposed, including line matching with point matching\u0000 as the base, 3D reconstruction of point and line integration, parallelization automatic differentiation applied to bundle adjustment, parallelization positive definite matrix system solution applied to bundle adjustment, and an improved classifier based on a deep compensation transformation\u0000 matrix. Based on the INRIA database, the performance and reconstruction effect of the algorithm are verified. The accuracy rate and success rate are compared with L1APG, VTD, CT, MT, etc. The results show that random transformation and re-sampling of samples during training can improve the\u0000 performance of the classifier prediction algorithm under the condition that the training time is short. The reconstructed image obtained by the algorithm described in this study has a low correlation with the original image, with high number of pixels change rate (NPCR) and unified average\u0000 changing intensity (UACI) values and low peak signal to noise ratio (PSNR) values. Image reconstruction effect is better with image capacity advantage. Compared with other algorithms, the proposed algorithm has certain advantages in accuracy and success rate with stable performance and good\u0000 robustness. Therefore, it can be concluded that image recognition based on the dimension reduction of principal component features provides good recognition effect, which is of guiding significance for research in the image recognition field.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40408-1-40408-8"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43180252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object Tracking Algorithm based on Improved Siamese Convolutional Networks Combined with Deep Contour Extraction and Object Detection Under Airborne Platform
Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040409
Xiuyan Tian, Haifang Li, Hongxia Deng
Abstract Object detection and tracking is an indispensable module in airborne optoelectronic equipment, and its detection and tracking performance is directly related to the accuracy of object perception. Recently, improved Siamese network tracking algorithms have achieved excellent results on various challenging data sets. However, most of the improved algorithms use local fixed search strategies, which cannot update the template. In addition, the template introduces background interference, which leads to tracking drift and eventually causes tracking failure. To solve these problems, this article proposes an improved fully connected Siamese tracking algorithm combined with object contour extraction and object detection, which uses the contour template of the object instead of the bounding-box template to reduce background clutter interference. First, the contour detection network automatically obtains the closed contour information of the object, and a flood-filling clustering algorithm is used to obtain the contour template. Then, the contour template and the search area are fed into the improved Siamese network to obtain the optimal tracking score and adaptively update the contour template. If the object is fully occluded or lost, the YOLOv3 network is used to search for the object across the entire field of view to achieve stable tracking throughout. Extensive qualitative and quantitative simulation results on a benchmark data set and a flying data set show that the improved model not only improves object tracking performance under complex backgrounds but also improves the response time of airborne systems, giving it high engineering application value.
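The abstract does not spell out its flood-filling clustering step. The generic BFS flood fill below sketches how a closed contour map could be turned into a filled template mask; `closed_contour` (a binary edge map) and `seed` (an interior point) are assumed inputs, not names from the paper.

```python
from collections import deque
import numpy as np

def flood_fill_mask(closed_contour, seed):
    """Fill the interior of a closed contour starting from an interior seed;
    returns a boolean mask of the filled region (4-connected BFS)."""
    h, w = closed_contour.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if not (0 <= r < h and 0 <= c < w):
            continue
        if mask[r, c] or closed_contour[r, c]:  # stop at contour pixels
            continue
        mask[r, c] = True
        q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask
```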
{"title":"Object Tracking Algorithm based on Improved Siamese Convolutional Networks Combined with Deep Contour Extraction and Object Detection Under Airborne Platform","authors":"Xiuyan Tian, Haifang Li, Hongxia Deng","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040409","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040409","url":null,"abstract":"Abstract Object detection and tracking is an indispensable module in airborne optoelectronic equipment, and its detection and tracking performance is directly related to the accuracy of object perception. Recently, the improved Siamese network tracking algorithm has achieved\u0000 excellent results on various challenging data sets. However, most of the improved algorithms use local fixed search strategies, which cannot update the template. In addition, the template will introduce background interference, which will lead to tracking drift and eventually cause tracking\u0000 failure. In order to solve these problems, this article proposes an improved fully connected Siamese tracking algorithm combined with object contour extraction and object detection, which uses the contour template of the object instead of the bounding-box template to reduce the background\u0000 clutter interference. First, the contour detection network automatically obtains the closed contour information of the object and uses the flood-filling clustering algorithm to obtain the contour template. Then, the contour template and the search area are fed into the improved Siamese network\u0000 to obtain the optimal tracking score value and adaptively update the contour template. If the object is fully obscured or lost, the YoLo v3 network is used to search the object in the entire field of view to achieve stable tracking throughout the process. A large number of qualitative and\u0000 quantitative simulation results on benchmark test data set and the flying data set show that the improved model can not only improve the object tracking performance under complex backgrounds, but also improve the response time of airborne systems, which has high engineering application value.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40409-1-40409-11"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45720604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speaker Identity Recognition by Acoustic and Visual Data Fusion through Personal Privacy for Smart Care and Service Applications
Pub Date: 2020-07-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.4.040404
I. Ding, C.-M. Ruan
Abstract With rapid developments in techniques related to the Internet of Things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware emotion recognition will gain much attention and potentially become a requirement in smart home or office environments. In such intelligent applications, identity recognition of specific members in indoor spaces is a crucial issue. In this study, a combined audio-visual identity recognition approach was developed. In this approach, visual information obtained from face detection was incorporated into acoustic Gaussian likelihood calculations to construct speaker classification trees, significantly enhancing the Gaussian mixture model (GMM)-based speaker recognition method. The study considered the privacy of the monitored person and reduced the degree of surveillance. Moreover, the popular Kinect sensor device, which contains a microphone array, was adopted to obtain acoustic voice data from the person. The proposed approach deploys only two cameras in a specific indoor space to conveniently perform face detection and quickly determine the total number of people in the space. This head count obtained by face detection was used to effectively regulate the design of an accurate GMM speaker classification tree. Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method: the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve excellent identity recognition rates of 84.28% and 83%, respectively; both are higher than that of the conventional GMM approach (80.5%). Moreover, as the extremely complex face recognition calculations required in general audio-visual speaker recognition tasks are avoided, the proposed approach is rapid and efficient, adding only 0.051 s to the average recognition time.
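At its core, GMM-based speaker identification scores an utterance's acoustic features against each enrolled speaker's mixture model and picks the highest average log-likelihood; the face-detection head count then narrows the candidate set. A minimal sketch using scikit-learn's GaussianMixture, with random stand-in features (the paper's tree construction and Kinect pipeline are not reproduced):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical enrollment: one GMM per speaker, trained on MFCC-like features.
rng = np.random.default_rng(1)
speakers = {}
for name in ("alice", "bob"):
    feats = rng.normal(size=(500, 13))  # stand-in for MFCC training features
    gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    speakers[name] = gmm.fit(feats)

def identify(utterance_feats, candidates):
    """Pick the candidate whose GMM gives the highest average log-likelihood.
    `candidates` would be narrowed by the face-detection people count."""
    return max(candidates, key=lambda n: speakers[n].score(utterance_feats))

test = rng.normal(size=(200, 13))
print(identify(test, ["alice", "bob"]))
```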
{"title":"Speaker Identity Recognition by Acoustic and Visual Data Fusion through Personal Privacy for Smart Care and Service Applications","authors":"I. Ding, C.-M. Ruan","doi":"10.2352/j.imagingsci.technol.2020.64.4.040404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.4.040404","url":null,"abstract":"Abstract With rapid developments in techniques related to the internet of things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware-based emotion recognition will gain much attention and potentially\u0000 be a requirement in smart home or office environments. In such intelligence applications, identity recognition of the specific member in indoor spaces will be a crucial issue. In this study, a combined audio-visual identity recognition approach was developed. In this approach, visual information\u0000 obtained from face detection was incorporated into acoustic Gaussian likelihood calculations for constructing speaker classification trees to significantly enhance the Gaussian mixture model (GMM)-based speaker recognition method. This study considered the privacy of the monitored person and\u0000 reduced the degree of surveillance. Moreover, the popular Kinect sensor device containing a microphone array was adopted to obtain acoustic voice data from the person. The proposed audio-visual identity recognition approach deploys only two cameras in a specific indoor space for conveniently\u0000 performing face detection and quickly determining the total number of people in the specific space. Such information pertaining to the number of people in the indoor space obtained using face detection was utilized to effectively regulate the accurate GMM speaker classification tree design.\u0000 Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method in this study—the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve\u0000 excellent identity recognition rates of 84.28% and 83%, respectively; both values are higher than the rate of the conventional GMM approach (80.5%). Moreover, as the extremely complex calculations of face recognition in general audio-visual speaker recognition tasks are not required, the proposed\u0000 approach is rapid and efficient with only a slight increment of 0.051 s in the average recognition time.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40404-1-40404-16"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48786748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Intensity Weighting Approach Using Convolutional Neural Network for Optic Disc Segmentation in Fundus Image
Pub Date: 2020-07-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.4.040401
Ga Young Kim, Sang Hyeok Lee, Sung Min Kim
Abstract This study proposed a novel intensity weighting approach using a convolutional neural network (CNN) for fast and accurate optic disc (OD) segmentation in fundus images. The proposed method consists of three main steps: CNN-based calculation of pixel importance, image reconstruction, and OD segmentation. In the first step, a CNN model composed of four convolution and pooling layers was designed and trained. A heat map was then generated by applying a gradient-weighted class activation map (Grad-CAM) algorithm to the final convolution layer of the model. In the next step, each pixel of the image was assigned a weight based on the previously obtained heat map. In addition, retinal vessels that may interfere with OD segmentation were detected and substituted based on the nearest neighbor pixels. Finally, the OD region was segmented using Otsu's method. As a result, the proposed method achieved a high segmentation accuracy of 98.61%, about 4.61% higher than the result without the weight assignment.
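A compact sketch of the weighting-then-Otsu idea follows; it is not the authors' exact pipeline, and the heat-map normalization and the scikit-image Otsu call are assumptions made for illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu

def weighted_od_segmentation(gray, heatmap):
    """Weight each pixel by a normalized CNN heat map, then apply Otsu's
    threshold to the weighted image to extract a disc-like region."""
    w = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)  # scale to [0, 1]
    weighted = gray.astype(np.float64) * w                    # emphasize hot pixels
    return weighted > threshold_otsu(weighted)
```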
{"title":"A Novel Intensity Weighting Approach Using Convolutional Neural Network for Optic Disc Segmentation in Fundus Image","authors":"Ga Young Kim, Sang Hyeok Lee, Sung Min Kim","doi":"10.2352/j.imagingsci.technol.2020.64.4.040401","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.4.040401","url":null,"abstract":"Abstract This study proposed a novel intensity weighting approach using a convolutional neural network (CNN) for fast and accurate optic disc (OD) segmentation in a fundus image. The proposed method mainly consisted of three steps involving CNN-based importance calculation\u0000 of pixel, image reconstruction, and OD segmentation. In the first step, the CNN model composed of four convolution and pooling layers was designed and trained. Then, the heat map was generated by applying a gradient-weighted class activation map algorithm to the final convolution layer of\u0000 the model. In the next step, each of the pixels on the image was assigned a weight based on the previously obtained heat map. In addition, the retinal vessel that may interfere with OD segmentation was detected and substituted based on the nearest neighbor pixels. Finally, the OD region was\u0000 segmented using Otsu’s method. As a result, the proposed method achieved a high segmentation accuracy of 98.61%, which was improved about 4.61% than the result without the weight assignment.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40401-1-40401-9"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42857105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effective Reflection Suppression Method for Vehicle Detection in Complex Nighttime Traffic Scenes
Pub Date: 2020-07-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.4.040402
W. Tsai, Hung-Ju Chen
Abstract Headlights are the most distinct and stable image features in nighttime scenes. This study proposes a headlight detection and pairing algorithm that adapts to numerous scenes to achieve accurate vehicle detection at night. The algorithm improves on conventional histogram equalization by using the difference between the image before and after equalization to suppress ground reflections and noise; headlight detection is then completed using this difference as a feature. In addition, the authors combine coordinate information, moving distance, symmetry, and stability over time to pair headlights, thus enabling vehicle detection at night. The method effectively handles complex scenes involving high-speed movement, multiple headlights, and rain. Finally, the algorithm was verified on videos of highway scenes, achieving a detection rate as high as 96.67%. It can be implemented on the Raspberry Pi embedded platform, where its execution speed reaches 25 frames per second.
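One plausible reading of the equalization-difference step is sketched below with OpenCV; the thresholds, and the assumption that ground reflections respond more strongly to equalization than near-saturated headlights, are illustrative and not taken from the paper.

```python
import cv2

def headlight_candidates(gray):
    """Candidate headlight mask for an 8-bit grayscale frame: pixels that are
    bright in the original image and change little under equalization."""
    eq = cv2.equalizeHist(gray)
    diff = cv2.absdiff(eq, gray)  # reflections shift more than headlights (assumed)
    stable = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY_INV)[1]
    bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)[1]
    return cv2.bitwise_and(bright, stable)
```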
{"title":"Effective Reflection Suppression Method for Vehicle Detection in Complex Nighttime Traffic Scenes","authors":"W. Tsai, Hung-Ju Chen","doi":"10.2352/j.imagingsci.technol.2020.64.4.040402","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.4.040402","url":null,"abstract":"Abstract Headlight is the most explicit and stable image feature in nighttime scenes. This study proposes a headlight detection and pairing algorithm that adapts to numerous scenes to achieve accurate vehicle detection in the nighttime. This algorithm improved the conventional\u0000 histogram equalization by using the difference before and after the equalization to suppress the ground reflection and noise. Then, headlight detection was completed based on this difference as a feature. In addition, the authors combined coordinate information, moving distance, symmetry,\u0000 and stable time to implement headlight pairing, thus enabling vehicle detection in the nighttime. This study effectively overcame complex scenes such as high-speed movement, multi-headlight, and rains. Finally, the algorithm was verified by videos of highway scenes; the detection rate was\u0000 as high as 96.67%. It can be implemented on the Raspberry Pi embedded platform, and its execution speed can reach 25 frames per second.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40402-1-40402-9"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48227043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying Color Doppler Image based Virtual Surgery in Placenta Previa Cesarean Section
Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040410
Guanghui Zhang, X. Feng
Abstract Objective: To study the application of image processing technology in cesarean sections for placenta previa, thereby reducing the occurrence of high-risk pregnancies. Methods: First, the method of gray image enhancement is analyzed. This method enhances the gray difference between the target and the background and highlights useful information; the sources and types of noise are summarized, and common filtering and noise reduction methods are proposed to suppress the noise. For edge detection, pixel-level and sub-pixel-level edge detection operators are summarized. The Canny edge detection operator and the Gaussian fitting sub-pixel edge detection operator are introduced in detail, and innovative improvements are made to resolve the deficiencies of these algorithms. Results: The improved adaptive iterative segmentation thresholding method converges to a threshold of T = 98 in 11 iterations. The image segmentation quality of the improved Otsu method is greatly enhanced; after the second segmentation, the improved Otsu method finds the optimal threshold T = 76. Conclusion: Color Doppler ultrasound image processing technology performs well in cesarean sections for placenta previa.
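The classic adaptive iterative thresholding that such methods build on repeatedly sets T to the midpoint of the two class means until it stabilizes; a generic sketch follows (the authors' specific improvements are not reproduced, and the starting value and tolerance are illustrative).

```python
import numpy as np

def iterative_threshold(gray, t0=128.0, eps=0.5, max_iter=100):
    """ISODATA-style iterative threshold selection: split the histogram at T,
    update T to the mean of the two class means, repeat until stable."""
    t = float(t0)
    img = gray.astype(np.float64)
    for _ in range(max_iter):
        low, high = img[img <= t], img[img > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
    return t
```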
{"title":"Applying Color Doppler Image based Virtual Surgery in Placenta Previa Cesarean Section","authors":"Guanghui Zhang, X. Feng","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040410","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040410","url":null,"abstract":"Abstract Objective: To study the application of image processing technology in cesarean section of placenta previa, thereby reducing the occurrence of high-risk pregnancy. Methods: First, the method of gray image enhancement is analyzed. This method enhances the gray difference\u0000 between the target and the background, highlights useful information, summarizes the source and type of noise, and proposes common filtering and noise reduction methods to suppress the noise. For edge detection, pixel-level edge detection operators and sub-pixel-level edge detection operators\u0000 are summarized. The Canny edge detection operator and the Gaussian fitting sub-pixel edge detection operator are introduced in detail, and innovative improvements are carried out for resolving the deficiencies of the algorithm. Results: The improved adaptive iterative segmentation thresholding\u0000 method results in a threshold of T = 98 and 11 iterations. The image segmentation quality of the improved Otsu method has been greatly enhanced. After the second segmentation, the improved Otsu method finds the optimal threshold T = 76. Conclusion: Color Doppler ultrasound image\u0000 processing technology has excellent application in placenta previa cesarean section.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40410-1-40410-10"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44365430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatiotemporal Changes of Riverbed and Surrounding Environment in Yongding River (Beijing section) in the Past 40 Years
Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040407
Ran Pang, He Huang, Tri Dev Acharya
Abstract The Yongding River, located to the west of Beijing, is one of the city's five major river systems and has influenced culture along its basin. The river supports both rural and urban areas and influences economic development, water conservation, and the natural environment. However, during the past few decades, the combined effects of a growing population and economic activity have led to problems such as reduced water volume and exposure of the riverbed. In this study, remote sensing images were used to derive land cover maps and compare spatiotemporal changes over the past 40 years. The following changes were found: forest changed the least; cropland area increased to a large extent; bareland area was reduced by a maximum of 63%; surface water area in the study area was lower from 1989 to 1999 because of excessive water use by human activities, but it increased by 92% from 2010 to 2018 as awareness of environmental protection arose; and there was a small but more planned increase in the built-up area. These results reveal that water conservancy construction, agroforestry activities, and increasing urbanization have had a great impact on the environment surrounding the Yongding River (Beijing section). This study discusses in detail how the current situation can be attributed to human activities, policies, economic development, and ecological conservation. Furthermore, it suggests improvement by strengthening the governance of the riverbed and the riverside. These results and this discussion can serve as a reference and provide decision support for the management of southwest Beijing and similar river basins in peri-urban areas.
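Change statistics of this kind reduce to counting per-class pixels in each epoch's land cover raster and converting to area. A sketch under stated assumptions (hypothetical class codes, 30 m pixels as in Landsat imagery; not the authors' actual workflow):

```python
import numpy as np

def class_area_change(map_old, map_new, pixel_area_m2, classes):
    """Per-class area (km^2) in two land cover rasters plus percent change."""
    stats = {}
    for code, name in classes.items():
        a_old = np.count_nonzero(map_old == code) * pixel_area_m2 / 1e6
        a_new = np.count_nonzero(map_new == code) * pixel_area_m2 / 1e6
        pct = 100.0 * (a_new - a_old) / a_old if a_old else float("nan")
        stats[name] = (a_old, a_new, pct)
    return stats

# Hypothetical usage with assumed class codes:
# classes = {1: "forest", 2: "cropland", 3: "bareland", 4: "water", 5: "built-up"}
# stats = class_area_change(lc_1989, lc_2018, 30 * 30, classes)
```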
{"title":"Spatiotemporal Changes of Riverbed and Surrounding Environment in Yongding River (Beijing section) in the Past 40 Years","authors":"Ran Pang, He Huang, Tri Dev Acharya","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040407","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040407","url":null,"abstract":"Abstract Yongding River is one of the five major river systems in Beijing. It is located to the west of Beijing. It has influenced culture along its basin. The river supports both rural and urban areas. Furthermore, it influences economic development, water conservation,\u0000 and the natural environment. However, during the past few decades, due to the combined effect of increasing population and economic activities, a series of changes have led to problems such as the reduction in water volume and the exposure of the riverbed. In this study, remote sensing images\u0000 were used to derive land cover maps and compare spatiotemporal changes during the past 40 years. As a result, the following data were found: forest changed least; cropland area increased to a large extent; bareland area was reduced by a maximum of 63%; surface water area in the study area\u0000 was lower from 1989 to 1999 because of the excessive use of water in human activities, but it increased by 92% from 2010 to 2018 as awareness about protecting the environment arose; there was a small increase in the built-up area, but this was more planned. These results reveal that water\u0000 conservancy construction, agroforestry activities, and increasing urbanization have a great impact on the surrounding environment of the Yongding River (Beijing section). This study discusses in detail how the current situation can be attributed to of human activities, policies, economic development,\u0000 and ecological conservation Furthermore, it suggests improvement by strengthening the governance of the riverbed and the riverside. These results and discussion can be a reference and provide decision support for the management of southwest Beijing or similar river basins in peri-urban areas.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40407-1-40407-13"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46109464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Brain Tumor Image Segmentation Integrating Cascaded Anisotropic Fully Convolutional Neural Network and Hybrid Level Set Method
Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040411
Liu Zhao, Qiang Li, Ching-Hsin Wang, Yuancan Liao
Abstract The accuracy of three-dimensional (3D) brain tumor image segmentation is of great significance to brain tumor diagnosis. To enhance segmentation accuracy, this study proposes an algorithm integrating a cascaded anisotropic fully convolutional neural network (FCNN) and the hybrid level set method. The algorithm first performs bias field correction and gray value normalization on T1, T1C, T2, and fluid-attenuated inversion recovery magnetic resonance imaging (MRI) images as preprocessing. It then uses a cascading mechanism to perform preliminary segmentation of whole tumors, tumor cores, and enhancing tumors with an anisotropic FCNN, based on the relationships among the locations of the three tumor structures; this simplifies the multiclass brain tumor segmentation problem into three binary classification problems. At the same time, the anisotropic FCNN adopts dense connections and multiscale feature merging to further enhance performance. Model training is conducted separately on the axial, coronal, and sagittal planes, and the segmentation results from the three orthogonal views are combined. Finally, the hybrid level set method refines the brain tumor boundaries in the preliminary segmentation results, completing the fine segmentation. The results indicate that the proposed algorithm achieves 3D MRI brain tumor segmentation with high accuracy and stability: comparison of the whole-tumor, tumor-core, and enhancing-tumor segmentation results with the gold standards produced Dice similarity coefficients (Dice) of 0.9113, 0.8581, and 0.7976, respectively.
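The reported metric has the standard definition Dice = 2|A ∩ B| / (|A| + |B|); a minimal implementation for binary masks:

```python
import numpy as np

def dice(pred, gold):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gold = gold.astype(bool)
    inter = np.logical_and(pred, gold).sum()
    denom = pred.sum() + gold.sum()
    return 2.0 * inter / denom if denom else 1.0
```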
{"title":"3D Brain Tumor Image Segmentation Integrating Cascaded Anisotropic Fully Convolutional Neural Network and Hybrid Level Set Method","authors":"Liu Zhao, Qiang Li, Ching-Hsin Wang, Yuancan Liao","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040411","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040411","url":null,"abstract":"Abstract The accuracy of three-dimensional (3D) brain tumor image segmentation is of great significance to brain tumor diagnosis. To enhance the accuracy of segmentation, this study proposes an algorithm integrating a cascaded anisotropic fully convolutional neural network\u0000 (FCNN) and the hybrid level set method. The algorithm first performs bias field correction and gray value normalization on T1, T1C, T2, and fluid-attenuated inversion recovery magnetic resonance imaging (MRI) images for preprocessing. It then uses a cascading mechanism to perform preliminary\u0000 segmentation of whole tumors, tumor cores, and enhancing tumors by an anisotropic FCNN based on the relationships among the locations of the three types of tumor structures. This simplifies multiclass brain tumor image segmentation problems into three binary classification problems. At the\u0000 same time, the anisotropic FCNN adopts dense connections and multiscale feature merging to further enhance performance. Model training is respectively conducted on the axial, coronal, and sagittal planes, and the segmentation results from the three different orthogonal views are combined.\u0000 Finally, the hybrid level set method is adopted to refine the brain tumor boundaries in the preliminary segmentation results, thereby completing fine segmentation. The results indicate that the proposed algorithm can achieve 3D MRI brain tumor image segmentation of high accuracy and stability.\u0000 Comparison of the whole-tumor, tumor-core, and enhancing-tumor segmentation results with the gold standards produced Dice similarity coefficients (Dice) of 0.9113, 0.8581, and 0.7976, respectively.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40411-1-40411-10"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42976196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}