
Latest publications from the Journal of Imaging Science and Technology

Fabrication of 3D Temperature Sensor Using Magnetostrictive Inkjet Printhead
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-09-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.5.050405
Young-Woo Park, M. Noh
Abstract Recently, three-dimensional (3D) printing has attracted much attention as a way to manufacture objects of arbitrary shape. In this work, we present for the first time an inkjet-printed, low-cost 3D temperature sensor fabricated on a 3D-shaped thermoplastic substrate suitable for packaging, flexible electronics, and other printed applications. The design, fabrication, and testing of the sensor are presented. The sensor pattern is designed with a computer-aided design program and fabricated by drop-on-demand inkjet printing at room temperature using a magnetostrictive inkjet printhead. The pattern is printed with commercially available conductive silver nanoparticle ink at a moving speed of 90 mm/min. The inkjet-printed temperature sensor exhibits good electrical properties, including good sensitivity and linearity. These results indicate that 3D inkjet printing technology may have great potential for sensor fabrication.
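The sensitivity and linearity the abstract reports can be quantified as the slope and coefficient of determination of a least-squares line through resistance-versus-temperature calibration points. A minimal sketch, using purely illustrative numbers (the paper does not publish its raw calibration data):

```python
import numpy as np

# Hypothetical calibration data for a printed silver-ink temperature sensor:
# resistance rising roughly linearly with temperature (values are illustrative).
temps_c = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
resistance_ohm = np.array([100.0, 101.9, 104.1, 106.0, 108.1, 109.9])

# Sensitivity = slope of the least-squares line (ohm per degree C).
slope, intercept = np.polyfit(temps_c, resistance_ohm, 1)

# Linearity expressed as the coefficient of determination R^2.
pred = slope * temps_c + intercept
ss_res = np.sum((resistance_ohm - pred) ** 2)
ss_tot = np.sum((resistance_ohm - resistance_ohm.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(slope, r_squared)
```

A slope near the nominal temperature coefficient and an R^2 close to 1 would correspond to the "good sensitivity and linearity" claimed for the printed sensor.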
{"title":"Fabrication of 3D Temperature Sensor Using Magnetostrictive Inkjet Printhead","authors":"Young-Woo Park, M. Noh","doi":"10.2352/j.imagingsci.technol.2020.64.5.050405","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.5.050405","url":null,"abstract":"Abstract Recently, the three-dimensional (3D) printing technique has attracted much attention for creating objects of arbitrary shape and manufacturing. For the first time, in this work, we present the fabrication of an inkjet printed low-cost 3D temperature sensor on a 3D-shaped\u0000 thermoplastic substrate suitable for packaging, flexible electronics, and other printed applications. The design, fabrication, and testing of a 3D printed temperature sensor are presented. The sensor pattern is designed using a computer-aided design program and fabricated by drop-on-demand\u0000 inkjet printing using a magnetostrictive inkjet printhead at room temperature. The sensor pattern is printed using commercially available conductive silver nanoparticle ink. A moving speed of 90 mm/min is chosen to print the sensor pattern. The inkjet printed temperature sensor is demonstrated,\u0000 and it is characterized by good electrical properties, exhibiting good sensitivity and linearity. The results indicate that 3D inkjet printing technology may have great potential for applications in sensor fabrication.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45930771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Impact of Geometric Features on Color Similarity Perception of Displayed 3D Tablets
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-09-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.5.050404
Jiangping Yuan, Hua Li, Baohui Xu, G. Chen
Abstract To explore how geometric features affect the color similarity perception of displayed three-dimensional (3D) tablets designed with color 3D modeling techniques or produced with color 3D printing techniques, two subjective similarity scaling tasks were conducted, using the nine-level category judgement method, on color tablets with four shape features (circular, oval, triangular-columnar, and rounded-cuboid) and four notch features (straight V, straight U, crisscross V, and crisscross U) displayed on a calibrated monitor. Invited observers were asked to sort all displayed samples into tablet groups across six surface colors (aqua blue, bright green, pink, orange yellow, bright red, and silvery white), and all perceived similarity values were recorded and compared with the original samples in succession. The results showed that the similarity perception of the tested tablets was not appreciably affected by the given shape and notch features, and that it should be judged by a flexible interval rather than by a fixed color difference. This research provides practical insight into the visualization of color similarity perception for displayed personalized tablets, supporting precision medicine through 3D printing.
{"title":"Impact of Geometric Features on Color Similarity Perception of Displayed 3D Tablets","authors":"Jiangping Yuan, Hua Li, Baohui Xu, G. Chen","doi":"10.2352/j.imagingsci.technol.2020.64.5.050404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.5.050404","url":null,"abstract":"Abstract To explore the effects of geometric features on the color similarity perception of displayed three-dimensional (3D) tablets designed by color 3D modeling techniques or printed by color 3D printing techniques, two subjective similarity scaling tasks were conducted\u0000 for color tablets with four shape features (circular, oval, triangular-columnar, and rounded-cuboid shapes) and four notch features (straight V, straight U, crisscross V, and crisscross U shapes) displayed on a calibrated monitor using the nine-level category judgement method. Invited observers\u0000 were asked to assort all displayed samples into tablet groups using six surface colors (aqua blue, bright green, pink, orange yellow, bright red, and silvery white), and all perceived similarity values were recorded and compared to original samples successively. The results showed that the\u0000 similarity perception of tested tablets was inapparently affected by the given shape features and notch features, and it should be judged by a flexible interval rather than by a fixed color difference. 
This research provides practical insight into the visualization of color similarity perception\u0000 for displayed personalized tablets to advance precision medicine by 3D printing.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49180094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Image Identification Algorithm of Deep Compensation Transformation Matrix based on Main Component Feature Dimensionality Reduction
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040408
Jiaqi Guo
Abstract To reconstruct and identify three-dimensional (3D) images, an image identification algorithm based on a deep-learning compensation transformation matrix with principal component feature dimensionality reduction is proposed. It comprises line matching built on point matching, 3D reconstruction integrating points and lines, parallelized automatic differentiation applied to bundle adjustment, a parallelized positive-definite matrix system solver applied to bundle adjustment, and an improved classifier based on a deep compensation transformation matrix. The performance and reconstruction quality of the algorithm are verified on the INRIA database, and its accuracy and success rates are compared with L1APG, VTD, CT, MT, and other methods. The results show that randomly transforming and re-sampling the samples during training improves the performance of the classifier prediction algorithm when the training time is short. The reconstructed image obtained by the proposed algorithm has low correlation with the original image, with high number of pixels change rate (NPCR) and unified average changing intensity (UACI) values and low peak signal-to-noise ratio (PSNR) values, and the reconstruction benefits from an image capacity advantage. Compared with other algorithms, the proposed algorithm offers advantages in accuracy and success rate, with stable performance and good robustness. It can therefore be concluded that image recognition based on principal component feature dimensionality reduction achieves a good recognition effect, which offers guidance for research in the image recognition field.
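NPCR, UACI, and PSNR, the three metrics named in the abstract, have standard definitions over pairs of 8-bit images. A minimal sketch on toy data (the 4x4 arrays are illustrative, not the paper's data):

```python
import numpy as np

# Illustrative computation of the three image-comparison metrics named in the
# abstract: NPCR, UACI, and PSNR, defined for 8-bit grayscale images.
def npcr(a, b):
    # Number of Pixels Change Rate: percentage of positions whose values differ.
    return 100.0 * np.mean(a != b)

def uaci(a, b):
    # Unified Average Changing Intensity: mean absolute difference / 255.
    return 100.0 * np.mean(np.abs(a.astype(float) - b.astype(float)) / 255.0)

def psnr(a, b):
    # Peak Signal-to-Noise Ratio in dB; infinite for identical images.
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (4, 4), dtype=np.uint8)
img2 = rng.integers(0, 256, (4, 4), dtype=np.uint8)
print(npcr(img1, img2), uaci(img1, img2), psnr(img1, img2))
```

High NPCR/UACI and low PSNR between the reconstruction and the original, as reported in the abstract, indicate low correlation between the two images.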
{"title":"Image Identification Algorithm of Deep Compensation Transformation Matrix based on Main Component Feature Dimensionality Reduction","authors":"Jiaqi Guo","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040408","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040408","url":null,"abstract":"Abstract In order to reconstruct and identify three-dimensional (3D) images, an image identification algorithm based on a deep learning compensation transformation matrix of main component feature dimensionality reduction is proposed, including line matching with point matching\u0000 as the base, 3D reconstruction of point and line integration, parallelization automatic differentiation applied to bundle adjustment, parallelization positive definite matrix system solution applied to bundle adjustment, and an improved classifier based on a deep compensation transformation\u0000 matrix. Based on the INRIA database, the performance and reconstruction effect of the algorithm are verified. The accuracy rate and success rate are compared with L1APG, VTD, CT, MT, etc. The results show that random transformation and re-sampling of samples during training can improve the\u0000 performance of the classifier prediction algorithm under the condition that the training time is short. The reconstructed image obtained by the algorithm described in this study has a low correlation with the original image, with high number of pixels change rate (NPCR) and unified average\u0000 changing intensity (UACI) values and low peak signal to noise ratio (PSNR) values. Image reconstruction effect is better with image capacity advantage. Compared with other algorithms, the proposed algorithm has certain advantages in accuracy and success rate with stable performance and good\u0000 robustness. 
Therefore, it can be concluded that image recognition based on the dimension reduction of principal component features provides good recognition effect, which is of guiding significance for research in the image recognition field.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40408-1-40408-8"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43180252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Object Tracking Algorithm based on Improved Siamese Convolutional Networks Combined with Deep Contour Extraction and Object Detection Under Airborne Platform
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040409
Xiuyan Tian, Haifang Li, Hongxia Deng
Abstract Object detection and tracking is an indispensable module in airborne optoelectronic equipment, and its performance directly determines the accuracy of object perception. Recently, improved Siamese network tracking algorithms have achieved excellent results on various challenging data sets. However, most of these algorithms use local fixed search strategies and cannot update the template; moreover, the template introduces background interference, which leads to tracking drift and eventually tracking failure. To solve these problems, this article proposes an improved fully connected Siamese tracking algorithm combined with object contour extraction and object detection, which uses a contour template of the object instead of a bounding-box template to reduce background clutter interference. First, the contour detection network automatically obtains the closed contour of the object, and a flood-filling clustering algorithm produces the contour template. Then, the contour template and the search area are fed into the improved Siamese network to obtain the optimal tracking score and adaptively update the contour template. If the object is fully occluded or lost, the YOLOv3 network searches the entire field of view to restore stable tracking throughout the process. Extensive qualitative and quantitative simulations on a benchmark test data set and a flying data set show that the improved model not only improves object tracking performance under complex backgrounds but also improves the response time of airborne systems, giving it high engineering application value.
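The flood-filling step the abstract mentions amounts to labelling the connected region enclosed by the detected contour. A minimal, pure-Python sketch of 4-connected flood-fill labelling on a toy binary mask (the paper's own clustering details are not published here):

```python
from collections import deque

# Label 4-connected components of a binary mask by breadth-first flood fill,
# the kind of operation a "flood-filling clustering" step could use to turn a
# closed-contour mask into a template region. The grid below is toy data.
def flood_fill_label(grid):
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
labels, count = flood_fill_label(mask)
print(count)  # three 4-connected components in this mask
```

In the tracker described above, the filled region corresponding to the object's closed contour would then serve as the template fed to the Siamese network.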
{"title":"Object Tracking Algorithm based on Improved Siamese Convolutional Networks Combined with Deep Contour Extraction and Object Detection Under Airborne Platform","authors":"Xiuyan Tian, Haifang Li, Hongxia Deng","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040409","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040409","url":null,"abstract":"Abstract Object detection and tracking is an indispensable module in airborne optoelectronic equipment, and its detection and tracking performance is directly related to the accuracy of object perception. Recently, the improved Siamese network tracking algorithm has achieved\u0000 excellent results on various challenging data sets. However, most of the improved algorithms use local fixed search strategies, which cannot update the template. In addition, the template will introduce background interference, which will lead to tracking drift and eventually cause tracking\u0000 failure. In order to solve these problems, this article proposes an improved fully connected Siamese tracking algorithm combined with object contour extraction and object detection, which uses the contour template of the object instead of the bounding-box template to reduce the background\u0000 clutter interference. First, the contour detection network automatically obtains the closed contour information of the object and uses the flood-filling clustering algorithm to obtain the contour template. Then, the contour template and the search area are fed into the improved Siamese network\u0000 to obtain the optimal tracking score value and adaptively update the contour template. If the object is fully obscured or lost, the YoLo v3 network is used to search the object in the entire field of view to achieve stable tracking throughout the process. 
A large number of qualitative and\u0000 quantitative simulation results on benchmark test data set and the flying data set show that the improved model can not only improve the object tracking performance under complex backgrounds, but also improve the response time of airborne systems, which has high engineering application value.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40409-1-40409-11"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45720604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Speaker Identity Recognition by Acoustic and Visual Data Fusion through Personal Privacy for Smart Care and Service Applications
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-07-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.4.040404
I. Ding, C.-M. Ruan
Abstract With rapid developments in techniques related to the Internet of Things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware emotion recognition will gain much attention and potentially become a requirement in smart home or office environments. In such intelligent applications, recognizing the identity of a specific member in an indoor space is a crucial issue. In this study, a combined audio-visual identity recognition approach was developed, in which visual information obtained from face detection was incorporated into acoustic Gaussian likelihood calculations for constructing speaker classification trees, significantly enhancing the Gaussian mixture model (GMM)-based speaker recognition method. The study considered the privacy of the monitored person and reduced the degree of surveillance. The popular Kinect sensor device, which contains a microphone array, was adopted to obtain acoustic voice data. The proposed approach deploys only two cameras in a specific indoor space to conveniently perform face detection and quickly determine the total number of people in that space. This head count obtained from face detection is used to regulate the design of an accurate GMM speaker classification tree. Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method: the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve identity recognition rates of 84.28% and 83%, respectively; both are higher than the rate of the conventional GMM approach (80.5%). Moreover, because the extremely complex face recognition calculations of general audio-visual speaker recognition tasks are not required, the proposed approach is fast and efficient, adding only 0.051 s to the average recognition time.
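At the core of GMM-based speaker recognition is scoring test frames under each enrolled speaker's model and picking the maximum likelihood. A minimal dependency-light sketch, reduced to one diagonal-covariance Gaussian per speaker (a one-component "mixture"); the 2-D feature vectors are synthetic stand-ins for real acoustic features such as MFCCs:

```python
import numpy as np

# Likelihood-based speaker identification in the spirit of the GMM approach
# the abstract builds on, using a single diagonal-covariance Gaussian per
# speaker for simplicity. All data below is synthetic toy data.
def fit_gaussian(frames):
    mean = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6  # variance floor keeps the density proper
    return mean, var

def avg_log_likelihood(frames, model):
    mean, var = model
    # Diagonal-covariance Gaussian log-density, averaged over all frames.
    ll = -0.5 * (np.log(2 * np.pi * var) + (frames - mean) ** 2 / var)
    return ll.sum(axis=1).mean()

rng = np.random.default_rng(42)
enroll = {
    "A": rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
    "B": rng.normal(loc=5.0, scale=1.0, size=(200, 2)),
}
models = {name: fit_gaussian(frames) for name, frames in enroll.items()}

def identify(frames):
    # Pick the enrolled speaker whose model scores the test frames highest.
    return max(models, key=lambda name: avg_log_likelihood(frames, models[name]))

test_frames = rng.normal(loc=5.0, scale=1.0, size=(50, 2))
print(identify(test_frames))
```

The face-detection head count described in the abstract would prune this comparison: only the models of speakers plausibly present in the room need to be scored, which is what the speaker classification trees exploit.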
{"title":"Speaker Identity Recognition by Acoustic and Visual Data Fusion through Personal Privacy for Smart Care and Service Applications","authors":"I. Ding, C.-M. Ruan","doi":"10.2352/j.imagingsci.technol.2020.64.4.040404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.4.040404","url":null,"abstract":"Abstract With rapid developments in techniques related to the internet of things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware-based emotion recognition will gain much attention and potentially\u0000 be a requirement in smart home or office environments. In such intelligence applications, identity recognition of the specific member in indoor spaces will be a crucial issue. In this study, a combined audio-visual identity recognition approach was developed. In this approach, visual information\u0000 obtained from face detection was incorporated into acoustic Gaussian likelihood calculations for constructing speaker classification trees to significantly enhance the Gaussian mixture model (GMM)-based speaker recognition method. This study considered the privacy of the monitored person and\u0000 reduced the degree of surveillance. Moreover, the popular Kinect sensor device containing a microphone array was adopted to obtain acoustic voice data from the person. The proposed audio-visual identity recognition approach deploys only two cameras in a specific indoor space for conveniently\u0000 performing face detection and quickly determining the total number of people in the specific space. 
Such information pertaining to the number of people in the indoor space obtained using face detection was utilized to effectively regulate the accurate GMM speaker classification tree design.\u0000 Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method in this study—the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve\u0000 excellent identity recognition rates of 84.28% and 83%, respectively; both values are higher than the rate of the conventional GMM approach (80.5%). Moreover, as the extremely complex calculations of face recognition in general audio-visual speaker recognition tasks are not required, the proposed\u0000 approach is rapid and efficient with only a slight increment of 0.051 s in the average recognition time.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40404-1-40404-16"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48786748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Intensity Weighting Approach Using Convolutional Neural Network for Optic Disc Segmentation in Fundus Image
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-07-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.4.040401
Ga Young Kim, Sang Hyeok Lee, Sung Min Kim
Abstract This study proposed a novel intensity weighting approach using a convolutional neural network (CNN) for fast and accurate optic disc (OD) segmentation in fundus images. The proposed method consists of three steps: CNN-based calculation of pixel importance, image reconstruction, and OD segmentation. First, a CNN model composed of four convolution and pooling layers was designed and trained, and a heat map was generated by applying a gradient-weighted class activation map algorithm to the model's final convolution layer. Next, each pixel in the image was assigned a weight based on this heat map; in addition, retinal vessels that may interfere with OD segmentation were detected and replaced using nearest-neighbor pixels. Finally, the OD region was segmented using Otsu's method. The proposed method achieved a high segmentation accuracy of 98.61%, about 4.61% higher than the result without the weight assignment.
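The final step above, Otsu's method, selects the global threshold that maximizes between-class variance of the gray-level histogram. A compact NumPy sketch on a toy bimodal image (not the fundus data used in the paper):

```python
import numpy as np

# Otsu's global threshold: pick t maximizing the between-class variance
# w0 * w1 * (mu0 - mu1)^2 over the 256-bin gray-level histogram.
def otsu_threshold(image):
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                 # class 0 = pixels with value <= t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: a dark cluster near 10 and a bright cluster near 200.
img = np.array([[10, 12, 11, 200],
                [13, 10, 201, 199],
                [11, 198, 200, 202]], dtype=np.uint8)
t = otsu_threshold(img)
print(t)  # falls between the two clusters
```

In the proposed pipeline, the heat-map weighting sharpens the intensity contrast around the optic disc before this thresholding step, which is what drives the reported accuracy gain.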
{"title":"A Novel Intensity Weighting Approach Using Convolutional Neural Network for Optic Disc Segmentation in Fundus Image","authors":"Ga Young Kim, Sang Hyeok Lee, Sung Min Kim","doi":"10.2352/j.imagingsci.technol.2020.64.4.040401","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.4.040401","url":null,"abstract":"Abstract This study proposed a novel intensity weighting approach using a convolutional neural network (CNN) for fast and accurate optic disc (OD) segmentation in a fundus image. The proposed method mainly consisted of three steps involving CNN-based importance calculation\u0000 of pixel, image reconstruction, and OD segmentation. In the first step, the CNN model composed of four convolution and pooling layers was designed and trained. Then, the heat map was generated by applying a gradient-weighted class activation map algorithm to the final convolution layer of\u0000 the model. In the next step, each of the pixels on the image was assigned a weight based on the previously obtained heat map. In addition, the retinal vessel that may interfere with OD segmentation was detected and substituted based on the nearest neighbor pixels. Finally, the OD region was\u0000 segmented using Otsu’s method. 
As a result, the proposed method achieved a high segmentation accuracy of 98.61%, which was improved about 4.61% than the result without the weight assignment.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40401-1-40401-9"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42857105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Effective Reflection Suppression Method for Vehicle Detection in Complex Nighttime Traffic Scenes
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-07-01 | DOI: 10.2352/j.imagingsci.technol.2020.64.4.040402
W. Tsai, Hung-Ju Chen
Abstract Headlights are the most distinct and stable image features in nighttime scenes. This study proposes a headlight detection and pairing algorithm that adapts to numerous scenes to achieve accurate vehicle detection at night. The algorithm improves on conventional histogram equalization by using the difference between the image before and after equalization to suppress ground reflections and noise; headlight detection is then completed using this difference as a feature. In addition, the authors combine coordinate information, moving distance, symmetry, and stability over time to pair headlights, enabling vehicle detection at night. The method effectively handles complex scenes such as high-speed movement, multiple headlights, and rain. The algorithm was verified on videos of highway scenes, achieving a detection rate as high as 96.67%. It can be implemented on the Raspberry Pi embedded platform, where its execution speed reaches 25 frames per second.
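The "difference before and after equalization" idea can be sketched directly: equalize a grayscale frame, then take the absolute difference against the original as a feature map. A minimal NumPy sketch on a toy 8x8 frame (the actual detector's thresholds and pairing logic are not reproduced here):

```python
import numpy as np

# Standard histogram equalization via the cumulative distribution function,
# followed by the |equalized - original| difference the abstract uses as a
# detection feature. The 8-bit frame below is toy data.
def equalize(image):
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map the occupied intensity range onto the full 0..255 range.
    lut = np.clip(np.round((cdf - cdf_min) / (image.size - cdf_min) * 255.0),
                  0, 255).astype(np.uint8)
    return lut[image]

rng = np.random.default_rng(1)
frame = rng.integers(0, 120, (8, 8), dtype=np.uint8)  # dim road background
frame[2:4, 2:4] = 255                                 # saturated headlight blob

diff = np.abs(equalize(frame).astype(int) - frame.astype(int))
print(diff[2, 2])  # saturated pixels map to 255, so their difference is 0
```

Saturated headlight pixels are unchanged by equalization while dim reflections and background are stretched, so the difference map separates the two, which is the suppression effect the abstract exploits.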
{"title":"Effective Reflection Suppression Method for Vehicle Detection in Complex Nighttime Traffic Scenes","authors":"W. Tsai, Hung-Ju Chen","doi":"10.2352/j.imagingsci.technol.2020.64.4.040402","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2020.64.4.040402","url":null,"abstract":"Abstract Headlight is the most explicit and stable image feature in nighttime scenes. This study proposes a headlight detection and pairing algorithm that adapts to numerous scenes to achieve accurate vehicle detection in the nighttime. This algorithm improved the conventional\u0000 histogram equalization by using the difference before and after the equalization to suppress the ground reflection and noise. Then, headlight detection was completed based on this difference as a feature. In addition, the authors combined coordinate information, moving distance, symmetry,\u0000 and stable time to implement headlight pairing, thus enabling vehicle detection in the nighttime. This study effectively overcame complex scenes such as high-speed movement, multi-headlight, and rains. Finally, the algorithm was verified by videos of highway scenes; the detection rate was\u0000 as high as 96.67%. It can be implemented on the Raspberry Pi embedded platform, and its execution speed can reach 25 frames per second.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40402-1-40402-9"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48227043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Applying Color Doppler Image based Virtual Surgery in Placenta Previa Cesarean Section
IF 1.0 | CAS Tier 4, Computer Science | JCR Q4, IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2020-07-01 | DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040410
Guanghui Zhang, X. Feng
Abstract Objective: To study the application of image processing technology in cesarean sections for placenta previa, thereby reducing the occurrence of high-risk pregnancies. Methods: First, gray-level image enhancement is analyzed. This method enhances the gray-level difference between target and background and highlights useful information; the sources and types of noise are summarized, and common filtering and noise reduction methods are proposed to suppress them. For edge detection, pixel-level and sub-pixel-level edge detection operators are summarized; the Canny edge detection operator and the Gaussian-fitting sub-pixel edge detection operator are introduced in detail, and improvements are made to address the deficiencies of these algorithms. Results: The improved adaptive iterative segmentation thresholding method converges to a threshold of T = 98 in 11 iterations. The segmentation quality of the improved Otsu method is greatly enhanced; after the second segmentation, it finds an optimal threshold of T = 76. Conclusion: Color Doppler ultrasound image processing technology performs well in placenta previa cesarean sections.
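The adaptive iterative thresholding reported above (converging to T = 98 in 11 iterations on the authors' data) follows the classic scheme of repeatedly setting the threshold to the mean of the two class means until it stabilizes. A minimal sketch on a synthetic bimodal intensity distribution; the specific improvements in the paper are not reproduced:

```python
import numpy as np

# Classic iterative (mean-of-class-means) threshold selection: split at t,
# recompute t as the average of the two class means, repeat to convergence.
def iterative_threshold(image, eps=0.5):
    t = image.mean()
    iterations = 0
    while True:
        iterations += 1
        low = image[image <= t]
        high = image[image > t]
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < eps:
            return new_t, iterations
        t = new_t

# Synthetic bimodal "image": a dark class near 60 and a bright class near 180.
rng = np.random.default_rng(2)
img = np.concatenate([
    rng.normal(60, 10, 500),
    rng.normal(180, 10, 500),
]).clip(0, 255)
t, n_iter = iterative_threshold(img)
print(t, n_iter)  # threshold settles between the two modes
```

On well-separated bimodal data the iteration converges in a handful of steps, which matches the small iteration count the Results section reports.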
{"title":"Applying Color Doppler Image based Virtual Surgery in Placenta Previa Cesarean Section","authors":"Guanghui Zhang, X. Feng","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040410","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040410","url":null,"abstract":"Abstract Objective: To study the application of image processing technology in cesarean section of placenta previa, thereby reducing the occurrence of high-risk pregnancy. Methods: First, the method of gray image enhancement is analyzed. This method enhances the gray difference\u0000 between the target and the background, highlights useful information, summarizes the source and type of noise, and proposes common filtering and noise reduction methods to suppress the noise. For edge detection, pixel-level edge detection operators and sub-pixel-level edge detection operators\u0000 are summarized. The Canny edge detection operator and the Gaussian fitting sub-pixel edge detection operator are introduced in detail, and innovative improvements are carried out for resolving the deficiencies of the algorithm. Results: The improved adaptive iterative segmentation thresholding\u0000 method results in a threshold of T = 98 and 11 iterations. The image segmentation quality of the improved Otsu method has been greatly enhanced. After the second segmentation, the improved Otsu method finds the optimal threshold T = 76. 
Conclusion: Color Doppler ultrasound image\u0000 processing technology has excellent application in placenta previa cesarean section.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40410-1-40410-10"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44365430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
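The Results above report a threshold of T = 98 reached after 11 iterations from an "improved adaptive iterative segmentation thresholding method". The improvement itself is not described in this listing; the textbook baseline such methods start from, isodata-style iterative threshold selection, can be sketched as:

```python
import numpy as np

def iterative_threshold(gray: np.ndarray, eps: float = 0.5, max_iter: int = 100):
    """Isodata-style iterative thresholding: start at the global mean,
    split pixels into two groups at the current threshold, and move the
    threshold to the midpoint of the two group means until it stabilizes.
    Returns (threshold, iterations_used)."""
    t = float(gray.mean())
    for i in range(1, max_iter + 1):
        low = gray[gray <= t]
        high = gray[gray > t]
        if low.size == 0 or high.size == 0:   # degenerate split: stop
            return t, i
        new_t = (low.mean() + high.mean()) / 2.0
        if abs(new_t - t) < eps:
            return new_t, i
        t = new_t
    return t, max_iter
```

On a cleanly bimodal image this converges in a few iterations to a threshold between the two modes; the `eps` stopping tolerance is an assumed parameter, not one quoted by the paper.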
Spatiotemporal Changes of Riverbed and Surrounding Environment in Yongding River (Beijing section) in the Past 40 Years 近40年永定河(北京段)河床及周边环境的时空变化
IF 1 CAS Tier 4 (Computer Science) Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2020-07-01 DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040407
Ran Pang, He Huang, Tri Dev Acharya
Abstract Yongding River is one of the five major river systems in Beijing. It is located to the west of Beijing and has influenced culture along its basin. The river supports both rural and urban areas. Furthermore, it influences economic development, water conservation, and the natural environment. However, during the past few decades, the combined effect of a growing population and economic activities has led to problems such as reduced water volume and exposure of the riverbed. In this study, remote sensing images were used to derive land cover maps and compare spatiotemporal changes during the past 40 years. The following results were found: forest changed least; cropland area increased to a large extent; bareland area was reduced by a maximum of 63%; surface water area in the study area was lower from 1989 to 1999 because of excessive water use by human activities, but it increased by 92% from 2010 to 2018 as awareness of environmental protection arose; the built-up area increased slightly, but this growth was largely planned. These results reveal that water conservancy construction, agroforestry activities, and increasing urbanization have a great impact on the surrounding environment of the Yongding River (Beijing section). This study discusses in detail how the current situation can be attributed to human activities, policies, economic development, and ecological conservation. Furthermore, it suggests improvement by strengthening the governance of the riverbed and the riverside. These results and discussion can serve as a reference and provide decision support for the management of southwest Beijing or similar river basins in peri-urban areas.
{"title":"Spatiotemporal Changes of Riverbed and Surrounding Environment in Yongding River (Beijing section) in the Past 40 Years","authors":"Ran Pang, He Huang, Tri Dev Acharya","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040407","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040407","url":null,"abstract":"Abstract Yongding River is one of the five major river systems in Beijing. It is located to the west of Beijing. It has influenced culture along its basin. The river supports both rural and urban areas. Furthermore, it influences economic development, water conservation,\u0000 and the natural environment. However, during the past few decades, due to the combined effect of increasing population and economic activities, a series of changes have led to problems such as the reduction in water volume and the exposure of the riverbed. In this study, remote sensing images\u0000 were used to derive land cover maps and compare spatiotemporal changes during the past 40 years. As a result, the following data were found: forest changed least; cropland area increased to a large extent; bareland area was reduced by a maximum of 63%; surface water area in the study area\u0000 was lower from 1989 to 1999 because of the excessive use of water in human activities, but it increased by 92% from 2010 to 2018 as awareness about protecting the environment arose; there was a small increase in the built-up area, but this was more planned. These results reveal that water\u0000 conservancy construction, agroforestry activities, and increasing urbanization have a great impact on the surrounding environment of the Yongding River (Beijing section). This study discusses in detail how the current situation can be attributed to of human activities, policies, economic development,\u0000 and ecological conservation Furthermore, it suggests improvement by strengthening the governance of the riverbed and the riverside. 
These results and discussion can be a reference and provide decision support for the management of southwest Beijing or similar river basins in peri-urban areas.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40407-1-40407-13"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46109464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
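The per-class percentages quoted above (bareland down by up to 63%, surface water up 92%) are area changes between land-cover maps classified from images of two dates. A minimal sketch of that bookkeeping over two co-registered label maps (the class codes and helper name are hypothetical, not from the paper):

```python
import numpy as np

def class_area_change(map_old: np.ndarray, map_new: np.ndarray, labels) -> dict:
    """Percent change in pixel count per land-cover class between two
    co-registered classification maps; a negative value means the class
    shrank between the two dates."""
    changes = {}
    for lab in labels:
        old = int((map_old == lab).sum())
        new = int((map_new == lab).sum())
        # undefined if the class is absent in the earlier map
        changes[lab] = (new - old) / old * 100.0 if old else float("nan")
    return changes
```

With pixel counts in place of true areas this assumes both maps share the same grid; multiplying by the pixel ground area would convert the counts to square metres without changing the percentages.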
3D Brain Tumor Image Segmentation Integrating Cascaded Anisotropic Fully Convolutional Neural Network and Hybrid Level Set Method 结合级联各向异性全卷积神经网络和混合水平集方法的三维脑肿瘤图像分割
IF 1 CAS Tier 4 (Computer Science) Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2020-07-01 DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040411
Liu Zhao, Qiang Li, Ching-Hsin Wang, Yuancan Liao
Abstract The accuracy of three-dimensional (3D) brain tumor image segmentation is of great significance to brain tumor diagnosis. To enhance segmentation accuracy, this study proposes an algorithm integrating a cascaded anisotropic fully convolutional neural network (FCNN) and the hybrid level set method. The algorithm first performs bias field correction and gray-value normalization on T1, T1C, T2, and fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) images for preprocessing. It then uses a cascading mechanism to perform preliminary segmentation of whole tumors, tumor cores, and enhancing tumors with an anisotropic FCNN, based on the relationships among the locations of the three tumor structures. This simplifies the multiclass brain tumor segmentation problem into three binary classification problems. At the same time, the anisotropic FCNN adopts dense connections and multiscale feature merging to further enhance performance. Model training is conducted separately on the axial, coronal, and sagittal planes, and the segmentation results from the three orthogonal views are combined. Finally, the hybrid level set method is adopted to refine the tumor boundaries in the preliminary segmentation results, thereby completing fine segmentation. The results indicate that the proposed algorithm achieves 3D MRI brain tumor segmentation with high accuracy and stability. Comparing the whole-tumor, tumor-core, and enhancing-tumor segmentation results with the gold standards yields Dice similarity coefficients of 0.9113, 0.8581, and 0.7976, respectively.
{"title":"3D Brain Tumor Image Segmentation Integrating Cascaded Anisotropic Fully Convolutional Neural Network and Hybrid Level Set Method","authors":"Liu Zhao, Qiang Li, Ching-Hsin Wang, Yuancan Liao","doi":"10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040411","DOIUrl":"https://doi.org/10.2352/J.IMAGINGSCI.TECHNOL.2020.64.4.040411","url":null,"abstract":"Abstract The accuracy of three-dimensional (3D) brain tumor image segmentation is of great significance to brain tumor diagnosis. To enhance the accuracy of segmentation, this study proposes an algorithm integrating a cascaded anisotropic fully convolutional neural network\u0000 (FCNN) and the hybrid level set method. The algorithm first performs bias field correction and gray value normalization on T1, T1C, T2, and fluid-attenuated inversion recovery magnetic resonance imaging (MRI) images for preprocessing. It then uses a cascading mechanism to perform preliminary\u0000 segmentation of whole tumors, tumor cores, and enhancing tumors by an anisotropic FCNN based on the relationships among the locations of the three types of tumor structures. This simplifies multiclass brain tumor image segmentation problems into three binary classification problems. At the\u0000 same time, the anisotropic FCNN adopts dense connections and multiscale feature merging to further enhance performance. Model training is respectively conducted on the axial, coronal, and sagittal planes, and the segmentation results from the three different orthogonal views are combined.\u0000 Finally, the hybrid level set method is adopted to refine the brain tumor boundaries in the preliminary segmentation results, thereby completing fine segmentation. 
The results indicate that the proposed algorithm can achieve 3D MRI brain tumor image segmentation of high accuracy and stability.\u0000 Comparison of the whole-tumor, tumor-core, and enhancing-tumor segmentation results with the gold standards produced Dice similarity coefficients (Dice) of 0.9113, 0.8581, and 0.7976, respectively.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"64 1","pages":"40411-1-40411-10"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42976196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
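The figures 0.9113, 0.8581, and 0.7976 quoted above are Dice similarity coefficients between the predicted masks and the gold-standard masks. The standard definition, independent of the paper's pipeline, is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary masks; 1.0 means perfect overlap, 0.0 means none."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

For the multiclass results above, the coefficient is computed once per structure (whole tumor, tumor core, enhancing tumor) on the corresponding binary mask.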