Kana Kato, M. Nemoto, Yuichi Kimura, Y. Kiyohara, H. Koga, N. Yamazaki, G. Christensen, C. Ingvar, K. Nielsen, A. Nakamura, T. Sota, T. Nagaoka
{"title":"基于数据增强的黑色素瘤自动诊断系统性能改进","authors":"Kana Kato, M. Nemoto, Yuichi Kimura, Y. Kiyohara, H. Koga, N. Yamazaki, G. Christensen, C. Ingvar, K. Nielsen, A. Nakamura, T. Sota, T. Nagaoka","doi":"10.14326/abe.9.62","DOIUrl":null,"url":null,"abstract":"Color information is an important tool for diagnosing melanoma. In this study, we used a hyper-spectral imager (HSI), which can measure color information in detail, to develop an automated melanoma diagnosis system. In recent years, the effectiveness of deep learning has become more widely accepted in the field of image recognition. We therefore integrated the deep convolutional neural network with transfer learning into our system. We tried data augmentation to demonstrate how our system improves diagnostic performance. 283 melanoma lesions and 336 non-melanoma lesions were used for the analysis. The data measured by HSI, called the hyperspectral data (HSD), were converted to a single-wavelength image averaged over plus or minus 3 nm. We used GoogLeNet which was pre-trained by ImageNet and then was transferred to analyze the HSD. In the transfer learning, we used not only the original HSD but also artificial augmentation dataset to improve the melanoma classification performance of GoogLeNet. Since GoogLeNet requires three-channel images as input, three wavelengths were selected from those single-wavelength images and assigned to three channels in wavelength order from short to long. The sensitivity and specificity of our system were estimated by 5-fold cross-val-idation. The results of a combination of 530, 560, and 590 nm (combination A) and 500, 620, and 740 nm (com-bination B) were compared. We also compared the diagnostic performance with and without the data augmentation. All images were augmented by inverting the image vertically and/or horizontally. Without data augmentation, the respective sensitivity and specificity of our system were 77.4% and 75.6% for combination A and 73.1% and 80.6% for combination B. With data augmentation, these numbers improved to 79.9% and 82.4% for combination A and 76.7% and 82.2% for combination B. From these results, we conclude that the diagnostic performance of our system has been improved by data augmentation. Furthermore, our system suc-ceeds to differentiate melanoma with a sensitivity of almost 80%. (Less)","PeriodicalId":54017,"journal":{"name":"Advanced Biomedical Engineering","volume":"1 1","pages":""},"PeriodicalIF":0.8000,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Performance Improvement of Automated Melanoma Diagnosis System by Data Augmentation\",\"authors\":\"Kana Kato, M. Nemoto, Yuichi Kimura, Y. Kiyohara, H. Koga, N. Yamazaki, G. Christensen, C. Ingvar, K. Nielsen, A. Nakamura, T. Sota, T. Nagaoka\",\"doi\":\"10.14326/abe.9.62\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Color information is an important tool for diagnosing melanoma. In this study, we used a hyper-spectral imager (HSI), which can measure color information in detail, to develop an automated melanoma diagnosis system. In recent years, the effectiveness of deep learning has become more widely accepted in the field of image recognition. We therefore integrated the deep convolutional neural network with transfer learning into our system. We tried data augmentation to demonstrate how our system improves diagnostic performance. 
283 melanoma lesions and 336 non-melanoma lesions were used for the analysis. The data measured by HSI, called the hyperspectral data (HSD), were converted to a single-wavelength image averaged over plus or minus 3 nm. We used GoogLeNet which was pre-trained by ImageNet and then was transferred to analyze the HSD. In the transfer learning, we used not only the original HSD but also artificial augmentation dataset to improve the melanoma classification performance of GoogLeNet. Since GoogLeNet requires three-channel images as input, three wavelengths were selected from those single-wavelength images and assigned to three channels in wavelength order from short to long. The sensitivity and specificity of our system were estimated by 5-fold cross-val-idation. The results of a combination of 530, 560, and 590 nm (combination A) and 500, 620, and 740 nm (com-bination B) were compared. We also compared the diagnostic performance with and without the data augmentation. All images were augmented by inverting the image vertically and/or horizontally. Without data augmentation, the respective sensitivity and specificity of our system were 77.4% and 75.6% for combination A and 73.1% and 80.6% for combination B. With data augmentation, these numbers improved to 79.9% and 82.4% for combination A and 76.7% and 82.2% for combination B. From these results, we conclude that the diagnostic performance of our system has been improved by data augmentation. Furthermore, our system suc-ceeds to differentiate melanoma with a sensitivity of almost 80%. (Less)\",\"PeriodicalId\":54017,\"journal\":{\"name\":\"Advanced Biomedical Engineering\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2020-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced Biomedical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14326/abe.9.62\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Biomedical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14326/abe.9.62","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Performance Improvement of Automated Melanoma Diagnosis System by Data Augmentation
Color information is an important tool for diagnosing melanoma. In this study, we used a hyperspectral imager (HSI), which can measure color information in detail, to develop an automated melanoma diagnosis system. In recent years, the effectiveness of deep learning has become widely accepted in the field of image recognition. We therefore integrated a deep convolutional neural network with transfer learning into our system and applied data augmentation to examine how it improves diagnostic performance. A total of 283 melanoma lesions and 336 non-melanoma lesions were used for the analysis. The data measured by the HSI, called hyperspectral data (HSD), were converted to single-wavelength images, each averaged over plus or minus 3 nm. We used GoogLeNet pre-trained on ImageNet and transferred it to analyze the HSD. In the transfer learning, we used not only the original HSD but also an artificially augmented dataset to improve the melanoma classification performance of GoogLeNet. Since GoogLeNet requires three-channel images as input, three wavelengths were selected from the single-wavelength images and assigned to the three channels in wavelength order from short to long. The sensitivity and specificity of our system were estimated by 5-fold cross-validation. The results for the combination of 530, 560, and 590 nm (combination A) and the combination of 500, 620, and 740 nm (combination B) were compared, as was the diagnostic performance with and without data augmentation. The dataset was augmented by flipping each image vertically and/or horizontally. Without data augmentation, the sensitivity and specificity of our system were 77.4% and 75.6% for combination A and 73.1% and 80.6% for combination B. With data augmentation, these improved to 79.9% and 82.4% for combination A and 76.7% and 82.2% for combination B. From these results, we conclude that data augmentation improves the diagnostic performance of our system. Furthermore, our system differentiates melanoma with a sensitivity of almost 80%.
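As a rough illustration of the pipeline described in the abstract, the sketch below shows how single-wavelength images averaged over plus or minus 3 nm could be assembled into a three-channel input ordered from short to long wavelength, how the vertical/horizontal flip augmentation could be generated, and how an ImageNet-pretrained GoogLeNet could be adapted to a two-class (melanoma vs. non-melanoma) output. The hyperspectral cube layout, function names, and chosen wavelength centers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming an HSD cube of shape (H, W, n_bands) with a known
# wavelength axis in nm. Not the paper's code; for illustration only.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models


def band_average(cube: np.ndarray, wavelengths: np.ndarray,
                 center_nm: float, half_width_nm: float = 3.0) -> np.ndarray:
    """Average all spectral bands within +/- half_width_nm of center_nm."""
    mask = np.abs(wavelengths - center_nm) <= half_width_nm
    return cube[:, :, mask].mean(axis=2)


def to_three_channels(cube: np.ndarray, wavelengths: np.ndarray,
                      centers_nm=(530.0, 560.0, 590.0)) -> torch.Tensor:
    """Build a 3-channel image with channels ordered from short to long wavelength."""
    channels = [band_average(cube, wavelengths, c) for c in sorted(centers_nm)]
    img = np.stack(channels, axis=0).astype(np.float32)  # shape (3, H, W)
    return torch.from_numpy(img)


def flip_augment(img: torch.Tensor) -> list:
    """Return the original image plus its vertical, horizontal, and combined flips."""
    return [img,
            torch.flip(img, dims=[1]),     # vertical flip (up-down)
            torch.flip(img, dims=[2]),     # horizontal flip (left-right)
            torch.flip(img, dims=[1, 2])]  # both flips


# GoogLeNet pre-trained on ImageNet, with its 1000-class head replaced by a
# 2-class linear layer for transfer learning on the lesion data.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
```

In a setup like this, the 5-fold cross-validation reported in the paper would typically be realized by partitioning the lesions into five folds and repeating the transfer learning once per fold, computing sensitivity and specificity on each held-out fold.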