A Comparative Analysis of Breast Cancer Diagnosis by Fusing Visual and Semantic Feature Descriptors
G. Apostolopoulos, A. Koutras, D. Anyfantis, Ioanna Christoyianni
2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE), 25 October 2021
DOI: 10.1109/BIBE52308.2021.9635481
Abstract
Computer-aided Diagnosis (CAD) systems have become a significant assistance tool, used to identify abnormal and normal regions of interest in mammograms faster and more effectively than human readers. In this work, we propose a new approach for identifying breast cancer lesions of all types in digital mammograms by combining low- and high-level mammogram descriptors in a compact form. The proposed method consists of two major stages. First, a feature extraction process that uses two-dimensional discrete transforms based on ART and Shapelets, together with textural representations based on Gabor filter banks, extracts the low-level visual descriptors. To further improve the method's performance, the semantic information about each mammogram provided by radiologists is encoded in a 16-bit-word high-level feature vector. All features are stored in a quaternion and fused using the L2 norm before being presented to the classification module. For the classification task, each ROS is recognized using two different classification models, AdaBoost and Random Forest. The proposed method is evaluated on regions taken from the DDSM database. The results show that AdaBoost outperforms Random Forest in accuracy (99.2% ($\pm 0.527$) versus 93.78% ($\pm 1.659$)) as well as in precision, recall, and F-measure. The two classifiers achieve mean accuracies 33% and 38% higher, respectively, than when only visual descriptors are used, showing that semantic information can indeed improve the diagnosis when combined with standard visual features.
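The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: low-level Gabor filter-bank texture features, a 16-bit semantic word, fusion of the two descriptors, and AdaBoost/Random Forest classification. It is not the authors' implementation: the ART and Shapelet transforms are omitted, the quaternion packing is replaced by simple concatenation of L2-normalized vectors, and the filter-bank parameters, the layout of the 16-bit word, and the helper names (`extract_gabor_features`, `encode_semantics`, `fuse`) are assumptions made purely for illustration.

```python
# Hedged sketch of a visual + semantic fusion pipeline (NOT the paper's code).
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def extract_gabor_features(roi, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Low-level texture descriptor: mean/variance of Gabor filter-bank responses."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = np.real(gabor_kernel(frequency=f, theta=theta))
            response = ndimage.convolve(roi.astype(float), kernel, mode="wrap")
            feats.extend([response.mean(), response.var()])
    return np.asarray(feats)


def encode_semantics(bits):
    """High-level descriptor: a 16-bit word of radiologist-provided findings,
    here simply unpacked into a 0/1 vector (the actual bit layout is assumed)."""
    assert len(bits) == 16
    return np.asarray(bits, dtype=float)


def fuse(visual, semantic):
    """L2-normalize each descriptor before concatenation so neither dominates
    (a simplification of the quaternion/L2-norm fusion in the paper)."""
    v = visual / (np.linalg.norm(visual) + 1e-12)
    s = semantic / (np.linalg.norm(semantic) + 1e-12)
    return np.concatenate([v, s])


# Toy usage with random arrays standing in for DDSM regions of interest.
rng = np.random.default_rng(0)
X = np.stack([
    fuse(extract_gabor_features(rng.random((64, 64))),
         encode_semantics(rng.integers(0, 2, 16)))
    for _ in range(40)
])
y = rng.integers(0, 2, 40)  # 0 = normal, 1 = abnormal (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for clf in (AdaBoostClassifier(n_estimators=100, random_state=0),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```

With real data, the random ROIs and placeholder labels would be replaced by DDSM regions and their ground-truth annotations, and the reported precision, recall, and F-measure would be computed alongside accuracy.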