Space and Time Resolved Detection of Platelet Activation and von Willebrand Factor Conformational Changes in Deep Suspensions
Pub Date: 2017-01-01; Epub Date: 2017-11-06; DOI: 10.1155/2017/8318906
Jacopo Biasetti, Kaushik Sampath, Angel Cortez, Alaleh Azhir, Assaf A Gilad, Thomas S Kickler, Tobias Obser, Zaverio M Ruggeri, Joseph Katz
Tracking the phenotypic changes of cells and proteins in deep suspensions is critical for the direct imaging of blood-related phenomena in in vitro replicas of cardiovascular systems and blood-handling devices. This paper introduces fluorescence imaging techniques for space- and time-resolved detection of platelet activation, von Willebrand factor (VWF) conformational changes, and VWF-platelet interaction in deep suspensions. Labeled VWF, platelets, and VWF-platelet strands are suspended in deep cuvettes, illuminated, and imaged with a high-sensitivity EM-CCD camera, allowing detection with an exposure time of 1 ms. In-house postprocessing algorithms identify and track the moving signals. Recombinant VWF-eGFP (rVWF-eGFP) and VWF labeled with an FITC-conjugated polyclonal antibody are employed. Anti-P-Selectin FITC-conjugated antibodies and the calcium-sensitive probe Indo-1 are used to detect activated platelets. A positive correlation between the mean number of platelets detected per image and the percentage of activated platelets determined through flow cytometry is obtained, validating the technique. An increase in the number of rVWF-eGFP signals upon exposure to shear stress demonstrates the technique's ability to detect breakup of self-aggregates. VWF globular and unfolded conformations and self-aggregation are also observed. The ability to track the size and shape of VWF-platelet strands in space and time provides a means to detect pro- and antithrombotic processes.
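The authors' in-house tracking code is not described in detail; as a rough illustration of the general detect-and-link step ("identify and track the moving signals"), the sketch below thresholds each frame, labels connected bright blobs, and links centroids to the nearest blob in the next frame. The threshold, search radius, and synthetic frames are illustrative assumptions, not the paper's algorithm.

```python
# Minimal detect-and-link sketch (not the authors' in-house code).
import numpy as np
from scipy import ndimage

def detect_signals(frame, threshold):
    """Return centroids (row, col) of connected regions above threshold."""
    labels, n = ndimage.label(frame > threshold)
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def link_frames(prev_pts, next_pts, max_dist=5.0):
    """Greedy nearest-neighbor linking between consecutive frames."""
    links = []
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
    return links

# Toy example with two bright signals drifting by one pixel between frames.
rng = np.random.default_rng(0)
frame0 = rng.normal(0.0, 0.1, (64, 64))
frame0[20, 20] = frame0[40, 45] = 5.0
frame1 = np.roll(frame0, shift=(1, 1), axis=(0, 1))
pts0, pts1 = detect_signals(frame0, 1.0), detect_signals(frame1, 1.0)
print(link_frames(pts0, pts1))
```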
{"title":"Space and Time Resolved Detection of Platelet Activation and von Willebrand Factor Conformational Changes in Deep Suspensions.","authors":"Jacopo Biasetti, Kaushik Sampath, Angel Cortez, Alaleh Azhir, Assaf A Gilad, Thomas S Kickler, Tobias Obser, Zaverio M Ruggeri, Joseph Katz","doi":"10.1155/2017/8318906","DOIUrl":"https://doi.org/10.1155/2017/8318906","url":null,"abstract":"<p><p>Tracking cells and proteins' phenotypic changes in deep suspensions is critical for the direct imaging of blood-related phenomena in <i>in vitro</i> replica of cardiovascular systems and blood-handling devices. This paper introduces fluorescence imaging techniques for space and time resolved detection of platelet activation, von Willebrand factor (VWF) conformational changes, and VWF-platelet interaction in deep suspensions. Labeled VWF, platelets, and VWF-platelet strands are suspended in deep cuvettes, illuminated, and imaged with a high-sensitivity EM-CCD camera, allowing detection using an exposure time of 1 ms. In-house postprocessing algorithms identify and track the moving signals. Recombinant VWF-eGFP (rVWF-eGFP) and VWF labeled with an FITC-conjugated polyclonal antibody are employed. Anti-P-Selectin FITC-conjugated antibodies and the calcium-sensitive probe Indo-1 are used to detect activated platelets. A positive correlation between the mean number of platelets detected per image and the percentage of activated platelets determined through flow cytometry is obtained, validating the technique. An increase in the number of rVWF-eGFP signals upon exposure to shear stress demonstrates the technique's ability to detect breakup of self-aggregates. VWF globular and unfolded conformations and self-aggregation are also observed. The ability to track the size and shape of VWF-platelet strands in space and time provides means to detect pro- and antithrombotic processes.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"8318906"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/8318906","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35650754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phase Segmentation Methods for an Automatic Surgical Workflow Analysis
Pub Date: 2017-01-01; Epub Date: 2017-03-19; DOI: 10.1155/2017/1985796
Dinh Tuan Tran, Ryuhei Sakurai, Hirotake Yamazoe, Joo-Ho Lee
In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM on observations obtained via an LDA topic model over optical-flow motion features of general working contexts, including medical staff, equipment, and materials. These working contexts are captured with multiple synchronized cameras observing the surgical workflow. We validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows, with an average workflow length of 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we consider a very promising result.
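As a hedged sketch of the pipeline described above (quantized optical-flow features, then LDA topic mixtures, then HMM phase labels), the following uses scikit-learn's LatentDirichletAllocation and hmmlearn's GaussianHMM on synthetic data; the bag-of-words construction, component counts, and all parameters are illustrative assumptions rather than the paper's settings.

```python
# Sketch of an LDA -> HMM phase-segmentation pipeline (illustrative only).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Assume each time point is a histogram of quantized optical-flow "visual words"
# pooled over all synchronized cameras: shape (n_time_points, vocabulary_size).
flow_word_counts = rng.poisson(2.0, size=(600, 50))

# 1) LDA turns each histogram into a low-dimensional topic mixture.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_mix = lda.fit_transform(flow_word_counts)          # (600, 8)

# 2) An HMM over the topic mixtures models the workflow; each hidden state is
#    meant to play the role of one surgical phase (12 phases in the paper).
phase_hmm = hmm.GaussianHMM(n_components=12, covariance_type="diag",
                            n_iter=50, random_state=0)
phase_hmm.fit(topic_mix)
phase_labels = phase_hmm.predict(topic_mix)               # one phase label per time point
print(phase_labels[:20])
```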
{"title":"Phase Segmentation Methods for an Automatic Surgical Workflow Analysis.","authors":"Dinh Tuan Tran, Ryuhei Sakurai, Hirotake Yamazoe, Joo-Ho Lee","doi":"10.1155/2017/1985796","DOIUrl":"https://doi.org/10.1155/2017/1985796","url":null,"abstract":"In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We have an awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"1985796"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/1985796","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34912735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images
Pub Date: 2017-01-01; DOI: 10.1155/2017/7089213
Hirokazu Nosato, Hidenori Sakanashi, Eiichi Takahashi, Masahiro Murakawa, Hiroshi Aoki, Ken Takeuchi, Yasuo Suzuki
Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct inspection of the colon and rectum. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early, still treatable stages. However, diagnostic accuracy is highly dependent on the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialists, to detect early-stage cancers when they are obscured by inflammation of the colonic mucosa caused by intractable inflammatory bowel diseases such as ulcerative colitis (UC). To assist UC diagnosis, a technology is therefore needed that can retrieve, from previously diagnosed images showing various symptoms of the colonic mucosa, cases similar to the diagnostic target image. To support diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method retrieves similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location with high accuracy.
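The paper's actual retrieval features are not reproduced here; the sketch below only illustrates one generic way to tolerate scale changes, by describing each image with histograms computed at several scales and scoring a query against a stored case by the best similarity over all scale pairs. The scales, bin count, and cosine similarity are assumptions.

```python
# Generic scale-tolerant retrieval sketch (not the authors' algorithm).
import numpy as np
from scipy import ndimage

def multiscale_histograms(img, scales=(1.0, 0.5, 0.25), bins=32):
    feats = []
    for s in scales:
        scaled = ndimage.zoom(img, s, order=1)            # rescaled copy of the image
        h, _ = np.histogram(scaled, bins=bins, range=(0.0, 1.0), density=True)
        feats.append(h / (np.linalg.norm(h) + 1e-12))      # unit-norm descriptor
    return np.stack(feats)

def similarity(query_feats, case_feats):
    # Best cosine similarity over all (query scale, case scale) pairs.
    return float(np.max(query_feats @ case_feats.T))

rng = np.random.default_rng(0)
query = rng.random((128, 128))
database = [rng.random((128, 128)) for _ in range(5)]
scores = [similarity(multiscale_histograms(query), multiscale_histograms(c))
          for c in database]
print(np.argsort(scores)[::-1])    # stored-case indices ranked by similarity
```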
{"title":"Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images.","authors":"Hirokazu Nosato, Hidenori Sakanashi, Eiichi Takahashi, Masahiro Murakawa, Hiroshi Aoki, Ken Takeuchi, Yasuo Suzuki","doi":"10.1155/2017/7089213","DOIUrl":"https://doi.org/10.1155/2017/7089213","url":null,"abstract":"<p><p>Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis. Thus, to assist the UC diagnosis, it is necessary to develop a new technology that can retrieve similar cases of diagnostic target image from cases in the past that stored the diagnosed images with various symptoms of colonic mucosa. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"7089213"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/7089213","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34778449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multitemporal Volume Registration for the Analysis of Rheumatoid Arthritis Evolution in the Wrist
Pub Date: 2017-01-01; Epub Date: 2017-10-19; DOI: 10.1155/2017/7232751
Roberta Ferretti, Silvana G Dellepiane
This paper describes a method based on an automatic segmentation process to coregister carpal bones of the same patient imaged at different time points. A rigid registration was chosen to avoid artificial bone deformations and to allow detection of any differences in bone shape due to erosion, disease regression, or other possible pathological signs. The actual registration step is performed on the basis of the principal inertial axes of each carpal bone volume, as estimated from the inertia matrix. In contrast to previously published approaches, the proposed method splits the 3D rotation into successive rotations about one axis at a time (the so-called basic or elemental rotations). In this way, the singularity and ambiguity drawbacks affecting other classical methods, for instance, the Euler angles method, are addressed. The proposed method was quantitatively evaluated using a set of real magnetic resonance imaging (MRI) sequences acquired at two different times from healthy wrists, with a direct volumetric comparison chosen as the cost function. Neither the segmentation step nor the registration step relies on a priori models, so good results are obtained even in pathological cases, as shown by visual evaluation of actual pathological cases.
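A minimal sketch of the principal-inertia-axes idea is given below: it computes the second-moment matrix of each binary bone volume (which shares its eigenvectors with the inertia tensor), takes the eigenvectors as principal axes, and forms a rigid transform aligning the follow-up volume to the baseline. The paper's elemental-rotation splitting and its handling of axis ordering and sign ambiguities are not reproduced here.

```python
# Principal-axes rigid alignment sketch (simplified; axis ordering/sign issues ignored).
import numpy as np

def principal_axes(volume):
    """Centroid and principal axes of a binary 3D volume."""
    coords = np.argwhere(volume > 0).astype(float)
    centroid = coords.mean(axis=0)
    centered = coords - centroid
    scatter = centered.T @ centered / len(centered)   # 3x3 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)        # columns = principal axes
    return centroid, eigvecs

# Toy volumes: a box-shaped "bone" and a follow-up acquisition with swapped axes.
baseline = np.zeros((40, 40, 40)); baseline[10:30, 15:25, 18:22] = 1
followup = np.transpose(baseline, (1, 0, 2))

c0, A0 = principal_axes(baseline)
c1, A1 = principal_axes(followup)
rotation = A0 @ A1.T                 # maps follow-up axis frame onto baseline frame
translation = c0 - rotation @ c1     # centroid alignment
print(np.round(rotation, 2), np.round(translation, 2))
```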
{"title":"Multitemporal Volume Registration for the Analysis of Rheumatoid Arthritis Evolution in the Wrist.","authors":"Roberta Ferretti, Silvana G Dellepiane","doi":"10.1155/2017/7232751","DOIUrl":"10.1155/2017/7232751","url":null,"abstract":"<p><p>This paper describes a method based on an automatic segmentation process to coregister carpal bones of the same patient imaged at different time points. A rigid registration was chosen to avoid artificial bone deformations and to allow finding eventual differences in the bone shape due to erosion, disease regression, or other eventual pathological signs. The actual registration step is performed on the basis of principal inertial axes of each carpal bone volume, as estimated from the inertia matrix. In contrast to already published approaches, the proposed method suggests splitting the 3D rotation into successive rotations about one axis at a time (the so-called basic or elemental rotations). In such a way, singularity and ambiguity drawbacks affecting other classical methods, for instance, the Euler angles method, are addressed. The proposed method was quantitatively evaluated using a set of real magnetic resonance imaging (MRI) sequences acquired at two different times from healthy wrists and by choosing a direct volumetric comparison as a cost function. Both the segmentation and registration steps are not based on a priori models, and they are therefore able to obtain good results even in pathological cases, as proven by the visual evaluation of actual pathological cases.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"7232751"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5672126/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35216080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Compressed Sensing MRI Based on Complex Double-Density Dual-Tree Discrete Wavelet Transform
Pub Date: 2017-01-01; Epub Date: 2017-04-09; DOI: 10.1155/2017/9604178
Shanshan Chen, Bensheng Qiu, Feng Zhao, Chao Li, Hongwei Du
Compressed sensing (CS) has been applied to accelerate magnetic resonance imaging (MRI) for many years. Because the wavelet basis lacks translation invariance, undersampled MRI reconstruction based on the discrete wavelet transform may produce serious artifacts. In this paper, we propose a CS-based reconstruction scheme that combines the complex double-density dual-tree discrete wavelet transform (CDDDT-DWT) with the fast iterative shrinkage/soft thresholding algorithm (FISTA) to efficiently reduce such visual artifacts. The CDDDT-DWT offers shift invariance and a high degree of directional selectivity, while FISTA has an excellent convergence rate and a simple design. Compared with conventional CS-based reconstruction methods, the experimental results demonstrate that this approach achieves a higher peak signal-to-noise ratio (PSNR), higher signal-to-noise ratio (SNR), better structural similarity index (SSIM), and lower relative error.
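A generic FISTA sketch for transform-sparse CS-MRI is shown below; an orthonormal DCT stands in for the paper's CDDDT-DWT, and the toy phantom, sampling mask, and regularization weight are illustrative assumptions.

```python
# FISTA for undersampled-Fourier reconstruction with an orthonormal sparsifying
# transform (DCT stand-in, not the paper's CDDDT-DWT).
import numpy as np
from scipy.fft import dctn, idctn

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_cs_mri(y, mask, lam=0.01, n_iter=100):
    A  = lambda img: mask * np.fft.fft2(img, norm="ortho")    # undersampled k-space
    At = lambda ksp: np.fft.ifft2(mask * ksp, norm="ortho")   # adjoint operator
    x = np.real(At(y)); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = np.real(At(A(z) - y))                          # gradient of the data term
        # Proximal step: soft-threshold the transform coefficients (step size 1).
        x_new = idctn(soft(dctn(z - grad, norm="ortho"), lam), norm="ortho")
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0      # FISTA momentum update
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
image = np.outer(np.hanning(64), np.hanning(64))              # toy phantom
mask = rng.random((64, 64)) < 0.35                            # ~35% random sampling
y = mask * np.fft.fft2(image, norm="ortho")
recon = fista_cs_mri(y, mask)
print(float(np.linalg.norm(recon - image) / np.linalg.norm(image)))   # relative error
```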
{"title":"Fast Compressed Sensing MRI Based on Complex Double-Density Dual-Tree Discrete Wavelet Transform.","authors":"Shanshan Chen, Bensheng Qiu, Feng Zhao, Chao Li, Hongwei Du","doi":"10.1155/2017/9604178","DOIUrl":"https://doi.org/10.1155/2017/9604178","url":null,"abstract":"<p><p>Compressed sensing (CS) has been applied to accelerate magnetic resonance imaging (MRI) for many years. Due to the lack of translation invariance of the wavelet basis, undersampled MRI reconstruction based on discrete wavelet transform may result in serious artifacts. In this paper, we propose a CS-based reconstruction scheme, which combines complex double-density dual-tree discrete wavelet transform (CDDDT-DWT) with fast iterative shrinkage/soft thresholding algorithm (FISTA) to efficiently reduce such visual artifacts. The CDDDT-DWT has the characteristics of shift invariance, high degree, and a good directional selectivity. In addition, FISTA has an excellent convergence rate, and the design of FISTA is simple. Compared with conventional CS-based reconstruction methods, the experimental results demonstrate that this novel approach achieves higher peak signal-to-noise ratio (PSNR), larger signal-to-noise ratio (SNR), better structural similarity index (SSIM), and lower relative error.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"9604178"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/9604178","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34980329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incorporating Colour Information for Computer-Aided Diagnosis of Melanoma from Dermoscopy Images: A Retrospective Survey and Critical Analysis
Pub Date: 2016-12-01; DOI: 10.1155/2016/4868305
Ali Madooei, M. S. Drew
Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered incurable, the prognosis is promising if it is detected and excised early. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Because of this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from low-level colour features, such as simple statistical measures of the colours occurring in the lesion, to high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction.
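As a small example of the low-level colour features mentioned above, the sketch below computes simple per-channel statistics inside a lesion mask; the image, mask, and chosen statistics are illustrative only, not a feature set from any surveyed study.

```python
# Illustrative low-level colour features inside a lesion mask.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))                 # stand-in dermoscopy image, RGB in [0, 1]
lesion_mask = np.zeros((256, 256), dtype=bool)
lesion_mask[80:180, 90:200] = True                # stand-in lesion segmentation

lesion_pixels = image[lesion_mask]                # (n_pixels, 3)
features = {
    "mean_rgb": lesion_pixels.mean(axis=0),       # average colour of the lesion
    "std_rgb": lesion_pixels.std(axis=0),         # colour spread per channel
    "relative_lesion_area": lesion_mask.mean(),   # fraction of image covered by lesion
}
print(features)
```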
{"title":"Incorporating Colour Information for Computer-Aided Diagnosis of Melanoma from Dermoscopy Images: A Retrospective Survey and Critical Analysis","authors":"Ali Madooei, M. S. Drew","doi":"10.1155/2016/4868305","DOIUrl":"https://doi.org/10.1155/2016/4868305","url":null,"abstract":"Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered as incurable, if detected and excised early, the prognosis is promising. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images, in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from using low-level colour features, such as simple statistical measures of colours occurring in the lesion, to availing themselves of high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2016 1","pages":""},"PeriodicalIF":7.6,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2016/4868305","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64402480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Capabilities and Limitations of Clinical Magnetic Resonance Imaging for Detecting Kidney Stones: A Retrospective Study
Pub Date: 2016-11-01; DOI: 10.1155/2016/4935656
El-Sayed H. Ibrahim, Joseph G. Cernigliaro, M. Bridges, R. Pooley, W. Haley
The purpose of this work was to investigate the performance of currently available magnetic resonance imaging (MRI) for detecting kidney stones, compared to computed tomography (CT) results, and to determine the characteristics of successfully detected stones. Patients who had undergone both abdominal/pelvic CT and MRI exams within 30 days were studied. The images were reviewed by two expert radiologists blinded to the patients' respective radiological diagnoses. The study consisted of four steps: (1) reviewing the MRI images and determining whether any kidney stones are identified; (2) reviewing the corresponding CT images and confirming whether kidney stones are identified; (3) reviewing the MRI images a second time, armed with the information from the corresponding CT, and noting whether any previously missed kidney stones are now positively identified; (4) for all stones confirmed on MRI in the previous steps, asking the expert radiologists whether, in retrospect and with knowledge of size and location on the corresponding CT, these stones would be confidently identified on MRI. In this best-case scenario, involving knowledge of the stones and their locations on concurrent CT, the expert radiologists detected 19% of kidney stones on MRI, with stone size being a major factor in stone identification.
{"title":"The Capabilities and Limitations of Clinical Magnetic Resonance Imaging for Detecting Kidney Stones: A Retrospective Study","authors":"El-Sayed H. Ibrahim, Joseph G. Cernigliaro, M. Bridges, R. Pooley, W. Haley","doi":"10.1155/2016/4935656","DOIUrl":"https://doi.org/10.1155/2016/4935656","url":null,"abstract":"The purpose of this work was to investigate the performance of currently available magnetic resonance imaging (MRI) for detecting kidney stones, compared to computed tomography (CT) results, and to determine the characteristics of successfully detected stones. Patients who had undergone both abdominal/pelvic CT and MRI exams within 30 days were studied. The images were reviewed by two expert radiologists blinded to the patients' respective radiological diagnoses. The study consisted of four steps: (1) reviewing the MRI images and determining whether any kidney stone(s) are identified; (2) reviewing the corresponding CT images and confirming whether kidney stones are identified; (3) reviewing the MRI images a second time, armed with the information from the corresponding CT, noting whether any kidney stones are positively identified that were previously missed; (4) for all stones MRI-confirmed on previous steps, the radiologist experts being asked to answer whether in retrospect, with knowledge of size and location on corresponding CT, these stones would be affirmed as confidently identified on MRI or not. In this best-case scenario involving knowledge of stones and their locations on concurrent CT, radiologist experts detected 19% of kidney stones on MRI, with stone size being a major factor for stone identification.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2016 1","pages":""},"PeriodicalIF":7.6,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2016/4935656","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64404153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Analytical Approach for Fast Recovery of the LSI Properties in Magnetic Particle Imaging
Pub Date: 2016-10-01; DOI: 10.1155/2016/6120713
H. Jabbari, Jungwon Yoon
Linearity and shift invariance (LSI) are important properties of magnetic particle imaging (MPI) for quantitative medical diagnosis applications. The MPI image equations have been shown theoretically to exhibit LSI; in practice, however, the necessary filtering removes the first-harmonic information, which destroys the LSI characteristics. In the x-space reconstruction method, this lost information can be treated as a constant. Available recovery algorithms, which are based on signal matching across multiple partial fields of view (pFOVs), require considerable processing time and a priori information at the start of imaging. In this paper, a fast analytical recovery algorithm is proposed to restore the LSI properties of x-space MPI images representable as an image of discrete concentrations of magnetic material. The method utilizes the one-dimensional (1D) x-space imaging kernel and properties of the image and lost-image equations. The approach does not require overlapping pFOVs, and its complexity depends only on a small system of linear equations; it can therefore reduce the processing time. Moreover, the algorithm needs only a priori information that can be obtained from a single imaging process. Simulations with different particle distributions are conducted, and 1D and 2D imaging results demonstrate the effectiveness of the proposed approach.
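For context, one common way to write the x-space DC-loss problem (an assumption here, not necessarily the paper's notation) is that, after filtering, each pFOV yields the native image plus an unknown constant:

\[
\hat{\rho}_j(x) = (\rho * h)(x) + c_j, \qquad x \in \mathrm{pFOV}_j,
\]

where \(\rho\) is the particle concentration, \(h\) is the 1D x-space imaging kernel, and \(c_j\) is the constant lost with the first harmonic in pFOV \(j\). Restoring LSI then amounts to estimating the set \(\{c_j\}\), which the paper does by solving a small system of linear equations built from the imaging kernel, without requiring pFOV overlap.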
{"title":"An Analytical Approach for Fast Recovery of the LSI Properties in Magnetic Particle Imaging","authors":"H. Jabbari, Jungwon Yoon","doi":"10.1155/2016/6120713","DOIUrl":"https://doi.org/10.1155/2016/6120713","url":null,"abstract":"Linearity and shift invariance (LSI) characteristics of magnetic particle imaging (MPI) are important properties for quantitative medical diagnosis applications. The MPI image equations have been theoretically shown to exhibit LSI; however, in practice, the necessary filtering action removes the first harmonic information, which destroys the LSI characteristics. This lost information can be constant in the x-space reconstruction method. Available recovery algorithms, which are based on signal matching of multiple partial field of views (pFOVs), require much processing time and a priori information at the start of imaging. In this paper, a fast analytical recovery algorithm is proposed to restore the LSI properties of the x-space MPI images, representable as an image of discrete concentrations of magnetic material. The method utilizes the one-dimensional (1D) x-space imaging kernel and properties of the image and lost image equations. The approach does not require overlapping of pFOVs, and its complexity depends only on a small-sized system of linear equations; therefore, it can reduce the processing time. Moreover, the algorithm only needs a priori information which can be obtained at one imaging process. Considering different particle distributions, several simulations are conducted, and results of 1D and 2D imaging demonstrate the effectiveness of the proposed approach.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2016 1","pages":""},"PeriodicalIF":7.6,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2016/6120713","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64459066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anatomy-Correlated Breast Imaging and Visual Grading Analysis Using Quantitative Transmission Ultrasound™
Pub Date: 2016-09-01; DOI: 10.1155/2016/7570406
J. Klock, E. Iuanow, Bilal H. Malik, N. Obuchowski, J. Wiskin, M. Lenox
Objectives. This study presents correlations between the cross-sectional anatomy of human female breasts and Quantitative Transmission (QT) Ultrasound, performs discriminant classifier analysis to validate the speed-of-sound correlations, and performs a visual grading analysis comparing QT Ultrasound with mammography. Materials and Methods. Human cadaver breasts were imaged using QT Ultrasound, sectioned, and photographed. Biopsies confirmed the microanatomy, and areas were correlated with QT Ultrasound images. Measurements were taken in live subjects from QT Ultrasound images, and speed-of-sound values for each identified anatomical structure were plotted. Finally, a visual grading analysis was performed on the images to determine whether radiologists' confidence in identifying breast structures with mammography (XRM) is comparable to QT Ultrasound. Results. QT Ultrasound identified all major anatomical features of the breast, and speed-of-sound calculations showed specific values for different breast tissues. Using linear discriminant analysis, the overall accuracy is 91.4%. In the visual grading analysis, readers scored the image quality on QT Ultrasound as better than on XRM in 69%–90% of breasts for specific tissues. Conclusions. QT Ultrasound provides accurate anatomic information and high tissue specificity using speed-of-sound information. Quantitative Transmission Ultrasound can distinguish different types of breast tissue with high resolution and accuracy.
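A sketch of speed-of-sound tissue classification with linear discriminant analysis follows; the tissue classes and per-tissue speed-of-sound values below are made-up stand-ins, not the study's measurements.

```python
# Linear discriminant analysis on (synthetic) speed-of-sound measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
tissue_types = ["fat", "gland", "skin", "connective"]
sos_means = [1430.0, 1510.0, 1570.0, 1545.0]          # m/s, illustrative values only
X = np.concatenate([rng.normal(m, 8.0, size=(200, 1)) for m in sos_means])
y = np.repeat(tissue_types, 200)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))                                 # resubstitution accuracy
print(clf.predict([[1435.0], [1560.0]]))               # classify new speed-of-sound values
```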
{"title":"Anatomy-Correlated Breast Imaging and Visual Grading Analysis Using Quantitative Transmission Ultrasound™","authors":"J. Klock, E. Iuanow, Bilal H. Malik, N. Obuchowski, J. Wiskin, M. Lenox","doi":"10.1155/2016/7570406","DOIUrl":"https://doi.org/10.1155/2016/7570406","url":null,"abstract":"Objectives. This study presents correlations between cross-sectional anatomy of human female breasts and Quantitative Transmission (QT) Ultrasound, does discriminate classifier analysis to validate the speed of sound correlations, and does a visual grading analysis comparing QT Ultrasound with mammography. Materials and Methods. Human cadaver breasts were imaged using QT Ultrasound, sectioned, and photographed. Biopsies confirmed microanatomy and areas were correlated with QT Ultrasound images. Measurements were taken in live subjects from QT Ultrasound images and values of speed of sound for each identified anatomical structure were plotted. Finally, a visual grading analysis was performed on images to determine whether radiologists' confidence in identifying breast structures with mammography (XRM) is comparable to QT Ultrasound. Results. QT Ultrasound identified all major anatomical features of the breast, and speed of sound calculations showed specific values for different breast tissues. Using linear discriminant analysis overall accuracy is 91.4%. Using visual grading analysis readers scored the image quality on QT Ultrasound as better than on XRM in 69%–90% of breasts for specific tissues. Conclusions. QT Ultrasound provides accurate anatomic information and high tissue specificity using speed of sound information. Quantitative Transmission Ultrasound can distinguish different types of breast tissue with high resolution and accuracy.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"38 1","pages":""},"PeriodicalIF":7.6,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2016/7570406","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64525727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing
Pub Date: 2016-09-01; DOI: 10.1155/2016/5075612
Peishan Dai, Hanwei Sheng, Jianmei Zhang, Ling Li, Jing Wu, Min Fan
Retinal fundus images play an important role in the diagnosis of retina-related diseases. Detailed structures in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may appear in low contrast, and retinal image enhancement usually helps in analyzing diseases related to the retinal fundus image. Current image enhancement methods may introduce artificial boundaries, abrupt changes in color levels, and loss of image detail. To avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image is processed with a normalized convolution algorithm using a domain transform to obtain an image containing the basic background information. Then, this background image is fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image is denoised by a two-stage method combining fourth-order PDEs with a relaxed median filter. Retinal image databases, including the DRIVE, STARE, and DIARETDB1 databases, were used to evaluate the enhancement effects. The results show that the method enhances the retinal fundus image prominently and, unlike some other fundus image enhancement methods, can directly enhance color images.
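A rough per-channel sketch of the background-estimate-and-fuse idea follows; a Gaussian-weighted normalized convolution, a simple additive fusion rule, and a plain median filter stand in for the paper's domain-transform normalized convolution, fusion step, and two-stage denoising.

```python
# Background estimation, fusion, and simple denoising per colour channel (illustrative).
import numpy as np
from scipy import ndimage

def normalized_convolution(channel, certainty, sigma=15.0):
    """Smooth background estimate weighted by a certainty map."""
    num = ndimage.gaussian_filter(channel * certainty, sigma)
    den = ndimage.gaussian_filter(certainty, sigma) + 1e-12
    return num / den

def enhance_channel(channel, certainty):
    background = normalized_convolution(channel, certainty)
    fused = np.clip(channel + (channel - background), 0.0, 1.0)   # boost local detail
    return ndimage.median_filter(fused, size=3)                   # simple denoising stage

rng = np.random.default_rng(0)
fundus = rng.random((256, 256, 3))        # stand-in colour fundus image in [0, 1]
fov_mask = np.ones((256, 256))            # certainty map (1 inside the field of view)
enhanced = np.dstack([enhance_channel(fundus[..., c], fov_mask) for c in range(3)])
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```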
{"title":"Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing","authors":"Peishan Dai, Hanwei Sheng, Jianmei Zhang, Ling Li, Jing Wu, Min Fan","doi":"10.1155/2016/5075612","DOIUrl":"https://doi.org/10.1155/2016/5075612","url":null,"abstract":"Retinal fundus image plays an important role in the diagnosis of retinal related diseases. The detailed information of the retinal fundus image such as small vessels, microaneurysms, and exudates may be in low contrast, and retinal image enhancement usually gives help to analyze diseases related to retinal fundus image. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image was processed by the normalized convolution algorithm with a domain transform to obtain an image with the basic information of the background. Then, the image with the basic information of the background was fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image was denoised by a two-stage denoising method including the fourth order PDEs and the relaxed median filter. The retinal image databases, including the DRIVE database, the STARE database, and the DIARETDB1 database, were used to evaluate image enhancement effects. The results show that the method can enhance the retinal fundus image prominently. And, different from some other fundus image enhancement methods, the proposed method can directly enhance color images.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2016 1","pages":""},"PeriodicalIF":7.6,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2016/5075612","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64410629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}