
2015 International Conference on Systems, Signals and Image Processing (IWSSIP): Latest Publications

Detecting the optimal active contour in the computed tomography image by using entropy to choose coefficients in energy equation
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7313932
M. Pham
In this paper, we present a method that uses image entropy to choose the coefficients in the energy equation of an active contour. We calculate the entropy of the computed tomography image for each pair of tension and elasticity coefficients. When the active contour is optimal, the image has minimum entropy (i.e. the image changes the least). In addition, to solve the energy equation (the optimization problem) we use dynamic programming with constraints, which increases the computational efficiency of the method. Together, these form compound conditions for detecting the optimal active contour in a computed tomography image.
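As an illustration of the coefficient selection described above, the sketch below grid-searches a few (tension, elasticity) pairs, evolves a snake for each pair and keeps the pair whose enclosed region has minimum Shannon entropy. It uses scikit-image's generic `active_contour` snake and a test image as a stand-in for a CT slice; the candidate coefficient values and the choice of computing entropy over the enclosed pixels are assumptions for the sketch, not the author's implementation.

```python
# Hedged sketch: grid-search the snake coefficients (tension alpha, elasticity beta)
# and keep the pair that minimises a Shannon entropy criterion, here computed over
# the pixels enclosed by the converged contour. Illustrative only, not the paper's code.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from skimage.draw import polygon

def shannon_entropy(values, bins=64):
    """Shannon entropy (bits) of a grey-level sample."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

image = img_as_float(data.camera())          # stand-in for a CT slice
smoothed = gaussian(image, sigma=3, preserve_range=True)

# initial circular snake around the object of interest
t = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([220 + 100 * np.sin(t), 256 + 100 * np.cos(t)])

best = None
for alpha in (0.005, 0.015, 0.05):           # tension candidates (assumed values)
    for beta in (0.1, 1.0, 10.0):            # elasticity candidates (assumed values)
        snake = active_contour(smoothed, init, alpha=alpha, beta=beta, gamma=0.01)
        rr, cc = polygon(snake[:, 0], snake[:, 1], shape=image.shape)
        h = shannon_entropy(image[rr, cc]) if rr.size else np.inf
        if best is None or h < best[0]:
            best = (h, alpha, beta)

print("minimum-entropy pair: alpha=%.3f beta=%.2f (H=%.3f bits)" % (best[1], best[2], best[0]))
```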
Citations: 1
Fast scale space image decomposition
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7313926
A. Alsam, H. J. Rivertz
The removal of high frequencies from an image while retaining edges is a complicated problem that has many solutions in the literature. Most of these solutions are, however, iterative and computationally expensive. In this paper, we introduce a direct method with three basic steps. In the first, the image is convolved with a Gaussian function of a defined size. In the second, the gradients of the blurred image are compared with those of the original, and a third gradient, the minimum of the two at each pixel, is composed. Finally, the combined gradient is integrated in the Fourier domain to obtain the result.
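A minimal sketch of the three steps, under the assumptions of periodic boundaries, forward-difference gradients and an FFT-based Poisson solver for the integration step (the paper does not fix these details):

```python
# Hedged sketch of the three-step decomposition described above: blur with a Gaussian,
# keep at each pixel the smaller of the original/blurred gradients, then integrate the
# combined gradient field with an FFT Poisson solver. Kernel size, periodic boundaries
# and forward differences are illustrative choices, not the paper's.
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_diff(u):
    gx = np.roll(u, -1, axis=1) - u      # d/dx (columns), periodic
    gy = np.roll(u, -1, axis=0) - u      # d/dy (rows), periodic
    return gx, gy

def poisson_integrate(gx, gy, mean_value=0.0):
    """Recover u from (gx, gy) by solving lap(u) = div(g) in the Fourier domain."""
    # divergence with backward differences (adjoint of the forward-difference gradient)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    H, W = div.shape
    wy = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(H))[:, None]
    wx = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(W))[None, :]
    denom = wx + wy - 4.0                # eigenvalues of the periodic discrete Laplacian
    denom[0, 0] = 1.0                    # DC term is undetermined; fix it separately
    u_hat = np.fft.fft2(div) / denom
    u_hat[0, 0] = 0.0
    u = np.real(np.fft.ifft2(u_hat))
    return u - u.mean() + mean_value

def scale_space_decompose(image, sigma=5.0):
    blurred = gaussian_filter(image, sigma=sigma)
    gx0, gy0 = forward_diff(image)
    gxb, gyb = forward_diff(blurred)
    # per pixel, keep the gradient vector with the smaller magnitude
    keep_blurred = np.hypot(gxb, gyb) < np.hypot(gx0, gy0)
    gx = np.where(keep_blurred, gxb, gx0)
    gy = np.where(keep_blurred, gyb, gy0)
    return poisson_integrate(gx, gy, mean_value=float(image.mean()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    base = scale_space_decompose(img, sigma=5.0)
    print(base.shape, float(base.min()), float(base.max()))
```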
Citations: 0
Retinal blood vessels extraction using morphological operations
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314227
V. Kurilová, J. Pavlovičová, M. Oravec, Radoslav Rakar, Igor Marcek
Automatic extraction of retinal blood vessels is an important task for computer-aided diagnosis from retinal images. Without extracting the blood vessels, structures with pathological findings such as microaneurysms, haemorrhages or neovascularisations could erroneously be confused with them. We developed two independent methods; each method is a combination of different morphological operations with different structuring elements (of different types and sizes). Images from a standard database, with blood vessels marked by an ophthalmologist, were used for evaluation. Sensitivity, specificity and accuracy were used as measures of the methods' efficiency. Both approaches show promising results and could be used as part of image preprocessing before algorithms for detecting pathological retinal findings.
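As a rough illustration of a morphology-based vessel map (not the authors' exact pipeline), the sketch below applies black top-hat transforms with disk-shaped structuring elements of several sizes to the green channel, thresholds each response with Otsu's method and takes the union; the structuring-element radii, the Otsu threshold and the test image are assumptions.

```python
# Hedged sketch: multi-scale black top-hat on the (vessel-dark) green channel, Otsu
# thresholding, union of the per-scale masks, small-object removal. Illustrative only.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import threshold_otsu
from skimage.morphology import black_tophat, disk, remove_small_objects

def vessel_mask(rgb_fundus, radii=(3, 5, 8), min_size=64):
    green = img_as_float(rgb_fundus[..., 1])          # vessels have best contrast in green
    mask = np.zeros(green.shape, dtype=bool)
    for r in radii:                                    # assumed structuring-element sizes
        enhanced = black_tophat(green, disk(r))        # bright response where vessels are dark
        mask |= enhanced > threshold_otsu(enhanced)
    return remove_small_objects(mask, min_size=min_size)

def sensitivity_specificity_accuracy(pred, truth):
    """Metrics used in the paper, computed against a manually marked ground truth."""
    tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / truth.size

if __name__ == "__main__":
    # stand-in RGB image; in practice use a fundus image and an ophthalmologist's markings
    rgb = data.astronaut()
    mask = vessel_mask(rgb)
    print("vessel pixels:", int(mask.sum()))
```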
Citations: 6
A dynamic cost-centric risk impact metrics development
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314230
T. Hamid, D. Al-Jumeily
The goal of this work is to advance a new methodology for measuring a dynamic severity cost impact for each host by extending the Common Vulnerability Scoring System (CVSS), which is based on base, temporal and environmental metrics, into a Dynamic Vulnerability Scoring System (DVSS) based on intrinsic, time-based and ecological metrics. The interactions between vulnerabilities are considered and a dynamic impact metric is developed, which can be seen as a baseline between the static metric and the interaction between the exposures. A new method has been developed to represent a unique dynamic severity cost from the total weight of all vulnerabilities on each host, representing the cost-centric severity of each state.
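A toy sketch of what a cost-centric host severity could look like: each vulnerability carries a CVSS-like base score, a time-based factor and an environmental weight, and the host cost is a weighted sum with a small boost for vulnerabilities that can be chained. The concrete formula, constants and field names below are illustrative assumptions, not the DVSS definition.

```python
# Hedged sketch of a cost-centric host severity; the formula is an assumption, not DVSS.
from dataclasses import dataclass
from math import exp

@dataclass
class Vulnerability:
    cve_id: str
    base_score: float        # 0..10, CVSS-like intrinsic severity
    age_days: float          # time since disclosure (time-based metric)
    env_weight: float        # environmental / ecological importance of the asset
    chained_with: int = 0    # number of other vulnerabilities it can be combined with

def dynamic_cost(v: Vulnerability, half_life_days: float = 180.0) -> float:
    time_factor = exp(-0.693 * v.age_days / half_life_days)   # assumed decay with age
    chain_boost = 1.0 + 0.1 * v.chained_with                   # interaction between exposures
    return v.base_score * v.env_weight * time_factor * chain_boost

def host_severity(vulns) -> float:
    """Total weight of all vulnerabilities on a host, as a single severity cost."""
    return sum(dynamic_cost(v) for v in vulns)

if __name__ == "__main__":
    host = [Vulnerability("CVE-A", 9.8, 30, 1.2, chained_with=2),
            Vulnerability("CVE-B", 5.4, 400, 0.8)]
    print(f"cost-centric severity: {host_severity(host):.2f}")
```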
Citations: 0
Bo(V)W models for object recognition from video
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314184
Warren Rieutort-Louis, Ognjen Arandjelovic
In this paper we introduce two novel methods for object recognition from video. Our major contributions are (i) the use of dense, overlapping local descriptors as a means of accurately capturing the appearance of generic, even untextured objects, (ii) a framework for employing such sets for recognition using video, (iii) a detailed empirical examination of different aspects of the proposed model and (iv) a comparative performance evaluation on a large object database. We describe and compare two bag-of-visual-words (BoVW)-based representations of an object's appearance in a video sequence, one using a per-sequence bag-of-words and one using a set of per-frame bags-of-words. Empirical results demonstrate the effectiveness of both representations, with somewhat better performance from the former.
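The two representations can be sketched as follows: dense overlapping patches stand in for the local descriptors, a k-means vocabulary is learned over all frames, and the sequence is described either by one pooled histogram or by a stack of per-frame histograms. Patch size, vocabulary size and the synthetic frames are assumptions of the sketch, not the paper's descriptors.

```python
# Hedged sketch: per-sequence vs per-frame bag-of-visual-words over the same vocabulary.
import numpy as np
from sklearn.cluster import KMeans

def dense_patches(frame, size=8, stride=4):
    """Overlapping size x size patches, flattened into local descriptors."""
    H, W = frame.shape
    out = [frame[r:r + size, c:c + size].ravel()
           for r in range(0, H - size + 1, stride)
           for c in range(0, W - size + 1, stride)]
    return np.asarray(out, dtype=np.float32)

def bow_histogram(descriptors, vocab):
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(0)
video = rng.random((10, 64, 64)).astype(np.float32)          # 10 synthetic frames

all_desc = np.vstack([dense_patches(f) for f in video])
vocab = KMeans(n_clusters=32, n_init=4, random_state=0).fit(all_desc)

per_sequence = bow_histogram(all_desc, vocab)                 # one histogram per sequence
per_frame = np.stack([bow_histogram(dense_patches(f), vocab) for f in video])

print(per_sequence.shape, per_frame.shape)                    # (32,) and (10, 32)
```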
Citations: 6
Measurement setup for automatized baselining of WLAN network performance
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7313928
T. Berisha, Cise Midoglu, Samira Homayouni, P. Svoboda, M. Rupp
WLAN technology defined by the IEEE 802.11 standard family delivers ever-increasing raw data rates with each new standard. However, raw data rate does not reflect real-world performance for end users. This paper proposes an automated setup to conduct performance measurements for WLAN APs with respect to network performance metrics. The main objective is to create a simple baseline for benchmarking, in order to find the best among the devices. The setup is able to measure data rate, RSSI and jitter in the WLAN uplink and downlink. It is a repeatable and reliable mechanism, which can be further extended to different scenarios and use cases. We also present and discuss preliminary numerical results. This is only the first step towards a fully automated setup in an anechoic chamber.
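A small sketch of the kind of per-run metrics such a setup could log, computing throughput from byte counts, an RFC 3550-style smoothed interarrival jitter and a mean RSSI from a packet trace; the trace format, the constant send interval and the synthetic numbers are assumptions, not the paper's tooling.

```python
# Hedged sketch of per-run WLAN metrics from a packet trace (assumed format).
import numpy as np

def throughput_mbps(total_bytes, duration_s):
    return 8.0 * total_bytes / duration_s / 1e6

def rfc3550_jitter(arrival_s, send_interval_s):
    """Smoothed jitter J += (|D| - J)/16 over successive packet pairs (RFC 3550, sec. 6.4.1)."""
    j = 0.0
    for i in range(1, len(arrival_s)):
        transit_delta = (arrival_s[i] - arrival_s[i - 1]) - send_interval_s
        j += (abs(transit_delta) - j) / 16.0
    return j

rng = np.random.default_rng(1)
interval = 1e-3                                   # packets assumed sent every 1 ms
arrivals = np.cumsum(interval + rng.normal(0, 5e-5, size=10_000))
rssi_dbm = rng.normal(-52, 2, size=10_000)        # synthetic per-packet RSSI readings

print(f"throughput : {throughput_mbps(10_000 * 1500, arrivals[-1] - arrivals[0]):7.2f} Mbit/s")
print(f"jitter     : {rfc3550_jitter(arrivals, interval) * 1e6:7.2f} us")
print(f"mean RSSI  : {rssi_dbm.mean():7.2f} dBm")
```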
Citations: 3
X-ray image analysis for cultural heritage investigations
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314239
C. Armeanu
Image processing of radiograms is one of the fields that has developed very quickly since the digitization of radiological imagery. Depending on the objects under study, different approaches have been developed. In image processing for Cultural Heritage investigations, the analysis can range from conversion of image types and classes, morphological filtering, deblurring and other image enhancement tools, to image transforms or refinement of regions of interest. In the present work, historical artefacts that are not to be opened are investigated by X-ray, and the resulting images are processed and enhanced. For the studied case, the possibility of 3D reconstruction of an object of interest inside the studied object by an alternative method is also analyzed.
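As a generic illustration of such an enhancement chain (not the processing used in the paper), the sketch below applies median denoising, contrast-limited adaptive histogram equalisation (CLAHE) and a mild unsharp mask to a grey-level image standing in for a digitised radiogram; all parameter values are assumptions.

```python
# Hedged sketch of a generic radiogram enhancement chain; parameters are illustrative.
from scipy.ndimage import median_filter
from skimage import data, img_as_float
from skimage.exposure import equalize_adapthist
from skimage.filters import unsharp_mask

def enhance_radiogram(gray):
    gray = img_as_float(gray)
    denoised = median_filter(gray, size=3)                  # remove impulsive noise
    equalised = equalize_adapthist(denoised, clip_limit=0.02)   # CLAHE contrast boost
    return unsharp_mask(equalised, radius=3, amount=0.8)    # emphasise fine structures

if __name__ == "__main__":
    # stand-in grey-level image; in practice a digitised radiogram of the artefact
    enhanced = enhance_radiogram(data.camera())
    print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```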
Citations: 1
Compressive sensing for power spectrum estimation of multi-dimensional processes under missing data
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314202
Yuanjin Zhang, Liam A. Comerford, M. Beer, I. Kougioumtzoglou
A compressive sensing (CS) based approach is applied in conjunction with an adaptive basis re-weighting procedure for multi-dimensional stochastic process power spectrum estimation. In particular, the problem of sampling gaps in stochastic process records, occurring for reasons such as sensor failures, data corruption, and bandwidth limitations, is addressed. Specifically, due to the fact that many stochastic process records such as wind, sea wave and earthquake excitations can be represented with relative sparsity in the frequency domain, a CS framework can be applied for power spectrum estimation. By relying on signal sparsity, and the assumption that multiple records are available upon which to produce a spectral estimate, it has been shown that a re-weighted CS approach succeeds in estimating power spectra with satisfactory accuracy. Of key importance in this paper is the extension from one-dimensional vector processes to a broader class of problems involving multidimensional stochastic fields. Numerical examples demonstrate the effectiveness of the approach when records are subjected to up to 75% missing data.
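A minimal sketch of the missing-data idea, assuming each record is completed by recovering a sparse orthonormal-FFT spectrum with an iteratively re-weighted soft-thresholding (ISTA-style) l1 solver and averaging the squared spectra over records; the solver parameters, the 75% missing-data mask and the synthetic two-tone process are assumptions, not the paper's adaptive basis re-weighting procedure.

```python
# Hedged sketch: re-weighted l1 spectrum recovery from records with ~75% missing samples.
import numpy as np

def reweighted_ista_spectrum(y, mask, lam=0.05, n_reweight=3, n_iter=200, eps=1e-3):
    """Sparse orthonormal-FFT spectrum of a record y observed only where mask == 1."""
    N = y.size
    A = lambda s: mask * np.fft.ifft(s, norm="ortho")      # synthesis then sampling
    AH = lambda r: np.fft.fft(mask * r, norm="ortho")      # adjoint operator
    s = np.zeros(N, dtype=complex)
    w = np.ones(N)
    for _ in range(n_reweight):
        for _ in range(n_iter):                             # ISTA, step size 1 (||A|| <= 1)
            s = s + AH(y - A(s))
            mag = np.abs(s)
            s = s * np.maximum(1.0 - lam * w / np.maximum(mag, 1e-12), 0.0)
        w = 1.0 / (np.abs(s) + eps)                         # re-weight small coefficients up
    return s

rng = np.random.default_rng(0)
N, n_records = 512, 20
t = np.arange(N)
spectra = []
for _ in range(n_records):
    # synthetic two-tone stationary record (on-grid frequencies for a clean sparse spectrum)
    x = np.cos(2 * np.pi * 26 * t / N + rng.uniform(0, 2 * np.pi)) \
        + 0.5 * np.cos(2 * np.pi * 61 * t / N + rng.uniform(0, 2 * np.pi))
    mask = (rng.random(N) > 0.75).astype(float)             # keep ~25% of the samples
    s = reweighted_ista_spectrum(mask * x, mask)
    spectra.append(np.abs(s) ** 2)

power = np.mean(spectra, axis=0)                            # averaged spectral estimate
print("dominant bins:", np.argsort(power)[-4:])
```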
Citations: 3
High resolution volume quantification of the knee joint space based on a semi-automatic segmentation of computed tomography images
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314201
H. Mezlini, Rabaa Youssef, H. Bouhadoun, E. Budyn, J. Laredo, S. Sevestre, C. Chappard
Osteoarthritis (OA) is a joint disorder that causes pain, stiffness and decreased mobility. Knee OA presents the greatest morbidity. The main characteristic of OA is cartilage loss, which induces joint space (JS) narrowing. Usually, the progression of OA is monitored by the minimum JS measurement on 2D X-ray images. New dedicated systems based on cone beam computed tomography, providing sufficient image quality with favourable dose characteristics, are under development. With these new systems, it would be possible to follow 3D JS changes. High-resolution peripheral computed tomography (HR-pQCT), usually used for assessing trabecular and cortical bone mineral density, has been performed on knee specimens with an isotropic voxel size of 82 microns. We present here a new semi-automatic segmentation method to measure the 3D local variations of the JS. The experiments were carried out on an HR-pQCT data set, and the results have been extended to other computed tomography images with lower resolution and/or cone beam geometry.
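A schematic sketch of such a joint-space volume measurement, assuming a global bone threshold, a morphological closing, and an operator-chosen box ROI between the two bone surfaces as the semi-automatic step; non-bone voxels inside the ROI are counted and converted to mm^3 with the 82-micron voxel. Threshold value, ROI handling and the synthetic volume are assumptions, not the authors' segmentation.

```python
# Hedged sketch: joint-space volume from a thresholded bone mask inside an operator ROI.
import numpy as np
from scipy.ndimage import binary_closing

VOXEL_MM = 0.082                                     # 82 micron isotropic voxel

def joint_space_volume_mm3(volume, roi_mask, bone_threshold):
    bone = volume > bone_threshold                   # assumed global threshold for bone
    bone = binary_closing(bone, structure=np.ones((3, 3, 3)))   # fill small cortical gaps
    joint_space = roi_mask & ~bone                   # soft-tissue / cartilage gap inside ROI
    return joint_space.sum() * VOXEL_MM ** 3

if __name__ == "__main__":
    # synthetic stand-in: two parallel "bone plates" separated by a gap
    vol = np.zeros((60, 100, 100), dtype=float)
    vol[:20] = 1000.0                                # femur-like plate
    vol[40:] = 1000.0                                # tibia-like plate
    roi = np.zeros_like(vol, dtype=bool)
    roi[10:50, 20:80, 20:80] = True                  # operator-chosen box spanning the gap
    print(f"joint space volume: {joint_space_volume_mm3(vol, roi, bone_threshold=400.0):.2f} mm^3")
```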
Citations: 4
Multi-layer feature extractions for image classification — Knowledge from deep CNNs
Pub Date: 2015-10-30 | DOI: 10.1109/IWSSIP.2015.7313924
K. Ueki, Tetsunori Kobayashi
Recently, there has been considerable research into the application of deep learning to image recognition. Notably, deep convolutional neural networks (CNNs) have achieved excellent performance in a number of image classification tasks, compared with conventional methods based on techniques such as Bag-of-Features (BoF) using local descriptors. In this paper, to cultivate a better understanding of the structure of CNNs, we focus on the characteristics of deep CNNs and adapt them to SIFT+BoF-based methods to improve the classification accuracy. We introduce the multi-layer structure of CNNs into the classification pipeline of the BoF framework, and conduct experiments to confirm the effectiveness of this approach using a fine-grained visual categorization dataset. The results show that the average classification rate is improved from 52.4% to 69.8%.
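One way to picture the multi-layer idea (an assumption-laden sketch, not the paper's pipeline): activations from several convolutional layers of a VGG-16 are treated as dense local descriptors, quantised with a per-layer k-means vocabulary, and the per-layer histograms are concatenated into one feature vector for a classifier. The layer indices, vocabulary size and the randomly initialised network are assumptions; in practice pretrained ImageNet weights would be loaded.

```python
# Hedged sketch: multi-layer CNN activations as local descriptors for a BoF-style pipeline.
import numpy as np
import torch
from torchvision.models import vgg16
from sklearn.cluster import KMeans

LAYERS = (10, 17, 24)            # assumed indices inside vgg.features to tap activations at

def layer_descriptors(model, image_batch):
    """Return {layer_index: (n_positions, channels) descriptor array} for one image batch."""
    feats, x = {}, image_batch
    with torch.no_grad():
        for i, module in enumerate(model.features):
            x = module(x)
            if i in LAYERS:
                c = x.shape[1]
                feats[i] = x.permute(0, 2, 3, 1).reshape(-1, c).cpu().numpy()
    return feats

def multilayer_bof(model, images, n_words=64):
    per_layer = {i: [] for i in LAYERS}
    for img in images:
        for i, d in layer_descriptors(model, img.unsqueeze(0)).items():
            per_layer[i].append(d)
    histograms = []
    for i in LAYERS:
        desc = np.vstack(per_layer[i])
        vocab = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(desc)
        h = np.stack([np.bincount(vocab.predict(d), minlength=n_words) / len(d)
                      for d in per_layer[i]])
        histograms.append(h)
    return np.hstack(histograms)      # (n_images, n_layers * n_words) features for a classifier

if __name__ == "__main__":
    # weights=None keeps the sketch self-contained; use pretrained ImageNet weights in practice
    model = vgg16(weights=None).eval()
    images = [torch.rand(3, 224, 224) for _ in range(4)]
    print(multilayer_bof(model, images).shape)    # (4, 192)
```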
Citations: 2