
6th IEEE Southwest Symposium on Image Analysis and Interpretation, 2004: Latest Publications

Snake-based liver lesion segmentation
Pub Date: 2004-03-28 DOI: 10.1109/IAI.2004.1300971
C. Krishnamurthy, J.J. Rodriguez, R. Gillies
A novel and robust method for accurate segmentation of liver lesions is discussed. The initial contour for the snake is formed using edge and region information. The modified snake, guided by fuzzy edge information, deforms from this initial position, providing an accurate representation of the lesion boundary with few iterations and minimal user interaction. Results obtained from this algorithm are comparable to those obtained from manual segmentation by a trained radiologist.
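For a concrete feel for the snake step, the sketch below uses scikit-image's classic active-contour implementation as a minimal stand-in; it is not the paper's method. The fuzzy-edge guidance and the hybrid edge/region initialization are not reproduced, so the circular initial contour, the seed coordinates, and the input file name are placeholder assumptions.

```python
# Minimal active-contour (snake) sketch with scikit-image.
# Placeholder initialization: a circle around an assumed seed point, instead of
# the paper's edge/region-based initial contour and fuzzy-edge guidance.
import numpy as np
from skimage import io, filters
from skimage.segmentation import active_contour

image = io.imread("liver_ct_slice.png", as_gray=True)  # hypothetical input slice

# Circular initial contour around a user-supplied seed (row, col, radius are assumptions).
seed_row, seed_col, radius = 128, 128, 40
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([seed_row + radius * np.sin(theta),
                        seed_col + radius * np.cos(theta)])

# Deform the contour toward strong edges in a Gaussian-smoothed image.
snake = active_contour(filters.gaussian(image, sigma=2),
                       init, alpha=0.015, beta=10, gamma=0.001)
print("Contour points:", snake.shape)
```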
Citations: 21
Imaging and rendering of oil paintings using a multi-band camera
Pub Date: 2004-03-28 DOI: 10.1109/IAI.2004.1300934
S. Tominaga, N. Tanaka, T. Komada
The paper proposes a method for the imaging and rendering of art paintings using a multi-band camera system. The surface shape of an art painting is considered as a rough plane rather than a three-dimensional curved surface. First, we present an algorithm for estimating the surface normal at each pixel, based on photometric stereo without using a rangefinder. Next, an algorithm is presented for estimating the spectral reflectance function from a set of pixel values acquired at different illumination directions. Then, the surface reflectance and normal data are used for estimating the light reflection properties. The Torrance-Sparrow model is used for model fitting and parameter estimation. Finally, an experiment using an oil painting is executed for demonstrating the feasibility of the proposed method.
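The normal-estimation step can be illustrated with ordinary Lambertian photometric stereo, as in the minimal sketch below; the paper's spectral-reflectance estimation and Torrance-Sparrow fitting are omitted, and the light directions and image data here are synthetic placeholders.

```python
# Minimal photometric-stereo sketch: recover per-pixel surface normals from
# images under known illumination directions by solving the Lambertian model
# I = L @ (albedo * n) in a least-squares sense.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) intensities; light_dirs: (K, 3) unit light vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W), G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)               # unit normals per pixel
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Placeholder data: four assumed light directions and random 64x64 images.
lights = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.87], [0.0, 0.5, 0.87], [-0.5, 0.0, 0.87]])
imgs = np.random.rand(4, 64, 64)
normals, albedo = photometric_stereo(imgs, lights)
print(normals.shape, albedo.shape)
```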
Citations: 11
Voting-based grouping and interpretation of visual motion
Pub Date: 2004-03-28 DOI: 10.1109/IAI.2004.1300976
M. Nicolescu, G. Medioni
A main difficulty for estimating camera and scene geometry from a set of point correspondences is caused by the presence of false matches and independently moving objects. Given two images, after obtaining the matching points, they are usually filtered by an outlier rejection step before being used to solve for epipolar geometry and 3D structure estimation. In the presence of moving objects, image registration becomes a more challenging problem, as the matching and registration phases become interdependent. We propose a novel approach that decouples the above operations, allowing for explicit and separate handling of matching, outlier rejection, grouping, and recovery of camera and scene structure. The method is based on a voting-based computational framework for motion analysis; it determines an accurate representation, in terms of dense velocities, segmented motion regions and boundaries, by using only the smoothness of image motion, followed by the extraction of scene and camera 3D geometry.
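The abstract's key idea, grouping correspondences by motion smoothness alone before any geometry fitting, can be caricatured with a much simpler rule than the paper's voting framework: treat a match as an outlier when its motion vector disagrees with the median motion of its spatial neighbours. The sketch below implements only that stand-in rule, with synthetic matches as placeholders; it is not the method of the paper.

```python
# Simplified motion-smoothness outlier rejection (stand-in, not tensor voting):
# a putative match is flagged when its flow vector deviates strongly from the
# median flow of its k nearest spatial neighbours.
import numpy as np
from scipy.spatial import cKDTree

def smoothness_outliers(pts1, pts2, k=8, thresh=3.0):
    """pts1, pts2: (N, 2) matched point coordinates in two images."""
    flow = pts2 - pts1                          # per-match motion vectors
    tree = cKDTree(pts1)
    _, idx = tree.query(pts1, k=k + 1)          # k neighbours plus the point itself
    local_median = np.median(flow[idx[:, 1:]], axis=1)
    residual = np.linalg.norm(flow - local_median, axis=1)
    scale = np.median(residual) + 1e-8
    return residual > thresh * scale            # True marks a likely false match

# Placeholder matches: a uniform translation plus noise, with a few gross mismatches.
pts1 = np.random.rand(100, 2) * 640
pts2 = pts1 + np.array([5.0, 2.0]) + np.random.randn(100, 2) * 0.5
pts2[::20] += 40
print("flagged outliers:", smoothness_outliers(pts1, pts2).sum())
```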
Citations: 0
Temporal phase congruency
Pub Date: 2004-03-28 DOI: 10.1109/IAI.2004.1300948
P. J. Myerscough, M. Nixon
We describe a robust moving feature detector that extracts feature points and feature velocities from a sequence of images. We develop a new approach based on phase congruency to include interpolation of feature orientation and improvements in robustness due to correlations in the image sequence. This new temporal phase congruency operator shows improved capabilities on a series of different real image types, as well as a noise analysis on synthetic images.
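As background for the method, the sketch below computes ordinary spatial phase congruency for a 1-D signal using log-Gabor quadrature filters: the magnitude of the summed complex filter responses divided by the sum of their magnitudes. The temporal extension described in the paper is not reproduced, and the filter parameters are illustrative choices.

```python
# Minimal 1-D phase-congruency sketch (spatial, not the paper's temporal version).
import numpy as np

def phase_congruency_1d(signal, n_scales=4, min_wavelength=4, mult=2.0, sigma=0.55):
    n = len(signal)
    S = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)
    abs_f = np.abs(freqs) + 1e-12
    sum_complex = np.zeros(n, dtype=complex)
    sum_amp = np.zeros(n)
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)            # centre frequency of this scale
        log_gabor = np.exp(-(np.log(abs_f / f0) ** 2) / (2 * np.log(sigma) ** 2))
        log_gabor[freqs <= 0] = 0                           # one-sided (analytic) filter
        response = np.fft.ifft(S * log_gabor)               # complex = even + i*odd response
        sum_complex += response
        sum_amp += np.abs(response)
    return np.abs(sum_complex) / (sum_amp + 1e-8)            # congruency in [0, 1]

x = np.zeros(256)
x[128:] = 1.0                                                # a step edge
pc = phase_congruency_1d(x)
print("peak congruency near the edge:", pc[120:136].max())
```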
Citations: 9
Recognition of isolated handwritten Farsi/Arabic alphanumeric using fractal codes
Pub Date: 2004-03-28 DOI: 10.1109/IAI.2004.1300954
S. Mozaffari, K. Faez, H. Kanan
We propose a new method for recognition of isolated handwritten Farsi/Arabic characters and numerals using fractal codes. Fractal codes represent affine transformations which, when iteratively applied to the range-domain pairs in an arbitrary initial image, give results close to the given image. Each fractal code consists of six parameters, such as the corresponding domain coordinates for each range block, a brightness offset and an affine transformation, which are used as inputs to a multilayer perceptron neural network for learning and identifying an input. This method is robust to scale and frame-size changes. Farsi's 32 characters are categorized into 8 classes in which the characters are very similar to each other. There are ten digits in the Farsi/Arabic languages, but since two of them are not used in postal codes in Iran, only 8 more classes are needed for digits. According to experimental results, classification rates of 91.37% and 87.26% were obtained for digits and characters, respectively, on test sets gathered from people with different educational backgrounds and ages.
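The classification stage can be sketched as below: feature vectors of six parameters per range block feed a multilayer perceptron. Real fractal (PIFS) encoding is omitted, random features stand in for the codes, and scikit-learn's MLPClassifier replaces the paper's own network, so the accuracy printed on this placeholder data is near chance and has no bearing on the reported results.

```python
# Sketch of the MLP classification stage on placeholder fractal-code features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

n_samples, n_range_blocks = 800, 16
n_features = 6 * n_range_blocks               # six fractal-code parameters per range block
X = np.random.rand(n_samples, n_features)     # placeholder features (real codes omitted)
y = np.random.randint(0, 8, size=n_samples)   # 8 character classes, per the abstract

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy on placeholder data:", clf.score(X_test, y_test))
```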
Citations: 49