{"title":"Automatic scale selection as a pre-processing stage to interpreting real-world data","authors":"T. Lindeberg","doi":"10.1109/TAI.1996.560799","DOIUrl":null,"url":null,"abstract":"Summary form only given. We perceive objects in the world as meaningful entities only over certain ranges of scale. This fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select local appropriate scales for further analysis. After a brief review of the main ideas behind a scale-space representation, I describe a systematic methodology for generating hypotheses about interesting scale levels in image data based on a general principle stating that local extrema over scales of different combinations of normalized derivatives are likely candidates to correspond to interesting image structures. Specifically, I show how this idea can be used for formulating feature detectors which automatically adapt their local scales of processing to the local image structure. I show how the scale selection approach applies to various types of feature detection problems in early vision. In many computer vision applications, the poor performance of the low-level vision modules constitutes a major bottleneck.","PeriodicalId":209171,"journal":{"name":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","volume":"138 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Eighth IEEE International Conference on Tools with Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TAI.1996.560799","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Summary form only given. We perceive objects in the world as meaningful entities only over certain ranges of scale. The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them: the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select locally appropriate scales for further analysis. After a brief review of the main ideas behind a scale-space representation, I describe a systematic methodology for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of normalized derivatives are likely candidates to correspond to interesting image structures. Specifically, I show how this idea can be used to formulate feature detectors which automatically adapt their local scales of processing to the local image structure. I show how the scale selection approach applies to various types of feature detection problems in early vision. In many computer vision applications, the poor performance of the low-level vision modules constitutes a major bottleneck.
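To make the stated principle concrete, the sketch below (not from the paper itself) illustrates one common instance of it: blob detection with automatic scale selection, where candidate structures are taken at local extrema, over both space and scale, of the scale-normalized Laplacian t * (Lxx + Lyy) with t = sigma^2. It assumes NumPy and SciPy; the function names and the threshold parameter are hypothetical choices for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def normalized_laplacian_responses(image, sigmas):
    """Scale-normalized Laplacian t * (Lxx + Lyy) at each scale t = sigma^2."""
    responses = []
    for sigma in sigmas:
        # gaussian_laplace smooths with a Gaussian of width sigma and applies the
        # Laplacian; multiplying by t = sigma^2 gives the normalized derivative
        # response (gamma = 1), which makes responses comparable across scales.
        responses.append((sigma ** 2) * gaussian_laplace(image.astype(float), sigma))
    return np.stack(responses)  # shape: (num_scales, H, W)

def select_blob_scales(image, sigmas, threshold=0.05):
    """Return (row, col, sigma) triples where the magnitude of the normalized
    Laplacian is a local maximum over both space and scale -- candidate blob
    centres together with their automatically selected scales."""
    R = np.abs(normalized_laplacian_responses(image, sigmas))
    keypoints = []
    for k in range(1, len(sigmas) - 1):
        for i in range(1, R.shape[1] - 1):
            for j in range(1, R.shape[2] - 1):
                # 3x3x3 neighbourhood in (scale, y, x)
                patch = R[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2]
                if R[k, i, j] >= threshold and R[k, i, j] == patch.max():
                    keypoints.append((i, j, sigmas[k]))
    return keypoints
```

A typical call might be `select_blob_scales(img, sigmas=np.geomspace(1.0, 16.0, 12))`, which searches a geometric range of scales so that each detected feature reports the scale at which it is most salient, rather than relying on a single manually chosen smoothing level.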