... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing: Latest Publications
Takahiro Okabe, Yuhi Kondo, Kris M. Kitani, Yoichi Sato
Most previous methods for generic object recognition explicitly or implicitly assume that an image contains objects from a single category, although objects from multiple categories often appear together in an image. In this paper, we present a novel method for object recognition that explicitly deals with objects of multiple categories coexisting in an image. Furthermore, our proposed method aims to recognize objects by taking advantage of a scene’s context, represented by the co-occurrence relationship between object categories. Specifically, our method estimates the mixture ratios of multiple categories in an image via MAP regression, where the likelihood is computed from a linear combination model of the frequency distributions of local features, and the prior probability is computed from the co-occurrence relation. We conducted a number of experiments on the PASCAL dataset and obtained results that support the effectiveness of the proposed method.
{"title":"Recognizing multiple objects based on co-occurrence of categories","authors":"Takahiro Okabe, Yuhi Kondo, Kris M. Kitani, Yoichi Sato","doi":"10.2201/NIIPI.2010.7.6","DOIUrl":"https://doi.org/10.2201/NIIPI.2010.7.6","url":null,"abstract":"Most previous methods for generic object recognition explicitly or implicitly assume that an image contains objects from a single category, although objects from multiple categories often appear together in an image. In this paper, we present a novel method for object recognition that explicitly deals with objects of multiple categories coexisting in an image. Furthermore, our proposed method aims to recognize objects by taking advantage of a scene’s context represented by the co-occurrence relationship between object categories. Specifically, our method estimates the mixture ratios of multiple categories in an image via MAP regression, where the likelihood is computed based on the linear combination model of frequency distributions of local features, and the prior probability is computed from the co-occurrence relation. We conducted a number of experiments using the PASCAL dataset, and obtained the results that lend support to the effectiveness of the proposed method.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"8 1","pages":"43"},"PeriodicalIF":0.0,"publicationDate":"2010-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89550091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Songkran Jarusirisawad, Takahide Hosokawa, H. Saito
We present a plane-sweep algorithm for removing occluding objects in front of the objective scene using multiple weakly-calibrated cameras. Projective grid space (PGS), a weak camera calibration framework, is used to obtain the geometric relations between cameras. The plane-sweep algorithm works by implicitly reconstructing the depth map of the target view. By excluding the occluding objects from the volume swept by the planes, we can generate new views without the occluding objects. The results show the effectiveness of the proposed method; by implementing the plane-sweep algorithm on a graphics processing unit (GPU), it runs at several frames per second on a consumer PC.
{"title":"Diminished reality using plane-sweep algorithm with weakly-calibrated cameras","authors":"Songkran Jarusirisawad, Takahide Hosokawa, H. Saito","doi":"10.2201/NIIPI.2010.7.3","DOIUrl":"https://doi.org/10.2201/NIIPI.2010.7.3","url":null,"abstract":"We present a plane-sweep algorithm for removing occluding objects in front of the objective scene from multiple weakly-calibrated cameras. Projective grid space (PGS), a weak cameras calibration framework, is used to obtain geometrical relations between cameras. Plane-sweep algorithm works by implicitly reconstructing the depth maps of the targeted view. By excluding the occluding objects from the volume of the sweeping planes, we can generate new views without the occluding objects. The results show the effectiveness of the proposed method and it is fast enough to run in several frames per second on a consumer PC by implementing the proposed plane-sweep algorithm in graphics processing unit (GPU).","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"50 1","pages":"11-20"},"PeriodicalIF":0.0,"publicationDate":"2010-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87499077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D imaging and video technologies are of growing interest because of their potential applications in many fields such as robotics, visualization, 3DTV, autonomous vehicles, driver assistance, “flying eyes”, intelligent human-machine interfaces, and so forth. To advance these technologies, computer vision and pattern recognition, which originated separately, are today increasingly interacting. In fact, the papers published in this special issue address areas of both computer vision and pattern recognition. After going through a rigorous anonymous peer-review process and several revisions, seven papers are presented in this special issue. The first paper by Kobayashi, Sakaue and Sato, entitled “Multiple view geometry of projector-camera systems from virtual mutual projection”, presents a calibration method for projector-camera systems. The proposed method generates virtual mutual projections between projectors and cameras by considering the shadows of cameras and projectors, and then discusses multiple view geometry for calibration. This work advances existing multiple view geometry so that it can also deal with projector-camera systems. The second paper by Jarusirisawad, Hosokawa and Saito, entitled “Diminished reality using plane-sweep algorithm with weakly-calibrated cameras”, addresses an on-line method for generating free-viewpoint views from images captured at different viewpoints. In this paper, the plane-sweep algorithm already proposed by the authors is extended from Euclidean space to projective space. The advantage of this method lies in excluding occluding objects during view synthesis. In “Object segmentation under varying illumination: stochastic background model considering spatial locality”, Tanaka, Shimada, Arita and Taniguchi propose a background model that combines a non-parametric, pixel-wise density estimate with an evaluation of local texture for robust object detection under varying illumination.
{"title":"3D image and video technology","authors":"A. Sugimoto, Yoichi Sato, R. Klette","doi":"10.2201/NIIPI.2010.7.1","DOIUrl":"https://doi.org/10.2201/NIIPI.2010.7.1","url":null,"abstract":"3D imaging and video technologies are of growing interest in recent times because of having potential applications in many fields such as robotics, visualization, 3DTV, autonomous vehicles, driver assistance, “flying eyes”, intelligent human-machine interfaces, and so forth. To advance those technologies, computer vision and pattern recognition, which originated separately, are today increasingly interacting. In fact, papers published in this special issue address areas of both computer vision and pattern recognition. After going through a rigorous anonymous peer reviewing process and several revisions, seven papers are finally presented in this special issue. The first paper by Kobayashi, Sakaue and Sato, entitled “Multiple view geometry of projector-camera systems from virtual mutual projection” presents a calibration method for projector-camera systems. The proposed method generates virtual mutual projections between projectors and cameras by considering the shadow of cameras and the shadow of projectors and then discusses multiple view geometry for calibration. This work advances existing multiple view geometry so that it can also deal with projector-camera systems. The second paper by Jarusirisawad, Hosokawa and Saito, entitled “Diminished reality using plane-sweep algorithm with weakly-calibrated cameras” addresses an on-line method for generating free viewpoint views using captured images at different viewpoints. In this paper, the plane-sweep algorithm already proposed by the authors is extended from in the Euclidean space to in the projective space. The advantage of this method exists in excluding occluding objects for view synthesis. In “Object segmentation under varying illumination: stochastic background model considering spatial local-","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"24 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2010-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81567446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a new approach for clustering the faces of characters in a recorded television program. The clustering results are used to catalog video clips based on subjects’ faces for quick scene access. The main goal is to obtain the cataloging result within a tolerable waiting time after recording, which we set at less than 3 minutes per hour of video. Although conventional face recognition-based clustering methods can obtain good results, they require considerable processing time. To enable high-speed processing, we use the similarity of the shots in which characters appear to associate corresponding faces, instead of calculating distances between facial features. Two similar shot-based clustering (SSC) methods are proposed: the first uses SSC only, and the second uses face thumbnail clustering (FTC) as well. Experiments show that the average processing time per hour of video was 350 ms for SSC and 31 seconds for SSC+FTC, while the average number of distinct persons’ faces in a catalog decreased by only 6.0% and 0.9%, respectively, compared to face recognition-based clustering.
{"title":"Fast face clustering based on shot similarity for browsing video","authors":"Koji Yamamoto, Osamu Yamaguchi, Hisashi Aoki","doi":"10.2201/NIIPI.2010.7.7","DOIUrl":"https://doi.org/10.2201/NIIPI.2010.7.7","url":null,"abstract":"In this paper, we propose a new approach for clustering faces of characters in a recorded television title. The clustering results are used to catalog video clips based on subjects’ faces for quick scene access. The main goal is to obtain a result for cataloging in tolerable waiting time after the recording, which is less than 3 minutes per hour of video clips. Although conventional face recognition-based clustering methods can obtain good results, they require considerable processing time. To enable high-speed processing, we use similarities of shots where the characters appear to estimate corresponding faces instead of calculating distance between each facial feature. Two similar shot-based clustering (SSC) methods are proposed. The first method only uses SSC and the second method uses face thumbnail clustering (FTC) as well. The experiment shows that the average processing time per hour of video clips was 350 ms and 31 seconds for SSC and SSC+FTC, respectively, despite the decrease in the average number of different person’s faces in a catalog being 6.0% and 0.9% compared to face recognition-based clustering.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"18 1","pages":"53"},"PeriodicalIF":0.0,"publicationDate":"2010-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83297480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate the following sorting problem: We are given n bins with two balls in each bin. Balls in the ith bin are numbered n + 1 − i. We can swap two balls from adjacent bins. How many swaps are needed to sort the balls, i.e., to move every ball to the bin with the same number? For this problem the best known solution requires almost 4n^2/5 swaps. In this paper, we show an algorithm which solves this problem using less than 2n^2/3 swaps. Since it is known that the lower bound on the number of swaps is (2n choose 2)/3 = 2n^2/3 − n/3, our result is almost tight. Furthermore, we show that for n = 2^m + 1 (m ≥ 0) the algorithm is optimal.
{"title":"An almost optimal algorithm for Winkler's sorting pairs in bins (Special issue : Theoretical computer science and discrete mathematics)","authors":"Hiro Ito, Junichi Teruyama, Yuichi Yoshida","doi":"10.2201/niipi.2012.9.2","DOIUrl":"https://doi.org/10.2201/niipi.2012.9.2","url":null,"abstract":"We investigate the following sorting problem: We are given n bins with two balls in each bin. Balls in the ith bin are numbered n + 1 − i. We can swap two balls from adjacent bins. How many number of swaps are needed in order to sort balls, i.e., move every ball to the bin with the same number. For this problem the best known solution requires almost 4 n 2 swaps. In this paper, we show an algorithm which solves this problem using less than 2n 2 3 swaps. Since it is known that the lower bound of the number of swaps is 2n 2 /3 = 2n 2 3 − n 3 , our result is almost tight. Furthermore, we show that for n = 2 m + 1( m ≥ 0) the algorithm is optimal.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"741 1","pages":"3-7"},"PeriodicalIF":0.0,"publicationDate":"2010-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76838144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Tanaka, Atsushi Shimada, Daisaku Arita, R. Taniguchi
We propose a new method for background modeling based on two complementary approaches. One uses a probability density function (PDF) to approximate the background model; the PDF is estimated non-parametrically using Parzen density estimation, and foreground objects are then detected based on the estimated PDF. The other evaluates local texture at pixel-level resolution, which reduces the effects of variations in lighting. Fusing these approaches realizes robust object detection under varying illumination. Several experiments show the effectiveness of our approach.
{"title":"Object segmentation under varying illumination: Stochastic background model considering spatial locality","authors":"T. Tanaka, Atsushi Shimada, Daisaku Arita, R. Taniguchi","doi":"10.2201/NIIPI.2010.7.4","DOIUrl":"https://doi.org/10.2201/NIIPI.2010.7.4","url":null,"abstract":"We propose a new method for background modeling. Our method is based on the two complementary approaches. One uses the probability density function (PDF) to approximate background model. The PDF is estimated non-parametrically by using Parzen density estimation. Then, foreground object is detected based on the estimated PDF. The method is based on the evaluation of the local texture at pixel-level resolution which reduces the effects of variations in lighting. Fusing those approachs realizes robust object detection under varying illumination. Several experiments show the effectiveness of our approach.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"13 1","pages":"21-31"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85244521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a statistical string similarity model for approximate matching in information linkage. The proposed similarity model is an extension of the hidden Markov model, and its learnability makes the string matching function adaptable to various information sources. The main contribution of this paper is an efficient learning algorithm for estimating the parameters of the statistical similarity model. The proposed algorithm is based on the Expectation-Maximization (EM) technique, where dynamic programming is used to update the parameters in the EM process.
{"title":"Statistical string similarity model for information linkage","authors":"A. Takasu","doi":"10.2201/NIIPI.2009.6.7","DOIUrl":"https://doi.org/10.2201/NIIPI.2009.6.7","url":null,"abstract":"This paper proposes a statistical string similarity model for approximate matching in information linkage. The proposed similarity model is an extension of hidden Markov model and its learnable ability realizes string matching function adaptable to various information sources. The main contribution of this paper is to develop an efficient learning algorithm for estimating parameters of the statistical similarity model. The proposed algorithm is based on the Expectation-Maximization (EM) technique where dynamic programing technique is used to update parameters in EM process.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"39 1","pages":"57"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87156363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Millions of people disseminate information through the WWW, and the amount of this information is increasing explosively. However, a person’s ability to digest it is limited. Therefore, smart information and communication technologies are required to help people use this information effectively and efficiently. This special issue includes some of the more recent advances in ICT that support people in acquiring information and gaining knowledge from the vast amount of information disseminated in the information explosion, and provides a forum for discussing research directions in this field. This special issue consists of a survey paper, three research papers, and two technical notes, all of which have undergone one or more cycles of anonymous peer review and revision. The first paper, “Researches on image retrieval and use in information explosion era”, by Masashi Inoue, surveys research on image retrieval and its utilization from four aspects: information access and organization technology, computing infrastructure for large-scale image access, human-system interaction, and social issues around image media. The second paper, “Utilization of external knowledge for personal name disambiguation”, by Quang Minh Vu, Atsuhiro Takasu, and Jun Adachi, presents a name disambiguation method for identifying people appearing on Web pages. It discriminates people having the same name using the text around the person’s name on a Web page. For this purpose, it introduces a recent statistical text model based on latent topics to extract features for the person name disambiguation problem. The third paper, “Building web page collections efficiently exploiting local surrounding pages”, by Yuxin Wang and Keizo Oyama, presents a web page collection framework, focusing on a high-quality page classification method for that framework. To perform high-quality classification, it uses a two-phase classification approach.
{"title":"Leading ICT technologies in the Information Explosion","authors":"J. Adachi, A. Takasu","doi":"10.2201/NiiPi.2009.6.1","DOIUrl":"https://doi.org/10.2201/NiiPi.2009.6.1","url":null,"abstract":"Millions of humans have been disseminating information through WWW and this amount is explosively increasing. However, a person’s ability to digest this information is limited. Therefore, smart information and communication technologies are required to help them effectively and efficiently use this information. This special issue includes some of the more recent advances in ICT technologies that support people in acquiring information and gaining knowledge from the vast amount of disseminated information from the information explosion, and provides a forum for discussing the research directions in this field. This special issue consists of a survey paper, three research papers, and two technical notes that have undergone one or more cycles of anonymous peer reviews and revisions. The first paper, “Researches on image retrieval and use in information explosion era”, by Masashi Inoue, surveys the researches on image retrieval and its utilization from four aspects, i.e., information access and organization technology, computing infrastructure for large-scale image access, human-system interaction, and social issues around image media. The second paper, “Utilization of external knowledge for personal name disambiguation”, by Quang Minh Vu, Atsuhiro Takasu, and Jun Adachi, presents a name disambiguation method for identifying people appearing on Web pages. It discriminates people having the same name using text around the person’s name on a Web page. For this purpose, it introduces a recent statistical text model based on latent topics to extract features for the person name disambiguation problem. The third paper, “Building web page collections efficiently exploiting local surrounding pages”, by Yuxin Wang and Keizo Oyama, presents a web page collection framework. This paper focuses on a high-quality page classification method for the framework. To perform high-quality classification, it uses a two-phase classi-","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"11 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91172203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a model and method for observing the potential lexical productivity of head elements in nominal compounds, and compare the productivity of a few high-frequency head elements in a technical corpus. Much work has been done on various aspects of nominal compounds. Most of this work, however, has been devoted to “syntagmatic” aspects at various levels, such as semantic compositionality, possible variations, lexical cohesiveness, etc., while the “paradigmatic” aspects of nominal compounds have received relatively little attention. By providing a “paradigmatic” perspective for analysing nominal compounds, this paper aims to help build a truly integrative approach to analysing and processing nominal compounds.
{"title":"Computing the potential lexical productivity of head elements in nominal compounds using the textual corpus","authors":"K. Kageura","doi":"10.2201/NIIPI.2009.6.6","DOIUrl":"https://doi.org/10.2201/NIIPI.2009.6.6","url":null,"abstract":"In this paper, we propose a model/method for observing the potential lexical productivity of head elements in nominal compounds, and compare the productivity of a few high-frequency head elements in a technical corpus. Much work has been done on various aspects of nominal compounds. Most of this work, however, has been devoted to “syntagmatic” aspects at various levels, such as semantic compositionality, possible variations, lexical cohesiveness, etc. while the “paradigmatic” aspects of nominal compounds have received relatively little attention. By providing a “paradigmatic” perspective for analysing nominal compounds, this paper aims to help build a truly integrative approach to analysing and processing nominal compounds.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"38 1","pages":"49"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77873664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Akiko Aizawa, A. Takasu, Daiji Fukagawa, Masao Takaku, J. Adachi
We propose a two-layered architecture for information identification that is specifically targeted towards academic information. We first introduce the basic notion of information identification, or linkage, that connects fragmented information referring to the same objects or people in the world. We then propose a linkage system that is composed of bibliography and researcher identification layers. As an illustrative example, the results of a coauthor relationship analysis are also shown.
{"title":"Academic linkage: A linkage platform for large volumes of academic information","authors":"Akiko Aizawa, A. Takasu, Daiji Fukagawa, Masao Takaku, J. Adachi","doi":"10.2201/NIIPI.2009.6.5","DOIUrl":"https://doi.org/10.2201/NIIPI.2009.6.5","url":null,"abstract":"We propose a two-layered architecture for information identification that is specifically targeted towards academic information. We first introduce the basic notion of information identification, or linkage, that connects fragmented information referring to the same objects or people in the world. We then propose a linkage system that is composed of bibliography and researcher identification layers. As an illustrative example, the results of a coauthor relationship analysis are also shown.","PeriodicalId":91638,"journal":{"name":"... Proceedings of the ... IEEE International Conference on Progress in Informatics and Computing. IEEE International Conference on Progress in Informatics and Computing","volume":"47 1","pages":"41"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75234437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}