Depth image based rendering (DIBR) is one of the key technologies for realizing future three-dimensional television (3DTV) systems. In this paper, we propose a novel depth image based rendering system for stereoscopic view generation. The existing method of parallax calculation, which is based on uniform scaling of depth values, can lead to artifacts in the 3D view. We propose a method of parallax estimation based on non-uniform scaling of depth values, derived from a histogram analysis of the depth map. Experimental results show that the proposed method minimizes the aforementioned artifacts while maintaining sufficient depth quality.
{"title":"A Novel Approach to Depth Image Based Rendering Based on Non-Uniform Scaling of Depth Values","authors":"K. N. Iyer, Madhusoodhana Chari, Hariprasad Kannan","doi":"10.1109/FGCNS.2008.46","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.46","url":null,"abstract":"Depth image based rendering (DIBR) is one of the key technologies to realize future three dimensional television (3DTV) systems. In this paper, we propose a novel depth image based rendering system for stereoscopic view generation. Existing method of parallax calculation based on uniform scaling of depth values can lead to artifacts in the 3D view. We propose a method of parallax estimation based on non-uniform scaling of depth values via histogram analysis of depth map. Experimental results showed that the proposed method minimizes aforementioned artifacts while maintaining sufficient depth quality.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130165825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Just as quality data is important for data mining, data mining is, conversely, necessary for measuring the quality of data. In XML specifically, both the issue of quality data for mining purposes and the use of data mining techniques for quality measurement are becoming more pressing as massive amounts of data are stored and represented over the Web. We address two important, interrelated issues: how quality XML data is useful for data mining in XML, and how data mining in XML can be used to measure the quality of XML data. In addressing both issues, we consider XML constraints, because constraints in XML can be used both for quality measurement of XML data and for finding important patterns and association rules in XML data mining. We note that XML constraints can play an important role in data quality and data mining for XML. We present a theoretical framework rather than concrete solutions. Our research framework is a step towards the broader task of data mining and data quality for XML data integration.
{"title":"Quality Data for Data Mining and Data Mining for Quality Data: A Constraint Based Approach in XML","authors":"M. Shahriar, S. Anam","doi":"10.1109/FGCNS.2008.74","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.74","url":null,"abstract":"As quality data is important for data mining, reversely data mining is necessary to measure the quality of data. Specifically, in XML, the issue of quality data for mining purposes and also using data mining techniques for quality measures is becoming more necessary as a massive amount of data is being stored and represented over the Web. We propose two important interrelated issues: how quality XML data is useful for data mining in XML and how data mining in XML is used to measure the quality data for XML. When we address both issues, we consider XML constraints because constraints in XML can be used for quality measurement in XML data and also for finding some important patterns and association rules in XML data mining. We note that XML constraints can play an important role for data quality and data mining in XML. We address the theoretical framework rather than solutions. Our research framework is towards the broader task of data mining and data quality for XML data integrations.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116427370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We aim to optimize TCP performance over the UMTS access network, which is challenged by a large bandwidth-delay product caused mainly by latency from link-layer ARQ retransmissions and the diversity technique at the physical layer. We propose placing a split-TCP proxy at the GGSN node, which sits between the UMTS access network and the Internet. The split proxy divides the bandwidth-delay product into two parts, resulting in two TCP connections, each with a smaller bandwidth-delay product. Simulation results show that the split-TCP proxy can significantly improve TCP performance in high-bit-rate DCH channel scenarios (e.g., 256 kbps). Moreover, the split-TCP proxy brings a larger performance gain when downloading large files than small ones. Finally, an aggressive initial TCP congestion window size at the proxy brings even more performance gain for radio links with high-data-rate DCH channels and a large bandwidth-delay product.
{"title":"TCP Performance Enhancement for UMTS Access Network","authors":"Liang Hu","doi":"10.1109/FGCNS.2008.159","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.159","url":null,"abstract":"We aim at optimize the TCP performance over UMTS access network challenged by the large delay bandwidth product that is mainly caused by the latency from the link layer ARQ retransmissions and diversity technique at physical layer. We propose to place a split TCP proxy at GGSN nodes which is between UMTS access network and Internet. The split proxy divides the bandwidth delay product into two parts, resulting in two TCP connections with smaller bandwidth delay products. Simulation results show, the split TCP proxy can significantly improve the TCP performance under high bit rate DCH channel scenario (e.g.256 kbps). Besides, the split TCP proxy brings more performance gain for downloading large files than downloading small ones. Finally, an aggressive initial TCP congestion window size at proxy can brings even more performance gain for radio links with high data rates DCH channels with large delay bandwidth product.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126191027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new model of a management system for large-scale buildings that deals with both static and dynamic spatial information. For this purpose, 3D CAD, 3D GIS, and image processing are integrated. The geometrical information of a building is managed by GIS using a database built from 3D CAD, while the dynamic spatial information of a building (i.e., the flow size of pedestrians) is obtained using image processing. We deployed a prototype version of the proposed system in a real environment, and it showed promising results as a running system covering a wide area.
{"title":"Prototype Development of a Spatial Information Management System for Large-Scale Buildings","authors":"Gwang-Gook Lee, Byeoung-su Kim, Kee-Hwan Ka, Hyoung-ki Kim, Ja-Young Yoon, Jae-Jun Kim, Whoiyul Kim","doi":"10.1109/FGCNS.2008.51","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.51","url":null,"abstract":"This paper presents a new model of a management system for large-scale buildings that aims to deal with both static and dynamic spatial information. For this purpose, 3D CAD, 3D GIS and image processing are integrated. The geometrical information of a building is managed by GIS using a database built from 3D CAD. Also, the dynamic spatial information of a building (i.e., flow size of pedestrians) is obtained using image processing. We implemented a prototype version of the proposed system to a real environment and it showed promising results for a running system covering a wide area.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128175455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Small and medium-sized enterprises (SMEs) in China are notable for their large number and rapid growth, and there is a huge demand among them to establish e-commerce businesses through their website portals. This paper proposes a template-based e-commerce website builder (TEB) for these enterprises that is easy to use and flexible to extend. Each template in TEB is composed of multiple template pages containing tags. The template engine parses the pages and tags and constructs template websites intelligently. The appearance design of a template is separated from the development of its data tags, and these tasks are distributed to different actors who can perform them simultaneously. A case study shows the efficiency of website building and template development with TEB's built-in tools.
{"title":"TEB: A Template-Based E-commence Website Builder for SMEs","authors":"Ying Jiang, Hui Dong","doi":"10.1109/FGCNS.2008.91","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.91","url":null,"abstract":"Small and medium sized enterprises (SMEs) in China feature their big quantity and fast developing speed. Itpsilas a huge requirement for them to establish e-commence business through their website portals. This paper proposes a template-based E-commence website Builder (TEB) for them, which is easy to use and flexible to extend. Each template in TEB is composed of multiple template pages with tags. Template engine can parse the pages and tags and construct template websites in an intelligent way. The appearance design of a template is separated from the data tag development. These tasks are distributed to different actors to perform in a simultaneous way. A case study shows the efficiency of website building and template development with built-in tools of TEB.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127436401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital content services in the home network provide a wide variety of services, such as cultural content, and offer convenience, accessibility, and effectiveness to users. However, digital content is easy to duplicate illegally and to use fraudulently by malicious parties, which hinders the development of digital content services in the smart home environment. To protect digital content from these problems, watermarking and DRM (digital rights management) techniques have been researched. Existing techniques, however, target only specific formats and are not universal. In addition, techniques that protect digital content with encryption algorithms suffer from extremely long encryption and decryption times when large amounts of content are processed. This research suggests a way to provide digital content services effectively in the smart home environment. The proposed method partially encodes the digital content header and applies permutation and recombination to the separated content segments, so that DRM services can be provided for all kinds of digital content. It makes the encryption and decryption of digital content faster while keeping the existing level of security, and it is flexible enough to handle various kinds of digital content.
{"title":"A Study on the Permutation and Recombination Method Digital Contents for DRM in Smart Home Environment","authors":"Eun-Gyeom Jang, Byung-Ok Jeong, Byoung-Soo Koh, Young-Rak Choi","doi":"10.1109/FGCNS.2008.18","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.18","url":null,"abstract":"The digital content services in home network provide a various kind of services such as cultural life and offer convenience, accessibility and effectiveness to users. But, digital contents are easy to be duplicated illegally and used fraudulent by malicious party which break down the development of digital content services in smart home environment. To protect digital contents from those problems, watermark or DRM (digital rights management) technique has been researched. The existing technique is only for a specific format, which do not have a universal feature. In addition, there is a technique using encryption algorithms to protect digital contents but the problem is when lots of digital contents are used, encrypting and decrypting times are extremely long to be processed. This research suggests the way on how to provide the digital contents service well in smart home environment. The proposed method using a partly decoded method of digital contents header and permutations recombination about separated contents are being used widely to provide DRM services on all kind of digital contents. And, it makes the process of encryption and decryption on digital contents faster and it includes the function to keep the existing security so it is very flexible for various kinds of digital contents.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127498932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present the development of a PET banding header that complements the steel banding machine currently installed at Pohang Steel Works, South Korea. The PET banding header was developed in response to surface damage on cold-rolled products during transport. Because the PET banding header uses friction binding rather than the existing heat-binding technology, the strength concentrated on the binding area is significantly improved, and its efficiency is also increased because it was designed to cover a wider banding range. In addition, to cut the cost of the newly developed equipment, the PET banding header allows the steel banding machine and the PET banding machine to be used together. As a result of applying it to facilities in the field, we were able to reduce facility investment, demonstrate efficient facility maintenance and, more importantly, resolve customer complaints.
{"title":"Development of Automation Strapping Machine Using PET Band","authors":"HwangRyol Ryu, kiSung yoo, Chintae Choi","doi":"10.1109/FGCNS.2008.129","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.129","url":null,"abstract":"We present an equipment development for PET banding header that complements the steel banding machine currently being installed in Pohang Steel Works, South Korea. The PET banding header was developed due to the damage done on the surface of the cold rolling products whilst being transported. Because the PET Banding Header was designed as a friction-binding technology against the existing heat binding technology, the intensity concentrated on the binding area was significantly improved and its efficiency was also increased because it was designed to be a wider range of the banding. In addition, for the cost-cutting of the new equipment in development, the PET Banding header allows for both the Steel Banding Machine and PET Banding Machine to be utilized together. As a result, being applied to the facilities in the field, we were able to reduce the facility investment, demonstrate the efficient facility maintenance, and more importantly, solve the complaints from the customers.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125650636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new method for computing the similarity of two blog posts is proposed in this paper. The method has two main parts: keyword extraction and semantic similarity measurement. In the keyword extraction part, the method exploits particular post features to extract keywords from a blog post, with the aim of improving the correlation rate. To compute the similarity of any two blog posts more effectively, the semantic similarity measurement part makes use of a personal ontology that represents a single post; the aim is to transform blog post similarity into personal ontology similarity. The method has already been applied in an original retrieval system that supports searching not only by several keywords but also by a blog post (given as a URL). The experimental results demonstrate that the proposed method is effective.
{"title":"Research on Blog Similarity Based on Ontology","authors":"S. Yan, Zhao Lu, Junzhong Gu","doi":"10.1109/FGCNS.2008.8","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.8","url":null,"abstract":"A new method to compute the similarity of two blog posts is proposed in this paper. This method mainly has two parts including keywords extraction and semantic similarity measurement. During keywords extraction part, the method utilizes particular post features to extract keywords from one blog post with the aim to improve the correlation rate. In order to compute the similarity of any two blog posts more effectively, semantic similarity measurement part make use of personal ontology that denotes single post, the aim is to transform the blog post similarity into the personal ontology similarity. This method has already been applied in an original retrieval system which supports searching not only by several keywords, but also by a blog post (in the type of URL). The experimental results demonstrate that the proposed method is effective.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130423571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a robust ordering scheme for entropy coding of gray-level images. The goal is to reduce the additional information that must be transmitted with the bitstream. The proposed scheme orders gray-levels by their co-occurrence counts with neighboring pixels; that is, gray-levels are replaced by their ordering numbers without any additional information. Computer simulations verify that the proposed scheme reduces the compression bit rate by up to 44.12% and 18.41% compared with plain entropy coding and the conventional ordering scheme, respectively. Our scheme can therefore be applied successfully in areas that require lossless compression and data compaction.
{"title":"A Robust Ordering Scheme for Entropy Coding in Gray-Level Image","authors":"N. Kim, Kang-Soo You, Hoon-Sung Kwak","doi":"10.1109/FGCNS.2008.31","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.31","url":null,"abstract":"In this paper, we propose a robust ordering scheme for entropy coding in gray-level image. The issue is to reduce additional information needed when bitstream is transmitted. The proposed scheme uses the ordering method of co-occurrence count about gray-levels in neighboring pixels. That is, gray-levels are substituted by their ordering numbers without additional information. From the results of computer simulation, it is verified that the proposed scheme could be reduced the compression bit rate by up to 44.12%, 18.41% comparing to the entropy coding and conventional ordering scheme respectively. So our scheme can be successfully applied to the application areas that require of losslessness and data compaction.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129275194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Content (IC) is an important dimension in assessing the semantic similarity between two terms or word senses in lexical knowledge. The conventional method of obtaining the IC of word senses combines knowledge of their hierarchical structure from an ontology such as WordNet with actual usage statistics derived from a large corpus. In this paper, a new model of IC is presented that relies on the hierarchical structure alone. The model considers not only the hyponyms of each word sense but also its depth in the structure. The IC value is easier to calculate under our model, and when used as the basis of a similarity measure it yields judgments that correlate more closely with human assessments than approaches whose IC values consider only hyponyms or are obtained through corpus analysis.
{"title":"A New Model of Information Content for Semantic Similarity in WordNet","authors":"Zili Zhou, Yanna Wang, Junzhong Gu","doi":"10.1109/FGCNS.2008.16","DOIUrl":"https://doi.org/10.1109/FGCNS.2008.16","url":null,"abstract":"Information Content (IC) is an important dimension of assessing the semantic similarity between two terms or word senses in word knowledge. The conventional method of obtaining IC of word senses is to combine knowledge of their hierarchical structure from an ontology like WordNet with actual usage in text as derived from a large corpus. In this paper, a new model of IC is presented, which relies on hierarchical structure alone. The model considers not only the hyponyms of each word sense but also its depth in the structure. The IC value is easier to calculate based on our model, and when used as the basis of a similarity approach it yields judgments that correlate more closely with human assessments than others, which using IC value obtained only considering the hyponyms and IC value got by employing corpus analysis.","PeriodicalId":370780,"journal":{"name":"2008 Second International Conference on Future Generation Communication and Networking Symposia","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128798499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}