Title: Video transfer control protocol for a wireless video demonstrator
Authors: J. Suhonen, Marko Hännikäinen, O. Lehtoranta, M. Kuorilehto, T. Hämäläinen, M. Niemi
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000433
Abstract: Real-time streaming video is expected to emerge as a key service in telecommunications systems, including wireless networks. This paper presents the functionality and implementation of the wireless Video Control Protocol (VCP). The protocol was implemented to provide real-time video stream transmission over heterogeneous wireless network technologies. VCP is embedded in a wireless video demonstrator consisting of Windows NT hosts that contain a real-time H.263 encoder, video stream parsing functionality, and several network connections, such as wireless LAN, Bluetooth, and GSM data. The protocol includes functionality for protecting the video stream transfer and for adapting the different network technologies to one another.
Title: E-commerce: continuous growth or leveling out?
Authors: J. Siddiqi, Babak Akhgar, Carl Davies, S. Al-Khayatt
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000437
Abstract: This paper challenges the basis of highly optimistic predictions for the growth of business-to-consumer (B2C) e-commerce and discusses a number of barriers to it. In a corporate age when 'the customer is king', it questions whether Internet shopping really is what consumers 'want' or 'need'. The discussion provides a socio-technical exposition of this optimism and counters it with a prudent and realistic model for progress in e-commerce. We suggest that, in the near future, e-commerce may mature as a steadily growing niche market - an alternative way to shop rather than the way - and that the Internet will probably evolve as an all-embracing communication and information tool rather than the 'shopping mall of the future'. A discussion of the future of B2C e-commerce is presented.
Title: On a certain class of algorithms for noise removal in image processing: a comparative study
Authors: C. Cocianu, L. State, P. Vlamos
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000370
Abstract: The effectiveness of restoration techniques depends mainly on the accuracy of the image model. One of the most popular degradation models assumes that image blur can be modeled as a superposition with an impulse response H, possibly space variant, whose output is subject to additive noise. Our research aims at using statistical concepts and tools to develop a new class of image restoration algorithms. We report several variants of a heuristic scatter-matrix-based algorithm (HSBA); the algorithm HBA, which uses the Bhattacharyya coefficient for image restoration; a heuristic regression-based restoration algorithm; and new restoration approaches based on the innovation algorithm. The LMS-type algorithm AMVR is also presented. A comparative study of the quality and efficiency of the presented noise removal algorithms is reported.
Title: H.263 video transcoding for spatial resolution downscaling
Authors: Zhijun Lei, N. Georganas
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000427
Abstract: To let users access video information on handheld devices, for example by downloading and playing video files, compressed video must be downscaled to a lower spatial resolution and a lower transmission bit rate. In this work, transcoding compressed H.263 video to a lower spatial resolution is discussed and realized. To reduce the computational cost, motion vectors from the incoming video stream are resampled and reused. We propose a novel approach that refines motion vectors adaptively according to the motion of each frame or each macroblock within a frame. The proposed approach improves video quality and reduces the predictive residue of each frame, and hence the transmission bit rate. Implementation results suggest that it produces better image quality and a lower transmission bit rate than a number of previous approaches.
Title: The extended Rijndael-like block ciphers
Authors: A. Duong, Minh Tran, Luong Han Co
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000384
Abstract: The National Institute of Standards and Technology (NIST) announced the Rijndael block cipher, proposed by Vincent Rijmen and Joan Daemen, as the Advanced Encryption Standard (AES) on October 2nd, 2000. Building on its mathematical and cryptographic foundations, we have studied and devised extended versions of this new standard to improve its strength and its resistance to the rapidly increasing capability of computers. We describe the results of our recent experiments on this problem.
Title: Synergies in applying RAD to support e-business project management
Authors: F. Cross, E. Lawrence
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000435
Abstract: This research investigates the synergies of applying a rapid application development (RAD) process model to support e-business project management. The paper reports on the results of 11 unstructured interviews with practicing e-business project managers. The interviews were undertaken: (1) to document the current development approaches being used by e-business project managers, and (2) to identify the risks and issues with which they are dealing. The research outcome supports a RAD approach for e-business development and develops a framework of strategies and tactics which e-business project managers could use to defend themselves against the threat of failure in a difficult and rapidly changing environment.
Title: An architecture for delivering broadband video over the Internet
Authors: Jian Lu
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000445
Abstract: The emergence of content delivery networks (CDNs) has helped to improve the efficiency of delivering streaming media. In this paper, we argue with evidence that the edge delivery paradigm behind current CDNs cannot scale up to delivering high-quality broadband video content, such as DVD movies, because the subscriber loop lacks sufficient, affordable bandwidth and quality of service (QoS). Additionally, it is difficult to scale up edge delivery as the number of users grows and to aggregate bandwidth demand in services such as video on demand. We describe a new architecture that extends the current CDN design with a second tier of surrogate servers. These second-tier servers, called leaf servers, are placed inside local area networks and networked homes with broadband Internet connections. High scalability and QoS can be achieved because media content is served to clients by massively distributed leaf servers within the subscriber loop.
Title: Perceptual quality assessment for video watermarking
Authors: S. Winkler, E. Gelasca, T. Ebrahimi
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000366
Abstract: The reliable evaluation of the performance of watermarking algorithms is difficult. An important aspect in this process is the assessment of the visibility of the watermark. We address this issue and propose a methodology for evaluating the visual quality of watermarked video. Using a software tool that measures different types of perceptual video artifacts, we determine the most relevant impairments and design the corresponding objective metrics. We demonstrate their performance through subjective experiments on several different watermarking algorithms and video sequences.
Title: Taxonomy-based adaptive Web search method
Authors: Said Mirza Pahlevi, H. Kitagawa
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000409
Abstract: Current crawler-based search engines usually return a long list of search results containing many noise documents. By indexing collected documents under topic paths in a taxonomy, taxonomy-based search engines can improve result quality; however, their searches are limited to locally compiled databases. We propose an adaptive Web search method that improves result quality while letting users search the many databases existing in the Web space. The method combines taxonomy-based search engines with a machine learning technique. More specifically, we construct a rule-based classifier from pre-classified documents provided by a taxonomy-based search engine for a context category selected from its taxonomy, and then use the classifier to modify the user query. The modified query is sent to crawler-based search engines, and the returned results are presented to the user. We evaluate the effectiveness of the method by showing that the results returned for the modified query consist almost entirely of documents that fall into the selected context category.
Title: Enhancing watermark robustness through mixture of watermarked digital objects
Authors: J. Domingo-Ferrer, F. Sebé
Published: 2002-04-08, Proceedings. International Conference on Information Technology: Coding and Computing
DOI: 10.1109/ITCC.2002.1000365
Abstract: After the failure of copy prevention methods, watermarking remains the main technical safeguard of electronic copyright. A watermarking scheme should offer many properties, such as imperceptibility and robustness. Robustness measures the resistance of the watermark to attacks that attempt to remove it partially or completely. Many watermarking schemes exist today, each robust against a certain list of attacks but vulnerable to many others, and it is not always easy to devise new schemes that resist ever more attacks. This paper proposes general mixture techniques that combine the properties of several watermarking methods so as to obtain watermarked objects robust against most of the attacks survived by the combined methods.