Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000407
M. Villanova-Oliver, J. Gensel, H. Martin, Christelle Erb
Web-based information systems (WIS) are now widely used for disseminating and processing information over networks. Methodological guidelines that assist WIS developers must take into account the specificities of WIS (hypermedia-structured information, navigation features, etc.). This paper presents KIWIS, a WIS generator that addresses the design and automatic deployment of such systems. Using KIWIS, designers specify, at a conceptual level, the features of the WIS to be generated. KIWIS makes these features operational by instantiating different models dedicated to describing the application domain, the expected functionalities, and adaptability features. Any WIS described and generated with KIWIS can be considered adaptable, since users access content progressively and the presentation of information follows the graphical charters selected or defined by users.
{"title":"Design and generation of adaptable Web information systems with KIWIS","authors":"M. Villanova-Oliver, J. Gensel, H. Martin, Christelle Erb","doi":"10.1109/ITCC.2002.1000407","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000407","url":null,"abstract":"Web-based information systems (WIS) are now widely used for diffusing and processing information over the network. Methodological guidelines which assist WIS developers in their task must take into account the specificities of WIS (hypermedia structured information, navigation features, etc.). This paper presents KIWIS, a generator of WIS, which addresses the issue of designing and automatically deploying such WIS. Using KIWIS, designers can specify, at a conceptual level, the features of the WIS to be generated. The features are made operational by KIWIS by instantiating different models dedicated to the description of the application domain, the expected functionalities, and some features concerning adaptability. Any WIS described and generated with KIWIS can be considered adaptable since users can progressively access the content of information while the presentation of information respects the graphical charters selected or defined by users.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130969776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000364
S. Moskowitz
Quality is subjective, but it can be objectified through the industry standards process represented by such consumer items as the compact disc (CD) and the digital versatile disc (DVD). What is lacking is a means not only of associating the creation of valued intangible assets with extensions of recognition, but also of establishing responsibility for copies that may be digitized or pass through a digital domain. Digital watermarking exists at a convergence point between piracy and privacy. Watermarks serve as a receipt for information commerce. There is unlikely to be a single digital watermark encoding scheme that best handles the trade-offs between security, robustness, and quality; rather, several architectures will address the various concerns. The most commercially useful watermarking schemes are key-based, combining cryptographic features with models of perception. Most importantly, in audio watermarking there currently exist mature technologies that have been proven to be statistically inaudible. This paper describes several decoding system applications and highlights why watermarks are a necessary feature of any workable market for the commercial exchange of content.
{"title":"What is acceptable quality in the application of digital watermarking: trade-offs of security, robustness and quality","authors":"S. Moskowitz","doi":"10.1109/ITCC.2002.1000364","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000364","url":null,"abstract":"Quality is subjective, Quality can be objectified by the industry standards process represented by such consumer items as compact disc (\"CD\") and digital versatile disc (\"DVD\"). What is lacking is a means for not only associating the creation of valued intangible assets and extensions of recognition but establishing responsibility for copies that may be digitized or pass through a digital domain. Digital watermarking exists at a convergence point between piracy and privacy. Watermarks serve as a receipt for information commerce. There is not likely to be a single digital watermark encoding scheme that best handles the trade-offs between security, robustness, and quality but several architectures to handle various concerns. The most commercially useful watermarking schemes are keybased, combining cryptographic features with models of perception. Most importantly, in audio watermarking there currently exists mature technologies which have been proven to be statistically inaudible. In this paper, a description of several of the decoding system applications, and why watermarks are a necessary feature of any workable market for the commercial exchange of content is highlighted.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132713109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000444
Jin-Wook Baek, Gyu-tae Kim, H. Yeom
The two most significant planning factors in mobile agent planning (MAP) are the number of agents used and each agent's itinerary. These factors must be well scheduled, since poor scheduling leads to longer execution times through higher routing costs. In addition, the time constraints residing on the nodes of the information repository (i.e. the information servers) also have to be handled. Consider nodes that present correct information only for a certain time interval: if an agent sent to gather information arrives earlier than a specified update time, it may retrieve useless or corrupted information. To cope with this type of information retrieval, we propose a time-constrained MAP method that finds the minimum number of agents and the best-scheduled agent itineraries for retrieving information in a distributed computing environment. The method works under the time constraints described above, allows the completion time to be lower-bounded, and minimizes routing overheads. Simulation results show that the proposed method produces results that are highly applicable to the time-constrained distributed information retrieval problem domain.
{"title":"Cost-effective planning of timed mobile agents","authors":"Jin-Wook Baek, Gyu-tae Kim, H. Yeom","doi":"10.1109/ITCC.2002.1000444","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000444","url":null,"abstract":"The two most significant planning factors in mobile agent planning (MAP) are the number of agents used and each agent's itinerary. These two planning factors must be well-scheduled, since badly-scheduled factors can cause longer execution times because of the higher routing costs. In addition to these two factors, the time constraints that reside on the nodes of the information repository (i.e. the information servers) also have to be dealt with. Consider the nodes that present correct information only for a certain time interval. If an agent is sent to gather information and arrives earlier than a specified update time, it may retrieve useless or corrupted information. To cope with these types of information retrieval, we propose a time-constrained MAP method which finds the minimum number of agents needed and the best scheduled agent itineraries for retrieving information from a distributed computing environment. The method works under the time constraints mentioned above, allows the completion time to be lower-bounded and minimizes routing overheads. Simulation results show that the proposed method produces results that are highly applicable to the time-constrained distributed information retrieval problem domain.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133759231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000389
Guangkun Sun, Jianzhong Li
Distributed digital libraries allow users to access data of different modalities, from different information sources, ranked by different criteria. Most existing applications make too many assumptions and need too much information. We assume that each information retrieval model is satisfactory in its own context. Based on this assumption, we propose two results-processing methods: Ranking by Sources (RBS) and Simply Merging Results (SMR). In RBS, we define the satisfied ranking, the ranking that satisfies most source rankings, and the satisfied distance, which indicates how well a specific source ranking suits the satisfied ranking. RBS groups the results by their ranked sources, which are sorted by their satisfied distances. In SMR, we substitute a normalized score for each result's original score and then merge the results using the normalized scores. Experiments show that our methods are well suited to rapidly expanding distributed digital libraries.
{"title":"Results processing in a heterogeneous word","authors":"Guangkun Sun, Jianzhong Li","doi":"10.1109/ITCC.2002.1000389","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000389","url":null,"abstract":"Distributed digital libraries allow users to access data of different modalities, from different information sources, and ranked by different criteria. Most applications make too many assumptions, and need too much information. We assume that each information retrieval model is satisfactory in its own context. Based on this assumption, we propose two results processing methods: Ranking by Sources (RBS) and Simply Merging Results (SMR). In RBS, we define satisfied ranking, which is the ranking satisfying most source rankings, and satisfied distance, which indicates how a specific source ranking suits the satisfied ranking. RBS groups the results by the ranked sources, which is sorted by their satisfied distances. In SMR, for each result, we substitute the normalized score for its original scores, and then merge them using normalized scores. The experiment showed that our methods are very feasible in the rapid expanding distributed digital libraries.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121538370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000430
K. Asrar-Haghighi, Y. P. Fallah, H. Alnuweiri
This paper presents the design of an MPEG-4 multicast streaming system. The signalling and delivery layer of this system conforms to the recommendations of Part 6 of the MPEG-4 standard, the Delivery Multimedia Integration Framework (DMIF). We present the issues involved in designing our multicast streaming server and client, taking into account the levels of abstraction required. The system enables transparent multi-client media streaming across the Internet through its data and control planes, achieved by extending the DMIF layer to provide multicast group functionality and signalling.
{"title":"Delivery of MPEG-4 object based multimedia in a multicast environment","authors":"K. Asrar-Haghighi, Y. P. Fallah, H. Alnuweiri","doi":"10.1109/ITCC.2002.1000430","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000430","url":null,"abstract":"This paper presents the design of an MPEG-4 multicast streaming system. The signalling and delivery layer of this system conforms to the recommendations made by Part 6 of the MPEG-4 standard - the Delivery Multimedia Integration Framework (DMIF). We present the issues involved in designing our multicast streaming server and client taking into account the levels of abstraction required. The system enables multi-client transparent media streaming across the Internet through its data and control planes. This is done through extending the DMIF layer to provide multicast group functionality and signalling.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128049826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1080/1206212X.2005.11441769
S. Chalasani, R. Boppana
Several medium-to-large companies are currently in the process of using external hosting to deploy their Internet applications. External hosting by application service providers (ASPs) such as IBM provides a low-cost, secure, and reliable way for companies to deploy e-commerce enterprise applications without purchasing costly infrastructure. However, when software is designed for applications that run on the servers of external hosts, performance may suffer if the data and the application reside at two different locations. In this paper, we provide a technique for improving performance in such applications. The technique has been implemented in large industrial software systems. We provide the software architecture and preliminary performance results.
{"title":"Software architectures for e-commerce computing systems with external hosting","authors":"S. Chalasani, R. Boppana","doi":"10.1080/1206212X.2005.11441769","DOIUrl":"https://doi.org/10.1080/1206212X.2005.11441769","url":null,"abstract":"Several medium-to-large companies are currently in the process of using external hosting to deploy their Internet applications. External hosting by such application service providers (ASPs) as IBM provides a low-cost, secure and reliable way for companies to deploy e-commerce enterprise applications without the need to purchase costly infrastructure. However, while designing software for applications that run on the servers of external hosts, performance may suffer if the data and the application reside at two different locations. In this paper, we provide a technique for improving the performance in such applications. This technique is implemented in large industrial software systems. We provide the software architecture and preliminary performance results.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132222631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000388
B. Sitohang
Parallel execution of relational algebra operators can be performed on a single computer under a multiprogramming or multitasking operating system. In doing so, response time can be improved over sequential execution by using the concepts of free operators, sets of free operators, and the degree of freedom of operators. The parallel execution of relational algebra operators introduced with these concepts can be adapted to a distributed database system environment, where each free operator executes at a specific location/computer (relative to the other free operators of the transaction). Consequently, the performance improvement depends on the number of free operators in the transaction, the number of sets of free operators, the degree of freedom of the operators, and the location of the data relative to the location where each operator executes.
{"title":"Parallel execution of relational algebra operator under distributed database systems","authors":"B. Sitohang","doi":"10.1109/ITCC.2002.1000388","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000388","url":null,"abstract":"Parallel execution of relational algebra operators can be performed on single computer, multiprogramming or multitasking operating systems. In doing so, time response can be improved, compared to sequential execution, using concepts of: free operators, set of free operators, and degree of freedom of operators. Parallel execution of relational algebra operators introduced, using the concepts above, can be adapted to the distributed database system environment, where each free operator executed in one specific location/computer (relatively to the other free operator of the transaction). Consequently, improvement of performance depends on: the number of free operators of the transaction; the number of sets of free operators; degree of freedom of operators; and data location relatively to the location where the operator executed.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"108 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114265520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000429
H. Stern, O. Hadar, Nir Friedman
This paper presents a new optimal multiplexing scheme for compressed video streams based on a piecewise-linear approximation of each stream's cumulative data curve. A linear programming algorithm is provided that takes into account the different constraints of each client. It is shown that the algorithm obtains maximum bandwidth utilization with Quality of Service (QoS) guarantees. The algorithm accounts for the interaction between the multiplexed streams and the individual streams, and simultaneously finds the optimal total multiplexed schedule and individual stream schedules that minimize the peak transmission rate. In addition, owing to the linear programming formulation, the algorithm's running time is polynomially bounded. Simulation results show a significant reduction in the peak rate and rate variability of the aggregated stream compared to the non-smoothed case. The proposed scheme therefore allows an increase in the number of simultaneously served video streams.
{"title":"Optimal video stream multiplexing through linear programming","authors":"H. Stern, O. Hadar, Nir Friedman","doi":"10.1109/ITCC.2002.1000429","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000429","url":null,"abstract":"This paper presents a new optimal multiplexing scheme for compressed video streams based on a piecewise linear approximation of the accumulative data curve of each stream. A linear programming algorithm is provided, which takes into account different constraints of each client. It is shown that the algorithm succeeds in obtaining maximum bandwidth utilization with Quality of Service (QoS) guarantees. The algorithm takes into account the interaction between the multiplexed streams and the individual streams, and simultaneously finds the optimum total multiplexed schedule and individual stream schedules that minimizes the peak transmission rate. In addition, the algorithm, due to the linear programming formulation, is bounded in polynomial time. The simulation results show a significant reduction in peak rate and rate variability of the aggregated stream, compared to the non-smoothing case. Therefore the proposed scheme allows an increase in the number of simultaneously served video streams.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123165645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000400
S. Vasikarla, M. Hanmandlu
This paper presents a contour-matching technique for identifying, from range data, the object model in a list of models that corresponds to an observed object. Three types of edge data are associated with the object and the models. These data are used in a hierarchical fashion, each stage employing one type of edge data to prune the models during matching. The matching uses quaternion theory and is better suited to the recognition of symmetric objects. The results are illustrated through simulated examples.
{"title":"Contour based matching technique for 3D object recognition","authors":"S. Vasikarla, M. Hanmandlu","doi":"10.1109/ITCC.2002.1000400","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000400","url":null,"abstract":"This paper presents a contour matching technique for the identification of an object model corresponding to an observed object from a list of object models from range data. There are three types of edge data associated with the object and the models. These data are utilized in a hierarchical fashion, each time employing one type of edge data for pruning the models during the matching. The matching uses quarternion theory and is more suitable for the recognition of symmetric objects. The results are illustrated through simulated examples.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121556714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-04-08 | DOI: 10.1109/ITCC.2002.1000417
Shinsuke Nishida
This paper proposes a non-scanning digital display algorithm for still and moving pictures. The method is independent of the display device and of camera/transmission hardware configurations limited by the number of scanning lines. The concept is to realize efficient image data transmission to display devices by converting the image into digital bit data.
{"title":"Non-scanning display method","authors":"Shinsuke Nishida","doi":"10.1109/ITCC.2002.1000417","DOIUrl":"https://doi.org/10.1109/ITCC.2002.1000417","url":null,"abstract":"This paper proposes a non-scanning display digital algorithm for still and moving pictures. This innovative method is independent of display device or camera/transmission hardware configuration limited by the number of scanning lines. The concept is the realization of effective image data transmission to display devices in which the image is converted as digital bit data.","PeriodicalId":115190,"journal":{"name":"Proceedings. International Conference on Information Technology: Coding and Computing","volume":"397 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122862037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}