Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240451
N. Prat, J. Akoka, I. Comyn-Wattiau
Business intelligence is based on data warehouses. Data warehouses use a multidimensional model, which represents relevant facts and their measures according to different dimensions. Based on this model, OLAP cubes may be defined, enabling decision makers to analyze and synthesize data. Ontologies (and, more specifically, OWL ontologies) are a key component of the semantic Web. This paper proposes an approach to represent multidimensional models as OWL-DL ontologies. To this end, it presents the multidimensional metamodel, the concepts of OWL-DL, and transformation rules for mapping a multidimensional model into an OWL-DL ontology. It then illustrates the approach with a case study based on a simplified spatiotemporal data warehouse: the transformation rules are refined to deal with spatiotemporal data warehouses and applied step by step, and the resulting ontology is implemented in the Protégé ontology tool. As the case study illustrates, the approach enables better formalization and inferencing, thanks to OWL-DL. The ontology may also be used to represent OLAP cubes on the semantic Web (with RDF), by defining these cubes as instances of the OWL-DL multidimensional ontology.
Title: Transforming multidimensional models into OWL-DL ontologies
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
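The flavour of such a transformation can be sketched as follows. This is a minimal, hypothetical example of emitting an OWL-DL fragment (in Turtle syntax) for one fact with its measures and dimensions; the namespace, class and property names, and the specific fact-to-class mapping are illustrative assumptions, not the paper's actual transformation rules.

```python
# Hypothetical sketch: map a fact, its measures (datatype properties),
# and its dimensions (object properties to dimension classes) to an
# OWL-DL fragment in Turtle syntax. Names and URIs are illustrative.

def fact_to_owl(fact, measures, dimensions, ns="http://example.org/dw#"):
    """Emit Turtle for one fact class of a multidimensional model."""
    lines = [
        f"@prefix : <{ns}> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        "@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .",
        f":{fact} a owl:Class .",
    ]
    for m in measures:
        # Each measure becomes a datatype property of the fact class.
        lines += [f":{m} a owl:DatatypeProperty ;",
                  f"    rdfs:domain :{fact} ;",
                  "    rdfs:range xsd:decimal ."]
    for d in dimensions:
        # Each dimension becomes a class linked from the fact class.
        lines += [f":{d} a owl:Class .",
                  f":has{d} a owl:ObjectProperty ;",
                  f"    rdfs:domain :{fact} ;",
                  f"    rdfs:range :{d} ."]
    return "\n".join(lines)

print(fact_to_owl("Sales", ["amount", "quantity"], ["Store", "Date"]))
```

An ontology produced this way can then be loaded into Protégé, where instances of the fact and dimension classes would represent cube cells and dimension members.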
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240456
Margit Schwab
This paper discusses compliance in the context of business process modeling. The suggested approach is a contribution to the topic of `compliance by design'. Its focus is on the development of quantitative compliance indicators for evaluating how well a business process model fits compliance parameters. Calculating these indicators helps optimize business process models with regard to their compliance design; the calculation algorithm builds on existing simulation algorithms. Finally, first steps towards the development of such compliance indicators are presented in the later part of the paper.
Title: Process-based compliance: Probabilities
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
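One simulation-based way to obtain such a quantitative indicator can be sketched as follows. This is an illustrative Monte Carlo estimate of the probability that a process instance stays on compliant paths; the branch probabilities, the per-branch compliance flags, and the indicator definition are assumptions for illustration, not the paper's actual algorithm.

```python
import random

# Hypothetical sketch of a simulation-based compliance indicator:
# estimate the fraction of process instances that traverse only
# compliant paths. Branch probabilities and compliance flags are
# illustrative, not taken from the paper.

def compliance_indicator(branches, runs=10_000, seed=42):
    """branches: list of (probability_of_taking_branch, is_compliant)
    for successive XOR gateways. Returns the estimated probability
    that an instance is fully compliant."""
    rng = random.Random(seed)
    compliant_runs = 0
    for _ in range(runs):
        ok = True
        for p, compliant_if_taken in branches:
            taken = rng.random() < p
            # An instance is non-compliant if it takes a branch
            # flagged as non-compliant.
            if taken and not compliant_if_taken:
                ok = False
                break
        compliant_runs += ok
    return compliant_runs / runs

# Two gateways: a 10% chance of a non-compliant shortcut, then a 5%
# chance of skipping a mandatory approval step.
print(compliance_indicator([(0.1, False), (0.05, False)]))
```

With independent branches the analytic value is 0.9 * 0.95 = 0.855, which the simulation approximates; the point of reverting to simulation is that real process models add dependencies that make the closed form impractical.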
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240463
Ludmila Penicina
Business processes are key corporate assets that both generate and require knowledge. To create an accurate and realistic process model during the business process modeling phase, a business process analyst needs existing business process knowledge embedded in internal organizational resources, such as documentation and other software artifacts, as well as in external resources, such as legal document repositories, standards, regulations, and business process frameworks. The aim of this research is to design a service model that identifies and extracts knowledge related to business processes from existing external and internal sources and stores it as an ontology, enabling the extracted knowledge to be represented in a machine-readable format. This research is in the initial analysis phase of the first year of doctoral studies.
Title: Knowledge service model for business process design
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240422
S. Silva, João Araújo, A. Rodrigues, Matías Urbieta, A. Moreira, S. Gordillo, G. Rossi
Web Geographic Information Systems (GIS) are systems composed of software, hardware, spatial data, and computing operations, which aim to collect, model, store, share, retrieve, manipulate, and display geographically referenced data. The development of online geospatial applications is currently on the rise, but this type of application often involves dealing with concerns (i.e., properties) that are inherently volatile, implying considerable effort for system evolution. Nevertheless, geospatial concerns (e.g., temporarily blocked streets), although changeable, are reusable. However, a lack of modularization in software artifacts (including a system's models) can compromise reusability. In this context, the use of requirements analysis patterns, enriched with aspect-oriented modeling techniques, can support reusability and improve modularity. In this paper, we introduce requirements analysis patterns for geospatial concerns to facilitate modularity in Web GIS applications. These patterns are derived from domain analysis of Web GIS applications and described using a template supported by a comprehensive tool, enabling the completion of specific geospatial patterns.
Title: Reuse of spatial concerns based on aspectual requirements analysis patterns
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240466
Thorsten Winsemann, V. Köppen
The persistence of redundant data in Data Warehouses is often justified simply by the better performance achieved when accessing data for analysis and reporting. However, there are other reasons to store data persistently, which are often not recognized when designing Data Warehouses. As the processing and maintenance of data are complex and require huge effort, less redundancy reduces this effort. The latest in-memory technologies enable good response times for data access. This raises the question of which data, and for which purposes, really need to be stored persistently. We present a compendium of purposes for data persistence and use it as a basis for deciding whether or not to store data.
Title: Persistence in Data Warehousing
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240424
I. Schmitt, David Zellhöfer
The utility of preferences within the database domain is widely accepted. Preferences provide an effective means for query personalization and information filtering. Nevertheless, two preference approaches, qualitative and quantitative, still compete. In this paper, we contribute to bridging both approaches and compare their expressive power and different usage scenarios. To combine qualitative and quantitative preferences, we introduce and discuss mappings that transform a query from one approach into its counterpart. As the qualitative approach we consider Chomicki's preference formulas, and as the quantitative approach our CQQL approach, which extends the relational calculus with proximity predicates. To facilitate query formulation for the user, we extend the CQQL approach with condition learning: user-defined preferences amongst database objects serve as input to learn logical conditions within a CQQL query. Hereby, we can support the user in the cognitively demanding task of query formulation.
Title: Condition learning from user preferences
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
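The contrast between the two preference styles can be sketched as follows. This is an illustrative example only: a Chomicki-style qualitative preference is a dominance relation between tuples, while a quantitative, CQQL-like approach assigns each tuple a score in [0, 1] that logical connectives can combine. The hotel attributes, the score normalization, and the use of a product for conjunction are assumptions for illustration, not the paper's definitions.

```python
# Illustrative sketch of the two preference styles the paper bridges.

def dominates(a, b):
    """Qualitative, Chomicki-style: a dominates b if a is no worse on
    both price and distance and strictly better on at least one."""
    no_worse = a["price"] <= b["price"] and a["distance"] <= b["distance"]
    better = a["price"] < b["price"] or a["distance"] < b["distance"]
    return no_worse and better

def score(x, max_price=200.0, max_distance=10.0):
    """Quantitative, CQQL-like: proximity-style score in [0, 1],
    higher is better; conjunction modelled here as a product."""
    cheap = 1.0 - min(x["price"] / max_price, 1.0)
    close = 1.0 - min(x["distance"] / max_distance, 1.0)
    return cheap * close

hotels = [{"price": 80, "distance": 2.0},
          {"price": 120, "distance": 1.0},
          {"price": 150, "distance": 4.0}]

# Qualitative answer: the set of undominated tuples.
undominated = [h for h in hotels
               if not any(dominates(o, h) for o in hotels)]
print(len(undominated))        # → 2

# Quantitative answer: a single optimum under the scoring function.
print(max(hotels, key=score))  # → {'price': 80, 'distance': 2.0}
```

The mappings discussed in the paper would translate between these two views; condition learning goes the other way, inferring the logical conditions of a scoring query from pairwise preferences such as "hotel 1 is preferred to hotel 3".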
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240421
Iliana Iankoulova, M. Daneva
Many publications have dealt with various types of security requirements in cloud computing but not all types have been explored in sufficient depth. It is also hard to understand which types of requirements have been under-researched and which are most investigated. This paper's goal is to provide a comprehensive and structured overview of cloud computing security requirements and solutions. We carried out a systematic review and identified security requirements from previous publications that we classified in nine sub-areas: Access Control, Attack/Harm Detection, Non-repudiation, Integrity, Security Auditing, Physical Protection, Privacy, Recovery, and Prosecution. We found that (i) the least researched sub-areas are non-repudiation, physical protection, recovery and prosecution, and that (ii) access control, integrity and auditability are the most researched sub-areas.
Title: Cloud computing security requirements: A systematic review
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240455
Andres Jimenez Ramirez, I. Barba, C. D. Valle, B. Weber
The manual specification of imperative business process (BP) models can be very complex and time-consuming, potentially leading to non-optimized models or even errors. To support process analysts in the definition of these models, we describe a method for generating optimized configurable BP models from a constraint-based specification, considering multiple objectives. A constraint-based specification typically allows for several different ways of executing it, leading to several enactment plans which can, however, vary greatly with respect to how well different performance objective functions are achieved. We therefore automatically generate different plans and select the ones which best fit the objectives of the company. The generated plans are then merged into an optimized configurable BP model to support the model expert in choosing the most appropriate plan, depending on the importance of each objective at configuration time.
Title: Generating multi-objective optimized configurable business process models
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
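The plan-selection step can be sketched as follows. This is an illustrative multi-objective filter: among candidate enactment plans, keep only the Pareto-optimal ones, since any dominated plan is worse on every objective than some alternative. The plan names and objective values are made up for illustration; the paper's actual generation and merging of plans is more involved.

```python
# Hypothetical sketch: select Pareto-optimal enactment plans under
# multiple objectives (all minimized). Plans and values are made up.

def pareto_optimal(plans):
    """plans: dict name -> tuple of objective values (minimize each).
    Returns the set of names whose vectors are not dominated."""
    def dominated(v, w):
        # w dominates v: no worse everywhere, strictly better somewhere.
        return (all(wi <= vi for wi, vi in zip(w, v))
                and any(wi < vi for wi, vi in zip(w, v)))
    return {n for n, v in plans.items()
            if not any(dominated(v, w)
                       for m, w in plans.items() if m != n)}

plans = {
    "A": (10, 300),  # (cycle time, cost)
    "B": (12, 250),  # slower but cheaper: a genuine trade-off with A
    "C": (15, 400),  # dominated by A on both objectives
}
print(sorted(pareto_optimal(plans)))  # → ['A', 'B']
```

The surviving plans are exactly the ones worth merging into the configurable model: at configuration time the model expert picks among them according to the relative importance of each objective.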
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240423
D. Costal, Xavier Franch
There are many approaches that propose the use of measures for assessing the quality of conceptual schemas. Many of these measures focus purely on the syntactic aspects of the conceptual schema diagrams, e.g., their size, their shape, etc. Similarities among different measures may be found both at the intra-model level (i.e., several measures over the same type of diagram are defined following the same layout) and at the inter-model level (i.e., measures over different types of diagrams are similar given an appropriate metaschema correspondence). In this paper, we analyse these similarities for a particular family of diagrams used in conceptual modelling: those that can ultimately be seen as a combination of nodes and edges of different types. We propose a unifying measuring framework for this family to facilitate the measure definition process and illustrate its application on a particular type, namely business process diagrams.
Title: A unifying framework for the definition of syntactic measures over conceptual schema diagrams
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
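The unifying idea can be sketched as follows. Once any diagram in the family is abstracted to typed nodes and typed edges, size and shape measures become generic counts parameterized by type; the measure names, node/edge types, and the toy business process diagram below are illustrative assumptions, not the paper's framework.

```python
from collections import Counter

# Illustrative sketch: generic syntactic measures over a diagram
# abstracted to typed nodes and typed edges. Types and the example
# diagram are hypothetical.

def size_measures(nodes, edges):
    """nodes: list of (id, type); edges: list of (src, dst, type).
    Returns measures applicable to any node-and-edge diagram kind."""
    node_types = Counter(t for _, t in nodes)
    edge_types = Counter(t for _, _, t in edges)
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "nodes_by_type": dict(node_types),
        "edges_by_type": dict(edge_types),
        # A simple shape measure: average out-degree.
        "avg_out_degree": len(edges) / len(nodes) if nodes else 0.0,
    }

# A toy business process diagram: tasks and a gateway as nodes,
# sequence flows as edges.
nodes = [("t1", "task"), ("t2", "task"), ("g1", "gateway")]
edges = [("t1", "g1", "seq"), ("g1", "t2", "seq")]
print(size_measures(nodes, edges))
```

The same function would measure, say, a class diagram by passing classes and associations instead of tasks and sequence flows; this is exactly the kind of reuse a metaschema correspondence enables.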
Pub Date: 2012-05-16 | DOI: 10.1109/RCIS.2012.6240450
Jérémie Melchior, J. Vanderdonckt, P. V. Roy
This paper introduces, motivates, defines, and exemplifies the concept of a distribution graph as a way of modelling and developing distributed user interfaces for interactive systems. A distribution graph consists of a statechart model enriched as follows: states represent the individual states of the entities involved in the distribution, as well as a collective representation of their synchronization; transitions are represented by event-condition-action rules whose action part consists of a distribution script. A distribution script expresses the distribution behaviour based on distribution primitives. These primitives are basic operations that manipulate parts or wholes of a user interface for distribution at run-time. The primitives are themselves implemented on top of an environment for distributed computing available on four major computing platforms (i.e., Microsoft Windows, Mac OS X, Linux, and Mobile Linux). Thanks to the capabilities of this environment, the user interfaces belonging to these distributed systems can run interchangeably on any of these platforms. This paper defines the new concepts introduced for this purpose, i.e., distribution primitive, distribution script, and distribution graph, and demonstrates how they can effectively support distributed user interfaces.
Title: Modelling and developing distributed user interfaces based on distribution graph
Published in: 2012 Sixth International Conference on Research Challenges in Information Science (RCIS)
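The event-condition-action shape of a distribution-graph transition can be sketched as follows. This is a minimal, hypothetical model in which the distribution script is a plain callable; the state names, the event, and the "move toolbar" script are illustrative, and the paper's actual primitives and multi-platform runtime are not modelled.

```python
# Minimal sketch of distribution-graph transitions in the
# event-condition-action style described in the abstract. States,
# events, and the distribution script below are illustrative.

class DistributionGraph:
    def __init__(self, initial):
        self.state = initial
        # (state, event) -> (condition, action, next_state)
        self.transitions = {}

    def add(self, state, event, condition, action, nxt):
        self.transitions[(state, event)] = (condition, action, nxt)

    def fire(self, event, ctx):
        """Dispatch an event: if a transition exists for the current
        state and its condition holds, run the distribution script
        (action) and move to the next state."""
        key = (self.state, event)
        if key not in self.transitions:
            return False
        condition, action, nxt = self.transitions[key]
        if not condition(ctx):
            return False
        action(ctx)  # the distribution script
        self.state = nxt
        return True

log = []
g = DistributionGraph("centralized")
g.add("centralized", "second_screen_attached",
      condition=lambda ctx: ctx["screens"] >= 2,
      action=lambda ctx: log.append("move toolbar to screen 2"),
      nxt="distributed")
g.fire("second_screen_attached", {"screens": 2})
print(g.state, log)  # → distributed ['move toolbar to screen 2']
```

In the paper's setting the action would invoke distribution primitives that physically move or copy interface parts between platforms, while the statechart keeps the individual and collective states synchronized.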