Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.73
Power Allocation Algorithms of PoE for On-Demand Power Supply
M. Yokohata, Tomotaka Maeda, Y. Okabe
An on-demand power supply network has been proposed in the i-Energy project as a method of achieving power saving in the home. In this network, the power requests of devices are classified by priority in terms of Quality of Life (QoL). When a device requires power, it sends a power request message containing the required power and its priority to the network; when the network accepts the request, it supplies power to the device. In this paper we focus on Power over Ethernet (PoE), in which power requests are sent from a PD (Powered Device) to the PSE (Power Sourcing Equipment) via the Link Layer Discovery Protocol and the physical layer. However, a PSE cannot allocate power to several PDs according to priority. We propose priority-based fair power allocation algorithms from the PSE to the PDs that minimize the decrease in QoL. We measured the power request and allocation times of PDs using real PoE equipment, and we show that allocation completes within a certain period of time even in the worst case, that is, when many devices send power request messages at the same time.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.93
Analyzing and Predicting Software Quality Trends Using Financial Patterns
Aseel Hmood, J. Rilling
The financial community assesses and analyzes fundamental qualities of stocks to predict their future performance. During this analysis, different external and internal factors that can affect the stock price are considered. Financial analysts use indicators and analysis patterns, such as Moving Averages, Crossover patterns, and M-Top/W-Bottom patterns, to determine stock price trends and potential trading opportunities. Similar to the stock market, the qualities of software systems are part of larger ecosystems that are affected by internal and external factors. Our research provides a cross-disciplinary approach that takes advantage of these financial indicators and analysis patterns and re-applies them to the analysis and prediction of evolvability qualities in software systems. We conducted several case studies to illustrate the applicability of our approach.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.101
A Distributed Protective Approach for Multiechelon Supply Systems
Xiaoyi Zhang, Zheng Zheng, Yueni Zhu, K. Cai
The future development trend of many supply systems is to become distributed, which highlights the need for agile and comprehensive decisions in both risk evaluation and protective approaches. To meet these requirements, this paper proposes a distributed risk evaluation model, the distributed τ-interdiction median (DRIM) model for multiechelon supply systems, which enables a supply system to estimate hazards using distributed computational resources. Furthermore, a protective resource allocation approach, the DRIM-based protection approach (DRIMP), is introduced, aimed at making rational defensive strategies that consider the benefit of each facility. Experiments on typical data sets indicate that the DRIM model and the DRIMP approach fulfill the agility, distributed-computing, and vendor-neutrality requirements. The defensive strategies obtained with the DRIMP approach are more rational in a distributed environment than those of current centralized methods.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.24
Using Dependency Relations to Improve Test Case Generation from UML Statecharts
V. Chimisliu, F. Wotawa
In model-based testing, the size of the model has a great impact on the time needed to compute test cases. In model checking, dependence relations have been used to slice specifications in order to obtain reduced models pertinent to the criteria of interest. In specifications described using state-based formalisms, slicing involves removing transitions and merging states, thus obtaining a structurally modified specification. Using such a specification for model-based test case generation, where sequences of transitions represent test cases, might produce traces that are not valid on a correctly behaving implementation. To avoid this problem, we suggest using control, data, and communication dependences to identify parts of the model that can be excluded, so that the remaining specification can be safely employed for test case generation. This information is included in test purposes, which are then used in the test case generation process. We also present first empirical results obtained on several models from industry and the literature.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.114
Automatically Characterising Components with Concerns and Reducing Tangling
E. Tramontana
Developing large systems that exhibit a high degree of modularity can be a difficult task even for experienced developers. Hindering modularity has several harmful effects, such as decreased readability, higher complexity, and difficulties in reusing and evolving components. This paper assists developers in achieving modularity of components by providing a way to automatically characterise the concerns within components according to the APIs they are based on. This makes it possible to determine the degree of tangling and scattering of concerns over methods and classes. Moreover, by means of the proposed approach, developers are given suggestions on how to reduce the tangling of some components, thanks to the use of a metric and refactoring techniques. For systems comprising thousands of classes this is valuable support, since unassisted developers could miss appropriate refactoring opportunities due to the large number of details they would have to take into account.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.45
The Service Design of Intelligent Robot (iRobot) for Entertainment
Tung-Hsiang Chou
This research uses IT-enabled services to create an intelligent robot for entertainment from a service perspective and implements a real kart-racing scenario using LEGO NXT. First, a service perspective is used to analyze which services are needed, and a service blueprint is then used to illustrate the services of the intelligent robot (iRobot). The iRobot hardware is LEGO Mindstorms NXT, a programmable robotics kit released by the LEGO Corporation; to apply the service theories in a real environment, this research uses a kart-racing example in which iRobots race against each other. Second, the iRobot combines several technologies and multi-agent systems (MAS), such as remote control techniques, a Bluetooth agent, an intelligent environmental detection agent, and other agents. The example is intended to add fun and is also used to validate the design from a service perspective.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.47
A Problem Solver for Scheduling Workflows in Multi-agents Systems Based on Petri Nets
Fu-Shiung Hsieh, Jim-Bon Lin
Workflow scheduling in multi-agent systems (MAS) is a challenging problem due to the computational complexity involved, the distributed architecture, and the dependencies among different agents' workflows. How to develop a problem solver that can be applied in MAS to achieve coherent and consistent workflow schedules that meet a customer's order is an important issue. In this paper, we propose a solution methodology for scheduling workflows in MAS. Our solution combines the multi-agent system architecture, the contract net protocol, and workflow models specified by Petri nets. The solution algorithm is developed based on a transformation of the workflow models into network models. A subgradient algorithm and a heuristic algorithm are applied to find the solutions. A problem solver for workflow scheduling in MAS has been implemented.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.67
A Customized Visiting Route Service under RFID Environment
Chieh-Yuan Tsai, Bo-Han Lai
Providing high-quality service according to consumer preferences has become a critical issue for amusement parks to survive in a rapidly changing environment. To fulfill this need, this research proposes a customized visiting route service that tells tourists which facilities they should visit and in what order. In the studied environment, all regions are covered by Radio-Frequency Identification (RFID) readers, so that the visiting behavior of a tourist (i.e., visiting locations, sequences, and corresponding timestamps) can be collected and stored in a route database. The proposed route recommendation service consists of two major modules. The first module discovers frequent Location-Item-Time (LIT) sequential patterns using the proposed sequential pattern mining procedure. In the second module, the route suggestion procedure filters the LIT sequential patterns under constraints on intended visiting time, favorite regions with their related visiting times, and favorite recreation facilities, and then selects the top-k suggested routes to guide visitors. To show the feasibility of the proposed route recommendation system, Tokyo DisneySea in Japan is used as an example. The experimental results show that the recommended routes not only follow previous tourists' visiting experiences but also satisfy the visitor's customized requirements.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.32
Implementation of Metadata Quality Metrics and Application on Public Government Data
Konrad Johannes Reiche, Edzard Höfig
Public government data refers to documents and proceedings that are freely available and accessible. Repositories facilitate the collection, publishing, and distribution of such data in a centralized and possibly standardized way. Metadata is used to catalog and organize the provided data, and the operability and interoperability of a repository depend on the metadata quality. In order to measure the efficiency of a repository, the metadata quality needs to be quantified. Quality assessment is considered most reliable when carried out by a human expert; this approach, however, is not always feasible, so an automatic assessment of metadata quality should be pursued. Metrics proposed in the field of metadata quality assessment are taken, implemented, and applied to three public government data repositories, namely GovData.de (Germany), data.gov.uk (United Kingdom), and publicdata.eu (Europe). Five quality metrics were applied: completeness, weighted completeness, accuracy, richness of information, and accessibility. The metrics and their implementation are discussed in detail and the results are evaluated.
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.15
An Abstraction Method of Behaviors for Process Algebra
J. On, Yeongbok Choe, Moonkun Lee
In CCS, Milner defined the notions of strong and weak bisimulation for behavioral equivalence between two processes or systems. However, these notions have not been studied further from the perspective of abstraction of such behaviors in process algebra. In some sense, weak bisimulation can be interpreted as a kind of behavioral equivalence between two processes at a certain degree of abstraction, based on observability. We noticed the possibility of representing such observable behaviors within a structure of abstraction and of verifying a number of behavioral equivalences within that structure. In this paper, this possibility is realized with a new concept, the Behavior Ontology. In the ontology, an action is defined as an interaction between two processes or systems, and a behavior is defined as a sequence of such actions. Since some actions of different behaviors can overlap in a structural way, the behaviors can be organized in a lattice structure, namely a Behavior Lattice. Consequently, the lattice reveals levels of observability of the behaviors, based on their degree of abstraction. From the lattice, a strong bisimulation and its weak bisimulations can be detected visually. A comparative study shows that the ontology is effective and efficient for representing such abstract behaviors and for verifying strong and weak bisimulations in a lattice structure. The ontology can therefore be considered a unique and innovative structure for representing behaviors in a hierarchical structure of abstraction.