Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.73
M. Yokohata, Tomotaka Maeda, Y. Okabe
An on-demand power supply network has been proposed in the i-Energy project as a way to achieve power saving in the home. In an on-demand power supply network, devices' power requests are classified by their priority in terms of Quality of Life (QoL). When a device requires power, it sends the network a power request message containing the required power and its priority; when the network accepts the request, it supplies power to the device. In this paper, we focus on Power over Ethernet (PoE), in which power requests are sent from a PD (Powered Device) to the PSE (Power Sourcing Equipment) via the Link Layer Discovery Protocol and the physical layer. However, a PSE cannot allocate power to several PDs according to priority. We propose priority-aware fair power allocation algorithms from the PSE to PDs that minimize the decrease in QoL. We measured the power request and allocation times of PDs using real PoE equipment, and show that allocation completes within a certain period of time even in the worst case, that is, when many devices send power request messages at the same time.
Title: Power Allocation Algorithms of PoE for On-Demand Power Supply
Published in: 2013 IEEE 37th Annual Computer Software and Applications Conference Workshops
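The priority-ordered allocation described in the abstract above can be illustrated with a minimal sketch. This is not the authors' algorithm, only an assumed greedy baseline: serve requests in priority order against a fixed PSE power budget, denying requests that no longer fit.

```python
# Hedged sketch (not the paper's algorithm): a PSE with a fixed power budget
# grants PD requests highest-priority-first; lower number = higher priority.

def allocate(requests, budget):
    """requests: list of (device, watts, priority). Returns device -> granted watts."""
    granted = {}
    remaining = budget
    # Stable sort: within the same priority, requests keep arrival order.
    for device, watts, prio in sorted(requests, key=lambda r: r[2]):
        if watts <= remaining:
            granted[device] = watts
            remaining -= watts
        else:
            granted[device] = 0  # denied: remaining budget is insufficient
    return granted

demo = allocate([("cam", 13, 1), ("phone", 7, 2), ("ap", 13, 2)], budget=25)
```

A real scheme would also have to handle preemption and the PoE power classes, which this sketch deliberately omits.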
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.93
Aseel Hmood, J. Rilling
The financial community assesses and analyzes the fundamental qualities of stocks to predict their future performance. This analysis considers various external and internal factors that can affect a stock's price. Financial analysts use indicators and analysis patterns, such as Moving Averages, Crossover patterns, and M-Top/W-Bottom patterns, to determine stock price trends and potential trading opportunities. Like stocks, the qualities of software systems are part of larger ecosystems that are affected by internal and external factors. Our research provides a cross-disciplinary approach that takes advantage of these financial indicators and analysis patterns and re-applies them to the analysis and prediction of evolvability qualities in software systems. We conducted several case studies to illustrate the applicability of our approach.
Title: Analyzing and Predicting Software Quality Trends Using Financial Patterns
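Two of the financial indicators named in the abstract above, moving averages and crossovers, are easy to sketch on a quality-metric time series. The window sizes and the metric are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: simple moving average and short/long crossover detection,
# applied by analogy to a software-quality metric series (e.g. complexity).

def moving_average(series, window):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def crossovers(series, short, long):
    """Indices (on the long-MA timeline) where the short MA crosses the long MA."""
    s = moving_average(series, short)[long - short:]  # align both averages
    l = moving_average(series, long)
    return [i for i in range(1, len(l))
            if (s[i - 1] - l[i - 1]) * (s[i] - l[i]) < 0]

complexity = [10, 11, 12, 14, 13, 12, 11, 10, 12, 15, 18, 20]
ma3 = moving_average(complexity, 3)
```

A crossover of a short-window average above a long-window one would, in the financial analogy, flag a trend change in the metric worth investigating.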
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.124
Amina Souag, C. Salinesi, I. Comyn-Wattiau, H. Mouratidis
Recent research has argued for the importance of considering security during the Requirements Engineering (RE) stage. The literature also emphasizes the value of ontologies in facilitating requirements elicitation: ontologies are rich sources of knowledge and, being structured and equipped with reasoning features, form a powerful tool for handling requirements. We believe that, security being a multi-faceted problem, a single security ontology is not enough to guide Security Requirements (SR) engineering efficiently. Indeed, security ontologies focus only on technical, domain-independent aspects of security, so one can hypothesize that domain knowledge is needed too. Our question is: "how can the use of security ontologies and domain ontologies be combined to guide requirements elicitation efficiently and effectively?" We propose a method that exploits both types of ontologies dynamically through a collection of heuristic production rules, and demonstrate that combining security ontologies with domain ontologies to guide SR elicitation is more effective than relying on security ontologies alone. This paper presents our method and reports a preliminary evaluation conducted through critical analysis by experts. The evaluation shows that the method strikes a good balance between genericity with respect to the ontologies (which need not be selected in advance) and specificity of the elicited requirements with respect to the domain at hand.
Title: Using Security and Domain Ontologies for Security Requirements Analysis
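The heuristic production rules mentioned in the abstract above can be pictured with a toy join of the two ontologies. The triples and the rule are invented for illustration; the paper's actual rules and ontology vocabularies are not reproduced here.

```python
# Illustrative sketch (not the authors' rule set): one production rule that
# joins a security ontology with a domain ontology -- for every domain asset
# classified under a threatened concept, elicit a protection requirement.

security = {("eavesdropping", "threatens", "personal_data"),
            ("tampering", "threatens", "transaction")}
domain = {("patient_record", "is_a", "personal_data"),
          ("payment", "is_a", "transaction")}

def elicit(security_triples, domain_triples):
    reqs = []
    for threat, _, target_class in sorted(security_triples):
        for asset, _, cls in sorted(domain_triples):
            if cls == target_class:  # the domain asset instantiates the threatened concept
                reqs.append(f"The system shall protect {asset} against {threat}.")
    return reqs

requirements = elicit(security, domain)
```

The point of the combination is visible even at this scale: the security triples alone never mention `patient_record`, yet the elicited requirement does.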
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.87
Chun-Hui Tsai, Hung-Mao Chu, Pi-Chung Wang
Packet classification is an enabling technique for the future Internet: it classifies incoming packets into forwarding classes to fulfill different service requirements, and is necessary for IP routers to provide network security and differentiated services. Recursive Flow Classification (RFC) is a notable high-speed packet classification scheme; however, it may incur high memory consumption when generating its pre-computed cross-product tables. In this paper, we propose a new scheme that reduces memory consumption by partitioning a rule database into several subsets. The rules of each subset are stored in an independent RFC data structure, significantly reducing overall memory consumption. We also present several refinements of these RFC data structures that significantly improve search speed. The experimental results show that our scheme dramatically improves the storage performance of RFC.
Title: Packet Classification Using Multi-iteration RFC
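The memory argument in the abstract above can be made concrete with a back-of-the-envelope sketch: a cross-product table's size is driven by the product of the distinct chunk values each field contributes, so splitting the rule set bounds each product separately. The numbers below are illustrative, not measurements from the paper.

```python
# Hedged sketch of why partitioning helps RFC-style cross-producting:
# table entries ~ product of distinct equivalence classes per field.
from math import prod

def crossproduct_entries(distinct_values_per_field):
    return prod(distinct_values_per_field)

whole = crossproduct_entries([40, 40, 8])          # one monolithic RFC structure
split = (crossproduct_entries([20, 20, 5])         # the same rules partitioned
         + crossproduct_entries([20, 20, 5]))      # into two smaller subsets

# The subsets are searched independently, so memory falls sharply while each
# lookup remains constant-time table indexing per structure.
```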
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.24
V. Chimisliu, F. Wotawa
In model-based testing, the size of the model has a great impact on the time needed to compute test cases. In model checking, dependence relations have been used to slice specifications into reduced models pertinent to the criteria of interest. For specifications described in state-based formalisms, slicing involves removing transitions and merging states, yielding a structurally modified specification. Using such a specification for model-based test case generation, where sequences of transitions represent test cases, might produce traces that are not valid on a correctly behaving implementation. To avoid this problem, we propose using control, data, and communication dependences to identify the parts of the model that can be excluded, so that the remaining specification can be safely employed for test case generation. This information is included in test purposes, which are then used in the test case generation process. We also present first empirical results obtained with several models from industry and the literature.
Title: Using Dependency Relations to Improve Test Case Generation from UML Statecharts
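The core of the dependence-based reduction described above is a reachability computation: keep exactly the transitions the test purpose's target depends on, transitively. The graph encoding below is an assumption for illustration, not the authors' tool.

```python
# Hedged sketch: given control/data/communication dependences between
# transitions, compute the transitions relevant to a target; everything
# else can be excluded from test case generation.
from collections import deque

def relevant_transitions(depends_on, target):
    """depends_on: dict transition -> set of transitions it depends on."""
    keep, frontier = {target}, deque([target])
    while frontier:  # breadth-first closure over the dependence edges
        t = frontier.popleft()
        for dep in depends_on.get(t, ()):
            if dep not in keep:
                keep.add(dep)
                frontier.append(dep)
    return keep

deps = {"t4": {"t2", "t3"}, "t3": {"t1"}, "t2": {"t1"}, "t5": set()}
slice_for_t4 = relevant_transitions(deps, "t4")
```

Transition `t5` has no influence on `t4`, so it drops out of the slice, which is precisely the kind of reduction that keeps generated traces valid.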
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.63
P. Wrzeciono, W. Karwowski
This paper presents an automatic indexing system, created on the basis of text analysis, which involves grouping words and reducing them to their dictionary form. The system, developed with the help of an inflection dictionary of the Polish language, is designed to store and retrieve scientific papers on agriculture. During the analysis, auxiliary words such as pronouns and conjunctions were omitted. Words not present in the inflection dictionary were used to create a dictionary of new terms, which in turn was used to extract agricultural terms that could then be located in the AGROVOC thesaurus. For each analyzed paper, a set of concepts with assigned weights was created, and an "artificial sentence" was generated on the basis of the frequency of the dictionary forms of the words appearing in the text and their grammatical categories. These "artificial sentences" and term sets were used to find relationships between the papers stored in the system, and these dependencies drive an algorithm for finding articles that match a query. We observed that the number of correct results depends on the number of words in a paper: if a paper contained at least a thousand words, the probability of misclassifying its content was no higher than 5%; for short texts, such as abstracts, it was much higher, approximately 23%. Results obtained with the presented system are more accurate than those obtained by standard search engines. The method can also be applied to other natural languages with extensive inflection systems. The presented solution is a continuation of work carried out under grant [N N310 038538].
Title: Automatic Indexing and Creating Semantic Networks for Agricultural Science Papers in the Polish Language
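The pipeline described above (lemmatize, drop auxiliary words, weight terms by frequency, relate papers by shared weighted terms) can be sketched in a few lines. The tiny English lemma table stands in for the Polish inflection dictionary, and the similarity measure is an assumed stand-in, not the paper's formula.

```python
# Illustrative sketch of frequency-weighted concept sets for papers.
from collections import Counter

LEMMAS = {"soils": "soil", "soil": "soil", "crops": "crop", "crop": "crop",
          "yields": "yield", "yield": "yield"}   # toy inflection dictionary
AUXILIARY = {"the", "and", "of", "in"}           # words omitted from indexing

def term_weights(text):
    words = [w for w in text.lower().split() if w not in AUXILIARY]
    return Counter(LEMMAS.get(w, w) for w in words)  # reduce to dictionary form

def similarity(a, b):
    """Shared-term weight relative to the smaller paper (0..1)."""
    shared = sum((a & b).values())
    return shared / min(sum(a.values()), sum(b.values()))

p1 = term_weights("yields of crops in the soil")
p2 = term_weights("soil and crop yield")
```

Because both toy texts reduce to the same three lemmas, they come out maximally related despite sharing no surface forms, which is the benefit of indexing dictionary forms in a heavily inflected language.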
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.114
E. Tramontana
Developing large systems that exhibit a high degree of modularity can be a difficult task even for experienced developers. Poor modularity has several harmful effects, such as decreased readability, higher complexity, and difficulty in reusing and evolving components. This paper assists developers in achieving modular components by automatically characterising the concerns within components according to the APIs they are based on. This makes it possible to determine the degree of tangling and scattering of concerns over methods and classes. Moreover, the proposed approach gives developers suggestions on how to reduce the tangling of some components, based on a metric and refactoring techniques. For systems comprising thousands of classes this is valuable support, since unassisted developers could miss appropriate refactoring opportunities due to the large number of details they would have to take into account.
Title: Automatically Characterising Components with Concerns and Reducing Tangling
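The characterisation step described above maps each method to concerns via the APIs it calls, after which tangling and scattering fall out as set computations. The API-to-concern table and the example methods below are illustrative assumptions.

```python
# Hedged sketch: derive concerns from API usage, then flag tangling
# (a method mixing concerns) and scattering (a concern spread over classes).

API_CONCERN = {"java.sql": "persistence", "java.util.logging": "logging",
               "javax.swing": "gui"}

methods = {  # (class, method) -> APIs its body uses
    ("OrderDao", "save"): {"java.sql", "java.util.logging"},
    ("OrderDao", "load"): {"java.sql"},
    ("OrderView", "render"): {"javax.swing", "java.util.logging"},
}

def concerns_of(apis):
    return {API_CONCERN[a] for a in apis if a in API_CONCERN}

def tangling(methods):
    """Methods whose body mixes more than one concern."""
    return {m for m, apis in methods.items() if len(concerns_of(apis)) > 1}

def scattering(methods, concern):
    """Distinct classes in which a concern appears."""
    return {cls for (cls, _), apis in methods.items()
            if concern in concerns_of(apis)}

tangled = tangling(methods)
logging_spread = scattering(methods, "logging")
```

Here logging is both tangled (mixed into two methods with other concerns) and scattered (present in both classes), the classic signature of a cross-cutting concern worth refactoring.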
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.54
Shunsuke Aoki, M. Iwai, K. Sezaki
Community sensing is an emerging paradigm that allows the growing number of mobile phone users to effectively share fine-grained statistical information they collect themselves. It relies on participants' active contribution, including data intentionally entered through mobile phone applications such as Facebook, Twitter, and LinkedIn. However, privacy concerns may hinder the spread of community sensing applications. Resource-constrained mobile phones cannot easily rely on complicated encryption schemes, so a privacy-preserving community sensing scheme with low computational complexity is needed. Moreover, an environment that participants find reassuring is strongly required, because the quality of the statistical data depends on general users' active contribution. In this article, we propose a privacy-preserving community sensing scheme for human-centric data, such as profile information, that combines negative surveys with randomized response techniques. With our method, the server can reconstruct the probability distributions of the original sensed values without violating users' privacy; in particular, sensitive information is protected from malicious tracking attacks. We evaluated how well the scheme preserves privacy while keeping the integrity of the aggregated information.
Title: Privacy-Aware Community Sensing Using Randomized Response
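The randomized response idea the paper builds on is easy to demonstrate: each user reports the true value with probability p and a uniformly random one otherwise, and the server inverts the known perturbation to recover the population distribution. The parameters and data below are illustrative, not the paper's scheme or evaluation.

```python
# Hedged sketch of classic randomized response with a uniform decoy answer.
import random

def perturb(value, choices, p, rng):
    """Report the truth with probability p, else a uniformly random choice."""
    return value if rng.random() < p else rng.choice(choices)

def estimate(reports, choices, p):
    """Unbiased estimate of each choice's true proportion.
    P(report = c) = p * pi_c + (1 - p) / k, solved for pi_c."""
    n, k = len(reports), len(choices)
    est = {}
    for c in choices:
        observed = sum(r == c for r in reports) / n
        est[c] = (observed - (1 - p) / k) / p
    return est

rng = random.Random(7)
truth = ["yes"] * 700 + ["no"] * 300          # true split: 70% / 30%
reports = [perturb(v, ["yes", "no"], p=0.6, rng=rng) for v in truth]
est = estimate(reports, ["yes", "no"], p=0.6)
```

No individual report is trustworthy (any single "yes" may be a coin flip), yet the aggregate estimate lands close to the true 70/30 split, which is exactly the privacy/integrity trade-off the abstract evaluates.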
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.45
Tung-Hsiang Chou
This research uses IT-enabled services to create an intelligent robot (iRobot) for entertainment from a service perspective, and implements a real kart racing scenario using LEGO Mindstorms NXT. First, the research analyzes, from a service perspective, what kinds of services will be needed, and then uses a service blueprint to illustrate the services of the intelligent robot. The iRobot hardware is LEGO's Mindstorms NXT, a programmable robotics kit; to apply these service theories in a real environment, the research implements iRobots in a kart racing example. Second, the iRobot combines several technologies and multi-agent systems (MAS), such as remote control techniques, a Bluetooth agent, an intelligent environmental detection agent, and other agents. The example aims to add fun and is also used to validate the service-oriented design.
Title: The Service Design of Intelligent Robot (iRobot) for Entertainment
Pub Date: 2013-07-22 | DOI: 10.1109/COMPSACW.2013.34
Akihito Nakamura
Continuous and comprehensive vulnerability management is a difficult task for administrators. The difficulty is not a lack of tools, but that the tools are designed without a service-oriented architecture viewpoint and there is insufficient trustworthy machine-readable input data. This paper presents a service-oriented architecture for vulnerability assessment systems based on open security standards and related content. If the functions are provided as services, various kinds of security applications can interoperate and be integrated in a loosely coupled way. We also studied the effectiveness of the available public data for automated vulnerability assessment. Despite the large amount of effort that goes into describing machine-readable assessment tests conforming to the OVAL standard, our evaluation shows that the data are inadequate for comprehensive vulnerability assessment: only about 12% of all known vulnerabilities are covered by existing OVAL tests, although some popular client applications among the Top 30 with the most unique vulnerabilities have coverage above 90%.
Title: Towards Unified Vulnerability Assessment with Open Data
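The coverage figure cited above is a set intersection over identifiers. The sketch below shows the measurement on toy data (the CVE lists are invented to mirror the 12% headline number, not the paper's dataset).

```python
# Hedged sketch: coverage = |CVEs with OVAL tests ∩ known CVEs| / |known CVEs|.

known_cves = {f"CVE-2013-{n:04d}" for n in range(1, 101)}   # 100 known issues
oval_covered = {f"CVE-2013-{n:04d}" for n in range(1, 13)}  # 12 have OVAL tests

coverage = len(known_cves & oval_covered) / len(known_cves)
```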