R. Laue, F. Hogrebe, Boris Böttcher, Markus Nüttgens
The visual syntax of modelling languages can support (or impede) the intuitive understandability of a model. We observed the process of problem solving with two notation variants of i* diagrams by means of an eye-tracking device. The number of wrongly answered questions was significantly lower when the alternative i* notation suggested by Moody et al. was used. For the eye-tracking metrics “time to solve a task” and “number of eye fixations”, no such significant difference was found. Furthermore, we identified a deficiency in the “dependency” symbol of the alternative notation.
"Efficient visual notations for efficient stakeholder communication." 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912281
Stefan Gärtner, Thomas Ruhroth, J. Bürger, K. Schneider, J. Jürjens
Security is an increasingly important quality facet in modern information systems and needs to be retained. Due to a constantly changing environment, long-living software systems “age” not by wearing out, but by failing to keep up-to-date with their environment. The problem is that requirements engineers usually do not have a complete overview of the security-related knowledge necessary to retain security of long-living software systems. This includes security standards, principles and guidelines as well as reported security incidents. In this paper, we focus on the identification of known vulnerabilities (and their variations) in natural-language requirements by leveraging security knowledge. For this purpose, we present an integrative security knowledge model and a heuristic method to detect vulnerabilities in requirements based on reported security incidents. To support knowledge evolution, we further propose a method based on natural language analysis to refine and to adapt security knowledge. Our evaluation indicates that the proposed assessment approach detects vulnerable requirements more reliably than other methods (Bayes, SVM, k-NN). Thus, requirements engineers can react faster and more effectively to a changing environment that has an impact on the desired security level of the information system.
"Maintaining requirements for long-living software systems by incorporating security knowledge." 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912252
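A minimal sketch of the kind of incident-based heuristic the abstract describes, assuming a hypothetical hand-made table of indicator terms; the categories and terms below are illustrative only, not the paper's security knowledge model:

```python
# Hypothetical indicator terms distilled from reported security incidents;
# categories and terms are invented for illustration.
INDICATORS = {
    "injection": ["sql query", "user input is concatenated"],
    "auth-bypass": ["without authentication", "anonymous access"],
    "plaintext-secret": ["password", "clear text", "unencrypted"],
}

def flag_requirement(text):
    """Return the incident categories whose indicator terms occur in a
    natural-language requirement (case-insensitive substring match)."""
    lowered = text.lower()
    return {cat for cat, terms in INDICATORS.items()
            if any(term in lowered for term in terms)}

req = "The system shall store user passwords in clear text for auditing."
print(flag_requirement(req))  # {'plaintext-secret'}
```

A real knowledge model would also capture variations of known vulnerabilities rather than relying on literal substrings.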
Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from the healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.
"Hidden in plain sight: Automatically identifying security requirements from natural language artifacts." M. Riaz, J. King, John Slankas, L. Williams. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912260
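Precision and recall figures of the kind reported above can be computed from per-sentence sets of predicted and gold security-objective labels; a small sketch with invented labels:

```python
def precision_recall(predicted, gold):
    """Micro-averaged precision/recall over per-sentence sets of
    security-objective labels."""
    tp = sum(len(p & g) for p, g in zip(predicted, gold))  # correct labels
    fp = sum(len(p - g) for p, g in zip(predicted, gold))  # spurious labels
    fn = sum(len(g - p) for p, g in zip(predicted, gold))  # missed labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Three sentences with illustrative objective labels.
predicted = [{"confidentiality"}, {"integrity", "availability"}, set()]
gold = [{"confidentiality"}, {"integrity"}, {"availability"}]
print(precision_recall(predicted, gold))  # ≈ (0.67, 0.67)
```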
Traceability among requirements artifacts (and beyond, in certain cases all the way to actual implementation) has long been identified as a critical challenge in industrial practice. Manually establishing and maintaining such traces is a high-skill, labour-intensive job. It is often the case that the ideal person for the job also has other, highly critical tasks to take care of, so offering semi-automated support for the management of traces is an effective way of improving the efficiency of the whole development process. In this paper, we present a technique to exploit the information contained in previously defined traces, in order to facilitate the creation and ongoing maintenance of traces, as the requirements evolve. A case study on a reference dataset is employed to measure the effectiveness of the technique, compared to other proposals from the literature.
"Supporting traceability through affinity mining." V. Gervasi, D. Zowghi. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912256
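One simple way to exploit previously defined traces is to rank the targets of existing traces by the textual affinity between their source requirement and a new requirement. The sketch below (cosine similarity over term-frequency vectors) is an assumption about how such ranking could work, not the paper's actual algorithm:

```python
from collections import Counter
from math import sqrt

def term_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_trace_candidates(new_req, traced_pairs):
    """Rank trace targets of previously traced requirements by the
    textual affinity of their source to the new requirement."""
    v = term_vector(new_req)
    scored = [(cosine(v, term_vector(src)), tgt) for src, tgt in traced_pairs]
    return sorted(scored, reverse=True)

traced = [("the user logs in with a password", "auth-module"),
          ("reports are exported as pdf", "report-module")]
print(rank_trace_candidates("the admin logs in with a password", traced))
```

The top-ranked targets can then be offered to the analyst as semi-automated trace suggestions rather than being accepted blindly.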
Software systems can be seen as a hierarchy of features that evolve due to the dynamics of their working environments. Companies that build software thus need an appropriate strategy, one that takes such dynamics into consideration, to select the features to be implemented. In this work, we propose an approach to facilitate such selection by providing a means to capture the uncertainty of evolution in feature models. We also provide two analyses to support decision makers. The approach is exemplified in a Smart Grid scenario.
"An Approach for Decision Support on the Uncertainty in Feature Model Evolution." L. M. Tran, F. Massacci. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912251
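One hedged illustration of decision support under evolution uncertainty is an expected-payoff calculation over possible evolution scenarios of a feature; the feature name, probabilities and payoffs below are invented for illustration and are not taken from the paper:

```python
def expected_payoff(scenarios):
    """scenarios: (probability, payoff) pairs for the evolution
    possibilities of one feature; probabilities should sum to 1."""
    return sum(p * v for p, v in scenarios)

# Invented numbers: the feature stays relevant (0.6), is reduced in
# scope (0.3), or is dropped from the environment entirely (0.1).
smart_metering = [(0.6, 10.0), (0.3, 2.0), (0.1, -5.0)]
print(expected_payoff(smart_metering))  # ≈ 6.1
```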
User experience (UX) is difficult to quantify and thus more challenging to require and guarantee. It is also difficult to gauge the potential impact on users' lived experience, especially at the earlier stages of the development life cycle, particularly before high-fidelity prototypes are developed. We believe that the enrolment process is a major hurdle for e-government service adoption and badly designed processes might result in negative repercussions for both the policy maker and the different user groups involved; non-adoption and resentment are two risks that may result in low return on investment (ROI), lost political goodwill and ultimately a negative lived experience for citizens. Identity assurance requirements need to balance out the real value of the assets being secured (risk) with the user groups' acceptance thresholds (based on a continuous cost-benefit exercise factoring in cognitive and physical workload). Sentire is a persona-centric requirements framework built on and extending the Volere requirements process with UX-analytics, reusable user behavioural models and simulated user feedback through calibrated personas. In this paper we present a story on how Sentire was adopted in the development of a national public-facing e-service. Daily journaling was used throughout the project and a custom-built cloud-based CASE tool was used to manage the whole process. This paper outlines our experiences and lessons learnt.
"Building a National E-Service using Sentire experience report on the use of Sentire: A volere-based requirements framework driven by calibrated personas and simulated user feedback." C. Porter, Emmanuel Letier, M. Sasse. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912288
Natural language text sources have increasingly been used to develop new methods and tools for extracting and analyzing requirements. To validate these new approaches, researchers rely on a small number of trained experts to perform a labor-intensive manual analysis of the text. The time and resources needed to conduct manual extraction, however, have limited the size of case studies and thus the generalizability of results. To begin to address this issue, we conducted three experiments to evaluate crowdsourcing a manual requirements extraction task to a larger number of untrained workers. In these experiments, we carefully balance worker payment and overall cost, as well as worker training and data quality, to study the feasibility of distributing requirements extraction to the crowd. The task consists of extracting descriptions of data collection, sharing and usage requirements from privacy policies. We present results from two pilot studies and a third experiment to justify applying a task decomposition approach to requirements extraction. Our contributions include the task decomposition workflow and three metrics for measuring worker performance. The final evaluation shows a 60% reduction in the cost of manual extraction with a 16% increase in extraction coverage.
"Scaling requirements extraction to the crowd: Experiments with privacy policies." T. Breaux, F. Schaub. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912258
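When an extraction task is decomposed across untrained workers, a basic way to trade data quality against cost is to keep only the spans that enough workers agree on. The sketch below is a minimal consensus filter; the policy spans and agreement threshold are invented, not the paper's workflow:

```python
from collections import Counter

def aggregate(worker_extractions, min_agreement=2):
    """Keep an extracted span when at least `min_agreement` workers
    marked it -- a simple consensus filter over crowd answers."""
    votes = Counter(span for spans in worker_extractions for span in spans)
    return {span for span, n in votes.items() if n >= min_agreement}

# Invented privacy-policy spans marked by three untrained workers.
w1 = {"collects email address", "shares data with advertisers"}
w2 = {"collects email address"}
w3 = {"collects email address", "shares data with advertisers", "uses cookies"}
print(sorted(aggregate([w1, w2, w3])))
# ['collects email address', 'shares data with advertisers']
```

Raising `min_agreement` improves precision at the cost of recruiting more workers per task, which is exactly the payment/quality balance the study examines.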
The objective of my research is to improve and support the process of Information security Risk Assessment by designing a scalable Risk argumentation framework for socio-digital-technical Risk. Due to the various types of IT systems, diversity of architectures and dynamic nature of Risk, there is no one-size-fits-all RA method. As such, the research hopes to identify guidelines for conducting Risk Assessments in contexts that raise special challenges, such as Telecom and virtualized infrastructures. Finally, it will suggest ways of qualitatively and quantitatively evaluating Information Security Risks in such scenarios by using argumentation and/or modelling attacker business cases.
"Context-sensitive Information security Risk identification and evaluation techniques." D. Ionita. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912303
S. Ghanavati, André Rifaut, E. Dubois, Daniel Amyot
Most systems and business processes in organizations need to comply with more than one law or regulation. Different regulations can partially overlap (e.g., one can be more detailed than the other) or even conflict with each other. In addition, one regulation can permit an action whereas the same action in another regulation might be mandatory or forbidden. In each of these cases, an organization needs to adopt a different strategy. This paper presents an approach to handle different situations when comparing and attempting to comply with multiple regulations as part of a goal-oriented modeling framework named LEGAL-URN. This framework helps organizations find suitable trade-offs and priorities when complying with multiple regulations while at the same time trying to meet their own business objectives. The approach is illustrated with a case study involving a Canadian health care organization that must comply with four laws related to privacy, quality of care, freedom of information, and care consent.
"Goal-oriented compliance with multiple regulations." 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912249
App stores allow users to submit feedback for downloaded apps in the form of star ratings and text reviews. Recent studies analyzed this feedback and found that it includes information useful for app developers, such as user requirements, ideas for improvements, user sentiments about specific features, and descriptions of experiences with these features. However, for many apps, the amount of reviews is too large to be processed manually and their quality varies widely. The star ratings are given to the whole app, and developers do not have a means to analyze the feedback for individual features. In this paper we propose an automated approach that helps developers filter, aggregate, and analyze user reviews. We use natural language processing techniques to identify fine-grained app features in the reviews. We then extract the user sentiments about the identified features and give them a general score across all reviews. Finally, we use topic modeling techniques to group fine-grained features into more meaningful high-level features. We evaluated our approach with 7 apps from the Apple App Store and Google Play Store and compared its results with a manual, peer-conducted analysis of the reviews. On average, our approach has a precision of 0.59 and a recall of 0.51. The extracted features were coherent and relevant to requirements evolution tasks. Our approach can help app developers to systematically analyze user opinions about single features and filter irrelevant reviews.
"How Do Users Like This Feature? A Fine Grained Sentiment Analysis of App Reviews." Emitzá Guzmán, W. Maalej. 2014 IEEE 22nd International Requirements Engineering Conference (RE), 2014-09-29. https://doi.org/10.1109/RE.2014.6912257
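Feature-level sentiment of the kind described above can be approximated with a tiny sentiment lexicon over sentences that mention a feature; this sketch is an assumption-laden stand-in for the paper's NLP, collocation and topic-modelling pipeline, with an invented lexicon and reviews:

```python
# Tiny hand-made sentiment lexicon; a real pipeline would use a full
# lexicon plus collocation finding and topic modelling.
LEXICON = {"great": 1, "love": 1, "broken": -1, "crashes": -1}

def feature_sentiment(reviews, feature):
    """Average lexicon score of review sentences mentioning the feature."""
    scores = []
    for review in reviews:
        for sentence in review.lower().split("."):
            if feature in sentence:
                scores.append(sum(LEXICON.get(w, 0) for w in sentence.split()))
    return sum(scores) / len(scores) if scores else 0.0

reviews = ["Love the photo upload. The app crashes on login.",
           "Photo upload is great."]
print(feature_sentiment(reviews, "photo upload"))  # 1.0
```

Note how the crash complaint does not drag down the "photo upload" score, which is precisely the per-feature view that whole-app star ratings cannot give.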