Decision Network: a New Network-Based Classifier
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00073
Yong Yu, Ming Jing, Jie Li, Na Zhao, Jinzhuo Liu
In recent years, the combination of machine learning and complex networks has been gaining more and more attention. Network-based machine learning methods, which transform vector-based instances into a network, have shown considerable potential, and some researchers believe that a network can reveal more information than a vector-based dataset. In this paper, we propose a network-based classifier named decision network (DN). DN abstracts the correspondence between attribute values and class labels into a weighted bipartite network, where the weight of the edge between an attribute-value node and a label node represents the tendency to assign an instance with that attribute value to the corresponding class. Compared with existing classifiers, DN is more comprehensible and easier to implement. We evaluated the performance of DN on 7 real-world datasets using 10-fold cross-validation; it performs better than 9 other methods.
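The bipartite-weight idea can be sketched as follows; the co-occurrence-count weighting, the assumption of pre-discretised attributes, and the score-summing prediction rule are illustrative assumptions, not the authors' exact formulation:

```python
from collections import defaultdict

class DecisionNetworkSketch:
    """Toy weighted bipartite network between (attribute, value) nodes and class-label nodes."""

    def __init__(self):
        # weight[((attr_index, value), label)] ~ tendency of that attribute value toward that label
        self.weight = defaultdict(float)
        self.labels = set()

    def fit(self, X, y):
        # X: list of discrete attribute vectors, y: list of class labels
        for row, label in zip(X, y):
            self.labels.add(label)
            for attr, value in enumerate(row):
                self.weight[((attr, value), label)] += 1.0  # simple co-occurrence count

    def predict(self, row):
        # sum the edge weights from the instance's attribute-value nodes to each label node
        scores = {label: sum(self.weight[((a, v), label)] for a, v in enumerate(row))
                  for label in self.labels}
        return max(scores, key=scores.get)

# usage on a tiny discrete dataset
clf = DecisionNetworkSketch()
clf.fit([[0, 1], [0, 0], [1, 1]], ["yes", "yes", "no"])
print(clf.predict([0, 1]))  # -> "yes"
```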
{"title":"Decision Network: a New Network-Based Classifier","authors":"Yong Yu, Ming Jing, Jie Li, Na Zhao, Jinzhuo Liu","doi":"10.1109/QRS-C51114.2020.00073","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00073","url":null,"abstract":"In recent year, the combination of machine learning and complex networks is gaining more and more attention. Some network-based machine learning methods which transform the vector-based instances into a network has shown a lot of potential. Some researchers believe that the network can show more information than vector-based datasets. In this paper, we proposed a network-based classifier named decision network(DN). DN abstracts the corresponding relationships between attribute values and class labels into a weighted bipartite network. The weight of the edge between an attribute value node and a label node represents the tendency to assign the instance with this attribute value to the corresponding class. Compared with the existing classifier, DN is more comprehensible and easier to implement. We evaluated the performance of DN on 7 real-world datasets by using 10-fold cross validation. It performs better than 9 other methods.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115090142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Performance Benchmarking Methodology for MQTT Broker Implementations
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00090
Ilie-Daniel Gheorghe-Pop, Alexander Kaiser, A. Rennoch, Sascha Hackel
The growth of IoT across the globe has been rapid over the past decade. As the number of connected devices increases by billions year over year, the capacity and operating costs of IoT networks and the associated communications software become crucial. Manufacturers, software developers, integrators, telco operators, and business end users face an increasing need for a benchmarking reference that covers the performance aspects of IoT transport protocols. This paper introduces a performance benchmarking methodology, together with examples of performance test definitions for the MQTT protocol. The implementation work was done within the open source Eclipse IoT Testware project, which is part of the Eclipse Foundation. The test suites were specified in TDL-TO and realized in TTCN-3 using the open source IDE Eclipse Titan. The test specifications are covered by the standardization activities of the ETSI working group MTS TST.
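As a rough illustration of the kind of measurement such a methodology defines, the sketch below times publish-to-receive round trips against a broker using the paho-mqtt 1.x client API; the broker address, topic, QoS level, and message count are assumptions, and the paper's actual TDL-TO/TTCN-3 test suites are considerably more elaborate:

```python
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC, N = "localhost", "bench/latency", 100  # assumed test parameters
latencies = []

def on_message(client, userdata, msg):
    sent = float(msg.payload.decode())          # timestamp embedded in the payload
    latencies.append(time.time() - sent)

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC, qos=1)
client.loop_start()

for _ in range(N):
    client.publish(TOPIC, str(time.time()), qos=1)  # publish-to-receive round trip
    time.sleep(0.01)

time.sleep(1.0)                                     # allow remaining messages to arrive
client.loop_stop()
if latencies:
    print(f"mean round-trip latency: {1000 * sum(latencies) / len(latencies):.2f} ms")
```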
{"title":"A Performance Benchmarking Methodology for MQTT Broker Implementations","authors":"Ilie-Daniel Gheorghe-Pop, Alexander Kaiser, A. Rennoch, Sascha Hackel","doi":"10.1109/QRS-C51114.2020.00090","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00090","url":null,"abstract":"The rapid growth of IoT across the globe has been significant over the past decade. As the number of connected devices increases by the order of billions year over year, the capacity and operating costs of IoT networks and associated communications software becomes crucial. The manufacturers, software developers, integrators, telco operators as well as business-end users face an increasing need of a benchmarking reference that covers performance aspects of IoT transport protocols. This paper introduces a performance benchmarking methodology as well as examples for the definition of performance tests for the MQTT protocol. The implementation work was done within the open source project IoT Testware project which is part of the Eclipse Foundation. The test suites were specified in TDL-TO and realized in TTCN-3 using the open source IDE Eclipse Titan. The test specifications are covered by the standardization activities of the ETSI working group MTS TST.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115330002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QMine: A Framework for Mining Quantitative Regular Expressions from System Traces
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00070
P. Mahato, Apurva Narayan
The dynamic behavior of real-time systems and the ability to distinguish between normal and abnormal behavior are critical in safety-critical systems. Temporal patterns define the order of occurrence of events, and temporal properties help draw insights into system specifications. However, given the complexity of modern software in cyber-physical systems, specifications are often either unspecified or only loosely specified. We propose a framework for automating the task of mining temporal specifications from system traces containing both events and quantitative values. Our framework, QMine, is an online property mining framework that extracts properties specified in the form of Quantitative Regular Expression (QRE) templates. QMine is shown to be sound and complete. Moreover, we evaluate our framework on real-world, industry-standard traces such as the Arrhythmia dataset.
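To give a flavour of what a quantitative template over a trace of events and values looks like, the sketch below infers the tightest bound B for a template of the form "after <start>, the quantitative field stays <= B until <end>"; this is an illustrative toy, not QMine's mining algorithm or its QRE template language:

```python
def mine_bound(traces, start, end):
    """Infer the smallest B such that every trace satisfies:
    between <start> and <end>, all observed values are <= B."""
    bound = float("-inf")
    for trace in traces:                     # trace: list of (event_name, value) pairs
        active = False
        for event, value in trace:
            if event == start:
                active = True
            elif event == end:
                active = False
            elif active:
                bound = max(bound, value)    # tighten the bound to cover this observation
    return bound

traces = [
    [("req", 0), ("load", 3.2), ("load", 4.1), ("resp", 0)],
    [("req", 0), ("load", 2.7), ("resp", 0)],
]
print(mine_bound(traces, "req", "resp"))     # -> 4.1
```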
{"title":"QMine: A Framework for Mining Quantitative Regular Expressions from System Traces","authors":"P. Mahato, Apurva Narayan","doi":"10.1109/QRS-C51114.2020.00070","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00070","url":null,"abstract":"Dynamic behavior of real-time systems and the ability to distinguish between normal and abnormal behavior is critical in safety-critical systems. Temporal patterns define the order of occurrence of events. Temporal properties help draw insights over system specifications. However, given the complexity of modern-day software in cyber-physical systems, the specifications are either not specified or loosely specified. We propose a framework for automating the task of mining temporal specifications from system traces with both events and quantitative values. Our framework, QMine, is an online property mining framework that extracts properties specified in the form of Quantitative Regular Expression (QRE) templates. QMine is shown to be sound and complete. Moreover, we evaluate our framework using real-world industry-standard traces such as Arrhythmia dataset.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127644961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Improved Fault Localization Method to Stereo Matching Software
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00071
Jinfeng Li, Yan Zhang, Jilong Bian, Tiejun Li, Baoying Ma
If we execute a test case and observe a failure in the program, we need to determine where the faults are located, i.e., perform fault localization. Fault localization is a costly and time-consuming process. In this paper, an improved spectrum-based fault localization method, IOchiai, is proposed. From the execution of passed and failed test cases, we calculate the suspiciousness score of each software element, which is the probability that the element contains faults. Because passed and failed test cases contribute differently to the calculation of suspiciousness scores, we divide them into three groups according to their contribution degrees. IOchiai gives higher suspiciousness scores to the elements containing faults and thus locates faults faster. Finally, both the proposed method and traditional spectrum-based fault localization are applied to stereo matching software, and the results show that the proposed method has stronger fault localization capability.
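For reference, the classical Ochiai ranking that IOchiai builds on can be computed from the coverage spectrum as in the sketch below; the three-group weighting of test cases introduced by IOchiai is not reproduced here, since its exact contribution degrees are not given in this abstract:

```python
import math

def ochiai_suspiciousness(coverage, outcomes):
    """coverage: {test_id: set of covered elements}; outcomes: {test_id: 'pass' or 'fail'}."""
    total_failed = sum(1 for o in outcomes.values() if o == "fail")
    elements = set().union(*coverage.values())
    scores = {}
    for e in elements:
        ef = sum(1 for t, cov in coverage.items() if e in cov and outcomes[t] == "fail")
        ep = sum(1 for t, cov in coverage.items() if e in cov and outcomes[t] == "pass")
        denom = math.sqrt(total_failed * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

coverage = {"t1": {"s1", "s2"}, "t2": {"s2", "s3"}, "t3": {"s2", "s3"}}
outcomes = {"t1": "fail", "t2": "pass", "t3": "pass"}
# s1 (covered only by the failing test) ranks highest
print(ochiai_suspiciousness(coverage, outcomes))
```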
{"title":"Application of Improved Fault Localization Method to Stereo Matching Software","authors":"Jinfeng Li, Yan Zhang, Jilong Bian, Tiejun Li, Baoying Ma","doi":"10.1109/QRS-C51114.2020.00071","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00071","url":null,"abstract":"If we execute a test case and find a failure in the program, we need to locate the location of the faults, i.e., fault localization. Fault localization is a very costly and time-consuming process. In this paper, an improved spectrum-based fault localization method IOchiai is proposed. According to the execution of passed and failed test cases, we can calculate the suspiciousness score of software element which is the probability of the element contains faults. The passed and failed test cases have different contributions to the calculation of the suspiciousness scores, we divide them into three groups according to different contribution degrees. IOchiai gives higher suspiciousness scores to the element with faults and locates faults faster. Finally, the method proposed in this paper and the traditional spectrum-based fault localization are applied to stereo matching software and found that the method proposed in this paper has stronger fault localization capability.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120954782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metamorphic Relations Identification on Chebyshev Rational Approximation Method in the Nuclide Depletion Calculation Program
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00013
Meng Li, Lijun Wang, Shiyu Yan, Xiaohua Yang, Jie Liu, Yaping Wan
The Chebyshev rational approximation method (CRAM) is an essential numerical solution algorithm for the burnup equation. Because of the high complexity of nuclide depletion calculation, especially the presence of short-lived nuclides and closed cycles in the transition chains, the outputs of the program are almost impossible to predict accurately. Therefore, traditional testing methods are inapplicable, or even invalid. Metamorphic testing (MT) is a promising method for solving such a typical test oracle problem; however, the absence of metamorphic relations (MRs) severely hinders its application. Following the nuclear software development process, we established a nuclear MR hierarchical model (MRHM) for guiding MR identification and classification. MRHM divides MRs into three layers: physics, algorithm, and code. After in-depth analysis, we derived a group of MRs from the burnup equation and CRAM and classified them according to MRHM. We applied these MRs in MT of the Nuclide Inventory Tool (NUIT), a program that implements CRAM. These MRs represent natural properties of CRAM and are also applicable to other CRAM programs. Moreover, the MRHM can be extended to more nuclear science software.
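As one concrete example of a physics-layer MR, the burnup equation is linear in the initial nuclide concentrations, so scaling the initial inventory should scale the output inventory by the same factor; the sketch below checks this relation with scipy's matrix exponential standing in for an actual CRAM implementation (whether this specific MR is among the paper's MRs is an assumption):

```python
import numpy as np
from scipy.linalg import expm

def deplete(A, n0, t):
    # stand-in depletion solver: exact matrix exponential instead of a CRAM implementation
    return expm(A * t) @ n0

# toy 2-nuclide decay chain: nuclide 0 decays into nuclide 1
A = np.array([[-0.1, 0.0],
              [ 0.1, 0.0]])
n0 = np.array([1.0, 0.0])
t, k = 5.0, 3.0

source = deplete(A, n0, t)
follow = deplete(A, k * n0, t)            # follow-up test case with scaled initial inventory
assert np.allclose(follow, k * source)    # MR: outputs scale linearly with the initial inventory
print("metamorphic relation holds:", follow, "=", k, "*", source)
```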
{"title":"Metamorphic Relations Identification on Chebyshev Rational Approximation Method in the Nuclide Depletion Calculation Program","authors":"Meng Li, Lijun Wang, Shiyu Yan, Xiaohua Yang, Jie Liu, Yaping Wan","doi":"10.1109/QRS-C51114.2020.00013","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00013","url":null,"abstract":"The Chebyshev rational approximation method (CRAM) is an essential numerical solution algorithm for the burnup equation. Since the high complexity of nuclide depletion calculation, especially the existence of short-lived nuclide and closed cycle in the transition chains, outputs of the program are almost impossible to predict accurately. Therefore, the traditional testing methods are inapplicable, even invalid. Metamorphic testing (MT) is a promising method to solve such a typical testing oracle problem. However, the absence of metamorphic relations (MRs) severely hinders its application. According to the nuclear software development process, we established a nuclear MR hierarchical model (MRHM) for guiding MR identification and classification. MRHM divides MRs into three layers: physics, algorithm, and code. After in-depth analysis, we carried out a group of MRs from the burnup equation and CRAM and classified them according to MRHM. We adopted these MRs in MT of the Nuclide Inventory Tool (NUIT), which is a program that has implemented CRAM. These MRs represent the natural properties of CRAM, and other CRAM programs indeed used them. Moreover, the MRHM will extend to more nuclear science software.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121007064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Study on Testing Autonomous Driving Systems
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00048
Xudong Zhang, Yan Cai, Z. Yang
In recent years, with the rapid development of artificial intelligence and related technologies, the traditional automotive industry has begun to integrate information technology in an all-round way. Thanks to advances in computer vision, deep learning, and sensitive sensors, autonomous driving systems (ADS) have achieved great progress. The primary requirement for autonomous driving, however, is absolute safety. Technological innovation has brought great challenges to the testing of ADS, and due to the high cost of field testing, industrial companies rarely release relevant test data for research. This paper studies existing testing methods for ADS. Our study shows that there are still few published works focusing on the testing aspects of ADS, although there is an obvious upward trend in the number of published works on testing ADS. We also find that most reviewed works focus on setting up virtual test environments, including generating, synthesizing, or reconstructing test input data. They either treat the ADS as a whole and conduct (sub)system-level testing, or limit the ADS to certain scenarios. From this, we believe that testing of ADS has only just begun to attract researchers' interest, and great effort is still needed before ADS become mature.
{"title":"A Study on Testing Autonomous Driving Systems","authors":"Xudong Zhang, Yan Cai, Z. Yang","doi":"10.1109/QRS-C51114.2020.00048","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00048","url":null,"abstract":"In recent years, with the rapid development of artificial intelligence and other related technologies, the traditional automotive industry has begun to integrate information technology in an all-round way. Due to the contributions of computer vision, deep learning, and sensitive sensors, autonomous driving systems (ADS) has now achieved great progress. But as we all know, the primary requirement for autonomous driving is absolute safety. However, technology innovation has brought great challenges to the testing of ADS, and due to the high cost of field testing, industrial companies rarely open relevant test data for research. This paper aims to study existing testing methods for ADS. Our study shows that there are few published works focusing on testing aspects of ADS. However, there is an obvious trend on the record of published works on testing ADS. Also, we can find that most reviewed works focus on setting up virtual test environment including generating, synthesizing, or reconstructing test input data. They either treat ADS as a whole to conduct (sub) system level testing or limit ADS into certain scenarios. From this, we believe that testing of ADS has just begun to attract researchers' interest; great effort should be paid before ADS becomes maturer.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127164640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Specification-based Test Case Generation with Constrained Genetic Programming
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00027
Yuji Sato
Since current specification-based testing (SBT) faces challenges in regression test case generation, we have previously proposed a method for test case generation that combines formal specifications and genetic algorithms (GA). This method mainly reconfigures formal specifications through GA to generate input data that kill as many mutants of the program under test as possible. In this paper, we propose ideas to improve the operability and the accuracy of the solution search of this method. Specifically, we propose a specification-level constrained operation using genetic programming and discuss its effectiveness from the viewpoint of the clarity of the chromosome notation and the ability to search for solutions.
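A minimal sketch of the underlying idea, evolving test inputs toward killing a seeded mutant, is shown below; the toy program, the mutant, the fitness function, and the GA parameters are illustrative assumptions and do not reflect the paper's specification-level constrained operators:

```python
import random

def original(x):            # toy program under test
    return x * 2 if x > 10 else x + 2

def mutant(x):              # seeded mutant: multiplication constant changed from 2 to 3
    return x * 3 if x > 10 else x + 2

def kills(x):               # an input kills the mutant if the two programs disagree on it
    return original(x) != mutant(x)

population = [random.randint(-100, 100) for _ in range(20)]
for _ in range(30):
    # selection: prefer inputs that kill the mutant
    parents = sorted(population, key=kills, reverse=True)[:10]
    # crossover (averaging) and mutation (small random shift) produce new candidate inputs
    children = [(random.choice(parents) + random.choice(parents)) // 2 + random.randint(-3, 3)
                for _ in range(10)]
    population = parents + children

killed = sum(kills(x) for x in population)
print(f"{killed}/{len(population)} inputs in the final suite kill the mutant")  # any x > 10 kills it
```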
{"title":"Specification-based Test Case Generation with Constrained Genetic Programming","authors":"Yuji Sato","doi":"10.1109/QRS-C51114.2020.00027","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00027","url":null,"abstract":"Since current specification-based testing (SBT) faces some challenges in regression test case generation, we have already proposed a new method for test case generation that combines formal specification and genetic algorithms (GA). This method mainly reconfigures formal specifications though GA to generate inputs data that can kill as many as possible mutants of the target program under test. In this paper, we propose ideas to improve the operability and the accuracy of solution search of this method. Specifically, we propose a specification-level constrained operation using genetic programming and discuss effectiveness from the viewpoint of clarity of chromosome notation and ability to search for solutions.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127966747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coverage Guided Multiple Base Choice Testing
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00020
Tugkan Tuglular, Onur Leblebici
A coverage guided input domain testing approach is presented with a feedback loop-controlled testing workflow, and a tool is developed to support this workflow. The multiple base choices coverage criterion (MBCC) is chosen for systematic unit test generation in the proposed approach, and branch coverage information is utilized as feedback to improve the selection of bases, which results in improved branch coverage. The proposed workflow is supported by a tool designed and developed for coverage guided MBCC-based unit testing.
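A simplified sketch of generating MBCC-style test frames, one base test per combination of base choices plus variations of each parameter over its remaining choices, is shown below; the parameter blocks and base-choice indices are assumptions, and the coverage feedback loop of the proposed workflow is not modelled here:

```python
from itertools import product

def mbcc_tests(blocks, base_indices_per_param):
    """blocks: list of choice lists per parameter;
    base_indices_per_param: list of lists of base-choice indices per parameter."""
    tests = []
    # one base test per combination of base choices
    for base in product(*[[blocks[p][i] for i in base_indices_per_param[p]]
                          for p in range(len(blocks))]):
        tests.append(list(base))
        # vary each parameter over its non-base choices, others held at the base values
        for p, choices in enumerate(blocks):
            for c in choices:
                if c != base[p]:
                    varied = list(base)
                    varied[p] = c
                    tests.append(varied)
    # remove duplicates while preserving order
    unique = []
    for t in tests:
        if t not in unique:
            unique.append(t)
    return unique

blocks = [["small", "medium", "large"], ["on", "off"]]
print(mbcc_tests(blocks, base_indices_per_param=[[0, 2], [0]]))
```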
{"title":"Coverage Guided Multiple Base Choice Testing","authors":"Tugkan Tuglular, Onur Leblebici","doi":"10.1109/QRS-C51114.2020.00020","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00020","url":null,"abstract":"A coverage guided input domain testing approach is presented with a feedback loop-controlled testing workflow and a tool is developed to support this workflow. Multiple base choices coverage criterion (MBCC) is chosen for systematic unit test generation in the proposed approach and branch coverage information is utilized as feedback to improve selection of bases, which results in improved branch coverage. The proposed workflow is supported with the tool designed and developed for coverage guided MBCC-based unit testing.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133130240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Construction of Knowledge Graph For Internal Control of Financial Enterprises
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00077
Yingying Wang, Jun Zhao, Feng Li, Min Yu
In software engineering process management, the degree of regulation standardization and the depth of execution are among the major marks of software management. Reducing the human cost of process training and compliance audits and improving the effectiveness of system management have attracted more and more attention from financial enterprises. Using a semantic markup platform and Neo4j graph database technology, we develop a regulation knowledge graph suited to waterfall-model software development and management. The regulation knowledge graph provides an intuitive and comprehensive view of all kinds of specification information across the whole life cycle of software development, and it improves the efficiency, accuracy, and integrity of querying software development process specifications and the corresponding information. The regulation knowledge graph can rapidly and continuously integrate regulation knowledge, significantly improve the efficiency of acquiring, sharing, and maintaining that knowledge, reduce labour costs, and enhance the ability of enterprises to analyze and apply regulation information and data, which has wide application value in the construction of internal control management in enterprises.
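A minimal sketch of how such phase-to-regulation relationships might be stored and queried with the Neo4j Python driver is shown below; the connection details, node labels, and relationship types are illustrative assumptions, not the paper's actual schema:

```python
from neo4j import GraphDatabase

# Connection details and the node/relationship schema are assumptions for illustration.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_regulation(tx, phase, regulation):
    # link a waterfall phase to one of its regulation clauses
    tx.run("MERGE (p:Phase {name: $phase}) "
           "MERGE (r:Regulation {text: $regulation}) "
           "MERGE (p)-[:GOVERNED_BY]->(r)",
           phase=phase, regulation=regulation)

with driver.session() as session:
    session.execute_write(add_regulation, "Requirements Analysis",
                          "Requirement documents shall be reviewed before design starts.")
    result = session.run("MATCH (p:Phase)-[:GOVERNED_BY]->(r:Regulation) "
                         "RETURN p.name AS phase, r.text AS rule")
    for record in result:
        print(record["phase"], "->", record["rule"])
driver.close()
```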
{"title":"Construction of Knowledge Graph For Internal Control of Financial Enterprises","authors":"Yingying Wang, Jun Zhao, Feng Li, Min Yu","doi":"10.1109/QRS-C51114.2020.00077","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00077","url":null,"abstract":"In the software engineering process management, the level of regulation standardization and the depth of execution are one of the major marks of software management. Reducing human cost out of process training and compliance audit and improving the effectiveness of system management have attracted more and more attention to financial enterprises. Through the semantic markup platform and Neo4j graph database technologies, we are to develop the regulation knowledge graph which is appropriate for software waterfall model development and management. The regulation knowledge graph displays intuitive and comprehensive of the whole life cycle of software development in all kinds of specification information. It also improves software development process specifications and corresponding information query efficiency, accuracy and integrity. The regulation knowledge graph can rapidly and continuously integrate regulation knowledge information, significantly improve the efficiency of acquiring, sharing and maintaining regulation knowledge, reduce software labour costs and enhance the ability of enterprises to analyze and apply regulation information and data, which has wide application value in the construction of internal control management of enterprises.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123812287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive Narrative Generation of Aesthetics of Violence in Films
Pub Date: 2020-12-01 | DOI: 10.1109/QRS-C51114.2020.00111
T. Ma, Hongwei Liu, Hongji Yang
The aesthetics of violence creates a visual-auditory spectacle in film and a significant cultural ethos in the postmodern context. The combination of violence and aesthetics in film creates a sense of paradox, subverting the audience's stereotypes about violence and evoking a new aesthetic experience, both physical and psychological. To raise such complicated aesthetic effects to a new level in the digital era, it is innovative to synthesise literary theory, aesthetic criticism, cinematic strategies, procedural modelling, and creative computation to produce more attractive and experimental stories for the film industry; computer science facilitates the stylisation of violent films in narrative, technical, and artistic ways. This paper takes Freudian psychoanalysis and Kantian aestheticism as its philosophical foundation, presents an index system of evaluation and a model to compute the weights of violence and artistic beauty, and further measures the aesthetic effect conveyed by the violence. The application of the model will promote creativity in interactive narratives about the aesthetics of violence in films.
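A purely illustrative sketch of a weighted-index computation of this kind is shown below; the indicator names, weights, and aggregation are placeholders and do not reproduce the paper's index system or model:

```python
# Placeholder weighted-index sketch: aggregate scored indicators of "violence" and
# "artistic beauty" for a scene into a single aesthetics-of-violence score.
violence_weights = {"shot_intensity": 0.5, "sound_impact": 0.3, "narrative_conflict": 0.2}
beauty_weights   = {"composition": 0.4, "color_grading": 0.3, "choreography": 0.3}

def weighted_score(scores, weights):
    # weighted sum over the indicators defined in the weight table
    return sum(weights[k] * scores[k] for k in weights)

scene = {
    "violence": {"shot_intensity": 0.8, "sound_impact": 0.6, "narrative_conflict": 0.7},
    "beauty":   {"composition": 0.9, "color_grading": 0.7, "choreography": 0.8},
}
v = weighted_score(scene["violence"], violence_weights)
b = weighted_score(scene["beauty"], beauty_weights)
print(f"violence={v:.2f}, beauty={b:.2f}, combined={0.5 * v + 0.5 * b:.2f}")
```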
{"title":"Interactive Narrative Generation of Aesthetics of Violence in Films","authors":"T. Ma, Hongwei Liu, Hongji Yang","doi":"10.1109/QRS-C51114.2020.00111","DOIUrl":"https://doi.org/10.1109/QRS-C51114.2020.00111","url":null,"abstract":"The aesthetics of violence creates a visual-auditory spectacle in the films and a significant cultural ethos in the postmodern context. The combination of violence and aesthetics in the films creates a sense of paradox, subverting the audience's stereotypes about violence and evoking a new aesthetic experience physically and psychologically. To achieve such complicated aesthetic effects to a new level in the digital era, it is innovative to synthesise the literary theory, aesthetic criticism, cinematic strategies, procedural modelling and creative computation to produce more attractive and experimental stories in the film industry. Computer science facilitates the stylisation of violent films in a narrative, technical and artistic way. The paper will take Freudian psychoanalysis and Kantian aestheticism as the philosophical foundation, present an index system of evaluation and a model to compute the weight of violence and artistic beauty and further measure the effect of aesthetics conveyed by the violence. The application of the model will promote the creativity in the interactive narrative about the aesthetics of violence in films.","PeriodicalId":358174,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127352017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}