Detecting Coincidental Correctness and Mitigating Its Impacts on Localizing Variability Faults
Thu-Trang Nguyen, H. Vo
Pub Date: 2022-10-19 | DOI: 10.1109/KSE56063.2022.9953777
Coincidental correctness is the phenomenon in which test cases execute faulty statements yet still produce correct (expected) outputs. This problem is prevalent in software testing and degrades fault localization performance. Although detecting coincidentally correct (CC) tests and mitigating their impact on fault localization in non-configurable systems have been studied in depth, handling CC tests in Software Product Line (SPL) systems remains unexplored. To test an SPL system, products are often sampled and each product is tested individually. CC test cases in the test suite of a product affect not only the testing results of that product but also the overall testing results of the system. This can degrade fault localization performance and slow down the quality assurance process for the system. In this paper, we introduce DEMiC, a novel approach to detect CC tests and mitigate their impacts on localizing variability faults in SPL systems. Our key idea for detecting CC tests is that two similar tests tend to exercise similar behaviors of the system and should therefore have the same testing outcome (i.e., both pass or both fail). If only one of them fails, the other may have passed coincidentally. In addition, we propose several solutions to mitigate the negative impacts of CC tests on variability fault localization at different levels. Our experimental results on more than 2.6 million test cases of five widely used SPL systems show that DEMiC detects CC tests effectively, with 97% accuracy on average. In addition, DEMiC can help improve fault localization performance by 61%.
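The abstract states the key idea (a passed test that closely resembles a failing test may be coincidentally correct) but not DEMiC's concrete algorithm. The following is a minimal, hypothetical sketch of that idea using Jaccard similarity over executed-statement sets; the similarity measure, the threshold, and all names are illustrative assumptions, not the paper's method:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of executed statements."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def flag_coincidentally_passed(tests, threshold=0.8):
    """tests: list of (name, executed_statements, passed) triples.

    Returns names of passed tests whose coverage closely matches that
    of some failing test -- candidates for coincidental correctness.
    """
    failing = [cov for _, cov, passed in tests if not passed]
    flagged = []
    for name, cov, passed in tests:
        if passed and any(jaccard(cov, fcov) >= threshold for fcov in failing):
            flagged.append(name)
    return flagged
```

For example, a passing test covering the same statements as a failing one would be flagged, while a passing test with disjoint coverage would not.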
Distill Knowledge in Multi-task Reinforcement Learning with Optimal-Transport Regularization
Bang Giang Le, Viet-Cuong Ta
Pub Date: 2022-10-19 | DOI: 10.1109/KSE56063.2022.9953750
In multi-task reinforcement learning, the data efficiency of training agents can be improved by transferring knowledge from different but related tasks. However, the experiences from different tasks are usually biased toward their specific task goals. Traditional methods rely on Kullback-Leibler regularization to stabilize the transfer of knowledge from one task to the others. In this work, we explore replacing the Kullback-Leibler divergence with a novel optimal-transport-based regularization. Using the Sinkhorn mapping, we approximate the optimal transport distance between the state distributions of tasks. The distance is then used as an amortized reward to regularize the amount of shared information. We evaluate our framework on several grid-based multi-goal navigation tasks to validate the effectiveness of the approach. The results show that the added optimal-transport-based rewards speed up the learning process of agents and outperform several baselines on multi-task learning.
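The Sinkhorn mapping mentioned in the abstract approximates the entropy-regularized optimal transport distance between two discrete distributions. A minimal, self-contained sketch of the standard iteration (not the paper's implementation; the regularization strength `eps` and iteration count are illustrative assumptions):

```python
import math

def sinkhorn_distance(p, q, cost, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport distance between two
    discrete distributions p and q (lists summing to 1) under a cost
    matrix, approximated by Sinkhorn iterations."""
    n, m = len(p), len(q)
    # Gibbs kernel of the cost matrix
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iters):
        # Alternately rescale so the plan's marginals match q and p
        v = [q[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
        u = [p[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
    # Transport plan P[i][j] = u[i] * K[i][j] * v[j]; return <P, cost>
    return sum(u[i] * K[i][j] * v[j] * cost[i][j]
               for i in range(n) for j in range(m))
```

Moving all mass across a unit-cost gap yields a distance near 1, while identical distributions yield a distance near 0.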
Knowledge-based Problem Solving and Reasoning methods
N. Do, H. Nguyen
Pub Date: 2022-10-19 | DOI: 10.1109/KSE56063.2022.9953617
An Intelligent Problem Solver (IPS) is an intelligent system that solves practical problems in a given domain by using human knowledge. Thus, the design of the knowledge base and the inference engine of IPS systems is important. This study proposes a general model for knowledge representation, called Integ-Ontology, that combines a kernel ontology with other knowledge components. Based on this model, a model of problems is presented, and a reasoning method is proposed. The method combines inference processing with techniques based on heuristic rules, sample problems, and patterns to speed up problem solving. The Integ-Ontology and its reasoning method are applied to design practical IPSs for solid geometry and Direct Current (DC) electrical circuits.
Online pseudo labeling for polyp segmentation with momentum networks
Toan Pham Van, Linh Doan Bao, Thanh-Tung Nguyen, Duc Trung Tran, Q. Nguyen, D. V. Sang
Pub Date: 2022-09-29 | DOI: 10.1109/KSE56063.2022.9953785
Semantic segmentation is an essential task in developing medical image diagnosis systems. However, building an annotated medical dataset is expensive, so semi-supervised methods are significant in this circumstance. In semi-supervised learning, the quality of labels plays a crucial role in model performance. In this work, we present a new pseudo-labeling strategy that enhances the quality of the pseudo labels used to train student networks. We follow the multi-stage semi-supervised training approach, which trains a teacher model on a labeled dataset and then uses the trained teacher to render pseudo labels for student training. By doing so, the pseudo labels are updated and become more precise as training progresses. The key difference from previous methods is that we update the teacher model during the student training process, so the quality of the pseudo labels improves while the student trains. We also propose a simple but effective strategy to enhance pseudo-label quality using a momentum model, a slowly updated copy of the original model maintained during training. By applying the momentum model combined with re-rendering pseudo labels during student training, we achieve an average Dice score of 84.1% on five datasets (i.e., Kvasir, CVC-ClinicDB, ETIS-LaribPolypDB, CVC-ColonDB, and CVC-300) with only 20% of the dataset used as labeled data. Our results surpass common practice by 3% and even approach fully supervised results on some datasets. Our source code and pre-trained models are available at https://github.com/sun-asterisk-research/online_learning_ssl
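The "momentum model" described above, a slow copy of the original model maintained during training, is typically implemented as an exponential moving average (EMA) of the student's weights. A minimal sketch of one such update step; the momentum coefficient here is an illustrative assumption, not the paper's setting:

```python
def ema_update(teacher, student, momentum=0.99):
    """One momentum-model step over flat weight lists: each teacher
    weight moves a small fraction (1 - momentum) toward the
    corresponding student weight."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]
```

Repeated after every student step, this keeps the teacher on a smoothed trajectory of the student, and the smoothed teacher is then used to re-render pseudo labels.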
Using Multiple Code Representations to Prioritize Static Analysis Warnings
Thanh Vu, H. Vo
Pub Date: 2022-09-25 | DOI: 10.1109/KSE56063.2022.9953786
To ensure software quality and prevent attacks on critical systems, static analysis tools are frequently used to detect vulnerabilities early in development. However, these tools often report a large number of warnings with a high false-positive rate, which causes many difficulties for developers. In this paper, we introduce VULRG, a novel approach to address this problem. Specifically, VULRG predicts and ranks warnings by their likelihood of being true positives. To predict these likelihoods, VULRG combines two deep learning models, a CNN and a BiGRU, to capture the context of each warning in terms of program syntax, control flow, and program dependence. Our experimental results on a real-world dataset of 6,620 warnings show that VULRG's Recall at Top-50% is 90%: using VULRG, 90% of the vulnerabilities can be found by examining only 50% of the warnings. Moreover, at Top-5%, VULRG improves on the state-of-the-art approach by 30% in both Precision and Recall.
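The Recall at Top-k% metric quoted above can be computed directly from a ranked warning list. A small, self-contained sketch (variable names are mine, not the paper's):

```python
def recall_at_top(scores, labels, fraction=0.5):
    """Rank warnings by predicted true-positive score (descending) and
    return the share of all real vulnerabilities (labels == 1) that
    fall within the top `fraction` of the ranked list."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    k = max(1, int(len(order) * fraction))
    found = sum(labels[i] for i in order[:k])
    total = sum(labels)
    return found / total if total else 0.0
```

With four warnings scored [0.9, 0.8, 0.2, 0.1] and labels [1, 1, 0, 1], the top 50% of the ranking contains two of the three true positives, giving a recall of 2/3.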
Non-Standard Vietnamese Word Detection and Normalization for Text-to-Speech
Huu-Tien Dang, Thi-Hai-Yen Vuong, X. Phan
Pub Date: 2022-09-07 | DOI: 10.1109/KSE56063.2022.9953791
Converting written text into its spoken form is an essential problem in any text-to-speech (TTS) system. However, building an effective text normalization solution for a real-world TTS system faces two main challenges: (1) the semantic ambiguity of non-standard words (NSWs), e.g., numbers, dates, ranges, scores, and abbreviations, and (2) transforming NSWs into pronounceable syllables, as for URLs, email addresses, hashtags, and contact names. In this paper, we propose a new two-phase normalization approach to address these challenges. First, a model-based tagger detects NSWs. Then, depending on the NSW type, a rule-based normalizer expands the NSWs into their final verbal forms. We conducted three empirical experiments on NSW detection using Conditional Random Fields (CRFs), BiLSTM-CNN-CRF, and BERT-BiGRU-CRF models on a manually annotated dataset of 5,819 sentences extracted from Vietnamese news articles. In the second phase, we propose a forward lexicon-based maximum matching algorithm to split hashtags, email addresses, URLs, and contact names into words. The experimental results of the tagging phase show that the average F1 scores of the BiLSTM-CNN-CRF and CRF models are above 90.00%, with the BERT-BiGRU-CRF model reaching the highest F1 of 95.00%. Overall, our approach achieves low sentence error rates: 8.15% with the CRF tagger, 7.11% with the BiLSTM-CNN-CRF tagger, and only 6.67% with the BERT-BiGRU-CRF tagger.
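The forward lexicon-based maximum matching step can be illustrated with a short sketch: scan left to right and, at each position, greedily take the longest lexicon word. The single-character fallback below is an assumption for completeness; the paper's exact handling of unmatched spans is not stated in the abstract:

```python
def forward_max_match(text, lexicon):
    """Greedy forward maximum matching: at each position, emit the
    longest prefix found in the lexicon, falling back to one character
    when nothing matches (assumed behavior)."""
    max_len = max(map(len, lexicon), default=1)
    tokens, i = [], 0
    while i < len(text):
        # Try the longest candidate first, shrinking toward one character
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens
```

For a hashtag or URL fragment such as "sunasterisk" with a lexicon containing "sun" and "asterisk", this splits the string into the pronounceable words ["sun", "asterisk"].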
Welcome Message from the KSE 2022 General Committee
N. Trang, N. Chawla, Y. Nagai, Nguyen Thanh Thoai
Pub Date: 2021-11-10 | DOI: 10.1109/kse53942.2021.9648796
The 2022 IEEE International Conference on Knowledge and Systems Engineering (KSE) is the 14th meeting of the series, held online during October 19-21, 2022 at Thai Binh Duong University, Nha Trang, Vietnam. The first KSE conference was held during October 13-17, 2009, by the College of Technology, Vietnam National University (VNU), Hanoi. As an annual meeting, the past KSE conferences were held by the University of Engineering and Technology, VNU (VNU-UET), Hanoi, and Le Quy Don Technical University (LQDTU) in 2010; by Hanoi University and VNU-UET in 2011; by Danang University of Technology, University of Da Nang, and VNU-UET in 2012; by the National University of Education and VNU-UET in 2013; by VNU-UET in 2014; by the University of Information Technology, VNU, Ho Chi Minh City, and the Japan Advanced Institute of Science and Technology (JAIST) in 2015; by LQDTU, JAIST, and VNU-UET in 2016; by the Hue University of Education, Hue University of Sciences, Hue University, and VNU-UET in 2017; by the Posts and Telecommunications Institute of Technology, Vietnam (PTIT), and JAIST in 2018; by the University of Science and Education, University of Da Nang, and VNU-UET in 2019; by the College of Information and Communication Technology, Can Tho University (CTU), and VNU-UET in 2020; and by the Artificial Intelligence Association of Thailand (AIAT), Thammasat University (TU), the Sirindhorn International Institute of Technology, Thammasat University, the National Electronics and Computer Technology Center (NECTEC), Thailand, VNU-UET and LQDTU, Vietnam, and JAIST, Japan in 2021.
Keynotes
Carola Lilienthal
Pub Date: 2011-09-01 | DOI: 10.1080/08870446.2011.618585
Today, programmers do not develop applications from scratch; they spend their time fixing, extending, modifying, and enhancing existing applications. The biggest problem in their daily work is that, over time, maintenance mutates from structured programming to defensive programming: the code becomes too complex to maintain. We put in code we know is stupid from an architectural point of view, but it is the only solution that will hopefully work. Maintenance becomes ever more difficult and expensive, and our software accumulates technical debt. In this talk, you will see how to improve your architecture and source code to prevent technical debt from growing unrestricted. With the proper knowledge of well-structured architecture, refactorings for tangled code can quickly be found, complex code can be eliminated, and maintenance costs will be reduced.
Bio: Carola Lilienthal studied computer science at the University of Hamburg from 1988 to 1995, and in 2008 she received her doctoral degree in computer science from the University of Hamburg (supervising professors: Christiane Floyd and Claus Lewerentz). Today, Dr. Carola Lilienthal is managing director of WPS Workplace Solutions GmbH and is responsible for the department of software architecture. Since 2003, she has been analyzing architectures in Java, C#, C++, ABAP, and PHP throughout Germany, advising development teams on how to improve the longevity of their software systems. In 2015, she summarized her experiences from over a hundred analyses in the book "Long-Living Software Architectures". She is particularly interested in the education of software architects, which is why she is an active member of iSAQB, the International Software Architecture Quality Board e.V., and regularly disseminates her knowledge at conferences, in articles, and in training courses.