Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197824
Krishna Chaitanya Gadepally, S. Dhal, Stavros Kalafatis, K. Nowka
Computer vision and image processing algorithms work well only under strong assumptions; they are not expected to perform well on all kinds of inputs. For instance, excessively noisy images may not yield optimal results for most computer vision algorithms. Unexpected outputs from the computer vision module can have negative downstream consequences for other modules in the pipeline. To mitigate such consequences, we use a predictor framework that was trained simultaneously with a Hardness Predictor network. This framework guarantees improved performance on images with lower "hardness" values. When applied to the input data, the proposed predictor framework yields a relatively lower-variance estimator when the training set is large, in the domains of both semantic segmentation and regression analysis.
Title: Realistic Predictors for Regression and Semantic Segmentation (2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA))
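The hardness-based routing idea above can be sketched as a simple filter. The function, score values, and threshold below are illustrative assumptions, not the authors' implementation:

```python
def route_by_hardness(inputs, hardness_scores, threshold=0.5):
    """Send only 'easy' inputs (predicted hardness below the threshold)
    to the main predictor; flag the rest for fallback handling,
    e.g. a more robust model or human review."""
    easy, hard = [], []
    for item, h in zip(inputs, hardness_scores):
        (easy if h < threshold else hard).append(item)
    return easy, hard

# Hypothetical hardness scores for four images.
images = ["img_a", "img_b", "img_c", "img_d"]
scores = [0.2, 0.9, 0.4, 0.7]
easy, hard = route_by_hardness(images, scores)
# easy → ["img_a", "img_c"]; hard → ["img_b", "img_d"]
```

Routing hard inputs away from the main predictor is what yields the improved performance guarantee on the remaining low-hardness images.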
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197838
Xinqun Luo, Xingyu Tan
To store and manage large volumes of image data, we built a database retrieval system and studied and applied content-based image retrieval algorithms. We found that the difference hashing algorithm outperforms the perceptual hashing, mean hashing, and histogram feature extraction algorithms when handling large numbers of images. Finally, we applied the difference hashing algorithm to a database retrieval system for a film festival to store and retrieve a large collection of film materials.
Title: Research and Application of Content-based Image Hash Retrieval Algorithm
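In practice, difference hashing resizes an image to a small grayscale grid (commonly 9x8, via a library such as Pillow) and sets one bit per adjacent-pixel comparison. The pure-Python sketch below applies that idea to a tiny hand-made grid instead of a real image:

```python
def dhash(pixels):
    """Difference hash: for each row, set a bit when a pixel is
    brighter than its right neighbour. `pixels` is a h x (w+1)
    grayscale grid; the result has h*w bits."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

grid1 = [[10, 20, 30], [30, 20, 10]]   # 2x3 grid → 4-bit hash
grid2 = [[10, 20, 30], [30, 35, 10]]   # one pixel brightened
h1, h2 = dhash(grid1), dhash(grid2)
# h1 and h2 differ in a single bit → likely near-duplicates
```

Retrieval then reduces to finding stored hashes within a small Hamming distance of the query hash, which is far cheaper than comparing raw pixels.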
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197694
Seongsoo Kim, Lei Chen, Jongyeop Kim, Yiming Ji, Rami J. Haddad
An Intrusion Detection System (IDS) is a crucial security mechanism for protecting computer networks from cyber-attacks. Deep learning models can detect attack types by leveraging their ability to learn and extract features from large volumes of data. In this study, we compare the performance of four deep learning algorithms for IDS: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), bidirectional LSTM, and bidirectional GRU. We evaluate attack prediction accuracy for three attack types: Denial of Service (DoS), Generic, and Exploits. We vary each algorithm's range parameter and number of epochs to determine the parameter combinations that achieve the highest accuracy. Our experimental results demonstrate that larger range parameters influence the accuracy of the LSTM, bi-LSTM, and bi-GRU models. Ultimately, GRU showed the best performance among the four algorithms tested.
Title: A Comparative Study of Deep Learning Models for Hyper Parameter Classification on UNSW-NB15
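The recurrence these models share can be illustrated with a single GRU cell in NumPy. This is a textbook sketch of the standard GRU equations, with biases omitted and random stand-in weights, not the trained IDS models from the study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z blends the previous state h
    with the candidate state h_tilde (biases omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# Alternate input-to-hidden (d_h x d_in) and hidden-to-hidden (d_h x d_h)
# random weight matrices: Wz, Uz, Wr, Ur, Wh, Uh.
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0 else
          rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # a 5-step input sequence
    h = gru_cell(x, h, *params)
# h is the final hidden state summarizing the sequence
```

A bidirectional variant runs a second cell over the reversed sequence and concatenates the two final states; an IDS classifier head then maps that state to attack classes.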
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197830
Hye-Kyoung Ryu, Sara Yu, Ki Yong Lee
The Transformer is a widely used neural network architecture for natural language processing. Recently, it has been applied to time series prediction tasks. However, the vanilla Transformer has a critical limitation in that it cannot predict the time intervals between elements. To overcome this limitation, we propose a new model architecture called TI-former (Time Interval Transformer) that predicts both the sequence elements and the time intervals between them. To incorporate the elements' sequential order and temporal interval information, we first propose a new positional encoding method. Second, we modify the output layer to predict the next sequence element and the time interval simultaneously. Lastly, we suggest a new loss function for timestamped sequences, namely Time soft-DTW, which measures similarity between sequences while considering timestamps. We present experimental results on synthetic sequence data, which show that our proposed model outperforms the vanilla Transformer model across various sequence lengths, numbers of sequences, and element occurrence time ranges.
Title: TI-former: A Time-Interval Prediction Transformer for Timestamped Sequences
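The paper's exact layer design is not reproduced here, but the modified output layer can be pictured as two heads over a shared hidden state: a softmax over the element vocabulary plus a non-negative interval regressor. All weights below are random placeholders, and the head shapes are assumptions for illustration:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def dual_head(h, W_elem, W_time):
    """Two output heads on a shared hidden state h: a distribution
    over the next element, and a softplus-activated (hence >= 0)
    time-interval estimate."""
    elem_probs = softmax(W_elem @ h)
    interval = float(np.logaddexp(0.0, W_time @ h))  # softplus(w·h)
    return elem_probs, interval

rng = np.random.default_rng(1)
h = rng.standard_normal(8)             # shared hidden state
W_elem = rng.standard_normal((5, 8))   # vocabulary of 5 element types
W_time = rng.standard_normal(8)        # scalar interval head
probs, dt = dual_head(h, W_elem, W_time)
# probs sums to 1; dt is a non-negative predicted time interval
```

Training such a model would combine a classification loss on `probs` with a regression loss on `dt`, which is the role the paper's Time soft-DTW loss plays for whole sequences.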
As Digital Twins (DT) rise in prominence as a way to integrate the Internet of Things (IoT) with data analytics, so does the need to address their challenges. Named Data Networking (NDN) is a possible solution: it has been growing in popularity due to its advances over the traditional TCP/IP Internet architecture. In this paper, our approach begins with a framework that leverages an NDN-based DT architecture for data management. We then design two scenarios that focus on data-query performance in small- and large-scale simulated NDN-based DT architectures. Based on these scenarios, we evaluate data-query and DT performance to investigate the performance gap and determine whether action needs to be taken.
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197693
Hengshuo Liang, Cheng Qian, Chao Lu, Lauren Burgess, John Mulo, Wei Yu
Title: Named Data Networking (NDN) for Data Collection of Digital Twins-based IoT Systems
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197720
Samuel Sungmin Cho, Myoungkyu Song
The Internet of Things (IoT) has become an essential part of our daily lives and of society as a whole, but IoT applications remain hard to develop and deliver because of the inherent heterogeneity of IoT devices. In this paper, we present a programming model for IoT that focuses on effective information sharing, based on software engineering ideas such as conceptual integrity and managing complexity. We analyze four programming models, Lisp, Fortran, Smalltalk, and Haskell, to understand what factors or ideas made them successful at managing complexity to solve problems in various domains. Then, based on this analysis, we propose an IoT programming model with the conceptual integrity that 'every piece of information is represented as a map.' When we share information only as a map data structure, a set of (key, value) pairs, we can simplify the process that manages the lifetime of the shared information. Moreover, we can use probabilistic data structures to reduce the information's footprint when size efficiency matters, and JSON (JavaScript Object Notation) representations when we need to share high-fidelity information. We propose an architecture to accomplish this goal and implement a virtual machine to show how information is generated, processed, and stored as a map data structure.
Title: Programming Model for Information Sharing among IoT Devices: Software Engineering Perspective
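The two sharing paths described above can be sketched as follows. `key_sketch` is a hypothetical stand-in for a real probabilistic structure such as a Bloom filter, and the sensor map is invented for illustration:

```python
import json

# Every piece of shared information is a plain map of (key, value) pairs.
reading = {"device": "sensor-7", "metric": "temp_c", "value": 21.5}

# High-fidelity path: serialize the map as JSON for rich consumers.
wire = json.dumps(reading, sort_keys=True)
restored = json.loads(wire)  # round-trips to an equal map

# Size-constrained path: a tiny Bloom-filter-style sketch of the keys,
# trading accuracy for footprint (false positives are possible).
def key_sketch(mapping, bits=32):
    sketch = 0
    for key in mapping:
        sketch |= 1 << (hash(key) % bits)
    return sketch

full = key_sketch(reading)
subset = key_sketch({"device": None})
# A subset's bits are always contained in the full sketch's bits.
```

Because both paths start from the same map representation, a consumer can choose fidelity or footprint per message without changing the producer.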
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197746
Ioannis Nearchou, Lance Rafalko, Ryan Phillips, Matthew Anderson, Wuwei Shen, S. Drager
Autonomous driving has drawn great interest from both industry and academia. Because autonomous vehicles can cause serious consequences such as loss of life, assurance certification has been proposed in the automotive industry to ensure safe self-adaptive behavior at run-time in autonomous cars. Central to assurance certification are assurance cases, which provide compelling, comprehensive, and valid argument structures showing that a system is safe in a given environment. However, many existing approaches generate assurance cases only as a by-product of a system. In this paper, we present a novel development paradigm that employs assurance cases to guide an autonomous vehicle to operate correctly and safely at run-time. Specifically, we use an F1TENTH racing car as an example to illustrate how the assurance-case-driven paradigm can guide the vehicle to achieve safe and reliable self-adaptive behavior at run-time.
Title: An Assurance Case Driven Development Paradigm for Autonomous Vehicles: An F1TENTH Racing Car Case Study
Pub Date : 2023-05-23 DOI: 10.1109/sera57763.2023.10197675
Title: Keynote: Support of Assurance-based Software Development for Cyber-Physical Systems
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197719
Zhengxinchao Xiao, Lei Xiao
Regression testing is a crucial component of software testing and an essential tool for ensuring software quality, and an appropriate optimization method is key to maximizing productivity and reducing its cost. Test case prioritization (TCP) and regression test selection (RTS) are two popular regression testing methods. This paper provides a qualitative analysis of 18 TCP and 17 RTS publications from the last five years, addressing four main issues: the most popular TCP techniques, the most popular RTS methods, the most popular metrics for measuring TCP and RTS, and the data sources used. Based on this study, we draw the following conclusions: (1) Defect prediction and machine learning-based TCP methods, along with machine learning, multi-objective, and model-based RTS methods, will receive additional attention in the future. (2) Defects4J has been the most commonly used dataset in TCP over the past five years, while SIR and GitHub are the most commonly used datasets in RTS. (3) The most widely used measures in TCP and RTS are APFD and cost, respectively. In the future, researchers will use these two indicators, together with cost, fault detection capability, and test coverage, to conduct more comprehensive evaluations.
Title: A Systematic Literature Review on Test Case Prioritization and Regression Test Selection
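APFD, the metric named in conclusion (3), rewards test orderings that reveal faults early. A minimal implementation over a toy fault matrix (the test and fault names are invented for illustration):

```python
def apfd(fault_matrix, order):
    """Average Percentage of Faults Detected for a test ordering.
    fault_matrix[t] is the set of faults test t reveals; `order` lists
    test ids in prioritized execution order. Assumes every fault is
    revealed by at least one test in the suite."""
    n = len(order)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    first_pos = {}  # fault -> 1-based position of first revealing test
    for pos, t in enumerate(order, start=1):
        for f in fault_matrix[t]:
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)

# Toy suite: 4 tests, 3 faults.
fm = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1"}}
good = apfd(fm, ["t2", "t1", "t3", "t4"])  # faults found early → ~0.79
bad  = apfd(fm, ["t3", "t4", "t1", "t2"])  # faults found late  → ~0.29
```

A TCP technique is judged by how close its ordering's APFD gets to the best achievable value for the suite.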
Individual differences in programming performance are very large, so there is a strong need to evaluate the programming performance of individual developers quantitatively and objectively. In this study, we conduct an experiment measuring programmers' behavior, including the keystrokes and mouse operations of fourteen subjects, and analyze correlations between these behaviors and programming performance. The experimental results show that subjects who frequently checked their program's operation and who frequently used shortcut keys had higher programming performance. In addition, subjects who spent a lot of time searching the web, compiled often, and rewrote a lot of code had lower programming performance.
Pub Date : 2023-05-23 DOI: 10.1109/SERA57763.2023.10197645
Kazuki Matsumoto, Kinari Nishiura, Mariko Sasakura, Akito Monden
Title: Analysis of Programming Performance Based on 2-grams of Keystrokes and Mouse Operations
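As an illustration of the kind of feature such a study can correlate with performance, 2-grams over a logged event stream can be counted directly (the event names below are hypothetical, not the study's actual logging vocabulary):

```python
from collections import Counter

def bigram_counts(events):
    """Count 2-grams (consecutive pairs) in a stream of logged
    keystroke/mouse events."""
    return Counter(zip(events, events[1:]))

# Hypothetical event log: key presses, a repeated shortcut, a click.
log = ["ctrl", "s", "ctrl", "s", "click", "ctrl", "s"]
counts = bigram_counts(log)
# ("ctrl", "s") occurs three times → a frequently used save shortcut,
# the kind of 2-gram the study found associated with higher performance.
```

Per-subject 2-gram frequencies like these can then be correlated against a performance measure across subjects.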