Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5365099
Shaoli Huang, C. Cai, Yang Zhang
In this paper, we propose a new representation and matching scheme for wood image retrieval using the Scale Invariant Feature Transform (SIFT). We extract SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT descriptor. This scheme can be appended to most existing wood image retrieval systems to improve their retrieval accuracy and efficiency. Experimental results demonstrate that the scheme is efficient and stable enough for wood image retrieval.
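The descriptor-matching step that SIFT pipelines typically use can be sketched in miniature. This is an illustrative numpy version of nearest-neighbour matching with Lowe's ratio test, not the authors' implementation; the 128-dimensional descriptors are assumed to come from an existing SIFT detector:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match two sets of SIFT-style descriptors with Lowe's ratio test.

    desc_a : (n, 128) array, desc_b : (m, 128) array, m >= 2.
    Returns (i, j) pairs where desc_b[j] is the nearest neighbour of
    desc_a[i] and is sufficiently closer than the second nearest.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances to all candidates
        j1, j2 = np.argsort(dists)[:2]               # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:            # accept only unambiguous matches
            matches.append((i, j1))
    return matches
```

Retrieval then ranks database images by how many of their feature points survive this test.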
{"title":"Wood Image Retrieval Using SIFT Descriptor","authors":"Shaoli Huang, C. Cai, Yang Zhang","doi":"10.1109/CISE.2009.5365099","DOIUrl":"https://doi.org/10.1109/CISE.2009.5365099","url":null,"abstract":"In this paper, we propose a new representation and matching scheme for wood image retrieval using Scale Invariant Feature Transformation (SIFT). We extract SIFT feature points in scale space and perform matching based on the texture information around the feature points using SIFT feature operator. This scheme can be appended to most existing wood image retrieval systems and improve their retrieval accuracy and efficiency. Experimental results demonstrate that the performance of this scheme is efficient and stable enough for wood image retrieval technique.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114673983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5366957
Dianqin Zhu, Zhongyuan Wu
Dianqin Zhu, Zhongyuan Wu (School of Management, Tianjin Polytechnic University, Tianjin, China, 300387) Abstract: Software defects are a part of software products that every software development company has to face, and handling them properly is vital to a software company's survival. This article uses collected software defect data to establish a prediction model according to GM(1,1), the core model of gray prediction theory, and then obtains the predicted values. The results show that with these predictions a software company can improve software quality, control the development process, and allocate resources effectively.
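The GM(1,1) computation follows a standard recipe: accumulate the series, fit the grey differential equation by least squares, predict on the accumulated scale, then difference back. A minimal numpy sketch (the input series here is illustrative, not the paper's defect data):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecasting: fit on series x0, predict `steps` ahead.

    x0 : 1-D sequence of positive observations (e.g. defect counts per build).
    Returns the forecast values following the end of the series.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[:-1] + x1[1:])                      # mean sequence of consecutive AGO terms
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)     # developing coefficient a, grey input b
    n = len(x0)
    k = np.arange(n, n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a          # time-response function
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                            # inverse AGO: back to the original scale
```

For a near-exponential defect trend the model tracks the next value closely, which is the situation grey prediction is designed for: short, smooth series with little data.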
{"title":"The Application of Gray-Prediction Theory in the Software Defects Management","authors":"Dianqin Zhu, Zhongyuan Wu","doi":"10.1109/CISE.2009.5366957","DOIUrl":"https://doi.org/10.1109/CISE.2009.5366957","url":null,"abstract":"zhu dianqin wu zhongyuan (school of management,TianJin polytechnic university,TianJin,China.300387) Abstract:Software defects are the parts of software products that software development company have to face. How to deal with them suitably is very important for software company’s survival. This article will use the collecting data of software defects. Then, according to GM (1, 1), which is the core theory of the Gary-prediction, establish the prediction model. Finally, we gain the prediction values. The results show that software company can improve software quality, control development process and allocate resources effectively.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125231541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5367013
Guang Zheng, Xifu Li, Lian Li, Jinzhao Wu
Web servers providing services are widely used on the Internet. The behaviors of web servers can be expressed by actions equipped with time and priority parameters. However, existing process algebras cannot specify the behavior of a web server serving different groups of clients with priorities and time limitations. We present a process algebra with a timed-priority executing policy, whose actions carry time and priority parameters, that can specify such web server behaviors.
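As a loose operational intuition for the executing policy (not the algebra itself), the server picks, among the actions still enabled at the current time, the one with the highest priority; the fields below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    priority: int     # larger value = more urgent client group
    deadline: float   # latest time at which the action may still fire

def choose_action(actions, now) -> Optional[Action]:
    """Timed-priority selection: drop actions whose time bound has passed,
    then take the highest priority; ties go to the earliest deadline."""
    live = [a for a in actions if a.deadline >= now]
    if not live:
        return None
    return max(live, key=lambda a: (a.priority, -a.deadline))
```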
{"title":"Process Algebra for Web Servers with Timed-Priority Executing Policy","authors":"Guang Zheng, Xifu Li, Lian Li, Jinzhao Wu","doi":"10.1109/CISE.2009.5367013","DOIUrl":"https://doi.org/10.1109/CISE.2009.5367013","url":null,"abstract":"Web servers providing services are widely used in Internet. The behaviors of web servers can be expressed by actions equipped with parameters of time and priority. However, process algebras nowadays cannot specify the behaviors of web server with different groups of clients with priorities and time limitations. We present a process algebra with timed-priority executing policy that can specify the behaviors of web server, with its actions equipped with parameters of time and priority.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116667168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5366060
Juan Zhang, Hai-Bing Su, Qin-Zhang Wu
To provide an intra-system interconnect for chip-to-chip and board-to-board communications, and to meet the explosive demand for higher bandwidth and more efficient signal processing and data transmission in typical embedded systems, a new system interconnect technology is needed to ensure that bus performance continues to increase. This paper proposes RapidIO, whose bandwidth of up to 10 Gb/s, low latency, and low power meet the performance demands of rapidly developing communication technologies. The paper introduces the basic principles, internal architecture, and key techniques of RapidIO and researches its application based on the TMS320C6455 DSP. It presents the design of RapidIO transmission between different DSPs and gives the flow of the software design. The experimental results show that read and write operations work stably at 3.125 Gb/s per channel between different DSPs, reaching a rate of up to 275 MB/s at a baud rate of 3.125 Gbps.
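A quick back-of-the-envelope check shows how the 275 MB/s figure relates to the 3.125 Gbps line rate, assuming the 8b/10b line coding used by first-generation serial RapidIO links:

```python
# Per-lane throughput of a serial RapidIO link under 8b/10b encoding.
line_rate_gbps = 3.125                  # per-lane baud rate reported above
payload_gbps = line_rate_gbps * 8 / 10  # 8b/10b coding leaves 80% for data -> 2.5 Gb/s
payload_mbs = payload_gbps * 1000 / 8   # Gb/s -> MB/s: 312.5 MB/s theoretical ceiling
measured_mbs = 275.0                    # rate reported in the experiments
efficiency = measured_mbs / payload_mbs # fraction of the ceiling actually achieved
print(payload_mbs, round(efficiency, 3))  # 312.5 0.88
```

So the measured 275 MB/s corresponds to about 88% of the post-encoding ceiling, the remainder being packet and protocol overhead.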
{"title":"Research and Implement of Serial RapidIO Based on Mul-DSP","authors":"Juan Zhang, Hai-Bing Su, Qin-Zhang Wu","doi":"10.1109/CISE.2009.5366060","DOIUrl":"https://doi.org/10.1109/CISE.2009.5366060","url":null,"abstract":"In order to sovle an intra-system interface for chip-to-chip and board-to-board communications and meet the explosive demand for higher bandwidth and more efficient signal processing and data transmission in typical enbeded system,there is an active demand that adopting a new system interconnect technology to ensure that bus performance continues to increase. The RapidIO is proposed in the paper. Up to 10Gb/s of bandwith,low latency and low power meet the demand on the performance of rapid developing communication technologies. The paper introduces the basic principle,inner architechture and the key technique of the RapidIO , research its application and based on DSP TMS320C6455. It shows the design of RapidIO transmission between different DSP. The paper gives the flow of the software design.The experiment results show that the read and write operation can stably work at 3.125Gb/s per channel between different DSP.The rate is up to 275MB/s when the baud rate is 3.125Gbps.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116692872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5364188
Xiaomin Zhao, Bin Lu
As data sizes increase, the efficiency of clustering algorithms and the quality of their results attract more attention. CURD (clustering using references and density) is a fast clustering algorithm based on reference points and density; it can discover clusters of arbitrary shape and has linear time complexity. However, it still has some shortcomings: its efficiency on high-dimensional data is uncertain, its noise handling is not ideal, and the number of clusters it produces may not satisfy users' requirements. To address these deficiencies, this paper introduces a new method that processes high-dimensional data with information entropy techniques and quotient space theory. Additionally, it disposes of noise data in two stages. Finally, the step of sorting the reference points is improved with quotient space theory to produce multi-level clustering results that meet different customers' needs. Experiments show that the improved algorithm not only improves clustering quality but also maintains high efficiency.
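The reference-and-density idea behind CURD can be illustrated in miniature: a point with enough neighbours within a radius becomes a reference, and every point then attaches to its nearest reference. This is a toy rendering of the general idea, not the paper's improved algorithm (radius and density threshold are illustrative parameters):

```python
import numpy as np

def reference_points(points, radius, min_pts):
    """Toy CURD-style first phase: mark dense points as references and
    attach every point to its nearest reference.

    Returns (refs, labels): indices of the reference points, and for each
    input point the index of the reference it attaches to.
    """
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # pairwise distances
    counts = (d <= radius).sum(axis=1) - 1        # neighbours within radius, excluding self
    refs = np.where(counts >= min_pts)[0]         # dense points become references
    labels = refs[np.argmin(d[:, refs], axis=1)]  # nearest reference for each point
    return refs, labels
```

Merging references that lie within each other's radius then yields the final arbitrary-shape clusters; the pairwise-distance matrix here is what makes the naive version quadratic, whereas CURD's bookkeeping keeps the real algorithm linear.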
{"title":"An Improved CURD Clustering Algorithm Based on Quotient Space","authors":"Xiaomin Zhao, Bin Lu","doi":"10.1109/CISE.2009.5364188","DOIUrl":"https://doi.org/10.1109/CISE.2009.5364188","url":null,"abstract":"As the data size increases, the efficiency of algorithm and the clustering quality draw more attraction. CURD (clustering using references and density) is a fast clustering algorithm based on reference and density, which can discover clusters with arbitrary shape and has the linear times complexity. However, it still has some shortcomings such as: the efficiency to deal with the high-dimensional data is uncertain, the noise processing is not ideal, besides the number of the clustering results may not satisfy the requirement of the users. According to these deficiencies, this paper introduces a new method to propose the high-dimensional data with information entropy technology and quotient space theory. Additionally it disposes the noise date in two stages. Finally, some improvement are given on the step of sorting the reference points by quotient space theory to produce multi-level clustering results so as to meet the different needs of customers. Experiments show that the improved algorithm not only improves the quality of the clustering algorithm but also maintains the high efficiency.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117135992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5364135
Xudong Li, Chunxia Zhang, Xing Lin, Shuguang Lin
This paper describes an improved semaphore with policies in the Windows operating system. We introduce policies to help the operating system kernel select the next process (or thread) in the waiting-list queue to satisfy. The paper presents five policies: first in first out (FIFO), first in last out (FILO), highest priority first out (HPFO), lowest priority first out (LPFO), and random. We discuss the design and implementation of the semaphore with policies in the Windows Research Kernel (WRK), and the results match expectations.
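A user-mode sketch of the idea (Python threading rather than WRK kernel code): `release()` consults a policy to decide which blocked waiter to satisfy next. FIFO, FILO, and HPFO are shown; LPFO and Random would select with `min` or `random.choice` analogously:

```python
import threading
from collections import deque

class PolicySemaphore:
    """Counting semaphore whose release() picks the next waiter to wake
    according to a satisfy policy, mirroring the idea described above."""

    def __init__(self, value=1, policy="FIFO"):
        self._lock = threading.Lock()
        self._count = value
        self._policy = policy
        self._waiters = deque()                  # (event, priority) per blocked caller

    def acquire(self, priority=0):
        with self._lock:
            if self._count > 0:
                self._count -= 1
                return
            ev = threading.Event()
            self._waiters.append((ev, priority))
        ev.wait()                                # block until release() hands over the permit

    def release(self):
        with self._lock:
            if not self._waiters:
                self._count += 1
                return
            if self._policy == "FIFO":
                ev, _ = self._waiters.popleft()  # oldest waiter
            elif self._policy == "FILO":
                ev, _ = self._waiters.pop()      # newest waiter
            else:                                # "HPFO": highest priority first out
                i = max(range(len(self._waiters)), key=lambda k: self._waiters[k][1])
                ev, _ = self._waiters[i]
                del self._waiters[i]
            ev.set()                             # permit passes directly to the chosen waiter
```

Handing the permit directly to the chosen waiter (rather than incrementing the count) prevents a newly arriving thread from stealing it, which is what makes the policy ordering reliable.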
{"title":"Introduce Satisfy Policies into Semaphore in WRK","authors":"Xudong Li, Chunxia Zhang, Xing Lin, Shuguang Lin","doi":"10.1109/CISE.2009.5364135","DOIUrl":"https://doi.org/10.1109/CISE.2009.5364135","url":null,"abstract":"This paper describes an improved semaphore with policies in Windows operating system. We introduce policies to help operating system kernel select next process (or thread) in the waiting list queue to satisfy. The paper present five policies: first in first out (FIFO), first in last out (FILO), highest priority first out (HPFO), lowest priority first out (LPFO) and Random. We discuss the design and implement of semaphore with policies in Windows Research Kernel (WRK), and the results are the same as expected.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117190875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5365846
Hong Yang, Lina Hong, Rong Chen, Yaqing Liu
Program similarity measures how similar comparable programs are under computer-assisted analysis, with the aim of finding equivalent code fragments. In this paper, we propose a semantic approach that detects program plagiarism by computing program similarity based on variable dependence. To do so, we compute dependences over program variables, find a variable mapping that maps each variable in one program to its counterpart in the other, then use the mapping to convert one program into a version that can be compared textually with the other, and thus compute the similarity of the programs. An experiment shows the effectiveness of this method. Keywords—Program Plagiarism Detection; Similarity; Variable Dependence; Metrics
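A much-simplified flavor of the approach: if a plagiarist only renames variables, mapping identifiers to canonical names in order of first occurrence makes the renamed copy textually identical again. This toy skips the dependence analysis the paper uses to discover the mapping, and the keyword list is an illustrative subset:

```python
import re
from difflib import SequenceMatcher

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while"}  # illustrative subset

def canonicalize(src):
    """Rewrite identifiers as v0, v1, ... in order of first appearance,
    undoing plagiarism-by-renaming before textual comparison."""
    mapping = {}
    def repl(m):
        name = m.group(0)
        if name in KEYWORDS:
            return name
        return mapping.setdefault(name, "v%d" % len(mapping))
    return re.sub(r"[A-Za-z_]\w*", repl, src)

def similarity(a, b):
    """Textual similarity of the two canonicalized programs, in [0, 1]."""
    return SequenceMatcher(None, canonicalize(a), canonicalize(b)).ratio()
```

The dependence-based mapping of the paper is more robust than first-occurrence order, since it survives statement reordering as long as the variables' roles stay the same.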
{"title":"A Method of Detecting Program Plagiarism Based on Variable Dependence","authors":"Hong Yang, Lina Hong, Rong Chen, Yaqing Liu","doi":"10.1109/CISE.2009.5365846","DOIUrl":"https://doi.org/10.1109/CISE.2009.5365846","url":null,"abstract":"Program similarity means how similar comparable programs are by the computer-assisted analysis, with the aim to find equivalent code fragments. In this paper, we propose a semantic approach to detect program plagiarism by the computation of program similarity based on variable dependence. To do so, we compute dependence over program variables, find a variable mapping which maps a variable in a program to its counterpart in another program, then convert a program with the mapping into a version with can be compared in textual manner with another program, and thus compute the similarity of programs. An experiment is given to show the effectiveness of this method. Keywords-Program Plagiarism Detection; Similarity; Variable Dependence; Metrics","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117286527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5365324
Haina Hu, Lin Yao
DDoS flows that do not cut their sending rates after their packets are dropped hog the buffer space at routers and deprive all other flows of their fair share of bandwidth. Based on this network behavior, this paper studies DDoS defense mechanisms from the perspective of congestion control. In a simulated DDoS environment, it studies the RED (Random Early Detection) algorithm, a router-based congestion control strategy. Simulation results show that RED provides little protection from high-bandwidth flows that consume excessive bandwidth, which can result in extreme per-flow unfairness. Based on this observation, we put forward further improvements to router-based congestion control mechanisms. Keywords—DDoS; Random Early Detection; Congestion Control; NS
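For reference, RED's two core rules are an exponentially weighted average of the instantaneous queue length and a linear drop-probability ramp between two thresholds. A minimal sketch (parameter values are illustrative, not the paper's simulation settings; the "gentle" region above max_th is omitted):

```python
def update_avg(avg, queue_len, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - w) * avg + w * queue_len

def red_drop_probability(avg, min_th, max_th, max_p):
    """RED marking/drop probability as a function of the average queue size:
    0 below min_th, ramping linearly to max_p as avg approaches max_th,
    and 1 beyond max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)
```

Because the drop probability depends only on the aggregate average queue, every flow faces the same probability; an unresponsive DDoS flow simply keeps sending, which is exactly the per-flow unfairness the simulations expose.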
{"title":"Improvement for Congestion Control Algorithms under DDoS Attacks","authors":"Haina Hu, Lin Yao","doi":"10.1109/CISE.2009.5365324","DOIUrl":"https://doi.org/10.1109/CISE.2009.5365324","url":null,"abstract":"DDoS flows that do not cut down their sending rates after their packets are dropped will hog the buffer space at routers and deprive all other flows of their fair share of bandwidth. Based on the network behavior, this paper studies the defense mechanism of DDoS from the aspect of congestion control. And in the simulation environment of DDoS, this paper studies the RED (Random Early Detection) algorithm that is a congestion control strategy based on routers. Simulation results show that RED provides little protection from high bandwidth flows that take much wide bandwidth, which can result in extreme unfairness among per-flow. Based on the viewpoint, we put forward further improvement for the mechanism of congestion control based on routers. KeywordsDDoS; Random Early Detection; Congestion Control; NS","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120987363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5364452
Xiao-Hua Yang, Jie Liu, Tonglan Yu, Yang Luo, Qunyan Wu
Dynamic detection of likely program invariants is an available instrument for discovering contracts, in non-formal description, from large programs; it helps contract technology exert more influence on program quality assurance. Because research on invariant detection has only just started, rough detection usually relies on a hypothesis-verification approach that depends on the detector's experience and degree of understanding of the program under analysis, so it seriously lacks accuracy and efficiency. Before starting invariant detection, this paper divides invariants into two kinds based on relational database theory: functional invariants and non-functional ones. The paper focuses on an approach for detecting likely functional invariants, which establishes their existence by first discovering the functional dependence set of the program's variables and then, after deducing the dependence set, detecting the forms of the existing invariants. Experiments demonstrate that, compared with traditional hypothesis-verification approaches such as Daikon, this approach not only avoids blind detection, improving efficiency, but also reduces the possibility of missing important functional invariants.
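The core test behind a "functional" invariant can be stated simply: over the traced program states, variable b functionally depends on variable a if no value of a is ever observed with two different values of b. A toy sketch over traced values (the paper's full deduction over the dependence set is not reproduced here):

```python
def is_function_of(xs, ys):
    """True if the observed (x, y) pairs are consistent with y = f(x):
    every x value maps to exactly one y value across all states."""
    seen = {}
    for x, y in zip(xs, ys):
        if seen.setdefault(x, y) != y:   # same x seen with a different y
            return False
    return True

def functional_dependencies(traces):
    """traces: {var: [value at state 1, value at state 2, ...]}.
    Returns ordered pairs (a, b) where b is functionally determined by a."""
    deps = []
    for a in traces:
        for b in traces:
            if a != b and is_function_of(traces[a], traces[b]):
                deps.append((a, b))
    return deps
```

Only the pairs that pass this existence test need their invariant *form* (e.g. b = a*a) searched for, which is where the efficiency gain over blind hypothesis checking comes from.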
{"title":"Dynamically Discovering Functional Likely Program Invariants Based on Relational Database Theory","authors":"Xiao-Hua Yang, Jie Liu, Tonglan Yu, Yang Luo, Qunyan Wu","doi":"10.1109/CISE.2009.5364452","DOIUrl":"https://doi.org/10.1109/CISE.2009.5364452","url":null,"abstract":"Dynamic likely program invariant detection technology is an available instrument for discovering contract from large program in non-formal description. It is of benefit to contract technology exerting more influence on program quality assurance. Since the research of invariant detection technology has just started that the rough detection usually use hypothesis verification approach which relies on the experience of the detector and his degree of understanding of the detected program so that there is serious lack of accuracy and efficiency. This paper tempts to divide the invariants into two kinds that one is called functional invariant and the other is non-functional type based on relational data theory before starting the invariant detection. The paper focuses on the approach of detecting functional likely invariant, which accomplish detecting existence of them by discovering functional dependence set of the program variable at first and then detecting the forms of the existent invariants after deducing the function dependence set. Experiments demonstrate that this approach not only solves the problems of blind detection to improve the efficiency but also reduces the possibility of missing important functional invariants compared with the traditional hypothesis verification approach such as Daikon.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121001676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-12-28  DOI: 10.1109/CISE.2009.5362528
Zhenhao Li, Xiaojuan Zheng, Yonglong Wei
This paper presents a new way to verify, within the model-carrying code (MCC) approach for safe execution of untrusted code, whether a behavior model of the code satisfies a security policy. The new verification method is based on a new kind of model called logic semantic based automata (LSBA). Logic semantic based pushdown automata (LSBPDA) model the safety-related behaviors of code unknown to a user, and logic semantic based finite state automata (LSBFSA) model users' security policies. Verification is done by checking whether the language of the LSBPDA model of the untrusted code and the language of the LSBFSA model of the policy intersect. The new method is formal in nature and suitable for automating the verification step of the MCC method. Index Terms—MCC, safety of mobile code, formal method, safety model verification
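At its core, the intersection check is a product construction over the two automata. Here is a simplified finite-state sketch (FSA x FSA reachability rather than the paper's PDA x FSA, with a made-up encoding of a DFA as a `(start, accepting_set, transition_dict)` triple):

```python
def intersects(dfa_a, dfa_b):
    """Decide whether two DFAs accept a common word by depth-first search
    over the reachable states of their product automaton."""
    (start_a, acc_a, delta_a) = dfa_a
    (start_b, acc_b, delta_b) = dfa_b
    symbols = {s for (_, s) in delta_a} & {s for (_, s) in delta_b}
    stack, seen = [(start_a, start_b)], {(start_a, start_b)}
    while stack:
        p, q = stack.pop()
        if p in acc_a and q in acc_b:
            return True                       # some word drives both DFAs to acceptance
        for s in symbols:
            nxt = (delta_a.get((p, s)), delta_b.get((q, s)))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False
```

With a pushdown behavior model in place of the first DFA, the product is a pushdown automaton and the same question becomes a PDA emptiness check, which remains decidable; that is what keeps the MCC verification step automatable.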
{"title":"LSBA Based Security Verification in MCC","authors":"Zhenhao Li, Xiaojuan Zheng, Yonglong Wei","doi":"10.1109/CISE.2009.5362528","DOIUrl":"https://doi.org/10.1109/CISE.2009.5362528","url":null,"abstract":"This paper presents a new way to verify whether a behavior model of code satisfies a security policy in the model- carrying code(MCC) approach for safe execution of untrusted code. This new verification method based on a new kind of model called logic semantic based automata(LSBA). Logic semantic based pushdown automata(LSBPDA)is to model safety-related behaviors of codes unknown to a user and logic semantic based finite states automata(LSBFSA)is to model security policies of users. Verification is done by checking wether the language of the LSBPDA model of a policy and the language of the LSBFSA model of untrusted code intersect. This new method is formal in nature and suitable for automation of the verification step in MCC method. Index Terms—MCC, safety of mobile code, formal method, safety model verification","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127151635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}