Laurin Maurer, Flurin Roca, Noel Treffinger, Ludwig Henke, Vitus Hofmann, Oliver Leonhartsberger
This article describes how parking search times and occupancy in a highly attractive area in central Zurich (Switzerland) develop on a weekday morning when a farmer's market takes place on Bürkliplatz, a central market square inside the study area. Individual vehicles were tracked by bike, drivers were interviewed about their parking behaviour, and their routes were tracked using GPS. Additionally, parking occupancy was registered every fifteen minutes over a five-hour period. A connection between market opening hours and parking dynamics within the perimeter was observed. Drivers usually overestimate their parking search duration.
Tracking Parking Search and Occupancy in Zurich. Findings, 2023-03-16. DOI: 10.32866/001c.72793
How does ChatGPT introduce transport problems and solutions in North America? We analyze ChatGPT's answers to four prompts related to transport issues and solutions in the United States and Canada, and find that its answers generally align well with transport researchers' expectations. However, ChatGPT's capability to provide trustworthy or sound solutions may be limited by potential issues (e.g., geographic biases, inaccuracy) in its training data. ChatGPT might be a decent starting point for discussing transport issues and solutions, but one should be aware of its limitations.
How does ChatGPT Introduce Transport Problems and Solutions in North America? Junghwan Kim, Jinhyung Lee. Findings, 2023-03-06. DOI: 10.32866/001c.72634
Recent methods to measure multimodality consider only the diversity and evenness of mode use, ignoring that the classification of transport modes also matters. This study proposes a multigroup multimodality index to measure the extent of being multimodal at both the single-mode and mode-group levels in a nested manner. The index is compared with the two most commonly used indices, the Herfindahl-Hirschman index and the Shannon entropy index, to assess its reliability and improvement over existing approaches. Results show that the multigroup multimodality index can simultaneously distinguish the degree of being multimodal at both the mode level and the group level, which addresses the classification issue in measuring multimodality.
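For context, the two baseline indices the study compares against can be computed directly from a list of trip modes. This is a minimal sketch of those standard formulas; the paper's proposed multigroup index itself is not reproduced here.

```python
from collections import Counter
from math import log

def herfindahl_hirschman(trips):
    """Sum of squared mode shares: 1/n when all n modes are used
    equally, 1.0 for a single mode; lower means more multimodal."""
    counts = Counter(trips)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

def shannon_entropy(trips):
    """Entropy of mode shares; higher means more multimodal."""
    counts = Counter(trips)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())
```

Both measures treat every mode as an unrelated category, which is exactly the classification issue the multigroup index is designed to address: a car/bike split and a bus/tram split score identically, even though the latter stays within one mode group.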
Multigroup Multimodality Index: A Method to Solve the Issue of Transport Mode Classification in Measuring Multimodality. Xingxing Fu, Dea van Lierop, D. Ettema. Findings, 2023-03-02. DOI: 10.32866/001c.72072
This study evaluates the effect of an influencer campaign on e-scooter risk behavior among adolescent e-scooter users in Norway. The analysis shows no statistical differences in self-reported risk behaviors (dual riding, riding under the influence, and mobile phone use) between respondents who had seen one of the campaign films and respondents who had not. Nor did the campaign change norms or attitudes. Hence, the campaign did not appear to have the intended effects. On the contrary, differences in perceived attitudes, descriptive norms, and intentions were found, which could imply a backfire effect: respondents who had seen the campaign held poorer attitudes, were more likely to claim that the risky behaviors were normal, and were more inclined to perform some of them.
Evaluation of an Influencer Campaign on Social Media Targeting Young E-scooter Users. A. Fyhri, V. Milch, Ingunn Ellis, Katrine Karlsen. Findings, 2023-03-01. DOI: 10.32866/001c.71347
Pub Date: 2023-02-25. DOI: 10.48550/arXiv.2302.13139
Bruce W. Lee, J. Lee
We propose a novel adaptation of a pre-trained seq2seq model for readability assessment. We show that a seq2seq model (T5 or BART) can be adapted to discern which of two given texts is more difficult (pairwise). As an exploratory study in prompt-learning a neural network for text readability in a text-to-text manner, we report useful tips for future work on seq2seq training and ranking-based approaches to readability assessment. Specifically, we test nine input-output formats/prefixes and show that they can significantly influence final model performance. We also argue that combining text-to-text training with a pairwise ranking setup 1) enables leveraging multiple parallel text simplification datasets for teaching readability and 2) trains a neural model for the general concept of readability (and therefore better cross-domain generalization). Finally, we report 99.6% pairwise classification accuracy on Newsela and 98.7% on OneStopEnglish through a joint training approach. Our code is available at github.com/brucewlee/prompt-learning-readability.
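The abstract does not list the nine input-output formats the paper tests; the sketch below is a hypothetical example of what one such pairwise text-to-text format could look like (the prefix string and label tokens are illustrative assumptions, not the paper's actual choices).

```python
def make_pairwise_input(text_a, text_b, prefix="which text is harder:"):
    # Serialize a pair of texts into a single seq2seq input string.
    # The prefix is one hypothetical variant; the paper compares nine.
    return f"{prefix} text1: {text_a} text2: {text_b}"

def make_target(harder_index):
    # Text-to-text target: the model generates a token naming the
    # harder text rather than a numeric class.
    return "text1" if harder_index == 0 else "text2"
```

Framing the label itself as generated text is what lets a T5- or BART-style model be fine-tuned on this task without a task-specific classification head.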
Prompt-based Learning for Text Readability Assessment. Findings, pp. 1774-1779.
Pub Date: 2023-02-24. DOI: 10.48550/arXiv.2302.12578
Krithika Ramesh, Sunayana Sitaram, M. Choudhury
With language models becoming increasingly ubiquitous, it has become essential to address their inequitable treatment of diverse demographic groups and factors. Most research on evaluating and mitigating fairness harms has concentrated on English, while multilingual models and non-English languages have received comparatively little attention. This paper surveys fairness in multilingual and non-English contexts, highlighting the shortcomings of current research and the difficulties faced by methods designed for English. We contend that the multitude of diverse cultures and languages across the world makes comprehensive coverage in fairness datasets infeasible. Thus, the measurement and mitigation of biases must evolve beyond current dataset-driven practices, which are narrowly focused on specific dimensions and types of biases and therefore impossible to scale across languages and cultures.
Fairness in Language Models Beyond English: Gaps and Challenges. Findings, pp. 2061-2074.
Pub Date: 2023-02-20. DOI: 10.48550/arXiv.2302.09820
Hanxu Hu, Yunqing Liu, Zhongyi Yu, Laura Perez-Beltrachini
In this work we study user-controlled table-to-text generation, where users explore the content of a table by selecting cells and reading a natural language description of them automatically produced by a natural language generator. Such generation models usually learn from carefully selected cell combinations (clean cell selections); in practice, however, users may select unexpected, redundant, or incoherent cell combinations (noisy cell selections). In experiments, we find that models perform well on test sets drawn from the same distribution as the training data, but their performance drops when evaluated on realistic noisy user inputs. We propose a fine-tuning regime with additional user-simulated noisy cell selections. Models fine-tuned with the proposed regime gain 4.85 BLEU points on noisy user test cases and 1.4 on clean test cases, and achieve comparable state-of-the-art performance on the ToTTo dataset.
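The abstract does not specify how the user-simulated noise is generated; one plausible sketch is to randomly drop cells from a clean selection and add unselected ones, with drop/add probabilities as illustrative assumptions.

```python
import random

def noisy_selection(clean_cells, all_cells, p_drop=0.2, p_add=0.2, seed=0):
    """Simulate a noisy user selection from a clean one.

    Hypothetical noise model for illustration: each clean cell is
    dropped with probability p_drop, and each unselected cell is
    added with probability p_add.
    """
    rng = random.Random(seed)
    kept = [c for c in clean_cells if rng.random() > p_drop]
    added = [c for c in all_cells
             if c not in clean_cells and rng.random() < p_add]
    return kept + added
```

Fine-tuning on pairs of such perturbed selections with the original target descriptions is one way a model could be exposed to the redundant or incoherent inputs real users produce.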
Improving User Controlled Table-To-Text Generation Robustness. Findings, pp. 2272-2279.
Pub Date: 2023-02-19. DOI: 10.48550/arXiv.2302.09685
Ankan Mullick, Ishani Mondal, Sourjyadip Ray, R. Raghav, G. Chaitanya, Pawan Goyal
Scarcity of data and technological limitations for resource-poor languages in developing countries like India pose a threat to the development of sophisticated NLU systems for healthcare. To assess the current status of state-of-the-art language models in healthcare, this paper first proposes two healthcare query datasets, Indian Healthcare Query Intent-WebMD and 1mg (IHQID-WebMD and IHQID-1mg), and one real-world Indian hospital query dataset, in English and multiple Indic languages (Hindi, Bengali, Tamil, Telugu, Marathi and Gujarati), all annotated with query intents and entities. Our aim is to detect query intents and the corresponding entities. We perform extensive experiments on a set of models in various realistic settings and explore two scenarios: access to English data only (less costly) and access to target-language data (more expensive). We analyze context-specific practical relevance through empirical analysis. The results, expressed in terms of overall F-score, show that our approach is practically useful for identifying intents and entities.
Intent Identification and Entity Extraction for Healthcare Queries in Indic Languages. Findings, pp. 1825-1836.
Pub Date: 2023-02-14. DOI: 10.48550/arXiv.2302.06829
Hossein Rajaby Faghihi, Parisa Kordjamshidi, C. Teng, J. Allen
In this paper, we investigate whether symbolic semantic representations, extracted from deep semantic parsers, can help with reasoning over the states of entities involved in a procedural text. We consider a deep semantic parser (TRIPS) and semantic role labeling as two sources of semantic parsing knowledge. First, we propose PROPOLIS, a symbolic parsing-based procedural reasoning framework. Second, we integrate semantic parsing information into state-of-the-art neural models to conduct procedural reasoning. Our experiments indicate that explicitly incorporating such semantic knowledge improves procedural understanding. This paper also presents new metrics for evaluating procedural reasoning tasks that clarify the challenges and identify differences among neural, symbolic, and integrated models.
The Role of Semantic Parsing in Understanding Procedural Text. Findings, pp. 1792-1804.
Pub Date: 2023-02-13. DOI: 10.48550/arXiv.2302.06690
Jaeyoung Kim, Dongbin Na, Sungchul Choi, Sungbin Lim
While pre-trained language models (PLMs) have become a de facto standard for accurate text classification, recent studies find that PLMs often predict over-confidently. Although calibration methods such as ensemble learning and data augmentation have been proposed, most have been verified on computer vision benchmarks rather than on PLM-based text classification tasks. In this paper, we present an empirical study of confidence calibration for PLMs, covering three categories of methods: confidence penalty losses, data augmentations, and ensembles. We find that an ensemble model overfitted to the training set shows sub-par calibration performance, and we also observe that PLMs trained with a confidence penalty loss exhibit a trade-off between calibration and accuracy. Building on these observations, we propose the Calibrated PLM (CALL), a combination of calibration techniques. CALL compensates for shortcomings that may occur when a calibration method is used individually and improves both classification and calibration accuracy. We extensively study design choices in CALL's training procedure and provide a detailed analysis of how calibration techniques affect the calibration performance of PLMs.
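The abstract does not define how calibration performance is measured; as background, the standard metric such studies typically report, expected calibration error (ECE), can be sketched as follows (binning scheme and bin count are the usual conventions, not details taken from this paper).

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the
    count-weighted average gap between each bin's mean confidence
    and its empirical accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

An over-confident classifier shows up here as bins whose mean confidence far exceeds their accuracy, which is the gap calibration methods like those studied in CALL aim to close.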
Bag of Tricks for In-Distribution Calibration of Pretrained Transformers. Findings, p. 551.