H. Camatte, Guillaume Daudin, V. Faubert, A. Lalliard, Christine Rifflart
We analyse the elasticity of the household consumption expenditure (HCE) deflator to the exchange rate, using world input-output tables (WIOT) from 1995 to 2019. In line with the existing literature, we find a modest output-weighted elasticity of around 0.1. This elasticity is stable over time but heterogeneous across countries, ranging from 0.05 to 0.22. Such heterogeneity mainly reflects differences in foreign product content of consumption and intermediate products. Direct effects through imported consumption and intermediate products entering domestic production explain most of the transmission of an exchange rate appreciation to domestic prices. By contrast, indirect effects linked to participation in global value chains play a limited role. Our results are robust to using four different WIOT datasets. As WIOT are data-demanding and available with a lag of several years, we extrapolate a reliable estimate of the HCE deflator elasticity from 2015 onwards using trade data and GDP statistics.
{"title":"Estimating the Elasticity of Consumer Prices to the Exchange Rate: An Accounting Approach","authors":"H. Camatte, Guillaume Daudin, V. Faubert, A. Lalliard, Christine Rifflart","doi":"10.2139/ssrn.3943744","DOIUrl":"https://doi.org/10.2139/ssrn.3943744","url":null,"abstract":"We analyse the elasticity of the household consumption expenditure (HCE) deflator to the exchange rate, using world input-output tables (WIOT) from 1995 to 2019. In line with the existing literature, we find a modest output-weighted elasticity of around 0.1. This elasticity is stable over time but heterogeneous across countries, ranging from 0.05 to 0.22. Such heterogeneity mainly reflects differences in foreign product content of consumption and intermediate products. Direct effects through imported consumption and intermediate products entering domestic production explain most of the transmission of an exchange rate appreciation to domestic prices. By contrast, indirect effects linked to participation in global value chains play a limited role. Our results are robust to using four different WIOT datasets. As WIOT are data-demanding and available with a lag of several years, we extrapolate a reliable estimate of the HCE deflator elasticity from 2015 onwards using trade data and GDP statistics.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122280268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Hansen, Maria Stoettrup Schioenning Larsen, A. H. Lassen
Digital transformation is a challenging task for companies and is frequently emphasised as being especially difficult for SMEs, as they may lack the resources for large investments in new technology, high-tech competences and a clear vision of digitalisation. Learning factories have emerged as a field of research that addresses the challenges of digital transformation. This is typically done via digital or physical learning factories focused on training skills and competences for specific technologies, on strategic aspects of digitalisation and on the overall benefits of technology. While learning factory research has so far focused on how to teach and learn new technological skills, we argue that embracing new processes or technologies relies not only on developing specific technological skills; it also relies heavily on developing the right organisational environment, one that supports knowledge creation, knowledge sharing, continuous learning and empowerment. We propose that a valuable addition to learning factory research may be reached through the question: “Which practices do SMEs use to facilitate organisational anchoring of new knowledge gained through learning factories?” We approach this question through empirical research on two SMEs that have participated in learning factory processes. The study follows a case study methodology and draws on interviews and observations from workshops with managers from the two companies. In particular, when dealing with learning factory programmes aimed at Industry 4.0 in SMEs, we emphasise that (i) it is important to consider the organisational environment of SMEs with regard to their ability to empower employees and assimilate new knowledge, i.e. their absorptive capacity, and (ii) learning factories have the potential to support a company's absorptive capacity.
{"title":"The Role of Absorptive Capacity and Employee Empowerment in Digital Transformation of SMEs","authors":"A. Hansen, Maria Stoettrup Schioenning Larsen, A. H. Lassen","doi":"10.2139/ssrn.3859277","DOIUrl":"https://doi.org/10.2139/ssrn.3859277","url":null,"abstract":"Digital transformation is a challenging task for companies and is frequently emphasised as being especially difficult for SMEs as they may be constrained on resources for large investments in new technology, lacking high-tech competences and without clear vision of digitalisation. Learning factories has emerged as a field of research, which addresses challenges of digital transformation. This is typically done via digital or physical learning factories focused on training skills and competences for specific technologies, strategic aspects of digitalisation and overall benefits of technology. While the learning factory research so far has focused on how to teach/learn new technological skills, we argue that embracing new processes or technologies relies not only on developing specific technological skills; it also relies heavily on developing the right organisational environment, which supports knowledge creation, knowledge sharing, continuous learning, and empowerment. We propose that a valuable addition to the learning factory research may be reached through the question: “Which practices do SMEs use to facilitate organisational anchoring of new knowledge gained through learning factories?” We approach this question based on empirical research of two SMEs that have participated in learning factory processes. The study follows a case study methodology and draws on interviews and observations from workshops with managers from the two companies. In particular, when dealing with learning factory programmes aimed towards Industry 4.0 in SMEs, we emphasise: (i) It is important to consider the organisational environment of the SMEs regarding their ability to empower employees and assimilate new knowledge in their organisation, i.e., their absorptive capacity, and (ii) learning factories have the potential to support a company's absorptive capacity.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122873285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
By making the App Store the only official channel for iPhone owners to buy apps, Apple has created the only market in which iPhone owners and app developers may transact their business. Apple exploits the bottleneck that it creates by imposing a 15 percent or 30 percent commission on transactions between iPhone owners and app developers. The array of Apple litigation elicited by this conduct may pose a case of first impression, since there does not appear to be a pigeonhole for Apple’s conduct. Apple neither buys nor sells third-party apps and therefore it does not exercise monopsony or monopoly power.
If plaintiffs frame their antitrust challenges correctly, a court will have to determine whether Apple’s conduct—use of its proprietary technology and threats aimed at both app developers and iPhone owners—offends Section 2 of the Sherman Act. If the court finds that Apple has violated Section 2 of the Sherman Act, then it must also decide who has suffered antitrust injury and who has standing to sue under Section 4 of the Clayton Act. We offer an economic analysis of these issues.
{"title":"Apple's Mounting App Store Woes","authors":"R. Blair, Tirza J. Angerhofer","doi":"10.2139/ssrn.3868309","DOIUrl":"https://doi.org/10.2139/ssrn.3868309","url":null,"abstract":"By making the App Store the only official channel for iPhone owners to buy apps, Apple has created the only market in which iPhone owners and app developers may transact their business. Apple exploits the bottleneck that it creates by imposing a 15 percent or 30 percent commission on transactions between iPhone owners and app developers. The array of Apple litigation elicited by this conduct may pose a case of first impression, since there does not appear to be a pigeonhole for Apple’s conduct. Apple neither buys nor sells third-party apps and therefore it does not exercise monopsony or monopoly power. <br><br>If plaintiffs frame their antitrust challenges correctly, a court will have to determine whether Apple’s conduct—use of its proprietary technology and threats aimed at both app developers and iPhone owners—offends Section 2 of the Sherman Act. If the court finds that Apple has violated Section 2 of the Sherman Act, then it must also decide who has suffered antitrust injury and who has standing to sue under Section 4 of the Clayton Act. We offer an economic analysis of these issues.<br>","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115376923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rise of the Internet of Things (IoT) and the development of 5G are set to add a new layer of complexity to the current practice of standard essential patents (SEPs) licensing. While, until recently, the debate has centred on the nature of fair, reasonable and non-discriminatory (FRAND) commitments and the mechanisms to avoid hold-up and reverse hold-up problems between licensors and licensees, a new hotly-debated issue has now emerged. At its core is the question of whether SEP holders should be required to grant a FRAND licence to any implementer seeking a licence, including component makers (so-called ‘licence-to-all’ approach), or if they should be allowed freely to target the supply chain level at which the licence is to be granted (so-called ‘access-for-all’ approach). After providing an up-to-date overview of the current legal and economic debate, the paper focuses on the most recent antitrust case law dealing with the matter on both sides of the Atlantic and argues that no sound economic and legal bases which favour licence-to-all solutions can be identified.
{"title":"SEPs Licensing Across the Supply Chain: An Antitrust Perspective","authors":"O. Borgogno, G. Colangelo","doi":"10.2139/ssrn.3766118","DOIUrl":"https://doi.org/10.2139/ssrn.3766118","url":null,"abstract":"The rise of the Internet of Things (IoT) and the development of 5G are set to add a new layer of complexity to the current practice of standard essential patents (SEPs) licensing. While, until recently, the debate has centred on the nature of fair, reasonable and non-discriminatory (FRAND) commitments and the mechanisms to avoid hold-up and reverse hold-up problems between licensors and licensees, a new hotly-debated issue has now emerged. At its core is the question of whether SEP holders should be required to grant a FRAND licence to any implementer seeking a licence, including component makers (so-called ‘licence-to-all’ approach), or if they should be allowed freely to target the supply chain level at which the licence is to be granted (so-called ‘access-for-all’ approach). After providing an up-to-date overview of the current legal and economic debate, the paper focuses on the most recent antitrust case law dealing with the matter on both sides of the Atlantic and argues that no sound economic and legal bases which favour licence-to-all solutions can be identified.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129360493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Air quality is a major concern in cities because of the effects of air pollution on public health and the environment. Air pollution kills several million people every year, and according to the WHO about 9 out of 10 people breathe polluted air. Accurate air quality monitoring helps in assessing pollutant levels against accepted ambient air quality standards. Monitoring is carried out to keep track of the quality of the air, to collect pollutant information and to support purification of the air. A real-time air monitoring system with high spatio-temporal resolution is essential because of the limited data availability and non-scalability of conventional air pollution monitoring systems. Currently, researchers are focusing on next-generation air pollution monitoring systems for smart cities and have achieved significant breakthroughs by utilizing advanced sensing technologies, micro-electro-mechanical systems (MEMS), wireless sensor networks (WSN), low-power wide-area networks (LPWAN), the internet of things (IoT) and cloud computing. Implementation of a real-time monitoring system requires suitable sensors to collect ambient air quality data, an efficient wide-area network for communication, and a system to store, process and visualize the data. This paper provides an insight into the design and development of a low-cost, low-power and accurate real-time air pollution monitoring system incorporating these advanced technologies.
{"title":"An Analysis on the Implementation of Air Pollution Monitoring System","authors":"Harinarayanan P, U. S. Kumar","doi":"10.2139/ssrn.3791147","DOIUrl":"https://doi.org/10.2139/ssrn.3791147","url":null,"abstract":"Air quality is a major concern in cities because of the effect of air pollution on public health and the environment. Air pollution kills several million humans every year. WHO says about 9 out of 10 people breathe polluted air. Accurate air quality monitoring helps in assessing the pollutant levels with respect to the accepted ambient air quality standards. Monitoring is done in order to keep track of the quality of air and to collect pollutant information and to purify the air. Real-time air monitoring system with high-speed ratio-temporal resolution is essential because of the limited data availability and non-scalability of conventional air pollution monitoring systems. Currently, researchers are focusing on the concept of the next generation air pollution monitoring systems for smart cities and have achieved a significant breakthrough by utilizing advanced sensing technologies, micro-electro-mechanical systems (MEMS), wireless sensor networks (WSN), low power wide area networks (LPWAN), internet of things (IoT) and cloud computing. Implementation of a real time monitoring system requires suitable sensors to collect ambient air quality data, an efficient wide area network for communication, and a system to store, process, and visualize the data. This paper is an insight to design and develop a low cost, low power and accurate realtime air pollution monitoring system by incorporating advanced technologies.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124558327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A brain tumor is an abnormality in brain cells caused by mutations. These tumors are detected using Magnetic Resonance Imaging (MRI) scans. Researchers have been working on automated brain tumor detection and classification techniques to assist doctors in the diagnosis process. The MRI scans obtained are sometimes affected by noise, and image denoising techniques are used to eliminate it. However, these techniques remove the noise at the cost of blurring the edges, lowering the resolution and quality of the image. Retaining the edges present in a brain tumor image is very important for further processing. This paper presents a brain tumor denoising technique based on edge-adaptive total variation. The proposed algorithm analyses the edges present at every pixel while denoising, using the gradient angle at that pixel. This enhances the performance of the algorithm by retaining the edges of the image while denoising. The algorithm has been compared with existing techniques and has proven to be very effective in removing noise from the image.
{"title":"Brain Tumor Image De-noising Using Edge Adaptive Total Variation Denoising Algorithm","authors":"Snehalatha V, S. Patil","doi":"10.2139/ssrn.3736536","DOIUrl":"https://doi.org/10.2139/ssrn.3736536","url":null,"abstract":"Brain tumor is an abnormality in brain cells caused due to mutations in brain cells. Detection of these tumors is done by using Magnetic Resonance Imaging (MRI) scanning. Researchers have been working on the automated brain tumor detection and classification techniques to assist doctors in diagnosis process. The MRI scans obtained are sometimes affected by noise. To eliminate this noise, image denoising techniques are used. But these techniques remove the noise at the cost of blurring the edges by lowering the resolution and the quality of the image. Retaining the edges present in the brain tumor image is very important for further processing. This paper presents a brain tumor denoising technique by using Edge adaptive total variation. The proposed algorithm analyses the edges present at every pixel while denoising, by using the gradient angle at the pixel. This enhances the performance of the algorithm by retaining the edges of the image while denoising. The algorithm has been compared with existing techniques and has proven to be very effective in removing noise from the image.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131187566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The evolutionary dynamics of a digital economy differ in important and systematic ways from those of an industrial economy. We therefore require modified or reconstructed theory and policy. This is the research agenda for evolutionary economics.
{"title":"Evolution of the Digital Economy: A Research Program for Evolutionary Economics","authors":"J. Potts","doi":"10.2139/ssrn.3736320","DOIUrl":"https://doi.org/10.2139/ssrn.3736320","url":null,"abstract":"The evolutionary dynamics of a digital economy are different in important and systematic ways to an industrial economy. We therefore require modified or reconstructed theory and policy. This is the research agenda for evolutionary economics.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132123584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The internet can broadly be divided into three parts: the surface web, the deep web and the dark web, the last of which offers anonymity to its users and hosts [1]. The deep web refers to content that is not indexed by search engines such as Google, and users must use Tor to visit sites on the dark web [2]. About ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg: people can see only the small portion above the surface, while the largest part is hidden under the sea [3, 4, 5]. Basic methods of graph theory and data mining used in social network analysis can be applied to understand the deep web and detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus is to develop a standard research methodology for understanding the deep web that will support researchers, academics and law enforcement agencies in strengthening social stability and ensuring peace locally and globally.
{"title":"An Empirical Study of Deep Web based on Graph Analysis","authors":"M. Morshed","doi":"10.2139/ssrn.3720454","DOIUrl":"https://doi.org/10.2139/ssrn.3720454","url":null,"abstract":"The internet can broadly be divided into three parts: surface, deep and dark among which the latter offers anonymity to its users and hosts [1]. Deep Web refers to an encrypted network that is not detected on search engine like Google etc. Users must use Tor to visit sites on the dark web [2]. Ninety six percent of the web is considered as deep web because it is hidden. It is like an iceberg, in that, people can just see a small portion above the surface, while the largest part is hidden under the sea [3, 4, and 5]. Basic methods of graph theory and data mining, that deals with social networks analysis can be comprehensively used to understand and learn Deep Web and detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanism and tools to monitor it. In this proposed study, our focus will be to develop standard research mechanism to understand the Deep Web which will support the researchers, academicians and law enforcement agencies to strengthen the social stability and ensure peace locally & globally.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114399919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The United Nations has issued a Discussion Draft of proposed changes to Article 12, on royalties, of the UN Model Double Taxation Convention. The proposal would add payments for the use of computer software to the definition of royalties in that Article. This article first examines the history of and the root cause behind the controversy and suggests changes to the UN Commentary that depart from the wording of the OECD Commentary on the same Article. It then comments on the validity and practicality of the arguments for and against the proposed change contained in the Discussion Draft.
{"title":"United Nations Model Tax Convention – Proposed Inclusion of Software in the Definition of Royalties in Article 12: Comments on the 2020 Discussion Draft","authors":"Ganesh Rajgopalan","doi":"10.2139/ssrn.3715609","DOIUrl":"https://doi.org/10.2139/ssrn.3715609","url":null,"abstract":"United Nations has issued a Discussion Draft of the proposed changes to Article 12 relating to Royalties contained in the UN Model on Double Tax Conventions. The proposal intends to add payments for use of computer software in the definition of Royalties in that Article. The article first examines the history and the root cause behind the controversy and suggest changes to the UN Commentary by departing from the wordings of the OECD Commentary on the same Article. The article then comments on the arguments for and against the proposed change contained in the Discussion Draft for their validity and their practicality.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134023315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the use of online transactions increases day by day, security becomes harder to manage. Blockchain enables peer-to-peer transfer of digital assets without any intermediaries, in a secure manner, through verification and validation operations performed by the miner nodes of a decentralized network. Blockchain technology also underpins cryptocurrencies such as Bitcoin and Ethereum, allowing amounts to be transferred digitally over secure communication.
{"title":"Survey on Blockchain -Future of Security for Cryptocurrency- Bitcoin and Ethereum","authors":"BRIJESHKUMAR Y. Panchal, Urvashi M. Chaudhari","doi":"10.7753/ijcatr0909.1001","DOIUrl":"https://doi.org/10.7753/ijcatr0909.1001","url":null,"abstract":"As the use of online transaction is increasing day by day, the security measure parameter is difficult to manage. In that case, Blockchain enables peer-to-peer transfer of digital assets without any intermediaries in a secure manner with the use of verification and validation operation by different miner nodes of decentralized network. Blockchain technology also supports cryptocurrencies like <br>bitcoin and ethereum for amount transfer digitally with secure communication.","PeriodicalId":406666,"journal":{"name":"Applied Computing eJournal","volume":"387 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132357323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}