Designing adaptive mobile applications
S. Haseloff
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905035
Standard 3-tier architectures do not provide a sufficient basis for the development of services for mobile users. The specific characteristics of these applications require different approaches to application design. In view of this situation, the development of specialized application architectures for mobile computing will continue to be necessary. Our approach is to support the development of such software systems by providing a specialized layered design for the middle tier of mobile applications, as well as reusable components and services along with design and architecture patterns. This paper presents a design solution for the problem of handling multiple appliances and multiple information sources on the side of the stationary application. We illustrate our conclusions with an example from the field of mobile document management.
A parallel neurochip for neural networks implementing the reactive tabu search algorithm: application case studies
G. Danese, I. Lotto, F. Leporati, Alessio Quaglini, S. Ramat, G. Tecchiolli
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905053
In this work we present two different applications implemented on the Totem Nc3001 neurocomputer from Neuricam Inc. The goal of the experimentation is to test, on real problems, the performance of this powerful parallel unit, which consists of 32 Digital Signal Processors (DSPs), and to evaluate its suitability for neural network applications. The first problem is a typical classification task in which the network recognises which points belong to which regions of a 2D space. The second problem is computationally heavier and consists of a network able to reproduce eye movements when properly stimulated. A comparison is reported between Matlab implementations or handwritten code run on workstations and the performance obtained from the Totem chip.
CRAFT: a framework for integration facilitation in cross-organisational distributed systems
Heiko Ludwig, Y. Hoffner
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905058
The increasing proliferation of complex electronic services, particularly in the business-to-business area, entails the creation and management of complex, cross-organisational distributed systems. An important challenge with such systems is the integration of different core and administrative services into a consistent whole. Furthermore, extra functionality is needed to facilitate the provision and consumption of the resulting service both inside and outside the organisation. This functionality may have to deal with a host of issues, such as facilitating co-operation while maintaining autonomy; remuneration, monitoring, auditing and management of the integrated service; and the translation between internal and external models, processes and information. All of the above are aimed at ensuring that the contractual obligations of the business relationship are met. What is needed is a component that supports the integration of the core and administrative services while providing added functionality for crossing the organisational boundary; we term this component an "integration facilitator". The CrossFlow project dealt with the dynamic establishment of cross-organisational business relationships. CRAFT (CrossFlow Runtime Framework Technology) provides a framework for building integration facilitators. Using this framework, the core and administrative services of an organisation can be quickly integrated and extended to create different business-level services to suit the changing needs of organisations and their dynamic business partnerships.
The need for topological time and location information in mobile e-business applications
Bernd Schopp, Axel Röpnack, Markus Greunz
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905037
This paper combines two major streams of recent research: the concepts of new media and the paradigm of platforms for mobile e-business applications. It shows that the 'time-space-intention' topology has to be investigated in order to design successful e-business applications in mobile scenarios. The paper is divided into four sections. After a brief introduction, we apply a media reference model to show in which phases of a mobile e-business transaction the issues of personalization and individualization have to be analyzed, and what new parameters of personalization and individualization have to be taken into account in mobile scenarios. We then analyze which parameters for personal filtering mechanisms are critical for adequate decision making in mobile environments. Finally, we describe an information filtering mechanism that is based on the agent's 'time-space-intention' topology and the cross-agent 'time-space-intention' topology.
Using groups to support the interconnection of parallel applications
P. Medeiros, J. Cunha
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905022
Among efforts to build heterogeneous applications through the interconnection of independently developed components, we have proposed a group-oriented approach called PHIS. In this paper we identify several types of interaction between application components and show how a group-oriented approach eases the modelling of these interaction patterns. We briefly describe the PHIS primitives and, through three examples, show how the PHIS system supports the group-oriented approach. We compare PHIS with several related systems and highlight its distinctive characteristics. Our experience in building a medium-sized application, a parallel genetic algorithm execution environment, supports our claims about the power and flexibility of PHIS as an interconnection model.
DVSA and SHOE: support to shared data structures on distributed memory architectures
F. Baiardi, D. Guerri, P. Mori, L. Moroni, L. Ricci
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905039
With reference to numerical iterative algorithms, this paper exemplifies a methodology for designing the runtime support of applications sharing a set of data structures on a distributed memory architecture. According to the methodology, the support is decomposed into two layers: an application-independent layer, supplying the basic functionalities to access a shared structure, and an application-dependent layer that implements the caching and prefetching strategies most appropriate for the considered application. Starting from this assumption, we introduce DVSA, a package that implements the application-independent layer, and SHOB, one of the packages that can be developed on top of DVSA. SHOB defines a weak consistency memory model in which the user controls the amount of inconsistency due to caching and prefetching. The model is well suited to implementing iterative numerical algorithms. Experimental results of the methodology are presented for a uniform multi-grid method for solving partial differential equations.
Cyclic reduction on distributed shared memory machines
Sebastian Allmann, T. Rauber, G. Rünger
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905055
Cyclic reduction for the solution of linear equation systems with banded matrices exhibits fine- to medium-grain potential parallelism with regular but diverse data dependencies. We consider the parallel implementation of this algorithm on a distributed shared memory machine with different programming models. As the distributed shared memory machine we use the Convex SPP2000, and we compare the runtime results with results from a Cray T3E.
The recovery language approach for software-implemented fault tolerance
V. D. Florio, Geert Deconinck, R. Lauwereins
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905070
We describe a novel approach to software-implemented fault tolerance that separates error detection from error recovery and offers a distinct programming and processing context for the latter. This allows the application developer to address the non-functional aspects of error recovery separately from the functional behaviour that the user application is supposed to have in the absence of faults. We conjecture that, in this way, only a limited amount of non-functional code intrusion affects the user application, while the bulk of the strategy for coping with errors is expressed by the user in a "recovery script", conceptually as well as physically distinct from the functional application layer. Such a script is written in what we call a "recovery language", i.e. a specialised linguistic framework devoted to the management of fault tolerance strategies that allows the developer to express scenarios of isolation, reconfiguration, and recovery. These are executed on meta-entities of the application with physical or logical counterparts (processing nodes, tasks, or user-defined groups of tasks). The developer is therefore able to modify the fault tolerance strategy with few or no modifications to the application part, or vice versa, tackling each of these two fronts more easily and effectively. This can result in better maintainability of the target fault-tolerant application and supports the portability of the service when the application is moved to different unfavourable environments. The paper positions and discusses the recovery language approach and a prototype implementation for embedded applications, developed within the TIRAN project, on a number of distributed platforms.
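The separation the paper argues for, functional code on one side and a recovery strategy on the other, can be sketched as a table of recovery actions kept apart from the tasks themselves. This is our own minimal Python illustration of the idea; it does not reproduce TIRAN's recovery language or its syntax, and all names below are hypothetical.

```python
# Sketch: error detection stays in the functional layer (tasks raise
# exceptions); the recovery strategy lives in a separate "script",
# here a plain table mapping (entity, error type) -> recovery action.
# Illustrative only; TIRAN's actual recovery language is not shown.

recovery_script = {}   # (entity, exception type) -> action

def on_error(entity, exc_type, action):
    """Register a recovery action, kept apart from functional code."""
    recovery_script[(entity, exc_type)] = action

def run_task(entity, task, executor):
    """Run a functional task; on failure, look up and apply the recovery action."""
    try:
        return task()
    except Exception as exc:
        action = recovery_script.get((entity, type(exc)))
        if action is None:
            raise                  # no strategy registered: propagate the error
        return action(executor)

# Functional layer: a task that fails on the primary node.
def flaky_read():
    raise IOError("primary node down")

# Recovery layer: reconfigure to a backup node, then retry there.
on_error("reader", IOError, lambda run: run("backup"))

result = run_task("reader", flaky_read, executor=lambda node: f"read from {node}")
```

Changing the recovery strategy here means editing only the registered actions, leaving `flaky_read` untouched, which mirrors the maintainability argument made in the abstract.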
Information visualisation in numerical analysis
Elizabeth Jayne Stuart, D. Bustard, J. Weston
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905050
Within numerical analysis, the requirement for visualisation of numerical data continues to grow as computational capabilities increase. Due to the vast amounts of data now produced, traditional mathematical methods of analysing data are cumbersome and time-consuming. This research aims to capitalise on advances in information visualisation technology to support the investigation of numerical data. This paper presents a prototype software tool developed to address the information visualisation requirements of numerical analysts. Experience of using the tool and an overview of the results are reported. Conclusions are drawn and possible future uses of the tool are discussed.
Content-based information retrieval using an embedded neural associative memory
M. Schmidt, U. Rückert
Pub Date: 2001-02-07. DOI: 10.1109/EMPDP.2001.905073
In this paper a novel approach to the storage and access of an index used in Internet search engines (information retrieval) is presented. The index provides a mapping from search terms to documents. The Binary Neural Associative Memory (BiNAM) stores the index by associating document signatures and document locations in a distributed and content-addressable way. The system presented here has a high memory efficiency of more than 90%. The trade-off between memory consumption and precision of the query results is examined. A scalable system architecture is presented that exploits the parallel structure of the BiNAM; the association time is estimated to be orders of magnitude faster than a software solution. The system is realized as a modular PCI architecture. The maximum capacity of the first version is 768 MBytes of memory, which allows the implementation of a BiNAM of 80 K neurons with 80 K inputs each. In such a system approximately 64 million associations can be stored and accessed within 330 ns per association.