Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.556267
Rutger F. H. Hofman
An architecture that is a hybrid of local memory and shared memory is described in this report: it uses dual ported memories (DPMs), each accessed by two processors. Each processor is connected to a number of DPMs. The benefit gained by using a DPM as a shared memory between two processors appears from task allocation results: task transport costs are avoided when a task, newly created in DPM d by one of d’s two processors, is allocated to the other processor at d. For a number of task allocation strategies, simulation studies show that the fraction of tasks that benefit from this optimisation decreases as the number of processors in the multiprocessor grows. Even for larger numbers of processors, however, this fraction remains considerably higher than the fraction under random allocation.
{"title":"Evaluation of Dual Ported Memories from the Task Level","authors":"Rutger F. H. Hofman","doi":"10.1109/DMCC.1990.556267","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556267","url":null,"abstract":"An architecture, which is a hybrid of local memory and shared memory, is described in this report: it uses dual ported memories (DPMs), each accessed by two processors. Each processor is connected to a number of DPMs. The profit that is gained by using a DPM as a shared memory between two processors appears from task allocation results: task transport costs are avoided when a task, newly created in DPM d by one of d’s two processors, is allocated to the other processor at d. For a number of task allocation strategies, simulation studies show that the fraction of the tasks that benefit from this optimisation decreases with the number of processors in the multiprocessor. For larger numbers of processors, this fraction is considerably higher than the fraction under random allocation.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125575999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
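The allocation optimisation described in the abstract can be illustrated with a toy simulation. The ring-of-DPMs topology, the function name, and the parameters below are assumptions made for illustration, not the paper's actual model:

```python
import random

def allocate_tasks(n_procs, n_tasks, dpm_aware, rng):
    """Toy model: each processor shares one DPM with its ring
    neighbour; a task created in DPM d avoids transport cost if it is
    allocated to either of d's two processors."""
    free = 0
    for _ in range(n_tasks):
        creator = rng.randrange(n_procs)
        partner = (creator + 1) % n_procs  # other processor on the shared DPM
        if dpm_aware:
            target = partner                     # stay on the shared DPM
        else:
            target = rng.randrange(n_procs)      # random allocation
        if target in (creator, partner):
            free += 1                            # no task transport needed
    return free / n_tasks
```

Under this toy model a DPM-aware strategy avoids transport for every task, while random allocation over 16 processors avoids it for only about 2/16 of the tasks, mirroring the comparison the abstract draws.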
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.556313
D. Socha
This paper proposes a scheme for compiling an important class of iterative algorithms into efficient code for distributed memory computers. The programmer provides a description of the problem in Spot: a data parallel SIMD language that uses iterations as the unit of synchronization and is based on grids of data points. The data parallel description is in terms of a single point of the data space, with implicit communication semantics, and a set of numerical boundary conditions. The compiler eliminates the need for multi-tasking by “expanding” the single-point code into multiple-point code that executes over rectangular regions of points. Using rectangle intersection and difference operations on these regions allows the compiler to automatically insert the required communication calls and to hide communication latency by overlapping computation and communication. The multiple-point code may be specialized, at compile time, to the size and shape of different allocations, or it may use table-driven for-loops to adapt, at run time, to the shape and size of the allocations. We show how to generalize this strategy to produce code for the near-rectangular allocations required for balanced partitionings of rectangular arrays.
{"title":"An Approach to Compiling Single-point Iterative Programs for Distributed Memory Computers","authors":"D. Socha","doi":"10.1109/DMCC.1990.556313","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556313","url":null,"abstract":"This paper proposes a scheme for compiling an important class of iterative algorithms into efficient code for distributed memory computers. The programmer provides a description of the problem in Spot: a data parallel SIMD language that uses iterations as the unit of synchronization and is based on grids of data points. The data parallel description is in terms of a single point of the data space, with implicit communication semantics, and a set of numerical boundary conditions. The compiler eliminates the need for multi-tasking by “expanding” the single-point code into multiple-point code that executes over rectangular regions of points. Using rectangle intersection and difference operations on these regions allows the compiler to automatically insert the required communication calls and to hide communication latency by overlapping computation and communication. The multiple-point code may be specialized, at compile time, to the size and shape of different allocations, or it may use table-driven for-loops to adapt, at run time, to the shape and size of the allocations. 
We show how to generalize this strategy to produce code for the near-rectangular shaped allocations required for balanced partitionings of rectangular arrays.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126014590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
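The rectangle intersection machinery the abstract mentions can be sketched as follows. The half-open 2-D rectangle representation and the names `intersect`, `halo`, and `recv_region` are assumptions for illustration, not Spot's actual internals:

```python
def intersect(a, b):
    """Intersection of two half-open rectangles ((x0,y0),(x1,y1)); None if empty."""
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    x0, y0 = max(ax0, bx0), max(ay0, by0)
    x1, y1 = min(ax1, bx1), min(ay1, by1)
    return ((x0, y0), (x1, y1)) if x0 < x1 and y0 < y1 else None

def halo(region, width=1):
    """Region grown by the stencil width: all points this region's
    owner must read, including points owned by neighbours."""
    (x0, y0), (x1, y1) = region
    return ((x0 - width, y0 - width), (x1 + width, y1 + width))

def recv_region(mine, neighbour, width=1):
    """What 'mine' must receive from 'neighbour': the overlap of my
    grown region with the neighbour's owned region."""
    return intersect(halo(mine, width), neighbour)
```

Computing `recv_region` for each neighbouring allocation is, roughly, how a compiler can derive the required communication calls automatically from the region layout alone.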
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.556295
M. Rosing, R. P. Weaver
The authors present a structured scheme for allowing a programmer to specify the mapping of data to distributed memory multiprocessors. This scheme lets the programmer specify information about communication patterns as well as information about distributing data structures onto processors (including partitioning with replication). This mapping scheme allows the user to map arrays of data to arrays of processors. The user specifies how each axis of the data structure is mapped onto an axis of the processor structure. This mapping may either be one to one or one to many depending on the parallelism, load balancing, and communication requirements. The authors discuss the basics of how this scheme is implemented in the DINO language, the areas in which it has worked well, the few areas in which there were significant problems, and some ideas for future improvements.
{"title":"Mapping Data to Processors in Distributed Memory Computations","authors":"M. Rosing, R. P. Weaver","doi":"10.1109/DMCC.1990.556295","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556295","url":null,"abstract":"Abstract : The authors present a structured scheme for allowing a programmer to specify the mapping of data to distributed memory multiprocessors. This scheme lets the programmer specify information about communication patterns as well as information about distributing data structures onto processors (including partitioning with replication). This mapping scheme allows the user to map arrays of data to arrays of processors. The user specifies how each axis of the data structure is mapped onto an axis of the processor structure. This mapping may either be one to one or one to many depending on the parallelism, load balancing, and communication requirements. The authors discuss the basics of how this scheme is implemented in the DINO language, the areas in which it has worked well, the few areas in which there were significant problems, and some ideas for future improvements.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127401473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
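The axis-to-axis mapping could look roughly like this in Python. The block distribution and the `owner` signature are assumptions for illustration, since DINO's actual notation is not shown in the abstract:

```python
def owner(index, axis_map, data_shape, proc_shape):
    """Map a data-array index to a processor coordinate.
    axis_map[d] is the processor axis that data axis d is distributed
    over, or None if that axis is not distributed (replicated/local)."""
    coord = [0] * len(proc_shape)
    for d, p in enumerate(axis_map):
        if p is None:
            continue  # replicated axis: does not select a processor
        block = -(-data_shape[d] // proc_shape[p])  # ceil division
        coord[p] = index[d] // block                # block distribution
    return tuple(coord)
```

For example, distributing both axes of an 8x8 array over a 2x2 processor grid places element (5, 3) on processor (1, 0); mapping only the second data axis (with the first replicated) is a one-to-many mapping in the abstract's terms.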
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.555383
R. Battiti
The problem considered in this work is that of estimating the motion field (i.e. the projection of the velocity field onto the image plane) from a temporal sequence of images. Generic images contain different objects with diverse spatial frequencies and motion amplitudes. To deal with this complex environment in a fast and effective way, biological visual systems use parallel processing, visual channels at different resolutions and adaptive mechanisms. In this paper a new adaptive multiscale scheme is proposed, in which the spatial discretization scale is based on a local estimate of the errors involved. Considering the constraints for real-time operation, flexibility and portability, the scheme can be implemented on MIMD parallel computers with medium size grains with high efficiency. Tests with ray-traced and video-acquired images for different motion ranges show that this method produces a better estimation than the homogeneous (non-adaptive) multiscale method.
{"title":"An Adaptive Multiscale Scheme for Real-Time Motion Field Estimation","authors":"R. Battiti","doi":"10.1109/DMCC.1990.555383","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555383","url":null,"abstract":"The problem considered in this work is that of estimating the motion field (i.e. the projection of the velocity field onto the image plane) from a temporal sequence of images. Generic images contain different objects with diverse spatial frequencies and motion amplitudes. To deal with this complex environment in a fast and effective way, biological visual systems use parallel processing, visual channels at different resolutions and adaptive mechanisms. In this paper a new adaptive multiscale scheme is proposed, in which the spatial discretization scale is based on a local estimate of the errors involved. Considering the constraints for real-time operation, flexibility and portability, the scheme can be implemented on MIMD parallel computers with medium size grains with high efficiency. Tests with ray-traced and video-acquired images for different motion ranges show that this method produces a better estimation than the homogeneous (non-adaptive) multiscale method.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126646154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
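A scale-selection rule of the kind the abstract describes might be sketched as below. The list-of-(spacing, error) interface is invented for illustration and is not the paper's formulation:

```python
def pick_scale(error_at_scale, tolerance):
    """Return the coarsest (cheapest) spacing whose local error
    estimate is within tolerance; fall back to the finest scale.
    error_at_scale: list of (spacing, estimated_error), coarse to fine."""
    for spacing, err in error_at_scale:
        if err <= tolerance:
            return spacing
    return error_at_scale[-1][0]  # finest scale available
```

Each image region would apply such a rule independently, which is what makes the scheme adaptive rather than homogeneous across the image.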
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.556327
D. Grit
SISAL is a general-purpose applicative language intended for use on both conventional and novel multiprocessor systems. In this paper we describe the port of a shared memory implementation to a distributed memory environment. A number of issues are specifically addressed: the evaluation strategy, memory management, scheduling, stream handling, and task synchronization.
{"title":"A Distributed Memory Implementation of SISAL","authors":"D. Grit","doi":"10.1109/DMCC.1990.556327","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556327","url":null,"abstract":"SISAL is a general-purpose applicative language intended for use on both conventional and novel multiprocessor systems. In this paper we describe the port of a shared memory implementation to a distributed memory environment. A number of issues are specifically addressed: the evaluation strategy, memory management, scheduling, stream handling, and task synchronization.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114182629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.556269
K. Gunter, E. Gehringer
{"title":"Hot-Spot Performance of Single-Stage and Multistage Interconnection Networks","authors":"K. Gunter, E. Gehringer","doi":"10.1109/DMCC.1990.556269","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556269","url":null,"abstract":"","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126673213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.555398
C. Romine, K. Sigmon
{"title":"Reducing Inner Product Computation in the Parallel One-Sided Jacobi Algorithm","authors":"C. Romine, K. Sigmon","doi":"10.1109/DMCC.1990.555398","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555398","url":null,"abstract":"","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113977152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.555363
S. Al-Bassam, H. El-Rewini, B. Bose, T. Lewis
We develop an efficient subcube recognition algorithm that recognizes all the possible subcubes. The algorithm is based on exploiting more subcubes at different levels of the buddy tree. In exploiting the different levels, the algorithm checks any subcube at most once. Moreover, many unavailable subcubes are not considered as candidates and hence are not checked for availability. This makes the algorithm fast in recognizing subcubes. The number of recognized subcubes, for different subcube sizes, can easily be adjusted by restricting the search level down the buddy tree. Previously known algorithms become special cases of this general approach. When one level is searched, the algorithm performs as the original buddy system. When two levels are searched, it recognizes the same subcubes as the algorithm in [4], but faster. When all the levels are searched, complete subcube recognition is obtained. In a multi-processing system, each processor can execute this algorithm on a different tree. Using a given number of processors in a multi-processing system, we give a method of constructing the trees that maximizes the overall number of recognized subcubes. Finally, we introduce an allocation method "best fit" that reduces hypercube fragmentation. Simulation results and performance comparisons between this method and the traditional "first fit" are presented.
{"title":"Efficient Serial and Parallel Subcube Recognition in Hypercubes","authors":"S. Al-Bassam, H. El-Rewini, B. Bose, T. Lewis","doi":"10.1109/DMCC.1990.555363","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555363","url":null,"abstract":"We develop an efficient subcube recognition algorithm that recognizes all the possible subcubes. The algorithm is based on exploiting more subcubes at different levels of the buddy tree. In exploiting the different levels, the algorithm checks any subcube at most once. Moreover, many unavailable subcubes are not considered as candidates and hence are not checked for availability. This makes the algorithm fast in recognizing subcubes. The number of recognized subcubes, for different subcube sizes, can easily be adjusted by restricting the search level down the buddy tree. Previously known algorithms become special cases of this general approach. When one level is searched, the algorithm performs as the original buddy system. When two levels are searched, it recognizes the same subcubes as the algorithm in [4], but faster. When all the levels are searched, complete subcube recognition is obtained. In a multi-processing system, each processor can execute this algorithm on a different tree. Using a given number of processors in a multi-processing system, we give a method of constructing the trees that maximizes the overall number of recognized subcubes. Finally, we introduce an allocation method \"best fit\" that reduces hypercube fragmentation. 
Simulation results and performance comparisons between this method and the traditional \"first fit\" are presented.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123955406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
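The one-level baseline that the paper generalizes, the classic buddy system, recognizes only address-aligned subcubes. A minimal sketch of that baseline (not the paper's multi-level search) could look like this:

```python
def buddy_subcubes(free, n, k):
    """All k-dimensional subcubes that the classic buddy system
    recognizes in an n-cube: base addresses aligned to 2**k whose
    2**k consecutive node ids are all free.  'free' is a set of
    free node ids in range(2**n)."""
    cubes = []
    for base in range(0, 2 ** n, 2 ** k):   # aligned bases only
        if all(base + i in free for i in range(2 ** k)):
            cubes.append(base)
    return cubes
```

For instance, in a 3-cube with node 5 busy, this baseline finds the 1-cubes at bases 0, 2, and 6 but misses unaligned subcubes such as {1, 3}; recovering those is exactly what searching further levels of the buddy tree buys.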
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.555355
J. Horvath, T. Tang, L. P. Perry, R. Cole, D.B. Olster, J. Zipse
Controlling interplanetary spacecraft and planning their activities, as currently practiced, requires massive amounts of computer time and personnel. To improve this situation, it is desired to use advanced computing to speed up and automate the commanding process. Several design and prototype efforts have been underway at JPL to understand the appropriate roles for concurrent processors in future interplanetary spacecraft operations. Here we report on an effort to identify likely candidates for parallelism among existing software systems that both generate commands to be sent to the spacecraft and simulate what the spacecraft will do with these commands when it receives them. We also describe promising results from efforts to create parallel prototypes of representative portions of these software systems on the JPL/Caltech Mark III hypercube.
{"title":"Hypercubes for Critical Space Flight Command Operations","authors":"J. Horvath, T. Tang, L. P. Perry, R. Cole, D.B. Olster, J. Zipse","doi":"10.1109/DMCC.1990.555355","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555355","url":null,"abstract":"Controlling interplanetary spacecraft and planning their activities, as currently practiced, requires massive amounts of computer time and personnel. To improve this situation, it is desired to use advanced computing to speed up and automate the commanding process. Several design and prototype efforts have been underway at JPL to understand the appropriate roles for concurrent processors in future interplanetary spacecraft operations. Here we report on an effort to identify likely candidates for parallelism among existing software systems that both generate commands to be sent to the spacecraft and simulate what the spacecraft will do with these commands when it receives them. We also describe promising results from efforts to create parallel prototypes of representative portions of these software systems on the JPL/Caltech Mark III hypercube.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127664534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1990-04-08DOI: 10.1109/DMCC.1990.556410
Yi-Chieh Chang, K. Shin
This paper discusses and analyzes two load sharing (LS) issues: adjusting preferred lists and implementing a fault-tolerant mechanism in the presence of node failures. In an earlier paper, we proposed ordering the nodes in each node's proximity into a preferred list for the purpose of load sharing in distributed real-time systems. The preferred list of each node is constructed in such a way that each node is selected as the kth preferred node by one and only one other node. Such lists are proven to allow tasks to be evenly distributed in a system. However, the presence of faulty nodes will destroy the original structure of a preferred list if the faulty nodes are simply skipped in the list. An algorithm is therefore proposed to modify each preferred list so that it retains its original features regardless of the number of faulty nodes in the system. The communication overhead introduced by this algorithm is shown to be minimal. Based on the modified preferred lists, a simple fault-tolerant mechanism is implemented. Each node is equipped with a backup queue which stores and updates the arriving/completing tasks at its most preferred node. Whenever a node becomes faulty, its most preferred node will treat the tasks in the backup queue as externally arriving tasks. Our simulation results show that this approach, despite its simplicity, can reduce the number of task losses dramatically compared to approaches without any fault-tolerant mechanism. (The work reported in this paper was supported in part by the Office of Naval Research under contract N0001485-K-0122, and the NSF under grant DMC-8721492. Any opinions, findings, and recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the funding agencies.)
{"title":"Load Sharing In Hypercube Multicomputers In The Presence Of Node Failures","authors":"Yi-Chieh Chang, K. Shin","doi":"10.1109/DMCC.1990.556410","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556410","url":null,"abstract":"This paper discusses and analyzes two load sharing (LS) issues: adjusting preferred lists and implementing a fault-tolerant mechanism in the presence of node failures. In an earlier paper, we proposed ordering the nodes in each node's proximity into a preferred list for the purpose of load sharing in distributed real-time systems. The preferred list of each node is constructed in such a way that each node is selected as the kth preferred node by one and only one other node. Such lists are proven to allow tasks to be evenly distributed in a system. However, the presence of faulty nodes will destroy the original structure of a preferred list if the faulty nodes are simply skipped in the list. An algorithm is therefore proposed to modify each preferred list so that it retains its original features regardless of the number of faulty nodes in the system. The communication overhead introduced by this algorithm is shown to be minimal. Based on the modified preferred lists, a simple fault-tolerant mechanism is implemented. Each node is equipped with a backup queue which stores and updates the arriving/completing tasks at its most preferred node. Whenever a node becomes faulty, its most preferred node will treat the tasks in the backup queue as externally arriving tasks. (The work reported in this paper was supported in part by the Office of Naval Research under contract N0001485-K-0122, and the NSF under grant DMC-8721492. Any opinions, findings, and recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the funding agencies.) 
Our simulation results show that this approach, despite its simplicity, can reduce the number of task losses dramatically compared to approaches without any fault-tolerant mechanism.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"337 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115882207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
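The stated property of preferred lists (each node is selected as the kth preferred node by exactly one other node) holds, for example, for XOR-based orderings in a hypercube. The ordering below, XOR with 1, 2, 3, ..., is a hypothetical instance for illustration and not necessarily the proximity ordering the paper uses:

```python
def preferred_list(node, n_dims):
    """Preferred list of 'node' in an n-dimensional hypercube of
    2**n_dims nodes, ordered by XOR with a fixed sequence 1, 2, 3, ...
    Because i ^ k is a bijection in i for each fixed k, every node
    appears at position k in exactly one other node's list."""
    return [node ^ k for k in range(1, 2 ** n_dims)]
```

With any fixed XOR sequence, the map from a node to its kth preferred node is a permutation of the node set, which is precisely the one-and-only-one selection property and is what keeps load evenly spread.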