Application-level software self-balancing
C. King, T. Shiau, Chin-Piao Chan
Proceedings of the Fourteenth Annual International Computer Software and Applications Conference (COMPSAC 1990)
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139352

The authors propose a hybrid static/dynamic scheduling scheme for distributed-memory multiple-processor systems, e.g., distributed systems and multicomputers. Under this self-balancing scheme, computations are first scheduled statically and then dynamically redistributed to adapt to the run-time environment. The rescheduling operations are directed by a number of program parameters, which can be accessed directly from within the program and serve as processor load indices. As a result, the self-balancing operations can be implemented entirely at the application level, requiring minimal system support. To illustrate the concept, the self-balancing technique is applied to asynchronous iterative methods. Various design trade-offs are discussed, and preliminary performance results on an NCUBE multicomputer are presented.
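The rescheduling loop the abstract describes (measure application-level load indices, then shift work between iterations) can be sketched as follows. This is an illustrative sketch, not the paper's algorithm; the tolerance threshold and the single-item transfer policy are assumptions.

```python
def rebalance(assignment, load_index, tolerance=0.25):
    """One application-level rebalancing step (illustrative only).

    assignment -- processor id -> list of work items (the static schedule)
    load_index -- processor id -> load measured from program parameters,
                  e.g. time spent in the last iteration
    tolerance  -- how far above the mean a load may drift before work moves
    """
    mean = sum(load_index.values()) / len(load_index)
    heavy = max(load_index, key=load_index.get)
    light = min(load_index, key=load_index.get)
    # Move one work item from the most- to the least-loaded processor
    # when the heaviest load exceeds the mean by more than the tolerance.
    if load_index[heavy] > mean * (1 + tolerance) and assignment[heavy]:
        assignment[light].append(assignment[heavy].pop())
    return assignment

# Example: a skewed static schedule observed between two iterations.
static = {0: ["a", "b", "c", "d"], 1: ["e"]}
loads = {0: 4.0, 1: 1.0}
balanced = rebalance(static, loads)
```

In an asynchronous iterative method, a step like this would run between iterations, using only values the program already computes, which is what keeps the scheme entirely at the application level.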
The hyper-geometric distribution software reliability growth model (HGDM): precise formulation and applicability
R. Jacoby, Y. Tohma
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139307

The hyper-geometric distribution is used to estimate the number of initial faults residing in software at the beginning of the test-and-debug phase. The hyper-geometric distribution growth model (HGD model) is well suited to estimating the observed growth curves of the accumulated number of detected faults. The advantage of the proposed model is its applicability to all kinds of observed data: with a single model, both exponential and S-shaped growth curves can be estimated. The precise formulation of the HGD model is presented, and its exact relationship to the NHPP Goel-Okumoto growth model and the delayed S-shaped growth model is shown. With the introduction of a variable fault detection rate, the goodness of fit of the estimated growth curve to the growth curve of real observed faults increases significantly. Several examples of applying the model to real observed data are presented.
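The growth recurrence usually attributed to the HGD model can be sketched numerically. Assuming the standard formulation, in which test instance i "senses" w_i of the m initial faults and is therefore expected to newly detect w_i(m - C_{i-1})/m of them, the effect of a variable fault detection rate is easy to see:

```python
def hgdm_curve(m, sensitivity):
    """Expected cumulative detected faults under the HGD growth recurrence.

    m           -- assumed number of initial faults
    sensitivity -- sequence of w_i, the faults "sensed" by test instance i

    Applies E[C_i] = C_{i-1} + w_i * (m - C_{i-1}) / m: of the w_i sensed
    faults, a fraction (m - C_{i-1}) / m are on average still undetected.
    """
    c = 0.0
    curve = []
    for w in sensitivity:
        c += w * (m - c) / m
        curve.append(c)
    return curve

# A constant w_i yields an exponential-shaped curve; a ramping w_i
# (a variable fault detection rate, as in the paper) yields an S-shape.
exp_like = hgdm_curve(100, [20] * 10)
s_shaped = hgdm_curve(100, [2, 5, 10, 20, 30, 30, 30, 30, 30, 30])
```

The parameter names and the constant/ramp sensitivity profiles here are illustrative; the paper fits w_i to observed data.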
A new 3D-border algorithm by neighbor finding
Shin-Nine Yang, T. Lin
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139382

The authors propose a new algorithm for finding the three-dimensional border of linear octrees stored in a one-dimensional array. A simple method is proposed to check whether an octant is a border octant; the border-finding procedure can then be carried out node by node according to location-code ordering. To improve the performance of the algorithm, a new and efficient neighbor-finding technique is proposed, whose time complexity is analyzed and proved to be O(1) on average. Compared with existing border algorithms, the proposed algorithm has the following advantages: (1) no preprocessing is required to arrange the input data according to their grouping factors; (2) the border found is already a sorted sequence of border voxels, with no extra sorting required; and (3) the average time complexity is improved from O(N log N) to O(N), where N is the number of nodes in the linear octree.
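A minimal sketch of constant-time neighbor finding on location codes, assuming Morton (bit-interleaved) codes for equal-sized leaf octants; the paper's own technique and code layout may differ:

```python
def interleave(x, y, z, depth):
    """Encode (x, y, z) grid coordinates as a Morton location code."""
    code = 0
    for i in range(depth):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def deinterleave(code, depth):
    """Decode a Morton location code back to (x, y, z)."""
    x = y = z = 0
    for i in range(depth):
        x |= ((code >> (3 * i)) & 1) << i
        y |= ((code >> (3 * i + 1)) & 1) << i
        z |= ((code >> (3 * i + 2)) & 1) << i
    return x, y, z

def face_neighbor(code, depth, dx, dy, dz):
    """Location code of the equal-sized face neighbor, or None at the border."""
    x, y, z = deinterleave(code, depth)
    n = 1 << depth
    nx, ny, nz = x + dx, y + dy, z + dz
    if not (0 <= nx < n and 0 <= ny < n and 0 <= nz < n):
        return None  # neighbor lies outside the universe: a border face
    return interleave(nx, ny, nz, depth)
```

An octant is then a border octant if one of its six face neighbors falls outside the universe or is absent from the sorted linear octree (a binary search per face), which fits the node-by-node scan in location-code order.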
Hybrid relations for database schema evolution
Junichi Takahashi
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139410

The author describes hybrid relations in relational databases, which allow existing relations to be altered by the addition of new attributes without reorganization of the database schema. The values of new attributes with respect to an existing relation are stored separately from the relation as a set of triples of tuple identifier, attribute name, and value. At query time, a hybrid relation, which has only the attributes requested in a query, is derived virtually by combining the relation and this set of triples. A relation can be reorganized by upgrading its attribute values from these triples. The hybrid relation is defined as an algebraic expression, and equivalent expressions of a query on the hybrid relations are shown for efficient query processing.
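The triple-store mechanism can be illustrated with a small sketch; the relation and attribute names are invented for illustration:

```python
# Base relation: tuple identifier -> stored attribute values.
employees = {
    1: {"name": "Sato", "dept": "QA"},
    2: {"name": "Ito", "dept": "Dev"},
}

# New attributes live apart from the relation as (tuple-id, attribute,
# value) triples, so the stored relation never has to be reorganized.
triples = [
    (1, "phone", "x1001"),
    (2, "phone", "x1002"),
    (2, "office", "B-21"),
]

def hybrid(relation, triples, attrs):
    """Derive a virtual hybrid relation carrying exactly the requested attrs.

    An attribute comes from the base relation when stored there and is
    joined in from the triple set otherwise; a missing value stays None.
    """
    extra = {}
    for tid, attr, value in triples:
        extra.setdefault(tid, {})[attr] = value
    return {
        tid: {a: row.get(a, extra.get(tid, {}).get(a)) for a in attrs}
        for tid, row in relation.items()
    }
```

Reorganization then amounts to folding the triples' values into the stored relation, exactly as the abstract's "upgrading" step describes.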
Concurrent transaction execution in multidatabase systems
K. Barker, M. Tamer Özsu
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139367

Multidatabase serializability is defined as an extension of the well-known serializability theory, providing a theoretical framework for research on concurrency control of transactions in multidatabase systems. Multidatabase serializability graphs, which capture the ordering characteristics of global as well as local transactions, are also introduced. Two schedulers that produce multidatabase-serializable histories are described. The first is conservative: it permits a global subtransaction to proceed only if all of the global subtransactions of the given global transaction can proceed. The 'all or nothing' approach of this algorithm is simple, elegant, and correct. The second scheduler is more aggressive in that it attempts to schedule as many global subtransactions as possible, as soon as possible. A distinguishing feature of this work is the environment it considers: the most pessimistic scenario is assumed, in which the individual database management systems are totally autonomous and have no knowledge of each other. This restricts communication between them to the multidatabase layer and requires that the global scheduler 'hand down' the order of execution of global transactions.
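At its core, a serializability-graph check reduces to an acyclicity test on the ordering edges the sites contribute. The sketch below assumes a simplified model in which each local history contributes a total order over the global transactions it sees; the paper's graphs also track local transactions.

```python
from collections import defaultdict

def has_cycle(edges):
    """Detect a cycle in a directed serialization graph via DFS coloring."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return True  # back edge: a conflict cycle
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

def mdb_serializable(local_orders):
    """Serializable iff the union of per-site orders on global
    transactions is acyclic (simplified total-order model)."""
    edges = [(a, b) for order in local_orders
             for a, b in zip(order, order[1:])]
    return not has_cycle(edges)
```

Two sites agreeing on the order of G1 and G2 yield an acyclic graph; sites that disagree create a cycle, which is exactly what a global scheduler "handing down" one order prevents.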
A semi-adaptive DCT compression method that uses minimal space
Rosalee Nerheim
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139383

Adaptive DCT (discrete cosine transform) compression methods outperform fixed DCT compression methods in terms of image quality, but they need a large amount of scratch space for the transformed image file. The author proposes a semi-adaptive DCT compression method that outperforms fixed DCT compression while using only a small amount of scratch space. The method was designed for use in an electronic still camera being developed by NASA. Simulation results show that at 2.25 bits per pixel, the SNR (signal-to-noise ratio) of the semi-adaptive method ranged from 35 dB to 42 dB, compared with a range of 34 dB to 42 dB for the fixed DCT method. At 3 bits per pixel, the semi-adaptive method has an SNR ranging from 40 dB to 47 dB.
Methods for distributed join processing using a voice-data protocol
Kirk Scott, W. Perrizo
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139462

The authors consider the problem of optimizing join query processing in a database distributed over a bus-type local area network that uses the carrier-sense multiple access with collision detection (CSMA/CD) access protocol. New algorithms are proposed that use a compatible access protocol, movable-slot time-division multiplexing (MSTDM), to achieve improved performance over existing algorithms. Analysis of example cases shows the improved performance potential of MSTDM. It is concluded that the proposed algorithms explicitly account for packetization and other costs unaccounted for in existing algorithms. If the overhead of the CSMA/CD and MSTDM algorithms is comparable, MSTDM's performance characteristics translate directly into improved distributed join processing.
An implementation of software tools for replay and partial replay of Concurrent-C programs
Jason R. Lee, Kuo-Hua Wang, C. Chou
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139336

A set of debugging tools has been implemented for monitoring and controlling the execution sequences of Concurrent-C programs. Among these tools, REPLAY MONITOR can be employed to monitor synchronization events, while REPLAY REPRODUCER serves to reproduce the monitored synchronization events. When some processes of a concurrent program are irrelevant and time-consuming, PARTIAL REPLAY MONITOR and PARTIAL REPLAY REPRODUCER can be used to monitor and reproduce synchronization events while skipping the actual computation of those irrelevant processes, saving debugging time.
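The monitor/reproducer split can be sketched with a small sequencer: during monitoring, the order of synchronization events is logged; during replay, each process blocks at its synchronization point until the log says it is next. A Python sketch (illustrative, not the paper's Concurrent-C implementation):

```python
import threading

class ReplaySequencer:
    """Force threads through their synchronization points in a recorded order."""
    def __init__(self, recorded_order):
        self.order = list(recorded_order)  # e.g. logged by a monitor
        self.pos = 0
        self.cond = threading.Condition()

    def await_turn(self, thread_id):
        # Block until the recorded log names this thread next.
        with self.cond:
            while self.order[self.pos] != thread_id:
                self.cond.wait()
            self.pos += 1
            self.cond.notify_all()

# Replay: two workers append to a shared trace, but the sequencer pins
# the interleaving to the monitored order regardless of OS scheduling.
log = ["A", "B", "A"]
seq = ReplaySequencer(log)
trace = []

def worker(tid, times):
    for _ in range(times):
        seq.await_turn(tid)
        trace.append(tid)

t1 = threading.Thread(target=worker, args=("A", 2))
t2 = threading.Thread(target=worker, args=("B", 1))
t1.start(); t2.start(); t1.join(); t2.join()
# trace now equals the recorded order ["A", "B", "A"]
```

Partial replay follows the same idea, except that irrelevant processes only replay their logged synchronization events instead of redoing their computation.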
A unified hardware/software fault detection experiment in a 5ESS system
K. Hwang, A. A. Kapauan, W. N. Toy
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139425

A new approach to customer-oriented end-to-end testing is proposed as an integral part of the architecture for enhancing the reliability of AT&T Bell Labs' products and the quality of their ISDN services. The basic idea is to create virtual customers who constantly use the system just as real customers do, thereby continually exercising and monitoring the system's operations in a real working environment. Any potential system problem will thus be detected first by the virtual customers, so problems are expected to be corrected before any reaction from the paying customers. Such a proposal appears relatively simple; the important question is whether one can emulate customers who effectively set up end-to-end dialogs as real customers do. The objective of this experiment on the 5ESS (electronic switching system) is to apply this technique, implementing virtual customers as a means of on-line, real-time testing of the system's capability to provide high-quality customer services. Although only limited data have been collected, the on-line, customer-service-oriented testing approach has been demonstrated to be an effective means of uncovering difficult problems in a real system environment.
Methods of comparing test data adequacy criteria
S. N. Weiss
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139305

The comparative analysis of test data adequacy criteria in software testing is considered, investigating how criteria have been, and should be, compared to each other. It is argued that there are two fundamentally different goals in comparing criteria: (1) comparing the error-exposing ability of the criteria, and (2) comparing the cost of using the criteria for selecting and/or evaluating test data. Relations such as the power relation and probable correctness fall clearly in the first category, and test case counting clearly in the second. Subsumption, in contrast, is not entirely in either category. It is shown that the subsumption relation primarily compares the difficulty of satisfying two criteria. If one assumes that the criteria being compared are applicable, their relative power and size complexities can be inferred from the subsumption relation. In addition, it is shown that, while the size complexity of a criterion gives some indication of the relative cost of using it, it is by no means a sufficient measure of the overall difficulty of using that criterion. That difficulty also includes checking whether the predicate defined by the criterion has been satisfied, which may be not merely difficult but impossible.
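The subsumption relation (C1 subsumes C2 iff every test suite satisfying C1 also satisfies C2) can be checked by brute force in a toy model; the program, tests, and criteria below are invented for illustration:

```python
from itertools import chain, combinations

# Toy program: two statements and one branch.  Each test is labeled by
# which statements and branch outcomes it covers.
TESTS = {
    "t1": {"s1", "branch_true"},
    "t2": {"s1", "s2", "branch_false"},
}

def covers(suite, items):
    got = set().union(*(TESTS[t] for t in suite)) if suite else set()
    return items <= got

def statement_cov(suite):
    return covers(suite, {"s1", "s2"})

def branch_cov(suite):
    return covers(suite, {"branch_true", "branch_false"})

def subsumes(c1, c2):
    """c1 subsumes c2 iff every suite satisfying c1 also satisfies c2."""
    suites = chain.from_iterable(
        combinations(TESTS, r) for r in range(len(TESTS) + 1))
    return all(c2(s) for s in suites if c1(s))
```

In this toy universe branch coverage subsumes statement coverage: the only branch-adequate suite, {t1, t2}, also covers both statements. The converse fails, since {t2} alone is statement-adequate but misses the true branch, matching the intuition that subsumption compares the difficulty of satisfying two criteria.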