Binary Decision Diagrams in reliability analysis of standard system structures
M. Kvassay, E. Zaitseva, V. Levashenko, J. Kostolny (DOI: 10.1109/DT.2016.7557168)
One of the current issues in reliability analysis is the investigation of complex systems consisting of many components. Investigating such systems requires efficient methods for their representation. One possibility is the use of Binary Decision Diagrams (BDDs), which allow information about system topology to be stored efficiently. The main problem, however, is how to create a good BDD. In this paper, we focus on the construction of BDDs for common system structures such as series, parallel and k-out-of-n. Using these results, we present a method for creating a good BDD for k-to-l-out-of-n systems, which are a typical instance of noncoherent systems. In the next phase, we use BDDs and logical differential calculus to develop an efficient method for importance analysis of k-to-l-out-of-n systems.
The increments of a sub-fractional Brownian motion
C. El-Nouty (DOI: 10.1109/DT.2016.7557156)
The sub-fractional Brownian motion {X_H(t), t ≥ 0} with Hurst index 0 < H < 1 is an element of the QHASI class, a class of centered Gaussian processes introduced in 2015. It satisfies a specific assumption depending on the value of H. The study of the increments of X_H consists mainly in investigating their limit properties under suitable conditions. The lim sup behavior has already been investigated; it depends on two constants, the first of which occurs in the quasi-helix property and the second in the approximately stationary increments property. Here we investigate the lim inf behavior and assess the influence of the specific assumption on the process X_H. The two above-mentioned constants play a key role in the statement of the results.
SIRTOS: A simple real-time operating system
Vasileios Kouliaridis, V. Vlachos, I. Savvas, I. Androulidakis (DOI: 10.1109/DT.2016.7557165)
This paper describes the development of a simple real-time operating system for educational purposes. The system provides soft real-time capabilities to serve real-time processes and data and to meet deadlines. The main contribution of this paper is a very simple open-source operating system that implements different real-time algorithms. The code is kept minimal: the kernel is only 15,000 lines long. The system supports high-definition graphics and is easy to modify, which allows further real-time algorithms to be implemented.
Identification of IT-service metrics for a business process when planning a transition to outsourcing
S. Simonova (DOI: 10.1109/DT.2016.7557186)
Business processes, especially key value-adding processes, need to be supported by targeted and relevant information services to ensure high performance. These services are utilized in a company in several forms. The basic use is creating and sustaining a database that supports the operational run of the processes themselves. A further use is creating special information sources that support management decision making in managing business processes. Company management perceives information services as a necessary part of the business process and expects their continuous performance; the cost of IT therefore represents a significant item in the company budget. This places executives in a contradictory position: on the one hand, they should increase efficiency and decrease costs; on the other hand, they need to devote financial, human and technological resources to developing information services, because the company information environment must be flexibly adapted to the needs of changing markets. Developing the internal information environment with the company's own resources is demanding and consumes resources that could be devoted to its primary activities. One solution is outsourcing; however, the transition to outsourcing carries many risks. These include the transition to a new mode of service provision, the need to manage the relation “external service provider - internal service customer”, and the selection of the correct type of outsourcing and of a suitable external provider. The greatest risk is connected with defining the service parameters in the contractual relationship that is supposed to ensure service quality; this risk lies in identifying those service quality parameters. This text focuses on defining the assumptions for monitoring the quality of information services within information processes selected for outsourcing. A method is modelled which identifies the metrics that will become the basis for defining the information service quality parameters.
Designing high-reliability healthcare teams
P. Barach (DOI: 10.1109/DT.2016.7557144)
Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries such as commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels far better than those of health care. A high-reliability organization (HRO) is an organization that has succeeded in avoiding catastrophes in an environment where normal accidents can be expected due to risk factors and complexity. The definition of a high-reliability organization extends beyond patient safety to encompass quality care, and ultimately value. Recommendations and innovations focused on individual healthcare processes do not address the larger and often intangible systemic and cultural factors that create vulnerabilities throughout the entire system. In addition, an open, transparent and just culture that would allow a deeper understanding of these factors does not appear to be forthcoming. Adapting and applying the lessons of this science, together with applied human-factors thinking, to health care offers the promise of enabling hospitals to reach levels of quality and safety comparable to those of the best high-reliability organizations. Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research and practical experience will be necessary to determine the validity and effectiveness of this framework for high-reliability health care.
Reliability analysis in subsystem of heat supply
B. Babiarz (DOI: 10.1109/DT.2016.7557143)
The heat supply subsystem is one of the three subsystems constituting a heat supply system, together with the heat production subsystem and the heat distribution subsystem. This article presents an analysis of failures in the heat supply subsystem. The variety, frequency and causes of failures are analyzed for the individual kinds of district heating network: mains, distribution networks and heat supply connections, using the example of the heat supply system of a city of 200,000 inhabitants. The analysis is based on real operational data for the heat distribution network, obtained from a Municipal Heat Supply Company for the period from 2003 to 2012. The paper analyzes failures of the heat distribution network depending on the construction technology, diameters and age of the network, the kinds and causes of damage, and the place and time of failure occurrence for the various kinds of heat distribution networks. Basic reliability parameters, such as the mean time between failures and the failure rate, are determined for the exemplary heat supply subsystem. The failure analysis of heat distribution networks allows operating instructions to be developed, which can serve as a basis for determining optimal repair periods and the periods of highest emergency risk. It also supports more accurate decisions concerning maintenance and repairs. This reduces the consequences of a failure by shortening repair times as much as possible and decreasing the negative effects of interruptions in the heat supply to customers.
Performance of Open Voltage control algorithm for sensor node power management unit
Michal Kochlán, S. Zák, J. Micek, M. Hodoň, Martin Hudik (DOI: 10.1109/DT.2016.7557163)
The basic limiting factors for wireless sensor network deployment are closely related to power consumption. Low power consumption can be achieved by reducing the sensor node's energy requirements or by effective utilization of energy harvesting. For effective energy management, it is critical to track the optimal electrical operating point; maximum power point tracking (MPPT) algorithms are used for this purpose. This paper examines the performance of the Open Voltage (OV) MPPT algorithm on a bidirectional power management unit. The bidirectional power management unit, designed at the authors' department, is used in wireless sensor networks with solar-driven energy harvesting modules. The design of the bidirectional power management unit is described along with the OV MPPT implementation. A brief comparison with other MPPT algorithms, such as the Constant Voltage Method, Short Current Pulse Method, Perturb and Observe Method, Incremental Conductance Method and Temperature Method, is presented in the concluding remarks.
Smooth kernel estimators of the hazard rate function and its first and second derivatives
I. Fuks, G. Koshkin (DOI: 10.1109/DT.2016.7557164)
A class of nonparametric kernel estimators is proposed for an unknown hazard rate function and its derivatives. Both weak and mean-square convergence of the proposed estimators to the unknown hazard function and its derivatives are proved. These estimators can be used for solving problems of the operational reliability of complex physical, technical and software systems under conditions of uncertainty.
Use a robot to serve experimental psychology: Some examples of methods with children and adults
O. Masson, Jean Baratgin, Frank Jamet, Fabien Ruggieri, D. Filatova (DOI: 10.1109/DT.2016.7557172)
Robots are increasingly used in scientific research. Given the multitude of existing robots, how should one choose the robot best suited to a study and, above all, how should it be used to study biases in reasoning or in decision making? We suggest studying well-known biases from a new point of view: social norms, including social behaviors, social context and the pragmatics of language. How can one measure the impact of implicit social factors on the outcome of an experimental interaction when the experimenter cannot control the non-verbal social cues he or she emits? A robot, whose behavior can be programmed entirely, constitutes a useful tool at this level. The authors' purpose is to present a new method, accessible even to computer novices, which can be applied to a humanoid reactive robot for scientific research. Harel's statecharts provide a formalism from which a program can be derived in which the states of the artificial system are modeled in terms of states and actions. The techniques and advantages of the proposed method are reviewed through two illustrative studies: the first focusing on inclusion processes in young children, the second on the endowment effect within a sample of adults.
Numerical and experimental investigation of thermal convection near electric devices with vertical channels
S. Czapp, M. Czapp, Magdalena Orłowska (DOI: 10.1109/DT.2016.7557149)
This paper presents the results of a numerical and experimental investigation of thermal convection near the heat-emitting surfaces of an electric device. To intensify thermal convection, special vertical channels were formed near the heat-emitting surfaces of the device. Experimental tests have shown that there is an optimal channel length at which thermal convection is most intensive. Air velocity vectors close to the heat-emitting surfaces, obtained with the Ansys software, are presented. Thanks to the vertical channels, it is possible to increase the current-carrying capacity of electric devices.