The construct "self" appears in diverse forms in theories about what it is to be a person. As the sense of "self" is typically assessed through personal reports, differences in its description undoubtedly reflect significant differences in peoples' apperception of self. This report describes the development, reliability, and factorial structure of the Experience of Sense of Self (E-SOS), an inventory designed to assess one's perception of self in relation to the person's perception of various potential "others." It does so using Venn diagrams to depict and quantify the experienced overlap between the self and "others." Participant responses to the instrument were studied through Exploratory Factor Analysis. This yielded a five-factor solution: 1) Experience of Positive Sensation; 2) Experience of Challenges; 3) Experience of Temptations; 4) Experience of Higher Power; and 5) Experience of Family. The items comprising each of these were found to produce reliable subscales. Further research with the E-SOS and suggestions for its use are offered.
{"title":"The Experienced Self and Other Scale: A technique for assaying the experience of one's self in relation to the other.","authors":"Erel Shvil, Herbert Krauss, Elizabeth Midlarsky","doi":"10.2458/v4i2.17934","DOIUrl":"https://doi.org/10.2458/v4i2.17934","url":null,"abstract":"<p><p>The construct \"self\" appears in diverse forms in theories about what it is to be a person. As the sense of \"self\" is typically assessed through personal reports, differences in its description undoubtedly reflect significant differences in peoples' apperception of self. This report describes the development, reliability, and factorial structure of the Experience of Sense of Self (E-SOS), an inventory designed to assess one's perception of self in relation to the person's perception of various potential \"others.\" It does so using Venn diagrams to depict and quantify the experienced overlap between the self and \"others.\" Participant responses to the instrument were studied through Exploratory Factor Analysis. This yielded a five-factor solution: 1) Experience of Positive Sensation; 2) Experience of Challenges; 3) Experience of Temptations; 4) Experience of Higher Power; and 5) Experience of Family. The items comprising each of these were found to produce reliable subscales. Further research with the E-SOS and suggestions for its use are offered.</p>","PeriodicalId":90602,"journal":{"name":"Journal of methods and measurement in the social sciences","volume":"4 2","pages":"1-20"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2458/v4i2.17934","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32570677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Complementary Meta-Analytic Methods for the Quantitative Review of Research: 1. A Theoretical Overview"
A. Figueredo, Candace J Black, A. Scott
Journal of Methods and Measurement in the Social Sciences, 4(2), 21-45 (2013). DOI: 10.2458/JMM.V4I2.17935

Contents Meta-Analysis is a procedure designed to quantitatively analyze the methodological characteristics of the studies sampled in conventional meta-analyses, in order to assess the relationship between methodologies and outcomes. This article presents the rationale and procedures for conducting a Contents Meta-Analysis in conjunction with a conventional Effects Meta-Analysis. We provide an overview of the pertinent limitations of conventional meta-analysis from a methodological and meta-scientific standpoint. We then introduce novel terminology distinguishing different kinds of complementary meta-analyses that address many of the problems previously identified for conventional meta-analyses. We also direct readers to the second paper in this series (Figueredo, Black, & Scott, this issue), which demonstrates the utility of Contents Meta-Analysis with an empirical example and presents findings regarding the generalizability of the estimated effect sizes.
"A Case Study About Why It Can Be Difficult To Test Whether Propensity Score Analysis Works in Field Experiments"
W. Shadish, Peter M Steiner, T. Cook
Journal of Methods and Measurement in the Social Sciences, 3(2), 1-12 (2013). DOI: 10.2458/V3I2.16475

Peikes, Moreno, and Orzol (2008) sensibly caution researchers that propensity score analysis may not lead to valid causal inference in field applications. At the same time, however, they made the far stronger claim that they had performed an ideal test of whether propensity score matching in quasi-experimental data is capable of approximating the results of a randomized experiment in their dataset, and that this ideal test showed that such matching could not do so. In this article we show that their study does not support that conclusion because it failed to meet a number of basic criteria for an ideal test. By implication, many other purported tests of the effectiveness of propensity score analysis probably also fail to meet these criteria, and are therefore questionable contributions to the literature on the effects of propensity score analysis.
"How Taking a Word for a Word Can Be Problematic: Context-Dependent Linguistic Markers of Extraversion and Neuroticism"
M. Mehl, M. Robbins, Shannon E. Holleran
Journal of Methods and Measurement in the Social Sciences, 3(2), 30-50 (2013). DOI: 10.2458/V3I2.16477

This study conceptually extends recent research on linguistic markers of psychological processes by demonstrating that psychological correlates of word use can vary with the context in which the words are used. The word use of 90 participants was analyzed across two theoretically defined communication contexts. Information about participants’ public language use was derived from recorded snippets of their daily conversations with others. Information about their private language use was derived from stream-of-consciousness essays. Personality trait–word use associations emerged as highly context dependent. Extraversion as a public trait was related to verbal productivity in public but not private language. Neuroticism as a private trait was related to the verbal expression of emotions in private but not public language. Verbal immediacy was indicative of Extraversion in public and Neuroticism in private language use. The findings illustrate the importance of considering communication contexts in research on psychological implications of natural language use.
"Speculations on Quasi-Experimental Design in HIV/AIDS Prevention Research"
D. Campbell, B. Krauss
Journal of Methods and Measurement in the Social Sciences, 3(1), 52-84 (2012). DOI: 10.2458/V3I1.16113

This paper provides a speculative discussion of which quasi-experimental designs might be useful in various aspects of HIV/AIDS research. The first author’s expertise is in research design, not HIV, while the second author has been active in HIV prevention research. It is hoped that the discussion will help the HIV/AIDS research community discover and invent an expanded range of possibilities for valid causal inference.
"Introduction to Speculations on Quasi-Experimental Design in HIV/AIDS Prevention Research"
B. Krauss
Journal of Methods and Measurement in the Social Sciences, 3(1), 49-51 (2012). DOI: 10.2458/V3I1.16112

This paper introduces a speculative discussion of which quasi-experimental designs might be useful in various aspects of HIV/AIDS research. The first author’s expertise is in research design, not HIV, while the second author has been active in HIV prevention research. It is hoped that the discussion will help the HIV/AIDS research community discover and invent an expanded range of possibilities for valid causal inference.
"Problems in Using Diagnosis in Child and Adolescent Mental Health Services Research"
L. Bickman, L. G. Wighton, E. W. Lambert, M. Karver, L. H. Steding
Journal of Methods and Measurement in the Social Sciences, 3(1), 1-26 (2012). DOI: 10.2458/V3I1.16110

This paper presents results from a three-part study on the diagnosis of children with affective and behavior disorders. We examined the reliability and the discriminant and predictive validity of common diagnoses used in mental health services research, using a research diagnostic interview. Results suggest four problems: a) some diagnoses demonstrated internal consistency only slightly better than symptoms chosen at random; b) diagnosis did not add appreciably to a brief global functioning screen in predicting service use; c) inter-rater reliability among informants and clinicians was low for six of the most common diagnoses; and d) clinician diagnoses differed between sites in ways that reflected different reimbursement strategies. The study concludes that clinicians and researchers should not assume diagnosis is a useful measure of child and adolescent problems and outcomes until there is more evidence supporting the validity of diagnosis.
"Use of Item Response Theory to Examine a Cardiovascular Health Knowledge Measure for Adolescents with Elevated Blood Pressure"
S. Fitzpatrick, Patrick G. Saab, M. Llabre, Randall D. Penfield, J. Mccalla, N. Schneiderman
Journal of Methods and Measurement in the Social Sciences, 3(1), 27-48 (2012). DOI: 10.2458/V3I1.16111

The purpose of this study was to assess the psychometric properties of a cardiovascular health knowledge measure for adolescents using item response theory. The measure was developed in the context of a cardiovascular lifestyle intervention for adolescents with elevated blood pressure. The sample consisted of 167 adolescents (mean age = 16.2 years) who completed the Cardiovascular Health Knowledge Assessment (CHKA), a 34-item multiple-choice test, at baseline and post-intervention. The CHKA was unidimensional, and internal consistency was .65 at pretest and .74 at posttest. Rasch analysis indicated that at pretest the items targeted adolescents with variable levels of health knowledge. However, based on the posttest results, additional difficult items are needed to capture the increased level of cardiovascular health knowledge at post-intervention. Change in knowledge scores was examined using Rasch analysis; findings indicated significant improvement in health knowledge over time [t(119) = -10.3, p < .0001]. In summary, the CHKA appears to contain items that are good approximations of the construct of cardiovascular health knowledge and that target adolescents with moderate levels of knowledge.
"Thinking Inside the Box: Simple Methods to Evaluate Complex Treatments"
J. Menke
Journal of Methods and Measurement in the Social Sciences, 2(1), 45-62 (2011). DOI: 10.2458/V2I1.12365

We risk ignoring cheaper and safer medical treatments because they cannot be patented, lack profit potential, require too much patient-contact time, or lack supporting scientific results. Novel medical treatments may be difficult to evaluate for a variety of reasons, such as patient selection bias, the effect of the package of care, or the difficulty of identifying the active elements of treatment. Whole Systems Research (WSR) is an approach designed to assess the performance of complete packages of clinical management. While the WSR method is compelling, there is no standard procedure for WSR, and its implementation may be intimidating. In truth, WSR methodological tools are neither new nor complicated. There are two sequential steps, or boxes, that guide WSR methodology: establishing system predictability, followed by an audit of system element effectiveness. We describe the implementation of WSR with particular attention to threats to validity (Shadish, Cook, & Campbell, 2002; Shadish & Heinsman, 1997).
"Better Data From Better Measurements Using Computerized Adaptive Testing"
D. Weiss
Journal of Methods and Measurement in the Social Sciences, 2(1), 1-27 (2011). DOI: 10.2458/V2I1.12351

The process of constructing a fixed-length conventional test frequently focuses on maximizing internal consistency reliability by selecting test items that are of average difficulty and high discrimination (a “peaked” test). The effect of constructing such a test, when viewed from the perspective of item response theory, is test scores that are precise for examinees whose trait levels are near the point at which the test is peaked; as examinee trait levels deviate from the mean, the precision of their scores decreases substantially. Results of a small simulation study demonstrate that when peaked tests are “off target” for an examinee, their scores are biased and have spuriously high standard deviations, reflecting substantial amounts of error. These errors can reduce the correlations of these kinds of scores with other variables and adversely affect the results of standard statistical tests. By contrast, scores from adaptive tests are essentially unbiased and have standard deviations that are much closer to true values. Basic concepts of adaptive testing are introduced, and fully adaptive computerized tests (CATs) based on IRT are described. Several examples of response records from CATs are discussed to illustrate how CATs function. Some operational issues, including item exposure, content balancing, and enemy items, are also briefly discussed. It is concluded that because CAT constructs a unique test for each examinee, scores from CATs will be more precise and should provide better data for social science research and applications.