{"title":"雌性大鼠神经内分泌免疫复合物及代谢信息场的因子分析","authors":"Y. Zavidnyuk, O. Mel’nyk, O. Mysakovets","doi":"10.25040/ecpb2019.03.012","DOIUrl":null,"url":null,"abstract":"Introduction. Despite considerable informativeness, factor analysis in biomedical research is still rarely used. Therefore, we set out to introduce our colleagues to the theoretical foundations of factor analysis and to demonstrate its application in our own material. According to the theory of factor analysis [1], it is considered that the observed parameters (variables) are a linear combination of some latent (hypothetical, unobservable) factors. In other words, the factors are hypothetical, not directly measured, hidden variables, in terms of which the measured variables are described. Some of the factors are assumed to be common to two or more variables, while others are specific to each parameter. Characteristic (unique) factors are orthogonal to one another, that is, they do not contribute to the covariance between the variables. In other words, only common factors that are much smaller than the number of variables contribute to the covariance between them. The latent factor structure can be accurately identified by examining the resulting covariance matrix. In practice, it is impossible to obtain the exact structure of the factor model, only estimates of the parameters of the factor structure can be found. Therefore, on the principle of postulate of parsimony, adopt a model with a minimum number of common factors. One of the methods of factor analysis is the analysis of principal components. Principal components (PCs) are linear combinations of observed variables that have orthogonality properties, that is, natural orthogonal functions. Thus, PCs are opposite to common factors, since the latter are hypothetical and are not expressed through a combination of variables, whereas PCs are linear functions of the observed variables. The essence of the PCs method lies in the linear transformation and condensation of the original information. On the basis of correlation matrices, a system of orthogonal, linearly independent functions, nominated by eigenvectors, corresponding to a system of independent random variables nominated by eigenvalues of the correlation matrix (λ) is determined. The first few eigenvalues of the correlation matrix exhaust the bulk of the total field variance, so special attention is given to the first eigenvalues and their corresponding components when analyzing the decomposition results. And since large-scale processes, which are functional systems of the body, are characterized","PeriodicalId":12101,"journal":{"name":"Experimental and Clinical Physiology and Biochemistry","volume":"37 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Factor analysis of the information field of the neuroendocrine-immune complex and metabolism in female rats\",\"authors\":\"Y. Zavidnyuk, O. Mel’nyk, O. Mysakovets\",\"doi\":\"10.25040/ecpb2019.03.012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Introduction. Despite considerable informativeness, factor analysis in biomedical research is still rarely used. Therefore, we set out to introduce our colleagues to the theoretical foundations of factor analysis and to demonstrate its application in our own material. 
According to the theory of factor analysis [1], it is considered that the observed parameters (variables) are a linear combination of some latent (hypothetical, unobservable) factors. In other words, the factors are hypothetical, not directly measured, hidden variables, in terms of which the measured variables are described. Some of the factors are assumed to be common to two or more variables, while others are specific to each parameter. Characteristic (unique) factors are orthogonal to one another, that is, they do not contribute to the covariance between the variables. In other words, only common factors that are much smaller than the number of variables contribute to the covariance between them. The latent factor structure can be accurately identified by examining the resulting covariance matrix. In practice, it is impossible to obtain the exact structure of the factor model, only estimates of the parameters of the factor structure can be found. Therefore, on the principle of postulate of parsimony, adopt a model with a minimum number of common factors. One of the methods of factor analysis is the analysis of principal components. Principal components (PCs) are linear combinations of observed variables that have orthogonality properties, that is, natural orthogonal functions. Thus, PCs are opposite to common factors, since the latter are hypothetical and are not expressed through a combination of variables, whereas PCs are linear functions of the observed variables. The essence of the PCs method lies in the linear transformation and condensation of the original information. On the basis of correlation matrices, a system of orthogonal, linearly independent functions, nominated by eigenvectors, corresponding to a system of independent random variables nominated by eigenvalues of the correlation matrix (λ) is determined. The first few eigenvalues of the correlation matrix exhaust the bulk of the total field variance, so special attention is given to the first eigenvalues and their corresponding components when analyzing the decomposition results. And since large-scale processes, which are functional systems of the body, are characterized\",\"PeriodicalId\":12101,\"journal\":{\"name\":\"Experimental and Clinical Physiology and Biochemistry\",\"volume\":\"37 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Experimental and Clinical Physiology and Biochemistry\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.25040/ecpb2019.03.012\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Experimental and Clinical Physiology and Biochemistry","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25040/ecpb2019.03.012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Factor analysis of the information field of the neuroendocrine-immune complex and metabolism in female rats
Introduction. Although it is highly informative, factor analysis is still rarely used in biomedical research. We therefore set out to introduce our colleagues to the theoretical foundations of factor analysis and to demonstrate its application to our own material.

According to the theory of factor analysis [1], the observed parameters (variables) are regarded as linear combinations of latent (hypothetical, unobservable) factors. In other words, the factors are hypothetical, hidden variables that are not measured directly but in terms of which the measured variables are described. Some of the factors are assumed to be common to two or more variables, while others are specific to an individual parameter. The specific (unique) factors are orthogonal to one another, that is, they do not contribute to the covariance between the variables. In other words, only the common factors, whose number is much smaller than the number of variables, contribute to the covariance between them. The latent factor structure can be identified by examining the resulting covariance matrix. In practice, the exact structure of the factor model cannot be obtained; only estimates of the parameters of the factor structure can be found. Therefore, following the principle (postulate) of parsimony, a model with the minimum number of common factors is adopted.

One of the methods of factor analysis is principal component analysis. Principal components (PCs) are linear combinations of the observed variables that possess the property of orthogonality, that is, they are natural orthogonal functions. In this respect PCs are the opposite of common factors: the latter are hypothetical and are not expressed as a combination of variables, whereas PCs are linear functions of the observed variables. The essence of the method of PCs lies in a linear transformation and condensation of the original information. On the basis of the correlation matrix, a system of orthogonal, linearly independent functions defined by the eigenvectors is determined, corresponding to a system of independent random variables defined by the eigenvalues (λ) of the correlation matrix. The first few eigenvalues of the correlation matrix exhaust the bulk of the total variance of the field, so special attention is given to the first eigenvalues and their corresponding components when analyzing the decomposition results. And since large-scale processes, which are the functional systems of the body, are characterized
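To make the principal-component procedure outlined above concrete, the following minimal sketch shows how PCs can be extracted from a correlation matrix. It is not taken from the cited study: the data are simulated, and the sample sizes, number of parameters, and variable names are hypothetical.

```python
# A minimal sketch (not from the paper): principal-component decomposition of a
# correlation matrix, illustrating how the first few eigenvalues exhaust most of
# the total variance. Simulated data; all dimensions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 animals x 6 correlated parameters driven by two "common factors"
# plus unique (specific) noise for each parameter.
n_obs, n_vars = 50, 6
latent = rng.normal(size=(n_obs, 2))
loadings = rng.normal(size=(2, n_vars))
X = latent @ loadings + 0.3 * rng.normal(size=(n_obs, n_vars))

# Standardize the variables and form the correlation matrix R.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)

# Eigen-decomposition of R: eigenvalues (lambda) and eigenvectors, sorted in
# descending order of eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Share of the total variance accounted for by each principal component.
explained = eigvals / eigvals.sum()
print("eigenvalues:", np.round(eigvals, 3))
print("cumulative variance explained:", np.round(np.cumsum(explained), 3))

# PC scores: linear combinations of the observed (standardized) variables.
scores = Z @ eigvecs
```

Because only two common factors generate the simulated data, the first two eigenvalues account for most of the total variance, which mirrors the reasoning above about concentrating on the leading eigenvalues and their components.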