Title: Disseminating massive frequency tables by masking aggregated cell frequencies
Pub Date: 2024-01-30 | DOI: 10.1007/s42952-023-00248-x
Min-Jeong Park, Hang J. Kim, Sunghoon Kwon
We propose a confidentiality-preserving approach for disseminating frequency tables constructed for any combination of key variables in given microdata, including hierarchical key variables. The system generates all possible frequency tables by marginalizing or aggregating the fully joint frequency tables of the key variables, while protecting original cells with low frequencies through two masking steps: small cell adjustments for the joint tables, followed by the proposed algorithm, called information loss bounded aggregation, for aggregated cells. The two-step approach controls both disclosure risk and information loss by ensuring k-anonymity of original cells with small frequencies while keeping the loss within a bounded limit.
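The first masking step can be illustrated with a generic small cell adjustment: counts strictly between 0 and k are randomly rounded to 0 or k, so every published cell satisfies k-anonymity. This is a minimal sketch of that general idea only, not the authors' procedure or their information loss bounded aggregation algorithm; the function name and the unbiased-rounding rule are illustrative assumptions.

```python
import numpy as np

def small_cell_adjust(table, k=3, rng=None):
    """Randomly round counts in (0, k) to 0 or k so every published
    cell is either zero or at least k.  Illustrative sketch only --
    not the paper's exact masking procedure."""
    rng = np.random.default_rng(rng)
    out = table.copy().astype(int)
    mask = (out > 0) & (out < k)
    # round up with probability count/k, down otherwise (unbiased)
    p = out[mask] / k
    out[mask] = np.where(rng.random(p.shape) < p, k, 0)
    return out

joint = np.array([[12, 2], [1, 30]])
masked = small_cell_adjust(joint, k=3, rng=0)
```

Cells already at or above k (here 12 and 30) pass through unchanged; only the small cells 2 and 1 are masked.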
Title: Use of ridge calibration method in predicting election results
Pub Date: 2024-01-23 | DOI: 10.1007/s42952-023-00254-z
Yohan Lim, Mingue Park
Ridge calibration is a penalized method used in survey sampling to reduce the variability of the final set of weights by relaxing the linear calibration restrictions. We propose a method for selecting the penalty parameter that minimizes the estimated mean squared error of the mean estimator when estimated auxiliary information is used. We show that the proposed estimator is asymptotically equivalent to the generalized regression estimator. A simple simulation study shows that our estimator has a smaller MSE than traditional calibration estimators. We apply our method to predicting election results using the National Barometer Survey and the Korea Social Integration Survey.
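The generic ridge calibration estimator referenced above has a closed form: minimizing the chi-square distance of the weights from the design weights plus a penalty on violations of the calibration constraints. The sketch below shows that closed form under one common formulation (penalty (1/lam)·||X'w − t||²); it is an assumption-laden illustration, not the paper's penalty-selection rule.

```python
import numpy as np

def ridge_calibration_weights(d, X, t, lam):
    """Minimize (w-d)' diag(1/d) (w-d) + (1/lam) * ||X'w - t||^2.
    Closed form: w = d + diag(d) X (X' diag(d) X + lam I)^{-1} (t - X'd).
    As lam -> 0 the constraints X'w = t hold exactly (GREG);
    as lam -> infinity, w -> d (no calibration)."""
    D = np.diag(d)
    A = X.T @ D @ X + lam * np.eye(X.shape[1])
    return d + D @ X @ np.linalg.solve(A, t - X.T @ d)

d = np.ones(4)
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
t = np.array([5.0, 8.0])
w = ridge_calibration_weights(d, X, t, 1e-8)
```

The penalty parameter lam trades off weight variability against calibration error; the paper's contribution is choosing lam to minimize the estimated MSE of the resulting mean estimator.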
Title: Asymptotic of the number of false change points of the fused lasso signal approximator
Pub Date: 2024-01-18 | DOI: 10.1007/s42952-023-00250-3
Donghyeon Yu, Johan Lim, Won Son
It is well known that the fused lasso signal approximator (FLSA) is inconsistent in change point detection in the presence of staircase blocks in the true mean values. Existing studies focus on modifying the FLSA model to remedy this inconsistency. However, the inconsistency does not severely degrade change point detection if the FLSA identifies all true change points and the set of estimated change points is sufficiently close to the set of true change points. In this study, we investigate asymptotic properties of the FLSA under the noise-level assumption $\sigma_n = o(n \log n)$. Specifically, we show that all falsely segmented blocks are sub-blocks of true staircase blocks if the noise level is sufficiently low and the tuning parameter is chosen appropriately. In addition, each false change point of the optimal FLSA estimate can be associated with a vertex of a concave majorant or a convex minorant of a discrete Brownian bridge. Based on these results, we derive the asymptotic distribution of the number of false change points and provide numerical examples supporting the theoretical results.
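For concreteness, the FLSA solves min_b 0.5·||y − b||² + lam·Σ|b[i+1] − b[i]|, and its change points are the positions where the fitted piecewise-constant signal jumps. The sketch below solves this by projected gradient on the dual problem; it is a slow illustrative solver (production code would use a direct O(n) algorithm), and the change-point extraction is the obvious thresholded-difference rule, not anything specific to the paper.

```python
import numpy as np

def flsa_1d(y, lam, n_iter=20000, step=0.25):
    """FLSA: min_b 0.5*||y-b||^2 + lam*sum|b[i+1]-b[i]|.
    Dual: min_z 0.5*||y - D'z||^2 s.t. |z| <= lam, where D is the
    first-difference matrix; solved by projected gradient
    (step 0.25 is valid since ||DD'|| <= 4)."""
    z = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        b = y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))
        z = np.clip(z + step * np.diff(b), -lam, lam)
    return y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))

def change_points(b, tol=1e-2):
    """Indices where the fitted signal jumps by more than tol."""
    return np.nonzero(np.abs(np.diff(b)) > tol)[0] + 1

y = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])
fit = flsa_1d(y, lam=0.5)
```

For this two-block signal the solution shrinks the blocks toward each other by lam/3 (to 1/6 and 5 − 1/6) while keeping the single change point at index 3; a large enough lam merges both blocks into the overall mean.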
Title: Large sample properties of maximum likelihood estimator using moving extremes ranked set sampling
Pub Date: 2024-01-13 | DOI: 10.1007/s42952-023-00251-2
Han Wang, Wangxue Chen, Bingjie Li
In this paper, we investigate the maximum likelihood estimator (MLE) of the parameter $\theta$ in a probability density function $f(x;\theta)$. We focus on moving extremes ranked set sampling (MERSS) and analyze the large-sample properties of the MLE under this design. We establish the existence and uniqueness of the MLE for two common distributions under MERSS. Our theoretical analysis demonstrates that the MLE obtained through MERSS is at least as efficient as the MLE obtained through simple random sampling with an equivalent sample size. We substantiate these theoretical findings with numerical experiments, explore the implications of imperfect ranking, and provide a practical illustration on a real dataset.
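To fix ideas about the design itself: in one common variant of MERSS, for set sizes j = 1, …, m one draws a set of j units and measures only its maximum, and then repeats with minima. The sketch below simulates that sampling scheme under perfect ranking; the variant chosen and the function interface are assumptions for illustration, and the likelihood analysis in the paper is not reproduced here.

```python
import numpy as np

def merss_sample(draw, m, rng=None):
    """Moving extremes ranked set sample (one common variant):
    for j = 1..m draw a set of size j and keep its maximum, then
    for j = 1..m draw a fresh set of size j and keep its minimum.
    `draw(size, rng)` generates i.i.d. values from the parent
    density; ranking is assumed perfect.  Sketch of the design
    only -- the MLE under MERSS uses the order-statistic
    densities of these extremes."""
    rng = np.random.default_rng(rng)
    maxima = [draw(j, rng).max() for j in range(1, m + 1)]
    minima = [draw(j, rng).min() for j in range(1, m + 1)]
    return np.array(maxima + minima)

# hypothetical parent distribution: exponential with mean 2
sample = merss_sample(lambda n, r: r.exponential(2.0, n), m=5, rng=1)
```

The measured sample has 2m observations but required ranking (not measuring) m(m+1) units, which is where the efficiency gain over simple random sampling comes from.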
Title: Logistic regression models for elastic shape of curves based on tangent representations
Pub Date: 2024-01-12 | DOI: 10.1007/s42952-023-00252-1
Tae-Young Heo, Joon Myoung Lee, Myung Hun Woo, Hyeongseok Lee, Min Ho Cho
Shape analysis is widely used in many application areas such as computer vision and medical and biological studies. One challenge in analyzing the shape of an object in an image is achieving invariance to shape-preserving transformations. To measure the distance, or dissimilarity, between two shapes, we work with the square-root velocity function (SRVF) representation and the elastic metric. Since shapes live in a high-dimensional nonlinear space, we adopt a tangent space at the mean shape and retain a few principal components (PCs) in the linearized space. We propose classification methods based on logistic regression with the elastic net penalty, using these PCs and tangent vectors as predictors. We then compare their performance with other model-based methods for shape classification in an application to the shapes of algae in watersheds, as well as to simulated data generated from mixtures of von Mises-Fisher distributions.
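The SRVF representation mentioned above maps a curve f to q(t) = f'(t)/sqrt(|f'(t)|), under which the elastic metric becomes the ordinary L2 metric. A minimal one-dimensional sketch (the alignment/re-parametrization step and the tangent-space PCA are omitted):

```python
import numpy as np

def srvf(f, t):
    """Square-root velocity function q(t) = f'(t) / sqrt(|f'(t)|)
    of a curve f sampled at points t (1-D case for simplicity;
    for curves in R^d one divides f' by sqrt(||f'||)).  The
    elastic distance between two curves is the L2 distance
    between their SRVFs after optimal re-parametrization, which
    this sketch does not perform."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

t = np.linspace(0.0, 1.0, 101)
q_id = srvf(t, t)          # identity curve: constant slope 1
q_2x = srvf(2.0 * t, t)    # doubled slope
```

Because q depends only on the derivative, the representation is automatically translation-invariant, which is one of the shape-preserving invariances the abstract refers to.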
Title: Byzantine-resilient decentralized network learning
Pub Date: 2024-01-10 | DOI: 10.1007/s42952-023-00249-w
Yaohong Yang, Lei Wang
Decentralized federated learning with fully normal nodes has drawn attention in modern statistical learning. However, due to data corruption, device malfunctions, malicious attacks, and other unexpected behaviors, not all nodes follow the estimation process, and existing decentralized federated learning methods may fail. An unknown number of abnormal nodes, called Byzantine nodes, arbitrarily deviate from their intended behavior, send wrong messages to their neighbors, and affect all honest nodes across the entire network by passing polluted messages. In this paper, we focus on decentralized federated learning in the presence of Byzantine attacks and propose a unified Byzantine-resilient framework based on network gradient descent and several robust aggregation rules. Theoretically, the convergence of the proposed algorithm is guaranteed under weakly balanced conditions on the network structure. The finite-sample performance is studied through simulations under different network topologies and various Byzantine attacks. An application to the Communities and Crime data is also presented.
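Two standard robust aggregation rules of the kind such a framework can plug in are the coordinate-wise median and the coordinate-wise trimmed mean; the sketch below shows both (these are generic rules from the Byzantine-robust learning literature, not necessarily the specific rules analyzed in the paper).

```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of neighbors' gradient messages:
    robust as long as honest neighbors are a majority."""
    return np.median(np.stack(grads), axis=0)

def trimmed_mean(grads, b):
    """Coordinate-wise trimmed mean: per coordinate, discard the
    b largest and b smallest values, average the rest."""
    g = np.sort(np.stack(grads), axis=0)
    return g[b:len(grads) - b].mean(axis=0)

# three honest gradients near (1, 1) plus one Byzantine outlier
grads = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
         np.array([0.9, 1.1]), np.array([100.0, -100.0])]
```

A plain mean would be dragged to roughly (25, −24) by the single Byzantine message, while both robust rules stay near (1, 1).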
Title: Sequential online monitoring for autoregressive time series of counts
Pub Date: 2024-01-02 | DOI: 10.1007/s42952-023-00247-y
This study considers the online monitoring problem of detecting a parameter change in time series of counts. For this task, we construct a monitoring process based on the residuals obtained from integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models. We consider this problem within a more general framework based on martingale difference sequences, since the monitoring problem for GARCH-type processes based on residuals or score vectors can be viewed as a special case of the monitoring problem for martingale differences. The limiting behavior of the stopping rule is investigated in this general set-up and applied to INGARCH processes. To assess the performance of our method, we conduct Monte Carlo simulations. A real data analysis is also provided for illustration. Our findings in this empirical study demonstrate the validity of the proposed monitoring process.
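The residual-based monitoring idea can be sketched for a Poisson INGARCH(1,1) model, where lambda_t = omega + alpha·x_{t-1} + beta·lambda_{t-1} and the Pearson residuals (x_t − lambda_t)/sqrt(lambda_t) form a martingale difference sequence under the null of no change. The stopping rule below (a scaled CUSUM of residuals against a flat threshold) is a deliberately simplified stand-in for the paper's boundary function, and the parameters are assumed known.

```python
import numpy as np

def ingarch_residual_monitor(x, omega, alpha, beta, threshold):
    """Monitoring sketch for Poisson INGARCH(1,1):
    lambda_t = omega + alpha*x_{t-1} + beta*lambda_{t-1}.
    Accumulate Pearson residuals and flag the first time the
    scaled CUSUM exceeds the threshold.  Parameters are assumed
    known (in practice estimated from a training sample); the
    paper's stopping boundary is more refined."""
    lam = omega / (1.0 - alpha - beta)  # start at the stationary mean
    csum, stop_at = 0.0, None
    for t, xt in enumerate(x):
        csum += (xt - lam) / np.sqrt(lam)
        if stop_at is None and abs(csum) / np.sqrt(len(x)) > threshold:
            stop_at = t
        lam = omega + alpha * xt + beta * lam
    return stop_at

# in-control series at the stationary mean 2 vs. a shifted series
alarm_null = ingarch_residual_monitor([2] * 50, 1.0, 0.3, 0.2, 3.0)
alarm_shift = ingarch_residual_monitor([2] * 20 + [10] * 30, 1.0, 0.3, 0.2, 3.0)
```

On the in-control series the residuals are identically zero and no alarm is raised; after the upward level shift the CUSUM drifts and the monitor stops within the shifted segment.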
Title: Return prediction by machine learning for the Korean stock market
Pub Date: 2023-12-20 | DOI: 10.1007/s42952-023-00245-0
Wonwoo Choi, Seongho Jang, Sanghee Kim, Chayoung Park, Sunyoung Park, Seongjoo Song
In this study, we forecast monthly stock returns and analyze factors influencing stock prices in the Korean stock market. To find a model that maximizes the cumulative return of a portfolio of stocks with high predicted returns, we use machine learning models such as linear models, tree-based models, neural networks, and learning-to-rank algorithms. For tuning hyperparameters, we employ a novel validation metric, the Cumulative net Return of the Portfolio formed from the top 10% of stocks by predicted return (CRP10), designed to increase the cumulative return of the selected portfolio. On our data, CRP10 tends to yield higher cumulative returns than out-of-sample R-squared as a validation metric. Our findings indicate that the Light Gradient Boosting Machine (LightGBM) and Gradient Boosted Regression Trees (GBRT) outperform the other models when a single model is applied over the entire test period. We also tried the strategy of re-selecting the best model each year and observed that it did not outperform using a single model such as LightGBM or GBRT for the entire period.
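The CRP10 metric described above is straightforward to compute from a matrix of predicted and realized monthly returns: each month, form an equal-weighted portfolio from the top decile by predicted return and compound the realized returns. The sketch below assumes equal weighting and ignores transaction costs, neither of which is specified in the abstract.

```python
import numpy as np

def crp10(pred, realized):
    """Cumulative net return of the portfolio formed each month
    from the top 10% of stocks by predicted return, equal-weighted
    and compounded monthly.  pred and realized are (months, stocks)
    arrays of monthly returns.  Sketch of the CRP10 validation
    metric; transaction costs are ignored."""
    months, stocks = pred.shape
    k = max(1, stocks // 10)
    cum = 1.0
    for m in range(months):
        top = np.argsort(pred[m])[-k:]        # highest predicted returns
        cum *= 1.0 + realized[m, top].mean()  # compound the month
    return cum - 1.0

# toy example: predictions are perfect, best stock returns 10%/month
realized = np.zeros((2, 10))
realized[:, 9] = 0.1
score = crp10(realized.copy(), realized)
```

With two months at +10%, the metric is 1.1 × 1.1 − 1 = 0.21; hyperparameters are then chosen to maximize this quantity on a validation period rather than to minimize squared error.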
Title: Spatially integrated estimator of finite population total by integrating data from two independent surveys using spatial information
Pub Date: 2023-12-19 | DOI: 10.1007/s42952-023-00244-1
Nobin Chandra Paul, Anil Rai, Tauqueer Ahmad, Ankur Biswas, Prachi Misra Sahoo
A major goal of survey sampling is finite population inference. In recent years, large-scale survey programs have encountered many practical challenges, including higher data collection costs, rising non-response rates, growing demand for disaggregated-level statistics, and the desire for timely estimates. Data integration is a new field of research that addresses these challenges by combining data from multiple surveys, making it possible to develop frameworks that efficiently pool information from several surveys to obtain more precise estimates of population parameters. In many surveys, the parameters of interest are spatial in nature: the relationship between the study variable and the covariates varies across locations in the study area, a situation referred to as spatial non-stationarity. Hence, a sampling methodology is needed that can handle spatial non-stationarity and integrate spatially referenced data to extract more detailed information. In this study, a Geographically Weighted Spatially Integrated (GWSI) estimator of the finite population total was developed by integrating data from two independent surveys using spatial information. The statistical properties of the proposed estimator were evaluated empirically through a spatial simulation study on three generated populations with high spatial autocorrelation; the proposed estimator performed better than the usual design-based estimator in all three. Furthermore, a Spatial Proportionate Bootstrap (SPB) method was developed for variance estimation of the proposed spatially integrated estimator.
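The "geographically weighted" ingredient can be illustrated with the standard Gaussian kernel used in geographically weighted estimation: units closer to a target location receive larger weights, which is how spatial non-stationarity is accommodated. The locally weighted total below is a hypothetical illustration of that one ingredient; the paper's GWSI estimator, which also integrates two surveys, is not reproduced here.

```python
import numpy as np

def gw_weights(coords, target, bandwidth):
    """Gaussian kernel weights exp(-0.5*(d/bandwidth)^2) based on
    Euclidean distance d from each unit to the target location."""
    d = np.linalg.norm(coords - target, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def local_weighted_total(y, design_w, coords, target, bandwidth):
    """Hypothetical locally weighted total around `target`:
    design weights scaled by the spatial kernel.  Sketch of the
    geographically weighted idea only."""
    return np.sum(gw_weights(coords, target, bandwidth) * design_w * y)

coords = np.array([[0.0, 0.0], [3.0, 4.0]])
target = np.array([0.0, 0.0])
w = gw_weights(coords, target, bandwidth=5.0)
```

The unit at the target gets weight 1, and the unit at distance 5 (one bandwidth away) gets weight exp(−0.5); the bandwidth controls how local the estimate is.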
Title: Statistical integration of allele frequencies from several organizations
Pub Date: 2023-12-18 | DOI: 10.1007/s42952-023-00243-2
Su Jin Jeong, Hyo-jung Lee, Soong Deok Lee, Su Jeong Park, Seung Hwan Lee, Jae Won Lee
Genetic evidence, especially evidence based on short tandem repeats, is of paramount importance for human identification in forensic inference. In recent years, kinship identification using DNA evidence has drawn much attention in various fields; in particular, it is employed, with criminal databases, to confirm blood relations in forensics. The interpretation of the likelihood ratio when identifying an individual or a relationship depends on the allele frequencies used, so an accurate estimate of allele frequency is crucial. In Korea, organizations such as the Supreme Prosecutors' Office and the Korean National Police Agency provide different statistical interpretations because their estimates of the allele frequencies differ, which can lead to confusion in forensic identification. Accurate estimation of allele frequency therefore requires a certain amount of information, yet simply taking a weighted average of the allele frequencies may not be sufficient to establish biological independence. In this study, we propose a new statistical method for estimating allele frequencies by integrating the data obtained from several organizations, and we analyze biological independence and differences in allele frequency relative to the weighted average across various subgroups. Finally, the proposed method is illustrated using real data from 576 Korean individuals.
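The naive baseline that the abstract argues is insufficient on its own, the sample-size-weighted average of the organizations' frequency estimates, is simple to state; the sketch below computes it (the paper's proposed integration method, which additionally assesses biological independence across subgroups, is not reproduced here).

```python
import numpy as np

def pooled_allele_freq(freqs, n):
    """Sample-size-weighted average of allele frequency estimates
    from several organizations: sum(n_i * p_i) / sum(n_i).  This
    is the naive pooled estimate; it ignores whether the
    subgroups are biologically homogeneous, which is exactly the
    concern raised in the abstract."""
    freqs = np.asarray(freqs, dtype=float)
    n = np.asarray(n, dtype=float)
    return np.sum(n * freqs) / np.sum(n)

# two hypothetical organizations reporting 0.2 (n=100) and 0.3 (n=300)
pooled = pooled_allele_freq([0.2, 0.3], [100, 300])
```

Here the pooled frequency is (100·0.2 + 300·0.3)/400 = 0.275; whether such pooling is justified is precisely what a homogeneity analysis across the organizations' subgroups must decide.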