Pub Date: 2022-06-22 (Epub: 2022-05-19). DOI: 10.1523/JNEUROSCI.2074-21.2022
Daniel B Rubin, Tommy Hosman, Jessica N Kelemen, Anastasia Kapitonava, Francis R Willett, Brian F Coughlin, Eric Halgren, Eyal Y Kimchi, Ziv M Williams, John D Simeral, Leigh R Hochberg, Sydney S Cash
Consolidation of memory is believed to involve offline replay of neural activity. While amply demonstrated in rodents, evidence for replay in humans, particularly regarding motor memory, is less compelling. To determine whether replay occurs after motor learning, we sought to record from motor cortex during a novel motor task and subsequent overnight sleep. A 36-year-old man with tetraplegia secondary to cervical spinal cord injury enrolled in the ongoing BrainGate brain-computer interface pilot clinical trial had two 96-channel intracortical microelectrode arrays placed chronically into left precentral gyrus. Single- and multi-unit activity was recorded while he played a color/sound sequence-matching memory game. Intended movements were decoded from motor cortical neuronal activity by a real-time steady-state Kalman filter that allowed the participant to control a neurally driven cursor on the screen. Intracortical neural activity from precentral gyrus and 2-lead scalp EEG were recorded overnight as he slept. When decoded using the same steady-state Kalman filter parameters, intracortical neural signals recorded overnight replayed the target sequence from the memory game at intervals throughout the night, at a frequency significantly greater than expected by chance. Replay events occurred at speeds ranging from 1 to 4 times as fast as initial task execution and were most frequently observed during slow-wave sleep. These results demonstrate that recent visuomotor skill acquisition in humans may be accompanied by replay of the corresponding motor cortex neural activity during sleep.

SIGNIFICANCE STATEMENT Within cortex, the acquisition of information is often followed by the offline recapitulation of specific sequences of neural firing. Replay of recent activity is enriched during sleep and may support the consolidation of learning and memory.
Using an intracortical brain-computer interface, we recorded and decoded activity from motor cortex as a human research participant performed a novel motor task. By decoding neural activity throughout subsequent sleep, we find that neural sequences underlying the recently practiced motor task are repeated throughout the night, providing direct evidence of replay in human motor cortex during sleep. This approach, using an optimized brain-computer interface decoder to characterize neural activity during sleep, provides a framework for future studies exploring replay, learning, and memory.
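The steady-state decoding step described in the abstract can be sketched as follows. This is a minimal illustration of the update's structure only: the matrices M1 and M2 and the firing-rate vectors are invented placeholders, whereas in the study they were fit from the participant's calibration data.

```python
# Minimal sketch of one steady-state Kalman decoding step for 2-D cursor
# velocity. M1, M2, and the firing rates are hypothetical placeholders.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def decode_step(v_prev, z, M1, M2):
    """Steady-state update: blend the previous velocity estimate (via M1)
    with the current vector of binned firing rates z (via M2)."""
    return [a + b for a, b in zip(matvec(M1, v_prev), matvec(M2, z))]

M1 = [[0.8, 0.0],
      [0.0, 0.8]]                        # smooths the decoded trajectory
M2 = [[0.05, -0.05, 0.00, 0.00],
      [0.00, 0.00, 0.05, -0.05]]         # maps 4 channels to 2 velocity axes

v = [0.0, 0.0]
for z in ([20, 5, 10, 10], [25, 5, 10, 10]):   # spike counts per bin, per channel
    v = decode_step(v, z, M1, M2)        # cursor velocity drifts rightward
```

Because the update has fixed gains, the same parameters can be applied offline to overnight recordings, which is what made the sleep-replay analysis possible.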
"Learned Motor Patterns Are Replayed in Human Motor Cortex during Sleep." The Journal of Neuroscience, pp. 5007-5020 (2022). DOI: 10.1523/JNEUROSCI.2074-21.2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9233445/pdf/
Abstract Keele (2010, Political Analysis 18:189–205) emphasizes that the incumbent test for detecting proportional hazard (PH) violations in Cox duration models can be adversely affected by misspecified covariate functional form(s). In this note, I reevaluate Keele’s evidence by running a full set of Monte Carlo simulations using the original article’s illustrative data-generating processes (DGPs). I make use of the updated PH test calculation available in R’s survival package starting with v3.0-10. Importantly, I find the updated PH test calculation performs better for Keele’s DGPs, suggesting its scope conditions are distinct and worth further investigating. I also uncover some evidence for the traditional calculation suggesting it, too, may have additional scope conditions that could impact practitioners’ interpretation of Keele (2010). On the whole, while we should always be attentive to model misspecification, my results suggest we should also become more attentive to how frequently the PH test’s performance is affected in practice, and that the answer may depend on the calculation’s implementation.
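The Monte Carlo logic described above (simulate a known DGP many times, run the test, record rejections) can be sketched generically. The actual study uses Cox duration models and the PH test from R's survival package; here a simple z-test on normal data stands in so the template stays self-contained, and the DGP is invented rather than Keele's.

```python
# Generic Monte Carlo template for checking a test's size: under a true
# null, the rejection rate should track the nominal alpha. A z-test on
# normal data stands in for the Cox PH test used in the study.
import random
from statistics import NormalDist, mean

def one_rejection(n=50, alpha=0.05):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # H0 true: mean 0, sd 1
    z = mean(sample) * n ** 0.5                          # z-statistic (known sd)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha

random.seed(1)
reps = 2000
rejection_rate = sum(one_rejection() for _ in range(reps)) / reps
# Under the null, rejection_rate should sit close to the nominal 0.05.
```

Swapping in a misspecified covariate functional form at the data-generation step, as Keele's DGPs do, is what lets such simulations probe whether the PH test's size or power degrades.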
Shawna K. Metzger, "Proportionally Less Difficult?: Reevaluating Keele's 'Proportionally Difficult'," Political Analysis 31: 156-163 (2022). DOI: 10.1017/pan.2022.13
Survey weighting allows researchers to account for bias in survey samples, due to unit nonresponse or convenience sampling, using measured demographic covariates. Unfortunately, in practice, it is impossible to know whether the estimated survey weights are sufficient to alleviate concerns about bias due to unobserved confounders or incorrect functional forms used in weighting. In the following paper, we propose two sensitivity analyses for the exclusion of important covariates: (1) a sensitivity analysis for partially observed confounders (i.e., variables measured across the survey sample, but not the target population) and (2) a sensitivity analysis for fully unobserved confounders (i.e., variables not measured in either the survey or the target population). We provide graphical and numerical summaries of the potential bias that arises from such confounders, and introduce a benchmarking approach that allows researchers to quantitatively reason about the sensitivity of their results. We demonstrate our proposed sensitivity analyses using state-level 2020 U.S. Presidential Election polls.
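A toy illustration of why an omitted weighting covariate matters (this is not the paper's sensitivity analysis, and all cell values are invented): weighting on education alone leaves bias that post-stratifying on the full cells would remove.

```python
# Toy bias illustration: a survey weighted only on an observed covariate
# (education) still carries bias from an unweighted confounder (rurality).
# All numbers are invented.

# (education, rurality) -> (population share, survey share, cell mean outcome)
cells = {
    ("hi", "urban"): (0.30, 0.45, 0.70),
    ("hi", "rural"): (0.10, 0.05, 0.40),
    ("lo", "urban"): (0.30, 0.40, 0.60),
    ("lo", "rural"): (0.30, 0.10, 0.30),
}

truth = sum(p * y for p, s, y in cells.values())  # true population mean

def weighted_on_education_only():
    """Post-stratify on the education margin, as if rurality were unmeasured."""
    est = total_w = 0.0
    for edu in ("hi", "lo"):
        pop = sum(p for (e, _), (p, s, y) in cells.items() if e == edu)
        smp = sum(s for (e, _), (p, s, y) in cells.items() if e == edu)
        w = pop / smp                       # one weight per education level
        for (e, _), (p, s, y) in cells.items():
            if e == edu:
                est += w * s * y
                total_w += w * s
    return est / total_w

naive = weighted_on_education_only()        # biased: rurality still confounds
bias = naive - truth
```

The paper's contribution is to bound and benchmark exactly this kind of residual bias when the confounder is only partially observed, or not observed at all.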
E. Hartman and Melody Y. Huang, "Sensitivity Analysis for Survey Weights," Political Analysis, published online June 14, 2022. DOI: 10.1017/pan.2023.12
Abstract Conventional multidimensional statistical models of roll call votes assume that legislators’ preferences are additively separable over dimensions. In this article, we introduce an item response model of roll call votes that allows for non-separability over latent dimensions. Conceptually, non-separability matters if outcomes over dimensions are related rather than independent in legislators’ decisions. Monte Carlo simulations highlight that separable item response models of roll call votes capture non-separability via correlated ideal points and higher salience of a primary dimension. We apply our model to the U.S. Senate and the European Parliament. In both settings, we find that legislators’ preferences over two basic dimensions are non-separable. These results have general implications for our understanding of legislative decision-making, as well as for empirical descriptions of preferences in legislatures.
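The separable/non-separable distinction can be made concrete with a two-dimensional logistic item response. The interaction-term parameterization and all numbers below are illustrative assumptions, not necessarily the authors' model.

```python
# Separable vs. non-separable 2-D item response (illustrative parameters).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_yea_separable(theta, beta, alpha):
    """Additively separable: the two dimensions enter only as a sum."""
    return sigmoid(beta[0] * theta[0] + beta[1] * theta[1] - alpha)

def p_yea_nonseparable(theta, beta, alpha, gamma):
    """An interaction term makes the effect of dimension 1 on the vote
    depend on the legislator's position on dimension 2."""
    u = beta[0] * theta[0] + beta[1] * theta[1] + gamma * theta[0] * theta[1]
    return sigmoid(u - alpha)

theta = (1.0, -1.0)                      # ideal point on two latent dimensions
beta, alpha, gamma = (1.5, 0.5), 0.2, 1.0
p_sep = p_yea_separable(theta, beta, alpha)            # predicts "yea"
p_non = p_yea_nonseparable(theta, beta, alpha, gamma)  # tips toward "nay"
```

For this legislator the separable model predicts a likely "yea" while the non-separable model predicts a likely "nay": outcomes over dimensions are related, not independent, in the vote decision.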
Garret Binding and Lukas F. Stoetzer, "Non-Separable Preferences in the Statistical Analysis of Roll Call Votes," Political Analysis 31: 352-365 (2022). DOI: 10.1017/pan.2022.11
Abstract Ensuring descriptive representation of racial minorities without packing minorities too heavily into districts is a perpetual difficulty, especially in states lacking voter file race data. One advance since the 2010 redistricting cycle is the advent of Bayesian Improved Surname Geocoding (BISG), which greatly improves upon previous ecological inference methods in identifying voter race. In this article, we test the viability of employing BISG to redistricting under two posterior allocation methods for race assignment: plurality versus probabilistic. We validate these methods through 10,000 redistricting simulations of North Carolina and Georgia’s congressional districts and compare BISG estimates to actual voter file racial data. We find that probabilistic summing of the BISG posteriors significantly reduces error rates at the precinct and district level relative to plurality racial assignment, and therefore should be the preferred method when using BISG for redistricting. Our results suggest that BISG can aid in the construction of majority-minority districts during the redistricting process.
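The two posterior-allocation methods can be contrasted on a toy precinct. The BISG posteriors P(race | surname, geography) below are invented for five hypothetical voters.

```python
# Plurality vs. probabilistic allocation of BISG posteriors (invented data).
posteriors = [
    {"white": 0.55, "black": 0.40, "other": 0.05},
    {"white": 0.60, "black": 0.35, "other": 0.05},
    {"white": 0.30, "black": 0.65, "other": 0.05},
    {"white": 0.52, "black": 0.45, "other": 0.03},
    {"white": 0.48, "black": 0.50, "other": 0.02},
]

def plurality_counts(post):
    """Winner-take-all: each voter counts fully toward their modal race."""
    counts = {}
    for p in post:
        top = max(p, key=p.get)
        counts[top] = counts.get(top, 0) + 1
    return counts

def probabilistic_counts(post):
    """Sum posterior probabilities, preserving each voter's uncertainty."""
    counts = {}
    for p in post:
        for race, prob in p.items():
            counts[race] = counts.get(race, 0.0) + prob
    return counts

plur = plurality_counts(posteriors)
prob = probabilistic_counts(posteriors)
```

In this toy precinct, plurality assignment erases the "other" category entirely and overstates the white/black gap among voters whose posteriors are nearly split, which is the kind of precinct- and district-level error the probabilistic method reduces.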
K. DeLuca and John A. Curiel, "Validating the Applicability of Bayesian Inference with Surname and Geocoding to Congressional Redistricting," Political Analysis 31: 465-471 (2022). DOI: 10.1017/pan.2022.14
Abstract A common approach when studying the quality of representation involves comparing the latent preferences of voters and legislators, commonly obtained by fitting an item response theory (IRT) model to a common set of stimuli. Despite being exposed to the same stimuli, voters and legislators may not share a common understanding of how these stimuli map onto their latent preferences, leading to differential item functioning (DIF) and incomparability of estimates. We explore the presence of DIF and incomparability of latent preferences obtained through IRT models by reanalyzing an influential survey dataset, where survey respondents expressed their preferences on roll call votes that U.S. legislators had previously voted on. To do so, we propose defining a Dirichlet process prior over item response functions in standard IRT models. In contrast to typical multistep approaches to detecting DIF, our strategy allows researchers to fit a single model, automatically identifying incomparable subgroups with different mappings from latent traits onto observed responses. We find that although there is a group of voters whose estimated positions can be safely compared to those of legislators, a sizeable share of surveyed voters understand stimuli in fundamentally different ways. Ignoring these issues can lead to incorrect conclusions about the quality of representation.
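What DIF means here can be sketched with a two-parameter logistic item response: two groups share the same latent trait but map it onto the observed response differently. The parameters below are invented; the paper's Dirichlet-process model discovers such subgroups from the data rather than assuming them.

```python
# DIF illustration: identical latent positions, different item mappings.
import math

def icc(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.5                                # same latent position for both groups
p_legislators = icc(theta, a=1.2, b=0.0)   # the item's "official" mapping
p_subgroup = icc(theta, a=1.2, b=1.0)      # a subgroup reads the item differently
dif_gap = p_legislators - p_subgroup       # same trait, different response prob.
```

A pooled IRT model fit to both groups would misplace the subgroup's ideal points, which is why comparing voter and legislator estimates requires first identifying who shares a common mapping.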
Y. Shiraito, James Lo, and S. Olivella, "A Nonparametric Bayesian Model for Detecting Differential Item Functioning: An Application to Political Representation in the US," Political Analysis 31: 430-447 (2022). DOI: 10.1017/pan.2023.1
Abstract Sentiment analysis techniques have a long history in natural language processing and have become a standard tool in the analysis of political texts, promising a conceptually straightforward automated method of extracting meaning from textual data by scoring documents on a scale from positive to negative. However, while these kinds of sentiment scores can capture the overall tone of a document, the underlying concept of interest for political analysis is often actually the document’s stance with respect to a given target—how positively or negatively it frames a specific idea, individual, or group—as this reflects the author’s underlying political attitudes. In this paper, we question the validity of approximating author stance through sentiment scoring in the analysis of political texts, and advocate for greater attention to be paid to the conceptual distinction between a document’s sentiment and its stance. Using examples from open-ended survey responses and from political discussions on social media, we demonstrate that in many political text analysis applications, sentiment and stance do not necessarily align, and therefore sentiment analysis methods fail to reliably capture ground-truth document stance, amplifying noise in the data and leading to faulty conclusions.
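The sentiment/stance gap can be made concrete with a minimal lexicon scorer. The lexicon and the example text are invented, but the pattern is the one the abstract describes: a document dense with negative words whose stance toward its target is nonetheless supportive.

```python
# A minimal lexicon-based sentiment scorer (lexicon and example invented).
NEG = {"corrupt", "lies", "failed", "disgusting"}
POS = {"great", "love", "win", "support"}

def sentiment_score(text):
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

tweet = "The corrupt lobbyists and their lies failed, the bill passed."
score = sentiment_score(tweet)        # tone: clearly negative
stance_toward_bill = "support"        # stance: pro-bill (a human label)
```

A researcher using the sentiment score as a proxy for stance would code this author as opposing the bill, which is exactly the misalignment the paper documents at scale.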
Samuel E. Bestvater and B. Monroe, "Sentiment is Not Stance: Target-Aware Opinion Classification for Political Text Analysis," Political Analysis 31: 235-256 (2022). DOI: 10.1017/pan.2022.10
Abstract We offer methods to analyze the “differentially private” Facebook URLs Dataset which, at over 40 trillion cell values, is one of the largest social science research datasets ever constructed. The version of differential privacy used in the URLs dataset has specially calibrated random noise added, which provides mathematical guarantees for the privacy of individual research subjects while still making it possible to learn about aggregate patterns of interest to social scientists. Unfortunately, random noise creates measurement error which induces statistical bias—including attenuation, exaggeration, switched signs, or incorrect uncertainty estimates. We adapt methods developed to correct for naturally occurring measurement error, with special attention to computational efficiency for large datasets. The result is statistically valid linear regression estimates and descriptive statistics that can be interpreted as ordinary analyses of nonconfidential data but with appropriately larger standard errors.
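The core measurement-error problem and its fix can be sketched as a classic errors-in-variables rescaling: added noise with known variance attenuates the OLS slope toward zero, and because the noise variance is published with a differentially private release, a moment-based correction can undo the bias. This is a simplified stand-in for, not a reproduction of, the authors' estimators.

```python
# Attenuation from known-variance noise on a regressor, and its correction.
import random

random.seed(7)
n = 20000
beta = 2.0                                   # true slope
tau2 = 1.0                                   # noise variance, known from the release
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * xi + random.gauss(0.0, 1.0) for xi in x]
x_private = [xi + random.gauss(0.0, tau2 ** 0.5) for xi in x]  # privatized regressor

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = ols_slope(x_private, y)              # attenuated toward zero (~beta/2 here)
mx = sum(x_private) / n
var_private = sum((a - mx) ** 2 for a in x_private) / (n - 1)
corrected = naive * var_private / (var_private - tau2)  # moment-based rescaling
```

The correction divides out the reliability ratio Var(x) / (Var(x) + tau2); in the paper's setting the same idea has to be made computationally efficient for datasets with trillions of cells, and the corrected estimates carry appropriately larger standard errors.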
Georgina Evans and Gary King, "Statistically Valid Inferences from Differentially Private Data Releases, with Application to the Facebook URLs Dataset," Political Analysis 31: 1-21 (2022). DOI: 10.1017/pan.2022.1