Title: A Brief History of Long Memory: Hurst, Mandelbrot and the Road to ARFIMA
Authors: T. Graves, R. Gramacy, N. Watkins, C. Franzke
Pub Date: 2017-05-26  DOI: 10.20944/PREPRINTS201705.0194.V1
Journal: arXiv: Other Statistics

Abstract: Long memory plays an important role in many fields by determining the behaviour and predictability of systems; for instance, climate, hydrology, finance, networks and DNA sequencing. In particular, it is important to test whether a process exhibits long memory, since that affects the accuracy and confidence with which one may predict future events on the basis of a small amount of historical data. A major force in the development and study of long memory was the late Benoit B. Mandelbrot. Here we discuss the original motivation for the development of long memory and Mandelbrot's influence on this fascinating field. We also elucidate the sometimes contrasting approaches to long memory in different scientific communities.
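The long-memory testing mentioned in the abstract traces back to Hurst's rescaled-range (R/S) statistic, which is central to the history the paper recounts. A minimal sketch of an R/S-based Hurst estimate follows; the function names and the dyadic-window scheme are our illustrative choices, not code from the paper.

```python
import math

def rescaled_range(x):
    """Hurst's rescaled range R/S for one series: the range of the
    mean-adjusted cumulative sum divided by the standard deviation."""
    n = len(x)
    mean = sum(x) / n
    cum, dev = 0.0, []
    for v in x:
        cum += v - mean          # cumulative deviation from the mean
        dev.append(cum)
    r = max(dev) - min(dev)      # range of the cumulative deviations
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return r / s

def hurst_exponent(x, min_chunk=8):
    """Crude Hurst estimate: average R/S over non-overlapping windows
    of dyadic sizes, then take the least-squares slope of log(R/S)
    against log(window size). Long memory corresponds to H > 0.5."""
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= len(x):
        chunks = [x[i:i + size] for i in range(0, len(x) - size + 1, size)]
        avg_rs = sum(rescaled_range(c) for c in chunks) / len(chunks)
        sizes.append(math.log(size))
        rs_vals.append(math.log(avg_rs))
        size *= 2
    m = len(sizes)
    sx, sy = sum(sizes), sum(rs_vals)
    sxx = sum(v * v for v in sizes)
    sxy = sum(a * b for a, b in zip(sizes, rs_vals))
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)
```

For iid noise the estimate sits near H = 0.5, while strongly persistent series (a pure trend being the extreme case) push it toward 1; serious applications use refined estimators, since raw R/S is known to be biased in small samples.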
Title: Curriculum Guidelines for Undergraduate Programs in Data Science
Authors: R. D. Veaux, Mahesh Agarwal, Maia Averett, Benjamin S. Baumer, Andrew Bray, T. Bressoud, Lance Bryant, Lei Cheng, Amanda Francis, R. Gould, Albert Y. Kim, Matt Kretchmar, Qin Lu, Ann Moskol, D. Nolan, Roberto Pelayo, Sean Raleigh, Ricky J. Sethi, Mutiara Sondjaja, Neelesh Tiruviluamala, P. Uhlig, Talitha M. Washington, Curtis L. Wesley, David L. White, Ping Ye
Pub Date: 2017-03-10  DOI: 10.1146/annurev-statistics-060116-053930
Journal: arXiv: Other Statistics

Abstract: The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met to compose guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics, and computer science. These guidelines are meant to provide some structure for institutions planning or revising a major in Data Science.
Title: On $p$-values
Authors: Laurie Davies
Pub Date: 2016-11-18  DOI: 10.5705/SS.202016.0507
Journal: arXiv: Other Statistics

Abstract: Models are treated throughout as approximations, and all procedures are consistent with this: none treats the model as being true. In this context $p$-values are one measure of approximation, with a small $p$-value indicating a poor approximation. Approximation regions are defined and distinguished from confidence regions.
Title: Dynamic Data in the Statistics Classroom
Authors: Johanna S. Hardin
Pub Date: 2016-03-15  DOI: 10.5070/T5111031079
Journal: arXiv: Other Statistics

Abstract: The call for using real data in the classroom has long meant using datasets that have been culled, cleaned, and wrangled before any student works with the observations. However, an important part of teaching statistics should include actually retrieving data from the Internet. Nowadays there are many sources of data that are continually updated by the organizations hosting them. The R tools for downloading such dynamic data have improved to the point that accessing the data is possible even in an introductory statistics class. We provide five full analyses of dynamic data as well as an additional nine sources of dynamic data that can be brought into the classroom. The goal of our work is to demonstrate that using dynamic data can have a short learning curve, even for introductory students or faculty unfamiliar with the landscape. The examples provided are unlikely to create expert data scrapers, but they should help motivate students and faculty toward more engaged use of online data sources.
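The "retrieve, then analyze" loop the abstract describes can be sketched in a few lines; the paper itself works in R, so this Python version is only an illustration of the idea. The URL in the comment is a placeholder, and the sketch reads from an in-memory buffer standing in for the HTTP response so that it is self-contained.

```python
import csv
import io

# In a live classroom this buffer would be the freshly downloaded response,
# e.g. urllib.request.urlopen("https://example.org/current.csv")  # placeholder URL
response = io.StringIO(
    "station,date,temp_c\n"
    "ALPHA,2016-03-15,11.2\n"
    "BETA,2016-03-15,9.8\n"
)

def summarize(handle):
    """Parse a just-retrieved CSV and compute a simple summary,
    mirroring the dynamic-data workflow: each run sees current data."""
    rows = list(csv.DictReader(handle))
    temps = [float(r["temp_c"]) for r in rows]
    return {"n": len(rows), "mean_temp": sum(temps) / len(temps)}

print(summarize(response))
```

Because the source is re-downloaded on every run, students can rerun the same analysis days apart and discuss why the numbers changed, which is precisely the pedagogical appeal of dynamic data.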
Title: Mandelbrot's 1/f fractional renewal models of 1963-67: The non-ergodic missing link between change points and long range dependence
Authors: N. Watkins
Pub Date: 2016-02-29  DOI: 10.1007/978-3-319-55789-2_14
Journal: arXiv: Other Statistics
Title: What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum
Authors: Tim C. Hesterberg
Pub Date: 2016-02-11  DOI: 10.6084/M9.FIGSHARE.1569497.V3
Journal: arXiv: Other Statistics

Abstract: I have three goals in this article: (1) To show the enormous potential of bootstrapping and permutation tests to help students understand statistical concepts, including sampling distributions, standard errors, bias, confidence intervals, null distributions, and P-values. (2) To dig deeper: to understand why these methods work and when they don't, what to watch out for, and how to deal with these issues when teaching. (3) To change statistical practice: by comparing these methods to common t tests and intervals, we see how inaccurate the latter are, and we confirm this with asymptotics. n >= 30 isn't enough; think n >= 5000. Resampling provides diagnostics and more accurate alternatives. Sadly, the common bootstrap percentile interval badly under-covers in small samples; there are better alternatives. The tone is informal, with a few stories and jokes.
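The percentile interval the abstract singles out for under-coverage is easy to state concretely. Here is a minimal self-contained sketch (our code, not the article's), using a fixed seed so the result is reproducible:

```python
import random
import statistics

def bootstrap_percentile_ci(data, stat=statistics.mean,
                            n_boot=2000, alpha=0.05, seed=7):
    """Bootstrap percentile interval: resample the data with replacement,
    recompute the statistic each time, and take empirical quantiles of the
    replicates. This is the interval the article shows under-covers
    for small samples."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The sorted replicates themselves are also the raw material for the classroom visualizations of sampling distributions, standard errors, and bias that goal (1) describes; the article's better-calibrated alternatives (e.g. reverse/expanded intervals) modify how the quantiles are mapped back to an interval.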
Title: J. B. S. Haldane's Contribution to the Bayes Factor Hypothesis Test
Authors: Alexander Etz, E. Wagenmakers
Pub Date: 2015-11-25  DOI: 10.1214/16-STS599
Journal: arXiv: Other Statistics

Abstract: This article brings attention to some historical developments that gave rise to the Bayes factor for testing a point null hypothesis against a composite alternative. In line with current thinking, we find that the conceptual innovation — to assign prior mass to a general law — is due to a series of three articles by Dorothy Wrinch and Sir Harold Jeffreys (1919, 1921, 1923). However, our historical investigation also suggests that in 1932 J. B. S. Haldane made an important contribution to the development of the Bayes factor by proposing the use of a mixture prior comprising a point mass and a continuous probability density. Jeffreys was aware of Haldane's work, and it may have inspired him to pursue a more concrete statistical implementation of his conceptual ideas. It thus appears that Haldane may have played a much bigger role in the statistical development of the Bayes factor than has hitherto been assumed.
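The mixture-prior construction attributed to Haldane — a point mass on the null value plus a continuous density over the alternative — can be illustrated with the textbook binomial case (our example, not one from the article): a point mass at θ = 1/2 against a Uniform(0, 1) density.

```python
from math import comb

def bayes_factor_01(k, n):
    """BF_01 for H0: theta = 1/2 versus H1: theta ~ Uniform(0, 1),
    given k successes in n binomial trials. Under the uniform prior the
    marginal likelihood ∫ C(n,k) θ^k (1-θ)^(n-k) dθ equals 1 / (n + 1)
    for every k (a Beta-function identity)."""
    likelihood_h0 = comb(n, k) * 0.5 ** n
    marginal_h1 = 1 / (n + 1)
    return likelihood_h0 / marginal_h1
```

Values of BF_01 above 1 favor the point null: a perfectly balanced outcome such as k = 5, n = 10 yields modest evidence for θ = 1/2, while an extreme outcome such as k = 10, n = 10 yields BF_01 well below 1, favoring the composite alternative.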
Title: An Outline of the Bayesian Decision Theory
Authors: H. V. Erp, R. O. Linger, P. V. Gelder
Pub Date: 2015-09-28  DOI: 10.1063/1.4959057
Journal: arXiv: Other Statistics

Abstract: Bayesian decision theory is neo-Bernoullian in that it proves, by way of a consistency derivation, that Bernoulli's utility function is the only appropriate function by which to translate, for a given initial wealth, gains and losses into their corresponding utilities. But Bayesian decision theory deviates from Bernoulli's original expected utility theory in that it offers an alternative to the traditional criterion of expectation value maximization: it proposes to choose the decision whose associated utility probability distribution maximizes the mean of the expectation value and the lower and upper confidence bounds.
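Bernoulli's utility function referred to in the abstract is the logarithm of wealth. As a minimal sketch of the Bernoullian ingredient only — plain expected log-utility, not the paper's confidence-bound criterion — here is a comparison of two hypothetical gambles (the numbers are ours):

```python
import math

def expected_log_utility(initial_wealth, outcomes):
    """Expected Bernoulli (log-wealth) utility of a gamble given as
    (probability, gain) pairs; losses are negative gains."""
    return sum(p * math.log(initial_wealth + gain) for p, gain in outcomes)

# Two hypothetical gambles evaluated at initial wealth 100:
safe = [(1.0, 5)]                   # certain gain of 5
risky = [(0.5, 60), (0.5, -50)]     # fair coin: +60 or -50
# Although the risky gamble has the same expected monetary value (+5),
# log utility penalizes the large loss, so the safe gamble ranks higher.
```

The dependence on initial wealth is the point: at a much larger initial wealth the log function is nearly linear over these stakes, and the two gambles become almost indistinguishable in expected utility.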
Title: A Conversation with Alan Gelfand
Authors: B. Carlin, A. Herring
Pub Date: 2015-09-10  DOI: 10.1214/15-STS521
Journal: arXiv: Other Statistics

Abstract: Alan E. Gelfand was born April 17, 1945, in the Bronx, New York. He attended public grade schools and did his undergraduate work at what was then called City College of New York (CCNY, now CUNY), excelling at mathematics. He then surprised and saddened his mother by going all the way across the country to Stanford for graduate school, where he completed his dissertation in 1969 under the direction of Professor Herbert Solomon, making him an academic grandson of Herman Rubin and Harold Hotelling. Alan then accepted a faculty position at the University of Connecticut (UConn), where he was promoted to tenured associate professor in 1975 and to full professor in 1980. A few years later he became interested in decision theory, then empirical Bayes, which eventually led to the publication of Gelfand and Smith [J. Amer. Statist. Assoc. 85 (1990) 398-409], the paper that introduced the Gibbs sampler to most statisticians and revolutionized Bayesian computing. In the mid-1990s, Alan's interests turned strongly to spatial statistics, leading to fundamental contributions in spatially-varying coefficient models, coregionalization, and spatial boundary analysis (wombling). He spent 33 years on the faculty at UConn, retiring in 2002 to become the James B. Duke Professor of Statistics and Decision Sciences at Duke University, serving as chair from 2007 to 2012. At Duke, he has continued his work in spatial methodology while increasing his impact in the environmental sciences. To date, he has published over 260 papers and 6 books; he has also supervised 36 Ph.D. dissertations and 10 postdocs. This interview was done just prior to a conference of his family, academic descendants, and colleagues to celebrate his 70th birthday and his contributions to statistics, which took place on April 19-22, 2015 at Duke University.
Title: Le Her and Other Problems in Probability Discussed by Bernoulli, Montmort and Waldegrave
Authors: D. Bellhouse, Nicolas Fillion
Pub Date: 2015-02-01  DOI: 10.1214/14-STS469
Journal: arXiv: Other Statistics

Abstract: Part V of the second edition of Pierre Rémond de Montmort's Essay d'analyse sur les jeux de hazard, published in 1713, contains correspondence on probability problems between Montmort and Nicolaus Bernoulli. This correspondence begins in 1710. The last published letter, dated November 15, 1713, is from Montmort to Nicolaus Bernoulli. There is some discussion of the strategy of play in the card game Le Her and a bit of news that Montmort's friend Waldegrave in Paris was going to take care of the printing of the book. From earlier correspondence between Bernoulli and Montmort, it is apparent that Waldegrave had also analyzed Le Her and had come up with a mixed strategy as a solution. He had also suggested working on the "problem of the pool," or what is often called Waldegrave's problem. The Universitätsbibliothek Basel contains an additional forty-two letters between Bernoulli and Montmort written after 1713, as well as two letters between Bernoulli and Waldegrave. The letters are all in French, and here we provide translations of key passages. The trio continued to discuss probability problems, particularly Le Her, which was still under discussion when the Essay d'analyse went to print. We describe the probability content of this body of correspondence and put it in its historical context. We also provide a proper identification of Waldegrave based on manuscripts in the Archives nationales de France in Paris.