{"title":"Estimation of Unknown Parameters","authors":"A. Borovkov","doi":"10.1201/9780203749326-2","DOIUrl":"https://doi.org/10.1201/9780203749326-2","url":null,"abstract":"","PeriodicalId":50764,"journal":{"name":"Annals of Mathematical Statistics","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83420741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical Problems for Two or More Samples","authors":"A. Borovkov","doi":"10.1201/9780203749326-4","DOIUrl":"https://doi.org/10.1201/9780203749326-4","url":null,"abstract":"","PeriodicalId":50764,"journal":{"name":"Annals of Mathematical Statistics","volume":"91 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89591557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Game-Theoretic Approach to Problems of Mathematical Statistics","authors":"A. Borovkov","doi":"10.1201/9780203749326-6","DOIUrl":"https://doi.org/10.1201/9780203749326-6","url":null,"abstract":"","PeriodicalId":50764,"journal":{"name":"Annals of Mathematical Statistics","volume":"98 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81018434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probability Models","authors":"R. Haralick","doi":"10.1002/9781118771075.ch3","DOIUrl":"https://doi.org/10.1002/9781118771075.ch3","url":null,"abstract":"Probability theory is the mathematical study of uncertainty. In the real world, probability models are used to predict the movement of stock prices, compute insurance premiums, and evaluate cancer treatments. In mathematics, probabilistic techniques are sometimes applied to problems in analysis, combinatorics, and discrete math. Most importantly for this course, probability theory is the foundation of the field of statistics, which is concerned with decision making under uncertainty. This course is an introduction to probability theory for statisticians. Applications of the concepts in this course appear throughout theoretical and applied statistics courses. In probability theory, uncertain outcomes are called random; that is, given the information we currently have about an event, we are uncertain of its outcome. For example, suppose I flip a coin with two sides, labeled Heads and Tails, and secretly record the side facing up upon landing, called the outcome. We call such a procedure an experiment or a random process. Even though I have performed the experiment and I know the outcome, you only know the specifications of the experiment; you are uncertain of its outcome. We call such an outcome random. Of all non-trivial processes, coin flipping is the simplest because there are only two possible outcomes from any coin flip, heads and tails. In fact, although coin flipping is the simplest random process, it is fundamental to much of probability theory, as we will learn throughout this course. In the above experiment, the possible outcomes are Heads and Tails, and an event is any set of possible outcomes. There are four events for this experiment: ∅ := {}, {Heads}, {Tails} and {Heads,Tails}. If the result of the experiment is an outcome in E, then E is said to occur. 
The set of possible outcomes is called the sample space and is usually denoted Ω. For any event, we can count the number of favorable outcomes associated to that event. For example, if E := {Heads} is the event of interest, then there is #E = 1 favorable outcome and the fraction of favorable outcomes to total outcomes is #E/#Ω = 1/2. For a probability model in which the occurrence of each outcome in Ω is equally likely, the fraction #E/#Ω is a natural probability assignment for the event E, which we denote P(E). This choice of P(E) comes from the frequentist interpretation of probability, by which P(E) is interpreted as the proportion of times E occurs in a large number of repetitions of the same experiment.","PeriodicalId":50764,"journal":{"name":"Annals of Mathematical Statistics","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85366567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
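The equally-likely probability assignment P(E) = #E/#Ω described in the abstract above can be sketched in Python (the helper name `prob` is my own, not from the text):

```python
from fractions import Fraction

def prob(event, sample_space):
    """P(E) = #E / #Omega when every outcome in the sample space is
    equally likely: the fraction of favorable outcomes."""
    favorable = event & sample_space  # count only outcomes that can occur
    return Fraction(len(favorable), len(sample_space))

# The coin-flip experiment: Omega = {Heads, Tails}, four possible events.
omega = {"Heads", "Tails"}
assert prob({"Heads"}, omega) == Fraction(1, 2)
assert prob(set(), omega) == 0   # the empty event never occurs
assert prob(omega, omega) == 1   # some outcome always occurs
```

Using exact `Fraction` arithmetic keeps #E/#Ω as a ratio rather than a rounded float, matching the counting argument in the abstract.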
{"title":"Random Variables and Random Vectors","authors":"Tim Marks","doi":"10.1002/9781118771075.ch2","DOIUrl":"https://doi.org/10.1002/9781118771075.ch2","url":null,"abstract":"• Samples from a random variable are real numbers – A random variable is associated with a probability distribution over these real values – Two types of random variables • Discrete – Only finitely many possible values for the random variable: X ∈ {a1, a2, ..., an} – (Could also have a countable infinity of possible values) » e.g., the random variable could take any positive integer value – Each possible value has a nonzero probability of occurring. • Continuous – Infinitely many possible values for the random variable – E.g., X ∈ {Real numbers}","PeriodicalId":50764,"journal":{"name":"Annals of Mathematical Statistics","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75752477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
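The discrete/continuous distinction in the abstract above can be sketched in Python (the specific values and weights are hypothetical, chosen only for illustration):

```python
import random

random.seed(0)  # reproducible draws for this sketch

# Discrete: finitely (or countably) many possible values a1, ..., an,
# each occurring with nonzero probability.
values = [1, 2, 3]
probs = [0.5, 0.3, 0.2]
x_discrete = random.choices(values, weights=probs, k=1)[0]
assert x_discrete in values

# Continuous: uncountably many possible values (e.g. X ranging over the
# reals); any single value has probability zero, so probabilities are
# assigned to intervals instead, e.g. P(0 <= X <= 1).
x_continuous = random.uniform(0.0, 1.0)
assert 0.0 <= x_continuous <= 1.0

# Frequentist check: the empirical frequency of the outcome X = 1
# approaches P(X = 1) = 0.5 as the number of draws grows.
n = 100_000
freq = random.choices(values, weights=probs, k=n).count(1) / n
```

The final check ties back to the frequentist interpretation in the "Probability Models" abstract: with 100,000 draws, `freq` lands within a fraction of a percent of 0.5.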