{"title":"概率模型","authors":"R. Haralick","doi":"10.1002/9781118771075.ch3","DOIUrl":null,"url":null,"abstract":"Probability theory is the mathematical study of uncertainty. In the real world, probability models are used to predict the movement of stock prices, compute insurance premiums, and evaluate cancer treatments. In mathematics, probabilistic techniques are sometimes applied to problems in analysis, combinatorics, and discrete math. Most importantly for this course, probability theory is the foundation of the field of statistics, which is concerned with decision making under uncertainty. This course is an introduction to probability theory for statisticians. Applications of the concepts in this course appear throughout theoretical and applied statistics courses. In probability theory, uncertain outcomes are called random; that is, given the information we currently have about an event, we are uncertain of its outcome. For example, suppose I flip a coin with two sides, labeled Heads and Tails, and secretly record the side facing up upon landing, called the outcome. We call such a procedure an experiment or a random process. Even though I have performed the experiment and I know the outcome, you only know the specifications of the experiment; you are uncertain of its outcome. We call such an outcome random. Of all non-trivial processes, coin flipping is the simplest because there are only two possible outcomes from any coin flip, heads and tails. In fact, although coin flipping is the simplest random process, it is fundamental to much of probability theory, as we will learn throughout this course. In the above experiment, the possible outcomes are Heads and Tails, and an event is any set of possible outcomes. There are four events for this experiment: ∅ := {}, {Heads}, {Tails} and {Heads,Tails}. If the result of the experiment is an outcome in E, then E is said to occur. The set of possible outcomes is called the sample space and is usually denoted Ω. For any event, we can count the number of favorable outcomes associated to that event. For example, if E := {Heads} is the event of interest, then there is #E = 1 favorable outcome and the fraction of favorable outcomes to total outcomes is #E/#Ω = 1/2. For a probability model in which the occurrence of each outcome in Ω is equally likely, the fraction #E/#Ω is a natural probability assignment for the event E, which we denote P(E). This choice of P(E) comes from the frequentist interpretation of probability, by which P(E) is interpreted as the proportion of times E occurs in a large number of repetitions of the same experiment.","PeriodicalId":50764,"journal":{"name":"Annals of Mathematical Statistics","volume":"31 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2018-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"90","resultStr":"{\"title\":\"Probability Models\",\"authors\":\"R. Haralick\",\"doi\":\"10.1002/9781118771075.ch3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Probability theory is the mathematical study of uncertainty. In the real world, probability models are used to predict the movement of stock prices, compute insurance premiums, and evaluate cancer treatments. In mathematics, probabilistic techniques are sometimes applied to problems in analysis, combinatorics, and discrete math. Most importantly for this course, probability theory is the foundation of the field of statistics, which is concerned with decision making under uncertainty. 
This course is an introduction to probability theory for statisticians. Applications of the concepts in this course appear throughout theoretical and applied statistics courses. In probability theory, uncertain outcomes are called random; that is, given the information we currently have about an event, we are uncertain of its outcome. For example, suppose I flip a coin with two sides, labeled Heads and Tails, and secretly record the side facing up upon landing, called the outcome. We call such a procedure an experiment or a random process. Even though I have performed the experiment and I know the outcome, you only know the specifications of the experiment; you are uncertain of its outcome. We call such an outcome random. Of all non-trivial processes, coin flipping is the simplest because there are only two possible outcomes from any coin flip, heads and tails. In fact, although coin flipping is the simplest random process, it is fundamental to much of probability theory, as we will learn throughout this course. In the above experiment, the possible outcomes are Heads and Tails, and an event is any set of possible outcomes. There are four events for this experiment: ∅ := {}, {Heads}, {Tails} and {Heads,Tails}. If the result of the experiment is an outcome in E, then E is said to occur. The set of possible outcomes is called the sample space and is usually denoted Ω. For any event, we can count the number of favorable outcomes associated to that event. For example, if E := {Heads} is the event of interest, then there is #E = 1 favorable outcome and the fraction of favorable outcomes to total outcomes is #E/#Ω = 1/2. For a probability model in which the occurrence of each outcome in Ω is equally likely, the fraction #E/#Ω is a natural probability assignment for the event E, which we denote P(E). This choice of P(E) comes from the frequentist interpretation of probability, by which P(E) is interpreted as the proportion of times E occurs in a large number of repetitions of the same experiment.\",\"PeriodicalId\":50764,\"journal\":{\"name\":\"Annals of Mathematical Statistics\",\"volume\":\"31 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"90\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annals of Mathematical Statistics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/9781118771075.ch3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Mathematical Statistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/9781118771075.ch3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Probability theory is the mathematical study of uncertainty. In the real world, probability models are used to predict the movement of stock prices, compute insurance premiums, and evaluate cancer treatments. In mathematics, probabilistic techniques are sometimes applied to problems in analysis, combinatorics, and discrete mathematics. Most importantly for this course, probability theory is the foundation of statistics, the field concerned with decision making under uncertainty. This course is an introduction to probability theory for statisticians, and its concepts reappear throughout theoretical and applied statistics courses.

In probability theory, uncertain outcomes are called random; that is, given the information we currently have about an event, we are uncertain of its outcome. For example, suppose I flip a coin with two sides, labeled Heads and Tails, and secretly record the side facing up upon landing, called the outcome. We call such a procedure an experiment or a random process. Even though I have performed the experiment and know the outcome, you only know the specification of the experiment; you are uncertain of its outcome. We call such an outcome random. Of all non-trivial random processes, coin flipping is the simplest: any flip has only two possible outcomes, Heads and Tails. Despite this simplicity, coin flipping is fundamental to much of probability theory, as we will see throughout this course.

In the coin-flip experiment, the possible outcomes are Heads and Tails, and an event is any set of possible outcomes. There are four events for this experiment: ∅ := {}, {Heads}, {Tails}, and {Heads, Tails}. If the result of the experiment is an outcome in E, then E is said to occur. The set of possible outcomes is called the sample space and is usually denoted Ω.

For any event, we can count the number of favorable outcomes associated with that event. For example, if E := {Heads} is the event of interest, then there is #E = 1 favorable outcome, and the fraction of favorable outcomes to total outcomes is #E/#Ω = 1/2. For a probability model in which each outcome in Ω is equally likely, the fraction #E/#Ω is a natural probability assignment for the event E, which we denote P(E). This choice of P(E) reflects the frequentist interpretation of probability, under which P(E) is the proportion of times E occurs in a large number of repetitions of the same experiment.
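To make the definitions concrete, here is a minimal sketch in Python of the equally-likely model for the coin flip: it enumerates every event (every subset of the sample space Ω) and assigns P(E) = #E/#Ω. The helper names `events` and `prob` are illustrative choices, not part of the text.

```python
from itertools import combinations
from fractions import Fraction

# Sample space for a single coin flip.
omega = ["Heads", "Tails"]

def events(sample_space):
    """Enumerate every event, i.e. every subset of the sample space."""
    return [
        frozenset(subset)
        for r in range(len(sample_space) + 1)
        for subset in combinations(sample_space, r)
    ]

def prob(event, sample_space):
    """Equally-likely model: P(E) = #E / #Omega."""
    return Fraction(len(event), len(sample_space))

for E in events(omega):
    print(sorted(E), prob(E, omega))
```

Running this lists the four events ∅, {Heads}, {Tails}, and {Heads, Tails} with probabilities 0, 1/2, 1/2, and 1, matching the counts above.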
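The frequentist interpretation can likewise be illustrated by simulation: repeat the coin-flip experiment many times and record the proportion of flips on which {Heads} occurs. This is only an illustrative sketch (the function name and the use of Python's `random` module are assumptions, not part of the text); as the number of repetitions grows, the observed proportion settles near P({Heads}) = 1/2.

```python
import random

def estimate_prob_heads(n_flips, seed=0):
    """Frequentist estimate: the proportion of simulated flips landing Heads."""
    rng = random.Random(seed)
    heads = sum(rng.choice(["Heads", "Tails"]) == "Heads" for _ in range(n_flips))
    return heads / n_flips

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} flips: estimated P(Heads) = {estimate_prob_heads(n):.4f}")
```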