This lecture serves as a philosophically informed mathematical introduction to the ideas and notation of probability theory from its most important historical theorist. It is part of an ongoing contemporary formal reconstruction of Laplace’s Calculus of Probability from his English-translated introductory essay, “A Philosophical Essay on Probabilities” (cite: PEP), which can be read alongside these notes; the notes are divided into the same sections as Laplace’s essay. I have included deeper supplements from the untranslated treatise Théorie Analytique des Probabilités (cite: TA), through personal and online translation tools, in section 1.10 and the Appendix (3).
The General Principles of the Calculus of Probabilities
$\Omega$ is the state of all possible events.
$e \in \Omega$ is an event as an element of the state.
1st Principle: The probability of the occurrence of an event is the number of favorable cases divided by the total number of possible cases, assuming all cases are equally likely.
$\mathcal{D}(\Omega)$ is the derivational system of the state: the space of cases that will cause different events in the state. $\mathcal{D}_e(\Omega)$ is the derivational system of the state favoring the event $e$. The order of a particular state (or derivational state-system) is given by the counting measure ($\#$) evaluated as the number of elements in it. Principle 1 then reads
$$P(e) = \frac{\#(\mathcal{D}_e(\Omega))}{\#(\mathcal{D}(\Omega))}$$
If we introduce time as the attribute of case-based favorability, i.e. causality, the event is to occur at a future time $t+1$, such as would be represented by the formal statement $e_{t+1}$. The conditioning cases, equally likely, which will deterministically cause the event at $t+1$ are the possible events at the previous conditioning states of the system, given as $\Omega_t$, a superposition of possible states-as-cases, since they are unknown at the time of the present of $e_t$; here $\mathcal{D}(\Omega_t)$ is a derivational state-system, or set of possible causal states, evaluated at $t$ given $\Omega_t$. This set of possible cases can be partitioned into those that are favorable to $e_{t+1}$ and those that are not. The set of cases favorable to $e_{t+1}$ is $\mathcal{D}_{e_{t+1}}(\Omega_t)$.
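As a minimal illustration of Principle 1 in this notation (the die example is supplied here, not Laplace’s): for a fair die, the cases are the six faces, all equally likely, and the event $e$ = \say{an even face} is favored by three of them, so
$$P(e) = \frac{\#(\mathcal{D}_e(\Omega))}{\#(\mathcal{D}(\Omega))} = \frac{\#\{2,4,6\}}{\#\{1,2,3,4,5,6\}} = \frac{3}{6} = \frac{1}{2}.$$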
2nd Principle: Assuming the conditioning cases are not equal in probability, the probability of the occurrence of an event is the sum of the probabilities of the favorable cases,
$$P(e) = \sum_{c \,\in\, \mathcal{D}_e(\Omega)} P(c)$$
3rd Principle: The probability of the combined event ($e_1 \cap e_2$) of independent events is the product of the probabilities of the component events,
$$P(e_1 \cap e_2) = P(e_1)\,P(e_2)$$
4th Principle: The probability of a compound event ($e_1 \cap e_2$) of two events dependent upon each other, $e_1$ and $e_2$, where $e_2$ is after $e_1$, is the probability of the first times the probability of the second conditioned on the first having occurred,
$$P(e_1 \cap e_2) = P(e_1)\,P(e_2 \mid e_1)$$
5th Principle (p.15): The probability of an expected event conditioned on an occurred event is the probability of the compound event divided by the a priori probability of the occurred event,
$$P(e_2 \mid e_1) = \frac{P(e_1 \cap e_2)}{P(e_1)}$$
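A small worked instance of Principles 3–5 may help; the card example here is supplied for illustration and is not Laplace’s. Draw twice from a 52-card deck without replacement, with $e_1$ = \say{the first card is an ace} and $e_2$ = \say{the second card is an ace}. By Principle 4,
$$P(e_1 \cap e_2) = P(e_1)\,P(e_2 \mid e_1) = \frac{4}{52}\cdot\frac{3}{51} = \frac{1}{221},$$
and Principle 5 inverts this: $P(e_2 \mid e_1) = \frac{1/221}{1/13} = \frac{3}{51}$. With replacement the draws are independent, and Principle 3 gives $P(e_1 \cap e_2) = \left(\frac{4}{52}\right)^2 = \frac{1}{169}$ instead.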
Always, $e_t$ is from a prior state, as can be given by a previous event $e_{t-1}$. Thus, if we assume the present to be $t$, the prior time to have been $t-1$, and the future time to be $t+1$, then the probability of the presently occurred event $e_t$ is made from $e_{t-1}$ as
$$P(e_t) = P(e_{t-1})\,P(e_t \mid e_{t-1})$$
The probability of the combined event $e_t \cap e_{t+1}$ occurring can also be measured partially, from the perspective of the present $t$, as
$$P(e_t \cap e_{t+1}) = P(e_t)\,P(e_{t+1} \mid e_t)$$
Thus,
$$P(e_{t+1} \mid e_t) = \frac{P(e_t \cap e_{t+1})}{P(e_t)}$$
6th Principle: 1. For a constant event, the likelihood of a cause of an event is the same as the probability that the event will occur. 2. The probability of the existence of any one of those causes is the probability of the event (resulting from this cause) divided by the sum of the probabilities of similar events from all causes. 3. For causes, considered a priori, which are unequally probable, the probability of the existence of a cause is the probability of the caused event divided by the sum of the products of the probabilities of the events and the possibilities (a priori probabilities) of their causes.
For an event $e$, let $c$ be its cause. While $P(c)$ is the probability of an actual existence, the possibility $p(c)$ is the measure of the a priori likelihood of a cause, since its existence is unknown. These two measurements may be used interchangeably where the existential nature of the measurement is known or where substitutions as approximations are permissible. In Principle 5 they are conflated, since the probability of an occurred event always implies an a priori likelihood.
- for $e$ constant (i.e. only 1 cause, $c$), $P(c \mid e) = P(e \mid c)$
- for causes $c_i$ equally likely a priori, $P(c_i \mid e) = \dfrac{P(e \mid c_i)}{\sum_j P(e \mid c_j)}$
- for causes unequally probable a priori, $P(c_i \mid e) = \dfrac{P(e \mid c_i)\,p(c_i)}{\sum_j P(e \mid c_j)\,p(c_j)}$ (see the worked example below)
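For concreteness, here is a standard two-urn instance of Principle 6; the example and its numbers are mine, not Laplace’s. Let the causes be two urns, $c_1$ with 2 white and 1 black ball and $c_2$ with 1 white and 2 black, and let the observed event $e$ be the drawing of a white ball. If the urns are equally likely a priori,
$$P(c_1 \mid e) = \frac{P(e \mid c_1)}{P(e \mid c_1) + P(e \mid c_2)} = \frac{2/3}{2/3 + 1/3} = \frac{2}{3},$$
while with unequal possibilities $p(c_1) = \frac{1}{4}$, $p(c_2) = \frac{3}{4}$, the weighted form gives
$$P(c_1 \mid e) = \frac{(2/3)(1/4)}{(2/3)(1/4) + (1/3)(3/4)} = \frac{2}{5}.$$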
7th Principle (p.17): The probability of a future event, $e'$, is the sum of the products of the probability of each cause, drawn from the event observed, by the probability that, this cause existing, the future event will occur,
$$P(e' \mid e) = \sum_i P(c_i \mid e)\,P(e' \mid c_i)$$
The present is $t$ while the future time is $t+1$. Thus, the future event expected is $e_{t+1}$. Given that $e_t$ has been observed, we ask about the probability of a future event from the set of causes $\{H_i\}$ (a change of notation for causes).
How are we to consider causes? They can be historical events with a causal-deterministic relationship to the future, or they can be considered event-conditions, as a spatiality (possibly true over a temporal duration) rather than a temporality (true at one time). Generally, we can consider causes to be hypotheses $H_i$, with $P(H_i)$ the (single-term) probability and $P(E \mid H_i)$ the (conditional) probability. The observed event ($e_t$) is $E$ and the future event ($e_{t+1}$) is the expected event $E'$. Thus, we can restate principles 7 & 6 as:
$$P(H_i \mid E) = \frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)} \quad \text{(Principle 6)}$$
$$P(E' \mid E) = \sum_i P(H_i \mid E)\,P(E' \mid H_i) \quad \text{(Principle 7)}$$
Clearly, Principle 6 is the same as Bayes’ Theorem (Wasserman, Thm. 2.16), which articulates the hypotheses as a partition of $\Omega$ (in that $\bigcup_i H_i = \Omega$ and $H_i \cap H_j = \emptyset$ for $i \neq j$), each hypothesis being a limitation of the domain of possible events. The observed event is also considered a set of events rather than a single ‘point.’ Therefore, Principle 6 says that “the probability that the possibility of the event is comprised within given limits is the sum of the fractions comprised within these limits” (Laplace, p.18).
8th Principle (p.20): The Advantage of Mathematical Hope, $A$, depending on several events, is the sum of the products of the probability of each event by the benefit attached to its occurrence.
Let $\{e_i\}$ be the set of events under consideration. Let $b$ be the benefit function giving a value $b(e_i)$ to each event. The advantage hoped for from these events is:
$$A = \sum_i P(e_i)\,b(e_i)$$
A fair game is one whose cost of playing is equal to the advantage gained through it.
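For instance (a die game chosen here for illustration): let the game pay the face value of one roll of a fair die. Then
$$A = \sum_{i=1}^{6} \frac{1}{6}\,i = \frac{21}{6} = 3.5,$$
so the game is fair exactly when its cost of playing is $3.5$.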
9th Principle (p.21): The Advantage $A$, depending on a series of events ($e_1, e_2, \ldots$), is the sum of the products of the probability of each favorable event by the benefit of its occurrence, minus the sum of the products of the probability of each unfavorable event by the cost of its occurrence.
Let $\{e_i\}$ be the series of events under consideration, partitioned into $F$ for favorable and $U$ for unfavorable events. Let $b$ be the benefit function for $F$ and $\ell$ the loss function for $U$, each giving the value of an event. The advantage of playing the game is:
$$A = \sum_{e \in F} P(e)\,b(e) \;-\; \sum_{e \in U} P(e)\,\ell(e)$$
Mathematical Hope is the positivity of $A$. Thus, if $A$ is positive, one has hope for the game, while if $A$ is negative one has fear.
In generality, $X$ is the random variable, a function $X : \Omega \to \mathbb{R}$ that gives a value to each event, either a benefit ($X > 0$) or a cost ($X < 0$). The absolute expectation ($E[X]$) of value for the game from these events is:
$$A = E[X] = \sum_i P(e_i)\,X(e_i)$$
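As a minimal numeric instance of Principle 9 in this notation (the coin game is my example): let heads win $2$ and tails lose $1$, i.e. $X(\text{heads}) = 2$ and $X(\text{tails}) = -1$. Then
$$A = \frac{1}{2}\cdot 2 - \frac{1}{2}\cdot 1 = E[X] = \frac{1}{2} > 0,$$
so one has mathematical hope for this game.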
10th Principle (p.23): The relative value of an infinitely small sum is equal to its absolute value divided by the total benefit of the person interested.
This section can be explicated by examining Laplace’s corresponding section in Théorie Analytique (§41–42, pp.432–445) as a development of Daniel Bernoulli’s work on the subject.
(432) For a \textit{physical fortune} $x$, an increase by $dx$ produces a moral good reciprocal to the fortune, $dy = k\,\frac{dx}{x}$, for a constant $k$. $k$ gives the \say{units} of moral goodness (i.e. utility), in that a proportional increase $\frac{dx}{x} = \frac{1}{k}$ is 1 moral good. So, $k$ is the quantity of physical fortune whereby a marginal increase by unity of physical fortune is equivalent to unity of moral fortune. For a \textit{moral fortune} $y$,
$$y = k\ln x + \ln h$$
A moral good is the proportion of an increase in part of a fortune to the whole fortune. Moral fortune is the sum of all moral goods. If we consider this summation continuously, for all infinitesimally small increases in physical fortune, moral fortune is the integral of the proportional reciprocal of the physical fortune by the changes in that physical fortune. Deriving this from Principle 10,
$$y = \int k\,\frac{dx}{x} = k\ln x + \ln h$$
$\ln h$ is the constant of minimum moral good when the physical fortune is unity ($x = 1$). We can put this in terms of a physical fortune, $h$, the minimum physical fortune for surviving one’s existence – the cost of reproducing the conditions of one’s own existence. With $x = h$,
$$y = k\ln h + \ln h,$$
the moral fortune of bare existence.
$h$ is a constant given by an empirical observation of fortune as never positive or negative but always at least what is necessary: even someone without any physical fortune will still have a moral fortune in their existence – $h$ is thus the unpriced \say{physical fortune} of laboring existence.
(433) Suppose an individual with a physical fortune of $a$ expects to receive a variety of changes in fortune $\alpha, \beta, \gamma, \ldots$, as increments or diminishings, with probabilities $p, q, r, \ldots$ summing to unity. The corresponding moral fortunes would be,
$$k\ln(a+\alpha) + \ln h,\quad k\ln(a+\beta) + \ln h,\quad k\ln(a+\gamma) + \ln h,\ \ldots$$
Thus, the expected moral fortune is
$$Y = p\left[k\ln(a+\alpha) + \ln h\right] + q\left[k\ln(a+\beta) + \ln h\right] + \cdots = k\left[p\ln(a+\alpha) + q\ln(a+\beta) + \cdots\right] + \ln h,$$
since $p + q + r + \cdots = 1$.
Let $X$ be the physical fortune corresponding to this moral fortune, as
$$Y = k\ln X + \ln h$$
with,
$$X = (a+\alpha)^p (a+\beta)^q (a+\gamma)^r \cdots$$
Taking away the primitive fortune $a$ from this value of $X$, the difference will be the increase in the physical fortune that would procure the individual the same moral advantage resulting from his expectation. This difference is therefore the expression of the moral advantage,
$$X - a = (a+\alpha)^p (a+\beta)^q (a+\gamma)^r \cdots - a$$
This results in several important consequences. One of them is that the mathematically most equal game is always disadvantageous. Indeed, if we denote by $a$ the physical fortune of the player before starting the game and by $p$ his probability of winning, (434) the argument may be completed as sketched below.
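The rest of the derivation can be reconstructed from the standard reading of Théorie Analytique §41; the stake $m$, the winnings $w$, and the fair-game condition $p\,w = (1-p)\,m$ are supplied here as assumptions. By the weighted AM–GM inequality,
$$X = (a+w)^p (a-m)^{1-p} \;\leq\; p(a+w) + (1-p)(a-m) \;=\; a + \big(p\,w - (1-p)\,m\big) \;=\; a,$$
with equality only when $w = m = 0$; hence $X < a$, and the moral advantage $X - a$ is negative even though the mathematical advantage is zero. For example, with $a = 100$, $p = \frac{1}{2}$, and $w = m = 50$, $X = \sqrt{150 \cdot 50} \approx 86.6 < 100$.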
Concerning the Analytical Methods of the Calculus of Probabilities
The Binomial Theorem:
$$(a+b)^n = \sum_{s=0}^{n} \binom{n}{s} a^{n-s} b^{s}$$
Letting $a_1, a_2, \ldots, a_n$ be distinct letters,
$$(1+a_1)(1+a_2)\cdots(1+a_n) = 1 + \sum_i a_i + \sum_{i<j} a_i a_j + \cdots$$
If we suppose these letters are equal ($a_i = a$ for all $i$),
$$(1+a)^n = \sum_{s=0}^{n} \binom{n}{s} a^{s}, \qquad \binom{n}{s} = \frac{n(n-1)\cdots(n-s+1)}{1 \cdot 2 \cdots s},$$
so the coefficient $\binom{n}{s}$ counts the combinations of $n$ letters taken $s$ at a time.
Consider the lottery composed of $n$ numbers, of which $r$ are drawn at each draw. What is the probability of drawing $s$ given numbers in one draw? The favorable cases are those draws containing all $s$ given numbers, so
$$P = \frac{\binom{n-s}{r-s}}{\binom{n}{r}}$$
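A short computational check of this formula (a sketch; the function name and sample values are mine, and the French lottery of Laplace’s day, which drew 5 numbers out of 90, is used as the illustrative instance):

```python
from math import comb

def lottery_prob(n: int, r: int, s: int) -> float:
    """Probability that s given numbers all appear among the r numbers
    drawn (without replacement) from a lottery of n numbers."""
    return comb(n - s, r - s) / comb(n, r)

# The French lottery drew r = 5 numbers out of n = 90.
for s in range(1, 6):
    print(f"P(all {s} given numbers drawn) = {lottery_prob(90, 5, s):.3e}")
```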
Consider the urn with $a$ white balls and $b$ black balls, drawn with replacement. Let $n$ be the number of draws. Let $m$ be the number of white balls and $n - m$ be the number of black balls. What is the probability of $m$ white balls and $n - m$ black balls being drawn?
$(a+b)^n$ is the number of all the cases possible in $n$ draws. In the expansion of this binomial, $\binom{n}{m} a^m b^{n-m}$ expresses the number of cases in which $m$ white balls and $n-m$ black balls may be drawn. Thus,
$$P = \frac{\binom{n}{m} a^m b^{n-m}}{(a+b)^n}$$
Letting $p = \frac{a}{a+b}$ be the probability of drawing a white ball in a single draw and $q = \frac{b}{a+b}$ be the probability of drawing a black ball in a single draw,
$$P_n(m) = \binom{n}{m} p^m q^{n-m}$$
This is an ordinary finite differential equation:
$$P_n(m) = p\,P_{n-1}(m-1) + q\,P_{n-1}(m)$$
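A sketch verifying that the finite difference relation and the closed form agree (the helper names and sample values are mine):

```python
from math import comb

def pmf_closed(n: int, m: int, p: float) -> float:
    """Closed-form probability of m white balls in n draws with replacement."""
    return comb(n, m) * p**m * (1 - p) ** (n - m)

def pmf_recurrence(n: int, m: int, p: float) -> float:
    """The same probability built up from the finite difference relation."""
    if m < 0 or m > n:
        return 0.0
    if n == 0:
        return 1.0  # base case: zero draws, zero white balls
    return p * pmf_recurrence(n - 1, m - 1, p) + (1 - p) * pmf_recurrence(n - 1, m, p)

p = 3 / 5  # e.g. an urn with a = 3 white and b = 2 black balls
assert abs(pmf_closed(6, 4, p) - pmf_recurrence(6, 4, p)) < 1e-12
```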
Three players of supposed equal ability play together on the following conditions: that one of the first two players who beats his adversary plays the third, and if he beats him the game is finished. If he is beaten, the victor plays against the second until one of the players has defeated consecutively the two others, which ends the game. The probability is demanded that the game will be finished in a certain number $n$ of plays. Let us find the probability that it will end precisely at the $n$th play. For that, the player who wins ought to enter the game at the $(n-1)$th play and win it and the following play. But if, in place of winning the $(n-1)$th play, he should be beaten by his adversary, who had just beaten the other player, the game would end at this play. Thus the probability that one of the players will enter the game at the $(n-1)$th play and will win it is equal to the probability that the game will end precisely with this play; and as this player ought to win the following play in order that the game may be finished at the $n$th play, the probability of this last case will be only one half of the preceding one. (p.29-30)
Let $T$ be the random variable of the number of plays it takes for the game to finish.
Let $G_n$ be the random variable of the pair of players playing in game $n$. Let $W_n$ be the random variable of the winning player, $W_n \in G_n$, of game $n$.
This is an ordinary finite differential equation for a recurrent process,
$$P(T = n) = \frac{1}{2}\,P(T = n-1).$$
To solve this probability, we notice the game cannot end sooner than the 2nd play and extend the iterative expression recursively,
$$P(T = n) = \left(\frac{1}{2}\right)^{n-2} P(T = 2)$$
$P(T = 2)$ is the probability that one of the first two players, having beaten his adversary, should beat the third player at the second play, which is $\frac{1}{2}$. Thus,
$$P(T = n) = \left(\frac{1}{2}\right)^{n-1}$$
The probability that the game will end at latest by the $n$th play is the sum of these,
$$P(T \leq n) = \sum_{i=2}^{n} \left(\frac{1}{2}\right)^{i-1} = 1 - \left(\frac{1}{2}\right)^{n-1}$$
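A Monte Carlo sketch of the game confirms the closed form (the simulation design, with players encoded 0–2, is mine):

```python
import random
from collections import Counter

def plays_until_finished() -> int:
    """Simulate the three-player game with equal abilities; return the
    number of plays until someone beats the other two consecutively."""
    a, b, sitter = 0, 1, 2  # players 0 and 1 start; player 2 waits
    prev_winner = None
    plays = 0
    while True:
        plays += 1
        winner = random.choice((a, b))        # equal ability: a fair coin
        if winner == prev_winner:             # two consecutive wins end it
            return plays
        loser = b if winner == a else a
        a, b, sitter = winner, sitter, loser  # winner stays; sitter steps in
        prev_winner = winner

trials = 200_000
counts = Counter(plays_until_finished() for _ in range(trials))
for n in range(2, 7):
    print(n, counts[n] / trials, 0.5 ** (n - 1))  # empirical vs (1/2)^(n-1)
```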
Appendix: The Calculus of Generating Functions
In general, we can define the ordinary finite differential polynomial equation. For a particular event, $e$, its probability density function over internal time-steps $x$ is given by the distribution $y_x$. The base case ($x = 0$) of the inductive definition is known for the lowest time-step as $y_0$, while the iterative step ($x \to x+1$) is constructed as a polynomial function on the difference step of one time-unit:
$$y_{x+1} = \phi(y_x), \qquad \phi \text{ a polynomial}$$
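As an illustration of how the calculus of generating functions solves such an equation, we may revisit the three-player game above (the tie-back is mine; the method is Laplace’s). Write $y_n = P(T = n)$, with base case $y_2 = \frac{1}{2}$ and iterative step $y_n = \frac{1}{2} y_{n-1}$, and form the generating function
$$u(t) = \sum_{n \geq 2} y_n t^n.$$
Multiplying the difference equation by $t^n$ and summing over $n \geq 3$ converts it into an algebraic equation,
$$u(t) - y_2 t^2 = \frac{t}{2}\,u(t) \quad\Longrightarrow\quad u(t) = \frac{t^2/2}{1 - t/2},$$
whose expansion recovers $y_n = \left(\frac{1}{2}\right)^{n-1}$, as found above.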