The Theory of Statistical Inference

Let \(X^n\) be a random variable representing the \(n\) qualities that can be measured for the thing under investigation, \(\Omega\), itself the collection of all its possible appearances \(\omega \in \Omega\), such that \(X^n:\Omega \rightarrow \mathbb{R}^n\).  Each sampled measurement of \(X^n\) through an interaction with \(\omega\) is given as \(\hat{X}^n(t_i)\), each constituting a unit of indexable time in the catalogable measurement process.  Thus, the set of sampled measurements, the sample space, is a partition of 'internally orderable' test times within the measurement action, \(\{ \hat{X}^n(t): t \in \pi \}\).
 
 \(\Omega\) is a state-system, i.e. the spatio-temporality of the thing in question, in that it has specific space-states \(\omega\) at different times, \(\Omega(t)=\omega\).  \(X\) is the function that measures \(\omega\).  What if measurement is not simply Real, but Complex: \(X: \Omega \rightarrow \mathbb{C}\)?  Every interaction with \(\Omega\) lets it appear as \(\omega\), which is quantified by \(X\).  From these interactions, we seek to establish truths about \(\Omega\) by quantifying the probability that the Claim (C), itself a quantifiable statement about \(\Omega\), is correct.
 
Ultimately, we seek the nature of how \(\Omega\) appears differently depending on one's interactions with it (i.e. samplings), and thus the actual distribution \((\mathcal{D})\) of the observed measurements made with our measurement apparatus \(X\); that is, we ask about \(\mathcal{D}_X(\Omega)=f_{X(\Omega)}\).  The assumptions will describe the class \(\mathcal{C}\) of the family \(\mathcal{F}\) of distribution functions to which \(f_X\) belongs, i.e. \(f_X \in \mathcal{F}_{\mathcal{C}}\), for the \(\hat{X}\) measurements of the appearances of \(\Omega\), while the sampling will give the parameter \(\theta\), such that \(f_X =f_{\mathcal{C}}(\theta)\).  The hypothesis distribution-parameter \((\theta^*)\) may be established either by prior knowledge \((\theta_0)\) or by the present n-sampling of the state-system \((\theta_1)\).  Thus, the parameter obtained from the present sampling, \(\hat{\theta}=\Theta(\hat{X}_1, \cdots, \hat{X}_n)\), is either used to judge the validity of a prior parameter estimation \((\theta^*=\theta_0)\) or is assessed in its own right (i.e. \(\theta^*=\theta_1=\hat{\theta}\)) as representative of the actual object's state-system distribution.  The difference between the two hypothesis set-ups, a priori vs. a posteriori, is whether the present experiment is seen as having a bias or not.  In either case, \(H_{-}:\theta_0=\theta|\hat{\theta}\) or \(H_{+}:\hat{\theta}=\theta\), one uses the present sampling to establish the validity of a certain parameter value.  If \(\hat{\Delta} \theta =\theta_0-\hat{\theta}\) is the expected bias of the experiment, then \(H_{-}:\hat{\theta}+\hat{\Delta}\theta=\theta|\hat{\theta} \ \& \ H_{+}:\hat{\theta}=\theta|\hat{\theta}\).  Thus, in all experiments, the statistical question is primarily that of the bias of the experiment that samples a parameter, whether it is 0 or not, i.e. \(H_{-}:|\hat{\Delta}\theta|>0\) or \(H_{+}:\hat{\Delta}\theta=0\).
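
As a minimal sketch of this bias question (assuming a normal measurement model and illustrative values for \(\theta_0\) and the sampling, none of which are given above), a one-sample t-test can judge a prior parameter estimation against the present sampling's \(\hat{\theta}\):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical state-system: appearances of Omega measured by X are N(theta, 1).
theta_true = 2.0                              # illustrative; unknown in practice
x_hat = rng.normal(theta_true, 1.0, size=50)  # the present n-sampling, X-hat

theta_hat = x_hat.mean()                      # theta-hat = Theta(X_1, ..., X_n)
theta_0 = 1.8                                 # illustrative prior estimation

# H-: |Delta theta| > 0 (the experiment is biased) vs. H+: Delta theta = 0.
t_stat, p_value = stats.ttest_1samp(x_hat, popmean=theta_0)
print(f"theta_hat = {theta_hat:.3f}, estimated bias = {theta_0 - theta_hat:.3f}")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # a small p favors H-
```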
 
 The truth of the bias of the experiment, i.e. how representative it is, can only be given by our prior assumptions, \(A\), so as to know the validity of our claim about the state-system's distributional parameter, \(P(C|A)=P(\theta=\theta^*|\hat{\theta})=P(\Delta \theta=\hat{\Delta}\theta)\), as the probability that our expectation of bias is correct.  Our prior assumption, \(A: f_X \in \mathcal{F}_{\mathcal{C}}\), is about the distribution of the \(k\)-parameters in the class-family of distributions, where \(\mathcal{F}_{\mathcal{C}}=\{f(k)\}\) such that \(f_X=f(\theta)\); that is, it is about \(\mathcal{D}_K(\mathcal{F}_{\mathcal{C}})\).  Here, \(K\) is a random variable that samples state-systems in the wider class of generally known objects, or equivalently their distributions (i.e. functional representations), measuring the \(k\)-parameter of their distribution, such that \(f_K(\mathcal{F}_{\mathcal{C}})=\mathcal{D}_K(\mathcal{F}_{\mathcal{C}})\).  The distributed objects in \(\mathcal{F}_{\mathcal{C}}\) are themselves relative to the measurement system \(X\), although they may be transformed into other measurement units, in that this distribution class is of all possible state-systems which \(X\) might measure sample-wise; we seek to know specifically about the \(\Omega\) in question so as to obtain its distributional \(k\)-parameter value of \(\theta\).  Essentially, the assumption \(A\) is about a meta-state-system as the set of all objects \(X\) can measure, and thus has more to do with \(X\), the subject's method of measurement, and \(\Theta\), the parametrical aggregation of interest, than with \(\Omega\), the specific object of measurement.

\(\theta \in \Theta\), the set of all the parameters of the family \(\mathcal{F}\) of relevant distributions; \(\Theta\) uniquely determines \(f\), in that \(\exists M: \Theta \rightarrow f \in  \mathcal{F}\), or \(f=\mathcal{F}(\Theta)\).

COVID-19 Health Economics Data Encoding

The communicable SARS-CoV-2 viral spread within different localities should be analyzed as a message within a communication system, mutating based upon the properties of the local communication system that propagates it. The two properties of a communication state-system can be expressed as a complex number \(\lambda\), with a real component for the total (economic) functionality \(\sigma\) of the communication system that propagates it and an imaginary component for the communicability \(\omega\) of the lifeworld, i.e. social non-distancing, within which it is embedded: \(\lambda=\sigma+\omega i\). For any particular state or place of analysis, this analytical format thus encodes the percent of a shut-down (i.e. dysfunctionality) and the social-distancing policy in place (i.e. non-communicability), both of which can be calculated as a combination of the policy and the percent-compliance. While a communication system may thus be represented as a complex-variable system with a real system functionality and an imaginary lifeworld communicativity, any of its sub-systems or states, i.e. at lower levels of analysis, can be represented similarly by a complex number. Our public health response system must thus seek to reduce \(|\lambda|^2\) at all levels of analysis and components of interaction.

The functional communicativity of a particular place (i.e. city or state) within a general place (i.e. country or world), as a communication system, determines the spread of a communicable disease such as SARS-2. The purpose of the data project will be to sample individual functional communicativities as behavioral dysfunctionalities as they relate to viral infection; these measurements can then be situated within the functional communicativities of the socialities, as communication systems, which the individuals inhabit, since such properties within biophysical systems are often inherited from their environmental embeddings. The meta-dimension of the proposed empirical research must be to study the bi-directional (i.e. inter-communicative) causality between social-system communication and the spread (and mutation) of communicable diseases, from which the individual measurements can be situated within already-established research contexts into the spread of COVID-19. Work on this aim is already underway. While one can measure a sociality's actual functional-communicativity from PCA-KDE aggregation of the individually sampled communicative functionals, i.e. how well individuals social-distance and reduce participation in economic functionality, there exists on the policy level the sociality's self-understood communicative functionality conceived as its policy, now in the form of the quarantine measures of economic shut-down and social-distancing. By developing this policy-analytic along with the measurement system, the difference between \(\lambda\) and \(\hat{\lambda}\) can be statistically measured as population non-compliance, explaining anomalous viral surges. This can allow a mapping between the results from the individual level and the health system policies at the state level.

To enable population-wide compliance with public-safety health measures, the cause of the high-risk behaviors associated with breaking quarantine must be identified and treated systemically. This approach thus conceives of infection as having a pathogenesis in the behaviors that induce it, themselves caused by the behavioral-dysfunctional residuals of a malfunctioning underlying social-systemic process. City and state variation in infection rate-curves (i.e. time-based differential distributions) can be explained by the presence or absence (and degree) of different high-risk behaviors, and there may be learning opportunities from successful cities or states. These differences between platial socialities may be classified according to the functional-communicativity of the underlying communication system, a complex variable \(\lambda=\sigma+\omega i\), which may be measured inversely by a real measure of the economic dysfunctionality (i.e. 'shut-down') and an imaginary measure of the communal miscommunication (i.e. 'social-distancing'). The present crisis derives from the divergence between the policy-required \(\lambda\) and the actual sampled \(\hat{\lambda}\).
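
A minimal sketch of this encoding, assuming the inverse measures (percent shut-down and social-distancing level) as the two components and a single scalar percent-compliance; all numbers are illustrative, not drawn from any dataset:

```python
# Hypothetical encoding of a locality's communication state-system.
def encode_lambda(shutdown, distancing, compliance):
    """Effective lambda = sigma + omega*i: policy levels scaled by compliance."""
    sigma = shutdown * compliance      # economic dysfunctionality actually realized
    omega = distancing * compliance    # non-communicability actually realized
    return complex(sigma, omega)

# Policy lambda vs. sampled lambda-hat; their difference measures non-compliance.
lam_policy = encode_lambda(0.6, 0.8, compliance=1.0)   # mandated levels
lam_hat    = encode_lambda(0.6, 0.8, compliance=0.7)   # observed behavior
print(lam_policy, lam_hat, abs(lam_policy - lam_hat))  # (0.6+0.8j) (0.42+0.56j) 0.3
```

The magnitude of the difference then serves as the non-compliance statistic linking individual-level measurements to state-level policy.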

Reconstructing Laplace’s Probability Calculus


This blog serves as a philosophically informed mathematical introduction to the ideas and notation of probability theory from its most important historical theorist. It is part of an ongoing contemporary formal reconstruction of Laplace's Calculus of Probability from his English-translated introductory essay, "A Philosophical Essay on Probabilities" (cite: PEP), which can be read alongside these notes; the notes are divided into the same sections as Laplace's essay. I have included deeper supplements from the untranslated treatise Théorie Analytique des Probabilités (cite: TA), through personal and online translation tools, in section 1.10 and the Appendix (3).

Table of Contents:

  1. The General Principles of the Calculus of Probabilities
  2. Concerning the Analytical Methods of the Calculus of Probabilities
  3. Appendix: Calculus of Generating Functions

1. The General Principles of the Calculus of Probabilities

\(\Omega\) is the state-space of all possible events.
\(\omega \in \Omega\) is an event as element of the state.

1st Principle: The probability of the occurrence of an event \(\omega\) is the number of favorable cases divided by the total number of causal cases, assuming all cases are equally likely.


\(\Omega' =\{\omega_1',\cdots , \omega_n' \}\) is the derivational system of the state \(\Omega\) as the space of cases that will cause different events in the state. \(\Omega_{\omega}'= \{\omega_{i_1}', \cdots , \omega_{i_m}': \omega_{i_j} \rightarrow \omega\}\) is the derivational system of the state favoring the event \(\omega\). The order of a particular state (or derivational state-system) is given by the measure \((| \cdots |)\) evaluated as the number of elements in it.

$$P(\omega)=P(\Omega=\omega)=\frac{|\Omega_{\omega}'|}{|\Omega'|}=\frac{m}{n}$$
If we introduce time as the attribute of case-based favorability, i.e. causality, the event \(\omega\) is to occur at a future time \(t_1\), as represented by the formal statement \(\Omega(t_1)=\omega\). The conditioning cases, equally likely, which will deterministically cause the event at \(T=t_1\) are the possible events at the previous conditioning states of the system \(T=t<t_1\), given as \(\Omega(t_0<t<t_1) \in \Omega'(t_1 | t_0)=\{\omega_1', \cdots , \omega_n' \}\), a superposition of possible states-as-cases since they are unknown at the present time \(t_0\), where \(\Omega'\) is a derivational state-system, or set of possible causal states, here evaluated at \(t_1\) given \(t_0\), i.e. \(t_1 | t_0\). This set of possible cases can be partitioned into those that are favorable to \(\Omega(t_1)=\omega\) and those that are not. The set of cases favorable to \(\omega\) is \(\Omega_{\omega}'(t_1 | t_0)=\{\omega_{i_1}', \cdots , \omega_{i_m}': \omega_{i_j} \rightarrow \omega\}\).

$$P(\omega)=P\bigg(\Omega(t_1)=\omega \bigg| \Omega(t_0)\bigg)=\frac{|\Omega_{\omega}'(t_1 | t_0)|}{|\Omega'(t_1 | t_0)|}=\frac{m}{n}$$
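
As a concrete instance of the first principle, a minimal sketch using a fair die as the hypothetical state-system (an example not in Laplace's text):

```python
from fractions import Fraction

# Derivational system Omega': the six equally likely cases of a fair die.
omega_prime = [1, 2, 3, 4, 5, 6]

# Event omega: "an even number occurs"; favorable cases Omega'_omega.
favorable = [w for w in omega_prime if w % 2 == 0]

# 1st principle: P(omega) = |Omega'_omega| / |Omega'| = m/n
print(Fraction(len(favorable), len(omega_prime)))  # 1/2
```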

2nd Principle: Assuming the conditioning cases are not equal in probability, the probability of the occurrence of an event \(\omega\) is the sum of the probabilities of the favorable cases:
$$P(\omega)=\sum_j P(\omega_{i_j}')$$

3rd Principle: The probability of the combined event \((\omega)\) of independent events \(\{\omega_1, \cdots,\omega_n\}\) is the product of the probabilities of the component events.

$$P(\omega_1 \cap \cdots \cap \omega_n) = \prod_i P(\omega_i)$$

4th Principle: The probability of a compound event \(\omega\) of two events dependent upon each other, \(\omega_1 \ \& \ \omega_2\), where \(\omega_2\) is after \(\omega_1\), is the probability of the first times the probability of the second conditioned on the first having occurred:$$P(\omega_1 \cap \omega_2)= P(\omega_1) * P(\omega_2 | \omega_1)$$

5th Principle, p.15: The probability of an expected event \(\omega_1\) conditioned on an occurred event \(\omega_0\) is the probability of the composite event \(\omega=\omega_0 \cap \omega_1\) divided by the a priori probability of the occurred event.
$$P(\omega_1|\omega_0)=\frac{P(\omega_0 \cap \omega_1)}{P(\omega_0)}$$

Always, a priori is from a prior state, as can be given by a previous event \(\omega_{-1}\). Thus, if we assume the present to be \(t_0\), the prior time to have been \(t_{-1}\), and the future time to be \(t_1\), then the a priori probability of the presently occurred event is made from \(\Omega(t_{-1})=\omega_{-1}\) as $$P(\omega_0)=P(\omega_0 | \omega_{-1})=P\bigg( \Omega(t_0)=\omega_0 \bigg | \Omega(t_{-1})=\omega_{-1} \bigg)$$
The probability of the combined event \(\omega_0 \cap \omega_1\) occurring can also be measured partially from the a priori perspective as
$$P(\omega_0 \cap \omega_1)=P(\omega_0 \cap \omega_1 | \omega_{-1})=P\bigg(\Omega(t_0)=\omega_0 \bigcap \Omega(t_1)=\omega_1 \bigg| \Omega(t_{-1})=\omega_{-1} \bigg)$$

Thus,
$$P(\omega_1|\omega_0)=P\bigg( (\omega_1|\omega_0) \bigg | \omega_{-1} \bigg)=\frac{P(\omega_0 \cap \omega_1 | \omega_{-1})}{P(\omega_0 | \omega_{-1})}$$

6th Principle: 1. For a constant event, the likelihood of a cause to an event is the same as the probability that the event will occur. 2. The probability of the existence of any one of those causes is the probability of the event (resulting from this cause) divided by the sum of the probabilities of similar events from all causes. 3. For causes, considered a priori, which are equally probable, the probability of the existence of a cause is the probability of the caused event divided by the sum of the product of probability of the events and the possibility (a priori probability) of their cause.

For event \(\omega_i\), let \(\omega_i’\) be its cause. While \(P\) is the probability of an actual existence, \(\mu\) is the measure of the a priori likelihood of a cause since its existence is unknown. These two measurements may be used interchangeably where the existential nature of the measurement is known or substitutions as approximations are permissible. In Principle 5 they are conflated since the probability of an occurred event always implies an a priori likelihood.

  1. for \(\omega\) constant (i.e. only 1 cause, \(\omega'\)), \(P(\omega')=P(\omega)\)
  2. for the \(\omega_i'\) equally likely, \(P(\omega_i')=P(\omega_i'|\omega)=\frac{P(\omega | \omega_i')}{\sum_j P(\omega|\omega_j')}\)
  3. \(P(\omega_i')=P(\omega_i'|\omega)=\frac{P(\omega | \omega_i') \mu(\omega_i')}{\sum_j P(\omega | \omega_j')\mu(\omega_j')}\)

7th principle, p.17: The probability of a future event, \(\omega_1\), is the sum of the products of the probability of each cause, drawn from the event observed, by the probability that, this cause existing, the future event will occur.

The present is \(t_0\) while the future time is \(t_1\). Thus, the future event expected is \(\Omega(t_1)=\omega_1\). Given that \(\Omega(t_0)=\omega_0\) has been observed, we ask about the probability of a future event \(\omega_1\) arising from the set of causes \(\Omega'(t_1)=\{\omega_1^{(i)}: \omega_1^{(i)} \rightarrow \Omega(t_1)\}\) (change of notation for causes).
$$P(\omega_1|\omega_0)=\sum_i P(\omega_1^{(i)} | \omega_0)*P(\omega_1 | \omega_1^{(i)})$$
How are we to consider causes? They can be historical events with a causal-deterministic relationship to the future, or they can be considered event-conditions, as a spatiality (possibly true over a temporal duration) rather than a temporality (true at one time). Generally, we can consider causes to be hypotheses \(H=\{H_1, \cdots, H_n\}\), with \(P(H_i)\) the prior probability (single term) and \(P(\omega | H_i)\) the posterior (conditional) probability. The observed event \((\omega_0)\) is \(\omega_{obs}\) and the future event \((\omega_1)\) is the expected event \(\omega_{exp}\). Thus, we can restate principles 6 & 7 as:

  1. \( P(H_i|\omega_{obs})=\frac{P(\omega_{obs} | H_i)P(H_i)}{\sum_j P(\omega_{obs} | H_j)P(H_j)}\)
  2. \( P(\omega_{exp}|\omega_{obs})=\sum_i P(H_i | \omega_{obs})P(\omega_{exp} | H_i) = \frac{\sum_i P(\omega_{obs} | H_i)P(H_i)P(\omega_{exp} | H_i)}{\sum_j P(\omega_{obs} | H_j)P(H_j)}\)

Clearly, Principle 6 is the same as Bayes' Theorem (Wasserman, Thm. 2.16), which articulates the hypotheses (H) as a partition of \(\Omega\), in that \(\Omega=\cup_i H_i\) (with \(H_i \cap H_j = \emptyset\) for \(i \neq j\)), so that each hypothesis is a limitation of the domain of possible events. The observed event is also considered a set of events rather than a single 'point.' Therefore, Principle 6 says that "the probability that the possibility of the event is comprised within given limits is the sum of the fractions comprised within these limits" (Laplace, PEP, p.18).
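
A numeric sketch of these two restated principles; the priors, likelihoods, and predictive probabilities below are illustrative assumptions, not values from Laplace:

```python
from fractions import Fraction

priors      = [Fraction(1, 3)] * 3                              # P(H_i)
likelihoods = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)]  # P(obs | H_i)
pred        = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]  # P(exp | H_i)

# Principle 6 (Bayes): P(H_i|obs) = P(obs|H_i)P(H_i) / sum_j P(obs|H_j)P(H_j)
evidence   = sum(l * p for l, p in zip(likelihoods, priors))
posteriors = [l * p / evidence for l, p in zip(likelihoods, priors)]

# Principle 7: P(exp|obs) = sum_i P(H_i|obs) P(exp|H_i)
p_future = sum(post * pe for post, pe in zip(posteriors, pred))
print(posteriors, p_future)  # [4/7, 2/7, 1/7] 17/42
```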

8th principle (PEP, p.20): The Advantage of Mathematical Hope, (A), depending on several events, is the sum of the products of the probability of each event by the benefit attached to its occurrence.

Let \(\omega=\{\omega_1, \cdots ,\omega_n: \omega_i \in \Omega\}\) be the set of events under consideration. Let (B) be the benefit function giving a value to each event. The advantage hoped for from these events is:

$$A(\omega)=\sum_i B(\omega_i)*P(\omega_i)$$

A fair game is one whose cost of playing is equal to the advantage gained through it.

9th principle, p.21: The Advantage (A), depending on a series of events \((\omega)\), is the sum of the products of the probability of each favorable event by the benefit to its occurrence minus the sum of the products of the probability of each unfavorable event by the cost to its occurrence.

Let \(\omega=\{\omega_1, \cdots ,\omega_n: \omega_i \in \Omega\}\) be the series of events under consideration, partitioned into \(\omega=(\omega^+,\omega^-)\) for favorable and unfavorable events. Let (B) be the benefit function for \(\omega_i \in \omega^+\) and (L) the loss function for \(\omega_j \in \omega^-\), each giving the value of each event. The advantage of playing the game is:

$$A(\omega)=\sum_{i: \omega_i \in \omega^+} B(\omega_i)P(\omega_i) - \sum_{j: \omega_j \in \omega^-} L(\omega_j)P(\omega_j)$$

Mathematical Hope is the positivity of A. Thus, if A is positive, one has hope for the game, while if A is negative one has fear.

In generality, (X) is the random variable function, \(X:\omega_i \rightarrow \mathbb{R}\), that gives a value to each event, either a benefit \((>0)\) or cost \((<0)\). The absolute expectation \((\mathbb{E})\) of value for the game from these events is:

$$\mathbb{E}(\omega)=\sum_i X(\omega_i)*P(\omega_i)$$
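
For instance, a minimal sketch of this absolute expectation with an illustrative game (the values are assumed, not from the text):

```python
from fractions import Fraction

# Events with probabilities and signed values X(omega_i): benefits > 0, costs < 0.
probabilities = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
values        = [4, -3, -6]

# E = sum_i X(omega_i) * P(omega_i); positive E is hope, negative is fear.
print(sum(x * p for x, p in zip(values, probabilities)))  # 0 -> a fair game
```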

10th principle, p.23: The relative value of an infinitely small sum is equal to its absolute value divided by the total benefit of the person interested.

This section can be explicated by examining Laplace’s corresponding section in Théorie Analytique (S.41-42, p.432-445) as a development of Bernoulli’s work on the subject.

(432) For a physical fortune (x), an increase by (dx) produces a moral good reciprocal to the fortune, \(\frac{k\,dx}{x}\) for a constant (k). (k) is the "units" of moral goodness (i.e. utility) in that \(\frac{dx}{x}=\frac{1}{k}\rightarrow\) 1 moral good. So (k) is the quantity of physical fortune whereby a marginal increase by unity of physical fortune is equivalent to unity of moral fortune. For a moral fortune (y), $$y=k\ln x + \ln h$$
A moral good is the proportion of an increase in part of a fortune by the whole fortune. Moral fortune is the sum of all moral goods. If we consider this summation continuously for all infinitesimally small increases in physical fortune, moral fortune is the integral of the proportional reciprocal of the physical fortune by the changes in that physical fortune. Deriving this from principle 10,
$$dy=\frac{kdx}{x}$$

$$\int dy = y = \int \frac{k\,dx}{x} = k \int \frac{1}{x}\, dx = k\ln(x) + C$$ \((C=\ln(h))\) is the constant of minimum moral good when the physical fortune is unity. We can put this in terms of a physical fortune, \(x_0\), the minimum physical fortune for surviving one's existence – the cost of reproducing the conditions of one's own existence. With \(h=\frac{1}{{x_0}^k}\), $$y=\int_{x_0}^x dy =\int_{x_0}^x \frac{k\,dx}{x}=k\ln(x) - k\ln(x_0)=k\ln(x) - k\ln\bigg(\frac{1}{\sqrt[k]{h}}\bigg)=k\ln(x) + \ln(h) $$
h is a constant given by an empirical observation of (y) as never positive or negative but always at least what is necessary, as even someone without any physical fortune will still have a moral fortune in their existence – it is thus the unpriced “physical fortune” of laboring existence.

(433) Suppose an individual with a physical fortune of (a) expects to receive a variety of changes in fortune, \(\alpha, \zeta, \gamma, \cdots\), as increments or diminutions, with probabilities \(p, q, r, \cdots\) summing to unity. The corresponding moral fortunes would be,
$$ k ln(a+\alpha) + ln(h), k ln(a+\zeta) + ln(h), k ln(a+\gamma) + ln(h), \cdots $$
Thus, the expected moral fortune (Y) is
$$Y=kp ln(a+\alpha)+ kq ln(a+\zeta) + kr ln(a+\gamma) + \cdots + ln(h)$$
Let (X) be the physical fortune corresponding to this moral fortune, as
$$Y=k ln(X) + ln(h)$$
with,
$$X=(a+\alpha)^p(a+\zeta)^q(a+\gamma)^r \cdots$$
Taking away the primitive fortune (a) from this value of (X), the difference will be the increase in the physical fortune that would procure the individual the same moral advantage resulting from his expectation. This difference is therefore the expression of the moral advantage, whereas the mathematical advantage (the ordinary mathematical expectation) is
$$p\alpha + q\zeta + r\gamma + \cdots$$
This results in several important consequences. One of them is that the mathematically most equal game is always disadvantageous. Indeed, if we denote by (a) the physical fortune of the player before starting the game and by \(p\) his probability of winning, (434) \(\cdots\)
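
A quick numeric check of this consequence, under illustrative figures not in the text (the constant (k) drops out when comparing (X) with (a)):

```python
import math

a = 100.0                  # physical fortune before the game
changes = [10.0, -10.0]    # alpha, zeta: equal stake won or lost
probs   = [0.5, 0.5]       # p, q summing to unity -> mathematically fair

# Mathematical advantage: p*alpha + q*zeta + ...
math_adv = sum(p * c for p, c in zip(probs, changes))

# Physical fortune X carrying the same moral fortune: X = prod (a + c_i)^p_i
X = math.prod((a + c) ** p for p, c in zip(probs, changes))

print(math_adv)  # 0.0   -> fair in mathematical expectation
print(X - a)     # ~-0.5 -> yet morally disadvantageous (X < a)
```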

2. Concerning the Analytical Methods of the Calculus of Probabilities

$$\prod_{i=1}^n (1+a_i) -1 = \sum_{s=1}^{n} \ \sum_{i_1 < \cdots < i_s} a_{i_1} \cdots a_{i_s} $$

The product over the letters \(a_i\), minus one, expands into the sum over all non-empty combinations of them; setting each \(a_i=1\) counts these combinations, \(\sum_{s=1}^n {{n}\choose{s}} = 2^n - 1\).

How many ways can s letters drawn from n be arranged?

$$s! {{n}\choose{s}}$$

Consider the lottery composed of (n) numbers, of which (r) are drawn at each draw:
What is the probability of drawing s given numbers \(Y=(y_1, \cdots y_s)\) in one draw \(X=(x_1, \cdots, x_r)\)?
$$P(Y \subseteq X)=\frac{{{n-s}\choose{r-s}}}{{{n}\choose{r}}}=\frac{{{r}\choose{s}}}{{{n}\choose{s}}}$$
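
A minimal brute-force check of this identity, with illustrative lottery sizes:

```python
from itertools import combinations
from math import comb
from fractions import Fraction

n, r, s = 10, 4, 2            # illustrative lottery: draw r of n, s given numbers
Y = set(range(s))

# Closed form: C(n-s, r-s)/C(n, r) == C(r, s)/C(n, s)
p_formula = Fraction(comb(n - s, r - s), comb(n, r))

# Enumerate all possible draws X and count those containing Y.
draws = list(combinations(range(n), r))
p_count = Fraction(sum(Y <= set(X) for X in draws), len(draws))

print(p_formula, p_count, p_formula == p_count)  # 2/15 2/15 True
```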

Consider the urn \(\Omega\) with (a) white balls and (b) black balls, drawn with replacement. Let \(A_n = \{\omega_1, \cdots, \omega_n\}\) be n draws. Let \(\mu_w(A)\) be the number of white balls and \(\mu_b(A)\) the number of black balls drawn. What is the probability of (m) white balls and (n-m) black balls being drawn?

$$P\bigg(\mu_w(A_n) = m \& \mu_b(A_n)=n-m\bigg)=P^n_m=?$$
\((a+b)^n\) is the number of all the cases possible in (n) draws. In the expansion of this binomial, \({{n}\choose{m}}b^{n-m}a^m\) expresses the number of cases in which (m) white balls and (n-m) black balls may be drawn. Thus,

$$P^n_m=\frac{{{n}\choose{m}}b^{n-m}a^m}{(a+b)^n} $$

Letting \(p=P(\mu_w(A_1)=1)=\frac{a}{a+b}\) be the probability of drawing a white ball in a single draw and \(q=P(\mu_b(A_1)=1)=\frac{b}{a+b}\) the probability of drawing a black ball in a single draw,

$$P^n_m={{n}\choose{m}}q^{n-m}p^m$$

$$\Delta P^n_{m}=\frac{P^n_{m+1}}{P^n_{m}}=\frac{(n-m)p}{(m+1)q}$$

This is an ordinary finite differential equation:

$${\Delta}^r P^n_{m}= \frac{P^n_{m+r}}{P^n_{m}}=\frac{p^{r}}{q^{r}}\prod_{i=0}^{r-1}\frac{n-m-i}{m+i+1}$$
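
A minimal sketch checking the one-step ratio against the closed form \(P^n_m\), with illustrative urn sizes:

```python
from math import comb
from fractions import Fraction

a, b, n = 3, 2, 5                       # illustrative urn: a white, b black
p, q = Fraction(a, a + b), Fraction(b, a + b)

def P(n, m):
    """P^n_m: probability of m white and n-m black balls in n draws."""
    return comb(n, m) * q ** (n - m) * p ** m

m = 2
lhs = P(n, m + 1) / P(n, m)             # the finite-difference ratio
rhs = Fraction(n - m, m + 1) * p / q    # (n-m)p / ((m+1)q)
print(lhs, rhs, lhs == rhs)             # 3/2 3/2 True
```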

Three players of supposed equal ability play together on the following conditions: that one of the first two players who beats his adversary plays the third, and if he beats him the game is finished. If he is beaten, the victor plays against the second until one of the players has defeated consecutively the two others, which ends the game. The probability is demanded that the game will be finished in a certain number (n) of plays. Let us find the probability that it will end precisely at the (n)th play. For that, the player who wins the game ought to enter at the play (n-1), win it, and then win the following play. But if, in place of winning the play (n-1), he should be beaten by his adversary who had just beaten the other player, the game would end at this play. Thus the probability that one of the players will enter the game at the play (n-1) and will win it is equal to the probability that the game will end precisely with this play; and as this player ought to win the following play in order that the game may be finished at the (n)th play, the probability of this last case will be only one half of the preceding one.

(p.29-30)

Let \(E\) be the random variable of the number of plays it takes for the game to finish.
$$\mathbb{P}(E=n)=?$$
Let \(G_k=(p_1,p_2)\) be the random variable of the two players \((p_1,p_2)\) playing in game (k). Let \(W_k=p_0\) be the random variable of the winning player, \(p_0\), of game (k).
$$\mathbb{P}(E=n-1)=\mathbb{P}(G_{n-1}=W_{n-1}=p)$$
$$\mathbb{P}(E=n)=\frac{1}{2}\mathbb{P}(E=n-1)$$
This is an ordinary finite differential equation for a recurrent process. To solve this probability, we notice the game cannot end sooner than the 2nd play and extend the iterative expression recursively,
$$\mathbb{P}(E=n)=\bigg(\frac{1}{2}\bigg)^{n-2}\mathbb{P}(E=2)$$
\(\mathbb{P}(E=2)\) is the probability that one of the first two players who has beaten his adversary should beat at the second play the third player, which is \(\frac{1}{2}\). Thus,
$$\mathbb{P}(E=n)=\bigg(\frac{1}{2}\bigg)^{n-1}$$
The probability the game will end at latest the (n) th play is the sum of these,
$$\mathbb{P}(E\leq n)=\sum_{k=2}^n \bigg(\frac{1}{2}\bigg)^{k-1} = 1 – \bigg(\frac{1}{2}\bigg)^{n-1}$$

(p.31)

3. Appendix: The Calculus of Generating Functions

In general, we can define the ordinary finite differential polynomial equation. For a particular Event, (E), its probability density function over internal-time steps (n) is given by the distribution \(f(n)=\mathbb{P}(E=n)\). The base case \((I_0)\) of the inductive definition is known for the lowest time-step, \(n_0\), as \(f(n_0)=c\), while the iterative step \((I^+)\) is constructed as a polynomial function \(\mathcal{P}(x)=\sum_i a_i x^i\) on the difference step of one time-unit:


$$I^+: f(n)=\mathcal{P}(f(n-1))$$


$$\rightarrow f(n)=\underbrace{\mathcal{P}(\cdots \mathcal{P}(}_{n-n_0}f(n_0))\cdots)$$

$$\mathcal{D}(f(n))=f'(n)=\mathcal{D}(\mathcal{P})(f(n-1))\,f'(n-1)=\prod_{k=n_0}^{n-1}\mathcal{D}(\mathcal{P})(f(k)) $$
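
A minimal sketch of iterating such a recurrence, using the three-player game above as the illustrative instance (\(\mathcal{P}(x)=x/2\), base case \(f(2)=1/2\)):

```python
def iterate(P, base_value, n0, n):
    """Solve f(n) = P(f(n-1)) with f(n0) = base_value by composing P."""
    f = base_value
    for _ in range(n - n0):
        f = P(f)
    return f

P = lambda x: 0.5 * x                # illustrative polynomial step
for n in range(2, 7):
    print(n, iterate(P, 0.5, 2, n))  # matches (1/2)^(n-1)
```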

The Functional-Communicativity of COVID-19


An Astro-Socio-Biological Analysis

A multi-scale entropic analysis of COVID-19 is developed on the micro-biological, meso-social, & macro-astrological levels to model the accumulation of errors during processes of self-replication within immune response, communicative functionality within social mitigation strategies, and mutation/genesis within radiation conditioning from solar-cosmic cycles.

  1. Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity
  2. Micro-Scale: Computational Biology of RNA Sequence
  3. Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction
  4. Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles
  5. References

Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity

The genesis of SARS-CoV-2, with its internal code of a precise self-check mechanism for reducing errors in RNA replication and its external attributes of ACE2-binding proteins, is an entropy-minimizing solution to highly functionally communicative, interconnected human societies embedded within high-entropy geophysical conditions: higher cosmic-radiation atmospheric penetration, with radioactive C-14 residues, due to the present solar-cycle modulation. This background condition explains the mutation differences between SARS-1 & SARS-2, where the latter has had a more persistent environment of C-14 in which to evolve steadily into stable forms. The counter-measures against the spread of the virus, whether therapeutics, vaccines, or social mitigation strategies, are thus disruptions (entropy-inducing) to these evolved entropy-reducing mechanisms within the intra-host replication and inter-host communicability processes.

The point of departure for understanding the spread of the virus in a society or subdivision is its communicative functionality, which may be expressed as a complex variable of the real functionality of the social system and the imaginary communicativity of its lifeworld, the two attributes diminished by the shut-down and social-distancing measures. Conditions of high communicativity, such as New York City, will induce mutations with greater ACE2-binding proteins, i.e. communicability, as the virus adapts to its environment, while conditions of high functionality will induce error-minimization in replication. These two micro- & meso-scale processes of replication and communicability (i.e. intra- & inter-host propagation) can be viewed together from the thermodynamic-informatic perspective of the viral RNA code as a message – refinement and transmission – itself initialized ('transcribed') by the macro conditions of the Earth's spatio-temporality (i.e. gravitational fluctuation). This message is induced, altered, amplified spatially, & temporalized by the entropic functional-communicative qualities of its environment, which it essentially describes inversely.

Micro-Scale: Computational Biology of RNA Sequence

As with other viruses of its CoV family, the RNA of COVID-19 encodes a self-check on the duplication of its code, thereby ensuring it is copied with little error. With little replication-error, the virus can be replicated for many more rounds (an exponential factor) without the degeneration that ultimately stops the replication process. Compare an example of \(t=3\) rounds for a normal virus with \(t=7\) for a Coronavirus under simple exponential replication, with viral count \(C\) by replication rounds \(t\) as \(C(t)=e^{t}\): \(C(3)=e^3=20.1\) vs. \(C(7)=e^7=1096.6\).

Let us consider an example where a single RNA can create N-1 copies of itself before its code is degenerated beyond even replicative encoding, i.e. the binding segment code directing RNA replicase to replicate. The original RNA code is given by \(\mathcal{N}_0\), with each subsequent code given by \(\mathcal{N}_t\), where t is the number of times of replication. Thus, \(t\) counts the internal time of the “life” of the virus, as its number of times of self-replication. The relevant length of a sequence can be given as the number of base-pairs that will be replicated in the next round of replication. This will be expressed as the zero-order distance metric, \(\mu^0(\mathcal{N}_t)=|\mathcal{N}_t|\).

The errors in the replicative process at time \(t\) will be given by \(DISCR\_ERR(\mathcal{N}_t)\), for "discrete error," and will be a function of \(t\), given thus as \(\epsilon(t)\). Clearly, \(|\mathcal{N}_t| = |\mathcal{N}_{t+1}| + \epsilon(t)\). In all likelihood, \(\epsilon(t)\) is a decreasing function, since with each round of replication the errors will decrease the number of copiable base-pairs, and yet with an exceptionally random alteration of a stable insertion the error could technically be negative. There are two types of these zero-order errors: \(\epsilon^-\), the number of pre-programmed deletions occurring due to the need for a "zero-length" sequence segment to which the RNA polymerase binds and is thereby directed to replicate "what is to its right in the given reading orientation," and \(\epsilon^+\), the non-determined erroneous alterations, whether deletions, changes, or insertions. The total number of errors at any single time will be their sum, \(\epsilon(t)=\epsilon^-(t)+\epsilon^+(t)\). A more useful error-metric may be the proportional error, \(PROP\_ERR(\mathcal{N}_t)\), since it is likely to be approximately constant across time; it will be given by the time-function \(\epsilon'(t)\) and can similarly be broken into determined (-) and non-determined (+) errors as \(\epsilon'(t)={\epsilon'}^-(t)+{\epsilon'}^+(t)\). Expressed thus in proportion to the (zero-order) length of the RNA sequence, $$\epsilon'(t)=1-\frac{|\mathcal{N}_{t+1}|}{|\mathcal{N}_t|}=\frac{\epsilon(t)}{|\mathcal{N}_t|}$$

The "length" (of internal time) of an RNA code, \(\mathcal{N}_t\), in terms of the number of times it may itself be copied before it is degenerated beyond replication, is given as the first-order "distance" metric \(\mu^1(\mathcal{N}_t)=N(\mathcal{N}_t)\). For our generalized example, \(\mu^1(\mathcal{N}_0)=N(\mathcal{N}_0)=N\). This may be expressed as the sum of all errors: $$N=\sum_{t=0}^{\infty}\epsilon(t)=\sum_{t=0}^{t_{max}}\epsilon(t)=\sum_{t=0}^{N}\epsilon(t)$$
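
A minimal simulation sketch of this error accumulation; the lengths and error rates below are assumed for illustration, not measured values:

```python
import random

rng = random.Random(0)

length = 30_000           # assumed initial base-pair length |N_0|
min_replicable = 29_000   # assumed threshold below which replication fails
eps_minus = 50            # assumed deterministic per-round deletion epsilon-

t = 0
while length >= min_replicable:
    eps_plus = rng.randint(0, 20)    # non-determined errors epsilon+
    length -= eps_minus + eps_plus   # |N_{t+1}| = |N_t| - epsilon(t)
    t += 1

print(t)  # mu^1(N_0) = N: rounds of internal time before degeneration
```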

We are interested in the “length” (of internal space) of the RNA code, second-order distance metric, \(\mu^2(\mathcal{N}_0)\), as the number of copies it can make of itself, including the original copy in the counting and the children of all children viruses. This is the micro-factor of self-limitation of the virus, to be compared ultimately to the meso-factor of aerosolized half-life and the macro-factor of survival on surfaces.

These errors in replication, compounded by radiation exposure in the atmosphere, will add up to mutations of the virus, which by natural selection in the corporeal, social-communicative, and environmental (i.e. surfaces and aerosolized forms) levels has produced stable new forms of the virus.

Comparing SARS-1 & SARS-2, the former had a higher mortality rate and the latter has a higher transmission rate. There is certainly an inverse relationship between mortality and transmission on the meso-level as fatality prevents transmission, but there may also be inherent differences at the micro-level in methods of replication leading to these different outcomes. Mortality is due to intra-host replication exponentiation – whereby there are so many copies made that the containing cells burst – while communicability is due to the inter-host stability of the RNA code in the air and organic surfaces where it is subject to organic reactions and cosmic radiation.

Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction

We can apply the theory of communicativity to studying the natural pathology (disease) and the social pathology (violence) of human society through René Girard's Theory of Mimetics [VIOL-SAC]. Viewing a virus and a dysfunctional social system under a single conceptual unity (Mimetics) of a communicative pathology, the former 'spreads' by communication while the latter is the system of communication. Yet different types of communication systems can lead to higher outbreaks of a communicable disease. Thus, the system of communication is the condition for the health outcomes of communicable disease. Beyond merely 'viruses,' a dysfunctional communication system unable to coordinate actions to distribute resources effectively within a population can cause other pathologies such as violence and poverty. From this integrated perspective, these 'social problems' may themselves be viewed as communicable diseases in the sense of being caused, rather than 'spread,' by faulty systems of communication. Since violence and poverty are health concerns in themselves, such a re-categorization is certainly permissible. The difference between these communicable diseases of micro and macro levels is that a virus is a replication script read and enacted by human polymerase in a cell's biology, while a dysfunctional social system is a replication script read and enacted by human officials in a society. We can also thereby view health in the more generalized political-economy lens as the quantity of life a person has, beyond merely the isolated corporeal body, including also the action-potentialities of the person, such as security from harm and the capacity to use resources (i.e. via money) for one's own survival. It is clear that 'money' should be the metric of this 'bio-quantification' in the sense that someone with more money can create healthier conditions for life and even seek better treatment; similarly, a sick person (i.e. one deprived of life) should be given more social resources (i.e. money) to reduce the harm. Yet the economic system fails to accurately price and distribute life-resources due to its nodal premise prescribed by capitalism, whereby individuals, and by extension their property resources, are not social (as in distributively shared), but rather isolated & alienated for individual private consumption.

This critique of capitalism was first made by Karl Marx in advocating for socialism, as an ontological critique of the lack of recognition of the social being of human existence in the emerging economic sciences of liberalism. In the 17th century, Locke conceived of the public good as based upon an individual right to freedom, thereby endowing the alienated (i.e. private) nature with the economic right to life. This moral reasoning was based on the theological premise that the capacity for reason was not a public-communicative process, but rather a private faculty based only upon an individual's relationship with God. Today we may understand Marx's critique of Lockean liberalism from the deep-ecology perspective that sociality is an ontological premise of biological analysis, due both to the relationship of an organism grouping to its environment and to the in-group self-coordinating mechanism with its own type. Both of these aspects of a biological group, in-group relationships (\(H^+(G):G \rightarrow G\)) and out-group relationships (\(H^-(G)=\{H_-^-(G): G^c \rightarrow G, \ H_+^-(G): G \rightarrow G^c\}\)), may be viewed as communicative properties of the group, as in how the group communicates with itself and with not-itself. In the human-capital model of economic liberalism, the group is reduced to the individual economic agent that must act alone, i.e. an interconnected system of capabilities, creating thereby an enormous complexity in any biological modeling from micro-economic premises to macro-economic outcomes. If instead we permit different levels of group analysis, where it is assumed a group distributes resources within itself, with the particular rules of group-distribution (i.e. its social system) requiring an analysis of the group at a deeper level that decomposes it into smaller individual parts, such a multi-level model has a manageable complexity. The purpose is therefore to study Communicativity as a property of Group Action.

A group is a system of action coordination functionally interconnecting sub-groups. Each group must "act as a whole" in that the inverse branching process of coordination adds up all actions towards the fulfillment of a single highest good, the supreme value-orientation. Therefore, a group is represented by a tree, whose nodes are the coordination actions (intermediate groupings), edges the value produced, and leaves the elemental sub-groups "at the level of analysis." The total society can be represented as a class-system hierarchy of group orderings, with primary groups of individuals. The distribution of resources within a group follows the branching orientation (\(\sigma^-\)) from root to leaves as resources are divided up, while coordination follows the inverse orientation (\(\sigma^+\)) from leaves to root as elemental resources are coordinated in production to produce an aggregate good, as sketched below.
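
A minimal sketch of this tree representation, with hypothetical group names and shares (the \(\sigma^-\) and \(\sigma^+\) orientations become a distributing and an aggregating traversal):

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    children: list = field(default_factory=list)  # empty -> elemental sub-group
    share: float = 1.0                            # fraction of parent's resources

    def distribute(self, resources):              # sigma-: root -> leaves
        if not self.children:
            return {self.name: resources}
        out = {}
        for child in self.children:
            out.update(child.distribute(resources * child.share))
        return out

    def coordinate(self, produced):               # sigma+: leaves -> root
        if not self.children:
            return produced[self.name]
        return sum(child.coordinate(produced) for child in self.children)

society = Group("society", [Group("g1", share=0.6), Group("g2", share=0.4)])
print(society.distribute(100.0))                     # {'g1': 60.0, 'g2': 40.0}
print(society.coordinate({"g1": 80.0, "g2": 30.0}))  # 110.0
```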

In the parasite-stress theory of sociality [fincher_thornhill_2012], in-group assortative sociality arose due to the stress of parasites, in order to prevent contagion. There is thus a causal equivalence between the viral scripts of replication and the social structures selected for by the virus as the optimal strategy of survival. Violence too has the same selection-capacity, since existentially conflicting groups are forced to isolate to avoid the war of revenge cycles. This process is the same as the spread of communicable diseases between groups – even after supposed containment of a virus, movement of people between groups can cause additional cycles of resurgence.

Racism is an example of non-effective extrapolation of in-grouping based on non-essential categories. As a highly contagious and deadly disease, COVID-19 on the macro-social level selects for non-racist societies via natural selection, since racist societies spend too many resources organizing in-group social structure along non-essential characteristics, such as race, and thus have few reserves left to reorganize along the essential criteria selected for by the disease (i.e. segregating those at-risk). Additionally, racism prevents resource sharing between the dominant group and the racially marginalized or oppressed group, and thus limits the transfer of scientific knowledge in addition to other social-cultural resources, since what the marginalized group knows to be true is ignored.

With a complex systems approach to studying the communicability of the virus between groups (i.e. different levels of analysis) we can analyze the transmission between both persons and segregated groups (i.e. cities or states) to evaluate both social distancing and shut-down policies. A single mitigation strategy can be represented as the complex number \(\lambda = \sigma + \omega i\), where \(\sigma\) is the dysfunctionality of the social system (percent shut-down) and \(\omega\) is the periodicity of the shut-down. We can include \(s_d\) for social distance as a proportion of the natural radii given by the social density. The critical issue now is mistimed reopening policies, whereby physical communication (i.e. travel) between peaking and recovering groups may cause resurgences of the virus, which can be complicated by reactivation post immunity and the threat of mutations producing strands resistant to future vaccines. This model thus considers the long-term perspective of social equilibrium solutions as mixed strategies between socialism and capitalism (i.e. social distancing and systemic shut-downs) to coronaviruses as a semi-permanent condition to the ecology of our time.

Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles

The genesis of COR-VIR is by mutations (and likely reassortment) induced by bursts of solar-flare radiation and a conditioning by cosmic radiation, each with different effects on the viral composition. Comparison with SARS-1 (whose outbreak came immediately after a solar maximum) suggests that solar radiation (i.e. UVC) from flares & CMEs, more frequent and more intense during solar maxima yet also present during minima, is responsible for the intensity (mortality rate) of the virus, while cosmic radiation, enabled by the lower sunspot count that decreases the ozone normally shielding the Earth's surface from radiation, gives the virus a longer duration within and on organic matter (SARS-2), likely through mutation by radioactive C-14 created by cosmic-radiation interaction with atmospheric nitrogen. The increased organic-surface radioactivity is compounded by the ozone-reduction due to \(N_2\) emissions concurrent with "Global Warming." The recent appearance of all coronaviruses within the last 5 solar cycles is likely due to a global minimum within a hypothetical longer cosmic-solar cycle (~25 solar cycles) that modulates the relative sunspot count of each solar cycle and has been linked to historical pandemics. A meta-analysis has detected such a frequency of global pandemics over the last millennium [2017JAsBO…5..159W]. The present sun cycle, 25, beginning with a minimum coincident with the first SARS-2 case of COVID-19, has the lowest sunspot count in recorded history (i.e. a double or triple minimum). This likely explains the genesis of the difference in duration and intensity between SARS-1 & SARS-2.

This longer solar-cosmic cycle that modulates the relative sunspot count of a solar cycle, the midpoint of which is associated with global pandemics, has recently been measured at 208 years by C-14 time-cycle analysis, and is itself modulated by a 2,300-year cycle. These time-cycles accord with the (perhaps time-varying) Mayan Round calendar: 1 K'atun = 2 solar cycles (~20 years); 1 May = 13 K'atun (~256 years); 1 B'ak'tun = 20 K'atun (~394 years); 1 Great Cycle = 13 B'ak'tun (~5,125 years). Thus, the 208-year cycle is between 1/2 B'ak'tun (~197 years) and 1 May (~256 years, 13 K'atuns). It is likely the length of 25 sun cycles, the same as the May cycle, yet has decreased in length over the last few thousand years (perhaps along with sunspot counts). The 2,300-year cycle is ~6 B'ak'tuns (2,365 years), constituting almost half of a Great Cycle (13 B'ak'tuns). We are likely at a triple minimum in sunspot count from all 3 solar-cosmic cycles, at the start of the first K'atun (2020) of a new Great Cycle (2012), falling in the middle of the May (associated with crises).

The entropic characterization of the pathogenesis as prolonged radioactivity – low entropic conditioning of high entropy – leads to the property of high durability on organic matter and stable mutations.

References

  1. [fincher_thornhill_2012]  Corey L. Fincher and Randy Thornhill. “Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity”. In: Behavioral and Brain Sciences 35.2 (2012), pp. 61–79. doi: 10.1017/S0140525X11000021.
  2. [VIOL-SAC]  René Girard. Violence & The Sacred.
  3. [2017JAsBO…5..159W]  N. C. Wickramasinghe et al. “Sunspot Cycle Minima and Pandemics: The Case for Vigilance?” In: Journal of Astrobiology & Outreach 5.2, 159 (Jan. 2017), p. 159. doi: 10.4172/2332-2519.1000159.
