Normal Communication: Distribution & Codes

Consider the modeling of a communication system. A message is sent through this system, arriving at a state of the system at each time. The message is thus the temporal reconfiguration chain of the system as it takes on different states from its possibility set \(\Omega\). Due to noise in the message-propagation channel, i.e. the necessity of interpretation in deciphering the meaning of a message from its natural ambiguity, we can only know the probability of the system's state at a given time, so that the full interpretation of the message is a sequence of probability distributions: the probability of the system having a certain state at a certain time. Performing this operation discretely in finite time, we can only sample the code as a sequence of states over a number of trial-repetitions (\(N\)), and take the frequency of each state at each time to be its approximate probability. We consider this the empirical interpretation of the message. With \(b\) possible states to our system, let \(\Sigma_b\) be the symbolic code-space of all possible message sequences, including bi-directionally infinite ones, i.e. where the starting and finishing times are not known. Thus, a given empirically sampled message sequence is \(\tilde{x}^l=(x^l_k)_{k=-\infty}^{\infty}\), \(x^l_k \in \{0, \cdots, b-1\}\), where the empirical interpretation is given by the frequency distributions \(\tilde{f}_{X_k}(i)=\frac{1}{N}\sum_{l=1}^{N} \mathbf{1}_{\{x^l_k=i\}}\).
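A minimal sketch of this empirical interpretation (assuming, purely for illustration, a hypothetical \(b\)-state random message source): tabulate the frequency of each state at each time across \(N\) sampled sequences.

```python
import numpy as np

# A minimal sketch of empirical interpretation: the true per-time state
# distributions are unknown; we only see N sampled sequences.
rng = np.random.default_rng(0)
b, T, N = 4, 10, 1000                      # states, observed time-window, trials

# Hypothetical message source: a random walk on the b states (a stand-in for the system).
def sample_message():
    x = [rng.integers(b)]
    for _ in range(T - 1):
        x.append((x[-1] + rng.choice([-1, 0, 1])) % b)
    return x

samples = np.array([sample_message() for _ in range(N)])   # shape (N, T)

# Empirical interpretation: frequency of state i at time k, f~_{X_k}(i).
f_tilde = np.zeros((T, b))
for k in range(T):
    for i in range(b):
        f_tilde[k, i] = np.mean(samples[:, k] == i)

print(f_tilde)          # each row sums to 1: one distribution per time-step
```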

Stationary Distribution: What are the initial distribution conditions such that the distribution does not change over time? Let \(f\) be the time-iterating operation of the system on its space \(X\). While the times are counted \(t\in \mathbb{N}\), the length of each time step \(t_i \rightarrow t_{i+1}\) is given by \(\Delta t_i\), such that the real time \(T\) is given by \(T(t_i)=\sum_{k=0}^{i-1} \Delta t_k\). The system at a given time is given by the probability distribution over the different states, which are the macro partitions \(\omega \in \Omega\) of the micro states \(x \in \omega\), so that \(F_t(\omega_k)=f^t(x \in \omega)=\mathbb{P}(X_t=k)\). \(F_t(\omega_i)=F(t)[i]\) is the time-indexed distribution over the class-partitioned state-space. The stationary distribution is such that \(F(0)=F(t)\) for all \(t\).
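A minimal numerical sketch (assuming, for illustration, a small Markov transition matrix standing in for the time-iteration \(f\)): the stationary distribution is the fixed point \(F = F P\), found here by power iteration.

```python
import numpy as np

# Hypothetical 3-state transition matrix P (rows sum to 1), standing in for f.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

F = np.array([1.0, 0.0, 0.0])       # an arbitrary initial distribution F(0)
for _ in range(1000):                # iterate F(t+1) = F(t) P until it stops changing
    F = F @ P

print(F)                             # stationary distribution
print(np.allclose(F, F @ P))         # True: F = F P
```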

Consider the string of numbers \((x_k)_{k=T_0}^{T_N}\) from \(N\) iterations of an experiment. What does it mean for the underlying numbers to be normally distributed? It means that the experiment is independent of time: the distribution stays the same over each time interval. Given a time-dependent process, the averages of these empirical measurement numbers will always tend to normality. Thus, normality is the stationary distribution of the averaging process. Over random time-lengths, it averages all the values in that time-interval, without remembering the length of time or, equivalently, the number of values; this is the Markov property of time homogeneity. Consider a system that changes states over time between \(b\) different state-indexes \(\{0, \cdots, b-1\}\). When the system-state appears as 0, we perform an average of the previous values between the present time and the previous occurrence of 0. Thus the variable 0, although an intrinsic part of the object of measurement, is in fact a property of the subject performing the measurement, as when he or she decides to stop the measurement process and average the results. We call such a variable the mimetic basis when its objectivity depends upon a subjectivity in the action of measurement, and the mimetic dynamics are given by the relationship between the occurrence of a 0 and the other states. Here 0 is the stopping time, at which a string of results is averaged before continuing. Let \(T_0^{(k)}=\inf\{m > T_0^{(k-1)}: X_m=0\}\) be the \(k\)th occurrence time of state 0, with excursion length \(\tau_0^{(k)}=T_0^{(k)}-T_0^{(k-1)}\), and let \(\hat{f}(i)\) give the actual measured value from the \(i\)th state of the system. In reality, the system's time-inducing function \(f\) has resulted in a particular value \(f(x) \in X\) before it was partitioned into \(\Omega\) via \(P\), and here the time-inverting (dys)function \(\hat{f}\) determines this original pre-value from the result. Often the empirical \(\tilde{f}\) is used, from the average of the state's values, i.e. \(\tilde{f}_x^{(k)}(i)=\frac{1}{M}\sum_{j=1}^{M} f^{n_j}(x)\) such that \(x, f^{n_j}(x)\in \omega_i\); \(n_{j-1}<l<n_{j} \Rightarrow f^l(x) \notin \omega_i\); and \(M=N_{T_0^{(k)}}(i)\), where \(N_{n}(i)=|\{m: X_m=i, m\leq n \}|\); this takes the average value over an \(M\)-length self-communication string for a particular state. The sequences \( \{\tilde{S}_k(x)= \frac{1}{\tau_0^{(k)}-1}\sum_{i=1}^{\tau_0^{(k)}}\tilde{f}_x^{(k)}(X_{T_0^{(k-1)}+i})\}_{k=1}^{N}\) and \(\{S_k(x)= \frac{1}{\tau_0^{(k)}-1}\sum_{i=1}^{\tau_0^{(k)}} f^{T_0^{(k-1)}+i}(x) \}_{k=1}^{N}\) approach normal distributions as \(N\) increases if state 0 is independent of the others.
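A minimal simulation sketch of the construction above (assuming a hypothetical irreducible Markov chain on \(b\) states and an arbitrary value function \(\hat{f}\)): values are averaged over each excursion between successive visits to state 0, producing the block-averages \(S_k\) described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 4
# Hypothetical irreducible transition matrix on states {0,...,b-1}.
P = np.full((b, b), 1.0 / b)
f_hat = np.array([2.0, -1.0, 0.5, 3.0])      # assumed measured value of each state

def excursion_averages(n_excursions):
    """Average f_hat over each excursion between successive visits to state 0."""
    averages, block, state = [], [], 0
    while len(averages) < n_excursions:
        state = rng.choice(b, p=P[state])
        if state == 0 and block:              # stopping time: average and restart
            averages.append(np.mean(block))
            block = []
        else:
            block.append(f_hat[state])
    return np.array(averages)

S = excursion_averages(20000)
print(S.mean(), S.std())      # summary of the block-averages S_k described in the text
```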

Exploratory Data Analysis

The Issue of The Datum

Data, as finite, can never be merely fit without presupposition. The theory of the data, as what it is, is the presupposition that discloses the data in the first place through the act of measurement. As independent and identically distributed (i.i.d.) measurements, there is no temporality to the measurement activities in serial, so the ordering of the samples is not relevant. But this means that there is no temporality to the disclosure of the object at hand, preventing one measurement from being distinguished from another: they are either all simultaneous or incomparable. Thus, i.i.d. random variables (measurement resultants) as a whole describe the different time-invariant superpositions of the system in question, since at the single null-time of the serial measurements all the sample-values were found together 'at once' or 'in a unity.' To 'force' an order on the data by some random indexing is an unnecessary addition to our data-sampling process, and an analysis that requires an ordering will be extraneous to the matter at hand. Thus, data as 'non-hypothesized' describes a state-system 'broken' in its spatio-temporality, unable to reveal itself as itself in unity, but rather as several (many) different states \(\omega_i\) all occurring with a minor existence (\(0<\mathbb{P}(\omega_i \in \Omega)<1\)). In the schemata of Heidegger (Being & Time), such things are present-at-hand in that they have been removed from the interlocking chains of signification from-which and for-which they exist through participation in the world as ready-to-hand; their existence is in question.

The Mimetics of Meaning

The meaning of a datum is its intelligibility within an interpretive context of signification. An Interpretation gives a specificity to its underlying distribution. A rational Interpretation of data gives a rational structure to its conditionality. In the circular process of interpretation, an assumption is made from which to understand the datum, while in the branching process, these assumptions are hierarchically decomposed.

Statistics as The Logic of Science

The question of science itself has never been its particular object of inquiry but its existential nature, in its possibility and thereby the nature of its actuality. Science is power, and thus abstracts itself as the desired meta-good, although it is always itself about particularities as an ever-finer branching process. Although a philosophic question, the 'question of science' is inherently a political one, as it is the highest good desired by a society, its population, and its government. To make sense of science mathematically-numerically, as statistics claims to, the scientific process itself must be understood through probability theory as The Logic of Science (Jaynes).

Linguistic Analysis of the Invariants of Science: The Laws of Nature

The theory of science, as the proof of its validity in universality, must consider the practice of science, as the negating particularity. The symbolic language of science, within which its practice and results are embedded, necessarily negates its own particularity as well, so as to represent a structure universally. Science, in the strict sense of having already achieved the goal of universality, is de-linguistified. While mathematics, in its extra-linguistic nature, often has the appearance of universal de-linguistification, such is only a semblance and often an illusion. The numbers of mathematics can always refer to things, and in the particular basis of their conceptual context always do. The non-numeric symbols of mathematics too represented words before short-hand gave them a distilled symbolic life. The de-linguistified nature of the extra-linguistic property of mathematics is that, to count as mathematics, the symbols must themselves represent universal things. Thus, all true mathematical statements may represent scientific phenomena, but the context and work of this referencing is not trivial and is sometimes the entirety of the scientific labor. The tense of science, as the time-space of the activity of its being, is the tensor, which is the extra-linguistic meta-grammar of null-time, and thus of any and all times.

The Event Horizon of Discovery: The Dynamics between an Observer & a Black Hole

The consciousness who writes or reads science, and thereby reports or performs the described tensor as an action of experimentation or validation, is the transcendental consciousness. Although science is real, it is only a horizon. The question is thus of its nature and existence at this horizon. What is knowable of science is thereby known as 'the event horizon,' as that which has appeared already, beyond which is merely a 'black hole' as what has not yet revealed itself; there is always a not-yet to temporality, and so such a black hole can always at least be found as all of science that has not and cannot be revealed, since within the very notion of science is a negation of withdrawal (non-appearance) as the condition of its own universality (negating its particularity). Beginning here with the null-space of black holes, the physical universe, at least in its negative gravitational entities, has a natural extra-language, at least for the negative linguistic operation of signification whereby what is not known is the 'object' of reference. In this cosmological interpretation of subjectivity within the objectivity of physical space-time, we thus come to the result of General Relativity that the existence of a black hole is not independent of the observer, and in fact is only an element in the Null-Set, or negation, of the observer. To 'observe' a black hole is to point to and outline something of which one does not know. If one 'knew' what it was positively, then it would not be 'black' in the sense of not emitting light within the reference frame (space-time curvature) of the observer. That one cannot see something, as receive photons reflecting space-time measurements, is not a property of the object but rather of the observer in his or her subjective activity of observation, since to be at all must mean there is some perspective from which it can be seen. As the Negation of the objectivity of an observer, subjectivity is the negative gravitational anti-substance of black holes. Subjectivity, as what is not known by consciousness, observes the boundaries of an aspect (a negative element) of itself in the physical measurement of an 'event horizon.'

These invariants of nature, as the conditions of its space-time, are the laws of dynamics in natural science. At the limit of observation we find the basis of the conditionality of the observation and thus of its existence as an observer. From the perspective of absolute science, within the horizon of universality (i.e. the itself-as-not-itself of the black hole, or Pure Subjectivity), the space-time of the activity of observation (i.e. the labor of science) is a time-space as the hyperbolic negative geometry of conditioning (the itself of an unconditionality). What is a positive element of the bio-physical contextual condition of life, from which science takes place, for the observer, is a negative aspect from the perspective of transcendental consciousness (i.e. science), as the limitation of the observation. Within the Husserlian Phenomenology and Hilbertian Geometry of early 20th-century Germany, from which Einstein's theory arose, a Black-Hole is therefore a Transcendental Ego as the absolute measurement point. Our Solar System is conditioned in its space-time geometry by the Milky Way galaxy it is within, which is conditioned by the black hole Sagittarius A* (SgrA). Therefore, the unconditionality of our solar space-time (hence its bio-kinetic features) is an unknown of space-time possibilities, enveloped in the event horizon of SgrA. What is the inverse of our place (i.e. space-time) of observation will naturally only exist as a negativity, as what cannot be seen.

Classical Origins of The Random Variable as The Unknown: Levels of Analysis

Strictly speaking, within the Chinese Cosmological Algebra of 4 variables (\(\mu\), X, Y, Z), this first variable of primary Unknowing is represented by \(X\), or Tiān (天), for 'sky,' as that which conditions the arc of the sky, i.e. “the heavens” or the space of our temporal dwelling 'in the earth.' We can say thus that \(X=SgrA\) is the largest and most relevant primary unknown for solarized galactic life. While of course X may represent anything, in the total cosmological nature of science, i.e. all that Humanity doesn't yet know and is conditioned by, it appears most relevantly and holistically as SgrA. It can be said thus that all unknowns (\(x\)) in our space-time of observation are within “the great unknown” (\(X\)) of SgrA, as thus \(x \in X\), or \(x \mathcal{A} X\) for the negative aspectual (\(\mathcal{A}\)) relationship “x is an aspect of X.” These are the relevant, and most general (i.e. universal), invariants to our existence of observation. They are the relative absolutes of, from, and for science. Within more practical scientific judgements from a cosmological perspective, the relevant aspects of variable unknowns are the planets within our solar system as conditioning the solar life of Earth. The Earthly unknowns are the second variable Y, or Di (地), for “earth.” They are the unknowns that condition the Earth, or life, as determining the changes in climate through their cyclical dynamics. Finally, the last unknown of conditionals, Z, refers to people, Ren (人), for 'men,' as what conditions their actions. X is the macro unknown (conditionality) of the gravity of 'the heavens,' Y the meso unknown of biological life in and on Earth, and Z the micro unknown of psychology as quantum phenomena. These unknowns are the subjective conditions of observation. Finally, the 4th variable is the “object,” or Wu (物), \(\mu\), of measurement. This last quality is the only 'real' value in the sense of an objective measurement of reality, while the others are imaginary in the sense that their real values aren't known, and can't be within the reference of observation, since they are its own conditions of measurement within “the heavens, the earth, and the person” (Bréard, p.82).

In the quaternion tradition of Hamilton, (\(\mu\), X, Y, Z) are the quaternion components (\(\mu\), i, j, k). Since the real values of X, Y, Z in the scientific sense can't be truly known and thus must always remain unknowns, they are treated as imaginary numbers (\(i=\sqrt{-1}\)), with their 'values' merely coefficients to the quaternion units \(i, j, k\). These quaternion units are derived as quotients of vectors, and are thus the unit orientations of measurement's subjectivity, themselves representing the space-time. We often approximate this with the Cartesian X, Y, Z of 3 independent directions as vectors, yet to do so is to assume a Euclidean geometry of independence.

References

[1] E.T. Jaynes. "Probability in Quantum Theory". In: Complexity, Entropy and the Physics of Information. Ed. by W. H. Zurek. Addison-Wesley Publishing Co., 1990.

[2] Andrea Bréard. Nine Chapters on Mathematical Modernity: Essays on the Global Historical Entanglements of the Science of Numbers in China. Springer, 2019.

The Derivation of the Normal Distribution

Abstract: The internal space-time geometry of an experiment, as the distribution of measurement interactions, is set up by the statistical parameter.

The Scientific Process

Statistics is the method of determining the validity of an empirical claim about nature. A claim that is not particularly valid will likely be true only some of the time, or under certain specific conditions that are not too common. Ultimately, thus, within a domain of consideration, statistics answers the question of the universality of claims made about nature through empirical methods of observation. It may be that two opposing claims are both true in the sense that each is true half the time of random observation, or within half the space of contextual conditionalities. The scientific process, as progress, relies on methods that, over a linear time of repeated experimental cycles, increase the validity of the claims as the knowledge of nature approaches universality, itself always merely a horizon within the phenomenology of empiricism. This progressive scientific process is called 'discovery,' or merely research, although it is highly non-linear.

The scientific process is a branching process, as the truth of a claim is found to be dependent upon its conditions, and those conditions are found dependent on further conditionals. This structure of rationality is that of a tree. A single claim \(C\) has a relative validity \(V\) due to the truth of an underlying, or conditioning, claim \( C_i \), given as \( V_{C_i}(C)=V(C,C_i) \). We may understand the validity of claims through probability theory, in that the relative validity of a claim based on a conditioning claim is the probability the claim is true conditioned on \(C_i\): \(V(C,C_i)=P(C|C_i)\). In general, we will refer to the object under investigation, about which \(C\) is a claim, as the primary variable X, and the subject performing the investigation, by whom \(C_i\) is hypothesized (as a cognitive action), as the secondary variable Y. Thus, the orientation of observation, i.e. the time-arrow, is given as \(\sigma: Y \rightarrow X\).

An observer (Y) makes an observation from a particular position of an event (X) with its own place, forming a space-time of the action of measurement. An observation-as-information is a complex quantum-bit, which within a space of investigation is a complex variable, representing a tree of observation-conditioning rationality resulting from the branching process of hypothesis formation, with each node a conditional hypothesis and each edge length the conditional probability. The gravitation of the system of measurement is the space-time tensor of its world-manifold, stable or chaotic over the time of interaction. We thus understand the positions of observers within a place of investigation, itself given, at least in its real-part component, by the object of investigation.
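A minimal sketch of such a claim-validity tree (hypothetical structure and probabilities, chosen only for illustration): each node carries the probability of its claim conditioned on its parent, the validity along a branch is the product of edge probabilities, and the total validity of a claim sums over branches.

```python
# A hypothetical tree of conditioning claims: each node is (name, conditional
# probability given its parent, list of children). Numbers are purely illustrative.
tree = ("root", 1.0, [
    ("C1", 0.7, [
        ("C", 0.9, []),          # P(C | C1) = 0.9
    ]),
    ("C2", 0.3, [
        ("C", 0.4, []),          # P(C | C2) = 0.4
    ]),
])

def total_validity(node, target, acc=1.0):
    """Sum, over all branches, the product of edge probabilities leading to `target`."""
    name, p, children = node
    acc *= p
    if name == target and not children:
        return acc
    return sum(total_validity(child, target, acc) for child in children)

# V(C) = P(C|C1)P(C1) + P(C|C2)P(C2) = 0.9*0.7 + 0.4*0.3 = 0.75
print(total_validity(tree, "C"))
```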

Experimental Set-up

Nature is explained by a parameterized model. Each parameter, as a functional aggregation of measurement samples, has itself a corresponding distribution as it occurs in nature along the infinite, universal horizon of measurement.

Let \(X^n\) be a random variable representing the n qualities that can be measured for the thing under investigation, \(\Omega\), itself the collected gathering of all its possible appearances, \(\omega \in \Omega\), such that \(X^n:\omega \rightarrow {\mathbb{R}}^n\). Each sampled measurement of \(X^n\) through an interaction with \(\omega\) is given as an \(\hat{X}^n(t_i)\), each one constituting a unit of indexable time in the catalogable measurement process. Thus, the set of sampled measurements, a sample space, is a partition of 'internally orderable' test times within the measurement action, \(\{ \hat{X}^n(t): t \in \pi \}\).


In this set-up of statistical sampling, one will notice that the step-wise process-timing of a single actor performing n sequential measurements can be represented the same as n indexed actors performing simultaneous measurements, at least with regard to internal time accounting. In order to infer the latter interpretational context, such as to preserve the common-sense notion of time as distinct from social space, one would represent all n simultaneous measurements as n dimensions of X, assumed to be generally the same in quality in that all n actors sample the same object in the same way, yet distinct in some orderable indexical quality. Thus, in each turn of the round time (i.e. one unit), all actors perform independent and similar measurements. It may be, as in progressive action processes, that future actions are dependent on previous ones, and thus independence is only found within the sample space of a single time round. Alternatively, it may also be that the actors perform different actions, or are dependent upon each other in their interactions. Thus, the notion of actor(s) may be embedded in the space-time of the action of measurement. The embedding of a coordinated plurality of actors, in the most mundane sense of 'collective progress,' can be represented as the group action of all independent and similar measurers completing itself in each round of time, with the inter-temporalities of the measurement process being similar but dependent on the previous round. The progressive interaction may be represented as the inducer \(I^+:X(t_i) \rightarrow X(t_{i+1})\), with the assumptions of similarity and independence as \(\hat{x}_i(t) \sim \hat{x}_j(t) \ \& \ I(\hat{x}_i(t),\hat{x}_j(t))=0\). We take \(\hat{X}(t)\) to be a group of measurement actors/actions \(\{ \hat{x}_i(t): i \in \pi \}\) that acts on \(\Omega\) together, or simultaneously, to produce a singular measurement of one round time.
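A minimal simulation sketch of this arrangement (hypothetical object value and noise scales, chosen for illustration): within each round, n actors make independent, similar measurements; across rounds, the measured state drifts, so samples are independent within a round but dependent between rounds.

```python
import numpy as np

rng = np.random.default_rng(2)
n_actors, n_rounds = 5, 100
mu = 10.0                                   # assumed "true" value of the object at round 0

rounds = []
for t in range(n_rounds):
    # Within-round: n independent, similar measurements of the current state.
    x_t = mu + rng.normal(0.0, 1.0, size=n_actors)
    rounds.append(x_t)
    # Between-round dependence: the inducer I+ drifts the state from its previous value.
    mu = mu + rng.normal(0.0, 0.1)

rounds = np.array(rounds)                   # shape (n_rounds, n_actors)
print(rounds.mean(axis=1)[:5])              # per-round group measurements (one per round time)
```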

Derivation of the Normal Distribution

The question with measurement is not "what is the true distribution of the object in question in nature?", but "what is the distribution of the parameter I am using to measure?". The underlying metric of the quality under investigation, itself arising through an interaction of measurement as the distance function within the investigatory space-time, is \(\mu\). As the central limit theorem states, averages of these measurements, each having an error, will converge to normality. We can describe analytically the space of our 'atemporal' averaged measurements in that the rate of change of the frequency \(f\) of our sample measurements \(x, x_0 \in X\), with respect to the space of measuring, is negatively proportional, with constant k, to the product of the distance from the true measurement (\(\mu\)) and the frequency:
$$\forall \epsilon > 0, \exists \delta(\epsilon)>0 \ s.t. \ \forall x, \ |x_0-x|<\delta \rightarrow \bigg| -k(x_0-\mu)f(x_0) - \frac{f(x_0)-f(x)}{x_0-x}\bigg|<\epsilon $$
or in the differential form
$$\frac{df}{dx}=-k(x-\mu)f(x)$$
Separating variables and integrating,
$$\int \frac{df}{f}=\int -k(x-\mu)\,dx \ \Rightarrow \ \ln f(x) = -\frac{k}{2}{(x-\mu)}^2 + c$$
the solution distribution is scaled by the constant of integration, \(C=e^c\):
$$f(x)=Ce^{-\frac{k}{2}{(x-\mu)}^2}$$
given the normalization of the total size of the universe of events as 1
$$ \int_{-\infty}^{\infty} f dx =1$$
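evaluating the Gaussian integral gives the intermediate step
$$ \int_{-\infty}^{\infty} Ce^{-\frac{k}{2}{(x-\mu)}^2}\,dx = C\sqrt{\frac{2\pi}{k}} = 1$$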
thus,
$$C=\sqrt{\frac{k}{2\pi}}$$
so the total distribution is,
$$f(x)=\sqrt{\frac{k}{2\pi}}e^{-\frac{k}{2}{(x-\mu)}^2}$$
$$\mathbb{E}(X)=\int_{-\infty}^{\infty} x f(x)\,dx=\mu$$
$$\sigma^2=\mathbb{E}\big[{(X-\mu)}^2\big]=\int_{-\infty}^{\infty} {(x-\mu)}^2 f(x)\,dx=\frac{1}{k}$$

so,
$$f(x)=N\bigg(\mu,\ \sigma=\frac{1}{\sqrt{k}}\bigg)$$
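A minimal numerical sketch of this derivation (hypothetical values k = 2, μ = 1, chosen only for illustration): integrating the differential relation df/dx = −k(x−μ)f forward and normalizing recovers the closed-form normal density.

```python
import numpy as np

k, mu = 2.0, 1.0                       # hypothetical constants for illustration
x = np.linspace(mu - 5, mu + 5, 20001)
dx = x[1] - x[0]

# Integrate df/dx = -k (x - mu) f by a simple Euler scheme, then normalize.
f = np.zeros_like(x)
f[0] = 1e-12                           # tiny positive start in the far-left tail
for i in range(len(x) - 1):
    f[i + 1] = f[i] + dx * (-k * (x[i] - mu) * f[i])
f /= f.sum() * dx                      # enforce the unit-total normalization

closed_form = np.sqrt(k / (2 * np.pi)) * np.exp(-k / 2 * (x - mu) ** 2)
print(np.max(np.abs(f - closed_form)))  # small: numerical and closed-form densities agree
```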

Reconstructing Distributions by Moments


When two states, i.e. possible measured outcomes, of the stochastic sampling process of the underlying statistical object communicate, there is a probability of one occurring after the other, perhaps within the internal time (i.e. indexical ordering) of the measurement process, \(t \in \pi=(1, \cdots, n)\), for the sample space \((\hat{X}_1, \cdots , \hat{X}_n)\). Arranging the resulting values as a list, there is some chance of one value occurring after another; such is a direction of communication between the states those values represent. When two states inter-communicate, there is a positive probability that each state will occur after the other (within the ordering \(\pi\)). Inter-communicating states have the same period, defined as the GCD of the distances between occurrences. The complex variable of functional communicativity can thus be described by the real probability of conditioned occurrence and the imaginary period of its intercommunications.
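A minimal sketch of extracting both parts of this complex description from data (hypothetical sampled sequence, for illustration): the conditional probability that one state follows another, and each state's period as the GCD of the gaps between its occurrences.

```python
from math import gcd
from functools import reduce
from collections import Counter

# Hypothetical sampled sequence of states (the internal ordering pi).
seq = [0, 1, 2, 0, 1, 2, 0, 2, 1, 0, 1, 2, 0, 1, 2, 0]

# Real part: conditional probability P(next = j | current = i) from adjacent pairs.
pair_counts = Counter(zip(seq, seq[1:]))
state_counts = Counter(seq[:-1])
P = {(i, j): c / state_counts[i] for (i, j), c in pair_counts.items()}
print(P[(0, 1)])                 # e.g. how often state 1 follows state 0

# Imaginary part: the period of a state, the GCD of gaps between its occurrences.
def period(state):
    times = [t for t, s in enumerate(seq) if s == state]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return reduce(gcd, gaps) if gaps else None

print({s: period(s) for s in set(seq)})
```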
 
To describe our model by communicative functionals is to follow the Laplacian method of generating the distribution via moments by finite difference equations. A single state, within state-space rather than time-space, is described as a complex variable \(s=\sigma + \omega i\), where \(\sigma\) is the real functional relation between state & system (or part & whole), while \(\omega\) is its imaginary communicative relationship. If we view the branching evolution of the possible states measured in a system under sampling, then the actual sampled values are a path along this decision-tree. The total system, in its Laplacian, or characteristic, representation, is the (tensorial) sum of the underlying sub-systems, to which each state belongs as a possible value. A continuum (real-distributional) system can only result as the infinite branching process, in which each value is a limit of an infinite path-sequence of rationalities, the state-system sub-dividing into state-systems until the limiting (stationary) systems-as-states are reached that are non-dynamic or static in the inner spatio-temporality of self-differentiation, i.e. non-dividing. Any node of this possibility-tree can be represented as a state of the higher-order system by a complex value, or as a system of the lower-order states by a complex function. The real part of this value is the probability of the lower state occurring given the higher system occurring (uni-directional communicativity), while the imaginary part is its relative period. Similarly, the real function of a higher system gives the probabilities of lower states occurring given its occurrence, and the imaginary part their relative periods.

The Theory of Statistical Inference

Let \(X^n\) be a random variable representing the n qualities that can be measured for the thing under investigation, \(\Omega\), itself the collected gathering of all its possible appearances \(\omega \in \Omega\) such that \(X^n:\omega \rightarrow \mathbb{R}^n\).  Each sampled measurement of \(X^n\) through an interaction with \(\omega\) is given as an \(\hat{X}^n(t_i)\), each one constituting a unit of indexable time in the catalogable measurement process.  Thus, the set of sampled measurements, a sample-space, is a partition of ‘internally orderable’ test times within the measurement action, \(\{ \hat{X}^n(t): t \in \pi \}\). 
 
\(\Omega\) is a state-system, i.e. the spatio-temporality of the thing in question, in that it has specific space-states \(\omega\) at different times, \(\Omega(t)=\omega\). \(X\) is the function that measures \(\omega\). What if measurement is not simply Real, but Complex: \(X: \Omega \rightarrow \mathbb{C}\)? Every interaction with \(\Omega\) lets it appear as \(\omega\), which is quantified by \(X\). From these interactions, we seek to establish truths about \(\Omega\) by quantifying the probability that the Claim (C) is correct, itself a quantifiable statement about \(\Omega\).
 
Ultimately, we seek the nature of how \(\Omega\) appears differently depending on one's interactions with it (i.e. samplings), and thus the actual distribution \((\mathcal{D})\) of the observed measurements using our measurement apparatus \(X\); that is, we ask about \(\mathcal{D}_X(\Omega)=f_{X(\Omega)}\). The assumptions will describe the class \(\mathcal{C}\) of the family \(\mathcal{F}\) of distribution functions to which \(f_X\) belongs, i.e. \(f_X \in \mathcal{F}_{\mathcal{C}}\), for the \(\hat{X}\) measurements of the appearances of \(\Omega\), while the sampling will give the parameter \(\theta\), such that \(f_X =f_{\mathcal{C}}(\theta)\). The hypothesis distribution-parameter \((\theta^*)\) may be established either by prior knowledge \((\theta_0)\) or by the present n-sampling of the state-system \((\theta_1)\). Thus, the parameter obtained from the present sampling, \(\hat{\theta}=\Theta(\hat{X}_1, \cdots, \hat{X}_n)\), is either used to judge the validity of a prior parameter estimation \((\theta^*=\theta_0)\) or is assessed in its own right (i.e. \(\theta^*=\theta_1=\hat{\theta}\)) as representative of the actual object's state-system distribution, the difference between the two hypothesis set-ups, a priori vs. a posteriori, being whether the present experiment is seen as having a bias or not. In either the prior or posterior case, \(H_{-}:\theta_0=\theta|\hat{\theta}\) or \(H_{+}:\hat{\theta}=\theta\), one uses the present sampling to establish the validity of a certain parameter value. If \(\hat{\Delta} \theta =\theta_0-\hat{\theta}\) is the expected bias of the experiment, then \(H_{-}:\hat{\theta}+\hat{\Delta}\theta=\theta|\hat{\theta} \ \& \ H_{+}:\hat{\theta}=\theta|\hat{\theta}\). Thus, in all experiments, the statistical question is primarily that of the bias of the experiment that samples a parameter, whether it is 0 or not, i.e. \(H_{-}:|\hat{\Delta}\theta|>0 \ or \ H_{+}:\hat{\Delta}\theta=0\).
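A minimal sketch of this bias question (hypothetical prior value and data, with the mean as the aggregating statistic \(\Theta\)): estimate \(\hat{\theta}\) from the sample and ask whether its discrepancy from the prior \(\theta_0\) is consistent with \(\hat{\Delta}\theta = 0\).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta_0 = 5.0                               # prior parameter estimate (assumed)
sample = rng.normal(5.3, 1.0, size=40)      # hypothetical present n-sampling

theta_hat = sample.mean()                   # present estimate: Theta(X_1,...,X_n)
delta_hat = theta_0 - theta_hat             # expected bias of the experiment

# One-sample t-test of H+: the bias is 0 (theta = theta_0) against H-: |bias| > 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=theta_0)
print(theta_hat, delta_hat, p_value)
```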
 
The truth of the bias of the experiment, i.e. how representative it is, can only be given by our prior assumptions, \(A\), such as to know the validity of our claim about the state-system's distributional parameter, \(P(C|A)=P(\theta=\theta^*|\hat{\theta})=P(\Delta \theta=\hat{\Delta}\theta)\), as the probability that our expectation of bias is correct. Our prior assumption, \(A: f_X \in \mathcal{F}_{\mathcal{C}}\), is about the distribution of the (k)-parameters in the class-family of distributions, where \(\mathcal{F}_{\mathcal{C}}=\{f(k)\}\) s.t. \(f_X=f(\theta)\), that is, about \(\mathcal{D}_K(\mathcal{F}_{\mathcal{C}})\). Here, \(K\) is a random variable that samples state-systems in the wider class of generally known objects, or equivalently their distributions (i.e. functional representations), measuring the (k)-parameter of their distribution, such that \(f_K(\mathcal{F}_{\mathcal{C}})=\mathcal{D}_K(\mathcal{F}_{\mathcal{C}})\). The distributed objects in \(\mathcal{F}_{\mathcal{C}}\) are themselves relative to the measurement system \(X\), although they may be transformed into other measurement units, in that this distribution class is of all possible state-systems which \(X\) might measure sample-wise, of which we seek to know specifically about the \(\Omega\) in question to obtain its distributional (k)-parameter value of \(\theta\). Essentially, the assumption \(A\) is about a meta-state-system as the set of all objects \(X\) can measure, and thus has more to do with \(X\), the subject's method of measurement, and \(\Theta\), the parametrical aggregation of interest, than with \(\Omega\), the specific object of measurement.

\(\theta \in \Theta\), the set of all the parameters to the family \(\mathcal{F}\) of relevant distributions, in that \(\Theta\) uniquely determines \(f\), in that \(\exists M: \Theta \rightarrow f \in  \mathcal{F}\), or \(f=\mathcal{F}(\Theta)\). 

COVID-19 Health Economics Data Encoding

The communicable SARS-CoV-2 viral spread within different localities should be analyzed as a message within a communication system, mutating based upon the properties of the local communication system that propagates it. The two properties of a communication state-system can be expressed as a complex number (λ), with a real component as the total (economic) functionality (σ) of the communication system that propagates it and an imaginary component as the communicability (ω) of the lifeworld, i.e. social non-distancing, within which it is embedded, i.e. λ = σ + ωi. This analytical format, for any particular state or place of analysis, thus encodes the percent of a shut-down (i.e. dysfunctionality) and the social-distancing policy in place (i.e. non-communicability), both of which can be calculated as a combination of the policy and the percent-compliance. While a communication system may thus be represented as a complex-variable system with a real system functionality and an imaginary lifeworld communicativity, any of its sub-systems or states, i.e. at lower levels of analysis, can be represented similarly by a complex number. Our public health response system must thus seek to reduce |λ|² at all levels of analysis and components of interaction.
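A minimal encoding sketch (hypothetical policy and compliance figures, illustrative only): the real part is taken as the economic functionality remaining after the shut-down, the imaginary part as the social non-distancing remaining after the distancing policy, each scaled by compliance.

```python
# Hypothetical inputs for one locality (illustrative numbers only).
shutdown_policy = 0.60        # fraction of economic activity ordered closed
distancing_policy = 0.70      # fraction of social contact to be eliminated
compliance = 0.80             # fraction of the population complying with both

# Real functionality sigma and imaginary communicability omega after mitigation.
sigma = 1.0 - shutdown_policy * compliance
omega = 1.0 - distancing_policy * compliance

lam = complex(sigma, omega)   # lambda = sigma + omega*i for this locality
print(lam, abs(lam) ** 2)     # the squared magnitude the response seeks to reduce
```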

The functional communicativity of a particular place (i.e. city or state) within a general place (i.e. country or world), as a communication system, determines the spread of a communicable disease, such as SARS-2. The purpose of the data project will be to sample individual functional communicativities as behavioral dysfunctionalities as they relate to viral infection; these measurements can then be situated within the functional communicativities of the socialities, as communication systems, which the individuals inhabit, since such properties within biophysical systems are often inherited from their environmental embeddings. The meta-dimension of the proposed empirical research must be to study the bi-directional (i.e. inter-communicative) causality between social-system communication and the spread (and mutation) of communicable diseases, from which the individual measurements can be situated within already-established research contexts on the spread of COVID-19. This aim of the project is already underway. While one can measure a sociality's actual functional communicativity from PCA-KDE aggregation of the individually sampled communicative functionals, i.e. how well individuals social-distance and reduce participation in economic functionality, there exists on the policy level the sociality's self-understood communicative functionality conceived as its policy, presently the quarantine measures of economic shut-down and social-distancing. By developing this policy-analytic along with the measurement system, the difference between λ and λ̂ can be statistically measured as population non-compliance, explaining anomalous viral surges. This can allow a mapping between the results at the individual level and the health-system policies at the state level.

To enable population-wide compliance with public-safety health measures, the causes of the high-risk behaviors associated with breaking quarantine must be identified and treated systemically. This approach thus conceives of infection as having a pathogenesis in the behaviors that induce it, which are themselves caused by the behavioral-dysfunctional residuals of the malfunctioning of an underlying social-systemic process. City and state variation in infection rate-curves (i.e. time-based differential distributions) can be explained by the presence or absence (and degree) of different high-risk behaviors, and there may be learning opportunities from successful cities or states. These differences between platial socialities may be classified according to the functional communicativity of the underlying communication system, a complex variable (λ = σ + ωi), which may be measured inversely by a real measure of the economic dysfunctionality (i.e. 'shut-down') and an imaginary measure of the communal miscommunication (i.e. 'social-distancing'). The present crisis derives from the divergence between the policy-required λ and the actually sampled λ̂.

Reconstructing Laplace’s Probability Calculus


This blog serves as a philosophically informed mathematical introduction to the ideas and notation of probability theory from its most important historical theorist. It is part of an ongoing contemporary formal reconstruction of Laplace's Calculus of Probability from his English-translated introductory essay, "A Philosophical Essay on Probabilities" (cite: PEP), which can be read along with these notes; the notes are divided into the same sections as Laplace's essay. I have included deeper supplements from the untranslated treatise Théorie Analytique des Probabilités (cite: TA), through personal and online translation tools, in section 1.10 and the Appendix (3).

Table of Contents:

  1. The General Principles of the Calculus of Probabilities
  2. Concerning the Analytical Methods of the Calculus of Probabilities
  3. Appendix: Calculus of Generating Functions

1. The General Principles of the Calculus of Probabilities

\(\Omega\) is the state-space of all possible events.
\(\omega \in \Omega\) is an event as element of the state.

1st Principle: The probability of the occurrence of an event \(\omega\) is the number of favorable cases divided by the total number of causal cases, assuming all cases are equally likely.


\(\Omega' =\{\omega_1',\cdots , \omega_n' \}\) is the derivational system of the state \(\Omega\) as the space of cases that will cause different events in the state. \(\Omega_{\omega}'= \{\omega_{i_1}', \cdots , \omega_{i_m}': \omega_{i_j} \rightarrow \omega\}\) is the derivational system of the state favoring the event \(\omega\). The order of a particular state (or derivational state-system) is given by the measure \((| \cdots |)\), evaluated as the number of elements in it.

$$P(\omega)=P(\Omega=\omega)=\frac{|\Omega_{\omega}'|}{|\Omega'|}=\frac{m}{n}$$
If we introduce time as the attribute of case-based favorability, i.e. causality, the event \(\omega\) is to occur at a future time \(t_1\), as would be represented by the formal statement \(\Omega(t_1)=\omega\). The conditioning cases, equally likely, which will deterministically cause the event at \(T=t_1\) are the possible events at the previous conditioning states of the system \(T=t<t_1\), given as \(\Omega(t_0<t<t_1) \in \Omega'(t_1 | t_0)=\{\omega_1', \cdots , \omega_n' \}\), a superposition of possible states-as-cases since they are unknown at the time of the present \(t_0\), where \(\Omega'\) is a derivational state-system, or set of possible causal states, here evaluated at \(t_1\) given \(t_0\), i.e. \(t_1 | t_0\). This set of possible cases can be partitioned into those that are favorable to \(\Omega(t_1)=\omega\) and those that are not. The set of cases favorable to \(\omega\) is \(\Omega_{\omega}'(t_1 | t_0)=\{\omega_{i_1}', \cdots , \omega_{i_m}': \omega_{i_j} \rightarrow \omega\}\).

$$P(\omega)=P\bigg(\Omega(t_1)=\omega \bigg| \Omega(t_0)\bigg)=\frac{|\Omega_{\omega}'(t_1 | t_0)|}{|\Omega'(t_1 | t_0)|}=\frac{m}{n}$$

2nd Principle: Assuming the conditioning cases are not equal in probability, the probability of the occurrence of an event \(\omega\) is the sum of the probabilities of the favorable cases
$$P(\omega)=\sum_j P(\omega_{i_j}')$$

3rd Principle: The probability of the combined event \((\omega)\) of independent events \(\{\omega_1, \cdots,\omega_n\}\) is the product of the probabilities of the component events.

$$P(\omega_1 \cap \cdots \cap \omega_n) = \prod_i P(\omega_i)$$

4th Principle: The probability of a compound event \((\omega)\) of two events dependent upon each other, \(\omega_1 \ \& \ \omega_2\), where \(\omega_2\) is after \(\omega_1\), is the probability of the first times the probability of the second conditioned on the first having occurred:$$P(\omega_1 \cap \omega_2)= P(\omega_1) * P(\omega_2 | \omega_1)$$

5th Principle, p.15: The probability of an expected event \(\omega_1\), conditioned on an occurred event \(\omega_0\), is the probability of the composite event \(\omega=\omega_0 \cap \omega_1\) divided by the a priori probability of the occurred event.
$$P(\omega_1|\omega_0)=\frac{P(\omega_0 \cap \omega_1)}{P(\omega_0)}$$

Always, a priori is from a prior state, as can be given by a previous event \(\omega_{-1}\). Thus, if we assume the present to be \(t_0\), the prior time to have been \(t_{-1}\), and the future time to be \(t_1\), then the a priori probability of the presently occurred event is made from \(\Omega(t_{-1})=\omega_{-1}\) as $$P(\omega_0)=P(\omega_0 | \omega_{-1})=P\bigg( \Omega(t_0)=\omega_0 \bigg | \Omega(t_{-1})=\omega_{-1} \bigg)$$
The probability of the combined event \(\omega_0 \cap \omega_1\) occurring can also be measured partially from the a priori perspective as
$$P(\omega_0 \cap \omega_1)=P(\omega_0 \cap \omega_1 | \omega_{-1})=P\bigg(\Omega(t_0)=\omega_0 \bigcap \Omega(t_1)=\omega_1 \bigg| \Omega(t_{-1})=\omega_{-1} \bigg)$$

Thus,
$$P(\omega_1|\omega_0)=P\bigg( (\omega_1|\omega_0) \bigg | \omega_{-1} \bigg)=\frac{P(\omega_0 \cap \omega_1 | \omega_{-1})}{P(\omega_0 | \omega_{-1})}$$

6th Principle: 1. For a constant event, the likelihood of a cause to an event is the same as the probability that the event will occur. 2. The probability of the existence of any one of those causes is the probability of the event (resulting from this cause) divided by the sum of the probabilities of similar events from all causes. 3. For causes, considered a priori, which are equally probable, the probability of the existence of a cause is the probability of the caused event divided by the sum of the product of probability of the events and the possibility (a priori probability) of their cause.

For event \(\omega_i\), let \(\omega_i'\) be its cause. While \(P\) is the probability of an actual existence, \(\mu\) is the measure of the a priori likelihood of a cause, since its existence is unknown. These two measurements may be used interchangeably where the existential nature of the measurement is known or where substitutions as approximations are permissible. In Principle 5 they are conflated, since the probability of an occurred event always implies an a priori likelihood.

  1. for \(\omega\) constant (i.e. only 1 cause, \(\omega'\)), \(P(\omega')=P(\omega)\)
  2. for \(\omega_i'\) equally likely, \(P(\omega_i')=P(\omega_i'|\omega)=\frac{P(\omega | \omega_i')}{\sum_j P(\omega|\omega_j')}\)
  3. \(P(\omega_i')=P(\omega_i'|\omega)=\frac{P(\omega | \omega_i') \mu(\omega_i')}{\sum_j P(\omega | \omega_j')\mu(\omega_j')}\)

7th principle, p.17: The probability of a future event, \(\omega_1\), is the sum of the products of the probability of each cause, drawn from the event observed, by the probability that, this cause existing, the future event will occur.

The present is \(t_0\), while the future time is \(t_1\). Thus, the future event expected is \(\Omega(t_1)=\omega_1\). Given that \(\Omega(t_0)=\omega_0\) has been observed, we ask about the probability of a future event \(\omega_1\) from the set of causes \(\Omega'(t_1)=\{\omega_1^{(i)}:\omega_1^{(i)} \rightarrow \Omega(t_1)\}\) (a change of notation for causes).
$$P(\omega_1|\omega_0)=\sum_i P(\omega_1^{(i)} | \omega_0)*P(\omega_1 | \omega_1^{(i)})$$
How are we to consider causes? They can be historical events with a causal-deterministic relationship to the future, or they can be considered event-conditions, as a spatiality (possibly true over a temporal duration) rather than a temporality (true at one time). Generally, we can consider causes to be hypotheses \(H=\{H_1, \cdots, H_n\}\), with \(P(H_i)\) the prior probability (single term) and \(P(\omega | H_i)\) the posterior (conditional) probability. The observed event \((\omega_0)\) is \(\omega_{obs}\) and the future event \((\omega_1)\) is the expected event \(\omega_{exp}\). Thus, we can restate principles 7 & 6 as:

  1. \( P(H_i|\omega_{obs})=\frac{P(\omega_{obs} | H_i)P(H_i)}{\sum_j P(\omega_{obs} | H_j)P(H_j)}\)
  2. \( P(\omega_{exp}|\omega_{obs})=\sum_i P(H_i | \omega_{obs})P(\omega_{exp} | H_i) = \frac{\sum_i P(\omega_{obs} | H_i)P(H_i)P(\omega_{exp} | H_i)}{\sum_j P(\omega_{obs} | H_j)P(H_j)}\)

Clearly, Principle 6 is the same as Bayes' Theorem (Wasserman, Thm. 2.16), which articulates the Hypotheses (H) as a partition of \(\Omega\), in that \(\Omega=\cup_i H_i\) with \(H_i \cap H_j = \emptyset\) for \(i \neq j\), so that each hypothesis is a limitation of the domain of possible events. The observed event is also considered a set of events rather than a single 'point.' Therefore, Principle 6 says that "the probability that the possibility of the event is comprised within given limits is the sum of the fractions comprised within these limits" (Laplace, PEP, p.18).
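A minimal numerical sketch of Principles 6 & 7 (hypothetical priors and likelihoods, for illustration): compute the posterior probability of each cause/hypothesis given the observed event, then the probability of a future event by weighting its probability under each cause.

```python
# Hypothetical causes H_i with prior probabilities ("possibilities") and likelihoods.
priors = [0.5, 0.3, 0.2]                    # P(H_i), a priori
lik_obs = [0.10, 0.40, 0.70]                # P(omega_obs | H_i)
lik_exp = [0.20, 0.50, 0.90]                # P(omega_exp | H_i)

# 6th principle (3rd form): posterior probability of each cause given the observation.
evidence = sum(p * l for p, l in zip(priors, lik_obs))
posteriors = [p * l / evidence for p, l in zip(priors, lik_obs)]

# 7th principle: probability of the future event, summed over the causes drawn from the observation.
p_future = sum(post * l for post, l in zip(posteriors, lik_exp))

print(posteriors, p_future)
```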

8th principle (PEP, p.20): The Advantage of Mathematical Hope, (A), depending on several events, is the sum of the products of the probability of each event by the benefit attached to its occurrence.

Let \(\omega=\{\omega_1, \cdots ,\omega_n: \omega_i \in \Omega\}\) be the set of events under consideration. Let (B) be the benefit function giving a value to each event. The advantage hoped for from these events is:

$$A(\omega)=\sum_i B(\omega_i)*P(\omega_i)$$

A fair game is one whose cost of playing is equal to the advantage gained through it.

9th principle, p.21: The Advantage (A), depending on a series of events \((\omega)\), is the sum of the products of the probability of each favorable event by the benefit to its occurrence minus the sum of the products of the probability of each unfavorable event by the cost to its occurrence.

Let \(\omega=\{\omega_1, \cdots ,\omega_n: \omega_i \in \Omega\}\) be the series of events under consideration, partitioned into \(\omega=(\omega^+,\omega^-)\) for favorable and unfavorable events. Let (B) be the benefit function for \(\omega_i \in \omega^+\) and (L) the loss function for \(\omega_i \in \omega^-\), each giving the value to each event. The advantage of playing the game is:

$$A(\omega)=\sum_{i: \omega_i \in \omega^+} B(\omega_i)P(\omega_i) - \sum_{j: \omega_j \in \omega^-} L(\omega_j)P(\omega_j)$$

Mathematical Hope is the positivity of A. Thus, if A is positive, one has hope for the game, while if A is negative one has fear.

In generality, (X) is the random variable function, \(X:\omega_i \rightarrow \mathbb{R}\), that gives a value to each event, either a benefit \((>0)\) or cost \((<0)\). The absolute expectation \((\mathbb{E})\) of value for the game from these events is:

$$\mathbb{E}(\omega)=\sum_i X(\omega_i)*P(\omega_i)$$

10th principle, p.23: The relative value of an infinitely small sum is equal to its absolute value divided by the total benefit of the person interested.

This section can be explicated by examining Laplace’s corresponding section in Théorie Analytique (S.41-42, p.432-445) as a development of Bernoulli’s work on the subject.

(432) For a physical fortune (x), an increase by (dx) produces a moral good reciprocal to the fortune, \(\frac{k\,dx}{x}\), for a constant (k). (k) is the "unit" of moral goodness (i.e. utility), in that \(\frac{dx}{x}=\frac{1}{k}\rightarrow\) 1 moral good. So (k) is the quantity of physical fortune at which a marginal increase by unity of physical fortune is equivalent to unity of moral fortune. For a moral fortune (y), $$y=k\ln x + \ln h$$
A moral good is the proportion of an increase in part of a fortune by the whole fortune. Moral fortune is the sum of all moral goods. If we consider this summation continuously for all infinitesimally small increases in physical fortune, moral fortune is the integral of the proportional reciprocal of the physical fortune by the changes in that physical fortune. Deriving this from principle 10,
$$dy=\frac{kdx}{x}$$

$$\int dy = y = \int \frac{k\,dx}{x} = k \int \frac{1}{x}\, dx = k\ln(x) + C$$ \((C=\ln(h))\) is the constant of minimum moral good when the physical fortune is unity. We can put this in terms of a physical fortune, \((x_0)\), the minimum physical fortune for surviving one's existence, the cost of reproducing the conditions of one's own existence. With \(h=\frac{1}{{x_0}^k}\), $$y=\int_{x_0}^x dy =\int_{x_0}^x \frac{k\,dx}{x}=k\ln(x) - k \ln(x_0)=k\ln(x) - k \ln\bigg(\frac{1}{\sqrt[k]{h}}\bigg)=k\ln(x) + \ln(h) $$
h is a constant given by an empirical observation of (y) as never positive or negative but always at least what is necessary, as even someone without any physical fortune will still have a moral fortune in their existence – it is thus the unpriced “physical fortune” of laboring existence.

(433) Suppose an individual with a physical fortune of (a) expects to receive a variety of changes in fortunes \(\alpha, \zeta, \gamma, \cdots\), as increments or diminishings, with probabilities of \(p, q, r, \cdots \) summing to unity. The corresponding moral fortunes would be,
$$ k ln(a+\alpha) + ln(h), k ln(a+\zeta) + ln(h), k ln(a+\gamma) + ln(h), \cdots $$
Thus, the expected moral fortune (Y) is
$$Y=kp ln(a+\alpha)+ kq ln(a+\zeta) + kr ln(a+\gamma) + \cdots + ln(h)$$
Let (X) be the physical fortune corresponding to this moral fortune, as
$$Y=k ln(X) + ln(h)$$
with,
$$X=(a+\alpha)^p(a+\zeta)^q(a+\gamma)^r \cdots$$
Taking away the primitive fortune (a) from this value of (X), the difference will be the increase in the physical fortune that would procure the individual the same moral advantage resulting from his expectation. This difference, the moral expectation, is therefore to be compared with the expression of the mathematical advantage,
$$p\alpha + q\zeta + r\gamma + \cdots$$
This results in several important consequences. One of them is that the mathematically most equal game is always disadvantageous. Indeed, if we denote by (a) the physical fortune of the player before starting the game, and by \(p\) his probability of winning, (434) \(\cdots\)
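A minimal numerical sketch of this comparison (hypothetical fortune, gains, and probabilities): the physical fortune X giving the same moral fortune as the expectation, the moral advantage X − a, and the mathematical advantage pα + qζ + ⋯, illustrating that a mathematically equal game is morally disadvantageous.

```python
import math

a = 100.0                                  # hypothetical primitive physical fortune
gains = [50.0, -50.0]                      # a mathematically equal (fair) game: win or lose 50
probs = [0.5, 0.5]                         # probabilities summing to unity

# Expected moral fortune: Y = k * sum p_i ln(a + gain_i) + ln h.  The physical fortune X
# with the same moral fortune is the probability-weighted geometric mean (k and h cancel).
X = math.prod((a + g) ** p for g, p in zip(gains, probs))

moral_advantage = X - a                    # increase of fortune procuring the same moral advantage
math_advantage = sum(p * g for g, p in zip(gains, probs))

print(X, moral_advantage, math_advantage)  # X ~ 86.6: moral advantage < 0 while the mathematical advantage is 0
```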

2. Concerning the Analytical Methods of the Calculus of Probabilities

$$\prod_{i=1}^n (1+a_i) -1 = \sum_{s=1}^{n}\ \sum_{i_1<\cdots<i_s} a_{i_1}\cdots a_{i_s}\,, \qquad \text{so for } a_1=\cdots=a_n=1: \quad 2^n - 1 = \sum_{s=1}^{n} {{n}\choose{s}} $$

How many ways can s letters drawn from n be arranged?

$$s! {{n}\choose{s}}$$

Consider the lottery composed of (n) numbers, of which (r) are drawn at each draw:
What is the probability of drawing s given numbers \(Y=(y_1, \cdots y_s)\) in one draw \(X=(x_1, \cdots, x_r)\)?
$$P(Y \subseteq X)=\frac{{{n-s}\choose{r-s}}}{{{n}\choose{r}}}=\frac{{{r}\choose{s}}}{{{n}\choose{s}}}$$
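A quick check of this lottery formula (hypothetical n, r, s) by brute-force enumeration against the two closed forms:

```python
from itertools import combinations
from math import comb

n, r, s = 10, 4, 2                 # hypothetical lottery: 10 numbers, 4 drawn, 2 given numbers
given = set(range(s))              # the s given numbers (any fixed choice works by symmetry)

draws = list(combinations(range(n), r))
p_enum = sum(given <= set(d) for d in draws) / len(draws)

p_form1 = comb(n - s, r - s) / comb(n, r)
p_form2 = comb(r, s) / comb(n, s)
print(p_enum, p_form1, p_form2)    # all three agree
```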

Consider the Urn \(\Omega\) with (a) white balls and (b) black balls, with replacement. Let \(A_n = \{\omega_1, \cdots, \omega_n\}\) be n draws. Let \(\mu_w(A)\) be the number of white balls and \(\mu_b(A)\) the number of black balls. What is the probability of (m) white balls and (n-m) black balls being drawn?

$$P\bigg(\mu_w(A_n) = m \& \mu_b(A_n)=n-m\bigg)=P^n_m=?$$
\((a+b)^n\) is the number of all the cases possible in (n) draws. In the expansion of this binomial, \({{n}\choose{m}}b^{n-m}a^m\) expresses the number of cases in which (m) white balls and (n-m) black balls may be drawn. Thus,

$$P^n_m=\frac{{{n}\choose{m}}b^{n-m}a^m}{(a+b)^n} $$

Letting \(p=P(\mu_w(A_1)=1)=\frac{a}{a+b}\) be the probability of drawing a white ball in a single draw and \(q=P(\mu_b(A_1)=1)=\frac{b}{a+b}\) be the probability of drawing a black ball in a single draw,

$$P^n_m={{n}\choose{m}}q^{n-m}p^m$$

$$\Delta P^n_{m}=\frac{P^n_{m+1}}{P^n_{m}}=\frac{(n-m)p}{(m+1)q}$$

This is an ordinary finite difference equation:

$${\Delta}^r P^n_{m}= \frac{P^n_{m+r}}{P^n_{m}}=\frac{p^{r}}{q^{r}}\prod_{i=0}^{r-1}\frac{n-m-i}{m+i+1}$$
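A small check (hypothetical n, p) that the closed-form binomial probabilities satisfy the stated ratio recurrence:

```python
from math import comb

n, p = 12, 0.3                     # hypothetical number of draws and white-ball probability
q = 1 - p
P = [comb(n, m) * q ** (n - m) * p ** m for m in range(n + 1)]

# Ratio form of the finite difference equation: P[m+1]/P[m] = (n-m) p / ((m+1) q).
for m in range(n):
    assert abs(P[m + 1] / P[m] - (n - m) * p / ((m + 1) * q)) < 1e-12

print(sum(P))                      # the P^n_m sum to 1
```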

Three players of supposed equal ability play together on the following conditions: one of the first two players who beats his adversary plays the third, and if he beats him the game is finished. If he is beaten, the victor plays against the second until one of the players has defeated consecutively the two others, which ends the game. The probability is demanded that the game will be finished in a certain number (n) of plays. Let us find the probability that it will end precisely at the (n)th play. For that, the player who wins ought to enter the game at play (n-1) and win it, and win again at the following play. But if, in place of winning play (n-1), he should be beaten by his adversary who had just beaten the other player, the game would end at this play. Thus the probability that one of the players will enter the game at play (n-1) and will win it is equal to the probability that the game will end precisely with this play; and as this player ought to win the following play in order that the game may be finished at the (n)th play, the probability of this last case will be only one half of the preceding one.

(p.29-30)

Let \(E\) be the random variable of the number of plays it takes for the game to finish.
$$\mathbb{P}(E=n)=?$$
Let \(G_k=(p_1,p_2)\) be the random variable of the two players \((p_1,p_2)\) playing in game (k). Let \(W_k=p_0\) be the random variable of the winning player, \(p_0\), of game (k).
$$\mathbb{P}(E=n-1)=\mathbb{P}\big(p \in G_{n-1} \ \& \ W_{n-1}=p\big), \ \text{where} \ p \ \text{is the player entering at play} \ (n-1)$$
$$\mathbb{P}(E=n)=\frac{1}{2}\mathbb{P}(E=n-1)$$
This is an ordinary finite difference equation for a recurrent process. To solve this probability, we notice the game cannot end sooner than the 2nd play and extend the iterative expression recursively,
$$\mathbb{P}(E=n)=\bigg(\frac{1}{2}\bigg)^{n-2}\mathbb{P}(E=2)$$
\(\mathbb{P}(E=2)\) is the probability that one of the first two players who has beaten his adversary should beat at the second play the third player, which is \(\frac{1}{2}\). Thus,
$$\mathbb{P}(E=n)=\bigg(\frac{1}{2}\bigg)^{n-1}$$
The probability the game will end at latest the (n) th play is the sum of these,
$$\mathbb{P}(E\leq n)=\sum_{k=2}^n \bigg(\frac{1}{2}\bigg)^{k-1} = 1 – \bigg(\frac{1}{2}\bigg)^{n-1}$$

(p.31)
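A minimal simulation sketch of the three-player game (equal abilities, each play decided by a fair coin), checking \(\mathbb{P}(E\leq n)=1-(1/2)^{n-1}\):

```python
import random

random.seed(4)

def game_length():
    """Play the three-player game until someone beats the other two consecutively;
    return the number of plays."""
    in_game, waiting = [0, 1], 2
    streak_holder, streak, plays = None, 0, 0
    while True:
        plays += 1
        winner = random.choice(in_game)            # equal ability: each play is a coin flip
        loser = in_game[0] if in_game[1] == winner else in_game[1]
        streak = streak + 1 if winner == streak_holder else 1
        streak_holder = winner
        if streak == 2:                            # two consecutive wins ends the game
            return plays
        in_game, waiting = [winner, waiting], loser

n_trials, n = 200_000, 5
lengths = [game_length() for _ in range(n_trials)]
print(sum(l <= n for l in lengths) / n_trials, 1 - 0.5 ** (n - 1))    # both close to 0.9375
```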

3. Appendix: The Calculus of Generating Functions

In general, we can define the ordinary polynomial finite difference equation. For a particular event, (E), its probability density function over internal time-steps (n) is given by the distribution \(f(n)=\mathbb{P}(E=n)\). The base case \((I_0)\) of the inductive definition is known for the lowest time-step, \(n_0\), as \(f(n_0)=c\), while the iterative step \((I^+)\) is constructed as a polynomial function \(\mathcal{P}(x)=\sum_i a_i x^i\) on the difference step of one time-unit:


$$I^+: f(n)=\mathcal{P}(f(n-1))$$


$$\rightarrow f(n)=\underbrace{\mathcal{P}(\cdots \mathcal{P}(}_{n}f(0))\cdots)$$

$$\mathcal{D}(f(n))=f'(n)=\mathcal{D}(\mathcal{P})(f(n-1))\,f'(n-1)=\prod_{k=n_0}^{n-1}\mathcal{D}(\mathcal{P})(f(k)) $$
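A minimal sketch of such an iteration (hypothetical polynomial coefficients and base case, for illustration), unrolling f(n) = P(f(n−1)) from f(0):

```python
# Hypothetical polynomial P(x) = sum_i a_i x^i and base case f(0) = c.
coeffs = [0.25, 0.5]            # a_0, a_1  ->  P(x) = 0.25 + 0.5 x
c = 1.0

def P(x):
    return sum(a * x ** i for i, a in enumerate(coeffs))

def f(n):
    """f(n) = P(f(n-1)) with f(0) = c, i.e. the n-fold application of P."""
    value = c
    for _ in range(n):
        value = P(value)
    return value

print([round(f(n), 4) for n in range(6)])   # converges toward the fixed point P(x)=x, here 0.5
```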

The Functional-Communicativity of COVID-19


An Astro-Socio-Biological Analysis

A multi-scale entropic analysis of COVID-19 is developed on the micro-biological, meso-social, & macro-astrological levels to model the accumulation of errors during processes of self-replication within immune response, communicative functionality within social mitigation strategies, and mutation/genesis within radiation conditioning from solar-cosmic cycles.

  1. Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity
  2. Micro-Scale: Computational Biology of RNA Sequence
  3. Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction
  4. Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles
  5. References

Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity

The genesis of SARS-CoV-2, with its internal code of a precise self-check mechanism on reducing errors in RNA replication and external attributes of ACE2 binding proteins, is an entropy-minimizing solution to the highly functionally communicative interconnected human societies embedded within high-entropic geophysical conditions of higher cosmic radiation atmospheric penetration with radioactive C-14 residues due to the present solar-cycle modulation. This background condition explains the mutation differences between SARS-1 & SARS-2, where the latter has a more persistent environment of C-14 to evolve steadily into stable forms. The counter-measures against the spread of the virus, either as therapeutics, vaccines, or social mitigation strategies, are thus disruptions (entropy-inducing) to these evolved entropy-reducing mechanisms within the intra-host replication and inter-host communicability processes.

The point of origin for understanding the spread of the virus in a society or subdivision is through its communicative functionality, which may be expressed as a complex variable of the real functionality of the social system and the imaginary communicativity of its lifeworld, the two attributes which are diminished by the shut-down and social distance measures. Conditions of high communicativity, such as New York City, will induce mutations with greater ACE2 binding proteins, i.e. communicability, as the virus adapts to its environment, while one of high functionality will induce error-minimization in replication. These two micro & meso scale processes of replication and communicability (i.e. intra- & inter- host propagation) can be viewed together from the thermodynamic-informatic perspective of the viral RNA code as a message – refinement and transmission – itself initialized (‘transcribed’) by the macro conditions of the Earth’s spatio-temporality (i.e. gravitational fluctuation). This message is induced, altered, amplified spatially, & temporalized by the entropic functional-communicative qualities of its environment that it essentially describes inversely.

Micro-Scale: Computational Biology of RNA Sequence

As with other viruses of its CoV virus family, the RNA of COVID-19 encodes a self-check on the duplication of its code, thereby ensuring it is copied with little error. With little replication-error, the virus can be replicated many more rounds (exponential factor) without the degeneration that ultimately stops the replication process. Compare an example of \(t=3\) rounds for a normal virus with \(t=7\) for a Coronavirus under simple exponential replication, with viral count \(C\) by replication rounds \(t\) given as \(C(t)=e^{t}\): \(C(3)=e^3\approx 20.1\) vs. \(C(7)=e^7\approx 1096.6\).

Let us consider an example where a single RNA can create N-1 copies of itself before its code is degenerated beyond even replicative encoding, i.e. the binding segment code directing RNA replicase to replicate. The original RNA code is given by \(\mathcal{N}_0\), with each subsequent code given by \(\mathcal{N}_t\), where t is the number of times of replication. Thus, \(t\) counts the internal time of the “life” of the virus, as its number of times of self-replication. The relevant length of a sequence can be given as the number of base-pairs that will be replicated in the next round of replication. This will be expressed as the zero-order distance metric, \(\mu^0(\mathcal{N}_t)=|\mathcal{N}_t|\).

The errors in the replicative process at time \(t\) will be given by \(DISCR\_ERR(\mathcal{N}_t)\), for “discrete error”, and will be a function of \(t\), given as \(\epsilon(t)\). Clearly, \(|\mathcal{N}_t| = |\mathcal{N}_{t+1}| + \epsilon(t)\). In all likelihood, \(\epsilon(t)\) is a decreasing function, since with each round of replication the errors will decrease the number of copiable base-pairs; yet with an exceptionally random alteration of a stable insertion, the error could technically be negative. There are two types of these zero-order errors: \(\epsilon^-\), the number of pre-programmed deletions occurring due to the need for a “zero-length” sequence segment to which the RNA polymerase binds and is thereby directed to replicate “what is to its right in the given reading orientation,” and \(\epsilon^+\), the non-determined erroneous alterations, either as deletions, changes, or insertions. The total number of errors at any single time will be their sum, \(\epsilon(t)=\epsilon^-(t)+\epsilon^+(t)\). A more useful error-metric may be the proportional error, \(PROP\_ERR(\mathcal{N}_t)\), since it is likely to be approximately constant across time; it will be given by the time-function \(\epsilon'(t)\), and can similarly be broken into determined (\(-\)) and non-determined (\(+\)) errors as \(\epsilon'(t)={\epsilon'}^-(t)+{\epsilon'}^+(t)\). Expressed in proportion to the (zero-order) length of the RNA sequence, $$\epsilon'(t)=1-\frac{|\mathcal{N}_{t+1}|}{|\mathcal{N}_t|}=\frac{\epsilon(t)}{|\mathcal{N}_t|}.$$

The “length” (of internal time) of an RNA code, \(\mathcal{N}_t\), in terms of the number of times it itself may be copied before it is degenerated beyond replication, is given as the first-order “distance” metric \(\mu^1(\mathcal{N}_t)=N(\mathcal{N}_t)\). For our generalized example, \(\mu^1(\mathcal{N}_0)=N(\mathcal{N}_0)=N\). This may be expressed as the sum of all errors $$N=\sum_{t=0}^{\infty}\epsilon(t)=\sum_{t=0}^{t_{max}}\epsilon(t)=\sum_{t=0}^{N}\epsilon(t).$$

We are interested in the “length” (of internal space) of the RNA code, second-order distance metric, \(\mu^2(\mathcal{N}_0)\), as the number of copies it can make of itself, including the original copy in the counting and the children of all children viruses. This is the micro-factor of self-limitation of the virus, to be compared ultimately to the meso-factor of aerosolized half-life and the macro-factor of survival on surfaces.
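As a rough illustration of the three metrics \(\mu^0\), \(\mu^1\), \(\mu^2\), a minimal simulation sketch follows; the constant proportional error rate, the minimum replicable length, and the branching rule for counting descendants are assumptions made only for this sketch, not values taken from the text.

```python
# Illustrative sketch of the zero-, first-, and second-order metrics.
# Assumptions (not from the source): a constant proportional error rate
# eps_prop per round, and a minimum replicable length L_min below which
# the binding segment is degraded and replication stops.

def mu0(length):
    """Zero-order metric: number of base pairs copied in the next round."""
    return length

def mu1(length, eps_prop=0.02, L_min=200):
    """First-order metric: rounds the code can be copied before degradation."""
    rounds = 0
    while length >= L_min:
        length -= max(1, int(eps_prop * length))   # epsilon(t) ~ eps_prop * |N_t|
        rounds += 1
    return rounds

def mu2(remaining_rounds):
    """Second-order metric: total copies, counting the original and all
    descendants, under the assumption that a copy made at round t can
    itself be copied for the parent's remaining rounds minus one."""
    if remaining_rounds <= 0:
        return 1
    return 1 + sum(mu2(k) for k in range(remaining_rounds))

if __name__ == "__main__":
    L0 = 30_000                                    # order of a coronavirus genome length
    N = mu1(L0)
    print("mu^0 =", mu0(L0), "bp")
    print("mu^1 =", N, "rounds")
    print("mu^2 =", mu2(min(N, 12)), "copies (capped at 12 rounds for the demo)")
```

Under the branching assumption used here, \(\mu^2\) grows as \(2^{\mu^1}\), which makes explicit why small changes in replication fidelity compound exponentially in the total viral count.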

These errors in replication, compounded by radiation exposure in the atmosphere, will add up to mutations of the virus, which by natural selection in the corporeal, social-communicative, and environmental (i.e. surfaces and aerosolized forms) levels has produced stable new forms of the virus.

Comparing SARS-1 & SARS-2, the former had a higher mortality rate and the latter has a higher transmission rate. There is certainly an inverse relationship between mortality and transmission on the meso-level as fatality prevents transmission, but there may also be inherent differences at the micro-level in methods of replication leading to these different outcomes. Mortality is due to intra-host replication exponentiation – whereby there are so many copies made that the containing cells burst – while communicability is due to the inter-host stability of the RNA code in the air and organic surfaces where it is subject to organic reactions and cosmic radiation.

Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction

We can apply the theory of communicativity to studying the natural pathology (disease) and the social pathology (violence) of human society through René Girard’s Theory of Mimetics [VIOL-SAC]. Viewing a virus and a dysfunctional social system under a single conceptual unity (Mimetics) of a communicative pathology, the former ‘spreads’ by communication while the latter is the system of communication. Yet, different types of communication systems can lead to larger outbreaks of a communicable disease. Thus, the system of communication is the condition for the health outcomes of communicable disease. Beyond merely ‘viruses,’ a dysfunctional communication system unable to coordinate actions to distribute resources effectively within a population can cause other pathologies such as violence and poverty. From this integrated perspective, these ‘social problems’ may themselves be viewed as communicable diseases in the sense of being caused, rather than ‘spread,’ by faulty systems of communication. Since violence and poverty are health concerns in themselves, such a re-categorization is certainly permissible. The difference in these communicable diseases of micro and macro levels is that a virus is a replication script read and enacted by human polymerase in a cell’s biology while a dysfunctional social system is a replication script read and enacted by human officials in a society. We can also thereby view health through the more generalized political-economy lens as the quantity of life a person has, beyond merely the isolated corporeal body but also including the action-potentialities of the person as the security from harm and capacity to use resources (i.e. via money) for one’s own survival. It is clear that ‘money’ should be the metric of this ‘bio-quantification’ in the sense that someone with more money can create healthier conditions for life and even seek better treatment, and similarly a sick person (i.e. deprived of life) should be given more social resources (i.e. money) to reduce the harm. Yet, the economic system fails to accurately price and distribute life-resources due to its nodal premise prescribed by capitalism whereby individuals, and by extension their property resources, are not social (as in distributively shared), but rather isolated & alienated for individual private consumption.

This critique of capitalism was first made by Karl Marx in advocacy of socialism, as an ontological critique of the lack of recognition of the social being of human existence in the emerging economic sciences of liberalism. In the 17th century, Locke conceived of the public good as based upon individual rights to freedom, thereby endowing the alienated (i.e. private) nature with the economic right to life. This moral reasoning was based on the theological premise that the capacity for reason was not a public-communicative process, but rather a private faculty based only upon an individual’s relationship with God. Today we may understand Marx’s critique of Lockean liberalism from the deep ecology perspective that sociality is an ontological premise to biological analysis due to both the relationship of an organism grouping to its environment and the in-group self-coordinating mechanism with its own type. Both of these aspects of a biological group, in-group relationships (\(H^+(G):G \rightarrow G\)) and out-group relationships (\(H^-(G)=\{H_-^-(G): G^c \rightarrow G,\ H_+^-(G): G \rightarrow G^c\}\)), may be viewed as communicative properties of the group, as in how the group communicates with itself and with not-itself. In the human-capital model of economic liberalism, the group is reduced to the individual economic agent that must act alone, i.e. an interconnected system of capabilities, creating thereby an enormous complexity in any biological modeling from micro-economic premises to macro-economic outcomes. If instead we permit different levels of group analysis, where it is assumed a group distributes resources within itself, with the particular rules of group-distribution (i.e. its social system) requiring an analysis of the group at a deeper level that decomposes the group into smaller individual parts, such a multi-level model has a manageable complexity. The purpose is therefore to study Communicativity as a property of Group Action.

A group is a system of action coordination functionally interconnecting sub-groups. Each group must “act as a whole” in that the inverse branching process of coordination adds up all actions towards the fulfillment of a single highest good, the supreme value-orientation. Therefore, the representation of a group is by a tree, whose nodes are the coordination actions (intermediate groupings), edges the value produced, and leaves the elemental sub-groups “at the level of analysis”. The total society can be represented as a class system hierarchy of group orderings, with primary groups of individuals. The distribution of resources within a group follows the branching orientation (\(\sigma^-\)) from root to leaves as resources are divided up, while the coordination follows the inverse orientation (\(\sigma^+\)) from leaves to root as elemental resources are coordinated in production to produce an aggregate good.
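A minimal sketch of this tree representation follows; the Group class, the equal-split distribution rule, and the additive aggregation rule are illustrative assumptions rather than prescriptions of the model.

```python
# Sketch of a group as a coordination tree: nodes coordinate sub-groups,
# leaves are elemental sub-groups "at the level of analysis".
# The equal-split distribution and additive aggregation rules are
# illustrative choices, not taken from the text.

class Group:
    def __init__(self, name, subgroups=None, value=0.0):
        self.name = name
        self.subgroups = subgroups or []   # empty list => leaf / elemental group
        self.value = value                 # value produced at this node

    def distribute(self, resources):
        """sigma^-: divide resources from root toward the leaves."""
        if not self.subgroups:
            return {self.name: resources}
        share = resources / len(self.subgroups)
        out = {}
        for g in self.subgroups:
            out.update(g.distribute(share))
        return out

    def coordinate(self):
        """sigma^+: aggregate value from the leaves toward the root."""
        return self.value + sum(g.coordinate() for g in self.subgroups)

society = Group("society", [
    Group("city", [Group("household_A", value=1.0), Group("household_B", value=2.0)]),
    Group("countryside", [Group("farm", value=3.0)]),
])
print(society.distribute(90.0))   # {'household_A': 22.5, 'household_B': 22.5, 'farm': 45.0}
print(society.coordinate())       # 6.0
```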

In the parasite-stress theory of sociality[fincher_thornhill_2012], in-group assortative sociality arose due to the stress of parasites in order to prevent contagion. There is thus a causal equivalence between the viral scripts of replication and the social structures selected for by the virus as the optimal strategy of survival. Violence too has the same selection-capacity since existentially conflicting groups are forced to isolate to avoid the war of revenge cycles. This process is the same as the spread of communicable diseases between groups – even after supposed containment of a virus, movement of people between groups can cause additional cycles of resurgence.

Racism is an example of ineffective extrapolation of in-grouping based on non-essential categories. As a highly contagious and deadly disease, on the macro-social level COVID-19 selects for non-racist societies via natural selection, since racist societies spend too many resources organizing in-group social structure along non-essential characteristics, such as race, and thus have few reserves left to reorganize along the essential criteria selected for by the disease (i.e. segregating those at-risk). Additionally, racism prevents resource sharing between the dominant group and the racially marginalized or oppressed group, and thus limits the transfer of scientific knowledge in addition to other social-cultural resources, since what the marginalized group knows to be true is ignored.

With a complex systems approach to studying the communicability of the virus between groups (i.e. different levels of analysis) we can analyze the transmission between both persons and segregated groups (i.e. cities or states) to evaluate both social distancing and shut-down policies. A single mitigation strategy can be represented as the complex number \(\lambda = \sigma + \omega i\), where \(\sigma\) is the dysfunctionality of the social system (percent shut-down) and \(\omega\) is the periodicity of the shut-down. We can include \(s_d\) for social distance as a proportion of the natural radii given by the social density. The critical issue now is mistimed reopening policies, whereby physical communication (i.e. travel) between peaking and recovering groups may cause resurgences of the virus, which can be complicated by reactivation post-immunity and the threat of mutations producing strains resistant to future vaccines. This model thus considers the long-term perspective of social equilibrium solutions as mixed strategies between socialism and capitalism (i.e. social distancing and systemic shut-downs) to coronaviruses as a semi-permanent condition of the ecology of our time.
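As a rough sketch, a mitigation strategy can be stored directly as a complex number; the on/off schedule derived below (the system is shut for a fraction \(\sigma\) of each period of length \(1/\omega\)) is only one possible reading of the periodicity and is an assumption of this sketch.

```python
# Illustrative only: encode a mitigation strategy lambda = sigma + omega*i
# and derive a simple on/off shut-down schedule from it. The schedule rule
# (shut for a fraction sigma of each period of length 1/omega) is an
# assumption made for this sketch, not specified in the text.

def make_strategy(sigma, omega, s_d=1.0):
    """sigma: percent shut-down, omega: periodicity, s_d: social-distance factor."""
    return {"lam": complex(sigma, omega), "s_d": s_d}

def is_shut_down(strategy, t):
    """True if the system is shut at (real) time t under the assumed schedule."""
    sigma, omega = strategy["lam"].real, strategy["lam"].imag
    if omega == 0:
        return sigma >= 1.0            # no periodicity: shut only if fully closed
    phase = (t * omega) % 1.0          # position within the current period
    return phase < sigma

lockdown = make_strategy(sigma=0.4, omega=0.1, s_d=0.5)   # 40% of each 10-unit period
print([is_shut_down(lockdown, t) for t in range(0, 20)])
```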

Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles

The genesis of COR-VIR is by mutations (and likely reassortment) induced by a burst of solar flare radiation and a conditioning by cosmic radiation, each with different effects on the viral composition. Comparison with SARS-1 (outbreak immediately after a solar maximum) reveals that solar radiation (i.e. UVC) from flares & CMEs, more frequent and of higher intensity during solar maximums yet also present during minima, is responsible for the intensity (mortality rate) of the virus, while cosmic radiation, enabled by the lower count of sunspots that decreases the ozone in the atmosphere normally shielding the Earth’s surface from radiation, gives the virus a longer duration within and on organic matter (SARS-2), likely through mutation by radioactive C-14 created by cosmic radiation interacting with atmospheric Nitrogen. The increased organic surface radioactivity is compounded by the ozone-reduction due to \(N_2\) emissions concurrent with “Global Warming.” The recent appearance of all coronaviruses in the last 5 solar cycles is likely due to a global minimum within a hypothetical longer cosmic-solar cycle (~25 solar cycles) that modulates the relative sun-cycle sunspot count, and has been linked to historical pandemics. A meta-analysis has detected such a frequency of global pandemics over the last millennia [2017JAsBO…5..159W]. The present sun cycle, 25, beginning with a minimum coincident with the first SARS-2 case of COVID-19, has the lowest sunspot count in recorded history (i.e. double or triple minimum). Likely, this explains the genesis of the difference in duration and intensity between SARS-1 & SARS-2.

This longer solar-cosmic cycle that modulates the relative sunspot count of a solar cycle, the midpoint of which is associated with global pandemics, has recently been measured at 208 years by C-14 time-cycle-analysis, and is itself modulated by a 2,300 year cycle. These time-cycles accord with the (perhaps time-varying) Mayan Round calendar: 1 K’atun = 2 solar cycles (~20 years); 1 May = 13 K’atun (~256 years); 1 B’ak’tun = 20 K’atun (~394 years); 1 Great Cycle = 13 B’ak’tun (~5,125 years). Thus, the 208-year cycle is between 1/2 B’ak’tun (~197 years) and 1 May (~256 years, 13 K’atuns). It is likely the length of 25 sun cycles, the same as the May cycle, yet has decreased in length over the last few thousand years (perhaps along with sunspot counts). The 2,300 year cycle is ~6 B’ak’tuns (2,365 years), constituting almost half of a Great Cycle (13 B’ak’tuns). We are likely at a triple minimum in sunspot count from all 3 solar-cosmic cycles, at the start of the first K’atun (2020) of the beginning of a new Great Cycle (2012), falling in the middle of the May (associated with crises).
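The calendar conversions quoted above can be checked with a few lines of arithmetic; the Long Count unit lengths in days (1 tun = 360 days, 1 K’atun = 20 tun, 1 B’ak’tun = 20 K’atun) are the standard definitions, while the identification of 1 K’atun with roughly 2 solar cycles is the text’s own approximation.

```python
# Quick arithmetic check of the cycle lengths quoted above.
# Long Count units in days (standard): tun = 360, K'atun = 20 tun, B'ak'tun = 20 K'atun.
TROPICAL_YEAR = 365.2422

katun  = 20 * 360 / TROPICAL_YEAR           # ~19.7 years
may    = 13 * katun                         # ~256 years
baktun = 20 * katun                         # ~394 years
great  = 13 * baktun                        # ~5,125 years

print(f"K'atun       ~ {katun:7.1f} y")
print(f"May          ~ {may:7.1f} y   (text: ~256 y)")
print(f"1/2 B'ak'tun ~ {baktun/2:7.1f} y   (text: ~197 y)")
print(f"B'ak'tun     ~ {baktun:7.1f} y   (text: ~394 y)")
print(f"6 B'ak'tun   ~ {6*baktun:7.1f} y   (text: ~2,365 y)")
print(f"Great Cycle  ~ {great:7.1f} y   (text: ~5,125 y)")
```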

The entropic characterization of the pathogenesis as prolonged radioactivity – low entropic conditioning of high entropy – leads to the property of high durability on organic matter and stable mutations.

References

  1. [fincher_thornhill_2012]  Corey L. Fincher and Randy Thornhill. “Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity”. In: Behavioral and Brain Sciences 35.2 (2012), pp. 61–79. doi: 10.1017/S0140525X11000021.
  2. [VIOL-SAC]  René Girard. Violence and the Sacred. Trans. Patrick Gregory. Johns Hopkins University Press, 1977.
  3. [2017JAsBO…5..159W]  N. C. Wickramasinghe et al. “Sunspot Cycle Minima and Pandemics: The Case for Vigilance?” In: Journal of Astrobiology & Outreach 5.2, 159 (Jan. 2017), p. 159. doi: 10.4172/2332-2519.1000159.

Limits, the First Step into Calculus

The concept of a limit is the central idea that underlies calculus and is the unifying mechanism that allows differentials and integrals to be related. Calculus is used to model real-life phenomena in the language of mathematics. Anything that involves a rate of change, such as the velocity of your car (the rate of change of distance with respect to time), is found using derivatives. Limits are the basis of the derivative, which captures the instantaneous rate of change.


Definition of a Limit

The limit is the behavior of a function as we approach a certain value. Let’s start by looking at a particular function

$$f(x) = x^2 + x - 6$$

for values near 2. We can use a table of values that gets really close to 2 from values less than 2, and another that gets really close to 2 from values greater than 2.

| x (below 2) | f(x) | x (above 2) | f(x) |
|---|---|---|---|
| -2 | -4 | 6 | 36 |
| 0 | -6 | 4 | 14 |
| 1 | -4 | 3 | 6 |
| 1.5 | -2.25 | 2.5 | 2.75 |
| 1.75 | -1.1875 | 2.25 | 1.3125 |
| 1.875 | -0.609375 | 2.125 | 0.640625 |
| 1.936 | -0.315904 | 2.0625 | 0.31640625 |
| 1.968 | -0.158976 | 2.03125 | 0.15722656 |
| 1.984 | -0.079744 | 2.015625 | 0.07836914 |
| 1.992 | -0.039936 | 2.0078125 | 0.03912354 |
| 1.996 | -0.019984 | 2.00390625 | 0.01954651 |
| 1.998 | -0.009996 | 2.00195313 | 0.00976946 |
| 1.999 | -0.004999 | 2.00097656 | 0.00488375 |
From the table, we can see that as x gets really close to 2 from either direction, the value of f(x) approaches 0. This is the basic version of how we solve a limit. We express it with the English phrase “the limit of f(x) as x approaches 2 is equal to 0”.
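For readers who want to generate such a table themselves, here is a short script; the particular sample points are arbitrary choices for illustration.

```python
# Reproduce a table of values for f(x) = x^2 + x - 6 near x = 2.
def f(x):
    return x**2 + x - 6

left  = [1.9, 1.99, 1.999, 1.9999]      # approach 2 from below
right = [2.1, 2.01, 2.001, 2.0001]      # approach 2 from above

for xl, xr in zip(left, right):
    print(f"{xl:<8} {f(xl):>12.6f}   |   {xr:<8} {f(xr):>12.6f}")
```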


The Limit of a Function: Definition

We say

$$\lim_{x\rightarrow a} f(x) = L$$

and

$$\text{“The limit of f(x) as x approaches a equals L”}$$

if when we make the value of x get arbitrarily close to a, the value of f(x) gets arbitrarily close to L.


Finding Limits by “Direct Injection”

If we are searching for a limit like

$$\lim_{x\rightarrow 5} x^2+x-10$$

we can do what is called “Direct Injection” (more commonly known as direct substitution); in other words, we plug the value \(x=5\) into the function whose limit we are finding

$$\lim_{x\rightarrow 5} 5^2+5-10=20$$

then we have discovered that the limit is equal to 20.

Try this method on the following problem:

Example

Find the limit

$$\lim_{x\rightarrow 1} \frac{x-1}{x^2-1}$$

Solution

If we try direct injection we have a problem:

$$\lim_{x\rightarrow 1} \frac{1-1}{1^2-1} = \frac{0}{0}$$

The problem is that we can never divide by zero; the expression \(\frac{0}{0}\) is undefined in mathematics. We need another method to evaluate this limit. We are allowed to manipulate the function algebraically as long as we do not break any rules of algebra. Notice that the denominator is factorable

$$\lim_{x\rightarrow 1}\frac{x-1}{x^2-1} = \lim_{x\rightarrow 1} \frac{x-1}{(x-1)(x+1)}$$

Now we can see that \((x-1)\) appears in both the numerator and the denominator. Since the limit only depends on values of x near 1, not on \(x=1\) itself, we may cancel this common factor and simplify the expression to

$$\lim_{x\rightarrow 1}\frac{1}{x+1}$$

And now if we do the direct injection of x=1 we get

$$\lim_{x\rightarrow 1} \frac{1}{x+1} = \frac{1}{2}$$

And we have discovered the limit!
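If a computer algebra system is available, the answer can be double-checked in a couple of lines (SymPy is used here as one option).

```python
# Double-check the worked example with SymPy.
from sympy import symbols, limit

x = symbols('x')
print(limit((x - 1) / (x**2 - 1), x, 1))   # prints 1/2
```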


Conclusion

The limit is the behavior of a function as the variable approaches a specific number. Limits can be found in numerous different ways; this post has shown you two specific methods for discovering limits:

  1. Table of values: Take values getting really close to the value you are searching for and measure the behavior of f(x).
  2. Direct Injection: Try plugging in the value you are searching for directly into f(x), and if it fails, try manipulating the equation using standard algebra techniques.

Check back soon for more information on Limits and Calculus in general!
