## Normal Communication: Distribution & Codes

Consider the modeling of a communication system. A message is sent through this system, arriving at a state of the system at each time. The message is thus the temporal reconfiguration chain of the system as it takes on different states from its possibility set $$\Omega$$. Due to noise in the message-propagation channel, i.e. the necessity of interpretation within any deciphering of the meaning of the message from its natural ambiguity, we can only know the probability of the system's state at a certain time; the full interpretation of the message is thus a sequence of probability distributions, giving the probability of the system having a certain state at a certain time. Performing this operation discretely in finite time, we can only sample the code as a sequence of states over a number of trial-repetitions ($$N$$), and take the frequency of each state at each time to be its approximate probability. We consider this to be the empirical interpretation of the message. With $$b$$ possible states to our system, let $$\Sigma_b$$ be the symbolic code-space of all possible message sequences, including bi-directionally infinite ones, i.e. where the starting and finishing times are not known. A given empirically sampled message sequence is thus $$\tilde{x}^l=(x^l_k)_{k=-\infty}^{\infty}, \ x^l_k \in \{0, \cdots, b-1\}$$, where the empirical interpretation is given by the frequency distributions $$\tilde{f}_{x(i)}(k)=\frac{1}{N}\sum_{l=1}^{N} \mathbf{1}_{x^l_k=i}$$.
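The empirical interpretation above can be sketched numerically; a minimal simulation, assuming a hypothetical noisy channel over $$b=3$$ states and a made-up underlying message (all parameters are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

b = 3          # number of possible system states
T = 5          # observed finite window of the message
N = 10_000     # number of trial-repetitions of the sampling

# Hypothetical "true" message: state k mod b at time k, corrupted by channel
# noise that substitutes a uniformly random state with probability p_noise.
p_noise = 0.2
true_message = np.arange(T) % b
samples = np.tile(true_message, (N, 1))
noise_mask = rng.random((N, T)) < p_noise
samples[noise_mask] = rng.integers(0, b, noise_mask.sum())

# Empirical interpretation: frequency of each state i at each time k,
# f~_{x(i)}(k) = (1/N) * sum_l 1{x^l_k = i}
freq = np.stack([(samples == i).mean(axis=0) for i in range(b)])
assert np.allclose(freq.sum(axis=0), 1.0)  # a probability distribution at each time
```

The rows of `freq` approximate the sequence of per-time distributions that constitutes the interpretation set of the message.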

Stationary Distribution: what are the initial distribution conditions such that the distribution over states does not change with time? Let $$f$$ be the time-iteration operation of the system on its space $$X$$. While the times are counted $$t\in \mathbb{N}$$, the length of each time step $$t_i \ \rightarrow \ t_{i+1}$$ is given by $$\Delta t_i$$, such that the real time $$T$$ is given by $$T(t_i)=\sum_{k=0}^{i-1} \Delta t_k$$. The system at a given time is given by the probability distribution over the different states, which are the macro partitions $$\omega \in \Omega$$ of the micro states $$x \in \omega$$, as thus $$F_t(\omega_k)=f^t(x \in \omega)=\mathbb{P}(X_t=k)$$. $$F_t(\omega_i)=F(t)[i]$$ is the cumulative time distribution of the class-partitioned state-space distributions. The stationary distribution is such that $$F(0)=F(t)$$ for all $$t$$.
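For a finite-state system the stationary condition $$F(0)=F(t)$$ can be solved directly; a minimal sketch, assuming a hypothetical 3-state transition matrix (the numbers are illustrative):

```python
import numpy as np

# Hypothetical transition matrix P, with P[i, j] = P(X_{t+1}=j | X_t=i).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# The stationary distribution pi satisfies pi = pi @ P, i.e. F(0) = F(t):
# take the left eigenvector of P for eigenvalue 1 and normalize it.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

assert np.allclose(pi @ P, pi)                               # unchanged by one step
assert np.allclose(pi @ np.linalg.matrix_power(P, 50), pi)   # and by many steps
```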

Consider the string of numbers $$(x_k)_{k=T_0}^{T_N}$$ from $$N$$ iterations of an experiment. What does it mean for the underlying numbers to be normally distributed? It means that the experiment is independent of time: the distribution stays the same at each time interval. Given a time-dependent process, the averages of these empirical measurement numbers will nevertheless converge to normality. Thus, normality is the stationary distribution of the averaging process. For random time-lengths, it averages all the values in that time-interval, without remembering the length of time or, equivalently, the number of values; this is time homogeneity in the Markov sense. Consider a system that changes states over time between $$b$$ different state-indexes $$\{0, \cdots, b-1\}$$. When the system-state appears as 0, we perform an average of the previous values between its present time and the previous 0 occurrence. Thus the variable 0, although an intrinsic part of the object of measurement, is in fact a property of the subject performing the measurement, as when he or she decides to stop the measurement process and perform an averaging of the results. We call such a variable the mimetic basis when its objectivity depends upon a subjectivity in the action of measurement, and the mimetic dynamics are given by the relationship between the occurrence of a 0 and the other states. Here 0 is the stopping time, at which a string of results is averaged before continuing. Let $$T_0^{(k)}=\inf\{m > T_0^{(k-1)}: X_m=0\}$$ be the time of the $$k$$th occurrence of 0, with inter-occurrence length $$\tau_0^{(k)}=T_0^{(k)}-T_0^{(k-1)}$$, and let $$\hat{f}(i)$$ give the actual measured value from the $$i$$th state of the system. In reality, the system's time-inducing function $$f$$ has resulted in a particular value $$f(x) \in X$$ before it was partitioned into $$\Omega$$ via $$P$$, although here the time-inverting (dys)function $$\hat{f}$$ determines this original pre-value from the result. Often the empirical $$\tilde{f}$$ is used from the average of the state's values, i.e. 
$$\tilde{f}_x^{(k)}(i)=\frac{1}{M}\sum_{j=1}^{M} f^{n_j}(x)$$ such that $$x, f^{n_j}(x)\in \omega_i$$; $$f^l(x) \notin \omega_i$$ for $$n_{j-1}<l<n_{j}$$; $$M=N_{T_0^{(k)}}(i)$$; and $$N_{n}(i)=|\{m: X_m=i, m\leq n \}|$$, which thus takes the average value from an M-length self-communication string for a particular state. $$\{\tilde{S}_k(x)= \frac{1}{\tau_0^{(k)}-1}\sum_{i=1}^{\tau_0^{(k)}}\tilde{f}_x^{(k)}(X_{T_0^{(k-1)}+i})\}_{k=1}^{N}$$ & $$\{S_k(x)= \frac{1}{\tau_0^{(k)}-1}\sum_{i=1}^{\tau_0^{(k)}} f^{T_0^{(k-1)}+i}(x) \}_{k=1}^{N}$$ approach normal distributions as $$N$$ increases if state 0 is independent of the others.
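The averaging-at-stopping-times construction can be simulated; a sketch assuming i.i.d. uniform states (so that state 0 is independent of the others) and a hypothetical measured value $$\hat{f}(i)=i$$:

```python
import numpy as np

rng = np.random.default_rng(1)

b = 4
n_steps = 200_000
states = rng.integers(0, b, n_steps)   # i.i.d. states: 0 independent of the rest
values = states.astype(float)          # hypothetical measurement f-hat(i) = i

# Each visit to state 0 is a stopping time: average the values observed since
# the previous 0, then continue (the S_k of the text).
zeros = np.flatnonzero(states == 0)
segment_means = []
prev = -1
for z in zeros:
    if z - prev > 1:                   # non-empty segment between two 0s
        segment_means.append(values[prev + 1 : z].mean())
    prev = z
segment_means = np.asarray(segment_means)

# Segment entries are uniform on {1, 2, 3} (a 0 ends the segment), so the
# collection of segment averages centers on 2 and tends toward normality.
assert abs(segment_means.mean() - 2.0) < 0.05
```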

## The Issue of The Datum

Data, as finite, can never be merely fit without presupposition. The theory of the data, as what it is, is the presupposition that discloses the data in the first place through the act of measurement. As independent and identically distributed (i.i.d.) measurements, there is no temporality to the measurement activities in serial, so the ordering of the samples is not relevant. But this means that there is no temporality to the disclosure of the object at hand, preventing one measurement from being distinguished from another: they are either all simultaneous or uncomparable. Thus, i.i.d. random variables (measurement resultants) as a whole describe the different time-invariant superpositions of the system in question, since at the single null time of the serial measurements all the sample-values were found together 'at once' or 'in a unity.' To 'force' an order on the data by some random indexing is an unnecessary addition of data to our data-sampling process, and thus an analysis that requires an ordering will be extraneous to the matter at hand. Thus, data as 'non-hypothesized' describes a state-system 'broken' in its spatio-temporality, unable to reveal itself as itself in unity, but rather as several (many) different states $$\omega_i$$ all occurring with a minor existence ($$0<\mathbb{P}(\omega_i \in \Omega)<1$$). In the schemata of Heidegger (Being & Time), such things are present-at-hand in that they have been removed from the interlocking chains of signification from-which and for-which they exist through participation in the world as ready-to-hand; their existence is in question.

## The Mimetics of Meaning

The meaning of a datum is its intelligibility within an interpretive context of signification. An Interpretation gives a specificity to its underlying distribution. A rational Interpretation of data gives a rational structure to its conditionality. In the circular process of interpretation, an assumption is made from which to understand the datum, while in the branching process, these assumptions are hierarchically decomposed.

## Statistics as The Logic of Science

The question of science itself has never been its particular object of inquiry but its existential nature, in its possibility and thereby the nature of its actuality. Science is power, and thus abstracts itself as the desired meta-good, although it is always itself about particularities as an ever-finer branching process. Although a philosophic question, the 'question of science' is inherently a political one, as it is the highest good desired by society, its population, and its government. To make sense of science mathematically-numerically, as statistics claims to, it is the scientific process itself that must be understood through probability theory as The Logic of Science (Jaynes).

## Linguistic Analysis of the Invariants of Science: The Laws of Nature

The theory of science, as the proof of its validity in universality, must consider the practice of science, as the negating particularity. The symbolic language of science, within which its practice and results are embedded, necessarily negates its own particularity as well, so as to represent a structure universally. Science, in the strict sense of having already achieved the goal of universality, is de-linguistified. While mathematics, in its extra-linguistic nature, often has the appearance of universal de-linguistification, such is only a semblance. The numbers of mathematics always can refer to things, and in the particular basis of their conceptual context always do. The non-numeric symbols of mathematics too represented words before short-hand gave them a distilled symbolic life. The de-linguistified nature of the extra-linguistic property of mathematics is that, to count as mathematics, the symbols must themselves represent universal things. Thus, all true mathematical statements may represent scientific phenomena, but the context and work of this referencing is not trivial and is sometimes the entirety of the scientific labor. The tense of science, as the time-space of the activity of its being, is the tensor, which is the extra-linguistic meta-grammar of null-time, and thus of any and all times.

## The Event Horizon of Discovery: The Dynamics between an Observer & a Black Hole

The consciousness who writes or reads science, and thereby reports or performs the described tensor as an action of experimentation or validation, is the transcendental consciousness. Although science is real, it is only a horizon. The question is thus of its nature and existence at this horizon. What is knowable of science is thereby known as 'the event horizon,' as that which has appeared already, beyond which is merely a 'black hole' as what has not yet revealed itself. There is always a not-yet to temporality, and so such a black hole can always at least be found as all that of science which has not and cannot be revealed, since within the very notion of science is a negation of withdrawal (non-appearance) as the condition of its own universality (negating its particularity). Beginning here with the null-space of black holes, the physical universe (at least in its negative gravitational entities) has a natural extra-language, at least for the negative linguistic operation of signification whereby what is not known is the 'object' of reference. In this cosmological interpretation of subjectivity within the objectivity of physical space-time, we thus come to the result of General Relativity that the existence of a black hole is not independent of the observer, and in fact is only an element in the Null-Set, or negation, of the observer. To 'observe' a black hole is to point to and outline something of which one does not know. If one 'knew' what it was positively, then it would not be 'black' in the sense of not emitting light within the reference frame (space-time curvature) of the observer. That one cannot see something, as receive photons reflecting space-time measurements, is not a property of the object but rather of the observer in his or her subjective activity of observation, since to be at all must mean there is some perspective from which it can be seen.
As the Negation of the objectivity of an observer, subjectivity is the negative gravitational anti-substance of black holes. Subjectivity, as what is not known by consciousness, observes the boundaries of an aspect (a negative element) of itself in the physical measurement of an 'event horizon.'

These invariants of nature, as the conditions of its space-time, are the laws of dynamics in natural science. At the limit of observation we find the basis of the conditionality of the observation, and thus its existence as an observer. From the perspective of absolute science, within the horizon of universality (i.e. the itself-as-not-itself of the black hole, or Pure Subjectivity), the space-time of the activity of observation (i.e. the labor of science) is a time-space as the hyperbolic negative geometry of conditioning (the itself of an unconditionality). What is a positive element of the bio-physical contextual condition of life, from which science takes place, for the observer is a negative aspect from the perspective of transcendental consciousness (i.e. science) as the limitation of the observation. Within the Husserlian Phenomenology and Hilbertian Geometry of early 20th-century Germany, from which Einstein's theory arose, a black hole is therefore a Transcendental Ego as the absolute measurement point. Our Solar System is conditioned in its space-time geometry by the Milky Way galaxy it is within, which is conditioned by the black hole Sagittarius A* (Sgr A*). Therefore, the unconditionality of our solar space-time (hence its bio-kinetic features) is an unknown of space-time possibilities, enveloped in the event horizon of Sgr A*. What is inverse to our place (i.e. space-time) of observation will naturally only exist as a negativity, as what cannot be seen.

## Classical Origins of The Random Variable as The Unknown: Levels of Analysis

Strictly speaking, within the Chinese Cosmological Algebra of four variables ($$\mu$$, X, Y, Z), this first variable of primary Unknowing is represented by $$X$$, or Tiān (天), 'sky,' as that which conditions the arc of the sky, i.e. 'the heavens' or the space of our temporal dwelling 'in the earth.' We can say thus that $$X=\text{Sgr A*}$$ is the largest and most relevant primary unknown for solarized galactic life. While of course X may represent anything, in the total cosmological nature of science, i.e. all that Humanity does not yet know and is conditioned by, it appears most relevantly and holistically as Sgr A*. It can thus be said that all unknowns ($$x$$) in our space-time of observation are within 'the great unknown' ($$X$$) of Sgr A*, as thus $$x \in X$$, or $$x \mathcal{A} X$$ for the negative aspectual ($$\mathcal{A}$$) relationship 'x is an aspect of X.' These are the relevant, and most general (i.e. universal), invariants to our existence of observation. They are the relative absolutes of, from, and for science. Within more practical scientific judgements from a cosmological perspective, the relevant aspects of variable unknowns are the planets within our solar system as conditioning the solar life of Earth. The Earthly unknowns are the second variable Y, or Dì (地), 'earth.' They are the unknowns that condition the Earth, or life, as determining the changes in climate through their cyclical dynamics. Finally, the last unknown of conditionals, Z, refers to people, Rén (人), 'man,' as what conditions their actions. X is the macro unknown (conditionality) of the gravity of 'the heavens,' Y the meso unknown of biological life in and on Earth, and Z the micro unknown of psychology as quantum phenomena. These unknowns are the subjective conditions of observation. Finally, the fourth variable is the 'object,' Wù (物), the $$\mu$$ of measurement.
This last quality is the only *real* value in the sense of an objective measurement of reality, while the others are imaginary in the sense that their real values are not known, and cannot be within the reference of observation, since they are its own conditions of measurement within "the heavens, the earth, and the person" (Bréard, p. 82).

In the quaternion tradition of Hamilton, ($$\mu$$, X, Y, Z) correspond to the quaternion components ($$\mu$$, i, j, k). Since the real values of X, Y, Z in the scientific sense cannot be known truly and thus must always remain unknowns, they are treated as imaginary numbers ($$i=\sqrt{-1}$$), with their 'values' merely coefficients of the quaternion units $$i, j, k$$. These quaternions are derived as quotients of vectors, as thus the unit orientations of measurement's subjectivity, themselves representing the space-time. We often approximate this with the Cartesian X, Y, Z of three independent directions as vectors, yet to do so is to assume Euclidean Geometry as independence.
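The quaternion relations invoked here can be checked concretely; a minimal sketch of the Hamilton product on tuples $$(w, x, y, z) = w + xi + yj + zk$$, verifying the defining identities $$i^2=j^2=k^2=ijk=-1$$ and the non-commutativity that distinguishes the quaternion units from three independent Cartesian axes:

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
neg_one = (-1, 0, 0, 0)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == neg_one  # i^2 = j^2 = k^2 = -1
assert qmul(qmul(i, j), k) == neg_one                     # ijk = -1
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)    # ij = k, ji = -k
```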

## References

[1] E.T. Jaynes. "Probability in Quantum Theory". In: Complexity, Entropy and the Physics of Information. Ed. by W. H. Zurek. Addison-Wesley Publishing Co., 1990.

[2] Andrea Bréard. Nine Chapters on Mathematical Modernity: Essays on the Global Historical Entanglements of the Science of Numbers in China. Springer, 2019.

## The Derivation of the Normal Distribution

Abstract: The Internal Space-Time Geometry of Experiment, as the Distribution of Measurement InterActions, is set up by the Statistical Parameter.

## The Scientific Process

Statistics is the method of determining the validity of an empirical claim about nature. A claim that is not particularly valid will likely be true only some of the time, or under certain specific conditions that are not too common. Ultimately, thus, within a domain of consideration, statistics answers the question of the universality of claims made about nature through empirical methods of observation. It may be that two opposing claims are both true, in the sense that each is true half the time of random observation, or within half the space of contextual conditionalities. The scientific process, as progress, relies on methods that, over a linear time of repeated experimental cycles, increase the validity of the claims as the knowledge of nature approaches universality, itself always merely a horizon within the phenomenology of empiricism. This progressive scientific process is called 'discovery,' or merely research, although it is highly non-linear.

The scientific process is a branching process, as the truth of a claim is found to be dependent upon its conditions, and those conditions are found dependent on further conditionals. This structure of rationality is a tree. A single claim $$C$$ has a relative validity $$V$$ due to the truth of an underlying, or conditioning, claim $$C_i$$, given as $$V_{C_i}(C)=V(C,C_i)$$. We may understand the validity of claims through probability theory, in that the relative validity of a claim based on a conditioning claim is the probability the claim is true conditioned on $$C_i$$: $$V(C,C_i)=P(C|C_i)$$. In general, we will refer to the object under investigation, about which $$C$$ is a claim, as the primary variable X, and the subject performing the investigation, by whom $$C_i$$ is hypothesized (as a cognitive action), as the secondary variable Y. Thus, the orientation of observation, i.e. the time-arrow, is given as $$\sigma: Y \rightarrow X$$.
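The tree of conditional validities can be sketched as code; the claims and probabilities below are hypothetical, illustrating only that the validity of a claim along a conditioning path is the product of the conditional probabilities on that path:

```python
# A minimal sketch of the branching validity structure: each node records its
# conditioning (parent) claim and the conditional probability P(node | parent).
# All claim names and numbers are made up for illustration.
tree = {
    "C":   {"parent": None,  "p_given_parent": 1.0},
    "C_1": {"parent": "C",   "p_given_parent": 0.9},   # P(C_1 | C)
    "C_2": {"parent": "C_1", "p_given_parent": 0.8},   # P(C_2 | C_1)
}

def path_validity(claim, tree):
    """Validity of `claim`: chain-rule product along its conditioning path."""
    v = 1.0
    while claim is not None:
        v *= tree[claim]["p_given_parent"]
        claim = tree[claim]["parent"]
    return v

assert abs(path_validity("C_2", tree) - 0.72) < 1e-12   # 1.0 * 0.9 * 0.8
```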

An observer (Y) makes an observation from a particular position of an event (X) with its own place, forming a space-time of the action of measurement. An observation-as-information is a complex quantum-bit, which within a space of investigation is a complex variable, representing a tree of observation-conditioning rationality resulting from the branching process of hypothesis formation, with each node a conditional hypothesis and each edge length a conditional probability. The gravitation of the system of measurement is the space-time tensor of its world-manifold, stable or chaotic over the time of interaction. We thus understand the positions of observers within a place of investigation, itself given at least in its real-part component by the object of investigation.

## Experimental Set-up

Nature is explained by a parameterized model. Each parameter, as a functional aggregation of measurement samples, has itself a corresponding distribution as it occurs in nature along the infinite, universal horizon of measurement.

Let $$X^n$$ be a random variable representing the $$n$$ qualities that can be measured for the thing under investigation, $$\Omega$$, itself the collected gathering of all its possible appearances, $$\omega \in \Omega$$, such that $$X^n:\omega \rightarrow \mathbb{R}^n$$. Each sampled measurement of $$X^n$$ through an interaction with $$\omega$$ is given as an $$\hat{X}^n(t_i)$$, each one constituting a unit of indexable time in the catalogable measurement process. Thus, the set of sampled measurements, a sample space, is a partition of 'internally orderable' test times within the measurement action, $$\{ \hat{X}^n(t): t \in \pi \}$$.

In this set-up of statistical sampling, one will notice the step-wise process-timing of a single actor performing $$n$$ sequential measurements can be represented the same as $$n$$ indexed actors performing simultaneous measurements, at least with regard to internal time accounting. In order to infer the latter interpretational context, such as to preserve the common-sense notion of time as distinct from social space, one would represent all $$n$$ simultaneous measurements as $$n$$ dimensions of X, generally assumed to be the same in quality, in that all $$n$$ actors sample the same object in the same way, yet are distinct in some orderable indexical quality. Thus, in each turn of the round time (i.e. one unit), all actors perform independent and similar measurements. It may be, as in progressive action processes, that future actions are dependent on previous ones, and thus independence is only found within the sample space of a single time round. Alternatively, it may also be that the actors perform different actions, or are dependent upon each other in their interactions. Thus, the notion of actor(s) may be embedded in the space-time of the action of measurement. The embedding of a coordinated plurality of actors, in the most mundane sense of 'collective progress,' can be represented as the group action of all independent and similar measurers completing itself in each round of time, with the inner temporalities of the measurement process being similar but dependent on the previous round. The progressive interaction may be represented as the inducer $$I^+:X(t_i) \rightarrow X(t_{i+1})$$, with the assumptions of similarity and independence as $$\hat{x}_i(t) \sim \hat{x}_j(t) \ \& \ I(\hat{x}_i(t),\hat{x}_j(t))=0$$. We take $$\hat{X}(t)$$ to be a group of measurement actors/actions $$\{ \hat{x}_i(t): i \in \pi \}$$ that acts on $$\Omega$$ together, or simultaneously, to produce a singular measurement of one round time.
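The round structure (independence within a round, dependence across rounds through the inducer $$I^+$$) can be simulated; a sketch with hypothetical Gaussian measurement noise, where each round's group mean becomes the next round's level:

```python
import numpy as np

rng = np.random.default_rng(2)

n_actors = 8       # independent, similar measurers within one round of time
n_rounds = 1000

# Within a round: n independent, identically-distributed measurements around
# the current level. Across rounds: the inducer I+ carries the round's group
# mean forward as the next round's level (a progressive, dependent process).
level = 0.0
round_means = []
for _ in range(n_rounds):
    measurements = level + rng.normal(0.0, 1.0, n_actors)  # one round of time
    level = measurements.mean()                            # I+: X(t_i) -> X(t_{i+1})
    round_means.append(level)

round_means = np.asarray(round_means)
# Samples within a round are exchangeable; the sequence of rounds is not.
```

The round-to-round increments are independent with variance $$1/n$$, while the levels themselves form a dependent (random-walk) sequence, separating within-round from across-round structure.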

## Derivation of the Normal Distribution

The question with measurement is not, "what is the true distribution of the object in question in nature?", but "what is the distribution of the parameter I am using to measure?". The underlying metric of the quality under investigation, itself arising due to an interaction of measurement as the distance function within the investigatory space-time, is $$\mu$$. As the central limit theorem states, averages of these measurements, each having an error, will converge to normality. We can describe analytically the space of our 'atemporal' averaged measurements in that the rate of change of the frequency $$f$$ of our sample measurements $$x,x_0 \in X$$, with respect to the change in the space of measuring, is proportional, with constant $$-k$$, to the product of the distance from the true measurement ($$\mu$$) and the frequency:
$$\forall \epsilon > 0, \exists \delta(\epsilon)>0 \ s.t. \ \forall x, \ |x_0-x|<\delta \rightarrow \bigg| k(x_0-\mu)f(x_0) + \frac{f(x_0)-f(x)}{x_0-x}\bigg|<\epsilon$$
or in the differential form
$$\frac{df}{dx}=-k(x-\mu)f(x)$$
$$\int \frac{df}{f}=\int -k(x-\mu)\,dx \ \Rightarrow \ \ln f(x) = -\frac{k}{2}{(x-\mu)}^2 + c$$
the solution distribution is scaled by the constant coefficient $$C$$
$$f(x)=Ce^{-\frac{k}{2}{(x-\mu)}^2}$$
given the normalization of the total size of the universe of events as 1
$$\int_{-\infty}^{\infty} f dx =1$$
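The normalization integral evaluates via the classical polar-coordinates trick; a sketch of the step:

```latex
\left(\int_{-\infty}^{\infty} e^{-\frac{k}{2}u^2}\,du\right)^{2}
  = \int_{0}^{2\pi}\!\!\int_{0}^{\infty} e^{-\frac{k}{2}r^2}\, r\,dr\,d\theta
  = \frac{2\pi}{k}
\;\Longrightarrow\;
\int_{-\infty}^{\infty} e^{-\frac{k}{2}(x-\mu)^2}\,dx=\sqrt{\frac{2\pi}{k}}
```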
thus,
$$C=\sqrt{\frac{k}{2\pi}}$$
so the total distribution is,
$$f(x)=\sqrt{\frac{k}{2\pi}}e^{-\frac{k}{2}{(x-\mu)}^2}$$
$$\mathbb{E}(X)=\int x f(x)\,dx=\mu$$
$$\sigma^2=\mathbb{E}\big({(X-\mu)}^2\big)=\int {(x-\mu)}^2 f(x)\,dx=\frac{1}{k}$$

so, $$f(x)=N\bigg(\mu,\ \sigma=\frac{1}{\sqrt{k}}\bigg)$$
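The derived density can be checked numerically; a sketch assuming arbitrary illustrative values $$k=2.5$$, $$\mu=1$$, verifying total mass 1, mean $$\mu$$, and variance $$1/k$$:

```python
import numpy as np

k = 2.5     # hypothetical precision constant; sigma = 1/sqrt(k)
mu = 1.0

x = np.linspace(mu - 10, mu + 10, 200_001)
dx = x[1] - x[0]
f = np.sqrt(k / (2 * np.pi)) * np.exp(-0.5 * k * (x - mu) ** 2)

# Numerical checks of the derivation: total mass 1, mean mu, variance 1/k.
total = f.sum() * dx
mean = (x * f).sum() * dx
var = ((x - mu) ** 2 * f).sum() * dx

assert abs(total - 1.0) < 1e-6
assert abs(mean - mu) < 1e-6
assert abs(var - 1.0 / k) < 1e-6
```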

## Reconstructing Distributions by Moments

When two states, i.e. possible measured outcomes, of the stochastic sampling process of the underlying statistical object communicate, there is a probability of one occurring after the other, perhaps within the internal time (i.e. indexical ordering) of the measurement process, $$t \in \pi=(1, \cdots, n)$$ for the sample space $$(\hat{X}_1, \cdots , \hat{X}_n)$$. Arranging the resulting values as a list, there is some chance of one value occurring after the other. Such is a direction of communication between the states those values represent. When two states inter-communicate, there is a positive probability that each state will occur after the other (within the ordering $$\pi$$). Such inter-communicating states have the same period, defined as the GCD of the distances between occurrences. The complex variable of functional communicativity can be described as the real probability of conditioned occurrence and the imaginary period of its intercommunications.
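The period of a state as the GCD of its return-time lengths can be computed from the support of a transition structure; a minimal sketch on a hypothetical 3-cycle, also showing that inter-communicating states share the same period:

```python
from math import gcd
import numpy as np

# Support matrix of a transition structure: A[i, j] = 1 iff j can follow i
# with positive probability. This hypothetical example is a 3-cycle.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=int)

def period(A, i, max_len=20):
    """Period of state i: gcd of the lengths n of closed walks i -> i."""
    g = 0
    An = np.eye(len(A), dtype=int)
    for n in range(1, max_len + 1):
        An = An @ A
        if An[i, i] > 0:          # a return to i in exactly n steps
            g = gcd(g, n)
    return g

assert period(A, 0) == 3
# Inter-communicating states have the same period:
assert period(A, 0) == period(A, 1) == period(A, 2)
```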

To describe our model by communicative functionals is to follow the Laplacian method of generating the distribution via moments by finite difference equations. A single state, within state-space rather than time-space, is described as a complex variable $$s=\sigma + \omega i$$, where $$\sigma$$ is the real functional relation between state & system (or part & whole), while $$\omega$$ is its imaginary communicative relationship. If we view the branching evolution of the possible states measured in a system under sampling, then the actual sampled values are a path along this decision-tree. The total system, in its Laplacian, or characteristic, representation, is the (tensorial) sum of the underlying sub-systems, to which each state belongs as a possible value. A continuum (real-distributional) system can only result as the infinite branching process, as thus each value is a limit to an infinite path-sequence of rationalities in the state-system, sub-dividing into state-systems until the limiting (stationary) systems-as-states are reached that are non-dynamic or static in the inner spatio-temporality of self-differentiation, i.e. non-dividing. Any node of this possibility-tree can be represented as a state of the higher-order system by a complex value, or as a system of the lower-order states by a complex function. The real part of this value is the probability of the lower state occurring given the higher system occurring (uni-directional communicativity), while the imaginary part is its relative period. Similarly, the real function of a higher system is the probability of lower states occurring given its occurrence, and the imaginary part is the relative periods of the state-values.