


A Tutorial in Data Science: Lecture 9 – The Functional Theory of Communication in Dynamic Systems

Jan 12, 2021 | Math Lecture


We thus consider the functional notion of communication within the theory of dynamic systems, where the finite difference equation, or the flow of a differential equation, is iterated as a function. Within this framework, there is a true underlying deterministic system described by a single function that is iterated over time as the repeated trials of the experiment. The space of outcomes may be partitioned, often into intervals, and then coded into a symbolic system of numerical sequences, where the place-order indicates the number of iterations of the function and the place-value the indexed interval the orbit falls within [1]. This classifies initial conditions by their results, and may even represent a homomorphism between initial points and their resulting codes.
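As a minimal sketch of this symbolic coding, consider the doubling map f(x) = 2x mod 1 on [0, 1) with the two-interval partition [0, 1/2), [1/2, 1); the map and partition are illustrative assumptions, not named in the lecture. Iterating f and recording the partition index at each step produces exactly the kind of numerical code sequence described above.

```python
import numpy as np

# Symbolic coding of an orbit: iterate the doubling map and record,
# at each step, the index of the partition interval the orbit lies in.
# With the partition [0, 1/2), [1/2, 1) this is the binary itinerary of x0.

def doubling_map(x):
    return (2.0 * x) % 1.0

def itinerary(x0, n_iterations, partition_edges):
    """Code the orbit of x0 as a sequence of partition indices."""
    code = []
    x = x0
    for _ in range(n_iterations):
        # np.searchsorted finds which interval x falls into.
        code.append(int(np.searchsorted(partition_edges, x, side="right")) - 1)
        x = doubling_map(x)
    return code

edges = np.array([0.0, 0.5, 1.0])    # two-interval partition of [0, 1)
print(itinerary(0.3, 10, edges))     # e.g. [0, 1, 0, 0, 1, 1, ...]
```

The code sequence classifies the initial condition x0: two nearby points share a code prefix exactly as long as their orbits stay in the same intervals.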

Dysfunction

The proper functional notion of a probabilistic system is a dysfunction, since it sends one state to many possible different states, in the sense that f(x)=\sin(\frac{1}{x}) attains uncountably many values at 0: analytically, f(0)=[-1,1], i.e. the whole interval of limit points. We may rather study these as iterated functional recursions, where, when there is sensitive dependence upon initial conditions, we will likely find the function to be hyperbolic, in that the derivatives of its iterates grow at least geometrically, and the problem of dysfunctionality becomes one of imprecision in initial conditions.
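A short sketch of this geometric growth, using the logistic map f(x) = 4x(1 - x) as an assumed example (the lecture names no specific map): by the chain rule, (f^n)'(x_0) is the product of f' along the orbit, and its magnitude grows roughly geometrically for a chaotic map, which is how imprecision in x_0 becomes dysfunction.

```python
# Hyperbolicity check: |(f^n)'(x0)| should grow roughly geometrically.
f = lambda x: 4.0 * x * (1.0 - x)    # logistic map (assumed example)
df = lambda x: 4.0 - 8.0 * x         # its derivative

x = 0.123
derivative_product = 1.0             # accumulates prod_k f'(x_k) = (f^n)'(x0)
for n in range(1, 21):
    derivative_product *= df(x)
    x = f(x)
    print(f"n={n:2d}  |(f^n)'(x0)| = {abs(derivative_product):.3e}")
# The growth rate log|(f^n)'(x0)| / n estimates the Lyapunov exponent,
# which is log 2 for this map.
```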

From the previous lecture, a Markov chain is a dynamic system where X_{t}=F(X_{t-1}, Y^{(1)}_t, \cdots , Y^{(n)}_t) for n independent variables Y^{(1)}_t, \cdots , Y^{(n)}_t [2]. While the system may be "fully determined" by an initial condition x_0, i.e. the exact (pre-partitioned) state at time t is given by x_t=f^t(x_0), limits on the precision of measurement prevent us from ever determining empirically, or choosing exactly, the value upon which to begin iterating f. The secondary independent random variables Y^{(i)}_t, as post-partitioned, thus account for this error of imprecise measurement.
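A hedged sketch of X_t = F(X_{t-1}, Y_t) in this spirit: a deterministic iteration f perturbed at each step by a single independent noise term Y_t standing in for measurement imprecision. The map, noise scale, and clipping are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return 4.0 * x * (1.0 - x)                 # underlying deterministic system

def F(x_prev, y):
    return np.clip(f(x_prev) + y, 0.0, 1.0)    # noisy transition X_t = F(X_{t-1}, Y_t)

x = 0.3
trajectory = [x]
for t in range(100):
    y_t = rng.normal(0.0, 1e-3)                # independent Y_t at each step
    x = F(x, y_t)
    trajectory.append(x)
```

Because of the hyperbolicity above, even this small noise is amplified geometrically, so only the distribution over partitioned states, not the exact orbit, is predictable.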

Given a partitioning P of the space X into a state-space P(X)=\Omega =\{\omega_i: i \in \mathbb{N}\}, let x \in X \mapsto \omega \in \Omega in that P(x)=\omega, and let I:\Omega \rightarrow \mathbb{N} be the index map. States \omega_i and \omega_j communicate if \exists \ m, n \in \mathbb{N} s.t. f^m(\omega_i)=\omega_j and f^n(\omega_j)=\omega_i, i.e. \mathbb{P}(X_m=\omega_j \mid X_0=\omega_i)>0 and \mathbb{P}(X_n=\omega_i \mid X_0=\omega_j)>0, where f is a function on the state-space, f: \Omega \rightarrow \Omega. If our state-space is a dual space of functionals acting on the linear transformations of an underlying vector space, then functional communicativity is thereby well-defined.
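A minimal sketch of this communication relation on a finite state-space: represent which states f can send each state to, and test mutual reachability under iteration. The transition structure below is a toy assumption.

```python
def reachable(transitions, i, j):
    """Is state j reachable from state i under iterated transitions?"""
    seen, frontier = {i}, [i]
    while frontier:
        s = frontier.pop()
        for t in transitions[s]:
            if t == j:
                return True
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

def communicate(transitions, i, j):
    # omega_i and omega_j communicate iff each reaches the other.
    return reachable(transitions, i, j) and reachable(transitions, j, i)

# states 0..3; state 3 is absorbing, so nothing communicates with it
transitions = {0: [1], 1: [0, 2], 2: [1, 3], 3: [3]}
print(communicate(transitions, 0, 2))   # True
print(communicate(transitions, 2, 3))   # False: 3 never returns to 2
```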

Consider the modeling of a communication system. A message is sent through this system, arriving at a state of the system at each time. The message is thus the temporal reconfiguration chain of the system as it takes on different states from its possibility set \Omega. Due to noise in the message-propagation channel, i.e. the necessity of interpretation within any deciphering of \textit{the meaning} of the message from its natural ambiguity, we can only know the probability of the system's state at a certain time, in that the full interpretation set of the message is a sequence of probability distributions, giving the probability of it having a certain state at a certain time. Performing this operation discretely in finite time, we can only sample the code as a sequence of states over a number of trial-repetitions N, and take the frequency of each state at each time to be its approximate probability. We consider this to be the empirical interpretation of the message. With b possible states to our system, let \Sigma_b be the symbolic code-space of all possible message sequences, including bidirectionally infinite ones, i.e. where the starting and finishing time is not known. Thus, a given empirically sampled message sequence is given by \tilde{x}^l=(x^l_k)_{k=-\infty}^{\infty}, with x^l_k \in \{0, \cdots, b-1\}, where the empirical interpretation is given by the frequency distributions \tilde{f}_{x(i)}(k)=\frac{1}{N}\sum_{l=1}^{N} \mathbf{1}\{x^l_k=i\}.
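A sketch of this empirical interpretation: given N sampled message sequences over b states (generated synthetically here, as an assumption), estimate the probability of each state at each time k by its frequency across the samples, exactly as in the indicator formula above.

```python
import numpy as np

rng = np.random.default_rng(1)
b, N, T = 4, 500, 20                         # states, samples, message length
messages = rng.integers(0, b, size=(N, T))   # x^l_k: l-th sampled message, time k

# freq[k, i] = (1/N) * sum_l 1{x^l_k = i}
freq = np.zeros((T, b))
for i in range(b):
    freq[:, i] = (messages == i).mean(axis=0)

assert np.allclose(freq.sum(axis=1), 1.0)    # a distribution at each time k
print(freq[0])                               # approximate P(X_0 = i)
```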

From the stochastic-process approach, we might thus ask about the stationary distribution of these sampling codes: what are the initial distribution conditions such that the distribution over states does not change over time? Let f be the time-iteration operation of the system on its space X. While the times are counted t\in \mathbb{N}, the length of each time step t_i \rightarrow t_{i+1} is given by \Delta t_i, such that the real time T is given by T(t_i)=\sum_{k=0}^{i-1} \Delta t_k. The system at a given time is given by the probability distribution over the different states, which are the macro-partitions \omega \in \Omega of the micro-states x \in \omega, thus F_t(\omega_k)=\mathbb{P}(X_t=\omega_k)=\mathbb{P}(f^t(x) \in \omega_k). F_t(\omega_i)=F(t)[i] is the cumulative time distribution of the class-partitioned state-space distributions. The stationary distribution is such that F(0)=F(t) for all t.
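For a finite Markov chain this stationarity condition becomes concrete: with an (assumed, illustrative) transition matrix P, the stationary distribution \pi satisfies \pi P = \pi, so a chain started from \pi keeps the same state distribution at every time. A minimal sketch:

```python
import numpy as np

# Toy transition matrix (an assumption; rows sum to one).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# The stationary distribution is the left eigenvector of P for
# eigenvalue 1, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print(pi)                         # stationary distribution F(0)
print(np.allclose(pi @ P, pi))    # True: F(0) = F(t) after a time step
```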

Consider the string of numbers (x_k)_{k=T_0}^{T_N} from N iterations of an experiment. What does it mean for the underlying numbers to be normally distributed? It means that the experiment is independent of time: the distribution stays the same at each time interval. Given a time-dependent process, the averages of these empirical measurement numbers will nevertheless tend toward normality. Thus, normality is the stationary distribution of the averaging process. For random time-lengths, the process averages all the values in that time-interval, without remembering the length of time or, equivalently, the number of values; this is the Markov property of time-homogeneity.

Consider a system that changes states over time between b different state-indexes \{0, \cdots, b-1\}. When the system-state appears as 0, we perform an average of the previous values between its present time and the previous occurrence of 0. Thus, the variable 0, although an intrinsic part of the object of measurement, is in fact a property of the subject performing the measurement, as when he or she decides to stop the measurement process and perform an averaging of the results. We call such a variable the mimetic basis when its objectivity depends upon a subjectivity in the action of measurement, and the mimetic dynamics are given by the relationship between the occurrence of a 0 and the other states. Here 0 is the stopping time, where a string of results is averaged before continuing.

Let T_0^{(k)}=\inf\{m>T_0^{(k-1)}: X_m=0\} be the time of the k-th occurrence of state 0, with return-length \tau_0^{(k)}=T_0^{(k)}-T_0^{(k-1)}, and let \hat{f}(i) give the actual measured value from the i-th state of the system. In reality, the system's time-inducing function f has resulted in a particular value f(x) \in X before it was partitioned into \Omega via P, although here the time-inverting (dys)function \hat{f} determines this original pre-value from the result. Often the empirical \tilde{\hat{f}} is used, from the average of the state's values, i.e. \tilde{\hat{f}}_x^{(k)}(i)=\frac{1}{M}\sum_{j=1}^{M} f^{n_j}(x), where x, f^{n_j}(x) \in \omega_i and f^l(x) \notin \omega_i for n_{j-1}<l<n_j, with M=N_{T_0^{(k)}}(i) and N_{n}(i)=|\{m: X_m=i, m\leq n \}|, which thus takes the average value over an M-length self-communication string for a particular state. The sequences of cycle-averages \{\tilde{S}_k(x)= \frac{1}{\tau_0^{(k)}-1}\sum_{i=1}^{\tau_0^{(k)}}\tilde{\hat{f}}_x^{(k)}(X_{T_0^{(k-1)}+i})\}_{k=1}^{N} and \{S_k(x)= \frac{1}{\tau_0^{(k)}-1}\sum_{i=1}^{\tau_0^{(k)}} f^{T_0^{(k-1)}+i}(x) \}_{k=1}^{N} approach normal distributions as N increases if state 0 is independent of the others.
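A hedged simulation of this closing claim, with a toy transition matrix and per-state measured values as assumptions: run the chain, cut the trajectory at each return to state 0 (the stopping time), average the measured values over each cycle, and inspect the resulting cycle-averages S_k, which over many cycles should look approximately normal.

```python
import numpy as np

rng = np.random.default_rng(2)
b = 3
P = np.array([[0.2, 0.5, 0.3],        # toy transition matrix (assumed)
              [0.4, 0.3, 0.3],
              [0.5, 0.2, 0.3]])
f_hat = np.array([0.0, 1.0, 2.5])     # measured value per state (assumed)

x, cycle_values, cycle_averages = 0, [], []
for _ in range(100_000):
    x = rng.choice(b, p=P[x])
    cycle_values.append(f_hat[x])
    if x == 0:                        # stopping time: state 0 reached
        cycle_averages.append(np.mean(cycle_values))
        cycle_values = []             # forget length and values, restart

S = np.array(cycle_averages)
print(len(S), S.mean(), S.std())      # many roughly i.i.d. cycle averages
# A histogram of S (e.g. via matplotlib) approximates a bell curve.
```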

References

  1. Robert L. Devaney, An Introduction to Chaotic Dynamical Systems, 2nd Ed. Addison-Wesley Publishing Company, Inc., The Advanced Book Program, 1989.
  2. Richard Serfozo, The Basics of Applied Stochastic Processes, Probability and its Applications. Springer-Verlag Berlin Heidelberg, 2009. p. 1.