 ## Statistics as the Logic of Science

\documentclass{article}
\usepackage[utf8]{inputenc}
% !BIB TS-program = biber
\usepackage[backend=biber,style=numeric, citestyle=authoryear]{biblatex}

\usepackage{amssymb}
\usepackage{dirtytalk}
\usepackage{csquotes}
\usepackage{amsmath}
\usepackage{calc}
\usepackage{textcomp}
\usepackage{mathtools}
\usepackage[english]{babel}
\usepackage{fancyhdr}
\usepackage{url}
\def\UrlBreaks{\do\/\do-}
\usepackage{breakurl}
\usepackage{graphicx}
\graphicspath{ {images/} }
\usepackage{wrapfig}
\usepackage{float}
\usepackage[T1]{fontenc}
\usepackage{outlines}
\usepackage{enumitem}
\setenumerate[1]{label=\roman*.}
\setenumerate[2]{label=\alph*.}
\newcommand{\midtilde}{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}

\usepackage{CJKutf8}

\pagestyle{fancy}
\fancyhf{}

\title{Statistical Analysis by Communicative Functionals: \\ Lecture 2 – Statistics as The Logic of Science}

\author{Justin Petrillo}

\begin{document}
\maketitle

The question of \textit{science itself} has never been any of science's particular objects of inquiry, but rather its existential nature: its possibility, and thereby the nature of its actuality. Science is power, and thus abstracts itself as the desired meta-good, although it is always itself about particularities, as an ever-finer branching process. Although a philosophic question, the \textit{question of science} is inherently a political one, as it is the highest good desired by the society, its population, and its government. To make sense of science mathematically-numerically, as statistics claims to, the scientific process itself must be understood through probability theory as \say{The Logic of Science.} \footcite{QTSTATS}

\section{Linguistic Analysis of the Invariants of Science: The Laws of Nature}
The theory of science, as the proof of its validity in universality, must consider the practice of science, as the negating particularity. The symbolic language of science, within which its practice and results are embedded, necessarily negates its own particularity as well, so as to represent a structure universally. Science, in the strict sense of having already achieved the goal of universality, is \textit{de-linguistified}. While mathematics, in its extra-linguistic nature, often gives the impression of universal de-linguistification, such universality is only a semblance. The numbers of mathematics always can refer to things, and in the particular basis of their conceptual context always do. The non-numeric symbols of mathematics, too, represented words before short-hand gave them a distilled symbolic life. The de-linguistified nature of the extra-linguistic property of mathematics is that, to count as mathematics, the symbols must themselves represent universal things. Thus, all true mathematical statements may represent scientific phenomena, but the context and work of this referencing is not trivial, and is sometimes the entirety of the scientific labor. The tense of science, as the time-space of the activity of its being, is the \textit{tensor}, which is the extra-linguistic meta-grammar of null-time, and thus of any and all times.

\section{The Event Horizon of Discovery: The Dynamics between an Observer \& a Black Hole}
The consciousness who writes or reads science, and thereby reports or performs the described tensor as an action of experimentation or validation, is the transcendental consciousness. Although science is real, it is only a horizon. The question is thus of its nature and existence at this horizon. What is knowable of science is thereby known as \say{the event horizon}, as that which has appeared already, beyond which is merely a \say{black hole} as what has not yet revealed itself – always there is a not-yet to temporality, and so such a black hole can always be found as all of science that has not and cannot be revealed, since within the very notion of science is a negation of withdrawal (non-appearance) as the condition of its own universality (negating its particularity). Beginning here with the null-space of black holes, the physical universe – at least in its negative gravitational entities – has a natural extra-language, at least for the negative linguistic operation of signification whereby what is not known is the \say{object} of reference. In this cosmological interpretation of subjectivity within the objectivity of physical space-time, we thus come to the result of General Relativity that the existence of a black hole is not independent of the observer, and in fact is only an element in the Null-Set, or negation, of the observer. To ‘observe’ a black hole is to point to and outline something of which one does not know. If one ‘knew’ what it was positively, then it would not be ‘black’ in the sense of not emitting light \textit{within the reference frame (space-time curvature) of the observer}. That one \textit{cannot} see something, as receive photons reflecting space-time measurements, is not a property of the object but rather of the observer in his or her subjective activity of observation, since to be at all must mean there is some perspective from which it can be seen.
As the Negation of the objectivity of an observer, subjectivity is the \textit{negative gravitational anti-substance} of black holes. Subjectivity, as what is not known by consciousness, observes the boundaries of an aspect (a negative element) of itself in the physical measurement of an ‘event horizon.’

These invariants of nature, as the conditions of its space-time, are the laws of dynamics in natural science. At the limit of observation we find the basis of the conditionality of the observation, and thus its existence as an observer. From the perspective of absolute science, within the horizon of universality (i.e. the itself-as-not-itself of the black hole, or Pure Subjectivity), the space-time of the activity of observation (i.e. the labor of science) is a time-space as the hyperbolic negative geometry of conditioning (the itself of an unconditionality). What is a positive element of the bio-physical contextual condition of life, from which science takes place, for the observer is a negative aspect from the perspective of transcendental consciousness (i.e. science), as the limitation of the observation. Within the Husserlian Phenomenology and Hilbertian Geometry of the early 20th century in Germany, from which Einstein’s theory arose, a black hole is therefore a Transcendental Ego as the absolute measurement point. Our Solar System is conditioned in its space-time geometry by the Milky Way galaxy it is within, which is conditioned by the black hole Sagittarius A* (Sgr A*). Therefore, the unconditionality of our solar space-time (hence its bio-kinetic features) is an unknown of space-time possibilities, enveloped in the event horizon of Sgr A*. What is the inverse to our place (i.e. space-time) of observation will naturally only exist as a negativity, what cannot be seen.

\section{Classical Origins of The Random Variable as The Unknown: Levels of Analysis}
Strictly speaking, within the Chinese Cosmological Algebra of four variables ($\mu$, $X$, $Y$, $Z$), this first variable of primary Unknowing is represented by $X$, or Tiān (\begin{CJK*}{UTF8}{gbsn}天\end{CJK*}), for ‘sky,’ as that which conditions the arc of the sky, i.e. “the heavens,” or the space of our temporal dwelling ‘in the earth.’ We can say thus that $X$ = Sgr A* is the largest and most relevant primary unknown for solarized galactic life. While of course $X$ may represent anything, in the total cosmological nature of science, i.e. all that Humanity does not yet know it is conditioned by, it appears most relevantly and holistically as Sgr A*. It can be said thus that all unknowns ($x$) in our space-time of observation are within \say{\textit{the great unknown}} ($X$) of Sgr A*, as thus $x \in X$, or $x \mathcal{A} X$ for the negative aspectual ($\mathcal{A}$) relationship \say{$x$ is an aspect of $X$}. These are the relevant, and most general (i.e. universal), invariants to our existence of observation. They are the relative absolutes of, from, and for science. Within more practical scientific judgements from a cosmological perspective, the relevant aspects of variable unknowns are the planets within our solar system, as conditioning the solar life of Earth. The Earthly unknowns are the second variable $Y$, or Dì (\begin{CJK*}{UTF8}{gbsn}地\end{CJK*}) for “earth.” They are the unknowns that condition the Earth, or life, as determining the changes in climate through their cyclical dynamics. Finally, the last unknown of conditionals, $Z$, refers to people, Rén (\begin{CJK*}{UTF8}{gbsn}人\end{CJK*}) for ‘men,’ as what conditions their actions. $X$ is the macro unknown (conditionality) of the gravity of ‘the heavens,’ $Y$ the meso unknown of biological life in and on Earth, and $Z$ the micro unknown of psychology as quantum phenomena. These unknowns are the subjective conditions of observation.
Finally, the fourth variable is the “object,” or Wù (\begin{CJK*}{UTF8}{gbsn}物\end{CJK*}), the $\mu$ of measurement. This last quantity is the only \textit{real} value, in the sense of an objective measurement of reality, while the others are imaginary in the sense that their real values are not known, and cannot be within the reference of observation, since they are its own conditions of measurement within \say{the heavens, the earth, and the person}. \footcite[p.~82]{CHIN-MATH}

In the quaternion tradition of Hamilton, the four components ($\mu$, $X$, $Y$, $Z$) correspond to the quaternion components ($1$, $i$, $j$, $k$). Since the real values of $X$, $Y$, $Z$ in the scientific sense cannot be known truly, and thus must always themselves be unknowns, they are treated as imaginary numbers ($i=\sqrt{-1}$), with their ‘values’ merely coefficients of the quaternion units $i$, $j$, $k$. These quaternions were derived as quotients of vectors, and are thus the unit orientations of measurement’s subjectivity, themselves representing the space-time. We often approximate this with the Cartesian $X$, $Y$, $Z$ of three independent directions as vectors, yet to do so is to assume Euclidean Geometry as independence.
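For reference, Hamilton's defining relations for the quaternion units, which make the non-commutativity of these measurement orientations explicit, are
\[
i^2 = j^2 = k^2 = ijk = -1, \qquad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j,
\]
so a general quaternion takes the form $q = \mu + Xi + Yj + Zk$, with real part $\mu$ and the unknowns $X$, $Y$, $Z$ as coefficients of the imaginary units.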

\printbibliography

\end{document}

## The Derivation of the Normal Distribution

\documentclass{article}
\usepackage[utf8]{inputenc}
% !BIB TS-program = biber
\usepackage[backend=biber,style=numeric, citestyle=authoryear]{biblatex}

\usepackage{amssymb}
\usepackage{dirtytalk}
\usepackage{csquotes}
\usepackage{amsmath}
\usepackage{calc}
\usepackage{textcomp}
\usepackage{mathtools}
\usepackage[english]{babel}
\usepackage{fancyhdr}
\usepackage{url}
\def\UrlBreaks{\do\/\do-}
\usepackage{breakurl}
\usepackage{graphicx}
\graphicspath{ {images/} }
\usepackage{wrapfig}
\usepackage{float}
\usepackage[T1]{fontenc}
\usepackage{outlines}
\usepackage{enumitem}
\setenumerate[1]{label=\roman*.}
\setenumerate[2]{label=\alph*.}
\newcommand{\midtilde}{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}

\pagestyle{fancy}
\fancyhf{}

\title{Statistical Analysis by Communicative Functionals: \\ Lecture 1 – A Geometric Derivation of the Normal Distribution}

\author{Justin Petrillo}

\begin{document}
\maketitle

\begin{abstract}
The internal space-time geometry of an experiment, as the distribution of measurement interactions, is set up by the statistical parameter.
\end{abstract}

\section{The Scientific Process}
Statistics is the method of determining the validity of an empirical claim about nature. A claim that is not particularly valid will likely be true only some of the time, or under certain specific conditions that are not too common. Ultimately, then, within a domain of consideration, statistics answers the question of the universality of the claims made about nature through empirical methods of observation. It may be that two opposing claims are both true, in the sense that each is true half the time of random observation, or within half the space of contextual conditionalities. The scientific process, as progress, relies on methods that, over a linear time of repeated experimental cycles, increase the validity of the claims as the knowledge of nature approaches universality, itself always merely a horizon within the phenomenology of empiricism. This progressive scientific process is called ‘discovery,’ or merely \textit{research}, although it is highly non-linear.

The scientific process is a branching process, as the truth of a claim is found to be dependent upon its conditions, and those conditions are found dependent on further conditionals. This structure of rationality is as a tree. A single claim ($C$) has a relative validity ($V$) due to the truth of an underlying, or conditioning, claim, $C_i$, given as $V_{C_i}(C)=V(C,C_i)$. We may understand the validity of claims through probability theory, in that the relative validity of a claim based on a conditioning claim is the probability that the claim is true conditioned on $C_i$: $V(C,C_i)=P(C|C_i)$. In general, we will refer to the object under investigation, about which $C$ is a claim, as the primary variable $X$, and the subject performing the investigation, by which $C_i$ is hypothesized (as a cognitive action), as the secondary variable $Y$. Thus, the orientation of observation, i.e. the time-arrow, is given as $\sigma: Y \rightarrow X$.
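Under this reading, the validity of a claim relative to a whole branch of conditioning claims factors by the chain rule of probability – a sketch, assuming each claim depends only on its immediate conditioning claim (a Markov property along the branch):
\[
V(C, C_1, \dots, C_n) = P(C \mid C_1)\, P(C_1 \mid C_2) \cdots P(C_{n-1} \mid C_n),
\]
so the validity accumulated along a branch of the tree is the product of the conditional probabilities attached to its edges.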

An observer ($Y$) makes an observation, from a particular position, of an event ($X$) with its own place, forming a space-time of the action of measurement. An observation-as-information is a complex quantum bit, which within a space of investigation is a complex variable, representing a tree of observation-conditioning rationality resulting from the branching process of hypothesis formation, with each node a conditional hypothesis and each edge length the conditional probability. The gravitation of the system of measurement is the space-time tensor of its world-manifold, stable or chaotic over the time of interaction. We thus understand the positions of observers within a place of investigation, itself given at least in its real-part component by the object of investigation.

\section{Experimental Set-up}
Nature is explained by a parameterized model. Each parameter, as a functional aggregation of measurement samples, has itself a corresponding distribution as it \textit{occurs in nature} along the infinite, universal horizon of measurement. \\
\\
Let $X^n$ be a random variable representing the $n$ qualities that can be measured for the thing under investigation, $\Omega$, itself the collected gathering of all its possible appearances, $\omega \in \Omega$, such that $X^n:\Omega \rightarrow \mathbb{R}^n$. Each sampled measurement of $X^n$ through an interaction with $\omega$ is given as an $\hat{X}^n(t_i)$, each one constituting a unit of indexable time in the catalogable measurement process. Thus, the set of sampled measurements, a \textit{sample space}, is a partition of ‘internally orderable’ test times within the measurement action, $\{ \hat{X}^n(t): t \in \pi \}$. \\
\\
In this set-up of statistical sampling, one will notice that the step-wise process-timing of a single actor performing $n$ sequential measurements can be represented the same as $n$ indexed actors performing simultaneous measurements, at least with regard to internal time accounting. In order to infer the latter interpretational context, such as to preserve the common-sense notion of time as distinct from social space, one would represent all $n$ simultaneous measurements as $n$ dimensions of $X$, assumed to be generally the same in quality, in that all $n$ actors sample the same object in the same way, yet distinct in some orderable indexical quality. Thus, in each turn of the round time (i.e. one unit), all actors perform independent and similar measurements. It may be, as in progressive action processes, that future actions are dependent on previous ones, and thus independence is only found within the sample space of a single time round. Alternatively, it may also be that the actors perform different actions, or are dependent upon each other in their interactions. Thus, the notion of actor(s) may be embedded in the space-time of the action of measurement. The embedding of a coordinated plurality of actors, in the most mundane sense of ‘collective progress,’ can be represented as the group action of all independent \& similar measurers completing itself in each round of time, with inter-temporalities in the measurement process being similar but dependent on the previous round. The progressive interaction may be represented as the inducer $I^+:X(t_i) \rightarrow X(t_{i+1})$, with the assumptions of similarity and independence as $\hat{x}_i(t) \sim \hat{x}_j(t) \ \& \ I(\hat{x}_i(t),\hat{x}_j(t))=0$. We take $\hat{X}(t)$ to be a group of measurement actors/actions $\{ \hat{x}_i(t): i \in \pi \}$ that acts on $\Omega$ together, or simultaneously, to produce a singular measurement of one round time.

\section{Derivation of the Normal Distribution}
The question with measurement is not, \say{what is the true distribution of the object in question in nature?}, but \say{what is the distribution of the parameter I am using to measure?}. The underlying metric of the quality under investigation, itself arising due to an interaction of measurement as the distance function within the investigatory space-time, is $\mu$. As the central limit theorem states, averages of these measurements, each having an error, will converge to normality. We can describe analytically the space of our ‘atemporal’ averaged measurements, in that the rate of change of the frequency $f$ of our sample measurements $x,x_0 \in X$ with respect to the space of measuring is proportional, by a negative constant $-k$, to the product of the distance from the true measurement ($\mu$) and the frequency:
\[
\forall \epsilon > 0,\ \exists \delta(\epsilon)>0 \ \text{ s.t. } \ \forall x,\ |x_0-x|<\delta \rightarrow \bigg|{ -k}(x_0-\mu)f(x_0) - \frac{f(x_0)-f(x)}{x_0-x}\bigg|<\epsilon,
\]
or, in differential form,
\[
\frac{df}{dx}=-k(x-\mu)f(x).
\]
The solution distribution is scaled by a constant of integration, $C$:
\[
f(x)=Ce^{-\frac{k}{2}{(x-\mu)}^2}.
\]
Given the normalization of the total size of the universe of events as 1,
\[
\int_{-\infty}^{\infty} f(x)\,dx = 1,
\]
we find $C=\sqrt{\frac{k}{2\pi}}$, so the total distribution is
\[
f(x)=\sqrt{\frac{k}{2\pi}}\,e^{-\frac{k}{2}{(x-\mu)}^2}.
\]
Its mean and variance are
\[
E(X)=\int_{-\infty}^{\infty} x f(x)\,dx=\mu, \qquad
\sigma^2=E\big({(X-\mu)}^2\big)=\int_{-\infty}^{\infty} {(x-\mu)}^2 f(x)\,dx=\frac{1}{k},
\]
so
\[
f(x)=N\bigg(\mu,\ \sigma=\frac{1}{\sqrt{k}}\bigg).
\]

\printbibliography

\end{document}
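The closed form of the normal density derived above can be checked numerically. A minimal sketch, with arbitrary illustrative choices of `k` and `mu`, verifying that the density integrates to 1, has mean μ, and has variance 1/k:

```python
import numpy as np
from scipy.integrate import quad

k, mu = 2.0, 1.5  # arbitrary illustrative parameters

# The solution of df/dx = -k (x - mu) f(x), normalized to total mass 1.
def f(x):
    return np.sqrt(k / (2 * np.pi)) * np.exp(-0.5 * k * (x - mu) ** 2)

total, _ = quad(f, -np.inf, np.inf)                              # should be 1
mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)              # should be mu
var, _ = quad(lambda x: (x - mu) ** 2 * f(x), -np.inf, np.inf)   # should be 1/k

print(total, mean, var)
```

With `k = 2`, the printed variance should come out near `1/k = 0.5`, matching the identification σ = 1/√k.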

## The Art of Argumentation-Making: Statistics as Modern Rhetoric

The process of statistical measurement, used to make precise the evaluation of a claim, relies upon our assumptions about the sampling measurement process and the empirical phenomena measured.  The independence of the sampling measurements leads to the normal distribution, which allows the confidence of statistical estimations to be calculated.  This is the metric used to gauge the validity of the tested hypothesis, and therefore of the empirical claim proposed.  While usually the question of independence of variables arises in relation to the different quantities measured for each repeated sample, we ask now about the independence of the measurement operation from the measured quantity, and thus the effect of the observer, i.e. subjectivity, on the results found, i.e. objectivity.  When there is an interaction between the observing subjectivity and the observed object, the normal distribution does not hold, and thus the objective validity of the sampling test comes into question.  Yet, this is the reality of quantum (small) measurements and of measurements in the social world.  If we consider the cognitive bias of decreasing marginal utility, we find that samples of marginal utility will decrease with each consumption of a good, making the discovery of an underlying objective measurement of the subjective preference impossible.  This assumption of independence of the Measurer from the Measurement is inherited from Descartes.
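The contrast can be simulated. A minimal sketch (all parameters are illustrative assumptions): in the independent case, repeated measurements recover a stable underlying value; in the interacting case, modeled loosely on decreasing marginal utility, each act of measurement shrinks the quantity itself, so no stable underlying value is recoverable from the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma, decay = 10.0, 1.0, 0.9  # hypothetical parameters

# Independent case: measuring leaves the measured quantity unchanged.
independent = true_mu + sigma * rng.standard_normal(1000)

# Interacting case: each measurement (consumption) shrinks the quantity,
# as with decreasing marginal utility -- the observation alters the observed.
level, interacting = true_mu, []
for _ in range(1000):
    interacting.append(level + sigma * rng.standard_normal())
    level *= decay

print(np.mean(independent))   # close to true_mu
print(np.mean(interacting))   # far below true_mu: no stable value recovered
```

The independent sample mean converges to `true_mu` by the central limit theorem; the interacting sample mean depends on the entire history of measurement and does not estimate any fixed parameter of the object alone.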

Descartes created the modern mathematical sciences through the development of a universal mathematics that would apply to all the other sciences to find certain validity with exactitude and a rigor of proof, essays toward which can be found in his early writings developing these subject-oriented reflections.  In his Meditations, after his ‘doubting of everything, to know what is true,’ one finds two ‘substances’ clearly and distinctly – thinking & extension.  This separation of thinking and extension places measurement as objective, without acknowledging the perspective, or reference frame, of the subjective observer, leading to the formulation of the person as ‘a thinking thing,’ through cogito, ergo sum, ‘I think, therefore I am.’  Just as with the detachment of mathematics from the other sciences – a pure universal science – and therefore from the concrete particularity of scientific truth, the mind becomes disconnected from the continuum of reality (i.e. ’the reals,’ cf. Cantor) of the extended body, as subjectivity infinitely far from objectivity, yet able to measure it perfectly.  This would lead to the Cartesian Plane of XY independence as a generalization of Euclidean Geometry from the 2D Euclidean Plane, where the parallel (5th) postulate was retained:

Euclid’s 5th Postulate: For an infinitely extended straight line and a point outside it, there exists exactly one parallel (non-intersecting) line through the point.

This became the objective coordinate system of the extended world, apart from the subjective consciousness that observed each dimension in its infinite independence, since it was itself independent of all extended objects of the world.  All phenomena, it was said, could be embedded within this geometry to be measured using the Euclidean-Cartesian metrics of distance.  For centuries, attempts were made to prove this postulate of Euclid, but none were successful.  The 19th-century jurist Schweikart, no doubt following up millennia of ancient law derived from cosmo-theology, wrote to Gauss a Memorandum (below) of the first complete hyperbolic geometry, as “Astral Geometry,” where the geometry of the solar system was worked out through internal relationships between celestial bodies rather than by imposing a Cartesian-Euclidean plane. (Bonola, Non-Euclidean Geometry, 1912, p. 76)

This short Memorandum convinced Gauss to take the existence of non-Euclidean geometries seriously, developing differential geometry into the notion of curvature of a surface, one over Schweikart’s Constant.  This categorized the observed geometric trichotomy of hyperbolic, Euclidean, and elliptical geometries, distinguished by negative, null, and positive curvatures.  These geometries are perspectives of measurement – internal, universally embedding, and external – corresponding to the value-orientations of subjective, normative, and objective.  From within the Solar System, there is no reason to assume the ‘infinite’ Constant of Euclidean Geometry; one can instead work out the geometry around the planets, leading to an “Astral” geometry of negative curvature.  The question of the horizon of infinity in the universe, and therefore of paralleling, is a fundamental question of cosmology and theology, hardly one to be assumed away.  Yet, it may practically be conceived as the limit of knowledge in a particular domain space of investigation.  In fact, arising at a similar time as the Ancient Greeks (i.e. Euclid), the Mayans worked out a cosmology similar to this astral geometry, the ‘4-cornered universe’ (identical to Fig. 42 above), using circular time through modular arithmetic, only assuming the universal spatial measurement when measuring over 5,000 years of time.  The astral geometry of the solar system does not use ‘universal forms’ to ‘represent’ the solar system – rather, it describes the existing forms by the relation between the part and the whole of that which is investigated.  The Sacred Geometries of astrology have significance not because they are ‘perfectly ideal shapes’ coincidentally found in nature, but because they are the existing shapes and numbers found in the cosmos, whose gravitational patterns, i.e. internal geometry, determine the dynamics of climate and thus the conditions of life on Earth.

The error of Descartes can be found in his conception of mathematics as a purely universal subject, often inherited in the bias of ‘pure mathematics’ vs. ‘applied mathematics.’  Mathematics may be defined as methods of counting, which therefore find the universality of an object (‘does it exist in itself as 1 or more?’), but always in a particular context.  Thus, even as ‘generalized methods of abstraction,’ mathematics is rooted in concrete scientific problems as the perspectival position of an observer in a certain space.  Absolute measurement can only be found in the reduction of the space of investigation, as all parallel lines are collapsed in an elliptical geometry.  Always, the independence of dimensions in Cartesian Analysis is a presupposition given by the norms of the activity in question.  Contemporary with Descartes, Vico criticized this mathematically universal modern science as lacking the common-sense wisdom of the humanities, in favor of a science of rhetoric.  While rhetoric is often criticized as the art of saying something well over saying the truth, it is fundamentally the art of argumentation; thus, like mathematics as the art of measurement, it is not independent of the truth as the topic of what is under question.  The Greek-into-Roman word for Senatorial debate was Topology, from topos (topic) + logos (speech), thus using the numeral system of mathematics to measure the relationships of validation between claims made rhetorically concerning the public interest or greater good.  The science of topology itself studies the underlying structures (‘of truth’) of different topics under question.

Together, Rhetoric and Mathematics enable Statistics, the art of validation.  Ultimately, Statistics asks: ‘What is the probability an empirical claim is true?’

While it is often assumed the empirical claim must be ‘objective,’ as independent of the observer, quantum physics developing in Germany around WWI revealed otherwise.  When we perform statistics on claims of a subjective or normative nature, as commonly done in the human sciences, we must adjust the geometry of our measurement spaces to correspond to internal and consensual measurement processes.  In order to do justice to subjectivity in rhetorical claims, it may be that hyperbolic geometry is the proper domain for most measurements of validity in empirical statistics, although it is rarely used.  Edmund Husserl, a colleague of Hilbert (who was formulating the axiomatic treatment of Euclid by removing the 5th postulate), described in his Origins of Geometry how geometry is a culture’s idealizations about the world, and so its axioms can never be self-grounded, but only assumed based upon the problems-at-hand, so long as they are internally consistent, to be worked out from within an engaged activity of interest – survival and emancipation.  Geometry is the basis of how things appear, so it encodes a way of understanding time and moving within space, and is therefore conditioned on the embedded anthropology of a people, rather than being a human-independent universal ideal – how we think is how we act.  Thus, the hypothesis of equidistance at infinity of parallel lines is an assumption of independence of linear actions as the repeated trials of sample-testing in an experiment (‘Normality’).  Against the universalistic concept of mathematics, rooted in Euclid’s geometry, Husserl argued in The Crisis of the European Sciences for a concept of science, and therefore of verification by mathematics, grounded in the lifeworld, the way in which things appear through intersubjective and historical processes – hardly universal, this geometry is hyperbolic in its nature and particular to contextual actions.
Post WWII German thinkers, including Gadamer and Habermas, further developed this move in philosophy of science towards historical intersubjectivity as the process of Normativity.  The Geometry from which we measure the validity of a statement (in relation to reality) encodes our biases as the value-orientation of our investigation, making idealizations about the reality we question.  We cannot escape presupposing a geometry as there must always be ground to walk on, yet through the phenomenological method of questioning how things actually appear we can find geometries that do not presuppose more than the problem requires and through the hermeneutic method gain a historical interpretation of their significance, why certain presuppositions are required for certain problems.  Ultimately, one must have a critical approach to the geometry employed in order to question one’s own assumptions about the thing under investigation.

## Continuous Democracy System

The future of participatory direct democracy, as advocated by Bernie Sanders, lies in information systems of coordination that allow deep public opinion to be integrated within a whole reflexive administrative state.  The ideal of a fully adaptive and sensitive autonomous governance system can be called a continuous democracy system, since it samples the population's opinions and recompiles these in its inner communication system on a continuous basis, thereby adapting as circumstances change and as public opinion shifts by developing on an issue of public importance.

The basic operation of a direct democracy system is simply voting on a referendum or on a candidate for an office.  To perform these operations on a more continuous basis is to allow the public vote on a policy or candidate to change, perhaps before the official duration has been completed, or at least within shorter election cycles.  Such a system would crowdsource administrative decisions of reflection to the wider public, and could even include modifications.  Yet, there is still the problem of who sets the agenda, itself the function of electoral politics.  The general problem with the presently practiced populist and yearly cycle of electoral democracy is that the electoral system is often not sensitive enough to the preferences of the total population or to its changes over time.  This can be solved by including more frequent votes with lesser weight within government functioning, and more depth in the voting, as through partially ranked choices rather than single choices, to generate changing public systems from the wisdom of the crowds.  The choices for each deep vote are established through gaining a threshold of petitions.  Clearly, the official use of available virtual technologies can significantly improve populist democracy, allowing temporal and spatial sensitivity without changing its underlying process structures.
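One minimal way to sketch the "more frequent votes with lesser weight" idea is a time-decayed tally of repeated ranked ballots. Everything here (the Borda-style scoring, the decay factor, the option names) is an illustrative assumption, not a specification of any existing voting system:

```python
from collections import defaultdict

def tally(rounds, decay=0.5):
    """Aggregate repeated ranked votes, down-weighting older rounds.

    `rounds` is a list of voting rounds, oldest first; each round is a
    list of ballots, and each ballot is a partial ranking (best first).
    A ballot contributes Borda-style points, scaled by decay**age so
    that more recent rounds carry more weight.
    """
    scores = defaultdict(float)
    for age, ballots in enumerate(reversed(rounds)):
        weight = decay ** age
        for ballot in ballots:
            n = len(ballot)
            for rank, option in enumerate(ballot):
                scores[option] += weight * (n - rank)
    return sorted(scores, key=scores.get, reverse=True)

rounds = [
    [["A", "B"], ["A", "C"]],          # older round, half weight
    [["B", "A"], ["B", "C"], ["C"]],   # most recent round, full weight
]
print(tally(rounds))  # the shift toward B in the recent round dominates
```

The point of the sketch is only structural: the same ballots, re-polled frequently, let the aggregate shift with public opinion instead of freezing it for an election cycle.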

One may also begin to think of a more complex component to continuous democracy systems by conceiving of them as an Artificial-Intelligence system that samples the population in order both to represent the population’s aggregate deep opinion, as a pseudo-law, and to functionally coordinate the society through this functional communication system, an emergent economy.  Clearly, as an autonomous system, participation (and thus coordination) would be only optional, and so real economic transactions are unlikely, rendering the functions communicative rather than directive towards people’s actual behaviors, as with population control or commercial enforcement by the state.  Yet, integrations between this complex system and the real economic-commerce and state-administration can be made.

In this Complex Democracy system, the ideals of frequent voting and deep opinion can be realized to a further level, since it has less official validity and therefore fewer of the real institutional administrative checks that consume substantial human resources.  The underlying public opinion can be considered a quantum system, and hence a random variable with an underlying distribution.  While for the real component of complex democracy a single conclusive vote is the output, for the imaginary component the underlying distribution (a complex function) is the sought-after solution.  In order to properly use this Social Information AI system to solve real problems, it is important to recognize that the crowdsourcing of research to the population, as distributed cognitive loads, performs these underlying quantum-computing operations.  Its functional-system ‘code of operation’ is itself a pseudo or emergent legal-economy, interpreted both by the humans – for their quantum operations of cognition and communication – and by the main digital AI computer system that learns and evolves with each iteration of population sampling and recompiling.

I am presently developing an experimental virtual continuous complex democracy system with the migrant population in Honduras, in partnership with Foundation ALMA (\url{www.foundationalam.org}), to help migrants reintegrate into the places they fled by organizing to resolve the normative disputes in their communities and society that have caused such high local violence and national systems of violence.

\section{The Bias of 1-D Opinion Polling: Explaining the Low Polling Numbers for Candidate Beto O'Rourke}

In electoral opinion sampling, whether of candidates, policies, or values, it is commonplace to ask subjects yes/no questions: a respondent either chooses one person from a list of candidates or states whether he or she agrees or disagrees with a political statement. Such a question, though, carries only one layer of information and disregards the unique nature of an opinion, which includes not only the final choice – voting for a candidate or policy – but also the reasoning behind the choice, the “why?” behind the claim. Thus only the surface of the opinion manifold is measured by the yes/no questions of mass politics. This creates a bias in our statistical understanding of the population's political views, since it collapses the distribution of opinions into a single norm, leaving the impression of polarization, where people are either on the right or the left of an issue, with little sense of the commonalities or overlaps. Thus, when the political sphere appears polarized, it is more a problem of measurement than of the actual underlying viewpoints. To resolve this social-political problem of polarization, where the nation cannot seem to come to a common viewpoint, we must look at the depth of the opinion manifold by mapping out a system of opinions rather than a single norm.

We can use game theory to represent an opinion as an ordering of preferences, i.e. $A < B < C < D < E < F$. Where each choice-element of the preference set must be strictly ordered relative to every other, leaving a ranked list of choices, one has a strict ordering of preferences. This is the representation of opinion used in Arrow's theorem of social choice. Yet, without any allowable ambiguity, the result proves an equivalence between the aggregate social-choice methods of dictatorship (one person chooses the social good) and democracy (the majority chooses the social good). This explains the critical political observation that mass politics – based upon superficial opinions – often becomes fascist, where one personality dominates the national opinion to the exclusion of immigrant or marginal groups. This game-theoretic error of restricting preferences is equivalent to the recently noted behavioral-economic error of excluding irrationalities (e.g. risk-aversion) from micro utility-maximization. Instead, we can represent an opinion as a partial ordering of preferences rather than a strict ordering. An opinion is then represented as a tree graph, algebraically by $A \gg B$, $B \gg D$, $B \gg E$, $A \gg C$, and $C \gg F$, or as a tree data structure, formatted as \texttt{\{A: (B: (D,E), C: (F))\}} (i.e. JSON-like). The relationship of inclusion ($\gg$, i.e. $A \gg B$) can be interpreted as \say{A is preferred over B} or \say{B is the reason for A,} depending on whether one is looking at the incomplete ranking of choices or at the irrationality of certain value-claims. In micro-economics, this yields a non-linear hyperbolic functional relationship between individual opinion and the aggregate social choice, rather than a reductionist linear functional relationship.
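The tree representation above can be made concrete. A minimal sketch, using the candidate labels from the text: encode the opinion tree as a nested dictionary and derive every pairwise preference implied by transitive closure of the inclusion relation. Nodes in different branches (e.g. D and F) remain incomparable, which is exactly what makes the ordering partial rather than strict.

```python
def implied_preferences(tree):
    """Given an opinion tree as a nested dict, return every pair (x, y)
    with x >> y implied by the tree (the transitive closure of inclusion).
    Nodes in different branches stay incomparable: a partial order."""
    pairs = set()
    def walk(node, ancestors):
        for child, subtree in node.items():
            for ancestor in ancestors:
                pairs.add((ancestor, child))
            walk(subtree, ancestors + [child])
    walk(tree, [])
    return pairs

# The tree from the text: {A: (B: (D, E), C: (F))}
opinion = {"A": {"B": {"D": {}, "E": {}}, "C": {"F": {}}}}
prefs = implied_preferences(opinion)

print(("A", "D") in prefs)   # True: A >> D follows by transitivity
print(("D", "F") in prefs or ("F", "D") in prefs)  # False: incomparable
```

The five stated relations plus their transitive consequences are recovered, while no ordering between D, E, and F is ever asserted, so a voter can hold reasons for a choice without ranking the reasons against one another.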
In a hyperbolic space, we can represent each opinion-tree as a hyper-dimensional point (via a kernel density estimation) and apply commonplace statistical tools, such as linear regression or multi-dimensional Principal Component Analysis, yielding hyper-lines of best fit that describe the depth of the aggregate social choice.

This method of deep-opinion analysis is particularly useful for understanding electoral dynamics still in flux, as with the Democratic Primaries, where there are too many candidates for people to hold a strictly ranked preference over them all. In such an indeterminate thermodynamic system (such as a particle moving randomly along a line of preferences), there is an element of complexity due to the inherent stochastic ‘noise’ as people debate each candidate and change their minds, yet ultimately come to deeper rationalities for their opinions through the national communication mediums. Instead of trying to reduce this ‘noise’ to one primary-candidate choice so early in the democratic process, when the policies of the party are still being worked out – similar to measuring a hot quantum system (i.e. collapsing the wave-function of the particle's position) while it is still cooling into equilibrium – we can instead represent the probabilistic complexity of the preference distributions. For preference orderings of democratic candidates, this means that while the underlying rationality of an opinion (the deep levels of a tree) may not change much during an election cycle, the surface of the top candidate choice may change frequently with small amounts of new information. To make more predictive electoral-political models, we should therefore measure what is invariant (i.e. the deep structure), which is always missed when asking people for their top or strictly ranked preferences. A candidate may consistently be people's second choice and still end up polling at 0\%. If this ordering isn't strict, i.e. always below the top choice but above most others, the likelihood of this ‘2nd-place candidate’ polling close to 0\% is even higher.
Without the false assumption of deterministic processes, the surface measurement of the percentage of the population willing to vote for a candidate is not equivalent to the normative rationality of that candidate – the 0\% candidate may actually represent public views very well, although this cannot be expressed in 1-dimensional polling surveys. Thus, while the actual electoral voting does collapse the chaotic system of public opinion into a single choice aggregated over the electoral college, such measurement reduction is insignificant so early in a democratic process with fluctuating conditions. As a thermodynamic rule, to measure a high-entropy system we must use hyper-dimensional informational units.
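The hidden-2nd-place effect described above can be illustrated with toy data (the electorate and candidate labels are invented for the sketch): a candidate whom every voter ranks second registers 0\% under first-choice polling, while a rank-aware score such as a Borda count surfaces that candidate immediately.

```python
from collections import Counter

# Toy electorate of 100 voters: everyone ranks B second,
# but first choices split three ways.
ballots = (
    [["A", "B", "C"]] * 40 +   # 40 voters rank A first
    [["C", "B", "A"]] * 35 +   # 35 voters rank C first
    [["D", "B", "A"]] * 25     # 25 voters rank D first
)

# Surface measurement: first-choice polling.
first_choice = Counter(b[0] for b in ballots)
print(first_choice["B"])  # 0 — B never appears as a top choice

# Depth measurement: a Borda-style score over each full ranking
# (top of a 3-item ballot earns 2 points, second earns 1, last earns 0).
scores = Counter()
for ballot in ballots:
    for rank, candidate in enumerate(ballot):
        scores[candidate] += len(ballot) - 1 - rank

print(max(scores, key=scores.get))  # B — the consensus candidate
```

First-choice polling reports B at 0\%, yet B's Borda score (100) exceeds every rival's, matching the claim that the 1-dimensional survey discards exactly the invariant deep structure.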

The Democratic Primary candidate Beto O'Rourke is precisely such a hidden 2nd-place candidate thus far, who indeed was polling close to 0\% (now at 4\%) in the primary, although the votes he received in his Texas Senate run alone would place him near 3.5\% of total votes across both parties, assuming no one in any other state voted for him and that Trump is substituted for Sen. Ted Cruz. Due to risk-aversion, there is a tendency to vote for candidates who may win and to avoid those who may lose. This skews initial polling measurements toward the better-known candidates, since deciding upon the newer candidates early on appears a losing strategy until they have gained traction. Yet this presents a problem of risk-taking ‘first-movers’ in the transition of a new candidate to the front line. This explains only part of the low polling for Beto, since Pete Buttigieg is also affected by the same time-lag for newcomers. When a candidate introduces a change to the status quo, we would expect a similar behavioral risk-aversion and resultant time-lag while the social norm is in the process of changing. While Pete's gay-rights policy is already the norm for the Democratic Party, Beto's immigration-asylum policy is not, given Obama's record of a high number of deportations; thus we would expect Beto's polling numbers to grow more slowly at first than Pete's. Complex information supporting this hypothesis is available by comparing the differential polling between the General Election and the Primary Election – Beto was found to be the Democratic candidate most likely to win against President Trump, yet out of the top 7 primary candidates he is the least likely to be picked in the primary, even though most Democrats rank ‘winning against Donald Trump’ as their top priority. This inconsistency is explained by the irrationality of vote preferences as, thus far, only partially orderable (i.e. not strict).
Within the Primary race, people who may support Beto’s policies will not yet choose him as their candidate because of the newcomer and status-quo time-lag biases, although they believe he may be most likely to win over the long-run of the general election. In the General Election, Beto is the 2nd-place candidate across both parties under a rule of