The Functional Communicativity of COVID-19

Contents

1 Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity
2 Micro-Scale: Computational Biology of RNA Sequence
3 Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction
4 Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles

Abstract
A multi-scale entropic analysis of COVID-19 is developed on the micro-biological, meso-social, & macro-astrological levels to model the accumulation of errors during processes of self-replication within the immune response, communicative functionality within social mitigation strategies, and mutation/genesis within radiation conditioning from solar-cosmic cycles.

1 Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity

To enable population-wide compliance with public-safety health measures, the cause of the high-risk behaviors associated with breaking quarantine must be identified and treated systemically. This approach thus conceives of infection as having its pathogenesis in the behaviors that induce it, which are themselves caused by the cognitive-dysfunctional residuals of a malfunctioning underlying social-systemic process. City and State variation in infection rate-curves (i.e. time-based differential distributions) can be explained by the presence or absence (and degree) of different high-risk behaviors, and there may be learning opportunities from successful cities or states. These differences between platial socialities may be classified according to the functional-communicativity of the underlying communication system, a complex variable (λ = σ + ωi), which may be measured inversely by a real measure of the economic dysfunctionality (i.e. 'shut-down') and an imaginary measure of the communal miscommunication (i.e. 'social-distancing'). The present crisis derives from the divergence between the required policy λ and the actual sampled λ̂.
The communicable SARS-CoV-2 viral spread within different localities should be analyzed as a message within a communication system, mutating based upon the properties of the local communication system that propagates it. The two properties of a communication state-system can be expressed as a complex number (λ), with a real component as the total functionality (σ) of the communication system that propagates it and an imaginary component as the communicability (ω) of the lifeworld within which it is embedded, i.e. λ = σ + ωi. For any particular state or place of analysis, this analytical format thus encodes the percent of a shut-down (i.e. dysfunctionality) and the social-distancing policy in place (i.e. non-communicability), both of which can be calculated as a combination of the policy and the percent-compliance. While a communication system may thus be represented as a complex-variable system with a real system functionality and an imaginary lifeworld communicativity, any of its sub-systems or states, i.e. at lower levels of analysis, can likewise be represented by a complex number. Our public health response system must thus seek to reduce λ² at all levels of analysis and components of interaction.
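As a minimal sketch of this bookkeeping (the class name, the policy/compliance inputs, and the product rule for combining them are illustrative assumptions, not prescriptions from the text), a locality's λ can be composed from its shut-down and distancing policies and compared against the required policy λ:

```python
# Minimal sketch: a locality's functional-communicativity as a complex number
# lambda = sigma + omega*i, with sigma read off the shut-down (dysfunctionality)
# and omega off the social-distancing policy (non-communicability).
# Policy * compliance as the combination rule is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Locality:
    name: str
    shutdown_policy: float      # fraction of the economy ordered shut (0..1)
    distancing_policy: float    # fraction of contacts ordered cut (0..1)
    compliance: float           # observed percent-compliance (0..1)

    def sampled_lambda(self) -> complex:
        sigma = self.shutdown_policy * self.compliance
        omega = self.distancing_policy * self.compliance
        return complex(sigma, omega)

def divergence(required: complex, sampled: complex) -> float:
    """Magnitude of the gap between the required policy lambda and the sampled lambda-hat."""
    return abs(required - sampled)

nyc = Locality("NYC", shutdown_policy=0.8, distancing_policy=0.9, compliance=0.6)
required = complex(0.8, 0.9)            # the policy as written
sampled = nyc.sampled_lambda()          # the policy as actually lived
print(sampled, abs(sampled) ** 2, divergence(required, sampled))
```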
The genesis of SARS-CoV-2, with its internal code of a precise self-check mechanism for reducing errors in RNA replication and its external attributes of ACE2-binding proteins, is an entropy-minimizing solution to the highly functionally-communicative, interconnected human societies embedded within the high-entropic geophysical conditions of greater cosmic-radiation atmospheric penetration, with radioactive C-14 residues, due to the present solar-cycle modulation. This background condition explains the mutation differences between SARS-1 & SARS-2, where the latter has had a more persistent environment of C-14 in which to evolve steadily into stable forms. The counter-measures against the spread of the virus, whether therapeutics, vaccines, or social mitigation strategies, are thus disruptions (entropy-inducing) to these evolved entropy-reducing mechanisms within the intra-host replication and inter-host communicability processes.
The point of origin for understanding the spread of the virus in a society or subdivision is its communicative functionality, which may be expressed as a complex variable of the real functionality of the social system and the imaginary communicativity of its lifeworld, the two attributes which are diminished by the shut-down and social-distancing measures. Conditions of high communicativity, such as New York City, will induce mutations with greater ACE2-binding proteins, i.e. communicability, as the virus adapts to its environment, while conditions of high functionality will induce error-minimization in replication. These two micro- & meso-scale processes of replication and communicability (i.e. intra- & inter-host propagation) can be viewed together from the thermodynamic-informatic perspective of the viral RNA code as a message – refinement and transmission – itself initialized ('transcribed') by the macro conditions of the Earth's spatio-temporality (i.e. gravitational fluctuation). This message is induced, altered, amplified spatially, & temporalized by the entropic functional-communicative qualities of its environment, which it essentially describes inversely.

2 Micro-Scale: Computational Biology of RNA Sequence

As with other viruses of its CoV family, the RNA of COVID-19 encodes a self-check on the duplication of its code by the host cell's polymerase, thereby ensuring it is copied with little error. With little replication-error, the virus can be replicated for many more rounds (an exponential factor) without the degeneration that will ultimately stop the replication-process. Compare an example of t = 3 rounds for a normal virus with t = 7 for a Coronavirus under simple exponential replication, with viral count C by replication rounds t given as C(t) = eᵗ: C(3) = e³ ≈ 20.1 vs. C(7) = e⁷ ≈ 1096.6.
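A minimal numeric sketch of this comparison (the round counts and the pure-exponential model are taken directly from the example above; treating the final round as a hard cut-off is an illustrative assumption):

```python
import math

def viral_count(t_max: int) -> float:
    """Simple exponential replication count C(t) = e^t, evaluated at the round
    where accumulated copying errors would halt replication (t_max)."""
    return math.exp(t_max)

print(viral_count(3))  # ~20.1  : a virus degenerating after 3 rounds
print(viral_count(7))  # ~1096.6: a proof-reading coronavirus lasting 7 rounds
```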
Let us consider an example where a single RNA can create N−1 copies of itself before its code is degenerated beyond even replicative encoding, i.e. the binding-segment code directing RNA replicase to replicate. The original RNA code is given by N₀, with each subsequent code given by Nₜ, where t is the number of times of replication. Thus, t counts the internal time of the "life" of the virus, as its number of times of self-replication. The relevant length of a sequence can be given as the number of base-pairs that will be replicated in the next round of replication. This will be expressed as the zero-order distance metric, µ⁰(Nₜ) = |Nₜ|.
The errors in the replicative process at time t will be given by DISCR_ERR(Nₜ), for "discrete error", and will be a function of t, given thus as ε(t). Clearly, |Nₜ| = |Nₜ₊₁| + ε(t). In all likelihood, ε(t) is a decreasing function, since with each round of replication the errors will decrease the number of copiable base-pairs, and yet with an exceptionally random alteration of a stable insertion, the error could technically be negative. There are two types of these zero-order errors: ε⁻, the number of pre-programmed deletions occurring due to the need for a "zero-length" sequence segment to which the RNA polymerase binds and is thereby directed to replicate "what is to its right in the given reading orientation", and ε⁺, the non-determined erroneous alterations, whether deletions, changes, or insertions. The total number of errors at any single time will be their sum, given thus as ε(t) = ε⁻(t) + ε⁺(t). A more useful error-metric may be the proportional error, PROP_ERR(Nₜ), since it is likely to be approximately constant across time; it will be given by the time-function ε⁰(t), and can similarly be broken into determined (−) and non-determined (+) errors as ε⁰(t) = ε⁰⁻(t) + ε⁰⁺(t). Expressed thus in proportion to the (zero-order) length of the RNA sequence, ε⁰(t) = 1 − |Nₜ₊₁|/|Nₜ| = ε(t)/|Nₜ|.
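A small sketch of these two error metrics, assuming a toy list of sequence lengths |Nₜ| (the lengths below are invented for illustration; only the formulas follow the text):

```python
# Toy illustration of the zero-order error metrics: DISCR_ERR as the drop in
# copiable length between rounds, PROP_ERR as that drop relative to the current
# length. The example lengths are invented; only the formulas follow the text.

lengths = [10000, 9985, 9971, 9958, 9946]   # |N_t| per replication round t

def discr_err(t: int) -> int:
    """epsilon(t) = |N_t| - |N_{t+1}|"""
    return lengths[t] - lengths[t + 1]

def prop_err(t: int) -> float:
    """epsilon^0(t) = 1 - |N_{t+1}|/|N_t| = epsilon(t)/|N_t|"""
    return discr_err(t) / lengths[t]

for t in range(len(lengths) - 1):
    print(t, discr_err(t), round(prop_err(t), 5))
```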
The "length" (of internal time) of an RNA code, Nₜ, in terms of the number of times it itself may be copied before it is degenerated beyond replication, is given as the first-order "distance" metric µ¹(Nₜ) = N(Nₜ). For our generalized example, µ¹(N₀) = N(N₀) = N. This may be expressed as the sum of all errors: N = ∑_{t=0}^{∞} ε(t) = ∑_{t=0}^{t_max} ε(t) = ∑_{t=0}^{N} ε(t).
We are interested in the "length" (of internal space) of the RNA code, the second-order distance metric µ²(N₀), as the number of copies it can make of itself, including the original copy in the counting and the children of all children viruses. This is the micro-factor of self-limitation of the virus, to be compared ultimately to the meso-factor of aerosolized half-life and the macro-factor of survival on surfaces.
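A recursive sketch of this second-order count (the branching rule – each copy spawning one child per remaining round and children inheriting the rounds left after their birth – is an illustrative assumption used only to make the counting concrete):

```python
# Toy count in the spirit of mu^2: the original plus all children of children.
# The branching rule below is an invented assumption for illustration.

from functools import lru_cache

@lru_cache(maxsize=None)
def total_copies(rounds_left: int) -> int:
    """Count the original copy plus every descendant, assuming a code with
    r rounds left spawns one child per remaining round, each child inheriting
    the rounds remaining after its birth."""
    if rounds_left <= 0:
        return 1                       # only the original, no further copies
    return 1 + sum(total_copies(r) for r in range(rounds_left))

print(total_copies(3))   # small N: few total copies
print(total_copies(7))   # larger N: a much larger total lineage
```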
These errors in replication, compounded by radiation exposure in the atmosphere, add up to mutations of the virus, which, by natural selection at the corporeal, social-communicative, and environmental (i.e. surface and aerosolized) levels, have produced stable new forms of the virus.
Comparing SARS-1 & SARS-2, the former had a higher mortality rate and the latter has a higher transmission rate. There is certainly an inverse relationship between mortality and transmission at the meso-level, as fatality prevents transmission, but there may also be inherent differences at the micro-level in methods of replication leading to these different outcomes. Mortality is due to intra-host replication exponentiation – whereby so many copies are made that the containing cells burst – while communicability is due to the inter-host stability of the RNA code in the air and on organic surfaces, where it is subject to organic reactions and cosmic radiation.

3 Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction

We can apply the theory of communicativity to studying the natural pathology (disease) and the social pathology (violence) of human society through René Girard's Theory of Mimetics¹. Viewing a virus and a dysfunctional social system under a single conceptual unity (Mimetics) of a communicative pathology, the former 'spreads' by communication while the latter is the system of communication. Yet, different types of communication systems can lead to higher outbreaks of a communicable disease. Thus, the system of communication is the condition for the health outcomes of communicable disease. Beyond merely 'viruses,' a dysfunctional communication system unable to coordinate actions to distribute resources effectively within a population can cause other pathologies such as violence and poverty. From this integrated perspective, these 'social problems' may themselves be viewed as communicable diseases in the sense of being caused, rather than 'spread,' by faulty systems of communication. Since violence and poverty are themselves health concerns, such a re-categorization is certainly permissible. The difference between these communicable diseases of the micro and macro levels is that a virus is a replication script read and enacted by human polymerase in a cell's biology, while a dysfunctional social system is a replication script read and enacted by human officials in a society.
We can also thereby view health through the more generalized political-economy lens as the quantity of life a person has, beyond merely the isolated corporeal body, including also the action-potentialities of the person, such as security from harm and the capacity to use resources (i.e. via money) for one's own survival.
It is clear that ’money’ should be the metric of this ’bio-quantification’ in the
sense that someone with more money can create healthier conditions for life and
even seek better treatment, and similarly a sick person (i.e. deprived of life)
should be given more social resources (i.e. money) to reduce the harm. Yet, the
economic system fails to accurately price and distribute life-resources due to its
nodal premise prescribed by capitalism whereby individuals, and by extension
their property resources, are not social (as in distributively shared), but rather
isolated & alienated for individual private consumption.
This critique of capitalism was first made by Karl Marx, in his advocacy of socialism, as an ontological critique of the lack of recognition of the social being of human existence in the emerging economic sciences of liberalism. In the 17th century, Locke conceived of the public good as based upon an individual right to freedom, thereby endowing the alienated (i.e. private) nature with the economic right to life. This moral reasoning was based on the theological premise that the capacity for reason was not a public-communicative process, but rather a private faculty based only upon an individual's relationship with God. Today we may understand Marx's critique of Lockean liberalism from the deep-ecology perspective that sociality is an ontological premise of biological analysis, due to both the relationship of an organism grouping to its environment and the in-group self-coordinating mechanism with its own type. Both of these aspects of a biological group, in-group relationships (H⁺(G) : G → G) and out-group relationships (H⁻(G) = {H⁻₋(G) : Gᶜ → G, H⁻₊(G) : G → Gᶜ})
may be viewed as communicative properties of the group, as in how the group
communicates with itself and with not-itself. In the human-capital model of
economic liberalism, the group is reduced to the individual economic agent that
must act alone, i.e. an interconnected system of capabilities, creating thereby an
enormous complexity in any biological modeling from micro-economic premises
to macro-economic outcomes. If instead we permit different levels of group
analysis, where it is assumed a group distributes resources within itself, with
the particular rules of group-distribution (i.e. its social system) requiring an
analysis of the group at a deeper level that decomposes the group into smaller
individual parts, such a multi-level model has a manageable complexity. The
purpose is therefore to study Communicativity as a property of Group Action.
A group is a system of action coordination functionally interconnecting sub-groups. Each group must act as a whole, in that the inverse branching process of coordination adds up all actions towards the fulfillment of a single highest good, the supreme value-orientation. Therefore, a group is represented by a tree, whose nodes are the coordination actions (intermediate groupings), whose edges are the value produced, and whose leaves are the elemental sub-groups at the level of analysis. The total society can be represented as a class-system hierarchy of group orderings, with primary groups of individuals. The distribution of resources within a group follows the branching orientation (σ⁻) from root to leaves as resources are divided up, while coordination follows the inverse orientation (σ⁺) from leaves to root as elemental resources are coordinated in production to produce an aggregate good.
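A minimal sketch of this tree representation (the equal-split rule for distribution and the summation rule for coordination are illustrative assumptions, as are the group names):

```python
# Minimal sketch of a group as a tree: resources flow root-to-leaves (sigma-),
# coordinated value flows leaves-to-root (sigma+). Equal splitting and simple
# summation are illustrative assumptions, not rules taken from the text.

from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    value: float = 0.0                    # value produced at this node's edge
    subgroups: list["Group"] = field(default_factory=list)

def distribute(group: Group, resources: float) -> dict:
    """sigma-: divide resources from root to leaves (equal split per branch)."""
    if not group.subgroups:
        return {group.name: resources}
    share = resources / len(group.subgroups)
    allocation = {}
    for sub in group.subgroups:
        allocation.update(distribute(sub, share))
    return allocation

def coordinate(group: Group) -> float:
    """sigma+: sum produced value from leaves back up to the root."""
    return group.value + sum(coordinate(sub) for sub in group.subgroups)

society = Group("society", subgroups=[
    Group("city_A", value=2.0, subgroups=[Group("household_1", value=1.0),
                                          Group("household_2", value=0.5)]),
    Group("city_B", value=3.0),
])
print(distribute(society, 100.0))   # resources divided down the tree
print(coordinate(society))          # aggregate good coordinated up the tree
```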
In the parasite-stress theory of sociality², in-group assortative sociality arose due to the stress of parasites in order to prevent contagion. There is thus a causal
equivalence between the viral scripts of replication and the social structures
selected for by the virus as the optimal strategy of survival. Violence too has
the same selection-capacity since existentially conflicting groups are forced to
isolate to avoid the war of revenge cycles. This process is the same as the spread
of communicable diseases between groups – even after supposed containment
of a virus, movement of people between groups can cause additional cycles of
resurgence.
Racism is an example of non-effective extrapolation of in-grouping based on non-essential categories. As a highly contagious and deadly disease, COVID-19 selects on the macro-social level for non-racist societies via natural selection, since racist societies spend too many resources organizing in-group social structure along non-essential characteristics, such as race, and thus have few reserves left to reorganize along the essential criteria selected for by the disease (i.e. segregating those at-risk). Additionally, racism prevents resource sharing between the dominant group and the racially marginalized or oppressed group, and thus limits the transfer of scientific knowledge, in addition to other social-cultural resources, since what the marginalized group knows to be true is ignored.
With a complex-systems approach to studying the communicability of the virus between groups (i.e. different levels of analysis), we can analyze the transmission between both persons and segregated groups (i.e. cities or states) to evaluate both social-distancing and shut-down policies. A single mitigation strategy can be represented as the complex number λ = σ + ωi, where σ is the dysfunctionality of the social system (percent shut-down) and ω is the periodicity of the shut-down. We can include sd for social distance as a proportion of the natural radii given by the social density. The critical issue now is mistimed reopening policies, whereby physical communication (i.e. travel) between peaking and recovering groups may cause resurgences of the virus, which can be complicated by reactivation post-immunity and the threat of mutations producing strains resistant to future vaccines. This model thus considers the long-term perspective of social-equilibrium solutions as mixed strategies between socialism and capitalism (i.e. social distancing and systemic shut-downs) to coronaviruses as a semi-permanent condition of the ecology of our time.
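As a small illustrative sketch (the parameter values, the phase labels, and the rule of flagging reopening when a relaxed group neighbors a peaking one are assumptions made for demonstration, not results from the text):

```python
# Illustrative sketch: mitigation strategies as complex numbers, plus a social-
# distance fraction sd, with a naive flag for mistimed reopening between groups.
# The phase labels and the flagging rule are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class GroupStrategy:
    name: str
    sigma: float        # percent shut-down (system dysfunctionality)
    omega: float        # periodicity of the shut-down
    sd: float           # social distance as a proportion of natural radii
    phase: str          # "peaking" or "recovering" (simplified epidemic phase)

    @property
    def lam(self) -> complex:
        return complex(self.sigma, self.omega)

def mistimed_reopening(a: GroupStrategy, b: GroupStrategy) -> bool:
    """Flag travel between a peaking group and a recovering group that has
    already relaxed its strategy (small |lambda|)."""
    peaking, recovering = sorted([a, b], key=lambda g: g.phase != "peaking")
    return peaking.phase == "peaking" and abs(recovering.lam) < 0.5

city = GroupStrategy("city", sigma=0.7, omega=0.5, sd=0.8, phase="peaking")
state = GroupStrategy("neighboring_state", sigma=0.2, omega=0.1, sd=0.3, phase="recovering")
print(mistimed_reopening(city, state))   # True: reopening here risks resurgence
```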
¹ VIOL-SAC.
² fincher_thornhill_2012.

4 Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles

The genesis of COR-VIR is by mutations (and likely reassortment) induced by a burst of solar-flare radiation and a conditioning by cosmic radiation, each with different effects on the viral composition. Comparison with SARS-1 (an outbreak immediately after a solar maximum) reveals that solar radiation (i.e. UV-C) from flares & CMEs, more frequent and of higher intensity during solar maxima yet also present during minima, is responsible for the intensity (mortality rate) of the virus, while cosmic radiation, enabled by the lower sunspot count that decreases the ozone in the atmosphere normally shielding the Earth's surface from radiation, gives the virus a longer duration within and on organic matter (SARS-2), likely through mutation by radioactive C-14 created by cosmic-radiation interaction with atmospheric nitrogen. The increased organic surface radioactivity is compounded by the ozone-reduction due to N₂ emissions concurrent with "Global Warming". The recent appearance of all coronaviruses in the last 5 solar cycles is likely due to a global minimum within a hypothetical longer cosmic-solar cycle (∼25 solar cycles) that modulates the relative sunspot count of each solar cycle, and has been linked to historical pandemics. A meta-analysis has detected such a frequency over the last millennia with global pandemics³. The present sun cycle, 25, beginning with a minimum coincident with the first SARS-2 case of COVID-19, has the lowest sunspot count in recorded history (i.e. a double or triple minimum). Likely, this explains the genesis of the difference in duration and intensity between SARS-1 & SARS-2.
This longer solar-cosmic cycle that modulates the relative sunspot count of a solar cycle, the midpoint of which is associated with global pandemics, has recently been measured at 208 years by C-14 time-cycle analysis, and is itself modulated by a 2,300-year cycle. These time-cycles accord with the (perhaps time-varying) Mayan Round calendar: 1 K'atun = 2 solar cycles (∼20 years); 1 May = 13 K'atun (∼256 years); 1 B'ak'tun = 20 K'atun (∼394 years); 1 Great Cycle = 13 B'ak'tun (∼5,125 years). Thus, the 208-year cycle is between 1/2 B'ak'tun (∼197 years) and 1 May (∼256 years, 13 K'atuns). It is likely the length of 25 sun cycles, the same as the May cycle, yet has decreased in length over the last few thousand years (perhaps along with sunspot counts). The 2,300-year cycle is ∼6 B'ak'tuns (∼2,365 years), constituting almost half of a Great Cycle (13 B'ak'tuns). We are likely at a triple minimum in sunspot count from all 3 solar-cosmic cycles, at the start of the first K'atun (2020) of the beginning of a new Great Cycle (2012), falling in the middle of the May (associated with crises).
The entropic characterization of the pathogenesis as prolonged radioactivity – low entropic conditioning of high entropy – leads to the property of high
durability on organic matter and stable mutations.

The Art of Argumentation-Making: Statistics as Modern Rhetoric

The process of statistical measurement, used to make precise the evaluation of a claim, relies upon our assumptions about the sampling measurement process and the empirical phenomena measured. The independence of the sampling measurements leads to the normal distribution, which allows the confidence of statistical estimations to be calculated. This is the metric used to gauge the validity of the tested hypothesis and therefore of the empirical claim proposed. While usually the question of independence of variables arises in relation to the different quantities measured for each repeated sample, we ask now about the independence of the measurement operation from the measured quantity, and thus the effect of the observer, i.e. subjectivity, on the results found, i.e. objectivity. When there is an interaction between the observing subjectivity and the observed object, the normal distribution does not hold, and thus the objective validity of the sampling test is called into question. Yet, this is the reality of quantum (small) measurements and of measurements in the social world. If we consider the cognitive bias of decreasing marginal utility, we find that samples of marginal utility will decrease with each consumption of a good, making the discovery of an underlying objective measurement of the subjective preference impossible. This assumption of independence of the Measurer from the Measurement is inherited from Descartes.
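A small simulation sketch of this point (the decay rate and noise level are invented for illustration): repeated "samples" of marginal utility that shrink with each prior measurement are not independent and identically distributed, so their mean does not estimate any fixed underlying preference.

```python
# Sketch: when each measurement changes the quantity being measured (here,
# marginal utility decaying with every prior consumption), the samples are not
# i.i.d., so normal-theory confidence around the sample mean loses its meaning.
# The decay rate and noise scale below are invented for illustration.

import random
from statistics import mean

random.seed(0)

def independent_samples(true_value: float = 10.0, noise: float = 1.0, n: int = 100) -> list[float]:
    """Repeated measurements of a fixed quantity: the i.i.d. textbook case."""
    return [random.gauss(true_value, noise) for _ in range(n)]

def marginal_utility_samples(initial: float = 10.0, decay: float = 0.9,
                             noise: float = 1.0, n: int = 100) -> list[float]:
    """Each 'measurement' (consumption) lowers the utility being measured."""
    samples, utility = [], initial
    for _ in range(n):
        samples.append(random.gauss(utility, noise))
        utility *= decay          # the act of measuring shifts the quantity
    return samples

print(mean(independent_samples()))       # clusters near the fixed true value
print(mean(marginal_utility_samples()))  # depends on the measurement history
```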

Descartes created the modern mathematical sciences through the development of a universal mathematics that would apply to all the other sciences to find certain validity with exactitude and a rigor of proof, essays toward which can be found in his early writings developing these subject-oriented reflections. In his Meditations, one finds two 'substances' clearly and distinctly after his 'doubting of everything, to know what is true' – thinking & extension. This separation of thinking and extension places measurement as objective, without acknowledging the perspective, or reference frame, of the subjective observer, leading to the formulation of the person as 'a thinking thing,' through cogito, ergo sum, 'I think, therefore I am.' Just as with the detachment of mathematics from the other sciences – a pure universal science – and therefore from the concrete particularity of scientific truth, the mind becomes disconnected from the continuum of reality (i.e. 'the reals,' cf. Cantor) of the extended body, as a subjectivity infinitely far from objectivity yet able to measure it perfectly. This would lead to the Cartesian Plane of XY independence as a generalization of Euclidean Geometry from the 2D Euclidean Plane, where the parallel (5th) postulate was retained:

Euclid's 5th Postulate: For an infinitely extended straight line and a point outside it, there exists only one parallel (non-intersecting) line going through the point.

This became the objective coordinate system of the extended world, apart from the subjective consciousness that observed each dimension in its infinite independence, since it was itself independent of all extended objects of the world. All phenomena, it was said, could be embedded within this geometry to be measured using the Euclidean-Cartesian metrics of distance. For centuries, attempts were made to prove this postulate of Euclid, but none were successful. The 19th-century jurist Schweikart, no doubt following up millennia of ancient law derived from cosmo-theology, wrote to Gauss a Memorandum (below) of the first complete hyperbolic geometry as "Astral Geometry," in which the geometry of the solar system was worked out by internal relationships between celestial bodies rather than through imposing a Cartesian-Euclidean plane.

[Figure: Schweikart's Memorandum to Gauss (p. 76, Non-Euclidean Geometry, Bonola, 1912)]

This short Memorandum convinced Gauss to take the existence of non-Euclidean geometries seriously, developing differential geometry into the notion of curvature of a surface, one over Schweikart's Constant. This categorized the observed geometric trichotomy of hyperbolic, Euclidean, and elliptical geometries, distinguished by negative, null, and positive curvatures. These geometries are perspectives of measurement – internal, universally embedding, and external – corresponding to the value-orientations of subjective, normative, and objective. From within the Solar System, there is no reason to assume the 'infinite' Constant of Euclidean Geometry; one can instead work out the geometry around the planets, leading to an "Astral" geometry of negative curvature. The question of the horizon of infinity in the universe, and therefore of paralleling, is a fundamental question of cosmology and theology, hardly one to be assumed away. Yet, it may practically be conceived as the limit of knowledge in a particular domain space of investigation. In fact, arising at a similar time as the Ancient Greeks (i.e. Euclid), the Mayans worked out a cosmology similar to this astral geometry, the '4-cornered universe' (identical to Fig. 42 above), using circular time through modular arithmetic, only assuming the universal spatial measurement when measuring over 5,000 years of time. The astral geometry of the solar system does not use 'universal forms' to 'represent' the solar system – rather, it describes the existing forms by the relation between the part and the whole of that which is investigated. The Sacred Geometries of astrology have significance not because they are 'perfectly ideal shapes' coincidentally found in nature, but because they are the existing shapes and numbers found in the cosmos, whose gravitational patterns, i.e. internal geometry, determine the dynamics of climate and thus the conditions of life on Earth.

The error of Descartes can be found in his conception of mathematics as a purely universal subject, often inherited in the bias of 'pure mathematics' vs. 'applied mathematics.' Mathematics may be defined as methods of counting, which therefore find the universality of an object ('does it exist in itself as 1 or more?'), but always in a particular context. Thus, even as 'generalized methods of abstraction,' mathematics is rooted in concrete scientific problems, as the perspectival position of an observer in a certain space. Absolute measurement can only be found in the reduction of the space of investigation, as all parallel lines are collapsed in an elliptical geometry. Always, the independence of dimensions in Cartesian Analysis is a presupposition given by the norms of the activity in question. In the generation after Descartes, Vico criticized his mathematically universal modern science as lacking the common-sense wisdom of the humanities, in favor of a science of rhetoric. While rhetoric is often criticized as the art of saying something well over saying the truth, it is fundamentally the art of argumentation, and thus, like Mathematics as the art of measurement, it is not independent from the truth as the topic of what is under question. The Greek into Roman word for Senatorial debate was Topology, which comes from topos (topic) + logos (speech), thus using the numeral system of mathematics to measure the relationships of validation between claims made rhetorically concerning the public interest or greater good. The science of topology itself studies the underlying structures ('of truth') of different topics under question.

Together, Rhetoric and Mathematics enable Statistics, the art of validation. Ultimately, statistical questions ask 'What is the probability that an empirical claim is true?'

While it is often assumed that the empirical claim must be 'objective,' as independent of the observer, quantum physics developing in Germany around WWI revealed otherwise. When we perform statistics on claims of a subjective or normative nature, as commonly done in the human sciences, we must adjust the geometry of our measurement spaces to correspond to internal and consensual measurement processes. In order to do justice to subjectivity in rhetorical claims, it may be that hyperbolic geometry is the proper domain for most measurements of validity in empirical statistics, although it is rarely used. Edmund Husserl, a colleague of Hilbert, who was formulating the axiomatic treatment of Euclid by removing the 5th postulate, described in his Origin of Geometry how Geometry is a culture's idealizations about the world, so that its axioms can never be self-grounded, but only assumed based upon the problems-at-hand, as long as they are internally consistent, to be worked out from within an engaged activity of interest – survival and emancipation. Geometry is the basis of how things appear, so it encodes a way of understanding time and moving within space, and is therefore conditioned on the embedded anthropology of a people rather than being a human-independent universal ideal – how we think is how we act. Thus, the hypothesis of equidistance at infinity of parallel lines is an assumption of independence of linear actions, as with the repeated trials of sample-testing in an experiment ('Normality'). Against the universalistic concept of mathematics, rooted in Euclid's geometry, Husserl argued in The Crisis of the European Sciences for a concept of science, and therefore of verification by mathematics, grounded in the lifeworld, the way in which things appear through intersubjective & historical processes – hardly universal, this geometry is hyperbolic in its nature and particular to contextual actions. Post-WWII German thinkers, including Gadamer and Habermas, further developed this move in the philosophy of science towards historical intersubjectivity as the process of Normativity. The Geometry from which we measure the validity of a statement (in relation to reality) encodes our biases as the value-orientation of our investigation, making idealizations about the reality we question. We cannot escape presupposing a geometry, as there must always be ground to walk on, yet through the phenomenological method of questioning how things actually appear we can find geometries that do not presuppose more than the problem requires, and through the hermeneutic method gain a historical interpretation of their significance – why certain presuppositions are required for certain problems. Ultimately, one must have a critical approach to the geometry employed in order to question one's own assumptions about the thing under investigation.

Continuous Democracy System

The future of participatory direct democracy, as advocated by Bernie Sanders, lies in information systems of coordination that allow deep public opinion to be integrated within a whole reflexive administrative state. The ideal of a fully adaptive and sensitive autonomous governance system can be called a continuous democracy system, since it samples the population's opinions and recompiles these in its inner communication system on a continuous basis, thereby adapting as circumstances change and as public opinion shifts by developing on an issue of public importance.

The basic operation of a direct democracy system is simply voting on a referendum or a candidate for an office. To perform these operations on a more continuous basis is to change the public vote on a policy or candidate, perhaps before the official duration has been completed, or at least within shorter election-cycles. Such a system would crowdsource administrative decisions of reflection to the wider public, and could even include modifications. Yet, there is still the problem of who sets the agenda, itself the function of electoral politics. The general problem with the presently practiced populist and yearly cycle of electoral democracy is that the electoral system is often not sensitive enough to the preferences of the total population or to its changes with time. This can be solved by including more frequent votes with lesser weight within government functioning, and more depth in the voting, as through partially ranked choices rather than single choices, to generate changing public systems from the wisdom of the crowds. The choices for each deep vote are established through gaining a threshold of petitions. Clearly, the official use of available virtual technologies can significantly improve populist democracy to allow temporal and spatial sensitivity without changing its underlying process structures.

One may also begin to think of a more complex component to continuous democracy systems by conceiving of it as an Artificial-Intelligence system that samples the population in order both to represent the population's aggregate deep opinion, as a pseudo-law, and to functionally coordinate the society through this functional communication system, an emergent economy. Clearly, as an autonomous system, participation (and thus coordination) would only be optional, and so real economic transactions are unlikely, rendering the functions communicative rather than directive towards people's actual behaviors, as with population control or commercial enforcement by the state. Yet, integrations between this complex system and the real economic commerce and state administration can be made.

In this Complex Democracy system, the ideals of frequent voting and deep opinion can be realized to a further level, since it has less official validity and therefore fewer of the real institutional administrative checks that consume so many human resources. The underlying public opinion can be considered to be a quantum system, and hence a random variable with an underlying distribution. While for the real component of complex democracy a single conclusive vote is the output, for the imaginary component of complex democracy the underlying distribution (a complex function) is the sought-after solution. In order to properly use this Social Information AI system to solve real problems, it is important to recognize that the crowdsourcing of research to the population, as distributed cognitive loads, performs these underlying quantum-computing operations. Its functional-system 'code of operation' is itself a pseudo or emergent legal-economy, interpreted both by the humans – for their quantum operations of cognition and communication – and by the main digital AI computer system that learns and evolves with each iteration of population sampling and recompiling.

I am presently developing an experimental virtual continuous complex democracy system with the migrant population in Honduras, in partnership with Foundation ALMA (www.foundationalam.org) to help them reintegrate into the places they fled by helping them organize to solve the normative disputes in their communities and society that have caused such high local violence and national systems of violence.

The Bias of 1-D Opinion Polling: Explaining the Low Polling Numbers for Candidate Beto O’Rourke

In Electoral Opinion Sampling, whether of candidates, policies, or values, it is commonplace to ask subjects yes/no questions, where someone either chooses one person out of a list of candidates or says whether or not he or she agrees or disagrees with a political statement. Such a question, though, only has one layer of information and disregards the unique nature of an opinion, which includes not only the final choice – voting for a candidate or policy – but also the reasoning behind the choice, the "why?" behind the claim. Thus, only the surface of the opinion manifold is measured through the yes/no questions of mass politics. This creates a bias in our statistical understanding of the population's political views, since it collapses the distribution of opinions into a single norm, leaving us with the impression of polarization, where people are either on the right or left of an issue with little sense of the commonalities or overlaps. Thus, when the political sphere appears polarized, it is more a problem of measurement than of the actual underlying viewpoints. To resolve this social-political problem of polarization, where the nation can't seem to come to a common viewpoint, we must look at the depth of the opinion manifold by mapping out a system of opinions rather than a single norm.

We can use Game Theory to represent an opinion as an ordering of preferences, i.e. A < B < C < D < E < F. When each choice-element of the preference set must be strictly ordered in relation to every other, leaving a ranked list of choices, one has a strict ordering of preferences. This was used to represent opinion in Arrow's Theorem of Social Choice. Yet, without any allowable ambiguity, the result proves an equivalence between the aggregate social-choice methods of dictatorship (one person chooses the social good) and democracy (the majority chooses the social good). This explains the critical political observation that mass politics – based upon superficial opinions – often becomes fascist, where one personality dominates the national opinion at the exclusion of immigrant or marginal groups. This game-theoretic error of restricting preferences is equivalent to the recently noted behavioral-economic error of excluding irrationalities (i.e. risk-aversion) from micro utility-maximization. Instead, we can represent an opinion as a partial ordering of preferences, rather than a strict ordering. Thus, an opinion is represented as a tree graph, algebraically by A >> B, B >> D, B >> E, A >> C, & C >> F, or as a tree data structure, formatted as {A: (B: (D,E), C: (F))} (i.e. JSON). The relationship of inclusion (>>, i.e. A >> B) can be interpreted as 'A is preferred over B' or 'B is the reason for A,' depending on whether one is looking at the incomplete ranking of choices or the irrationality of certain value-claims. In micro-economics, this yields a non-linear hyperbolic functional relationship between individual opinion and the aggregate social choice, rather than a reductionist linear functional relationship. In a hyperbolic space, we can represent each opinion-tree as a hyper-dimensional point (via a Kernel Density Estimation) and apply commonplace statistical tools, such as linear regression or multi-dimensional Principal Component Analysis, resulting in hyper-lines of best fit that describe the depth of the aggregate social choice.
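A small sketch of this representation (using Python dictionaries in place of the JSON-like format above; the traversal helpers are illustrative):

```python
# Sketch: an opinion as a partial ordering of preferences, stored as a tree.
# 'A >> B' is read as 'A is preferred over B' / 'B is the reason for A'.
# The dictionary layout mirrors the {A: (B: (D,E), C: (F))} format in the text.

opinion = {"A": {"B": {"D": {}, "E": {}}, "C": {"F": {}}}}

def relations(tree: dict) -> list[tuple[str, str]]:
    """Flatten the tree into the algebraic relations, e.g. ('A', 'B') for A >> B."""
    pairs = []
    for parent, children in tree.items():
        for child, grandchildren in children.items():
            pairs.append((parent, child))
            pairs.extend(relations({child: grandchildren}))
    return pairs

def top_choice(tree: dict) -> str:
    """The surface of the opinion: its root (what 1-D polling would record)."""
    return next(iter(tree))

print(relations(opinion))   # [('A','B'), ('B','D'), ('B','E'), ('A','C'), ('C','F')]
print(top_choice(opinion))  # 'A'
```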

This method of deep-opinion analysis is particularly useful for understanding electoral dynamics still in flux, as with the Democratic Primaries, where there are too many candidates for people to have a strictly ranked preference of them all. In such an indeterminate thermodynamic system (such as a particle moving randomly along a line of preferences), there is an element of complexity due to the inherent stochastic 'noise' as people debate over each candidate, change their minds, but ultimately come to deeper rationalities for their opinions through the national communication media. Instead of trying to reduce this 'noise' to one Primary Candidate choice so early in the democratic process, when the policies of the party are still being figured out – similar to measuring a hot quantum system (i.e. collapsing the wave-function of the particle's position) while it is still cooling into equilibrium – we can represent the probabilistic complexity of the preference distributions. In preference orderings of democratic candidates, this means that while the underlying rationality of an opinion (the deep levels of a tree) may not change much during an election cycle, with small amounts of new information the surface of the top candidate choice may change frequently. In order to make more predictive electoral-political models, we should thereby measure what is invariant (i.e. deep-structure), which is always missed in asking people for their top or strictly-ranked preferences. While a single candidate may consistently be people's second choice, he or she could still end up polling at 0%. If this ordering isn't strict, i.e. always less than the top choice but above most others, then the likelihood of this '2nd-place candidate' being close to 0% is even higher. Without the false assumption of deterministic processes, it is not true that the surface measurement of the percent of the population willing to vote for a candidate is equivalent to the normative rationality of that candidate – the 0% candidate may actually represent public views very well, although such cannot be expressed in the 1-dimensional polling surveys. Thus, while the actual electoral voting does collapse the chaotic system of public opinion into a single choice aggregated over the electoral college, such measurement reduction is insignificant so early in a democratic process with fluctuating conditions. As a thermodynamic rule, to measure a high-entropic system, we must use hyper-dimensional informational units.
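A toy simulation of this hidden 2nd-place effect (candidate names, voter counts, and the preference-generation rule are invented for illustration): a candidate who is nearly everyone's second choice can still register near 0% in top-choice polling.

```python
# Toy simulation: top-choice polling hides a consistent 2nd-place candidate.
# Candidates, voter counts, and the preference rule are invented for illustration.

import random
from collections import Counter

random.seed(1)
others = ["A", "B", "C", "D", "E"]

def voter_ranking() -> list[str]:
    """Each voter ranks a random favorite first, the consensus candidate 'X'
    second, and the remaining candidates afterwards."""
    rest = others[:]
    random.shuffle(rest)
    return [rest[0], "X"] + rest[1:]

ballots = [voter_ranking() for _ in range(10_000)]
top_choice_poll = Counter(ballot[0] for ballot in ballots)
second_choice_poll = Counter(ballot[1] for ballot in ballots)

print(top_choice_poll["X"])      # 0: 'X' never shows up in 1-D polling
print(second_choice_poll["X"])   # 10000: yet 'X' is everyone's second choice
```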

The Democratic Primary candidate Beto O'Rourke is precisely such a hidden 2nd-place candidate thus far, who indeed was polling close to 0% (now he is at 4%) in the primary, although the votes he received in his Texas Senate run alone would place him near 3.5% of total votes across both parties, assuming no one in any other state voted for him and Trump was substituted for Sen. Ted Cruz. Due to risk-aversion, there is a tendency to vote for candidates who may win and avoid those who may lose. This causes initial polling measurements of elections to be skewed towards the more well-known candidates, since deciding upon the newer candidates early on appears as a losing strategy until they have gained traction. Yet, this presents a problem of risk-taking 'first-movers' in the transition of a new candidate to the front line. Such explains only part of the low polling for Beto, since Pete Buttigieg is also affected by the same time-lag for newcomers. When a candidate introduces a change to the status quo, we would expect a similar behavioral risk-aversion and resultant time-lag while the social norm is in the process of changing. While Pete's gay-rights policy is already the norm for the Democratic Party, Beto's Immigration-Asylum policy is not, given Obama's record of a high number of deportations, and thus we would expect Beto's polling numbers to grow more slowly at first than Pete's. Complex information to support this hypothesis is available by comparing the differential polling between the General Election and the Primary Election – Beto was found to be the Democratic Candidate most likely to win against President Trump, yet out of the top 7 primary candidates, he is the least likely to be picked for the primary, even though most Democrats rank 'winning against Donald Trump' as their top priority. This inconsistency is explained through the irrationality of vote preferences as only partially order-able (i.e. not strict) thus far. Within the Primary race, people who may support Beto's policies will not yet choose him as their candidate because of the newcomer and status-quo time-lag biases, although they believe he may be most likely to win over the long run of the general election. In the General Election, Beto is the 2nd-place candidate across both parties under a rule of

Quantum Computing Democracy

Consider the binary computer. All bits have one of two states, either 0 or 1, symbolic for 'off' and 'on' with reference to a circuit, as symbolic of a 'fact of the world' propositionally: A is true or false. In a quantum computer, the states may be occupied in superposition with a probability distribution such that the quantum-state is "a<1> + (1-a)<0>", where 'a' is a real positive number less than 1 signifying the probability of state-1 occurring, 'turning on.' The quantum binary computer, at least locally, ultimately collapses into a common electro-magnetic binary computer when the true value of the bit is measured as either 1 or 0, yet before then it is suspended in the super-positional quantum state. Thus, the resultant value of measurement is the answer to a particular question, while the quantum-state is the intermediation of problem-solving. The problem is inputted as the initial conditions of the quantum-formulation, as the distributions of the different informational bits (i.e. 'a' values). Altogether this is the modern formulation of randomness introduced to a primitive Abacus, for which beads might be slid varying lengths between either edge to represent the constant values of each informational bar; it is dropped (on a sacred object) to introduce random interaction between the parts; and the resulting answer is decoded by whether a bead is closer to one side (1) or the other (0). Random interaction is allowed between informational elements through relational connections so that the system can interact with itself in totality, representing the underlying distributional assumptions of the problem.
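A toy sketch in the spirit of this description (this is a classical probabilistic simulation, not real quantum hardware; the pairwise coupling rule and step size are invented assumptions):

```python
# Toy sketch: probabilistic bits that interact and then 'collapse' on measurement.
# A classical simulation in the spirit of the text, not real quantum computing;
# the pairwise coupling rule and step size are invented illustrative assumptions.

import random

random.seed(2)

def relax(probs: list[float], couplings: dict[tuple[int, int], float], steps: int = 200) -> list[float]:
    """Nudge each bit's probability toward (or away from) its neighbours',
    according to the signed coupling weights, until a rough steady state."""
    p = probs[:]
    for _ in range(steps):
        for (i, j), w in couplings.items():
            delta = 0.05 * w * (p[j] - p[i])
            p[i] = min(1.0, max(0.0, p[i] + delta))
    return p

def measure(p: list[float]) -> list[int]:
    """Collapse each bit to 0 or 1 according to its probability."""
    return [1 if random.random() < x else 0 for x in p]

initial = [0.9, 0.2, 0.5]                       # the inputted 'a' values
couplings = {(1, 0): 1.0, (2, 1): -0.5}         # directed relational connections
steady = relax(initial, couplings)
print(steady, measure(steady))
```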

If a quantum computer is to be used to find the overall objective truth of a set of seemingly valid propositional statements, each claim can be inputted as a single quantum-bit with the perceived probability of its correctness, i.e. a validity metric, and the inter-relational system of claims inputted through oriented connections between bit-elements. After the quantum simulation, it will reach a steady-state with each bit being either true (1) or false (0), allowing the resulting valid system, considering the uncertainty and relationship between all the facts, to be determined. In a particularly chaotic system, the steady-state may itself exhibit uncertainty, as when there are many equally good solutions to a problem, with repeated sampling of the system therefore giving varying results. The problem is thus formulated as a directed 'network' deep-graph and initialized as nodes, edge-lengths, and orientations. The random interaction of the system operates as partial-differential relations (directed edges) between the – here, binary – random variables (nodes). The quantum computer therefore naturally solves problems formulated under the calculus class of Partial Differential Equations for Stochastic Processes. The quasi-state nodes interact through pre-determined relations (assumptions) to reach an equilibrium for the total automaton as the state-of-affairs.

We may therefore consider a quantum voting machine to solve normative problems of policy and representation. Each person submits not just a single vote (0 or 1), but a quantum-bit as a single subjective estimation of the validity of the claim-at-hand. The basic democratic assumption of voting as a solution to the normative problem is that all votes are equal, so each single q-vote is connected to the central state-node with a proportional flow. The final state-solution will be a q-bit with probability equal to the average of all the q-votes, which may be close to the extrema (true or false), yet may also be close to the center (non-decidability). A measured decision by the state will thus result from not collapsing this random variable with all its information, especially if the probability is close to ½, thereby leaving the policy's value undecidable, although rules for further collapsing the distribution (i.e. passing the law only on a majority of the popular vote) can be established beforehand. It is also possible to create a more complicated method of aggregation, rather than total connection, as with the districting for the electoral college, by grouping certain votes and connecting these groupings in relation to the whole state-decision through concentric meta-groupings. We may further complicate the model of quantum-voting by allowing each citizen to submit, not just a single q-vote, but a whole quantum system of validity statements instead of just one bit to answer the question, such as the rationality for a single normative answer expressed through a hierarchically weighted validity tree of minor claims. In fact, if more information is inputted by each citizen (i.e. validity arguments), then the naturally representative systems of groupings between persons can be determined by, rather than prior to, the vote. We end up with, not a 'yes' or 'no' on a single proposition, but an entire representational system for a regulatory domain of social life – the structural-functional pre-image (i.e. 'photonegative').
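A minimal sketch of this aggregation (the decision threshold, the district structure, and equal weighting are illustrative assumptions):

```python
# Minimal sketch: each citizen submits a q-vote in [0, 1]; the state-node is the
# average; an undecidability band around 1/2 leaves the value uncollapsed.
# Thresholds, groupings, and equal weights are illustrative assumptions.

from statistics import mean

def aggregate(q_votes: list[float]) -> float:
    """All votes equal: the state q-bit is the average of the citizen q-votes."""
    return mean(q_votes)

def decide(state_q: float, band: float = 0.1) -> str:
    if abs(state_q - 0.5) < band:
        return "undecidable"            # keep the distribution, do not collapse
    return "pass" if state_q > 0.5 else "fail"

# Districted aggregation: average within groups, then average the group nodes.
districts = [[0.9, 0.8, 0.7], [0.2, 0.4], [0.6, 0.5, 0.55]]
district_nodes = [aggregate(d) for d in districts]
state_node = aggregate(district_nodes)
print(district_nodes, round(state_node, 3), decide(state_node))
```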

Quantum computing is a natural process of biological life. One can construct a DNA-computer with quantum properties by encoding the initial conditions as the frequency-distributions of certain gene-patterns (built from the 4 bases) and the computer's dynamic system of interaction through the osmosis of an enzyme mixture, resulting in a distributional answer to a complex problem. More generally, within evolutionary theory, the environment acts as a macro quantum computer through random mutations and natural selection to create and select for the best gene variations (polymorphisms) for a species' survival.