The Bias of 1D Opinion Polling: Explaining the Low Polling Numbers for Candidate Beto O’Rourke

In electoral opinion sampling, whether of candidates, policies, or values, it is commonplace to ask subjects yes/no questions, where someone either chooses one person out of a list of candidates or says whether he or she agrees or disagrees with a political statement. Such a question, though, carries only one layer of information and disregards the unique nature of an opinion, which includes not only the final choice – voting for a candidate or policy – but also the reasoning behind the choice, the “why?” behind the claim. Thus, only the surface of the opinion manifold is measured through the yes/no questions of mass politics. This creates a bias in our statistical understanding of the population’s political views, since it collapses the distribution of opinions into a single norm, leaving us with the impression of polarization, where people are either on the right or left of an issue with little sense of the commonalities or overlaps. Thus, when the political sphere appears polarized, it is more a problem of measurement than of the actual underlying viewpoints. To resolve this social-political problem of polarization, where the nation can’t seem to come to a common viewpoint, we must look at the depth of the opinion manifold by mapping out a system of opinions rather than a single norm.

We can use game theory to represent an opinion as an ordering of preferences, i.e. A < B < C < D < E < F. Where each choice-element of the preference set must be strictly ordered in relation to every other, leaving a ranked list of choices, one has a strict ordering of preferences. This representation of opinion underlies Arrow’s theorem of social choice. Yet, without any allowable ambiguity, the result proves an equivalence between the aggregate social-choice methods of dictatorship (one person chooses the social good) and democracy (the majority chooses the social good). This explains the critical political observation that mass politics – based upon superficial opinions – often becomes fascist, where one personality dominates the national opinion at the exclusion of immigrant or marginal groups. This game-theoretic error of restricting preferences is equivalent to the recently noted behavioral-economic error of excluding irrationalities (i.e. risk-aversion) from micro utility-maximization. Instead, we can represent opinion as a partial ordering of preferences rather than a strict ordering. Thus, an opinion is represented as a tree graph: algebraically by A >> B, B >> D, B >> E, A >> C, & C >> F, or as a tree data structure, formatted as {A: (B: (D,E), C: (F))} (i.e. JSON). The relationship of inclusion (>>, i.e. A >> B) can be interpreted as ‘A is preferred over B’ or ‘B is the reason for A,’ depending on whether one is looking at the incomplete ranking of choices or the irrationality of certain value-claims. In micro-economics, this yields a non-linear hyperbolic functional relationship between individual opinion and the aggregate social choice, rather than a reductionist linear one.
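As a rough illustration, the opinion-tree {A: (B: (D,E), C: (F))} can be encoded as a nested Python dict, with the inclusion relation >> recovered by walking parent–child edges. This is a minimal sketch; the letter names are placeholders, not actual candidates or claims.

```python
# A minimal sketch of an opinion as a partially ordered preference tree.
# The nesting mirrors the JSON-style format in the text; an edge
# parent -> child encodes 'parent >> child'.

opinion = {"A": {"B": {"D": {}, "E": {}}, "C": {"F": {}}}}

def edges(tree):
    """Yield every (parent, child) inclusion pair in the tree."""
    for parent, children in tree.items():
        for child in children:
            yield (parent, child)
            yield from edges({child: children[child]})

print(sorted(edges(opinion)))
# recovers A >> B, A >> C, B >> D, B >> E, C >> F
```

Note that the partial order leaves B and C, or D and E, incomparable: no edge relates them, which is exactly the ambiguity a strict ranking would erase.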
In a hyperbolic space, we can represent each opinion-tree as a hyper-dimensional point (via a kernel density estimation) and apply commonplace statistical tools, such as linear regression or multi-dimensional principal component analysis, resulting in hyper-lines of best fit that describe the depth of the aggregate social choice.
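One hypothetical way to make trees amenable to such tools (an illustrative choice, not a method specified in the text) is to embed each opinion-tree as a binary vector over all ordered pairs of choices, then run PCA via SVD on the stacked ballots:

```python
import numpy as np

# Hypothetical sketch: embed each opinion-tree as a 0/1 vector over all
# ordered pairs (an edge-indicator embedding), then run PCA via SVD to
# obtain 'lines of best fit' through the aggregate of opinions.

choices = ["A", "B", "C", "D", "E", "F"]
pairs = [(p, c) for p in choices for c in choices if p != c]

def embed(tree_edges):
    """Map a set of 'parent >> child' edges to a feature vector."""
    return np.array([1.0 if pair in tree_edges else 0.0 for pair in pairs])

ballots = [  # three sample opinion-trees, expressed as edge sets
    {("A", "B"), ("B", "D"), ("A", "C")},
    {("A", "B"), ("B", "E"), ("C", "F")},
    {("B", "A"), ("A", "C"), ("C", "F")},
]
X = np.stack([embed(b) for b in ballots])
X -= X.mean(axis=0)                      # center before PCA
_, s, vt = np.linalg.svd(X, full_matrices=False)
print("explained by first axis:", s[0] ** 2 / (s ** 2).sum())
```

The leading right-singular vector is then the ‘hyper-line of best fit’ through the population’s opinion-trees.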

This method of deep-opinion analysis is particularly useful for understanding electoral dynamics still in flux, as with the Democratic Primaries, where there are too many candidates for people to have a strictly ranked preference over them all. In such an indeterminate thermodynamic system (such as a particle moving randomly along a line of preferences), there is an element of complexity due to the inherent stochastic ‘noise’ as people debate over each candidate and change their minds, but ultimately come to deeper rationalities for their opinions through the national communication mediums. Instead of trying to reduce this ‘noise’ to one primary-candidate choice so early in the democratic process, when the policies of the party are still being figured out – similar to measuring a hot quantum system (i.e. collapsing the wave-function of the particle’s position) while it is still cooling into equilibrium – we can instead represent the probabilistic complexity of the preference distributions. In preference orderings of democratic candidates, this means that while the underlying rationality of an opinion (the deep levels of a tree) may not change much during an election cycle, the surface of the top candidate choice may change frequently with small amounts of new information. To make more predictive electoral-political models, we should therefore measure what is invariant (i.e. the deep structure), which is always missed when asking people for their top or strictly ranked preferences. While a single candidate may consistently be people’s second choice, he or she could still end up polling at 0%. If this ordering isn’t strict, i.e. always below the top choice but above most others, then the likelihood of this ‘2nd-place candidate’ polling close to 0% is even higher.
Without the false assumption of deterministic processes, the surface measurement of the percent of the population willing to vote for a candidate is not equivalent to the normative rationality of that candidate – the 0% candidate may actually represent public views very well, although this cannot be expressed in 1-dimensional polling surveys. Thus, while the actual electoral voting does collapse the chaotic system of public opinion into a single choice aggregated over the electoral college, such a measurement reduction is not yet meaningful so early in a democratic process with fluctuating conditions. As a thermodynamic rule, to measure a high-entropy system, we must use hyper-dimensional informational units.
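The hidden 2nd-place effect can be made concrete with a toy simulation (all candidate names and numbers hypothetical): every voter has some front-runner as first choice, yet all of them place the same newcomer second, and a first-choice poll still reports the newcomer at 0%.

```python
import random

# Toy simulation of the 'hidden 2nd-place' bias. Each voter picks one of
# three front-runners as their top choice, then ranks newcomer X above
# the rest of the field. A first-choice poll shows X at 0% even though
# X sits in second place on every single ballot.

random.seed(0)
front_runners = ["F1", "F2", "F3"]

def ballot():
    # partial order: one front-runner on top, X above the remaining field
    top = random.choice(front_runners)
    return [top, "X"] + random.sample(["Y", "Z"], 2)

ballots = [ballot() for _ in range(1000)]
first_choice_share = sum(b[0] == "X" for b in ballots) / len(ballots)
second_choice_share = sum(b[1] == "X" for b in ballots) / len(ballots)
print(first_choice_share, second_choice_share)  # 0.0 vs 1.0
```

A first-choice poll measures only `b[0]`, discarding exactly the invariant deep structure (X’s universal second place) that the text argues is predictive.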

The Democratic Primary candidate Beto O’Rourke is precisely such a hidden 2nd-place candidate thus far, who was indeed polling close to 0% (he is now at 4%) in the primary, although the votes he received in his Texas Senate run alone would place him near 3.5% of total votes across both parties, assuming no one in any other state voted for him and Trump is substituted for Sen. Ted Cruz. Due to risk-aversion, there is a tendency to vote for candidates who may win and avoid those who may lose. This causes initial polling measurements of elections to be skewed toward the more well-known candidates, since deciding upon the newer candidates early on appears as a losing strategy until they have gained traction. Yet, this presents a problem of risk-taking ‘first movers’ in the transition of a new candidate to the front line. This explains only part of the low polling for Beto, since Pete Buttigieg is also affected by the same time-lag for newcomers. When a candidate introduces a change to the status quo, we would expect a similar behavioral risk-aversion and a resultant time-lag while the social norm is in the process of changing. While Pete’s gay-rights policy is already the norm for the Democratic Party, Beto’s immigration-asylum policy is not, given Obama’s record of a high number of deportations, and thus we would expect Beto’s polling numbers to grow more slowly at first than Pete’s. Evidence for this hypothesis is available by comparing the differential polling between the General Election and the Primary Election – Beto was found to be the Democratic candidate most likely to win against President Trump, yet out of the top 7 primary candidates, he is the least likely to be picked for the primary, even though most Democrats rank ‘winning against Donald Trump’ as their top priority. This inconsistency is explained through the irrationality of vote preferences as only partially orderable (i.e. not strict) thus far.
Within the Primary race, people who may support Beto’s policies will not yet choose him as their candidate because of the newcomer and status-quo time-lag biases, although they believe he may be most likely to win over the long run of the general election. In the General Election, Beto is the 2nd-place candidate across both parties under a rule of ranked-choice voting.

Quantum Computing Democracy

Consider the binary computer. All bits have one of two states, either 0 or 1, symbolic of ‘off’ and ‘on’ with reference to a circuit, and symbolic of a ‘fact of the world’ propositionally: A is true or false. In a quantum computer, the states may be occupied in superposition with a probability distribution, such that the quantum-state is “a<1> + (1-a)<0>” where ‘a’ is a real positive number less than 1 signifying the probability of state 1 occurring, ‘turning on.’ The quantum binary computer, at least locally, ultimately collapses into a common electro-magnetic binary computer when the true value of the bit is measured as either 1 or 0, yet before then it is suspended in the super-positional quantum state. Thus, the resultant value of measurement is the answer to a particular question, while the quantum-state is the intermediation of problem-solving. The problem is inputted as the initial conditions of the quantum formulation: the distributions of the different informational bits (i.e. the ‘a’ values). Altogether, this is the modern formulation of randomness introduced to a primitive abacus, for which beads might be slid varying lengths between either edge to represent the constant values of each informational bar; the abacus is dropped (on a sacred object) to introduce random interaction between the parts; and the resulting answer is decoded by whether a bead is closer to one side (1) or the other (0). Random interaction is allowed between informational elements through relational connections so that the system can interact with itself in totality, representing the underlying distributional assumptions of the problem.
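The text’s real-valued ‘a’ bit can be sketched as a classical simulation in which measurement collapses the bit to 1 with probability a. (A true qubit carries complex amplitudes with probabilities |a|²; the simpler probability formulation above is kept here for consistency with the text.)

```python
import random

# Classical sketch of the text's "a<1> + (1-a)<0>" bit: before
# measurement the bit is described only by the probability a; measuring
# collapses it to a definite 1 (with probability a) or 0.

def measure(a):
    """Collapse a probabilistic bit with P(1) = a."""
    return 1 if random.random() < a else 0

random.seed(42)
samples = [measure(0.8) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.8 over many measurements
```

Repeated measurement recovers the hidden distribution, which is exactly the information a single collapsed reading discards.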

If a quantum computer is to be used to find the overall objective truth of a set of seemingly valid propositional statements, each claim can be inputted as a single quantum-bit with the perceived probability of its correctness, i.e. a validity metric, and the inter-relational system of claims inputted through oriented connections between bit-elements. After the quantum simulation reaches a steady state, with each bit either true (1) or false (0), the resulting valid system – considering the uncertainty and relationships between all the facts – can be determined. In a particularly chaotic system, the steady state may itself exhibit uncertainty, as when there are many equally good solutions to a problem, with repeated sampling of the system therefore giving varying results. The problem is thus formulated as a directed ‘network’ deep-graph and initialized as nodes, edge-lengths, and orientations. The random interaction of the system operates as partial-differential relations (directed edges) between the – here, binary – random variables (nodes). The quantum computer therefore naturally solves problems formulated under the calculus class of partial differential equations for stochastic processes. The quasi-state nodes interact through pre-determined relations (assumptions) to reach an equilibrium for the total automaton as the state-of-affairs.
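A classical stand-in for this directed-graph solver (a Gibbs-style stochastic sweep, offered here as a hypothetical illustration rather than the quantum process itself) shows how prior ‘a’ values plus directed influences settle into a steady state that can then be read off by repeated sampling:

```python
import math
import random

# Hypothetical sketch of the 'directed deep-graph' solver: binary claim
# nodes with prior validity probabilities interact along weighted directed
# edges; repeated stochastic updates drive the system toward a steady
# state, which is read off by sampling after a burn-in period.

random.seed(1)
prior = {"p": 0.9, "q": 0.5, "r": 0.2}        # initial 'a' values per claim
edges = {("p", "q"): 1.5, ("q", "r"): -1.0}   # directed influence weights

def sweep(state):
    """One stochastic update of every node given its prior and inputs."""
    for node in state:
        drive = prior[node] - 0.5
        drive += sum(w * (state[src] - 0.5)
                     for (src, dst), w in edges.items() if dst == node)
        state[node] = 1 if random.random() < 1 / (1 + math.exp(-6 * drive)) else 0

state = {n: 0 for n in prior}
for _ in range(100):          # burn-in toward the steady state
    sweep(state)
counts = {n: 0 for n in prior}
for _ in range(500):          # repeated sampling of the equilibrium
    sweep(state)
    for n in prior:
        counts[n] += state[n]
print({n: counts[n] / 500 for n in prior})  # p mostly true; q follows p; r suppressed by q
```

Because the updates stay stochastic, the ‘answer’ is itself a distribution: the residual uncertainty in the sampled frequencies mirrors the chaotic steady states described above.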

We may therefore consider a quantum voting machine to solve normative problems of policy and representation. Each person submits not just a single vote (0 or 1), but a quantum-bit as a single subjective estimation of the validity of the claim at hand. The basic democratic assumption of voting as a solution to the normative problem is that all votes are equal, so each single q-vote is connected to the central state-node with a proportional flow. The final state-solution will be a q-bit with probability equal to the average of all the q-votes, which may be close to the extremes (true or false), yet may also be close to the center (non-decidability). A measured decision by the state will thus result from not collapsing this random variable with all its information, especially if the probability is close to ½, thereby leaving the policy’s value undecidable, although rules for further collapsing the distribution (i.e. passing the law only if a majority of the popular vote is for it) can be established beforehand. It is also possible to create a more complicated method of aggregation, rather than total connection, as with the districting of the electoral college, by grouping certain votes and connecting these groupings in relation to the whole state-decision through concentric meta-groupings. We may further complicate the model of quantum-voting by allowing each citizen to submit not just a single q-vote but a whole quantum system of validity statements, such as the rationality for a single normative answer expressed through a hierarchically weighted validity tree of minor claims. In fact, if more information is inputted by each citizen (i.e. validity arguments), then the naturally representative systems of groupings between persons can be determined by, rather than prior to, the vote.
We end up not with a ‘yes’ or ‘no’ on a single proposition, but with an entire representational system for a regulatory domain of social life – the structural-functional pre-image (i.e. ‘photonegative’).
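The simplest q-vote aggregation above can be sketched directly; the function name and the 0.1 undecidability margin are hypothetical choices standing in for a rule established beforehand:

```python
# Minimal sketch of equal-weight q-vote aggregation: each citizen submits
# a validity estimate in [0, 1]; the central state-node is their average;
# a rule fixed beforehand declares results near 1/2 undecidable.

def aggregate(q_votes, margin=0.1):
    """Average the q-votes; return True, False, or None (undecidable)."""
    p = sum(q_votes) / len(q_votes)
    if p > 0.5 + margin:
        return True
    if p < 0.5 - margin:
        return False
    return None  # too close to 1/2: leave the policy's value undecided

print(aggregate([0.9, 0.8, 0.7]))    # True
print(aggregate([0.55, 0.45, 0.5]))  # None
```

Returning `None` rather than forcing a binary outcome is the point: the state declines to collapse the distribution when the population is genuinely split.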

Quantum computing is a natural process of biological life. One can construct a DNA-computer with quantum properties by encoding the initial conditions as the frequency-distributions of certain gene-patterns (built from the 4 base-pairs) and the computer’s dynamic system of interaction through the osmosis of an enzyme mixture, resulting in a distributional answer to a complex problem. More generally, within evolutionary theory, the environment acts as a macro quantum computer through random mutations and natural selection to create and select the best gene variations (polymorphisms) for a species’ survival.

The Bias of Underreporting in Experimental Design: Ranked-Choice Voting & The Case of the Latino Vote

The ‘hidden 2nd-place bias’ from my previous blog post is due to a bias in the question-asking of political candidate polling, neglecting a necessary ambiguity in the answer when it can’t be precisely known. This ambiguity is essential to the social sciences, since an ethical social life is generally defined by a plurality – to attempt to erase this plurality by reductionist measurement will lead to unethical states of affairs by cutting out the voices and perspectives of those marginalized within the plural sociality. Yet, ambiguity goes against our precepts of science as a deterministic process that yields an exact answer, although quantum physics has revealed an indeterminacy at the base of physical processes. Thus, social processes must be conceived through bio-physics as having an emergent complexity that creates a chaotic indeterminacy, although with stable supra-structures, and such is a condition of freedom in society and for societies. While previously we addressed the bias in 1-dimensional polling questions – those that require a strict ordering of preferences – here we address the bias of underreporting by marginalized groups. Both affect the result in the same way and indeed stem from the same norm-reducing force – marginalized preferences are suppressed while the norm is exaggerated.

The purpose of ranked-choice voting is to allow the marginalized vote to be seen in the aggregate, rather than be reduced by the normalizing force of single-choice voting (‘dominant group conformity’). The more complex the available choice of the vote, the more complexity will be preserved about the actual beliefs of the population in all its variation. Thus, latino voices are heard more clearly in ranked-choice than in single-choice voting. Ranked-choice voting requires a strict ordering of candidate preferences. As we demonstrated previously, the (minority) latino preference for Beto is more clearly visible when he is allowed as a second choice within a ranked-choice voting system. Still, more complexity can be allowed if we use partially ordered choice sets for voting, as represented in a tree diagram, where instead of A >> B >> C >> D, we have A >> B, A >> C, B >> D, & C >> D, indicating that B and C are indistinguishable on the same level, although both are below A and above D, i.e. A >> {B,C} >> D.
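The partially ordered ballot A >> {B,C} >> D can be sketched as tiers of candidates, with incomparability inside a tier made explicit (function and variable names are illustrative):

```python
# Sketch of the partially ordered ballot A >> {B,C} >> D: the levels of
# the tree are tiers, and candidates within a tier are incomparable.

ballot = [["A"], ["B", "C"], ["D"]]   # tiers, top to bottom

def prefers(ballot, x, y):
    """True if x ranks strictly above y; None if they are incomparable."""
    tier = {c: i for i, t in enumerate(ballot) for c in t}
    if tier[x] == tier[y]:
        return None
    return tier[x] < tier[y]

print(prefers(ballot, "A", "D"))  # True
print(prefers(ballot, "B", "C"))  # None (same tier: indistinguishable)
```

A strict ranked-choice ballot is the special case where every tier holds exactly one candidate, so `prefers` never returns `None`.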

A partially ordered vote is more inclusive than a strictly ordered (ranked-choice) vote, which is more inclusive than a single vote. For now, it is best to conceive of strict ordering as a more democratic voting system and partial ordering as an improved sampling method, since the latter would perhaps be unrealistic as a voting rule, although still useful for administrative decision-making. Yet, when we consider quantum computing in the next blog, we will consider a democratic polity where everyone can submit quantum-votes in real time, and thus recover the concept of partial ordering as the foundation for a democratic voting system. The concept of measuring ‘quanta’ as infinitesimally small quantities is the same as measuring the marginalized peoples whose rights are minimal, i.e. quantum entities. The thermodynamic principle at work is that biophysical diversity entails a dynamic nature to a system – the more entropy, and therefore representational state ambiguity, allowed in a system, the greater the diversity that can be measured.

It should be no surprise that Republicans are against ranked-choice voting systems, since much of their policy relies upon conformity to a shrinking notion of nationalism that only includes latinos if they forsake their race. Thus, republican latinos vote to exclude marginalized populations, such as those of the same race/ethnicity, because they believe this will benefit their particular interest even if it is against the interest of their more general group of belonging. In fact, as marginalized groups, latinos are systematically threatened for voting, especially if they vote in the interest of their group/race/class rather than the interest of the dominant/oppressor class. This explains why latino votes are either i) not cast, due to voter intimidation by deportation or the questioning of citizen-status, as Trump has exemplified; ii) divided across districts such that they cannot sum to any significant race/class voting-block, due to gerrymandering; or iii) cast against their interest, through a form of internalized racism against oneself. These methods of electoral-system manipulation nullify or invert the vote of the most marginalized population in the USA, latinx. While it is true that racism in the US would finally be undermined by a ranked-choice or partially-ranked-choice voting system, such a system will always face pushback from Republicanism, which seeks the reductionist unity of population information over plural complex diversity.
