The Mathematics of COVID-19

An Astro-Socio-Biological Analysis

A multi-scale entropic analysis of COVID-19 is developed on the micro-biological, meso-social, & macro-astrological levels to model the accumulation of errors during processes of self-replication within immune response, communicative functionality within social mitigation strategies, and mutation/genesis within radiation conditioning from solar-cosmic cycles.

  1. Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity
  2. Micro-Scale: Computational Biology of RNA Sequence
  3. Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction
  4. Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles
  5. References

Multi-Scale Integration: The Informatic Thermodynamics of Functional Communicativity

The genesis of SARS-CoV-2, with its internal code of a precise self-check mechanism for reducing errors in RNA replication and its external attribute of ACE2-binding proteins, is an entropy-minimizing solution to the highly functionally communicative, interconnected human societies embedded within the high-entropic geophysical conditions of greater cosmic-radiation atmospheric penetration and radioactive C-14 residues due to the present solar-cycle modulation. This background condition explains the mutation differences between SARS-1 & SARS-2, where the latter has a more persistent environment of C-14 in which to evolve steadily into stable forms. The counter-measures against the spread of the virus, whether therapeutics, vaccines, or social mitigation strategies, are thus disruptions (entropy-inducing) to these evolved entropy-reducing mechanisms within the intra-host replication and inter-host communicability processes.

The point of origin for understanding the spread of the virus in a society or subdivision is its communicative functionality, which may be expressed as a complex variable whose real part is the functionality of the social system and whose imaginary part is the communicativity of its lifeworld, the two attributes diminished by shut-down and social-distancing measures. Conditions of high communicativity, such as New York City, will induce mutations with greater ACE2-binding proteins, i.e. communicability, as the virus adapts to its environment, while conditions of high functionality will induce error-minimization in replication. These two micro- & meso-scale processes of replication and communicability (i.e. intra- & inter-host propagation) can be viewed together from the thermodynamic-informatic perspective of the viral RNA code as a message – its refinement and transmission – itself initialized (‘transcribed’) by the macro conditions of the Earth’s spatio-temporality (i.e. gravitational fluctuation). This message is induced, altered, amplified spatially, & temporalized by the entropic functional-communicative qualities of its environment, which it essentially describes inversely.

Micro-Scale: Computational Biology of RNA Sequence

As with other viruses of the coronavirus (CoV) family, the RNA of COVID-19 encodes a self-check on the duplication of its code during replication, thereby ensuring it is copied with little error. With little replication-error, the virus can be replicated for many more rounds (an exponential factor) before degeneration ultimately stops the replication process. Compare an example of \(t=3\) rounds for a normal virus with \(t=7\) for a coronavirus under simple exponential replication, with viral count \(C\) by replication round \(t\) as \(C(t)=e^{t}\): \(C(3)=e^3=20.1\) vs. \(C(7)=e^7=1096.6\).

Let us consider an example where a single RNA can create N-1 copies of itself before its code is degenerated beyond even replicative encoding, i.e. the binding segment code directing RNA replicase to replicate. The original RNA code is given by \(\mathcal{N}_0\), with each subsequent code given by \(\mathcal{N}_t\), where t is the number of times of replication. Thus, \(t\) counts the internal time of the “life” of the virus, as its number of times of self-replication. The relevant length of a sequence can be given as the number of base-pairs that will be replicated in the next round of replication. This will be expressed as the zero-order distance metric, \(\mu^0(\mathcal{N}_t)=|\mathcal{N}_t|\).

The errors in the replicative process at time \(t\) will be given by \(DISCR\_ERR(\mathcal{N}_t)\), for “discrete error”, and will be a function of \(t\), given as \(\epsilon(t)\). Clearly, \(|\mathcal{N}_t| = |\mathcal{N}_{t+1}| + \epsilon(t)\). In all likelihood, \(\epsilon(t)\) is a decreasing function, since with each round of replication the errors will decrease the number of copiable base-pairs, and yet with an exceptionally random alteration of a stable insertion, the error could technically be negative. There are two types of these zero-order errors: \(\epsilon^-\), the number of pre-programmed deletions occurring due to the need for a “zero-length” sequence segment to which the RNA polymerase binds and is thereby directed to replicate “what is to its right in the given reading orientation,” and \(\epsilon^+\), the non-determined erroneous alterations, either deletions, changes, or insertions. The total number of errors at any single time will be their sum, \(\epsilon(t)=\epsilon^-(t)+\epsilon^+(t)\). A more useful error-metric may be the proportional error, \(PROP\_ERR(\mathcal{N}_t)\), since it is likely to be approximately constant across time; it will be given by the time-function \(\epsilon'(t)\), and can similarly be broken into determined (−) and non-determined (+) errors as \(\epsilon'(t)={\epsilon'}^-(t)+{\epsilon'}^+(t)\). Expressed in proportion to the (zero-order) length of the RNA sequence, $$\epsilon'(t)=1-\frac{|\mathcal{N}_{t+1}|}{|\mathcal{N}_t|}=\frac{\epsilon(t)}{|\mathcal{N}_t|}$$

The “length” (of internal time) of an RNA code, \(\mathcal{N}_t\), in terms of the number of times it itself may be copied before it is degenerated beyond replication, is given as the first-order “distance” metric \(\mu^1(\mathcal{N}_t)=N(\mathcal{N}_t)\). For our generalized example, \(\mu^1(\mathcal{N}_0)=N(\mathcal{N}_0)=N\). This may be expressed as the sum of all errors $$N=\sum_{t=0}^{\infty}\epsilon(t)=\sum_{t=0}^{t_{max}}\epsilon(t)=\sum_{t=0}^{N}\epsilon(t)$$

We are interested in the “length” (of internal space) of the RNA code, second-order distance metric, \(\mu^2(\mathcal{N}_0)\), as the number of copies it can make of itself, including the original copy in the counting and the children of all children viruses. This is the micro-factor of self-limitation of the virus, to be compared ultimately to the meso-factor of aerosolized half-life and the macro-factor of survival on surfaces.
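To make these metrics concrete, the following minimal Python sketch assumes a constant proportional error rate \(\epsilon'\), a minimum replicable length below which the polymerase-binding segment is lost, and a fixed branching factor of copies per replication round; all parameter values are hypothetical placeholders rather than empirical estimates.

```python
# Illustrative sketch (not empirical): constant proportional error rate per
# round, a minimum replicable length L_MIN below which the polymerase-binding
# segment is lost, and an assumed branching factor B of copies per round.

EPS_PRIME = 0.05   # proportional error eps'(t), assumed constant
L_MIN = 500        # shortest code that can still direct replication
B = 2              # assumed copies produced per replication round

def mu0(length):
    """Zero-order metric: base-pairs that will be replicated next round."""
    return length

def mu1(length):
    """First-order metric: rounds of copying before degeneration (internal time)."""
    rounds = 0
    while length * (1 - EPS_PRIME) >= L_MIN:
        length *= (1 - EPS_PRIME)          # |N_{t+1}| = (1 - eps') |N_t|
        rounds += 1
    return rounds

def mu2(length):
    """Second-order metric: this copy plus all children of children (internal space)."""
    if length < L_MIN:
        return 0
    return 1 + B * mu2(length * (1 - EPS_PRIME))

L0 = 1000                              # hypothetical starting length |N_0|
print("mu0 =", mu0(L0))                # 1000 base-pairs
print("mu1 =", mu1(L0))                # 13 rounds before degeneration
print("mu2 =", mu2(L0))                # 16383 total copies in the lineage tree
```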

These errors in replication, compounded by radiation exposure in the atmosphere, will add up to mutations of the virus, which by natural selection at the corporeal, social-communicative, and environmental (i.e. surfaces and aerosolized forms) levels have produced stable new forms of the virus.

Comparing SARS-1 & SARS-2, the former had a higher mortality rate and the latter has a higher transmission rate. There is certainly an inverse relationship between mortality and transmission on the meso-level as fatality prevents transmission, but there may also be inherent differences at the micro-level in methods of replication leading to these different outcomes. Mortality is due to intra-host replication exponentiation – whereby there are so many copies made that the containing cells burst – while communicability is due to the inter-host stability of the RNA code in the air and organic surfaces where it is subject to organic reactions and cosmic radiation.

Meso-Scale: The Communicative Epidemiology of Viral Social Reproduction

We can apply the theory of communicativity to studying the natural pathology (disease) and the social pathology (violence) of human society through René Girard’s Theory of Mimetics [VIOL-SAC]. Viewing a virus and a dysfunctional social system under a single conceptual unity (Mimetics) of a communicative pathology, the former ‘spreads’ by communication while the latter is the system of communication. Yet, different types of communication systems can lead to larger outbreaks of a communicable disease. Thus, the system of communication is the condition for the health outcomes of communicable disease. Beyond mere ‘viruses,’ a dysfunctional communication system unable to coordinate actions to distribute resources effectively within a population can cause other pathologies such as violence and poverty. From this integrated perspective, these ‘social problems’ may themselves be viewed as communicable diseases in the sense of being caused, rather than ‘spread,’ by faulty systems of communication. Since violence and poverty are themselves health concerns, such a re-categorization is certainly permissible. The difference between these communicable diseases of the micro and macro levels is that a virus is a replication script read and enacted by human polymerase in a cell’s biology, while a dysfunctional social system is a replication script read and enacted by human officials in a society. We can also thereby view health through the more generalized political-economy lens as the quantity of life a person has, beyond merely the isolated corporeal body, including also the action-potentialities of the person, such as security from harm and the capacity to use resources (i.e. via money) for one’s own survival. It is clear that ‘money’ should be the metric of this ‘bio-quantification’ in the sense that someone with more money can create healthier conditions for life and even seek better treatment, and similarly a sick person (i.e. deprived of life) should be given more social resources (i.e. money) to reduce the harm. Yet, the economic system fails to accurately price and distribute life-resources due to its nodal premise prescribed by capitalism, whereby individuals, and by extension their property resources, are not social (as in distributively shared), but rather isolated & alienated for individual private consumption.

This critique of capitalism was first made by Karl Marx, advocating for socialism as an ontological critique of the lack of recognition of the social being of human existence in the emerging economic sciences of liberalism. In the 17th century, Locke conceived of the public good as based upon an individual right to freedom, thereby endowing the alienated (i.e. private) nature with the economic right to life. This moral reasoning was based on the theological premise that the capacity for reason was not a public-communicative process, but rather a private faculty based only upon an individual’s relationship with God. Today we may understand Marx’s critique of Lockean liberalism from the deep-ecology perspective that sociality is an ontological premise of biological analysis, due both to the relationship of an organism grouping to its environment and to the in-group self-coordinating mechanism with its own type. Both of these aspects of a biological group, in-group relationships (\(H^+(G):G \rightarrow G\)) and out-group relationships (\(H^-(G)=\{H_-^-(G): G^c \rightarrow G,\ H_+^-(G): G \rightarrow G^c\}\)), may be viewed as communicative properties of the group, as in how the group communicates with itself and with not-itself. In the human-capital model of economic liberalism, the group is reduced to the individual economic agent that must act alone, i.e. an interconnected system of capabilities, creating thereby an enormous complexity in any biological modeling from micro-economic premises to macro-economic outcomes. If instead we permit different levels of group analysis, where it is assumed a group distributes resources within itself, with the particular rules of group-distribution (i.e. its social system) requiring an analysis of the group at a deeper level that decomposes the group into smaller individual parts, such a multi-level model has a manageable complexity. The purpose is therefore to study Communicativity as a property of Group Action.

A group is a system of action coordination functionally interconnecting sub-groups. Each group must “act as a whole” in that the inverse branching process of coordination adds up all actions towards the fulfillment of a single highest good, the supreme value-orientation. Therefore, a group is represented by a tree, whose nodes are the coordination actions (intermediate groupings), whose edges are the value produced, and whose leaves are the elemental sub-groups “at the level of analysis”. The total society can be represented as a class-system hierarchy of group orderings, with primary groups of individuals. The distribution of resources within a group follows the branching orientation (\(\sigma^-\)) from root to leaves as resources are divided up, while coordination follows the inverse orientation (\(\sigma^+\)) from leaves to root as elemental resources are coordinated in production to produce an aggregate good.
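As an illustrative sketch of this tree representation (the equal-split distribution rule and the productivity values are assumptions for illustration, not specified above), resources can be divided root-to-leaves along \(\sigma^-\) while produced value is aggregated leaves-to-root along \(\sigma^+\):

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of the group-as-tree representation, under assumed rules:
# resources are split equally among children on the way down (sigma-),
# and produced value is summed on the way up (sigma+).

@dataclass
class Group:
    name: str
    productivity: float = 0.0          # value a leaf produces per unit resource
    children: List["Group"] = field(default_factory=list)

    def distribute(self, resources: float) -> float:
        """sigma-: divide resources root-to-leaves; sigma+: aggregate value leaves-to-root."""
        if not self.children:                       # elemental sub-group (leaf)
            return self.productivity * resources
        share = resources / len(self.children)      # assumed equal split
        return sum(child.distribute(share) for child in self.children)

# Usage: a toy society with two intermediate groupings and four primary groups.
society = Group("society", children=[
    Group("region_a", children=[Group("g1", 1.2), Group("g2", 0.8)]),
    Group("region_b", children=[Group("g3", 1.0), Group("g4", 1.5)]),
])
print(society.distribute(100.0))   # aggregate good produced from 100 resource units
```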

In the parasite-stress theory of sociality [fincher_thornhill_2012], in-group assortative sociality arose due to the stress of parasites in order to prevent contagion. There is thus a causal equivalence between the viral scripts of replication and the social structures selected for by the virus as the optimal strategy of survival. Violence too has the same selection-capacity since existentially conflicting groups are forced to isolate to avoid the war of revenge cycles. This process is the same as the spread of communicable diseases between groups – even after supposed containment of a virus, movement of people between groups can cause additional cycles of resurgence.

Racism is an example of non-effective extrapolation of in-grouping based on non-essential categories. As a highly contagious and deadly disease, on the macro-social level COVID-19 selects for non-racist societies via natural selection, since racist societies spend too many resources organizing in-group social structure along non-essential characteristics, such as race, and thus have few reserves left to reorganize along the essential criteria selected for by the disease (i.e. segregating those at risk). Additionally, racism prevents resource sharing between the dominant group and the racially marginalized or oppressed group, and thus limits the transfer of scientific knowledge in addition to other social-cultural resources, since what the marginalized group knows to be true is ignored.

With a complex-systems approach to studying the communicability of the virus between groups (i.e. different levels of analysis), we can analyze transmission between both persons and segregated groups (i.e. cities or states) to evaluate both social-distancing and shut-down policies. A single mitigation strategy can be represented as the complex number \(\lambda = \sigma + \omega i\), where \(\sigma\) is the dysfunctionality of the social system (percent shut-down) and \(\omega\) is the periodicity of the shut-down. We can include \(s_d\) for social distance as a proportion of the natural radii given by the social density. The critical issue now is mistimed reopening policies, whereby physical communication (i.e. travel) between peaking and recovering groups may cause resurgences of the virus, which can be complicated by reactivation post-immunity and the threat of mutations producing strains resistant to future vaccines. This model thus considers the long-term perspective of social equilibrium solutions as mixed strategies between socialism and capitalism (i.e. social distancing and systemic shut-downs) to coronaviruses as a semi-permanent condition of the ecology of our time.
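A toy, uncalibrated sketch of how such a mitigation strategy \(\lambda = \sigma + \omega i\) might be plugged into a simple transmission model (the SIR-style dynamics, the parameter values, and the on/off shut-down rule are all assumptions for illustration, not the author's model):

```python
import math

# Toy sketch (hypothetical, not calibrated): a discrete-time SIR-style model in
# which a mitigation strategy lambda = sigma + omega*i modulates the contact rate.
# sigma = depth of the shut-down (fraction of contacts removed),
# omega = periodicity of the shut-down, s_d = social-distance factor.

def simulate(sigma=0.6, omega=2 * math.pi / 60, s_d=0.8,
             beta0=0.3, gamma=0.1, days=180):
    S, I, R = 0.999, 0.001, 0.0
    history = []
    for t in range(days):
        shutdown_on = math.cos(omega * t) > 0   # shut-down switches on/off periodically
        contact = (1 - sigma) if shutdown_on else 1.0
        beta = beta0 * contact * s_d            # effective transmission rate
        new_inf = beta * S * I
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append(I)
    return history

print(f"peak infected fraction under periodic shut-down: {max(simulate()):.3f}")
print(f"peak infected fraction with no mitigation: {max(simulate(sigma=0.0, s_d=1.0)):.3f}")
```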

Macro-Scale: Astro-biological Genesis of COR-VIR by Solar Cycles

The genesis of COR-VIR is by mutations (and likely reassortment) induced by a burst of solar-flare radiation and a conditioning by cosmic radiation, each with different effects on the viral composition. Comparison with SARS-1 (outbreak immediately after a solar maximum) reveals that solar radiation (i.e. UVC) from flares & CMEs, more frequent and of higher intensity during solar maxima yet also present during minima, is responsible for the intensity (mortality rate) of the virus, while cosmic radiation, enabled by the lower count of sunspots that decreases the ozone in the atmosphere normally shielding the Earth’s surface from radiation, gives the virus a longer duration within and on organic matter (SARS-2), likely through mutation by radioactive C-14 created by cosmic-radiation interaction with atmospheric nitrogen. The increased organic surface radioactivity is compounded by the ozone reduction due to \(N_2\) emissions concurrent with “Global Warming.” The recent appearance of all coronaviruses in the last 5 solar cycles is likely due to a global minimum within a hypothetical longer cosmic-solar cycle (~25 solar cycles) that modulates the relative sunspot count of each solar cycle and has been linked to historical pandemics. A meta-analysis has detected such a frequency of global pandemics over the last millennium [2017JAsBO…5..159W]. The present sun cycle, 25, beginning with a minimum coincident with the first SARS-2 case of COVID-19, has the lowest sunspot count in recorded history (i.e. a double or triple minimum). This likely explains the genesis of the difference in duration and intensity between SARS-1 & SARS-2.

This longer solar-cosmic cycle that modulates the relative sunspot count of a solar cycle, the midpoint of which is associated with global pandemics, has recently been measured at 208 years by C-14 time-cycle analysis, and is itself modulated by a 2,300-year cycle. These time-cycles accord with the (perhaps time-varying) Mayan Round calendar: 1 K’atun = 2 solar cycles (~20 years); 1 May = 13 K’atun (~256 years); 1 B’ak’tun = 20 K’atun (~394 years); 1 Great Cycle = 13 B’ak’tun (~5,125 years). Thus, the 208-year cycle falls between 1/2 B’ak’tun (~197 years) and 1 May (~256 years, 13 K’atuns). It is likely the length of 25 sun cycles, the same as the May cycle, yet has decreased in length over the last few thousand years (perhaps along with sunspot counts). The 2,300-year cycle is ~6 B’ak’tuns (~2,365 years), constituting almost half of a Great Cycle (13 B’ak’tuns). We are likely at a triple minimum in sunspot count from all 3 solar-cosmic cycles, at the start of the first K’atun (2020) of the beginning of a new Great Cycle (2012), falling in the middle of the May (associated with crises).
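For reference, the calendar arithmetic behind the approximate year-lengths quoted above, using the standard Long Count day-counts and a 365.25-day year:

$$
\begin{aligned}
1\ \text{K'atun} &= 20\ \text{tun} = 7{,}200\ \text{days} \approx 19.7\ \text{yr} \ (\approx 2\ \text{solar cycles})\\
1\ \text{May} &= 13\ \text{K'atun} = 93{,}600\ \text{days} \approx 256.3\ \text{yr}\\
1\ \text{B'ak'tun} &= 20\ \text{K'atun} = 144{,}000\ \text{days} \approx 394.3\ \text{yr}\\
1\ \text{Great Cycle} &= 13\ \text{B'ak'tun} = 1{,}872{,}000\ \text{days} \approx 5{,}125\ \text{yr}
\end{aligned}
$$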

The entropic characterization of the pathogenesis as prolonged radioactivity – low entropic conditioning of high entropy – leads to the property of high durability on organic matter and stable mutations.

References

  1. [fincher_thornhill_2012]  Corey L. Fincher and Randy Thornhill. “Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity”. In: Behavioral and Brain Sciences 35.2 (2012), pp. 61–79. doi: 10.1017/S0140525X11000021.
  2. [VIOL-SAC]  René Girard. Violence and the Sacred. Trans. Patrick Gregory. Johns Hopkins University Press, 1977.
  3. [2017JAsBO…5..159W]  N. C. Wickramasinghe et al. “Sunspot Cycle Minima and Pandemics: The Case for Vigilance?”. In: Journal of Astrobiology & Outreach 5.2 (Jan. 2017), p. 159. doi: 10.4172/2332-2519.1000159.
Limits, the First Step into Calculus

The concept of a limit is the central idea that underlies calculus and is the unifying mechanism that allows differentials and integrals to be related. Calculus is used to model real-life phenomena in the language of mathematics. Anything that involves a rate of change, such as the velocity of your car (the rate of change of distance with respect to time), is found using derivatives. Limits are the basis of the derivative, which captures the instantaneous rate of change.


Definition of a Limit

The limit is the behavior of a function as we approach a certain value. Let’s start by looking at a particular function

$$f(x) = x^2 + x - 6$$

for values near 2. We can use a table of values that gets really close to 2 from values less than 2, and another that gets really close to 2 from values greater than 2.

| x (approaching 2 from below) | f(x) | x (approaching 2 from above) | f(x) |
|---|---|---|---|
| -2 | -4 | 6 | 36 |
| 0 | -6 | 4 | 14 |
| 1 | -4 | 3 | 6 |
| 1.5 | -2.25 | 2.5 | 2.75 |
| 1.75 | -1.1875 | 2.25 | 1.3125 |
| 1.875 | -0.609375 | 2.125 | 0.640625 |
| 1.936 | -0.315904 | 2.0625 | 0.31640625 |
| 1.968 | -0.158976 | 2.03125 | 0.15722656 |
| 1.984 | -0.079744 | 2.015625 | 0.07836914 |
| 1.992 | -0.039936 | 2.0078125 | 0.03912354 |
| 1.996 | -0.019984 | 2.00390625 | 0.01954651 |
| 1.998 | -0.009996 | 2.00195313 | 0.00976946 |
| 1.999 | -0.004999 | 2.00097656 | 0.00488375 |

From the table, we can see that as x approaches 2, the value of f(x) approaches 0. It would appear that if we let x get really close to 2 from either direction, then f(x) approaches 0. This is the basic version of how we solve a limit. We use the English phrase “the limit of f(x) as x approaches 2 is equal to 0”.
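If you want to build such a table yourself, here is a minimal sketch in Python (the sample points are arbitrary choices approaching 2):

```python
# A minimal sketch of the table-of-values method for f(x) = x^2 + x - 6 near x = 2:
# evaluate f at points that approach 2 from below and from above.

def f(x):
    return x**2 + x - 6

below = [1.9, 1.99, 1.999, 1.9999]
above = [2.1, 2.01, 2.001, 2.0001]

for lo, hi in zip(below, above):
    print(f"f({lo}) = {f(lo):.6f}    f({hi}) = {f(hi):.6f}")
# Both columns shrink toward 0, suggesting the limit as x -> 2 is 0.
```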


The Limit of a Function: Definition

We say

$$\lim_{x\rightarrow a} f(x) = L$$

and

$$\text{“The limit of f(x) as x approaches a equals L”}$$

if we can make the value of f(x) arbitrarily close to L by taking x sufficiently close to a (on either side of a) but not equal to a.


Finding Limits by “Direct Injection”

If we are searching for a limit like

$$\lim_{x\rightarrow 5} x^2+x-10$$

we can do what is called “Direct Injection” (more commonly known as direct substitution); in other words, we plug the value \(x=5\) into the function we are finding the limit of

$$\lim_{x\rightarrow 5} (x^2+x-10) = 5^2+5-10=20$$

then we have discovered that the limit is equal to 20.

Try this method on the following problem:

Example

Find the limit

$$\lim_{x\rightarrow 1} \frac{x-1}{x^2-1}$$

Solution

If we try direct injection we have a problem:

$$\lim_{x\rightarrow 1} \frac{1-1}{1^2-1} = \frac{0}{0}$$

The problem is that we can never divide by zero; the expression \(\frac{0}{0}\) is an indeterminate form, not a value. We need another method to figure out how to take this limit. We are allowed to manipulate the function algebraically as long as we do not break any rules of algebra. Notice that the denominator is factorable:

$$\lim_{x\rightarrow 1}\frac{x-1}{x^2-1} = \lim_{x\rightarrow 1} \frac{x-1}{(x-1)(x+1)}$$

Now we can see that (x-1) appears in both the numerator and the denominator. Since x approaches 1 but never equals 1, the factor (x-1) is nonzero and can be cancelled, simplifying the expression to

$$\lim_{x\rightarrow 1}\frac{1}{x+1}$$

And now if we do the direct injection of x=1 we get

$$\lim_{x\rightarrow 1} \frac{1}{x+1} = \frac{1}{2}$$

And we have discovered the limit!
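As a check, the same limit can be confirmed symbolically and numerically, for instance with the sympy library (assuming it is installed):

```python
import sympy as sp

x = sp.symbols('x')
expr = (x - 1) / (x**2 - 1)

# Symbolic check of the worked example above.
print(sp.limit(expr, x, 1))        # 1/2

# Numerical sanity check: approach x = 1 from both sides.
for h in [0.1, 0.01, 0.001]:
    print(float(expr.subs(x, 1 - h)), float(expr.subs(x, 1 + h)))
```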


Conclusion

The limit is the behavior of a function as the variable approaches a specific number. Limits can be found in numerous different ways; this post has shown you two specific methods to discover limits:

  1. Table of values: Take values getting really close to the value you are searching for and measure the behavior of f(x).
  2. Direct Injection: Try plugging in the value you are searching for directly into f(x), and if it fails, try manipulating the equation using standard algebra techniques.

Check back soon for more information on Limits and Calculus in general!

What is an Integral?

The integral is a method to find the area under a curve. It is formulated as a sum of many smaller areas that approximate the area under the curve, all added together to find the total area. We let the number of areas under the curve approach infinity so that the approximation becomes the actual area under the curve. This is called a Riemann sum.

Integrals were created to find area, but it was discovered that they are related to derivatives. This discovery leads to the integral also being called an “antiderivative.” The fundamental theorem of calculus ties the theories of derivatives and integrals together, forming what we consider modern calculus.

The Problem: How To Find Area Under A Curve?

Area has always been of interest to society. Ancient farmers needed ways to divide up land, which ideally would be located adjacent to a river. Since rivers wind back and forth in almost unpredictable ways, a method for determining areas was required to ensure that each farmer received the same area in which to grow crops.

In trying to solve for the area of anything, we have to ask ourselves ‘What is the meaning of area?’

If you have a square, or a rectangle, or any shape with straight edges, the area is fairly easy to calculate. But what about the farmers by the winding river? How do you treat the boundary of a curving river as a straight side? You would have to start approximating the river using straight lines. I used straight lines in my crude drawing above to try and separate the proposed farm sections. But you may be able to spot that my plots are not all the same area. Some farmers would complain!

For a rectangle, the area is found by multiplying the length and the width. The area of a triangle is half the base multiplied by the height. The area of a polygon can be discovered by compartmentalizing it into triangles and adding the areas of the triangles.

We have methods to find the areas of shapes with straight sides. But what about curved boundaries? We need a precise definition of the area. Let us start with a general curve:

[Figure: graph of a general curve]

We have no way (yet) to calculate the area found underneath this curve. So to make a crude approximation, we will draw rectangles whose height is from the x-axis to the function, and whose width is chosen so there are 5 equal width rectangles.

[Figure: a crude approximation of the area using five rectangles]

You can see the first few rectangles overestimate the area under the curve, while the last few underestimate it. But if we add up all the areas of the rectangles, we arrive at a simplistic approximation to the area under the curve.

$$\text{Area} = f(x_0)\times (x_1-x_0)+f(x_1)\times (x_2-x_1)+f(x_2)\times (x_3-x_2)+f(x_3)\times (x_4-x_3) + f(x_4)\times (x_5-x_4)$$

If we know that all the \(x_i\) points are the same distance apart, we can call that common distance \(\Delta x\), where \(\Delta x = x_{i+1}-x_{i}\).

Then we can re-write our sum of all rectangles in summation notation.

$$ \text{Area}=\sum_{i=0}^{4} f(x_i)\, \Delta x $$

Now imagine that we have many more rectangles.

Hopefully, it is obvious that by using many more rectangles of smaller widths we have reduced the error in how much each rectangle overestimates or underestimates the height of the curve. If we continue this trend, letting the number of rectangles approach infinity and the width of each rectangle approach zero, then the approximation of the area under the curve becomes exact. We write such a notion like this:

$$\text{Area}= \underset{n \to \infty}{\lim_{\Delta x \to 0}} \sum_{i=0}^{n-1} f(x_i)\,\Delta x$$

By letting the distance between adjacent points, and therefore the width of each rectangle, go to zero, we let the number of rectangles approach infinity by using the limit. The result is a perfect representation of the area under a curve. We call this result the Riemann sum, and we give it a special name:

The Integral.

Therefore:

$$\text{Area}=\text{integral} = \underset{n \to \infty}{\lim_{\Delta x \to 0}} \sum_{i=0}^{n-1} f(x_i)\,\Delta x = \int_a^b f(x)\, dx$$

The symbol \(\int\) stands for the integral, \(a\) and \(b\) are called the bounds of integration, the \(dx\) stands for \(\Delta x\) and represents an infinitesimal amount of \(x\), and \(f(x)\) is the curve we are looking for the area under.
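Here is a minimal numerical sketch of this definition, using \(f(x)=x^2\) on \([0,1]\) as an illustrative example whose exact area is 1/3:

```python
# Minimal sketch of a left-endpoint Riemann sum for f(x) = x^2 on [a, b] = [0, 1].
# The exact area is 1/3; increasing n drives the approximation toward it.

def f(x):
    return x**2

def riemann_sum(f, a, b, n):
    dx = (b - a) / n                                    # width of each rectangle
    return sum(f(a + i * dx) * dx for i in range(n))    # left endpoints x_0 .. x_{n-1}

for n in (5, 100, 100_000):
    print(n, riemann_sum(f, 0.0, 1.0, n))
# 5 rectangles: 0.24; 100: ~0.32835; 100000: ~0.333328 -> approaching 1/3
```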

The Natural Logarithm Rules

The natural logarithm, whose symbol is ln, is a useful tool in algebra and calculus to simplify complicated problems. In order to use the natural log, you will need to understand what ln is, what the rules for using ln are, and the useful properties of ln that you need to remember.

What is the natural logarithm?

The natural logarithm is a regular logarithm with base e. Remember that e is a mathematical constant, approximately 2.71828, known as the base of the natural exponential. We write the natural logarithm as ln.

$$\log_e (x) = \ln(x)$$

Since ln is a log with base e, we can think of it as the inverse of the natural exponential function \(e^x\).

$$\ln(e^x ) = x$$
or
$$e^{\ln(x)} = x $$

The natural exponent e shows up in many areas of mathematics, from finance to differential equations to normal distributions. It is clear that the logarithm with base e is the required inverse to help solve problems involving such exponentials.

Properties of ln

  1. \(\ln(a)\) exists if and only if \(a>0\). The natural logarithm of a requires that a is a positive value. This is true of all logarithms. This is an important property to remember, as the logarithm of a negative number is undefined.
  2. \(\ln(0)\) is undefined. Notice how in property 1 we require \(a > 0\) for \(\ln(a)\) to exist. That is no mistake: the logarithm of zero is undefined.
  3. \(\ln(1)=0\). The natural logarithm of 1 is 0. This is a useful property to eliminate certain terms in an equation if you can show that the value inside the natural logarithm is 1. It also marks the divider between negative and positive outputs of the natural log: \(\ln(a) < 0\) if \(0 < a < 1\) and, on the other side, \(\ln(a) > 0\) if \(a > 1\).
  4. \(\lim\limits_{a\rightarrow\infty} \ln(a)=\infty\). The limit of \(\ln(a)\) as a approaches infinity is infinity. The natural logarithm is a monotonically increasing function, so the larger the input the larger the output.
  5. \(\ln(e)=1\). Since the base of the natural logarithm is the mathematical constant e, the natural log of e is equal to 1.
  6. \(\ln(e^x)=x\). Since the natural logarithm is the inverse of the natural exponential, the natural log of \(e^x\) becomes x.
  7. \(e^{\ln(x)}=x\). Similar to property 6, the natural exponential of the natural log of x is equal to x, because they are inverse functions.

The Natural Logarithm Rules

There are 4 rules for logarithms that are applicable to the natural log. These rules are excellent tools for solving problems with natural logarithms involved, and as such warrant memorization.

  1. The Product Rule: $$\ln(ab)=\ln(a)+\ln(b)$$

    If you are taking the natural log of two terms multiplied together, it is equivalent to taking the natural log of each term added together.

    Note 1: Remember property 1. The natural log of a negative value is undefined. This implies that both terms a and b from the product rule are required to be greater than zero.

    Note 2: This property holds true for multiple terms:
    $$\ln(abcd\cdots)=\ln(a)+\ln(b)+\ln(c)+\ln(d)+\cdots$$

  2. The Quotient Rule: $$\ln\left(\frac{a}{b}\right)=\ln(a)-\ln(b)$$

    If you take the natural log of one term divided by another, it is equivalent to the natural log of the numerator minus the natural log of the denominator.

    Note 1: Remember property 1. The natural log of a negative value is undefined. This implies that both terms \(a\) and \(b\) from the quotient rule are required to be greater than zero.

  3. The Reciprocal Rule: $$\ln\left(\frac{1}{x}\right)=-\ln(x)$$

    If you take the natural log of 1 divided by a number, it is equivalent to the negative natural log of that number.

  4. The Power Rule: $$\ln(a^b)=b\ln(a)$$

    If you take the natural log of a term \(a\) with an exponent \(b\), it is equivalent to \(b\) times the natural log of \(a\).


It is of use to any student to be able to prove these 4 rules of natural logarithms. The observant student will see that the product rule can be proved easily using properties 6 and 7 and some knowledge of exponents. The quotient, reciprocal, and power rules all follow from specific versions of the product rule. So if you are able to prove the product rule, the remaining three should be trivial.
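For instance, a sketch of the product-rule proof using properties 6 and 7: write a and b as exponentials of their own logarithms, multiply, and take the natural log of both sides.

$$
ab = e^{\ln(a)}\, e^{\ln(b)} = e^{\ln(a)+\ln(b)}
\quad\Longrightarrow\quad
\ln(ab) = \ln\!\left(e^{\ln(a)+\ln(b)}\right) = \ln(a)+\ln(b)
$$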

Conclusion

  • The natural log ln is a logarithm with a base of the mathematical constant e, i.e. \(\ln=\log_e\)
  • The natural log ln is the inverse of the natural exponential \(e^x\)
  • The 4 rules of logs
    • $$\ln(ab)=\ln(a)+\ln(b)$$
    • $$\ln\left(\frac{a}{b}\right)=\ln(a)-\ln(b)$$
    • $$\ln\left(\frac{1}{x}\right)=-\ln(x)$$
    • $$\ln(a^b)=b\ln(a)$$

The Art of Argumentation – Making: Statistics as Modern Rhetoric

The process of statistical measurement used to make precise the evaluation of a claim relies upon our assumptions about the sampling measurement process and the empirical phenomena measured. The independence of the sampling measurements leads to the normal distribution, which allows the confidence of statistical estimations to be calculated. This is the metric used to gauge the validity of the tested hypothesis and therefore the empirical claim proposed. While usually the question of independence of variables arises in relation to the different quantities measured for each repeated sample, we ask now about the independence of the measurement operation from the measured quantity, and thus the effect of the observer, i.e. subjectivity, on the results found, i.e. objectivity. When there is an interaction between the observing subjectivity and the observed object, the normal distribution does not hold, and thus the objective validity of the sampling test is under question. Yet, this is the reality of quantum (small) measurements and of measurements in the social world. If we consider the cognitive bias of decreasing marginal utility, we find that samples of marginal utility will decrease with each consumption of a good, making the discovery of an underlying objective measurement of the subjective preference impossible. This assumption of independence of the Measurer from the Measurement is inherited from Descartes.
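As an illustrative sketch of this point (with hypothetical parameters, not drawn from the argument itself), one can simulate how a standard 95% confidence interval behaves when the measurement operation is independent of the measured quantity versus when each measurement alters it, as with decreasing marginal utility:

```python
import random, statistics

# Toy illustration (hypothetical parameters): compare coverage of a standard
# 95% confidence interval when (a) samples are independent draws of a fixed
# quantity, versus (b) each act of measurement lowers the quantity itself,
# as in decreasing marginal utility.

def ci_covers(samples, true_initial_value):
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return abs(m - true_initial_value) < 1.96 * se

def trial(observer_effect):
    true_value, samples = 10.0, []
    for _ in range(30):
        samples.append(random.gauss(true_value, 1.0))
        true_value -= observer_effect      # measuring/consuming changes the quantity
    return ci_covers(samples, 10.0)

random.seed(0)
for effect in (0.0, 0.2):
    coverage = sum(trial(effect) for _ in range(2000)) / 2000
    print(f"observer effect {effect}: CI covers the initial value in {coverage:.0%} of trials")
# ~95% coverage when independent; far lower once measurement alters the quantity.
```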

Descartes created the modern mathematical sciences through the development of a universal mathematics that would apply to all the other sciences to find certain validity with exactitude and a rigor of proof, for which essays can be found in his early writings developing these subject-oriented reflections. In his Meditations, one finds two ‘substances’ clearly and distinctly after his ‘doubting of everything, to know what is true’: thinking & extension. This separation of thinking and extension places measurement as objective, without acknowledging the perspective, or reference frame, of the subjective observer, leading to the formulation of the person as ‘a thinking thing,’ through cogito, ergo sum, ‘I think, I am.’ Just as with the detachment of mathematics from the other sciences – a pure universal science – and therefore from the concrete particularity of scientific truth, the mind becomes disconnected from the continuum of reality (i.e. ’the reals,’ cf. Cantor) of the extended body, as subjectivity infinitely far from objectivity, yet able to measure it perfectly. This would lead to the Cartesian Plane of XY independence as a generalization of Euclidean Geometry from the 2D Euclidean Plane where the parallel (5th) postulate was retained:

Euclid’s 5th Postulate: For an infinitely extended straight line and a point outside it, there exists exactly one parallel (non-intersecting) line passing through the point.

This became the objective coordinate system of the extended world, apart from the subjective consciousness that observed each dimension in its infinite independence, since it was itself independent of all extended objects of the world. All phenomena, it was said, could be embedded within this geometry to be measured using the Euclidean-Cartesian metrics of distance. For centuries, attempts were made to prove this postulate of Euclid, but none were successful. The 19th-century jurist Schweikart, no doubt following up millennia of ancient law derived from cosmo-theology, wrote to Gauss a Memorandum (below) of the first complete hyperbolic geometry as “Astral Geometry,” where the geometry of the solar system was worked out by internal relationships between celestial bodies rather than through imposing a Cartesian-Euclidean plane.

(p.76, Non-Euclidean Geometry, Bonola, 1912)

This short Memorandum convinced Gauss to take the existence of non-Euclidean geometries seriously, developing differential geometry into the notion of curvature of a surface, one over Schweikart’s Constant. This categorized the observed geometric trichotomy of hyperbolic, Euclidean, and elliptical geometries, distinguished by negative, null, and positive curvatures. These geometries are perspectives of measurement – internal, universally embedding, and external – corresponding to the value-orientations of subjective, normative, and objective. From within the Solar System, there is no reason to assume the ‘infinite’ Constant of Euclidean Geometry; one can instead work out the geometry around the planets, leading to an “Astral” geometry of negative curvature. The question of the horizon of infinity in the universe, and therefore of paralleling, is a fundamental question of cosmology and theology, hardly one to be assumed away. Yet, it may practically be conceived as the limit of knowledge in a particular domain space of investigation. In fact, arising at a similar time as the Ancient Greeks (i.e. Euclid), the Mayans worked out a cosmology similar to this astral geometry, the ‘4-cornered universe’ (identical to Fig. 42 above), using circular time through modular arithmetic, only assuming the universal spatial measurement when measuring over 5,000 years of time. The astral geometry of the solar system does not use ‘universal forms’ to ‘represent’ the solar system – rather, it describes the existing forms by the relation between the part and the whole of that which is investigated. The Sacred Geometries of astrology have significance not because they are ‘perfectly ideal shapes’ coincidentally found in nature, but because they are the existing shapes and numbers found in the cosmos, whose gravitational patterns, i.e. internal geometry, determine the dynamics of climate and thus the conditions of life on Earth.

The error of Descartes can be found in his conception of mathematics as a purely universal subject, often inherited in the bias of ‘pure mathematics’ vs. ‘applied mathematics.’ Mathematics may be defined as methods of counting, which therefore find the universality of an object (‘does it exist in itself as 1 or more?’), but always in a particular context. Thus, even as ‘generalized methods of abstraction,’ mathematics is rooted in concrete scientific problems as the perspectival position of an observer in a certain space. Absolute measurement can only be found in the reduction of the space of investigation, as all parallel lines are collapsed in an elliptical geometry. Always, the independence of dimensions in Cartesian Analysis is a presupposition given by the norms of the activity in question. Contemporary to Descartes, Vico criticized his mathematically universal modern science as lacking the common-sense wisdom of the humanities, in favor of a science of rhetoric. While rhetoric is often criticized as the art of saying something well over saying the truth, it is fundamentally the art of argumentation; thus, like mathematics as the art of measurement, neither is independent of the truth as the topic of what is under question. The Greek-into-Roman word for Senatorial debate was Topology, which comes from topos (topic) + logos (speech), thus using the numeral system of mathematics to measure the relationships of validation between claims made rhetorically concerning the public interest or greater good. The science of topology itself studies the underlying structures (‘of truth’) of different topics under question.

Together, Rhetoric and Mathematics enable Statistics, the art of validation. Ultimately, statistical questions ask, ‘What is the probability that an empirical claim is true?’

While it is often assumed that the empirical claim must be ‘objective,’ as independent of the observer, quantum physics developing in Germany around WWI revealed otherwise. When we perform statistics on claims of a subjective or normative nature, as commonly done in the human sciences, we must adjust the geometry of our measurement spaces to correspond to internal and consensual measurement processes. In order to do justice to subjectivity in rhetorical claims, it may be that hyperbolic geometry is the proper domain for most measurements of validity in empirical statistics, although this is rarely used. Edmund Husserl, a colleague of Hilbert, who was formulating the axiomatic treatment of Euclid by removing the 5th postulate, described in his Origins of Geometry how Geometry is a culture’s idealizations about the world, and so its axioms can never be self-grounded, but only assumed based upon the problems-at-hand, as long as they are internally consistent, to be worked out from within an engaged activity of interest – survival and emancipation. Geometry is the basis of how things appear, so it encodes a way of understanding time and moving within space, therefore conditioned on the embedded anthropology of a people, rather than a human-independent universal ideal – how we think is how we act. Thus, the hypothesis of equidistance at infinity of parallel lines is an assumption of independence of linear actions as the repeated trials of sample-testing in an experiment (‘Normality’). Against the universalistic concept of mathematics, rooted in Euclid’s geometry, Husserl argued in The Crisis of the European Sciences for a concept of science, and therefore of verification by mathematics, grounded in the lifeworld, the way in which things appear through intersubjective & historical processes – hardly universal, this geometry is hyperbolic in its nature and particular to contextual actions. Post-WWII German thinkers, including Gadamer and Habermas, further developed this move in philosophy of science towards historical intersubjectivity as the process of Normativity. The Geometry from which we measure the validity of a statement (in relation to reality) encodes our biases as the value-orientation of our investigation, making idealizations about the reality we question. We cannot escape presupposing a geometry, as there must always be ground to walk on, yet through the phenomenological method of questioning how things actually appear we can find geometries that do not presuppose more than the problem requires, and through the hermeneutic method gain a historical interpretation of their significance, why certain presuppositions are required for certain problems. Ultimately, one must have a critical approach to the geometry employed in order to question one’s own assumptions about the thing under investigation.

Quantum Computing Democracy

Consider the binary computer. All bits have one of two states, either 0 or 1, symbolic for ‘off’ and ‘on’ with reference to a circuit, as symbolic of a ‘fact of the world’ propositionally: A is true or false. In a quantum computer, the states may be occupied in superposition with a probability distribution such that the quantum-state is “a<1> + (1-a)<0>”, where ‘a’ is a real positive number less than 1 signifying the probability of state-1 occurring, ‘turning on.’ The quantum binary computer, at least locally, ultimately collapses into a common electro-magnetic binary computer when the true value of the bit is measured as either 1 or 0, yet before then is suspended in the super-positional quantum state. Thus, the resultant value of measurement is the answer to a particular question, while the quantum-state is the intermediation of problem-solving. The problem is inputted as the initial conditions of the quantum formulation, as the distributions of the different informational bits (i.e. ‘a’ values). Altogether this is the modern formulation of randomness introduced to a primitive Abacus, for which beads might be slid varying lengths between either edge to represent the constant values of each informational bar; it is dropped (on a sacred object) to introduce random interaction between the parts; and the resulting answer is decoded by whether a bead is closer to one side (1) or the other (0). Random interaction is allowed between informational elements through relational connections so that the system can interact with itself in totality, representing the underlying distributional assumptions of the problem.
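A minimal classical sketch of this picture (simulating only the probabilities, not quantum amplitudes or interference; the class and its names are hypothetical):

```python
import random

# Minimal classical sketch of the 'a<1> + (1-a)<0>' picture above: a bit that
# holds a probability a of being 1 until it is measured, at which point it
# collapses to a definite 0 or 1. (This simulates probabilities only, not
# quantum amplitudes; the class name is a hypothetical illustration.)

class QBit:
    def __init__(self, a: float):
        self.a = a            # probability of measuring 1
        self.value = None     # undefined until measured

    def measure(self) -> int:
        if self.value is None:                     # collapse on first measurement
            self.value = 1 if random.random() < self.a else 0
        return self.value

vote = QBit(0.7)                        # 70% estimated probability of state 1
print(vote.measure(), vote.measure())   # same definite value after collapse
```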

If a quantum computer is to be used to find the overall objective truth of a set of seemingly valid propositional statements, each claim can be inputted as a single quantum-bit with the perceived probability of its correctness, i.e. a validity metric, and the inter-relational system of claims inputted through oriented connections between bit-elements. After the quantum simulation, it will reach a steady-state with each bit either being true (1) or false (0), allowing the resulting valid system, considering the uncertainty and relationships between all the facts, to be determined. In a particularly chaotic system, the steady-state may itself exhibit uncertainty, as when there are many equally good solutions to a problem, with repeated sampling of the system therefore giving varying results. The problem is thus formulated as a directed ‘network’ deep-graph and initialized as nodes, edge-lengths, and orientations. The random interaction of the system operates as partial-differential relations (directed edges) between the – here, binary – random variables (nodes). The quantum computer therefore naturally solves problems formulated under the calculus class of Partial Differential Equations for Stochastic Processes. The quasi-state nodes interact through pre-determined relations (assumptions) to reach an equilibrium for the total automata as the state-of-affairs.
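As an illustrative sketch of the relaxation idea (the weights, update rule, and network below are assumptions for illustration, not a quantum algorithm):

```python
# Toy relaxation sketch of the directed belief network described above.
# Nodes hold a probability of being true; directed, weighted edges pull each
# node toward its parents' beliefs until the system settles into a steady state.
# (Weights and update rule are illustrative assumptions.)

beliefs = {"A": 0.9, "B": 0.2, "C": 0.5}
edges = [("A", "C", 0.6), ("B", "C", 0.4),   # C depends on A and B
         ("C", "B", 0.3)]                    # B is weakly pulled toward C

for _ in range(100):                         # iterate toward an approximate steady state
    new = dict(beliefs)
    for target in beliefs:
        incoming = [(src, w) for src, dst, w in edges if dst == target]
        if incoming:
            pulled = sum(w * beliefs[src] for src, w in incoming) / sum(w for _, w in incoming)
            new[target] = 0.5 * beliefs[target] + 0.5 * pulled
    beliefs = new

print({k: round(v, 2) for k, v in beliefs.items()})
# Reading off the settled probabilities (e.g. rounding toward 0 or 1) gives a
# mutually consistent assignment of true/false values across the claims.
```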

We may therefore consider a quantum voting machine to solve normative problems of policy and representation. Each person submits not just a single vote (0 or 1), but a quantum-bit as a single subjective estimation of the validity of the claim-at-hand. The basic democratic assumption of voting as a solution to the normative problem is that all votes are equal, so each single q-vote is connected to the central state-node with a proportional flow. The final state-solution will be a q-bit with probability equal to the average of all the q-votes, which may be close to the extremes (true or false), yet may also be close to the center (non-decidability). A measured decision by the state will thus result from not collapsing this random variable with all its information, especially if the probability is close to ½, thereby leaving the policy’s value undecidable, although rules for further collapsing the distribution (i.e. passing the law only if a majority of the popular vote is for it) can be established beforehand. It is also possible to create a more complicated method of aggregation, rather than total connection, as with the districting of the electoral college, by grouping certain votes and connecting these groupings in relation to the whole state-decision through concentric meta-groupings. We may further complicate the model of quantum-voting by allowing each citizen to submit not just a single q-vote but a whole quantum system of validity statements to answer the question, such as the rationality for a single normative answer expressed through a hierarchically weighted validity tree of minor claims. In fact, if more information is inputted by each citizen (i.e. validity arguments), then the naturally representative systems of groupings between persons can be determined by, rather than prior to, the vote. We end up with, not a ‘yes’ or ‘no’ on a single proposition, but an entire representational system for a regulatory domain of social life – the structural-functional pre-image (i.e. ‘photonegative’).
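A toy sketch of this aggregation, with an assumed decision rule (not prescribed above) that only collapses the result when the average q-vote is far enough from ½:

```python
# Toy sketch of the quantum-voting aggregation described above, with an
# illustrative decision rule: average the q-votes with equal weight and only
# 'collapse' to a decision if the average is far enough from 1/2.

def aggregate(q_votes, margin=0.1):
    """q_votes: each citizen's estimated validity of the claim, in [0, 1]."""
    p = sum(q_votes) / len(q_votes)        # equal weight for every q-vote
    if p > 0.5 + margin:
        return p, "pass"
    if p < 0.5 - margin:
        return p, "reject"
    return p, "undecidable (leave uncollapsed)"

print(aggregate([0.9, 0.8, 0.7, 0.95]))    # (0.8375, 'pass')
print(aggregate([0.55, 0.45, 0.5, 0.52]))  # (0.505, 'undecidable ...')
```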

Quantum computing is a natural process of biological life. One can construct a DNA-computer with quantum properties by encoding the initial conditions as the frequency-distributions of certain gene-patterns (built from the 4 bases) and the computer’s dynamic system of interaction through the osmosis of an enzyme mixture, resulting in a distributional answer to a complex problem. More generally, within evolutionary theory, the environment acts as a macro quantum computer through random mutations and natural selection to create and select for the best gene variations (polymorphisms) for a species’ survival.
