The Natural Logarithm Rules

The natural logarithm, whose symbol is ln, is a useful tool in algebra and calculus for simplifying complicated problems. To use the natural log, you will need to understand what ln is, the rules for working with it, and the useful properties of ln worth remembering.


What is the natural logarithm?


The natural logarithm is a logarithm with base e. Remember that e is the mathematical constant, approximately 2.71828, known as the natural exponent. We write the natural logarithm as ln.

$$\log_e (x) = \ln(x)$$

Since ln is a logarithm with base e, we can think of it as the inverse of the exponential function \(e^x\).

$$\ln(e^x ) = x$$
or
$$e^{\ln(x)} = x $$

The natural exponent e shows up in many areas of mathematics, from finance to differential equations to normal distributions, so the logarithm with base e is the natural inverse to reach for when solving problems involving such exponentials.
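
This inverse relationship is easy to check numerically. The snippet below is a quick sanity check in Python using the standard math module, where math.log with a single argument is the natural logarithm:

```python
import math

x = 2.5
print(math.log(math.exp(x)))   # ln(e^x) returns 2.5 (up to floating-point rounding)
print(math.exp(math.log(x)))   # e^(ln x) also returns 2.5
print(math.log(math.e))        # ln(e) = 1.0, since math.log defaults to base e
```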


Properties of ln

  1. \(\ln(a)\) exists if and only if \(a>0\)

    The natural logarithm of a requires that a is a positive value. This is true of all logarithms. This is an important restriction to remember, as the logarithm of a negative number is undefined.


  2. \(\ln(0)\) is undefined

    Notice that property 1 requires \(a > 0\) rather than \(a \ge 0\). That is no mistake: the logarithm of zero is undefined.


  3. \(\ln(1)=0\)

    The natural logarithm of 1 is 0. This is a useful property for eliminating terms in an equation when you can show that the argument of the natural logarithm is 1. It also marks the boundary between negative and positive outputs: \(\ln(a) < 0\) if \(0 < a < 1\), and \(\ln(a) > 0\) if \(a > 1\).


  4. \(\lim\limits_{a\rightarrow\infty} \ln(a)=\infty\)

    The limit of \(\ln(a)\) as \(a\) approaches infinity is infinity. The natural logarithm is a monotonically increasing function, so the larger the input, the larger the output.


  5. \(\ln(e)=1\)

    Since the base of the natural logarithm is the mathematical constant e, the natural log of e is then equal to 1.


  6. \(\ln(e^x)=x\)

    Since the natural logarithm is the inverse of the natural exponential, the natural log of \(e^x\) is simply \(x\).


  7. \(e^{\ln(x)}=x\)

    Similar to property 6, the natural exponential of the natural log of x is equal to x because they are inverse functions.
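
The properties above are easy to confirm numerically with Python's math module; in particular, math.log raises a ValueError for zero or negative inputs, matching properties 1 and 2:

```python
import math

print(math.log(1))        # property 3: ln(1) = 0.0
print(math.log(math.e))   # property 5: ln(e) = 1.0
print(math.log(0.5))      # negative output, since 0 < 0.5 < 1
print(math.log(2))        # positive output, since 2 > 1

# Properties 1 and 2: ln is undefined for zero and for negative inputs.
try:
    math.log(0)
except ValueError as err:
    print("ln(0) is undefined:", err)   # "math domain error"
```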



The Natural Logarithm Rules


There are 4 rules for logarithms that apply to the natural log. These rules are excellent tools for solving problems involving natural logarithms, and as such warrant memorization.

  1. The Product Rule

    $$\ln(ab)=\ln(a)+\ln(b)$$

    If you are taking the natural log of two terms multiplied together, it is equivalent to taking the natural log of each term added together.

    Note 1: Remember property 1. The natural log of a negative value is undefined. This implies that both terms \(a\) and \(b\) from the product rule are required to be greater than zero.

    Note 2: This property holds true for multiple terms:
    $$\ln(abcd\cdots)=\ln(a)+\ln(b)+\ln(c)+\ln(d)+\cdots$$


  2. The Quotient Rule

    $$\ln\left(\frac{a}{b}\right)=\ln(a)-\ln(b)$$

    If you take the natural log of one term divided by another, it is equivalent to the natural log of the numerator minus the natural log of the denominator.

    Note 1: Remember property 1. The natural log of a negative value is undefined. This implies that both terms \(a\) and \(b\) from the quotient rule are required to be greater than zero.


  3. The Reciprocal Rule

    $$\ln\left(\frac{1}{x}\right)=-\ln(x)$$

    If you take the natural log of 1 divided by a number, it is equivalent to the negative natural log of that number.


  4. The Power Rule

    $$\ln(a^b)=b\ln(a)$$

    If you take the natural log of a term \(a\) with an exponent \(b\), it is equivalent to \(b\) times the natural log of \(a\).
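
Each of the four rules above can be spot-checked numerically, for example with Python's math module (the particular values of a and b below are arbitrary):

```python
import math

a, b = 3.0, 7.0

# Product rule: ln(ab) = ln(a) + ln(b)
print(math.isclose(math.log(a * b), math.log(a) + math.log(b)))   # True

# Quotient rule: ln(a/b) = ln(a) - ln(b)
print(math.isclose(math.log(a / b), math.log(a) - math.log(b)))   # True

# Reciprocal rule: ln(1/a) = -ln(a)
print(math.isclose(math.log(1 / a), -math.log(a)))                # True

# Power rule: ln(a^b) = b * ln(a)
print(math.isclose(math.log(a ** b), b * math.log(a)))            # True
```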


It is of use to any student to be able to prove these 4 rules of natural logarithms. The observant student will see that the product rule can be proved easily using properties 6 and 7, together with some knowledge of exponents. The quotient, reciprocal, and power rules all follow from specific versions of the product rule, so if you are able to prove the product rule, the remaining three should be trivial.
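
As a sketch of the key step: by property 7 we can write \(a=e^{\ln(a)}\) and \(b=e^{\ln(b)}\), multiply the two exponentials together, and then apply property 6:

$$\ln(ab)=\ln\left(e^{\ln(a)}e^{\ln(b)}\right)=\ln\left(e^{\ln(a)+\ln(b)}\right)=\ln(a)+\ln(b)$$

From there, taking \(b=\frac{1}{a}\) so that \(\ln(a)+\ln\left(\frac{1}{a}\right)=\ln(1)=0\) gives the reciprocal rule, combining the product and reciprocal rules gives the quotient rule, and writing \(a^b=e^{b\ln(a)}\) gives the power rule directly.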


Conclusion

  • The natural log ln is a logarithm with a base of the mathematical constant e, i.e. \(\ln=\log_e\)
  • The natural log ln is the inverse of the exponential function \(e^x\)
  • The 4 rules of logs
    • $$\ln(ab)=\ln(a)+\ln(b)$$
    • $$\ln\left(\frac{a}{b}\right)=\ln(a)-\ln(b)$$
    • $$\ln\left(\frac{1}{x}\right)=-\ln(x)$$
    • $$\ln(a^b)=b\ln(a)$$

The Art of Argumentation-Making: Statistics as Modern Rhetoric

The process of statistical measurement, used to make the evaluation of a claim precise, relies upon our assumptions about the sampling measurement process and the empirical phenomena measured. The independence of the sampling measurements leads to the normal distribution, which allows the confidence of statistical estimations to be calculated. This is the metric used to gauge the validity of the tested hypothesis and therefore the empirical claim proposed. While the question of independence of variables usually arises in relation to the different quantities measured for each repeated sample, we ask now about the independence of the measurement operation from the measured quantity, and thus the effect of the observer, i.e. subjectivity, on the results found, i.e. objectivity. When there is an interaction between the observing subjectivity and the observed object, the normal distribution does not hold and thus the objective validity of the sampling test is under question. Yet this is the reality of quantum (small) measurements and of measurements in the social world. If we consider the cognitive bias of decreasing marginal utility, we find that samples of marginal utility will decrease with each consumption of a good, making the discovery of an underlying objective measurement of the subjective preference impossible. This assumption of independence of the Measurer from the Measurement is inherited from Descartes.
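
To make the standard assumption concrete before questioning it, the sketch below (a hypothetical Python illustration; the population, sample sizes, and trial counts are arbitrary) shows how independent repeated samples yield sample means that cluster approximately normally, which is what licenses attaching a confidence interval to an estimate:

```python
import random
import statistics

# Independent repeated sampling: the distribution of sample means is roughly
# normal, so roughly 95% of them land within two standard deviations of the
# center. This is the 'normality from independence' assumption discussed above.
random.seed(0)
population = [random.uniform(0, 10) for _ in range(10_000)]

means = []
for _ in range(2_000):
    sample = random.sample(population, 30)     # independent draws per trial
    means.append(statistics.mean(sample))

center = statistics.mean(means)
spread = statistics.stdev(means)
print(f"mean of sample means: {center:.2f}")
print(f"~95% interval for a sample mean: [{center - 2*spread:.2f}, {center + 2*spread:.2f}]")
```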

Descartes created the modern mathematical sciences through the development of a universal mathematics that would apply to all the other sciences to find certain validity with exactitude and a rigor of proof, for which essays can be found in his early writings developing these subject-oriented reflections. In his Meditations, one finds two ‘substances’ clearly and distinctly after his ‘doubting of everything, to know what is true’ – thinking & extension. This separation of thinking and extension places measurement as objective, without acknowledging the perspective, or reference frame, of the subjective observer, leading to the formulation of the person as ‘a thinking thing,’ through cogito, ergo sum, ‘I think, therefore I am.’ Just as with the detachment of mathematics from the other sciences – a pure universal science – and therefore from the concrete particularity of scientific truth, the mind becomes disconnected from the continuum of reality (i.e. ‘the reals,’ cf. Cantor) of the extended body, as a subjectivity infinitely far from objectivity yet able to measure it perfectly. This would lead to the Cartesian Plane of XY independence as a generalization of Euclidean Geometry from the 2D Euclidean Plane, where the parallel (5th) postulate was retained:

Euclid’s 5th Postulate: For an infinitely extended straight line and a point outside it, there exists exactly one parallel (non-intersecting) line going through the point.

This became the objective coordinate system of the extended world, apart from the subjective consciousness that observed each dimension in its infinite independence, since it was itself independent of all extended objects of the world. All phenomena, it was said, could be embedded within this geometry to be measured using the Euclidean-Cartesian metrics of distance. For centuries, attempts were made to prove this postulate of Euclid, but none were successful. The 19th-century jurist Schweikart, no doubt following up on millennia of ancient law derived from cosmo-theology, wrote to Gauss a Memorandum (below) of the first complete hyperbolic geometry as “Astral Geometry,” in which the geometry of the solar system was worked out through internal relationships between celestial bodies rather than by imposing a Cartesian-Euclidean plane.

[Figure: Schweikart’s Memorandum, reproduced in Bonola, Non-Euclidean Geometry, 1912, p. 76]

This short Memorandum convinced Gauss to take the existence of non-Euclidean geometries seriously, developing differential geometry into the notion of curvature of a surface, one over Schweikart’s Constant. This categorized the observed geometric trichotomy of hyperbolic, Euclidean, and elliptical geometries, distinguished by negative, null, and positive curvatures. These geometries are perspectives of measurement – internal, universally embedding, and external – corresponding to the value-orientations of subjective, normative, and objective. From within the Solar System, there is no reason to assume the ‘infinite’ Constant of Euclidean Geometry; one can instead work out the geometry around the planets, leading to an “Astral” geometry of negative curvature. The question of the horizon of infinity in the universe, and therefore of paralleling, is a fundamental question of cosmology and theology, hardly one to be assumed away. Yet it may practically be conceived as the limit of knowledge in a particular domain space of investigation. In fact, arising at a similar time as the Ancient Greeks (i.e. Euclid), the Mayans worked out a cosmology similar to this astral geometry, the ‘4-cornered universe’ (identical to Fig. 42 above), using circular time through modular arithmetic, only assuming the universal spatial measurement when measuring over 5,000 years of time. The astral geometry of the solar system does not use ‘universal forms’ to ‘represent’ the solar system – rather, it describes the existing forms by the relation between the part and the whole of that which is investigated. The Sacred Geometries of astrology have significance not because they are ‘perfectly ideal shapes’ coincidentally found in nature, but because they are the existing shapes and numbers found in the cosmos, whose gravitational patterns, i.e. internal geometry, determine the dynamics of climate and thus the conditions of life on Earth.

The error of Descartes can be found in his conception of mathematics as a purely universal subject, often inherited in the bias of ‘pure mathematics’ vs. ‘applied mathematics.’ Mathematics may be defined as methods of counting, which therefore find the universality of an object (‘does it exist in itself as 1 or more?’), but always in a particular context. Thus, even as ‘generalized methods of abstraction,’ mathematics is rooted in concrete scientific problems as the perspectival position of an observer in a certain space. Absolute measurement can only be found in the reduction of the space of investigation, as all parallel lines are collapsed in an elliptical geometry. Always, the independence of dimensions in Cartesian Analysis is a presupposition given by the norms of the activity in question. Contemporary to Descartes, Vico criticized his mathematically universal modern science as lacking the common-sense wisdom of the humanities, in favor of a science of rhetoric. While rhetoric is often criticized as the art of saying something well over saying the truth, it is fundamentally the art of argumentation; thus, like mathematics as the art of measurement, neither is independent of the truth as the topic under question. The Greek-into-Roman word for Senatorial debate was Topology, which comes from topos (topic) + logos (speech), thus using the numeral system of mathematics to measure the relationships of validation between claims made rhetorically concerning the public interest or greater good. The science of topology itself studies the underlying structures (‘of truth’) of different topics under question.

Together, Rhetoric and Mathematics enable Statistics, the art of validation. Ultimately, statistical questions ask: ‘What is the probability that an empirical claim is true?’

While it is often assumed the empirical claim must be ‘objective,’ as independent of the observer, quantum physics developing in Germany around WWI revealed otherwise. When we perform statistics on claims of a subjective or normative nature, as commonly done in the human sciences, we must adjust the geometry of our measurement spaces to correspond to internal and consensual measurement processes. In order to do justice to subjectivity in rhetorical claims, it may be that hyperbolic geometry is the proper domain for most measurements of validity in empirical statistics, although it is rarely used. Edmund Husserl, a colleague of Hilbert (who was then formulating the axiomatic treatment of Euclid by removing the 5th postulate), described in his Origin of Geometry how geometry consists of a culture’s idealizations about the world, so that its axioms can never be self-grounded, but only assumed based upon the problems-at-hand, as long as they are internally consistent enough to be worked out from within an engaged activity of interest – survival and emancipation. Geometry is the basis of how things appear, so it encodes a way of understanding time and moving within space; it is therefore conditioned on the embedded anthropology of a people, rather than being a human-independent universal ideal – how we think is how we act. Thus, the hypothesis of equidistance at infinity of parallel lines is an assumption of independence of linear actions, as in the repeated trials of sample-testing in an experiment (‘Normality’). Against the universalistic concept of mathematics, rooted in Euclid’s geometry, Husserl argued in The Crisis of the European Sciences for a concept of science, and therefore of verification by mathematics, grounded in the lifeworld, the way in which things appear through intersubjective and historical processes – hardly universal, this geometry is hyperbolic in its nature and particular to contextual actions. Post-WWII German thinkers, including Gadamer and Habermas, further developed this move in the philosophy of science towards historical intersubjectivity as the process of Normativity. The Geometry from which we measure the validity of a statement (in relation to reality) encodes our biases as the value-orientation of our investigation, making idealizations about the reality we question. We cannot escape presupposing a geometry, as there must always be ground to walk on; yet through the phenomenological method of questioning how things actually appear, we can find geometries that do not presuppose more than the problem requires, and through the hermeneutic method gain a historical interpretation of their significance, of why certain presuppositions are required for certain problems. Ultimately, one must take a critical approach to the geometry employed in order to question one’s own assumptions about the thing under investigation.

Quantum Computing Democracy

Consider the binary computer. All bits have one of two states, either 0 or 1, symbolic of ‘off’ and ‘on’ with reference to a circuit, and symbolic of a ‘fact of the world’ propositionally: A is true or false. In a quantum computer, the states may be occupied in superposition with a probability distribution, such that the quantum state is \(\sqrt{a}\,|1\rangle + \sqrt{1-a}\,|0\rangle\), where a is a real positive number less than 1 signifying the probability of state 1 occurring, ‘turning on.’ The quantum binary computer, at least locally, ultimately collapses into a common electromagnetic binary computer when the true value of the bit is measured as either 1 or 0, yet before then it is suspended in the superpositional quantum state. Thus, the resultant value of measurement is the answer to a particular question, while the quantum state is the intermediation of problem-solving. The problem is input as the initial conditions of the quantum formulation, namely the distributions of the different informational bits (i.e. the ‘a’ values). Altogether this is the modern formulation of randomness introduced to a primitive Abacus, for which beads might be slid varying lengths between either edge to represent the constant values of each informational bar; it is dropped (on a sacred object) to introduce random interaction between the parts; and the resulting answer is decoded by whether a bead is closer to one side (1) or the other (0). Random interaction is allowed between informational elements through relational connections so that the system can interact with itself in totality, representing the underlying distributional assumptions of the problem.
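
As a rough classical illustration of that collapse (not a real quantum simulation; the function name and trial count are hypothetical), the sketch below repeatedly 'measures' a bit whose probability of reading 1 is a:

```python
import random

def measure_qbit(a: float, trials: int = 10_000) -> float:
    """Repeatedly sample a bit that reads 1 with probability a.

    A classical Monte Carlo stand-in for measuring the superposed state
    sqrt(a)|1> + sqrt(1-a)|0>, which collapses to a definite 0 or 1.
    """
    ones = sum(1 for _ in range(trials) if random.random() < a)
    return ones / trials

print(measure_qbit(0.3))   # close to 0.3 over many trials
```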

If a quantum computer is to be used to find the overall objective truth of a set of seemingly valid propositional statements, each claim can be input as a single quantum bit with the perceived probability of its correctness, i.e. a validity metric, and the inter-relational system of claims input through oriented connections between bit-elements. After the quantum simulation, it will reach a steady state with each bit being either true (1) or false (0), allowing the resulting valid system, considering the uncertainty and relationship between all the facts, to be determined. In a particularly chaotic system, the steady state may itself exhibit uncertainty, as when there are many equally good solutions to a problem, so that repeated sampling of the system gives varying results. The problem is thus formulated as a directed ‘network’ deep-graph and initialized as nodes, edge-lengths, and orientations. The random interaction of the system operates as partial-differential relations (directed edges) between the – here, binary – random variables (nodes). The quantum computer therefore naturally solves problems formulated under the calculus class of Partial Differential Equations for Stochastic Processes. The quasi-state nodes interact through pre-determined relations (assumptions) to reach an equilibrium for the total automaton as the state-of-affairs.
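
A purely classical toy sketch of this idea (all names, probabilities, and the update rule below are illustrative assumptions, not an actual quantum algorithm): each claim starts with a believed probability of being true, and each directed edge repeatedly nudges its target toward the source's sampled truth value until the network settles.

```python
import random

# Hypothetical claims with initial believed probabilities of being true.
p = {"A": 0.9, "B": 0.5, "C": 0.2}
# Directed influences: (source, target, pull strength). Illustrative only.
edges = [("A", "B", 0.3), ("B", "C", 0.3)]

for _ in range(1000):
    # Sample a momentary truth value for every claim.
    sampled = {k: (1 if random.random() < v else 0) for k, v in p.items()}
    # Each edge pulls its target's probability toward the source's sample.
    for src, dst, w in edges:
        p[dst] = (1 - w) * p[dst] + w * sampled[src]

# 'Collapse' each claim to the value it favours in the steady state.
print({claim: int(prob >= 0.5) for claim, prob in p.items()})
```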

We may therefore consider a quantum voting machine to solve normative problems of policy and representation. Each person submits not just a single vote (0 or 1), but a quantum bit as a single subjective estimation of the validity of the claim-at-hand. The basic democratic assumption of voting as a solution to the normative problem is that all votes are equal, so each single q-vote is connected to the central state-node with a proportional flow. The final state-solution will be a q-bit with probability equal to the average of all the q-votes, which may be close to the extremes (true or false), yet may also be close to the center (non-decidability). A measured decision by the state will thus result from not collapsing this random variable with all its information, especially if the probability is close to ½, thereby leaving the policy’s value undecidable, although rules for further collapsing the distribution (e.g. passing the law only if a majority of the popular vote is in favor) can be established beforehand. It is also possible to create a more complicated method of aggregation, rather than total connection, as with the districting for the electoral college, by grouping certain votes and connecting these groupings in relation to the whole state-decision through concentric meta-groupings. We may further complicate the model of quantum voting by allowing each citizen to submit not just a single q-vote but a whole quantum system of validity statements instead of just one bit to answer the question, such as the rationale for a single normative answer expressed through a hierarchically weighted validity tree of minor claims. In fact, if more information is input by each citizen (i.e. validity arguments), then the naturally representative systems of groupings between persons can be determined by, rather than prior to, the vote. We end up with, not a ‘yes’ or ‘no’ on a single proposition, but an entire representational system for a regulatory domain of social life – the structural-functional pre-image (i.e. ‘photonegative’).
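
A minimal classical sketch of the equal-weight aggregation step described above, assuming each voter submits a single probability-valued q-vote and the state only decides when the average is far enough from ½ (the function name, the margin, and the decision rule are illustrative assumptions agreed beforehand):

```python
def aggregate_q_votes(votes: list[float], margin: float = 0.1) -> str:
    """Average equal-weight q-votes and apply a pre-agreed decision rule."""
    if not votes:
        return "undecidable"
    avg = sum(votes) / len(votes)
    if avg >= 0.5 + margin:
        return f"pass (support {avg:.2f})"
    if avg <= 0.5 - margin:
        return f"reject (support {avg:.2f})"
    return f"undecidable (support {avg:.2f})"   # too close to 1/2 to collapse

print(aggregate_q_votes([0.9, 0.7, 0.4, 0.55, 0.65]))   # pass (support 0.64)
```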

Quantum computing is a natural process of biological life. One can construct a DNA computer with quantum properties by encoding the initial conditions as the frequency distributions of certain gene patterns (built from the 4 base pairs) and the computer’s dynamic system of interaction through the osmosis of an enzyme mixture, resulting in a distributional answer to a complex problem. More generally, within evolutionary theory, the environment acts as a macro quantum computer through random mutations and natural selection, creating and selecting the best gene variations (polymorphisms) for a species’ survival.