
Logarithm

In mathematics, the logarithm of a number to a given base is the exponent to which the base
must be raised in order to produce that number. For example, the logarithm of 1000 to base 10 is
3, because 10 to the power of 3 is 1000: 10^3 = 1000. The logarithm of x to the base b is written
logb(x), such as log10(1000) = 3.

By the following formulas, logarithms reduce multiplication to addition and exponentiation to
multiplication:

logb(x · y) = logb(x) + logb(y),

logb(x^p) = p · logb(x).

Three particular values for the base b are most common. The natural logarithm, the one with base
b = e, occurs in calculus, since its derivative is 1/x. The logarithm to base b = 10 is called the
common logarithm, while the base b = 2 gives rise to the binary logarithm.

The invention of logarithms is due to John Napier in the early 17th century. Before calculators
became available, logarithms, via logarithm tables, were crucial to simplifying scientific
calculations. Today's applications of logarithms are numerous. Logarithmic scales reduce wide-
ranging quantities to smaller scopes; this is applied in the Richter scale, for example. In addition
to being a standard function used in various scientific formulas, logarithms appear in determining
the complexity of algorithms and of fractals. They also form the mathematical backbone of
musical intervals and some models in psychophysics, and have been used in forensic accounting.
Logarithms are also closely related to questions revolving around counting prime numbers.

Different routines are available to calculate logarithms, some designed for speedy calculation,
others for high accuracy. Logarithms have been generalized in various ways. The complex
logarithm applies to complex numbers instead of real numbers. The discrete logarithm is an
important primitive in public-key cryptography.

Definition

The logarithm of a number x with respect to a number b is the power to which b has to be raised
in order to give x. The number b is called the base. In symbols, the logarithm is the number y
solving the following equation:

b^y = x.[1]

The logarithm y is denoted logb(x). For b = 2, for example, this means

log2(8) = 3,

since 2^3 = 2 · 2 · 2 = 8. Another example is

log2(1/2) = −1,

since

2^−1 = 1/2.

The first equality is because x^−1 is the reciprocal of x, 1/x, for any number x unequal to zero.c[¾]
A third example: log10(150) is approximately 2.176. Indeed, 10^2 = 100 and 10^3 = 1000. As 150
lies between 100 and 1000, its logarithm lies between 2 and 3. Ways of calculating the logarithm
are explained further down.

The logarithm logb(x) is defined for any positive number x and any positive base b which is
unequal to 1. These restrictions are explained below.

Logarithmic identities

Main article: Logarithmic identities

There are three important formulas, sometimes called logarithmic identities, relating various
logarithms to one another. The first is about the logarithm of a product, the second about
logarithms of powers, and the third involves logarithms with respect to different bases.

Logarithm of a product

The logarithm of a product is the sum of the two logarithms. That is, for any two positive real
numbers x and y, and a given base b, the following equality holds:

logb(x · y) = logb(x) + logb(y).

For example,

log3(9 · 27) = log3(243) = 5,

since 3^5 = 243. On the other hand, the sum of log3(9) = 2 and log3(27) = 3 also equals 5. For y =
b, it yields

logb(x · b) = logb(x) + 1,

since logb(b) is 1, because b^1 = b. In other words, multiplying the number x by the base b amounts
to adding one to its logarithm. In general, that identity is proven using the following relation of
powers and multiplication:c[¾]

b^s · b^t = b^(s + t).

Substituting the particular values s = logb(x) and t = logb(y) gives

b^logb(x) · b^logb(y) = b^(logb(x) + logb(y)).

The left hand side agrees with x · y. Indeed, the logarithm of x is the number to which the base b
has to be raised in order to yield x. That is to say:

x = b^logb(x),

and a similar equality holds with y in place of x. Taking the base-b logarithm of both sides yields

logb(x · y) = logb(b^(logb(x) + logb(y))) = logb(x) + logb(y).

Logarithms also convert divisions to subtractions:

logb(x / y) = logb(x) − logb(y),

for example

log3(27 / 9) = log3(27) − log3(9) = 3 − 2 = 1.
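The product and quotient rules can be checked numerically. The following Python sketch (an illustration added here, using only the standard math module) reproduces the base-3 examples above:

```python
import math

# Check the product and quotient rules numerically for base b = 3.
b, x, y = 3, 9, 27

log_product = math.log(x * y, b)              # log3(9 * 27) = log3(243), about 5
sum_of_logs = math.log(x, b) + math.log(y, b) # log3(9) + log3(27) = 2 + 3

log_quotient = math.log(x / y, b)             # log3(9 / 27) = log3(1/3), about -1
diff_of_logs = math.log(x, b) - math.log(y, b)

print(log_product, sum_of_logs)   # both approximately 5.0
print(log_quotient, diff_of_logs) # both approximately -1.0
```

The printed pairs agree up to floating-point rounding, as the identities predict.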

Logarithm of a power

The logarithm of the p-th power of a number x is p times the logarithm of that number. In
symbols:

logb(x^p) = p · logb(x).

Also, logarithms relate roots to division: the logarithm of the p-th root of x is logb(x) / p.
As an example, the logarithm to base b = 2 of 64 is 6, since 2^6 = 64. But 64 is also 4^3, and indeed
3 · log2(4) = 3 · 2 equals 6. This formula can be derived by raising the afore-mentioned identity

x = b^logb(x)

to the p-th power (exponentiation), which gives

x^p = (b^logb(x))^p = b^(p · logb(x)).

For the second equality, the identity (b^s)^t = b^(s · t) was used.c[¾] Thus, the logb of the left hand side,
logb(x^p), and of the right hand side, p · logb(x), agree. The sought formula is proven.

Change of base

The following formula relates the logarithm of a fixed number x to one base in terms of the one
to another base:

logb(x) = logk(x) / logk(b).

This is actually a consequence of the previous rule, as the following proof shows: taking the
base-k logarithm of the above-mentioned identity

x = b^logb(x)

yields

logk(x) = logk(b^logb(x)).

By the previous rule, the right hand term equals logb(x) · logk(b). Dividing the preceding equation
by logk(b) shows the change-of-base formula. The division is possible since logk(b) ≠ 0. Indeed,
this logarithm cannot be zero, since k^0 = 1, but b was supposed to be different from 1.

As a practical consequence, logarithms with respect to any base b can be calculated with a
calculator, if logarithms to any base k (often k = 10 or k = e) are available: logb(x) = log10(x) /
log10(b).
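This practical recipe is easy to demonstrate in code. The following Python sketch (added for illustration) computes a base-5 logarithm using only base-10 logarithms:

```python
import math

# Change-of-base formula: log base 5 of 100 via base-10 logarithms only,
# using log5(x) = log10(x) / log10(5).
x, k = 100, 5
via_change_of_base = math.log10(x) / math.log10(k)
direct = math.log(x, k)  # Python's two-argument log applies the same formula

print(via_change_of_base)  # about 2.8614
print(5 ** via_change_of_base)  # recovers roughly 100
```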

Particular bases

While the definition of the logarithm applies to any positive real number b (excluding 1), a few
particular choices for b are more commonly used. These are b = 10, b = e (the mathematical
constant ≈ 2.71828…), and b = 2. The different standards come about because of the different
properties preferred in different fields. For example, it has been argued that base-10 logarithms
are easier to deal with than other ones, since in the decimal number system the powers of 10
have a simple representation.[2] Mathematical analysis, on the other hand, prefers the base b = e
because of the "natural" properties explained below.

The following table lists common notations for logarithms to these bases and the fields where
they are used. Instead of writing logb(x), it is common in many disciplines to omit the base and to
write log(x), when the intended base can be determined from the context. The notations
suggested by the International Organization for Standardization (ISO 31-11) are lb(x), ln(x),
and lg(x), respectively.[3] Given a number x and its logarithm logb(x), the base b can be determined by
the following formula:

b = x^(1 / logb(x)).

This follows from the change-of-base formula above.

Base b = 2: binary logarithm. Notations: lb(x),[4] ld(x), log(x) (in computer science), lg(x).
Used in computer science and information theory.

Base b = e: natural logarithm. Notations: ln(x),a[¾] log(x) (in mathematics and many
programming languages, including C, Java, Haskell, and BASIC). Used in mathematical analysis,
physics, chemistry, statistics, economics, and some engineering fields.

Base b = 10: common logarithm. Notations: lg(x), log(x) (in engineering, biology, astronomy).
Used in various engineering fields (see decibel and see below), logarithm tables, and handheld
calculators.

Analytic properties

Logarithmic function

The following discussion of logarithms uses the notion of a function. In a nutshell, a function is a
rule that assigns to a given number another number. The first one is called the variable, to
emphasize the idea that it can take different values.

The expression logb(x) depends on both the base b and on x. The base b is usually regarded as
fixed. Therefore the logarithm only depends on the variable x, a positive real number. Assigning
to x its logarithm logb(x) therefore is a function. It is called the logarithm function or logarithmic
function or even just logarithm.

The graph of the logarithm function logb(x) (green) is obtained by reflecting the one of the
function b^x (red) at the diagonal line (x = y).

The above definition of logarithms was done indirectly by means of the exponential function f(x)
= b^x: the logarithm (to base b) of y is the solution x to the equation

f(x) = y.

A rigorous proof that this equation actually has a solution and that this solution is unique
requires some elementary calculus, specifically the intermediate value theorem.[5] This theorem
says that a continuous function that takes two values m and n also takes any intermediate value,
that is, any value that lies between m and n. A function is continuous if it does not "jump", that
is, if its graph can be drawn without lifting the pen. This property can be shown to hold for the
exponential function f(x). Moreover, f takes arbitrarily big and arbitrarily small positive values,
so that any number y can be boxed by two values of the function f. Thus, the
intermediate value theorem ensures that the equation

f(x) = y

has a solution. Moreover, as the function f is strictly increasing (for b > 1), or strictly decreasing
(for 0 < b < 1), there is only one solution to this equation.
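The existence argument above is constructive: because f(x) = b^x is continuous and strictly increasing for b > 1, the root of f(x) = y can be found by bisection, repeatedly halving an interval that brackets the solution. The following Python sketch (an added illustration, not a production algorithm) does exactly that:

```python
def log_by_bisection(b, y, tol=1e-12):
    """Solve b**x == y for x by bisection, assuming b > 1 and y > 0.

    This works because f(x) = b**x is continuous and strictly increasing,
    so the intermediate value theorem guarantees a unique root.
    """
    # First find an interval [lo, hi] with b**lo <= y <= b**hi.
    lo, hi = -1.0, 1.0
    while b ** lo > y:
        lo *= 2
    while b ** hi < y:
        hi *= 2
    # Repeatedly halve the interval, keeping the root bracketed.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if b ** mid < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(log_by_bisection(2, 8))    # close to 3
print(log_by_bisection(10, 150)) # close to 2.176
```

Each halving adds one bit of accuracy, so roughly 40 to 50 iterations suffice for double precision.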

A compact way of rephrasing the point that the base-b logarithm of y is the solution x to the
equation f(x) = b^x = y is to say that the logarithm function is the inverse function of the
exponential function. Inverse functions are closely related to the original functions: the graphs of
the two correspond to each other upon reflecting them at the diagonal line x = y, as shown at the
right: a point (t, u = b^t) on the graph of the exponential function yields a point (u, t = logb(u)) on
the graph of the logarithm, and vice versa. Moreover, analytic properties of a function pass to
its inverse function.[5] Thus, as the exponential function f(x) = b^x is continuous and differentiable,
so is its inverse function, logb(x). Roughly speaking, a differentiable function is one whose graph
has no sharp "corners".

Derivative and antiderivative

Further information: List of integrals of logarithmic functions

The area of the hyperbolic sector equals ln(b) − ln(a).

The derivative of the natural logarithm ln(x) = loge(x) is given by

d/dx ln(x) = 1/x.

This can be derived from the definition as the inverse function of e^x, using the chain rule.[6] This
implies that the antiderivative of 1/x is ln(x) + C. An early application of this fact was the
quadrature of a hyperbolic sector, as shown at the right, by de Saint-Vincent in 1647. The
derivative with a generalised functional argument f(x) is

d/dx ln(f(x)) = f'(x) / f(x).

For this reason the quotient at the right hand side is called the logarithmic derivative of f. The
antiderivative of the natural logarithm ln(x) is

x · ln(x) − x + C.

Derivatives and antiderivatives of logarithms to other bases can be derived therefrom using the
formula for change of bases.

Integral representation of the natural logarithm

The natural logarithm of t is the shaded area underneath the graph of the function f(x) = 1/x
(reciprocal of x).

The natural logarithm satisfies the following identity:

ln(t) = ∫1^t dx/x.

In prose, the natural logarithm of t agrees with the integral of 1/x from 1 to t, that is to say, the
area between the x-axis and the function 1/x, ranging from x = 1 to x = t. This is depicted at the
right. The formula is a consequence of the fundamental theorem of calculus and the above
formula for the derivative of ln(x). Some authors actually use the right hand side of this equation
as a definition of the natural logarithm and derive the formulas concerning logarithms of
products and powers mentioned above from this definition.[7] The product formula ln(tu) = ln(t)
+ ln(u) is deduced in the following way:

ln(tu) = ∫1^(tu) dx/x =(1) ∫1^t dx/x + ∫t^(tu) dx/x =(2) ln(t) + ∫1^u dw/w = ln(t) + ln(u).

The "=" labelled (1) uses a splitting of the integral into two parts; the equality (2) is a change of
variable (w = x/t). This can also be understood geometrically. The integral is split into two parts
(shown in yellow and blue). The key point is this: rescaling the left hand blue area in the vertical
direction by the factor t and shrinking it by the same factor in the horizontal direction does not
change its size. Moving it appropriately, the area fits the graph of the function 1/x again.
Therefore, the left hand area, which is the integral of f(x) from t to tu, is the same as the integral
from 1 to u:

∫t^(tu) dx/x = ∫1^u dw/w.

A visual proof of the product formula of the natural logarithm.

The power formula ln(t^r) = r · ln(t) is derived similarly:

ln(t^r) = ∫1^(t^r) dx/x = ∫1^t (1/w^r) · (r · w^(r−1)) dw = r · ∫1^t dw/w = r · ln(t).

The second equality uses a change of variables, w := x^(1/r), while the third equality follows from
integration by substitution.

The sum over the reciprocals of natural numbers, the so-called harmonic series

1 + 1/2 + 1/3 + ... + 1/n,

is also closely tied to the natural logarithm: as n tends to infinity, the difference

1 + 1/2 + 1/3 + ... + 1/n − ln(n)

converges (i.e., gets arbitrarily close) to a number known as the Euler–Mascheroni constant. Little is
known about it, not even whether it is a rational number or not. This relation is used in the
performance analysis of algorithms such as quicksort, see below.[8]
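The convergence is slow but easy to observe numerically. The following Python sketch (added for illustration) computes the difference between the n-th harmonic number and ln(n) for growing n; the values drift toward the Euler–Mascheroni constant, approximately 0.5772:

```python
import math

# Difference between the n-th harmonic number and ln(n); as n grows,
# it approaches the Euler-Mascheroni constant, about 0.5772.
def harmonic_minus_log(n):
    return sum(1 / k for k in range(1, n + 1)) - math.log(n)

for n in (10, 1000, 100000):
    print(n, harmonic_minus_log(n))
```

The residual error behaves roughly like 1/(2n), which is why the decimals settle down only gradually.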

Calculation

There are a variety of ways to calculate logarithms for practical purposes. One method uses
power series, that is to say, a sequence of polynomials whose values get arbitrarily close to the
exact value of the logarithm. For high precision calculation of the natural logarithm, an
approximation formula based on the arithmetic-geometric mean is used. Algorithms involving
lookups of precalculated tables are used when the emphasis is on speed rather than high accuracy.
Moreover, the binary logarithm algorithm calculates lb(x) recursively based on repeated
squarings of x, taking advantage of the relation

log2(x^2) = 2 · log2(x).

Finally, based on quick ways to calculate exponentials e^x, the natural logarithm of y, that is, the
solution x to

e^x − y = 0,

can also be calculated efficiently using Newton's method.[9]
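Applied to f(x) = e^x − y, Newton's update x − f(x)/f'(x) simplifies to x − 1 + y·e^(−x). The following Python sketch (an added illustration) implements this iteration:

```python
import math

def ln_newton(y, iterations=50):
    """Natural logarithm of y > 0 via Newton's method on f(x) = exp(x) - y.

    The Newton update x - (exp(x) - y)/exp(x) simplifies to x - 1 + y*exp(-x).
    """
    x = 1.0  # crude starting guess
    for _ in range(iterations):
        x = x - 1 + y * math.exp(-x)
    return x

print(ln_newton(10))   # about 2.302585
print(math.log(10))    # library value for comparison
```

Once the iterate is close to the root, the number of correct digits roughly doubles per step, so far fewer than 50 iterations are actually needed.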

While in some cases, such as log10(10,000) = 4, logarithms can be easy to compute, they
generally take less simple values: by the Gelfond–Schneider theorem, given two algebraic
numbers a and b, the ratio γ = ln(a) / ln(b) is either a rational number p / q (in which case a^q =
b^p) or transcendental. The latter means that for all n and all rational numbers a0, ..., a(n−1), the
expression

γ^n + a(n−1) · γ^(n−1) + ... + a1 · γ + a0

is nonzero.[10] Related questions in transcendence theory, such as linear forms in logarithms, are a
matter of current research.

Power series

Taylor series

The Taylor series of ln(z) at z = 1. The animation shows the first 10 approximations.

For real numbers z satisfying 0 < z < 2,b[¾] the natural logarithm of z can be written as

ln(z) = (z − 1) − (z − 1)^2/2 + (z − 1)^3/3 − (z − 1)^4/4 + ...[11]

The infinite sum means that the logarithm of z is approximated by the sums

(z − 1) − (z − 1)^2/2 + ... ± (z − 1)^n/n

to arbitrary precision, provided n is big enough. In the parlance of elementary calculus, ln(z) is
the limit of the sequence of these sums. This series is the Taylor series expansion of the natural
logarithm at z = 1.
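The partial sums can be evaluated directly. The following Python sketch (added for illustration) shows how the approximations of ln(1.5) improve as more terms are included:

```python
import math

# Partial sums of the Taylor series of ln(z) at z = 1:
# ln(z) = (z-1) - (z-1)**2/2 + (z-1)**3/3 - ...   for 0 < z < 2.
def ln_taylor(z, terms):
    return sum((-1) ** (k + 1) * (z - 1) ** k / k for k in range(1, terms + 1))

for terms in (1, 5, 10, 50):
    print(terms, ln_taylor(1.5, terms))
print(math.log(1.5))  # exact value, about 0.405465
```

Since the series is alternating with shrinking terms here, the error of each partial sum is bounded by the first omitted term.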

More efficient series

Another series, based on the area hyperbolic tangent function, is

ln(z) = 2 · ((z − 1)/(z + 1) + (1/3) · ((z − 1)/(z + 1))^3 + (1/5) · ((z − 1)/(z + 1))^5 + ...),

for complex numbers z with positive real part.[11] This series can be derived from the above
Taylor series. It converges more quickly than the Taylor series, especially if z is close to 1. For
example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error
of about 3·10^−6. The above Taylor series needs 13 terms to achieve that precision. The quick
convergence for z close to 1 can be taken advantage of in the following way: given a low-
accuracy approximation y ≈ ln(z) and putting A = z / exp(y), the logarithm of z is

ln(z) = y + ln(A).

The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated
efficiently. The calculation of A can be done using the exponential series, which converges
quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to
smaller values of z by writing z = a · 10^b, so that ln(z) = ln(a) + b · ln(10).
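The faster series is just as easy to evaluate. The following Python sketch (added for illustration) confirms the error estimate quoted above for z = 1.5:

```python
import math

# Faster-converging series: ln(z) = 2 * sum_{k>=0} w**(2k+1) / (2k+1),
# with w = (z - 1)/(z + 1), valid for z with positive real part.
def ln_fast(z, terms):
    w = (z - 1) / (z + 1)
    return 2 * sum(w ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# For z = 1.5, w = 0.2; three terms already give an error of a few times 1e-6.
print(abs(ln_fast(1.5, 3) - math.log(1.5)))
print(ln_fast(1.5, 20))  # essentially exact in double precision
```

Each extra term multiplies the error by roughly w^2, here 0.04, which explains the rapid convergence near z = 1.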

Arithmetic-geometric mean approximation

The natural logarithm ln(x) is approximated to a precision of 2^−p (or p precise bits) by

ln(x) ≈ π / (2 · M(1, 2^(2 − m)/x)) − m · ln(2).[12][13]

Here M denotes the arithmetic-geometric mean and m is chosen so that x · 2^m
is bigger than 2^(p/2). The constants π and ln(2) can be calculated with particular series. The formula goes back to
Gauss.
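Gauss's formula can be sketched directly in double precision. In the following Python illustration (an added sketch, not a high-precision implementation; the helper names and the fixed iteration count are choices made here), the AGM is computed by the usual mean/geometric-mean iteration:

```python
import math

def agm(a, b, iterations=40):
    """Arithmetic-geometric mean M(a, b) via the standard iteration."""
    for _ in range(iterations):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def ln_agm(x, p=53):
    """Approximate ln(x) using Gauss's AGM formula:
    ln(x) ~= pi / (2 * M(1, 2**(2 - m)/x)) - m * ln(2),
    with m chosen so that x * 2**m exceeds 2**(p/2)."""
    m = max(0, math.ceil(p / 2 - math.log2(x)))
    s = 2.0 ** (2 - m) / x
    return math.pi / (2 * agm(1.0, s)) - m * math.log(2)

print(ln_agm(10))    # about 2.302585
print(math.log(10))  # library value for comparison
```

In double precision the result matches math.log to around 13 or 14 digits; the formula really pays off in arbitrary-precision arithmetic, where the AGM converges quadratically.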

Complex logarithm

Main article: Complex logarithm

Polar form of complex numbers

The complex logarithm is a generalization of the above definition of logarithms of positive real
numbers to complex numbers. Complex numbers are commonly represented as z = x + iy, where
x and y are real numbers and i is the imaginary unit. As was discussed above, given any positive real
number z, the equation

e^a = z

has exactly one real solution a. However, there are infinitely many complex numbers solving
this equation, i.e., multiple complex numbers whose exponential equals z. This causes complex
logarithms to be different from real ones.

The solutions of the above equation are most readily described using the polar form of z. It
encodes a complex number z by its absolute value r, that is, the distance to the origin, and the
argument φ, the angle between the line connecting the origin and z and the x-axis. In terms of the
trigonometric functions sine and cosine, r and φ are such that

z = r · cos(φ) + i · r · sin(φ).[14]

The principal branch of the complex logarithm, Log(z). The hue of the color shows the argument
of Log(z); the saturation (intensity) of the color shows the absolute value of the complex
logarithm.

The absolute value r is uniquely determined by z by the formula r = √(x^2 + y^2), but
there are multiple numbers φ such that the preceding equation holds: given one such φ,

φ' = φ + 2π

also satisfies the preceding equation. Adding 2π or 360 degreesd[¾] to the argument corresponds to
"winding" around the circle counter-clockwise by an angle of 2π. However, there is exactly one
argument φ satisfying −π < φ and φ ≤ π. It is called the principal argument and denoted Arg(z),
with a capital A.[15] (The normalization 0 ≤ Arg(z) < 2π also appears in the literature.[16]) It is a
fact proven in complex analysis that

z = r · e^(iφ).
Consequently, if φ is the principal argument Arg(z), the number

a_k = ln(r) + i · (φ + 2πk)

is such that the exponential of a_k equals z, for any integer k. Accordingly, a_k is called a
complex logarithm of z. If k = 0, a_k is called the principal value of the logarithm, denoted Log(z). The
principal argument of any positive real number is 0; hence the principal logarithm of such a
number is a real number and equals the natural logarithm as defined above. In contrast to the real
case, the analogous formulas for principal values of logarithms of products and powers for complex
numbers do in general not hold.

The graph at the right depicts Log(z). The discontinuity, i.e., jump in the hue at the negative part
of the x-axis, is due to the jump of the principal argument at this locus. This behavior can only be
circumvented by dropping the range restriction on φ. Then the argument of z and, consequently,
its logarithm become multi-valued functions.
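Python's standard cmath module computes the principal value; the other branches differ by integer multiples of 2πi, as the following sketch (added for illustration) shows:

```python
import cmath
import math

# Principal complex logarithm: Log(z) = ln(r) + i*Arg(z), with -pi < Arg(z) <= pi.
z = -1 + 0j
principal = cmath.log(z)
print(principal)  # roughly 0 + 3.14159j, i.e. ln(1) + i*pi

# Every a_k = ln(r) + i*(phi + 2*pi*k) also has exponential z.
r, phi = abs(z), cmath.phase(z)
for k in (-1, 0, 1):
    a_k = math.log(r) + 1j * (phi + 2 * math.pi * k)
    print(k, cmath.exp(a_k))  # each result is close to -1
```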

Applications

A nautilus displaying a logarithmic spiral.

Logarithms have many applications, both within and outside mathematics. The logarithmic
spiral, for example, appears (approximately) in various guises in nature, such as the shells of the
nautilus.[17] Logarithms also occur in various scientific formulas, such as the Tsiolkovsky rocket
equation, the Fenske equation, or the Nernst equation.

The logarithm of a positive integer n to a given base shows how many digits are needed to write
that integer in that base. For instance, the common logarithm log10(n) is linked to the number k of
decimal digits of n: k is the smallest integer strictly bigger than log10(n), i.e., the one satisfying
the inequalities:

k − 1 ≤ log10(n) < k.

Similarly, the number of binary digits (or bits) needed to store a positive integer n is the smallest
integer strictly bigger than log2(n).
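Both digit counts follow the same pattern, floor of the logarithm plus one, as the following Python sketch (added for illustration; exact only while floating-point log stays accurate, so for very large integers one would use integer methods instead) shows:

```python
import math

# Number of decimal digits of n: the smallest integer strictly bigger
# than log10(n), i.e. floor(log10(n)) + 1.  The same idea works in base 2.
def decimal_digits(n):
    return math.floor(math.log10(n)) + 1

def bits(n):
    return math.floor(math.log2(n)) + 1

print(decimal_digits(1234))     # 4
print(bits(1023), bits(1024))   # 10 11
```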
A tiling of the Poincaré disk of hyperbolic geometry by ideal triangles; their edges are shortest
"lines" in hyperbolic geometry.

In hyperbolic geometry the logarithm is used in measuring the distances between points. In the
Poincaré disk model, this causes the shortest connections between any two points to be either
segments of circles perpendicular to the boundary of the disk or lines through the origin, the
center of the disk.[18]

Logarithmic scale

Main article: Logarithmic scale

A logarithmic chart depicting the value of one Goldmark in Reichsmark during the German
hyperinflation in the 1920s.

Various scientific quantities are expressed as logarithms of other quantities, a concept known as
logarithmic scale. For example, the Richter scale measures the strength of an earthquake by
taking the common logarithm of the energy emitted by the earthquake. Thus, an increase in
energy by a factor of 10 adds one to the Richter magnitude; a 100-fold increase in energy adds 2 to the
magnitude, etc.[19] This way, large-scaled quantities are reduced to much smaller ranges. A
second example is the pH in chemistry: it is the negative of the base-10 logarithm of the activity
of hydronium ions (H3O+, the form H+ takes in water).[20] The activity of hydronium ions in
neutral water is 10^−7 mol·L^−1 at 25 °C, hence a pH of 7. Vinegar (typically around 5% w/v
aqueous solution of acetic acid), on the other hand, has a pH of about 3. The difference of 4
corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about
10^−3 mol·L^−1. In a similar vein, the decibel (symbol dB) is a unit of measure based on the base-10
logarithm of ratios. For example, it is used to quantify the loss of voltage levels in transmitting
electrical signals[21] or to describe power levels of sounds in acoustics.[22] In spectrometry and
optics, the absorbance unit used to measure optical density is equivalent to −10 dB. The apparent
magnitude measures the brightness of stars logarithmically.[23]
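The pH example can be spelled out in a few lines. The following Python sketch (added for illustration) turns hydronium-ion activities into pH values and recovers the factor-of-10^4 ratio discussed above:

```python
import math

# pH is the negative base-10 logarithm of the hydronium-ion activity.
def pH(activity):
    return -math.log10(activity)

print(pH(1e-7))  # neutral water: 7.0
print(pH(1e-3))  # roughly vinegar: 3.0
# A difference of 4 pH units corresponds to a factor of 10**4 in activity:
print(10 ** (pH(1e-7) - pH(1e-3)))
```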

Semilog graphs or log-linear graphs use this concept for visualization: one axis, typically the
vertical one, is scaled logarithmically. This way, exponential functions of the form f(x) = a · b^x
appear as straight lines with slope proportional to log(b). In a similar vein, log-log graphs scale both
axes logarithmically.[24]

Psychology

In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus
and sensation.[25] (However, it is often considered to be less accurate than Stevens' power
law.[26]) According to that model, the smallest change ΔS of some stimulus S that an observer can
still notice is proportional to S. This gives rise to a logarithmic relation by the above relation of
the natural logarithm and the integral over dS / S. Hick's law proposes a logarithmic relation
between the time individuals take to choose an alternative and the number of choices they
have.[27]

Mathematically untrained individuals tend to estimate numerals with a logarithmic spacing, i.e.,
the position of a presented numeral correlates with the logarithm of the given number, so that
smaller numbers are given more space than bigger ones. With increasing mathematical training,
this logarithmic representation becomes gradually superseded by a linear one, a finding
confirmed both in comparing second to sixth grade Western school children[28] and in
comparisons between American and indigenous cultures.[29]

Probability theory and statistics

Distribution of first digits (in %, red bars) in the population of the 237 countries of the world.
Black dots indicate the distribution predicted by Benford's law.

Logarithms also arise in probability theory: when tossing a coin repeatedly, it is known (by the law of
large numbers) that the "heads"-vs.-"tails" ratio approaches 0.5 as the number of tosses
increases. The fluctuations of this ratio around the limiting value are quantified by the law of the
iterated logarithm.

Logarithms are used in the process of maximum likelihood estimation when applied to a sample
consisting of independent random variables: maximizing the product of the random variables is
equivalent to maximizing the logarithm of the product, and in so doing one differentiates a sum
rather than a product.[30]

Benford's law, an empirical statistical description of the occurrence of digits in certain real-life
data sources, such as heights of buildings, is based on logarithms: the probability that the first
decimal digit of an item in the data sample is d (from 1 to 9) equals

log10(d + 1) − log10(d),

regardless of the unit of measurement.[31] Thus, according to that law, about 30% of the data
can be expected to have 1 as first digit, 18% start with 2, etc. Deviations from this pattern can be
used to detect fraud in accounting.[32]
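The predicted digit frequencies are straightforward to tabulate. The following Python sketch (added for illustration) computes all nine Benford probabilities:

```python
import math

# Benford's law: probability that the leading decimal digit is d.
benford = {d: math.log10(d + 1) - math.log10(d) for d in range(1, 10)}

print(benford[1])  # about 0.301, i.e. roughly 30% of items start with 1
print(benford[2])  # about 0.176
print(sum(benford.values()))  # the nine probabilities telescope to 1
```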

A log-normal distribution is one whose logarithm is normally distributed.[33]

Computational complexity

Quicksort sorts by choosing a pivot element (red), and separately sorts the items which are
smaller than this element and the ones which are bigger.

Logarithms arise in complexity theory, a branch of computer science which studies the
performance of algorithms.[34] They typically occur in describing algorithms which divide a
problem into two smaller ones, and join the solutions of the subproblems.[35] For example, to find
a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with
the half before or after the middle entry if the number is still not found. Similarly, the quicksort
algorithm sorts an array by dividing the array into halves and sorting these first, as shown in the
animation. These algorithms typically need time that is approximately proportional to log(N) and
N · log(N), respectively, where N is the length of the list.
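The logarithmic bound for binary search can be observed by counting comparisons. The following Python sketch (added for illustration) instruments a textbook binary search:

```python
import math

def binary_search_steps(sorted_list, target):
    """Return (index or None, number of comparisons) for binary search."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

data = list(range(1000))
_, steps = binary_search_steps(data, 999)
# Each comparison halves the search range, so the step count stays
# near log2(N): for N = 1000 that is at most about 10.
print(steps, math.ceil(math.log2(len(data))))
```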

A function f(x) is said to grow logarithmically if it is (sometimes approximately) proportional to
the logarithm. (Biology, in describing growth of organisms, uses this term for an exponential
function, though.[36]) It is irrelevant to which base the logarithm is taken, since choosing a
different base amounts to multiplying the result by a constant, as follows from the change-of-
base formula above. For example, any natural number N can be represented in binary form in no
more than floor(log2(N)) + 1 bits. In other words, the amount of hard disk space needed to store a number grows
logarithmically as a function of the size of the number. Corresponding calculations
carried out using loge lead to results in nats, which may lack this intuitive interpretation. The
change amounts to a factor of loge(2) ≈ 0.69: twice as many values can be encoded with one
additional bit, which corresponds to an increase of about 0.69 nats. A similar example is the
relation of the decibel (dB), using a common logarithm (b = 10), vis-à-vis the neper, based on a natural
logarithm.

Entropy

Entropy, broadly speaking a measure of the (dis-)order of some system, also relies on logarithms. In
thermodynamics, the entropy S of some physical system is defined by

S = −k · Σ pi · ln(pi).

The sum is over all states i the system in question can have, pi is the probability that the state i is
attained, and k is the Boltzmann constant. Similarly, entropy in information theory is a measure
of the quantity of information. If a message recipient may expect any one of N possible messages
with equal likelihood, then the amount of information conveyed by any one such message is
quantified as log2(N) bits.[37]
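The information-theoretic version of this formula is easy to evaluate. The following Python sketch (added for illustration) computes Shannon entropy in bits:

```python
import math

def shannon_entropy(probabilities):
    """Entropy in bits: -sum(p * log2(p)); zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# 8 equally likely messages carry log2(8) = 3 bits each.
print(shannon_entropy([1 / 8] * 8))   # 3.0
# A biased coin carries less than 1 bit per toss.
print(shannon_entropy([0.9, 0.1]))    # about 0.469
```

Uniform distributions maximize this quantity, which is why N equally likely messages give exactly log2(N) bits.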

Fractals

The Sierpinski triangle (at the right) is constructed by repeatedly replacing equilateral triangles
by three smaller ones.

Logarithms occur in various definitions of the dimension of fractals.[38] Fractals are geometric
objects that are self-similar, i.e., that have small parts which reproduce (at least roughly) the
entire global structure. The Sierpinski triangle, for example, can be covered by three copies of
itself, each having half the original size. This causes the Hausdorff dimension of this structure to be
log(3)/log(2) ≈ 1.58. The idea of scaling invariance is also inherent to Benford's law mentioned
above. Another logarithm-based notion of dimension is obtained by counting the number of
boxes needed to cover the fractal in question.
Number theory

Graph comparing π(x) (red), x / ln(x) (green) and Li(x) (blue).

Natural logarithms are closely linked to counting prime numbers, an important topic in number
theory. For any given number x, the number of prime numbers less than or equal to x is denoted
π(x). In its simplest form, the prime number theorem asserts that π(x) is approximately given by

x / ln(x),

in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity.[39] This
can be rephrased by saying that the probability that a randomly chosen number between 1 and x
is prime is inversely proportional to the number of decimal digits of x. A far better estimate of
π(x) is given by the offset logarithmic integral function Li(x), defined by

Li(x) = ∫2^x dt / ln(t).

The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms
of comparing π(x) and Li(x).[40] The Erdős–Kac theorem describing the number of distinct prime
factors also involves the natural logarithm.
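The quality of the x / ln(x) estimate can be checked empirically. The following Python sketch (added for illustration; the sieve is a naive one chosen for brevity) compares exact prime counts with the prime number theorem's approximation:

```python
import math

def prime_count(x):
    """pi(x): number of primes <= x, computed with a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(sieve)

# The prime number theorem: pi(x) is approximated by x / ln(x),
# and the ratio of the two tends to 1 as x grows.
for x in (10 ** 3, 10 ** 5):
    estimate = x / math.log(x)
    print(x, prime_count(x), round(estimate, 1),
          round(prime_count(x) / estimate, 3))
```

At these small values the ratio is still noticeably above 1 (it decreases only slowly), which is one reason the better estimate Li(x) is preferred in practice.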

By the formula for the logarithm of a product, the logarithm of the factorial, n! = 1 · 2 · ... · n,
is given by

ln(n!) = ln(1) + ln(2) + ... + ln(n).

This can be used to obtain Stirling's formula, an approximation of n! for large n.[41]

Music

Logarithms appear in the encoding of musical tones. In equal temperament, the frequency ratio
depends only on the interval between two tones, not on the specific frequency, or pitch, of the
individual tones. Therefore, logarithms can be used to describe the intervals: the interval between
two notes in semitones is the base-2^(1/12) logarithm of the frequency ratio.[42] For finer encoding, as
is needed for non-equal temperaments, intervals are also expressed in cents, hundredths of an
equally-tempered semitone. The interval between two notes in cents is the base-2^(1/1200) logarithm
of the frequency ratio (or 1200 times the base-2 logarithm). The table below lists some musical
intervals together with the frequency ratios and their logarithms.

Interval: 1/72 tone. Frequency ratio: 2^(1/72). Semitones: 1/6. Cents: 16.67.
Interval: semitone. Frequency ratio: 2^(1/12). Semitones: 1. Cents: 100.
Interval: just major third. Frequency ratio: 5/4. Semitones: ≈ 3.86. Cents: ≈ 386.31.
Interval: major third. Frequency ratio: 2^(4/12). Semitones: 4. Cents: 400.
Interval: tritone. Frequency ratio: 2^(6/12). Semitones: 6. Cents: 600.
Interval: octave. Frequency ratio: 2. Semitones: 12. Cents: 1200.
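The table's entries follow directly from the logarithmic definitions. The following Python sketch (added for illustration) converts frequency ratios to semitones and cents:

```python
import math

def semitones(frequency_ratio):
    """Interval size in equal-tempered semitones: the base-2**(1/12) logarithm."""
    return math.log(frequency_ratio, 2 ** (1 / 12))

def cents(frequency_ratio):
    """Interval size in cents: 1200 times the base-2 logarithm of the ratio."""
    return 1200 * math.log2(frequency_ratio)

print(semitones(2))          # octave: 12 semitones
print(cents(2 ** (1 / 12)))  # equal-tempered semitone: 100 cents
print(cents(5 / 4))          # just major third: about 386.31 cents
```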

Related concepts

The cologarithm of a number is the logarithm of the reciprocal of the number: cologb(x) =
logb(1/x) = −logb(x).[43] The antilogarithm function antilogb(y) is the inverse function of the
logarithm function logb(x); it can be written in closed form as b^y.[44] Both terminologies are
primarily found in older books.

The double or iterated logarithm, ln(ln(x)), is the inverse function of the double exponential
function. The super-logarithm is the inverse function of tetration. The super-
logarithm of x grows even more slowly than the double logarithm for large x. The Lambert W
function is the inverse function of f(w) = w · e^w.

From the perspective of pure mathematics, the identity log(xy) = log(x) + log(y) expresses an
isomorphism between the multiplicative group of the positive real numbers and the group of all
the reals under addition. Logarithmic functions are the only continuous isomorphisms from the
multiplicative group of positive real numbers to the additive group of real numbers.[45] By means
of that isomorphism, the Lebesgue measure dx on R corresponds to the Haar measure dx/x on the
positive reals.[46] In complex analysis and algebraic geometry, differential forms of the form dx/x
(= d log(x)) are known as forms with logarithmic poles.[47] This notion in turn gives rise to
concepts such as logarithmic pairs, log terminal singularities, or logarithmic geometry.

The polylogarithm is the function defined by

Lis(z) = Σ from k = 1 to ∞ of z^k / k^s.[48]

It is related to the natural logarithm by Li1(z) = −ln(1 − z). Moreover, Lis(1) equals the Riemann
zeta function ζ(s).

The discrete logarithm is a related notion in the theory of finite groups. It involves solving the
equation b^x = t, where b and t are elements of the group, and x is an integer specifying a power
in the group operation. Zech's logarithm is related to the discrete logarithm in the multiplicative
group of non-zero elements of a finite field.[49] For some finite groups, it is believed that the
discrete logarithm is very hard to calculate, whereas discrete exponentials are quite easy. This
asymmetry has applications in public-key cryptography, more specifically in elliptic curve
cryptography.[50]

For p-adic numbers, the logarithm can also be defined via its Taylor series, which again has
radius of convergence equal to 1. This definition can be extended to all non-zero p-adic numbers.
Similarly, a p-adic exponential can be defined (though its domain is smaller than in the real or
complex case), and the p-adic logarithm is, once again, its inverse function.[51]

The logarithm of a matrix is the inverse function of the matrix exponential.[52]
