College Mathematics: Algebra

A Study Guide for Fullsail Research

PDF generated using the open source mwlib toolkit. See for more information. PDF generated at: Sat, 10 Dec 2011 01:38:23 UTC

Basic Algebra
0 (number) · Identity element · Variable · Integer · Monomial · Binomial · Polynomial · Coefficient

Routine Functions
Simplification Expression Root of a function Exponentiation Symmetric function

Algebraic Structures
Pre-algebra Algebra of sets Algebraic structure

Notation & Symbols
Decimal Multiplication Division Fraction Factorization

Distributivity Distribution Associativity Commutativity

Orders of Operations
Function Set Binary operation Inverse element Elementary algebra FOIL method Antisymmetric relation Symmetric polynomial Summation Closure


Zero divisor Negative and non-negative numbers Real number Natural number Rational number Irrational number Imaginary number Algebraic number Complex number

Special References
Portal:Algebra Proportionality Equation Word problem Statistical population

Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License


Basic Algebra
0 (number)
← −1 · 0 · 1 → (List of numbers — Integers: ← 0 · 10 · 20 · 30 · 40 · 50 · 60 · 70 · 80 · 90 →)
Cardinal: 0, zero, "oh" (/ˈoʊ/), nought, naught, nil
Ordinal: 0th, zeroth, noughth
Divisors: all numbers
Numerals: Arabic ٠, Bengali ০, Devanāgarī ०, Chinese 零 / 〇, Japanese 零 / 〇, Khmer ០, Thai ๐
Binary: 0 · Octal: 0 · Duodecimal: 0 · Hexadecimal: 0

0 (zero; /ˈziːroʊ/ ZEER-oh) is both a number[1] and the numerical digit used to represent that number in numerals. It fulfills a central role in mathematics as the additive identity of the integers, real numbers, and many other algebraic structures. As a digit, 0 is used as a placeholder in place value systems. In the English language, 0 may be called zero, nought or (US) naught (/ˈnɔːt/), nil, or "o". Informal or slang terms for zero include zilch and zip.[2] Ought or aught (/ˈɔːt/) have also been used historically.[3]

The word "zero" came via French zéro from Venetian zero, which (together with cipher) came via Italian zefiro from Arabic صفر: ṣafira = "it was empty", ṣifr = "zero", "nothing".[4]

Early history
By the middle of the 2nd millennium BC, Babylonian mathematics had a sophisticated sexagesimal positional numeral system. The lack of a positional value (or zero) was indicated by a space between sexagesimal numerals. By 300 BC, a punctuation symbol (two slanted wedges) was co-opted as a placeholder in the same Babylonian system. In a tablet unearthed at Kish (dating from about 700 BC), the scribe Bêl-bân-aplu wrote his zeros with three hooks, rather than two slanted wedges.[5] The Babylonian placeholder was not a true zero because it was not used alone, nor was it used at the end of a number. Thus numbers like 2 and 120 (2×60), 3 and 180 (3×60), 4 and 240 (4×60) looked the same, because the larger numbers lacked a final sexagesimal placeholder. Only context could differentiate them.

Records show that the ancient Greeks seemed unsure about the status of zero as a number. They asked themselves, "How can nothing be something?", leading to philosophical and, by the Medieval period, religious arguments about the nature and existence of zero and the vacuum. The paradoxes of Zeno of Elea depend in large part on the uncertain interpretation of zero.

The concept of zero as a number, and not merely a symbol for separation, is attributed to India, where by the 9th century AD practical calculations were carried out using zero, which was treated like any other number, even in the case of division.[6][7] The Indian scholar Pingala (circa 5th–2nd century BC) used binary numbers in the form of short and long syllables (the latter equal in length to two short syllables), making it similar to Morse code.[8][9] He and his contemporary Indian scholars used the Sanskrit word śūnya to refer to zero or void.

History of zero
The Mesoamerican Long Count calendar developed in south-central Mexico and Central America required the use of zero as a placeholder within its vigesimal (base-20) positional numeral system. Many different glyphs, including a partial quatrefoil, were used as a zero symbol for these Long Count dates, the earliest of which (on Stela 2 at Chiapa de Corzo, Chiapas) has a date of 36 BC.[10] Since the eight earliest Long Count dates appear outside the Maya homeland,[11] it is assumed that the use of zero in the Americas predated the Maya and was possibly the invention of the Olmecs. Many of the earliest Long Count dates were found within the Olmec heartland, although the Olmec civilization ended by the 4th century BC, several centuries before the earliest known Long Count dates. Although zero became an integral part of Maya numerals, it did not influence Old World numeral systems.

Quipu, a knotted cord device used in the Inca Empire and its predecessor societies in the Andean region to record accounting and other digital data, is encoded in a base-ten positional system. Zero is represented by the absence of a knot in the appropriate position. In India, the use of a blank on a counting board to represent 0 dates back to the 4th century BC.[12]
The back of Olmec Stela C from Tres Zapotes, the second oldest Long Count date yet discovered. The numerals translate to September, 32 BC (Julian). The glyphs surrounding the date are thought to be one of the few surviving examples of Epi-Olmec script.

In China, counting rods were used for decimal calculation from the 4th century BC, including the use of blank spaces. Chinese mathematicians understood negative numbers and zero; some mathematicians used 無入, 空, or 口 for the latter, until Gautama Siddha introduced the symbol 0.[13][14] The Nine Chapters on the Mathematical Art, which was mainly composed in the 1st century AD, stated "[when subtracting] subtract same signed numbers, add differently signed numbers, subtract a positive number from zero to make a negative number, and subtract a negative number from zero to make a positive number."[15]

By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for zero (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals. Because it was used alone, not just as a placeholder, this Hellenistic zero was perhaps the first documented use of a number zero in the Old World. However, the positions were usually limited to the fractional part of a number (called minutes, seconds, thirds, fourths, etc.); they were not used for the integral part of a number. In later Byzantine manuscripts of Ptolemy's Syntaxis Mathematica (also known as the Almagest), the Hellenistic zero had morphed into the Greek letter omicron (otherwise meaning 70).

Another zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, nulla, meaning "nothing", not as a symbol. When division produced zero as a remainder, nihil, also meaning "nothing", was used. These medieval zeros were used by all future medieval computists (calculators of Easter). An initial "N" was used as a zero symbol in a table of Roman numerals by Bede or his colleague around 725.
In 498 AD, the Indian mathematician and astronomer Aryabhata stated "sthanam sthanam dasa gunam" ("from place to place, each is ten times the preceding"), which is the origin of the modern decimal-based place-value notation.[16] The oldest known text to use a decimal place-value system, including a zero, is the Jain text from India entitled the Lokavibhâga, dated 458 AD. This text uses Sanskrit numeral words for the digits, with words for zero such as shunya, the Sanskrit word for "void" or "empty".[17] The first known use of special glyphs for the decimal digits that includes the indubitable appearance of a symbol for the digit zero, a small circle, appears on a stone inscription

found at the Chaturbhuja Temple at Gwalior in India, dated 876 AD.[18][19] There are many documents on copper plates, with the same small o in them, dated back as far as the sixth century AD, but their authenticity may be doubted.[5] The Hindu-Arabic numerals and the positional number system emerged around 500 AD, and around 825 AD the Persian scientist al-Khwārizmī described them in his book on arithmetic.[20] This book synthesized Greek and Hindu knowledge and also contained his own fundamental contributions to mathematics and science, including an explanation of the use of zero. It was only centuries later, in the 12th century, that the Arabic numeral system was introduced to the Western world through Latin translations of his Arithmetic.


As a number
0 is the integer immediately preceding 1. In most cultures, 0 was identified before the idea of negative quantities that go lower than zero was accepted. Zero is an even number,[21] because it is divisible by 2. 0 is neither positive nor negative. By most definitions[22] 0 is a natural number, and it is then the only natural number that is not positive. Zero is a number which quantifies a count or an amount of null size.

The value, or number, zero is not the same as the digit zero, used in numeral systems with positional notation. Successive positions of digits have higher weights, so inside a numeral the digit zero is used to skip a position and give appropriate weights to the preceding and following digits. A zero digit is not always necessary in a positional number system: the number 02, for example, has the same value as 2. In some instances, a leading zero may be used to distinguish a number.
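The placeholder role of the digit zero described above can be sketched in Python (the helper name `from_digits` is my own, not from the text): each digit's weight depends on its position, and a 0 holds a position open so the surrounding digits keep their weights.

```python
def from_digits(digits, base=10):
    """Evaluate a sequence of digits in positional notation."""
    value = 0
    for d in digits:
        value = value * base + d   # shift everything one position left, then add
    return value

# The 0 in 205 keeps the 2 in the hundreds place:
assert from_digits([2, 0, 5]) == 205
# Without the placeholder, the weights collapse:
assert from_digits([2, 5]) == 25
# A leading zero does not change the value: 02 = 2.
assert from_digits([0, 2]) == 2
```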

As a year label
In the BC calendar era, the year 1 BC is the first year before AD 1; no room is reserved for a year zero. By contrast, in astronomical year numbering, the year 1 BC is numbered 0, the year 2 BC is numbered −1, and so on.[23]

Names and symbols
In 976 AD the Persian encyclopedist Muhammad ibn Ahmad al-Khwarizmi, in his "Keys of the Sciences", remarked that if, in a calculation, no number appears in the place of tens, then a little circle should be used "to keep the rows". This circle the Arabs called صفر ṣifr, "empty". That was the earliest mention of the name ṣifr that eventually became zero.[20]

Italian zefiro already meant "west wind", from Latin and Greek zephyrus; this may have influenced the spelling when transcribing Arabic ṣifr.[24] The Italian mathematician Fibonacci (c. 1170–1250), who grew up in North Africa and is credited with introducing the decimal system to Europe, used the term zephyrum. This became zefiro in Italian, which was contracted to zero in Venetian.

As the decimal zero and its new mathematics spread from the Arab world to Europe in the Middle Ages, words derived from ṣifr and zephyrus came to refer to calculation, as well as to privileged knowledge and secret codes. According to Ifrah, "in thirteenth-century Paris, a 'worthless fellow' was called a '... cifre en algorisme', i.e., an 'arithmetical nothing'."[24] From ṣifr also came French chiffre = "digit", "figure", "number", chiffrer = "to calculate or compute", and chiffré = "encrypted". Today, the word in Arabic is still ṣifr, and cognates of ṣifr are common in the languages of Europe and southwest Asia.

The modern numerical digit 0 is usually written as a circle or ellipse. Traditionally, many print typefaces made the capital letter O more rounded than the narrower, elliptical digit 0.[25]

Typewriters originally made no distinction in shape between O and 0; some models did not even have a separate key for the digit 0. The distinction came into prominence on modern character displays.[25] A slashed zero can be used to distinguish the number from the letter. The digit 0 with a dot in the center seems to have originated as an option on IBM 3270 displays and has continued with some modern computer typefaces such as Andalé Mono. One variation uses a short vertical bar instead of the dot. Some fonts designed for use with computers made one of the capital-O–digit-0 pair more rounded and the other more angular (closer to a rectangle). A further distinction is made in falsification-hindering typefaces, as used on German car number plates, by slitting open the digit 0 on the upper right side. Sometimes the digit 0 is used either exclusively, or not at all, to avoid confusion altogether.


Rules of Brahmagupta
The rules governing the use of zero appeared for the first time in Brahmagupta's book Brahmasputha Siddhanta (The Opening of the Universe),[26] written in 628 AD. Here Brahmagupta considers not only zero, but negative numbers, and the algebraic rules for the elementary operations of arithmetic with such numbers. In some instances, his rules differ from the modern standard. Here are the rules of Brahmagupta:[26]
• The sum of zero and a negative number is negative.
• The sum of zero and a positive number is positive.
• The sum of zero and zero is zero.
• The sum of a positive and a negative is their difference; or, if their absolute values are equal, zero.
• A positive or negative number when divided by zero is a fraction with the zero as denominator.
• Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator.
• Zero divided by zero is zero.
In saying zero divided by zero is zero, Brahmagupta differs from the modern position. Mathematicians normally do not assign a value to this, whereas computers and calculators sometimes assign NaN, which means "not a number." Moreover, non-zero positive or negative numbers when divided by zero are either assigned no value, or a value of unsigned infinity, positive infinity, or negative infinity. Once again, these assignments are not numbers, and are associated more with computer science than pure mathematics, where in most contexts no assignment is done.
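Brahmagupta's rule that zero divided by zero is zero differs from what a modern language does; a small Python sketch (my own illustration, not from the text) shows one modern treatment, where division by zero is refused outright and NaN/infinity exist only as explicit floating-point values:

```python
import math

# Python assigns no value to 0/0: it raises an exception instead.
try:
    result = 0 / 0
except ZeroDivisionError:
    result = "undefined"   # the modern position: no value is assigned

assert result == "undefined"

# NaN ("not a number") and infinity exist as IEEE-style float values,
# but they are produced explicitly, not by the / operator on zeros.
assert math.isnan(float("nan"))
assert math.inf > 10**100
```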

Zero as a decimal digit
Positional notation without the use of zero (using an empty space in tabular arrangements, or the word kha, "emptiness") is known to have been in use in India from the 6th century. The earliest certain use of zero as a decimal positional digit dates to the 5th-century mention in the text Lokavibhaga. The glyph for the zero digit was written in the shape of a dot, and consequently called bindu ("dot"). The dot had been used in Greece during earlier ciphered numeral periods.

The Hindu-Arabic numeral system (base 10) reached Europe in the 11th century, via the Iberian Peninsula, through Spanish Muslims, the Moors, together with knowledge of astronomy and instruments like the astrolabe, first imported by Gerbert of Aurillac. For this reason, the numerals came to be known in Europe as "Arabic numerals". The Italian mathematician Fibonacci, or Leonardo of Pisa, was instrumental in bringing the system into European mathematics in 1202, stating:

After my father's appointment by his homeland as state official in the customs house of Bugia for the Pisan merchants who thronged to it, he took charge; and in view of its future usefulness and convenience, had me in my boyhood come to him and there wanted me to devote myself to and be instructed in the study of calculation for some days. There, following my introduction, as a consequence of marvelous instruction in the art, to the nine digits of the Hindus, the knowledge of the art very much appealed to me before all others, and for it I realized that all its aspects were studied in Egypt, Syria,

Greece, Sicily, and Provence, with their varying methods; and at these places thereafter, while on business, I pursued my study in depth and learned the give-and-take of disputation. But all this even, and the algorism, as well as the art of Pythagoras, I considered as almost a mistake in respect to the method of the Hindus (Modus Indorum). Therefore, embracing more stringently that method of the Hindus, and taking stricter pains in its study, while adding certain things from my own understanding and inserting also certain things from the niceties of Euclid's geometric art, I have striven to compose this book in its entirety as understandably as I could, dividing it into fifteen chapters. Almost everything which I have introduced I have displayed with exact proof, in order that those further seeking this knowledge, with its pre-eminent method, might be instructed, and further, in order that the Latin people might not be discovered to be without it, as they have been up to now. If I have perchance omitted anything more or less proper or necessary, I beg indulgence, since there is no one who is blameless and utterly provident in all things. The nine Indian figures are: 9 8 7 6 5 4 3 2 1. With these nine figures, and with the sign 0 ... any number may be written.[27][28]

Here Leonardo of Pisa uses the phrase "sign 0", indicating it is like a sign to do operations like addition or multiplication. From the 13th century, manuals on calculation (adding, multiplying, extracting roots, etc.) became common in Europe, where they were called algorismus after the Persian mathematician al-Khwārizmī. The most popular was written by Johannes de Sacrobosco about 1235, and was one of the earliest scientific books to be printed, in 1488. Until the late 15th century, Hindu-Arabic numerals seem to have predominated among mathematicians, while merchants preferred to use the Roman numerals. In the 16th century, they became commonly used in Europe.


In mathematics
Elementary algebra
The number 0 is the smallest non-negative integer. The natural number following 0 is 1 and no natural number precedes 0. The number 0 may or may not be considered a natural number, but it is a whole number and hence a rational number and a real number (as well as an algebraic number and a complex number).

The number 0 is neither positive nor negative and appears in the middle of a number line. It is neither a prime number nor a composite number. It cannot be prime because it has an infinite number of factors, and cannot be composite because it cannot be expressed by multiplying prime numbers (0 must always be one of the factors).[29] Zero is, however, even (see parity of zero).

The following are some basic (elementary) rules for dealing with the number 0. These rules apply for any real or complex number x, unless otherwise stated.
• Addition: x + 0 = 0 + x = x. That is, 0 is an identity element (or neutral element) with respect to addition.
• Subtraction: x − 0 = x and 0 − x = −x.
• Multiplication: x · 0 = 0 · x = 0.
• Division: 0⁄x = 0, for nonzero x. But x⁄0 is undefined, because 0 has no multiplicative inverse (no real number multiplied by 0 produces 1), a consequence of the previous rule; see division by zero.
• Exponentiation: x⁰ = x/x = 1, except that the case x = 0 may be left undefined in some contexts; see Zero to the zero power. For all positive real x, 0ˣ = 0.

The expression 0⁄0, which may be obtained in an attempt to determine the limit of an expression of the form f(x)⁄g(x) as a result of applying the lim operator independently to both operands of the fraction, is a so-called "indeterminate form". That does not simply mean that the limit sought is necessarily undefined; rather, it means that the limit of f(x)⁄g(x), if it exists, must be found by another method, such as l'Hôpital's rule.

The sum of 0 numbers is 0, and the product of 0 numbers is 1. The factorial 0! evaluates to 1.
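The elementary rules above can be checked directly in Python (a sketch of my own; note that Python happens to define 0**0 as 1, one of the conventions the text mentions):

```python
import math

x = 7.5  # any nonzero real number

assert x + 0 == 0 + x == x        # addition: 0 is the additive identity
assert x - 0 == x and 0 - x == -x # subtraction
assert x * 0 == 0 * x == 0        # multiplication
assert 0 / x == 0                 # division: 0/x = 0 for nonzero x
assert x ** 0 == 1                # exponentiation: x^0 = 1
assert 0 ** x == 0                # 0^x = 0 for positive real x

assert sum([]) == 0               # the sum of 0 numbers is 0
assert math.prod([]) == 1         # the product of 0 numbers is 1
assert math.factorial(0) == 1     # 0! = 1
```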

0 (number)


Other branches of mathematics
• In set theory, 0 is the cardinality of the empty set: if one does not have any apples, then one has 0 apples. In fact, in certain axiomatic developments of mathematics from set theory, 0 is defined to be the empty set itself. Under the von Neumann cardinal assignment, the cardinal number of a set with no elements is the empty set, so the cardinality function, applied to the empty set, returns the empty set as a value, thereby assigning it 0 elements.
• Also in set theory, 0 is the lowest ordinal number, corresponding to the empty set viewed as a well-ordered set.
• In propositional logic, 0 may be used to denote the truth value false.
• In abstract algebra, 0 is commonly used to denote a zero element, which is a neutral element for addition (if defined on the structure under consideration) and an absorbing element for multiplication (if defined).
• In lattice theory, 0 may denote the bottom element of a bounded lattice.
• In category theory, 0 is sometimes used to denote an initial object of a category.
• In recursion theory, 0 can be used to denote the Turing degree of the partial computable functions.

Related mathematical terms
• A zero of a function f is a point x in the domain of the function such that f(x) = 0. When there are finitely many zeros, these are called the roots of the function. See also zero (complex analysis) for zeros of a holomorphic function.
• The zero function (or zero map) on a domain D is the constant function with 0 as its only possible output value, i.e., the function f defined by f(x) = 0 for all x in D. A particular zero function is a zero morphism in category theory; e.g., a zero map is the identity in the additive group of functions. The determinant on non-invertible square matrices is a zero map.
• Several branches of mathematics have zero elements, which generalise either the property 0 + x = x, or the property 0 × x = 0, or both.

In science
The value zero plays a special role for many physical quantities. For some quantities, the zero level is naturally distinguished from all other levels, whereas for others it is more or less arbitrarily chosen. For example, on the Kelvin temperature scale, zero is the coldest possible temperature (negative temperatures exist but are not actually colder), whereas on the Celsius scale, zero is arbitrarily defined to be at the freezing point of water. Measuring sound intensity in decibels or phons, the zero level is arbitrarily set at a reference value—for example, at a value for the threshold of hearing. In physics, the zero-point energy is the lowest possible energy that a quantum mechanical physical system may possess and is the energy of the ground state of the system.

Zero has been proposed as the atomic number of the theoretical element tetraneutron. It has been shown that a cluster of four neutrons may be stable enough to be considered an atom in its own right. This would create an element with no protons and no charge on its nucleus. As early as 1926, Professor Andreas von Antropoff coined the term neutronium for a conjectured form of matter made up of neutrons with no protons, which he placed as the chemical element of atomic number zero at the head of his new version of the periodic table. It was subsequently placed as a noble gas in the middle of several spiral representations of the periodic system for classifying the chemical elements.

In computer science
The most common practice throughout human history has been to start counting at one, and this is the practice in early classic computer science programming languages such as Fortran and COBOL. However, in the late 1950s LISP introduced zero-based numbering for arrays, while Algol 58 introduced completely flexible basing for array subscripts (allowing any positive, negative, or zero integer as the base for array subscripts), and most subsequent programming languages adopted one or the other of these positions. For example, the elements of an array are numbered starting from 0 in C, so that for an array of n items the sequence of array indices runs from 0 to n−1. This permits an array element's location to be calculated by adding the index directly to the address of the array, whereas 1-based languages precalculate the array's base address to be the position one element before the first. There can be confusion between 0- and 1-based indexing; for example, Java's JDBC indexes parameters from 1, although Java itself uses 0-based indexing.

In databases, it is possible for a field not to have a value. It is then said to have a null value. For numeric fields it is not the value zero. For text fields this is not blank nor the empty string. The presence of null values leads to three-valued logic: a condition is no longer either true or false, but can be undetermined. Any computation including a null value delivers a null result. Asking for all records with value 0 or value not equal 0 will not yield all records, since the records with value null are excluded.

A null pointer is a pointer in a computer program that does not point to any object or function. In C, the integer constant 0 is converted into the null pointer at compile time when it appears in a pointer context, and so 0 is a standard way to refer to the null pointer in code. However, the internal representation of the null pointer may be any bit pattern (possibly different values for different data types).
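The zero-based indexing convention described above is easy to see in Python, which follows the C tradition (a minimal sketch of my own):

```python
items = ["a", "b", "c", "d"]   # an array of n = 4 items

# Zero-based indexing: valid indices run from 0 to n-1.
assert items[0] == "a"
assert items[len(items) - 1] == "d"

# range(n) yields 0, 1, ..., n-1, matching the index sequence;
# this mirrors the address-arithmetic view (element i at base + i * size).
for i in range(len(items)):
    assert items[i] == "abcd"[i]
```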
In mathematics, both −0 and +0 represent exactly the same number, i.e., there is no "negative zero" distinct from zero. In some signed number representations (but not the two's complement representation used to represent integers in most computers today) and most floating-point number representations, zero has two distinct representations, one grouping it with the positive numbers and one with the negatives; this latter representation is known as negative zero.
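The two floating-point representations of zero can be observed in Python, whose floats follow the IEEE-style format mentioned above (a small sketch of my own):

```python
import math

# -0.0 and 0.0 are equal as numbers...
assert 0.0 == -0.0

# ...but the float format keeps a sign bit, so the representations differ.
assert math.copysign(1.0, -0.0) == -1.0
assert str(-0.0) == "-0.0"

# Python integers (like two's complement) have a single zero: no negative zero.
assert -0 == 0 and str(-0) == "0"
```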

In other fields
• In some countries and some company phone networks, dialing 0 on a telephone places a call for operator assistance.
• DVDs that can be played in any region are sometimes referred to as being "region 0".
• Roulette wheels usually feature a "0" space (and sometimes also a "00" space), whose presence is ignored when calculating payoffs (thereby allowing the house to win in the long run).
• In Formula One, if the reigning World Champion no longer competes in Formula One in the year following their victory in the title race, 0 is given to one of the drivers of the team with which the reigning champion won the title. This happened in 1993 and 1994, with Damon Hill driving car 0, due to the reigning World Champion (Nigel Mansell and Alain Prost respectively) not competing in the championship.

[1] Russell, Bertrand (1942). Principles of Mathematics (2nd ed.). Forgotten Books. Chapter 14, p. 125. ISBN 1-440-05416-9. http://books.google.com/books?id=63ooitcP2osC&pg=PA125
[2] Soanes, Catherine, ed. (2001). The Oxford Dictionary, Thesaurus and Wordpower Guide (2nd ed., hardback). Maurice Waite, Sara Hawker. New York: Oxford University Press. ISBN 978-0-19-860373-3.
[3] "aught" at the Online Etymology Dictionary: http://www.etymonline.com/index.php?search=aught&searchmode=none
[4] Merriam-Webster online dictionary, "zero": http://www.merriam-webster.com/dictionary/zero
[5] Kaplan, Robert (2000). The Nothing That Is: A Natural History of Zero. Oxford: Oxford University Press.
[6] Bourbaki, Nicolas (1998). Elements of the History of Mathematics. Berlin, Heidelberg, and New York: Springer-Verlag. p. 46. ISBN 3-540-64767-8.
[7] Britannica Concise Encyclopedia (2007), entry "algebra".
[8] "Binary Numbers in Ancient India": http://home.ica.net/~roymanju/Binary.htm
[9] "Math for Poets and Drummers" (PDF, 145 KB): http://www.sju.edu/~rhall/Rhythms/Poets/arcadia.pdf
[10] No Long Count date actually using the number 0 has been found before the 3rd century AD, but since the Long Count system would make no sense without some placeholder, and since Mesoamerican glyphs do not typically leave empty spaces, these earlier dates are taken as indirect evidence that the concept of 0 already existed at the time.
[11] Diehl, p. 186.
[12] Temple, Robert. The Genius of China, "A place for zero". ISBN 1-85375-292-4.
[13] 「零」字考與陳雲林會長來台 [On the character "zero" and Chairman Chen Yunlin's visit to Taiwan]: http://www.nownews.com/2008/11/03/142-2359146.htm
[14] 對中國傳統筆算之探討 [An investigation of traditional Chinese written calculation]: http://www.math.sinica.edu.tw/math_media/pdf.php?m_file=ZDI2My8yNjMwNg==
[15] The statement in Chinese, found in Chapter 8 of The Nine Chapters on the Mathematical Art, is 正負術曰: 同名相除，異名相益，正無入負之，負無入正之。其異名相除，同名相益，正無入正之，負無入負之。 The word 無入 used here, for which "zero" is the standard translation by mathematical historians, literally means "no entry". The full Chinese text can be found at wikisource:zh:九章算術.
[16] Aryabhatiya of Aryabhata, translated by Walter Eugene Clark.
[17] Ifrah, Georges (2000), p. 416.
[18] Feature Column from the AMS: http://www.ams.org/featurecolumn/archive/india-zero.html
[19] Ifrah, Georges (2000), p. 400.
[20] Durant, Will. The Story of Civilization, Volume 4: The Age of Faith, p. 241. http://www.archive.org/details/ageoffaithahisto012288mbp
[21] Lemma B.2.2 ("The integer 0 is even and is not odd") in Penner, Robert C. (1999). Discrete Mathematics: Proof Techniques and Mathematical Structures. World Scientific. p. 34. ISBN 9810240880.
[22] Bunt, Lucas Nicolaas Hendrik; Jones, Phillip S.; Bedient, Jack D. (1988). The Historical Roots of Elementary Mathematics. Courier Dover Publications. pp. 254–255. http://books.google.com/books?id=7xArILpcndYC&pg=PA255
[23] Steel, Duncan (2000). Marking Time: The Epic Quest to Invent the Perfect Calendar. John Wiley & Sons. p. 113. ISBN 0-471-29827-1. "In the B.C./A.D. scheme there is no year zero. After 31 December 1 BC came AD 1 January 1. ... If you object to that no-year-zero scheme, then don't use it: use the astronomer's counting scheme, with negative year numbers."
[24] Ifrah, Georges (2000). The Universal History of Numbers: From Prehistory to the Invention of the Computer. Wiley. ISBN 0-471-39340-1.
[25] Bemer, R. W. (1967). "Towards standards for handwritten zero and oh: much ado about nothing (and a letter), or a partial dossier on distinguishing between handwritten zero and oh". Communications of the ACM 10 (8): 513–518. doi:10.1145/363534.363563.
[26] Algebra with Arithmetic of Brahmagupta and Bhaskara, translated to English by Henry Thomas Colebrooke, London, 1817. http://books.google.com/books?id=A3cAAAAAMAAJ
[27] Sigler, L., Fibonacci's Liber Abaci. English translation, Springer, 2003.
[28] Grimm, R. E., "The Autobiography of Leonardo Pisano", Fibonacci Quarterly 11/1 (February 1973), pp. 99–104.
[29] Reid, Constance (1992). From Zero to Infinity: What Makes Numbers Interesting (4th ed.). Mathematical Association of America. p. 23. ISBN 9780883855058.

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
• Barrow, John D. (2001). The Book of Nothing. Vintage. ISBN 0-09-928845-1.
• Diehl, Richard A. (2004). The Olmecs: America's First Civilization. Thames & Hudson, London.
• Ifrah, Georges (2000). The Universal History of Numbers: From Prehistory to the Invention of the Computer. Wiley. ISBN 0-471-39340-1.
• Kaplan, Robert (2000). The Nothing That Is: A Natural History of Zero. Oxford: Oxford University Press.
• Seife, Charles (2000). Zero: The Biography of a Dangerous Idea. Penguin USA (paper). ISBN 0-14-029647-6.
• Bourbaki, Nicolas (1998). Elements of the History of Mathematics. Berlin, Heidelberg, and New York: Springer-Verlag. ISBN 3-540-64767-8.
• Asimov, Isaac (1978). "Nothing Counts", in Asimov on Numbers. Pocket Books.

External links
• A History of Zero
• Zero Saga
• The History of Algebra
• Edsger W. Dijkstra: Why numbering should start at zero, EWD831 (PDF of a handwritten manuscript)
• "My Hero Zero", educational children's song in Schoolhouse Rock!
• Zero, on In Our Time at the BBC.

In mathematics, an identity element (or neutral element) is a special type of element of a set with respect to a binary operation on that set: it leaves other elements unchanged when combined with them. The notion is used for groups and related concepts. The term identity element is often shortened to identity (as will be done in this article) when there is no possibility of confusion.

Let (S, *) be a set S with a binary operation * on it (known as a magma). Then an element e of S is called a left identity if e * a = a for all a in S, and a right identity if a * e = a for all a in S. If e is both a left identity and a right identity, then it is called a two-sided identity, or simply an identity.

An identity with respect to addition is called an additive identity (often denoted as 0), and an identity with respect to multiplication is called a multiplicative identity (often denoted as 1). The distinction is used most often for sets that support both binary operations, such as rings. In the latter context the multiplicative identity is often called the unit; unfortunately, "unit" is also sometimes used to mean an element with a multiplicative inverse.

set | operation | identity
real numbers | + (addition) | 0
real numbers | · (multiplication) | 1
real numbers | a^b (exponentiation) | 1 (right identity only)
positive integers | least common multiple | 1
nonnegative integers | greatest common divisor | 0 (under most definitions of GCD)
m-by-n matrices | + (addition) | matrix of all zeroes
n-by-n square matrices | · (multiplication) | In (matrix with 1 on diagonal and 0 elsewhere)
all functions from a set M to itself | ∘ (function composition) | identity function
all functions from a set M to itself | * (convolution) | δ (Dirac delta)
character strings, lists | concatenation | empty string, empty list
extended real numbers | minimum/infimum | +∞
extended real numbers | maximum/supremum | −∞
subsets of a set M | ∩ (intersection) | M
sets | ∪ (union) | { } (empty set)
boolean logic | ∧ (logical and) | ⊤ (truth)
boolean logic | ∨ (logical or) | ⊥ (falsity)
boolean logic | ⊕ (exclusive or) | ⊥ (falsity)
compact surfaces | # (connected sum) | S²
only two elements {e, f} | * defined by e * e = f * e = e and f * f = e * f = f | both e and f are left identities, but there is no right identity and no two-sided identity



As the last example shows, it is possible for (S, *) to have several left identities. In fact, every element can be a left identity. Similarly, there can be several right identities. But if there is both a right identity and a left identity, then they are equal and there is just a single two-sided identity. To see this, note that if l is a left identity and r is a right identity, then l = l * r = r. In particular, there can never be more than one two-sided identity: if there were two, say e and f, then e * f would have to be equal to both e and f.

It is also quite possible for (S, *) to have no identity element. The most common example of this is the cross product of vectors; the absence of an identity element is related to the fact that any nonzero cross product is orthogonal to both of its factors, so it is not possible to obtain a nonzero vector in the same direction as the original. Another example is the additive semigroup of positive natural numbers.
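The two-element example {e, f} discussed above can be checked mechanically. A minimal sketch (the helper names are ours, not from the source); the table's operation amounts to x * y = y for all x and y:

```python
# A sketch of the two-element magma {e, f} with
# e * e = f * e = e and f * f = e * f = f, i.e. x * y = y for all x, y.
S = ['e', 'f']

def op(x, y):
    return y  # the result depends only on the right operand

def left_identities(S, op):
    # l is a left identity when l * a = a for every a in S
    return [l for l in S if all(op(l, a) == a for a in S)]

def right_identities(S, op):
    # r is a right identity when a * r = a for every a in S
    return [r for r in S if all(op(a, r) == a for a in S)]

print(left_identities(S, op))   # ['e', 'f'] -- every element is a left identity
print(right_identities(S, op))  # []         -- no right identity exists
```

Since there is a left identity but no right identity, there is no two-sided identity, just as the text states.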


Variable

In mathematics, a variable is a value that may change within the scope of a given problem or set of operations. In contrast, a constant is a value that remains unchanged, though often unknown or undetermined.[1] The concepts of constants and variables are fundamental to many areas of mathematics and its applications. A "constant" in this context should not be confused with a mathematical constant, which is a specific number independent of the scope of the given problem.

Dependent and independent variables
Variables are further distinguished as being either a dependent variable or an independent variable. Independent variables are regarded as inputs to a system and may take on different values freely. Dependent variables are those values that change as a consequence of changes in other values in the system.[2] When one value is completely determined by another, or several others, then it is called a function of the other value or values. In this case the value of the function is a dependent variable and the other values are independent variables. The notation f(x) is used for the value of the function f with x representing the independent variable. Similarly, notation such as f(x, y, z) may be used when there are several independent variables.[3]

What it means for a variable to vary
Varying, in the context of mathematical variables, does not mean change in the course of time, but rather dependence on the context in which the variable is used. This can be the immediate context of the expression in which the variable occurs, as in the case of summation variables or variables that designate the argument of a function being defined. The context can also be larger, for instance when a variable is used to designate a value occurring in a hypothesis of the discussion at hand.

In some cases nothing varies at all, and alternative names can be used instead of "variable": a parameter is a value that is fixed in the statement of the problem being studied (although its value may not be explicitly known); an unknown is a variable that is introduced to stand for a constant value that is not initially known, but which may become known by solving some equation(s) for it; and an indeterminate is a symbol that need not stand for anything else but is an abstract value in itself. In all these cases the term "variable" is often still used because the rules for the manipulation of these symbols are the same. In some languages other than English, however, one distinguishes between "variables" in functions and "unknown quantities" in equations ("incógnita" in Portuguese/Spanish, "inconnue" in French).


If one defines a function f from the real numbers to the real numbers by

then x is a variable standing for the argument of the function being defined, which can be any real number.

In the identity

∑_{i=1}^{n} i = n(n + 1)/2

the variable i is a summation variable which designates in turn each of the integers 1, 2, ..., n (it is also called an index because its variation is over a discrete set of values), while n is a parameter (it does not vary within the formula).

In the theory of polynomials, a polynomial of degree 2 is generally denoted as ax² + bx + c, where a, b and c are called coefficients (they are assumed to be fixed, i.e., parameters of the problem considered) while x is called a variable. When studying this polynomial for its polynomial function, this x stands for the function argument. When studying the polynomial as an object in itself, x is taken to be an indeterminate, and would often be written with a capital letter instead to indicate this status.

Formulas from physics such as E = mc² or PV = nRT (the ideal gas law) do not involve the mathematical notion of a variable, because the quantities E, m, P, V, n, and T are instead used to designate certain properties (energy, mass, pressure, volume, quantity, temperature) of the physical system.

In mathematics, single-symbol names for variables are the norm, with letters at the beginning of the alphabet (e.g. a, b, c) commonly used for constants and letters at the end of the alphabet (e.g. x, y, z, and t) commonly used for variables.[1] In written mathematics, variables and constants are usually set in an italic typeface.

Specific branches and applications of mathematics usually have specific naming conventions for variables. Variables with similar roles or meanings are often assigned consecutive letters. For example, the three axes in 3D coordinate space are conventionally called x, y, and z, while random variables in statistics are usually named X, Y, Z. In physics, the names of variables are largely determined by the physical quantity they describe, but various naming conventions exist.

A convention sometimes followed in statistics is to use X, Y, Z for the names of random variables, with these being replaced by x, y, z for observations or sample outcomes of those random variables. Another convention sometimes used in statistics is to denote population values of particular statistics by lower (or upper) case Greek letters, with sample-based estimates of those quantities being denoted by the corresponding lower (or upper) case letters from the ordinary alphabet.

General introduction
Variables are used in open sentences. For instance, in the formula x + 1 = 5, x is a variable which represents an "unknown" number. Variables are often represented by Greek or Roman letters and may be used with other special symbols.

In mathematics, variables are essential because they let quantitative relationships be stated in a general way. If we were forced to use actual values, then the relationships would only apply in a narrower set of situations. For example: state a mathematical definition for finding the number twice that of ANY other finite number:

2(x) = x + x, or x * 2

Now, all we need to do to find the double of a number is replace x with any number we want.
• 2(1) = 1 + 1 = 2, or 1 * 2
• 2(3) = 3 + 3 = 6, or 3 * 2
• 2(55) = 55 + 55 = 110, or 55 * 2
• etc.


So in this example, the variable x is a "placeholder" for any number, that is to say, a variable. One important assumption is that the value of x does not change, even though we do not know what x is. Some algorithms, however, will change x, and there are then various ways to denote whether we mean its old or new value; again, we generally know neither value, but perhaps (for example) that one is less than the other.
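The doubling rule above translates directly into code, with the function parameter playing the role of the placeholder x (a sketch; the function name is ours):

```python
# The doubling rule 2(x) = x + x from the text; the parameter x is the
# placeholder that is replaced by an actual value at each call.
def double(x):
    return x + x  # equivalently x * 2

print(double(1))   # 2
print(double(3))   # 6
print(double(55))  # 110
```

Each call "replaces x with any number we want", exactly as the worked examples do.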

Naming conventions
Mathematics has many conventions; below are some of the more common. Many of the symbols have other conventional uses, but they may actually represent a constant or a specific function rather than a variable.
• ai is often used to denote a term of a sequence.
• a, b, c, and d (sometimes extended to e and f) usually play similar roles or are made to represent parallel notions in a mathematical context. They often represent constants.
• The coefficients in an equation, for example the general expression of a polynomial or a Diophantine equation, are often a, b, c, d, e, and f.
• f and g (sometimes h) commonly denote functions.
• i, j, and k (sometimes l or h) are often used as subscripts to denote indices.
• l and w are often used to represent the length and width of a figure.
• m and n usually denote integers and usually play similar roles or are made to represent parallel notions in a mathematical context, such as a pair of dimensions.
• n commonly denotes a count of objects, or, in statistics, the number of individuals or observations.
• p, q, and r usually play similar roles or are made to represent parallel notions in a mathematical context.
• p and q often denote prime numbers or relatively prime numbers, or, in statistics, probabilities.
• r often denotes a remainder or modulus.
• r, s, and t usually play similar roles or are made to represent parallel notions in a mathematical context.
• u and v usually play similar roles or are made to represent parallel notions in a mathematical context, such as denoting a vertex (graph theory).
• w, x, y, and z usually play similar roles or are made to represent parallel notions in a mathematical context, such as representing unknowns in an equation.
• x, y and z correspond to the three Cartesian axes. In many two-dimensional cases, y will be expressed in terms of x; if a third dimension is added, z is expressed in terms of x and y.
• z typically denotes a complex number, or, in statistics, a normal random variate.
• α, β, γ, and θ commonly denote angle measures.
• ε usually represents an arbitrarily small positive number.
• ε and δ commonly denote two small positives.
• λ is used for eigenvalues.
• Σ often denotes a sum, while in statistics σ denotes the standard deviation.



Applied statistics
In statistics, variables refer to measurable attributes, as these typically vary over time or between individuals. Variables can be discrete (taking values from a finite or countable set), continuous (having a continuous distribution function), or neither. Temperature is a continuous variable, while the number of legs of an animal is a discrete variable. This concept of a variable is widely used in the natural, medical, and social sciences. In causal models, a distinction is made between "independent variables" and "dependent variables", the latter being expected to vary in value in response to changes in the former. In other words, an independent variable is presumed to potentially affect a dependent one. In experiments, independent variables include factors that can be altered or chosen by the researcher independent of other factors. So, in an experiment to test if the boiling point of water changes with altitude, the altitude is under direct control and is the independent variable, and the boiling point is presumed to depend upon it, so being the dependent variable. The results of an experiment, or information to be used to draw conclusions, are known as data. It is often important to consider which variables to allow, or directly control or eliminate, in the design of experiments. There are also quasi-independent variables, which are used by researchers to group things without affecting the variable itself. For example, to separate people into groups by their sex does not change whether they are male or female. Or a researcher may separate people, arbitrarily, on the amount of coffee they had drunk before beginning an experiment. The researcher cannot change the past, but can use it to split people into groups. While independent variables can refer to quantities and qualities that are under experimental control, they can also include extraneous factors that influence results in a confusing or undesired manner. 
In statistics, the technique for working this out is called correlation. If strongly confounding variables exist that can substantially change the result, the result becomes harder to interpret. For example, a study of cancer against age will also have to take into account variables such as income, location, stress, and lifestyle; without considering these, the results could be grossly inaccurate. Because of this, controlling unwanted variables is important in research.

[1] Edwards, Art. 4
[2] Edwards, Art. 5
[3] Edwards, Art. 6

• J. Edwards (1892). Differential Calculus. London: MacMillan and Co. pp. 1 ff.



Integer

The integers (from the Latin integer, literally "untouched", hence "whole": the word entire comes from the same origin, but via French[1]) are formed by the natural numbers including 0 (0, 1, 2, 3, ...) together with the negatives of the non-zero natural numbers (−1, −2, −3, ...); these are known as positive and negative integers, respectively. Viewed as a subset of the real numbers, the integers are the numbers that can be written without a fractional or decimal component, and fall within the set {..., −2, −1, 0, 1, 2, ...}. For example, 21, 4, and −2048 are integers; 9.75, 5½, and 14% are not.
The set of all integers is often denoted by a boldface Z (or blackboard bold ℤ, Unicode U+2124), which stands for Zahlen (German for "numbers", pronounced [ˈtsaːlən]); this symbol is often used to denote the set of integers. The set Zn is the finite set of integers modulo n.

The integers (with addition as operation) form the smallest group containing the additive monoid of the natural numbers. Like the natural numbers, the integers form a countably infinite set. In algebraic number theory, these commonly understood integers, embedded in the field of rational numbers, are referred to as rational integers to distinguish them from the more broadly defined algebraic integers (but with "rational" meaning "quotient of integers", this attempt at precision suffers from circularity).

Algebraic properties
Like the natural numbers, Z is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer. (Integers can be thought of as discrete, equally spaced points on an infinitely long number line, with the natural integers in one direction and the negative integers in the other.) However, with the inclusion of the negative natural numbers, and, importantly, zero, Z (unlike the natural numbers) is also closed under subtraction. Z is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative).

The following lists some of the basic properties of addition and multiplication for any integers a, b and c.
Closure: a + b is an integer; a × b is an integer.
Associativity: a + (b + c) = (a + b) + c; a × (b × c) = (a × b) × c.
Commutativity: a + b = b + a; a × b = b × a.
Existence of an identity element: a + 0 = a; a × 1 = a.
Existence of inverse elements: a + (−a) = 0; for multiplication, an inverse element usually does not exist.
Distributivity: a × (b + c) = (a × b) + (a × c) and (a + b) × c = (a × c) + (b × c).
No zero divisors: if a × b = 0, then a = 0 or b = 0 (or both).

In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group, since every nonzero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z.

The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left hand side is even, while the right hand side is odd. This means that Z under multiplication is not a group.

All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. Adding the last property says that Z is an integral domain. In fact, Z provides the motivation for defining such a structure.

The lack of multiplicative inverses, which is equivalent to the fact that Z is not closed under division, means that Z is not a field. The smallest field containing the integers is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain.

Although ordinary division is not defined on Z, it does possess an important property called the division algorithm: that is, given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder, resulting from division of a by b. This is the basis for the Euclidean algorithm for computing greatest common divisors.

Again, in the language of abstract algebra, the above says that Z is a Euclidean domain. This implies that Z is a principal ideal domain, and any positive integer can be written as a product of primes in an essentially unique way. This is the fundamental theorem of arithmetic.
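The division algorithm just described can be sketched in code. Note that this is our illustration, not part of the source; Python's built-in divmod already guarantees 0 ≤ r < b for positive b, but for negative b its remainder takes the sign of b, so a small adjustment restores the 0 ≤ r < |b| convention used here:

```python
# Division algorithm: given integers a and b with b != 0, find the
# unique q, r with a = q*b + r and 0 <= r < |b|.
def division_algorithm(a, b):
    if b == 0:
        raise ValueError("b must be nonzero")
    q, r = divmod(a, b)
    if r < 0:            # only happens when b < 0 in Python
        q, r = q + 1, r - b
    return q, r

q, r = division_algorithm(7, -3)
print(q, r)              # -2 1, since 7 = (-2)*(-3) + 1 and 0 <= 1 < 3
```

The returned remainder is always non-negative, matching the statement in the text rather than any particular language's `%` convention.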


Order-theoretic properties
Z is a totally ordered set without upper or lower bound. The ordering of Z is given by: ... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ... An integer is positive if it is greater than zero and negative if it is less than zero. Zero is defined as neither negative nor positive. The ordering of integers is compatible with the algebraic operations in the following way: 1. if a < b and c < d, then a + c < b + d 2. if a < b and 0 < c, then ac < bc. It follows that Z together with the above ordering is an ordered ring. The integers are the only integral domain whose positive elements are well-ordered, and in which order is preserved by addition.



The integers can be formally constructed as the equivalence classes of ordered pairs of natural numbers (a, b). The intuition is that (a, b) stands for the result of subtracting b from a. To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule:

[Figure: red points represent ordered pairs of natural numbers; linked red points are equivalence classes representing the blue integers at the end of the line.]

(a, b) ~ (c, d) precisely when a + d = b + c.

Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers; denoting by [(a,b)] the equivalence class having (a,b) as a member, one has:

[(a,b)] + [(c,d)] := [(a + c, b + d)]
[(a,b)] × [(c,d)] := [(ac + bd, ad + bc)]

The negation (or additive inverse) of an integer is obtained by reversing the order of the pair:

−[(a,b)] := [(b,a)]

Hence subtraction can be defined as the addition of the additive inverse:

[(a,b)] − [(c,d)] := [(a + d, b + c)]

The standard ordering on the integers is given by:

[(a,b)] < [(c,d)] iff a + d < b + c

It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes. Every equivalence class has a unique member that is of the form (n,0) or (0,n) (or both at once). The natural number n is identified with the class [(n,0)] (in other words, the natural numbers are embedded into the integers by the map sending n to [(n,0)]), and the class [(0,n)] is denoted −n (this covers all remaining classes, and gives the class [(0,0)] a second time since −0 = 0). Thus, [(a,b)] is denoted by

a − b if a ≥ b, and −(b − a) if a < b.

If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity. This notation recovers the familiar representation of the integers as {..., −3, −2, −1, 0, 1, 2, 3, ...}. For example, [(3,0)] is denoted 3 and [(0,3)] is denoted −3.
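The pair construction can be prototyped directly, following the standard rule that (a, b) ~ (c, d) precisely when a + d = b + c (a sketch; all function names are ours):

```python
# Integers as pairs (a, b) of naturals, intuitively standing for a - b.
def equivalent(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c          # (a, b) ~ (c, d) iff a + d = b + c

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)          # (a - b) + (c - d) = (a + c) - (b + d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)  # (a - b)(c - d) expanded

def neg(p):
    a, b = p
    return (b, a)                  # reverse the pair to negate

# (1, 2) and (4, 5) both represent -1:
assert equivalent((1, 2), (4, 5))
# (-1) + (-1) is in the class of (0, 2), i.e. -2:
assert equivalent(add((1, 2), (4, 5)), (0, 2))
# (-1) * (-1) is in the class of (1, 0), i.e. +1:
assert equivalent(mul((1, 2), (1, 2)), (1, 0))
```

Running through a few pairs this way also makes it easy to check that the operations are independent of the chosen representatives.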



Integers in computing
An integer is often a primitive datatype in computer languages. However, integer datatypes can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation datatypes (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.). Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer datatypes are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10).
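The two's complement representation mentioned above can be illustrated in a few lines (our sketch, using an 8-bit width for concreteness):

```python
# An 8-bit two's-complement view of an integer (illustrative widths/names).
BITS = 8

def to_bits(x):
    """Encode x into its 8-bit two's-complement pattern (0..255)."""
    return x & ((1 << BITS) - 1)

def from_bits(u):
    """Decode an 8-bit pattern back to an integer in [-128, 127]."""
    return u - (1 << BITS) if u >= (1 << (BITS - 1)) else u

print(to_bits(-1))                 # 255: all eight bits set
print(from_bits(to_bits(-128)))    # -128: round-trips at the low end
```

Only the 256 integers from −128 to 127 survive the round trip, which is exactly the "subset of all integers" limitation the text describes for fixed-length datatypes.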

Cardinality

The cardinality of the set of integers is equal to ℵ₀ (aleph-null). This is readily demonstrated by the construction of a bijection, that is, a function that is injective and surjective from Z to N.

If N = {0, 1, 2, ...}, then consider the function f(x) = 2x − 1 for x > 0 and f(x) = −2x for x ≤ 0, whose pairs are:

{ ... (−4, 8) (−3, 6) (−2, 4) (−1, 2) (0, 0) (1, 1) (2, 3) (3, 5) ... }

If N = {1, 2, 3, ...}, then consider the function g(x) = 2x + 1 for x ≥ 0 and g(x) = −2x for x < 0, whose pairs are:

{ ... (−4, 8) (−3, 6) (−2, 4) (−1, 2) (0, 1) (1, 3) (2, 5) (3, 7) ... }

If the domain is restricted to Z, then each and every member of Z has one and only one corresponding member of N, and by the definition of cardinal equality the two sets have equal cardinality.
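The first pairing listed above can be written as a function and spot-checked (a sketch; the function name is ours):

```python
# Bijection from Z onto N = {0, 1, 2, ...} matching the first list of
# pairs: negatives map to even naturals, positives to odd naturals.
def f(x):
    return 2 * x - 1 if x > 0 else -2 * x

print([(x, f(x)) for x in range(-4, 4)])
# reproduces (-4, 8) (-3, 6) (-2, 4) (-1, 2) (0, 0) (1, 1) (2, 3) (3, 5)

# Injectivity on a sample range: distinct integers get distinct images.
images = [f(x) for x in range(-100, 101)]
assert len(set(images)) == len(images)
```

Since every natural number is hit (evens by non-positive x, odds by positive x) and no value repeats, the map witnesses the equal cardinality claimed in the text.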



[1] Evans, Nick (1995). "A-Quantifiers and Scope" (http://books.google.com/?id=NlQL97qBSZkC). In Bach, Emmon W. Quantification in Natural Languages. Dordrecht, The Netherlands; Boston, MA: Kluwer Academic Publishers. p. 262. ISBN 0792333527.
[2] Miller, Jeff (2010-08-29). "Earliest Uses of Symbols of Number Theory" (http://jeff560.tripod.com/nth.html). Retrieved 2010-09-20.


External links
• The Positive Integers: divisor tables and numeral representation tools
• On-Line Encyclopedia of Integer Sequences (OEIS)

This article incorporates material from Integer on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

Monomial

In mathematics, in the context of polynomials, the word monomial can have one of two different meanings:
• The first is a product of powers of variables, or formally any value obtained by finitely many multiplications of a variable. If only a single variable x is considered, this means that any monomial is either 1 or a power x^n of x, with n a positive integer. If several variables are considered, say x, y, z, then each can be given an exponent, so that any monomial is of the form x^a y^b z^c, with a, b, c non-negative integers (taking note that any exponent 0 makes the corresponding factor equal to 1).
• The second meaning of monomial includes monomials in the first sense, but also allows multiplication by any constant, so that −7x^5 and (3 − 4i)x^4yz^13 are also considered to be monomials (the second example assuming polynomials in x, y, z over the complex numbers are considered).

Comparison of the two definitions
With either definition, the set of monomials is a subset of all polynomials that is closed under multiplication. Both uses of this notion can be found, and in many cases the distinction is simply ignored; see for instance examples for the first[1] and second[2] meaning, and an unclear definition[3]. In informal discussions the distinction is seldom important, and the tendency is towards the broader second meaning. When studying the structure of polynomials, however, one often definitely needs a notion with the first meaning. This is for instance the case when considering a monomial basis of a polynomial ring, or a monomial ordering of that basis. An argument in favor of the first meaning is also that no obvious other notion is available to designate these values (the term power product is in use, but it does not make the absence of constants clear either), while the notion term of a polynomial unambiguously coincides with the second meaning of monomial. For an isolated polynomial consisting of a single term, one could if necessary use the uncontracted form mononomial, analogous to binomial and trinomial.

The remainder of this article assumes the first meaning of "monomial".



As bases
The most obvious fact about monomials (first meaning) is that any polynomial is a linear combination of them, so they form a basis of the vector space of all polynomials, a fact of constant implicit use in mathematics.

The number of monomials of degree d in n variables is the number of multicombinations of d elements chosen among the n variables (a variable can be chosen more than once, but order does not matter), which is given by the multiset coefficient. This expression can also be given in the form of a binomial coefficient, as a polynomial expression in d, or using a rising factorial power of d + 1:

C(n + d − 1, d) = (d + 1)(d + 2)···(d + n − 1) / (n − 1)!

The latter forms are particularly useful when one fixes the number of variables and lets the degree vary. From these expressions one sees that for fixed n, the number of monomials of degree d is a polynomial expression in d of degree n − 1 with leading coefficient 1/(n − 1)!.

For example, the number of monomials in three variables (n = 3) of degree d is (d + 1)(d + 2)/2; these numbers form the sequence 1, 3, 6, 10, 15, ... of triangular numbers.
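The count can be checked numerically: the number of monomials of degree d in n variables is the multiset coefficient, computable as the binomial coefficient C(n + d − 1, d). A quick sketch (function name ours):

```python
# Count monomials of degree exactly d in n variables as a multiset
# coefficient, i.e. the binomial coefficient C(n + d - 1, d).
from math import comb

def monomials_of_degree(n, d):
    return comb(n + d - 1, d)

# Three variables give the triangular numbers as d grows:
print([monomials_of_degree(3, d) for d in range(5)])  # [1, 3, 6, 10, 15]
```

For n = 1 the count is always 1 (only x^d), which is a handy sanity check on the formula.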

Notation for monomials is constantly required in fields like partial differential equations. If the variables being used form an indexed family like x₁, x₂, x₃, ..., then multi-index notation is helpful: if we write

α = (a, b, c),  x = (x₁, x₂, x₃),

we can define

x^α := x₁^a · x₂^b · x₃^c

and save a great deal of space.

In algebraic geometry the varieties defined by monomial equations x^α = 0, for some set of α, have special properties of homogeneity. This can be phrased in the language of algebraic groups, in terms of the existence of a group action of an algebraic torus (equivalently by a multiplicative group of diagonal matrices). This area is studied under the name of torus embeddings.

[1] Cox, David; Little, John; O'Shea, Donal (1998). Using Algebraic Geometry. Springer Verlag. p. 1. ISBN 0-387-98487-9.
[2] Hazewinkel, Michiel, ed. (2001), "Monomial" (http://www.encyclopediaofmath.org/index.php?title=M/m064760), Encyclopedia of Mathematics, Springer, ISBN 978-1556080104.
[3] http://planetmath.org/encyclopedia/Monomial.html



Binomial

In algebra, a binomial is a polynomial with two terms[1] (the sum of two monomials), often bound by parentheses or brackets when operated upon. It is the simplest kind of polynomial after the monomials.

Operations on simple binomials
• The binomial a² − b² can be factored as the product of two other binomials:

a² − b² = (a + b)(a − b).

This is a special case of the more general formula

a^(n+1) − b^(n+1) = (a − b) ∑_{k=0}^{n} a^k b^(n−k).

• The product of a pair of linear binomials (ax + b) and (cx + d) is:

(ax + b)(cx + d) = acx² + (ad + bc)x + bd.

• A binomial raised to the nth power, represented as (x + y)^n, can be expanded by means of the binomial theorem or, equivalently, using Pascal's triangle. Taking a simple example, the perfect square binomial (p + q)² can be found by squaring the first term, adding twice the product of the first and second terms, and finally adding the square of the second term, to give p² + 2pq + q².

• A simple but interesting application of the cited binomial formula is the "(m, n)-formula" for generating Pythagorean triples: for m < n, let a = n² − m², b = 2mn, c = n² + m²; then a² + b² = c².
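The "(m, n)-formula" for Pythagorean triples is standardly a = n² − m², b = 2mn, c = n² + m²; a quick check (the function name is ours):

```python
# Generate a Pythagorean triple from m < n via the (m, n)-formula.
def triple(m, n):
    assert 0 < m < n
    return n * n - m * m, 2 * m * n, n * n + m * m

a, b, c = triple(1, 2)
print(a, b, c)                     # 3 4 5
assert a * a + b * b == c * c      # the Pythagorean identity holds
```

The identity holds for every choice of m < n, since (n² − m²)² + (2mn)² expands to n⁴ + 2m²n² + m⁴ = (n² + m²)².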

[1] Weisstein, Eric. "Binomial" (http://mathworld.wolfram.com/Binomial.html). Wolfram MathWorld. Retrieved 29 March 2011.




Polynomial

In mathematics, a polynomial is an expression of finite length constructed from variables (also known as indeterminates) and constants, using only the operations of addition, subtraction, multiplication, and non-negative integer exponents. For example, x² − 4x + 7 is a polynomial, but x² − 4/x + 7x^(3/2) is not, because its second term involves division by the variable x (4/x) and because its third term contains an exponent that is not an integer (3/2). The term "polynomial" can also be used as an adjective, for quantities that can be expressed as a polynomial of some parameter, as in "polynomial time", which is used in computational complexity theory.

Polynomial comes from the Greek poly, "many", and medieval Latin binomium, "binomial".[1][2][3] The word was introduced in Latin by Franciscus Vieta.[4]

Polynomials appear in a wide variety of areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated problems in the sciences; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings, a central concept in abstract algebra and algebraic geometry.

A polynomial is either zero, or can be written as the sum of one or more non-zero terms. The number of terms is finite. These terms consist of a constant (called the coefficient of the term) which may be multiplied by a finite number of variables (usually represented by letters), also called indeterminates.[5] Each variable may have an exponent that is a non-negative integer, i.e., a natural number. The exponent on a variable in a term is called the degree of that variable in that term; the degree of the term is the sum of the degrees of the variables in that term; and the degree of a polynomial is the largest degree of any one term. Since x = x¹, the degree of a variable without a written exponent is one. A term with no variables is called a constant term, or just a constant. The degree of a (nonzero) constant term is 0.

The coefficient of a term may be any number from a specified set. If that set is the set of real numbers, we speak of "polynomials over the reals". Other common kinds of polynomials are polynomials with integer coefficients, polynomials with complex coefficients, and polynomials with coefficients that are integers modulo some prime number p. In most of the examples in this section, the coefficients are integers. For example:

is a term. The coefficient is –5, the variables are x and y, the degree of x is in the term two, while the degree of y is one. The degree of the entire term is the sum of the degrees of each variable in it, so in this example the degree is 2 + 1 = 3. Forming a sum of several terms produces a polynomial. For example, the following is a polynomial:

It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero. The commutative law of addition can be used to freely permute terms into any preferred order. In polynomials with one variable, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x". The polynomial in the example above is written in descending powers of x. The first term has coefficient 3, variable x, and exponent 2. In the second term, the coefficient is –5. The third term is a constant. Since the degree of a non-zero polynomial is the largest degree of any one term, this

Polynomial polynomial has degree two. Two terms with the same variables raised to the same powers are called "like terms", and they can be combined (after having been made adjacent) using the distributive law into a single term, whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0, in which case their combination just cancels out the terms. Polynomials can be added using the associative law of addition (which simply groups all their terms together into a single sum), possibly followed by reordering, and combining of like terms. For example, if



which can be simplified to

To work out the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other. For example, if

P = x + 2 and Q = x − 3,

then

P·Q = x·x + x·(−3) + 2·x + 2·(−3)

which can be simplified to

P·Q = x² − x − 6.
The sum or product of two polynomials is always a polynomial.
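The addition and multiplication described above are easy to sketch in code. The following is a minimal illustration, not part of the original article; the dict-of-exponents representation and the names poly_add and poly_mul are choices made here for the example.

```python
# Represent a univariate polynomial as {exponent: coefficient}.
# Example: 3x^2 - 5x + 4  ->  {2: 3, 1: -5, 0: 4}

def poly_add(p, q):
    """Add two polynomials by combining like terms."""
    result = dict(p)
    for exp, coeff in q.items():
        result[exp] = result.get(exp, 0) + coeff
    # Drop terms whose coefficients cancelled to zero.
    return {e: c for e, c in result.items() if c != 0}

def poly_mul(p, q):
    """Multiply by distributing every term of p over every term of q."""
    result = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            result[e1 + e2] = result.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in result.items() if c != 0}

p = {2: 3, 1: -5, 0: 4}            # 3x^2 - 5x + 4
q = {1: 1, 0: 2}                   # x + 2
print(poly_add(p, q))              # {2: 3, 1: -4, 0: 6}, i.e. 3x^2 - 4x + 6
print(poly_mul(q, {1: 1, 0: -3}))  # (x + 2)(x - 3) = x^2 - x - 6
```

Both results are again finite sums of terms with non-negative integer exponents, which is the closure property stated above.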

Alternative forms
In general any expression can be considered to be a polynomial if it is built up from variables and constants using only addition, subtraction, multiplication, and raising expressions to constant positive whole-number powers. Such an expression can always be rewritten as a sum of terms. For example, (x + 1)³ is a polynomial; its standard form is x³ + 3x² + 3x + 1. Division of one polynomial by another does not, in general, produce a polynomial, but rather produces a quotient and a remainder.[6] A formal quotient of polynomials, that is, an algebraic fraction where the numerator and denominator are polynomials, is called a "rational expression" or "rational fraction" and is not, in general, a polynomial. Division of a polynomial by a number, however, does yield another polynomial. For example,

x³/12

is considered a valid term in a polynomial (and a polynomial by itself) because it is equivalent to (1/12)x³, and 1/12 is just a constant. When this expression is used as a term, its coefficient is therefore 1/12. For similar reasons, if complex coefficients are allowed, one may have a single term like (2 + 3i)x³; even though it looks like it should be expanded to two terms, the complex number 2 + 3i is one complex number, and is the coefficient of that term.

An expression such as 1/(x − 1) is not a polynomial because it includes division by a non-constant polynomial. An expression such as 2ˣ is not a polynomial, because it contains a variable used as an exponent. Since subtraction can be replaced by addition of the opposite quantity, and since positive whole-number exponents can be replaced by repeated multiplication, all polynomials can be constructed from constants and variables using only addition and multiplication.

Polynomial functions
A polynomial function is a function that can be defined by evaluating a polynomial. A function ƒ of one argument is called a polynomial function if it satisfies

ƒ(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀

for all arguments x, where n is a non-negative integer and a₀, a₁, a₂, ..., aₙ are constant coefficients. For example, the function ƒ, taking real numbers to real numbers, defined by

ƒ(x) = x³ − x

is a polynomial function of one argument. Polynomial functions of multiple arguments can also be defined, using polynomials in multiple variables, as in

ƒ(x, y) = x²y + xy − 7.

Another example is the function ƒ(x) = cos(2 arccos(x)), which, although it doesn't look like a polynomial, is a polynomial function on [−1, 1], since for every x in [−1, 1] it is true that ƒ(x) = 2x² − 1 (see Chebyshev polynomials). Polynomial functions are a class of functions having many important properties. They are all continuous, smooth, entire, computable, etc.
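The Chebyshev identity cos(2 arccos x) = 2x² − 1 can be checked numerically. This quick sketch is an illustration added here, not part of the original article:

```python
import math

# cos(2*arccos(x)) coincides with the polynomial 2x^2 - 1 on [-1, 1]
# (the Chebyshev polynomial T_2), even though it doesn't look polynomial.
for x in [-1.0, -0.5, 0.0, 0.3, 1.0]:
    lhs = math.cos(2 * math.acos(x))
    rhs = 2 * x**2 - 1
    assert math.isclose(lhs, rhs, abs_tol=1e-12), (x, lhs, rhs)
print("cos(2*arccos x) == 2x^2 - 1 on all sample points")
```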

Polynomial equations
A polynomial equation, also called an algebraic equation, is an equation in which a polynomial is set equal to another polynomial. For example,

3x² + 4x − 5 = 0

is a polynomial equation. In the case of a univariate polynomial equation, the variable is considered an unknown, and one seeks to find the possible values for which both members of the equation evaluate to the same value (in general more than one solution may exist). A polynomial equation is to be contrasted with a polynomial identity like (x + y)(x − y) = x² − y², where both members represent the same polynomial in different forms, and as a consequence any evaluation of both members will give a valid equality. This means that a polynomial identity is a polynomial equation for which all possible values of the unknowns are solutions.

Elementary properties of polynomials
• A sum of polynomials is a polynomial.
• A product of polynomials is a polynomial.
• A composition of two polynomials is a polynomial, which is obtained by substituting a variable of the first polynomial by the second polynomial.
• The derivative of the polynomial aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₂x² + a₁x + a₀ is the polynomial naₙxⁿ⁻¹ + (n−1)aₙ₋₁xⁿ⁻² + ... + 2a₂x + a₁. If the set of the coefficients does not contain the integers (for example, if the coefficients are integers modulo some prime number p), then kaₖ should be interpreted as the sum of aₖ with itself, k times. For example, over the integers modulo p, the derivative of the polynomial xᵖ + 1 is the polynomial 0.
• If division by integers is allowed in the set of coefficients, a primitive or antiderivative of the polynomial aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₂x² + a₁x + a₀ is aₙxⁿ⁺¹/(n+1) + aₙ₋₁xⁿ/n + ... + a₂x³/3 + a₁x²/2 + a₀x + c, where c is an arbitrary constant. Thus x² + 1 is a polynomial with integer coefficients whose primitives are not polynomials over the integers. If this polynomial is viewed as a polynomial over the integers modulo 3, it has no primitive at all.

Polynomials serve to approximate other functions, such as sine, cosine, and exponential.

All polynomials have an expanded form, in which the distributive and associative laws have been used to remove all brackets and the commutative law has been used to make the like terms adjacent and combine them. All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form, in which the polynomial is written as a product of irreducible polynomials and a constant. In the case of the field of complex numbers, the irreducible polynomials are linear. For example, the factored form of x⁴ − 1 is

(x − 1)(x + 1)(x² + 1)

over the integers and

(x − 1)(x + 1)(x − i)(x + i)

over the complex numbers. Every polynomial in one variable is equivalent to a polynomial of the form

aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀.

This form is sometimes taken as the definition of a polynomial in one variable. Evaluation of a polynomial consists of assigning a number to each variable and carrying out the indicated multiplications and additions. Actual evaluation is usually more efficient using the Horner scheme:

(...((aₙx + aₙ₋₁)x + aₙ₋₂)x + ...)x + a₀.
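The Horner scheme takes only a few lines of code. This sketch is an illustration added here (the function name horner and the coefficient-list representation are choices made for the example); it evaluates aₙxⁿ + ... + a₀ with n multiplications instead of forming each power separately.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x using the Horner scheme.

    coeffs lists the coefficients from highest degree to lowest,
    e.g. [3, -5, 4] for 3x^2 - 5x + 4.
    """
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 3x^2 - 5x + 4 at x = 2:  3*4 - 5*2 + 4 = 6
print(horner([3, -5, 4], 2))  # 6
```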

In elementary algebra, methods are given for solving all first degree and second degree polynomial equations in one variable. In the case of polynomial equations, the variable is often called an unknown. The number of solutions may not exceed the degree, and equals the degree when multiplicity of solutions and complex number solutions are counted; this is a consequence of the fundamental theorem of algebra.

A system of polynomial equations is a set of equations in which each variable must take on the same value everywhere it appears in any of the equations. Systems of equations are usually grouped with a single open brace on the left. In linear algebra, methods are given for solving a system of linear equations in several unknowns. If there are more unknowns than equations, the system is called underdetermined. If there are more equations than unknowns, the system is called overdetermined. Overdetermined systems are common in practical applications. For example, one U.S. mapping survey used computers to solve 2.5 million equations in 400,000 unknowns.[7]

Viète's formulas relate the coefficients of a polynomial to symmetric polynomial functions of its roots.
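The elementary methods for second degree equations amount to the quadratic formula. A minimal sketch, added here as an illustration (the helper name solve_quadratic is a choice made for the example):

```python
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0, returning both roots (complex if needed)."""
    disc = b * b - 4 * a * c
    # Take a purely imaginary square root when the discriminant is negative.
    sqrt_disc = math.sqrt(disc) if disc >= 0 else complex(0, math.sqrt(-disc))
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

print(solve_quadratic(1, -5, 6))  # (3.0, 2.0)
print(solve_quadratic(1, 0, 1))   # the two imaginary roots of x^2 + 1
```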

Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, circa 200 BCE, begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29.



The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544. René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable. Descartes introduced the use of superscripts to denote exponents as well.[8]

Solving polynomial equations
Every polynomial P in x corresponds to a function, ƒ(x) = P (where the occurrences of x in P are interpreted as the argument of ƒ), called the polynomial function of P; the equation in x obtained by setting ƒ(x) = 0 is the polynomial equation corresponding to P. The solutions of this equation are called the roots of the polynomial; they are the zeroes of the function ƒ (corresponding to the points where the graph of ƒ meets the x-axis).

A number a is a root of P if and only if the polynomial x − a (of degree one in x) divides P. It may happen that x − a divides P more than once: if (x − a)² divides P then a is called a multiple root of P, and otherwise a is called a simple root of P. If P is a nonzero polynomial, there is a highest power m such that (x − a)ᵐ divides P, which is called the multiplicity of the root a in P. When P is the zero polynomial, the corresponding polynomial equation is trivial, and this case is usually excluded when considering roots: with the above definitions every number would be a root of the zero polynomial, with undefined (or infinite) multiplicity. With this exception made, the number of roots of P, even counted with their respective multiplicities, cannot exceed the degree of P.

Some polynomials, such as x² + 1, do not have any roots among the real numbers. If, however, the set of allowed candidates is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial. There is a difference between approximating roots and finding exact expressions for roots.
Formulas for expressing the roots of polynomials of degree 2 in terms of square roots have been known since ancient times (see quadratic equation), and for polynomials of degree 3 or 4 similar formulas (using cube roots in addition to square roots) were found in the 16th century (see cubic function and quartic function for the formulas, and Niccolò Fontana Tartaglia, Lodovico Ferrari, Gerolamo Cardano, and Vieta for historical details). But formulas for degree 5 eluded researchers. In 1824, Niels Henrik Abel proved the striking result that there can be no general (finite) formula, involving only arithmetic operations and radicals, that expresses the roots of a polynomial of degree 5 or greater in terms of its coefficients (see the Abel–Ruffini theorem). In 1830, Évariste Galois, studying the permutations of the roots of a polynomial, extended the Abel–Ruffini theorem by showing that, given a polynomial equation, one may decide whether it is solvable by radicals and, if it is, solve it. This result marked the start of Galois theory and group theory, two important branches of modern mathematics. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation).

Numerical approximation of the roots of a polynomial equation in one unknown is easily done on a computer by the Jenkins–Traub method, Laguerre's method, the Durand–Kerner method, or some other root-finding algorithm.

For polynomials in more than one variable, the notion of root does not exist, and there are usually infinitely many combinations of values for the variables for which the polynomial function takes the value zero. However, for certain sets of such polynomials it may happen that only finitely many combinations make all the polynomial functions take the value zero.
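As a sketch of how such numerical root-finding can work, the following illustrates the Durand–Kerner iteration mentioned above. This is a simplified illustration added here, not a production implementation: starting from distinct complex guesses, each approximation is repeatedly corrected by the polynomial value divided by the product of its differences with the other approximations.

```python
def durand_kerner(coeffs, iterations=100):
    """Approximate all complex roots of a monic polynomial.

    coeffs lists coefficients from highest degree to lowest and must
    start with 1 (monic), e.g. [1, 0, -1] for x^2 - 1.
    """
    n = len(coeffs) - 1

    def p(x):
        result = 0
        for c in coeffs:
            result = result * x + c
        return result

    # Initial guesses: powers of a complex number that is not a root of unity.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        for i in range(n):
            prod = 1
            for j in range(n):
                if j != i:
                    prod *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / prod
    return roots

# x^2 - 1 has roots 1 and -1.
approx = durand_kerner([1, 0, -1])
print(sorted(round(r.real, 6) for r in approx))  # [-1.0, 1.0]
```

The initial value 0.4 + 0.9j is the customary choice because it is neither real nor a root of unity, which keeps the starting guesses distinct.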

For a set of polynomial equations in several unknowns, there are algorithms to decide whether they have a finite number of complex solutions. If the number of solutions is finite, there are algorithms to compute the solutions. The methods underlying these algorithms are described in the article systems of polynomial equations. The special case where all the polynomials are of degree one is called a system of linear equations, for which a different range of solution methods exists, including the classical Gaussian elimination.

It has been shown by Richard Birkeland and Karl Mayr that the roots of any polynomial may be expressed in terms of multivariate hypergeometric functions. Ferdinand von Lindemann and Hiroshi Umemura showed that the roots may also be expressed in terms of Siegel modular functions, generalizations of the theta functions that appear in the theory of elliptic functions. These characterisations of the roots of arbitrary polynomials are generalisations of the methods previously discovered to solve the quintic equation.


Method based on eigenvalues computation
A numerical method for the progressive elimination of multiple roots, based on eigenvalue computation, has been proposed.[9] [10] The method transforms the problem into a sequence of eigenvalue problems involving tridiagonal matrices with only simple eigenvalues: such eigenvalues can easily be approximated by means of the QR algorithm.

Graphs
A polynomial function in one real variable can be represented by a graph.
• The graph of the zero polynomial f(x) = 0 is the x-axis.
• The graph of a degree 0 polynomial f(x) = a₀, where a₀ ≠ 0, is a horizontal line with y-intercept a₀.
• The graph of a degree 1 polynomial (or linear function) f(x) = a₀ + a₁x, where a₁ ≠ 0, is an oblique line with y-intercept a₀ and slope a₁.
• The graph of a degree 2 polynomial f(x) = a₀ + a₁x + a₂x², where a₂ ≠ 0, is a parabola.
• The graph of a degree 3 polynomial f(x) = a₀ + a₁x + a₂x² + a₃x³, where a₃ ≠ 0, is a cubic curve.
• The graph of any polynomial with degree 2 or greater, f(x) = a₀ + a₁x + a₂x² + ... + aₙxⁿ, where aₙ ≠ 0 and n ≥ 2, is a continuous non-linear curve.
The graph of a non-constant (univariate) polynomial always tends to infinity when the variable increases indefinitely (in absolute value). Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior. The illustrations below show graphs of polynomials.



• Polynomial of degree 2: f(x) = x² − x − 2 = (x + 1)(x − 2)
• Polynomial of degree 3: f(x) = x³/4 + 3x²/4 − 3x/2 − 2 = (1/4)(x + 4)(x + 1)(x − 2)
• Polynomial of degree 4: f(x) = (1/14)(x + 4)(x + 1)(x − 1)(x − 3) + 0.5
• Polynomial of degree 5: f(x) = (1/20)(x + 4)(x + 2)(x + 1)(x − 1)(x − 3) + 2
• Polynomial of degree 6: f(x) = (1/30)(x + 3.5)(x + 2)(x + 1)(x − 1)(x − 3)(x − 4) + 2
• Polynomial of degree 7: f(x) = (x − 3)(x − 2)(x − 1)x(x + 1)(x + 2)(x + 3)

Polynomials and calculus
One important aspect of calculus is the project of analyzing complicated functions by means of approximating them with polynomial functions. The culmination of these efforts is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Polynomial functions are also frequently used to interpolate functions.

Calculating derivatives and integrals of polynomial functions is particularly simple. For the polynomial function

ƒ(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀

the derivative with respect to x is

ƒ′(x) = naₙxⁿ⁻¹ + (n−1)aₙ₋₁xⁿ⁻² + ... + 2a₂x + a₁

and the indefinite integral is

∫ƒ(x) dx = aₙxⁿ⁺¹/(n+1) + aₙ₋₁xⁿ/n + ... + a₁x²/2 + a₀x + c.

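Since differentiation and integration act term by term, they are easy to express on a coefficient list. A minimal sketch, added here as an illustration (the function names and the constant-term-first representation are choices made for the example):

```python
from fractions import Fraction

def derivative(coeffs):
    """Differentiate a polynomial given as [a0, a1, a2, ...]: a_i x^i -> i*a_i x^(i-1)."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def antiderivative(coeffs, constant=0):
    """Integrate term by term: a_i x^i -> a_i x^(i+1) / (i + 1), plus a constant."""
    return [Fraction(constant)] + [Fraction(c, i + 1) for i, c in enumerate(coeffs)]

# f(x) = 4 - 5x + 3x^2  ->  f'(x) = -5 + 6x
print(derivative([4, -5, 3]))     # [-5, 6]
# integral of 1 + x^2 is C + x + x^3/3
print(antiderivative([1, 0, 1]))
```

Exact fractions are used for the antiderivative because, as noted above, integer coefficients do not survive the division by i + 1 in general.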

Abstract algebra
In abstract algebra, one distinguishes between polynomials and polynomial functions. A polynomial f in one variable X over a ring R is defined to be a formal expression of the form

f = aₙXⁿ + aₙ₋₁Xⁿ⁻¹ + ... + a₁X + a₀

where n is a natural number, the coefficients a₀, ..., aₙ are elements of R, and X is a formal symbol, whose powers Xⁱ are just placeholders for the corresponding coefficients aᵢ, so that the given formal expression is just a way to encode the sequence (a₀, a₁, ...), where there is an n such that aᵢ = 0 for all i > n. Two polynomials sharing the same value of n are considered to be equal if and only if the sequences of their coefficients are equal; furthermore any polynomial is equal to any polynomial with greater value of n obtained from it by adding terms in front whose coefficient is zero.

These polynomials can be added by simply adding corresponding coefficients (the rule for extending by terms with zero coefficients can be used to make sure such coefficients exist). Thus each polynomial is actually equal to the sum of the terms used in its formal expression, if such a term aᵢXⁱ is interpreted as a polynomial that has zero coefficients at all powers of X other than Xⁱ. Then to define multiplication, it suffices by the distributive law to describe the product of any two such terms, which is given by the rule

aXᵏ · bXˡ = (ab)Xᵏ⁺ˡ

for all elements a, b of the ring R and all natural numbers k and l. Thus the set of all polynomials with coefficients in the ring R forms itself a ring, the ring of polynomials over R, which is denoted by R[X]. The map from R to R[X] sending r to rX⁰ is an injective homomorphism of rings, by which R is viewed as a subring of R[X]. If R is commutative, then R[X] is an algebra over R.

One can think of the ring R[X] as arising from R by adding one new element X to R, and extending in a minimal way to a ring in which X satisfies no other relations than the obligatory ones, plus commutation with all elements of R (that is, Xr = rX). To do this, one must add all powers of X and their linear combinations as well. Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones.
For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring R[X] over the real numbers by factoring out the ideal of multiples of the polynomial X² + 1. Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic).

If R is commutative, then one can associate to every polynomial P in R[X] a polynomial function f with domain and range equal to R (more generally one can take domain and range to be the same unital associative algebra over R). One obtains the value f(r) by substitution of the value r for the symbol X in P. One reason to distinguish between polynomials and polynomial functions is that over some rings different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p). This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis. An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for X. Note also that if R is not commutative, there is no (well-behaved) notion of polynomial function at all.
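The distinction between polynomials and polynomial functions can be made concrete for R the integers modulo a prime. In this sketch, added here as an illustration, the polynomials X and X⁵ have different coefficient sequences, yet by Fermat's little theorem they define the same function on Z/5Z:

```python
p = 5  # a prime modulus

def evaluate_mod(coeffs, x, p):
    """Evaluate a polynomial (coeffs = [a0, a1, ...]) at x, modulo p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

x_poly = [0, 1]          # the polynomial X
xp_poly = [0] * p + [1]  # the polynomial X^p

# Different polynomials: the coefficient sequences differ...
assert x_poly != xp_poly
# ...but the same polynomial function on Z/5Z (Fermat's little theorem).
assert all(evaluate_mod(x_poly, x, p) == evaluate_mod(xp_poly, x, p)
           for x in range(p))
print("X and X^5 define the same function on Z/5Z")
```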

In commutative algebra, one major focus of study is divisibility among polynomials. If R is an integral domain and f and g are polynomials in R[X], it is said that f divides g or f is a divisor of g if there exists a polynomial q in R[X] such that f q = g. One can show that every zero gives rise to a linear divisor, or more formally, if f is a polynomial in R[X] and r is an element of R such that f(r) = 0, then the polynomial (X − r) divides f. The converse is also true. The quotient can be computed using the Horner scheme.
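The remark that the quotient by (X − r) can be computed with the Horner scheme can be sketched as follows. This is an illustration added here (the function name is a choice for the example): the intermediate Horner values are exactly the coefficients of the quotient, and the final value is f(r), the remainder.

```python
def divide_by_linear(coeffs, r):
    """Divide f by (X - r) using synthetic division (Horner scheme).

    coeffs lists f's coefficients from highest degree to lowest.
    Returns (quotient_coeffs, remainder), where remainder == f(r).
    """
    values = []
    acc = 0
    for c in coeffs:
        acc = acc * r + c
        values.append(acc)
    # The last accumulated value is f(r); the rest form the quotient.
    return values[:-1], values[-1]

# f = x^2 - 5x + 6 = (x - 2)(x - 3); dividing by (x - 2):
q, rem = divide_by_linear([1, -5, 6], 2)
print(q, rem)  # [1, -3] 0, i.e. quotient x - 3, remainder 0
```

A zero remainder confirms that r is a root and that (X − r) is a divisor, as stated above.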

If F is a field and f and g are polynomials in F[X] with g ≠ 0, then there exist unique polynomials q and r in F[X] with

f = q g + r

and such that the degree of r is smaller than the degree of g. The polynomials q and r are uniquely determined by f and g. This is called Euclidean division, division with remainder, or polynomial long division, and shows that the ring F[X] is a Euclidean domain.

Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as polynomials which cannot be factorized into the product of two non-constant polynomials. Any polynomial may be decomposed into the product of a constant and a product of irreducible polynomials. This decomposition is unique up to the order of the factors and the multiplication of any constant factor by a constant (and division of another constant factor by the same constant). When the coefficients belong to a finite field or are rational numbers, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials. These algorithms are not practicable for hand-written computation, but are available in any computer algebra system (see Berlekamp's algorithm for the case in which the coefficients belong to a finite field, or the Berlekamp–Zassenhaus algorithm when working over the rational numbers[11]). Eisenstein's criterion can also be used in some cases to determine irreducibility. See also: Greatest common divisor of two polynomials.
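Euclidean division itself can be sketched directly: repeatedly cancel the leading term of the dividend with a multiple of the divisor. The following is an illustrative implementation over the rationals, added here (names and representation are choices for the example), not part of the original article.

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Euclidean division: return (q, r) with f = q*g + r and deg r < deg g.

    Polynomials are coefficient lists from highest degree to lowest.
    """
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    if all(c == 0 for c in g):
        raise ZeroDivisionError("division by the zero polynomial")
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(c != 0 for c in r):
        factor = r[0] / g[0]
        shift = len(r) - len(g)
        q[len(q) - 1 - shift] = factor
        # Subtract factor * x^shift * g from the remainder.
        for i, c in enumerate(g):
            r[i] -= factor * c
        r.pop(0)  # the leading coefficient is now zero
    return q, r

# (x^3 - 2x^2 + 4) / (x - 1):  q = x^2 - x - 1, r = 3
q, r = poly_divmod([1, -2, 0, 4], [1, -1])
print(q, r)
```

An empty remainder list stands for the zero polynomial, consistent with the convention that its degree is below that of any non-zero divisor.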

Polynomials are classified according to many different properties.

Number of variables
One classification of polynomials is based on the number of distinct variables. A polynomial in one variable is called a univariate polynomial, a polynomial in more than one variable is called a multivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance when working with univariate polynomials one does not exclude constant polynomials (which may result, for instance, from the subtraction of non-constant polynomials), although strictly speaking constant polynomials do not contain any variables at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of variables allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is common, also, to say simply "polynomials in x, y, and z", listing the variables allowed. In this case, xy is allowed.

A second major way of classifying polynomials is by their degree. Recall that the degree of a term is the sum of the exponents on variables, and that the degree of a polynomial is the largest degree of any one term.



Polynomials classified by degree
• Degree −∞: zero polynomial
• Degree 0: (non-zero) constant
• Degree 1: linear
• Degree 2: quadratic
• Degree 3: cubic
• Degree 4: quartic (or biquadratic)
• Degree 5: quintic
• Degree 6: sextic (or hexic)
• Degree 7: septic (or heptic)
• Degree 8: octic
• Degree 9: nonic
• Degree 10: decic
• Degree 100: hectic

Usually, a polynomial of degree n, for n greater than 3, is simply called a polynomial of degree n, although the phrases quartic polynomial and quintic polynomial are sometimes used. The use of names for degrees greater than 5 is even less common. The names for the degrees may be applied to the polynomial or to its terms. For example, in x² + 2x + 1 the term 2x is a first-degree term in a second-degree polynomial.

In the context of polynomial interpolation there is some ambiguity when combining the two classifications above. For example, a bilinear interpolant, being the product of two univariate linear polynomials, is bivariate but is not linear; similar ambiguity affects the bicubic interpolant.

The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined to be negative (either −1 or −∞).[12] These conventions are important when defining Euclidean division of polynomials. The zero polynomial is also unique in that it is the only polynomial having an infinite number of roots.

Polynomials classified by number of non-zero terms
• 0 non-zero terms: zero polynomial
• 1 non-zero term: monomial
• 2 non-zero terms: binomial
• 3 non-zero terms: trinomial

If a polynomial has only one variable, then the terms are usually written either from highest degree to lowest degree ("descending powers") or from lowest degree to highest degree ("ascending powers"). A univariate polynomial in x of degree n then takes the general form

cₙxⁿ + cₙ₋₁xⁿ⁻¹ + ... + c₂x² + c₁x + c₀

where cₙ ≠ 0 and cₙ₋₁, ..., c₂, c₁, c₀ are constants, the coefficients of this polynomial.

Here the term cₙxⁿ is called the leading term, and its coefficient cₙ the leading coefficient; if the leading coefficient is 1, the univariate polynomial is called monic. Note that apart from the leading coefficient cₙ (which must be non-zero or else the polynomial would not be of degree n) this general form allows for coefficients to be zero; when this happens, the corresponding term is zero and may be removed from the sum without changing the polynomial. It is nevertheless common to refer to cᵢ as the coefficient of xⁱ, even when cᵢ happens to be 0, so that xⁱ does not really occur in any term; for instance one can speak of the constant term of the polynomial, meaning c₀ even if it is zero.

In the case of polynomials in more than one variable, a polynomial is called homogeneous of degree n if all its terms have degree n. For example, x³ + 3x²y + y³ is homogeneous of degree 3.


Another classification of polynomials is by the kind of constant values allowed as coefficients. One can work with polynomials with integer, rational, real, or complex coefficients, and in abstract algebra polynomials with many other types of coefficients can be defined, such as integers modulo p. As in the classification by number of variables, when working with coefficients from a given set, such as the complex numbers, coefficients from any subset are allowed. Thus x² − 3x + 2 is a polynomial with integer coefficients, but it is also a polynomial with complex coefficients, because the integers are a subset of the complex numbers.

Number of non-zero terms
Polynomials may also be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and so on. (Some authors use "monomial" to mean "monic monomial".[13])

Polynomials associated to other objects
Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph.

Extensions of the concept of a polynomial
Polynomials can involve more than one variable, in which case they are called multivariate. Rings of polynomials in a finite number of variables are of fundamental importance in algebraic geometry, which studies the simultaneous zero sets of several such multivariate polynomials. These rings can alternatively be constructed by repeating the construction of univariate polynomials with as coefficient ring another ring of polynomials: thus the ring R[X,Y] of polynomials in X and Y can be viewed as the ring (R[X])[Y] of polynomials in Y with as coefficients polynomials in X, or as the ring (R[Y])[X] of polynomials in X with as coefficients polynomials in Y. These identifications are compatible with the arithmetic operations (they are isomorphisms of rings), but some notions, such as degree or whether a polynomial is considered monic, can change between these points of view. One can construct rings of polynomials in infinitely many variables, but since polynomials are (finite) expressions, any individual polynomial can only contain finitely many variables.

A binary polynomial where the second variable takes the form of an exponential function applied to the first variable, for example P(X, eˣ), may be called an exponential polynomial.

Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur.

Quotients of polynomials are called rational expressions (or rational fractions), and functions that evaluate rational expressions are called rational functions. Rational fractions are formal quotients of polynomials (they are formed from polynomials just as rational numbers are formed from integers, writing a fraction of two of them; fractions related by the canceling of common factors are identified with each other). The rational function defined by a rational fraction is the quotient of the polynomial functions defined by the numerator and the denominator of the rational fraction. The rational fractions contain the Laurent polynomials, but do not limit denominators to be powers of a variable. While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero. A rational function produces rational output for any rational input for which it is defined; this is not true of other functions such as trigonometric functions, logarithms and exponential functions.

Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just as real numbers cannot), but the rules for manipulating their terms are the same as for polynomials.


[1] CNTRL (French National Center for Textual and Lexical Resources), etymology of binôme (http://www.cnrtl.fr/etymologie/binôme)
[2] Etymology of "polynomial", Compact Oxford English Dictionary
[3] Online Etymology Dictionary, "binomial" (http://www.etymonline.com/index.php?term=binomial)
[4] Florian Cajori (1991). A History of Mathematics. AMS. ISBN 978-0-8218-2102-2. (http://books.google.fr/books?id=mGJRjIC9fZgC&pg=PA139)

[5] The term indeterminate is more proper, and, in theory, variable should be used only when considering the function defined by the polynomial. In practice, most authors use the two words interchangeably.
[6] Peter H. Selby, Steve Slavin, Practical Algebra: A Self-Teaching Guide, 2nd Edition, Wiley, ISBN 0471530123, ISBN 978-0471530121
[7] Gilbert Strang, Linear Algebra and Its Applications, Fourth Edition, Thompson Brooks/Cole, ISBN 0030105676
[8] Howard Eves, An Introduction to the History of Mathematics, Sixth Edition, Saunders, ISBN 0030295580
[9] L. Brugnano, D. Trigiante. Polynomial Roots: the Ultimate Answer?, Linear Algebra and Its Applications 225 (1995) 207–219
[10] L. Brugnano. Numerical Implementation of a New Algorithm for Polynomials with Multiple Roots, Journal of Difference Equations and Applications 1 (1995) 187–207
[11] http://mathworld.wolfram.com/Berlekamp-ZassenhausAlgorithm.html
[12] Weisstein, Eric W., "Zero Polynomial" (http://mathworld.wolfram.com/ZeroPolynomial.html) from MathWorld.
[13] Anthony W. Knapp (2007). Advanced Algebra: Along with a Companion Volume Basic Algebra. Springer. p. 457. ISBN 0817645225.

• R. Birkeland. Über die Auflösung algebraischer Gleichungen durch hypergeometrische Funktionen. Mathematische Zeitschrift vol. 26 (1927), pp. 565–578. Shows that the roots of any polynomial may be written in terms of multivariate hypergeometric functions.
• F. von Lindemann. Über die Auflösung der algebraischen Gleichungen durch transcendente Functionen. Nachrichten von der Königl. Gesellschaft der Wissenschaften, vol. 7, 1884. Polynomial solutions in terms of theta functions.
• F. von Lindemann. Über die Auflösung der algebraischen Gleichungen durch transcendente Functionen II. Nachrichten von der Königl. Gesellschaft der Wissenschaften und der Georg-Augusts-Universität zu Göttingen, 1892 edition.
• K. Mayr. Über die Auflösung algebraischer Gleichungssysteme durch hypergeometrische Funktionen. Monatshefte für Mathematik und Physik vol. 45 (1937), pp. 280–313.
• H. Umemura. Solution of algebraic equations in terms of theta constants. In D. Mumford, Tata Lectures on Theta II, Progress in Mathematics 43, Birkhäuser, Boston, 1984.



External links
• List of Calculators for Quadratic through Sextic equations
• Euler's work on Imaginary Roots of Polynomials, at Convergence
• Characteristics of polynomials
• Free online polynomial root finder for both real and complex coefficients

In mathematics, a coefficient is a multiplicative factor in some term of an expression (or of a series); it is usually a number, but in any case does not involve any variables of the expression. For instance in

7x^2 − 3xy + 1.5 + y

the first three terms respectively have the coefficients 7, −3, and 1.5 (in the third term the variables are hidden (raised to the 0 power), so the coefficient is the term itself; it is called the constant term or constant coefficient of this expression). The final term, y, does not have any explicitly written coefficient, but is considered to have coefficient 1, since multiplying by that factor would not change the term. Often coefficients are numbers as in this example, although they could be parameters of the problem, as a, b, and c in

ax^2 + bx + c,

when it is understood that these are not considered as variables. Thus a polynomial in one variable x can be written as

a_k x^k + ⋯ + a_1 x + a_0

for some integer k, where a_k, …, a_1, a_0 are coefficients; to allow this kind of expression in all cases one must allow introducing terms with 0 as coefficient. For the largest i with a_i ≠ 0 (if any), a_i is called the leading coefficient of the polynomial. So for example the leading coefficient of the polynomial

4x^5 + x^3 + 2x^2

is 4. Specific coefficients arise in mathematical identities, such as the binomial theorem which involves binomial coefficients; these particular coefficients are tabulated in Pascal's triangle.
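The binomial coefficients tabulated in Pascal's triangle can be generated directly; a minimal Python sketch using the standard-library `math.comb` (the helper name `pascal_row` is ours):

```python
import math

def pascal_row(n):
    """Return row n of Pascal's triangle: the binomial coefficients C(n, 0)..C(n, n)."""
    return [math.comb(n, k) for k in range(n + 1)]

# The first five rows of Pascal's triangle.
for n in range(5):
    print(pascal_row(n))
# Row 4 holds the coefficients of (x + y)^4, namely 1, 4, 6, 4, 1.
```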

Linear algebra
In linear algebra, the leading coefficient of a row in a matrix is the first nonzero entry in that row. So, for example, given the matrix

    1 2 0 6
    0 2 4 4
    0 0 4 3
    0 0 0 0

the leading coefficient of the first row is 1; 2 is the leading coefficient of the second row; 4 is the leading coefficient of the third row, and the last row does not have a leading coefficient. Though coefficients are frequently viewed as constants in elementary algebra, they can be variables more generally. For example, the coordinates of a vector v in a vector space with a given basis are the coefficients of the basis vectors in the expression
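The row-by-row rule just described is easy to express in code; a small illustrative Python sketch (the helper name `leading_coefficient` and the sample matrix are ours, chosen to match the description: leading entries 1, 2, 4, and a zero row with none):

```python
def leading_coefficient(row):
    """Return the first nonzero entry of a matrix row, or None if the row is all zeros."""
    for entry in row:
        if entry != 0:
            return entry
    return None

matrix = [
    [1, 2, 0, 6],
    [0, 2, 4, 4],
    [0, 0, 4, 3],
    [0, 0, 0, 0],
]
print([leading_coefficient(row) for row in matrix])  # [1, 2, 4, None]
```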

Informally, coefficient is just the fancy name for a number multiplied by a variable.


Examples of physical coefficients
1. Coefficient of thermal expansion (thermodynamics) (dimensionless) – Relates the change in temperature to the change in a material's dimensions.
2. Partition coefficient (KD) (chemistry) – The ratio of concentrations of a compound in two phases of a mixture of two immiscible solvents at equilibrium.
3. Hall coefficient (electrical physics) – Relates a magnetic field applied to an element to the voltage created, the amount of current and the element thickness. It is a characteristic of the material from which the conductor is made.
4. Lift coefficient (CL or CZ) (aerodynamics) (dimensionless) – Relates the lift generated by an airfoil with the dynamic pressure of the fluid flow around the airfoil, and the planform area of the airfoil.
5. Ballistic coefficient (BC) (aerodynamics) (units of kg/m^2) – A measure of a body's ability to overcome air resistance in flight. BC is a function of mass, diameter, and drag coefficient.
6. Transmission coefficient (quantum mechanics) (dimensionless) – Represents the probability flux of a transmitted wave relative to that of an incident wave. It is often used to describe the probability of a particle tunnelling through a barrier.
7. Damping factor, a.k.a. viscous damping coefficient (physical engineering) (units of newton-seconds per meter) – Relates a damping force with the velocity of the object whose motion is being damped.

A coefficient is a number placed in front of a term in a chemical equation to indicate how many molecules (or atoms) take part in the reaction. For example, in the formula 2H₂ + O₂ → 2H₂O, the number 2's in front of H₂ and H₂O are stoichiometric coefficients.

• Sabah Al-hadad and C.H. Scott (1979) College Algebra with Applications, page 42, Winthrop Publishers, Cambridge, Massachusetts. ISBN 0876261403.
• Gordon Fuller, Walter L. Wilson, Henry C. Miller (1982) College Algebra, 5th edition, page 24, Brooks/Cole Publishing, Monterey, California. ISBN 0534011381.
• Steven Schwartzman (1994) The Words of Mathematics: an etymological dictionary of mathematical terms used in English, page 48, Mathematical Association of America. ISBN 0883855119.


Routine Functions
In mathematical logic, simplification (equivalent to conjunction elimination) is a valid argument and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. In formal language:

(A ∧ B) ⊢ A    and    (A ∧ B) ⊢ B

The argument has one premise, namely a conjunction, and one often uses simplification in longer arguments to derive one of the conjuncts. An example in English: It's raining and it's pouring. Therefore, it's raining.

In mathematics, an expression is a finite combination of symbols that is well-formed according to rules that depend on the context. Symbols can designate numbers (constants), variables, operations, functions, and other mathematical symbols, as well as punctuation, symbols of grouping, and other syntactic symbols. The use of expressions ranges from the very simple to the highly complex. Strings of symbols that violate the rules of syntax are not well-formed and are not valid mathematical expressions. For example, a string such as "× 4)x +, /y" would not be considered a mathematical expression but only a meaningless jumble.[1] In algebra an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression; the determination of this value depends on the semantics attached to the symbols of the expression. These semantic rules may declare that certain expressions do not designate any value; such expressions are said to have an undefined value, but they are well-formed expressions nonetheless. In general the meaning of expressions is not limited to designating values; for instance, an expression might designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator ⊕ to designate an internal direct sum. Being an expression is a syntactic concept; although different mathematical fields have different notions of valid expressions, the values associated to variables do not play a role. See formal language for general considerations

on how expressions are constructed, and formal semantics for questions concerning attaching meaning (values) to expressions.


Many mathematical expressions include letters called variables. Any variable can be classified as being either a free variable or a bound variable. For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined. Thus an expression represents a function whose inputs are the values assigned to the free variables and whose output is the resulting value of the expression.[2] For example, the expression

x / y

evaluated for x = 10, y = 5, will give 2; but it is undefined for y = 0. The evaluation of an expression is dependent on the definition of the mathematical operators and on the system of values that is its context. Two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. Example: The expression

∑_{n=1}^{3} 2nx

has free variable x, bound variable n, constants 1, 2, and 3, two occurrences of an implicit multiplication operator, and a summation operator. The expression is equivalent to the simpler expression 12x. The value for x = 3 is 36. The '+' and '−' (addition and subtraction) symbols have their usual meanings. Division can be expressed either with the '/' or with a horizontal fraction bar; thus x/2 and the same quotient written with a horizontal bar are both perfectly valid. Also, for multiplication one can use the symbols '×' or a '·' (mid dot), or else simply omit it (multiplication is implicit); so 3 × x, 3·x, and 3x are all acceptable. However, notice how the "times" symbol resembles the letter 'x' and how the '·' symbol resembles a decimal point, so to avoid confusion it is best to use one of the latter two forms. An expression must be well-formed. That is, the operators must have the correct number of inputs, in the correct places. The expression 2 + 3 is well formed; the expression * 2 + is not, at least not in the usual notation of arithmetic. Expressions and their evaluation were formalised by Alonzo Church and Stephen Kleene[3] in the 1930s in their lambda calculus. The lambda calculus has been a major influence in the development of modern mathematics and computer programming languages.[4] One of the more interesting results of the lambda calculus is that the equivalence of two expressions in the lambda calculus is in some cases undecidable. This is also true of any expression in any system that has power equivalent to the lambda calculus.
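The notions of evaluation and equivalence can be made concrete in code; a Python sketch that treats the summation expression equivalent to 12x as a function of its free variable and spot-checks the equivalence on sample inputs (a spot check, not a proof):

```python
def summation_expr(x):
    # Sum over the bound variable n of 2*n*x, for n = 1, 2, 3.
    return sum(2 * n * x for n in range(1, 4))

def simplified(x):
    # The equivalent simpler expression 12x.
    return 12 * x

# The two expressions agree for every sampled value of the free variable x.
for x in range(-5, 6):
    assert summation_expr(x) == simplified(x)
print(summation_expr(3))  # 36
```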



[1] Introduction to Algebra (http://www.mathleague.com/help/algebra/algebra.htm)
[2] TalkTalk Reference Encyclopedia (http://www.talktalk.co.uk/reference/encyclopaedia/hutchinson/m0006748.html)
[3] Biographical Memoir of Stephen Kleene (http://www.nap.edu/html/biomems/skleene.html)
[4] Programming Languages and Lambda Calculi (http://www.cs.utah.edu/plt/publications/pllc.pdf)

Root of a function
In mathematics, a zero, also sometimes called a root, of a real-, complex-, or generally vector-valued function ƒ is a member x of the domain of ƒ such that ƒ vanishes at x; that is, ƒ(x) = 0.

ƒ(x) = cos x on the interval [−2π, 2π], with x-intercepts indicated in red (the roots highlighted are −3π/2, −π/2, π/2, 3π/2)

In other words, a "zero" of a function ƒ is a value for x that produces a result of zero ("0").[1] For example, consider the function ƒ defined by the formula

ƒ(x) = x^2 − 6x + 9.

ƒ has a root at 3 because

ƒ(3) = 3^2 − 6·3 + 9 = 9 − 18 + 9 = 0.
If the function maps real numbers to real numbers, its zeros are the x-coordinates of the points where its graph meets the x-axis. An alternative name for such a point (x,0) in this context is an x-intercept. Finding roots of certain functions, especially polynomials, frequently requires the use of specialised or approximation techniques (for example, Newton's method). The concept of complex numbers was developed to handle the roots of cubic equations with negative discriminants (that is, those leading to expressions involving the square root of negative numbers). Complex numbers also occur as zeros of quadratic equations with negative discriminants. Every real polynomial of odd degree has at least one real number as a root. Many real polynomials of even degree do not have a real root, but the fundamental theorem of algebra states that every polynomial of degree n has n complex

roots, counted with their multiplicities. The non-real roots of polynomials with real coefficients come in conjugate pairs. Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots. The Riemann hypothesis, one of the most important unsolved problems in mathematics, concerns the location of the zeros of the Riemann zeta function.
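Newton's method, mentioned above as an approximation technique, can be sketched in a few lines of Python (the function name, starting point, and tolerance are chosen for illustration):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Approximate a root of f starting from x0, using the derivative df."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2 near 1.5, i.e. an approximation of the square root of 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
print(root)  # ≈ 1.41421356...
```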


[1] Weisstein, Eric W., "Root" (http://mathworld.wolfram.com/Root.html) from MathWorld.

Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent (or power) n. When n is a positive integer, exponentiation corresponds to repeated multiplication; in other words, a product of n factors of b (the product itself can also be called power):

b^n = b × b × ⋯ × b (n factors),

just as multiplication by a positive integer corresponds to repeated addition:

b × n = b + b + ⋯ + b (n terms).

The exponent is usually shown as a superscript to the right of the base. The exponentiation b^n can be read as: b [raised] to the n-th [power], b [raised] to the [power [of]] n, or possibly b [raised] to the exponent [of] n, most briefly as b to the n. Some exponents have their own pronunciation: for example, b^2 is usually read as b squared and b^3 as b cubed. The power b^n can be defined also when n is a negative integer, for nonzero b. No natural extension to all real b and n exists, but when the base b is a positive real number, b^n can be defined for all real and even complex exponents n via the exponential function e^z. Trigonometric functions can be expressed in terms of complex exponentiation. Exponentiation where the exponent is a matrix is used for solving systems of linear differential equations. Exponentiation is used pervasively in many other fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.



Background and terminology
The expression b^2 = b·b is called the square of b because the area of a square with side-length b is b^2. The expression b^3 = b·b·b is called the cube, because the volume of a cube with side-length b is b^3. So 3^2 is pronounced "three squared", and 2^3 is "two cubed". The exponent says how many copies of the base are multiplied together. For example, 3^5 = 3·3·3·3·3 = 243. The base 3 appears 5 times in the repeated multiplication, because the exponent is 5. Here, 3 is the base, 5 is the exponent, and 243 is the power or, more specifically, the fifth power of 3, 3 raised to the fifth power, or 3 to the power of 5. The word "raised" is usually omitted, and very often "power" as well, so 3^5 is typically pronounced "three to the fifth" or "three to the five".

Graphs of y = b^x for various bases b: base 10 (green), base e (red), base 2 (blue), and base ½ (cyan). Each curve passes through the point (0, 1) because any nonzero number raised to the power 0 is 1. At x = 1, the y-value equals the base because any number raised to the power 1 is itself.

Exponentiation may be generalized from integer exponents to more general types of number. When this article refers to 'an odd power' of a number it means the exponent is an odd number, not that the result is odd. For instance 2^3, which is 8, is an odd power of 2 because the exponent is 3. This is the usual usage and applies to any similar form like an even power, negative power, or positive power.

Integer exponents
The exponentiation operation with integer exponents requires only elementary algebra.

Positive integer exponents
Formally, powers with positive integer exponents may be defined by the initial condition

b^1 = b

and the recurrence relation

b^(n+1) = b^n · b.

From the associativity of multiplication, it follows that for any positive integers m and n,

b^(m+n) = b^m · b^n.



Arbitrary integer exponents
For non-zero b and positive n, the recurrence relation from the previous subsection can be rewritten as

b^(n−1) = b^n / b.

By defining this relation as valid for all integer n and nonzero b, it follows that

b^0 = 1

and more generally,

b^(−n) = 1 / b^n

for any nonzero b and any nonnegative integer n (and indeed any integer n). The following observations may be made:
• Any number raised to the exponent 1 is the number itself.
• Any nonzero number raised to the exponent 0 is 1; one interpretation of these powers is as empty products.
• These equations do not decide the value of 0^0. This is discussed below.
• Raising 0 to a negative exponent would imply division by 0, so it is left undefined.
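The positive-integer recurrence together with b^(−n) = 1/b^n is enough to compute any integer power; a Python sketch using repeated squaring (the function name is ours):

```python
def int_power(b, n):
    """Compute b**n for any integer n (b nonzero when n <= 0), by repeated squaring."""
    if n < 0:
        return 1 / int_power(b, -n)   # b^(-n) = 1 / b^n
    result = 1                        # n == 0 gives the empty product, 1
    while n > 0:
        if n % 2 == 1:
            result *= b
        b *= b
        n //= 2
    return result

print(int_power(3, 5))    # 243
print(int_power(2, -3))   # 0.125
print(int_power(7, 0))    # 1
```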

The identity

(b^m)^n = b^(m·n),

initially defined only for positive integers m and n, holds for arbitrary integers m and n, with the constraint that m and n must both be positive when b is zero.

Combinatorial interpretation
For nonnegative integers n and m, the power n^m equals the cardinality of the set of m-tuples from an n-element set, or the number of m-letter words from an n-letter alphabet.
0^5 = │ { } │ = 0. There is no 5-tuple from the empty set.
1^4 = │ { (1,1,1,1) } │ = 1. There is one 4-tuple from a one-element set.
2^3 = │ { (1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1), (2,2,2) } │ = 8. There are eight 3-tuples from a two-element set.
3^2 = │ { (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3) } │ = 9. There are nine 2-tuples from a three-element set.
4^1 = │ { (1), (2), (3), (4) } │ = 4. There are four 1-tuples from a four-element set.
5^0 = │ { () } │ = 1. There is exactly one empty tuple.
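The tuple-counting interpretation is easy to check mechanically; a Python sketch using the standard-library `itertools.product`:

```python
from itertools import product

def count_tuples(n, m):
    """Number of m-tuples over an n-element set -- should equal n**m."""
    return len(list(product(range(n), repeat=m)))

assert count_tuples(2, 3) == 2 ** 3 == 8   # eight 3-tuples from a two-element set
assert count_tuples(3, 2) == 3 ** 2 == 9   # nine 2-tuples from a three-element set
assert count_tuples(5, 0) == 5 ** 0 == 1   # exactly one empty tuple
assert count_tuples(0, 5) == 0 ** 5 == 0   # no 5-tuple from the empty set
print("all counts agree")
```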

See also exponentiation over sets.



Identities and properties
The following identities hold, provided that the base is non-zero whenever the integer exponent is not positive:

b^(m+n) = b^m · b^n
(b^m)^n = b^(m·n)
(b·c)^n = b^n · c^n
Exponentiation is not commutative. This contrasts with addition and multiplication, which are. For example, 2 + 3 = 3 + 2 = 5 and 2·3 = 3·2 = 6, but 2^3 = 8, whereas 3^2 = 9. Exponentiation is not associative either, while addition and multiplication are. For example, (2+3)+4 = 2+(3+4) = 9 and (2·3)·4 = 2·(3·4) = 24, but (2^3)^4 = 8^4 = 4096, whereas 2^(3^4) = 2^81 = 2,417,851,639,229,258,349,412,352. Without parentheses to modify the order of calculation, by convention the order is top-down, not bottom-up:

b^p^q means b^(p^q), not (b^p)^q.
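Python's `**` operator follows the same top-down (right-associative) convention, so these claims can be checked directly:

```python
# Exponentiation is right-associative (top-down): 2**3**4 parses as 2**(3**4).
assert 2 ** 3 ** 4 == 2 ** (3 ** 4) == 2 ** 81
assert (2 ** 3) ** 4 == 8 ** 4 == 4096

# It is neither commutative nor associative:
assert 2 ** 3 != 3 ** 2
assert (2 ** 3) ** 4 != 2 ** (3 ** 4)

print(2 ** 81)  # 2417851639229258349412352
```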

Particular bases
Powers of ten

See Scientific notation. In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^(−4) = 0.0001. Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299,792,458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458 × 10^8 m/s and then approximated as 2.998 × 10^8 m/s. SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 metres.

Powers of two

The positive powers of 2 are important in computer science because there are 2^n possible values for an n-bit binary variable. Powers of 2 are important in set theory since a set with n members has a power set, or set of all subsets of the original set, with 2^n members. The negative powers of 2 are commonly used, and the first two have special names: half, and quarter. In the base 2 (binary) number system, integer powers of 2 are written as 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, two to the power of three is written as 1000 in binary.

Powers of one

The integer powers of one are all one: 1^n = 1.

Powers of zero

If the exponent is positive, the power of zero is zero: 0^n = 0, where n > 0. If the exponent is negative, the power of zero (0^n, where n < 0) is undefined, because division by zero is implied. If the exponent is zero, some authors define 0^0 = 1, whereas others leave it undefined, as discussed below.

Powers of minus one

If n is an even integer, then (−1)^n = 1. If n is an odd integer, then (−1)^n = −1. Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see the section on powers of complex numbers.


Large exponents
The limit of a sequence of powers of a number greater than one diverges; in other words, the powers grow without bound: b^n → ∞ as n → ∞ when b > 1. This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one". Powers of a number with absolute value less than one tend to zero: b^n → 0 as n → ∞ when |b| < 1. Any power of one is one: b^n = 1 for all n if b = 1. If the number b varies tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is (1 + 1/n)^n → e as n → ∞; see the section on the exponential function below. Other limits, in particular those of expressions tending to indeterminate forms, are described in limits of powers below.
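These limiting behaviours can be observed numerically; a brief Python sketch:

```python
import math

# b^n diverges for b > 1 and tends to 0 for |b| < 1.
print(1.5 ** 100)   # very large
print(0.5 ** 100)   # very close to 0

# The indeterminate-looking case (1 + 1/n)^n tends to e, not 1.
for n in (10, 1000, 100000):
    print(n, (1 + 1 / n) ** n)
print(math.e)  # 2.718281828459045
```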



Rational powers
An n-th root of a number b is a number x such that x^n = b. If b is a positive real number and n is a positive integer, then there is exactly one positive real solution to x^n = b. This solution is called the principal n-th root of b. It is denoted n√b, where √ is the radical symbol; alternatively, it may be written b^(1/n). For example: 4^(1/2) = 2 and 8^(1/3) = 2. When one speaks of the n-th root of a positive real number b, one usually means the principal n-th root. If n is even, then x^n = b has two real solutions if b is positive, which are the positive and negative nth roots. The equation has no solution in real numbers if b is negative.
From top to bottom: x^(1/8), x^(1/4), x^(1/2), x^1, x^2, x^4, x^8.

If n is odd, then x^n = b has one real solution. The solution is positive if b is positive and negative if b is negative. Rational powers b^(m/n), where m/n is in lowest terms, are positive if m is even, negative for negative b if m and n are odd, and can be either sign if b is positive and n is even. For example, (−27)^(1/3) = −3, (−27)^(2/3) = 9, and 4^(3/2) has two roots, 8 and −8. Since there is no real number x such that x^2 = −1, the definition of b^(m/n) when b is negative and n is even must use the imaginary unit i, as described more fully in the section Powers of complex numbers. A power of a positive real number b with a rational exponent m/n in lowest terms satisfies

b^(m/n) = (b^m)^(1/n) = (b^(1/n))^m,

where m is an integer and n is a positive integer. Care needs to be taken when applying the power law identities with negative nth roots. For instance, −27 = (−27)^((2/3)·(3/2)) = ((−27)^(2/3))^(3/2) = 9^(3/2) = 27 is clearly wrong. The problem here occurs in taking the positive square root rather than the negative one at the last step, but in general the same sorts of problems occur as described for complex numbers in the section Failure of power and logarithm identities.
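The caution about negative bases shows up directly in programming languages: in Python, a fractional power of a negative number returns the complex principal value rather than the real root, so the real cube root of −27 has to be computed separately. An illustrative sketch:

```python
# For positive bases the two forms of a rational power agree:
assert abs(8 ** (2 / 3) - (8 ** 2) ** (1 / 3)) < 1e-9   # both are (about) 4.0

# For a negative base, Python returns the complex principal value:
z = (-27) ** (1 / 3)
print(z)            # a complex number, not -3.0

# The real cube root of -27 must be computed explicitly:
real_root = -(27 ** (1 / 3))
print(real_root)    # -3.0 (up to floating-point rounding)
```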

Real powers
The identities and properties shown above for integer exponents are true for positive real numbers with noninteger exponents as well. However the identity

(b^r)^s = b^(r·s)

cannot be extended consistently to the case where b is a negative real number; see real powers of negative numbers. The failure of this identity is the basis for the problems with complex number powers detailed under failure of power and logarithm identities. The extension of exponentiation to real powers of positive real numbers can be done either by extending the rational powers to reals by continuity, or more usually by using the exponential function and its inverse the natural logarithm.



Limits of rational powers
Since any irrational number can be approximated by a rational number, exponentiation of a positive real number b to an arbitrary real exponent x can be defined by continuity with the rule[1]

b^x = lim_{r→x} b^r  (r rational),

where the limit is taken only over rational values of r. This limit only exists for positive b. The (ε, δ)-definition of limit is used; this involves showing that for any desired accuracy of the result one can choose a sufficiently small interval around x so that all the rational powers in the interval are within the desired accuracy. For example, if

x = √3 = 1.732050…,

then x is within the interval (1732/1000, 1733/1000). This means that 5^x must be within (5^1.732, 5^1.733), that is, within (16.2411, 16.2673). This gives one decimal place of accuracy for 5^√3. Any extra desired accuracy can be got by using a sufficiently small interval around x.
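The bracketing in this example can be verified numerically (taking the base to be 5, which reproduces the quoted interval (16.2411, 16.2673)); a Python sketch:

```python
import math

x = math.sqrt(3)            # x = 1.7320508..., inside (1732/1000, 1733/1000)
assert 1.732 < x < 1.733

lo, hi = 5 ** 1.732, 5 ** 1.733
print(lo, hi)               # ≈ 16.2411 and ≈ 16.2673

assert lo < 5 ** x < hi     # 5^x is trapped in the interval
```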

The exponential function
The important mathematical constant e, sometimes called Euler's number, is approximately equal to 2.718 and is the base of the natural logarithm. It provides a path for defining exponentiation with irrational exponents. It is defined as the following limit, where the power goes to infinity as the base tends to one:

e = lim_{n→∞} (1 + 1/n)^n.

The exponential function, defined by

exp(x) = lim_{n→∞} (1 + x/n)^n,

satisfies the basic exponential identity

exp(x + y) = exp(x) · exp(y).

Since exp(1) is equal to e and exp(x) satisfies the exponential identity, the exponential form e^x for real x can be defined to give the value exp(x). For integer and rational powers the value of e^x is the same as that given

by this definition for real powers; a short proof for integers is given just below. The exponential function is defined for all integer, fractional, real, and complex values of x. It can even be used to extend exponentiation to some nonnumerical entities such as square matrices; however, the exponential identity only holds when x and y commute. A short proof that e to a positive integer power k is the same as exp(k) is:

e^k = (lim_{n→∞} (1 + 1/n)^n)^k = lim_{n→∞} (1 + 1/n)^(nk) = lim_{m→∞} (1 + k/m)^m = exp(k), substituting m = nk.

This proof also shows that e^(x+y) = e^x · e^y satisfies the exponential identity when x and y are positive integers.

These results are in fact generally true for all numbers, not just for the positive integers.



Powers via logarithms
The natural logarithm ln(x) is the inverse of the exponential function e^x. It is defined for b > 0, and satisfies e^(ln b) = b. If b^x is to preserve the logarithm and exponent rules, then one must have

b^x = (e^(ln b))^x = e^(x·ln b)

for each real number x. This can be used as an alternative definition of the real number power b^x and agrees with the definition given above using rational exponents and continuity. The definition of exponentiation using logarithms is more common in the context of complex numbers, as discussed below.
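The definition b^x = e^(x·ln b) matches direct exponentiation for positive bases; a quick Python check (the helper name `power_via_log` is ours):

```python
import math

def power_via_log(b, x):
    """b**x for b > 0, computed as e^(x * ln b)."""
    return math.exp(x * math.log(b))

# For positive bases, the logarithm definition agrees with the ** operator
# up to floating-point rounding.
for b, x in [(2.0, 10.0), (5.0, 0.5), (10.0, -1.25)]:
    assert math.isclose(b ** x, power_via_log(b, x), rel_tol=1e-12)

print(power_via_log(2.0, 10.0))  # 1024.0 (up to rounding)
```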

Real powers of negative numbers
Powers of a positive real number are always positive real numbers. The solution of x^2 = 4, however, can be either 2 or −2. The principal value of 4^(1/2) is 2, but −2 is also a valid square root. If the definition of exponentiation of real numbers is extended to allow negative results then the result is no longer well behaved. Neither the logarithm method nor the rational exponent method can be used to define b^r as a real number for a negative real number b and an arbitrary real number r. Indeed, e^r is positive for every real number r, so ln(b) is not defined as a real number for b ≤ 0. (On the other hand, arbitrary complex powers of negative numbers b can be defined by choosing a complex logarithm of b.) The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^r has a unique continuous extension[1] from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined. For example, consider b = −1. The nth root of −1 is −1 for every odd natural number n. So if n is an odd positive integer, (−1)^(m/n) = −1 if m is odd, and (−1)^(m/n) = 1 if m is even. Thus the set of rational numbers q for which (−1)^q = 1 is dense in the rational numbers, as is the set of q for which (−1)^q = −1. This means that the function (−1)^q is not continuous at any rational number q where it is defined.
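The dense sign-flipping of (−1)^q on the rationals can be seen by evaluating odd-denominator powers exactly; a Python sketch using the standard-library `fractions` module (the helper name `minus_one_to` is ours):

```python
from fractions import Fraction

def minus_one_to(q):
    """(-1)**q for a rational q with odd denominator (the only case with a real value)."""
    q = Fraction(q)                    # Fraction reduces m/n to lowest terms
    assert q.denominator % 2 == 1, "even denominators give no real value"
    return -1 if q.numerator % 2 == 1 else 1

# Arbitrarily close rationals give opposite values, so (-1)**q is nowhere continuous:
print(minus_one_to(Fraction(1, 3)))       # -1 (odd numerator)
print(minus_one_to(Fraction(2, 3)))       #  1 (even numerator)
print(minus_one_to(Fraction(333, 999)))   # -1: 333/999 reduces to 1/3
```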



Complex powers of positive real numbers
Imaginary powers of e
The geometric interpretation of the operations on complex numbers and the definition of powers of e is the clue to understanding e^(ix) for real x. Consider the right triangle (0, 1, 1 + ix/n). For big values of n the triangle is almost a circular sector with a small central angle equal to x/n radians. The triangles (0, (1 + ix/n)^k, (1 + ix/n)^(k+1)) are mutually similar for all values of k. So for large values of n the limiting point of (1 + ix/n)^n is the point on the unit circle whose angle from the positive real axis is x radians. The polar coordinates of this point are (r, θ) = (1, x), and the cartesian coordinates are (cos x, sin x). So e^(ix) = cos x + i·sin x, and this is Euler's formula, connecting algebra to trigonometry by means of complex numbers. The solutions to the equation e^z = 1 are the integer multiples of 2πi:

z = 2kπi, where k is an integer.

The exponential function e^z can be defined as the limit of (1 + z/N)^N, as N approaches infinity, and thus e^(iπ) is the limit of (1 + iπ/N)^N. In this animation N takes various increasing values from 1 to 100. The computation of (1 + iπ/N)^N is displayed as the combined effect of N repeated multiplications in the complex plane, with the final point being the actual value of (1 + iπ/N)^N. It can be seen that as N gets larger (1 + iπ/N)^N approaches a limit of −1. Therefore, e^(iπ) = −1, which is known as Euler's identity.

More generally, if e^v = w, then every solution to e^z = w can be obtained by adding an integer multiple of 2πi to v:

z = v + 2kπi, where k is an integer.

Thus the complex exponential function is a periodic function with period 2πi. More simply: e^(iπ) = −1; e^(x + iy) = e^x (cos y + i·sin y).
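The limit (1 + iπ/N)^N → −1 and Euler's formula can be reproduced with Python's built-in complex numbers; a sketch:

```python
import cmath
import math

# (1 + i*pi/N)^N approaches e^{i*pi} = -1 as N grows.
for N in (10, 100, 10000):
    z = (1 + 1j * math.pi / N) ** N
    print(N, z)

# Euler's formula: e^{ix} = cos x + i*sin x.
x = 0.75
assert abs(cmath.exp(1j * x) - (math.cos(x) + 1j * math.sin(x))) < 1e-12
```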

Trigonometric functions
It follows from Euler's formula that the trigonometric functions cosine and sine are

cos x = (e^(ix) + e^(−ix)) / 2,    sin x = (e^(ix) − e^(−ix)) / (2i).

Historically, cosine and sine were defined geometrically before the invention of complex numbers. The above formula reduces the complicated formulas for trigonometric functions of a sum into the simple exponentiation formula

e^(i(x+y)) = e^(ix) · e^(iy).
Using exponentiation with complex exponents may reduce problems in trigonometry to algebra.



Complex powers of e
The power z = e^(x+i·y) can be computed as e^x · e^(i·y). The real factor e^x is the absolute value of z and the complex factor e^(i·y) identifies the direction of z.

Complex powers of positive real numbers
If b is a positive real number, and z is any complex number, the power b^z is defined as e^(z·ln(b)), where x = ln(b) is the unique real solution to the equation e^x = b. So the same method working for real exponents also works for complex exponents. For example:

2^i = e^(i·ln(2)) = cos(ln(2)) + i·sin(ln(2)) ≈ 0.76924 + 0.63896i
e^i ≈ 0.54030 + 0.84147i
10^i ≈ −0.66820 + 0.74398i
(e^(2π))^i ≈ 535.49^i ≈ 1
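These values can be checked with Python's `cmath`, computing b^z as e^(z·ln b) for positive real b (the helper name `complex_power` is ours):

```python
import cmath
import math

def complex_power(b, z):
    """b**z for positive real b and complex z, via e^(z * ln b)."""
    return cmath.exp(z * math.log(b))

print(complex_power(2, 1j))    # ≈ 0.76924 + 0.63896i
print(complex_power(10, 1j))   # ≈ -0.66820 + 0.74398i

# Agrees with Python's built-in complex exponentiation for positive bases:
assert abs(complex_power(2, 1j) - 2 ** 1j) < 1e-12
# (e^{2*pi})^i is 1, up to rounding:
assert abs(complex_power(math.e ** (2 * math.pi), 1j) - 1) < 1e-9
```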

Powers of complex numbers
Integer powers of nonzero complex numbers are defined by repeated multiplication or division as above. If i is the imaginary unit and n is an integer, then i^n equals 1, i, −1, or −i, according to whether the integer n is congruent to 0, 1, 2, or 3 modulo 4. Because of this, the powers of i are useful for expressing sequences of period 4. Complex powers of positive reals are defined via e^x as in section Complex powers of positive real numbers above. These are continuous functions. Trying to extend these functions to the general case of noninteger powers of complex numbers that are not positive reals leads to difficulties. Either we define discontinuous functions or multivalued functions. Neither of these options is entirely satisfactory. The rational power of a complex number must be the solution to an algebraic equation. Therefore it always has a finite number of possible values. For example, w = z^(1/2) must be a solution to the equation w^2 = z. But if w is a solution, then so is −w, because (−1)^2 = 1. A unique but somewhat arbitrary solution called the principal value can be chosen using a general rule which also applies for nonrational powers. Complex powers and logarithms are more naturally handled as single valued functions on a Riemann surface. Single valued versions are defined by choosing a sheet. The value has a discontinuity along a branch cut. Choosing one out of many solutions as the principal value leaves us with functions that are not continuous, and the usual rules for manipulating powers can lead us astray. Any nonrational power of a complex number has an infinite number of possible values because of the multi-valued nature of the complex logarithm (see below). The principal value is a single value chosen from these by a rule which, amongst its other properties, ensures powers of complex numbers with a positive real part and zero imaginary part give the same value as for the corresponding real numbers.
Exponentiating a real number to a complex power is formally a different operation from exponentiating the corresponding complex number. However, in the common case of a positive real base, the principal value is the same. The powers of negative real numbers are not always defined, and are discontinuous even where they are defined; when dealing with complex numbers, the complex-number operation is normally used instead.



Complex power of a complex number
For complex numbers w and z with w ≠ 0, the notation w^z is ambiguous in the same sense that log w is. To obtain a value of w^z, first choose a logarithm of w; call it log w. Such a choice may be the principal value Log w (the default, if no other specification is given), or perhaps a value given by some other branch of log w fixed in advance. Then, using the complex exponential function, one defines

w^z = e^(z·log w)

because this agrees with the earlier definition in the case where w is a positive real number and the (real) principal value of log w is used. If z is an integer, then the value of w^z is independent of the choice of log w, and it agrees with the earlier definition of exponentiation with an integer exponent. If z is a rational number m/n in lowest terms with n > 0, then the infinitely many choices of log w yield only n different values for w^z; these values are the n complex solutions s to the equation s^n = w^m. If z is an irrational number, then the infinitely many choices of log w lead to infinitely many distinct values for w^z. The computation of complex powers is facilitated by converting the base w to polar form, as described in detail below. A similar construction is employed in quaternions.

Complex roots of unity
A complex number w such that w^n = 1 for a positive integer n is an nth root of unity. Geometrically, the nth roots of unity lie on the unit circle of the complex plane at the vertices of a regular n-gon with one vertex on the real number 1.

If w^n = 1 but w^k ≠ 1 for all natural numbers k such that 0 < k < n, then w is called a primitive nth root of unity. The negative unit −1 is the only primitive square root of unity. The imaginary unit i is one of the two primitive 4th roots of unity; the other one is −i.

The number e^(2πi/n) is the primitive nth root of unity with the smallest positive complex argument. (It is sometimes called the principal nth root of unity, although this terminology is not universal and should not be confused with the principal value of the nth root of 1, which is 1.[2]) The other nth roots of unity are given by

(e^(2πi/n))^k = e^(2πik/n)
The three 3rd roots of 1

for 2 ≤ k ≤ n.
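The nth roots of unity can be generated numerically (an illustrative sketch; `nth_roots_of_unity` is a name chosen here):

```python
import cmath

def nth_roots_of_unity(n):
    """Return the n complex solutions w of w**n == 1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# The fourth roots of unity are 1, i, -1, -i (up to rounding).
for w in nth_roots_of_unity(4):
    assert abs(w ** 4 - 1) < 1e-9
```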



Roots of arbitrary complex numbers
Although there are infinitely many possible values for a general complex logarithm, there are only a finite number of values for the power w^q in the important special case where q = 1/n and n is a positive integer. These are the nth roots of w; they are solutions of the equation z^n = w. As with real roots, a second root is also called a square root and a third root is also called a cube root.

It is conventional in mathematics to define w^(1/n) as the principal value of the root. If w is a positive real number, it is also conventional to select a positive real number as the principal value of the root w^(1/n). For general complex numbers, the nth root with the smallest argument is often selected as the principal value of the nth root operation, as with principal values of roots of unity.

The set of nth roots of a complex number w is obtained by multiplying the principal value w^(1/n) by each of the nth roots of unity. For example, the fourth roots of 16 are 2, −2, 2i, and −2i, because the principal value of the fourth root of 16 is 2 and the fourth roots of unity are 1, −1, i, and −i.
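The "principal root times the roots of unity" recipe can be sketched as follows (illustrative; it assumes the principal nth root of w is the one with argument arg(w)/n, and `nth_roots` is a name chosen here):

```python
import cmath

def nth_roots(w, n):
    """All nth roots of w: a principal root times the nth roots of unity."""
    principal = abs(w) ** (1 / n) * cmath.exp(1j * cmath.phase(w) / n)
    return [principal * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# The fourth roots of 16 are 2, 2i, -2, -2i (up to rounding).
for z in nth_roots(16, 4):
    assert abs(z ** 4 - 16) < 1e-9
```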

Computing complex powers
It is often easier to compute complex powers by writing the number to be exponentiated in polar form. Every complex number z can be written in the polar form

z = r·e^(iθ)
where r is a nonnegative real number and θ is the (real) argument of z. The polar form has a simple geometric interpretation: if a complex number u + iv is thought of as representing a point (u, v) in the complex plane using Cartesian coordinates, then (r, θ) is the same point in polar coordinates. That is, r is the "radius" r^2 = u^2 + v^2 and θ is the "angle" θ = atan2(v, u). The polar angle θ is ambiguous since any integer multiple of 2π could be added to θ without changing the location of the point. Each choice of θ gives in general a different possible value of the power. A branch cut can be used to choose a specific value. The principal value (the most common branch cut) corresponds to θ chosen in the interval (−π, π]. For complex numbers with a positive real part and zero imaginary part, using the principal value gives the same result as using the corresponding real number.

In order to compute the complex power w^z, write w in polar form, w = r·e^(iθ). Then

log w = log r + iθ
and thus w^z = e^(z·log w) = e^(z(log r + iθ)). If z is decomposed as c + di, then the formula for w^z can be written more explicitly as

w^z = (r^c·e^(−dθ))·e^(i(d·log r + cθ)) = (r^c·e^(−dθ))·(cos(d·log r + cθ) + i·sin(d·log r + cθ))
This final formula allows complex powers to be computed easily from decompositions of the base into polar form and the exponent into Cartesian form. It is shown here both in polar form and in Cartesian form (via Euler's formula). The following examples use the principal value, the branch cut which causes θ to be in the interval (−π, π]. To compute i^i, write i in polar and Cartesian forms:

i = 1·e^(iπ/2) = 0 + 1·i
Then the formula above, with r = 1, θ = π/2, c = 0, and d = 1, yields:

i^i = (1^0·e^(−π/2))·e^(i(1·log 1 + 0·π/2)) = e^(−π/2) ≈ 0.2079
Similarly, to find (−2)^(3 + 4i), compute the polar form of −2:

−2 = 2·e^(iπ)


and use the formula above to compute

(−2)^(3 + 4i) = (2^3·e^(−4π))·e^(i(4·log 2 + 3π)) ≈ (2.602 − 1.006i)·10^(−5)
The value of a complex power depends on the branch used. For example, if the polar form i = 1·e^(i(5π/2)) is used to compute i^i, the power is found to be e^(−5π/2); the principal value of i^i, computed above, is e^(−π/2). The set of all possible values for i^i is given by:[3]

e^(−π/2 − 2πk), with k any integer
So there are infinitely many values which are possible candidates for the value of i^i, one for each integer k. All of them have zero imaginary part, so one can say i^i has infinitely many valid real values.
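The principal value of i^i, and a few of the other branch values, can be checked numerically (an illustrative sketch; `principal` and `others` are names chosen here):

```python
import cmath, math

# Principal value: i^i = e^(i * Log i) with Log i = i*pi/2, so i^i = e^(-pi/2).
principal = 1j ** 1j
assert abs(principal - math.exp(-math.pi / 2)) < 1e-12

# Other branches use log i = i*(pi/2 + 2*pi*k), giving e^(-pi/2 - 2*pi*k).
others = [cmath.exp(1j * 1j * (math.pi / 2 + 2 * math.pi * k)) for k in (-1, 0, 1)]
for k, value in zip((-1, 0, 1), others):
    assert abs(value - math.exp(-math.pi / 2 - 2 * math.pi * k)) < 1e-6
print(principal)  # a real number, about 0.2079
```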

Failure of power and logarithm identities
Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example:
• The identity log(b^x) = x·log b holds whenever b is a positive real number and x is a real number. But for the principal branch of the complex logarithm one has

iπ = Log(−1) = Log((−i)^2) ≠ 2·Log(−i) = 2·(−iπ/2) = −iπ
Regardless of which branch of the logarithm is used, a similar failure of the identity will exist. The best that can be said (if only using this result) is that:

log(w^z) ≡ z·log w (mod 2πi)
This identity does not hold even when considering log as a multivalued function. The possible values of log(w^z) contain those of z·log w as a subset. Using Log(w) for the principal value of log(w) and m, n as any integers, the possible values of both sides are:

{values of z·log w} = {z·Log w + 2πin·z : n an integer}
{values of log(w^z)} = {z·Log w + 2πin·z + 2πim : m, n integers}
• The identities (bc)^x = b^x·c^x and (b/c)^x = b^x/c^x are valid when b and c are positive real numbers and x is a real number. But a calculation using principal branches shows that

1 = ((−1)·(−1))^(1/2) ≠ (−1)^(1/2)·(−1)^(1/2) = i·i = −1

On the other hand, when x is an integer, the identities are valid for all nonzero complex numbers. If exponentiation is considered as a multivalued function then the possible values of ((−1)·(−1))^(1/2) are {1, −1}. The identity holds, but saying {1} = {((−1)·(−1))^(1/2)} is wrong.
• The identity (e^x)^y = e^(xy) holds for real numbers x and y, but assuming its truth for complex numbers leads to the following paradox, discovered in 1827 by Clausen:[4] For any integer n, we have:
1. e^(1 + 2πin) = e
2. (e^(1 + 2πin))^(1 + 2πin) = e^(1 + 2πin)
3. e^((1 + 2πin)(1 + 2πin)) = e^(1 + 2πin)
4. e^(1 + 4πin − 4π^2·n^2) = e^(1 + 2πin)
5. e^(−4π^2·n^2) = 1
but this is false when the integer n is nonzero. There are a number of problems in the reasoning. The major error is that changing the order of exponentiation in going from line two to line three changes what principal value will be chosen. From the multi-valued point of view, the first error occurs even sooner: it is implicit in the first line and not obvious. It is that e is a real number, whereas the result of e^(1 + 2πin) is a complex number better represented as e + 0i. Substituting the complex number for the real number on the second line makes the power have multiple possible values. Changing the order of exponentiation from lines two to three also affects how many possible values the result can have.


Zero to the zero power
Most authors agree with the statements related to 0^0 in the two lists below, but make different decisions when it comes to defining 0^0 or not: see the next subsection.

Plot of z = abs(x)^y. The red curves yield different limits as (x, y) approaches (0, 0), while the green curves all yield a limit of 1.

In most settings not involving continuity in the exponent, interpreting 0^0 as 1 simplifies formulas and eliminates the need for special cases in theorems. (See the next paragraph for some settings that do involve continuity.) For example:
• Regarding b^0 as an empty product assigns it the value 1, even when b = 0.
• The combinatorial interpretation of 0^0 is the number of empty tuples of elements from the empty set. There is exactly one empty tuple.
• Equivalently, the set-theoretic interpretation of 0^0 is the number of functions from the empty set to the empty set. There is exactly one such function, the empty function.[5]
• The notation Σ a_n·x^n for polynomials and power series relies on defining 0^0 = 1. Identities like 1/(1−x) = Σ x^n and the binomial theorem are not valid for x = 0 unless 0^0 = 1.[6]
• In differential calculus, the power rule d/dx x^n = n·x^(n−1) is not valid for n = 1 at x = 0 unless 0^0 = 1.
On the other hand, when 0^0 arises from a limit of the form lim f(t)^(g(t)), it must be handled as an indeterminate form.
• Limits involving algebraic operations can often be evaluated by replacing subexpressions by their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form.[7] In fact, when f(t) and g(t) are real-valued functions both approaching 0 (as t approaches a real number or ±∞), with f(t) > 0, the function f(t)^(g(t)) need not approach 1; depending on f and g, the limit of f(t)^(g(t)) can be any nonnegative real number or +∞, or it can be undefined. For example, t^t → 1 as t → 0+, while (e^(−1/t))^t = e^(−1) for every t > 0, although both are of the form f(t)^(g(t)) with f(t), g(t) → 0. So 0^0 is an indeterminate form. This behavior shows that the two-variable function x^y, though continuous on the set {(x,y): x > 0}, cannot be extended to a continuous function on any set containing (0,0), no matter how 0^0 is defined.[8] However, under certain conditions, such as when f and g are both analytic functions and f is nonnegative, the limit approaching from the right is always 1.[9] [10] [11]
• In the complex domain, the function z^w is defined for nonzero z by choosing a branch of log z and setting z^w := e^(w·log z), but there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.[12]
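Two f(t)^g(t) forms with f, g → 0 but different limits can be observed numerically (an illustrative sketch; t is kept away from 0 to avoid floating-point underflow):

```python
import math

# Both forms have f(t), g(t) -> 0 as t -> 0+, yet the limits differ,
# which is why 0^0 is an indeterminate limit form.
for t in (0.1, 0.01, 0.002):
    a = t ** t                 # t^t -> 1 as t -> 0+
    b = math.exp(-1 / t) ** t  # (e^(-1/t))^t == e^(-1) for every t > 0
    print(f"t={t}: t^t={a:.5f}, (e^(-1/t))^t={b:.5f}")
```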

History of differing points of view
Different authors interpret the situation above in different ways:
• Some argue that the best value for 0^0 depends on context, and hence that defining it once and for all is problematic.[13] According to Benson (1999), "The choice whether to define 0^0 is based on convenience, not on correctness."[14]
• Others argue that 0^0 is 1. According to p. 408 of Knuth (1992), it "has to be 1", although he goes on to say that "Cauchy had good reason to consider 0^0 as an undefined limiting form" and that "in this much stronger sense, the value of 0^0 is less defined than, say, the value of 0 + 0" (emphases in original).[15]
The debate has been going on at least since the early 19th century. At that time, most mathematicians agreed that 0^0 = 1, until in 1821 Cauchy[16] listed 0^0 along with expressions like 0⁄0 in a table of undefined forms. In the 1830s Libri[17] [18] published an unconvincing argument for 0^0 = 1, and Möbius[19] sided with him, erroneously claiming that f(t)^(g(t)) → 1 whenever f(t), g(t) → 0. A commentator who signed his name simply as "S" provided the counterexample of (e^(−1/t))^t, and this quieted the debate for some time, with the apparent conclusion of this episode being that 0^0 should be undefined. More details can be found in Knuth (1992).[15]

Treatment on computers
IEEE floating point standard
The IEEE 754-2008 floating point standard is used in the design of most floating point libraries. It recommends a number of different functions for computing a power:[20]
• pow treats 0^0 as 1. This is the oldest defined version. If the power is an exact integer the result is the same as for pown, otherwise the result is as for powr (except for some exceptional cases).
• pown treats 0^0 as 1. The power must be an exact integer. The value is defined for negative bases; e.g., pown(−3, 5) is −243.
• powr treats 0^0 as NaN (Not-a-Number – undefined). The value is also NaN for cases like powr(−3, 2) where the base is less than zero. The value is defined by e^(power × log(base)).

Programming languages
Most programming languages with a power function implement it using the IEEE pow function and therefore evaluate 0^0 as 1. The later C[21] and C++ standards describe this as the normative behaviour. The Java standard[22] mandates this behavior, and the .NET Framework method System.Math.Pow also treats 0^0 as 1.[23]
Mathematics software
• Sage simplifies b^0 to 1, even if no constraints are placed on b.[24] It does not simplify 0^x, and it takes 0^0 to be 1.
• Maple simplifies b^0 to 1 and 0^x to 0, even if no constraints are placed on b (the latter simplification is only valid for x > 0), and evaluates 0^0 to 1.
• Macsyma also simplifies b^0 to 1 and 0^x to 0, even if no constraints are placed on b and x, but issues an error for 0^0.
• Mathematica and Wolfram Alpha simplify b^0 to 1, even if no constraints are placed on b.[25] While Mathematica does not simplify 0^x, Wolfram Alpha returns two results, 0 and indeterminate.[26] Both Mathematica and Wolfram Alpha take 0^0 to be an indeterminate form.[27]
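Python follows the same pow convention (a quick check):

```python
import math

# Python's integer power operator and math.pow both follow the IEEE pow
# convention of treating 0^0 as 1.
assert 0 ** 0 == 1
assert math.pow(0.0, 0.0) == 1.0
print("0**0 == 1 in Python")
```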


Limits of powers
The section zero to the zero power gives a number of examples of limits which are of the indeterminate form 0^0. The limits in these examples exist, but have different values, showing that the two-variable function x^y has no limit at the point (0,0). One may ask at what points this function does have a limit.

More precisely, consider the function f(x,y) = x^y defined on D = {(x,y) ∈ R^2 : x > 0}. Then D can be viewed as a subset of R̄^2 (that is, the set of all pairs (x,y) with x, y belonging to the extended real number line R̄ = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit. In fact, f has a limit at all accumulation points of D, except for (0,0), (+∞,0), (1,+∞) and (1,−∞).[28] Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^(+∞) and 1^(−∞), which remain indeterminate forms. Under this definition by continuity, we obtain:
• x^(+∞) = +∞ and x^(−∞) = 0, when 1 < x ≤ +∞.
• x^(+∞) = 0 and x^(−∞) = +∞, when 0 ≤ x < 1.
• 0^y = 0 and (+∞)^y = +∞, when 0 < y ≤ +∞.
• 0^y = +∞ and (+∞)^y = 0, when −∞ ≤ y < 0.

These powers are obtained by taking limits of x^y for positive values of x. This method does not permit a definition of x^y when x < 0, since pairs (x,y) with x < 0 are not accumulation points of D. On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n → +∞ as x tends to 0 through positive values, but not negative ones.

Efficient computation of integer powers
The simplest method of computing b^n requires n−1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, note that 100 = 64 + 32 + 4. Compute the following in order:
1. 2^2 = 4
2. (2^2)^2 = 2^4 = 16
3. (2^4)^2 = 2^8 = 256
4. (2^8)^2 = 2^16 = 65,536
5. (2^16)^2 = 2^32 = 4,294,967,296
6. (2^32)^2 = 2^64 = 18,446,744,073,709,551,616

7. 2^64 · 2^32 · 2^4 = 2^100 = 1,267,650,600,228,229,401,496,703,205,376
This series of steps only requires 8 multiplication operations instead of 99 (since the last product above takes 2 multiplications). In general, the number of multiplication operations required to compute b^n can be reduced to Θ(log n) by using exponentiation by squaring or (more generally) addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available.[29]
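Exponentiation by squaring can be sketched in a few lines (illustrative; `power` is a name chosen here):

```python
def power(b, n):
    """Compute b**n for a nonnegative integer n using O(log n) multiplications."""
    result = 1
    square = b
    while n > 0:
        if n & 1:            # this bit of n contributes b^(2^k)
            result *= square
        square *= square     # b^(2^k) -> b^(2^(k+1))
        n >>= 1
    return result

assert power(2, 100) == 2 ** 100
```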


Exponential notation for function names
Placing an integer superscript after the name or symbol of a function, as if the function were being raised to a power, commonly refers to repeated function composition rather than repeated multiplication. Thus f^3(x) may mean f(f(f(x))); in particular, f^(−1)(x) usually denotes the inverse function of f. Iterated functions are of interest in the study of fractals and dynamical systems. Babbage was the first to study the problem of finding a functional square root f^(1/2)(x).

However, for historical reasons, a special syntax applies to the trigonometric functions: a positive exponent applied to the function's abbreviation means that the result is raised to that power, while an exponent of −1 denotes the inverse function. That is, sin^2 x is just a shorthand way to write (sin x)^2 without using parentheses, whereas sin^(−1) x refers to the inverse function of the sine, also called arcsin x. There is no need for a shorthand for the reciprocals of trigonometric functions, since each has its own name and abbreviation; for example, 1/(sin x) = (sin x)^(−1) = csc x. A similar convention applies to logarithms, where log^2 x usually means (log x)^2, not log log x.
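Iteration in the f^n sense can be sketched as follows (illustrative; `iterate` is a name chosen here):

```python
def iterate(f, n):
    """Return the n-fold composition f^n: x -> f(f(...f(x)...))."""
    def composed(x):
        for _ in range(n):
            x = f(x)
        return x
    return composed

double = lambda x: 2 * x
assert iterate(double, 3)(1) == 8   # double^3(1) = 2*2*2*1
```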

In abstract algebra
Exponentiation for integer exponents can be defined for quite general structures in abstract algebra. Let X be a set with a power-associative binary operation which is written multiplicatively. Then x^n is defined for any element x of X and any nonzero natural number n as the product of n copies of x, which is recursively defined by

x^1 = x,  x^n = x^(n−1)·x for n > 1
One has the following properties:
• x^(m+n) = x^m·x^n (power-associative property)
• If the operation has a two-sided identity element 1 (often denoted by e), so that x·1 = 1·x = x for all x, then x^0 is defined to be equal to 1 for any x.

If the operation also has two-sided inverses and multiplication is associative, then the magma is a group. The inverse of x can be denoted by x^(−1), and it follows all the usual rules for exponents:
• x·x^(−1) = x^(−1)·x = 1 (two-sided inverse)
• (x^m)^n = x^(mn) (associative)
• x^(−n) = (x^(−1))^n

If the multiplication operation is commutative (as for instance in abelian groups), then the following holds:
• (xy)^n = x^n·y^n
If the binary operation is written additively, as it often is for abelian groups, then "exponentiation is repeated multiplication" can be reinterpreted as "multiplication is repeated addition". Thus, each of the laws of exponentiation above has an analogue among laws of multiplication.

When one has several operations around, any of which might be repeated using exponentiation, it is common to indicate which operation is being repeated by placing its symbol in the superscript. Thus, x^(∗n) is x ∗ ··· ∗ x, while x^(#n) is x # ··· # x, whatever the operations ∗ and # might be.

Superscript notation is also used, especially in group theory, to indicate conjugation. That is, g^h = h^(−1)gh, where g and h are elements of some group. Although conjugation obeys some of the same laws as exponentiation, it is not an example of repeated multiplication in any sense. A quandle is an algebraic structure in which these laws of conjugation play a central role.


Over sets
If n is a natural number and A is an arbitrary set, the expression A^n is often used to denote the set of ordered n-tuples of elements of A. This is equivalent to letting A^n denote the set of functions from the set {0, 1, 2, ..., n−1} to the set A; the n-tuple (a_0, a_1, a_2, ..., a_(n−1)) represents the function that sends i to a_i.

For an infinite cardinal number κ and a set A, the notation A^κ is also used to denote the set of all functions from a set of size κ to A. This is sometimes written ^κA to distinguish it from cardinal exponentiation, defined below.

This generalized exponential can also be defined for operations on sets or for sets with extra structure. For example, in linear algebra, it makes sense to index direct sums of vector spaces over arbitrary index sets. That is, we can speak of

⊕_(i∈N) V_i
where each V_i is a vector space. Then if V_i = V for each i, the resulting direct sum can be written in exponential notation as V^(⊕N), or simply V^N with the understanding that the direct sum is the default. We can again replace the set N with a cardinal number n to get V^n, although without choosing a specific standard set with cardinality n, this is defined only up to isomorphism. Taking V to be the field R of real numbers (thought of as a vector space over itself) and n to be some natural number, we get the vector space that is most commonly studied in linear algebra, the Euclidean space R^n.

If the base of the exponentiation operation is a set, the exponentiation operation is the Cartesian product unless otherwise stated. Since multiple Cartesian products produce an n-tuple, which can be represented by a function on a set of appropriate cardinality, S^N becomes simply the set of all functions from N to S in this case. This fits in with the exponentiation of cardinal numbers, in the sense that |S^N| = |S|^|N|, where |X| is the cardinality of X. When "2" is defined as {0,1}, we have |2^X| = 2^|X|, where 2^X, usually denoted by P(X), is the power set of X; each subset Y of X corresponds uniquely to a function on X taking the value 1 for x ∈ Y and 0 for x ∉ Y.
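The identity |S^N| = |S|^|N| can be checked by enumeration for small finite sets (an illustrative sketch):

```python
from itertools import product

# Functions from a finite set N to a set S correspond to |N|-tuples over S,
# so there are |S| ** |N| of them.
S = ['a', 'b']
N = [0, 1, 2]
functions = list(product(S, repeat=len(N)))  # each tuple is one function N -> S
assert len(functions) == len(S) ** len(N)    # |S^N| = |S|^|N| = 2^3 = 8
```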



In category theory
In a Cartesian closed category, the exponential operation can be used to raise an arbitrary object to the power of another object. This generalizes the Cartesian product in the category of sets. If 0 is an initial object in a Cartesian closed category, then the exponential object 0^0 is isomorphic to any terminal object 1.

Of cardinal and ordinal numbers
In set theory, there are exponential operations for cardinal and ordinal numbers. If κ and λ are cardinal numbers, the expression κ^λ represents the cardinality of the set of functions from any set of cardinality λ to any set of cardinality κ.[5] If κ and λ are finite, then this agrees with the ordinary arithmetic exponential operation. For example, the set of 3-tuples of elements from a 2-element set has cardinality 8 = 2^3. Exponentiation of cardinal numbers is distinct from exponentiation of ordinal numbers, which is defined by a limit process involving transfinite induction.

Repeated exponentiation
Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called tetration. Iterating tetration leads to another operation, and so on. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which in turn grows faster than addition, tetration grows faster than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7,625,597,484,987, respectively.
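The (3, 3) values can be reproduced with a small sketch (illustrative; `tetrate` is a name chosen here):

```python
def tetrate(b, h):
    """Tetration: a height-h power tower of b, evaluated right to left."""
    result = 1
    for _ in range(h):
        result = b ** result
    return result

# addition, multiplication, exponentiation, tetration at (3, 3):
assert 3 + 3 == 6
assert 3 * 3 == 9
assert 3 ** 3 == 27
assert tetrate(3, 3) == 3 ** (3 ** 3) == 7625597484987
```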

In programming languages
The superscript notation x^y is convenient in handwriting but inconvenient for typewriters and computer terminals that align the baselines of all characters on each line. Many programming languages have alternate ways of expressing exponentiation that do not use superscripts:
• x ↑ y: Algol, Commodore BASIC
• x ^ y: BASIC, J, MATLAB, R, Microsoft Excel, TeX (and its derivatives), TI-BASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua, ASP and most computer algebra systems
• x ^^ y: Haskell (for fractional base, integer exponents), D
• x ** y: Ada, Bash, COBOL, Fortran, FoxPro, Gnuplot, OCaml, Perl, PL/I, Python, Rexx, Ruby, SAS, Tcl, ABAP, Haskell (for floating-point exponents), Turing, VHDL
• x⋆y: APL
• Power(x, y): Microsoft Excel, Delphi/Pascal (declared in "Math" unit)
• pow(x, y): C, C++, PHP, Tcl, Python
• math.pow(x, y): Python (always floating-point results)
• Math.pow(x, y): Java, JavaScript, Modula-3, Standard ML
• Math.Pow(x, y) or BigInteger.Pow(x, y): C# (and other languages using the BCL)
• (expt x y): Common Lisp, Scheme
• math:pow(x, y): Erlang
In Bash, C, C++, C#, Java, JavaScript, Perl, PHP, Python and Ruby, the symbol ^ represents bitwise XOR. In Pascal, it represents indirection. In OCaml and Standard ML, it represents string concatenation.



History of the notation
The term power was used by the Greek mathematician Euclid for the square of a line.[30] In the 9th century, Muhammad ibn Mūsā al-Khwārizmī used the terms mal for a square and kab for a cube, which later Islamic mathematicians represented in mathematical notation as m and k, respectively, by the 15th century, as seen in the work of Abū al-Hasan ibn Alī al-Qalasādī.[31] Nicolas Chuquet used a form of exponential notation in the 15th century, which was later used by Henricus Grammateus and Michael Stifel in the 16th century. Samuel Jeake introduced the term indices in 1696.[30] In the 16th century Robert Recorde used the terms square, cube, zenzizenzic (fourth power), surfolide (fifth), zenzicube (sixth), second surfolide (seventh) and zenzizenzizenzic (eighth).[32] Biquadrate has been used to refer to the fourth power as well. Some mathematicians (e.g., Isaac Newton) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx^3 + d. Another historical synonym, involution,[33] is now rare and should not be confused with its more common meaning.

[1] Denlinger, Charles G. (2011). Elements of Real Analysis. Jones and Bartlett. pp. 278–283. ISBN 9780763779474.
[2] This definition of a principal root of unity can be found in:
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (2001). Introduction to Algorithms (second ed.). MIT Press. ISBN 0262032937. Online resource (http://highered.mcgraw-hill.com/sites/0070131511/student_view0/chapter30/glossary.html)
• Paul Cull, Mary Flahive, and Robby Robson (2005). Difference Equations: From Rabbits to Chaos (Undergraduate Texts in Mathematics ed.). Springer. ISBN 0387232346. Defined on page 351, available on Google books.
• "Principal root of unity" (http://mathworld.wolfram.com/PrincipalRootofUnity.html), MathWorld.
[3] Complex number to a complex power may be real (http://www.cut-the-knot.org/do_you_know/complex.shtml) at Cut The Knot gives some references to i^i.
[4] Steiner J, Clausen T, Abel NH (1827). "Aufgaben und Lehrsätze, erstere aufzulösen, letztere zu beweisen" (http://gdz.sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=270662). Journal für die reine und angewandte Mathematik 2: 286–287.
[5] N. Bourbaki, Elements of Mathematics, Theory of Sets, Springer-Verlag, 2004, III.§3.5.
[6] "Some textbooks leave the quantity 0^0 undefined, because the functions x^0 and 0^x have different limiting values when x decreases to 0. But this is a mistake. We must define x^0 = 1, for all x, if the binomial theorem is to be valid when x = 0, y = 0, and/or x = −y. The binomial theorem is too important to be arbitrarily restricted! By contrast, the function 0^x is quite unimportant." Ronald Graham, Donald Knuth, and Oren Patashnik (1989-01-05). "Binomial coefficients". Concrete Mathematics (1st ed.). Addison Wesley Longman Publishing Co. p. 162. ISBN 0-201-14236-8.
[7] Malik, S. C.; Savita Arora (1992). Mathematical Analysis. New York: Wiley. p. 223. ISBN 978-8122403237.
"In general the limit of φ(x)/ψ(x) when x = a in case the limits of both the functions exist is equal to the limit of the numerator divided by the denominator. But what happens when both limits are zero? The division (0/0) then becomes meaningless. A case like this is known as an indeterminate form. Other such forms are ∞/∞, 0 × ∞, ∞ − ∞, 0^0, 1^∞ and ∞^0."
[8] L. J. Paige (March 1954). "A note on indeterminate forms". American Mathematical Monthly 61 (3): 189–190. doi:10.2307/2307224. JSTOR 2307224.
[9] sci.math FAQ: What is 0^0? (http://www.faqs.org/faqs/sci-math-faq/specialnumbers/0to0/)
[10] Rotando, Louis M.; Korn, Henry (1977). "The Indeterminate Form 0^0". Mathematics Magazine (Mathematical Association of America) 50 (1): 41–42. doi:10.2307/2689754. JSTOR 2689754.
[11] Lipkin, Leonard J. (2003). "On the Indeterminate Form 0^0". The College Mathematics Journal (Mathematical Association of America) 34 (1): 55–56. doi:10.2307/3595845. JSTOR 3595845.
[12] "... Let's start at x = 0. Here x^x is undefined." Mark D. Meyerson, The x^x Spindle, Mathematics Magazine 69, no. 3 (June 1996), 198–206.
[13] Examples include Edwards and Penny (1994). Calculus, 4th ed., Prentice-Hall, p. 466, and Keedy, Bittinger, and Smith (1982). Algebra Two. Addison-Wesley, p. 32.
[14] Donald C. Benson, The Moment of Proof: Mathematical Epiphanies. New York: Oxford University Press (UK), 1999. ISBN 978-0-19-511721-9.
[15] Donald E. Knuth, Two notes on notation, Amer. Math. Monthly 99 no. 5 (May 1992), 403–422.
[16] Augustin-Louis Cauchy, Cours d'Analyse de l'École Royale Polytechnique (1821). In his Oeuvres Complètes, series 2, volume 3.
[17] Guillaume Libri, Note sur les valeurs de la fonction 0^(0^x), Journal für die reine und angewandte Mathematik 6 (1830), 67–72.
[18] Guillaume Libri, Mémoire sur les fonctions discontinues, Journal für die reine und angewandte Mathematik 10 (1833), 303–316.
[19] A. F. Möbius, Beweis der Gleichung 0^0 = 1, nach J. F. Pfaff, Journal für die reine und angewandte Mathematik 12 (1834), 134–136.
[20] Handbook of Floating-Point Arithmetic. Birkhäuser Boston. 2009. p. 216. ISBN 978-0817647049.
[21] John Benito (April 2003) (PDF). Rationale for International Standard—Programming Languages—C (http://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf). Revision 5.10. p. 182.
[22] "Math (Java 2 Platform SE 1.4.2) pow" (http://download.oracle.com/javase/1.4.2/docs/api/java/lang/Math.html#pow(double, double)). Oracle.
[23] ".NET Framework Class Library Math.Pow Method" (http://msdn.microsoft.com/en-us/library/system.math.pow.aspx). Microsoft.
[24] "Sage worksheet calculating x^0" (http://sagenb.org/home/pub/2433/). Jason Grout.
[25] "Wolfram Alpha calculates b^0" (http://www.wolframalpha.com/input/?i=a^0). Wolfram Alpha LLC, accessed July 24, 2011.
[26] "Wolfram Alpha calculates 0^a" (http://www.wolframalpha.com/input/?i=0^a). Wolfram Alpha LLC, accessed July 24, 2011.
[27] "Wolfram Alpha calculates 0^0" (http://www.wolframalpha.com/input/?i=0^0). Wolfram Alpha LLC, accessed July 24, 2011.
[28] N. Bourbaki, Topologie générale, V.4.2.
[29] Gordon, D. M. 1998. A survey of fast exponentiation methods. J. Algorithms 27, 1 (Apr. 1998), 129–146. doi:10.1006/jagm.1997.0913.
[30] O'Connor, John J.; Robertson, Edmund F., "Etymology of some common mathematical terms" (http://www-history.mcs.st-andrews.ac.uk/Miscellaneous/Mathematical_notation.html), MacTutor History of Mathematics archive, University of St Andrews.
[31] O'Connor, John J.; Robertson, Edmund F., "Abu'l Hasan ibn Ali al Qalasadi" (http://www-history.mcs.st-andrews.ac.uk/Biographies/Al-Qalasadi.html), MacTutor History of Mathematics archive, University of St Andrews.
[32] Quinion, Michael. "World Wide Words" (http://www.worldwidewords.org/weirdwords/ww-zen1.htm). Retrieved 2010-03-19.
[33] This definition of "involution" appears in the OED second edition, 1989, and Merriam-Webster online dictionary (http://www.m-w.com/dictionary/involution). The most recent usage in this sense cited by the OED is from 1806.


External links
• sci.math FAQ: What is 0^0?
• Introducing 0th power on PlanetMath
• Laws of Exponents, with derivation and examples
• What does 0^0 (zero to the zeroth power) equal?

Symmetric function


In mathematics, the term "symmetric function" can mean two different concepts.

A symmetric function of n variables is one whose value at any n-tuple of arguments is the same as its value at any permutation of that n-tuple. While this notion can apply to any type of function whose n arguments live in the same set, it is most often used for polynomial functions, in which case these are the functions given by symmetric polynomials. There is very little systematic theory of symmetric non-polynomial functions of n variables, so this sense is little-used, except as a general definition.

In algebra, and in particular in algebraic combinatorics, the term "symmetric function" is often used instead to refer to elements of the ring of symmetric functions, where that ring is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as a universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric groups.

For these specific uses, see the corresponding articles; the remainder of this article addresses general properties of symmetric functions in n variables.

Given any function f in n variables with values in an abelian group, a symmetric function can be constructed by summing values of f over all permutations of the arguments. Similarly, an anti-symmetric function can be constructed by summing over even permutations and subtracting the sum over odd permutations. These operations are of course not invertible, and could well result in a function that is identically zero for nontrivial functions f. The only general case where f can be recovered if both its symmetrization and anti-symmetrization are known is when n = 2 and the abelian group admits a division by 2 (inverse of doubling); then f is equal to half the sum of its symmetrization and its anti-symmetrization.
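The symmetrization and anti-symmetrization just described can be sketched in code for the case n = 2 (the function f below is a hypothetical example; rational values are used so that the required division by 2 exists):

```python
from fractions import Fraction
from itertools import permutations

def symmetrize(f, args):
    # Sum the values of f over all permutations of the arguments.
    return sum(f(*p) for p in permutations(args))

def antisymmetrize(f, args):
    # Sum over even permutations minus the sum over odd permutations.
    # For n = 2 these are just the identity and the swap, respectively.
    x, y = args
    return f(x, y) - f(y, x)

# A hypothetical non-symmetric function with values in the rationals.
def f(x, y):
    return Fraction(x * x * y)

x, y = 3, 5
sym = symmetrize(f, (x, y))       # f(3,5) + f(5,3) = 45 + 75 = 120
anti = antisymmetrize(f, (x, y))  # f(3,5) - f(5,3) = 45 - 75 = -30
# With n = 2 and division by 2 available, f is half the sum of the two parts:
recovered = (sym + anti) / 2
print(recovered == f(x, y))  # True
```

Note that `symmetrize(f, (x, y))` is symmetric in x and y, and `antisymmetrize` changes sign when the arguments are swapped, matching the definitions above.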

In statistics, an n-sample statistic (a function in n variables) that is obtained by bootstrapping symmetrization of a k-sample statistic, yielding a symmetric function in n variables, is called a U-statistic. Examples include the sample mean and sample variance.
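As a concrete sketch of this (using the standard fact, not stated above, that the unbiased sample variance arises from the symmetric kernel h(x, y) = (x − y)²/2 averaged over all pairs):

```python
from itertools import combinations
from statistics import variance

def u_statistic_variance(sample):
    # Average the symmetric two-argument kernel h(x, y) = (x - y)**2 / 2
    # over all unordered pairs; the result is a symmetric function of the
    # whole sample and equals the unbiased sample variance.
    pairs = list(combinations(sample, 2))
    return sum((x - y) ** 2 / 2 for x, y in pairs) / len(pairs)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(abs(u_statistic_variance(data) - variance(data)) < 1e-12)  # True
```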


Algebraic Structures
Pre-algebra

Pre-Algebra is a common name for a course in middle school mathematics. In the United States, it is generally taught between the fifth and eighth grades, although it may be necessary to take this course as early as sixth grade in order to advance to Calculus BC by twelfth grade. The objective of Pre-Algebra is to prepare the student for the study of algebra.

Pre-Algebra includes several broad subjects:
• Review of natural number arithmetic
• New types of numbers such as integers, fractions, decimals and negative numbers
• Factorization of natural numbers
• Properties of operations (associativity, distributivity and so on)
• Simple (integer) roots and powers

• Rules of evaluation of expressions, such as operator precedence and use of parentheses
• Basics of equations, including rules for invariant manipulation of equations
• Variables and exponentiation

Pre-algebra often includes some basic subjects from geometry, mostly the kinds that further understanding of algebra and show how it is used, such as area, volume, and perimeter.

External links
• Pre-Algebra [1] online study guides, examples, practice problems, and teacher resources

[1] http://www.shmoop.com/pre-algebra/

Algebra of sets


The algebra of sets develops and describes the basic properties and laws of sets, the set-theoretic operations of union, intersection, and complementation, and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations.

The algebra of sets is the development of the fundamental properties of set operations and set relations. These properties provide insight into the fundamental nature of sets. They also have practical considerations.

Just like expressions and calculations in ordinary arithmetic, expressions and calculations involving sets can be quite complex. It is helpful to have systematic procedures available for manipulating and evaluating such expressions and performing such computations.

In the case of arithmetic, it is elementary algebra that develops the fundamental properties of arithmetic operations and relations. For example, the operations of addition and multiplication obey familiar laws such as associativity, commutativity and distributivity, while the "less than or equal" relation satisfies such laws as reflexivity, antisymmetry and transitivity. These laws provide tools which facilitate computation as well as describe the fundamental nature of numbers, their operations and relations.

The algebra of sets is the set-theoretic analogue of the algebra of numbers. It is the algebra of the set-theoretic operations of union, intersection and complementation, and the relations of equality and inclusion. These are the topics covered in this article. For a basic introduction to sets see the article on sets, for a fuller account see naive set theory, and for a full rigorous axiomatic treatment see axiomatic set theory.

The fundamental laws of set algebra
The binary operations of set union and intersection satisfy many identities. Several of these identities or "laws" have well-established names. Three pairs of laws are stated, without proof, in the following proposition.

PROPOSITION 1: For any sets A, B, and C, the following identities hold:

commutative laws:
• A ∪ B = B ∪ A
• A ∩ B = B ∩ A

associative laws:
• (A ∪ B) ∪ C = A ∪ (B ∪ C)
• (A ∩ B) ∩ C = A ∩ (B ∩ C)

distributive laws:
• A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
• A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

Notice that the analogy between unions and intersections of sets, and addition and multiplication of numbers, is quite striking. Like addition and multiplication, the operations of union and intersection are commutative and associative, and intersection distributes over union. However, unlike addition and multiplication, union also distributes over intersection.

The next proposition states two additional pairs of laws involving three special sets: the empty set, the universal set and the complement of a set.
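These six identities can be checked mechanically on concrete sets; a minimal sketch using Python's built-in set operations (the particular sets A, B, C are arbitrary choices):

```python
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

# commutative laws
assert A | B == B | A
assert A & B == B & A
# associative laws
assert (A | B) | C == A | (B | C)
assert (A & B) & C == A & (B & C)
# distributive laws: union and intersection each distribute over the other
assert A | (B & C) == (A | B) & (A | C)
assert A & (B | C) == (A & B) | (A & C)
print("all six laws hold")
```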

PROPOSITION 2: For any subset A of universal set U, where Ø is the empty set and A′ denotes the complement of A, the following identities hold:

identity laws:
• A ∪ Ø = A
• A ∩ U = A

complement laws:
• A ∪ A′ = U
• A ∩ A′ = Ø

The identity laws (together with the commutative laws) say that, just like 0 and 1 for addition and multiplication, Ø and U are the identity elements for union and intersection, respectively. Unlike addition and multiplication, union and intersection do not have inverse elements. However, the complement laws give the fundamental properties of the somewhat inverse-like unary operation of set complementation.

The preceding five pairs of laws: the commutative, associative, distributive, identity and complement laws, can be said to encompass all of set algebra, in the sense that every valid proposition in the algebra of sets can be derived from them.


The principle of duality
The above propositions display the following interesting pattern. Each of the identities stated above is one of a pair of identities such that each can be transformed into the other by interchanging ∪ and ∩, and also Ø and U. These are examples of an extremely important and powerful property of set algebra, namely, the principle of duality for sets, which asserts that for any true statement about sets, the dual statement obtained by interchanging unions and intersections, interchanging U and Ø and reversing inclusions is also true. A statement is said to be self-dual if it is equal to its own dual.

Some additional laws for unions and intersections
The following proposition states six more important laws of set algebra, involving unions and intersections.

PROPOSITION 3: For any subsets A and B of a universal set U, the following identities hold:

idempotent laws:
• A ∪ A = A
• A ∩ A = A

domination laws:
• A ∪ U = U
• A ∩ Ø = Ø

absorption laws:
• A ∪ (A ∩ B) = A
• A ∩ (A ∪ B) = A

As noted above, each of the laws stated in proposition 3 can be derived from the five fundamental pairs of laws stated in proposition 1 and proposition 2. As an illustration, a proof is given below for the idempotent law for union. Proof:

A ∪ A = (A ∪ A) ∩ U          by the identity law for intersection
      = (A ∪ A) ∩ (A ∪ A′)   by the complement law for union
      = A ∪ (A ∩ A′)         by the distributive law of union over intersection
      = A ∪ Ø                by the complement law for intersection
      = A                    by the identity law for union

The following proof illustrates that the dual of the above proof is the proof of the dual of the idempotent law for union, namely the idempotent law for intersection. Proof:
A ∩ A = (A ∩ A) ∪ Ø          by the identity law for union
      = (A ∩ A) ∪ (A ∩ A′)   by the complement law for intersection
      = A ∩ (A ∪ A′)         by the distributive law of intersection over union
      = A ∩ U                by the complement law for union
      = A                    by the identity law for intersection

Intersection can be expressed in terms of union and set difference:

A ∩ B = A \ (A \ B)
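The identity A ∩ B = A \ (A \ B) is easy to verify on a concrete pair of sets:

```python
A, B = {1, 2, 3, 4}, {3, 4, 5}

# Two applications of set difference recover the intersection.
assert A & B == A - (A - B)
print(sorted(A - (A - B)))  # [3, 4]
```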

Some additional laws for complements
The following proposition states five more important laws of set algebra, involving complements.

PROPOSITION 4: Let A and B be subsets of a universe U, then:

De Morgan's laws:
• (A ∪ B)′ = A′ ∩ B′
• (A ∩ B)′ = A′ ∪ B′

double complement or involution law:
• (A′)′ = A

complement laws for the universal set and the empty set:
• Ø′ = U
• U′ = Ø

Notice that the double complement law is self-dual.

The next proposition, which is also self-dual, says that the complement of a set is the only set that satisfies the complement laws. In other words, complementation is characterized by the complement laws.

PROPOSITION 5: Let A and B be subsets of a universe U, then:

uniqueness of complements:
• If A ∪ B = U and A ∩ B = Ø, then B = A′.
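De Morgan's laws and the involution law can be checked relative to a concrete universe, with the complement computed as U \ A:

```python
U = set(range(10))
A, B = {1, 2, 3}, {3, 4, 5}

def complement(X):
    # Complement relative to the universe U.
    return U - X

assert complement(A | B) == complement(A) & complement(B)  # De Morgan
assert complement(A & B) == complement(A) | complement(B)  # De Morgan
assert complement(complement(A)) == A                      # involution
assert complement(set()) == U and complement(U) == set()   # complements of Ø and U
print("De Morgan, involution, and the boundary cases hold")
```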



The algebra of inclusion
The following proposition says that inclusion is a partial order.

PROPOSITION 6: If A, B and C are sets, then the following hold:

reflexivity:
• A ⊆ A

antisymmetry:
• A ⊆ B and B ⊆ A if and only if A = B

transitivity:
• If A ⊆ B and B ⊆ C, then A ⊆ C

The following proposition says that, for any set S, the power set of S, ordered by inclusion, is a bounded lattice, and hence, together with the distributive and complement laws above, this shows that it is a Boolean algebra.

PROPOSITION 7: If A, B and C are subsets of a set S, then the following hold:

existence of a least element and a greatest element:
• Ø ⊆ A ⊆ S

existence of joins:
• A ⊆ A ∪ B
• If A ⊆ C and B ⊆ C, then A ∪ B ⊆ C

existence of meets:
• A ∩ B ⊆ A
• If C ⊆ A and C ⊆ B, then C ⊆ A ∩ B

The following proposition says that the statement A ⊆ B is equivalent to various other statements involving unions, intersections and complements.

PROPOSITION 8: For any two sets A and B, the following are equivalent:
• A ⊆ B
• A ∩ B = A
• A ∪ B = B
• A \ B = Ø
• B′ ⊆ A′

The above proposition shows that the relation of set inclusion can be characterized by either of the operations of set union or set intersection, which means that the notion of set inclusion is axiomatically superfluous.
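Proposition 8 can be spot-checked exhaustively over a small universe; the sketch below confirms that the five conditions agree on every pair of subsets:

```python
from itertools import chain, combinations

U = {0, 1, 2, 3}

def all_subsets(s):
    items = sorted(s)
    return [set(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))]

for A in all_subsets(U):
    for B in all_subsets(U):
        conditions = [
            A <= B,              # A is a subset of B
            A & B == A,          # characterization via intersection
            A | B == B,          # characterization via union
            A - B == set(),      # characterization via set difference
            (U - B) <= (U - A),  # complementation reverses inclusion
        ]
        # All five conditions hold together or fail together.
        assert all(conditions) or not any(conditions)
print("the five conditions are equivalent on all 256 pairs")
```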

The algebra of relative complements
The following proposition lists several identities concerning relative complements or set-theoretic difference.

PROPOSITION 9: For any universe U and subsets A, B, and C of U, the following identities hold:
• C \ (A ∩ B) = (C \ A) ∪ (C \ B)
• C \ (A ∪ B) = (C \ A) ∩ (C \ B)
• C \ (B \ A) = (A ∩ C) ∪ (C \ B)
• (B \ A) ∩ C = (B ∩ C) \ A = B ∩ (C \ A)
• (B \ A) ∪ C = (B ∪ C) \ (A \ C)
• A \ A = Ø
• Ø \ A = Ø
• A \ Ø = A
• B \ A = A′ ∩ B
• (B \ A)′ = A ∪ B′
• U \ A = A′
• A \ U = Ø


References
• Stoll, Robert R.; Set Theory and Logic, Mineola, N.Y.: Dover Publications (1979) ISBN 0486638294. "The Algebra of Sets", pp. 16–23 [1]
• Courant, Richard; Robbins, Herbert; Stewart, Ian; What is Mathematics?: An Elementary Approach to Ideas and Methods, Oxford University Press US, 1996. ISBN 9780195105193. "Supplement to Chapter II: The Algebra of Sets" [2]

External links
• Operations on Sets at ProvenMath [3]

[1] http://books.google.com/books?id=3-nrPB7BQKMC&pg=PA16#v=onepage&q&f=false
[2] http://books.google.com/books?id=UfdossHPlkgC&pg=PA17-IA8&dq=%22algebra+of+sets%22&hl=en&ei=k8-RTdXoF4K2tgfM-p1v&sa=X&oi=book_result&ct=result&resnum=3&ved=0CDYQ6AEwAg#v=onepage&q=%22algebra%20of%20sets%22&f=false
[3] http://www.apronus.com/provenmath/btheorems.htm

Algebraic structure
In abstract algebra, an algebraic structure consists of one or more sets, called underlying sets or carriers or sorts, closed under one or more operations, satisfying some axioms. Abstract algebra is primarily the study of algebraic structures and their properties. The notion of algebraic structure has been formalized in universal algebra. In a slight abuse of notation, the word "structure" can also refer only to the operations on a structure, and not to the underlying set itself. For example, the group (G, •) can be seen as a set G that is equipped with an "algebraic structure", namely the operation •.

Structures whose axioms are all identities
Universal algebra often considers classes of algebraic structures (such as the class of all groups), together with operations (such as products) and relations (such as "substructure") between these algebras. These classes are usually defined by "axioms", that is, a list of properties that all these structures have to share. If all axioms defining a class of algebras are "identities", then the corresponding class is called a variety (not to be confused with algebraic variety in the sense of algebraic geometry). Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra.

An algebraic structure in a variety may be understood as the quotient algebra of a term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generates a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators: the multiplication operator m, taking two

arguments, and the inverse operator i, taking one argument, and the identity element e, a constant, which may be considered to be an operator taking zero arguments. Given a (countable) set of variables x, y, z, etc., the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x, m(y, e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x, e) = x. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.

All structures in this section are elements of naturally defined varieties. Some of these structures are most naturally axiomatized using one or more nonidentities, but are nevertheless varieties because there exists an equivalent axiomatization, one perhaps less perspicuous, composed solely of identities. Algebraic structures that are not varieties are described in the following section, and differ from varieties in their metamathematical properties.

In this section and the following one, structures are listed in approximate order of increasing complexity, operationalized as follows:
• Simple structures requiring but one set, the universe S, are listed before composite ones requiring two sets;
• Structures having the same number of required sets are then ordered by the number of binary operations (0 to 4) they require. Incidentally, no structure mentioned in this entry requires an operation whose arity exceeds 2;
• Let A and B be the two sets that make up a composite structure. Then a composite structure may include 1 or 2 functions of the form A × A → B or A × B → A;
• Structures having the same number and kinds of binary operations and functions are more or less ordered by the number of required unary and 0-ary (distinguished elements) operations, 0 to 2 in both cases.
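The term-algebra machinery above can be sketched concretely: represent terms over the group signature {m, i, e} as nested tuples and evaluate them in a particular group, here Z/5Z under addition (the choice of model and the tuple encoding are illustrative assumptions, not part of the text):

```python
# A term is 'e', a variable name, ('i', term), or ('m', term, term).
def evaluate(term, env, n=5):
    # Interpret the signature in the group Z/nZ under addition:
    # m becomes addition mod n, i becomes negation mod n, e becomes 0.
    if term == 'e':
        return 0
    if isinstance(term, str):
        return env[term] % n
    if term[0] == 'i':
        return (-evaluate(term[1], env, n)) % n
    if term[0] == 'm':
        return (evaluate(term[1], env, n) + evaluate(term[2], env, n)) % n
    raise ValueError(f"unknown operator {term[0]!r}")

# The example term from the text: m(i(x), m(x, m(y, e)))
t = ('m', ('i', 'x'), ('m', 'x', ('m', 'y', 'e')))
print(evaluate(t, {'x': 3, 'y': 4}))  # (-3 + (3 + (4 + 0))) mod 5 = 4

# The group identities m(x, i(x)) = e and m(x, e) = x hold in this model:
for v in range(5):
    assert evaluate(('m', 'x', ('i', 'x')), {'x': v}) == 0
    assert evaluate(('m', 'x', 'e'), {'x': v}) == v
```

Quotienting the term algebra by the congruence these identities generate is what produces the group structure described in the text; the sketch only checks that Z/5Z is a model of the identities.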
The indentation structure employed in this section and the one following is intended to convey information. If structure B is under structure A and more indented, then all theorems of A are theorems of B; the converse does not hold.

Ringoids and lattices can be clearly distinguished despite both having two defining binary operations. In the case of ringoids, the two operations are linked by the distributive law; in the case of lattices, they are linked by the absorption law. Ringoids also tend to have numerical models, while lattices tend to have set-theoretic models.

Simple structures: No binary operation:
• Set: a degenerate algebraic structure having no operations.
• Pointed set: S has one or more distinguished elements, often 0, 1, or both.
• Unary system: S and a single unary operation over S.
• Pointed unary system: a unary system with S a pointed set.


Group-like structures: One binary operation, denoted by concatenation. For monoids, boundary algebras, and sloops, S is a pointed set.
• Magma or groupoid: S and a single binary operation over S.
• Steiner magma: a commutative magma satisfying x(xy) = y.
• Squag: an idempotent Steiner magma.
• Sloop: a Steiner magma with distinguished element 1, such that xx = 1.
• Semigroup: an associative magma.
• Monoid: a unital semigroup.
• Group: a monoid with a unary operation, inverse, giving rise to an inverse element.
• Abelian group: a commutative group.
• Band: a semigroup of idempotents.
• Semilattice: a commutative band. The binary operation can be called either meet or join.
• Boundary algebra: a unital semilattice (equivalently, an idempotent commutative monoid) with a unary operation, complementation, denoted by enclosing its argument in parentheses, giving rise to an inverse

element that is the complement of the identity element. The identity and inverse elements bound S. Also, x(xy) = x(y) holds.

Three binary operations. Quasigroups are listed here, despite their having 3 binary operations, because they are (nonassociative) magmas. Quasigroups feature 3 binary operations only because establishing the quasigroup cancellation property by means of identities alone requires two binary operations in addition to the group operation.
• Quasigroup: a cancellative magma. Equivalently, ∀x,y∈S, ∃!a,b∈S, such that xa = y and bx = y.
• Loop: a unital quasigroup with a unary operation, inverse.
• Moufang loop: a loop in which a weakened form of associativity, (zx)(yz) = z(xy)z, holds.
• Group: an associative loop.

Lattice: Two or more binary operations, including meet and join, connected by the absorption law. S is both a meet and join semilattice, and is a pointed set if and only if S is bounded. Lattices often have no unary operations. Every true statement has a dual, obtained by replacing every instance of meet with join, and vice versa.
• Bounded lattice: S has two distinguished elements, the greatest lower bound and the least upper bound. Dualizing requires replacing every instance of one bound by the other, and vice versa.
• Complemented lattice: a lattice with a unary operation, complementation, denoted by postfix " ' ", giving rise to an inverse element. That element and its complement bound the lattice.
• Modular lattice: a lattice in which the modular identity holds.
• Distributive lattice: a lattice in which each of meet and join distributes over the other. Distributive lattices are modular, but the converse does not hold.
• Kleene algebra: a bounded distributive lattice with a unary operation whose identities are x"=x, (x+y)'=x'y', and (x+x')yy'=yy'. See "ring-like structures" for another structure having the same name.
• Boolean algebra: a complemented distributive lattice.
Either of meet or join can be defined in terms of the other and complementation.
• Interior algebra: a Boolean algebra with an added unary operation, the interior operator, denoted by postfix " ' " and obeying the identities x'x=x, x"=x, (xy)'=x'y', and 1'=1.
• Heyting algebra: a bounded distributive lattice with an added binary operation, relative pseudo-complement, denoted by infix " ' ", and governed by the axioms x'x=1, x(x'y) = xy, x'(yz) = (x'y)(x'z), (xy)'z = (x'z)(y'z).

Ringoids: Two binary operations, addition and multiplication, with multiplication distributing over addition. Semirings are pointed sets.
• Semiring: a ringoid such that S is a monoid under each operation. Each operation has a distinct identity element. Addition also commutes, and has an identity element that annihilates multiplication.
• Commutative semiring: a semiring with commutative multiplication.
• Ring: a semiring with a unary operation, additive inverse, giving rise to an inverse element −x, which when added to x, yields the additive identity element. Hence S is an abelian group under addition.
• Rng: a ring lacking a multiplicative identity.
• Commutative ring: a ring with commutative multiplication.
• Boolean ring: a commutative ring with idempotent multiplication, equivalent to a Boolean algebra.
• Kleene algebra: a semiring with idempotent addition and a unary operation, the Kleene star, denoted by postfix * and obeying the identities (1+x*x)x*=x* and (1+xx*)x*=x*. See "Lattice-like structures" for another structure having the same name.

N.B. The above definition of ring does not command universal assent. Some authorities employ "ring" to denote what is here called a rng, and refer to a ring in the above sense as a "ring with identity."

Modules: Composite Systems Defined over Two Sets, M and R: The members of:


1. R are scalars, denoted by Greek letters. R is a ring under the binary operations of scalar addition and multiplication;
2. M are module elements (often but not necessarily vectors), denoted by Latin letters. M is an abelian group under addition. There may be other binary operations.

The scalar multiplication of scalars and module elements is a function R×M→M which commutes, associates (∀r,s∈R, ∀x∈M, r(sx) = (rs)x), has 1 as identity element, and distributes over module and scalar addition. If only the pre(post)multiplication of module elements by scalars is defined, the result is a left (right) module.
• Free module: a module having a free basis, {e1, ..., en}⊂M, where the positive integer n is the dimension of the free module. For every v∈M, there exist κ1, ..., κn∈R such that v = κ1e1 + ... + κnen. Let 0 and 0 be the respective identity elements for module and scalar addition. If r1e1 + ... + rnen = 0, then r1 = ... = rn = 0. (Note that the class of free modules over a given ring R is in general not a variety.)
• Algebra over a ring (also R-algebra): a (free) module where R is a commutative ring. There is a second binary operation over M, called multiplication and denoted by concatenation, which distributes over module addition and is bilinear: α(xy) = (αx)y = x(αy).
• Jordan ring: an algebra over a ring whose module multiplication commutes, does not associate, and respects the Jordan identity.

Vector spaces, closely related to modules, are defined in the next section.


Structures with some axioms that are not identities
The structures in this section are not axiomatized with identities alone, so the classes considered below are not varieties. Nearly all of the nonidentities below are one of two very elementary kinds:
1. The starting point for all structures in this section is a "nontrivial" ring, namely one such that S≠{0}, 0 being the additive identity element. The nearest thing to an identity implying S≠{0} is the nonidentity 0≠1, which requires that the additive and multiplicative identities be distinct.
2. Nearly all structures described in this section include identities that hold for all members of S except 0. In order for an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.

Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and vector spaces. Moreover, much of theoretical physics can be recast as models of multilinear algebras. Although structures with nonidentities retain an undoubted algebraic flavor, they suffer from defects varieties do not have. For example, neither the product of integral domains nor a free field over any set exist.

Arithmetics: Two binary operations, addition and multiplication. S is an infinite set. Arithmetics are pointed unary systems, whose unary operation is injective successor, and with distinguished element 0.
• Robinson arithmetic. Addition and multiplication are recursively defined by means of successor. 0 is the identity element for addition, and annihilates multiplication. Robinson arithmetic is listed here even though it is a variety, because of its closeness to Peano arithmetic.
• Peano arithmetic. Robinson arithmetic with an axiom schema of induction. Most ring and field axioms bearing on the properties of addition and multiplication are theorems of Peano arithmetic or of proper extensions thereof.

Field-like structures: Two binary operations, addition and multiplication.
S is nontrivial, i.e., S≠{0}.
• Domain: a ring whose sole zero divisor is 0.
• Integral domain: a domain whose multiplication commutes. Also a commutative cancellative ring.
• Euclidean domain: an integral domain with a function f: S→N satisfying the division with remainder property.

• Division ring (or sfield, skew field): a ring in which every member of S other than 0 has a two-sided multiplicative inverse. The nonzero members of S form a group under multiplication.
• Field: a division ring whose multiplication commutes. The nonzero members of S form an abelian group under multiplication.
• Ordered field: a field whose elements are totally ordered.
• Real field: a Dedekind complete ordered field.

The following structures are not varieties for reasons in addition to S≠{0}:
• Simple ring: a ring having no ideals other than 0 and S.
• Weyl algebra:
• Artinian ring: a ring whose ideals satisfy the descending chain condition.

Composite Systems: Vector Spaces, and Algebras over Fields. Two Sets, M and R, and at least three binary operations. The members of:
1. M are vectors, denoted by lower case letters. M is at minimum an abelian group under vector addition, with distinguished member 0.
2. R are scalars, denoted by Greek letters. R is a field, nearly always the real or complex field, with 0 and 1 as distinguished members.

Three binary operations.
• Vector space: a free module of dimension n except that R is a field.
• Normed vector space: a vector space with a norm, namely a function M → R that is positive homogeneous, subadditive, and positive definite.
• Inner product space (also Euclidean vector space): a normed vector space such that R is the real field, whose norm is the square root of the inner product, M×M→R. Let i, j, and n be positive integers such that 1≤i,j≤n. Then M has an orthonormal basis such that ei•ej = 1 if i=j and 0 otherwise; see free module above.
• Unitary space: differs from inner product spaces in that R is the complex field, and the inner product has a different name, the hermitian inner product, with different properties: conjugate symmetric, sesquilinear, and positive definite. See Birkhoff and Mac Lane (1979: 369).
• Graded vector space: a vector space such that the members of M have a direct sum decomposition. See graded algebra below.

Four binary operations.
• Algebra over a field: an algebra over a ring except that R is a field instead of a commutative ring.
• Jordan algebra: a Jordan ring except that R is a field.
• Lie algebra: an algebra over a field respecting the Jacobi identity, whose vector multiplication, the Lie bracket denoted [u,v], anticommutes, does not associate, and is nilpotent.
• Associative algebra: an algebra over a field, or a module, whose vector multiplication associates.
• Linear algebra: an associative unital algebra with the members of M being matrices. Every matrix has a dimension n×m, n and m positive integers. If one of n or m is 1, the matrix is a vector; if both are 1, it is a scalar. Addition of matrices is defined only if they have the same dimensions. Matrix multiplication, denoted by concatenation, is the vector multiplication. Let matrix A be n×m and matrix B be i×j. Then AB is defined if and only if m=i; BA, if and only if j=n. There also exists an m×m matrix I and an n×n matrix J such that AI=JA=A. If u and v are vectors having the same dimensions, they have an inner product, denoted 〈u,v〉. Hence there is an orthonormal basis; see inner product space above. There is a unary function, the determinant, from square (n×n for any n) matrices to R.
• Commutative algebra: an associative algebra whose vector multiplication commutes.


• Symmetric algebra: a commutative algebra with unital vector multiplication.

Composite Systems: Multilinear algebras. Two sets, V and K. Four binary operations:
1. The members of V are multivectors (including vectors), denoted by lower case Latin letters. V is an abelian group under multivector addition, and a monoid under outer product. The outer product goes under various names, and is multilinear in principle but usually bilinear. The outer product defines the multivectors recursively starting from the vectors. Thus the members of V have a "degree" (see graded algebra below). Multivectors may have an inner product as well, denoted u•v: V×V→K, that is symmetric, linear, and positive definite; see inner product space above.
2. The properties and notation of K are the same as those of R above, except that K may have −1 as a distinguished member. K is usually the real field, as multilinear algebras are designed to describe physical phenomena without complex numbers.
3. The multiplication of scalars and multivectors, V×K→V, has the same properties as the multiplication of scalars and module elements that is part of a module.

• Graded algebra: an associative algebra with unital outer product. The members of V have a direct sum decomposition resulting in their having a "degree," with vectors having degree 1. If u and v have degree i and j, respectively, the outer product of u and v is of degree i+j. V also has a distinguished member 0 for each possible degree. Hence all members of V having the same degree form an abelian group under addition.
• Exterior algebra (also Grassmann algebra): a graded algebra whose anticommutative outer product, denoted by infix ∧, is called the exterior product. V has an orthonormal basis. v1 ∧ v2 ∧ ... ∧ vk = 0 if and only if v1, ..., vk are linearly dependent. Multivectors also have an inner product.
• Clifford algebra: an exterior algebra with a symmetric bilinear form Q: V×V→K. The special case Q=0 yields an exterior algebra. The exterior product is written 〈u,v〉. Usually, 〈ei, ei〉 = −1 or 1.
• Geometric algebra: an exterior algebra whose outer (called geometric) product is denoted by concatenation. The geometric product of parallel multivectors commutes, that of orthogonal vectors anticommutes. The product of a scalar with a multivector commutes. vv yields a scalar.
• Grassmann-Cayley algebra: a geometric algebra without an inner product.


Some recurring universes: N = natural numbers; Z = integers; Q = rational numbers; R = real numbers; C = complex numbers.

N is a pointed unary system, and under addition and multiplication, is both the standard interpretation of Peano arithmetic and a commutative semiring.

Boolean algebras are at once semigroups, lattices, and rings. They would even be abelian groups if the identity and inverse elements were identical instead of complements.

Group-like structures
• Nonzero N under addition (+) is a magma, and even a free semigroup.
• N under addition is a magma with an identity, and in particular a free monoid.
• Z under subtraction (−) is a quasigroup.
• Nonzero Q under division (÷) is a quasigroup.
• Every group is a loop, because a * x = b if and only if x = a⁻¹ * b, and y * a = b if and only if y = b * a⁻¹.
• 2×2 matrices of non-zero determinant, with matrix multiplication, form a group.
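Several of the claims above can be confirmed by brute-force axiom checking on finite carriers; a sketch (the helper names are ad hoc, and Z/4Z stands in for Z, which is infinite):

```python
def is_associative(S, op):
    return all(op(op(a, b), c) == op(a, op(b, c)) for a in S for b in S for c in S)

def identity_of(S, op):
    # Return a two-sided identity element if one exists, else None.
    for e in S:
        if all(op(e, a) == a == op(a, e) for a in S):
            return e
    return None

def is_quasigroup(S, op):
    # Each row and column of the operation table is a permutation of S.
    S = list(S)
    return (all(sorted(op(a, x) for x in S) == sorted(S) for a in S) and
            all(sorted(op(x, a) for x in S) == sorted(S) for a in S))

S = list(range(4))
add = lambda a, b: (a + b) % 4   # Z/4Z under addition: an abelian group
sub = lambda a, b: (a - b) % 4   # Z/4Z under subtraction: a quasigroup only

print(is_associative(S, add), identity_of(S, add))  # True 0
print(is_quasigroup(S, sub), is_associative(S, sub), identity_of(S, sub))
# subtraction is a quasigroup but not associative and has no two-sided identity
```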

• Z under addition (+) is an abelian group. • Nonzero Q under multiplication (×) is an abelian group.

• Every cyclic group G is abelian, because if x, y are in G, then xy = aman = am+n = an+m = anam = yx. In particular, Z is an abelian group under addition, as are the integers modulo n, Z/nZ.
• A monoid is a category with a single object, in which case the composition of morphisms and the identity morphism interpret monoid multiplication and identity element, respectively.
• The Boolean algebra 2 is a boundary algebra.
• More examples of groups and list of small groups.

Lattices
• The normal subgroups of a group, and the submodules of a module, are modular lattices.
• Any field of sets, and the connectives of first-order logic, are models of Boolean algebra.
• The connectives of intuitionistic logic form a model of Heyting algebra.
• The modal logic S4 is a model of interior algebra.
• Peano arithmetic and most axiomatic set theories, including ZFC, NBG, and New Foundations, can be recast as models of relation algebra.


Ring-like structures
• The set R[X] of all polynomials over some coefficient ring R is a ring.
• 2×2 matrices with matrix addition and multiplication form a ring.
• If n is a positive integer, then the set Zn = Z/nZ of integers modulo n (the additive cyclic group of order n) forms a ring having n elements (see modular arithmetic).
• Sets of hypercomplex numbers were early prototypes of algebraic structures now called rings.

Integral domains
• Z under addition and multiplication is an integral domain.
• The p-adic integers.

Fields
• Each of Q, R, and C, under addition and multiplication, is a field.
• R totally ordered by "<" in the usual way is an ordered field and is categorical. The resulting real field grounds real and functional analysis.
• R contains several interesting subfields: the algebraic, the computable, and the definable numbers.
• An algebraic number field is a finite field extension of Q, that is, a field containing Q which has finite dimension as a vector space over Q. Algebraic number fields are very important in number theory.
• If q > 1 is a power of a prime number, then there exists (up to isomorphism) exactly one finite field with q elements, usually denoted Fq or, in the case that q is itself prime, Z/qZ. Such fields are called Galois fields, whence the alternative notation GF(q). All finite fields are isomorphic to some Galois field.
• Given some prime number p, the set Zp = Z/pZ of integers modulo p is the finite field with p elements: Fp = {0, 1, ..., p − 1}, where the operations are defined by performing the operation in Z, dividing by p and taking the remainder; see modular arithmetic.
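The Z/nZ example above can be made concrete. A minimal sketch (n = 6, chosen arbitrarily) that spot-checks distributivity and exhibits the zero divisors that keep Z/6Z from being an integral domain:

```python
# Sketch of the ring Z/nZ described above: operate in Z, then reduce
# modulo n. Here n = 6, chosen arbitrarily.
n = 6
elements = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Spot-check distributivity, inherited from Z:
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in elements for b in elements for c in elements)

# Z/6Z is a ring but not an integral domain: 2 and 3 are zero divisors.
assert mul(2, 3) == 0
```

With n prime, the same construction yields the field Fp described above, since every nonzero element then has a multiplicative inverse.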



Allowing additional structure
Algebraic structures can also be defined on sets with added structure of a non-algebraic nature, such as a topology. The added structure must be compatible, in some sense, with the algebraic structure.
• Ordered group: a group with a compatible partial order; i.e., S is partially ordered.
• Linearly ordered group: a group whose S is a linear order.
• Archimedean group: a linearly ordered group for which the Archimedean property holds.
• Lie group: a group whose S has a compatible smooth manifold structure.
• Topological group: a group whose S has a compatible topology.
• Topological vector space: a vector space whose M has a compatible topology; a superset of normed vector spaces.
• Banach spaces, Hilbert spaces, inner product spaces
• Vertex operator algebras

Category theory
The discussion above has been cast in terms of elementary abstract and universal algebra. Category theory is another way of reasoning about algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure.

There are various concepts in category theory that try to capture the algebraic character of a context, for instance:
• algebraic
• essentially algebraic
• presentable
• locally presentable
• monadic functors and categories
• universal property

References
• Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (2nd ed.), AMS Chelsea, ISBN 978-0-8218-1646-2
• Michel, Anthony N.; Herget, Charles J. (1993), Applied Algebra and Functional Analysis, New York: Dover Publications, ISBN 978-0-486-67598-5

A monograph available online:
• Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra [1], Berlin, New York: Springer-Verlag, ISBN 978-3-540-90578-3

Category theory:
• Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2
• Taylor, Paul (1999), Practical Foundations of Mathematics, Cambridge University Press, ISBN 978-0-521-63107-5



External links
• Jipsen's algebra structures. [2] Includes many structures not mentioned here.
• Mathworld [3] page on abstract algebra.
• Stanford Encyclopedia of Philosophy: Algebra [4] by Vaughan Pratt.

[1] http://www.thoralf.uwaterloo.ca/htdocs/ualg.html
[2] http://math.chapman.edu/cgi-bin/structures
[3] http://mathworld.wolfram.com/topics/Algebra.html
[4] http://plato.stanford.edu/entries/algebra/


Notation & Symbols
Decimal
This article aims to be an accessible introduction. For the mathematical definition, see Decimal representation.

The decimal numeral system (also called base ten or occasionally denary) has ten as its base. It is the numerical base most widely used by modern civilizations.[1] [2] Decimal notation often refers to a base-10 positional notation such as the Hindu-Arabic numeral system; however, it can also be used more generally to refer to non-positional systems such as Roman or Chinese numerals which are also based on powers of ten. Decimals also refer to decimal fractions, either separately or in contrast to vulgar fractions. In this context, a decimal is a tenth part, and decimals become a series of nested tenths. There was a notation in use like 'tenth-metre', meaning the tenth decimal of the metre, currently an Angstrom. The contrast here is between decimals and vulgar fractions,

and decimal divisions and other divisions of measures, like the inch. It is possible to follow a decimal expansion with a vulgar fraction; this is done with the recent divisions of the troy ounce, which has three places of decimals, followed by a trinary place.


Decimal notation
Decimal notation is the writing of numbers in a base-10 numeral system. Examples are Roman numerals, Brahmi numerals, and Chinese numerals, as well as the Hindu-Arabic numerals used by speakers of many European languages. Roman numerals have symbols for the decimal powers (1, 10, 100, 1000) and secondary symbols for half these values (5, 50, 500). Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100 and another for 1000. Chinese numerals have symbols for 1–9, and additional symbols for powers of 10, which in modern usage reach 10^44.

However, when people who use Hindu-Arabic numerals speak of decimal notation, they often mean not just decimal numeration, as above, but also decimal fractions, all conveyed as part of a positional system. Positional decimal systems include a zero and use symbols (called digits) for the ten values (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) to represent any number, no matter how large or how small. These digits are often used with a decimal separator which indicates the start of a fractional part, and with a symbol such as the plus sign + (for positive) or minus sign − (for negative) adjacent to the numeral to indicate whether it is greater or less than zero, respectively.

Positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc. The position of each digit within a number denotes the multiplier (power of ten) multiplied with that digit; each position has a value ten times that of the position to its right. There were at least two presumably independent sources of positional decimal systems in ancient civilization: the Chinese counting rod system and the Hindu-Arabic numeral system (the latter descended from Brahmi numerals).

Ten is the number which is the count of fingers and thumbs on both hands (or toes on the feet). The English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes.
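The positional rule described above, each digit multiplied by ten to the power of its position counted from the right, can be sketched in a few lines of Python (the function name is mine):

```python
# Sketch of positional decimal value: each digit is multiplied by ten
# to the power of its position, counted from the right.
# (Function name is mine.)
def decimal_value(digits):
    return sum(int(d) * 10 ** i for i, d in enumerate(reversed(digits)))

assert decimal_value("4083") == 4 * 1000 + 0 * 100 + 8 * 10 + 3 == 4083
```

The same sum with a base other than 10 would read off binary, octal, or any other positional system, which is why positional notation generalizes so readily.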
In English, decimal (decimus < Lat.) means tenth, decimate means reduce by a tenth, and denary (denarius < Lat.) means the unit of ten. The symbols for the digits in common use around the globe today are called Arabic numerals by Europeans and Indian numerals by Arabs, the two groups' terms both referring to the culture from which they learned the system. However, the symbols used in different areas are not identical; for instance, Western Arabic numerals (from which the European numerals are derived) differ from the forms used by other Arab cultures.

Decimal fractions
A decimal fraction is a fraction whose denominator is a power of ten. Decimal fractions are commonly expressed without a denominator, the decimal separator being inserted into the numerator (with leading zeros added if needed) at the position from the right corresponding to the power of ten of the denominator; e.g., 8/10, 83/100, 83/1000, and 8/10000 are expressed as 0.8, 0.83, 0.083, and 0.0008. In English-speaking and many Asian countries, a period (.) or raised period (·) is used as the decimal separator; in many other countries, a comma is used. The integer part or integral part of a decimal number is the part to the left of the decimal separator (see also truncation). The part from the decimal separator to the right is the fractional part; if considered as a separate number, a zero is often written in front. Especially for negative numbers, we have to distinguish between the fractional part of the notation and the fractional part of the number itself, because the latter gets its own decimal sign. It is usual for a decimal number whose absolute value is less than one to have a leading zero. Trailing zeros after the decimal point are not necessary, although in science, engineering and statistics they can be retained to indicate a required precision or to show a level of confidence in the accuracy of the number: Although 0.080 and 0.08 are numerically equal, in engineering 0.080 suggests a measurement with an error of up to one part in

two thousand (±0.0005), while 0.08 suggests a measurement with an error of up to one in two hundred (see significant figures).
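The distinction between 0.080 and 0.08 can be made visible in Python, whose decimal module preserves trailing zeros:

```python
from decimal import Decimal

# Python's decimal module keeps trailing zeros, so 0.080 and 0.08 are
# numerically equal yet record different precisions.
a, b = Decimal("0.080"), Decimal("0.08")
assert a == b                       # equal in value
assert str(a) == "0.080"            # three places retained
assert a.as_tuple().exponent == -3  # precision recorded to 0.001
assert b.as_tuple().exponent == -2  # precision recorded to 0.01
```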


Other rational numbers
Any rational number with a denominator whose only prime factors are 2 and/or 5 may be precisely expressed as a decimal fraction and has a finite decimal expansion.[3]

1/2 = 0.5      1/8 = 0.125
1/4 = 0.25     1/10 = 0.1
1/5 = 0.2      1/20 = 0.05
1/25 = 0.04    1/40 = 0.025
1/50 = 0.02    1/125 = 0.008

If the rational number's denominator has any prime factors other than 2 or 5, it cannot be expressed as a finite decimal fraction,[3] and has a unique infinite decimal expansion ending with recurring decimals.

1/3 = 0.333333… (with 3 repeating)
1/9 = 0.111111… (with 1 repeating)
100 − 1 = 99 = 9 × 11:
1/11 = 0.090909… (with 09 repeating)
1000 − 1 = 999 = 9 × 111 = 27 × 37:
1/27 = 0.037037037…
1/37 = 0.027027027…
1/111 = 0.009009009…
also: 1/81 = 0.012345679012… (with 012345679 repeating)

Other prime factors in the denominator will give longer recurring sequences; see for instance 1/7 and 1/13.

That a rational number must have a finite or recurring decimal expansion can be seen to be a consequence of the long division algorithm: there are only q − 1 possible nonzero remainders on division by q, so the recurring pattern will have a period less than q. For instance, to find 3/7 by long division:

  0.428571...
7)3.000000
  2 8        30/7 = 4 r 2
    20
    14       20/7 = 2 r 6
     60
     56      60/7 = 8 r 4
      40
      35     40/7 = 5 r 5
       50
       49    50/7 = 7 r 1
        10
         7   10/7 = 1 r 3
         30  etc. (30/7 = 4 r 2, and the pattern repeats)

The converse to this observation is that every recurring decimal represents a rational number p/q. This is a consequence of the fact that the recurring part of a decimal representation is, in fact, an infinite geometric series which will sum to a rational number. For instance, 0.444… = 4/10 + 4/100 + 4/1000 + ⋯ = (4/10)/(1 − 1/10) = 4/9.
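The remainder-tracking argument above translates directly into code. A sketch (the helper name is mine) that performs the long division and brackets the recurring part in parentheses:

```python
# Sketch of the long-division argument: track remainders; when a
# remainder repeats, the digits since its first appearance recur.
# (Helper name is mine; recurring part shown in parentheses.)
def decimal_expansion(p, q):
    digits, seen, r = [], {}, p % q
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // q))
        r %= q
    s = "".join(digits)
    if r:                               # a remainder repeated
        i = seen[r]
        return s[:i] + "(" + s[i:] + ")"
    return s                            # expansion terminates

assert decimal_expansion(3, 7) == "(428571)"
assert decimal_expansion(1, 8) == "125"
```

The loop must stop within q − 1 steps for a nonterminating expansion, exactly the period bound stated above.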


Real numbers
Further information: Decimal representation

Every real number has a (possibly infinite) decimal representation; i.e., it can be written as

x = sign(x) · Σi ai·10^i

where
• sign() is the sign function, and
• ai ∈ { 0, 1, …, 9 } for all i ∈ Z are its decimal digits, equal to zero for all i greater than some number (that number being the common logarithm of |x|).

Such a sum converges as i decreases, even if there are infinitely many non-zero ai. Rational numbers (e.g., p/q) with prime factors in the denominator other than 2 and 5 (when reduced to simplest terms) have a unique recurring decimal representation.

Non-uniqueness of decimal representation
Consider those rational numbers which have only the factors 2 and 5 in the denominator, i.e., which can be written as p/(2^a·5^b). In this case there is a terminating decimal representation. For instance, 1/1 = 1, 1/2 = 0.5, 3/5 = 0.6, 3/25 = 0.12 and 1306/1250 = 1.0448. Such numbers are the only real numbers which do not have a unique decimal representation, as they can also be written as a representation that has a recurring 9, for instance 1 = 0.99999…, 1/2 = 0.499999…, etc. The number 0 = 0/1 is special in that it has no representation with recurring 9.

This leaves the irrational numbers. They also have unique infinite decimal representations, and can be characterised as the numbers whose decimal representations neither terminate nor recur. So in general the decimal representation is unique, if one excludes representations that end in a recurring 9.

The same trichotomy holds for other base-n positional numeral systems:
• Terminating representation: rational where the denominator divides some n^k
• Recurring representation: other rational
• Non-terminating, non-recurring representation: irrational

A version of this even holds for irrational-base numeration systems, such as golden mean base representation.
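The terminating case can be tested directly: reduce the fraction, strip factors of 2 and 5 from the denominator, and check whether anything remains. A sketch (the helper name is mine):

```python
from fractions import Fraction

# Sketch: a fraction has a terminating decimal expansion exactly when,
# in lowest terms, its denominator is of the form 2^a * 5^b.
# (Helper name is mine.)
def terminates_in_base10(p, q):
    d = Fraction(p, q).denominator   # reduce to lowest terms
    for factor in (2, 5):
        while d % factor == 0:
            d //= factor
    return d == 1

assert terminates_in_base10(1306, 1250)   # 1.0448
assert not terminates_in_base10(1, 3)     # 0.333... recurs
```

Reducing to lowest terms first matters: 3/6 terminates (it equals 1/2) even though 6 has the factor 3.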



Decimal computation
Decimal computation was carried out in ancient times in many ways, typically on sand tables or with a variety of abaci. Modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally).[4] For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems. For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.) Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal,[5] especially in database implementations, but there are other decimal representations in use (such as in the new IEEE 754 Standard for Floating-Point Arithmetic).[6] Decimal arithmetic is used in computers so that decimal fractional results can be computed exactly, which is not possible using a binary fractional representation. This is often important for financial and other calculations.[7]
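The point about exact decimal fractional results can be seen in Python, whose standard decimal module implements decimal arithmetic of the kind described:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so fractional
# results drift; decimal arithmetic of the kind described above is exact.
assert 0.1 + 0.2 != 0.3                                   # binary floats
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")  # decimal
```

This is the behaviour financial calculations rely on: a sum of exact decimal amounts never accumulates binary rounding error.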

History

Many ancient cultures calculated from early on with numerals based on ten: Egyptian hieroglyphs, in evidence since around 3000 BC, used a purely decimal system,[8] [9] as did the Cretan hieroglyphs (ca. 1625−1500 BC) of the Minoans, whose numerals are closely based on the Egyptian model.[10] [11] The decimal system was handed down to the consecutive Bronze Age cultures of Greece, including Linear A (ca. 18th century BC−1450 BC) and Linear B (ca. 1375−1200 BC); the number system of classical Greece also used powers of ten, including, like the Roman numerals, an intermediate base of 5.[12] Notably, the polymath Archimedes (c. 287–212 BC) invented a decimal positional system in his Sand Reckoner which was based on 10^8,[12] which later led the German mathematician Carl Friedrich Gauss to lament what heights science would have already reached in his days if Archimedes had fully realized the potential of his ingenious discovery.[13] Hittite hieroglyphs (since the 15th century BC) were also strictly decimal, just like the Egyptian and early Greek numerals.[14]

The Egyptian hieratic numerals, the Greek alphabet numerals, the Roman numerals, the Chinese numerals and early Indian Brahmi numerals are all non-positional decimal systems, and required large numbers of symbols. For instance, Egyptian numerals used different symbols for 10, 20 through 90, 100, 200 through 900, 1000, 2000, 3000, 4000, to 10,000.[15]

History of decimal fractions
According to Joseph Needham, decimal fractions were first developed and used by the Chinese in the 1st century BC, and then spread to the Middle East and from there to Europe.[16] The written Chinese decimal fractions were non-positional.[16] However, counting rod fractions were positional. Qin Jiushao, in his book Mathematical Treatise in Nine Sections (1247), denoted the decimal fraction 0.96644 with counting rods, writing the unit marker 寸 followed by the digits 096644.

[Figure: counting rod decimal fraction 1/7]



Immanuel Bonfils invented decimal fractions around 1350, anticipating Simon Stevin, but did not develop any notation to represent them.[18] The Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, though J. Lennart Berggren notes that positional decimal fractions were used five centuries before him by Arab mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.[19] Khwarizmi introduced fractions to Islamic countries in the early 9th century. His representation of fractions was taken from traditional Chinese mathematical fractions. This form of fraction with the numerator on top and the denominator on the bottom, without a horizontal bar, was also used in the 10th century by Abu'l-Hasan al-Uqlidisi and again in the 15th century work "Arithmetic Key" by Jamshīd al-Kāshī. A forerunner of modern European decimal notation was introduced by Simon Stevin in the 16th century.[20]

Natural languages
A straightforward decimal rank system with a word for each order (10 十, 100 百, 1000 千, 10000 万), in which 11 is expressed as ten-one, 23 as two-ten-three, and 89345 as 8万9千3百4十5, is found in Chinese languages, and in Vietnamese with a few irregularities. Japanese, Korean, and Thai have imported the Chinese decimal system. Many other languages with a decimal system have special words for the numbers between 10 and 20, and for the decades. For example, in English 11 is "eleven", not "ten-one".

Incan languages such as Quechua and Aymara have an almost straightforward decimal system, in which 11 is expressed as ten with one and 23 as two-ten with three. Some psychologists suggest irregularities of the English names of numerals may hinder children's counting ability.[21]

Other bases
Some cultures do, or did, use other bases of numbers. • Pre-Columbian Mesoamerican cultures such as the Maya used a base-20 system (using all twenty fingers and toes). • The Babylonians used a combination of decimal with base 60. • Many or all of the Chumashan languages originally used a base-4 counting system, in which the names for numbers were structured according to multiples of 4 and 16.[22] • Many languages[23] use quinary number systems, including Gumatj, Nunggubuyu,[24] Kuurn Kopan Noot[25] and Saraveca. Of these, Gumatj is the only true 5–25 language known, in which 25 is the higher group of 5. • Some Nigerians use base-12 systems • The Huli language of Papua New Guinea is reported to have base-15 numbers.[26] Ngui means 15, ngui ki means 15×2 = 30, and ngui ngui means 15×15 = 225. • Umbu-Ungu, also known as Kakoli, is reported to have base-24 numbers.[27] Tokapu means 24, tokapu talu means 24×2 = 48, and tokapu tokapu means 24×24 = 576. • Ngiti is reported to have a base-32 number system with base-4 cycles.[28]



References
[1] The History of Arithmetic, Louis Charles Karpinski, 200 pp., Rand McNally & Company, 1925.
[2] Histoire universelle des chiffres, Georges Ifrah, Robert Laffont, 1994. (Also: The Universal History of Numbers: From prehistory to the invention of the computer, Georges Ifrah, ISBN 0471393401, John Wiley and Sons Inc., New York, 2000. Translated from the French by David Bellos, E.F. Harding, Sophie Wood and Ian Monk.)
[3] Math Made Nice-n-Easy (http://books.google.com/books?id=ebx9StilsqIC&pg=PA141#v=onepage&q&f=false). Piscataway, N.J.: Research Education Association. 1999. p. 141. ISBN 0-87891-200-2.
[4] Fingers or Fists? (The Choice of Decimal or Binary Representation), Werner Buchholz, Communications of the ACM, Vol. 2 #12, pp. 3–11, ACM Press, December 1959.
[5] Decimal Computation, Hermann Schmid, John Wiley & Sons, 1974 (ISBN 047176180X); reprinted in 1983 by Robert E. Krieger Publishing Company (ISBN 0898743184).
[6] Decimal Floating-Point: Algorism for Computers, Cowlishaw, M. F., Proceedings 16th IEEE Symposium on Computer Arithmetic, ISBN 0-7695-1894-X, pp. 104–111, IEEE Comp. Soc., June 2003.
[7] Decimal Arithmetic - FAQ (http://speleotrove.com/decimal/decifaq.html)
[8] Egyptian numerals (http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Egyptian_numerals.html)
[9] Georges Ifrah: From One to Zero. A Universal History of Numbers, Penguin Books, 1988, ISBN 0140099190, pp. 200–213 (Egyptian numerals).
[10] Graham Flegg: Numbers: their history and meaning, Courier Dover Publications, 2002, ISBN 9780486421650, p. 50.
[11] Georges Ifrah: From One to Zero. A Universal History of Numbers, Penguin Books, 1988, ISBN 0140099190, pp. 213–218 (Cretan numerals).
[12] Greek numerals (http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Greek_numbers.html)
[13] Menninger, Karl: Zahlwort und Ziffer. Eine Kulturgeschichte der Zahl, Vandenhoeck und Ruprecht, 3rd ed., 1979, ISBN 3-525-40725-4, pp. 150–153.
[14] Georges Ifrah: From One to Zero.
A Universal History of Numbers, Penguin Books, 1988, ISBN 0140099190, pp. 218f. (The Hittite hieroglyphic system).
[15] Lam Lay Yong et al., The Fleeting Footsteps, pp. 137–139.
[16] Joseph Needham (1959). "Decimal System". Science and Civilisation in China, Volume III, Mathematics and the Sciences of the Heavens and the Earth. Cambridge University Press.
[17] Jean-Claude Martzloff, A History of Chinese Mathematics, Springer, 1997, ISBN 3-540-33782-2.
[18] Gandz, S.: The invention of the decimal fractions and the application of the exponential calculus by Immanuel Bonfils of Tarascon (c. 1350), Isis 25 (1936), 16–45.
[19] Berggren, J. Lennart (2007). "Mathematics in Medieval Islam". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. p. 518. ISBN 9780691114859.
[20] B. L. van der Waerden (1985). A History of Algebra. From Khwarizmi to Emmy Noether. Berlin: Springer-Verlag.
[21] Azar, Beth (1999). "English words may hinder math skills development" (http://web.archive.org/web/20071021015527/http://www.apa.org/monitor/apr99/english.html). American Psychological Association Monitor 30 (4). Archived from the original (http://www.apa.org/monitor/apr99/english.html) on 2007-10-21.
[22] There is a surviving list of Ventureño language number words up to 32 written down by a Spanish priest ca. 1819. "Chumashan Numerals" by Madison S. Beeler, in Native American Mathematics, edited by Michael P. Closs (1986), ISBN 0-292-75531-7.
[23] Harald Hammarström, Rarities in Numeral Systems (http://www.cs.chalmers.se/~harald2/rara2006.pdf): "Bases 5, 10, and 20 are omnipresent."
[24] Harris, John (1982). Hargrave, Susanne, ed. Facts and fallacies of aboriginal number systems (http://www1.aiatsis.gov.au/exhibitions/e_access/serial/m0029743_v_a.pdf). 8. 153–181.
[25] Dawson, J.:
" Australian Aborigines: The Languages and Customs of Several Tribes of Aborigines in the Western District of Victoria (http:/ / books. google. com/ books?id=OdEDAAAAMAAJ) (1881), p. xcviii. [26] Cheetham, Brian (1978). "Counting and Number in Huli" (http:/ / www. uog. ac. pg/ PUB08-Oct-03/ cheetham. htm). Papua New Guinea Journal of Education 14: 16–35. [27] Bowers, Nancy; Lepi, Pundia (1975). "Kaugel Valley systems of reckoning" (http:/ / www. ethnomath. org/ resources/ bowers-lepi1975. pdf). Journal of the Polynesian Society 84 (3): 309–324. [28] Hammarström, Harald (2006). "Proceedings of Rara & Rarissima Conference" (http:/ / www. cs. chalmers. se/ ~harald2/ rarapaper. pdf).



External links
• Decimal arithmetic FAQ
• Cultural Aspects of Young Children's Mathematics Knowledge (NCTM_pap.htm)

Multiplication

Multiplication (often denoted by the cross symbol "×") is the mathematical operation of scaling one number by another. It is one of the four basic operations in elementary arithmetic (the others being addition, subtraction and division). Because the result of scaling by whole numbers can be thought of as consisting of some number of copies of the original, whole-number products greater than 1 can be computed by repeated addition; for example, 3 multiplied by 4 (often said as "3 times 4") can be calculated by adding 4 copies of 3 together:

3 × 4 = 3 + 3 + 3 + 3 = 12

[Figure: Four bags of three marbles give twelve marbles. There are also 3 sets consisting of 4 marbles of the same colour.]

Multiplication can also be thought of as scaling. [Figure (animation): 2 being multiplied by 3, giving 6 as a result.]



[Figure: 4 × 5 = 20: the rectangle is composed of 20 squares, with sides of length 4 and 5.]

[Figure: Area of a cloth: 4.5 m × 2.5 m = 11.25 m²; 4½ × 2½ = 11¼.]

Here 3 and 4 are the "factors" and 12 is the "product". There are differences amongst educators as to which number should normally be considered as the number of copies, and whether multiplication should even be introduced as repeated addition.[1] For example, 3 multiplied by 4 can also be calculated by adding 3 copies of 4 together:

3 × 4 = 4 + 4 + 4 = 12

Multiplication of rational numbers (fractions) and real numbers is defined by systematic generalization of this basic idea. Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths (for numbers generally). The area of a rectangle does not depend on which side is measured first, which illustrates that the order in which numbers are multiplied together does not matter. In general, multiplying two measurements gives a result of a new type depending on the measurements. For instance, multiplying two lengths gives an area, as in the cloth measurement above.

The inverse operation of multiplication is division. For example, 4 multiplied by 3 equals 12. Then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers (such as complex numbers), and for more abstract constructs such as matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter.



Notation and terminology
Multiplication is often written using the multiplication sign "×" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, 2 × 3 = 6 (verbally, "two times three equals six").

[Figure: The multiplication sign × (HTML entity: &times;).]

There are several other common notations for multiplication. Many of these are intended to reduce confusion between the multiplication sign × and the commonly used variable x:
• Multiplication is sometimes denoted by either a middle dot or a period, as in 5 ⋅ 2 or 5 . 2.

The middle dot is standard in the United States, the United Kingdom, and other countries where the period is used as a decimal point. In other countries that use a comma as a decimal point, either the period or a middle dot is used for multiplication. • The asterisk (as in 5*2) is often used in programming languages because it appears on every keyboard. This usage originated in the FORTRAN programming language. • In algebra, multiplication involving variables is often written as a juxtaposition (e.g. xy for x times y or 5x for five times x). This notation can also be used for quantities that are surrounded by parentheses (e.g. 5(2) or (5)(2) for five times two). • In matrix multiplication, there is actually a distinction between the cross and the dot symbols. The cross symbol generally denotes a vector multiplication, while the dot denotes a scalar multiplication. A similar convention distinguishes between the cross product and the dot product of two vectors. The numbers to be multiplied are generally called the "factors" or "multiplicands". When thinking of multiplication as repeated addition, the number to be multiplied is called the "multiplicand", while the number of multiples is called the "multiplier". In algebra, a number that is the multiplier of a variable or expression (e.g. the 3 in 3xy2) is called a coefficient. The result of a multiplication is called a product, and is a multiple of each factor if the other factor is an integer. For example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.



The common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9); one method, the peasant multiplication algorithm, does not. Multiplying numbers to more than a couple of decimal places by hand is tedious and error-prone. Common logarithms were invented to simplify such calculations. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early twentieth century, mechanical calculators, such as the Marchant, automated multiplication of up to 10-digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand.
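The peasant multiplication algorithm mentioned above needs no multiplication table, only halving, doubling, and addition. A sketch (the function name is mine):

```python
# Sketch of peasant multiplication: repeatedly halve one factor and
# double the other, summing the doubled values wherever the halved
# factor is odd. No multiplication table is needed.
def peasant_multiply(a, b):
    total = 0
    while a > 0:
        if a % 2 == 1:   # halved factor is odd: keep this row
            total += b
        a //= 2          # halve
        b *= 2           # double
    return total

assert peasant_multiply(13, 21) == 273
```

This works because halving reads off the binary digits of a, so the kept rows are exactly the doublings b·2^i for which bit i of a is 1.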

Historical algorithms
Methods of multiplication were documented in the Egyptian, Greek, Indian and Chinese civilizations. The Ishango bone, dated to about 18,000 to 20,000 BC, hints at a knowledge of multiplication in the Upper Paleolithic era in Central Africa.

Egyptians
The Egyptian method of multiplication of integers and fractions, documented in the Ahmes Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 1 × 21 = 21, 2 × 21 = 42, 4 × 21 = 84, 8 × 21 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence: 13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273.

Babylonians
The Babylonians used a sexagesimal positional number system, analogous to the modern-day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table.

Chinese
In the mathematical text Zhou Bi Suan Jing, dated prior to 300 BC, and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed rod calculus involving place-value addition, subtraction, multiplication and division. These place-value decimal arithmetic algorithms were introduced by Al-Khwarizmi to Arab countries in the early 9th century.

Modern method
The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication and division. Henry Burchard Fine, then professor of Mathematics at Princeton University, wrote the following: The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously.[2]

Product of 45 and 256. Note the order of the numerals in 45 is reversed down the left column. The carry step of the multiplication can be performed at the final stage of the calculation (in bold), returning the final product of 45 × 256 = 11520.

Computer algorithms
The standard method of multiplying two n-digit numbers requires n² simple multiplications. Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. In particular, for very large numbers, methods based on the discrete Fourier transform can reduce the number of simple operations to nearly the order of n log n.
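The n² count comes from the grade-school algorithm, in which every digit of one factor multiplies every digit of the other. A minimal sketch (the function name is illustrative):

```python
def long_multiply(x, y):
    """Grade-school multiplication: every digit of x meets every digit of y,
    so two n-digit numbers cost n*n one-digit products."""
    xd = [int(d) for d in str(x)][::-1]   # least-significant digit first
    yd = [int(d) for d in str(y)][::-1]
    result = [0] * (len(xd) + len(yd))
    for i, dx in enumerate(xd):
        for j, dy in enumerate(yd):
            result[i + j] += dx * dy      # one simple multiplication
    for k in range(len(result) - 1):      # propagate carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return int("".join(map(str, result[::-1])))

print(long_multiply(45, 256))  # 11520
```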

Products of measurements
When two measurements are multiplied together, the type of the product depends on the types of the measurements. The general theory is given by dimensional analysis. This analysis is routinely applied in physics but has also found applications in finance. One can only meaningfully add or subtract quantities of the same type, but one can multiply or divide quantities of different types. A common example is that multiplying speed by time gives distance: 50 kilometers per hour × 3 hours = 150 kilometers.

Products of sequences
Capital Pi notation
The product of a sequence of terms can be written with the product symbol, which derives from the capital letter Π (pi) in the Greek alphabet. Unicode position U+220F (∏) contains a glyph for denoting such a product, distinct from U+03A0 (Π), the letter. The meaning of this notation is given by:

∏_{i=m}^{n} x_i = x_m · x_{m+1} · x_{m+2} · ... · x_{n−1} · x_n

The subscript gives the symbol for a dummy variable (i in this case), called the "index of multiplication", together with its lower bound (m), whereas the superscript (here n) gives its upper bound. The lower and upper bounds are expressions denoting integers. The factors of the product are obtained by taking the expression following the product operator, with successive integer values substituted for the index of multiplication, starting from the lower bound and increasing by 1 up to and including the upper bound. So, for example:

∏_{i=1}^{4} i = 1 · 2 · 3 · 4 = 24

In case m = n, the value of the product is the same as that of the single factor x_m. If m > n, the product is the empty product, with the value 1.
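The bound conventions, including the empty product, can be mirrored directly in a loop (the function name is illustrative):

```python
def product(x, m, n):
    """Prod_{i=m}^{n} x(i): multiply x(m), x(m+1), ..., x(n)."""
    result = 1                        # empty product when m > n
    for i in range(m, n + 1):         # inclusive upper bound
        result *= x(i)
    return result

print(product(lambda i: i, 1, 4))     # 1*2*3*4 = 24
print(product(lambda i: i, 5, 4))     # m > n: empty product, 1
```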

Infinite products
One may also consider products of infinitely many terms; these are called infinite products. Notationally, we would replace n above by the lemniscate ∞. The product of such a series is defined as the limit of the product of the first n terms, as n grows without bound. That is, by definition,

∏_{i=m}^{∞} x_i = lim_{n→∞} ∏_{i=m}^{n} x_i

One can similarly replace m with negative infinity, and define:

∏_{i=−∞}^{∞} x_i = (lim_{m→−∞} ∏_{i=m}^{0} x_i) · (lim_{n→∞} ∏_{i=1}^{n} x_i)

provided both limits exist.

Properties
For the natural numbers, integers, fractions, and real and complex numbers, multiplication has certain properties:

Commutative property
The order in which two numbers are multiplied does not matter: x · y = y · x.

Associative property
Expressions solely involving multiplication are invariant with respect to the order of operations: (x · y) · z = x · (y · z).

Distributive property
Multiplication distributes over addition. This identity is of prime importance in simplifying algebraic expressions: x · (y + z) = x · y + x · z.

Identity element
The multiplicative identity is 1; anything multiplied by one is itself. This is known as the identity property: x · 1 = x.

Zero element
Any number multiplied by zero is zero. This is known as the zero property of multiplication: x · 0 = 0. (Zero is sometimes not included amongst the natural numbers.)

There are a number of further properties of multiplication not satisfied by all types of numbers.

Negation
Negative one times any number is equal to the opposite of that number: (−1) · x = −x. Negative one times negative one is positive one: (−1) · (−1) = 1. The natural numbers do not include negative numbers.

Inverse element
Every number x, except zero, has a multiplicative inverse 1/x, such that x · (1/x) = 1. The natural numbers and integers do not have inverses.

Order preservation
Multiplication by a positive number preserves order: if a > 0, then b > c implies ab > ac. Multiplication by a negative number reverses order: if a < 0 and b > c, then ab < ac. The complex numbers do not have an order predicate.

Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions.
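The failure of commutativity for matrices can be checked by writing out a 2 × 2 matrix product explicitly (the function name is illustrative):

```python
def matmul(A, B):
    """2x2 matrix product, written out entry by entry."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

A = [[1, 2], [0, 1]]
B = [[1, 0], [3, 1]]
print(matmul(A, B))  # [[7, 2], [3, 1]]
print(matmul(B, A))  # [[1, 2], [3, 7]] -- AB != BA
```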


Multiplication with Peano axioms
In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers.[3] Peano arithmetic has two axioms for multiplication:

x × 0 = 0
x × S(y) = (x × y) + x

Here S(y) represents the successor of y, or the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance S(0), denoted by 1, is a multiplicative identity because

x × 1 = x × S(0) = (x × 0) + x = 0 + x = x.

The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x, y) as equivalent to x − y when x and y are treated as integers. Thus both (0, 1) and (1, 2) are equivalent to −1. The multiplication axiom for integers defined this way is

(x1, y1) × (x2, y2) = (x1x2 + y1y2, x1y2 + y1x2).

The rule that −1 × −1 = 1 can then be deduced from

(0, 1) × (0, 1) = (0·0 + 1·1, 0·1 + 1·0) = (1, 0).
Multiplication is extended in a similar way to rational numbers and then to real numbers.
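The two Peano axioms for multiplication translate directly into a recursive definition. A minimal sketch, treating natural numbers as non-negative Python integers with successor S(y) = y + 1 (an assumption of this sketch):

```python
def S(y):
    """Successor of a natural number."""
    return y + 1

def mult(x, y):
    """Peano multiplication: x * 0 = 0 and x * S(y) = (x * y) + x."""
    if y == 0:
        return 0                 # first axiom
    return mult(x, y - 1) + x    # second axiom, with y = S(y - 1)

print(mult(7, S(0)))  # S(0) = 1 acts as the multiplicative identity: 7
print(mult(6, 7))     # 42
```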

Multiplication with set theory
It is possible, though difficult, to create a recursive definition of multiplication with set theory. Such a system usually relies on the Peano definition of multiplication.

Cartesian product
The definition of multiplication as repeated addition provides a way to arrive at a set-theoretic interpretation of multiplication of cardinal numbers. In the expression

n · a = a + a + ... + a (n copies of a),

if the n copies of a are to be combined in disjoint union then clearly they must be made disjoint; an obvious way to do this is to use either a or n as the indexing set for the other. Then, the members of the disjoint union are exactly those of the Cartesian product n × a. The properties of the multiplicative operation as applied to natural numbers then follow trivially from the corresponding properties of the Cartesian product.



Multiplication in group theory
There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses.

A simple example is the set of non-zero rational numbers. Here we have identity 1, as opposed to groups under addition, where the identity is typically 0. Note that with the rationals, we must exclude zero because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example we have an abelian group, but that is not always the case.

To see this, consider the set of invertible square matrices of a given dimension over a given field. Here it is straightforward to verify closure, associativity, and the inclusion of the identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, so this group is nonabelian.

Another fact worth noting is that the integers under multiplication do not form a group, even if zero is excluded. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1.

Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements), so multiplying element a by element b could be notated a · b or ab. When referring to a group via the indication of the set and operation, the dot is used; for example, our first example could be indicated by (Q ∖ {0}, ·).

Multiplication of different kinds of numbers
Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions).

Integers
N × M is the sum of M copies of N when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by (−N) × M = N × (−M) = −(N × M) and (−N) × (−M) = N × M. The same sign rules apply to rational and real numbers.

Rational numbers
Generalization to fractions A/B × C/D is by multiplying the numerators and denominators respectively: (A × C)/(B × D). This gives the area of a rectangle A/B high and C/D wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers.

Real numbers
x × y is the limit of the products of the corresponding terms in certain sequences of rationals that converge to x and y, respectively, and is significant in calculus. This gives the area of a rectangle x high and y wide. See Products of sequences, above.

Complex numbers
Considering complex numbers z1 and z2 as ordered pairs of real numbers (a1, b1) and (a2, b2), the product z1 × z2 is (a1 × a2 − b1 × b2, a1 × b2 + a2 × b1). This is the same as for reals, a1 × a2, when the imaginary parts b1 and b2 are zero.

Further generalizations
See Multiplication in group theory, above, and Multiplicative group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (you can add and multiply polynomials, but polynomials are not numbers in any usual sense).

Division
Often division, x/y, is the same as multiplication by an inverse, x(1/y). Multiplication for some types of "numbers" may have a corresponding division without inverses; in an integral domain x may have no inverse "1/x", but x/y may still be defined. In a division ring there are inverses, but x/y may be ambiguous, since (1/y)x need not be the same as x(1/y).

When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2×2×2) is "two raised to the third power", and is denoted by 23, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times to multiply base by itself, so that the expression

indicates that the base a is to be multiplied by itself n times.
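Exponentiation as repeated multiplication can be sketched with a simple loop (the function name is illustrative):

```python
def power(base, exponent):
    """Repeated multiplication: base * base * ... with `exponent` factors."""
    result = 1          # zero factors give the empty product, 1
    for _ in range(exponent):
        result *= base
    return result

print(power(2, 3))  # 2*2*2 = 8
```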

[1] Makoto Yoshida (2009). "Is Multiplication Just Repeated Addition?" (http://www.globaledresources.com/resources/assets/042309_Multiplication_v2.pdf).
[2] Henry B. Fine. The Number System of Algebra – Treated Theoretically and Historically (2nd edition, with corrections, 1907), page 90. http://www.archive.org/download/numbersystemofal00fineuoft/numbersystemofal00fineuoft.pdf
[3] PlanetMath: Peano arithmetic (http://planetmath.org/encyclopedia/PeanoArithmetic.html)

• Boyer, Carl B. (revised by Merzbach, Uta C.) (1991). History of Mathematics. John Wiley and Sons, Inc.. ISBN 0-471-54397-7.

External links
• Multiplication and Arithmetic Operations In Various Number Systems, at cut-the-knot
• Modern Chinese Multiplication Techniques on an Abacus



In mathematics, especially in elementary arithmetic, division (÷) is an arithmetic operation. Specifically, if c times b equals a, written:

c × b = a,

where b is not zero, then a divided by b equals c, written:

a ÷ b = c.

For instance,

6 ÷ 3 = 2

since 2 × 3 = 6. In the above expression, a is called the dividend, b the divisor and c the quotient.

Conceptually, division describes two distinct but related settings. Partitioning involves taking a set of size a and forming b groups that are equal in size. The size of each group formed, c, is the quotient of a and b. Quotative division involves taking a set of size a and forming groups of size b. The number of groups of this size that can be formed, c, is the quotient of a and b.[1]

Teaching division usually leads to the concept of fractions being introduced to students. Unlike addition, subtraction, and multiplication, the set of all integers is not closed under division. Dividing two integers may result in a remainder. To complete the division of the remainder, the number system is extended to include fractions, or rational numbers as they are more generally called.



Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a vinculum or fraction bar, between them. This can be read aloud as "a divided by b", "a by b" or "a over b". A way to express division all on one line is to write the dividend (or numerator), then a slash, then the divisor (or denominator), like this:

a/b

This is the usual way to specify division in most computer programming languages, since it can easily be typed as a simple sequence of ASCII characters. A typographical variation, halfway between these two forms, uses a solidus (fraction slash) but elevates the dividend and lowers the divisor.

Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator), and there is no implication that the division needs to be evaluated further. A second way to show division is to use the obelus (or division sign), common in arithmetic, in this manner:

a ÷ b

This form is infrequent except in elementary arithmetic. The obelus is also used alone to represent the division operation itself, for instance as a label on a key of a calculator. In some non-English-speaking cultures, "a divided by b" is written a : b. However, in English usage the colon is restricted to expressing the related concept of ratios (then "a is to b"). In elementary mathematics, long-division notation such as b)a is also used to denote a divided by b. This notation was first introduced by Michael Stifel in Arithmetica integra, published in 1544.[2]

Computing division
Division is often introduced through the notion of "sharing out" a set of objects, for example a pile of sweets, into a number of equal portions. Distributing the objects several at a time in each round of sharing to each portion leads to the idea of "chunking", i.e., division by repeated subtraction.

More systematic and more efficient (but also more formalised, more rule-based, and further removed from an overall holistic picture of what division is achieving), a person who knows the multiplication tables can divide two integers using pencil and paper and the method of short division, if the divisor is simple, or long division for larger integer divisors. If the dividend has a fractional part (expressed as a decimal fraction), the algorithm can be continued past the ones place as far as desired. If the divisor has a fractional part, the problem can be restated by moving the decimal point to the right in both numbers until the divisor has no fractional part.

Modern computers compute division by methods that are faster than long division: see Division (digital).

A person can calculate division with an abacus by repeatedly placing the dividend on the abacus, subtracting the divisor at the offset of each digit in the result, and counting the number of subtractions possible at each offset.

A person can use logarithm tables to divide two numbers, by subtracting the two numbers' logarithms, then looking up the antilogarithm of the result.

A person can calculate division with a slide rule by aligning the divisor on the C scale with the dividend on the D scale. The quotient can be found on the D scale where it is aligned with the left index on the C scale. The user is responsible, however, for mentally keeping track of the decimal point.
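The "chunking" idea, division by repeated subtraction, can be sketched directly. This assumes a non-negative dividend and a positive divisor (the function name is illustrative):

```python
def divide_by_subtraction(dividend, divisor):
    """Chunking: repeatedly subtract the divisor, counting the subtractions."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend   # dividend now holds the remainder

print(divide_by_subtraction(26, 10))  # (2, 6): quotient 2, remainder 6
```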

In modular arithmetic, some numbers have a multiplicative inverse with respect to the modulus. In such a case, division can be calculated by multiplication. This approach is useful in computers that do not have a fast division instruction.


Division algorithm
The division algorithm is a mathematical theorem that precisely expresses the outcome of the usual process of division of integers. In particular, the theorem asserts that integers called the quotient q and remainder r always exist and that they are uniquely determined by the dividend a and divisor d, with d ≠ 0. Formally, the theorem is stated as follows: There exist unique integers q and r such that a = qd + r and 0 ≤ r < | d |, where | d | denotes the absolute value of d.
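The theorem's conditions a = qd + r and 0 ≤ r < |d| can be verified computationally. A minimal sketch (the function name is illustrative; it relies on Python's % returning a result in [0, |d|) for a positive modulus):

```python
def division_theorem(a, d):
    """Return the unique (q, r) with a == q*d + r and 0 <= r < abs(d)."""
    r = a % abs(d)          # Python's % with a positive modulus gives 0 <= r < |d|
    q = (a - r) // d        # exact division: a - r is a multiple of d
    return q, r

for a, d in [(26, 10), (-26, 10), (26, -10)]:
    q, r = division_theorem(a, d)
    print(a, "=", q, "*", d, "+", r)
```

Note that for a = −26, d = 10 this gives q = −3, r = 4, not q = −2, r = −6; the theorem's remainder is always non-negative.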

Division of integers
Division of integers is not closed. Apart from division by zero being undefined, the quotient is not an integer unless the dividend is an integer multiple of the divisor; for example, 26 cannot be divided by 10 to give an integer. In such a case there are four possible approaches.
1. Say that 26 cannot be divided by 10; division becomes a partial function.
2. Give the answer as a decimal fraction or a mixed number, so 26 ÷ 10 = 2.6 or 2 3/5. This is the approach usually taken in mathematics.
3. Give the answer as an integer quotient and a remainder, so 26 ÷ 10 = 2 remainder 6.
4. Give the integer quotient as the answer, so 26 ÷ 10 = 2. This is sometimes called integer division.

One has to be careful when performing division of integers in a computer program. Some programming languages, such as C, will treat division of integers as in case 4 above, so the answer will be an integer. Other languages, such as MATLAB, will first convert the integers to real numbers, and then give a real number as the answer, as in case 2 above.

Names and symbols used for integer division include div, /, \, and %. Definitions vary regarding integer division when the quotient is negative: rounding may be toward zero or toward −∞. Divisibility rules can sometimes be used to quickly determine whether one integer divides exactly into another.
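The rounding ambiguity for negative quotients can be seen in one language: Python's // floors toward −∞, while truncating a float division mimics C's rounding toward zero.

```python
# Floor division (toward negative infinity), Python's native integer division:
print(7 // 2, -7 // 2)           # 3 -4
# Truncation toward zero, as C integer division would give:
print(int(7 / 2), int(-7 / 2))   # 3 -3
# Quotient-and-remainder view of 26 / 10:
print(26 // 10, 26 % 10)         # 2 6
```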

Division of rational numbers
The result of dividing two rational numbers is another rational number when the divisor is not 0. Division of two rational numbers p/q and r/s may be defined by

(p/q) ÷ (r/s) = (p × s)/(q × r).

All four quantities are integers, and only p may be 0. This definition ensures that division is the inverse operation of multiplication.
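This rule, dividing by multiplying with the reciprocal, is what Python's standard-library Fraction type implements:

```python
from fractions import Fraction

# (3/4) / (2/5) = (3*5)/(4*2) = 15/8
print(Fraction(3, 4) / Fraction(2, 5))

# Division is the inverse of multiplication: dividing and then
# multiplying by the same fraction returns the original value.
print(Fraction(3, 4) / Fraction(2, 5) * Fraction(2, 5))
```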



Division of real numbers
Division of two real numbers results in another real number when the divisor is not 0. It is defined such that a/b = c if and only if a = cb and b ≠ 0.

Division by zero
Division of any number by zero (where the divisor is zero) is not defined. This is because zero multiplied by any finite number will always result in a product of zero. Entry of such an expression into most calculators will result in an error message being issued.

Division of complex numbers
Dividing two complex numbers results in another complex number when the divisor is not 0, defined thus:

(p + qi)/(r + si) = (pr + qs)/(r² + s²) + ((qr − ps)/(r² + s²)) i

All four quantities are real numbers; r and s may not both be 0. Division for complex numbers expressed in polar form is simpler than the definition above: divide the moduli and subtract the arguments,

(p cis q)/(r cis s) = (p/r) cis(q − s)

Again all four quantities are real numbers; r may not be 0.
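Both characterizations can be checked with Python's built-in complex type and the cmath module: division inverts multiplication, and in polar terms the moduli divide and the arguments subtract.

```python
import cmath

z, w = complex(3, 2), complex(1, -1)
q = z / w
print(q)  # the Cartesian quotient

# Division inverts multiplication:
print(abs(q * w - z) < 1e-12)

# Polar form: moduli divide, arguments subtract.
print(abs(abs(q) - abs(z) / abs(w)) < 1e-12)
print(abs(cmath.phase(q) - (cmath.phase(z) - cmath.phase(w))) < 1e-12)
```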

Division of polynomials
One can define the division operation for polynomials. Then, as in the case of integers, one has a remainder. See polynomial long division or synthetic division.

Division of matrices
One can define a division operation for matrices. The usual way to do this is to define A / B = AB⁻¹, where B⁻¹ denotes the inverse of B, but it is far more common to write out AB⁻¹ explicitly to avoid confusion.

Left and right division
Because matrix multiplication is not commutative, one can also define a left division or so-called backslash-division as A \ B = A⁻¹B. For this to be well defined, B⁻¹ need not exist; however, A⁻¹ does need to exist. To avoid confusion, division as defined by A / B = AB⁻¹ is sometimes called right division or slash-division in this context. Note that with left and right division defined this way, A / (BC) is in general not the same as (A / B) / C, nor is (AB) \ C the same as A \ (B \ C), but A / (BC) = (A / C) / B and (AB) \ C = B \ (A \ C).

Matrix division and pseudoinverse
To avoid problems when A⁻¹ and/or B⁻¹ do not exist, division can also be defined as multiplication with the pseudoinverse, i.e., A / B = AB⁺ and A \ B = A⁺B, where A⁺ and B⁺ denote the pseudoinverses of A and B.

Division in abstract algebra
In abstract algebras such as matrix algebras and quaternion algebras, fractions such as a/b are typically defined as a · b⁻¹ or b⁻¹ · a, where b is presumed to be an invertible element (i.e., there exists a multiplicative inverse b⁻¹ such that b · b⁻¹ = b⁻¹ · b = 1, where 1 is the multiplicative identity). In an integral domain, where such elements may not exist, division can still be performed on equations of the form ab = ac or ba = ca by left or right cancellation, respectively. More generally, "division" in the sense of "cancellation" can be done in any ring with the aforementioned cancellation properties. If such a ring is finite, then by an application of the pigeonhole principle, every nonzero element of the ring is invertible, so division by any nonzero element is possible in such a ring.

To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular, Bott periodicity can be used to show that any real normed division algebra must be isomorphic to either the real numbers R, the complex numbers C, the quaternions H, or the octonions O.

Division and calculus
The derivative of the quotient of two functions is given by the quotient rule:

(f/g)′ = (f′g − fg′) / g²

There is no general method to integrate the quotient of two functions.

[1] Fosnot and Dolk (2001). Young Mathematicians at Work: Constructing Multiplication and Division. Portsmouth, NH: Heinemann.
[2] Earliest Uses of Symbols of Operation (http://jeff560.tripod.com/operation.html), Jeff Miller

External links
• Division, on PlanetMath
• Division on a Japanese abacus, selected from Abacus: Mystery of the Bead
• Chinese Short Division Techniques on a Suan Pan
• Rules of divisibility



A fraction (from Latin: fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, we specify how many parts of a certain size there are, for example, one-half, five-eighths and three-quarters. A common or "vulgar" fraction, such as 1/2, 5/8, 3/4, etc., consists of a numerator and a denominator—the numerator representing a number of equal parts and the denominator indicating how many of those parts make up a whole. An example is 3/4, in which the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts equal a whole.[1] The picture to the right illustrates 3/4 of a cake.

Fractional numbers can also be written without using explicit numerators or denominators, by using decimals, percent signs, or negative exponents (as in 0.01, 1%, and 10−2 respectively, all of which are equivalent to 1/100). An integer (e.g. the number 7) has an implied denominator of one, meaning that the number can be expressed as a fraction like 7/1. Other uses for fractions are to represent ratios and to represent division. Thus the fraction 3/4 is also used to represent the ratio 3:4 (the ratio of the part to the whole) and the division 3 ÷ 4 (three divided by four). In mathematics the set of all numbers which can be expressed as a fraction m/n, where m and n are integers and n is not zero, is called the set of rational numbers and is represented by the symbol Q. The word fraction is also used to describe continued fractions, algebraic fractions (quotients of algebraic expressions), and expressions that contain irrational numbers, such as √2/2 (see square root of 2) and π/4 (see proof that π is irrational).

A cake with one quarter removed. The remaining three quarters are shown. Dotted lines indicate where the cake may be cut in order to divide it into equal parts. Each quarter of the cake is denoted by the fraction 1/4.

Forms of fractions
Common, vulgar, or simple fractions
A common fraction (also known as a vulgar fraction or simple fraction) is a rational number written as an ordered pair of integers, called the numerator and denominator, separated by a line. The denominator cannot be zero. Two notations are widely used. One, as in the example 2/5, uses a slanting line to separate the numerator and denominator. The other has the numerator above a horizontal line and the denominator below the line. The slanting line is called a solidus or forward slash; the horizontal line is called a vinculum or, informally, a "fraction bar".

Writing simple fractions
In computer displays and typography, simple fractions are sometimes printed as a single character, e.g. ½ (one half). Scientific publishing distinguishes four ways to set fractions, together with guidelines on use:[2]
• case fractions: generally used only for simple fractions;
• special fractions: ½ and the like, not used in modern mathematical notation, but found in other contexts;
• shilling fractions: 1/2, so called because this notation was used for pre-decimal British currency (£sd), as in 2/6 for a half crown, meaning two shillings and six pence. This style is particularly recommended for fractions inline (rather than displayed), to avoid uneven lines, and for fractions within fractions (complex fractions) or within exponents, to increase legibility;



• built-up fractions: set with the numerator directly above the denominator. While large and legible, these can be disruptive, particularly for simple fractions or within complex fractions.

Proper and improper common fractions
A common fraction is said to be a proper fraction if the absolute value of the numerator is less than the absolute value of the denominator—that is, if the absolute value of the entire fraction is less than 1. A vulgar fraction is said to be an improper fraction (U.S., British or Australian) or top-heavy fraction (British, occasionally North America) if the absolute value of the numerator is greater than or equal to the absolute value of the denominator (e.g. 9/4).[3]

Mixed numbers
A mixed numeral (often called a mixed number, also called a mixed fraction) is the sum of a whole number and a proper fraction. This sum is implied without the use of any visible operator such as "+". For example, in referring to two entire cakes and three quarters of another cake, the whole and fractional parts of the number are written next to each other: 2 3/4 = 2 + 3/4.

This is not to be confused with the algebra rule of implied multiplication. When two algebraic expressions are written next to each other, the operation of multiplication is said to be "understood". In algebra, for example, 2x is not a mixed number; instead, multiplication is understood: 2x = 2 · x.

An improper fraction is another way to write a whole plus a part. A mixed number can be converted to an improper fraction as follows:
1. Write the mixed number as a sum: 2 3/4 = 2 + 3/4.
2. Convert the whole number to an improper fraction with the same denominator as the fractional part: 2 = 8/4.
3. Add the fractions. The resulting sum is the improper fraction. In the example, 8/4 + 3/4 = 11/4.

Similarly, an improper fraction can be converted to a mixed number as follows:
1. Divide the numerator by the denominator. In the example, 11/4, divide 11 by 4: 11 ÷ 4 = 2 with remainder 3.
2. The quotient (without the remainder) becomes the whole number part of the mixed number. The remainder becomes the numerator of the fractional part. In the example, 2 is the whole number part and 3 is the numerator of the fractional part.
3. The new denominator is the same as the denominator of the improper fraction. In the example, they are both 4. Thus 11/4 = 2 3/4.
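The two conversions can be sketched with a few lines of arithmetic; divmod performs the divide-and-take-remainder step in one call (function names are illustrative):

```python
def to_improper(whole, num, den):
    """Mixed number to improper fraction: 2 3/4 -> (2*4 + 3)/4 = 11/4."""
    return whole * den + num, den

def to_mixed(num, den):
    """Improper fraction to mixed number: 11/4 -> 2 3/4."""
    whole, rem = divmod(num, den)   # quotient and remainder in one step
    return whole, rem, den

print(to_improper(2, 3, 4))  # (11, 4)
print(to_mixed(11, 4))       # (2, 3, 4)
```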

Reciprocals and the "invisible denominator"
The reciprocal of a fraction is another fraction with the numerator and denominator reversed. The reciprocal of 3/7, for instance, is 7/3. The product of a fraction and its reciprocal is 1; hence the reciprocal is the multiplicative inverse of a fraction. Any integer can be written as a fraction with the number one as denominator. For example, 17 can be written as 17/1, where 1 is sometimes referred to as the invisible denominator. Therefore, every fraction or integer, except for zero, has a reciprocal. The reciprocal of 17 is 1/17.



Complex fractions
In a complex fraction, either the numerator, or the denominator, or both, is a fraction or a mixed number,[4] corresponding to division of fractions. For example, (1/2)/(1/3) and (3/4)/26 are complex fractions. To reduce a complex fraction to a simple fraction, treat the longest fraction line as representing division. For example:

(1/2)/(1/3) = (1/2) × (3/1) = 3/2

If, in a complex fraction, there is no clear way to tell which fraction line takes precedence, then the expression is improperly formed, and meaningless.

Compound fractions
A compound fraction is a fraction of a fraction, or any number of fractions connected with the word of,[4] [5] corresponding to multiplication of fractions. To reduce a compound fraction to a simple fraction, just carry out the multiplication (see the section on multiplication). For example, 3/4 of 5/7 is a compound fraction, corresponding to 3/4 × 5/7 = 15/28. The terms compound fraction and complex fraction are closely related and sometimes one is used as a synonym for the other.

Decimal fractions and percentages
A decimal fraction is a fraction whose denominator is not given explicitly, but is understood to be an integer power of ten. Decimal fractions are commonly expressed using decimal notation, in which the implied denominator is determined by the number of digits to the right of a decimal separator, the appearance of which (e.g., a period, a raised period (•), a comma) depends on the locale (for examples, see decimal separator). Thus for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, viz. 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the decimal (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, 375/100, or as a mixed number, 3 75/100.

Decimal fractions can also be expressed using scientific notation with negative exponents, such as 6.023 × 10⁻⁷, which represents 0.0000006023. The 10⁻⁷ represents a denominator of 10⁷. Dividing by 10⁷ moves the decimal point 7 places to the left.

Decimal fractions with infinitely many digits to the right of the decimal separator represent an infinite series. For example, 1/3 = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + ... .

Another kind of fraction is the percentage, in which the implied denominator is always 100. Thus 75% means 75/100. Related concepts are the permille, with 1000 as implied denominator, and the more general parts-per notation, as in 75 parts per million, meaning that the proportion is 75/1,000,000.

Whether common fractions or decimal fractions are used is often a matter of taste and context. Common fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3/16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more accurate to multiply 15 by 1/3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example, 3/6 (read "three and six"), meaning 3 shillings and 6 pence, and having no relationship to the fraction 3/6.


Special cases
A unit fraction is a vulgar fraction with a numerator of 1, e.g. 1/7. Unit fractions can also be expressed using negative exponents, as in 2⁻¹, which represents 1/2, and 2⁻², which represents 1/(2²) or 1/4.

An Egyptian fraction is the sum of distinct unit fractions, e.g. 1/2 + 1/3. This term derives from the fact that the ancient Egyptians expressed all fractions except 1/2, 2/3 and 3/4 in this manner. Every positive rational number can be expanded as an Egyptian fraction, but the representation is in general not unique. For example, 7/12 = 1/2 + 1/12 = 1/3 + 1/4.

A dyadic fraction is a vulgar fraction in which the denominator is a power of two, e.g. 3/8.

Arithmetic with fractions
Like whole numbers, fractions obey the commutative, associative, and distributive laws, and the rule against division by zero.

Equivalent fractions
Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a fraction that is equivalent to the original fraction. This is true because for any non-zero number n, the fraction n/n equals 1. Therefore, multiplying by n/n is equivalent to multiplying by one, and any number multiplied by one has the same value as the original number. By way of an example, start with the fraction 1/2. When the numerator and denominator are both multiplied by 2, the result is 2/4, which has the same value (0.5) as 1/2. To picture this visually, imagine cutting a cake into four pieces; two of the pieces together (2/4) make up half the cake (1/2).

Dividing the numerator and denominator of a fraction by the same non-zero number will also yield an equivalent fraction. This is called reducing or simplifying the fraction. A simple fraction in which the numerator and denominator are coprime (that is, the only positive integer that goes into both the numerator and denominator evenly is 1) is said to be irreducible, in lowest terms, or in simplest terms. For example, 3/9 is not in lowest terms because both 3 and 9 can be exactly divided by 3. In contrast, 3/8 is in lowest terms; the only positive integer that goes into both 3 and 8 evenly is 1. Using these rules, we can show, for example, that 1/2 = 2/4 = 4/8 = 50/100. A common fraction can be reduced to lowest terms by dividing both the numerator and denominator by their greatest common divisor. For example, as the greatest common divisor of 63 and 462 is 21, the fraction 63/462 can be reduced to lowest terms by dividing the numerator and denominator by 21: 63/462 = 3/22.
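The reduction to lowest terms described above can be sketched in a few lines of Python; math.gcd is standard, while the function name reduce_fraction is our own:

```python
from math import gcd

def reduce_fraction(n, d):
    """Reduce n/d to lowest terms by dividing out the greatest common divisor."""
    g = gcd(n, d)
    return n // g, d // g

print(reduce_fraction(63, 462))  # (3, 22): gcd(63, 462) = 21
print(reduce_fraction(3, 9))     # (1, 3)
```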

The Euclidean algorithm gives a method for finding the greatest common divisor of any two positive integers.



Comparing fractions
Comparing fractions with the same denominator only requires comparing the numerators: 3/4 > 2/4 because 3 > 2. If two fractions have the same numerator, then the fraction with the smaller denominator is the larger fraction. When a whole is divided into equal pieces, if fewer equal pieces are needed to make up the whole, then each piece must be larger; the fraction with the smaller denominator represents these fewer but larger pieces.

One way to compare fractions with different numerators and denominators is to find a common denominator. To compare a/b and c/d, these are converted to ad/bd and bc/bd. Then bd is a common denominator and the numerators ad and bc can be compared. It is not necessary to determine the value of the common denominator to compare fractions: you can just compare ad and bc, without computing the denominator. This short cut is known as "cross multiplying". For example, to compare 5/18 and 4/17, multiply the top and bottom of each fraction by the denominator of the other fraction to get a common denominator. The denominators are then the same, but their value need not be calculated; only the numerators need to be compared. Since 5×17 (= 85) is greater than 4×18 (= 72), 5/18 > 4/17.

Also note that every negative number, including negative fractions, is less than zero, and every positive number, including positive fractions, is greater than zero, so every negative fraction is less than any positive fraction.
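The cross-multiplication shortcut can be expressed as a small Python predicate (a sketch; it assumes positive denominators, and the function name is ours):

```python
def less_than(a, b, c, d):
    """Decide whether a/b < c/d (for b, d > 0) by comparing a*d with b*c."""
    return a * d < b * c

# Compare 4/17 with 5/18: 4*18 = 72 and 17*5 = 85, so 4/17 < 5/18.
print(less_than(4, 17, 5, 18))  # True
```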

The first rule of addition is that only like quantities can be added; for example, various quantities of quarters. Unlike quantities, such as adding thirds to quarters, must first be converted to like quantities as described below. Imagine a pocket containing two quarters, and another pocket containing three quarters; in total, there are five quarters. Since four quarters is equivalent to one (dollar), this can be represented as follows: 2/4 + 3/4 = 5/4 = 1 1/4.

Adding unlike quantities

To add fractions containing unlike quantities (e.g. quarters and thirds), it is necessary to convert all amounts to like quantities. It is easy to work out the chosen type of fraction to convert to: simply multiply together the two denominators (bottom numbers) of the fractions.

If 1/2 of a cake is to be added to 1/4 of a cake, the pieces need to be converted into comparable quantities, such as cake-eighths or cake-quarters.

For adding quarters to thirds, both types of fraction are converted to twelfths. Consider adding the following two quantities: 3/4 + 2/3.

First, convert 3/4 into twelfths by multiplying both the numerator and denominator by three: 3/4 × 3/3 = 9/12. Note that 3/3 is equivalent to 1, which shows that 3/4 is equivalent to the resulting 9/12. Secondly, convert 2/3 into twelfths by multiplying both the numerator and denominator by four: 2/3 × 4/4 = 8/12. Note that 4/4 is equivalent to 1, which shows that 2/3 is equivalent to the resulting 8/12. Now it can be seen that 3/4 + 2/3 is equivalent to 9/12 + 8/12 = 17/12 = 1 5/12.

This method can be expressed algebraically: a/b + c/d = (ad + cb)/bd.

And for expressions consisting of the addition of three fractions: a/b + c/d + e/f = (adf + cbf + ebd)/bdf.

This method always works, but sometimes there is a smaller denominator that can be used (a least common denominator). For example, to add two fractions with denominators 4 and 12, the denominator 48 can be used (the product of 4 and 12), but the smaller denominator 12 may also be used, being the least common multiple of 4 and 12.
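A minimal sketch of addition over the least common denominator, using math.lcm (available in Python 3.9+; the helper name add_fractions is ours):

```python
from math import lcm  # Python 3.9+

def add_fractions(a, b, c, d):
    """Return (numerator, denominator) of a/b + c/d over the least common denominator."""
    m = lcm(b, d)
    return a * (m // b) + c * (m // d), m

print(add_fractions(3, 4, 2, 3))   # (17, 12): 9/12 + 8/12
print(add_fractions(1, 4, 1, 12))  # (4, 12): denominator 12, not 48
```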

The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator. The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance, 2/3 − 1/2 = 4/6 − 3/6 = 1/6.

Multiplying a fraction by another fraction

To multiply fractions, multiply the numerators and multiply the denominators. Thus: 2/3 × 3/4 = 6/12.

Why does this work? First, consider one third of one quarter. Using the example of a cake, if three small slices of equal size make up a quarter, and four quarters make up a whole, twelve of these small, equal slices make up a whole. Therefore a third of a quarter is a twelfth. Now consider the numerators. The first fraction, two thirds, is twice as large as one third. Since one third of a quarter is one twelfth, two thirds of a quarter is two twelfths. The second fraction, three quarters, is three times as large as one quarter, so two thirds of three quarters is three times as large as two thirds of one quarter. Thus two thirds times three quarters is six twelfths.

A short cut for multiplying fractions is called "cancellation". In effect, we reduce the answer to lowest terms during multiplication. For example: 2/3 × 3/4 = 1/2. A two is a common factor in both the numerator of the left fraction and the denominator of the right, and is divided out of both. Three is a common factor of the left denominator and right numerator, and is divided out of both.

Multiplying a fraction by a whole number

Place the whole number over one and multiply: 6 × 3/4 = 6/1 × 3/4 = 18/4. This method works because the fraction 6/1 means six equal parts, each one of which is a whole.

Mixed numbers

When multiplying mixed numbers, it's best to convert the mixed number into an improper fraction. For example: 3 × 2 3/4 = 3 × 11/4 = 33/4 = 8 1/4. In other words, 2 3/4 is the same as 8/4 + 3/4, making 11 quarters in total (because 2 cakes, each split into quarters, makes 8 quarters total), and 33 quarters is 8 1/4, since 8 cakes, each made of quarters, is 32 quarters in total.

To divide a fraction by a whole number, you may either divide the numerator by the number, if it goes evenly into the numerator, or multiply the denominator by the number. For example, 10/3 ÷ 5 equals 2/3 and also equals 10/15, which reduces to 2/3. To divide by a fraction, multiply by the reciprocal of that fraction. Thus, 1/2 ÷ 3/4 = 1/2 × 4/3 = 4/6 = 2/3.
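Both division rules can be checked with Python's exact Fraction type (an illustration only):

```python
from fractions import Fraction

# Dividing a fraction by a whole number multiplies the denominator...
print(Fraction(10, 3) / 5)              # 2/3 (i.e. 10/15 reduced)
# ...and dividing by a fraction multiplies by its reciprocal.
print(Fraction(1, 2) / Fraction(3, 4))  # 2/3 (i.e. 1/2 * 4/3)
```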

Formulas for the arithmetic of fractions
Let a, b, c, and d be positive integers.

Two fractions a/b and c/d are equivalent if and only if ad = bc. They can be compared using the rule that a/b < c/d if and only if ad < bc.

Addition: a/b + c/d = (ad + bc)/bd. Subtraction: a/b − c/d = (ad − bc)/bd. Multiplication: a/b × c/d = ac/bd. Division: (a/b) ÷ (c/d) = ad/bc.

Converting between decimals and fractions
To change a common fraction to a decimal, divide the denominator into the numerator. Round the answer to the desired accuracy. For example, to change 1/4 to a decimal, divide 4 into 1.00, to obtain 0.25. To change 1/3 to a decimal, divide 3 into 1.0000..., and stop when the desired accuracy is obtained. Note that 1/4 can be written exactly with two decimal digits, while 1/3 cannot be written exactly with any finite number of decimal digits. To change a decimal to a fraction, write in the denominator a 1 followed by as many zeroes as there are digits to the right of the decimal point, and write in the numerator all the digits in the original decimal, omitting the decimal point. Thus 12.3456 = 123456/10000.
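The decimal-to-fraction rule ("a 1 followed by as many zeroes as there are digits to the right of the decimal point") can be sketched directly; the helper name is ours, and the input is assumed to be a terminating decimal string:

```python
def decimal_to_fraction(s):
    """Turn a terminating decimal string into (numerator, denominator):
    the denominator is 1 followed by one zero per digit after the point."""
    whole, _, frac = s.partition(".")
    return int(whole + frac), 10 ** len(frac)

print(decimal_to_fraction("12.3456"))  # (123456, 10000)
print(decimal_to_fraction("0.25"))     # (25, 100)
```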

Converting repeating decimals to fractions

Decimal numbers, while arguably more useful to work with when performing calculations, sometimes lack the precision that common fractions have: an infinite number of repeating decimal digits may be required to convey the same information exactly. Thus, it is often useful to convert repeating decimals into fractions. The preferred way to indicate a repeating decimal is to place a bar over the digits that repeat; for example, 0.789 with a bar over the 789 denotes 0.789789789…

For repeating patterns that begin immediately after the decimal point, dividing the pattern by the same number of nines as it has digits suffices (repeating digits shown with trailing dots): 0.555… = 5/9; 0.6262… = 62/99; 0.264264… = 264/999; 0.62916291… = 6291/9999.

In case leading zeros precede the pattern, the nines are suffixed by the same number of trailing zeros: 0.0555… = 5/90; 0.000392392… = 392/999000; 0.001212… = 12/9900.

In case a non-repeating set of decimals precedes the pattern (such as 0.1523987987…), we can write the number as the sum of the non-repeating and repeating parts, respectively: 0.1523 + 0.0000987987… Then, convert the repeating part to a fraction: 0.1523 + 987/9990000.
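The nines-and-zeros rule above can be sketched as a small Python helper (the function name and its argument convention, a non-repeating prefix plus a repeating block, are ours):

```python
from fractions import Fraction

def repeating_to_fraction(nonrep, rep):
    """Value of 0.<nonrep><rep><rep>... as an exact fraction.
    The repeating block is divided by as many nines as it has digits,
    suffixed with one zero per non-repeating digit."""
    nines = int("9" * len(rep) + "0" * len(nonrep))
    value = Fraction(int(rep), nines)
    if nonrep:  # add the non-repeating prefix, if any
        value += Fraction(int(nonrep), 10 ** len(nonrep))
    return value

print(repeating_to_fraction("", "5"))        # 5/9  (0.555...)
print(repeating_to_fraction("0", "5"))       # 1/18 (0.0555... = 5/90)
print(repeating_to_fraction("1523", "987"))  # 0.1523987987... as a fraction
```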


Continued fractions
A continued fraction is an expression such as

a0 + 1/(a1 + 1/(a2 + 1/(a3 + …)))

where the ai are integers. Every rational number a/b has two closely related expressions as a finite continued fraction, whose coefficients ai can be determined by applying the Euclidean algorithm to (a, b).
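The coefficients can be computed exactly as the text says, by reading off the quotients of the Euclidean algorithm; a sketch (function name ours):

```python
def continued_fraction(a, b):
    """Coefficients [a0, a1, ...] of a finite continued fraction for a/b,
    read off from the quotients of the Euclidean algorithm."""
    coeffs = []
    while b:
        q, r = divmod(a, b)
        coeffs.append(q)
        a, b = b, r
    return coeffs

print(continued_fraction(415, 93))  # [4, 2, 6, 7]
```

The second, closely related expansion mentioned in the text comes from rewriting the last coefficient: [4, 2, 6, 7] and [4, 2, 6, 6, 1] describe the same number, since 7 = 6 + 1/1.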

Algebraic fractions
An algebraic fraction is the indicated quotient of two algebraic expressions. Algebraic fractions are subject to the same laws as arithmetic fractions.

If the numerator and the denominator are polynomials, as in (x + 2)/(x^2 − 1), the algebraic fraction is called a rational fraction (or rational expression). An irrational fraction is one that contains the variable under a fractional exponent or root, as in √x/(x + 1).

The terminology used to describe algebraic fractions is similar to that used for ordinary fractions. For example, an algebraic fraction is in lowest terms if the only factor common to the numerator and the denominator is 1. An algebraic fraction whose numerator or denominator, or both, contains a fraction, such as (1 + 1/x)/(1 − 1/x), is called a complex fraction.

Rational numbers are the quotient field of the integers, and rational expressions are the quotient field of the polynomials (over some integral domain). Since a coefficient is a polynomial of degree zero, a radical expression such as √2/2 is a rational fraction. Another example (over the reals) is π/2, the radian measure of a right angle.

The term partial fraction is used when decomposing rational expressions. The goal is to write the rational expression as the sum of other rational expressions with denominators of lesser degree. For example, the rational expression 2x/(x^2 − 1) can be rewritten as the sum of two fractions: 1/(x + 1) and 1/(x − 1). This is useful in many areas such as integral calculus and differential equations.


Radical expressions
A fraction may also contain radicals in the numerator and/or the denominator. If the denominator contains radicals, it can be helpful to rationalize it (compare Simplified form of a radical expression), especially if further operations, such as adding or comparing that fraction to another, are to be carried out. It is also more convenient if division is to be done manually. When the denominator is a monomial square root, it can be rationalized by multiplying both the top and the bottom of the fraction by the denominator; for example, 5/√2 = (5 × √2)/(√2 × √2) = 5√2/2.

The process of rationalization of binomial denominators involves multiplying the top and the bottom of a fraction by the conjugate of the denominator, so that the denominator becomes a rational number. For example: 1/(1 + √2) = (1 − √2)/((1 + √2)(1 − √2)) = (1 − √2)/(1 − 2) = √2 − 1.

Even if this process results in the numerator being irrational, like in the examples above, the process may still facilitate subsequent manipulations by reducing the number of irrationals one has to work with in the denominator.

Pedagogical tools
In primary schools, fractions have been demonstrated through Cuisenaire rods, fraction bars, fraction strips, fraction circles, paper (for folding or cutting), pattern blocks, pie-shaped pieces, plastic rectangles, grid paper, dot paper, geoboards, counters and computer software.

The earliest fractions were reciprocals of integers: ancient symbols representing one part of two, one part of three, one part of four, and so on.[6] The Egyptians used Egyptian fractions ca. 1000 BC. About 4,000 years ago, Egyptians divided with fractions using slightly different methods: they used least common multiples with unit fractions, and their methods gave the same answer as modern methods.[7] The Greeks used unit fractions and later continued fractions, and followers of the Greek philosopher Pythagoras, ca. 530 BC, discovered that the square root of two cannot be expressed as a fraction. In 150 BC, Jain mathematicians in India wrote the "Sthananga Sutra", which contains work on the theory of numbers, arithmetical operations, and operations with fractions.

In Sanskrit literature, fractions, or rational numbers, were always expressed by an integer followed by a fraction. When the integer is written on a line, the fraction is placed below it and is itself written on two lines, the numerator, called amsa ("part"), on the first line and the denominator, called cheda ("divisor"), on the second below. If the fraction is

written without any particular additional sign, one understands that it is added to the integer above it. If it is marked by a small circle or a cross (the shape of the "plus" sign in the West) placed on its right, one understands that it is subtracted from the integer. For example, Bhaskara I writes[8]

६ १ २
१ १ १०
४ ५ ९

That is,

6 1 2
1 1 1°
4 5 9

where the small circle marks subtraction, to denote 6+1/4, 1+1/5, and 2−1/9.

Al-Hassār, a Muslim mathematician from Fez, Morocco, specializing in Islamic inheritance jurisprudence during the 12th century, first mentions the use of a fractional bar, where numerators and denominators are separated by a horizontal bar. In his discussion he writes, "..., for example, if you are told to write three-fifths and a third of a fifth, write thus, …".[9] This same fractional notation appears soon after in the work of Leonardo Fibonacci in the 13th century.[10]

In discussing the origins of decimal fractions, Dirk Jan Struik states (p. 7):[11]

"The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548-1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to arithmetic (Samarkand, early fifteenth century).[12]"

While the Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.[13] [14]

[1] Weisstein, Eric W. "Common Fraction". MathWorld. http://mathworld.wolfram.com/CommonFraction.html. Retrieved Sep 30, 2011.
[2] Galen, Leslie Blackwell (March 2004), "Putting Fractions in Their Place", American Mathematical Monthly 111 (3). http://www.integretechpub.com/research/papers/monthly238-242.pdf
[3] World Wide Words: Vulgar fractions. http://www.worldwidewords.org/qa/qa-vul1.htm
[4] Trotter, James (1853). A complete system of arithmetic. p. 65. http://books.google.com/books?id=a0sDAAAAQAAJ&pg=PA65
[5] Barlow, Peter (1814). A new mathematical and philosophical dictionary. http://books.google.com/books?id=BBowAAAAYAAJ&pg=PT329
[6] Eves, Howard; with cultural connections by Jamie H. (1990). An introduction to the history of mathematics (6th ed.). Philadelphia: Saunders College Pub. ISBN 0030295580.
[7] Milo Gardner (December 19, 2005). "Math History". http://egyptianmath.blogspot.com. Retrieved 2006-01-18. See for examples and an explanation.
[8] (Filliozat 2004, p. 152)
[9] Cajori, Florian (1928), A History of Mathematical Notations (Vol. 1), La Salle, Illinois: The Open Court Publishing Company, p. 269.
[10] (Cajori 1928, p. 89)
[11] D.J. Struik, A Source Book in Mathematics 1200-1800 (Princeton University Press, New Jersey, 1986). ISBN 0-691-02397-2.
[12] P. Luckey, Die Rechenkunst bei Ğamšīd b. Mas'ūd al-Kāšī (Steiner, Wiesbaden, 1951).
[13] Berggren, J. Lennart (2007). "Mathematics in Medieval Islam". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. p. 518. ISBN 9780691114859.
[14] While there is some disagreement among history of mathematics scholars as to the primacy of al-Uqlidisi's contribution, there is no question as to his major contribution to the concept of decimal fractions. "MacTutor's al-Uqlidisi biography". http://www-history.mcs.st-andrews.ac.uk/Biographies/Al-Uqlidisi.html. Retrieved Nov. 22, 2011.


External links
• "Fraction, arithmetical". The Online Encyclopaedia of Mathematics.
• Weisstein, Eric W., "Fraction" from MathWorld.
• "Fraction". Encyclopedia Britannica.
• "Fraction (mathematics)". Citizendium.
• "Fraction". PlanetMath.

Factorization

In mathematics, factorization (also factorisation in British English) or factoring is the decomposition of an object (for example, a number, a polynomial, or a matrix) into a product of other objects, or factors, which when multiplied together give the original. For example, the number 15 factors into primes as 3 × 5, and the polynomial x^2 − 4 factors as (x − 2)(x + 2). In all cases, a product of simpler objects is obtained.

A visual illustration of the polynomial x2 + cx + d = (x + a)(x + b) where a plus b equals c and a times b equals d.

The aim of factoring is usually to reduce something to "basic building blocks," such as numbers to prime numbers, or polynomials to irreducible polynomials. Factoring integers is covered by the fundamental theorem of arithmetic, and factoring polynomials by the fundamental theorem of algebra. Viète's formulas relate the coefficients of a polynomial to its roots. The opposite of polynomial factorization is expansion, the multiplying together of polynomial factors into an "expanded" polynomial, written as just a sum of terms.

Integer factorization for large integers appears to be a difficult problem: there is no known method to carry it out quickly. Its complexity is the basis of the assumed security of some public key cryptography algorithms, such as RSA.

A matrix can also be factorized into a product of matrices of special types, for an application in which that form is convenient. One major example of this uses an orthogonal or unitary matrix together with a triangular matrix; there are different types, such as the QR, LQ, QL, RQ, and RZ decompositions. Another example is the factorization of a function as the composition of other functions having certain properties; for example, every function can be viewed as the composition of a surjective function with an injective function. This situation is generalized by factorization systems.



By the fundamental theorem of arithmetic, every positive integer greater than 1 has a unique prime factorization. Given an algorithm for integer factorization, one can factor any integer down to its constituent primes by repeated application of this algorithm. For very large numbers, no efficient algorithm is known.
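Repeated application of the simplest factoring algorithm, trial division, looks like this (fine for small numbers; as the text notes, no efficient algorithm is known for very large ones):

```python
def prime_factors(n):
    """Prime factorization of n > 1 by repeated trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out d as often as possible
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is prime
        factors.append(n)
    return factors

print(prime_factors(15))   # [3, 5]
print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```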

Quadratic polynomials
Any quadratic polynomial over the complex numbers, ax^2 + bx + c with a, b, c ∈ C and a ≠ 0, can be factored into an expression of the form a(x − α)(x − β), where α and β are the two roots of the polynomial, found with the quadratic formula:

α, β = (−b ± √(b^2 − 4ac)) / (2a).

Polynomials factorable over the integers



A quadratic polynomial ax^2 + bx + c with integer coefficients can sometimes be written as a product of two binomials with integer coefficients, (mx + p)(nx + q), where mn = a, pq = c, and pn + mq = b. You can then set each binomial equal to zero, and solve for x to reveal the two roots. Factoring does not involve any other formulas, and is mostly just something you see when you come upon a quadratic equation.

Take for example 2x^2 − 5x + 2 = 0. Because a = 2 and mn = a, mn = 2, which means that of m and n, one is 1 and the other is 2. Now we have (2x + p)(x + q) = 0. Because c = 2 and pq = c, pq = 2, which means that of p and q, one is 1 and the other is 2, or one is −1 and the other is −2. A guess and check of substituting 1 and 2, and −1 and −2, into p and q (while applying pn + mq = b) tells us that 2x^2 − 5x + 2 = 0 factors into (2x − 1)(x − 2) = 0, giving us the roots x = {0.5, 2}.

Note: A quick way to check whether the second term in each binomial should be positive or negative (in the example, 1 and 2, or −1 and −2) is to check the second operation in the trinomial (+ or −). If it is +, then check the first operation: if it is +, the terms will be positive, while if it is −, the terms will be negative. If the second operation is −, there will be one positive and one negative term; guess and check is the only way to determine which one is positive and which is negative.

If a quadratic polynomial with integer coefficients has a discriminant that is a perfect square, that polynomial is factorable over the integers. For example, look at the polynomial 2x^2 + 2x − 12. If you substitute the values of the expression into the quadratic formula, the discriminant b^2 − 4ac becomes 2^2 − 4 × 2 × (−12), which equals 100. 100 is a perfect square, so the polynomial 2x^2 + 2x − 12 is factorable over the integers; its factors are 2, (x − 2), and (x + 3). Now look at the polynomial x^2 + 93x − 2. Its discriminant, 93^2 − 4 × 1 × (−2), is equal to 8657, which is not a perfect square. So x^2 + 93x − 2 cannot be factored over the integers.
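The perfect-square test on the discriminant is easy to automate (a sketch of the criterion stated above; math.isqrt is standard in Python 3.8+, and the function name is ours):

```python
from math import isqrt

def factorable_over_integers(a, b, c):
    """Apply the text's criterion: a*x**2 + b*x + c (integer a, b, c)
    is factorable over the integers when b*b - 4*a*c is a perfect square."""
    disc = b * b - 4 * a * c
    return disc >= 0 and isqrt(disc) ** 2 == disc

print(factorable_over_integers(2, 2, -12))  # True:  discriminant 100 = 10**2
print(factorable_over_integers(1, 93, -2))  # False: discriminant 8657
```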

Perfect square trinomials

Some quadratics can be factored into two identical binomials; these quadratics are called perfect square trinomials. Perfect square trinomials can be factored as follows:

a^2 + 2ab + b^2 = (a + b)^2
a^2 − 2ab + b^2 = (a − b)^2


A visual illustration of the identity (a + b)2 = a2 + 2ab + b2


Sum/difference of two squares

Another common type of algebraic factoring is called the difference of two squares. It is the application of the formula

a^2 − b^2 = (a − b)(a + b)

to any two terms, whether or not they are perfect squares. If the two terms are subtracted, simply apply the formula. If they are added, the two binomials obtained from the factoring will each have an imaginary term. This formula can be represented as

a^2 + b^2 = (a − bi)(a + bi).

For example, 4x^2 + 49 can be factored into (2x − 7i)(2x + 7i).

Factoring by grouping
Another way to factor some polynomials is factoring by grouping. For those who like algorithms, "factoring by grouping" may be the best way to approach factoring a trinomial, as it takes the guess work out of the process. Factoring by grouping is done by placing the terms in the polynomial into two or more groups, where each group can be factored by a known method. The results of these factorizations can sometimes be combined to make an even more simplified expression. For example, to factor the polynomial x^3 + 3x^2 + 2x + 6:

Group similar terms: (x^3 + 3x^2) + (2x + 6)
Factor out the greatest common factor of each group: x^2(x + 3) + 2(x + 3)
Factor out the common binomial: (x + 3)(x^2 + 2)

AC method

If a quadratic polynomial ax^2 + bx + c has rational solutions, we can find integers p and q so that pq = ac and p + q = b. (If the discriminant is a square number these exist; otherwise we have irrational or complex solutions, and the assumption of rational solutions is not valid.) The quadratic can then be written

ax^2 + bx + c = (ax + p)(ax + q) / a.

The terms on top will have common factors that can be factored out and used to cancel the denominator, if it is not 1. As an example, consider the quadratic polynomial 4x^2 + 13x + 9. Inspection of the factors of ac = 36 leads to 4 + 9 = 13 = b, so

4x^2 + 13x + 9 = (4x + 4)(4x + 9) / 4 = (x + 1)(4x + 9).
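Finding the pair p, q with pq = ac and p + q = b can be done by brute force over the divisors of ac (an illustrative sketch; the function name is ours):

```python
def ac_split(a, b, c):
    """Find integers p, q with p*q = a*c and p + q = b (the AC method),
    or None when no such pair exists."""
    ac = a * c
    for p in range(-abs(ac), abs(ac) + 1):
        if p != 0 and ac % p == 0 and p + ac // p == b:
            return p, ac // p
    return None

print(ac_split(4, 13, 9))  # (4, 9):  4 * 9 = 36 = ac and 4 + 9 = 13 = b
print(ac_split(2, -5, 2))  # (-4, -1) for 2x^2 - 5x + 2
```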

Factoring other polynomials
Sum/difference of two cubes

Another formula for factoring is the sum or difference of two cubes. The sum can be represented by

a^3 + b^3 = (a + b)(a^2 − ab + b^2),

and the difference by

a^3 − b^3 = (a − b)(a^2 + ab + b^2).

Difference of two fourth powers

Another formula is the difference of two fourth powers, which is

a^4 − b^4 = (a^2 + b^2)(a^2 − b^2) = (a^2 + b^2)(a + b)(a − b).

Sum/difference of two fifth powers

Another formula for factoring is the sum or difference of two fifth powers. The sum can be represented by

a^5 + b^5 = (a + b)(a^4 − a^3 b + a^2 b^2 − a b^3 + b^4),

and the difference by

a^5 − b^5 = (a − b)(a^4 + a^3 b + a^2 b^2 + a b^3 + b^4).

Sum/difference of two sixth powers

Then there's the formula for factoring the sum or difference of two sixth powers. The sum can be represented by

a^6 + b^6 = (a^2 + b^2)(a^4 − a^2 b^2 + b^4),

and the difference by

a^6 − b^6 = (a + b)(a^2 − ab + b^2)(a − b)(a^2 + ab + b^2).

Sum/difference of two seventh powers

And last there's the formula for factoring the sum or difference of two seventh powers. The sum can be represented by

a^7 + b^7 = (a + b)(a^6 − a^5 b + a^4 b^2 − a^3 b^3 + a^2 b^4 − a b^5 + b^6),

and the difference by

a^7 − b^7 = (a − b)(a^6 + a^5 b + a^4 b^2 + a^3 b^3 + a^2 b^4 + a b^5 + b^6).

Difference of nth powers

This factorization can be extended to any positive integer power n by use of the geometric series. By noting that

1 + x + x^2 + … + x^(n−1) = (x^n − 1)/(x − 1)

and multiplying by the (x − 1) factor, the desired result x^n − 1 = (x − 1)(x^(n−1) + … + x + 1) is found. To give the general form as above, we can replace x by a/b and multiply both sides by b^n. This gives the general form for the difference of two nth powers as

a^n − b^n = (a − b)(a^(n−1) + a^(n−2) b + … + a b^(n−2) + b^(n−1)).

The corresponding sum of two nth powers depends on whether n is even or odd. If n is odd, b can be replaced by −b in the above formula. If n is even, the form is somewhat more tedious.

External links
• One hundred million numbers factored on html pages: http://factors.evalwave.com/
• A page about factorization, Algebra, Factoring: http://library.thinkquest.org/20991/alg/factoring.html?tqskip1=1
• WIMS Factoris, an online factorization tool: http://wims.unice.fr/wims/wims.cgi?module=tool/algebra/factor.en
• Wolfram Alpha can factorize too: http://www.wolframalpha.com/input/?i=Factor%20-2006+%2B+1155+x+-+78+x^2+%2B+x^3


Distributivity

In mathematics, and in particular in abstract algebra, distributivity is a property of binary operations that generalizes the distributive law from elementary algebra. For example: 2 × (1 + 3) = (2 × 1) + (2 × 3). On the left-hand side of this equation, the 2 multiplies the sum of 1 and 3; on the right-hand side, it multiplies the 1 and the 3 individually, with the products added afterwards. Because these give the same final answer (8), we say that multiplication by 2 distributes over addition of 1 and 3. Since we could have put any real numbers in place of 2, 1, and 3 above, and still have obtained a true equation, we say that multiplication of real numbers distributes over addition of real numbers.

Given a set S and two binary operators · and + on S, we say that the operation ·
• is left-distributive over + if, given any elements x, y, and z of S, x · (y + z) = (x · y) + (x · z);
• is right-distributive over + if, given any elements x, y, and z of S, (y + z) · x = (y · x) + (z · x);
• is distributive over + if it is left- and right-distributive.[1]
Notice that when · is commutative, the three conditions above are logically equivalent.

1. Multiplication of numbers is distributive over addition of numbers, for a broad class of different kinds of numbers ranging from natural numbers to complex numbers and cardinal numbers.
2. Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive.
3. The cross product is left- and right-distributive over vector addition, though not commutative.
4. Matrix multiplication is distributive over matrix addition, though also not commutative.
5. The union of sets is distributive over intersection, and intersection is distributive over union.
6. Logical disjunction ("or") is distributive over logical conjunction ("and"), and conjunction is distributive over disjunction.
7. For real numbers (or for any totally ordered set), the maximum operation is distributive over the minimum operation, and vice versa: max(a, min(b,c)) = min(max(a,b), max(a,c)) and min(a, max(b,c)) = max(min(a,b), min(a,c)).
8. For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd(a, lcm(b,c)) = lcm(gcd(a,b), gcd(a,c)) and lcm(a, gcd(b,c)) = gcd(lcm(a,b), lcm(a,c)).
9. For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a + max(b,c) = max(a+b, a+c) and a + min(b,c) = min(a+b, a+c).
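Several of the identities in examples 7 through 9 can be spot-checked in Python (one sample triple only, not a proof; math.lcm needs Python 3.9+):

```python
from math import gcd, lcm  # lcm requires Python 3.9+

a, b, c = 12, 8, 30
# gcd distributes over lcm, and vice versa (example 8):
assert gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))
assert lcm(a, gcd(b, c)) == gcd(lcm(a, b), lcm(a, c))
# max distributes over min (example 7), and addition over min (example 9):
assert max(a, min(b, c)) == min(max(a, b), max(a, c))
assert a + min(b, c) == min(a + b, a + c)
print("all four identities hold for", (a, b, c))
```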



Distributivity and rounding
In practice, the distributive property of multiplication (and division) over addition may appear to be compromised or lost because of the limitations of arithmetic precision. For example, the identity ⅓ + ⅓ + ⅓ = (1+1+1)/3 appears to fail if the addition is conducted in decimal arithmetic; however, the more significant digits are used, the closer the calculation comes to the correct result. For example, the arithmetical calculation 0.33333 + 0.33333 + 0.33333 = 0.99999 ≠ 1 is a closer approximation than one using fewer significant digits. Even when fractional numbers can be represented exactly in arithmetical form, errors will be introduced if those arithmetical values are rounded or truncated. For example, buying two books, each priced at £14.99 before a tax of 17.5%, in two separate transactions will actually save £0.01 over buying them together: £14.99 × 1.175 = £17.61 to the nearest £0.01, giving a total expenditure of £35.22, but £29.98 × 1.175 = £35.23. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
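The two-books example can be reproduced with Python's decimal module (we assume conventional half-up rounding to the nearest penny; the variable names are ours):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_pence(x):
    """Round a Decimal amount to the nearest 0.01, halves up."""
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

price, rate = Decimal("14.99"), Decimal("1.175")
separately = round_pence(price * rate) * 2  # rounded once per transaction
together = round_pence(price * 2 * rate)    # rounded only once
print(separately, together)                 # 35.22 35.23
```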

Distributivity in rings
Distributivity is most commonly found in rings and distributive lattices. A ring has two binary operations (commonly called "+" and "*"), and one of the requirements of a ring is that * must distribute over +. Most kinds of numbers (example 1) and matrices (example 4) form rings. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations (say ∧) distributes over the other (∨), then ∨ must also distribute over ∧, and the lattice is called distributive. See also the article on distributivity (order theory).

Examples 5 and 6 are Boolean algebras, which can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Examples 7 and 8 are distributive lattices which are not Boolean algebras.

Rings and distributive lattices are both special kinds of rigs, which are certain generalizations of rings. Those numbers in example 1 that don't form rings at least form rigs. Near-rigs are a further generalization of rigs that are left-distributive but not right-distributive; example 2 is a near-rig.

Generalizations of distributivity
In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law, while others are defined in the presence of only one binary operation; the corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.

In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.

In category theory, if (S, μ, η) and (S′, μ′, η′) are monads on a category C, a distributive law S.S′ → S′.S is a natural transformation λ : S.S′ → S′.S such that (S′, λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S′ → S′. This is exactly the data needed to define a monad structure on S′.S: the multiplication map is S′μ.μ′S².S′λS and the unit map is η′S.η. See: distributive law between monads.



Notes
[1] Ayres, p. 20 (http://books.google.com/books?id=7r3bVWx2GHkC&pg=PA20&dq="binary+operation"+"Left+distributive"#PPA20,M1).

References
• Ayres, Frank, Schaum's Outline of Modern Abstract Algebra, McGraw-Hill; 1st edition (June 1, 1965). ISBN 0070026556.

External links
• A demonstration of the Distributive Law ( DistributiveLaw.shtml) for integer arithmetic (from cut-the-knot)

Distribution
In mathematical analysis, distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative. Distributions are widely used to formulate generalized solutions of partial differential equations. Where a classical solution may not exist or may be very difficult to establish, a distributional solution to a differential equation is often much easier to obtain. Distributions are also important in physics and engineering, where many problems naturally lead to differential equations whose solutions or initial conditions are distributions, such as the Dirac delta distribution. Generalized functions were introduced by Sergei Sobolev in 1935. They were re-introduced in the late 1940s by Laurent Schwartz, who developed a comprehensive theory of distributions.

Basic idea

A typical test function, the bump function Ψ(x). It is smooth (infinitely differentiable) and has compact support (is zero outside an interval, in this case the interval [-1, 1]).

Distributions are a class of linear functionals that map a set of test functions (conventional and well-behaved functions) onto the set of real numbers. In the simplest case, the set of test functions considered is D(R), which is the set of functions from R to R having two properties:
• The function is smooth (infinitely differentiable);
• The function has compact support (is identically zero outside some interval).

A distribution d is then a linear mapping from D(R) to R. Instead of writing d(φ), where φ is a test function in D(R), it is conventional to write ⟨d, φ⟩. A simple example of a distribution is the Dirac delta δ, defined by

    ⟨δ, φ⟩ = φ(0).
There are straightforward mappings from both locally integrable functions and probability distributions to corresponding distributions, as discussed below. However, not all distributions can be formed in this manner.

Suppose that f : R → R is a locally integrable function, and let φ be a test function in D(R). We can then define a corresponding distribution Tf by:

    ⟨Tf, φ⟩ = ∫R f(x) φ(x) dx.

This integral is a real number which linearly and continuously depends on φ. This suggests the requirement that a distribution should be linear and continuous over the space of test functions D(R), which completes the definition. In a conventional abuse of notation, f may be used to represent both the original function f and the distribution Tf derived from it. Similarly, if P is a probability distribution on the reals and φ is a test function, then a corresponding distribution TP may be defined by:

    ⟨TP, φ⟩ = ∫R φ(x) dP(x).
Again, this integral continuously and linearly depends on φ, so that TP is in fact a distribution. Such distributions may be multiplied with real numbers and can be added together, so they form a real vector space. In general it is not possible to define a multiplication for distributions, but distributions may be multiplied with infinitely differentiable functions.

It is desirable to choose a definition for the derivative of a distribution which, at least for distributions derived from locally integrable functions, has the property that (Tf)' = Tf'. If φ is a test function, we can show that

    ⟨Tf', φ⟩ = ∫R f'(x) φ(x) dx = −∫R f(x) φ'(x) dx = −⟨Tf, φ'⟩,

using integration by parts and noting that the boundary term f(x)φ(x) vanishes, since φ is zero outside of a bounded set. This suggests that if S is a distribution, we should define its derivative S' by

    ⟨S', φ⟩ = −⟨S, φ'⟩.

It turns out that this is the proper definition; it extends the ordinary definition of derivative, every distribution becomes infinitely differentiable, and the usual properties of derivatives hold.

Example: Recall that the Dirac delta (so-called Dirac delta function) is the distribution defined by

    ⟨δ, φ⟩ = φ(0).

It is the derivative of the distribution TH corresponding to the Heaviside step function H: for any test function φ,

    ⟨TH', φ⟩ = −⟨TH, φ'⟩ = −∫0∞ φ'(x) dx = φ(0) − φ(∞) = φ(0) = ⟨δ, φ⟩,

so TH' = δ. Note that φ(∞) = 0 because of compact support. Similarly, the derivative of the Dirac delta is the distribution

    ⟨δ', φ⟩ = −φ'(0).
This latter distribution is our first example of a distribution which is derived from neither a function nor a probability distribution.
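The distributional derivative of the Heaviside step function can be checked numerically. The sketch below (a minimal illustration, using the standard bump function as test function and a midpoint quadrature as a convenience) approximates the pairing −⟨H, φ'⟩ and compares it with φ(0):

```python
import math

# Bump test function φ(x) = exp(-1/(1 - x²)) on (-1, 1), zero elsewhere,
# together with its classical derivative; both are smooth with compact support.
def phi(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def dphi(x):
    return phi(x) * (-2.0 * x / (1.0 - x * x) ** 2) if abs(x) < 1 else 0.0

def heaviside(x):
    return 1.0 if x >= 0 else 0.0

# ⟨H', φ⟩ := -⟨H, φ'⟩ = -∫ H(x) φ'(x) dx, approximated by a midpoint sum.
n, a, b = 200_000, -1.0, 1.0
h = (b - a) / n
pairing = -sum(heaviside(x) * dphi(x)
               for x in (a + (k + 0.5) * h for k in range(n))) * h

print(pairing, phi(0.0))  # both ≈ e⁻¹ ≈ 0.36788
```

The two numbers agree to within the quadrature error, matching ⟨H', φ⟩ = φ(0) = ⟨δ, φ⟩.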



Test functions and distributions
In the sequel, real-valued distributions on an open subset U of Rn will be formally defined. With minor modifications, one can also define complex-valued distributions, and one can replace Rn by any (paracompact) smooth manifold. The first object to define is the space D(U) of test functions on U. Once this is defined, it is then necessary to equip it with a topology by defining the limit of a sequence of elements of D(U). The space of distributions will then be given as the space of continuous linear functionals on D(U).

Test function space
The space D(U) of test functions on U is defined as follows. A function φ : U → R is said to have compact support if there exists a compact subset K of U such that φ(x) = 0 for all x in U \ K. The elements of D(U) are the infinitely differentiable functions φ : U → R with compact support – also known as bump functions. This is a real vector space. It can be given a topology by defining the limit of a sequence of elements of D(U). A sequence (φk) in D(U) is said to converge to φ ∈ D(U) if the following two conditions hold (Gelfand & Shilov 1966–1968, v. 1, §1.2):
• There is a compact set K ⊂ U containing the supports of all φk: supp(φk) ⊆ K for every k;
• For each multi-index α, the sequence of partial derivatives Dαφk tends uniformly to Dαφ.

With this definition, D(U) becomes a complete locally convex topological vector space satisfying the Heine–Borel property (Rudin 1991, §6.4–5). If Ui is a countable nested family of open subsets of U with compact closures Ki, then

    D(U) = ∪i DKi,

where DKi is the set of all smooth functions with support lying in Ki. The topology on D(U) is the final topology of the family of nested metric spaces DKi, and so D(U) is an LF-space. The topology is not metrizable, by the Baire category theorem, since D(U) is the union of subspaces of the first category in D(U) (Rudin 1991, §6.9).

Distributions
A distribution on U is a linear functional S : D(U) → R (or S : D(U) → C) such that

    S(φn) → S(φ)

for any sequence (φn) converging in D(U) to φ. The space of all distributions on U is denoted by D'(U). Equivalently, the vector space D'(U) is the continuous dual space of the topological vector space D(U). The dual pairing between a distribution S in D′(U) and a test function φ in D(U) is denoted using angle brackets thus:

    ⟨S, φ⟩ = S(φ).

Equipped with the weak-* topology, the space D'(U) is a locally convex topological vector space. In particular, a sequence (Sk) in D'(U) converges to a distribution S if and only if ⟨Sk, φ⟩ → ⟨S, φ⟩ for all test functions φ. This is the case if and only if Sk converges uniformly to S on all bounded subsets of D(U). (A subset E of D(U) is bounded if there exists a compact subset K of U and numbers dn such that every φ in E has its support in K and has its n-th derivatives bounded by dn.)



Functions as distributions
The function ƒ : U → R is called locally integrable if it is Lebesgue integrable over every compact subset K of U. This is a large class of functions which includes all continuous functions and all Lp functions. The topology on D(U) is defined in such a fashion that any locally integrable function ƒ yields a continuous linear functional on D(U) – that is, an element of D′(U) – denoted here by Tƒ, whose value on the test function φ is given by the Lebesgue integral:

    ⟨Tƒ, φ⟩ = ∫U ƒφ dx.

Conventionally, one abuses notation by identifying Tƒ with ƒ, provided no confusion can arise, and thus the pairing between ƒ and φ is often written

    ⟨ƒ, φ⟩ = ⟨Tƒ, φ⟩.

If ƒ and g are two locally integrable functions, then the associated distributions Tƒ and Tg are equal to the same element of D'(U) if and only if ƒ and g are equal almost everywhere (see, for instance, Hörmander (1983, Theorem 1.2.5)). In a similar manner, every Radon measure μ on U defines an element of D'(U) whose value on the test function φ is ∫φ dμ. As above, it is conventional to abuse notation and write the pairing between a Radon measure μ and a test function φ as ⟨μ, φ⟩. Conversely, essentially by the Riesz representation theorem, every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.

The test functions are themselves locally integrable, and so define distributions. As such they are dense in D'(U) with respect to the topology on D'(U), in the sense that for any distribution S ∈ D'(U), there is a sequence φn ∈ D(U) such that

    ⟨φn, ψ⟩ → ⟨S, ψ⟩
for all ψ ∈ D(U). This follows at once from the Hahn–Banach theorem, since by an elementary fact about weak topologies the dual of D'(U) with its weak-* topology is the space D(U) (Rudin 1991, Theorem 3.10). This can also be proven more constructively by a convolution argument.

Operations on distributions
Many operations which are defined on smooth functions with compact support can also be defined for distributions. In general, if

    T : D(U) → D(U)

is a linear mapping of vector spaces which is continuous with respect to the weak-* topology, then it is possible to extend T to a mapping

    T : D'(U) → D'(U)

by passing to the limit. (This approach works for more general non-linear mappings as well, provided they are assumed to be uniformly continuous.)

In practice, however, it is more convenient to define operations on distributions by means of the transpose (or adjoint transformation) (Strichartz 1994, §2.3; Trèves 1967). If T : D(U) → D(U) is a continuous linear operator, then the transpose is an operator T* : D(U) → D(U) such that

    ⟨Tφ, ψ⟩ = ⟨φ, T*ψ⟩

for all φ, ψ ∈ D(U). If such an operator T* exists, and is continuous, then the original operator T may be extended to distributions by defining

    ⟨TS, φ⟩ = ⟨S, T*φ⟩.

Differentiation
If T : D(U) → D(U) is given by the partial derivative

    Tφ = ∂φ/∂xk,

then by integration by parts, if φ and ψ are in D(U),

    ⟨Tφ, ψ⟩ = ∫U (∂φ/∂xk) ψ dx = −∫U φ (∂ψ/∂xk) dx = −⟨φ, Tψ⟩,

so that T* = −T. This is a continuous linear transformation D(U) → D(U). So, if S ∈ D'(U) is a distribution, then the partial derivative of S with respect to the coordinate xk is defined by the formula

    ⟨∂S/∂xk, φ⟩ = −⟨S, ∂φ/∂xk⟩

for all test functions φ. In this way, every distribution is infinitely differentiable, and the derivative in the direction xk is a linear operator on D′(U). In general, if α = (α1, ..., αn) is an arbitrary multi-index and ∂α denotes the associated mixed partial derivative operator, the mixed partial derivative ∂αS of the distribution S ∈ D′(U) is defined by

    ⟨∂αS, φ⟩ = (−1)|α| ⟨S, ∂αφ⟩.
Differentiation of distributions is a continuous operator on D'(U); this is an important and desirable property that is not shared by most other notions of differentiation.

Multiplication by a smooth function
If m : U → R is an infinitely differentiable function and S is a distribution on U, then the product mS is defined by (mS)(φ) = S(mφ) for all test functions φ. This definition coincides with the transpose transformation of

    Tm : φ ↦ mφ

for φ ∈ D(U). Then, for any test function ψ,

    ⟨Tmφ, ψ⟩ = ∫U m(x) φ(x) ψ(x) dx = ⟨φ, Tmψ⟩,

so that Tm* = Tm. Multiplication of a distribution S by the smooth function m is therefore defined by

    ⟨mS, φ⟩ = ⟨S, mφ⟩.

Under multiplication by smooth functions, D'(U) is a module over the ring C∞(U). With this definition of multiplication by a smooth function, the ordinary product rule of calculus remains valid. However, a number of unusual identities also arise. For example, the Dirac delta distribution δ is defined on R by ⟨δ, φ⟩ = φ(0), so that mδ = m(0)δ. Its derivative is given by ⟨δ', φ⟩ = −⟨δ, φ'⟩ = −φ'(0). But the product mδ' of m and δ' is the distribution

    mδ' = m(0)δ' − m'(0)δ.
This definition of multiplication also makes it possible to define the operation of a linear differential operator with smooth coefficients on a distribution. A linear differential operator takes a distribution S ∈ D'(U) to another distribution given by a sum of the form

    PS = Σ|α|≤k pα ∂αS,

where the coefficients pα are smooth functions in U. If P is a given differential operator, then the minimum integer k for which such an expansion holds for every distribution S is called the order of P. The transpose of P is given by

    ⟨PS, φ⟩ = ⟨S, P*φ⟩,  where  P*φ = Σ|α|≤k (−1)|α| ∂α(pα φ).
The space D'(U) is a D-module with respect to the action of the ring of linear differential operators.
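The unusual product-rule identity for the delta's derivative, mδ' = m(0)δ' − m'(0)δ, can be checked numerically by approximating δ with a narrow Gaussian δε. The sketch below is illustrative only: the multiplier m(x) = 1 + x is an arbitrary choice, and a Gaussian stands in for a compactly supported test function:

```python
import math

eps = 1e-3                         # width of the Gaussian approximating δ

def delta(x):                      # δ_ε(x): narrow Gaussian, → δ as ε → 0
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def ddelta(x):                     # classical derivative of δ_ε
    return -x / (eps * eps) * delta(x)

def m(x):                          # smooth multiplier with m(0) = 1, m'(0) = 1
    return 1.0 + x

def phi(x):                        # rapidly decaying test function (assumption:
    return math.exp(-x * x)        # a Gaussian stands in for a bump function)

# Left side: ⟨m δ', φ⟩ ≈ ∫ m(x) δ_ε'(x) φ(x) dx  (midpoint rule)
n, L = 100_000, 0.02
h = 2 * L / n
lhs = sum(m(x) * ddelta(x) * phi(x)
          for x in (-L + (k + 0.5) * h for k in range(n))) * h

# Right side: m(0)⟨δ', φ⟩ − m'(0)⟨δ, φ⟩ = −m(0)φ'(0) − m'(0)φ(0) = −1
rhs = -1.0 * 0.0 - 1.0 * 1.0
print(lhs, rhs)   # lhs ≈ rhs = -1.0
```

As ε → 0 the left side converges to −(mφ)'(0), which is exactly what the identity predicts.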


Composition with a smooth function
Let S be a distribution on an open set U ⊂ Rn. Let V be an open set in Rn, and F : V → U. Then, provided F is a submersion, it is possible to define

    S ∘ F ∈ D'(V).

This is the composition of the distribution S with F, and is also called the pullback of S along F, sometimes written F#S. The pullback is often denoted F*, but this notation risks confusion with the above use of '*' to denote the transpose of a linear mapping.

The condition that F be a submersion is equivalent to the requirement that the Jacobian derivative dF(x) of F is a surjective linear map for every x ∈ V. A necessary (but not sufficient) condition for extending F# to distributions is that F be an open mapping (Hörmander 1983, Theorem 6.1.1). The inverse function theorem ensures that a submersion satisfies this condition. If F is a submersion, then F# is defined on distributions by finding the transpose map. Uniqueness of this extension is guaranteed since F# is a continuous linear operator on D(U). Existence, however, requires using the change of variables formula, the inverse function theorem (locally) and a partition of unity argument; see Hörmander (1983, Theorem 6.1.2).

In the special case when F is a diffeomorphism from an open subset V of Rn onto an open subset U of Rn, change of variables under the integral gives

    ∫U φ(F−1(x)) ψ(x) dx = ∫V φ(x) ψ(F(x)) |det dF(x)| dx.

In this particular case, then, F# is defined by the transpose formula:

    ⟨F#S, φ⟩ = ⟨S, |det d(F−1)| φ ∘ F−1⟩.
Localization of distributions
There is no way to define the value of a distribution in D'(U) at a particular point of U. However, as is the case with functions, distributions on U restrict to give distributions on open subsets of U. Furthermore, distributions are locally determined in the sense that a distribution on all of U can be assembled from a distribution on an open cover of U satisfying some compatibility conditions on the overlap. Such a structure is known as a sheaf.

Let U and V be open subsets of Rn with V ⊂ U. Let EVU : D(V) → D(U) be the operator which extends by zero a given smooth function compactly supported in V to a smooth function compactly supported in the larger set U. Then the restriction mapping ρVU is defined to be the transpose of EVU. Thus for any distribution S ∈ D'(U), the restriction ρVU S is a distribution in the dual space D'(V) defined by

    ⟨ρVU S, φ⟩ = ⟨S, EVU φ⟩

for all test functions φ ∈ D(V).

Unless U = V, the restriction to V is neither injective nor surjective. Lack of surjectivity follows since distributions can blow up towards the boundary of V. For instance, if U = R and V = (0, 2), then the distribution

    S(x) = Σn=1∞ n δ(x − 1/n)

is in D'(V) but admits no extension to D'(U).



Support of a distribution
Let S ∈ D′(U) be a distribution on an open set U. Then S is said to vanish on an open set V of U if S lies in the kernel of the restriction map ρVU. Explicitly, S vanishes on V if ⟨S, φ⟩ = 0 for all test functions φ ∈ C∞(U) with support in V. Let V be a maximal open set on which the distribution S vanishes; i.e., V is the union of every open set on which S vanishes. The support of S is the complement of V in U. Thus

    supp(S) = U \ V.

The distribution S has compact support if its support is a compact set. Explicitly, S has compact support if there is a compact subset K of U such that for every test function φ whose support is completely outside of K, we have S(φ) = 0. Compactly supported distributions define continuous linear functionals on the space C∞(U); the topology on C∞(U) is defined such that a sequence of test functions φk converges to 0 if and only if all derivatives of φk converge uniformly to 0 on every compact subset of U. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support.

Tempered distributions and Fourier transform
By using a larger space of test functions, one can define the tempered distributions, a subspace of D'(Rn). These distributions are useful if one studies the Fourier transform in generality: all tempered distributions have a Fourier transform, but not all distributions have one. The space of test functions employed here, the so-called Schwartz space S(Rn), is the function space of all infinitely differentiable functions that are rapidly decreasing at infinity along with all partial derivatives. Thus φ : Rn → R is in the Schwartz space provided that any derivative of φ, multiplied with any power of |x|, converges towards 0 for |x| → ∞. These functions form a complete topological vector space with a suitably defined family of seminorms. More precisely, let

    pα,β(φ) = supx∈Rn |xα Dβφ(x)|

for α, β multi-indices of size n. Then φ is a Schwartz function if all the values

    pα,β(φ) < ∞.

The family of seminorms pα,β defines a locally convex topology on the Schwartz space. The seminorms are, in fact, norms on the Schwartz space, since Schwartz functions are smooth. The Schwartz space is metrizable and complete. Because the Fourier transform changes differentiation ∂α into multiplication by xα and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function.

The space of tempered distributions is defined as the (continuous) dual of the Schwartz space. In other words, a distribution F is a tempered distribution if and only if

    limm→∞ F(φm) = 0

is true whenever

    limm→∞ pα,β(φm) = 0
holds for all multi-indices α, β. The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. All locally integrable functions ƒ with at most polynomial growth, i.e. such that ƒ(x) = O(|x|r) for some r, are tempered distributions. This includes all functions in Lp(Rn) for p ≥ 1.
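The seminorms pα,β can be estimated numerically. The sketch below (illustrative only; the grid width and window are arbitrary choices, and only the β = 0 seminorms are sampled) shows that a Gaussian has finite Schwartz seminorms, while 1/(1 + x²) fails because |x|³/(1 + x²) is unbounded:

```python
import math

# p_{α,0}(φ) = sup_x |x^α φ(x)|, estimated on a grid, for the Gaussian
# φ(x) = e^{-x²}, which lies in the Schwartz space.
def phi(x):
    return math.exp(-x * x)

def seminorm(alpha, f, lo=-20.0, hi=20.0, n=200_000):
    """Grid estimate of sup |x^alpha * f(x)| over [lo, hi]."""
    h = (hi - lo) / n
    return max(abs((lo + k * h) ** alpha * f(lo + k * h)) for k in range(n + 1))

p50 = seminorm(5, phi)                        # finite: the Gaussian wins
q30 = seminorm(3, lambda x: 1 / (1 + x * x))  # keeps growing as the window widens
print(p50, q30)
```

For the Gaussian the supremum is attained at a fixed interior point, so widening the window does not change it; for 1/(1 + x²) the grid estimate is always attained at the window's edge, signalling that the true supremum is infinite.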

The tempered distributions can also be characterized as slowly growing: this characterization is dual to the rapidly falling behaviour of the test functions.

To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform F then yields an automorphism of the Schwartz function space, and we can define the Fourier transform of the tempered distribution S by (FS)(ψ) = S(Fψ) for every test function ψ. FS is thus again a tempered distribution. The Fourier transform is a continuous, linear, bijective operator from the space of tempered distributions to itself. This operation is compatible with differentiation in the sense that

    F(dS/dx) = ixFS,

and also with convolution: if S is a tempered distribution and ψ is a slowly increasing infinitely differentiable function on Rn (meaning that all derivatives of ψ grow at most as fast as polynomials), then Sψ is again a tempered distribution and

    F(Sψ) = FS ∗ Fψ

is the convolution of FS and Fψ. In particular, the Fourier transform of the unity function is the δ distribution.

Convolution
Under some circumstances, it is possible to define the convolution of a function with a distribution, or even the convolution of two distributions.

Convolution of a test function with a distribution
If ƒ ∈ D(Rn) is a compactly supported smooth test function, then convolution with ƒ defines an operator

    Cƒ : D(Rn) → D(Rn)

defined by Cƒg = ƒ∗g, which is linear (and continuous with respect to the LF-space topology on D(Rn)). Convolution of ƒ with a distribution S ∈ D′(Rn) can be defined by taking the transpose of Cƒ relative to the duality pairing of D(Rn) with the space D′(Rn) of distributions (Trèves 1967, Chapter 27). If ƒ, g, φ ∈ D(Rn), then by Fubini's theorem

    ⟨Cƒg, φ⟩ = ⟨g, Cƒ̃φ⟩,

where ƒ̃(x) = ƒ(−x). Extending by continuity, the convolution of ƒ with a distribution S is defined by

    ⟨ƒ∗S, φ⟩ = ⟨S, ƒ̃∗φ⟩

for all test functions φ ∈ D(Rn).

An alternative way to define the convolution of a function ƒ and a distribution S is to use the translation operator τx, defined on test functions by

    (τxφ)(y) = φ(y − x)

and extended by the transpose to distributions in the obvious way (Rudin 1991, §6.29). The convolution of the compactly supported function ƒ and the distribution S is then the function defined for each x ∈ Rn by

    (ƒ∗S)(x) = ⟨S, τxƒ̃⟩.
It can be shown that the convolution of a compactly supported function and a distribution is a smooth function. If the distribution S has compact support as well, then ƒ∗S is a compactly supported function, and the Titchmarsh convolution theorem (Hörmander 1983, Theorem 4.3.3) implies that

    ch supp(ƒ∗S) = ch supp ƒ + ch supp S,

where ch denotes the convex hull.

Distribution of compact support
It is also possible to define the convolution of two distributions S and T on Rn, provided one of them has compact support. Informally, in order to define S∗T where T has compact support, the idea is to extend the definition of the convolution ∗ to a linear operation on distributions so that the associativity formula

    S∗(T∗φ) = (S∗T)∗φ

continues to hold for all test functions φ. Hörmander (1983, §IV.2) proves the uniqueness of such an extension.

It is also possible to provide a more explicit characterization of the convolution of distributions (Trèves 1967, Chapter 27). Suppose that it is T that has compact support. For any test function φ in D(Rn), consider the function

    ψ(x) = ⟨T, τ−xφ⟩.

It can be readily shown that this defines a smooth function of x, which moreover has compact support. The convolution of S and T is defined by

    ⟨S∗T, φ⟩ = ⟨S, ψ⟩.

This generalizes the classical notion of convolution of functions and is compatible with differentiation in the following sense:

    ∂α(S∗T) = (∂αS)∗T = S∗(∂αT).
This definition of convolution remains valid under less restrictive assumptions about S and T; see for instance Gel'fand & Shilov (1966–1968, v. 1, pp. 103–104) and Benedetto (1997, Definition 2.5.8).
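The fact that convolving with a shifted delta simply translates a function gives a quick numerical illustration of distributional convolution. The sketch below, assuming a narrow Gaussian as a stand-in for δ_a, checks that (f ∗ δ_a)(x) ≈ f(x − a) for a bump function f:

```python
import math

# A bump function f convolved with a narrow Gaussian approximation of the
# shifted delta δ_a reproduces the translate: (f ∗ δ_a)(x) ≈ f(x − a).
def f(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

eps, a = 1e-3, 0.5

def delta_a(t):     # narrow Gaussian centred at a, → δ_a as ε → 0
    return math.exp(-(t - a) ** 2 / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def conv(x, lo=-2.0, hi=2.0, n=200_000):
    """(f ∗ δ_a)(x) ≈ ∫ f(x − t) δ_a(t) dt, by a midpoint sum."""
    h = (hi - lo) / n
    return sum(f(x - t) * delta_a(t)
               for t in (lo + (k + 0.5) * h for k in range(n))) * h

print(conv(0.5), f(0.0))   # both ≈ e⁻¹ ≈ 0.36788
```

The convolution is smooth in x even though δ_a itself is singular in the limit, consistent with the smoothness result quoted above.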

Distributions as derivatives of continuous functions
The formal definition of distributions exhibits them as a subspace of a very large space, namely the topological dual of D(U) (or S(Rd) for tempered distributions). It is not immediately clear from the definition how exotic a distribution might be. To answer this question, it is instructive to see distributions built up from a smaller space, namely the space of continuous functions. Roughly, any distribution is locally a (multiple) derivative of a continuous function. A precise version of this result, given below, holds for distributions of compact support, tempered distributions, and general distributions. Generally speaking, no proper subset of the space of distributions contains all continuous functions and is closed under differentiation. This says that distributions are not particularly exotic objects; they are only as complicated as necessary.

Tempered distributions
If ƒ ∈ S′(Rn) is a tempered distribution, then there exists a constant C > 0, and positive integers M and N, such that for all Schwartz functions φ ∈ S(Rn)

    |⟨ƒ, φ⟩| ≤ C Σ|α|≤N, |β|≤M pα,β(φ).

This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function F and a multi-index α such that

    ƒ = ∂αF.
Compactly supported distributions
Let U be an open set, and K a compact subset of U. If ƒ is a distribution supported on K, then there is a continuous function F compactly supported in U (possibly on a larger set than K itself) such that

    ƒ = ∂αF

for some multi-index α. This follows from the previously quoted result on tempered distributions by means of a localization argument.

Distributions with point support
If ƒ has support at a single point {P}, then ƒ is in fact a finite linear combination of distributional derivatives of the δ function at P. That is, there exists an integer m and complex constants aα for multi-indices |α| ≤ m such that

    ƒ = Σ|α|≤m aα ∂α(τPδ),

where τP is the translation operator.

General distributions
A version of the above theorem holds locally in the following sense (Rudin 1991). Let S be a distribution on U. Then one can find for every multi-index α a continuous function gα such that

    S = Σα ∂αgα,

and such that any compact subset K of U intersects the supports of only finitely many gα; therefore, to evaluate the value of S for a given smooth function f compactly supported in U, we only need finitely many gα, and hence the infinite sum above is well-defined as a distribution. If the distribution S is of finite order, then one can choose gα in such a way that only finitely many of them are nonzero.

Using holomorphic functions as test functions
The success of the theory led to investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals.

Problem of multiplication
A possible limitation of the theory of distributions (and hyperfunctions) is that it is a purely linear theory, in the sense that the product of two distributions cannot consistently be defined (in general), as was proved by Laurent Schwartz in the 1950s. For example, if p.v. 1/x is the distribution obtained by the Cauchy principal value,

    (p.v. 1/x)(φ) = limε→0+ ∫|x|≥ε φ(x)/x dx

for all φ ∈ S(R), and δ is the Dirac delta distribution, then

    (δ × x) × p.v. 1/x = 0
but
    δ × (x × p.v. 1/x) = δ,
so the product of a distribution by a smooth function (which is always well defined) cannot be extended to an associative product on the space of distributions. Thus, nonlinear problems cannot in general be posed, and thus not solved, within distribution theory alone. In the context of quantum field theory, however, solutions can be found. In more than two spacetime dimensions the problem is related to the regularization of divergences. Here Henri Epstein and Vladimir Glaser developed the mathematically rigorous (but extremely technical) causal perturbation theory. This does not solve the problem in other situations. Many other interesting theories are nonlinear, for example the Navier–Stokes equations of fluid dynamics. In view of this, several not entirely satisfactory theories of algebras of generalized functions have been developed, among which Colombeau's (simplified) algebra is maybe the most popular in use today.

A simple solution of the multiplication problem is dictated by the path integral formulation of quantum mechanics. Since this is required to be equivalent to the Schrödinger theory of quantum mechanics, which is invariant under coordinate transformations, this property must be shared by path integrals. This fixes all products of distributions, as shown by Kleinert & Chervyakov (2001). The result is equivalent to what can be derived from dimensional regularization (Kleinert & Chervyakov 2000).


References
• Benedetto, J.J. (1997), Harmonic Analysis and Applications, CRC Press.
• Gel'fand, I.M.; Shilov, G.E. (1966–1968), Generalized functions, 1–5, Academic Press.
• Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., 256, Springer, ISBN 3-540-12104-8, MR0717035.
• Kleinert, H.; Chervyakov, A. (2001), "Rules for integrals over products of distributions from coordinate independence of path integrals" [1], Europ. Phys. J. C 19 (4): 743–747, Bibcode 2001EPJC...19..743K, doi:10.1007/s100520100600.
• Kleinert, H.; Chervyakov, A. (2000), "Coordinate Independence of Quantum-Mechanical Path Integrals" [2], Phys. Lett. A 269: 63, doi:10.1016/S0375-9601(00)00475-8.
• Rudin, W. (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 0-07-054236-8.
• Schwartz, L. (1954), "Sur l'impossibilité de la multiplication des distributions", C. R. Acad. Sci. Paris 239: 847–848.
• Schwartz, L. (1950–1951), Théorie des distributions, 1–2, Hermann.
• Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X.
• Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0849382734.
• Trèves, François (1967), Topological Vector Spaces, Distributions and Kernels, Academic Press, pp. 126 ff.

Further reading
• M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. ISBN 0-521-09128-4 (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals).
• H. Kleinert, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2006) [3] (also available online here [4]). See Chapter 11 for defining products of distributions from the physical requirement of coordinate invariance.
• V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 0-415-27356-0.
• Vladimirov, V.S. (2001), "Generalized function" [5], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104.
• Vladimirov, V.S. (2001), "Generalized functions, space of" [6], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104.
• Vladimirov, V.S. (2001), "Generalized function, derivative of a" [7], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104.
• Vladimirov, V.S. (2001), "Generalized functions, product of" [8], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104.
• Oberguggenberger, Michael (2001), "Generalized function algebras" [9], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104.



[1] http://www.physik.fu-berlin.de/~kleinert/kleiner_re303/wardepl.pdf
[2] http://www.physik.fu-berlin.de/~kleinert/305/klch2.pdf
[3] http://www.worldscibooks.com/physics/6223.html
[4] http://www.physik.fu-berlin.de/~kleinert/b5
[5] http://www.encyclopediaofmath.org/index.php?title=G/g043810
[6] http://www.encyclopediaofmath.org/index.php?title=G/g043840
[7] http://www.encyclopediaofmath.org/index.php?title=G/g043820
[8] http://www.encyclopediaofmath.org/index.php?title=G/g043830
[9] http://www.encyclopediaofmath.org/index.php?title=G/g130030

Associativity
In mathematics, associativity is a property of some binary operations. It means that, within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, rearranging the parentheses in such an expression will not change its value. Consider, for instance, the following equations:

    (5 + 2) + 1 = 5 + (2 + 1) = 8
    5 × (5 × 3) = (5 × 5) × 3 = 75
Consider the first equation. Even though the parentheses were rearranged (the left side requires adding 5 and 2 first, then adding 1 to the result, whereas the right side requires adding 2 and 1 first, then 5), the value of the expression was not altered. Since this holds true when performing addition on any real numbers, we say that "addition of real numbers is an associative operation."

Associativity is not to be confused with commutativity. Commutativity justifies changing the order or sequence of the operands within an expression, while associativity does not. For example,

    (5 + 2) + 1 = 5 + (2 + 1)

is an example of associativity, because the parentheses were changed (and consequently the order of operations during evaluation) while the operands 5, 2, and 1 appeared in exactly the same order from left to right in the expression. In contrast,

    (5 + 2) + 1 = (2 + 5) + 1
is an example of commutativity, not associativity, because the operand sequence changed when the 2 and 5 switched places. Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; one common example would be the vector cross product.
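The non-associativity of the vector cross product mentioned above is easy to exhibit concretely; a minimal sketch:

```python
# The vector cross product is a standard non-associative operation:
# a × (b × c) ≠ (a × b) × c in general.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i, j = (1, 0, 0), (0, 1, 0)

left = cross(cross(i, i), j)    # (i × i) × j = 0 × j = (0, 0, 0)
right = cross(i, cross(i, j))   # i × (i × j) = i × k = (0, -1, 0)
print(left, right, left == right)
```

A single counterexample like this is enough to show an operation is not associative.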



Formally, a binary operation ∗ on a set S is called associative if it satisfies the associative law:

    (x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z in S,

using ∗ to denote a binary operation performed on a set. An example of multiplicative associativity is

    (xy)z = x(yz).

The evaluation order does not affect the value of such expressions, and it can be shown that the same holds for expressions containing any number of operations. Thus, when ∗ is associative, the evaluation order can be left unspecified without causing ambiguity, by omitting the parentheses and writing simply:

    x ∗ y ∗ z.
However, it is important to remember that changing the order of operations does not involve or permit moving the operands around within the expression; the sequence of operands is always unchanged. A different perspective is obtained by rephrasing associativity using functional : when expressed in this form, associativity becomes less obvious. notation:

Associativity can be generalized to n-ary operations. Ternary associativity is (abc)de = a(bcd)e = ab(cde), i.e. the string abcde with any three adjacent elements bracketed. N-ary associativity is a string of length n+(n-1) with any n adjacent elements bracketed.[1]

Some examples of associative operations include the following.
• The prototypical example of an associative operation is string concatenation: the concatenation of "hello", ", ", "world" can be computed by concatenating the first two strings (giving "hello, ") and appending the third string ("world"), or by joining the second and third strings (giving ", world") and concatenating the first string ("hello") with the result. String concatenation is not commutative.
• In arithmetic, addition and multiplication of real numbers are associative; i.e.,

(x + y) + z = x + (y + z) and (xy)z = x(yz) for all real numbers x, y, z.

• Addition and multiplication of complex numbers and quaternions are associative. Addition of octonions is also associative, but multiplication of octonions is non-associative.
• The greatest common divisor and least common multiple functions act associatively.
• Because linear transformations are functions that can be represented by matrices, with matrix multiplication being the representation of functional composition, one can immediately conclude that matrix multiplication is associative.
• Taking the intersection or the union of sets:

(A ∩ B) ∩ C = A ∩ (B ∩ C) and (A ∪ B) ∪ C = A ∪ (B ∪ C).

• If M is some set and S denotes the set of all functions from M to M, then the operation of functional composition on S is associative:

(f ∘ g) ∘ h = f ∘ (g ∘ h) for all f, g, h in S.

• Slightly more generally, given four sets M, N, P and Q, with h: M → N, g: N → P, and f: P → Q, then

(f ∘ g) ∘ h = f ∘ (g ∘ h)



as before. In short, composition of maps is always associative.
• Consider a set with three elements, A, B, and C. The following operation:
× | A B C
A | A A A
B | A B A
C | A C A

is associative. Thus, for example, A(BC)=(AB)C. This mapping is not commutative.
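For a finite set like this, the associative law can be checked exhaustively. The sketch below (Python; the dictionary transcribes the table above, with the row as the left operand) brute-forces all triples and also confirms that the operation is not commutative. The helper names are illustrative, not from the source.

```python
from itertools import product

# The three-element operation from the table above: row is the left
# operand, column is the right operand.
table = {
    ("A", "A"): "A", ("A", "B"): "A", ("A", "C"): "A",
    ("B", "A"): "A", ("B", "B"): "B", ("B", "C"): "A",
    ("C", "A"): "A", ("C", "B"): "C", ("C", "C"): "A",
}

def op(x, y):
    return table[(x, y)]

elements = ["A", "B", "C"]

# Associative: (x*y)*z == x*(y*z) for every triple of elements.
is_associative = all(
    op(op(x, y), z) == op(x, op(y, z))
    for x, y, z in product(elements, repeat=3)
)

# Commutative would require x*y == y*x for every pair; here B*C = A
# while C*B = C, so the check fails.
is_commutative = all(op(x, y) == op(y, x) for x, y in product(elements, repeat=2))

print(is_associative, is_commutative)  # True False
```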

A binary operation on a set S that does not satisfy the associative law is called non-associative. Symbolically,
(x ∗ y) ∗ z ≠ x ∗ (y ∗ z) for some x, y, z in S.
For such an operation the order of evaluation does matter. For example:
• Subtraction

(5 − 3) − 2 = 0, whereas 5 − (3 − 2) = 4.

• Division

(4 / 2) / 2 = 1, whereas 4 / (2 / 2) = 4.

• Exponentiation

(2^1)^2 = 4, whereas 2^(1^2) = 2.

Also note that infinite sums are not generally associative; for example:

(1 − 1) + (1 − 1) + (1 − 1) + ⋯ = 0, whereas 1 + (−1 + 1) + (−1 + 1) + ⋯ = 1.

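The effect of grouping on a non-associative operation is easy to observe numerically; a minimal Python check with illustrative numbers:

```python
# Subtraction: the two groupings give different results.
print((5 - 3) - 2, 5 - (3 - 2))      # 0 4

# Division: likewise.
print((4 / 2) / 2, 4 / (2 / 2))      # 1.0 4.0

# Exponentiation: likewise (parenthesized explicitly here).
print((2 ** 1) ** 2, 2 ** (1 ** 2))  # 4 2
```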
The study of non-associative structures arises from reasons somewhat different from the mainstream of classical algebra. One area within non-associative algebra that has grown very large is that of Lie algebras. There the associative law is replaced by the Jacobi identity. Lie algebras abstract the essential nature of infinitesimal transformations, and have become ubiquitous in mathematics. They are an example of non-associative algebras. There are other specific types of non-associative structures that have been studied in depth. They tend to come from some specific applications. Some of these arise in combinatorial mathematics. Other examples: Quasigroup, Quasifield, Nonassociative ring.



Notation for non-associative operations
In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression. However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses. A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,
x ∗ y ∗ z = (x ∗ y) ∗ z
while a right-associative operation is conventionally evaluated from right to left:
x ∗ y ∗ z = x ∗ (y ∗ z)
Both left-associative and right-associative operations occur. Left-associative operations include the following: • Subtraction and division of real numbers:
x − y − z = (x − y) − z and x / y / z = (x / y) / z
• Function application:
f x y = (f x) y
This notation can be motivated by the currying isomorphism. Right-associative operations include the following: • Exponentiation of real numbers:
x^y^z = x^(y^z)
The reason exponentiation is right-associative is that a repeated left-associative exponentiation operation would be less useful: multiple appearances could (and would) be rewritten with multiplication, since

(x^y)^z = x^(yz).
• Function definition
Z → Z → Z = Z → (Z → Z)
Using right-associative notation for these operations can be motivated by the Curry-Howard correspondence and by the currying isomorphism. Non-associative operations for which no conventional evaluation order is defined include the following. • Taking the Cross product of three vectors:
a × (b × c) ≠ (a × b) × c in general.
• Taking the pairwise average of real numbers:
avg(avg(a, b), c) ≠ avg(a, avg(b, c)) in general, where avg(x, y) = (x + y)/2.
• Taking the relative complement of sets:
(A ∖ B) ∖ C ≠ A ∖ (B ∖ C) in general.

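These conventions can be observed directly in a language like Python, whose parser implements them: `-` and `/` are left-associative, `**` is right-associative, and set difference carries no grouping convention at all, so parentheses change the answer. A sketch with illustrative values:

```python
# Left-associative: a - b - c is parsed as (a - b) - c.
assert 10 - 5 - 2 == (10 - 5) - 2 == 3

# Right-associative: a ** b ** c is parsed as a ** (b ** c).
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512

# Relative complement of sets is non-associative, so parentheses
# are required: (A \ B) \ C generally differs from A \ (B \ C).
A, B, C = {1, 2, 3}, {2, 3}, {3}
assert (A - B) - C == {1}
assert A - (B - C) == {1, 3}

# The pairwise average is also non-associative.
avg = lambda x, y: (x + y) / 2
assert avg(avg(1, 3), 5) == 3.5
assert avg(1, avg(3, 5)) == 2.5

print("grouping conventions verified")
```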


[1] Dudek, W.A. (2001), "On some old problems in n-ary groups" (, Quasigroups and Related Systems 8: 15–36.

Commutativity
In mathematics, an operation is commutative if changing the order of the operands does not change the end result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. The commutativity of simple operations, such as multiplication and addition of numbers, was for many years implicitly assumed, and the property was not named until the 19th century, when mathematics started to become formalized. By contrast, division and subtraction are not commutative.

Common uses
The commutative property (or commutative law) is a property associated with binary operations and functions. Similarly, if the commutative property holds for a pair of elements under a certain binary operation then it is said that the two elements commute under that operation. In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra the commutativity of well known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs.[1] [2] [3]

Mathematical definitions
Further information: Symmetric function
The term "commutative" is used in several related senses.[4] [5]
1. A binary operation ∗ on a set S is called commutative if:
x ∗ y = y ∗ x for all x, y in S.
An operation that does not satisfy the above property is called noncommutative.
2. One says that x commutes with y under ∗ if:
x ∗ y = y ∗ x.
3. A binary function f: A × A → B is called commutative if:

f(x, y) = f(y, x) for all x, y in A.


History and etymology
Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products.[6] [7] Euclid is known to have assumed the commutative property of multiplication in his book Elements.[8] Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions. Today the commutative property is a well known and basic property used in most branches of mathematics.


The first recorded use of the term commutative was in a memoir by François Servois in 1814,[9] [10] which used the word commutatives when describing functions that have what is now called the commutative property. The word is a combination of the French word commuter meaning "to substitute or switch" and the suffix -ative meaning "tending to" so the word literally means "tending to substitute or switch." The term then appeared in English in Philosophical Transactions of the Royal Society in 1844.[9]

Related properties
The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order in which the operations are performed does not affect the final result, as long as the order of the terms does not change. In contrast, the commutative property states that the order of the terms does not affect the final result.


Graph showing the symmetry of the addition function

Symmetry can be directly linked to commutativity. When a commutative operator is written as a binary function, the resulting function is symmetric across the line y = x. As an example, if we let a function f represent addition (a commutative operation) so that f(x, y) = x + y, then f is a symmetric function, which can be seen in the image on the right. For relations, a symmetric relation is analogous to a commutative operation, in that if a relation R is symmetric, then a R b implies b R a.



Commutative operations in everyday life
• Putting on socks resembles a commutative operation, since which sock is put on first is unimportant. Either way, the end result (having both socks on) is the same.
• The commutativity of addition is observed when paying for an item with cash. Regardless of the order in which the bills are handed over, they always give the same total.

Commutative operations in mathematics
Two well-known examples of commutative binary operations are:[4]
• The addition of real numbers, which is commutative since

x + y = y + x for all real numbers x and y.

For example, 4 + 5 = 5 + 4, since both expressions equal 9.
• The multiplication of real numbers, which is commutative since

x · y = y · x for all real numbers x and y.

For example, 3 × 5 = 5 × 3, since both expressions equal 15.
For example, 3 × 5 = 5 × 3, since both expressions equal 15. • Further examples of commutative binary operations include addition and multiplication of complex numbers, addition and scalar multiplication of vectors, and intersection and union of sets.

Noncommutative operations in everyday life
• Concatenation, the act of joining character strings together, is a noncommutative operation. For example,

EA + T = EAT ≠ TEA = T + EA.
• Washing and drying clothes resembles a noncommutative operation; washing and then drying produces a markedly different result to drying and then washing.
• The twists of the Rubik's Cube are noncommutative. This is studied in group theory.

Noncommutative operations in mathematics
Some noncommutative binary operations are:[11]
• Subtraction is noncommutative, since 0 − 1 ≠ 1 − 0.
• Division is noncommutative, since 1 / 2 ≠ 2 / 1.
• Matrix multiplication is noncommutative, since in general AB ≠ BA for square matrices A and B.
• The vector product (or cross product) of two vectors in three dimensions is anti-commutative, i.e., b × a = −(a × b).
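The anti-commutativity of the cross product can be checked in a few lines of Python; `cross` below is a hand-rolled helper on plain tuples, and the sample vectors are illustrative:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a = (1, 2, 3)
b = (4, 5, 6)

ab = cross(a, b)
ba = cross(b, a)
print(ab, ba)  # (-3, 6, -3) (3, -6, 3)

# Anti-commutative: b x a == -(a x b).
assert ba == tuple(-c for c in ab)
```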



Mathematical structures and commutativity
• A commutative semigroup is a set endowed with a total, associative and commutative operation.
• If the operation additionally has an identity element, we have a commutative monoid.
• An abelian group, or commutative group, is a group whose group operation is commutative.[2]
• A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.)[12]
• In a field both addition and multiplication are commutative.[13]

Non-commuting operators in quantum mechanics
In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as x (meaning multiply by x), and d/dx. These two operators do not commute as may be seen by considering the effect of their products x(d/dx) and (d/dx)x on a one-dimensional wave function ψ(x):
x (d/dx) ψ = x ψ′, whereas (d/dx)(x ψ) = ψ + x ψ′, so that [x (d/dx) − (d/dx) x] ψ = −ψ ≠ 0.
According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely. For example, the position and the linear momentum of a particle are represented respectively (in the x-direction) by the operators x and (ħ/i)d/dx (where ħ is the reduced Planck constant). This is the same example except for the constant (ħ/i), so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary.
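This operator identity can be checked numerically. The sketch below picks an arbitrary smooth ψ (not from the source) and compares x(d/dx)ψ with (d/dx)(xψ) using central differences; their difference should equal −ψ at every point.

```python
import math

h = 1e-6  # step for the central-difference derivative

def d(f):
    """Return a central-difference approximation to df/dx."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

psi = lambda x: math.exp(-x * x)        # sample wave function (illustrative)
x_then_d = lambda x: x * d(psi)(x)      # x (d/dx) psi
d_then_x = d(lambda x: x * psi(x))      # (d/dx) (x psi)

# The operators fail to commute: their difference acts as -1 on psi.
for x0 in (0.3, 1.0, -0.7):
    assert abs((x_then_d(x0) - d_then_x(x0)) + psi(x0)) < 1e-4

print("commutator [x, d/dx] acts as -1 on psi")
```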

[1] Axler, p. 2
[2] Gallian, p. 34
[3] p. 26, 87
[4] Krowne, p. 1
[5] Weisstein, Commute, p. 1
[6] Lumpkin, p. 11
[7] Gay and Shute, p. ?

[8] O'Conner and Robertson, Real Numbers [9] Cabillón and Miller, Commutative and Distributive [10] O'Conner and Robertson, Servois [11] Yark, p.1 [12] Gallian p.236 [13] Gallian p.250

• Axler, Sheldon (1997). Linear Algebra Done Right, 2e. Springer. ISBN 0-387-98258-2. Linear algebra theory. Covers commutativity in that context. Uses property throughout book.
• Goodman, Frederick (2003). Algebra: Abstract and Concrete, Stressing Symmetry, 2e. Prentice Hall. ISBN 0-13-067342-0. Abstract algebra theory. Uses commutativity property throughout book.
• Gallian, Joseph (2006). Contemporary Abstract Algebra, 6e. Boston, Mass.: Houghton Mifflin. ISBN 0-618-51471-6. Abstract algebra theory. Explains commutativity in chapter 1, uses it throughout.



• Lumpkin, B. (1997). The Mathematical Legacy Of Ancient Egypt - A Response To Robert Palter. Unpublished manuscript. Article describing the mathematical ability of ancient civilizations.
• Robins, R. Gay, and Charles C. D. Shute (1987). The Rhind Mathematical Papyrus: An Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4. Translation and interpretation of the Rhind Mathematical Papyrus.

Online resources
• Krowne, Aaron. Commutative, at PlanetMath. Accessed 8 August 2007. Definition of commutativity and examples of commutative operations.
• Weisstein, Eric W. "Commute", from MathWorld. Accessed 8 August 2007. Explanation of the term commute.
• Yark. Examples of non-commutative operations, at PlanetMath. Accessed 8 August 2007. Examples proving some noncommutative operations.
• O'Conner, J J and Robertson, E F. MacTutor history of real numbers ( Accessed 8 August 2007. Article giving the history of the real numbers.
• Cabillón, Julio and Miller, Jeff. Earliest Known Uses Of Mathematical Terms. Accessed 22 November 2008. Page covering the earliest uses of mathematical terms.
• O'Conner, J J and Robertson, E F. MacTutor biography of François Servois ( Accessed 8 August 2007. Biography of François Servois, who first used the term.


Orders of Operations
Function
In mathematics, a function associates one quantity, the argument of the function, also known as the input, with another quantity, the value of the function, also known as the output. A function assigns exactly one output to each input. The argument and the value may be real numbers, but they can also be elements from any given set. A function f with argument x is denoted f(x), which is read "f of x". An example of such a function is f(x) = 2x, the function which associates with every number x the number twice as large. For instance, if its argument is 5 its value is 10, and this is written f(5) = 10.
The input to a function need not be a number; it can be any well-defined object. For example, a function might associate the letter A with the number 1, the letter B with the number 2, and so on.
There are many ways to describe or represent a function, such as a formula or algorithm that computes the output for a given input, a graph that gives a picture of the function, or a table of values that gives the output for certain specified inputs. Tables of values are especially common in statistics, science, and engineering.
The set of all inputs to a particular function is called its domain. The set of all outputs of a particular function is called its image. In modern mathematics functions also have a codomain; for every function, its codomain includes its image. For instance, the codomain of the function squaring real numbers is usually defined to be the set of all real numbers. The negative numbers never occur as outputs of this function, so they are in its codomain but not in its image. Codomains are useful for function composition: the composition of a function f followed by a function g is defined if the codomain of f is the same as the domain of g, so the codomain of f determines what functions may follow it. The word range is used in some texts to refer to the image and in others to the codomain; in computing in particular it often refers to the codomain.
The domain and codomain are often "understood." Thus for the example given above, f(x) = 2x, the domain and codomain were not stated explicitly. They might both be the set of all real numbers, but they might also be the set of integers. If the domain is the set of integers, then the image consists of just the even integers.
The set of all the ordered pairs of inputs and outputs (x, f(x)) of a function is called its graph. A common way to define a function is as the triple (domain, codomain, graph), that is, as the input set, the possible outputs, and the mapping for each input to its output. A function may sometimes be described through its relationship to other functions, for example, as the inverse function of a given function, or as a solution of a differential equation. Functions can be added, multiplied, or combined in other ways to produce new functions. An important operation on functions, which distinguishes them from numbers, is the composition of functions. There are uncountably many different functions, most of which cannot be expressed with a formula or an algorithm.
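The triple (domain, codomain, graph) can be made concrete for the running example f(x) = 2x. The sketch below uses a small finite window of integers as an illustrative stand-in for the full set of integers:

```python
# Modeling f(x) = 2x as the triple (domain, codomain, graph) over a
# finite window of integers (an illustrative stand-in for all of Z).
domain = set(range(-3, 4))
codomain = set(range(-10, 11))
graph = {(x, 2 * x) for x in domain}

# The image is the set of second components of the graph.
image = {y for (_, y) in graph}
print(sorted(image))  # [-6, -4, -2, 0, 2, 4, 6]

# The image consists of even integers only, and sits inside the codomain.
assert all(y % 2 == 0 for y in image)
assert image <= codomain
```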

Collections of functions with certain properties, such as continuous functions and differentiable functions, are called function spaces and are studied as objects in their own right, in such mathematical disciplines as real analysis and complex analysis.


Because functions are so widely used, many traditions have grown up around their use. The symbol for the input to a function is often called the independent variable or argument and is often represented by the letter x or, if the input is a particular time, by the letter t. The symbol for the output is called the dependent variable or value and is often represented by the letter y. The function itself is most often called f, and thus the notation y = f(x) indicates that a function named f has an input named x and an output named y.
A function ƒ takes an input, x, and returns an output ƒ(x). One metaphor describes the function as a "machine" or "black box" that converts the input into the output.
The set of all permitted inputs to a given function is called the domain of the function. The set of all resulting outputs is called the image or range of the function. The image is often a subset of some larger set, called the codomain of the function. Thus, for example, the function f(x) = x² could take as its domain the set of all real numbers, as its image the set of all non-negative real numbers, and as its codomain the set of all real numbers. In that case, we would describe f as a real-valued function of a real variable. Sometimes, especially in computer science, the term "range" refers to the codomain rather than the image, so care needs to be taken when using the word.
It is usual practice in mathematics to introduce functions with temporary names like ƒ. For example, ƒ(x) = 2x+1 implies ƒ(3) = 7; when a name for the function is not needed, the form y = 2x+1 may be used. If a function is often used, it may be given a more permanent name.

Functions need not act on numbers: the domain and codomain of a function may be arbitrary sets. One example of a function that acts on non-numeric inputs takes English words as inputs and returns the first letter of the input word as output. Furthermore, functions need not be described by any expression, rule or algorithm: indeed, in some cases it may be impossible to define such a rule. For example, the association between inputs and outputs in a choice function often lacks any fixed rule, although each input element is still associated to one and only one output. A function of two or more variables is considered in formal mathematics as having a domain consisting of ordered pairs or tuples of the argument values. For example Sum(x,y) = x+y operating on integers is the function Sum with a domain consisting of pairs of integers. Sum then has a domain consisting of elements like (3,4), a codomain of integers, and an association between the two that can be described by a set of ordered pairs like ((3,4), 7). Evaluating Sum(3,4) then gives the value 7 associated with the pair (3,4). A family of objects indexed by a set is equivalent to a function. For example, the sequence 1, 1/2, 1/3, ..., 1/n, ... can be written as the ordered sequence <1/n> where n is a natural number, or as a function f(n) = 1/n from the set of natural numbers into the set of rational numbers.



One precise definition of a function is an ordered triple of sets, written (X, Y, F), where X is the domain, Y is the codomain, and F is a set of ordered pairs (a, b). In each of the ordered pairs, the first element a is from the domain, the second element b is from the codomain, and a necessary condition is that every element in the domain is the first element in exactly one ordered pair. The set of all b is known as the image of the function, and need not be the whole of the codomain. Most authors use the term "range" to mean the image, while some use "range" to mean the codomain.
The domain X may be void, but if X = ∅ then F = ∅. The codomain Y may also be void, but if Y = ∅ then X = ∅ and F = ∅. Such "void" functions are not usual, but the theory assures their existence.
The notation ƒ: X → Y indicates that ƒ is a function with domain X and codomain Y, and the function ƒ is said to map or associate elements of X to elements of Y. If the domain and codomain are both the set of real numbers, using the ordered triple scheme we can, for example, write the squaring function as (R, R, {(x, x²) : x ∈ R}).
In most situations, the domain and codomain are understood from context, and only the relationship between the input and output is given. In set theory especially, a function ƒ is often defined as a set of ordered pairs, with the property that if (a, b) and (a, c) are both in ƒ, then b = c.
The graph of a function is its set of ordered pairs. Part of such a set can be plotted on a pair of coordinate axes; for example, (3, 9), the point above 3 on the horizontal axis and to the right of 9 on the vertical axis, lies on the graph of the squaring function.
A specific input in a function is called an argument of the function. For each argument value x, the corresponding unique y in the codomain is called the function value at x, the output of ƒ for an argument x, or the image of x under ƒ. The image of x may be written as ƒ(x) or as y.
A function can also be called a map or a mapping. Some authors, however, use the terms "function" and "map" to refer to different types of functions. Other specific types of functions include functionals and operators.
A function is a special case of a more general mathematical concept, the relation, for which the restriction that each element of the domain appear as the first element in one and only one ordered pair is removed. In other words, an element of the domain may not be the first element of any ordered pair, or may be the first element of two or more ordered pairs. A relation is "single-valued" when, if an element of the domain is the first element of one ordered pair, it is not the first element of any other ordered pair. A relation is "left-total" or simply "total" if every element of the domain is the first element of some ordered pair. Thus a function is a total, single-valued relation.
In some parts of mathematics, including recursion theory and functional analysis, it is convenient to study partial functions, in which some values of the domain have no association in the graph; i.e., single-valued relations that need not be total. For example, the function f such that f(x) = 1/x does not define a value for x = 0, and so is only a partial function from the real line to the real line. The term total function can be used to stress the fact that every element of the domain does appear as the first element of an ordered pair in the graph.
In other parts of mathematics, non-single-valued relations are similarly conflated with functions: these are called multivalued functions, with the corresponding term single-valued function for ordinary functions. Many operations in set theory, such as the power set, have the class of all sets as their domain, and therefore, although they are informally described as functions, they do not fit the set-theoretical definition outlined above, because a class is not necessarily a set.
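The characterization of a function as a total, single-valued relation can be tested mechanically on finite sets of ordered pairs. A sketch, with an illustrative helper name:

```python
# A relation (set of ordered pairs) is a function exactly when it is
# total and single-valued over the given domain.
def is_function(pairs, domain):
    firsts = [a for (a, _) in pairs]
    total = set(firsts) == set(domain)               # every input appears...
    single_valued = len(firsts) == len(set(firsts))  # ...exactly once
    return total and single_valued

domain = {1, 2, 3}
assert is_function({(1, "a"), (2, "b"), (3, "a")}, domain)               # a function
assert not is_function({(1, "a"), (1, "b"), (2, "c"), (3, "d")}, domain)  # multi-valued
assert not is_function({(1, "a"), (2, "b")}, domain)                      # partial

print("relation checks passed")
```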



Formal description of a function typically involves the function's name, its domain, its codomain, and a rule of correspondence. Thus we frequently see a two-part notation, an example being

ƒ: N → R, n ↦ n/π

where the first part is read:
• "ƒ is a function from N to R" (one often writes informally "Let ƒ: X → Y" to mean "Let ƒ be a function from X to Y"), or
• "ƒ is a function on N into R", or
• "ƒ is an R-valued function of an N-valued variable",
and the second part is read:
• "ƒ maps n to n/π".

Here the function named "ƒ" has the natural numbers as domain, the real numbers as codomain, and maps n to itself divided by π. Less formally, this long form might be abbreviated

ƒ(n) = n/π
where f(n) is read as "f as function of n" or "f of n". There is some loss of information: we no longer are explicitly given the domain N and codomain R. It is common to omit the parentheses around the argument when there is little chance of confusion, thus: sin x; this is known as prefix notation. Writing the function after its argument, as in x ƒ, is known as postfix notation; for example, the factorial function is customarily written n!, even though its generalization, the gamma function, is written Γ(n). Parentheses are still used to resolve ambiguities and denote precedence, though in some formal settings the consistent use of either prefix or postfix notation eliminates the need for any parentheses.

Injective and surjective functions
Three important kinds of function are the injections (or one-to-one functions), which have the property that if ƒ(a) = ƒ(b) then a must equal b; the surjections (or onto functions), which have the property that for every y in the codomain there is an x in the domain such that ƒ(x) = y; and the bijections, which are both one-to-one and onto. This nomenclature was introduced by the Bourbaki group. When the definition of a function by its graph only is used, since the codomain is not defined, the "surjection" must be accompanied with a statement about the set the function maps onto. For example, we might say ƒ maps onto the set of all real numbers.
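For finite functions, injectivity and surjectivity reduce to simple counting and set comparisons. A sketch, modeling a finite function as a dict (the helper names are illustrative):

```python
# A finite function given as a dict from domain elements to codomain
# elements. Injective: distinct inputs give distinct outputs.
def is_injective(f):
    return len(set(f.values())) == len(f)

# Surjective: every codomain element is hit by some input.
def is_surjective(f, codomain):
    return set(f.values()) == set(codomain)

f = {1: "a", 2: "b", 3: "c"}   # a bijection onto {"a", "b", "c"}
g = {1: "a", 2: "a", 3: "b"}   # neither injective nor onto {"a", "b", "c"}

assert is_injective(f) and is_surjective(f, {"a", "b", "c"})
assert not is_injective(g)
assert not is_surjective(g, {"a", "b", "c"})

print("f is a bijection; g is neither injective nor surjective")
```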

Functions with multiple inputs and outputs
The concept of function can be extended to an object that takes a combination of two (or more) argument values to a single result. This intuitive concept is formalized by a function whose domain is the Cartesian product of two or more sets. For example, consider the function that associates two integers to their product: ƒ(x, y) = x·y. This function can be defined formally as having domain Z×Z, the set of all integer pairs; codomain Z; and, for graph, the set of all pairs ((x, y), x·y). Note that the first component of any such pair is itself a pair (of integers), while the second component is a single integer. The function value of the pair (x, y) is ƒ((x, y)). However, it is customary to drop one set of parentheses and consider ƒ(x, y) a function of two variables, x and y. Functions of two variables may be plotted on the three-dimensional Cartesian coordinate system as ordered triples of the form (x, y, f(x, y)).
The concept can still further be extended by considering a function that also produces output that is expressed as several variables. For example, consider the integer divide function, with domain Z×N and codomain Z×N. The resultant (quotient, remainder) pair is a single value in the codomain seen as a Cartesian product.
Currying
An alternative approach to handling functions with multiple arguments is to transform them into a chain of functions that each takes a single argument. For instance, one can interpret Add(3,5) to mean "first produce a function that adds 3 to its argument, and then apply the 'Add 3' function to 5". This transformation is called currying: Add 3 is curry(Add) applied to 3. There is a bijection between the function spaces C^(A×B) and (C^B)^A.
When working with curried functions it is customary to use prefix notation with function application considered left-associative, since juxtaposition of multiple arguments, as in (ƒ x y), naturally maps to evaluation of a curried function. Conversely, the → and ⟼ symbols are considered to be right-associative, so that curried functions may be defined by a notation such as ƒ: Z → Z → Z = x ⟼ y ⟼ x·y.
Binary operations
The familiar binary operations of arithmetic, addition and multiplication, can be viewed as functions from R×R to R. This view is generalized in abstract algebra, where n-ary functions are used to model the operations of arbitrary algebraic structures. For example, an abstract group is defined as a set X and a function ƒ from X×X to X that satisfies certain properties. Traditionally, addition and multiplication are written in the infix notation: x+y and x×y instead of +(x, y) and ×(x, y).
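Currying is easy to demonstrate in Python, where functions can return functions. The sketch below hand-rolls a `curry` helper for the two-argument case (the names are illustrative):

```python
# Transform a two-argument function into a chain of one-argument
# functions: curry(f)(x)(y) == f(x, y).
def curry(f):
    return lambda x: lambda y: f(x, y)

def mul(x, y):
    return x * y

curried = curry(mul)
times3 = curried(3)   # "first produce a function that multiplies by 3"
print(times3(5))      # 15 -- same as mul(3, 5)

assert times3(5) == mul(3, 5) == 15
```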


Function composition
The function composition of two or more functions takes the output of one or more functions as the input of others. The functions ƒ: X → Y and g: Y → Z can be composed by first applying ƒ to an argument x to obtain y = ƒ(x) and then applying g to y to obtain z = g(y). The composite function formed in this way from general ƒ and g may be written

g ∘ ƒ: X → Z
A composite function g(f(x)) can be visualized as the combination of two "machines". The first takes input x and outputs f(x). The second takes f(x) and outputs g(f(x)).



This notation follows the form

(g ∘ ƒ)(x) = g(ƒ(x)),

such that the function on the right acts first and the function on the left acts second, reversing English reading order. We remember the order by reading the notation as "g of ƒ". The order is important, because rarely do we get the same result both ways. For example, suppose ƒ(x) = x² and g(x) = x+1. Then g(ƒ(x)) = x²+1, while ƒ(g(x)) = (x+1)², which is x²+2x+1, a different function. In a similar way, the function given above by the formula y = 5x−20x³+16x⁵ can be obtained by composing several functions, namely the addition, negation, and multiplication of real numbers.
An alternative to the colon notation, convenient when functions are being composed, writes the function name above the arrow. For example, if ƒ is followed by g, where g produces the complex number e^(ix), we may write

A more elaborate form of this is the commutative diagram.
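The worked example, ƒ(x) = x² and g(x) = x + 1, can be reproduced directly; `compose` below is a hand-written helper:

```python
def compose(g, f):
    """Return g of f: (g . f)(x) = g(f(x)). The right function acts first."""
    return lambda x: g(f(x))

f = lambda x: x ** 2
g = lambda x: x + 1

g_after_f = compose(g, f)   # x^2 + 1
f_after_g = compose(f, g)   # (x + 1)^2

# The two orders give different functions.
print(g_after_f(3), f_after_g(3))  # 10 16
assert g_after_f(3) == 10
assert f_after_g(3) == 16
```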

Identity function
The unique function over a set X that maps each element to itself is called the identity function for X, and typically denoted by idX. Each set has its own identity function, so the subscript cannot be omitted unless the set can be inferred from context. Under composition, an identity function is "neutral": if ƒ is any function from X to Y, then
ƒ ∘ idX = ƒ = idY ∘ ƒ.
Restrictions and extensions
Informally, a restriction of a function ƒ is the result of trimming its domain. More precisely, if ƒ is a function from a X to Y, and S is any subset of X, the restriction of ƒ to S is the function ƒ|S from S to Y such that ƒ|S(s) = ƒ(s) for all s in S. If g is a restriction of ƒ, then it is said that ƒ is an extension of g. The overriding of f: X → Y by g: W → Y (also called overriding union) is an extension of g denoted as (f ⊕ g): (X ∪ W) → Y. Its graph is the set-theoretical union of the graphs of g and f|X \ W. Thus, it relates any element of the domain of g to its image under g, and any other element of the domain of f to its image under f. Overriding is an associative operation; it has the empty function as an identity element. If f|X ∩ W and g|X ∩ W are pointwise equal (e.g., the domains of f and g are disjoint), then the union of f and g is defined and is equal to their overriding union. This definition agrees with the definition of union for binary relations.
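Modeling finite functions as Python dicts makes restriction and the overriding union concrete; this is an illustrative model, not notation from the source:

```python
# Finite functions as dicts: keys form the domain, values the outputs.
f = {1: "x", 2: "y", 3: "z"}   # f : {1, 2, 3} -> Y
g = {3: "w", 4: "v"}           # g : {3, 4}    -> Y

# Restriction of f to S = {1, 2}: trim the domain.
S = {1, 2}
f_restricted = {a: b for a, b in f.items() if a in S}
assert f_restricted == {1: "x", 2: "y"}

# Overriding of f by g: g wins wherever the two domains overlap,
# and f fills in the rest -- the union of g with f restricted to X \ W.
override = {**f, **g}
assert override == {1: "x", 2: "y", 3: "w", 4: "v"}

print("restriction and overriding union verified")
```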

Inverse function
If ƒ is a function from X to Y then an inverse function for ƒ, denoted by ƒ−1, is a function in the opposite direction, from Y to X, with the property that a round trip (a composition) returns each element to itself. Not every function has an inverse; those that do are called invertible. The inverse function exists if and only if ƒ is a bijection. As a simple example, if ƒ converts a temperature in degrees Celsius C to degrees Fahrenheit F, the function converting degrees Fahrenheit to degrees Celsius would be a suitable ƒ−1.
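The Celsius/Fahrenheit example can be written out directly; the round trip through ƒ and ƒ⁻¹ returns the original value:

```python
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

def f_to_c(f):
    """The inverse function: degrees Fahrenheit back to Celsius."""
    return (f - 32) * 5 / 9

print(c_to_f(100))            # 212.0
print(f_to_c(c_to_f(37.0)))   # the round trip returns (about) 37.0

assert c_to_f(0) == 32
assert abs(f_to_c(c_to_f(37.0)) - 37.0) < 1e-12
```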

The notation for composition is similar to multiplication; in fact, sometimes it is denoted using juxtaposition, gƒ, without an intervening circle. With this analogy, identity functions are like the multiplicative identity, 1, and inverse functions are like reciprocals (hence the notation). For functions that are injections or surjections, generalized inverse functions can be defined, called left and right inverses respectively. Left inverses map to the identity when composed to the left; right inverses when composed to the right.


Image of a set
The concept of the image can be extended from the image of a point to the image of a set. If A is any subset of the domain, then ƒ(A) is the subset of im ƒ consisting of all images of elements of A. We say that ƒ(A) is the image of A under ƒ. Use of ƒ(A) to denote the image of a subset A ⊆ X is consistent so long as no subset of the domain is also an element of the domain. In some fields (e.g., in set theory, where ordinals are also sets of ordinals) it is convenient or even necessary to distinguish the two concepts; the customary notation is ƒ[A] for the set { ƒ(x): x ∈ A }. Notice that the image of ƒ is the image ƒ(X) of its domain, and that the image of ƒ is a subset of its codomain.

Inverse image
The inverse image (or preimage, or more precisely, complete inverse image) of a subset B of the codomain Y under a function ƒ is the subset of the domain X defined by ƒ−1(B) = { x ∈ X : ƒ(x) ∈ B }.

So, for example, the preimage of {4, 9} under the squaring function is the set {−3, −2, 2, 3}. In general, the preimage of a singleton set (a set with exactly one element) may contain any number of elements. For example, if ƒ(x) = 7, then the preimage of {5} is the empty set but the preimage of {7} is the entire domain. Thus the preimage of an element in the codomain is a subset of the domain. The usual convention about the preimage of an element is that ƒ−1(b) means ƒ−1({b}), i.e., the set of all elements of the domain that ƒ maps to b.

In the same way as for the image, some authors use square brackets to avoid confusion between the inverse image and the inverse function. Thus they would write ƒ−1[B] and ƒ−1[b] for the preimage of a set and a singleton. The preimage of a singleton set is sometimes called a fiber. The term kernel can refer to a number of related concepts.
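For finite sets, image and preimage are one-line set comprehensions. A sketch using the squaring example above (helper names `image` and `preimage` are illustrative):

```python
def image(f, A):
    """ƒ(A): the set of images of the elements of A."""
    return {f(x) for x in A}

def preimage(f, domain, B):
    """ƒ⁻¹(B): all elements of the domain whose image lies in B."""
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
domain = range(-5, 6)   # a finite stand-in for the domain

print(image(square, {-3, -2, 2, 3}))     # {4, 9}
print(preimage(square, domain, {4, 9}))  # {-3, -2, 2, 3}
```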

Specifying a function
A function can be defined by any mathematical condition relating each argument to the corresponding output value. If the domain is finite, a function ƒ may be defined by simply tabulating all the arguments x and their corresponding function values ƒ(x). More commonly, a function is defined by a formula, or (more generally) an algorithm — a recipe that tells how to compute the value of ƒ(x) given any x in the domain. There are many other ways of defining functions. Examples include piecewise definitions, induction or recursion, algebraic or analytic closure, limits, analytic continuation, infinite series, and as solutions to integral and differential equations. The lambda calculus provides a powerful and flexible syntax for defining and combining functions of several variables.
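Two of the styles listed above, piecewise definition and recursion, can be sketched in a few lines; the particular functions (absolute value and factorial) are chosen only for illustration:

```python
def absolute(x):
    """Piecewise definition: |x| = x if x ≥ 0, and −x otherwise."""
    if x >= 0:
        return x
    return -x

def factorial(n):
    """Recursive definition: 0! = 1 and n! = n · (n−1)!."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(absolute(-7))   # 7
print(factorial(5))   # 120
```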



Functions that send integers to integers, or finite strings to finite strings, can sometimes be defined by an algorithm, which gives a precise description of a set of steps for computing the output of the function from its input. Functions definable by an algorithm are called computable functions. For example, the Euclidean algorithm gives a precise process to compute the greatest common divisor of two positive integers. Many of the functions studied in the context of number theory are computable. Fundamental results of computability theory show that there are functions that can be precisely defined but are not computable. Moreover, in the sense of cardinality, almost all functions from the integers to integers are not computable. The number of computable functions from integers to integers is countable, because the number of possible algorithms is. The number of all functions from integers to integers is higher: the same as the cardinality of the real numbers. Thus most functions from integers to integers are not computable. Specific examples of uncomputable functions are known, including the busy beaver function and functions related to the halting problem and other undecidable problems.
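The Euclidean algorithm mentioned above is a classic computable function; a minimal sketch:

```python
def gcd(a, b):
    """Greatest common divisor of two positive integers via the
    Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```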

Function spaces
The set of all functions from a set X to a set Y is denoted by X → Y, by [X → Y], or by Y^X. The latter notation is motivated by the fact that, when X and Y are finite and of size |X| and |Y|, the number of functions X → Y is |Y^X| = |Y|^|X|. This is an example of the convention from enumerative combinatorics that provides notations for sets based on their cardinalities. Other examples are the multiplication sign X×Y, used for the Cartesian product, where |X×Y| = |X|·|Y|; the factorial sign X!, used for the set of permutations, where |X!| = |X|!; and the binomial coefficient sign, used for the set of n-element subsets, whose cardinality is the binomial coefficient "|X| choose n".
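The count |Y|^|X| can be verified by brute-force enumeration for small finite sets. A sketch: each function X → Y is a choice of one element of Y per element of X, so `itertools.product` generates exactly the graphs of all such functions.

```python
from itertools import product

X = ['a', 'b', 'c']
Y = [0, 1]

# One tuple in Y × Y × Y per function; zip with X to form its graph.
functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]

print(len(functions))     # 8
print(len(Y) ** len(X))   # 8, i.e. |Y|^|X| = 2³
```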

If ƒ: X → Y, it may reasonably be concluded that ƒ ∈ [X → Y].

Pointwise operations
Pointwise operations inherit properties from the corresponding operations on the codomain. For example, if ƒ: X → R and g: X → R are functions with a common domain X and a common codomain that is a ring R, then the sum function ƒ + g: X → R and the product function ƒ ⋅ g: X → R can be defined pointwise: (ƒ + g)(x) = ƒ(x) + g(x) and (ƒ ⋅ g)(x) = ƒ(x) ⋅ g(x) for all x in X.
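The pointwise sum and product can be sketched as higher-order helpers (names illustrative):

```python
def pointwise_sum(f, g):
    """(ƒ + g)(x) = ƒ(x) + g(x)."""
    return lambda x: f(x) + g(x)

def pointwise_product(f, g):
    """(ƒ · g)(x) = ƒ(x) · g(x)."""
    return lambda x: f(x) * g(x)

f = lambda x: x + 1
g = lambda x: 2 * x

print(pointwise_sum(f, g)(3))      # (3 + 1) + (2 · 3) = 10
print(pointwise_product(f, g)(3))  # (3 + 1) · (2 · 3) = 24
```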

Other properties
There are many other special classes of functions that are important to particular branches of mathematics, or particular applications. Here is a partial list:
• bijection, injection and surjection (or singularly: injective, surjective, and bijective function)
• continuous
• differentiable, integrable
• linear, polynomial, rational
• algebraic, transcendental
• trigonometric
• fractal
• odd or even
• convex, monotonic, unimodal
• holomorphic, meromorphic, entire
• vector-valued
• computable


Functions prior to Leibniz
Historically, some mathematicians can be regarded as having foreseen and come close to a modern formulation of the concept of function. Among them is Oresme (1323–1382) . . . In his theory, some general ideas about independent and dependent variable quantities seem to be present.[1] [2] Ponte further notes that "The emergence of a notion of function as an individualized mathematical entity can be traced to the beginnings of infinitesimal calculus".[1]

The notion of "function" in analysis
As a mathematical term, "function" was coined by Gottfried Leibniz, in a 1673 letter, to describe a quantity related to a curve, such as a curve's slope at a specific point.[3] [4] The functions Leibniz considered are today called differentiable functions. For this type of function, one can talk about limits and derivatives; both are measurements of the output or the change in the output as it depends on the input or the change in the input. Such functions are the basis of calculus.

Johann Bernoulli, "by 1718, had come to regard a function as any expression made up of a variable and some constants",[5] and Leonhard Euler during the mid-18th century used the word to describe an expression or formula involving variables and constants, e.g., x² + 3x + 2.[6] Alexis Claude Clairaut (in approximately 1734) and Euler introduced the familiar notation "f(x)".[6]

At first, the idea of a function was rather limited. Joseph Fourier, for example, claimed that every function had a Fourier series, something no mathematician would claim today. By broadening the definition of functions, mathematicians were able to study "strange" mathematical objects such as continuous functions that are nowhere differentiable. These functions were first thought to be only theoretical curiosities, and they were collectively called "monsters" as late as the turn of the 20th century.
However, powerful techniques from functional analysis have shown that these functions are, in a precise sense, more common than differentiable functions. Such functions have since been applied to the modeling of physical phenomena such as Brownian motion.

During the 19th century, mathematicians started to formalize all the different branches of mathematics. Weierstrass advocated building calculus on arithmetic rather than on geometry, which favoured Euler's definition over Leibniz's (see arithmetization of analysis). Dirichlet and Lobachevsky are traditionally credited with independently giving the modern "formal" definition of a function as a relation in which every first element has a unique second element. Eves asserts that "the student of mathematics usually meets the Dirichlet definition of function in his introductory course in calculus",[7] but Dirichlet's claim to this formalization is disputed by Imre Lakatos: "There is no such definition in Dirichlet's works at all. But there is ample evidence that he had no idea of this concept. In his [1837], for instance, when he discusses piecewise continuous functions, he says that at points of discontinuity the function has two values: ..." (Proofs and Refutations, 151, Cambridge University Press 1976.)

In the context of "the Differential Calculus" George Boole defined (circa 1849) the notion of a function as follows:

"That quantity whose variation is uniform . . . is called the independent variable. That quantity whose variation is referred to the variation of the former is said to be a function of it. The Differential calculus enables us in every case to pass from the function to the limit. This it does by a certain Operation. But in the very Idea of an Operation is . . . the idea of an inverse operation. To effect that inverse operation in the present instance is the business of the Int[egral] Calculus."[8]

The logician's "function" prior to 1850
Logicians of this time were primarily involved with analyzing syllogisms (the 2000-year-old Aristotelian forms and otherwise), or as Augustus De Morgan (1847) stated it: "the examination of that part of reasoning which depends upon the manner in which inferences are formed, and the investigation of general maxims and rules for constructing arguments".[9] At this time the notion of (logical) "function" is not explicit, but at least in the work of De Morgan and George Boole it is implied: we see abstraction of the argument forms, the introduction of variables, the introduction of a symbolic algebra with respect to these variables, and some of the notions of set theory.

De Morgan's 1847 "FORMAL LOGIC OR, The Calculus of Inference, Necessary and Probable" observes that "[a] logical truth depends upon the structure of the statement, and not upon the particular matters spoken of"; he wastes no time (preface page i) abstracting: "In the form of the proposition, the copula is made as abstract as the terms". He immediately (p. 1) casts what he calls "the proposition" (present-day propositional function or relation) into a form such as "X is Y", where the symbols X, "is", and Y represent, respectively, the subject, copula, and predicate. While the word "function" does not appear, the notion of "abstraction" is there, "variables" are there, the notion of inclusion in his symbolism "all of the Δ is in the О" (p. 9) is there, and lastly a new symbolism for logical analysis of the notion of "relation" (he uses the word with respect to this example "X)Y" (p. 75)) is there:
" A1 X)Y To take an X it is necessary to take a Y" [or To be an X it is necessary to be a Y]
" A1 Y)X To take a Y it is sufficient to take an X" [or To be a Y it is sufficient to be an X], etc.

In his 1848 The Nature of Logic Boole asserts that "logic . . . is in a more especial sense the science of reasoning by signs", and he briefly discusses the notions of "belonging to" and "class": "An individual may possess a great variety of attributes and thus belonging to a great variety of different classes".[10] Like De Morgan he uses the notion of "variable" drawn from analysis; he gives an example of "represent[ing] the class oxen by x and that of horses by y and the conjunction and by the sign + . . . we might represent the aggregate class oxen and horses by x + y".[11]


The logicians' "function" 1850–1950
Eves observes "that logicians have endeavored to push down further the starting level of the definitional development of mathematics and to derive the theory of sets, or classes, from a foundation in the logic of propositions and propositional functions".[12] But by the late 19th century the logicians' research into the foundations of mathematics was undergoing a major split. The direction of the first group, the Logicists, can probably be summed up best by Bertrand Russell 1903:9 – "to fulfil two objects, first, to show that all mathematics follows from symbolic logic, and secondly to discover, as far as possible, what are the principles of symbolic logic itself." The second group of logicians, the set-theorists, emerged with Georg Cantor's "set theory" (1870–1890) but were driven forward partly as a result of Russell's discovery of a paradox that could be derived from Frege's conception of "function", but also as a reaction against Russell's proposed solution.[13] Zermelo's set-theoretic response was his 1908 Investigations in the foundations of set theory I – the first axiomatic set theory; here too the notion of "propositional function" plays a role.

George Boole's The Laws of Thought 1854; John Venn's Symbolic Logic 1881
In his An Investigation into the laws of thought Boole now defined a function in terms of a symbol x as follows: "8. Definition. – Any algebraic expression involving symbol x is termed a function of x, and may be represented by the abbreviated form f(x)".[14] Boole then used algebraic expressions to define both algebraic and logical notions, e.g., 1−x is logical NOT(x), xy is the logical AND(x, y), x + y is the logical OR(x, y), x(x+y) is xx+xy, and "the special law" xx = x² = x.[15]

In his 1881 Symbolic Logic Venn was using the words "logical function" and the contemporary symbolism (x = f(y), y = f−1(x), cf page xxi) plus the circle-diagrams historically associated with Venn to describe "class relations",[16] the notions "'quantifying' our predicate", "propositions in respect of their extension", "the relation of inclusion and exclusion of two classes to one another", and "propositional function" (all on p. 10), the bar over a variable to indicate not-x (page 43), etc. Indeed he equated unequivocally the notion of "logical function" with "class" [modern "set"]: "... on the view adopted in this book, f(x) never stands for anything but a logical class. It may be a compound class aggregated of many simple classes; it may be a class indicated by certain inverse logical operations, it may be composed of two groups of classes equal to one another, or what is the same thing, their difference declared equal to zero, that is, a logical equation. But however composed or derived, f(x) with us will never be anything else than a general expression for such logical classes of things as may fairly find a place in ordinary Logic".[17]

Frege's Begriffsschrift 1879
Gottlob Frege's Begriffsschrift (1879) preceded Giuseppe Peano (1889), but Peano had no knowledge of Frege 1879 until after he had published his 1889.[18] Both writers strongly influenced Bertrand Russell (1903).
Russell in turn influenced much of 20th-century mathematics and logic through his Principia Mathematica (1913) jointly authored with Alfred North Whitehead. At the outset Frege abandons the traditional "concepts subject and predicate", replacing them with argument and function respectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the words if, and, not, or, there is, some, all, and so forth, deserves attention".[19] Frege begins his discussion of "function" with an example: Begin with the expression[20] "Hydrogen is lighter than carbon dioxide". Now remove the sign for hydrogen (i.e., the word "hydrogen") and replace it with the sign for oxygen (i.e., the word "oxygen"); this makes a second statement. Do this again (using either statement) and substitute the sign for nitrogen (i.e., the word "nitrogen") and note that "This changes the meaning in such a way that "oxygen" or "nitrogen" enters into the relations in which "hydrogen" stood before".[21] There are three statements: • "Hydrogen is lighter than carbon dioxide." • "Oxygen is lighter than carbon dioxide." • "Nitrogen is lighter than carbon dioxide." Now observe in all three a "stable component, representing the totality of [the] relations";[22] call this the function, i.e., "... is lighter than carbon dioxide", is the function. Frege calls the argument of the function "[t]he sign [e.g., hydrogen, oxygen, or nitrogen], regarded as replaceable by others that denotes the object standing in these relations".[23] He notes that we could have derived the function as "Hydrogen is lighter than . . .." as well, with an argument position on the right; the exact observation is made by Peano (see more below). Finally, Frege allows for the case of two (or more arguments). 
For example, remove "carbon dioxide" to yield the invariant part (the function) as: • "... is lighter than ... " The one-argument function Frege generalizes into the form Φ(A) where A is the argument and Φ( ) represents the function, whereas the two-argument function he symbolizes as Ψ(A, B) with A and B the arguments and Ψ( , ) the


function, and cautions that "in general Ψ(A, B) differs from Ψ(B, A)". Using his unique symbolism he translates for the reader the following symbolism: "We can read |--- Φ(A) as "A has the property Φ. |--- Ψ(A, B) can be translated by "B stands in the relation Ψ to A" or "B is a result of an application of the procedure Ψ to the object A".[24]

Peano 1889 The Principles of Arithmetic 1889
Peano defined the notion of "function" in a manner somewhat similar to Frege, but without the precision.[25] First Peano defines the sign "K means class, or aggregate of objects",[26] the objects of which satisfy three simple equality-conditions:[27] a = a; (a = b) = (b = a); IF ((a = b) AND (b = c)) THEN (a = c). He then introduces φ, "a sign or an aggregate of signs such that if x is an object of the class s, the expression φx denotes a new object". Peano adds two conditions on these new objects: first, that the three equality-conditions hold for the objects φx; secondly, that "if x and y are objects of class s and if x = y, we assume it is possible to deduce φx = φy".[28] Given that all these conditions are met, φ is a "function presign". Likewise he identifies a "function postsign". For example, if φ is the function presign a+, then φx yields a+x, whereas if φ is the function postsign +a, then xφ yields x+a.[29]

Bertrand Russell's The Principles of Mathematics 1903
While the influence of Cantor and Peano was paramount,[30] in Appendix A "The Logical and Arithmetical Doctrines of Frege" of The Principles of Mathematics, Russell arrives at a discussion of Frege's notion of function, "...a point in which Frege's work is very important, and requires careful examination".[31] In response to his 1902 exchange of letters with Frege about the contradiction he discovered in Frege's Begriffsschrift, Russell tacked this section on at the last moment. For Russell the bedeviling notion is that of "variable": "6.
Mathematical propositions are not only characterized by the fact that they assert implications, but also by the fact that they contain variables. The notion of the variable is one of the most difficult with which logic has to deal. For the present, I openly wish to make it plain that there are variables in all mathematical propositions, even where at first sight they might seem to be absent. . . . We shall find always, in all mathematical propositions, that the words any or some occur; and these words are the marks of a variable and a formal implication".[32] As expressed by Russell, "the process of transforming constants in a proposition into variables leads to what is called generalization, and gives us, as it were, the formal essence of a proposition ... So long as any term in our proposition can be turned into a variable, our proposition can be generalized; and so long as this is possible, it is the business of mathematics to do it";[33] these generalizations Russell named "propositional functions".[34] Indeed he cites and quotes from Frege's Begriffsschrift and presents a vivid example from Frege's 1891 Function und Begriff: that "the essence of the arithmetical function 2x³ + x is what is left when the x is taken away, i.e., in the above instance 2( )³ + ( ). The argument x does not belong to the function but the two taken together make the whole".[31] Russell agreed with Frege's notion of "function" in one sense: "He regards functions – and in this I agree with him – as more fundamental than predicates and relations", but Russell rejected Frege's "theory of subject and assertion", in particular "he thinks that, if a term a occurs in a proposition, the proposition can always be analysed into a and an assertion about a".[31]

Evolution of Russell's notion of "function" 1908–1913
Russell would carry his ideas forward in his 1908 Mathematical logic as based on the theory of types and into his and Whitehead's 1910–1913 Principia Mathematica.
By the time of Principia Mathematica Russell, like Frege, considered the propositional function fundamental: "Propositional functions are the fundamental kind from which the more usual kinds of function, such as "sin x" or "log x" or "the father of x" are derived. These derivative functions . . . are called "descriptive functions". The functions of propositions . . . are a particular case of propositional functions".[35]


Propositional functions: Because his terminology is different from the contemporary, the reader may be confused by Russell's "propositional function". An example may help. Russell writes a propositional function in its raw form, e.g., as φŷ: "ŷ is hurt". (Observe the circumflex or "hat" over the variable y). For our example, we will assign just 4 values to the variable ŷ: "Bob", "This bird", "Emily the rabbit", and "y". Substitution of one of these values for variable ŷ yields a proposition; this proposition is called a "value" of the propositional function. In our example there are four values of the propositional function, e.g., "Bob is hurt", "This bird is hurt", "Emily the rabbit is hurt" and "y is hurt." A proposition, if it is significant—i.e., if its truth is determinate—has a truth-value of truth or falsity. If a proposition's truth value is "truth" then the variable's value is said to satisfy the propositional function. Finally, per Russell's definition, "a class [set] is all objects satisfying some propositional function" (p. 23). Note the word "all" – this is how the contemporary notions of "For all ∀" and "there exists at least one instance ∃" enter the treatment (p. 15).

To continue the example: Suppose (from outside the mathematics/logic) one determines that the proposition "Bob is hurt" has a truth value of "falsity", "This bird is hurt" has a truth value of "truth", "Emily the rabbit is hurt" has an indeterminate truth value because "Emily the rabbit" doesn't exist, and "y is hurt" is ambiguous as to its truth value because the argument y itself is ambiguous. While the two propositions "Bob is hurt" and "This bird is hurt" are significant (both have truth values), only the value "This bird" of the variable ŷ satisfies the propositional function φŷ: "ŷ is hurt".
When one goes to form the class α: φŷ: "ŷ is hurt", only "This bird" is included, given the four values "Bob", "This bird", "Emily the rabbit" and "y" for variable ŷ and their respective truth-values: falsity, truth, indeterminate, ambiguous.

Russell defines functions of propositions with arguments, and truth-functions f(p).[36] For example, suppose one were to form the "function of propositions with arguments" p1: "NOT(p) AND q" and assign its variables the values of p: "Bob is hurt" and q: "This bird is hurt". (We are restricted to the logical linkages NOT, AND, OR and IMPLIES, and we can only assign "significant" propositions to the variables p and q). Then the "function of propositions with arguments" is p1: NOT("Bob is hurt") AND "This bird is hurt". To determine the truth value of this "function of propositions with arguments" we submit it to a "truth function", e.g., f(p1): f( NOT("Bob is hurt") AND "This bird is hurt" ), which yields a truth value of "truth".

The notion of a "many-one" functional relation: Russell first discusses the notion of "identity", then defines a descriptive function (pages 30ff) as the unique value ιx that satisfies the (2-variable) propositional function (i.e., "relation") φŷ. N.B. The reader should be warned here that the order of the variables is reversed! y is the independent variable and x is the dependent variable, e.g., x = sin(y).[37] Russell symbolizes the descriptive function as "the object standing in relation to y": R'y =DEF (ιx)(x R y). Russell repeats that "R'y is a function of y, but not a propositional function [sic]; we shall call it a descriptive function. All the ordinary functions of mathematics are of this kind. Thus in our notation "sin y" would be written "sin 'y", and "sin" would stand for the relation sin 'y has to y".[38]


Hardy 1908
Hardy 1908, pp. 26–28 defined a function as a relation between two variables x and y such that "to some values of x at any rate correspond values of y." He neither required the function to be defined for all values of x nor to associate each value of x to a single value of y. This broad definition of a function encompasses more relations than are ordinarily considered functions in contemporary mathematics.


The Formalist's "function": David Hilbert's axiomatization of mathematics (1904–1927)
David Hilbert set himself the goal of "formalizing" classical mathematics "as a formal axiomatic theory, and this theory shall be proved to be consistent, i.e., free from contradiction".[39] In his 1927 The Foundations of Mathematics Hilbert frames the notion of function in terms of the existence of an "object":

13. A(a) → A(ε(A))

"Here ε(A) stands for an object of which the proposition A(a) certainly holds if it holds of any object at all; let us call ε the logical ε-function".[40] [The arrow indicates "implies".]

Hilbert then illustrates the three ways in which the ε-function is to be used: firstly as the "for all" and "there exists" notions, secondly to represent the "object of which [a proposition] holds", and lastly how to cast it into the choice function.

Recursion theory and computability: But the unexpected outcome of Hilbert's and his student Bernays's effort was failure; see Gödel's incompleteness theorems of 1931. At about the same time, in an effort to solve Hilbert's Entscheidungsproblem, mathematicians set about to define what was meant by an "effectively calculable function" (Alonzo Church 1936), i.e., "effective method" or "algorithm", that is, an explicit, step-by-step procedure that would succeed in computing a function. Various models for algorithms appeared, in rapid succession, including Church's lambda calculus (1936), Stephen Kleene's μ-recursive functions (1936) and Alan Turing's (1936–7) notion of replacing human "computers" with utterly-mechanical "computing machines" (see Turing machines). It was shown that all of these models could compute the same class of computable functions. Church's thesis holds that this class of functions exhausts all the number-theoretic functions that can be calculated by an algorithm.
The outcomes of these efforts were vivid demonstrations that, in Turing's words, "there can be no general process for determining whether a given formula U of the functional calculus K [Principia Mathematica] is provable";[41] see more at Independence (mathematical logic) and Computability theory.

Development of the set-theoretic definition of "function"
Set theory began with the work of the logicians with the notion of "class" (modern "set") for example De Morgan (1847), Jevons (1880), Venn 1881, Frege 1879 and Peano (1889). It was given a push by Georg Cantor's attempt to define the infinite in set-theoretic treatment (1870–1890) and a subsequent discovery of an antinomy (contradiction, paradox) in this treatment (Cantor's paradox), by Russell's discovery (1902) of an antinomy in Frege's 1879 (Russell's paradox), by the discovery of more antinomies in the early 20th century (e.g., the 1897 Burali-Forti paradox and the 1905 Richard paradox), and by resistance to Russell's complex treatment of logic[42] and dislike of his axiom of reducibility[43] (1908, 1910–1913) that he proposed as a means to evade the antinomies. Russell's paradox 1902 In 1902 Russell sent a letter to Frege pointing out that Frege's 1879 Begriffsschrift allowed a function to be an argument of itself: "On the other hand, it may also be that the argument is determinate and the function indeterminate . . .."[44] From this unconstrained situation Russell was able to form a paradox: "You state ... that a function, too, can act as the indeterminate element. This I formerly believed, but now this view seems doubtful to me because of the following contradiction. Let w be the predicate: to be a predicate that cannot be predicated of itself. Can w be predicated of itself?"[45] Frege responded promptly that "Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic".[46]

From this point forward development of the foundations of mathematics became an exercise in how to dodge "Russell's paradox", framed as it was in "the bare [set-theoretic] notions of set and element".[47]

Zermelo's set theory (1908) modified by Skolem (1922)
The notion of "function" appears as Zermelo's axiom III—the Axiom of Separation (Axiom der Aussonderung). This axiom constrains us to use a propositional function Φ(x) to "separate" a subset MΦ from a previously formed set M: "AXIOM III. (Axiom of separation). Whenever the propositional function Φ(x) is definite for all elements of a set M, M possesses a subset MΦ containing as elements precisely those elements x of M for which Φ(x) is true".[48] As there is no universal set—sets originate by way of Axiom II from elements of (non-set) domain B – "...this disposes of the Russell antinomy so far as we are concerned".[49] But Zermelo's "definite criterion" is imprecise, and was later made precise by Weyl, Fraenkel, Skolem, and von Neumann.[50]

In fact Skolem in his 1922 referred to this "definite criterion" or "property" as a "definite proposition": "... a finite expression constructed from elementary propositions of the form a ε b or a = b by means of the five operations [logical conjunction, disjunction, negation, universal quantification, and existential quantification]".[51] van Heijenoort summarizes: "A property is definite in Skolem's sense if it is expressed . . . by a well-formed formula in the simple predicate calculus of first order in which the sole predicate constants are ε and possibly, =. ... Today an axiomatization of set theory is usually embedded in a logical calculus, and it is Weyl's and Skolem's approach to the formulation of the axiom of separation that is generally adopted".[52]

In this quote the reader may observe a shift in terminology: nowhere is mentioned the notion of "propositional function", but rather one sees the words "formula", "predicate calculus", "predicate", and "logical calculus".
This shift in terminology is discussed more in the section that covers "function" in contemporary set theory.

The Wiener–Hausdorff–Kuratowski "ordered pair" definition 1914–1921
The history of the notion of "ordered pair" is not clear. As noted above, Frege (1879) proposed an intuitive ordering in his definition of a two-argument function Ψ(A, B). Norbert Wiener in his 1914 (see below) observes that his own treatment essentially "revert(s) to Schröder's treatment of a relation as a class of ordered couples".[53] Russell (1903) considered the definition of a relation (such as Ψ(A, B)) as a "class of couples" but rejected it: "There is a temptation to regard a relation as definable in extension as a class of couples. This is the formal advantage that it avoids the necessity for the primitive proposition asserting that every couple has a relation holding between no other pairs of terms. But it is necessary to give sense to the couple, to distinguish the referent [domain] from the relatum [converse domain]: thus a couple becomes essentially distinct from a class of two terms, and must itself be introduced as a primitive idea. . . . It seems therefore more correct to take an intensional view of relations, and to identify them rather with class-concepts than with classes."[54]

By 1910–1913 and Principia Mathematica Russell had given up on the requirement for an intensional definition of a relation, stating that "mathematics is always concerned with extensions rather than intensions" and "Relations, like classes, are to be taken in extension".[55] To demonstrate the notion of a relation in extension Russell now embraced the notion of ordered couple: "We may regard a relation ... as a class of couples ... the relation determined by φ(x, y) is the class of couples (x, y) for which φ(x, y) is true".[56] In a footnote he clarified his notion and arrived at this definition: "Such a couple has a sense, i.e., the couple (x, y) is different from the couple (y, x) unless x = y. We shall call it a "couple with sense," ... it may also be called an ordered couple".[56]


But he goes on to say that he would not introduce the ordered couples further into his "symbolic treatment"; he proposes his "matrix" and his unpopular axiom of reducibility in their place.

An attempt to solve the problem of the antinomies led Russell to propose his "doctrine of types" in an appendix B of his 1903 The Principles of Mathematics.[57] In a few years he would refine this notion and propose in his 1908 The Theory of Types two axioms of reducibility, the purpose of which was to reduce (single-variable) propositional functions and (dual-variable) relations to a "lower" form (and ultimately into a completely extensional form); he and Alfred North Whitehead would carry this treatment over to Principia Mathematica 1910–1913 with a further refinement called "a matrix".[58] The first axiom is *12.1; the second is *12.11. To quote Wiener, the second axiom *12.11 "is involved only in the theory of relations".[59] Both axioms, however, were met with skepticism and resistance; see more at Axiom of reducibility.

By 1914 Norbert Wiener, using Whitehead and Russell's symbolism, had eliminated axiom *12.11 (the "two-variable" (relational) version of the axiom of reducibility) by expressing a relation as an ordered pair "using the null set. At approximately the same time, Hausdorff (1914, p. 32) gave the definition of the ordered pair (a, b) as { {a, 1}, {b, 2} }. A few years later Kuratowski (1921) offered a definition that has been widely used ever since, namely { {a, b}, {a} }".[60] As noted by Suppes (1960), "This definition ... was historically important in reducing the theory of relations to the theory of sets".[61]

Observe that while Wiener "reduced" the relational *12.11 form of the axiom of reducibility, he did not reduce nor otherwise change the propositional-function form *12.1; indeed he declared this "essential to the treatment of identity, descriptions, classes and relations".[62]

Schönfinkel's notion of "function" as a many-one "correspondence" 1924

Where exactly the general notion of "function" as a many-one relationship derives from is unclear. Russell in his 1920 Introduction to Mathematical Philosophy states that "It should be observed that all mathematical functions result from one-many [sic – contemporary usage is many-one] relations ... Functions in this sense are descriptive functions".[63] A reasonable possibility is the Principia Mathematica notion of "descriptive function" – R'y =DEF (ιx)(x R y): "the singular object that has a relation R to y".

Whatever the case, by 1924 Moses Schönfinkel had expressed the notion, claiming it to be "well known": "As is well known, by function we mean in the simplest case a correspondence between the elements of some domain of quantities, the argument domain, and those of a domain of function values ... such that to each argument value there corresponds at most one function value".[64] According to Willard Quine, Schönfinkel's 1924 "provide[s] for ... the whole sweep of abstract set theory. The crux of the matter is that Schönfinkel lets functions stand as arguments. ¶ For Schönfinkel, substantially as for Frege, classes are special sorts of functions. They are propositional functions, functions whose values are truth values. All functions, propositional and otherwise, are for Schönfinkel one-place functions".[65] Remarkably, Schönfinkel reduces all mathematics to an extremely compact functional calculus consisting of only three functions: constancy, fusion (i.e., composition), and mutual exclusivity.
Quine notes that Haskell Curry (1958) carried this work forward "under the head of combinatory logic".[66]
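Kuratowski's definition of the ordered pair can be checked mechanically. A small sketch in Python, using frozensets so that the constructed pairs are themselves valid set elements (illustrative only, not from the source):

```python
# Kuratowski's 1921 definition: (a, b) := { {a, b}, {a} }.
def kpair(a, b):
    return frozenset({frozenset({a, b}), frozenset({a})})

# The defining property of an ordered pair:
# (a, b) = (c, d) if and only if a = c and b = d.
assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)  # unlike a plain two-element set, order matters

# Degenerate case: {a, a} = {a}, so (a, a) collapses to { {a} } and
# the definition still behaves correctly.
assert kpair(3, 3) == frozenset({frozenset({3})})
print("Kuratowski pairs behave as ordered pairs")
```

The point of the construction, as the passage above notes, is that once pairs are sets, relations (classes of pairs) and hence functions need no primitive machinery beyond set membership.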


Von Neumann's set theory 1925

By 1925 Abraham Fraenkel (1922) and Thoralf Skolem (1922) had amended Zermelo's set theory of 1908. But von Neumann was not convinced that this axiomatization could not lead to the antinomies.[67] So he proposed his own theory, his 1925 An axiomatization of set theory. It explicitly contains a "contemporary", set-theoretic version of the notion of "function":

"[Unlike Zermelo's set theory] [w]e prefer, however, to axiomatize not 'set' but 'function'. The latter notion certainly includes the former. (More precisely, the two notions are completely equivalent, since a function can be regarded as a set of pairs, and a set as a function that can take two values.)".[68]

His axiomatization creates two "domains of objects" called "arguments" (I-objects) and "functions" (II-objects); where they overlap are the "argument functions" (I-II objects). He introduces two "universal two-variable operations": (i) the operation [x, y], read "the value of the function x for the argument y", and (ii) the operation (x, y), read "the ordered pair x, y", whose variables x and y must both be arguments and which itself produces an argument (x, y). To clarify the function pair he notes that "Instead of f(x) we write [f, x] to indicate that f, just like x, is to be regarded as a variable in this procedure". And to avoid the "antinomies of naive set theory, in Russell's first of all ... we must forgo treating certain functions as arguments".[69] He adopts a notion from Zermelo to restrict these "certain functions".[70]
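Von Neumann's parenthetical remark, that a function can be regarded as a set of pairs and a set as a two-valued function, is easy to make concrete. A hedged Python sketch (the names square, apply, and chi are illustrative, not from the source):

```python
# A function regarded as a set of ordered pairs (its graph):
square = {(x, x * x) for x in range(5)}

def apply(f, x):
    # [f, x] in von Neumann's notation: the unique value paired with x in f.
    values = [b for (a, b) in f if a == x]
    assert len(values) == 1, "f is not single-valued at this argument"
    return values[0]

assert apply(square, 3) == 9

# Conversely, a set regarded as a function that takes two values:
# its characteristic (indicator) function.
S = {1, 2, 3}

def chi(x):
    return 1 if x in S else 0

assert chi(2) == 1 and chi(7) == 0
```

This equivalence (graph-of-pairs versus two-valued characteristic function) is exactly the sense in which von Neumann claims the notions "set" and "function" are interchangeable.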


Since 1950
Notion of "function" in contemporary set theory

Both axiomatic and naive forms of Zermelo's set theory as modified by Fraenkel (1922) and Skolem (1922) define "function" as a relation, define a relation as a set of ordered pairs, and define an ordered pair as a set of two "dissymmetric" sets.

While the reader of Suppes (1960) Axiomatic Set Theory or Halmos (1970) Naive Set Theory observes the use of function-symbolism in the axiom of separation, e.g., φ(x) (in Suppes) and S(x) (in Halmos), they will see no mention of "proposition" or even "first order predicate calculus". In their place are "expressions of the object language", "atomic formulae", "primitive formulae", and "atomic sentences". Kleene 1952 defines the words as follows: "In word languages, a proposition is expressed by a sentence. Then a 'predicate' is expressed by an incomplete sentence or sentence skeleton containing an open place. For example, '___ is a man' expresses a predicate ... The predicate is a propositional function of one variable. Predicates are often called 'properties' ... The predicate calculus will treat of the logic of predicates in this general sense of 'predicate', i.e., as propositional function".[71]

The reason for the disappearance of the words "propositional function", e.g., in Suppes (1960) and Halmos (1970), is explained by Alfred Tarski 1946, together with further explanation of the terminology: "An expression such as x is an integer, which contains variables and, on replacement of these variables by constants, becomes a sentence, is called a SENTENTIAL [i.e., propositional, cf. his index] FUNCTION. But mathematicians, by the way, are not very fond of this expression, because they use the term 'function' with a different meaning. ... sentential functions and sentences composed entirely of mathematical symbols (and not words of everyday language), such as: x + y = 5, are usually referred to by mathematicians as FORMULAE. In place of 'sentential function' we shall sometimes simply say 'sentence' – but only in cases where there is no danger of any misunderstanding".[72]

For his part Tarski calls the relational form of function a "FUNCTIONAL RELATION or simply a FUNCTION".[73] After a discussion of this "functional relation" he asserts that: "The concept of a function which we are considering now differs essentially from the concepts of a sentential [propositional] and of a designatory function ... Strictly speaking ... [these] do not belong to the domain of logic or mathematics; they denote certain categories of expressions which serve to compose logical and

mathematical statements, but they do not denote things treated of in those statements. ... The term 'function' in its new sense, on the other hand, is an expression of a purely logical character; it designates a certain type of things dealt with in logic and mathematics."[74] See more about "truth under an interpretation" at Alfred Tarski.

Further developments

The idea of structure-preserving functions, or homomorphisms, led to the abstract notion of morphism, the key concept of category theory. More recently, the concept of functor has been used as an analogue of a function in category theory.[75]


Notes

[1] The history of the function concept in mathematics (http://www.educ.fc.ul.pt/docentes/jponte/docs-uk/92 Ponte (Functions).doc) J. P. Ponte, 1992
[2] Another short but useful history is found in Eves 1990 pages 234–235
[3] Thompson, S. P.; Gardner, M.; Calculus Made Easy. 1998. Pages 10–11. ISBN 0312185480.
[4] Eves dates Leibniz's first use to the year 1694 and also similarly relates the usage to "as a term to denote any quantity connected with a curve, such as the coordinates of a point on the curve, the slope of the curve, and so on" (Eves 1990:234).
[5] Eves 1990:234
[6] Eves 1990:235
[7] Eves asserts that Dirichlet "arrived at the following formulation: '[The notion of] a variable is a symbol that represents any one of a set of numbers; if two variables x and y are so related that whenever a value is assigned to x there is automatically assigned, by some rule or correspondence, a value to y, then we say y is a (single-valued) function of x. The variable x ... is called the independent variable and the variable y is called the dependent variable. The permissible values that x may assume constitute the domain of definition of the function, and the values taken on by y constitute the range of values of the function ... it stresses the basic idea of a relationship between two sets of numbers'" Eves 1990:235.
[8] Boole circa 1849 Elementary Treatise on Logic not mathematical including philosophy of mathematical reasoning in Grattan-Guinness and Bornet 1997:40
[9] De Morgan 1847:1
[10] Boole 1848 in Grattan-Guinness and Bornet 1997:1, 2
[11] Boole 1848 in Grattan-Guinness and Bornet 1997:6
[12] Eves 1990:222
[13] Some of this criticism is intense: see the introduction by Willard Quine preceding Russell 1908 Mathematical logic as based on the theory of types in van Heijenoort 1967:151. See also von Neumann's introduction to his 1925 Axiomatization of Set Theory in van Heijenoort 1967:395
[14] Boole 1854:86
[15] cf Boole 1854:31–34. Boole discusses this "special law" with its two algebraic roots x = 0 or 1 on page 37.
[16] Although he gives others credit, cf Venn 1881:6
[17] Venn 1881:86–87
[18] cf van Heijenoort's introduction to Peano 1889 in van Heijenoort 1967. For most of his logical symbolism and notions of propositions Peano credits "many writers, especially Boole". In footnote 1 he credits Boole 1847, 1848, 1854, Schröder 1877, Peirce 1880, Jevons 1883, MacColl 1877, 1878, 1878a, 1880; cf van Heijenoort 1967:86.
[19] Frege 1879 in van Heijenoort 1967:7
[20] Frege's exact words are "expressed in our formula language" and "expression", cf Frege 1879 in van Heijenoort 1967:21–22.
[21] This example is from Frege 1879 in van Heijenoort 1967:21–22
[22] Frege 1879 in van Heijenoort 1967:21–22
[23] Frege cautions that the function will have "argument places" where the argument should be placed as distinct from other places where the same sign might appear. But he does not go deeper into how to signify these positions, and Russell 1903 observes this.
[24] Gottlob Frege (1879) in van Heijenoort 1967:21–24
[25] "...Peano intends to cover much more ground than Frege does in his Begriffsschrift and his subsequent works, but he does not till that ground to any depth comparable to what Frege does in his self-allotted field", van Heijenoort 1967:85
[26] van Heijenoort 1967:89.
[27] van Heijenoort 1967:91.
[28] All symbols used here are from Peano 1889 in van Heijenoort 1967:91.
[29] cf van Heijenoort 1967:91
[30] "In Mathematics, my chief obligations, as is indeed evident, are to Georg Cantor and Professor Peano. If I had become acquainted sooner with the work of Professor Frege, I should have owed a great deal to him, but as it is I arrived independently at many results which he had already established", Russell 1903:viii. He also highlights Boole's 1854 Laws of Thought and Ernst Schröder's three volumes of "non-Peanesque methods" 1890, 1891, and 1895, cf Russell 1903:10
[31] Russell 1903:505
[32] Russell 1903:5–6
[33] Russell 1903:7
[34] Russell 1903:19
[35] Russell 1910–1913:15
[36] Whitehead and Russell 1910–1913:6, 8 respectively
[37] Something similar appears in Tarski 1946. Tarski refers to a "relational function" as a "ONE-MANY [sic!] or FUNCTIONAL RELATION or simply a FUNCTION". Tarski comments about this reversal of variables on page 99.
[38] Whitehead and Russell 1910–1913:31. This paper is important enough that van Heijenoort reprinted it as Whitehead and Russell 1910 Incomplete symbols: Descriptions with commentary by W. V. Quine in van Heijenoort 1967:216–223
[39] Kleene 1952:53
[40] Hilbert in van Heijenoort 1967:466
[41] Turing 1936–7 in Martin Davis The Undecidable 1965:145
[42] cf Kleene 1952:45
[43] "The nonprimitive and arbitrary character of this axiom drew forth severe criticism, and much of subsequent refinement of the logistic program lies in attempts to devise some method of avoiding the disliked axiom of reducibility" Eves 1990:268.
[44] Frege 1879 in van Heijenoort 1967:23
[45] Russell (1902) Letter to Frege in van Heijenoort 1967:124
[46] Frege (1902) Letter to Russell in van Heijenoort 1967:127
[47] van Heijenoort's commentary to Russell's Letter to Frege in van Heijenoort 1967:124
[48] The original uses an Old High German symbol in place of Φ, cf Zermelo 1908a in van Heijenoort 1967:202
[49] Zermelo 1908a in van Heijenoort 1967:203
[50] cf van Heijenoort's commentary before Zermelo 1908 Investigations in the foundations of set theory I in van Heijenoort 1967:199
[51] Skolem 1922 in van Heijenoort 1967:292–293
[52] van Heijenoort's introduction to Abraham Fraenkel's The notion "definite" and the independence of the axiom of choice in van Heijenoort 1967:285.
[53] But Wiener offers no date or reference, cf Wiener 1914 in van Heijenoort 1967:226
[54] Russell 1903:99
[55] Both quotes from Whitehead and Russell 1913:26
[56] Whitehead and Russell 1913:26
[57] Russell 1903:523–529
[58] *12 The Hierarchy of Types and the Axiom of Reducibility in Principia Mathematica 1913:161
[59] Wiener 1914 in van Heijenoort 1967:224
[60] Commentary by van Heijenoort preceding Norbert Wiener's (1914) A simplification of the logic of relations in van Heijenoort 1967:224.
[61] Suppes 1960:32. This same point appears in van Heijenoort's commentary before Wiener (1914) in van Heijenoort 1967:224.
[62] Wiener 1914 in van Heijenoort 1967:224
[63] Russell 1920:46
[64] Schönfinkel (1924) On the building blocks of mathematical logic in van Heijenoort 1967:359
[65] Commentary by W. V. Quine preceding Schönfinkel (1924) On the building blocks of mathematical logic in van Heijenoort 1967:356.
[66] cf Curry and Feys 1958; Quine in van Heijenoort 1967:357.
[67] von Neumann's critique of the history observes the split between the logicists (e.g., Russell et al.), the set-theorists (e.g., Zermelo et al.), and the formalists (e.g., Hilbert), cf von Neumann 1925 in van Heijenoort 1967:394–396.
[68] von Neumann 1925 in van Heijenoort 1967:396
[69] All quotes from von Neumann 1925 in van Heijenoort 1967:397–398
[70] This notion is not easy to summarize; see more at van Heijenoort 1967:397.
[71] Kleene 1952:143–145
[72] Tarski 1946:5
[73] Tarski 1946:98
[74] Tarski 1946:102
[75] John C. Baez; James Dolan (1998). Categorification. arXiv:math/9802029.




References

• Anton, Howard (1980), Calculus with Analytical Geometry, Wiley, ISBN 978-0-471-03248-9
• Bartle, Robert G. (1976), The Elements of Real Analysis (2nd ed.), Wiley, ISBN 978-0-471-05464-1
• Husch, Lawrence S. (2001), Visual Calculus, University of Tennessee, retrieved 2007-09-27
• Katz, Robert (1964), Axiomatic Analysis, D. C. Heath and Company.
• Ponte, João Pedro (1992), "The history of the concept of function and some educational implications", The Mathematics Educator 3 (2): 3–8
• Thomas, George B.; Finney, Ross L. (1995), Calculus and Analytic Geometry (9th ed.), Addison-Wesley, ISBN 978-0-201-53174-9
• Youschkevitch, A. P. (1976), "The concept of function up to the middle of the 19th century", Archive for History of Exact Sciences 16 (1): 37–85, doi:10.1007/BF00348305.
• Monna, A. F. (1972), "The concept of function in the 19th and 20th centuries, in particular with regard to the discussions between Baire, Borel and Lebesgue", Archive for History of Exact Sciences 9 (1): 57–84, doi:10.1007/BF00348540.
• Kleiner, Israel (1989), "Evolution of the Function Concept: A Brief Survey", The College Mathematics Journal (Mathematical Association of America) 20 (4): 282–300, doi:10.2307/2686848, JSTOR 2686848.
• Ruthing, D. (1984), "Some definitions of the concept of function from Bernoulli, Joh. to Bourbaki, N.", Mathematical Intelligencer 6 (4): 72–77.
• Dubinsky, Ed; Harel, Guershon (1992), The Concept of Function: Aspects of Epistemology and Pedagogy, Mathematical Association of America, ISBN 0883850818.
• Boole, George (1854), An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities, Walton and Maberly, London UK; Macmillan and Company, Cambridge UK. Republished as a googlebook.
• Eves, Howard (1990), Foundations and Fundamental Concepts of Mathematics: Third Edition, Dover Publications, Inc., Mineola, NY, ISBN 0-486-69609-X (pbk)
• Frege, Gottlob (1879), Begriffsschrift: eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, Halle
• Grattan-Guinness, Ivor and Bornet, Gérard (1997), George Boole: Selected Manuscripts on Logic and its Philosophy, Springer-Verlag, Berlin, ISBN 3-7643-5456-9
• Halmos, Paul R. (1970), Naive Set Theory, Springer-Verlag, New York, ISBN 0-387-90092-6.
• Hardy, Godfrey Harold (1908), A Course of Pure Mathematics, Cambridge University Press (published 1993), ISBN 978-0-521-09227-2
• Reichenbach, Hans (1947), Elements of Symbolic Logic, Dover Publications, Inc., New York NY, ISBN 0-486-24004-5.
• Russell, Bertrand (1903), The Principles of Mathematics: Vol. 1, Cambridge at the University Press, Cambridge, UK. Republished as a googlebook.
• Russell, Bertrand (1920), Introduction to Mathematical Philosophy (second edition), Dover Publications, Inc., New York NY, ISBN 0-486-27724-0 (pbk).
• Suppes, Patrick (1960), Axiomatic Set Theory, Dover Publications, Inc., New York NY, ISBN 0-486-61630-4. cf his Chapter 1 Introduction.
• Tarski, Alfred (1946), Introduction to Logic and to the Methodology of Deductive Sciences, republished 1995 by Dover Publications, Inc., New York, NY, ISBN 0-486-28462-X
• Venn, John (1881), Symbolic Logic, Macmillan and Co., London UK. Republished as a googlebook.

• van Heijenoort, Jean (1967, 3rd printing 1976), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, ISBN 0-674-32449-8 (pbk)
• Gottlob Frege (1879) Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought, with commentary by van Heijenoort, pages 1–82
• Giuseppe Peano (1889) The principles of arithmetic, presented by a new method, with commentary by van Heijenoort, pages 83–97
• Bertrand Russell (1902) Letter to Frege, with commentary by van Heijenoort, pages 124–125. Wherein Russell announces his discovery of a "paradox" in Frege's work.
• Gottlob Frege (1902) Letter to Russell, with commentary by van Heijenoort, pages 126–128.
• David Hilbert (1904) On the foundations of logic and arithmetic, with commentary by van Heijenoort, pages 129–138.
• Jules Richard (1905) The principles of mathematics and the problem of sets, with commentary by van Heijenoort, pages 142–144. The Richard paradox.
• Bertrand Russell (1908a) Mathematical logic as based on the theory of types, with commentary by Willard Quine, pages 150–182.
• Ernst Zermelo (1908) A new proof of the possibility of a well-ordering, with commentary by van Heijenoort, pages 183–198. Wherein Zermelo rails against Poincaré's (and therefore Russell's) notion of impredicative definition.
• Ernst Zermelo (1908a) Investigations in the foundations of set theory I, with commentary by van Heijenoort, pages 199–215. Wherein Zermelo attempts to solve Russell's paradox by structuring his axioms to restrict the universal domain B (from which objects and sets are pulled by definite properties) so that it itself cannot be a set, i.e., his axioms disallow a universal set.
• Norbert Wiener (1914) A simplification of the logic of relations, with commentary by van Heijenoort, pages 224–227
• Thoralf Skolem (1922) Some remarks on axiomatized set theory, with commentary by van Heijenoort, pages 290–301. Wherein Skolem defines Zermelo's vague "definite property".
• Moses Schönfinkel (1924) On the building blocks of mathematical logic, with commentary by Willard Quine, pages 355–366. The start of combinatory logic.
• John von Neumann (1925) An axiomatization of set theory, with commentary by van Heijenoort, pages 393–413. Wherein von Neumann creates "classes" as distinct from "sets" (the "classes" are Zermelo's "definite properties"), and now there is a universal set, etc.
• David Hilbert (1927) The foundations of mathematics, with commentary by van Heijenoort, pages 464–479.
• Whitehead, Alfred North and Russell, Bertrand (1913, 1962 edition), Principia Mathematica to *56, Cambridge at the University Press, London UK, no ISBN or US card catalog number.


External links
• The Wolfram Functions Site, which gives formulae and visualizations of many mathematical functions.
• Shodor: Function Flyer, an interactive Java applet for graphing and exploring functions.
• xFunctions, a Java applet for exploring functions graphically.
• Draw Function Graphs, an online drawing program for mathematical functions.
• Functions, from cut-the-knot.
• Function at ProvenMath.
• A comprehensive web-based function graphing and evaluation tool.

• FunctionGame, an educational interactive function guessing game.


Set

A set is a collection of well-defined and distinct objects, considered as an object in its own right. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics, and can be used as a foundation from which nearly all of mathematics can be derived. In mathematics education, elementary topics such as Venn diagrams are taught at a young age, while more advanced concepts are taught as part of a university degree.


The intersection of two sets is made up of the objects contained in both sets, shown in a Venn diagram.

Georg Cantor, the founder of set theory, gave the following definition of a set at the beginning of his Beiträge zur Begründung der transfiniten Mengenlehre:[1] A set is a gathering together into a whole of definite, distinct objects of our perception [Anschauung] and of our thought – which are called elements of the set. The elements or members of a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements.[2] As discussed below, the definition given above turned out to be inadequate for formal mathematics; instead, the notion of a "set" is taken as an undefined primitive in axiomatic set theory, and its properties are defined by the Zermelo–Fraenkel axioms. The most basic properties are that a set "has" elements, and that two sets are equal (one and the same) if and only if they have the same elements.

Describing sets
There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description:

A is the set whose members are the first four positive integers.
B is the set of colors of the French flag.

The second way is by extension – that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets:

C = {4, 2, 1, 3}
D = {blue, white, red}.

Every element of a set must be unique; no two members may be identical. (A multiset is a generalized concept of a set that relaxes this criterion.) All set operations preserve this property. The order in which the elements of a set or multiset are listed is irrelevant (unlike for a sequence or tuple). Combining these two ideas into an example,

{6, 11} = {11, 6} = {11, 11, 6, 11}

because the extensional specification means merely that each of the elements listed is a member of the set. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as:

{1, 2, 3, ..., 1000},

where the ellipsis ("...") indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members. Thus the set of positive even numbers can be written as {2, 4, 6, 8, ... }.

The notation with braces may also be used in an intensional specification of a set. In this usage, the braces have the meaning "the set of all ...". So, E = {playing card suits} is the set whose four members are ♠, ♦, ♥, and ♣. A more general form of this is set-builder notation, through which, for instance, the set F of the twenty smallest integers that are four less than perfect squares can be denoted:

F = {n² − 4 : n is an integer; and 0 ≤ n ≤ 19}.

In this notation, the colon (":") means "such that", and the description can be interpreted as "F is the set of all numbers of the form n² − 4, such that n is a whole number in the range from 0 to 19 inclusive." Sometimes the vertical bar ("|") is used instead of the colon. One often has the choice of specifying a set intensionally or extensionally. In the examples above, for instance, A = C and B = D.
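Set-builder notation translates directly into a comprehension. A quick Python check of the set F described above (a sketch; the variable names are illustrative):

```python
# F = {n² − 4 : n is an integer and 0 ≤ n ≤ 19}
F = {n * n - 4 for n in range(20)}

assert len(F) == 20                    # twenty distinct members
assert 285 in F                        # n = 17 gives 289 − 4 = 285
assert 9 not in F                      # 9 + 4 = 13 is not a perfect square
assert min(F) == -4 and max(F) == 357  # n = 0 and n = 19
```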


Membership

The key relation in set theory is membership – when one object (which may itself be a set) is an element of a set. If a is a member of B, this is denoted a ∈ B, while if c is not a member of B then c ∉ B. For example, with respect to the sets A = {1,2,3,4}, B = {blue, white, red}, and F = {n² − 4 : n is an integer; and 0 ≤ n ≤ 19} defined above, 4 ∈ A and 285 ∈ F; but 9 ∉ F and green ∉ B.

Subsets

If every member of set A is also a member of set B, then A is said to be a subset of B, written A ⊆ B (also pronounced A is contained in B). Equivalently, we can write B ⊇ A, read as B is a superset of A, B includes A, or B contains A. The relationship between sets established by ⊆ is called inclusion or containment. If A is a subset of, but not equal to, B, then A is called a proper subset of B, written A ⊊ B (A is a proper subset of B) or B ⊋ A (B is a proper superset of A). Note that the expressions A ⊂ B and B ⊃ A are used differently by different authors; some authors use them to mean the same as A ⊆ B (respectively B ⊇ A), whereas others use them to mean the same as A ⊊ B (respectively B ⊋ A).

A is a subset of B

Examples:
• The set of all men is a proper subset of the set of all people.
• {1, 3} ⊊ {1, 2, 3, 4}.

• {1, 2, 3, 4} ⊆ {1, 2, 3, 4}.

The empty set is a subset of every set, and every set is a subset of itself:
• ∅ ⊆ A.
• A ⊆ A.

An obvious but useful identity, which can often be used to show that two seemingly different sets are equal:
• A = B if and only if A ⊆ B and B ⊆ A.
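Python's built-in sets expose these inclusion relations directly through comparison operators. A short illustrative sketch:

```python
A = {1, 3}
B = {1, 2, 3, 4}

assert A <= B                  # A ⊆ B (subset)
assert A < B                   # A ⊊ B (proper subset, since A ≠ B)
assert B >= A                  # B ⊇ A (superset)
assert B <= B and not B < B    # every set is a subset, but not a proper subset, of itself
assert set() <= A              # ∅ ⊆ A holds for every A

# A = B if and only if A ⊆ B and B ⊆ A:
C = {3, 1}
assert A <= C and C <= A and A == C
```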


Power sets
The power set of a set S is the set of all subsets of S. This includes the subsets formed from all the members of S and the empty set. If a finite set S has cardinality n, then the power set of S has cardinality 2ⁿ. The power set of S can be written as P(S).

If S is an infinite (either countable or uncountable) set, then the power set of S is always uncountable. Moreover, if S is a set, then there is never a bijection from S onto P(S). In other words, the power set of S is always strictly "bigger" than S.

As an example, the power set of {1, 2, 3} is {{1, 2, 3}, {1, 2}, {1, 3}, {2, 3}, {1}, {2}, {3}, ∅}. The cardinality of the original set is 3, and the cardinality of the power set is 2³ = 8. This relationship is one of the reasons for the terminology power set.
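For a finite set the power set is easy to enumerate. A sketch using the standard library's itertools (the helper name power_set is illustrative):

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as frozensets (includes the empty set and s itself)."""
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(
                combinations(items, r) for r in range(len(items) + 1))}

P = power_set({1, 2, 3})
assert len(P) == 2 ** 3                            # |P(S)| = 2^n
assert frozenset() in P                            # ∅ is a subset
assert frozenset({1, 2, 3}) in P                   # S is a subset of itself
assert frozenset({1, 3}) in P
```

Frozensets are used because the elements of a power set are themselves sets, and only immutable sets can be members of a Python set.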

Cardinality

The cardinality | S | of a set S is "the number of members of S." For example, if B = {blue, white, red}, | B | = 3. There is a unique set with no members and zero cardinality, which is called the empty set (or the null set) and is denoted by the symbol ∅ (other notations are used; see empty set). For example, the set of all three-sided squares has zero members and thus is the empty set. Though it may seem trivial, the empty set, like the number zero, is important in mathematics; indeed, the existence of this set is one of the fundamental concepts of axiomatic set theory.

Some sets have infinite cardinality. The set N of natural numbers, for instance, is infinite. Some infinite cardinalities are greater than others. For instance, the set of real numbers has greater cardinality than the set of natural numbers. However, it can be shown that the cardinality of (which is to say, the number of points on) a straight line is the same as the cardinality of any segment of that line, of the entire plane, and indeed of any finite-dimensional Euclidean space.

Special sets
There are some sets which hold great mathematical importance and are referred to with such regularity that they have acquired special names and notational conventions to identify them. One of these is the empty set, denoted {} or ∅. Another is the unit set {x}, which contains exactly one element, namely x.[2] Many of these sets are represented using blackboard bold or bold typeface. Special sets of numbers include:

• P or ℙ, denoting the set of all primes: P = {2, 3, 5, 7, 11, 13, 17, ...}.
• N or ℕ, denoting the set of all natural numbers: N = {1, 2, 3, ...}.
• Z or ℤ, denoting the set of all integers (whether positive, negative or zero): Z = {..., −2, −1, 0, 1, 2, ...}.
• Q or ℚ, denoting the set of all rational numbers (that is, the set of all proper and improper fractions): Q = {a/b : a, b ∈ Z, b ≠ 0}. For example, 1/4 ∈ Q and 11/6 ∈ Q. All integers are in this set since every integer a can be expressed as the fraction a/1 (Z ⊊ Q).

• R or ℝ, denoting the set of all real numbers. This set includes all rational numbers, together with all irrational numbers (that is, numbers which cannot be rewritten as fractions, such as π, e, and √2, as well as numbers that cannot be defined).
• C or ℂ, denoting the set of all complex numbers: C = {a + bi : a, b ∈ R}. For example, 1 + 2i ∈ C.
• H or ℍ, denoting the set of all quaternions: H = {a + bi + cj + dk : a, b, c, d ∈ R}. For example, 1 + i + 2j − k ∈ H.

Each of the above sets of numbers has an infinite number of elements, and each can be considered to be a proper subset of the sets listed below it. The primes are used less frequently than the others outside of number theory and related fields.


Basic operations
There are several fundamental operations for constructing new sets from given sets.

Unions

Two sets can be "added" together. The union of A and B, denoted by A ∪ B, is the set of all things which are members of either A or B.

Examples:
• {1, 2} ∪ {red, white} = {1, 2, red, white}.
• {1, 2, green} ∪ {red, white, green} = {1, 2, red, white, green}.
• {1, 2} ∪ {1, 2} = {1, 2}.

Some basic properties of unions:
• A ∪ B = B ∪ A.
• A ∪ (B ∪ C) = (A ∪ B) ∪ C.
• A ⊆ (A ∪ B).
• A ⊆ B if and only if A ∪ B = B.
• A ∪ A = A.
• A ∪ ∅ = A.
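The listed identities can be spot-checked on small concrete sets. An illustrative Python sketch (the sets A, B, C are arbitrary choices):

```python
A, B, C = {1, 2}, {2, 3}, {3, 4}

assert A | B == B | A                 # commutativity: A ∪ B = B ∪ A
assert A | (B | C) == (A | B) | C     # associativity
assert A <= (A | B)                   # A ⊆ A ∪ B
assert (A <= B) == (A | B == B)       # A ⊆ B iff A ∪ B = B
assert A | A == A                     # idempotence
assert A | set() == A                 # ∅ is the identity for union
```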
The union of A and B, denoted A ∪ B

A new set can also be constructed by determining which members two sets have "in common". The intersection of A and B, denoted by A ∩ B, is the set of all things which are members of both A and B. If A ∩ B = ∅, then A and B are said to be disjoint. Examples:
• {1, 2} ∩ {red, white} = ∅.
• {1, 2, green} ∩ {red, white, green} = {green}.
• {1, 2} ∩ {1, 2} = {1, 2}.
Some basic properties of intersections:
• A ∩ B = B ∩ A.
• A ∩ (B ∩ C) = (A ∩ B) ∩ C.
• A ∩ B ⊆ A.
• A ∩ A = A.
• A ∩ ∅ = ∅.
• A ⊆ B if and only if A ∩ B = A.
The intersection of A and B, denoted A ∩ B.



Two sets can also be "subtracted". The relative complement of B in A (also called the set-theoretic difference of A and B), denoted by A \ B (or A − B), is the set of all elements which are members of A but not members of B. Note that it is valid to "subtract" members of a set that are not in the set, such as removing the element green from the set {1, 2, 3}; doing so has no effect.
In certain settings all sets under discussion are considered to be subsets of a given universal set U. In such cases, U \ A is called the absolute complement or simply complement of A, and is denoted by A′. Examples:
• {1, 2} \ {red, white} = {1, 2}.
• {1, 2, green} \ {red, white, green} = {1, 2}.
• {1, 2} \ {1, 2} = ∅.
• {1, 2, 3, 4} \ {1, 3} = {2, 4}.
• If U is the set of integers, E is the set of even integers, and O is the set of odd integers, then E′ = O.
Some basic properties of complements:
• A \ B ≠ B \ A (in general).
• A ∪ A′ = U.
• A ∩ A′ = ∅.
• (A′)′ = A.
• A \ A = ∅.
• U′ = ∅ and ∅′ = U.
• A \ B = A ∩ B′.
An extension of the complement is the symmetric difference, defined for sets A, B as

A △ B = (A \ B) ∪ (B \ A).

For example, the symmetric difference of {7, 8, 9, 10} and {9, 10, 11, 12} is the set {7, 8, 11, 12}.
The complement of A in U
The relative complement of B in A
The symmetric difference of A and B
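The set operations above map directly onto Python's built-in set type; a minimal sketch, using the example sets from the text:

```python
# Mirror of the worked examples: union, intersection, difference,
# and symmetric difference on small finite sets.
A = {1, 2, "green"}
B = {"red", "white", "green"}

assert A | B == {1, 2, "red", "white", "green"}   # union A ∪ B
assert A & B == {"green"}                          # intersection A ∩ B
assert A - B == {1, 2}                             # relative complement A \ B
assert A ^ B == {1, 2, "red", "white"}             # symmetric difference

# Basic identities stated in the text:
assert A | A == A and A & A == A
assert A | set() == A and A & set() == set()

# The symmetric-difference example from the text:
assert {7, 8, 9, 10} ^ {9, 10, 11, 12} == {7, 8, 11, 12}
```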



Cartesian product
A new set can be constructed by associating every element of one set with every element of another set. The Cartesian product of two sets A and B, denoted by A × B, is the set of all ordered pairs (a, b) such that a is a member of A and b is a member of B. Examples:
• {1, 2} × {red, white} = {(1, red), (1, white), (2, red), (2, white)}.
• {1, 2, green} × {red, white, green} = {(1, red), (1, white), (1, green), (2, red), (2, white), (2, green), (green, red), (green, white), (green, green)}.
• {1, 2} × {1, 2} = {(1, 1), (1, 2), (2, 1), (2, 2)}.
Some basic properties of Cartesian products:
• A × ∅ = ∅.
• A × (B ∪ C) = (A × B) ∪ (A × C).
• (A ∪ B) × C = (A × C) ∪ (B × C).
Let A and B be finite sets. Then
• | A × B | = | B × A | = | A | × | B |.
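The Cartesian product and its cardinality property can be checked with `itertools.product`; a small sketch using the first example above:

```python
from itertools import product

A = {1, 2}
B = {"red", "white"}

# A × B as a set of ordered pairs:
AxB = set(product(A, B))
assert AxB == {(1, "red"), (1, "white"), (2, "red"), (2, "white")}

# |A × B| = |B × A| = |A| × |B| for finite sets:
BxA = set(product(B, A))
assert len(AxB) == len(BxA) == len(A) * len(B)

# A × ∅ = ∅:
assert set(product(A, set())) == set()
```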

Set theory is seen as the foundation from which virtually all of mathematics can be derived. For example, structures in abstract algebra, such as groups, fields and rings, are sets closed under one or more operations.
One of the main applications of naive set theory is constructing relations. A relation from a domain A to a codomain B is a subset of the Cartesian product A × B. For example, the set F of all ordered pairs (x, x²), where x is real, is a familiar relation. Its domain is R and its codomain is also R, because the set of all squares is a subset of the set of all reals. In functional notation, this relation becomes f(x) = x². The two descriptions are equivalent because, for any value y for which the function is defined, the corresponding ordered pair (y, y²) is a member of the set F.
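The relation F of pairs (x, x²) can be sketched on a finite sample of reals; the sample values below are our own illustration:

```python
# A relation as a set of ordered pairs, versus its functional form.
def f(x):
    return x ** 2

sample = [-2.0, -0.5, 0.0, 1.0, 3.0]          # an arbitrary finite sample
F = {(x, x ** 2) for x in sample}             # the relation restricted to it

# For every y in the sample, the pair (y, f(y)) is a member of F:
assert all((y, f(y)) in F for y in sample)
```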

Axiomatic set theory
Although initially naive set theory, which defines a set merely as any well-defined collection, was well accepted, it soon ran into several obstacles. It was found that this definition spawned several paradoxes, most notably:
• Russell's paradox: it shows that the "set of all sets which do not contain themselves," i.e. the "set" { x : x is a set and x ∉ x }, does not exist.
• Cantor's paradox: it shows that "the set of all sets" cannot exist.
The reason is that the phrase well-defined is not very well defined. It was important to free set theory of these paradoxes because nearly all of mathematics was being redefined in terms of set theory. In an attempt to avoid these paradoxes, set theory was axiomatized based on first-order logic, and thus axiomatic set theory was born. For most purposes, however, naive set theory is still useful.



Principle of inclusion and exclusion
This principle gives the cardinality of the union of sets:

|A1 ∪ A2 ∪ ... ∪ An| = (|A1| + |A2| + ... + |An|) − (|A1 ∩ A2| + |A1 ∩ A3| + ... + |An−1 ∩ An|) + ... + (−1)^(n−1) |A1 ∩ A2 ∩ ... ∩ An|
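For three sets the principle can be verified directly; a small Python check (the example sets are our own):

```python
# Inclusion–exclusion for n = 3:
# |A1 ∪ A2 ∪ A3| = Σ|Ai| − Σ|Ai ∩ Aj| + |A1 ∩ A2 ∩ A3|
A1 = {1, 2, 3, 4}
A2 = {3, 4, 5}
A3 = {4, 5, 6, 7}

lhs = len(A1 | A2 | A3)
rhs = (len(A1) + len(A2) + len(A3)
       - len(A1 & A2) - len(A1 & A3) - len(A2 & A3)
       + len(A1 & A2 & A3))
assert lhs == rhs == 7
```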

[1] "Eine Menge, ist die Zusammenfassung bestimmter, wohlunterschiedener Objekte unserer Anschauung und unseres Denkens – welche Elemente der Menge genannt werden – zu einem Ganzen." ("A set is a gathering together into a whole of definite, distinct objects of our perception or of our thought, which are called elements of the set.") (http://www.brinkmann-du.de/mathe/fos/fos01_03.htm)
[2] Stoll, Robert. Sets, Logic and Axiomatic Theories. W. H. Freeman and Company. p. 5.

• Dauben, Joseph W., Georg Cantor: His Mathematics and Philosophy of the Infinite, Boston: Harvard University Press (1979) ISBN 978-0-691-02447-9. • Halmos, Paul R., Naive Set Theory, Princeton, N.J.: Van Nostrand (1960) ISBN 0-387-90092-6. • Stoll, Robert R., Set Theory and Logic, Mineola, N.Y.: Dover Publications (1979) ISBN 0-486-63829-4. • Velleman, Daniel, How To Prove It: A Structured Approach, Cambridge University Press (2006) ISBN 978-0521675994

External links
• C2 Wiki – Examples of set operations using English operators.

Binary operation
In mathematics, a binary operation is a calculation involving two operands (an operation whose arity is two). Examples include the familiar arithmetic operations of addition, subtraction, multiplication and division. In the most general sense, given sets A, B and C, a map from A × B to C is a binary operation. Usually, however, binary operations defined on a single set are of interest. More precisely, a binary operation on a set S is a map that sends elements of the Cartesian product S × S back to S: f : S × S → S. Because every result lies in S again, such a binary operation is called closed on S.


If f is not a function but only a partial function, it is called a partial operation. For instance, division of real numbers is a partial operation, because one can't divide by zero: a/0 is not defined for any real a. Note however that both in algebra and model theory the binary operations considered are defined on all of S × S. Sometimes, especially in computer science, the term is used for any binary function; the fact that such a function takes values in the same set that provides its arguments is the property of closure.
Binary operations are the keystone of algebraic structures studied in abstract algebra: they form part of groups, monoids, semigroups, rings, and more. Most generally, a magma is a set together with any binary operation defined on it.
Many binary operations of interest in both algebra and formal logic are commutative or associative. Many also have identity elements and inverse elements. Typical examples of binary operations are the addition (+) and multiplication (×) of numbers and matrices as well as composition of functions on a single set. An example of an operation that is not commutative is subtraction (−). Examples of partial operations that are not commutative include division (/), exponentiation (^), and tetration (↑↑).
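A closed binary operation can be checked by testing that every result stays in the set; a minimal sketch (the helper name `is_closed` is our own):

```python
# A binary operation op on S is closed when op(a, b) ∈ S for all a, b ∈ S.
def is_closed(S, op):
    return all(op(a, b) in S for a in S for b in S)

Z5 = set(range(5))

def add_mod5(a, b):
    return (a + b) % 5

assert is_closed(Z5, add_mod5)                 # addition mod 5 is closed

# Ordinary subtraction is not closed on {0, ..., 4} (results go negative):
assert not is_closed(Z5, lambda a, b: a - b)
```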

Binary operations are often written using infix notation such as a ∗ b, a + b, a · b, or ab (by juxtaposition with no symbol) rather than by functional notation of the form f(a, b). Powers are usually also written without an operator, but with the second argument as superscript: a^b. Binary operations sometimes use prefix or postfix notation; this dispenses with parentheses. Prefix notation is also called Polish notation; postfix notation, also called reverse Polish notation, is probably more often encountered.

Pair and tuple
A binary operation ∗ depends on the ordered pair (a, b), and so (a ∗ b) ∗ c (where the parentheses here mean first operate on the ordered pair (a, b) and then operate on the result of that using the ordered pair (a ∗ b, c)) depends in general on the ordered pair ((a, b), c). Thus, for the general, non-associative case, binary operations can be represented with binary trees. However:
• If the operation is associative, (a ∗ b) ∗ c = a ∗ (b ∗ c), then the value depends only on the tuple (a, b, c).
• If the operation is commutative, a ∗ b = b ∗ a, then the value depends only on {{a, b}, c}, where braces indicate multisets.
• If the operation is both associative and commutative, then the value depends only on the multiset {a, b, c}.
• If the operation is associative, commutative, and idempotent, a ∗ a = a, then the value depends only on the set {a, b, c}.
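The dependence on ordering and grouping is easy to see numerically; subtraction serves as an operation that is neither associative nor commutative:

```python
# (a - b) - c genuinely depends on the ordered pair ((a, b), c):
a, b, c = 10, 4, 2
assert (a - b) - c != a - (b - c)      # 4 on the left, 8 on the right
assert a - b != b - a                  # not commutative either

# Addition is associative and commutative, so the value depends
# only on the multiset {a, b, c}:
assert (a + b) + c == a + (b + c) == (c + b) + a
```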

External binary operations
An external binary operation is a binary function from K × S to S. This differs from a binary operation in the strict sense in that K need not be S; its elements come from outside.
An example of an external binary operation is scalar multiplication in linear algebra. Here K is a field and S is a vector space over that field. An external binary operation may alternatively be viewed as an action: K is acting on S.
Note that the dot product of two vectors is not a binary operation, external or otherwise, as it maps from S × S to K.



• Weisstein, Eric W., "Binary Operation [1]" from MathWorld.

[1] http://mathworld.wolfram.com/BinaryOperation.html

Inverse element
In abstract algebra, the idea of an inverse element generalises the concept of a negation, in relation to addition, and a reciprocal, in relation to multiplication. The intuition is of an element that can 'undo' the effect of combination with another given element. While the precise definition of an inverse element varies depending on the algebraic structure involved, these definitions coincide in a group.

Formal definitions
In a unital magma

Let S be a set with a binary operation ∗ (i.e., a magma). If e is an identity element of (S, ∗) (i.e., S is a unital magma) and a ∗ b = e, then a is called a left inverse of b and b is called a right inverse of a. If an element x is both a left inverse and a right inverse of y, then x is called a two-sided inverse, or simply an inverse, of y. An element with a two-sided inverse in S is called invertible in S. An element with an inverse element only on one side is left invertible, resp. right invertible. If all elements in S are invertible, S is called a loop.
Just like S can have several left identities or several right identities, it is possible for an element to have several left inverses or several right inverses (but note that their definition above uses a two-sided identity e). It can even have several left inverses and several right inverses.
If the operation ∗ is associative then if an element has both a left inverse and a right inverse, they are equal. In other words, in a monoid every element has at most one inverse (as defined in this section). In a monoid, the set of (left and right) invertible elements is a group, called the group of units of S, and denoted by U(S) or H1. A left-invertible element is left-cancellative, and analogously for right and two-sided.
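As a concrete illustration of a group of units, consider the multiplicative monoid of integers modulo 8 (the helper name `units` is our own):

```python
# The invertible elements of (Z/8, ×) form its group of units.
def units(n):
    return {a for a in range(n) if any(a * b % n == 1 for b in range(n))}

assert units(8) == {1, 3, 5, 7}    # exactly the residues coprime to 8

# Each unit has a unique inverse; e.g. 3 is its own inverse mod 8,
# since 3 × 3 = 9 ≡ 1 (mod 8):
assert 3 * 3 % 8 == 1
```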

In a semigroup
The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. It's also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity, i.e. in a semigroup. In a semigroup an element x is called (von Neumann) regular if there exists some element z in S such that xzx = x; z is sometimes called a pseudoinverse. An element y is called (simply) an inverse of x if xyx = x and y = yxy. Every regular element has at least one inverse: if x = xzx then it is easy to verify that y = zxz is an inverse of x as defined in this section. Another easy to prove fact: if y is an inverse of x then e = xy and f = yx are idempotents, that is ee = e and ff = f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, and ex = xf = x, ye = fy = y, and e acts as a left identity on x, while f acts as a right identity, and the left/right roles are reversed for y. This simple observation can be generalized using Green's relations: every idempotent e in an arbitrary semigroup is a left identity for Re and right identity for Le.[1] An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity, and respectively, a local right identity. In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in H1 have an inverse from the unital magma perspective, whereas for any idempotent e, the elements of He have an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called an inverse semigroup.
Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have an absorbing element 0 because 000 = 0, whereas a group may not.
Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. This is generally justified because in most applications (e.g. all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity.


U-semigroups

A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° = a for all a in S; this endows S with a type <2,1> algebra. A semigroup endowed with such an operation is called a U-semigroup. Although it may seem that a° will be the inverse of a, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes of U-semigroups have been studied:
• I-semigroups, in which the interaction axiom is aa°a = a.
• *-semigroups, in which the interaction axiom is (ab)° = b°a°. Such an operation is called an involution, and typically denoted by a*.
Clearly a group is both an I-semigroup and a *-semigroup. Inverse semigroups are exactly those semigroups that are both I-semigroups and *-semigroups. A class of semigroups important in semigroup theory are completely regular semigroups; these are I-semigroups in which one additionally has aa° = a°a; in other words every element has a commuting pseudoinverse a°. There are few concrete examples of such semigroups however; most are completely simple semigroups. In contrast, a class of *-semigroups, the *-regular semigroups, yield one of the best known examples of a (unique) pseudoinverse, the Moore–Penrose inverse. In this case however the involution a* is not the pseudoinverse. Rather, the pseudoinverse of x is the unique element y such that xyx = x, yxy = y, (xy)* = xy, (yx)* = yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called the generalized inverse or Moore–Penrose inverse. In a *-regular semigroup S one can identify a special subset of idempotents F(S) called a P-system; every element a of the semigroup has exactly one inverse a* such that aa* and a*a are in F(S). The P-systems of Yamada are based upon the notion of regular *-semigroup as defined by Nordahl and Scheiblich.

Examples

All examples in this section involve associative operators, thus we shall use the terms left/right inverse for the unital magma-based definition, and quasi-inverse for its more general version.

Real numbers
Every real number x has an additive inverse (i.e., an inverse with respect to addition) given by −x. Every nonzero real number x has a multiplicative inverse (i.e., an inverse with respect to multiplication) given by 1/x (or x⁻¹). By contrast, zero has no multiplicative inverse, but it has a unique quasi-inverse, 0 itself.

Functions and partial functions
A function g is the left (resp. right) inverse of a function f (for function composition), if and only if g ∘ f (resp. f ∘ g) is the identity function on the domain (resp. codomain) of f. The inverse of a function f is often written f⁻¹, but this notation is sometimes ambiguous. Only bijections have two-sided inverses, but any function has a quasi-inverse; i.e., the full transformation monoid is regular. The monoid of partial functions is also regular, whereas the monoid of injective partial transformations is the prototypical inverse semigroup.
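For finite sets, a bijection and its two-sided inverse can be sketched with dictionaries (an illustration, not the general construction):

```python
# A finite bijection f and its inverse, built by swapping keys and values.
f = {1: "a", 2: "b", 3: "c"}
f_inv = {v: k for k, v in f.items()}

# f_inv ∘ f is the identity on the domain, and f ∘ f_inv on the codomain:
assert all(f_inv[f[x]] == x for x in f)
assert all(f[f_inv[y]] == y for y in f_inv)
```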



Galois connections
The lower and upper adjoints in a (monotone) Galois connection, L and G, are quasi-inverses of each other; i.e., LGL = L and GLG = G, and one uniquely determines the other. They are not left or right inverses of each other, however.

Matrices

A square matrix M with entries in a field K is invertible (in the set of all square matrices of the same size, under matrix multiplication) if and only if its determinant is different from zero. If the determinant of M is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. See invertible matrix for more.
More generally, a square matrix over a commutative ring R is invertible if and only if its determinant is invertible in R.
Non-square m×n matrices of full rank have one-sided inverses:[2]
• For m > n (full column rank) we have a left inverse: A_left⁻¹ = (AᵀA)⁻¹Aᵀ.
• For m < n (full row rank) we have a right inverse: A_right⁻¹ = Aᵀ(AAᵀ)⁻¹.
The right inverse can be used to determine the least norm solution of Ax = b. No rank-deficient matrix has any (even one-sided) inverse. However, the Moore–Penrose pseudoinverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists.
As an example of matrix inverses, consider the 2×3 matrix

A = [1 2 3; 4 5 6]

So, as m < n, we have a right inverse, A_right⁻¹ = Aᵀ(AAᵀ)⁻¹. The left inverse doesn't exist, because

AᵀA = [17 22 27; 22 29 36; 27 36 45]

which is a singular matrix, and cannot be inverted.
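The right-inverse formula can be checked numerically; a pure-Python sketch for an assumed, illustrative 2×3 matrix of full row rank:

```python
# Check of the right-inverse formula A_right = Aᵀ(AAᵀ)⁻¹ for m < n,
# using plain nested lists (no external libraries assumed).
A = [[1, 2, 3], [4, 5, 6]]           # m = 2 < n = 3, full row rank

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

AAt = matmul(A, transpose(A))        # 2×2 and invertible
det = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]
inv_AAt = [[AAt[1][1] / det, -AAt[0][1] / det],
           [-AAt[1][0] / det, AAt[0][0] / det]]

A_right = matmul(transpose(A), inv_AAt)   # 3×2 right inverse

# A · A_right should be the 2×2 identity (up to rounding):
I2 = matmul(A, A_right)
assert all(abs(I2[i][j] - (1 if i == j else 0)) < 1e-9
           for i in range(2) for j in range(2))
```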



[1] Howie, prop. 2.3.3, p. 51
[2] MIT Professor Gilbert Strang, Linear Algebra Lecture #33 – Left and Right Inverses; Pseudoinverse. (http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture33.htm)

• M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3110152487, p. 15 (def in unital magma) and p. 33 (def in semigroup) • Howie, John M. (1995). Fundamentals of Semigroup Theory. Clarendon Press. ISBN 0-19-851194-9. contains all of the semigroup material herein except *-regular semigroups. • Drazin, M.P., Regular semigroups with involution, Proc. Symp. on Regular Semigroups (DeKalb, 1979), 29–46 • Miyuki Yamada, P-systems in regular semigroups, Semigroup Forum, 24(1), December 1982, pp. 173–187 • Nordahl, T.E., and H.E. Scheiblich, Regular * Semigroups, Semigroup Forum, 16(1978), 369–377.

Elementary algebra
Elementary algebra is a fundamental and relatively basic form of algebra taught to students who are presumed to have little or no formal knowledge of mathematics beyond arithmetic. It is typically taught in secondary school under the term algebra. The major difference between algebra and arithmetic is the inclusion of variables. While in arithmetic only numbers and their arithmetical operations (such as +, −, ×, ÷) occur, in algebra, one also uses variables such as x and y, or a and b to replace numbers.

Features of algebra
The purpose of using variables, symbols that denote numbers, is to allow the making of generalizations in mathematics. This is useful because: • It allows arithmetical equations (and inequalities) to be stated as laws (such as a + b = b + a for all a and b), and thus is the first step to the systematic study of the properties of the real number system. • It allows reference to numbers which are not known. In the context of a problem, a variable may represent a certain value which is not yet known, but which may be found through the formulation and manipulation of equations. • It allows the exploration of mathematical relationships between quantities (such as "if you sell x tickets, then your profit will be 3x − 10 dollars").



In elementary algebra, an expression may contain numbers, variables and arithmetical operations. These are conventionally written with 'higher-power' terms on the left (see polynomial); a few examples are:

x + 3
y² − 4
3a² + 2ab − c

In more advanced algebra, an expression may also include elementary functions.

Properties of operations
• Addition is written a + b. It is commutative (a + b = b + a) and associative ((a + b) + c = a + (b + c)); its identity element is 0, which preserves numbers (a + 0 = a); its inverse operation is subtraction (−).
• Multiplication is written a × b or a·b. It is commutative (a × b = b × a) and associative ((a × b) × c = a × (b × c)); its identity element is 1, which preserves numbers (a × 1 = a); its inverse operation is division (/).
• Exponentiation is written a^b. It is not commutative (a^b ≠ b^a in general) and not associative; 1 preserves numbers as an exponent (a¹ = a); its inverse operation is the logarithm (log).

• The operation of addition...
  • means repeated addition of ones: n = 1 + 1 + ... + 1 (n number of times);
  • has an inverse operation called subtraction: (a + b) − b = a, which is the same as adding a negative number, a − b = a + (−b).
• The operation of multiplication...
  • means repeated addition: a × n = a + a + ... + a (n number of times);
  • has an inverse operation called division which is defined for non-zero numbers: (ab)/b = a, which is the same as multiplying by a reciprocal, a/b = a(1/b);
  • distributes over addition: (a + b)c = ac + bc;
  • is abbreviated by juxtaposition: a × b ≡ ab.
• The operation of exponentiation...
  • means repeated multiplication: aⁿ = a × a × ... × a (n number of times);
  • has an inverse operation, called the logarithm: a^(log_a b) = b = log_a(a^b);
  • distributes over multiplication: (ab)^c = a^c b^c;
  • can be written in terms of n-th roots: a^(m/n) ≡ (ⁿ√a)^m, and thus even roots of negative numbers do not exist in the real number system (see: complex number system);
  • has the property a^b a^c = a^(b + c);
  • has the property (a^b)^c = a^(bc);
  • in general a^b ≠ b^a and (a^b)^c ≠ a^(b^c).

Order of operations

In mathematics it is important that the value of an expression is always computed the same way. Therefore, it is necessary to compute the parts of an expression in a particular order, known as the order of operations. The standard order of operations is expressed in the following chart:
• parentheses and other grouping symbols, including brackets, absolute value symbols, and the fraction bar
• exponents and roots
• multiplication and division
• addition and subtraction
A common mnemonic device for remembering this order is PEMDAS. Generally in elementary algebra, the use of brackets (often called parentheses) and their simple applications will be taught at most schools in the world.
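Python's arithmetic follows the same precedence chart, which makes it easy to spot-check:

```python
# PEMDAS spot checks:
assert 2 + 3 * 4 == 14          # multiplication before addition
assert (2 + 3) * 4 == 20        # parentheses first
assert 2 * 3 ** 2 == 18         # exponents before multiplication
assert 10 - 4 - 3 == 3          # same-level operations, left to right
```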


Equations

An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, for example x² − 1 = 4. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving.

Properties of equality
• The relation of equality (=) is...
  • reflexive: b = b;
  • symmetric: if a = b then b = a;
  • transitive: if a = b and b = c then a = c.
• The relation of equality (=) has the property...
  • that if a = b and c = d then a + c = b + d and ac = bd;
  • that if a = b then a + c = b + c;
  • that if two symbols are equal, then one can be substituted for the other.

Properties of inequality
• The relation of inequality (<) has the property...
  • of transitivity: if a < b and b < c then a < c;
  • that if a < b and c < d then a + c < b + d;
  • that if a < b and c > 0 then ac < bc;
  • that if a < b and c < 0 then bc < ac.

Elementary algebra


Algebraic examples
The following sections lay out examples of some of the types of algebraic equations you might encounter.

Linear equations in one variable
The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. For example:

2x + 4 = 12

The central technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation.[1] Once the variable is isolated, the other side of the equation is the value of the variable. For example, by subtracting 4 from both sides in the equation above:

2x + 4 − 4 = 12 − 4

which can simplify to:

2x = 8

Dividing both sides by 2:

2x / 2 = 8 / 2

simplifies to the solution:

x = 4

The general case,

ax + b = c

follows the same procedure to obtain the solution:

x = (c − b) / a
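The general case can be sketched in Python (`solve_linear` is our own helper name; the check uses the illustrative equation 2x + 4 = 12):

```python
# Solve ax + b = c by mirroring the two steps above:
# subtract b from both sides, then divide both sides by a.
def solve_linear(a, b, c):
    if a == 0:
        raise ValueError("not a linear equation in x")
    return (c - b) / a

assert solve_linear(2, 4, 12) == 4.0    # 2x + 4 = 12  ->  x = 4
```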
Quadratic equations
Quadratic equations can be expressed in the form ax² + bx + c = 0, where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this a quadratic equation must contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form

x² + px + q = 0

where p = b/a and q = −c/a. Solving this, by a process known as completing the square, leads to the quadratic formula. Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring:

x² + 3x − 10 = 0

which is the same thing as

(x − 2)(x + 5) = 0

It follows from the zero-product property that either x = 2 or x = −5 are the solutions, since precisely one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example,

x² + 1 = 0

has no real number solution since no real number squared equals −1. Sometimes a quadratic equation has a root of multiplicity 2, such as:

x² + 2x + 1 = 0, i.e. (x + 1)² = 0

For this equation, −1 is a root of multiplicity 2. This means −1 appears two times.
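The quadratic formula obtained by completing the square can be sketched with `cmath`, which also covers the case with no real roots:

```python
import cmath

# Roots of ax² + bx + c = 0 via the quadratic formula.
def solve_quadratic(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)       # complex sqrt handles d² < 0
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

assert set(solve_quadratic(1, 3, -10)) == {2, -5}   # (x - 2)(x + 5) = 0
assert solve_quadratic(1, 2, 1) == (-1, -1)         # root of multiplicity 2
assert solve_quadratic(1, 0, 1) == (1j, -1j)        # no real solutions
```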

Exponential and logarithmic equations
An exponential equation is an equation of the form a^x = b for a > 0, which has solution

x = log_a(b)

when b > 0. Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if

3 · 2^x + 1 = 10

then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3, we obtain

2^x = 3

whence

x = log_2(3)


A logarithmic equation is an equation of the form log_a(x) = b for a > 0, which has solution

x = a^b

For example, if

4 log_5(x − 3) − 2 = 6

then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get

log_5(x − 3) = 2

from which we obtain

x − 3 = 5² = 25, so that x = 28.
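Both kinds of equations can be checked numerically with `math.log`; the specific equations are the illustrative ones assumed above:

```python
import math

# Exponential: 3·2**x + 1 = 10  ->  2**x = 3  ->  x = log2(3)
x = math.log(3, 2)
assert abs(3 * 2 ** x + 1 - 10) < 1e-9

# Logarithmic: 4·log5(x − 3) − 2 = 6  ->  log5(x − 3) = 2  ->  x = 28
x = 5 ** 2 + 3
assert abs(4 * math.log(x - 3, 5) - 2 - 6) < 1e-9
```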
Radical equations
A radical equation is an equation of the form X^(m/n) = a, for m, n integers, which has solution

X = a^(n/m)

if m is odd, and solution

X = ±a^(n/m)

if m is even and a ≥ 0. For example, if

(x + 5)^(2/3) = 4

then

x + 5 = ±4^(3/2) = ±8

and therefore

x = 3 or x = −13.
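A numeric sketch of the even-m case, using the assumed example (x + 5)^(2/3) = 4:

```python
# X**(m/n) = a with m even has two solutions X = ±a**(n/m).
# Here X = x + 5, m = 2, n = 3:
a, m, n = 4, 2, 3
candidates = [a ** (n / m), -(a ** (n / m))]     # ±8.0
solutions = [X - 5 for X in candidates]          # x = X − 5
assert solutions == [3.0, -13.0]

# Both satisfy the original equation, since (±8)² = 64 and 64^(1/3) = 4:
assert all(abs(((x + 5) ** 2) ** (1 / 3) - 4) < 1e-9 for x in solutions)
```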

System of linear equations
In the case of a system of linear equations, like, for instance, two equations in two variables, it is often possible to find the solutions of both variables that satisfy both equations.

Elimination Method

An example of solving a system of linear equations is by using the elimination method:

4x + 2y = 14
2x − y = 1

Multiplying the terms in the second equation by 2:

4x − 2y = 2

Adding the two equations together to get:

8x = 16

which simplifies to

x = 2

Since the fact that x = 2 is known, it is then possible to deduce that y = 3 by either of the original two equations (by using 2 instead of x). The full solution to this problem is then

(x, y) = (2, 3)
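A 2×2 system can also be solved directly with Cramer's rule; a sketch (`solve_2x2` is our own helper, checked against the assumed example system 4x + 2y = 14, 2x − y = 1):

```python
# Solve a1·x + b1·y = c1, a2·x + b2·y = c2 by Cramer's rule.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

assert solve_2x2(4, 2, 14, 2, -1, 1) == (2.0, 3.0)
```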
Note that this is not the only way to solve this specific system; y could have been solved before x.

Second method of finding a solution

Another way of solving the same system of linear equations is by substitution. An equivalent for y can be deduced by using one of the two equations. Using the second equation:

2x − y = 1

Subtracting 2x from each side of the equation:

−y = 1 − 2x

and multiplying by −1:

y = 2x − 1

Using this y value in the first equation in the original system:

4x + 2(2x − 1) = 14

which gives

8x − 2 = 14
Adding 2 on each side of the equation:

8x = 16

which simplifies to

x = 2

Using this value in one of the equations, the same solution as in the previous method is obtained.

Note that this is not the only way to solve this specific system; in this case as well, y could have been solved before x.

Other types of systems of linear equations
Unsolvable systems

In the above example, it is possible to find a solution. However, there are also systems of equations which do not have a solution. An obvious example would be:

x + y = 1
0x + 0y = 2

The second equation in the system has no possible solution. Therefore, this system can't be solved. However, not all incompatible systems are recognized at first sight. As an example, the following system is studied:

4x + 2y = 12
−2x − y = −4

When trying to solve this (for example, by using the method of substitution above), the second equation, after adding 2x on both sides and multiplying by −1, results in:

y = 4 − 2x

And using this value for y in the first equation:

4x + 2(4 − 2x) = 12
8 = 12

No variables are left, and the equality is not true. This means that the first equation can't provide a solution for the value for y obtained in the second equation.

Undetermined systems

There are also systems which have multiple or infinite solutions, in opposition to a system with a unique solution (meaning two unique values for x and y). For example:

4x + 2y = 12
−2x − y = −6

Isolating y in the second equation:

y = −2x + 6

And using this value in the first equation in the system:

4x + 2(−2x + 6) = 12
12 = 12

The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y = −2x + 6. There are infinite solutions for this system.

Over- and underdetermined systems

Systems with more variables than the number of linear equations do not have a unique solution. An example of such a system is

x + 2y + z = 4
2x − y = 3
Such a system is called underdetermined; when trying to find a solution, one or more variables can only be expressed in relation to the other variables, but cannot be determined numerically. Incidentally, a system with a greater number of equations than variables, in which necessarily some equations are sums or multiples of others, is called overdetermined.

Relation between solvability and multiplicity
Given any system of linear equations, there is a relation between multiplicity and solvability.
If one equation is a multiple of the other (or, more generally, a sum of multiples of the other equations), then the system of linear equations is undetermined, meaning that the system has infinitely many solutions. Example:

x + y = 2
2x + 2y = 4

has solutions (x, y) such as (1, 1), (0, 2), (1.8, 0.2), (4, −2), (−3000.75, 3002.75), and so on.
When the multiplicity is only partial (meaning that, for example, only the left hand sides of the equations are multiples, while the right hand sides are not, or not by the same number) then the system is unsolvable. For example, in

x + y = 2
4x + 4y = 1

the second equation yields that x + y = 1/4, which is in contradiction with the first equation. Such a system is also called inconsistent in the language of linear algebra.
When trying to solve a system of linear equations it is generally a good idea to check if one equation is a multiple of the other. If this is precisely so, the solution cannot be uniquely determined. If this is only partially so, the solution does not exist. This, however, does not mean that the equations must be multiples of each other to have a solution, as shown in the sections above; in other words: multiplicity in a system of linear equations is not a necessary condition for solvability.
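The three cases (unique, undetermined, unsolvable) can be told apart from the determinant and the right-hand sides; a sketch (`classify` is our own helper):

```python
# Classify a 2×2 linear system a1·x + b1·y = c1, a2·x + b2·y = c2.
def classify(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det != 0:
        return "unique solution"
    # Left-hand sides are proportional; check the right-hand sides too:
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return "infinitely many solutions"
    return "inconsistent"

assert classify(1, 1, 2, 2, 2, 4) == "infinitely many solutions"  # full multiple
assert classify(1, 1, 2, 4, 4, 1) == "inconsistent"               # partial multiple
assert classify(4, 2, 14, 2, -1, 1) == "unique solution"
```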


• Leonhard Euler, Elements of Algebra, 1770. English translation Tarquin Press, 2007, ISBN 978-1-899618-79-8; also online digitized editions,[2][3] 2006 and 1822.
• Charles Smith, A Treatise on Algebra,[4] in Cornell University Library Historical Math Monographs.[5]

[1] Slavin, Steve (1989). All the Math You'll Ever Need. John Wiley & Sons. p. 72. ISBN 0471506362.
[2] http://web.mat.bham.ac.uk/C.J.Sangwin/euler/
[3] http://books.google.co.uk/books?id=X8yv0sj4_1YC&dq=euler+elements&psp=1
[4] http://digital.library.cornell.edu/cgi/t/text/text-idx?c=math;idno=smit025
[5] http://historical.library.cornell.edu/math

FOIL method
In elementary algebra, FOIL is a mnemonic for the standard method of multiplying two binomials—hence the method may be referred to as the FOIL method. The word FOIL is an acronym for the four terms of the product:
• First (“first” terms of each binomial are multiplied together)
• Outer (“outside” terms are multiplied—that is, the first term of the first binomial and the second term of the second)
• Inner (“inside” terms are multiplied—second term of the first binomial and first term of the second)
• Last (“last” terms of each binomial are multiplied)
The general form is:

(a + b)(c + d) = ac + ad + bc + bd
[Figure: a visual representation of the FOIL rule; each colored line represents two terms that must be multiplied.]

Note that a is both a “first” term and an “outer” term; b is both a “last” and “inner” term, and so forth. The order of the four terms in the sum is not important, and need not match the order of the letters in the word FOIL.

The FOIL method is a special case of a more general method for multiplying algebraic expressions using the distributive law. The word FOIL was originally intended solely as a mnemonic for high-school students learning algebra, but many students and educators in the United States now use the word “foil” as a verb meaning “to expand the product of two binomials”. This neologism has not yet gained widespread acceptance in the mathematical community.


The FOIL method is most commonly used to multiply linear binomials. For example,

(x + 3)(x + 5) = x·x + x·5 + 3·x + 3·5 = x^2 + 5x + 3x + 15 = x^2 + 8x + 15
If either binomial involves subtraction, the corresponding terms must be negated. For example,

(2x − 3)(3x − 4) = 2x·3x + 2x·(−4) + (−3)·3x + (−3)·(−4) = 6x^2 − 8x − 9x + 12 = 6x^2 − 17x + 12
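FOIL itself is mechanical enough to state as code. In this hedged sketch, a binomial ax + b is represented by its coefficient pair (a, b); the function name `foil` is illustrative, not from the text:

```python
def foil(b1, b2):
    """Multiply two binomials given as coefficient pairs (a, b) for a*x + b."""
    (a, b), (c, d) = b1, b2
    first = a * c   # x^2 coefficient
    outer = a * d   # x coefficient, part 1
    inner = b * c   # x coefficient, part 2
    last  = b * d   # constant term
    return (first, outer + inner, last)

# (x + 3)(x + 5) = x^2 + 8x + 15
print(foil((1, 3), (1, 5)))   # (1, 8, 15)
# Subtraction is handled by negative coefficients: (2x - 3)(x + 4)
print(foil((2, -3), (1, 4)))  # (2, 5, -12)
```

Subtraction needs no separate rule here, since negating a term is the same as storing a negative coefficient.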
The distributive law
The FOIL method is equivalent to a two-step process involving the distributive law:

(a + b)(c + d) = a(c + d) + b(c + d) = ac + ad + bc + bd

In the first step, the quantity (c + d) is distributed over the addition in the first binomial. In the second step, the distributive law is used to simplify each of the two terms. Note that this process involves a total of three applications of the distributive property.
Reverse FOIL
The FOIL rule converts a product of two binomials into a sum of four (or fewer, if like terms are then combined) monomials. The reverse process is called factoring or factorization. In particular, if the proof above is read in reverse it illustrates the technique called factoring by grouping.

Table as an alternative to FOIL
A visual memory tool can replace the FOIL mnemonic for a pair of polynomials with any number of terms. Make a table with the terms of the first polynomial on the left edge and the terms of the second on the top edge, then fill in the table with products. The table equivalent to the FOIL rule looks like this:

        c    d
  a    ac   ad
  b    bc   bd

In the case that these are polynomials, the terms of a given degree are found by adding along the antidiagonals. To multiply (a + b + c)(w + x + y + z), the table would be as follows:

        w    x    y    z
  a    aw   ax   ay   az
  b    bw   bx   by   bz
  c    cw   cx   cy   cz
The sum of the table entries is the product of the polynomials. Thus

(a + b + c)(w + x + y + z) = aw + ax + ay + az + bw + bx + by + bz + cw + cx + cy + cz
Similarly, to multiply

one writes the same table

and sums along antidiagonals:

The FOIL rule cannot be directly applied to expanding products with more than two multiplicands, or multiplicands with more than two summands. However, applying the associative law and recursive foiling allows one to expand such products. For instance,

Alternate methods based on distributing forgo the use of the FOIL rule, but may be easier to remember and apply. For example,
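The table method translates directly into code for polynomials in one variable: each table cell holds the product of one term from each factor, and cells on the same antidiagonal contribute to the same degree. A minimal Python sketch (coefficient lists, lowest degree first; the name `poly_mul` is illustrative):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first.
    Entry k of the result sums the table cells along antidiagonal i + j = k."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b   # cell (i, j) lands on antidiagonal i + j
    return out

# (1 + x)(1 + x) = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))      # [1, 2, 1]
# (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(poly_mul([1, 2, 3], [4, 5]))   # [4, 13, 22, 15]
```

This antidiagonal summation is exactly the convolution of the two coefficient lists, which is why it handles factors with any number of terms.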



• Steege, Ray; Bailey, Kerry (1997), Schaum's Outline of Theory and Problems of Intermediate Algebra, Schaum's Outline Series, New York: McGraw–Hill, p. 54, ISBN 978-0-07-060839-9

Antisymmetric relation
In mathematics, a binary relation R on a set X is antisymmetric if, for all a and b in X, if R(a, b) and R(b, a), then a = b; or, equivalently, if R(a, b) with a ≠ b, then R(b, a) must not hold. In mathematical notation, this is:

∀a, b ∈ X: (R(a, b) ∧ R(b, a)) ⇒ a = b

or, equivalently,

∀a, b ∈ X: (R(a, b) ∧ a ≠ b) ⇒ ¬R(b, a)
The usual order relation ≤ on the real numbers is antisymmetric: if for two real numbers x and y both inequalities x ≤ y and y ≤ x hold, then x and y must be equal. Similarly, given two sets A and B, if every element in A is also in B and every element in B is also in A, then A and B must contain all the same elements and therefore be equal:

(A ⊆ B ∧ B ⊆ A) ⇒ A = B
Therefore, the subset order ⊆ on the subsets of any given set is antisymmetric. Partial and total orders are antisymmetric by definition. A relation can be both symmetric and antisymmetric (e.g., the equality relation), and there are relations which are neither symmetric nor antisymmetric (e.g., the "preys on" relation on biological species). Antisymmetry is different from asymmetry. According to one definition of asymmetric, anything that fails to be symmetric is asymmetric. Another definition of asymmetric makes asymmetry equivalent to antisymmetry plus irreflexivity.



The relation "x is even, y is odd" between a pair (x, y) of integers is antisymmetric:

The divisibility order of the natural numbers is another example of an antisymmetric relation.
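For a finite relation given as a set of ordered pairs, the definition can be tested directly; a small illustrative sketch, checking divisibility on an initial segment of the naturals:

```python
def is_antisymmetric(pairs):
    """R(a, b) and R(b, a) may both hold only when a == b."""
    rel = set(pairs)
    return all(a == b for (a, b) in rel if (b, a) in rel)

# Divisibility on {1, ..., 8} is antisymmetric
divides = [(a, b) for a in range(1, 9) for b in range(1, 9) if b % a == 0]
print(is_antisymmetric(divides))   # True

# "differs by 1" is symmetric (and irreflexive), hence not antisymmetric
near = [(a, b) for a in range(5) for b in range(5) if abs(a - b) == 1]
print(is_antisymmetric(near))      # False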

• Weisstein, Eric W., "Antisymmetric Relation [1]" from MathWorld.

[1] http://mathworld.wolfram.com/AntisymmetricRelation.html

Symmetric polynomial


In mathematics, a symmetric polynomial is a polynomial P(X1, X2, …, Xn) in n variables such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, …, n one has P(Xσ(1), Xσ(2), …, Xσ(n)) = P(X1, X2, …, Xn).

Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.

Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials, play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.

In two variables X1, X2, and in three variables X1, X2, X3, there are many ways to make specific symmetric polynomials; see the various types below. An example of a somewhat different flavor is one where first a polynomial is constructed that changes sign under every exchange of variables, and taking the square renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant). On the other hand, the polynomial in two variables X1 − X2 is not symmetric, since if one exchanges X1 and X2 one gets a different polynomial, X2 − X1. Similarly, in three variables, a polynomial that is invariant only under cyclic permutations of the three variables is not a symmetric polynomial, since symmetry under cyclic permutations alone is not sufficient.
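Symmetry under all permutations can be probed numerically by evaluating a polynomial at sample points; passing such a test is evidence rather than proof. The cyclic polynomial x²y + y²z + z²x used below is an illustrative example of a polynomial with only cyclic symmetry, not one named in the text:

```python
from itertools import permutations

def is_symmetric(P, n_vars, samples):
    """Numerically test invariance of P under every permutation of its
    arguments.  Passing all samples is evidence, not proof, of symmetry."""
    return all(
        P(*pt) == P(*(pt[i] for i in perm))
        for pt in samples
        for perm in permutations(range(n_vars))
    )

pts = [(1, 2, 5), (0, 3, -4), (2, 2, 7)]
print(is_symmetric(lambda x, y, z: x*y + y*z + z*x, 3, pts))        # True
print(is_symmetric(lambda x, y, z: x*x*y + y*y*z + z*z*x, 3, pts))  # False
```

The second polynomial is unchanged by the cyclic shift x → y → z → x but not by a single transposition, which the test detects.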



Galois theory
One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree n having n roots in a given field. These n roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function f of the n roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if f is given by a symmetric polynomial. This yields the approach to solving polynomial equations in terms of inverting this map, "breaking" the symmetry – given the coefficients of the polynomial (the elementary symmetric polynomials in the roots), how can one recover the roots? This leads to studying solutions of polynomials in terms of the permutation group of the roots, originally in the form of Lagrange resolvents, later developed in Galois theory.

Relation with the roots of a monic univariate polynomial
Consider a monic polynomial in t of degree n

P = t^n + a_(n−1) t^(n−1) + ⋯ + a_2 t^2 + a_1 t + a_0
with coefficients ai in some field k. There exist n roots x1, …, xn of P in some possibly larger field (for instance if k is the field of real numbers, the roots will exist in the field of complex numbers); some of the roots might be equal, but the fact that one has all roots is expressed by the relation

P = t^n + a_(n−1) t^(n−1) + ⋯ + a_1 t + a_0 = (t − x_1)(t − x_2) ⋯ (t − x_n)
By comparison of the coefficients one finds that

a_(n−1) = −(x_1 + x_2 + ⋯ + x_n)
a_(n−2) = x_1x_2 + x_1x_3 + ⋯ + x_(n−1)x_n
⋮
a_0 = (−1)^n x_1 x_2 ⋯ x_n
These are in fact just instances of Viète's formulas. They show that all coefficients of the polynomial are given in terms of the roots by a symmetric polynomial expression: although for a given polynomial P there may be qualitative differences between the roots (like lying in the base field k or not, being simple or multiple roots), none of this affects the way the roots occur in these expressions.

Now one may change the point of view, by taking the roots rather than the coefficients as basic parameters for describing P, and considering them as indeterminates rather than as constants in an appropriate field; the coefficients ai then become just the particular symmetric polynomials given by the above equations. Those polynomials, without the attached signs, are known as the elementary symmetric polynomials in x1, …, xn. A basic fact, known as the fundamental theorem of symmetric polynomials, states that any symmetric polynomial in n variables can be given by a polynomial expression in terms of these elementary symmetric polynomials.

It follows that any symmetric polynomial expression in the roots of a monic polynomial can be expressed as a polynomial in the coefficients of the polynomial, and in particular that its value lies in the base field k that contains those coefficients. Thus, when working only with such symmetric polynomial expressions in the roots, it is unnecessary to know anything particular about those roots, or to compute in any larger field than k in which those roots may lie. In fact the values of the roots themselves become rather irrelevant, and the necessary relations between coefficients and symmetric polynomial expressions can be found by computations in terms of symmetric polynomials only. An example of such relations are Newton's identities, which express the sum of any fixed power of the roots in terms of the elementary symmetric polynomials.
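Viète's formulas can be checked concretely by computing the elementary symmetric polynomials of a set of roots; a minimal sketch with the illustrative helper name `elementary`:

```python
from itertools import combinations
from math import prod

def elementary(k, xs):
    """e_k: sum over all products of k distinct variables (values here)."""
    return sum(prod(c) for c in combinations(xs, k))

roots = [1, 2, 3]   # monic cubic (t-1)(t-2)(t-3) = t^3 - 6t^2 + 11t - 6
print([elementary(k, roots) for k in range(4)])     # [1, 6, 11, 6]

# Viete: the coefficient of t^(n-k) is (-1)^k * e_k(roots)
coeffs = [(-1)**k * elementary(k, roots) for k in range(4)]
print(coeffs)                                       # [1, -6, 11, -6]
```

For roots 1, 2, 3 the monic cubic is t³ − 6t² + 11t − 6, matching the signed values of e_0 through e_3.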



Special kinds of symmetric polynomials
There are a few types of symmetric polynomials in the variables X1, X2, …, Xn that are fundamental.

Elementary symmetric polynomials
For each nonnegative integer k, the elementary symmetric polynomial ek(X1, …, Xn) is the sum of all distinct products of k distinct variables (some authors denote it by σk instead). For k = 0 there is only the empty product, so e0(X1, …, Xn) = 1, while for k > n, no products at all can be formed, so ek(X1, X2, …, Xn) = 0 in these cases. The remaining n elementary symmetric polynomials are building blocks for all symmetric polynomials in these variables: as mentioned above, any symmetric polynomial in the variables considered can be obtained from these elementary symmetric polynomials using multiplications and additions only. In fact one has the following more detailed facts:
• any symmetric polynomial P in X1, …, Xn can be written as a polynomial expression in the polynomials ek(X1, …, Xn) with 1 ≤ k ≤ n;
• this expression is unique up to equivalence of polynomial expressions;
• if P has integral coefficients, then the polynomial expression also has integral coefficients.
For example, for n = 2, the relevant elementary symmetric polynomials are e1(X1, X2) = X1 + X2 and e2(X1, X2) = X1X2. The first polynomial in the list of examples above can then be written in terms of them (for a proof that this is always possible see the fundamental theorem of symmetric polynomials).

Monomial symmetric polynomials
Powers and products of elementary symmetric polynomials work out to rather complicated expressions. If one seeks basic additive building blocks for symmetric polynomials, a more natural choice is to take those symmetric polynomials that contain only one type of monomial, with only those copies required to obtain symmetry. Any monomial in X1, …, Xn can be written as X1^α1 … Xn^αn where the exponents αi are natural numbers (possibly zero); writing α = (α1, …, αn) this can be abbreviated to X^α. The monomial symmetric polynomial mα(X1, …, Xn) is defined as the sum of all monomials X^β where β ranges over all distinct permutations of (α1, …, αn). For instance one has

m(3,1,1)(X1, X2, X3) = X1^3·X2·X3 + X1·X2^3·X3 + X1·X2·X3^3

Clearly mα = mβ when β is a permutation of α, so one usually considers only those mα for which α1 ≥ α2 ≥ … ≥ αn, in other words for which α is a partition. These monomial symmetric polynomials form a vector space basis: every symmetric polynomial P can be written as a linear combination of the monomial symmetric polynomials; to do this it suffices to separate the different types of monomials occurring in P. In particular, if P has integer coefficients, then so will the linear combination. The elementary symmetric polynomials are particular cases of monomial symmetric polynomials: for 0 ≤ k ≤ n one has ek(X1, …, Xn) = mα(X1, …, Xn), where α is the partition of k into k parts 1 (followed by n − k zeros).



Power-sum symmetric polynomials
For each integer k ≥ 1, the monomial symmetric polynomial m(k,0,…,0)(X1, …, Xn) is of special interest, and is called the power sum symmetric polynomial pk(X1, …, Xn), so

pk(X1, …, Xn) = X1^k + X2^k + … + Xn^k.

All symmetric polynomials can be obtained from the first n power sum symmetric polynomials by additions and multiplications, possibly involving rational coefficients. More precisely: any symmetric polynomial in X1, …, Xn can be expressed as a polynomial expression with rational coefficients in the power sum symmetric polynomials p1(X1, …, Xn), …, pn(X1, …, Xn). In particular, the remaining power sum polynomials pk(X1, …, Xn) for k > n can be expressed in terms of the first n power sum polynomials; for example, for n = 2,

p3 = (3·p1·p2 − p1^3)/2.

In contrast to the situation for the elementary and complete homogeneous polynomials, a symmetric polynomial in n variables with integral coefficients need not be a polynomial function with integral coefficients of the power sum symmetric polynomials. For example, for n = 2, the symmetric polynomial

X1^2·X2 + X1·X2^2

has the expression

X1^2·X2 + X1·X2^2 = (p1^3 − p1·p2)/2.

Using three variables one gets a different expression

X1^2·X2 + X1·X2^2 + X1^2·X3 + X1·X3^2 + X2^2·X3 + X2·X3^2 = p1·p2 − p3.
The corresponding expression was valid for two variables as well (it suffices to set X3 to zero), but since it involves p3, it could not be used to illustrate the statement for n = 2. The example shows that whether or not the expression for a given monomial symmetric polynomial in terms of the first n power sum polynomials involves rational coefficients may depend on n. But rational coefficients are always needed to express elementary symmetric polynomials (except the constant ones, and e1 which coincides with the first power sum) in terms of power sum polynomials. The Newton identities provide an explicit method to do this; it involves division by integers up to n, which explains the rational coefficients. Because of these divisions, the mentioned statement fails in general when coefficients are taken in a field of finite characteristic; however it is valid with coefficients in any ring containing the rational numbers.
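The claim that p3 in two variables is expressible in p1 and p2 with rational coefficients can be verified numerically; the identity used below, p3 = (3·p1·p2 − p1³)/2, is a standard consequence of Newton's identities, and the helper name `p` is illustrative:

```python
def p(k, xs):
    """Power sum p_k = sum of x^k over the variables' values."""
    return sum(x**k for x in xs)

# For n = 2 variables: p3 = (3*p1*p2 - p1**3) / 2
for xs in [(1, 2), (3, 5), (-2, 7)]:
    p1, p2, p3 = p(1, xs), p(2, xs), p(3, xs)
    assert 2 * p3 == 3 * p1 * p2 - p1**3   # avoid division: compare doubled
print("ok")
```

The factor 1/2 in the identity is exactly the kind of unavoidable rational coefficient the paragraph above describes.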

Complete homogeneous symmetric polynomials
For each nonnegative integer k, the complete homogeneous symmetric polynomial hk(X1, …, Xn) is the sum of all distinct monomials of degree k in the variables X1, …, Xn. For instance

h3(X1, X2) = X1^3 + X1^2·X2 + X1·X2^2 + X2^3.

The polynomial hk(X1, …, Xn) is also the sum of all distinct monomial symmetric polynomials of degree k in X1, …, Xn; for the given example

h3(X1, X2) = m(3)(X1, X2) + m(2,1)(X1, X2).

All symmetric polynomials in these variables can be built up from complete homogeneous ones: any symmetric polynomial in X1, …, Xn can be obtained from the complete homogeneous symmetric polynomials h1(X1, …, Xn), …, hn(X1, …, Xn) via multiplications and additions. More precisely: Any symmetric polynomial P in X1, …, Xn can be written as a polynomial expression in the polynomials hk(X1, …, Xn) with 1 ≤ k ≤ n. If P has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for n = 2, the relevant complete homogeneous symmetric polynomials are h1(X1, X2) = X1 + X2 and h2(X1, X2) = X1^2 + X1·X2 + X2^2. The first polynomial in the list of examples above can then be written in terms of them.

Like in the case of power sums, the given statement applies in particular to the complete homogeneous symmetric polynomials beyond hn(X1, …, Xn), allowing them to be expressed in terms of the ones up to that point; again the resulting identities become invalid when the number of variables is increased.

An important aspect of complete homogeneous symmetric polynomials is their relation to elementary symmetric polynomials, which can be given as the identities

∑_{i=0}^{k} (−1)^i · ei(X1, …, Xn) · h(k−i)(X1, …, Xn) = 0, for all k > 0, and any number of variables n.

Since e0(X1, …, Xn) and h0(X1, …, Xn) are both equal to 1, one can isolate either the first or the last terms of these summations; the former gives a set of equations that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials, and the latter gives a set of equations that allows one to do the inverse. This implicitly shows that any symmetric polynomial can be expressed in terms of the hk(X1, …, Xn) with 1 ≤ k ≤ n: one first expresses the symmetric polynomial in terms of the elementary symmetric polynomials, and then expresses those in terms of the mentioned complete homogeneous ones.
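Both families, and the alternating identity relating them, are easy to check on sample values; `e` and `h` below are illustrative helper names:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric: products of k *distinct* values."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous: all monomials of degree k (repetition allowed)."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# Identity: sum_{i=0}^{k} (-1)^i * e_i * h_{k-i} = 0 for all k > 0
xs = (2, 3, 5)
for k in range(1, 6):
    assert sum((-1)**i * e(i, xs) * h(k - i, xs) for i in range(k + 1)) == 0
print("ok")
```

Note that e(k, xs) is automatically 0 for k larger than the number of variables, so the loop also exercises the identity beyond that point.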

Schur polynomials
Another class of symmetric polynomials is that of the Schur polynomials, which are of fundamental importance in the applications of symmetric polynomials to representation theory. They are however not as easy to describe as the other kinds of special symmetric polynomials; see the main article for details.

Symmetric polynomials in algebra
Symmetric polynomials are important to linear algebra, representation theory, and Galois theory. They are also important in combinatorics, where they are mostly studied through the ring of symmetric functions, which avoids having to carry around a fixed number of variables all the time.

Alternating polynomials
Analogous to symmetric polynomials are alternating polynomials: polynomials that, rather than being invariant under permutation of the entries, change according to the sign of the permutation. These are all products of the Vandermonde polynomial and a symmetric polynomial, and form a quadratic extension of the ring of symmetric polynomials: the Vandermonde polynomial is a square root of the discriminant.

• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics 211 (revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556.
• Macdonald, I. G. (1979), Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs. Oxford: Clarendon Press.
• Macdonald, I. G. (1995), Symmetric Functions and Hall Polynomials, second ed. Oxford: Clarendon Press. ISBN 0-19-850450-0 (paperback, 1998).
• Stanley, Richard P. (1999), Enumerative Combinatorics, Vol. 2. Cambridge: Cambridge University Press. ISBN 0-521-56069-1.


Summation
Summation is the operation of adding a sequence of numbers; the result is their sum or total. If numbers are added sequentially from left to right, any intermediate result is a partial sum, prefix sum, or running total of the summation. The numbers to be summed may be integers, rational numbers, real numbers, or complex numbers. Besides numbers, other types of values can be added as well: vectors, matrices, polynomials and, in general, elements of any additive group (or even monoid). For finite sequences of such elements, summation always produces a well-defined sum (possibly by virtue of the convention for empty sums). Summation of an infinite sequence of values is not always possible, and when a value can be given for an infinite summation, this involves more than just the addition operation, namely also the notion of a limit. Such infinite summations are known as series. Another notion involving limits of finite sums is integration. The term summation has a special meaning related to extrapolation in the context of divergent series.

The summation of the sequence [1, 2, 4, 2] is an expression whose value is the sum of each of the members of the sequence. In the example, 1 + 2 + 4 + 2 = 9. Since addition is associative, the value does not depend on how the additions are grouped; for instance (1 + 2) + (4 + 2) and 1 + ((2 + 4) + 2) both have the value 9, so parentheses are usually omitted in repeated additions. Addition is also commutative, so permuting the terms of a finite sequence does not change its sum (for infinite summations this property may fail; see absolute convergence for conditions under which it still holds).

There is no special notation for the summation of such explicit sequences, as the corresponding repeated addition expression will do. There is only a slight difficulty if the sequence has fewer than two elements: the summation of a sequence of one term involves no plus sign (it is indistinguishable from the term itself), and the summation of the empty sequence cannot even be written down (but one can write its value "0" in its place).

If, however, the terms of the sequence are given by a regular pattern, possibly of variable length, then a summation operator may be useful or even essential. For the summation of the sequence of consecutive integers from 1 to 100 one could use an addition expression involving an ellipsis to indicate the missing terms: 1 + 2 + 3 + ... + 99 + 100. In this case the reader easily guesses the pattern; however, for more complicated patterns, one needs to be precise about the rule used to find successive terms, which can be achieved by using the summation operator "Σ". Using this notation the above summation is written as:

∑_{i=1}^{100} i
The value of this summation is 5050. It can be found without performing 99 additions, since it can be shown (for instance by mathematical induction) that

∑_{i=1}^{n} i = n(n + 1)/2
for all natural numbers n. More generally, formulas exist for many summations of terms following a regular pattern. The term "indefinite summation" refers to the search for an inverse image of a given infinite sequence s of values for the forward difference operator, in other words for a sequence, called antidifference of s, whose finite differences are given by s. By contrast, summation as discussed in this article is called "definite summation".
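The closed form n(n + 1)/2 is easy to confirm mechanically:

```python
# sum(range(1, n + 1)) adds 1 + 2 + ... + n; compare with n(n + 1)/2
n = 100
total = sum(range(1, n + 1))
print(total)                      # 5050
assert total == n * (n + 1) // 2  # integer division is exact: n(n+1) is even
```

The same check passes for any natural number n, in line with the induction argument.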



Capital-sigma notation
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol ∑ (U+2211), an enlarged form of the upright capital Greek letter Sigma. This is defined thus:

∑_{i=m}^{n} a_i = a_m + a_(m+1) + a_(m+2) + … + a_(n−1) + a_n
The subscript gives the symbol for an index variable, i. Here, i represents the index of summation; m is the lower bound of summation, and n is the upper bound of summation. Here i = m under the summation symbol means that the index i starts out equal to m. Successive values of i are found by adding 1 to the previous value of i, stopping when i = n. An example:

∑_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 9 + 16 + 25 + 36 = 86
Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in

∑ a_i^2   (an abbreviation for ∑_{i=1}^{n} a_i^2)
One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:

∑_{0 ≤ k < 100} f(k)

is the sum of f(k) over all (integer) k in the specified range,

∑_{x ∈ S} f(x)

is the sum of f(x) over all elements x in the set S, and

∑_{d | n} μ(d)

is the sum of μ(d) over all positive integers d dividing n.[1] There are also ways to generalize the use of many sigma signs. For example,

∑_{0 ≤ i < m, 0 ≤ j < n} a_(i,j)

is the same as

∑_{i=0}^{m−1} ∑_{j=0}^{n−1} a_(i,j)
A similar notation is applied when it comes to denoting the product of a sequence, which is similar to its summation, but which uses the multiplication operation instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with ∏, an enlarged form of the Greek capital letter Pi, replacing the ∑.
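The generalized forms of the notation (a condition under the sigma, several sigma signs) translate directly into code; a small illustrative sketch, with the identity function standing in for f:

```python
# Sum of f(d) over all positive divisors d of n, here with f(d) = d
def divisor_sum(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

print(divisor_sum(12))   # 1 + 2 + 3 + 4 + 6 + 12 = 28

# Nested sigmas: a double sum equals iterating one index inside the other
grid = sum(i * j for i in range(3) for j in range(4))
assert grid == sum(sum(i * j for j in range(4)) for i in range(3))
```

The generator-expression condition (`if n % d == 0`) plays exactly the role of the logical condition written under the sigma.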



Special cases
It is possible to sum fewer than 2 numbers:
• If the summation has one summand x, then the evaluated sum is x.
• If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if m = n in the definition above, then there is only one term in the sum; if m > n, then there is none.
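These conventions are mirrored by Python's built-ins, where the empty sum is 0 and the empty product is 1:

```python
import math

print(sum([]))           # 0  -- the empty sum is the additive identity
print(math.prod([]))     # 1  -- the empty product is the multiplicative identity
print(sum(range(5, 5)))  # 0  -- lower bound above upper bound: no terms at all
```

The last line is the m > n case of the definition above: the range contributes no terms, so the result falls back to the empty sum.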

Formal definition

If the iterated function notation is defined and is considered a more primitive notation than summation, then summation can be defined in terms of iterated functions as follows:

where the curly braces define a 2-tuple and the right arrow is a function definition taking a 2-tuple to a 2-tuple. The function is applied b − a + 1 times on the tuple {a, 0}.

Measure theory notation
In the notation of measure and integration theory, a sum can be expressed as a definite integral,

∑_{i=a}^{b} f(i) = ∫_[a,b] f dμ
where [a,b] is the subset of the integers from a to b, and where μ is the counting measure.

Fundamental theorem of discrete calculus
Indefinite sums can be used to calculate definite sums with the formula:[2]

∑_{i=a}^{b} f(i) = Δ^(−1)f(b + 1) − Δ^(−1)f(a)

where Δ^(−1)f denotes an antidifference of f.
Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any:

increasing function f:

∫_(s−1)^(t) f(x) dx ≤ ∑_{i=s}^{t} f(i) ≤ ∫_(s)^(t+1) f(x) dx

decreasing function f:

∫_(s)^(t+1) f(x) dx ≤ ∑_{i=s}^{t} f(i) ≤ ∫_(s−1)^(t) f(x) dx
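The bounds for an increasing function can be checked numerically; the midpoint-rule helper `integral` below is illustrative, not part of the text:

```python
def integral(f, a, b, steps=10000):
    """Crude midpoint-rule approximation of the integral of f over [a, b]."""
    w = (b - a) / steps
    return sum(f(a + (i + 0.5) * w) for i in range(steps)) * w

# For increasing f: integral(f, s-1, t) <= sum_{i=s}^{t} f(i) <= integral(f, s, t+1)
f, s, t = (lambda x: x * x), 1, 50
total = sum(f(i) for i in range(s, t + 1))
assert integral(f, s - 1, t) <= total <= integral(f, s, t + 1)
print("bounds hold")
```

With f(x) = x² the sum is 42925, sandwiched between roughly 41667 and 44217, so the inequalities hold with room to spare.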
For more general approximations, see the Euler–Maclaurin formula. For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance

(b − a)/n · ∑_{i=0}^{n−1} f(a + i·(b − a)/n) ≈ ∫_a^b f(x) dx,


since the right-hand side is by definition the limit for n → ∞ of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.

The formulas below involve finite sums; for infinite summations, see the list of mathematical series.

General manipulations
∑_{i=m}^{n} C·f(i) = C·∑_{i=m}^{n} f(i), where C is a constant

∑_{i=m}^{n} (f(i) + g(i)) = ∑_{i=m}^{n} f(i) + ∑_{i=m}^{n} g(i)

∑_{i=m}^{n} f(i) = ∑_{i=m+p}^{n+p} f(i − p)  (shifting the index)

∑_{i=m}^{n} f(i) = ∑_{i=m}^{k} f(i) + ∑_{i=k+1}^{n} f(i)  for m ≤ k < n (splitting a sum)
Some summations of polynomial expressions

∑_{i=1}^{n} 1/i = H_n  (see Harmonic number)

∑_{i=1}^{n} (a + (i − 1)d) = n(2a + (n − 1)d)/2  (see arithmetic series)

∑_{i=1}^{n} i = n(n + 1)/2  (special case of the arithmetic series)

∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6

∑_{i=1}^{n} i^3 = (n(n + 1)/2)^2

More generally, ∑_{i=1}^{n} i^p has a closed form for each fixed p whose coefficients involve the Bernoulli numbers, where B_k denotes a Bernoulli number. The preceding formulas can be generalized to begin a series at any natural number value (i.e., at i = m rather than i = 1).

Some summations involving exponential terms
In the summations below, x is a constant not equal to 1.

∑_{i=m}^{n} x^i = (x^m − x^(n+1))/(1 − x)  (m < n; see geometric series)

∑_{i=0}^{n} x^i = (1 − x^(n+1))/(1 − x)  (geometric series starting at 1)

∑_{i=0}^{n−1} 2^i = 2^n − 1  (special case when x = 2)

∑_{i=0}^{n} (1/2)^i = 2 − (1/2)^n  (special case when x = 1/2)
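The geometric-series formula and its x = 1/2 special case can be verified with exact rational arithmetic; a brief sketch with the illustrative helper `geometric`:

```python
from fractions import Fraction

def geometric(x, m, n):
    """Sum of x^i for i from m to n inclusive."""
    return sum(x**i for i in range(m, n + 1))

x, n = Fraction(1, 2), 10
# sum_{i=0}^{n} x^i = (1 - x^{n+1}) / (1 - x) for x != 1
assert geometric(x, 0, n) == (1 - x**(n + 1)) / (1 - x)
# special case x = 1/2: partial sums approach 2 from below
print(float(geometric(x, 0, n)))   # 1.9990234375
```

Using Fraction rather than float keeps the equality check exact instead of approximate.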



Some summations involving binomial coefficients
There exist a great many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.

∑_{k=0}^{n} (n choose k) a^k b^(n−k) = (a + b)^n, the binomial theorem
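The binomial theorem can be spot-checked with Python's math.comb:

```python
from math import comb

# Binomial theorem: sum_k C(n, k) a^k b^(n-k) == (a + b)^n
a, b, n = 3, 5, 7
assert sum(comb(n, k) * a**k * b**(n - k) for k in range(n + 1)) == (a + b)**n

# Setting a = b = 1 gives the row sum of Pascal's triangle, 2^n
print(sum(comb(10, k) for k in range(11)))   # 1024
```

The a = b = 1 specialization is itself one of the most commonly quoted identities: the entries of row n of Pascal's triangle sum to 2^n.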

Growth rates
The following are useful approximations (using theta notation):

∑_{i=1}^{n} i^c ∈ Θ(n^(c+1))  for real c greater than −1

∑_{i=1}^{n} 1/i ∈ Θ(log n)  (see Harmonic number)

∑_{i=1}^{n} c^i ∈ Θ(c^n)  for real c greater than 1

∑_{i=1}^{n} (log i)^c ∈ Θ(n·(log n)^c)  for non-negative real c

∑_{i=1}^{n} (log i)^c·i^d ∈ Θ(n^(d+1)·(log n)^c)  for non-negative real c, d

∑_{i=1}^{n} (log i)^c·i^d·b^i ∈ Θ(n^d·(log n)^c·b^n)  for non-negative real c, d and real b > 1

[1] Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet (i through q) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see x instead of k in the above formulae involving k. See also typographical conventions in mathematical formulae.
[2] Kenneth H. Rosen, John G. Michaels, Handbook of Discrete and Combinatorial Mathematics, CRC Press, 1999, ISBN 0-8493-0149-1.

Further reading
• Nicholas J. Higham, "The accuracy of floating point summation", SIAM J. Scientific Computing 14 (4), 783–799 (1993).



External links
• Summation on PlanetMath
• Derivation of Polynomials to Express the Sum of Natural Numbers with Exponents (http://upload.wikimedia.org/wikipedia/commons/6/62/Sum_of_i.pdf)

Closure

In mathematics, a set is said to be closed under some operation if performing that operation on members of the set always produces a unique member of the same set. For example, the real numbers are closed under subtraction, but the natural numbers are not: 3 and 8 are both natural numbers, but the result of 3 − 8 is not. Similarly, the set {0} is closed under multiplication. A set is said to be closed under a collection of operations if it is closed under each of the operations individually.

A set that is closed under an operation or collection of operations is said to satisfy a closure property. Often a closure property is introduced as an axiom, which is then usually called the axiom of closure. Note that modern set-theoretic definitions usually define operations as maps between sets, so adding closure to a structure as an axiom is superfluous, though it still makes sense to ask whether subsets are closed. For example, the set of real numbers is closed under subtraction, whereas (as mentioned above) its subset of natural numbers is not.

When a set S is not closed under some operations, one can usually find the smallest set containing S that is closed. This smallest closed set is called the closure of S (with respect to these operations). For example, the closure under subtraction of the set of natural numbers, viewed as a subset of the real numbers, is the set of integers. An important example is that of topological closure. The notion of closure is generalized by Galois connection, and further by monads.

Note that the set S must be a subset of a closed set in order for the closure operator to be defined. In the preceding example, it is important that the reals are closed under subtraction; in the domain of the natural numbers subtraction is not always defined. The two uses of the word "closure" should not be confused. The former usage refers to the property of being closed, and the latter refers to the smallest closed set containing one that is not closed.
In short, the closure of a set satisfies a closure property.
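For a finite set, closure under a binary operation is directly checkable; a minimal sketch with the illustrative helper `is_closed` (the subtraction example uses a finite slice of the naturals, so an escaping result demonstrates non-closure):

```python
def is_closed(S, op):
    """Check whether a finite set S is closed under a binary operation op."""
    return all(op(a, b) in S for a in S for b in S)

mod5 = set(range(5))
print(is_closed(mod5, lambda a, b: (a + b) % 5))      # True: addition mod 5

small_naturals = set(range(10))
print(is_closed(small_naturals, lambda a, b: a - b))  # False: 3 - 8 = -5 escapes
```

The failing pair 3 and 8 is exactly the counterexample used in the text: their difference is not a natural number.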

Closed sets
A set is closed under an operation if that operation returns a member of the set when evaluated on members of the set. Sometimes the requirement that the operation be valued in a set is explicitly stated, in which case it is known as the axiom of closure. For example, one may define a group as a set with a binary product operator obeying several axioms, including an axiom that the product of any two elements of the group is again an element. However, the modern definition of an operation makes this axiom superfluous; an n-ary operation on S is just a subset of S^(n+1). By its very definition, an operator on a set cannot have values outside the set.

Nevertheless, the closure property of an operator on a set still has some utility. Closure on a set does not necessarily imply closure on all subsets. Thus a subgroup of a group is a subset on which the binary product and the unary operation of inversion satisfy the closure axiom.

An operation of a different sort is that of finding the limit points of a subset of a topological space (if the space is first-countable, it suffices to restrict consideration to the limits of sequences, but in general one must consider at least limits of nets). A set that is closed under this operation is usually just referred to as a closed set in the context of topology. Without any further qualification, the phrase usually means closed in this sense. Closed intervals like [1, 2] = {x : 1 ≤ x ≤ 2} are closed in this sense.

A partially ordered set is downward closed (and also called a lower set) if for every element of the set all smaller elements are also in it; this applies for example to the real intervals (−∞, p) and (−∞, p], and to an ordinal number p represented as the interval [0, p); every downward closed set of ordinal numbers is itself an ordinal number. Upward closed and upper set are defined similarly.


P closures of binary relations
The notion of a closure can be generalized for an arbitrary binary relation R ⊆ S×S, and an arbitrary property P in the following way: the P closure of R is the least relation Q ⊆ S×S that contains R (i.e. R ⊆ Q) and for which property P holds (i.e. P(Q) is true). For instance, one can define the symmetric closure as the least symmetric relation containing R. This generalization is often encountered in the theory of rewriting systems, where one often uses more "wordy" notions such as the reflexive transitive closure R*—the smallest preorder containing R, or the reflexive transitive symmetric closure R≡—the smallest equivalence relation containing R, and therefore also known as the equivalence closure. For arbitrary P and R, the P closure of R need not exist. In the above examples, these exist because reflexivity, transitivity and symmetry are closed under arbitrary intersections. In such cases, the P closure can be directly defined as the intersection of all sets with property P containing R.[1]
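For a finite relation, these P closures can be computed as least fixpoints; the function names and the toy relation below are my own illustrative choices:

```python
def closure(r, step):
    """Smallest superset of r to which step() adds nothing new."""
    r = set(r)
    while True:
        extra = step(r) - r
        if not extra:
            return r
        r |= extra

S = {1, 2, 3}
R = {(1, 2), (2, 3)}

reflexive  = closure(R, lambda r: {(x, x) for x in S})
symmetric  = closure(R, lambda r: {(b, a) for (a, b) in r})
transitive = closure(R, lambda r: {(a, d) for (a, b) in r
                                          for (c, d) in r if b == c})
# Equivalence closure: apply all three closure steps at once.
equivalence = closure(R, lambda r: {(x, x) for x in S}
                                 | {(b, a) for (a, b) in r}
                                 | {(a, d) for (a, b) in r
                                           for (c, d) in r if b == c})

print(sorted(transitive))                            # [(1, 2), (1, 3), (2, 3)]
print(equivalence == {(a, b) for a in S for b in S}) # True: 1 ~ 2 ~ 3 relates all
```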

Closure operator
Given an operation on a set X, one can define the closure C(S) of a subset S of X to be the smallest subset closed under that operation that contains S as a subset. For example, the closure of a subset of a group is the subgroup generated by that set.

The closure of sets with respect to some operation defines a closure operator on the subsets of X. The closed sets can be determined from the closure operator; a set is closed if it is equal to its own closure. Typical structural properties of all closure operations are:
• The closure is increasing or extensive: the closure of an object contains the object.
• The closure is idempotent: the closure of the closure equals the closure.
• The closure is monotone, that is, if X is contained in Y, then also C(X) is contained in C(Y).
An object that is its own closure is called closed. By idempotency, an object is closed if and only if it is the closure of some object. These three properties define an abstract closure operator. Typically, an abstract closure acts on the class of all subsets of a set.
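A minimal sketch of such a closure operator (my own illustration, using addition mod 8 as the operation, so that C generates subgroups of Z/8Z):

```python
def C(seed, op):
    """Smallest superset of seed closed under the binary operation op."""
    s = set(seed)
    while True:
        new = {op(a, b) for a in s for b in s} - s
        if not new:
            return s
        s |= new

add8 = lambda a, b: (a + b) % 8

print(sorted(C({2}, add8)))     # [0, 2, 4, 6]: the subgroup of Z/8Z generated by 2

# The three structural properties from the text:
assert {2} <= C({2}, add8)                      # extensive
assert C(C({2}, add8), add8) == C({2}, add8)    # idempotent
assert C({2}, add8) <= C({2, 3}, add8)          # monotone
```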

• In topology and related branches, the relevant operation is taking limits. The topological closure of a set is the corresponding closure operator. The Kuratowski closure axioms characterize this operator.
• In linear algebra, the linear span of a set X of vectors is the closure of that set; it is the smallest subset of the vector space that includes X and is closed under the operation of linear combination. This subset is a subspace.
• In matroid theory, the closure of X is the largest superset of X that has the same rank as X.
• In set theory, the transitive closure of a binary relation.
• In algebra, the algebraic closure of a field.
• In commutative algebra, closure operations for ideals, such as integral closure and tight closure.
• In geometry, the convex hull of a set S of points is the smallest convex set of which S is a subset.
• In the theory of formal languages, the Kleene closure of a language can be described as the set of strings that can be made by concatenating zero or more strings from that language.
• In group theory, the normal closure of a set of group elements is the smallest normal subgroup containing the set.


See also
• Open set
• Clopen set

[1] Franz Baader and Tobias Nipkow, Term Rewriting and All That, Cambridge University Press, 1998, pp. 8–9.


Zero divisor
In abstract algebra, a nonzero element a of a ring is a left zero divisor if there exists a nonzero b such that ab = 0.[1] Similarly, a nonzero element a of a ring is a right zero divisor if there exists a nonzero c such that ca = 0. An element that is both a left and a right zero divisor is simply called a zero divisor. If multiplication in the ring is commutative, then the left and right zero divisors are the same. A nonzero element of a ring that is neither a left nor right zero divisor is called regular.

• The ring Z of integers has no zero divisors, but in the ring Z × Z with componentwise addition and multiplication, (0,1)·(1,0) = (0,0), so both (0,1) and (1,0) are zero divisors.
• An example of a zero divisor in the ring of 2-by-2 matrices is the matrix

  [ 1  1 ]
  [ 2  2 ]

because for instance

  [ 1  1 ] [  1   1 ]   [ 0  0 ]
  [ 2  2 ] [ −1  −1 ] = [ 0  0 ]
• More generally, in the ring of n-by-n matrices over a field, the left and right zero divisors coincide; they are precisely the nonzero singular matrices. In the ring of n-by-n matrices over an integral domain, the zero divisors are precisely the nonzero matrices with determinant zero.
• Here is an example of a ring with an element that is a zero divisor on one side only. Let S be the set of all sequences of integers (a1, a2, a3, ...). Take for the ring all additive maps from S to S, with pointwise addition and composition as the ring operations. (That is, our ring is End(S), the endomorphisms of the additive group S.) Three examples of elements of this ring are the right shift R(a1, a2, a3, ...) = (0, a1, a2, ...), the left shift L(a1, a2, a3, ...) = (a2, a3, ...), and a third additive map T(a1, a2, a3, ...) = (a1, 0, 0, ...). All three of these additive maps are nonzero, and the composites LT and TR are both zero, so L is a left zero divisor and R is a right zero divisor in the ring of additive maps from S to S. However, L is not a right zero divisor and R is not a left zero divisor: the composite LR is the identity, so if some additive map f from S to S satisfies fL = 0, then composing both sides of this equation on the right with R shows (fL)R = f(LR) = f1 = f has to be 0; similarly, if some f satisfies Rf = 0, then composing both sides on the left with L shows f is 0. Continuing with this example, note that while RL is a left zero divisor ((RL)T = R(LT) is 0 because LT is), LR is not a zero divisor on either side because it is the identity. Concretely, we can interpret additive maps from S to S as countably infinite matrices. The matrix A with entries 1 on the superdiagonal and 0 elsewhere realizes L explicitly (just apply the matrix to a vector and see that the effect is exactly a left shift), and the transpose B = Aᵀ realizes the right shift on S. That AB is the identity matrix is the same as saying LR is the identity. In particular, as matrices, A is a left zero divisor but not a right zero divisor.
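The one-sided behaviour of L, R, and T can be checked computationally. In this sketch (the encoding of a sequence as a function from a 0-based index to an integer is my own), composition of the maps mirrors the ring product:

```python
def L(f):   # left shift: (a1, a2, a3, ...) -> (a2, a3, ...)
    return lambda n: f(n + 1)

def R(f):   # right shift: (a1, a2, a3, ...) -> (0, a1, a2, ...)
    return lambda n: 0 if n == 0 else f(n - 1)

def T(f):   # keep only the first term: (a1, a2, ...) -> (a1, 0, 0, ...)
    return lambda n: f(0) if n == 0 else 0

a = lambda n: n + 1                         # the sequence 1, 2, 3, ...
first5 = lambda f: [f(n) for n in range(5)]

print(first5(L(T(a))))   # [0, 0, 0, 0, 0]: LT = 0, so L is a left zero divisor
print(first5(T(R(a))))   # [0, 0, 0, 0, 0]: TR = 0, so R is a right zero divisor
print(first5(L(R(a))))   # [1, 2, 3, 4, 5]: LR is the identity
```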


Left or right zero divisors can never be units, because if a is invertible and ab = 0, then 0 = a⁻¹0 = a⁻¹ab = b. Every nonzero idempotent element a ≠ 1 is a zero divisor, since a² = a implies a(a − 1) = (a − 1)a = 0. Nonzero nilpotent ring elements are also trivially zero divisors.

A commutative ring with 0 ≠ 1 and without zero divisors is called an integral domain. Zero divisors occur in the quotient ring Z/nZ if and only if n is composite. When n is prime, there are no zero divisors and this ring is, in fact, a field, as every nonzero element is a unit. Zero divisors also occur in the sedenions, or 16-dimensional hypercomplex numbers under the Cayley–Dickson construction.

The set of zero divisors is the union of the associated prime ideals of the ring.
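The Z/nZ statement is easy to verify by brute force; the helper below is my own sketch, not part of the article:

```python
def zero_divisors(n):
    """Nonzero elements a of Z/nZ with a*b = 0 (mod n) for some nonzero b."""
    return sorted(a for a in range(1, n)
                  if any(a * b % n == 0 for b in range(1, n)))

print(zero_divisors(12))   # [2, 3, 4, 6, 8, 9, 10]: n = 12 is composite
print(zero_divisors(7))    # []: n = 7 is prime, so Z/7Z is a field
```

Note that the zero divisors of Z/12Z are exactly the nonzero residues sharing a factor with 12, in line with the text's composite/prime dichotomy.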

[1] See Hazewinkel et al. (2004), p. 2.

• Michiel Hazewinkel, Nadiya Gubareni, Nadezhda Mikhaĭlovna Gubareni, Vladimir V. Kirichenko. Algebras, Rings and Modules, Volume 1. Springer, 2004. ISBN 1-4020-2690-0.
• Weisstein, Eric W., "Zero Divisor", from MathWorld.

Negative and non-negative numbers


A negative number is any real number that is less than zero. Such numbers are often used to represent the amount of a loss or absence. For example, a debt that is owed may be thought of as a negative asset, or a decrease in some quantity may be thought of as a negative increase. Negative numbers are also used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature (a thermometer indicating a slightly negative temperature, for instance).

Negative numbers are usually written with a minus sign in front. For example, −3 represents a negative quantity with a magnitude of three, and is pronounced "minus three". Conversely, a number that is greater than zero is called positive; zero is usually thought of as neither positive nor negative.[1] The positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as its sign.

In mathematics, every real number other than zero is either positive or negative. The positive whole numbers are referred to as natural numbers, while the positive and negative whole numbers (together with zero) are referred to as integers. In bookkeeping, amounts owed are often represented by red numbers, or a number in parentheses, as an alternative notation to represent negative numbers.

Negative numbers appeared for the first time in history in the Nine Chapters on the Mathematical Art, which in its present form dates from the period of the Chinese Han Dynasty (202 BC – AD 220), but may well contain much older material.[2] Indian mathematicians developed consistent and correct rules on the use of negative numbers, which later spread to the Middle East, and then into Europe. Prior to the concept of negative numbers, negative solutions to problems were considered "false" and equations requiring negative solutions were described as absurd.[3]

As the result of subtraction
Negative numbers can be thought of as resulting from the subtraction of a larger number from a smaller. For example, negative three is the result of subtracting three from zero: 0 − 3 = −3. In general, the subtraction of a larger number from a smaller yields a negative result, with the magnitude of the result being the difference between the two numbers. For example, 5 − 8 = −3 since 8 − 5 = 3.



The number line
The relationship between negative numbers, positive numbers, and zero is often expressed in the form of a number line:

Numbers appearing farther to the right on this line are greater, while numbers appearing farther to the left are less. Thus zero appears in the middle, with the positive numbers to the right and the negative numbers to the left. Note that a negative number with greater magnitude is considered less. For example, even though (positive) 8 is greater than (positive) 5, written 8 > 5, negative 8 is considered to be less than negative 5: −8 < −5. In addition, any negative number is less than any positive number, so −8 < 5  and  −5 < 8.

Signed numbers
In the context of negative numbers, a number that is greater than zero is referred to as positive. Thus every real number other than zero is either positive or negative, while zero itself is not considered to have a sign. Positive numbers are sometimes written with a plus sign in front, e.g. +3 denotes a positive three. Because zero is neither positive nor negative, the term non-negative is sometimes used to refer to a number that is either positive or zero, while non-positive is used to refer to a number that is either negative or zero.

Arithmetic involving negative numbers
The minus sign "−" signifies the operator for both the binary (two-operand) operation of subtraction (as in y − z) and the unary (one-operand) operation of negation (as in −x, or twice in −(−x)). A special case of unary negation occurs when it operates on a positive number, in which case the result is a negative number (as in −5). The ambiguity of the "−" symbol does not generally lead to ambiguity in arithmetic expressions, because the order of operations makes only one interpretation or the other possible for each "−". However, it can lead to confusion and be difficult for a person to understand an expression when operator symbols appear adjacent to one another. A solution can be to parenthesize the unary "−" along with its operand. For example, the expression 7 + −5 may be clearer if written 7 + (−5) (even though they mean exactly the same thing formally). The subtraction expression 7 − 5 is a different expression that doesn't represent the same operations, but it evaluates to the same result.

Sometimes in elementary schools a number may be prefixed by a superscript minus sign or plus sign to explicitly distinguish negative and positive numbers, as in[4]

⁻2 + ⁻5  gives  ⁻7.



Addition of two negative numbers is very similar to addition of two positive numbers. For example, (−3) + (−5) = −8. The idea is that two debts can be combined into a single debt of greater magnitude.

When adding together a mixture of positive and negative numbers, one can think of the negative numbers as positive quantities being subtracted. For example: 8 + (−3) = 8 − 3 = 5  and  (−2) + 7 = 7 − 2 = 5. In the first example, a credit of 8 is combined with a debt of 3, which yields a total credit of 5. If the negative number has greater magnitude, then the result is negative: (−8) + 3 = 3 − 8 = −5  and  2 + (−7) = 2 − 7 = −5. Here the credit is less than the debt, so the net result is a debt.

As discussed above, it is possible for the subtraction of two non-negative numbers to yield a negative answer: 5 − 8 = −3

A visual representation of the addition of positive and negative numbers. Larger balls represent numbers with greater magnitude.

In general, subtraction of a positive number is the same thing as addition of a negative. Thus 5 − 8 = 5 + (−8) = −3 and (−3) − 5 = (−3) + (−5) = −8 On the other hand, subtracting a negative number is the same as adding a positive. (The idea is that losing a debt is the same thing as gaining a credit.) Thus 3 − (−5) = 3 + 5 = 8 and (−5) − (−8) = (−5) + 8 = 3.

When multiplying numbers, the magnitude of the product is always just the product of the two magnitudes. The sign of the product is determined by the following rules:
• The product of one positive number and one negative number is negative.
• The product of two negative numbers is positive.
Thus (−2) × 3 = −6 and (−2) × (−3) = 6. The reason behind the first example is simple: adding three −2's together yields −6: (−2) × 3 = (−2) + (−2) + (−2) = −6.

The reasoning behind the second example is more complicated. The idea again is that losing a debt is the same thing as gaining a credit. In this case, losing two debts of three each is the same as gaining a credit of six: (−2 debts) × (−3 each) = +6 credit.

The convention that a product of two negative numbers is positive is also necessary for multiplication to follow the distributive law. In this case, we know that (−2) × (−3) + 2 × (−3) = (−2 + 2) × (−3) = 0 × (−3) = 0. Since 2 × (−3) = −6, the product (−2) × (−3) must equal 6.

These rules lead to another (equivalent) rule: the sign of any product a × b depends on the sign of a as follows:
• if a is positive, then the sign of a × b is the same as the sign of b, and
• if a is negative, then the sign of a × b is the opposite of the sign of b.
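These sign rules can be checked exhaustively over a small range; the `sign` helper below is my own illustration:

```python
def sign(x):
    """Return -1, 0, or +1 according to the sign of x."""
    return (x > 0) - (x < 0)

# The product's sign follows the rules above for every small pair:
for a in range(-5, 6):
    for b in range(-5, 6):
        assert sign(a * b) == sign(a) * sign(b)

# The distributive-law argument from the text, as plain arithmetic:
assert (-2) * (-3) + 2 * (-3) == (-2 + 2) * (-3) == 0
print((-2) * (-3))   # 6
```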


The sign rules for division are the same as for multiplication. For example, 8 ÷ (−2) = −4, (−8) ÷ 2 = −4, and (−8) ÷ (−2) = 4. If dividend and divisor have the same sign, the result is always positive; if they have opposite signs, the result is negative.

The negative version of a positive number is referred to as its negation. For example, −3 is the negation of the positive number 3. The sum of a number and its negation is equal to zero: 3 + (−3) = 0. That is, the negation of a positive number is the additive inverse of the number.

Using algebra, we may write this principle as an algebraic identity: x + (−x) = 0. This identity holds for any positive number x. It can be made to hold for all real numbers by extending the definition of negation to include zero and negative numbers. Specifically:
• The negation of 0 is 0, and
• The negation of a negative number is the corresponding positive number. For example, the negation of −3 is +3.
In general, −(−x) = x.

The absolute value of a number is the non-negative number with the same magnitude. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3, and the absolute value of 0 is 0.



Formal construction of negative integers
In a similar manner to the rational numbers, we can extend the natural numbers N to the integers Z by defining an integer as an ordered pair of natural numbers (a, b). We can extend addition and multiplication to these pairs with the following rules:

(a, b) + (c, d) = (a + c, b + d)
(a, b) × (c, d) = (a × c + b × d, a × d + b × c)

We define an equivalence relation ~ upon these pairs with the following rule:

(a, b) ~ (c, d) if and only if a + d = b + c.

This equivalence relation is compatible with the addition and multiplication defined above, and we may define Z to be the quotient set N²/~, i.e. we identify two pairs (a, b) and (c, d) if they are equivalent in the above sense. Note that Z, equipped with these operations of addition and multiplication, is a ring, and is in fact the prototypical example of a ring. We can also define a total order on Z by writing

(a, b) ≤ (c, d) if and only if a + d ≤ b + c.

This leads to an additive zero of the form (a, a), an additive inverse of (a, b) of the form (b, a), a multiplicative unit of the form (a + 1, a), and a definition of subtraction (a, b) − (c, d) = (a + d, b + c). This construction is a special case of the Grothendieck construction.
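The pair construction above can be carried out directly in code. The class below is my own sketch (with Python's built-in non-negative integers standing in for N); each operator method implements the corresponding rule from the text:

```python
class Int:
    """An integer as an ordered pair (a, b) of naturals, read as a - b."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, o):            # (a,b) + (c,d) = (a+c, b+d)
        return Int(self.a + o.a, self.b + o.b)

    def __mul__(self, o):            # (a,b) x (c,d) = (ac+bd, ad+bc)
        return Int(self.a * o.a + self.b * o.b,
                   self.a * o.b + self.b * o.a)

    def __eq__(self, o):             # (a,b) ~ (c,d)  iff  a + d = b + c
        return self.a + o.b == self.b + o.a

    def __le__(self, o):             # (a,b) <= (c,d) iff  a + d <= b + c
        return self.a + o.b <= self.b + o.a

    def __neg__(self):               # additive inverse: swap the pair
        return Int(self.b, self.a)

neg3 = Int(0, 3)                     # represents -3
five = Int(7, 2)                     # 7 - 2: another name for 5
assert neg3 + five == Int(2, 0)      # -3 + 5 = 2
assert neg3 * neg3 == Int(9, 0)      # (-3) x (-3) = 9
assert -neg3 == Int(3, 0)            # negation swaps the pair
assert Int(4, 4) == Int(0, 0)        # every (a, a) is the additive zero
print("pair construction verified")
```

Note that the sign rules for multiplication fall out of the definition automatically, rather than being imposed as separate conventions.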

The negative of a number is unique, as is shown by the following proof. Let x be a number and let y be its negative. Suppose y′ is another negative of x. By an axiom of the real number system,

x + y′ = 0,  and  x + y = 0.

And so, x + y′ = x + y. Using the law of cancellation for addition, it is seen that y′ = y. Thus y is equal to any other negative of x. That is, y is the unique negative of x.

Negative numbers appear for the first time in history in the Nine Chapters on the Mathematical Art (Jiu zhang suan-shu), which in its present form dates from the period of the Han Dynasty (202 BC – AD 220), but may well contain much older material.[2] The Nine Chapters used red counting rods to denote positive coefficients and black rods for negative.[5] (This system is the exact opposite of contemporary printing of positive and negative numbers in the fields of banking, accounting, and commerce, wherein red numbers denote negative values and black numbers signify positive values.) The Chinese were also able to solve simultaneous equations involving negative numbers.

For a long time, negative solutions to problems were considered "false". In Hellenistic Egypt, Diophantus in the third century AD referred to an equation that was equivalent to 4x + 20 = 0 (which has a negative solution) in Arithmetica, saying that the equation was absurd.

The use of negative numbers was known in early India, and their role in situations like mathematical problems of debt was understood.[6] Consistent and correct rules for working with these numbers were formulated.[7] The diffusion of this concept led the Arab intermediaries to pass it to Europe.[6] The ancient Indian Bakhshali Manuscript (which Ian Pearce claims was written some time between 200 BC and AD 300,[8] while George Gheverghese Joseph dates it to about AD 400 and no later than the early 7th century[9]) carried out calculations with negative numbers, using "+" as a negative sign.[10]

During the 7th century AD, negative numbers were used in India to represent debts. The Indian mathematician Brahmagupta, in Brahma-Sphuta-Siddhanta (written in AD 628), discussed the use of negative numbers to produce the general form of the quadratic formula that remains in use today. He also found negative solutions of quadratic equations and gave rules regarding operations involving negative numbers and zero, such as "A debt cut off from nothingness becomes a credit; a credit cut off from nothingness becomes a debt." He called positive numbers "fortunes," zero "a cipher," and negative numbers "debts."[11][12]

During the 8th century AD, the Islamic world learned about negative numbers from Arabic translations of Brahmagupta's works, and by the 10th century Islamic mathematicians were using negative numbers for debts. The earliest known Islamic text that uses negative numbers is A Book on What Is Necessary from the Science of Arithmetic for Scribes and Businessmen by Abū al-Wafā' al-Būzjānī.[13]

In the 12th century AD in India, Bhāskara II also gave negative roots for quadratic equations but rejected them because they were inappropriate in the context of the problem. He stated that a negative value is "in this case not to be taken, for it is inadequate; people do not approve of negative roots."

Knowledge of negative numbers eventually reached Europe through Latin translations of Arabic and Indian works. European mathematicians, for the most part, resisted the concept of negative numbers until the 17th century, although Fibonacci allowed negative solutions in financial problems where they could be interpreted as debits (chapter 13 of Liber Abaci, AD 1202) and later as losses (in Flos). In the 15th century, Nicolas Chuquet, a Frenchman, used negative numbers as exponents and referred to them as "absurd numbers." In AD 1759, Francis Maseres, an English mathematician, wrote that negative numbers "darken the very whole doctrines of the equations and make dark of the things which are in their nature excessively obvious and simple". He came to the conclusion that negative numbers were nonsensical.[14] In the 18th century it was common practice to ignore any negative results derived from equations, on the assumption that they were meaningless.[15]


[1] Different languages have different conventions regarding the sign of zero. For example, in French, zero is considered to be both positive and negative. The French words positif and négatif mean the same as English "positive or zero" and "negative or zero" respectively.
[2] Struik, pages 32–33. "In these matrices we find negative numbers, which appear here for the first time in history."
[3] Diophantus, Arithmetica.
[4] Grant P. Wiggins; Jay McTighe (2005). Understanding by Design. ACSD Publications. p. 210. ISBN 1416600353.
[5] Temple, Robert (1986). The Genius of China: 3,000 Years of Science, Discovery, and Invention. With a foreword by Joseph Needham. New York: Simon & Schuster. ISBN 0671620282. Page 141.
[6] Bourbaki, page 49.
[7] Britannica Concise Encyclopedia (2007). "Algebra".
[8] Pearce, Ian (May 2002). "The Bakhshali manuscript" (http://www-history.mcs.st-andrews.ac.uk/HistTopics/Bakhshali_manuscript.html). The MacTutor History of Mathematics archive. Retrieved 2007-07-24.
[9] Teresi, Dick (2002). Lost Discoveries: The Ancient Roots of Modern Science–from the Babylonians to the Mayas. New York: Simon & Schuster. ISBN 0684837188. Pages 65–66.
[10] Teresi, Dick (2002). Lost Discoveries: The Ancient Roots of Modern Science–from the Babylonians to the Mayas. New York: Simon & Schuster. ISBN 0684837188. Page 65.
[11] Colva M. Roney-Dougal, Lecturer in Pure Mathematics at the University of St Andrews, stated this on the BBC Radio 4 programme "In Our Time" on 9 March 2006.
[12] Knowledge Transfer and Perceptions of the Passage of Time, ICEE-2002 Keynote Address by Colin Adamson-Macedo. "Referring again to Brahmagupta's great work, all the necessary rules for algebra, including the 'rule of signs', were stipulated, but in a form which used the language and imagery of commerce and the market place. Thus 'dhana' (= fortunes) is used to represent positive numbers, whereas 'rina' (= debts) were negative."
[13] Hashemipour, Behnaz (2007). "Būzjānī: Abū al‐Wafāʾ Muḥammad ibn Muḥammad ibn Yaḥyā al‐Būzjānī" (http://islamsci.mcgill.ca/RASI/BEA/Buzjani_BEA.htm). In Thomas Hockey et al. The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 188–189. ISBN 9780387310220. (PDF version: http://islamsci.mcgill.ca/RASI/BEA/Buzjani_BEA.pdf)
[14] Maseres, Francis (1758). A dissertation on the use of the negative sign in algebra: containing a demonstration of the rules usually given concerning it; and shewing how quadratic and cubic equations may be explained, without the consideration of negative roots. To which is added, as an appendix, Mr. Machin's Quadrature of the Circle.
[15] Martinez, Alberto A. (2006). Negative Math: How Mathematical Rules Can Be Positively Bent. Princeton University Press. A history of controversies on negative numbers, mainly from the 1600s until the early 1900s.


• Bourbaki, Nicolas (1998). Elements of the History of Mathematics. Berlin, Heidelberg, and New York: Springer-Verlag. ISBN 3540647678.
• Struik, Dirk J. (1987). A Concise History of Mathematics. New York: Dover Publications.

External links
• Maseres' biographical information ( Maseres.html)
• BBC Radio 4 series "In Our Time," on Negative Numbers, March 9, 2006 ( history/inourtime/inourtime_20060309.shtml)
• Endless Examples & Exercises: Operations with Signed Integers ( arithmetic/SignedValues01_EE.asp)
• Math Forum: Ask Dr. Math FAQ: Negative Times a Negative ( negxneg.html)

Real number
In mathematics, a real number is a value that represents a quantity along a continuum, such as −5 (an integer), 4/3 (a rational number that is not an integer), 8.6 (a rational number given by a finite decimal representation), √2 (the square root of two, an algebraic number that is not rational) and π (3.1415926535..., a transcendental number). Real numbers can be thought of as points on an infinitely long line called the number line or real line, where the points corresponding to integers are equally spaced. Any real number can be determined by a possibly infinite decimal representation (such as that of π above), where each consecutive digit aᵣ at rank r indicates the addition of aᵣ·10⁻ʳ to the decimal truncated at rank r − 1. The real line can be thought of as a part of the complex plane, and correspondingly, complex numbers include real numbers as a special case.

These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics. The discovery of a suitably rigorous definition of the real numbers (indeed, the realization that a better definition was needed) was one of the most important developments of 19th century mathematics. The currently standard axiomatic definition is that the real numbers form the unique complete totally ordered field (R, +, ·, <), up to isomorphism,[1] whereas popular constructive definitions of real numbers include declaring them as equivalence classes of Cauchy sequences of rational numbers, Dedekind cuts, or certain infinite "decimal representations", together with precise interpretations for the arithmetic operations and the order relation. These definitions are equivalent in the realm of classical mathematics.


Basic properties
A real number may be either rational or irrational; either algebraic or transcendental; and either positive, negative, or zero. Real numbers are used to measure continuous quantities. They may in theory be expressed by decimal representations that have an infinite sequence of digits to the right of the decimal point; these are often written in a form such as 324.823122147… The ellipsis (three dots) indicates that there would still be more digits to come.

More formally, real numbers have the two basic properties of being an ordered field, and having the least upper bound property. The first says that real numbers comprise a field, with addition and multiplication as well as division by nonzero numbers, which can be totally ordered on a number line in a way compatible with addition and multiplication. The second says that if a nonempty set of real numbers has an upper bound, then it has a least upper bound. The second condition distinguishes the real numbers from the rational numbers: for example, the set of rational numbers whose square is less than 2 has rational upper bounds (e.g. 1.5) but no rational least upper bound; hence the rational numbers do not satisfy the least upper bound property.

In physics
In the physical sciences, most physical constants such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact, the fundamental physical theories such as classical mechanics, electromagnetism, quantum mechanics, general relativity and the standard model are described using mathematical structures, typically smooth manifolds or Hilbert spaces, that are based on the real numbers although actual measurements of physical quantities are of finite accuracy and precision. In some recent developments of theoretical physics stemming from the holographic principle, the Universe is seen fundamentally as an information store, essentially zeroes and ones, organized in much less geometrical fashion and manifesting itself as space-time and particle fields only on a more superficial level. This approach removes the real number system from its foundational role in physics and even prohibits the existence of infinite precision real numbers in the physical universe by considerations based on the Bekenstein bound.[2]

In computation
Computer arithmetic cannot directly operate on real numbers, but only on a finite subset of rational numbers, limited by the number of bits used to store them, whether as floating-point numbers or arbitrary-precision numbers. However, computer algebra systems can operate on irrational quantities exactly by manipulating symbolic formulas for them rather than their rational or decimal approximations;[3] however, it is not in general possible to determine whether two such expressions are equal (the constant problem).

A real number is said to be computable if there exists an algorithm that yields its digits. Because there are only countably many algorithms,[4] but an uncountable number of reals, almost all real numbers fail to be computable. Some constructivists accept the existence of only those reals that are computable. The set of definable numbers is broader, but still only countable. If computers could use unlimited-precision real numbers (real computation), then one could solve NP-complete problems, and even #P-complete problems, in polynomial time, answering affirmatively the P = NP problem.
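A small illustration of the gap between machine arithmetic and the reals (the examples, including the pair encoding of a + b√2, are my own, not from the text):

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 exactly, so a real-number
# identity fails:
print(0.1 + 0.2 == 0.3)                                      # False

# Exact rational arithmetic restores it:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# A computer algebra system keeps irrationals exact by manipulating formulas.
# Representing a + b*sqrt(2) as the pair (a, b) makes multiplication exact:
def mul(x, y):        # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

print(mul((0, 1), (0, 1)))   # (2, 0): sqrt(2) squared is exactly 2
```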



Mathematicians use the symbol R (or alternatively, ℝ, the letter "R" in blackboard bold, Unicode U+211D) to represent the set of all real numbers; as this set is naturally endowed with the structure of a field, the expression "field of the real numbers" is more frequently used than "set of all real numbers". The notation Rⁿ refers to the Cartesian product of n copies of R, which is an n-dimensional vector space over the field of the real numbers; this vector space may be identified with the n-dimensional space of Euclidean geometry as soon as a coordinate system has been chosen in the latter. For example, a value from R³ consists of three real numbers and specifies the coordinates of a point in 3-dimensional space.

In mathematics, real is used as an adjective, meaning that the underlying field is the field of the real numbers (or the real field), as in real matrix, real polynomial and real Lie algebra. As a substantive, the term is used almost strictly in reference to the real numbers themselves (e.g., the "set of all reals").

Vulgar fractions had been used by the Egyptians around 1000 BC; the Vedic "Sulba Sutras" ("The rules of chords") in, ca. 600 BC, include what may be the first 'use' of irrational numbers. The concept of irrationality was implicitly accepted by early Indian mathematicians since Manava (c. 750–690 BC), who were aware that the square roots of certain numbers such as 2 and 61 could not be exactly determined.[5] Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. The Middle Ages saw the acceptance of zero, negative, integral and fractional numbers, first by Indian and Chinese mathematicians, and then by Arabic mathematicians, who were also the first to treat irrational numbers as algebraic objects,[6] which was made possible by the development of algebra. Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers.[7] The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers as solutions to quadratic equations or as coefficients in an equation, often in the form of square roots, cube roots and fourth roots.[8] In the 16th century, Simon Stevin created the basis for modern decimal notation, and insisted that there is no difference between rational and irrational numbers in this regard. In the 17th century, Descartes introduced the term "real" to describe roots of a polynomial, distinguishing them from "imaginary" ones. In the 18th and 19th centuries there was much work on irrational and transcendental numbers. Johann Heinrich Lambert (1761) gave the first flawed proof that π cannot be rational; Adrien-Marie Legendre (1794) completed the proof, and showed that π is not the square root of a rational number. 
Paolo Ruffini (1799) and Niels Henrik Abel (1824) both constructed proofs of the Abel–Ruffini theorem: that the general quintic or higher equations cannot be solved by a general formula involving only arithmetical operations and roots. Évariste Galois (1832) developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. Joseph Liouville (1840) showed that neither e nor e2 can be a root of an integer quadratic equation, and then established the existence of transcendental numbers; Georg Cantor (1873) later gave a different and much simpler proof of their existence. Charles Hermite (1873) first proved that e is transcendental, and Ferdinand von Lindemann (1882) showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass (1885), still further by David Hilbert (1893), and was finally made elementary by Adolf Hurwitz and Paul Gordan.
The development of calculus in the 18th century used the entire set of real numbers without having defined them rigorously. The first rigorous definition was given by Georg Cantor in 1871. In 1874 he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held belief, his first method was not his famous diagonal argument, which he published in 1891; see Cantor's first uncountability proof.

Real number


The real number system can be defined axiomatically up to an isomorphism, which is described below. There are also many ways to construct "the" real number system, for example, starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of Cauchy sequences of rationals or as Dedekind cuts, which are certain subsets of rational numbers. Another possibility is to start from some rigorous axiomatization of Euclidean geometry (Hilbert, Tarski, etc.) and then define the real number system geometrically. From the structuralist point of view all these constructions are on equal footing.

Axiomatic approach
Let R denote the set of all real numbers. Then:
• The set R is a field, meaning that addition and multiplication are defined and have the usual properties.
• The field R is ordered, meaning that there is a total order ≥ such that, for all real numbers x, y and z:
  • if x ≥ y then x + z ≥ y + z;
  • if x ≥ 0 and y ≥ 0 then xy ≥ 0.
• The order is Dedekind-complete; that is, every non-empty subset S of R with an upper bound in R has a least upper bound (also called supremum) in R.
The last property is what differentiates the reals from the rationals. For example, the set of rationals with square less than 2 has a rational upper bound (e.g., 1.5) but no rational least upper bound, because the square root of 2 is not rational.
The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fields R1 and R2, there exists a unique field isomorphism from R1 to R2, allowing us to think of them as essentially the same mathematical object. For another axiomatization of R, see Tarski's axiomatization of the reals.
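To make the role of Dedekind completeness concrete, the least upper bound of {x : x² < 2} (a set with rational upper bounds but no rational least upper bound) can be approximated by bisection. This is an illustrative sketch, not part of the original article; Python and the function name are chosen for convenience.

```python
# Approximate sup {x : x*x < 2} = sqrt(2) by bisection.
def sup_of_square_less_than_two(tolerance=1e-12):
    lo, hi = 1.0, 2.0          # lo lies in the set, hi is an upper bound
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if mid * mid < 2:      # mid is still in the set: raise the lower end
            lo = mid
        else:                  # mid is an upper bound: lower the upper end
            hi = mid
    return hi

print(sup_of_square_less_than_two())  # close to 1.41421356...
```

Every step keeps a member of the set below and an upper bound above; Dedekind completeness guarantees the two ends squeeze onto a real number, even though that number is not rational.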

Construction from the rational numbers
The real numbers can be constructed as a completion of the rational numbers in such a way that a sequence defined by a decimal or binary expansion like {3, 3.1, 3.14, 3.141, 3.1415,...} converges to a unique real number. For details and other constructions of real numbers, see construction of the real numbers.

A main reason for using real numbers is that the reals contain all limits. More precisely, every sequence of real numbers having the property that consecutive terms of the sequence become arbitrarily close to each other necessarily has the property that after some term in the sequence the remaining terms are arbitrarily close to some specific real number. In mathematical terminology, this means that the reals are complete (in the sense of metric spaces or uniform spaces, which is a different sense than the Dedekind completeness of the order in the previous section). This is formally defined in the following way:
A sequence (xn) of real numbers is called a Cauchy sequence if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |xn − xm| is less than ε for all n and m that are both greater than N. In other words, a sequence is a Cauchy sequence if its elements xn eventually come and remain arbitrarily close to each other.
A sequence (xn) converges to the limit x if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |xn − x| is less than ε provided that n is greater than N. In other words, a sequence has limit x if its elements eventually come and remain arbitrarily close to x.

Notice that every convergent sequence is a Cauchy sequence. An important fact about the real numbers is that the converse is also true: Every Cauchy sequence of real numbers is convergent to a real number. That is, the reals are complete. Note that the rationals are not complete. For example, the sequence (1, 1.4, 1.41, 1.414, 1.4142, 1.41421, ...), where each term adds a digit of the decimal expansion of the positive square root of 2, is Cauchy but it does not converge to a rational number. (In the real numbers, in contrast, it converges to the positive square root of 2.) The existence of limits of Cauchy sequences is what makes calculus work and is of great practical use. The standard numerical test to determine if a sequence has a limit is to test if it is a Cauchy sequence, as the limit is typically not known in advance. For example, the standard series of the exponential function

    e^x = Σ_{n=0}^∞ x^n / n!

converges to a real number because for every x the tail sums

    Σ_{n=N}^{M} x^n / n!   (with M ≥ N)

can be made arbitrarily small by choosing N sufficiently large. This proves that the sequence of partial sums is Cauchy, so we know that the series converges even if the limit is not known in advance.
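This Cauchy criterion can be watched numerically: the gaps between partial sums of the exponential series shrink rapidly, so we can be sure the series converges without naming the limit in advance. A small illustrative Python sketch (not from the original article):

```python
from math import factorial

def partial_sum(x, n):
    """n-th partial sum of the exponential series: sum of x**k / k! for k = 0..n."""
    return sum(x**k / factorial(k) for k in range(n + 1))

x = 3.0
# |S_{n+5} - S_n| shrinks as n grows: the Cauchy property in action.
gaps = [abs(partial_sum(x, n + 5) - partial_sum(x, n)) for n in (5, 10, 20, 30)]
print(gaps)  # each gap is far smaller than the previous one
```

The gaps drop from about 1.7 at n = 5 to below floating-point resolution by n = 30, which is exactly the "tail sums can be made arbitrarily small" statement above.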

"The complete ordered field"
The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways.
First, an order can be lattice-complete. It is easy to see that no ordered field can be lattice-complete, because it can have no largest element (given any element z, z + 1 is larger), so this is not the sense that is meant.
Additionally, an order can be Dedekind-complete, as defined in the section Axiomatic approach. The uniqueness result at the end of that section justifies using the word "the" in the phrase "complete ordered field" when this is the sense of "complete" that is meant. This sense of completeness is most closely related to the construction of the reals from Dedekind cuts, since that construction starts from an ordered field (the rationals) and then forms the Dedekind-completion of it in a standard way.
These two notions of completeness ignore the field structure. However, an ordered group (in this case, the additive group of the field) defines a uniform structure, and uniform structures have a notion of completeness; the description in the section Completeness above is a special case. (We refer to the notion of completeness in uniform spaces rather than the related and better known notion for metric spaces, since the definition of metric space relies on already having a characterisation of the real numbers.) It is not true that R is the only uniformly complete ordered field, but it is the only uniformly complete Archimedean field, and indeed one often hears the phrase "complete Archimedean field" instead of "complete ordered field". Since it can be proved that any uniformly complete Archimedean field must also be Dedekind-complete (and vice versa, of course), this justifies using "the" in the phrase "the complete Archimedean field".
This sense of completeness is most closely related to the construction of the reals from Cauchy sequences (the construction carried out in full in this article), since it starts with an Archimedean field (the rationals) and forms the uniform completion of it in a standard way. But the original use of the phrase "complete Archimedean field" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of R. Thus R is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the

construction of the reals from surreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield.


Advanced properties
The reals are uncountable; that is, there are strictly more real numbers than natural numbers, even though both sets are infinite. In fact, the cardinality of the reals equals that of the set of subsets (i.e., the power set) of the natural numbers, and Cantor's diagonal argument shows that the latter set's cardinality is strictly bigger than the cardinality of N. Since only a countable set of real numbers can be algebraic, almost all real numbers are transcendental. The non-existence of a subset of the reals with cardinality strictly between that of the integers and the reals is known as the continuum hypothesis. The continuum hypothesis can neither be proved nor disproved: it is independent of the standard axioms of set theory.
The real numbers form a metric space: the distance between x and y is defined to be the absolute value |x − y|. By virtue of being a totally ordered set, they also carry an order topology; the topology arising from the metric and the one arising from the order are identical, but yield different presentations for the topology – in the order topology as intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals are a contractible (hence connected and simply connected), separable metric space of dimension 1, and are everywhere dense. The real numbers are locally compact but not compact. There are various properties that uniquely specify them; for instance, all unbounded, connected, and separable order topologies are necessarily homeomorphic to the reals.
Every nonnegative real number has a square root in R, although no negative number does. This shows that the order on R is determined by its algebraic structure. Also, every polynomial of odd degree admits at least one real root: these two properties make R the premier example of a real closed field.
Proving this is the first half of one proof of the fundamental theorem of algebra.
The reals carry a canonical measure, the Lebesgue measure, which is the Haar measure on their structure as a topological group normalised such that the unit interval [0,1] has measure 1.
The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set of hyperreal numbers satisfies the same first-order sentences as R. Ordered fields that satisfy the same first-order sentences as R are called nonstandard models of R. This is what makes nonstandard analysis work: by proving a first-order statement in some nonstandard model (which may be easier than proving it in R), we know that the same statement must also be true of R.
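Cantor's diagonal argument mentioned above can be demonstrated on a finite sample: given any finite list of digit strings, one can always construct a string that differs from the i-th entry in the i-th position, so the list cannot have been exhaustive. The following is an illustrative sketch (the sample rows and the rule "4 becomes 5, everything else becomes 4" are ours, not from the article):

```python
def diagonal_escape(rows):
    """Given digit strings (each at least as long as the list), build a
    string that differs from the i-th row at position i."""
    return "".join("5" if row[i] == "4" else "4" for i, row in enumerate(rows))

rows = ["141592", "271828", "414213", "333333", "101001", "999999"]
d = diagonal_escape(rows)
print(d)  # differs from every listed row at the diagonal position
assert all(d[i] != rows[i][i] for i in range(len(rows)))
```

Applied to a hypothetical complete list of all infinite decimal expansions, the same construction yields a real number missing from the list, which is the contradiction at the heart of the uncountability proof.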

Generalizations and extensions
The real numbers can be generalized and extended in several different directions:
• The complex numbers contain solutions to all polynomial equations and hence are an algebraically closed field, unlike the real numbers. However, the complex numbers are not an ordered field.
• The affinely extended real number system adds two elements +∞ and −∞. It is a compact space. It is no longer a field, not even an additive group, but it still has a total order; moreover, it is a complete lattice.
• The real projective line adds only one value ∞. It is also a compact space. Again, it is no longer a field, not even an additive group. However, it allows division of a non-zero element by zero. It is not ordered anymore.
• The long real line pastes together ℵ1* + ℵ1 copies of the real line plus a single point (here ℵ1* denotes the reversed ordering of ℵ1) to create an ordered set that is "locally" identical to the real numbers, but somehow longer; for instance, there is an order-preserving embedding of ℵ1 in the long real line but not in the real numbers.

The long real line is the largest ordered set that is complete and locally Archimedean. As with the previous two examples, this set is no longer a field or additive group.
• Ordered fields extending the reals are the hyperreal numbers and the surreal numbers; both of them contain infinitesimal and infinitely large numbers and thus are not Archimedean.
• Self-adjoint operators on a Hilbert space (for example, self-adjoint square complex matrices) generalize the reals in many respects: they can be ordered (though not totally ordered), they are complete, all their eigenvalues are real and they form a real associative algebra. Positive-definite operators correspond to the positive reals and normal operators correspond to the complex numbers.


"Reals" in set theory
In set theory, specifically descriptive set theory, the Baire space is used as a surrogate for the real numbers since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals".

Real numbers and logic
The real numbers are most often formalized using the Zermelo–Fraenkel axiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied in reverse mathematics and in constructive mathematics.[9] Abraham Robinson's theory of nonstandard or hyperreal numbers extends the set of real numbers by infinitesimal numbers, which allows infinitesimal calculus to be built in a way that is closer to the usual intuition of the notion of limit. Edward Nelson's internal set theory is a non-Zermelo–Fraenkel set theory that treats the nonstandard real numbers as elements of the set of the reals itself (and not of an extension of it, as in Robinson's theory). The continuum hypothesis posits that the cardinality of the set of the real numbers is ℵ1, the smallest infinite cardinal number after ℵ0, the cardinality of the integers. Paul Cohen proved in 1963 that the continuum hypothesis is independent of the other axioms of set theory: one may adopt either it or its negation as an axiom without making set theory contradictory.

[1] More precisely, given two complete totally ordered fields, there is a unique isomorphism between them. This implies that the identity is the unique field automorphism of the reals which is compatible with the ordering.
[2] Scott Aaronson, "NP-complete Problems and Physical Reality" (http://arxiv.org/abs/quant-ph/0502072), ACM SIGACT News, Vol. 36, No. 1 (March 2005), pp. 30–52.
[3] Cohen, Joel S. (2002), Computer Algebra and Symbolic Computation: Elementary Algorithms, 1, A K Peters, Ltd., p. 32, ISBN 9781568811581.
[4] James L. Hein, Discrete Structures, Logic, and Computability (http://books.google.com/books?id=vmlcc2IH9dEC), 3rd edition (Jones and Bartlett Publishers: Sudbury, MA), section 14.1.1 (2010).
[5] T. K. Puttaswamy, "The Accomplishments of Ancient Indian Mathematicians", pp. 410–411, in Selin, Helaine; D'Ambrosio, Ubiratan (2000), Mathematics Across Cultures: The History of Non-Western Mathematics, Springer, ISBN 1402002602.
[6] O'Connor, John J.; Robertson, Edmund F., "Arabic mathematics: forgotten brilliance?" (http://www-history.mcs.st-andrews.ac.uk/HistTopics/Arabic_mathematics.html), MacTutor History of Mathematics archive, University of St Andrews.
[7] Matvievskaya, Galina (1987), "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics", Annals of the New York Academy of Sciences 500: 253–277 [254], doi:10.1111/j.1749-6632.1987.tb37206.x.
[8] Jacques Sesiano, "Islamic mathematics", p. 148, in Selin, Helaine; D'Ambrosio, Ubiratan (2000), Mathematics Across Cultures: The History of Non-Western Mathematics, Springer, ISBN 1402002602.
[9] Bishop, Errett; Bridges, Douglas (1985), Constructive Analysis, Grundlehren der Mathematischen Wissenschaften 279, Berlin, New York: Springer-Verlag, ISBN 978-3-540-15066-4, chapter 2.



• Georg Cantor, 1874, "Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen", Journal für die Reine und Angewandte Mathematik, volume 77, pages 258–262.
• Robert Katz, 1964, Axiomatic Analysis, D. C. Heath and Company.
• Edmund Landau, 2001, Foundations of Analysis, American Mathematical Society, ISBN 0-8218-2693-X.
• John M. Howie, 2005, Real Analysis, Springer, ISBN 1-85233-314-6.

External links
• The real numbers: Pythagoras to Stevin (Real_numbers_1.html)
• The real numbers: Stevin to Hilbert (Real_numbers_2.html)
• The real numbers: Attempts to understand (Real_numbers_3.html)
• What are the "real numbers," really?

Natural number
In mathematics, the natural numbers are the ordinary whole numbers used for counting ("there are 6 coins on the table") and ordering ("this is the 3rd largest city in the country"). These purposes are related to the linguistic notions of cardinal and ordinal numbers, respectively (see English numerals). A more recent notion is that of a nominal number, which is used only for naming. Properties of the natural numbers related to divisibility, such as the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partition enumeration, are studied in combinatorics. There is no universal agreement about whether to include zero in the set of natural numbers: some define the natural numbers to be the positive integers {1, 2, 3, ...}, while for others the term designates the non-negative integers {0, 1, 2, 3, ...}. The former definition is the traditional one, with the latter definition first appearing in the 19th century. Some authors use the term "natural number" to exclude zero and "whole number" to include it; others use "whole number" in a way that excludes zero, or in a way that includes both zero and the negative integers.

Natural numbers can be used for counting (one apple, two apples, three apples, ...).



History of natural numbers and the status of zero
The natural numbers had their origins in the words used to count things, beginning with the number 1. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over one million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10.
A much later advance was the development of the idea that zero can be considered as a number, with its own numeral. A zero digit was used in place-value notation (within other numbers) as early as 700 BC by the Babylonians, but they omitted such a digit when it would have been the last symbol in the number.[1] The Olmec and Maya civilizations used zero as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral zero in modern times originated with the Indian mathematician Brahmagupta in 628. However, zero had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525, without being denoted by a numeral (standard Roman numerals do not have a symbol for zero); instead nulla or nullae, genitive of nullus, the Latin word for "none", was employed to denote a zero value.[2]
The first systematic study of numbers as abstractions (that is, as abstract entities) is usually credited to the Greek philosophers Pythagoras and Archimedes. Note that many Greek mathematicians did not consider 1 to be "a number", so to them 2 was the smallest number.[3] Independent studies also occurred at around the same time in India, China, and Mesoamerica.
Several set-theoretical definitions of natural numbers were developed in the 19th century. With these definitions it was convenient to include 0 (corresponding to the empty set) as a natural number. Including 0 is now the common convention among set theorists, logicians, and computer scientists. Many other mathematicians also include 0, although some have kept the older tradition and take 1 to be the first natural number.[4] Sometimes the set of natural numbers with 0 included is called the set of whole numbers or counting numbers. On the other hand, integer being Latin for whole, the integers usually stand for the negative and positive whole numbers (and zero) altogether.

Mathematicians use N or ℕ (an N in blackboard bold, Unicode U+2115) to refer to the set of all natural numbers. This set is countably infinite: it is infinite but countable by definition. This is also expressed by saying that the cardinal number of the set is aleph-null (ℵ0). To be unambiguous about whether zero is included or not, sometimes a subscript (or superscript) "0" is added in the former case, and a superscript "*" (or subscript "1") is added in the latter case: N0 = {0, 1, 2, ...}; N* = {1, 2, ...}.

(Sometimes, an index or superscript "+" is added to signify "positive". However, this is often used for "nonnegative" in other cases, as R+ = [0,∞) and Z+ = {0, 1, 2, ...}, at least in European literature. The notation "*", however, is standard for nonzero or, rather, invertible elements.)
Some authors who exclude zero from the naturals use the terms natural numbers with zero, whole numbers, or counting numbers, denoted W, for the set of nonnegative integers. Others use the notation P for the positive integers if there is no danger of confusing this with the prime numbers.
Set theorists often denote the set of all natural numbers including zero by a lower-case Greek letter omega: ω. This stems from the identification of an ordinal number with the set of ordinals that are smaller. One may observe that adopting the von Neumann definition of ordinals and defining cardinal numbers as minimal ordinals among those

with the same cardinality, one gets ℵ0 = ω.


Algebraic properties
The addition (+) and multiplication (×) operations on natural numbers have several algebraic properties:
• Closure under addition and multiplication: for all natural numbers a and b, both a + b and a × b are natural numbers.
• Associativity: for all natural numbers a, b, and c, a + (b + c) = (a + b) + c and a × (b × c) = (a × b) × c.
• Commutativity: for all natural numbers a and b, a + b = b + a and a × b = b × a.
• Existence of identity elements: for every natural number a, a + 0 = a and a × 1 = a.
• Distributivity of multiplication over addition: for all natural numbers a, b, and c, a × (b + c) = (a × b) + (a × c).
• No zero divisors: if a and b are natural numbers such that a × b = 0, then a = 0 or b = 0.
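These identities can be spot-checked mechanically over a small range of naturals. The following illustrative sketch (not from the original article) exhaustively verifies each listed law for all triples up to 19:

```python
from itertools import product

ns = range(0, 20)
for a, b, c in product(ns, repeat=3):
    assert a + (b + c) == (a + b) + c            # associativity of +
    assert a * (b * c) == (a * b) * c            # associativity of ×
    assert a + b == b + a and a * b == b * a     # commutativity
    assert a + 0 == a and a * 1 == a             # identity elements
    assert a * (b + c) == a * b + a * c          # distributivity
    assert (a * b != 0) or (a == 0 or b == 0)    # no zero divisors
print("all identities hold on the sample")
```

A finite check is of course not a proof; the general statements follow from the recursive definitions given in the next paragraph.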

One can recursively define an addition on the natural numbers by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. Here S should be read as "successor". This turns the natural numbers (N, +) into a commutative monoid with identity element 0, the so-called free monoid with one generator. This monoid satisfies the cancellation property and can be embedded in a group. The smallest group containing the natural numbers is the integers. If we define 1 := S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b.
Analogously, given that addition has been defined, a multiplication × can be defined via a × 0 = 0 and a × S(b) = (a × b) + a. This turns (N*, ×) into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers. Addition and multiplication are compatible, which is expressed in the distributive law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers in which multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that N is not closed under subtraction, means that N is not a ring; instead it is a semiring (also known as a rig).
If we interpret the natural numbers as "excluding 0" and "starting at 1", the definitions of + and × are as above, except that we start with a + 1 = S(a) and a × 1 = a. For the remainder of the article, we write ab to indicate the product a × b, and we also assume the standard order of operations.
Furthermore, one defines a total order on the natural numbers by writing a ≤ b if and only if there exists another natural number c with a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc.
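The recursive definitions of + and × can be transcribed almost verbatim. In this illustrative sketch (ours, not the article's), a natural number is represented by the ordinary Python integer counting how many times S has been applied to 0, so "peeling off" one application of S corresponds to subtracting 1:

```python
def S(n):
    """Successor: S(n) is the number after n."""
    return n + 1

def add(a, b):
    """a + 0 = a;  a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))

def mul(a, b):
    """a × 0 = 0;  a × S(b) = (a × b) + a."""
    return 0 if b == 0 else add(mul(a, b - 1), a)

print(add(3, 4), mul(3, 4))  # 7 12
```

Notice that `mul` never multiplies: it bottoms out entirely in successor steps, exactly as in the monoid construction described above.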
An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers this is expressed as "ω".
While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder is available as a substitute: for any two natural numbers a and b with b ≠ 0 we can find natural numbers q and r such that a = bq + r and r < b. The number q is called the quotient and r is called the remainder of division of a by b. The numbers q and r are uniquely determined by a and b. This fact, known as the division algorithm, is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.
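Division with remainder can be carried out by repeated subtraction, which stays entirely within the natural numbers. The helper below is an illustrative sketch, not a quoted algorithm:

```python
def divide(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b, for natural a, b with b != 0."""
    if b == 0:
        raise ValueError("b must be nonzero")
    q = 0
    while a >= b:   # subtract b as many times as possible...
        a -= b
        q += 1
    return q, a     # ...the count is the quotient, the leftover the remainder

print(divide(23, 5))  # (4, 3): 23 = 5*4 + 3
```

The loop terminates because the naturals are well-ordered: the shrinking value of a cannot decrease forever.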



Two generalizations of natural numbers arise from the two uses:
• A natural number can be used to express the size of a finite set; more generally a cardinal number is a measure for the size of a set also suitable for infinite sets; this refers to a concept of "size" such that if there is a bijection between two sets they have the same size. The set of natural numbers itself, and any other countably infinite set, has cardinality aleph-null (ℵ0).
• Linguistic ordinal numbers "first", "second", "third" can be assigned to the elements of a totally ordered finite set, and also to the elements of well-ordered countably infinite sets like the set of natural numbers itself. This can be generalized to ordinal numbers, which describe the position of an element in a well-ordered set in general. An ordinal number is also used to describe the "size" of a well-ordered set, in a sense different from cardinality: if there is an order isomorphism between two well-ordered sets they have the same ordinal number. The first ordinal number that is not a natural number is expressed as ω; this is also the ordinal number of the set of natural numbers itself. Many well-ordered sets with cardinal number ℵ0 have an ordinal number greater than ω (the latter is the lowest possible). The least ordinal of cardinality ℵ0 (i.e., the initial ordinal) is ω.

For finite well-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence. Hypernatural numbers are part of a non-standard model of arithmetic due to Skolem. Other generalizations are discussed in the article on numbers.

Formal definitions
Historically, the precise mathematical definition of the natural numbers developed with some difficulty. The Peano axioms state conditions that any successful definition must satisfy. Certain constructions show that, given set theory, models of the Peano postulates must exist.

Peano axioms
The Peano axioms give a formal theory of the natural numbers. The axioms are:
• There is a natural number 0.
• Every natural number a has a natural number successor, denoted by S(a). Intuitively, S(a) is a + 1.
• There is no natural number whose successor is 0.
• S is injective, i.e. distinct natural numbers have distinct successors: if a ≠ b, then S(a) ≠ S(b).
• If a property is possessed by 0 and also by the successor of every natural number which possesses it, then it is possessed by all natural numbers. (This postulate ensures that the proof technique of mathematical induction is valid.)

The "0" in the above definition need not correspond to what we normally consider to be the number zero: "0" simply means some object that, when combined with an appropriate successor function, satisfies the Peano axioms. All systems that satisfy these axioms are isomorphic. The name "0" is used here for the first element (the term "zeroth element" has been suggested to leave "first element" to "1", "second element" to "2", etc.), which is the only element that is not a successor. For example, the natural numbers starting with one also satisfy the axioms, if the symbol 0 is interpreted as the natural number 1, the symbol S(0) as the number 2, etc. In fact, in Peano's original formulation, the first natural number was 1.



Constructions based on set theory
A standard construction
A standard construction in set theory, a special case of the von Neumann ordinal construction, is to define the natural numbers as follows: We set 0 := { }, the empty set, and define S(a) = a ∪ {a} for every set a. S(a) is the successor of a, and S is called the successor function. By the axiom of infinity, the set of all natural numbers exists and is the intersection of all sets containing 0 which are closed under this successor function. This then satisfies the Peano axioms. Each natural number is then equal to the set of all natural numbers less than it, so that
• 0 = { }
• 1 = {0} = {{ }}
• 2 = {0, 1} = {0, {0}} = {{ }, {{ }}}
• 3 = {0, 1, 2} = {0, {0}, {0, {0}}} = {{ }, {{ }}, {{ }, {{ }}}}
• n = {0, 1, 2, ..., n−2, n−1} = {0, 1, 2, ..., n−2} ∪ {n−1} = (n−1) ∪ {n−1} = S(n−1)
and so on. When a natural number is used as a set, this is typically what is meant. Under this definition, there are exactly n elements (in the naïve sense) in the set n, and n ≤ m (in the naïve sense) if and only if n is a subset of m. Also, with this definition, different possible interpretations of notations like Rn (n-tuples versus mappings of n into R) coincide.
Even if the axiom of infinity fails and the set of all natural numbers does not exist, it is possible to define what it means to be one of these sets. A set n is a natural number means that it is either 0 (empty) or a successor, and each of its elements is either 0 or the successor of another of its elements.
Other constructions
Although the standard construction is useful, it is not the only possible construction. For example, one could define 0 = { } and S(a) = {a}, producing
• 0 = { }
• 1 = {0} = {{ }}
• 2 = {1} = {{{ }}}
and so on. Each natural number is then equal to the set containing only the natural number preceding it. Or one could define 0 = {{ }} and S(a) = a ∪ {a}, producing
• 0 = {{ }}
• 1 = {{ }, 0}
• 2 = {{ }, 0, 1}
and so on.
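The von Neumann construction can be imitated directly with Python frozensets (an illustrative sketch; the names are ours): 0 is the empty set and S(a) = a ∪ {a}. The claims that the set n has exactly n elements and that n ≤ m exactly when n ⊆ m can then be checked:

```python
ZERO = frozenset()                     # 0 = { }

def successor(a):
    """S(a) = a ∪ {a}."""
    return a | {a}

def von_neumann(n):
    """Build the von Neumann natural n by applying S to 0 n times."""
    a = ZERO
    for _ in range(n):
        a = successor(a)
    return a

three = von_neumann(3)
assert len(three) == 3                 # the set n has exactly n elements
assert von_neumann(2) < three          # n ≤ m corresponds to n ⊆ m
```

Frozensets are used because Python sets must contain hashable (immutable) elements, mirroring the fact that each constructed number is itself a set of earlier numbers.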
The oldest and most "classical" set-theoretic definition of the natural numbers is the definition commonly ascribed to Frege and Russell, under which each concrete natural number n is defined as the set of all sets with n elements.[5] [6] This may appear circular, but can be made rigorous with care. Define 0 as {{ }} (clearly the set of all sets with 0 elements) and define S(A) (for any set A) as {x ∪ {y} | x ∈ A ∧ y ∉ x} (see set-builder notation). Then 0 will be the set of all sets with 0 elements, 1 = S(0) will be the set of all sets with 1 element, 2 = S(1) will be the set of all sets with 2 elements, and so forth. The set of all natural numbers can be defined as the intersection of all sets containing 0 as an element and closed under S (that is, if the set contains an element n, it also contains S(n)).
One could also define "finite" independently of the notion of "natural number", and then define natural numbers as equivalence classes of finite sets under the equivalence relation of equipollence. This definition does not work in the usual systems of axiomatic set theory, because the collections involved are too large (it will not work in any set theory with the axiom of separation); but it does work in New Foundations (and in related systems known to be relatively consistent) and in some systems of type theory.


[1] "... a tablet found at Kish ... thought to date from around 700 BC, uses three hooks to denote an empty place in the positional notation. Other tablets dated from around the same time use a single hook for an empty place." (http://www-history.mcs.st-and.ac.uk/history/HistTopics/Zero.html)
[2] Cyclus Decemnovennalis Dionysii – Nineteen year cycle of Dionysius (http://hbar.phys.msu.ru/gorm/chrono/paschata.htm)
[3] This convention is used, for example, in Euclid's Elements; see Book VII, definitions 1 and 2 (http://aleph0.clarku.edu/~djoyce/java/elements/bookVII/defVII1.html).
[4] This is common in texts about real analysis. See, for example, Carothers (2000) p. 3 or Thomson, Bruckner and Bruckner (2000), p. 2.
[5] Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl (1884). Breslau. (http://www.ac-nancy-metz.fr/enseign/philo/textesph/Frege.pdf)
[6] Whitehead, Alfred North, and Bertrand Russell. Principia Mathematica, 3 vols, Cambridge University Press, 1910, 1912, and 1913. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged as Principia Mathematica to *56, Cambridge University Press, 1962.

• Edmund Landau, Foundations of Analysis, Chelsea Pub Co. ISBN 0-8218-2693-X.
• Richard Dedekind, Essays on the Theory of Numbers, Dover, 1963, ISBN 0486210103 / Kessinger Publishing, LLC, 2007, ISBN 054808985X.
• N. L. Carothers, Real Analysis, Cambridge University Press, 2000. ISBN 0521497566.
• Brian S. Thomson, Judith B. Bruckner, Andrew M. Bruckner, Elementary Real Analysis, 2000. ISBN 0130190756.
• Weisstein, Eric W., "Natural Number" from MathWorld.

External links
• Axioms and Construction of Natural Numbers
• Essays on the Theory of Numbers by Richard Dedekind, at Project Gutenberg



Rational number
In mathematics, a rational number is any number that can be expressed as the quotient or fraction a/b of two integers, with the denominator b not equal to zero. Since b may be equal to 1, every integer is a rational number. The set of all rational numbers is usually denoted by a boldface Q (or blackboard bold ℚ), which stands for quotient.
The decimal expansion of a rational number always either terminates after finitely many digits or begins to repeat the same finite sequence of digits over and over. Moreover, any repeating or terminating decimal represents a rational number. These statements hold true not just for base 10, but also for binary, hexadecimal, or any other integer base.
A real number that is not rational is called irrational. Irrational numbers include √2, π, and e. The decimal expansion of an irrational number continues forever without repeating. Since the set of rational numbers is countable, and the set of real numbers is uncountable, almost all real numbers are irrational.
The rational numbers can be formally defined as the equivalence classes of the quotient set (Z × (Z ∖ {0})) / ~, where the cartesian product Z × (Z ∖ {0}) is the set of all ordered pairs (m,n) where m and n are integers, n is not zero (n ≠ 0), and "~" is the equivalence relation defined by (m1,n1) ~ (m2,n2) if, and only if, m1n2 − m2n1 = 0.
In abstract algebra, the rational numbers together with certain operations of addition and multiplication form a field. This is the archetypical field of characteristic zero, and is the field of fractions for the ring of integers. Finite extensions of Q are called algebraic number fields, and the algebraic closure of Q is the field of algebraic numbers. In mathematical analysis, the rational numbers form a dense subset of the real numbers. The real numbers can be constructed from the rational numbers by completion, using either Cauchy sequences, Dedekind cuts, or infinite decimals.
Zero divided by any nonzero integer equals zero; therefore zero is a rational number (though division by zero itself is undefined).

The term rational in reference to the set Q refers to the fact that a rational number represents a ratio of two integers. In mathematics, the adjective rational often means that the underlying field considered is the field Q of rational numbers. Rational polynomial usually, and most correctly, means a polynomial with rational coefficients, also called a “polynomial over the rationals”. However, rational function does not mean the underlying field is the rational numbers, and a rational algebraic curve is not an algebraic curve with rational coefficients.

Embedding of integers
Any integer n can be expressed as the rational number n/1.

Equality
a/b = c/d if and only if ad = bc.

Ordering
Where both denominators are positive:
a/b < c/d if and only if ad < bc.

If either denominator is negative, the fractions must first be converted into equivalent forms with positive denominators, through the equations
a/(−b) = (−a)/b and (−a)/(−b) = a/b.

Two fractions are added as follows:
a/b + c/d = (ad + bc)/bd.

The rule for multiplication is:
(a/b) · (c/d) = ac/bd.

The rule for division is, where c ≠ 0:
(a/b) ÷ (c/d) = ad/bc.

Additive and multiplicative inverses exist in the rational numbers:
−(a/b) = (−a)/b and (a/b)^(−1) = b/a (if a ≠ 0).

Exponentiation to integer power
If n is a non-negative integer, then
(a/b)^n = a^n/b^n,
and (if a ≠ 0)
(a/b)^(−n) = b^n/a^n.
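These rules are exactly what Python's fractions.Fraction implements, so they can be checked mechanically (the values here are arbitrary examples):

```python
from fractions import Fraction

a = Fraction(1, 2)
b = Fraction(1, 3)

# Addition: a/b + c/d = (ad + bc)/bd
assert a + b == Fraction(1*3 + 2*1, 2*3)   # 5/6
# Multiplication: (a/b)(c/d) = ac/bd
assert a * b == Fraction(1*1, 2*3)         # 1/6
# Additive and multiplicative inverses
assert a + (-a) == 0 and a * (1 / a) == 1
# Integer powers: (a/b)^n = a^n/b^n and (a/b)^(-n) = b^n/a^n
assert a**3 == Fraction(1, 8) and a**(-3) == Fraction(8, 1)
```

Fraction automatically keeps results in lowest terms, which corresponds to the canonical choice of representative discussed later in this article.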


Rational number


Egyptian fractions
Any positive rational number can be expressed as a sum of distinct reciprocals of positive integers, such as
5/7 = 1/2 + 1/6 + 1/21.

For any positive rational number, there are infinitely many different such representations, called Egyptian fractions, as they were used by the ancient Egyptians. The Egyptians also had a different notation for dyadic fractions in the Akhmim Wooden Tablet and several Rhind Mathematical Papyrus problems.
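One standard way to produce such a representation (not described in the text above) is Fibonacci's greedy algorithm: repeatedly subtract the largest unit fraction that fits. A minimal sketch, with the helper name egyptian my own:

```python
from fractions import Fraction
from math import ceil

def egyptian(r):
    """Greedily write a positive rational r < 1 as a sum of distinct unit
    fractions, returning the list of denominators."""
    terms = []
    while r > 0:
        d = ceil(1 / r)          # smallest denominator d with 1/d <= r
        terms.append(d)
        r -= Fraction(1, d)
    return terms

# Greedy gives 5/7 = 1/2 + 1/5 + 1/70, a different decomposition from the
# 1/2 + 1/6 + 1/21 above -- illustrating that representations are not unique.
print(egyptian(Fraction(5, 7)))
```

The greedy denominators grow strictly, so the unit fractions produced are automatically distinct.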

Formal construction
Mathematically we may construct the rational numbers as equivalence classes of ordered pairs of integers (m,n), with n ≠ 0. This space of equivalence classes is the quotient space (Z × (Z ∖ {0})) / ∼, where (m1,n1) ∼ (m2,n2) if, and only if, m1n2 − m2n1 = 0. We can define addition and multiplication of these pairs with the following rules:
(m1, n1) + (m2, n2) ≡ (m1n2 + n1m2, n1n2)
(m1, n1) × (m2, n2) ≡ (m1m2, n1n2)

[Figure: a diagram showing a representation of the equivalence classes of pairs of integers]

and, if m2 ≠ 0, division by
(m1, n1) ÷ (m2, n2) ≡ (m1n2, n1m2).

The equivalence relation (m1,n1) ~ (m2,n2) if, and only if, m1n2 − m2n1 = 0 is a congruence relation, i.e. it is compatible with the addition and multiplication defined above, and we may define Q to be the quotient set (Z × (Z ∖ {0})) / ∼, i.e. we identify two pairs (m1,n1) and (m2,n2) if they are equivalent in the above sense. (This construction can be carried out in any integral domain: see field of fractions.) We denote by [(m1,n1)] the equivalence class containing (m1,n1). If (m1,n1) ~ (m2,n2) then, by definition, (m1,n1) belongs to [(m2,n2)] and (m2,n2) belongs to [(m1,n1)]; in this case we can write [(m1,n1)] = [(m2,n2)]. Given any equivalence class [(m,n)] there are a countably infinite number of representations, since
(m, n) ~ (km, kn) for every non-zero integer k.

The canonical choice for [(m,n)] is chosen so that gcd(m,n) = 1, i.e. m and n share no common factor, i.e. m and n are coprime. For example, we would write [(1,2)] instead of [(2,4)] or [(−12,−24)], even though [(1,2)] = [(2,4)] = [(−12,−24)].
We can also define a total order on Q. Let ∧ be the and-symbol and ∨ be the or-symbol. We say that [(m1,n1)] ≤ [(m2,n2)] if:
(n1n2 > 0 ∧ m1n2 ≤ n1m2) ∨ (n1n2 < 0 ∧ m1n2 ≥ n1m2).
The integers may be considered to be rational numbers by the embedding that maps m to [(m, 1)].
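The reduction of a pair to its canonical coprime representative can be sketched as follows. The helper name canonical is mine, and the choice of a positive denominator (which the text does not mandate) is an extra assumption that makes the representative unique:

```python
from math import gcd

def canonical(m, n):
    """Reduce the pair (m, n), n != 0, to the representative with gcd(m, n) = 1
    and positive denominator (the positive-denominator choice is an assumption)."""
    g = gcd(m, n)        # math.gcd returns the non-negative gcd
    if n < 0:
        g = -g           # flip signs so the denominator comes out positive
    return (m // g, n // g)

assert canonical(2, 4) == (1, 2)
assert canonical(-12, -24) == (1, 2)   # [(−12,−24)] = [(1,2)]
```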


Properties
The set Q, together with the addition and multiplication operations shown above, forms a field, the field of fractions of the integers Z. The rationals are the smallest field with characteristic zero: every other field of characteristic zero contains a copy of Q. The rational numbers are therefore the prime field for characteristic zero. The algebraic closure of Q, i.e. the field of roots of rational polynomials, is the algebraic numbers.
The set of all rational numbers is countable. Since the set of all real numbers is uncountable, we say that almost all real numbers are irrational, in the sense of Lebesgue measure, i.e. the set of rational numbers is a null set.
The rationals are a densely ordered set: between any two rationals, there sits another one, and, therefore, infinitely many other ones. For example, for any two fractions such that a/b < c/d (where b and d are positive), we have
a/b < (a + c)/(b + d) < c/d.

[Figure: a diagram illustrating the countability of the rationals]
Any totally ordered set which is countable, dense (in the above sense), and has no least or greatest element is order isomorphic to the rational numbers.
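Density can be illustrated with the mediant (a + c)/(b + d) of two fractions a/b and c/d: for positive denominators it always lies strictly between them. An illustrative sketch (the helper name mediant is mine):

```python
from fractions import Fraction

def mediant(x, y):
    """Mediant (a+c)/(b+d) of two fractions x = a/b and y = c/d in lowest terms."""
    return Fraction(x.numerator + y.numerator,
                    x.denominator + y.denominator)

x, y = Fraction(1, 3), Fraction(1, 2)
m = mediant(x, y)          # 2/5
assert x < m < y           # a rational strictly between any two rationals
```

Iterating the construction produces ever more rationals between x and y, matching the claim that there are infinitely many.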

Real numbers and topological properties
The rationals are a dense subset of the real numbers: every real number has rational numbers arbitrarily close to it. A related property is that rational numbers are the only numbers with finite expansions as regular continued fractions. By virtue of their order, the rationals carry an order topology. The rational numbers, as a subspace of the real numbers, also carry a subspace topology. The rational numbers form a metric space by using the absolute difference metric d(x,y) = |x − y|, and this yields a third topology on Q. All three topologies coincide and turn the rationals into a topological field. The rational numbers are an important example of a space which is not locally compact. The rationals are characterized topologically as the unique countable metrizable space without isolated points. The space is also totally disconnected. The rational numbers do not form a complete metric space; the real numbers are the completion of Q under the metric d(x,y) = |x − y|, above.



p-adic numbers
In addition to the absolute value metric mentioned above, there are other metrics which turn Q into a topological field: Let p be a prime number and for any non-zero integer a, let |a|p = p^(−n), where p^n is the highest power of p dividing a. In addition set |0|p = 0. For any rational number a/b, we set |a/b|p = |a|p / |b|p. Then dp(x,y) = |x − y|p defines a metric on Q. The metric space (Q, dp) is not complete, and its completion is the p-adic number field Qp. Ostrowski's theorem states that any non-trivial absolute value on the rational numbers Q is equivalent to either the usual real absolute value or a p-adic absolute value.
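The p-adic absolute value defined above can be computed directly (the function name p_adic_abs is illustrative):

```python
from fractions import Fraction

def p_adic_abs(x, p):
    """|x|_p = p**(-n), where p**n is the highest power of the prime p
    dividing x; by convention |0|_p = 0. Sketch for a Fraction x."""
    if x == 0:
        return Fraction(0)
    n = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:      # powers of p in the numerator raise n
        num //= p
        n += 1
    while den % p == 0:      # powers of p in the denominator lower n
        den //= p
        n -= 1
    return Fraction(1, p**n) if n >= 0 else Fraction(p**(-n))

assert p_adic_abs(Fraction(12), 2) == Fraction(1, 4)   # 12 = 2^2 * 3
assert p_adic_abs(Fraction(1, 8), 2) == 8              # 1/8 = 2^-3
```

Note the inversion of intuition: numbers highly divisible by p are p-adically small.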

External links
• “Rational Number” From MathWorld — A Wolfram Web Resource [1]

[1] http://mathworld.wolfram.com/RationalNumber.html

Irrational number
In mathematics, an irrational number is any real number that cannot be expressed as a ratio a/b, where a and b are integers, with b non-zero, and is therefore not a rational number. Informally, this means that an irrational number cannot be represented as a simple fraction. Irrational numbers are those real numbers that cannot be represented as terminating or repeating decimals. As a consequence of Cantor's proof that the real numbers are uncountable (and the rationals countable) it follows that almost all real numbers are irrational.[1]

Perhaps the best-known irrational numbers are:
• π
• e
• Φ (the golden ratio)
• √2.[2] [3] [4]

When the ratio of lengths of two line segments is irrational, the line segments are also described as being incommensurable, meaning they share no measure in common. A measure of a line segment I in this sense is a line segment J that "measures" I in the sense that some whole number of copies of J laid end-to-end occupy the same length as I.



It has been suggested that the concept of irrationality was implicitly accepted by Indian mathematicians since the 7th century BC, when Manava (c. 750–690 BC) believed that the square roots of numbers such as 2 and 61 could not be exactly determined,[5] but such claims are not well substantiated and unlikely to be true.[6]

Ancient Greece
The first proof of the existence of irrational numbers is usually attributed to a Pythagorean (possibly Hippasus of Metapontum),[7] who probably discovered them while identifying sides of the pentagram.[8] The number √2 is irrational. The then-current Pythagorean method would have claimed that there must be some sufficiently small, indivisible unit that could fit evenly into one of these lengths as well as the other. However, Hippasus, in the 5th century BC, was able to deduce that there was in fact no common unit of measure, and that the assertion of such an existence was in fact a contradiction. He did this by demonstrating that if the hypotenuse of an isosceles right triangle was indeed commensurable with an arm, then that unit of measure must be both odd and even, which is impossible. His reasoning is as follows:
• The ratio of the hypotenuse to an arm of an isosceles right triangle is c:b expressed in the smallest units possible.
• By the Pythagorean theorem: c^2 = a^2 + b^2 = 2b^2. (Since the triangle is isosceles, a = b.)
• Since c^2 is even, c must be even.
• Since c:b is in its lowest terms, b must be odd (if it were also even, then both c and b would be divisible by 2, therefore not in lowest terms).
• Since c is even, let c = 2y.
• Then c^2 = 4y^2 = 2b^2.
• So b^2 = 2y^2, hence b^2 must be even, and therefore b is even.
• However, we have already asserted that b must be odd. Since b cannot be both odd and even, here is the contradiction.[9]
Greek mathematicians termed this ratio of incommensurable magnitudes alogos, or inexpressible. Hippasus, however, was not lauded for his efforts: according to one legend, he made his discovery while out at sea, and was subsequently thrown overboard by his fellow Pythagoreans "…for having produced an element in the universe which denied the…doctrine that all phenomena in the universe can be reduced to whole numbers and their ratios."[10] Another legend states that Hippasus was merely exiled for this revelation.
Whatever the consequence to Hippasus himself, his discovery posed a very serious problem to Pythagorean mathematics, since it shattered the assumption that number and geometry were inseparable, a foundation of their theory.
The discovery of incommensurable ratios was indicative of another problem facing the Greeks: the relation of the discrete to the continuous. This was brought into light by Zeno of Elea, who questioned the conception that quantities are discrete and composed of a finite number of units of a given size. Past Greek conceptions dictated that they necessarily must be, for "whole numbers represent discrete objects, and a commensurable ratio represents a relation between two collections of discrete objects."[11] However Zeno found that in fact "[quantities] in general are not discrete collections of units; this is why ratios of incommensurable [quantities] appear….[Q]uantities are, in other words, continuous."[11] What this means is that, contrary to the popular conception of the time, there cannot be an indivisible, smallest unit of measure for any quantity; these divisions of quantity must necessarily be infinite. For example, consider a line segment: this segment can be split in half, that half split in half, the half of the half in half, and so on. This process can continue infinitely, for there is always another half to be split. The more times the segment is halved, the closer the unit of measure comes to zero, but it never reaches exactly zero. This is exactly what Zeno sought to prove. He sought to prove this by formulating four paradoxes, which demonstrated the contradictions inherent in the mathematical thought of the time. While Zeno's paradoxes accurately demonstrated the deficiencies of contemporary mathematical conceptions, they were not regarded as proof of the alternative. In the minds of the Greeks, disproving the validity of one view did not necessarily prove the validity of another, and therefore further investigation had to occur.
The next step was taken by Eudoxus of Cnidus, who formalized a new theory of proportion that took into account commensurable as well as incommensurable quantities. Central to his idea was the distinction between magnitude and number. A magnitude "was not a number but stood for entities such as line segments, angles, areas, volumes, and time which could vary, as we would say, continuously. Magnitudes were opposed to numbers, which jumped from one value to another, as from 4 to 5."[12] Numbers are composed of some smallest, indivisible unit, whereas magnitudes are infinitely reducible. Because no quantitative values were assigned to magnitudes, Eudoxus was then able to account for both commensurable and incommensurable ratios by defining a ratio in terms of its magnitude, and proportion as an equality between two ratios. By taking quantitative values (numbers) out of the equation, he avoided the trap of having to express an irrational number as a number. "Eudoxus' theory enabled the Greek mathematicians to make tremendous progress in geometry by supplying the necessary logical foundation for incommensurable ratios."[13] Book 10 of Euclid's Elements is dedicated to the classification of irrational magnitudes.
As a result of the distinction between number and magnitude, geometry became the only method that could take into account incommensurable ratios. Because previous numerical foundations were still incompatible with the concept of incommensurability, Greek focus shifted away from those numerical conceptions such as algebra and focused almost exclusively on geometry. In fact, in many cases algebraic conceptions were reformulated into geometrical terms. This may account for why we still conceive of x2 or x3 as x squared and x cubed instead of x second power and x third power. Also crucial to Zeno’s work with incommensurable magnitudes was the fundamental focus on deductive reasoning which resulted from the foundational shattering of earlier Greek mathematics. The realization that some basic conception within the existing theory was at odds with reality necessitated a complete and thorough investigation of the axioms and assumptions that comprised that theory. Out of this necessity Eudoxus developed his method of exhaustion, a kind of reductio ad absurdum which “established the deductive organization on the basis of explicit axioms…” as well as “…reinforced the earlier decision to rely on deductive reasoning for proof.”[14] This method of exhaustion is said to be the first step in the creation of calculus. Theodorus of Cyrene proved the irrationality of the surds of whole numbers up to 17, but stopped there probably because the algebra he used couldn't be applied to the square root of 17.[15] It wasn't until Eudoxus developed a theory of proportion that took into account irrational as well as rational ratios that a strong mathematical foundation of irrational numbers was created.[16]


Middle Ages
In the Middle Ages, the development of algebra by Muslim mathematicians allowed irrational numbers to be treated as "algebraic objects".[17] Muslim mathematicians also merged the concepts of "number" and "magnitude" into a more general idea of real numbers, criticized Euclid's idea of ratios, developed the theory of composite ratios, and extended the concept of number to ratios of continuous magnitude.[18]
In his commentary on Book 10 of the Elements, the Persian mathematician Al-Mahani (d. 874/884) examined and classified quadratic irrationals and cubic irrationals. He provided definitions for rational and irrational magnitudes, which he treated as irrational numbers. He dealt with them freely but explains them in geometric terms as follows:[19]
"It will be a rational (magnitude) when we, for instance, say 10, 12, 3%, 6%, etc., because its value is pronounced and expressed quantitatively. What is not rational is irrational and it is impossible to pronounce and represent its value quantitatively. For example: the roots of numbers such as 10, 15, 20 which are not squares, the sides of numbers which are not cubes etc."
In contrast to Euclid's concept of magnitudes as lines, Al-Mahani considered integers and fractions as rational magnitudes, and square roots and cube roots as irrational magnitudes. He also introduced an arithmetical approach to the concept of irrationality, as he attributes the following to irrational magnitudes:[19]
"their sums or differences, or results of their addition to a rational magnitude, or results of subtracting a magnitude of this kind from an irrational one, or of a rational magnitude from it."
The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers as solutions to quadratic equations or as coefficients in an equation, often in the form of square roots, cube roots and fourth roots.[20] In the 10th century, the Iraqi mathematician Al-Hashimi provided general proofs (rather than geometric demonstrations) for irrational numbers, as he considered multiplication, division, and other arithmetical functions.[21] Abū Ja'far al-Khāzin (900–971) provides a definition of rational and irrational magnitudes, stating that if a definite quantity is:[22]
"contained in a certain given magnitude once or many times, then this (given) magnitude corresponds to a rational number. . . . Each time when this (latter) magnitude comprises a half, or a third, or a quarter of the given magnitude (of the unit), or, compared with (the unit), comprises three, five, or three fifths, it is a rational magnitude. And, in general, each magnitude that corresponds to this magnitude (i.e. to the unit), as one number to another, is rational. If, however, a magnitude cannot be represented as a multiple, a part (l/n), or parts (m/n) of a given magnitude, it is irrational, i.e. it cannot be expressed other than by means of roots."
Many of these concepts were eventually accepted by European mathematicians sometime after the Latin translations of the 12th century.
Al-Hassār, a Moroccan mathematician from Fez specializing in Islamic inheritance jurisprudence during the 12th century, first mentions the use of a fractional bar, where numerators and denominators are separated by a horizontal bar. In his discussion he writes, "..., for example, if you are told to write three-fifths and a third of a fifth, write thus, [fraction notation]."[23] This same fractional notation appears soon after in the work of Leonardo Fibonacci in the 13th century.[24]
During the 14th to 16th centuries, Madhava of Sangamagrama and the Kerala school of astronomy and mathematics discovered the infinite series for several irrational numbers such as π and certain irrational values of trigonometric functions. Jyesthadeva provided proofs for these infinite series in the Yuktibhāṣā.[25]

Modern period
The 17th century saw imaginary numbers become a powerful tool in the hands of Abraham de Moivre, and especially of Leonhard Euler. The completion of the theory of complex numbers in the nineteenth century entailed the differentiation of irrationals into algebraic and transcendental numbers, the proof of the existence of transcendental numbers, and the resurgence of the scientific study of the theory of irrationals, largely ignored since Euclid. The year 1872 saw the publication of the theories of Karl Weierstrass (by his pupil Ernst Kossak), Eduard Heine (Crelle's Journal, 74), Georg Cantor (Annalen, 5), and Richard Dedekind. Méray had taken in 1869 the same point of departure as Heine, but the theory is generally referred to the year 1872. Weierstrass's method has been completely set forth by Salvatore Pincherle in 1880,[26] and Dedekind's has received additional prominence through the author's later work (1888) and the endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series, while Dedekind founds his on the idea of a cut (Schnitt) in the system of real numbers, separating all rational numbers into two groups having certain characteristic properties. The subject has received later contributions at the hands of Weierstrass, Leopold Kronecker (Crelle, 101), and Charles Méray. Continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler, and at the opening of the nineteenth century were brought into prominence through the writings of Joseph Louis Lagrange. Dirichlet also added to the general theory, as have numerous contributors to the applications of the subject.

Johann Heinrich Lambert proved (1761) that π cannot be rational, and that e^n is irrational if n is rational (unless n = 0).[27] While Lambert's proof is often said to be incomplete, modern assessments support it as satisfactory, and in fact for its time it is unusually rigorous. Adrien-Marie Legendre (1794), after introducing the Bessel–Clifford function, provided a proof to show that π^2 is irrational, whence it follows immediately that π is irrational also. The existence of transcendental numbers was first established by Liouville (1844, 1851). Later, Georg Cantor (1873) proved their existence by a different method, which showed that every interval in the reals contains transcendental numbers. Charles Hermite (1873) first proved e transcendental, and Ferdinand von Lindemann (1882), starting from Hermite's conclusions, showed the same for π. Lindemann's proof was much simplified by Weierstrass (1885), still further by David Hilbert (1893), and was finally made elementary by Adolf Hurwitz and Paul Gordan.


Example proofs
Square roots
The square root of 2 was the first number to be proved irrational, and that article contains a number of proofs. The golden ratio is the next most famous quadratic irrational, and there is a simple proof of its irrationality in its article. The square roots of all natural numbers which are not perfect squares are irrational, and a proof may be found in quadratic irrationals.
The irrationality of the square root of 2 may be proved by assuming it is rational and inferring a contradiction, called an argument by reductio ad absurdum. The following argument appeals twice to the fact that the square of an odd integer is always odd. If √2 is rational it has the form m/n for integers m, n not both even. Then m^2 = 2n^2, hence m is even, say m = 2p. Thus 4p^2 = 2n^2, so 2p^2 = n^2, hence n is also even, a contradiction.

General roots
The proof above for the square root of two can be generalized using the fundamental theorem of arithmetic which was proved by Gauss in 1798. This asserts that every integer has a unique factorization into primes. Using it we can show that if a rational number is not an integer then no integral power of it can be an integer, as in lowest terms there must be a prime in the denominator which does not divide into the numerator whatever power each is raised to. Therefore if an integer is not an exact kth power of another integer then its kth root is irrational.

Logarithms
Perhaps the numbers most easily proved to be irrational are certain logarithms. Here is a proof by reductio ad absurdum that log2 3 is irrational. Notice that log2 3 ≈ 1.58 > 0. Assume log2 3 is rational. For some positive integers m and n, we have
log2 3 = m/n.

It follows that
2^(m/n) = 3, and hence 2^m = 3^n.

However, the number 2 raised to any positive integer power must be even (because it will be divisible by 2) and the number 3 raised to any positive integer power must be odd (since none of its prime factors will be 2). Clearly, an integer can not be both odd and even at the same time: we have a contradiction. The only assumption we made was that log2 3 is rational (and so expressible as a quotient of integers m/n with n ≠ 0). The contradiction means that this assumption must be false, i.e. log2 3 is irrational, and can never be expressed as a quotient of integers m/n with n ≠ 0.
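The parity argument can be spot-checked numerically; a small illustrative sketch:

```python
# Every positive power of 2 is even and every positive power of 3 is odd,
# so 2**m == 3**n can never hold for positive integers m, n.
assert all(2**m % 2 == 0 for m in range(1, 50))
assert all(3**n % 2 == 1 for n in range(1, 50))
assert all(2**m != 3**n for m in range(1, 50) for n in range(1, 50))
```

Of course the brute-force check covers only finitely many exponents; it is the parity argument itself that settles all cases.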

Cases such as log10 2 can be treated similarly.


Transcendental and algebraic irrationals
Almost all irrational numbers are transcendental and all transcendental numbers are irrational: the article on transcendental numbers lists several examples. e^r and π^r are irrational if r ≠ 0 is rational; e^π is also irrational.
Another way to construct irrational numbers is as irrational algebraic numbers, i.e. as zeros of polynomials with integer coefficients: start with a polynomial equation
p(x) = a_n x^n + a_(n−1) x^(n−1) + ... + a_1 x + a_0 = 0,
where the coefficients a_i are integers. Suppose you know that there exists some real number x with p(x) = 0 (for instance if n is odd and a_n is non-zero, then because of the intermediate value theorem). The only possible rational roots of this polynomial equation are of the form r/s where r is a divisor of a_0 and s is a divisor of a_n; there are only finitely many such candidates, all of which you can check by hand. If none of them is a root of p, then x must be irrational. For example, this technique can be used to show that x = (2^(1/2) + 1)^(1/3) is irrational: we have (x^3 − 1)^2 = 2 and hence x^6 − 2x^3 − 1 = 0, and this latter polynomial does not have any rational roots (the only candidates to check are ±1).
Because the algebraic numbers form a field, many irrational numbers can be constructed by combining transcendental and algebraic numbers. For example 3π + 2, π + √2 and e√3 are irrational (and even transcendental).
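The rational root check for the example polynomial x^6 − 2x^3 − 1 can be done mechanically (the function name p is my own):

```python
from fractions import Fraction

def p(x):
    """The polynomial from the example: p(x) = x^6 - 2x^3 - 1."""
    return x**6 - 2*x**3 - 1

# Any rational root r/s must have r dividing a0 = -1 and s dividing a6 = 1,
# so the only candidates are +1 and -1.
candidates = [Fraction(1), Fraction(-1)]
assert all(p(c) != 0 for c in candidates)   # no rational root exists
# Hence the real root x = (2**(1/2) + 1)**(1/3) of p must be irrational.
```

Exact Fraction arithmetic matters here: evaluating candidates in floating point could in principle misclassify a near-zero value.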

Decimal expansions
The decimal expansion of an irrational number never repeats or terminates, unlike that of a rational number. To show this, suppose we divide integers n by m (where m is nonzero). When long division is applied to the division of n by m, only m remainders are possible. If 0 appears as a remainder, the decimal expansion terminates. If 0 never occurs, then the algorithm can run at most m − 1 steps without using any remainder more than once. After that, a remainder must recur, and then the decimal expansion repeats.
Conversely, suppose we are faced with a recurring decimal; we can prove that it is a fraction of two integers. For example, consider:
A = 0.7162162162…
Here the length of the repetend is 3. We multiply by 10^3:
1000A = 716.2162162…
Note that since we multiplied by 10 to the power of the length of the repeating part, we shifted the digits to the left of the decimal point by exactly that many positions. Therefore, the tail end of 1000A matches the tail end of A exactly. Here, both 1000A and A have repeating 162 at the end. Therefore, when we subtract A from both sides, the tail end of 1000A cancels out the tail end of A:
999A = 715.5
so that
A = 715.5/999 = 7155/9990 = 53/74
(135 is the greatest common divisor of 7155 and 9990). Alternatively, since 0.5 = 1/2, one can clear fractions by multiplying the numerator and denominator by 2:
A = 1431/1998 = 53/74
(27 is the greatest common divisor of 1431 and 1998). 53/74 is a quotient of integers and therefore a rational number.
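The arithmetic above can be verified, and the repeating expansion regenerated by long division, with a short sketch (the digit loop is my own illustration of the long-division argument):

```python
from fractions import Fraction

# The fraction 7155/9990 reduces to 53/74, as does 1431/1998.
A = Fraction(7155, 9990)
assert A == Fraction(53, 74)
assert A == Fraction(1431, 1998)

# Long division of 53 by 74: generate decimal digits from successive remainders.
digits = ""
r = 7155 % 9990
for _ in range(12):
    r *= 10
    digits += str(r // 9990)
    r %= 9990

assert digits == "716216216216"   # the repetend 162 recurs, as claimed
```

Only finitely many remainders are possible, so the digit stream must eventually cycle, which is exactly the long-division argument in the text.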



Irrational powers
Dov Jarden gave a simple non-constructive proof that there exist two irrational numbers a and b such that a^b is rational.[28] Indeed, if √2^√2 is rational, then take a = b = √2. Otherwise, take a to be the irrational number √2^√2 and b = √2. Then a^b = (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2, which is rational. Although the above argument does not decide between the two cases, the Gelfond–Schneider theorem implies that √2^√2 is transcendental, hence irrational. This theorem states that all non-rational algebraic powers of algebraic numbers other than 0 or 1 are transcendental.

Open questions
It is not known whether π + e or π − e is irrational or not. In fact, there is no pair of non-zero integers m and n for which it is known whether mπ + ne is irrational or not. Moreover, it is not known whether the set {π, e} is algebraically independent over Q. It is not known whether πe, π/e, 2^e, π^e, π^√2, ln π, Catalan's constant, or the Euler–Mascheroni constant γ are irrational.[29] [30] [31]

The set of all irrationals
Since the reals form an uncountable set, of which the rationals are a countable subset, the complementary set of irrationals is uncountable. Under the usual (Euclidean) distance function d(x, y) = |x − y|, the real numbers are a metric space and hence also a topological space. Restricting the Euclidean distance function gives the irrationals the structure of a metric space. Since the subspace of irrationals is not closed, the induced metric is not complete. However, being a G-delta set—i.e., a countable intersection of open subsets—in a complete metric space, the space of irrationals is topologically complete: that is, there is a metric on the irrationals inducing the same topology as the restriction of the Euclidean metric, but with respect to which the irrationals are complete. One can see this without knowing the aforementioned fact about G-delta sets: the continued fraction expansion of an irrational number defines a homeomorphism from the space of irrationals to the space of all sequences of positive integers, which is easily seen to be completely metrizable. Furthermore, the set of all irrationals is a disconnected metrizable space. In fact, the irrationals have a basis of clopen sets so the space is zero-dimensional.
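The continued fraction expansion mentioned above can be sketched numerically. This is a floating-point illustration only (the function name continued_fraction is mine): accumulated rounding error limits how many terms are reliable.

```python
from math import floor, sqrt

def continued_fraction(x, n_terms=8):
    """First terms of the regular continued fraction of x.
    Floating-point sketch; only the early terms are trustworthy."""
    terms = []
    for _ in range(n_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:          # a rational input terminates here
            break
        x = 1 / frac
    return terms

# √2 = [1; 2, 2, 2, ...] -- the expansion of an irrational never terminates,
# whereas a rational number's regular continued fraction is finite.
print(continued_fraction(sqrt(2)))
```

The homeomorphism in the text sends each irrational to exactly such an infinite sequence of positive integers.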

References
[1] Cantor, Georg (1955, 1915). Philip Jourdain, ed. Contributions to the Founding of the Theory of Transfinite Numbers (http://www.archive.org/details/contributionstot003626mbp). New York: Dover. ISBN 978-0486600451.
[2] The 15 Most Famous Transcendental Numbers (http://sprott.physics.wisc.edu/Pickover/trans.html), by Clifford A. Pickover. URL retrieved 24 October 2007.
[3] http://www.mathsisfun.com/irrational-numbers.html; URL retrieved 24 October 2007.
[4] Weisstein, Eric W., "Irrational Number" (http://mathworld.wolfram.com/IrrationalNumber.html) from MathWorld. URL retrieved 26 October 2007.
[5] T. K. Puttaswamy, "The Accomplishments of Ancient Indian Mathematicians", pp. 411–2, in Selin, Helaine; D'Ambrosio, Ubiratan (2000). Mathematics Across Cultures: The History of Non-western Mathematics. Springer. ISBN 1402002602.
[6] Boyer (1991). "China and India". p. 208. "It has been claimed also that the first recognition of incommensurables is to be found in India during the Sulbasutra period, but such claims are not well substantiated. The case for early Hindu awareness of incommensurable magnitudes is rendered most unlikely by the lack of evidence that Indian mathematicians of that period had come to grips with fundamental concepts."
[7] Kurt Von Fritz (1945). "The Discovery of Incommensurability by Hippasus of Metapontum". The Annals of Mathematics.
[8] James R. Choike (1980). "The Pentagram and the Discovery of an Irrational Number". The Two-Year College Mathematics Journal.

[9] Kline, M. (1990). Mathematical Thought from Ancient to Modern Times, Vol. 1. New York: Oxford University Press. (Original work published 1972.) p. 33.
[10] Kline 1990, p. 32.
[11] Kline 1990, p. 34.
[12] Kline 1990, p. 48.
[13] Kline 1990, p. 49.
[14] Kline 1990, p. 50.
[15] Robert L. McCabe (1976). "Theodorus' Irrationality Proofs". Mathematics Magazine.
[16] Charles H. Edwards (1982). The historical development of the calculus. Springer.
[17] O'Connor, John J.; Robertson, Edmund F., "Arabic mathematics: forgotten brilliance?" (http://www-history.mcs.st-andrews.ac.uk/HistTopics/Arabic_mathematics.html), MacTutor History of Mathematics archive, University of St Andrews.
[18] Matvievskaya, Galina (1987). "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics". Annals of the New York Academy of Sciences 500: 253–277 [254]. doi:10.1111/j.1749-6632.1987.tb37206.x.
[19] Matvievskaya, Galina (1987). "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics". Annals of the New York Academy of Sciences 500: 253–277 [259]. doi:10.1111/j.1749-6632.1987.tb37206.x.
[20] Jacques Sesiano, "Islamic mathematics", p. 148, in Selin, Helaine; D'Ambrosio, Ubiratan (2000). Mathematics Across Cultures: The History of Non-western Mathematics. Springer. ISBN 1402002602.
[21] Matvievskaya, Galina (1987). "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics". Annals of the New York Academy of Sciences 500: 253–277 [260]. doi:10.1111/j.1749-6632.1987.tb37206.x.
[22] Matvievskaya, Galina (1987). "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics". Annals of the New York Academy of Sciences 500: 253–277 [261]. doi:10.1111/j.1749-6632.1987.tb37206.x.
[23] Cajori, Florian (1928), A History of Mathematical Notations (Vol. 1), La Salle, Illinois: The Open Court Publishing Company. p. 269.
[24] Cajori 1928, p. 89.
[25] Katz, V. J. (1995), "Ideas of Calculus in Islam and India", Mathematics Magazine (Mathematical Association of America) 68 (3): 163–74.
[26] Salvatore Pincherle (1880). "Saggio di una introduzione alla teorica delle funzioni analitiche secondo i principi del prof. Weierstrass". Giornale di Matematiche.
[27] J. H. Lambert (1761). "Mémoire sur quelques propriétés remarquables des quantités transcendentes circulaires et logarithmiques". Histoire de l'Académie Royale des Sciences et des Belles-Lettres de Berlin: 265–276.
[28] George, Alexander; Velleman, Daniel J. (2002). Philosophies of mathematics. Blackwell. pp. 3–4. ISBN 0-631-19544-0.
[29] Weisstein, Eric W., "Pi" (http://mathworld.wolfram.com/Pi.html) from MathWorld.
[30] Weisstein, Eric W., "Irrational Number" (http://mathworld.wolfram.com/IrrationalNumber.html) from MathWorld.
[31] Some unsolved problems in number theory (http://www.math.ou.edu/~jalbert/courses/openprob2.pdf)


Further reading
• Adrien-Marie Legendre, Éléments de Géométrie, Note IV, (1802), Paris
• Rolf Wallisser, "On Lambert's proof of the irrationality of π", in Algebraic Number Theory and Diophantine Analysis, Franz Halter-Koch and Robert F. Tichy (eds.), (2000), Walter de Gruyter

External links
• Zeno's Paradoxes and Incommensurability (n.d.). Retrieved April 1, 2008
• Weisstein, Eric W., "Irrational Number" from MathWorld.
• Square root of 2 is irrational

Imaginary number


The powers of i repeat in a cycle:

i^(−3) = i
i^(−2) = −1
i^(−1) = −i
i^0 = 1
i^1 = i
i^2 = −1
i^3 = −i
i^4 = 1
i^5 = i
i^6 = −1

In general, i^n = i^(n mod 4) (see modular arithmetic).

An imaginary number is any number whose square is a real number less than zero. When a real number is squared, the result is never negative, but the square of an imaginary number is always negative. Imaginary numbers have the form bi, where b is a non-zero real number and i is the imaginary unit, defined by i² = −1.[1] An imaginary number bi can be added to a real number a to form a complex number of the form a + bi, where a and b are called, respectively, the "real part" and the "imaginary part" of the complex number. Imaginary numbers can therefore be thought of as the non-zero complex numbers whose real part is zero. The name "imaginary number" was coined in the 17th century as a derogatory term, as such numbers were regarded by some as fictitious or useless, but today they have a variety of essential, concrete applications in science and engineering.
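These defining properties are easy to check with Python's built-in complex type; a minimal sketch (the literal 1j plays the role of the imaginary unit i):

```python
# Python's complex literal 1j represents the imaginary unit i.
i = 1j

# The defining property: i squared is -1.
assert i * i == -1

# An imaginary number bi added to a real number a gives a + bi.
z = 3 + 4j
assert z.real == 3.0 and z.imag == 4.0

# A purely imaginary number is a complex number with real part zero.
b = 5j
assert b.real == 0.0
```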



Although Greek mathematician and engineer Heron of Alexandria is noted as the first to have conceived these numbers,[2] [3] Rafael Bombelli first set down the rules for multiplication in the complex numbers in 1572. The concept had appeared in print earlier, for instance in work by Gerolamo Cardano. At the time, such numbers were poorly understood and regarded by some as fictitious or useless, much as zero and the negative numbers once were. Many other mathematicians were slow to adopt the use of imaginary numbers, including René Descartes, who wrote about them in his La Géométrie, where the term imaginary was used and meant to be derogatory.[4] The use of imaginary numbers was not widely accepted until the work of Leonhard Euler (1707–1783) and Carl Friedrich Gauss (1777–1855). The geometric significance of complex numbers as points in a plane was first found by Caspar Wessel (1745–1818).[5] In 1843 a mathematical physicist, William Rowan Hamilton, extended the idea of an axis of imaginary numbers in the plane to a three-dimensional space of quaternion imaginaries.
An illustration of the complex plane. The imaginary numbers are on the vertical coordinate axis.

With the development of quotient rings of polynomial rings, the concept behind an imaginary number became more substantial, but then one also finds other imaginary numbers such as the j of tessarines which has a square of +1. This idea first surfaced with the articles by James Cockle beginning in 1848.

Geometric interpretation
Geometrically, imaginary numbers are found on the vertical axis of the complex number plane, allowing them to be presented perpendicular to the real axis. One way of viewing imaginary numbers is to consider a standard number line, positively increasing in magnitude to the right, and negatively increasing in magnitude to the left. At 0 on this x-axis, a y-axis can be drawn with "positive" direction going up; "positive" imaginary numbers then "increase" in magnitude upwards, and "negative" imaginary numbers "decrease" in magnitude downwards. This vertical axis is often called the "imaginary axis" and is denoted iℝ, 𝕀, or ℑ. In this representation, multiplication by −1 corresponds to a rotation of 180 degrees about the origin. Multiplication by i corresponds to a
90-Degree Rotations in the Complex Plane

90-degree rotation in the "positive" direction (i.e., counterclockwise), and the equation i² = −1 is interpreted as saying that if we apply two 90-degree rotations about the origin, the net result is a single 180-degree rotation. Note that a 90-degree rotation in the "negative" direction (i.e. clockwise) also satisfies this interpretation. This reflects the fact that −i also solves the equation x² = −1 (see imaginary unit).
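This rotation picture can be verified directly with Python's complex numbers; a small sketch (the sample point is illustrative):

```python
# Multiplying by i rotates a point in the complex plane by 90 degrees
# counterclockwise about the origin.
i = 1j
z = 3 + 0j                  # a point on the positive real axis

assert z * i == 3j          # 90 degrees: real axis -> imaginary axis
assert z * i * i == -3      # two rotations: 180 degrees, i.e. times -1
assert z * i ** 4 == z      # four rotations return to the start

# -i also squares to -1, matching the clockwise interpretation.
assert (-i) ** 2 == -1
```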



Applications of imaginary numbers
For most human tasks, real numbers (or even rational numbers or integers) offer an adequate description of data. For instance, fractions such as ⅔ and ⅛ would be meaningless to a person counting stones, but essential to a person comparing the sizes of different collections of stones. In the same way, negative numbers such as –3 and –5 would be meaningless when measuring the mass of an object, but essential when keeping track of monetary debits and credits.[4] Similarly, imaginary numbers have essential concrete applications in a variety of scientific and related areas such as signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. In electrical engineering, for example, the voltage produced by a battery is characterized by one real number called potential (e.g. +12 volts or –12 volts), but the alternating current (AC) voltage in a home requires two parameters: potential and an angle called phase. The AC voltage is, therefore, said to have two dimensions. A two-dimensional quantity can be represented mathematically as either a vector or as a complex number (known in the engineering context as a phasor). In the vector representation, the rectangular coordinates are typically referred to simply as x and y. But in the complex number representation, the same components are referred to as real and imaginary. When the complex number is purely imaginary, such as a real part of 0 and an imaginary part of 120, it means the voltage has a potential of 120 volts and a phase of 90°, which is, physically speaking, very much a real voltage.
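The phasor idea can be sketched with Python's cmath module; the 120-volt, 90° example below is the one from the text, and the encoding as a complex number is illustrative:

```python
import cmath
import math

# An AC voltage with magnitude 120 volts and phase 90 degrees,
# encoded as a phasor: a purely imaginary complex number.
v = cmath.rect(120, math.pi / 2)        # polar -> rectangular

assert abs(v.real) < 1e-9               # real part is (numerically) 0
assert abs(v.imag - 120) < 1e-9         # imaginary part is 120

# Recover magnitude (potential) and phase from the phasor.
magnitude, phase = cmath.polar(v)
assert abs(magnitude - 120) < 1e-9
assert abs(phase - math.pi / 2) < 1e-9
```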

Powers of i
The powers of i repeat in a cycle of length four. This can be expressed with the following pattern, where n is any integer:

i^(4n) = 1, i^(4n+1) = i, i^(4n+2) = −1, i^(4n+3) = −i.

This leads to the conclusion that

i^n = i^(n mod 4).
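A short Python sketch of the four-step cycle:

```python
# The powers of i cycle through [1, i, -1, -i] with period 4,
# so i**n can be reduced to i**(n % 4).
i = 1j
cycle = [1, 1j, -1, -1j]    # i**0, i**1, i**2, i**3

for n in range(-8, 9):
    assert i ** n == cycle[n % 4]
```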


[1] Uno Ingard, K. (1988), Fundamentals of waves & oscillations (http://books.google.com/books?id=SGVfGIewvxkC), Cambridge University Press, p. 38, ISBN 0-521-33957-X, Chapter 2, p. 38 (http://books.google.com/books?id=SGVfGIewvxkC&pg=PA38)
[2] Hargittai, István (1992). Fivefold symmetry (http://books.google.com/books?id=-Tt37ajV5ZgC) (2nd ed.). World Scientific. p. 153. ISBN 981020600-3. Extract of page 153 (http://books.google.com/books?id=-Tt37ajV5ZgC&pg=PA153)
[3] Roy, Stephen Campbell (2007). Complex numbers: lattice simulation and zeta function applications (http://books.google.com/books?id=J-2BRbFa5IkC). Horwood. p. 1. ISBN 190427525-7.
[4] Martinez, Albert A. (2006), Negative Math: How Mathematical Rules Can Be Positively Bent, Princeton: Princeton University Press, ISBN 0691123098, discusses ambiguities of meaning in imaginary expressions in historical context.
[5] Rozenfeld, Boris Abramovich (1988). A history of non-euclidean geometry: evolution of the concept of a geometric space (http://books.google.com/books?id=DRLpAFZM7uwC). Springer. p. 382. ISBN 0-387-96458-4. Chapter 10, page 382 (http://books.google.com/books?id=DRLpAFZM7uwC&pg=PA382)



• Nahin, Paul (1998), An Imaginary Tale: the Story of the Square Root of −1, Princeton: Princeton University Press, ISBN 0691027951, explains many applications of imaginary expressions.

External links
• How can one show that imaginary numbers really do exist? ( imagexist.html) – an article that discusses the existence of imaginary numbers.
• In our time: Imaginary numbers – discussion of imaginary numbers on BBC Radio 4.
• 5Numbers programme 4 – BBC Radio 4 programme

Algebraic number
In mathematics, an algebraic number is a number that is a root of a non-zero polynomial in one variable with rational (or equivalently, integer) coefficients. Numbers such as π that are not algebraic are said to be transcendental; almost all real numbers are transcendental. (Here "almost all" has the sense "all but a set of Lebesgue measure zero"; see Properties below.)

Examples
• The rational numbers, expressed as the quotient of two integers a and b, b not equal to zero, satisfy the above definition because x = a/b is the root of bx − a.[1]
• The constructible numbers (those that, starting with a unit, can be constructed with straightedge and compass, e.g. the square root of 2).
• The quadratic surds (irrational roots of a quadratic polynomial ax² + bx + c with integer coefficients a, b, and c) are algebraic numbers. If the quadratic polynomial is monic (a = 1), then the roots are quadratic integers.
• Gaussian integers: those complex numbers a + bi where both a and b are integers are also quadratic integers.
• Trigonometric functions of rational multiples of π (except when undefined). For example, each of cos(π/7), cos(3π/7), and cos(5π/7) satisfies 8x³ − 4x² − 4x + 1 = 0. This polynomial is irreducible over the rationals, and so these three cosines are conjugate algebraic numbers. Likewise, tan(3π/16), tan(7π/16), tan(11π/16), and tan(15π/16) all satisfy the irreducible polynomial x⁴ − 4x³ − 6x² + 4x + 1, and so are conjugate algebraic integers.
• Some irrational numbers are algebraic and some are not:
  • The numbers √2 and ∛3/2 are algebraic since they are the roots of the polynomials x² − 2 and 8x³ − 3, respectively.
  • The golden ratio φ is algebraic since it is a root of the polynomial x² − x − 1.
  • The numbers π and e are not algebraic numbers (see the Lindemann–Weierstrass theorem);[2] hence they are transcendental.
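Claims like these can be spot-checked numerically; a small Python sketch (the tolerances are arbitrary):

```python
import math

def poly(coeffs, x):
    """Evaluate a polynomial, coefficients given from highest degree down."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Each algebraic number below should make its polynomial (nearly) vanish.
sqrt2 = math.sqrt(2)
golden = (1 + math.sqrt(5)) / 2
cos_pi_7 = math.cos(math.pi / 7)

assert abs(poly([1, 0, -2], sqrt2)) < 1e-9          # x^2 - 2
assert abs(poly([1, -1, -1], golden)) < 1e-9        # x^2 - x - 1
assert abs(poly([8, -4, -4, 1], cos_pi_7)) < 1e-9   # 8x^3 - 4x^2 - 4x + 1
```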



Properties
• The set of algebraic numbers is countable (enumerable).[3]
• Hence, the set of algebraic numbers has Lebesgue measure zero (as a subset of the complex numbers), i.e. "almost all" complex numbers are not algebraic.
• Given an algebraic number, there is a unique monic polynomial (with rational coefficients) of least degree that has the number as a root. This polynomial is called its minimal polynomial. If its minimal polynomial has degree n, then the algebraic number is said to be of degree n. An algebraic number of degree 1 is a rational number.
• All algebraic numbers are computable and therefore definable.

Algebraic numbers coloured by degree. (red=1, green=2, blue=3, yellow=4)

The field of algebraic numbers
The sum, difference, product and quotient of two algebraic numbers are again algebraic (this fact can be demonstrated using the resultant), and the algebraic numbers therefore form a field, sometimes denoted by A (which may also denote the adele ring) or Q̄. Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. This can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals, and is therefore called the algebraic closure of the rationals.
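For instance, √2 and √3 are each algebraic, and their sum √2 + √3 is again algebraic: it is a root of x⁴ − 10x² + 1. A numerical spot-check in Python (the tolerance is arbitrary):

```python
import math

# sqrt(2) + sqrt(3) is algebraic: it satisfies x^4 - 10x^2 + 1 = 0.
x = math.sqrt(2) + math.sqrt(3)
assert abs(x ** 4 - 10 * x ** 2 + 1) < 1e-9

# Its conjugates (+/- sqrt(2) +/- sqrt(3)) are the other roots.
for a in (1, -1):
    for b in (1, -1):
        y = a * math.sqrt(2) + b * math.sqrt(3)
        assert abs(y ** 4 - 10 * y ** 2 + 1) < 1e-9
```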

Related fields
Numbers defined by radicals
All numbers which can be obtained from the integers using a finite number of integer additions, subtractions, multiplications, divisions, and taking nth roots (where n is a positive integer) are algebraic. The converse, however, is not true: there are algebraic numbers which cannot be obtained in this manner. All of these numbers are solutions to polynomials of degree ≥ 5. This is a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). An example of such a number is the unique real root of the polynomial x⁵ − x − 1 (which is approximately 1.167304).
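That real root can be located with simple bisection; a Python sketch (iteration count and tolerance are arbitrary):

```python
# Find the unique real root of x^5 - x - 1 by bisection.
def f(x):
    return x ** 5 - x - 1

lo, hi = 1.0, 2.0                   # f(1) = -1 < 0 and f(2) = 29 > 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
assert abs(f(root)) < 1e-9
assert abs(root - 1.167304) < 1e-5  # matches the approximation in the text
```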

Closed-form number
Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. Most broadly, any number that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms is called an "elementary number"; these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers explicitly defined in terms of polynomials, exponentials, and logarithms; this does not include all algebraic numbers, but does include some simple transcendental numbers such as e or log(2).



Algebraic integers
An algebraic integer is an algebraic number which is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are 5 + 13√2, 2 − 6i, and ½(1 + i√3). (Note, therefore, that the algebraic integers constitute a proper superset of the integers, as the latter are the roots of the monic polynomials x − k for all k ∈ Z.) The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name algebraic integer comes from the fact that the only rational numbers which are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If K is a number field, its ring of integers is the subring of algebraic integers in K, and is frequently denoted as OK. These are the prototypical examples of Dedekind domains.

Algebraic numbers coloured by leading coefficient (red signifies 1 for an algebraic integer).

Special classes of algebraic number
• Gaussian integer
• Eisenstein integer
• Quadratic irrational
• Fundamental unit
• Root of unity
• Gaussian period
• Pisot–Vijayaraghavan number
• Salem number

[1] Some of the preceding examples come from Hardy and Wright 1972:159–160 and pp. 178–179
[2] Also Liouville's theorem can be used to "produce as many examples of transcendental numbers as we please," cf. Hardy and Wright p. 161ff
[3] Hardy and Wright 1972:160

• Artin, Michael (1991), Algebra, Prentice Hall, ISBN 0-13-004763-5, MR1129886
• Ireland, Kenneth; Rosen, Michael (1990), A Classical Introduction to Modern Number Theory, Graduate Texts in Mathematics, 84 (Second ed.), Berlin, New York: Springer-Verlag, ISBN 0-387-97329-X, MR1070716
• G. H. Hardy and E. M. Wright 1978, 2000 (with general index) An Introduction to the Theory of Numbers: 5th Edition, Clarendon Press, Oxford UK, ISBN 0-19-853171-0
• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR1878556
• Øystein Ore 1948, 1988, Number Theory and Its History, Dover Publications, Inc. New York, ISBN 0-486-65620-9 (pbk.)

Complex number


A complex number is a number consisting of a real part and an imaginary part. Complex numbers extend the idea of the one-dimensional number line to the two-dimensional complex plane by using the number line for the real part and adding a vertical axis to plot the imaginary part. In this way the complex numbers contain the ordinary real numbers while extending them in order to solve problems that would be impossible with only real numbers. Complex numbers are used in many scientific fields, including engineering, electromagnetism, quantum physics, applied mathematics, and chaos theory. Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century.[1]

Introduction and definition
Complex numbers have been introduced to allow for solutions of certain equations that have no real solution: the equation

x² + 1 = 0

has no real solution x, since the square of x is 0 or positive, so x² + 1 cannot be zero. Complex numbers are a solution to this problem. The idea is to enhance the real numbers by introducing a non-real number i whose square is −1, so that x = i and x = −i are the two solutions to the preceding equation.

A complex number can be visually represented as a pair of numbers forming a vector on a diagram called an Argand diagram, representing the complex plane. Re is the real axis, Im is the imaginary axis, and i is the square root of −1.

A complex number is an expression of the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying i² = −1. For example, −3.5 + 2i is a complex number. The real number a of the complex number z = a + bi is called the real part of z and the real number b is the imaginary part.[2] They are denoted Re(z) or ℜ(z) and Im(z) or ℑ(z), respectively. For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2.

Some authors write a + ib instead of a + bi. In some disciplines (in particular, electrical engineering, where i is a symbol for current), in order to avoid notational conflict, the imaginary unit i is instead written as j, so complex numbers are written as a + bj or a + jb. A real number a can usually be regarded as a complex number with an imaginary part of zero, that is to say, a + 0i. However the sets are defined differently and have slightly different operations defined; for instance, comparison operations are not defined for complex numbers. Complex numbers whose real part is zero, that is to say, those of the form 0 + bi, are called imaginary numbers. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when the imaginary part is negative, it is common to write a − bi with b > 0 instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i. The set of all complex numbers is denoted by C or ℂ.



The complex plane
A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form.
Figure 1: A complex number plotted as a point (red) and position vector (blue) on an Argand diagram; is the rectangular expression of the point.

The defining characteristic of a position vector is that it has magnitude and direction. These are emphasised in a complex number's polar form, and it turns out notably that the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating a complex number counterclockwise through 90° about the origin: z ↦ iz.

History in brief
Main section: History The solution in radicals (without trigonometric functions) of a general cubic equation contains the square roots of negative numbers when all three roots are real numbers, a situation that cannot be rectified by factoring aided by the rational root test if the cubic is irreducible (the so-called casus irreducibilis). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545, though his understanding was rudimentary. Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[3] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.



Elementary operations
The complex conjugate of the complex number z = x + yi is defined to be x − yi. It is denoted z̄ or z*. Geometrically, z̄ is the "reflection" of z about the real axis. In particular, conjugating twice gives the original complex number: the conjugate of z̄ is z. The real and imaginary parts of a complex number can be extracted using the conjugate:

Re(z) = (z + z̄)/2,  Im(z) = (z − z̄)/(2i).

Geometric representation of and its conjugate in the complex plane

Moreover, a complex number is real if and only if it equals its conjugate. Conjugation distributes over the standard arithmetic operations: the conjugate of z + w is z̄ + w̄, the conjugate of z·w is z̄·w̄, and the conjugate of z/w is z̄/w̄.

The reciprocal of a nonzero complex number z = x + yi is given by

1/z = z̄/(z z̄) = (x − yi)/(x² + y²).

This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying more general reflections than ones about a line, can also be expressed in terms of complex numbers.
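These conjugate identities are easy to check with Python's complex type; a brief sketch (the sample value is arbitrary):

```python
# Conjugation, extraction of real/imaginary parts, and reciprocals.
z = 3 + 4j
zbar = z.conjugate()        # 3 - 4j, the reflection of z about the real axis

assert zbar == 3 - 4j
assert zbar.conjugate() == z                    # conjugating twice gives z back
assert (z + zbar) / 2 == z.real                 # Re(z) = (z + conj(z)) / 2
assert abs((z - zbar) / 2j - z.imag) < 1e-12    # Im(z) = (z - conj(z)) / (2i)

# Reciprocal via the conjugate: 1/z = conj(z) / (x^2 + y^2).
recip = zbar / (z.real ** 2 + z.imag ** 2)
assert abs(recip - 1 / z) < 1e-12
```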



Addition and subtraction
Complex numbers are added by adding the real and imaginary parts of the summands. That is to say:

(a + bi) + (c + di) = (a + c) + (b + d)i.

Addition of two complex numbers can be done geometrically by constructing a parallelogram.

Similarly, subtraction is defined by

(a + bi) − (c + di) = (a − c) + (b − d)i.

Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are 0, A and B. Equivalently, X is the point such that the triangles with vertices 0, A, B, and X, B, A, are congruent.

Multiplication and division
The multiplication of two complex numbers is defined by the following formula:

(a + bi)(c + di) = (ac − bd) + (bc + ad)i.

In particular, the square of the imaginary unit is −1:

i² = i·i = −1.

The preceding definition of multiplication of general complex numbers is the natural way of extending this fundamental property of the imaginary unit. Indeed, treating i as a variable:

(a + bi)(c + di) = ac + adi + bci + bd·i²   (distributive law)
  = ac + bd·i² + adi + bci                  (commutative law of addition: the order of the summands can be changed)
  = ac + bd·i² + (ad + bc)i                 (commutative law of multiplication: the order of the factors can be changed)
  = (ac − bd) + (ad + bc)i                  (fundamental property of the imaginary unit).

The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division:

(a + bi)/(c + di) = ((ac + bd)/(c² + d²)) + ((bc − ad)/(c² + d²))i.

Division can be defined in this way because of the following observation:

(a + bi)/(c + di) = ((a + bi)(c − di))/((c + di)(c − di)) = ((ac + bd) + (bc − ad)i)/(c² + d²).

As shown earlier, c − di is the complex conjugate of the denominator c + di. The real part c and the imaginary part d of the denominator must not both be zero for division to be defined.
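The multiplication and division rules can be checked against Python's built-in complex arithmetic; a minimal sketch (helper names are illustrative):

```python
# Multiply and divide complex numbers using the rectangular formulas,
# then compare against Python's built-in complex arithmetic.
def mul(a, b, c, d):
    """(a + bi)(c + di) = (ac - bd) + (bc + ad)i"""
    return complex(a * c - b * d, b * c + a * d)

def div(a, b, c, d):
    """(a + bi)/(c + di), assuming c and d are not both zero."""
    denom = c * c + d * d
    return complex((a * c + b * d) / denom, (b * c - a * d) / denom)

assert mul(1, 2, 3, 4) == (1 + 2j) * (3 + 4j)           # -5 + 10i
assert abs(div(1, 2, 3, 4) - (1 + 2j) / (3 + 4j)) < 1e-12
assert mul(0, 1, 0, 1) == -1                            # i * i = -1
```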

Square root
The square roots of a + bi (with b ≠ 0) are ±(γ + δi), where

γ = √((a + √(a² + b²))/2)

and

δ = sgn(b)·√((−a + √(a² + b²))/2),

where sgn is the signum function. This can be seen by squaring ±(γ + δi) to obtain a + bi.[4] [5] Here √(a² + b²) is called the modulus of a + bi, and the square root with non-negative real part is called the principal square root.
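This formula can be checked against cmath.sqrt; a Python sketch (the test values are arbitrary):

```python
import cmath
import math

def complex_sqrt(a, b):
    """Principal square root of a + bi (b != 0) via the formula above."""
    m = math.hypot(a, b)                        # modulus sqrt(a^2 + b^2)
    gamma = math.sqrt((a + m) / 2)
    delta = math.copysign(1, b) * math.sqrt((-a + m) / 2)
    return complex(gamma, delta)

for z in (3 + 4j, -1 + 1j, 2 - 5j):
    w = complex_sqrt(z.real, z.imag)
    assert abs(w * w - z) < 1e-9                # squaring recovers z
    assert abs(w - cmath.sqrt(z)) < 1e-9        # matches the principal root
```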

Polar form
Absolute value and argument
Another way of encoding points in the complex plane other than using the x- and y-coordinates is to use the distance of a point P to O, the point whose coordinates are (0, 0) (origin), and the angle of the line through P and O. This idea leads to the polar form of complex numbers. The absolute value (or modulus or magnitude) of a complex number z = x + yi is

r = |z| = √(x² + y²).

Figure 2: The argument φ and modulus r locate a point on an Argand diagram; r(cos φ + i sin φ) or re^(iφ) are polar expressions of the point.

If z is a real number (i.e., y = 0), then r = |x|. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The argument or phase of z is the angle of the radius OP with the positive real axis, and is written as arg(z). As with the modulus, the argument can be found from the rectangular form x + yi:



The value of φ must always be expressed in radians. It can change by any multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The polar angle for the complex number 0 is undefined, but an arbitrary choice of the angle 0 is common. The value of φ equals the result of atan2: φ = atan2(y, x). Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specify the position of a point on the plane. Recovering the original rectangular coordinates from the polar form is done by the formula called the trigonometric form:

z = r(cos φ + i sin φ).

Using Euler's formula this can be written as

z = r·e^(iφ).

Using the cis function, this is sometimes abbreviated to z = r cis φ. In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as[7] z = r∠φ.
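Python's cmath module implements these polar/rectangular conversions directly; a short sketch:

```python
import cmath
import math

z = 1 + 1j

# Modulus and argument (polar form).
r, phi = cmath.polar(z)
assert abs(r - math.sqrt(2)) < 1e-12            # r = sqrt(x^2 + y^2)
assert abs(phi - math.pi / 4) < 1e-12           # phi = atan2(y, x)
assert abs(phi - math.atan2(z.imag, z.real)) < 1e-15

# Recover rectangular coordinates: z = r (cos(phi) + i sin(phi)).
back = cmath.rect(r, phi)
assert abs(back - z) < 1e-12

# Euler's formula: z = r * e^(i*phi).
assert abs(r * cmath.exp(1j * phi) - z) < 1e-12
```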

Multiplication, division and exponentiation in polar form
The relevance of representing complex numbers in polar form stems from the fact that the formulas for multiplication, division and exponentiation are simpler than the ones using Cartesian coordinates. Given two complex numbers z₁ = r₁(cos φ₁ + i sin φ₁) and z₂ = r₂(cos φ₂ + i sin φ₂), the formula for multiplication is

z₁z₂ = r₁r₂(cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)).

Multiplication of 2+i (blue triangle) and 3+i (red triangle). The red triangle is rotated to match the vertex of the blue one and stretched by √5, the length of the hypotenuse of the blue triangle.

In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-rotation counter-clockwise, which gives back i² = −1. The picture at the right illustrates the multiplication of

(2 + i)(3 + i) = 5 + 5i.

Since the real and imaginary parts of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula

π/4 = arctan(1/2) + arctan(1/3)

holds. As the arctan function can be approximated highly efficiently, formulas like this, known as Machin-like formulas, are used for high-precision approximations of π. Similarly, division is given by

z₁/z₂ = (r₁/r₂)(cos(φ₁ − φ₂) + i sin(φ₁ − φ₂)).

This also implies de Moivre's formula for exponentiation of complex numbers with integer exponents:

zⁿ = rⁿ(cos nφ + i sin nφ).

The n-th roots of z are given by

ⁿ√z = ⁿ√r · (cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n))

for any integer k satisfying 0 ≤ k ≤ n − 1. Here ⁿ√r is the usual (positive) nth root of the positive real number r. While the nth root of a positive real number r is chosen to be the positive real number c satisfying cⁿ = r, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as ⁿ√(zw) = ⁿ√z · ⁿ√w (which holds for positive real numbers) do in general not hold for complex numbers.
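The n-th root formula can be tried out in Python; a sketch (the choice of 8i is illustrative):

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex n-th roots of z, via the polar-form formula."""
    r, phi = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (phi + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8j, 3)
assert len(roots) == 3
for w in roots:
    assert abs(w ** 3 - 8j) < 1e-9      # each root cubes back to 8i
```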

Field structure
The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number a, its negative −a is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the laws of commutativity of addition and multiplication for any two complex numbers z₁ and z₂:

z₁ + z₂ = z₂ + z₁,  z₁z₂ = z₂z₁.

These two laws and the other requirements on a field can be proven using the formulas given above, together with the fact that the real numbers themselves form a field. Unlike the reals, C is not an ordered field; that is to say, it is not possible to define a relation z₁ < z₂ that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so i² = −1 precludes the existence of an ordering on C. When the underlying field for a mathematical topic or construct is the field of complex numbers, the thing's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.

Complex number


Solutions of polynomial equations
Given any complex numbers (called coefficients) a0, ..., an, the equation aₙzⁿ + ⋯ + a₁z + a₀ = 0 has at least one complex solution z, provided that at least one of the higher coefficients a1, ..., an is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x² − 2 does not have a rational root, since √2 is not a rational number) nor the real numbers R (the polynomial x² + a does not have a real solution for a > 0, since the square of x is positive for any real number x). There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one root. As a consequence, theorems that hold for any algebraically closed field apply to C. For example, any complex matrix has at least one (complex) eigenvalue.
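The quadratic case makes the theorem concrete: a real polynomial with negative discriminant has no real root, but the quadratic formula always yields complex roots. A small sketch in Python (the helper name `quadratic_roots` is illustrative):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*z**2 + b*z + c = 0 (a != 0), complex if needed."""
    d = cmath.sqrt(b * b - 4 * a * c)   # principal square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x**2 + 1 has no real root, but two complex ones: i and -i.
r1, r2 = quadratic_roots(1, 0, 1)
assert abs(r1 - 1j) < 1e-12 and abs(r2 + 1j) < 1e-12
```

Because `cmath.sqrt` returns a complex result even for negative reals, the same two lines cover both the real-root and complex-root cases.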

Algebraic characterization
The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to C. For example, the algebraic closure of Qp also satisfies these three properties, so these two fields are isomorphic. Also, C is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C contains many proper subfields which are isomorphic to C (the same is true of R, which contains many subfields isomorphic to itself).

Characterization as a topological field
The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. C contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
• P is closed under addition, multiplication and taking inverses.
• If x and y are distinct elements of P, then either x − y or y − x is in P.
• If S is any nonempty subset of P, then S + P = x + P for some x in C.
Moreover, C has a nontrivial involutive automorphism x ↦ x∗ (namely complex conjugation), such that xx∗ is in P for any nonzero x in C. Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = {y | p − (y − x)(y − x)∗ ∈ P} as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to C. The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not.



Formal construction
Formal development
Above, complex numbers have been defined by introducing i, the imaginary unit, as a symbol. More rigorously, the set C of complex numbers can be defined as the set R2 of ordered pairs (a, b) of real numbers. In this notation, the above formulas for addition and multiplication read
(a, b) + (c, d) = (a + c, b + d),
(a, b) · (c, d) = (ac − bd, ad + bc).
It is then just a matter of notation to express (a, b) as a + ib. Though this low-level construction accurately describes the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations that behave as is familiar from, say, the rational numbers. For example, the distributive law
x · (y + z) = x · y + x · z
is required to hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form
p(X) = aₙXⁿ + ⋯ + a₁X + a₀,
where the a0, ..., an are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure, called the polynomial ring. The quotient ring R[X]/(X² + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X² + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach; the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.
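The ordered-pair construction can be sketched directly in code. The class below (an illustrative name, not from the article) implements only the pair arithmetic given above, with no reference to a pre-existing imaginary unit:

```python
class Pair:
    """Complex numbers as ordered pairs (a, b) of reals, i.e. a + i*b."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __add__(self, o):
        # (a, b) + (c, d) = (a + c, b + d)
        return Pair(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a, b)(c, d) = (ac - bd, ad + bc)
        return Pair(self.a * o.a - self.b * o.b,
                    self.a * o.b + self.b * o.a)
    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)

i = Pair(0, 1)
assert i * i == Pair(-1, 0)   # the pair (0, 1) plays the role of i: it squares to -1
```

That (0, 1) squares to (−1, 0) falls out of the multiplication rule alone, which is the point of the construction.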

Matrix representation of complex numbers
Complex numbers can also be represented by 2×2 matrices that have the following form:
( a  −b )
( b   a )
Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such matrices. The geometric description of the multiplication of complex numbers can also be phrased in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix:
|a + ib|² = a² + b² = det ( a  −b ; b  a ).
The conjugate
a − ib
corresponds to the transpose of the matrix.

Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than ( 0  −1 ; 1  0 ) that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
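The correspondence between complex arithmetic and 2×2 matrix arithmetic can be checked numerically. A minimal sketch with plain Python lists (helper names are illustrative):

```python
def to_matrix(a, b):
    """The 2x2 matrix [[a, -b], [b, a]] representing a + i*b."""
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    """Ordinary 2x2 matrix product."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

z, w = (1 + 2j), (3 - 1j)
prod = z * w
# The matrix product matches the complex product ...
assert mat_mul(to_matrix(1, 2), to_matrix(3, -1)) == to_matrix(prod.real, prod.imag)
# ... and the determinant equals |z|**2 = a**2 + b**2.
assert abs(det(to_matrix(1, 2)) - abs(z) ** 2) < 1e-9
```

The rotation-matrix reading of multiplication mentioned above is visible here: for |z| = 1 the matrix is exactly a rotation matrix.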



Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

Complex exponential and related functions

Color wheel graph of sin(1/z). Black parts inside refer to numbers having large absolute values.

The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by that of complex numbers. From a more abstract point of view, C, endowed with the metric

d(z1, z2) = |z1 − z2|,
is a complete metric space, which notably includes the triangle inequality
|z1 + z2| ≤ |z1| + |z2|
for any two complex numbers z1 and z2. Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written ez, is defined as the infinite series
exp(z) = 1 + z + z²/2! + z³/3! + ⋯
and the series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions such as sinh, also carry over to complex arguments without change. Euler's formula states:

exp(iφ) = cos φ + i sin φ

for any real number φ; in particular

exp(iπ) = −1.
Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation
exp(z) = w
for any complex number w ≠ 0. It can be shown that any such solution z, called a complex logarithm of w, satisfies

z = ln |w| + i arg(w),
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π]. Complex exponentiation z^ω is defined as

z^ω = exp(ω log z).

Consequently, complex powers are in general multivalued. For ω = 1/n, for a natural number n, this recovers the non-uniqueness of the nth roots mentioned above. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy

z^(bc) = (z^b)^c.
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
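The branch structure of the logarithm can be made concrete: Python's `cmath.log` returns the principal value, and every other branch differs by a multiple of 2πi but still exponentiates back to the original number. A small sketch:

```python
import cmath
import math

w = -1 + 0j
principal = cmath.log(w)            # imaginary part restricted to (-pi, pi]
assert abs(principal - 1j * math.pi) < 1e-12

# Every branch ln|w| + i*(arg(w) + 2*pi*k) also exponentiates back to w.
for k in (-2, -1, 0, 1, 2):
    branch = principal + 2j * math.pi * k
    assert abs(cmath.exp(branch) - w) < 1e-9
```

This is exactly why log, and hence z^ω = exp(ω log z), is multivalued: each choice of k is an equally valid logarithm.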

Holomorphic functions
A function f : C → C is called holomorphic if it satisfies the Cauchy-Riemann equations. For example, any R-linear map C → C can be written in the form

f(z) = az + b·z̄

with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand z ↦ b·z̄ is real-differentiable, but does not satisfy the Cauchy-Riemann equations.


Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as f(z)/(z − z0)ⁿ with a holomorphic function f(z), still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0.

Applications
Some applications of complex numbers are:

Control theory
In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is especially important whether the poles and zeros are in the left or right half plane, i.e. have real part greater than or less than zero. If a system has poles that are
• in the right half plane, it will be unstable,
• all in the left half plane, it will be stable,
• on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
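The stability rules above reduce to inspecting the real parts of the poles. A minimal sketch in Python (the function name `classify` and the tolerance handling are illustrative choices, not from the article):

```python
def classify(poles, tol=1e-12):
    """Classify a linear system by the real parts of its complex poles."""
    reals = [p.real for p in poles]
    if any(r > tol for r in reals):
        return "unstable"              # some pole in the right half plane
    if all(r < -tol for r in reals):
        return "stable"                # all poles in the left half plane
    return "marginally stable"         # pole(s) on the imaginary axis

assert classify([-1 + 2j, -1 - 2j]) == "stable"
assert classify([0.5 + 1j]) == "unstable"
assert classify([1j, -1j]) == "marginally stable"
```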



Signal analysis
Complex numbers are used in signal analysis and other fields for a convenient description of periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the corresponding z is the amplitude and the argument arg(z) the phase. If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form

f(t) = z e^{iωt},
where ω represents the angular frequency and the complex number z encodes the phase and amplitude as explained above. In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus. This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.
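The phasor-calculus idea for circuits can be sketched in a few lines: the impedances of a resistor, inductor and capacitor are combined as a single complex number, whose modulus and argument give the amplitude and phase relation between voltage and current. Component values and the helper name below are illustrative assumptions:

```python
import math

def series_rlc_impedance(r_ohm, l_henry, c_farad, freq_hz):
    """Complex impedance Z = R + j*omega*L + 1/(j*omega*C) of a series RLC circuit."""
    omega = 2 * math.pi * freq_hz
    return r_ohm + 1j * omega * l_henry + 1 / (1j * omega * c_farad)

z = series_rlc_impedance(100.0, 0.01, 1e-6, 1000.0)
amplitude, phase = abs(z), math.atan2(z.imag, z.real)   # |Z| and arg(Z)
assert z.real == 100.0   # the resistor alone contributes the real part
```

At the resonant frequency 1/(2π√(LC)) the inductive and capacitive parts cancel and the impedance becomes purely resistive.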

Improper integrals
In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.

Quantum mechanics
The complex number field is relevant in the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.

Relativity
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

Dynamic equations
In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used to attempt to solve the system in terms of base functions of the form f(t) = rᵗ.
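As a tiny worked instance, the characteristic equation of y'' + y = 0 is r² + 1 = 0 with roots ±i, giving base solutions e^{±it} (equivalently cos t and sin t). The sketch below checks numerically, via a central difference, that f(t) = e^{it} satisfies the equation; the step size and tolerance are illustrative:

```python
import cmath

# Characteristic equation of y'' + y = 0 is r**2 + 1 = 0, with roots +/- i.
roots = (1j, -1j)
assert all(r * r + 1 == 0 for r in roots)

def f(t, r=1j):
    return cmath.exp(r * t)          # base solution e**(r*t)

# Numerical check that f'' + f is (approximately) zero at a sample point.
h, t = 1e-4, 0.8
second = (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2
assert abs(second + f(t)) < 1e-6
```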



Fluid dynamics
In fluid dynamics, complex functions are used to describe potential flow in two dimensions.

Fractals
Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.
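The Mandelbrot set, for instance, is defined by a one-line complex iteration: a point c belongs to the set if iterating z ↦ z² + c from z = 0 stays bounded. A minimal membership test (the iteration cap of 100 is an illustrative cutoff):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z**2 + c from z = 0; c is kept if |z| never exceeds 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False             # escaped: c is outside the set
    return True

assert in_mandelbrot(0j)             # 0 is in the Mandelbrot set
assert in_mandelbrot(-1 + 0j)        # the cycle -1, 0, -1, ... stays bounded
assert not in_mandelbrot(1 + 1j)     # escapes within a few iterations
```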

Algebraic number theory
As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers; they are a principal object of study in algebraic number theory. Compared to the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge, a purely geometric problem. Another example is the Pythagorean triples (a, b, c), that is to say integers satisfying

a² + b² = c²
Construction of a regular polygon using straightedge and compass.

(which implies that the triangle with side lengths a, b, and c is a right triangle). They can be studied by considering Gaussian integers, that is, numbers of the form x + iy, where x and y are integers.
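One classical link between Gaussian integers and Pythagorean triples: squaring x + iy gives (x² − y²) + i(2xy), whose real and imaginary parts, together with the norm x² + y², form a triple. A minimal sketch (the helper name is illustrative, and x > y > 0 is assumed for a positive triple):

```python
def triple_from_gaussian(x, y):
    """Square the Gaussian integer x + i*y: the parts a, b of the square and
    c = x**2 + y**2 satisfy a**2 + b**2 == c**2."""
    z = complex(x, y) * complex(x, y)
    a, b = abs(int(z.real)), abs(int(z.imag))
    c = x * x + y * y                 # the norm |x + iy|**2
    return a, b, c

assert triple_from_gaussian(2, 1) == (3, 4, 5)
assert triple_from_gaussian(3, 2) == (5, 12, 13)
a, b, c = triple_from_gaussian(7, 4)
assert a * a + b * b == c * c
```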

Analytic number theory
Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta-function ζ(s) is related to the distribution of prime numbers.

History
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Heron of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid and arrives at the square root of a negative number in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced the value by its positive.[8] The impetus to study complex numbers proper first arose in the 16th century, when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolo Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's cubic formula gives the solution to the equation x³ − x = 0 as

x = (1/√3) · ( (√−1)^(1/3) + (√−1)^(−1/3) ).


The three cube roots of −1, two of which are complex

At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation z³ = i has solutions −i, (√3 + i)/2 and (−√3 + i)/2. Substituting these in turn into Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x³ − x = 0. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations, and he developed the rules for complex arithmetic in trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature:[9]
[...] quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine. ([...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.)
A further source of confusion was that the equation √−1 · √−1 = −1 seemed to be capriciously inconsistent with the algebraic identity √a · √b = √(ab), which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity 1/√a = √(1/a)) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of √−1 to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra textbook, Elements of Algebra,[10] he introduces these numbers almost at once and then uses them in a natural way throughout. In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:

(cos φ + i sin φ)ⁿ = cos(nφ) + i sin(nφ).
In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:
exp(iφ) = cos φ + i sin φ
by formally manipulating complex power series, and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities. The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way", although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[11] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.
The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the direction factor, and r = √(a² + b²) the modulus; Cauchy (1828) called cos φ + i sin φ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for √−1, introduced the term complex number for a + bi, and called a² + b² the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.

Generalizations and related notions
The process of extending the field R of reals to C is known as the Cayley-Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O, which (as real vector spaces) are of dimension 4 and 8, respectively. However, with increasing dimension, the algebraic properties familiar from real and complex numbers vanish: the quaternions are only a skew field, i.e. x·y ≠ y·x for some quaternions x, y, and the multiplication of octonions fails (in addition to not being commutative) to be associative: (x·y)·z ≠ x·(y·z) in general. Nevertheless, all of these are normed division algebras over R, and by Hurwitz's theorem they are the only ones. The next step in the Cayley-Dickson construction, the sedenions, fails to have this structure. The Cayley-Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis 1, i. This means the following: the R-linear map
z ↦ wz
for some fixed complex number w can be represented by a 2×2 matrix (once a basis has been chosen). With respect to the basis 1, i, this matrix is
( Re(w)  −Im(w) )
( Im(w)   Re(w) )
i.e., the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2 × 2 real matrices, it is not the only one. Any matrix
J = ( p  q ; r  −p )   with p² + qr + 1 = 0
has the property that its square is the negative of the identity matrix: J² = −I. Then
{ aI + bJ : a, b ∈ R }
is also isomorphic to the field C, and gives an alternative complex structure on R². This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x² − 1) (as opposed to R[x]/(x² + 1)). In this ring, the equation a² = 1 has four solutions. The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Qp of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Qp, by Ostrowski's theorem. The algebraic closure of Qp still carries a norm, but (unlike C) is not complete with respect to it. Its completion turns out to be algebraically closed, and this field is called the field of p-adic complex numbers, by analogy.
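The first Cayley-Dickson step past C can be sketched concretely: a quaternion is a pair (a, b) of complex numbers, multiplied by the rule (a, b)(c, d) = (ac − d̄b, da + bc̄). The code below (helper name and pair encoding are illustrative) verifies the familiar quaternion relations, including the loss of commutativity:

```python
def cd_mul(p, q):
    """Cayley-Dickson product of quaternions written as pairs (a, b) of
    complex numbers: (a, b)(c, d) = (a*c - conj(d)*b, d*a + b*conj(c))."""
    a, b = p
    c, d = q
    return (a * c - d.conjugate() * b, d * a + b * c.conjugate())

one = (1 + 0j, 0j)
i = (1j, 0j)
j = (0j, 1 + 0j)
k = (0j, 1j)

assert cd_mul(one, j) == j                # 1 is the identity
assert cd_mul(i, j) == k                  # i*j = k
assert cd_mul(j, i) == (0j, -1j)          # j*i = -k: multiplication is not commutative
assert cd_mul(j, j) == (-1 + 0j, 0j)      # j**2 = -1
```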


The fields R and Qp and their finite field extensions, including C, are local fields.

References
[1] Burton (1995, p. 294)
[2] Aufmann, Richard N.; Barker, Vernon C.; Nation, Richard D. (2007), College Algebra and Trigonometry (6th ed.), Cengage Learning, Chapter P, p. 66, ISBN 0618825150, http://books.google.com/?id=g5j-cT-vg_wC
[3] Katz (2004, §9.1.4)
[4] Abramowitz, Milton; Stegun, Irene A. (1964), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Courier Dover Publications, Section 3.7.26, p. 17, ISBN 0-486-61272-4, http://books.google.com/books?id=MtU8uP7XMvoC
[5] Cooke, Roger (2008), Classical Algebra: Its Nature, Origins, and Uses, John Wiley and Sons, p. 59, ISBN 0-470-25952-3, http://books.google.com/books?id=lUcTsYopfhkC
[6] Kasana, H.S. (2005), Complex Variables: Theory and Applications (2nd ed.), PHI Learning Pvt. Ltd, p. 14, ISBN 81-203-2641-5, http://books.google.com/books?id=rFhiJqkrALIC
[7] Nilsson, James William; Riedel, Susan A. (2008), Electric Circuits (8th ed.), Prentice Hall, Chapter 9, p. 338, ISBN 0-131-98925-1, http://books.google.com/books?id=sxmM8RFL99wC
[8] Nahin, Paul J. (2007), An Imaginary Tale: The Story of √−1, Princeton University Press, ISBN 9780691127989. Retrieved 20 April 2011.
[9] Descartes, René (1954) [1637], La Géométrie | The Geometry of René Descartes with a facsimile of the first edition, Dover Publications, ISBN 0486600688, http://www.gutenberg.org/ebooks/26400, retrieved 20 April 2011
[10] http://web.mat.bham.ac.uk/C.J.Sangwin/euler/
[11] Hardy, G. H.; Wright, E. M. (2000) [1938], An Introduction to the Theory of Numbers (4th ed.), OUP Oxford, p. 189, ISBN 0199219869

Mathematical references
• Ahlfors, Lars (1979), Complex Analysis (3rd ed.), McGraw-Hill, ISBN 978-0070006577
• Conway, John B. (1986), Functions of One Complex Variable I, Springer, ISBN 0-387-90328-3
• Joshi, Kapil D. (1989), Foundations of Discrete Mathematics, New York: John Wiley & Sons, ISBN 978-0-470-21152-6
• Pedoe, Dan (1988), Geometry: A Comprehensive Course, Dover, ISBN 0-486-65812-0
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 5.5 Complex Arithmetic", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
• Solomentsev, E.D. (2001), "Complex number", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104



Historical references
• Burton, David M. (1995), The History of Mathematics (3rd ed.), New York: McGraw-Hill, ISBN 978-0-07-009465-9
• Katz, Victor J. (2004), A History of Mathematics, Brief Version, Addison-Wesley, ISBN 978-0-321-16193-2
• Nahin, Paul J. (1998), An Imaginary Tale: The Story of √−1 (hardcover ed.), Princeton University Press, ISBN 0-691-02795-1. A gentle introduction to the history of complex numbers and the beginnings of complex analysis.
• Ebbinghaus, H.-D., et al. (1991), Numbers (hardcover ed.), Springer, ISBN 0-387-97497-0. An advanced perspective on the historical development of the concept of number.

Further reading
• The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose; Alfred A. Knopf, 2005; ISBN 0-679-45443-8. Chapters 4-7 in particular deal extensively (and enthusiastically) with complex numbers.
• Unknown Quantity: A Real and Imaginary History of Algebra, by John Derbyshire; Joseph Henry Press; ISBN 0-309-09657-X (hardcover, 2006). A very readable history with emphasis on solving polynomial equations and the structures of modern algebra.
• Visual Complex Analysis, by Tristan Needham; Clarendon Press; ISBN 0-19-853447-7 (hardcover, 1997). History of complex numbers and complex analysis with compelling and useful visual interpretations.

External links
• Imaginary Numbers, on In Our Time at the BBC.
• Euler's work on Complex Roots of Polynomials, at Convergence (MAA Mathematical Sciences Digital Library).
• John and Betty's Journey Through Complex Numbers.
• Dimensions: a math film. Chapter 5 presents an introduction to complex arithmetic and stereographic projection; Chapter 6 discusses transformations of the complex plane, Julia sets, and the Mandelbrot set.


Special References
Portal:Algebra
Algebra is a branch of mathematics concerning the study of structure, relation and quantity. The name is derived from the treatise written by the Persian mathematician, astronomer, astrologer and geographer Muhammad ibn Mūsā al-Khwārizmī, titled Kitab al-Jabr wa-l-Muqabala (meaning "The Compendious Book on Calculation by Completion and Balancing"), which provided operations for the systematic solution of linear and quadratic equations. Together with geometry, analysis, combinatorics, and number theory, algebra is one of the main branches of mathematics. Elementary algebra is often part of the curriculum in secondary education and provides an introduction to the basic ideas of algebra, including the effects of adding and multiplying numbers, the concept of variables, and the definition of polynomials, along with factorization and determining their roots. In addition to working directly with numbers, algebra covers working with symbols, variables, and set elements. Addition and multiplication are viewed as general operations, and their precise definitions lead to structures such as groups, rings and fields.




Proportionality

In mathematics, two variable quantities are proportional if one of them is always the product of the other and a constant quantity, called the coefficient of proportionality or proportionality constant. In other words, x and y are proportional if the ratio y/x is constant. We also say that one of the quantities is proportional to the other. For example, if the speed of an object is constant, it travels a distance that is proportional to the travel time.

y is directly proportional to x.

If a linear function transforms 0, a and b into 0, c and d, and if the product abcd is not zero, we say a and b are proportional to c and d. An equality of two ratios such as a/b = c/d, where no term is zero, is called a proportion.

Geometric illustration

The two rectangles with stripes are similar; the ratios of their dimensions are horizontally written within the image. The duplication scale of a striped triangle is obliquely written, in a proportion obtained by inverting two terms of another proportion horizontally written.

When the duplication of a given rectangle preserves its shape, the ratio of the large dimension to the small dimension is a constant number in all the copies, and in the original rectangle. The largest rectangle of the drawing is similar to one or the other rectangle with stripes. A ratio of their dimensions, horizontally written within the image at the top or the bottom, determines the common shape of the three similar rectangles. The common diagonal of the similar rectangles divides each rectangle into two superposable triangles, with two different kinds of stripes. The four striped triangles and the two striped rectangles have a common vertex: the center of a homothetic transformation with a negative ratio −k that transforms one triangle and its stripes into another triangle with the same stripes, enlarged or reduced. The duplication scale of a striped triangle is the proportionality constant between the corresponding side lengths of the triangles, equal to a positive ratio obliquely written within the image.

In the proportion a/b = c/d, the terms a and d are called the extremes, while b and c are the means, because a and d are the extreme terms of the list (a, b, c, d), while b and c are in the middle of the list. From any proportion, we get another proportion by inverting the extremes or the means, and the product of the extremes equals the product of the means. Within the image, a double arrow indicates two inverted terms of the first proportion.

Consider dividing the largest rectangle in two triangles, cutting along the diagonal. If we remove two triangles from either half rectangle, we get one of the plain gray rectangles. Above and below this diagonal, the areas of the two biggest triangles of the drawing are equal, because these triangles are superposable. The subtracted areas above and below are equal for the same reason. Therefore, the two plain gray rectangles have the same area: ad = bc.

The mathematical symbol '∝' is used to indicate that two values are proportional. For example, A ∝ B. In Unicode this is symbol U+221D.

Direct proportionality
Given two variables x and y, y is (directly) proportional to x (x and y vary directly, or x and y are in direct variation) if there is a non-zero constant k such that

y = kx.

The relation is often denoted

y ∝ x

and the constant ratio

k = y/x

is called the proportionality constant or constant of proportionality.

• If an object travels at a constant speed, then the distance traveled is proportional to the time spent traveling, with the speed being the constant of proportionality.
• The circumference of a circle is proportional to its diameter, with the constant of proportionality equal to π.
• On a map drawn to scale, the distance between any two points on the map is proportional to the distance between the two locations that the points represent, with the constant of proportionality being the scale of the map.
• The force acting on a certain object due to gravity is proportional to the object's mass; the constant of proportionality between the mass and the force is known as gravitational acceleration.




Since

y = kx

is equivalent to

x = (1/k)y,

it follows that if y is proportional to x, with (nonzero) proportionality constant k, then x is also proportional to y with proportionality constant 1/k. If y is proportional to x, then the graph of y as a function of x will be a straight line passing through the origin with the slope of the line equal to the constant of proportionality: it corresponds to linear growth.
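The defining ratio can be checked numerically; a minimal sketch in Python (the data points below are invented for illustration):

```python
# Direct proportionality: y = k * x for some fixed non-zero k.
# Sample (x, y) pairs lying on the line y = 2.5 * x (illustrative data).
pairs = [(1.0, 2.5), (2.0, 5.0), (4.0, 10.0)]

# The ratio y / x is the same for every pair: the constant of proportionality.
ratios = [y / x for x, y in pairs]
k = ratios[0]
assert all(abs(r - k) < 1e-12 for r in ratios)
print(k)  # 2.5
```

The same check read backwards gives the inverse relation: each x equals (1/k)·y, with proportionality constant 1/k = 0.4.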

Inverse proportionality
The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases if the other variable increases, while their product (the constant of proportionality k) is always the same. Formally, two variables are inversely proportional (or varying inversely, or in inverse variation, or in inverse proportion, or in reciprocal proportion) if one of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant. It follows that the variable y is inversely proportional to the variable x if there exists a non-zero constant k such that

y = k/x.

The constant can be found by multiplying the original x variable and the original y variable. As an example, the time taken for a journey is inversely proportional to the speed of travel; the time needed to dig a hole is (approximately) inversely proportional to the number of people digging. The graph of two variables varying inversely on the Cartesian coordinate plane is a hyperbola. The product of the x and y values of each point on the curve equals the constant of proportionality k. Since neither x nor y can equal zero (if k is non-zero), the graph never crosses either axis.
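The journey example can be sketched directly; the distance value below is invented for illustration:

```python
# Inverse proportionality: time = k / speed, where the constant k is
# the (fixed) distance of the journey.
distance = 120.0  # kilometres (illustrative value)

# Doubling the speed halves the time; the product time * speed stays constant.
for speed in (30.0, 60.0, 120.0):
    time = distance / speed
    assert abs(time * speed - distance) < 1e-9
    print(speed, time)  # 30.0 4.0, then 60.0 2.0, then 120.0 1.0
```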

Hyperbolic coordinates
The concepts of direct and inverse proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates; the two coordinates correspond to the constant of direct proportionality that locates a point on a ray and the constant of inverse proportionality that locates a point on a hyperbola.

Exponential and logarithmic proportionality
A variable y is exponentially proportional to a variable x if y is directly proportional to the exponential function of x, that is, if there exist non-zero constants k and a such that

y = k·aˣ.

Likewise, a variable y is logarithmically proportional to a variable x if y is directly proportional to the logarithm of x, that is, if there exist non-zero constants k and a such that

y = k·log_a(x).
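A useful consequence (not stated above, but standard): taking logarithms of y = k·aˣ gives log y = log k + x·log a, which is linear in x. A small numeric check with arbitrarily chosen constants:

```python
import math

# If y = k * a**x, then log(y) = log(k) + x * log(a): linear in x.
k, a = 3.0, 2.0  # arbitrary positive constants, for illustration only
xs = [0.0, 1.0, 2.0, 3.0]
logs = [math.log(k * a**x) for x in xs]

# Successive differences of log(y) are constant and equal to log(a).
diffs = [b - c for b, c in zip(logs[1:], logs[:-1])]
assert all(abs(d - math.log(a)) < 1e-12 for d in diffs)
print(round(math.exp(diffs[0]), 12))  # recovers the base a: 2.0
```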



Experimental determination
To determine experimentally whether two physical quantities are directly proportional, one performs several measurements and plots the resulting data points in a Cartesian coordinate system. If the points lie on or close to a straight line that passes through the origin (0, 0), then the two variables are probably proportional, with the proportionality constant given by the line's slope.
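This procedure amounts to a least-squares fit constrained through the origin: for measurements (xᵢ, yᵢ), the slope minimizing the squared residuals of y = kx is k = Σxᵢyᵢ / Σxᵢ². A sketch with invented noisy data:

```python
# Fit y = k * x through the origin by least squares: k = sum(x*y) / sum(x*x).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly proportional; true slope about 2

k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(round(k, 4))  # 2.0036 -- close to 2, as the data suggest
```

If the fitted line reproduces the data well, the quantities are probably proportional with constant k; large, systematic residuals argue against proportionality.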

Equation

An equation is a mathematical statement that asserts the equality of two expressions.[1] In modern notation, this is written by placing the expressions on either side of an equals sign (=). For example,

x + 3 = 5

asserts that x + 3 is equal to 5. The = symbol was invented by Robert Recorde (1510–1558), who considered that nothing could be more equal than parallel straight lines with the same length. The first use of an equals sign, equivalent to 14x + 15 = 71 in modern notation, appears in Recorde's The Whetstone of Witte (1557).

Knowns and unknowns
Equations often express relationships between given quantities, the knowns, and quantities yet to be determined, the unknowns. By convention, unknowns are denoted by letters at the end of the alphabet, x, y, z, w, …, while knowns are denoted by letters at the beginning, a, b, c, d, … . The process of expressing the unknowns in terms of the knowns is called solving the equation. In an equation with a single unknown, a value of that unknown for which the equation is true is called a solution or root of the equation. In a set of simultaneous equations, or system of equations, multiple equations are given with multiple unknowns. A solution to the system is an assignment of values to all the unknowns so that all of the equations are true.

Types of equations
Equations can be classified according to the types of operations and quantities involved. Important types include:
• An algebraic equation is an equation involving only algebraic expressions in the unknowns. These are further classified by degree.
• A linear equation is an algebraic equation of degree one.
• A polynomial equation is an equation in which a polynomial is set equal to another polynomial.
• A transcendental equation is an equation involving a transcendental function of one of its variables.
• A functional equation is an equation in which the unknowns are functions rather than simple quantities.
• A differential equation is an equation involving derivatives.
• An integral equation is an equation involving integrals.
• A Diophantine equation is an equation where the unknowns are required to be integers.



One use of equations is in mathematical identities, assertions that are true independent of the values of any variables contained within them. For example, for any given value of x it is true that

x(x − 1) = x² − x.

However, equations can also be correct for only certain values of the variables.[2] In this case, they can be solved to find the values that satisfy the equality. For example, consider the following:

x² − x = 0.

The equation is true only for two values of x, the solutions of the equation. In this case, the solutions are x = 0 and x = 1.
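As a standalone numeric illustration, take the example equation x² − x = 0 (which factors as x(x − 1) = 0): a brute-force search over a small integer range recovers exactly its solutions:

```python
# x**2 - x = 0 factors as x * (x - 1) = 0, so the solutions are x = 0 and x = 1.
# Checking every integer in a small range confirms no others exist there.
solutions = [x for x in range(-10, 11) if x * x - x == 0]
print(solutions)  # [0, 1]
```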


Many mathematicians[2] reserve the term equation exclusively for the second type, to signify an equality which is not an identity. The distinction between the two concepts can be subtle; for example,

(x + 1)² = x² + 2x + 1

is an identity, while

x² + 2x + 1 = 0

is an equation with solution x = −1. Whether a statement is meant to be an identity or an equation can usually be determined from its context. In some cases, a distinction is made between the equality sign (=) for an equation and the equivalence symbol (≡) for an identity.

Letters from the beginning of the alphabet, like a, b, c, …, often denote constants in the context of the discussion at hand, while letters from the end of the alphabet, like x, y, z, are usually reserved for the variables, a convention initiated by Descartes.

If an equation in algebra is known to be true, the following operations may be used to produce another true equation:
1. Any real number can be added to both sides.
2. Any real number can be subtracted from both sides.
3. Both sides can be multiplied by any real number.
4. Both sides can be divided by any non-zero real number.
5. Some functions can be applied to both sides. Caution must be exercised to ensure that the operation does not cause missing or extraneous solutions. For example, the equation y·x = x has two families of solutions: y = 1 (for any x) and x = 0 (for any y). Dividing both sides by x "simplifies" the equation to y = 1, but the second family of solutions is lost.
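The lost-solution pitfall described above can be demonstrated with a small grid search (an illustration only, not a general solver):

```python
# The equation y * x == x is satisfied whenever y == 1 (any x) or x == 0 (any y).
# Dividing both sides by x gives y == 1 and silently discards the x == 0 family.
original = [(x, y) for x in range(-2, 3) for y in range(-2, 3) if y * x == x]
divided = [(x, y) for x in range(-2, 3) for y in range(-2, 3) if y == 1]

lost = sorted(set(original) - set(divided))
print(lost)  # [(0, -2), (0, -1), (0, 0), (0, 2)] -- every lost solution has x == 0
assert all(x == 0 for x, y in lost)
```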

The algebraic properties (1-4) imply that equality is a congruence relation for a field; in fact, it is essentially the only one. The most well known system of numbers which allows all of these operations is the real numbers, which is an example of a field. However, if the equation were based on the natural numbers for example, some of these operations (like division and subtraction) may not be valid as negative numbers and non-whole numbers are not allowed. The integers are an example of an integral domain which does not allow all divisions as, again, whole numbers are needed. However, subtraction is allowed, and is the inverse operator in that system. If a function that is not injective is applied to both sides of a true equation, then the resulting equation will still be true, but it may be less useful. Formally, one has an implication, not an equivalence, so the solution set may get larger. The functions implied in properties (1), (2), and (4) are always injective, as is (3) if we do not multiply by zero. Some generalized products, such as a dot product, are never injective. More information at Equation solving.



[1] "Equation" (http://dictionary.reference.com/browse/equation)., LLC. Retrieved 2009-11-24.
[2] Nahin, Paul J. (2006). Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills. Princeton: Princeton University Press. p. 3. ISBN 0-691-11822-1.

External links
• Winplot: a general-purpose plotter that can draw and animate 2D and 3D mathematical equations.
• Mathematical equation plotter: plots 2D mathematical equations, computes integrals, and finds solutions online.
• Equation plotter: a web page for producing and downloading PDF or PostScript plots of the solution sets to equations and inequations in two variables (x and y).
• EqWorld: contains information on solutions to many different classes of mathematical equations.
• EquationSolver: a webpage that can solve single equations and linear equation systems.

Word problem
In abstract algebra, the term word problem refers to an unrelated concept (the word problem for groups). In mathematics education, the term word problem is often used to refer to any math exercise where significant background information on the problem is presented as text rather than in mathematical notation.[1] As word problems often involve a narrative of some sort, they are occasionally also referred to as story problems and may vary in the amount of language used.[2]

A mathematical problem in mathematical notation:

Solve for J:
J = A − 20
J + 5 = (A + 5)/2

might be presented in a word problem as follows:

John is twenty years younger than Amy, and in five years' time he will be half her age. What is John's age now?

The answer to the word problem is that John is 15 years old, while the answer to the mathematical problem is J = 15 (and A = 35).

Example 2: The Cincinnati Reds are a very good Major League Baseball team. This season, they have won 4 out of 4 games. The Major League Baseball regular season is 162 games. If they continue to win at their current rate, how many games will the Reds win this season? The answer to the word problem is 162 (a 4-out-of-4 rate, extrapolated over all 162 games).
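The notational version above is a two-equation linear system; a sketch of solving it by substitution, using exact rational arithmetic:

```python
from fractions import Fraction

# J = A - 20 and J + 5 = (A + 5) / 2.  Substituting the first into the second:
#   (A - 20) + 5 = (A + 5) / 2  =>  2A - 30 = A + 5  =>  A = 35, hence J = 15.
A = Fraction(35)
J = A - 20
assert J == A - 20 and J + 5 == (A + 5) / 2  # both original equations hold
print(int(J), int(A))  # 15 35
```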



Word problems can be examined on three levels:[3]
• Level a: the verbal formulation;
• Level b: the underlying mathematical relations;
• Level c: the symbolic mathematical expression.
Word problems can be further analysed by examining their linguistic properties (Level a), their logico-mathematical properties (Level b) or their symbolic representations (Level c). Linguistic properties can include such variables as the number of words in the problem or the mean sentence length.[4] The logico-mathematical properties can be classified in numerous ways, but one such scheme is to classify the quantities in the problem (assuming the word problem is primarily numerical) into known quantities (the values given in the text of the problem), wanted quantities (the values that need to be found) and auxiliary quantities (values that may need to be found as intermediate stages of the problem).[4]

Common types
The most common types of word problems are distance problems, age problems, work problems, percentage problems, mixtures problems and numbers problems.

Purpose and use
Word problems commonly include mathematical modelling questions, where data and information about a certain system are given and a student is required to develop a model. For example:
1. Jane has $5 and she uses $2 to buy something. How much does she have now?
2. If the water level in a cylinder of radius 2 m is rising at a rate of 3 m per second, what is the rate of increase of the volume of water?
These examples are not only intended to force the students into developing mathematical models on their own, but may also be used to promote mathematical interest and understanding by relating the subject to real-life situations. The relevance of these situations to the students varies. The situation in the first example is well known to most people and may be useful in helping primary school students to understand the concept of subtraction. The second example, however, does not necessarily have to be "real-life" to a high school student, who may find it easier to handle the following problem: given r = 2 and dh/dt = 3, find d/dt(πr²h). Word problems are a common way to train and test understanding of underlying concepts within a descriptive problem, instead of solely testing the student's capability to perform algebraic manipulation or other "mechanical" skills.
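The calculus problem in the second example reduces to one application of the chain rule: with r fixed, V = πr²h gives dV/dt = πr²·dh/dt. A numeric sketch:

```python
import math

# Rising water in a cylinder of radius r: V = pi * r**2 * h with r constant,
# so the volume grows at dV/dt = pi * r**2 * dh/dt.
r = 2.0      # radius in metres (from the problem statement)
dh_dt = 3.0  # water level rise, metres per second

dV_dt = math.pi * r ** 2 * dh_dt
print(round(dV_dt, 3))  # 12*pi, i.e. 37.699 cubic metres per second
```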

History and Culture
The modern notation that enables mathematical ideas to be expressed symbolically was developed in Europe from the sixteenth century onwards. Prior to this, all mathematical problems and solutions were written out in words; the more complicated the problem, the more laborious and convoluted the verbal explanation. Examples of word problems can be found dating back to Babylonian times. Apart from a few procedure texts for finding things like square roots, most Old Babylonian problems are couched in a language of measurement of everyday objects and activities. Students had to find lengths of canals dug, weights of stones, lengths of broken reeds, areas of fields, numbers of bricks used in a construction, and so on.[5] Ancient Egyptian mathematics also has examples of word problems. The Rhind Mathematical Papyrus includes a problem that can be translated as:

There are seven houses; in each house there are seven cats; each cat kills seven mice; each mouse has eaten seven grains of barley; each grain would have produced seven hekat. What is the sum of all the enumerated things?[6]
In more modern times the sometimes confusing and arbitrary nature of word problems has been the subject of satire. Gustave Flaubert wrote this nonsensical problem, now known as the Age of the captain:
Since you are now studying geometry and trigonometry, I will give you a problem. A ship sails the ocean. It left Boston with a cargo of wool. It grosses 200 tons. It is bound for Le Havre. The mainmast is broken, the cabin boy is on deck, there are 12 passengers aboard, the wind is blowing East-North-East, the clock points to a quarter past three in the afternoon. It is the month of May. How old is the captain?[7]


Word problems have also been satirised in The Simpsons:[8]

Bart: 7:30am an express train traveling 60 miles per hour leaves Santa Fe bound for Phoenix, 520 miles away. At the same time, a local train traveling 30 miles an hour carrying 40 passengers leaves Phoenix bound for Santa Fe. It's 8 cars long and always carries the same number of passengers in each car. An hour later, the number of passengers equal to half the number of minutes past the hour get off, but three times as many plus six get on. At the second stop, half the passengers plus two get off but twice as many get on as got on at the first stop.
Train conductor: Ticket, please.
Bart: I don't have a ticket!
Train conductor: Come with me, boy. [drags Bart off. Numbers circle Bart's head] We've got a stowaway, sir.
Bart: I'll pay! How much?
[the train engineer is Martin, shoveling numbers into the engine.]
Martin: Twice the fare from Tucson to Flagstaff minus two-thirds of the fare from Albuquerque to El Paso! Ha ha ha ha!


[1] L. Verschaffel, B. Greer, E. De Corte (2000) Making Sense of Word Problems, Taylor & Francis
[2] John C. Moyer; Margaret B. Moyer; Larry Sowder; Judith Threadgill-Sowder (1984) Story Problem Formats: Verbal versus Telegraphic. Journal for Research in Mathematics Education, Vol. 15, No. 1 (Jan. 1984), pp. 64-68. http://links.jstor.org/sici?sici=0021-8251%28198401%2915%3A1%3C64%3ASPFVVT%3E2.0.CO%3B2-V
[3] Perla Nesher, Eva Teubal (1975) Verbal Cues as an Interfering Factor in Verbal Problem Solving. Educational Studies in Mathematics, Vol. 6, No. 1 (Mar. 1975), pp. 41-51. http://links.jstor.org/sici?sici=0013-1954%28197503%296%3A1%3C41%3AVCAAIF%3E2.0.CO%3B2-H
[4] Madis Lepik (1990) Algebraic Word Problems: Role of Linguistic and Structural Variables. Educational Studies in Mathematics, Vol. 21, No. 1 (Feb. 1990), pp. 83-90. http://links.jstor.org/sici?sici=0013-1954%28199002%2921%3A1%3C83%3AAWPROL%3E2.0.CO%3B2-8
[5] Duncan J. Melville (1999) Old Babylonian Mathematics. http://it.stlawu.edu/%7Edmelvill/mesomath/obsummary.html
[6] Egyptian Algebra - Mathematicians of the African Diaspora (http://www.math.buffalo.edu/mad/Ancient-Africa/mad_ancient_egypt_algebra.html#rhind79)
[7] Mathematical Quotations - F (http://math.furman.edu/~mwoodard/ascquotf.html)
[8] Andrew Nestler's Guide to Mathematics and Mathematicians on The Simpsons (http://homepage.smc.edu/nestler_andrew/SimpsonsMath.htm)


External links
• Word problems that lead to simple linear equations, at cut-the-knot

Statistical population
A statistical population is a set of entities concerning which statistical inferences are to be drawn, often based on a random sample taken from the population. For example, if we were interested in generalizations about crows, we would describe the set of crows that is of interest. Notice that if we choose a population like all crows, we will be limited to observing crows that exist now or will exist in the future. Geography will probably also constitute a limitation, in that our resources for studying crows are limited. Population is also used to refer to a set of potential measurements or values, including not only cases actually observed but those that are potentially observable. Suppose, for example, we are interested in the set of all adult crows now alive in the county of Cambridgeshire, and we want to know the mean weight of these birds. For each bird in the population of crows there is a weight, and the set of these weights is called the population of weights.

A subset of a population is called a subpopulation. If different subpopulations have different properties, the properties and response of the overall population can often be better understood if it is first separated into distinct subpopulations. For instance, a particular medicine may have different effects on different subpopulations, and these effects may be obscured or dismissed if such special subpopulations are not identified and examined in isolation. Similarly, one can often estimate parameters more accurately if one separates out subpopulations: distribution of heights among people is better modeled by considering men and women as separate subpopulations, for instance. Populations consisting of subpopulations can be modeled by mixture models, which combine the distributions within subpopulations into an overall population distribution.
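The height example can be sketched with Python's standard library; the sample values below are invented purely for illustration:

```python
from statistics import mean

# Invented height samples (cm) for two subpopulations.
men = [178.0, 183.0, 171.0, 180.0, 176.0]
women = [162.0, 168.0, 158.0, 165.0, 170.0]

# Pooling hides the structure: the overall mean sits between the group means,
# which is why modelling the subpopulations separately is more informative.
overall = men + women
print(round(mean(men), 1), round(mean(women), 1), round(mean(overall), 1))  # 177.6 164.6 171.1
assert mean(women) < mean(overall) < mean(men)
```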

External links
• Statistical Terms Made Simple [1]

[1] http://www.socialresearchmethods.net/kb/sampstat.htm

Article Sources and Contributors


Chan, Digby Tantrum, Dirtylittlesecerts, Dkusic, Drake Redcrest, Dreadstar, Dreish, Dricherby, Dryke, Duperman01, Dwheeler, Dysprosia, Edgar181, El C, Elassint, Elroch, Epbr123, Eric119, Esanchez7587, Extransit, Faradayplank, Fayimora, Fieldday-sunday, Finell, Fluffernutter, Fnerchei, Fredrik, FrozenMan, Fæ, GT1345, Gaia Octavia Agrippa, Gaius Cornelius, Gakusha, Gauss, Georgia guy, Gfoley4, Gggh, Giftlite, Graham87, GreekHouse, GromXXVII, Grover cleveland, Guardian72, Hairhorn, Harland1, Henrygb, Hippietrail, Hornlitz, Hut 8.5, Igoldste, Imasleepviking, InvisibleK, Iridescent, Isaac909, Ivan Shmakov, J.delanoy, JSpung, Ja 62, JackSchmidt, Jake Nelson, Jarmiz, Jauhienij, Jfire, Jh12, Jiddisch, Jim.belk, Jj137, Jleedev, Jncraton, Johnferrer, Jojit fb, Jonkerz, Josh Parris, Jrobbinz1, Jshadias, Julian Mendez, Jumbuck, KSmrq, Kadri, Kahriman, Kanags, Karadimos, Katzmik, Kawehi 65, Keilana, King of Hearts, Kingpin13, KnowledgeOfSelf, Knutux, Koeplinger, Ksero, KuduIO, Kuru, Kwamikagami, L Kensington, La Pianista, Lambiam, LayZeeDK, Leonard Vertighel, Leszek Jańczuk, Liempt, Lights, Linas, LizardJr8, Lolimakethingsfun, Loren.wilton, Lowellian, Lukeelms, Lupin, MacMed, Macy, Marc van Leeuwen, Markus Kuhn, Mathenaire, MattGiuca, Mb5576, McSly, Mendaliv, Mentifisto, Mephistophelian, Merlion444, Metatron's Cube, Mets501, Michael Hardy, Michael Slone, Mike Segal, Miquonranger03, Moeron, Moogwrench, Msh210, NSR, Nakon, NawlinWiki, Nikai, Notinlist, Obradovic Goran, Odie5533, Oleg Alexandrov, Olegalexandrov, OmegaMan, Omerks, Oneiros, Optikos, Oxymoron83, P0mbal, Panoramix, ParisianBlade, Paul August, Pb30, Philip Trueman, PhilippWeissenbacher, Pinethicket, Pleasantville, PookeyMaster, Pretzelpaws, Pufferfish101, Quaeler, Quota, Qxz, RJFJR, RJaguar3, Radon210, Raja Hussain, Rholton, Rich Farmbrough, Riitoken, Rob Hooft, Rumping, SJP, SLSB, Saeed, Salix alba, Salsa Shark, Saric, Sasquatch, Sbowers3, Sceptre, Scgtrp, Seaphoto, Seba5618, Seresin, Shadowjams, Sillychiva593, Sin 
Harvest, Snigbrook, Spindled, Splintax, Stapler9124, Stephenb, Stevertigo, Sticky Parkin, Stuart Presnell, Stwalkerster, SunCreator, SuperMidget, Sweetnessman, T.J.V., TakuyaMurata, Tanyakh, Tardis, Tbhotch, TeleComNasSprVen, The Flying Spaghetti Monster, The Thing That Should Not Be, TheRingess, Theda, Timothy Clemans, Tobias Bergemann, TomdFr, Triwbe, Trovatore, Troy 07, Ttwo, Unyoyega, Vegard, Velella, Vsmith, Wafulz, Wikipe-tan, Wikipelli, Wimt, Wknight94, Wolfrock, Wyatt915, X!, XJamRastafire, Xantharius, Youssefsan, Zack, Zarcillo, ZooFari, Zundark, 735 anonymous edits

Article Sources and Contributors
Monomial  Source:  Contributors: Abovechief, Aisaac, Alansohn, Albmont, AndrewHowse, Anne Bauval, Antandrus, AugPi, Bloodshedder, Brianga, Calle, Charles Matthews, Chenxlee, Classicalecon, Closedmouth, Culix, D.Lazard, Danski14, DerHexer, Dogah, Edudobay, Epbr123, Freakoclark, Fresheneesz, Fropuff, Gf uip, Giftlite, Glane23, Gryakj, Iwnit, J.delanoy, JackSchmidt, Jeroendv, Johngcarlsson, Jokes Free4Me, JustJuthan, Kjmathew, Lanthanum-138, Linas, Logan, LutzL, MJ94, Manu bcn, Marc van Leeuwen, Marner, Martynas Patasius, Materialscientist, Mato, MatthewJMcMahon, Mets501, Mhss, Michael Hardy, Michael Slone, Moink, MonoAV, Mrjoerizkallah, Musiphil, Natalie Erin, Nbarth, Ncmvocalist, NewEnglandYankee, Octahedron80, Oleg Alexandrov, Oliphaunt, Peter Karlsen, Philip Trueman, Rich Farmbrough, Rymich13, Shadowjams, Stupid Erving, The Anome, Tom harrison, TomyDuby, Vrins, Welsh, WikHead, Wikipe-tan, WindRunner, Wowlookitsjoe, Лев Дубовой, 97 anonymous edits Binomial  Source:  Contributors: 16@r, Andres, Anonymous Dissident, Arakunem, Ashenai, AugPi, Aymatth2, Blaxthos, Bob.v.R, Brim, Can't sleep, clown will eat me, Cdang, D.Lazard, DEMcAdams, Dalegab, Dogah, Eleniel, Fresheneesz, Frodet, GB fan, Gerbrant, Giftlite, Icairns, JForget, JNW, Jatopian, Jleedev, Light current, LukeTurao, MFH, Maddie!, Maksim-e, Matyos1, Mets501, Michael Hardy, Michael Slone, Mikeo, Mingcc, Mr Molestah, MrOllie, Oleg Alexandrov, Paul August, Philip Trueman, Philomantis, Pne, Pollinator, RDBury, Romanm, Sayama, SchfiftyThree, Shanes, Shao, Stephenb, Sterrys, Tedernst, Tide rolls, Vrenator, Wayiran, West.andrew.g, Wroscel, Wtmitchell, Zundark, 99 anonymous edits Polynomial  Source:  Contributors: 0, 007pkier, 123ilikecheese, 139pm, 16@r, 2D, 2help, 4, 5*deluxe, A Stop at Willoughby, Abd, Abdull, Abisharan, Acalamari, Acannas, Acroterion, Adam majewski, AddisonRichard, Aitias, Akamad, AlanUS, Alexf, Alink, Andre Engels, Andrei Stroe, AndrewHowse, Anonymous Dissident, Ap, Apeiron, Arthena, Arvindn, 
Avoided, Awestcot, AxelBoldt, Aymatth2, Az1568,, Bananaman01, Bcherkas, BehnamFarid, Belly flop 96, BenFrantzDale, Bhny, Bitoffish, Bleh999, Bo Jacoby, Bongwarrior, Boredzo, Brad Eleven, Brentt, Brian the Editor, Bryan Derksen, Bullzeye, CBM, CRGreathouse, Calmer Waters, Caltas, Camw, Can't sleep, clown will eat me, Carinemily, CecilTyme, Cek, Chajadan, Charles Matthews, Chenxlee, Christian75, Cic, Clarince63, Colossus, Connie3221, Conversion script, Courcelles, Crazyaru, Cremepuff222, Crisis, CryptoDerk, Cybercobra, D.Lazard, D.M. from Ukraine, D17olphi17n, D6, DARTH SIDIOUS 2, DEMcAdams, DMacks, DVdm, DaltonSerey, Dan Hoey, Darkfight, David Eppstein, David Martland, Davidfg, Dcoetzee, Ddxc, Delldot, Demmy, Derpnuggets, Dessources, Dino, Discospinster, Doctormatt, DonDiego, Download, Dreadstar, Dysprosia, Egg, El C, Enochlau, Epbr123, Erebus Morgaine, Evercat, Ewen, Excirial, FCSundae, Falcon8765, Favonian, Feinoha, Fgnievinski, Fiberglass Monkey, Fieldday-sunday, Flewis, Frankenpuppy, Franklin.vp, Fredrik, FredrikMeyer, Fresheneesz, Frostbitten panda, Fæ, Gaiacarra, Gail, Gaius Cornelius, Gandalf61, Garde, Gazpacho, Gesslein, Giftlite, Gombang, Googl, GordonUS, Grafen, Graham87, Grstain, Haihe, Hajhouse, Happy-melon, Harsh parmar, Harvestdancer, Hdreuter, Hello32020, Henning Makholm, Henrygb, Herbee, Horselover Frost, Hotsiomai, Hvestermark, II MusLiM HyBRiD II, Iamstupido, Icairns, Iridescent, IronGargoyle, Isheden, Ivan Akira, J.delanoy, JRSpriggs, Jagged 85, Jamesdaff, Jamme123, Jan Hidders, JanCK, Jaredwf, Jccwiki, Jclemens, Jean de Parthenay, Jeronimo, Jim1138, JimVC3, Jittat, Jitterro, Jk2q3jrklse, Jmlk17, JoergenB, Joshuajohnson555, Jrtayloriv, Juliancolton, KPH2293, Kaal, Kablammo, Kan8eDie, Kappa, Katieh5584, Katovatzschyn, Kerotan, Kidburla, Kilo-Lima, KirbyJr, KnowledgeOfSelf, Kpengboy, Kungming2, Kuririmo, La Pianista, Lambiam, Lanthanum-138, Larry R. 
Holmgren, LeilaniLad, Leuko, LiDaobing, Lightmouse, Linas, Lklundin, Loadmaster, Logan, Lostella, LutzL, M2Ys4U, MER-C, MFH, Marc Venot, Marc van Leeuwen, MarkS, Martynas Patasius, MathMartin, Matthewmoncek, MattieTK, Mawfive, Maxal, McSly, Mecanismo, MeekSaffron, Megaboz, Merqurial, Michael B. Trausch, Michael Hardy, Michael Slone, Michaelrccurtis, Miguel, Mikez, Mingcc, Minimac's Clone, Mir76, Mitch patrikus, Mmxx, Moeron, Momet, Mon4, Monkey12345678910, Monoduo, MrAngy, MrOllie, Mrjoerizkallah, Msm, Msmdmmm, Mygerardromance, Müslimix, Nagytibi, Natl1, NawlinWiki, Ncik, Neilc, Neo of ZW, Nic bor, Nilfisk38, Nixdorf, Notedgrant, Nousernamesleft, Obradovic Goran, Oleg Alexandrov, OlexiyO, Omnipaedista, OneWeirdDude, OrgasGirl, Panzuriel, Paolo.dL, Patrick, Paul August, Paulmiko, Pbroks13, Pchov, Pentti71, Persian Poet Gal, Peterlin, Pharaoh of the Wizards, Phgao, PhilKnight, Philip Trueman, Pictwe, Plastikspork, Plugwash, Point-set topologist, Poor Yorick, Populus, Possum, Professor Fiendish, PseudoSudo, Puchiko, QueenCake, Qwertyui85, RA0808, RDBury, Ragib, Reyk, Richard L. 
Peterson, Rick Norwood, Rockfang, Rockstar915, Ronhjones, RoryReloaded, Ryguasu, Sakkura, Salix alba, Salvio giuliano, Sam Staton, Saturnight, Sayama, SchfiftyThree, ScottishDurge, Seaphoto, Selvik, Shaunmoss, Shoessss, Silly rabbit, Simetrical, Simranjitsinghchandhok, Slon02, Smcinerney, Some jerk on the Internet, Someguy1221, Special-T, Spodi, SpuriousQ, SteveMcKay, Stevertigo, Stormwyrm, SveinHarris, Szabolcs Nagy, Tarquin, The Thing That Should Not Be, The Utahraptor, TheSwami, Theking17825, Thetorpedodog, Timmay3221, Tom Lougheed, Tom.freeman, Tommo 87, Tommy2010, Traxs7, Tyrrell McAllister, Ubardak, Unyoyega, Utcursch, Vanadium, Vanished User 0001, Vipinhari, WDavis1911, Waltpohl, Wayne Slam, Whisky drinker, Wickz, Wikipelli, Willp4139, Wshun, Wtmitchell, Wwoods, X42bn6, Xrjunque, Yojo44, Youssefsan, Yuubinbako, Zariane, Zhints, Zoid62, Zuzzerack, Σ, ಥ, 697 anonymous edits Coefficient  Source:  Contributors: 33rogers, A Hauptfleisch, Alansohn, Amtiss, Biblbroks, Bkell, Bkerkanator, Bongwarrior, Bovlb, Braveorca, Capricorn42, Charles Matthews, Ciphers, CrazyChemGuy, Cronholm144, Crosbiesmith, Cybercobra, Darkfight, Darolew, DavidHouse, Decltype, Deviator13, Diannaa, Dogah, E2eamon, Eequor, Enuja, Epbr123, Ffransoo, Fresheneesz, GeeJo, Gene Nygaard, Gene93k, Gggh, Giftlite, Granburguesa, Hadal, Haemo, Hdt83, Heron, Hotcrocodile, Hynca-Hooley, IMacWin95, IRP, IdealOmniscience, Igoldste, Iokseng, JaGa, JackSchmidt, Jameshfisher, Jarry1250, Lantonov, Mad Max, Magister Mathematicae, Maksim-e, MarSch, Marc Venot, Marc van Leeuwen, Mets501, Mgtrevisan, Michael Hardy, Michael Slone, Michael Snow, Mormegil, Msh210, My76Strat, Octahedron80, Oddity-, Paolo.dL, Paul August, Pharaoh of the Wizards, Piano non troppo, Pip2andahalf, Proofreader77, Rade Kutil, Renaissancee, Res2216firestar, Rgdboer, Ronhjones, S.L., ST47, SchmuckyTheCat, Shreshth91, Sidious1701, Signalhead, Silverfish, Someguy1221, Speller26, Spliffy, SwisterTwister, The Anome, TheGreyHats, Thingg, Tide rolls, 
UberScienceNerd, Uknighter, Uncle Milty, Wikimichael22, Xanstarchild, ‫102 ,ﻋﻠﯽ ﻭﯾﮑﯽ‬ anonymous edits Simplification  Source:  Contributors: Algebraist, AnnaFrance, CBM, Dfrg.msc, Father Goose, Fratrep, Gdr, Gregbard, Grumpyyoungman01, Jim.belk, Jiy, Markus Prokott, Nicko6, Oleg Alexandrov, Owen, Silvergoat, Smjg, 6 anonymous edits Expression  Source:  Contributors: Abdull, Acdx, Alfie66, Anaxial, AndrewHowse, Animum, Anonymous Dissident, Aqwis, Arthena, Bensaccount, Bfigura, BiT, Blanchardb, CRGreathouse, Calabe1992, CanadianLinuxUser, Capitalist, Centrx, Charles Matthews, Chris5858, ChrisKalt, ClosedEyesSeeing, Cybercobra, DARTH SIDIOUS 2, Damja, Danakil, David Shay, Discospinster, Dogah, Dysprosia, E Wing, Eball, Ed g2s, Esznyter, FilipeS, Fredrik, Genius101, Gesslein, Giftlite, Gingerandfred, Gregbard, HJ Mitchell, Hallows AG, Hatmaskin, Helix84, Hrishikesh.24889, Hut 8.5, I dream of horses, Ichudov, Igor Yalovecky, Isheden, Ixfd64, JimVC3, Jitse Niesen, Johnanth, Jusdafax, Kilva, Madmardigan53, MarSch, Marc van Leeuwen, Martinkunev, Master of Puppets, Materialscientist, MattGiuca, Melchoir, Mendaliv, Mets501, MiNombreDeGuerra, Michael Hardy, Michael Slone, Mikeo, Mr Gronk, Nathan B. 
Kitchen, Nbarth, NeoJustin, Nick Number, Nsaa, Olaf, Oleg Alexandrov, Patar knight, Patrick, Patwotrik, Paul August, Philip Trueman, Pmanderson, RJHall, Reaper Eternal, RichardF, Rick Norwood, Ronhjones, Salgueiro, Salix alba, Sankalpdravid, SchfiftyThree, SeanJones8191, SharkD, ShelfSkewed, Silverfish, Skater42, Stdazi, Terrek, The High Fin Sperm Whale, Tide rolls, Tobias Bergemann, Tommy2010, W1k13rh3nry, Welsh, Wikipe-tan, Wikipelli, Yogsrisee, Zack wadghiri, ‫ 512 ,ﺳﻌﯽ‬anonymous edits Root of a function  Source:  Contributors: A dullard, Abdull, Aleph4, Anonymous Dissident, Anti-min, Arbitrarily0, ArglebargleIV, AvicAWB, AxelBoldt, Banus, BenFrantzDale, Bob.v.R, Cadillac, Cantons-de-l'Est, Cethegus, Culix, David Johnson, David Shay, Dogah, Duoduoduo, Dysprosia, ESkog, Elwikipedista, Erud, Evil saltine, Ferengi, Giftlite, Glenn, Haham hanuka, He Who Is, Ibest104, Icairns, IdealOmniscience, JJ Harrison, Jafet, JamesBWatson, Jaredwf, Jim.belk, KDesk, Keakealani, LOL, Lhf, Linas, LutzL, Marc Venot, Maxim, Melchoir, Mets501, Mhaitham.shammaa, Michael Hardy, Mike Segal, Mindmatrix, Misantropo, Mpvide65, MrOllie, Mschlindwein, Mxn, Narssarssuaq, Nickalh50, NielsHietberg, Ninjagecko, Oleg Alexandrov, Pacifism91, Patrick, Paul August, Pbroks13, PhJ, Pred, Rigadoun, Rijkbenik, Rmashhadi, Sabbut, SimonP, Smcinerney, Stevertigo, Tango, Tarquin, Tearlach, Tobias Bergemann, Tomas e, Turgidson, UninvitedCompany, WadeSimMiser, Wereon, Wshun, XJamRastafire, Xario, АлександрВв, 60 anonymous edits Exponentiation  Source:  Contributors: 3ICE, 493609453mglkdfngslkn, 9258fahsflkh917fas, A.K.A.47, AJRobbins, AVIATORADIL, AaronWL, Abu-Fool Danyal ibn Amir al-Makhiri, AdamSTL, Agnosonga, Aleksa Lukic, Aleph4, Algebraist, Allstarecho, Andre Engels, Andy1618, Andymc, Anonymous Dissident, Anthony Appleyard, Anton Mravcek, Apokrif, Aquilosion, Arjun01, ArnoldReinhold, Arthur Rubin, Ashenoy, AxelBoldt, AzaToth, BaboonOfTheYard, Barak Sh, Barticus88, Beetstra, Bemoeial, Ben Ben, Benjaminong, 
Bhadani, BiT, Bkell, BlackAndy, Blake-, Bmendonc, Bo Jacoby, Bob.v.R, BoltClock, Bongwarrior, BrokenSegue, Buki ben Yogli, Burga, C0617470r, CBM, CBM2, CDN99, CRGreathouse, Calréfa Wéná, Caramdir, Cgs, Chealer, Chrislk02, Christian List, Christian75, Circeus, Clementina, Cmglee, Coluberbri, ComplexZeta, Count Truthstein, CryptoDerk, Cybercobra, Cyex116, DaGizza, Dan Hoey, DanP, Daqu, Daren1997, Darth Panda, David Biddulph, DavidCBryant, Dcljr, Diego Moya, Dirac1933, Discospinster, Dmcq, Doctor sw27, Doctormatt, Dolda2000, Dr. John D. McCarthy, Drahflow, Drpaule, Drumsrcool3, Dudemanpeace, Dwheeler, Ebony Jackson, EdC, Egmontaz, Ehrenkater, El C, Eliko, EncMstr, Epbr123, Epsilon0, Error792, Fabartus, FactSpewer, Ferrous55882, Fgnievinski, FilipeS, FloydRTurbo, Foolip, Fredrik, FredrikMeyer, Fropuff, FrozenMan, Fyyer, GESICC, Gamextheory, Gandalf61, Garbo187, Gauss, Gdr, Gene Ward Smith, Geometry guy, Georgia guy, Gesslein, Ghaly, Giftlite, Gigacephalus, Gpvos, Graham87, Grover cleveland, Gsp8181, Gulliveig, HOOTmag, Hairy Dude, Headbomb, Henning Makholm, Henrygb, Herbee, Hexmaster, Hi you, Hirak 99, Hmains, Humilulo, Iameukarya, Ianmacm, Ihope127, Ikjune Yoon, Iohannes Animosus, Irisrune, Isheden, J.delanoy, JRSpriggs, Jackol, Jagged 85, Jakemalloy, Jayen466, Jic, Jiddisch, Jim.belk, Jimp, Jmath666, Joeylawn, Josh Parris, Jowa fan, Jshadias, Jumbuck, KSmrq, KYLEMONGER, Kaboldy, Kaimiddleton, Kevleyski, Kirbytime, Kjarda, Klparrot, KnightMove, Koavf, Kozuch, Ksbrown, Kwenchin, L Kensington, Lagarto, Lambiam, Lance6968, Lanthanum-138, Larsobrien, Lightmouse, Lildoodle, Loadmaster, Longhair, LouisWins, MacMed, Magister Mathematicae, Majopius, Marc van Leeuwen, Marek69, Markryherd, Martynas Patasius, Materialscientist, Math.geek3.1415926, MathsIsFun, Matthew Yeager, Maurice Carbonaro, Melchoir, Meni Rosenfeld, Miaow Miaow, Michael C Price, Michael Hardy, Michael Slone, Michkol1, Mikebrand, Mild Bill Hiccup, MisterSheik, Mlogic, Mobile Snail, Mr2001, Mrob27, Ms2ger, 
Mwtoews, Myrizio, N01b33tr, NOrbeck, Neshatian, Nickalh50, Nickdc, Nicodh, Nicolas Capens, Nitrolicious, NorwegianBlue, Nosferatütr, Octahedron80, Ojigiri, Oleg Alexandrov, Oliver Pereira, Olivier, Ooble, Osrevad, Paolo.dL, Patrick, Paul August, Philip Trueman, PhotoBox, Pizza Puzzle, PizzaMargherita, Pjrm, Plasticup, Pleroma, Pol098, Pt, Quisquillian, Quondum, Quuxplusone, Qwyrxian, R'n'B, RJASE1, RadioFan, Raijinili, Ralph Corderoy, Randomblue, RedWolf, RexNL, Ricardo sandoval, Rich Farmbrough, Rishmiester, Rjwilmsi, Robo37, RockMFR, Ronyclau, Rotem Dan, Rror, Ryan Reich, ST47, Salix alba, Sam Derbyshire, Saros136, Sbyrnes321, Scarian, Sheeana, Shlomital, Shortribs678, Shreevatsa, Sim, Simoneau, Slashdevslashtty, Smack, Spinningspark, Spoon!, Stevenj, Stevertigo, Svick, Symane, TedPavlic, Tedius Zanarukando, Thatguyflint, Thorwald, Thruston, Tide rolls, Tobby72, Tobias Bergemann, Toby Bartels, Tohd8BohaithuGh1, Tomgally, Torc2, TrippingTroubadour, Trovatore, Turlo Lomon, UU, Ultra.Power, Uncle G, VKokielov, Vadmium, Vanished user 34958, VectorPosse, VengeancePrime, Verdy p, Vesta, WardenWalk, Wfisher, Wikieditor06, Wikipelli, Will2k, Wingless, Wshun, Xantharius, Yeom0609, Yoichi123, Yoyoirox, Zaslav, Zenohockey, Zero2ninE, Zippedmartin, Ztobor, Zundark, Zven, ‫ 184 ,ﻋﻠﯽ ﻭﯾﮑﯽ ,ﺋﺎﺳﯚ ,.דניאל ב‬anonymous edits Symmetric function  Source:  Contributors: Konradek, Marc van Leeuwen, Michael Hardy, Nbarth, PMajer, Udoh, 3 anonymous edits


Article Sources and Contributors
Pre-algebra  Source:  Contributors: 1to0to-1, Akldawgs, AndrewHowse, Arakunem, Bfigura, Blehfu, Brandon5485, Charles Matthews, DARTH SIDIOUS 2, Dasani, Dcljr, Deltabeignet, Derrty2033, Don4of4, Dtrebbien, Excirial, Goodvac, IRP, Ibbn, Ichudov, Iqninja, Jitse Niesen, Makemi, Megaman en m, Melchoir, Oleg Alexandrov, PlayaFly101, Plinky, RadioKirk, SchuminWeb, Seqsea, Steorra, TheNewPhobia, Tobias Bergemann, Trovatore, Vary, Waybuilder, 56 anonymous edits Algebra of sets  Source:  Contributors: Alex10023, Arthur Rubin, AugPi, Charles Matthews, Corruptcopper, DonMTobin, Doshell, Esse, Hans Adler, Isnow, Jackzhp, Jamelan, Josephpetty100, Kupirijo, Matt Popat, MegaSloth, Oleg Alexandrov, Palltrast, Paul August, Salix alba, Set theorist, Slipstream, Splintercellguy, The Anome, The enemies of god, Tobias Bergemann, Trovatore, WhisperToMe, Wile E. Heresiarch, Woohookitty, 40 anonymous edits Algebraic structure  Source:  Contributors: Abcdefghijk99, Adriatikus, Aleph4, Alexey Feldgendler, Algebraist, AnakngAraw, AxelBoldt, Beroal, Berria, Bgohla, CRGreathouse, Charles Matthews, Charvest, Chromaticity, Crasshopper, Cronholm144, Danny, David Eppstein, Dmharvey, Don Gosiewski, DustinBernard, Dysprosia, Estevoaei, FlyHigh, Franamax, Fropuff, G716, Galaxiaad, Giftlite, Goldencako, Grafen, Grubber, Hans Adler, HeikoEvermann, Ht686rg90, I paton, Icep, Icey, Ideyal, IronGargoyle, Jakob.scholbach, Jayden54, Jeepday, Johnfuhrmann, Jorgen W, Josh Parris, Jshadias, Kuratowski's Ghost, Kusma, Lethe, Marvinfreeman, MaxSem, Melchoir, Mets501, Michael Hardy, Michal.burda, Modify, Msh210, Myasuda, Mysdaao, Naddy, Nascar1996, Netrapt, Nishantjr, Obradovic Goran, Olaf, Paul August, PeterJones1380, Philip Trueman, Puchiko, Quondum, RedWolf, Revolver, RexNL, Reyk, Rgdboer, Salix alba, Simon12, Sixtyninefourtyninefourtyfoureleven, SoroSuub1, Spiderboy, Staecker, Szquirrel, The enemies of god, Tobias Bergemann, Toby Bartels, Tomaxer, Tompw, Tristanreid, Trovatore, Varuna, Waltpohl, Wshun, 
Zinoviev, Zundark, ‫ 712 ,ﻣﺎﻧﻲ‬anonymous edits Decimal  Source:  Contributors:, ABCD, AGToth, Abu-Fool Danyal ibn Amir al-Makhiri, Aetheling, Ahoerstemeier, Aitias, Alan.A.Mick, Alexsis97, Alexwright, Altenmann, Alzuz, Andre Engels, Andrejj, AndrewKepert, Andrewa, AnonMoos, Anti wikipedia association(5J NYPS), Arda Xi, Arisa, Arthur Rubin, Au And Cs, BBB, BRUTE, Becritical, Ben Ben, Ben Standeven, Benlisquare, Blanchardb, Bobo192, Boboxing666, Bobtastic13, Bongwarrior, Brion VIBBER, CJLL Wright, CRGreathouse, Calabe1992, CanadianLinuxUser, Casper2k3, Charles Matthews, Charliebrown928, Chasmorox, Chmee2, Chris the speller, CiaPan, Ciphergoth, Cliff, Colonies Chris, Conversion script, CosineKitty, CrazyChemGuy, DARTH SIDIOUS 2, Davehi1, Dbachmann, Deeptrivia, Dejan Jovanović, Dendodge, Derek Ross, Dewan357, DocWatson42, Dodo48, Dominus, Domthedude001, Don4of4, Dpr, Drbalaji md, Dreadstar, Drilnoth, Drj, Dysprosia, Egil, Elipongo, Eman2129, Erath, Erodium, Evanseabrook63, Everyking, Excirial, Favonian, Fbifriday, Fetchcomms, FilipeS, Finalius, Flyguy649, Francs2000, Fredrik, Funandtrvl, Fuzzy Logic, G Prime, Gauravchauhan4, Gentlemath, George The Dragon, Giftlite, Gilliam, Gisling, Glane23, Glenn L, Greatgavini, Guanaco, Gun Powder Ma, Gzuufy, Hanacy, Haxor9, Hede2000, Henry Godric, Henrygb, Hoary, Hydrogen Iodide, IacobusUrokh, Ianmacm, InternetHero, Irisrune, IronGargoyle, Iseeaboar, J.delanoy, JCOwens, JFreeman, JSR, JaGa, Jackelfive, Jagged 85, Jamesontai, Jasper Deng, Javit, Jchthys, Jcrocker, Jeffrey Mall, Jefu, Jiddisch, Jitse Niesen, Johnhardcastle, Johnny9i6, Jose-Santana, Josh Parris, Joyous!, Jtir, Jusdafax, K. Annoyomous, Kaobear, Karl E. V. 
Palmen, Karl Palmen, Katzmik, Keka, Kevin Lamoreau, Khisanth, Kostisl, Kurykh, Kwamikagami, L Kensington, Lambiam, Larspedia, LeaveSleaves, Lenoxus, LibLord, Linas, Ling Kah Jai, Lotje, Lozzowilko, LuigiManiac, Luna Santin, MER-C, MFH, Mac, MacMed, Majestic-chimp, Majopius, Mandarax, Marcopolo112233, Masgatotkaca, MathGemstone, MathsIsFun, Mboverload, McGeddon, Meco, Melchoir, Metric, Mfc, Mhaitham.shammaa, Michael Hardy, Michaelgaul96, Mike6271, Mikus, Montrealais, Morwen, Mschlindwein, Mserror, Mwilso24, Nageeb, Nagy, Najro, Nakon, Nanami Kamimura, Nanino, Nauticashades, Neko-chan, Netralized, NewEnglandYankee, Nichalp, Noe, NrDg, Nø, OC Ripper, OKeh, Oleg Alexandrov, Oliver202, OrgasGirl, PL290, Pakaran, Palapa, Patrick, Paul August, Pbroks13, PericlesofAthens, Philip Trueman, Pinethicket, Pinky34567, Poccil, Polylerus, Pranathi, Prashanthns, Prosfilaes, Psb777, PuzzletChung, Quadell, Quarl, Quota, R. S. Shaw, RJASE1, Radagast3, Rajpaj, Raymond Feilner, ResearchRave, RexNL, Rgamble, Riana, Rich Farmbrough, Richard cocks, Robertolyra, Robo37, Rorro, Roylee, Rpresser, Ruud Koot, Ruy Pugliesi, SamNYK, Say-whaaahhh, SchfiftyThree, Sekkk, Shadowjams, Shanedidona, Shreevatsa, Silverxxx, Sligocki, Slowking Man, Sopoforic, Stephan Leeds, Stephenb, Stormie, Syncategoremata, TAKASUGI Shinji, Taroaldo, Tempodivalse, Terrek, The Evil IP address, The Rambling Man, The editor1, Thenub314, Tide rolls, Timaru, Tkuvho, Tobias Hoevekamp, TobyDZ, Tonybam, Tostadora, Trevor MacInnis, TrippingTroubadour, Triskaideka, Triwbe, Trovatore, Tsemii, Turlo Lomon, Utcursch, WAREL, Wavelength, Welsh, Wendy.krieger, Whaoshgh, WikHead, Will2k, William Avery, Xanthoxyl, Xmoogle, Xtrapwr, Zachull, Zereshk, ZmiLa, Zzuuzz, ‫ 544 ,ﻋﻠﯽ ﻭﯾﮑﯽ‬anonymous edits Multiplication  Source:  Contributors: -lulu-, .mau., 101nater,, 9258fahsflkh917fas, Aaron Schulz, Adam majewski, Aharter, Ahoerstemeier, Aitias, Alan U. 
Kennington, AlexiusHoratius, Am Fiosaigear, AndrewHowse, Angrysockhop, Anonymous Dissident, ArnoldReinhold, Arthur Rubin, Auntof6, AxelBoldt, Azrael81, Barticus88, BenFrantzDale, Bennylin, Berland, Betterusername, Bidabadi, Bjankuloski06en, Bluemask, Bo Jacoby, Bongwarrior, Brockert, Bryan Derksen, Bugtrio, Burga, CRGreathouse, Cassowary, Cenarium, Charles Matthews, Chris the speller, Chris.liberatore, Christian List, Chuck Marean, Chuunen Baka, Ciphers, ClockworkSoul, Computer97, Conversion script, Corey Green, Author, CryptoDerk, Cybercobra, Cyp, DAMurphy, DanEdmonds, Daniel, Daren1997, David Martland, Dcljr, Delirium, Demmy, DemonThing, Dfrg.msc, Dicklyon, Discospinster, Dixtosa, Djmips, Dmcq, Dominus, EdC, Edhwiki, EikwaR, Elassint, Eric119, Esanchez7587, Fastily, Ffffffffffffffffffffffff, Fluri, Forever Dusk, Frankenpuppy, Fredrik, Fresheneesz, Furrykef, Gareth Jones, GentlemanGhost, Georg Stillfried, GeorgePauley, Giftlite, Gil Dawson, Gisling, Glane23, GlenPeterson, Gurch, HamburgerRadio, Hapsiainen, HellShadowKing, Herbee, Hiddenfromview, HongQiGong, Hornlitz, Hotdebater, Hyacinth, Indeed123, Innv, InternetMeme, Irwangatot, Isopropyl, J.delanoy, J0equ1nn, Jagged 85, Jarhead422, Jersey Devil, Jiddisch, Jim.belk, Jimp, Jimpendo, JinJian, Jitse Niesen, John Cardinal, John Hancock 9796, Josh Parris, Joyous!, Jshadias, Jsk8111, Jusjih, Jzimba, Karl 334, Kieff, Kirrages, Kiwehtin, KnowledgeOfSelf, Kolbasz, Kosza, Kri, Kyng, Lambiam, Langbein Rise, Larry_Sanger, Laudaka, Leifisme, Lensovet, Leuko, Liftarn, Linas, LinkWalker, Loadmaster, Lugnuts, Lwilliams79, Magicmaths, Majopius, ManuelGR, Markr9, MarnetteD, Martynas Patasius, MathsIsFun, Melchoir, Mentifisto, Meshach, Miaow Miaow, Michael Hardy, Michael Slone, Mmxx, Moreschi, Mr.Z-man, MrOllie, Mygerardromance, N1ugl, N8chz, Naliboki, Nightgamer360, Noformation, Octahedron80, Ohnoitsjamie, Olaf Davis, Oleg Alexandrov, Oli Filth, Omicronpersei8, OrgasGirl, Ortolan88, Oxymoron83, PV=nRT, Papa Roach 619, Parerga, 
Patrick, Paul August, Paul Erik, PeterStJohn, Phgao, Pie Man 360, Piet Delport, Pizza Puzzle, Ploppery555, Poslfit, Prari, Prolog, Punkrocker459, Qxz, Qz, RJASE1, RadioFan, Randomblue, RebelzGang, Reindra, Reliableforever, RexNL, Rjwilmsi, Rodasmith, Roice3, Ronhjones, Ruwanraj, Ryeterrell, Schzmo, Seberle, Sethie, Shadowjams, Shanes, Silly rabbit, Silverpie, SirSirrrr, SkerHawx, Sligocki, Sm18, SoHome, Sopoforic, SpeedyGonsales, Stephen Shaw, Steve2011, Sumitgargwiki, SvavarL, Svick, Symane, THEN WHO WAS PHONE?, TakuyaMurata, TheNewPhobia, ThereIsNoSteve, Thierry Caro, Tide rolls, Titin P, TomasBat, TooMuchMath, Trainthh, Twaz, Tythomas26, Uncopy, Vixus, WaterCrane, Wayne Slam, Whatfg, WikiWonki222, Willtron, Wimt, Worrydream, XJamRastafire, Xerone55555, Xhaoz, XjuliaXcrazya, Yoenit, Zelos, Zero0000, Zidane tribal, ZimZalaBim, Zundark, Αναπόληση, రవిచంద్ర, 380 anonymous edits Division  Source:  Contributors:, .mau., 2D, Acertain, Ahoerstemeier, Alansohn, Am Fiosaigear, Andrejj, Andres, Anonymous Dissident, Arjun01, Arthur Rubin, Aspen1995, AxelBoldt, BaF, Beao, Bjankuloski06en, Blanchardb, Brian Crawford, Brisvegas, Bryan Derksen, Butko, CRGreathouse, Calabe1992, Carltonengesia, Chad1m, Chaffers, Charles Matthews, Christian List, Chuck Marean, Coffin, CryptoDerk, David Eppstein, Dfrg.msc, Emperorbma, Epbr123, Erel Segal, Eric119, Evil saltine, Flex, Frodet, Frokor, Furrykef, Gandalf61, Gary King, Gauge, Gauss, Gazpacho, Georg-Johann, Giftlite, Gisling, Glacialfox, GraemeMcRae, H.Marxen, Hairy Dude, Hajhouse, HappyCamper, Hardcore Hak, Haymaker, Hellno2, Helopticor, Henrygb, Herbee, Heron, Houtlijm, Hroðulf, IRP, Iranway, Isheden, J.delanoy, JEBrown87544, JMK, Jan1nad, Jengelh, Jesin, Jheald, Jitse Niesen, Jmealy718, Jncraton, Jnestorius, Josh Parris, Joshua Issac, Jpape, Jshadias, Juliancolton, Jusjih, Justin W Smith, Kate, Keenan Pepper, KnowledgeOfSelf, Kosza, LOL, Leszek Jańczuk, LiDaobing, Linas, Longhair, Lzur, MBisanz, MacMed, Majopius, ManuelGR, MarnetteD, 
Martynas Patasius, MathsIsFun, Mathtchr, Mauler90, Mav, MaxSem, Melchoir, Mets501, Mjb, Moink, Mrchapel0203, Muu-karhu, Mwtoews, Nk, Nn123645, Numbo3, Nuno Tavares, Octahedron80, Oerjan, Oleg Alexandrov, Olegalexandrov, Oli Filth, Omicronpersei8, Ortolan88, PV=nRT, Paolo.dL, Patrick, Paul August, Paulmiko, Petitvie, Philip Trueman, PhotoBox, Plasmic Physics, PrimeHunter, ProfOak, Qz, R. S. Shaw, R00723r0, RJASE1, Ruakh, Ryeterrell, Sakaa, Salix alba, Slysplace, So God created Manchester, Some jerk on the Internet, SparsityProblem, Stapler9124, Stephen Shaw, Suffusion of Yellow, Sukolsak, SvavarL, Symane, Tassedethe, The Anome, Thingg, Tide rolls, Toby Bartels, TomasBat, UKoch, UU, VMS Mosaic, Wahrmund, Welsh, WikiMichel, Wimt, Wysprgr2005, Zaraki, ZimZalaBim, ‫ ,שדדשכ‬రవిచంద్ర, 255 anonymous edits Fraction  Source:  Contributors: (jarbarf), .mau., 5 albert square, A3RO, Aaron Kauppi, Aaron north, Acdx, Acroterion, Ad43-409, AgadaUrbanit, AirdishStraus, Alansohn, Am Fiosaigear, Anonymous Dissident, Anti wikipedia association(5J NYPS), Arbitrarily0, Arthur Rubin, Aughost, Axlrosen, B.d.mills, Bachcell, Backslash Forwardslash, Barneca, Beaversd, Benthompson, Bertik, Blanchardb, Blob 007, Bo Jacoby, Bobo192, Bogey97, Borgx, Calabe1992, Caltas, Captain panda, Cece0344jones, Charles Matthews, ChrisHamburg, Chrislk02, Ciphergoth, Cliff, Coemgenus, Corvus cornix, Courcelles, Cpiral, CrookedAsterisk, CryptoDerk, Cybercobra, DXproton, Damotheomen, Danger, Daren1997, Darkwind, David Eppstein, Deeptrivia, Deflagro, Deflective, Demonuk, Denidowi, Dialectric, Die demand1, Digitalmind, Discospinster, Dlewis3, Doczilla, Dogma100, Dougofborg, Doulos Christos, Dragalis, Dragliker, Driveanddrinkv8, Dspradau, Dtm142, Dude1818, Duncan.beavers, Duoduoduo, Dysepsion, E. Ripley, EdC, EdgeNavidad, Edward Z. 
Potatoswatter, Psb777, Quaeler, Quoth, RDBury, RJGray, Raja Hussain, RandomP, Randomblue, Recentchanges, ResearchRave, Retrovirus, RexNL, Robert2957, Romanm, Ronhjones, Rsocol, S711, Saforrest, Salix alba, Sam Hocevar, Schmock, Schutz, Seaphoto, Selfworm, Setitup, Shahkent, Shawnc, Shirt58, Shoy, Shreeradha, Shreevatsa, Simetrical, SimonTrew, Sir Arthur Williams, Skäpperöd, Smalljim, Smiloid, Some1new4ya, Sonicsuns, SpeedyGonsales, Spiffy sperry, Starx, Steel Tortoise, Stirred-not-shaken, SuperMidget, Supreme Deliciousness, T. Moitie, TakuyaMurata, TallNapoleon, The Anome, The Thing That Should Not Be, Thecheesykid, Thejerm, Theonlydavewilliams, Thymaridas, Tide rolls, TimDoster, Tiptoety, Tkuvho, Tobby72, Tobias Bergemann, Toby Bartels, Tommy2010, Traxs7, Trumpet marietta 45750, Ulrich Müller, Universalss, UserGoogol, Vaughan Pratt, VladimirReshetnikov, Vonbontee, Wayne Slam, Wcherowi, Wheelingcrows, White Shadows, Whywhenwhohow, Wolfrock, Woohookitty, Worldrimroamer, Xantharius, Xaos, Xororaz, Youssefsan, Ysangkok, Yurik, Zero sharp, Zfr, Zoso96, Zundark, Ævar Arnfjörð Bjarmason, გიგა, 519 anonymous edits Imaginary number  Source:  Contributors: 28421u2232nfenfcenc, 28bytes, 7, A8UDI, ABCD, ALargeElk, AbcXyz, Abtract, Abustos, Acroterion, AdjustShift, Akanemoto, Alex S, Alexfusco5, Ali Esfandiari, Allison&Valerie, Alpha Quadrant (alt), AndersL, AndyZ, Anonymous Dissident, Antandrus, Ap, Architeuthidae, Athenean, AustinKnight, AxelBoldt, BORANTRACE, Babcockd, Bcorr, Beano, BenRG, Benadin, Bender235, Beyond My Ken, Bfigura, BillFlis, Bjankuloski06en, Blanchardb, Blathnaid, Bob K, Bob.v.R, Bobakkamaei, Bobo192, Butko, CesarB, Charles Matthews, Chester Markel, ChongDae, ChrisfromHouston, Chungdem, Ciphers, Cleared as filed, Coffee, Conversion script, Crystallina, DVdm, DanP, Darth Wombat, David Shay, Dcljr, DeadEyeArrow, Demmy100, Deon, Deryck Chan, Dgsadgsa, Diannaa, Dirac66, Djrdjn, Doctoroxenbriery, DrBob, Dysprosia, E-Allen-Huang, EamonnPKeane, Edgarrr, Edokter, 
Eisnel, Ejrubio, El C, Eljitto, Empty Buffer, EngineerScotty, Eronixpress, Falcon8765, Favonian, Flewis, FocalPoint, Fredrik, FrozenMan, Furrykef, Fuzheado, GVnayR, Genius Of The Internet, Geoboe84, Gesslein, Giftlite, Gigoux14, Glenn, Gscshoyru, HalfShadow, Harddavid, Harriv, Harry Potter, He Who Is, HenryLi, Hetar, Hmains, Horndude77, Hyacinth, ISD, Ichudov, Immunize, Ioscius, Ivan1984, IvarTJ, J.delanoy, JLaTondre, Jaknouse, Jan.Smolik, Jape77, JayBeeEye, JayHenry, Jim.belk, Jimbryho, Johnbrownsbody, Jorge C.Al, Jowa fan, Jrobbinz1, Jrobbinz123, Jshadias, Justin W Smith, Jwaffe, Jyril, Karl Dickman, Kazvorpal, Keenan Pepper, Kinu, Klaus Zipfel, Koeplinger, Krackpipe, Krun, Kuru, LGF1992UK, La goutte de pluie, LagAttack, Lee S. Svoboda, Lights, Loadmaster, Lotje, LuigiManiac, Luihopan, MALCOLMGLADWELLHASCOOLHAIR, Malleus Fatuorum, Martian, Martyndorey, Mat cross, MathMartin, Megapixie, Member, Metacomet, Michael Hardy, Michiel.Grillet, Mike92591, Mild Bill Hiccup, Mlouns, Mschlindwein, Mshonle, Msikma, NYKevin, Nat2, NawlinWiki, Ncusa367, Newburro, Nineteenninetyfour, Ninly, NobuTamura, NocturneNoir, Oayk, Ohanian, Oleg Alexandrov, Omegatron, Orange Suede Sofa, P0mbal, Pacific PanDeist, Pakaran, Patrick, Paul August, Paul D. 
Anderson, Pcb21, Penubag, Phil Boswell, Philip Trueman, Plugwash, Poor Yorick, PorkHeart, Quaeler, Qwertyus, Qwfp, RSStockdale, Raja Hussain, Rbj, Redlock, Reedy, Reverendgraham, Revolver, Rgdboer, Rhrad, Richerman, Robert Foley, Robert Skyhawk, Rotideypoc41352, SAVE US.L2P, SMP, SRFoster, Sabalka, Sadisticality, Sam Hocevar, Scallop7, Scetoaux, SebastianHelm, Several Times, Shadowjams, Shii, Shoujun, Silly rabbit, Simetrical, Sleigh, Slightsmile, Snakehr3, Snigbrook, Snoyes, Sodium, Some jerk on the Internet, Sparklylexa33, Staffwaterboy, Stwalkerster, SuperMidget, Survival705, Swac, Swtbab13, Syneil, T.J.V., THF, TNTfan101, Taggart Transcontinental, TakuyaMurata, Tamfang, Tassedethe, Tdabombs9, The Thing That Should Not Be, Thegreyanomaly, Tide rolls, Tim!, Titoxd, Tobias Bergemann, Truthflux, Tubular, TuukkaH, Tylerni7, Umapathy, VanishedUser314159, Victorianycole, Vina, Vipinhari, VodkaJazz, Wa2ise, WaysToEscape, Weeliljimmy, WikiMan225, Wikipelli, William Larson, Wmahan, Wurzel, X-Anneh-X, Xxanthippe, Yamamoto Ichiro, Youssefsan, Yugsdrawkcabeht, Zahd, Ziggy Sawdust, Zoicon5, Ævar Arnfjörð Bjarmason, Πrate, Александър, ‫ 884 ,ﮐﺎﺷﻒ ﻋﻘﯿﻞ‬anonymous edits Algebraic number  Source:  Contributors: 9258fahsflkh917fas, Alodyne, Althai, Anonymous Dissident, Arcenciel, Arthur Rubin, AxelBoldt, Bdesham, Bidabadi, Bombshell, Brainfsck, Brighterorange, Bryan Derksen, CRGreathouse, Catslash, Charles Matthews, Christian List, Clausen, Conversion script, Crisófilax, Cwitty, Dbenbenn, Dcljr, Dependent Variable, Dessources, Dfeldmann, Dratman, Dtrebbien, Dysprosia, Egg, Eric119, Gaius Cornelius, Gene Ward Smith, Georgia guy, Giftlite, GregorB, GromXXVII,


Hakeem.gadi, Headbomb, Herbee, Icairns, IdealOmniscience, Ideyal, Ihope127, JCSantos, JackSchmidt, JeLuF, Jhbdel, Jim.belk, Jj137, Jnc, Jncraton, Jorunn, Josh Parris, Koeplinger, Laurusnobilis, Linas, Loadmaster, Looxix, Lottamiata, Magioladitis, MathMartin, Matikkapoika, Mecanismo, Melchoir, Mets501, Michael C Price, Michael Hardy, Mike Segal, Minesweeper, Msh210, Nbarth, Nguyenthephuc, Nosferatütr, Ntmatter, Octahedron80, Offsure, Oleg Alexandrov, Omerks, PierreAbbat, Profvk, Quota, Qz, Raja Hussain, Rar, Rich Farmbrough, RobHar, SPUI, Salix alba, Scott MacLean, Shkwon, Spambit, Spireguy, Stephen J. Brooks, StradivariusTV, SuperMidget, Tkuvho, TomyDuby, Tosha, Trovatore, Vina, WikiDao, Wmahan, Wvbailey, Xantharius, Youandme, Zundark, 70 anonymous edits Complex number  Source:  Contributors:,,, 2.718281828459...,, 7severn7, Aaron Kauppi, Abtract, AdamSmithee, Adhanali, Aka042, Akashkumarrath, Alanius, Ale2006, Alison, AllTheThings, AnakngAraw, Andres, Andymc, Angus Lepper, Anonymous Dissident, Ap, Argentino, Armend, Arundhati bakshi, Asyndeton, Ato, Atongmy, AugPi, AxelBoldt, Barno uk, Barticus88, Basispoeter, Ben Ben, BenBaker, Bender2k14, Betacommand, Bevo, Bidabadi, BigJohnHenry, BitterMilton, Biógrafo, Bkell, Blogeswar, Bo Jacoby, Bob A, Bob K, Bobrayner, Boulaur, Br77rino, Brandon Brunner, Brazmyth, Brianga, Brion VIBBER, Bromskloss, Brona, Bruno Simões, Bryan Derksen, Bwe203, C S, CBM, CRGreathouse, Calréfa Wéná, Capricorn42, Card, Catherineyronwode, CeNobiteElf, Cenarium, Centrx, Charles Matthews, ChongDae, Chowbok, Chris-gore, ChrisHodgesUK, Chrisandtaund, Christian List, ChristopherWillis, Christopherkennedy1994, Chun-hian, Cometstyles, Commander, Conversion script, Crowsnest, Crust, Crywalt, Cybercobra, Cypa, DE logics, DVD R W, DVdm, Da nuke, DanBishop, Darth Panda, David R. 
Ingham, DavidCBryant, Deeptrivia, Denelson83, Dert34, Dgies, Diannaa, Dima373, Dino, Dissimul, Dlo1986, Dmcq, Dod1, Dominus, Dragon Dave, Dual Freq, Duoduoduo, Dysprosia, Eastlaw, Ehrenkater, Elb2000, Elf, Emmelie, Enchanter, Epbr123, Eric Shalov, Evil saltine, Ewlyahoocom, Falcon8765, Favonian, Felizdenovo, Fibonacci, Fiedorow, Fieldday-sunday, FightingMac, FilipeS, Finlay McWalter, Fireaxe888, Flamurai, Fleminra, Flex, FocalPoint, Freakofnurture, Fredrik, Fresheneesz, Fropuff, Furrykef, GTBacchus, Gandalf61, Gareth Owen, Gauss, Georgesawyer, Gesslein, Giftlite, GlenShennan, Goldencako, Googl, GraemeMcRae, Graemeb1967, Graham87, Gunark, Guru6969, Haein45, Haham hanuka, Hairy Dude, HalfShadow, Hannes Eder, Happy-melon, HappyCamper, Hard Sin, Harley peters, Haroldco, HenningThielemann, HenryLi, Henrygb, Herbee, HolIgor, Huppybanny, Hut 8.5, Iapetusuk, Igiffin, Igoldste, Inkington, Intangir, Iridescent, Isaac Dupree, Isheden, Itsmejudith, Ivan Štambuk, Ivysaur, Ixfd64, JDSperling, JForget, JRSpriggs, JRocketeer, Jackol, Jagged 85, JaimeLesMaths, Jakob.scholbach, JamesBWatson, Jamesg, Janisozaur, Jarek Duda, Jashaszun, Jasondet, JensMueller, Jesin, Jfredrickson, Jimbryho, Jitse Niesen, Jmvicent, Jni, JohnBlackburne, Jol123, Joriki, Josh Grosse, Jujutacular, Juliancolton, Kaldari, Kan8eDie, KarlJacobi, Kbdank71, Keenan Pepper, Kenneth Brookes, Kenyon, Kerotan, Keta, Ketiltrout, Khalid Mahmood, Kiensvay, Kine, Koeplinger, Krischik, Kudret abi, KyraVixen, L Kensington, L33tminion, LHOON, Lagrange123, Lambiam, Lambyte, Laurens-af, Laurent MAYER, Leafyplant, LeaveSleaves, Lessbread, Lifeonahilltop, LilHelpa, Linas, Livius3, Loadmaster, Loom91, Looxix, Lovibond, M4gnum0n, Madhero88, Madmath789, Majorly, Mange01, Manuel Trujillo Berges, MarSch, Marc van Leeuwen, Marek69, Maria Renee Jenkins, MarkSweep, Marozols, MarsRover, Maschen, MathMan64, MathMartin, Matqkks, Matthew Stannard, Matty j, Matusz, Mct mht, Mdd, MeltBanana, Mentifisto, Mets501, Mh, Michael Hardy, Michael 
Kinyon, Micro01, Miguel, Mild Bill Hiccup, Mindmatrix, Mlewis000, Mobower, Modelpanicer, Mogmiester, Momo kiki, Monroetransfer, Moralshearer, Mr Stephen, Ms2ger, Msh210, MuZemike, Nanshu, Nbarth, Ncrfgs, Nejko, Netrapt, Newone, Nic bor, Nicoli nicolivich, Nijdam, Niofis, Nivaca, Nixdorf, Nmulder, Obradovic Goran, ObserverFromAbove, Octahedron80, OdedSchramm, Oleg Alexandrov, Oli Filth, Oliphaunt, Omnieiunium, Onewhohelps, Onkel Tuca, Opticyclic, Oxinabox, P0807670, P0lyglut, Pablogrb, Pablothegreat85, Paolo.dL, Papa November, Patrick,, Paul August, Paul D. Anderson, Pegship, Petecrawford, Phlembowper99, PierreAbbat, Pitel, PizzaMargherita,, Pjacobi, Pleasantville, Plugwash, Pmanderson, Pne, Polymath618, Poor Yorick, Profjohn, Pstanton, Pt, Qe2eqe, Quaeler, Quantum Burrito, Quondum, Qwfp, RDBury, RG2, RJChapman, Rabkid15, RainbowOfLight, Raise exception, Raja Hussain, Randomblue, Raphaelsss, Rasmus Faber, Ravisingh57547, Rcog, Readysoaper, ReallyNiceGuy, Recentchanges, Red Winged Duck, Remyrem1, RepublicanJacobite, Revolver, RexNL, Rgdboer, Rhanekom, Ricardo sandoval, Roadrunner, Robinh, Rohan Ghatak, Romanm, Rookkey, Rossami, Rs2, Rsmuruga, RyanJones, Saga City, Salix alba, Santosga, Saric, Sbharris, Scentoni, Scottie 000, SeanMack, Sergiacid, ServiceAT, Seth Ilys, SeventyThree, Shizhao, Shoessss, Shoujun, Sidonath, Silly rabbit, SimonTrew, SirFozzie, Sligocki, Snoyes, Soap, Soliloquial, Som subhra, Spellcast, Spireguy, Steorra, Steven Windsor, Stevenj, Storkk, StradivariusTV, Strebe, Streyeder, SuperMidget, Supergeek22, Sverdrup, Symane, Szidomingo, Taggart Transcontinental, Takatodigi, TakuyaMurata, Tariqhada, Tarquin, Taw, Taxman, Tcha216, TechnoFaye, Template namespace initialisation script, Tentoes, Terry2012, Tesseran, Thalesfernandes, The Anome, The Thing That Should Not Be, Thenub314, Tiddly Tom, Tikiwont, Timir2, Tommy2010, Tosha, Treisijs, Trevor MacInnis, Trovatore, Truthnlove, Ttimespan, Twigletmac, Twooars, Tyco.skinner, UnitedStatesian, Urdutext, 
User27091, VKokielov, VTBushyTail, Vipinhari, Virga, Virginia-American, Voorlandt, Wafulz, Waltpohl, Watcharakorn, Wayp123, Wesaq, Whommighter, Wiki alf, WikiDao, Wikiborg, Wolfkeeper, Wolfrock, Wshun, Wurzel, Wwoods, Wyatt915, X42bn6, Xantharius, Xxglennxx, Yellowstone6, Yjwong, Zachlipton, Zirland, Zoffdino, Zoicon5, Zundark, Zzuuzz, Ævar Arnfjörð Bjarmason, Александър, 724 anonymous edits Portal:Algebra  Source:  Contributors: Bondeffect, Bookcats, BozMo, Cenarium, Ciphers, Cobi, Gcpeoples, JohnBlackburne, Oleg Alexandrov, SriMesh, Tangent747, Tompw, Underdogggg, White Shadows, 10 anonymous edits Proportionality  Source:  Contributors: 28421u2232nfenfcenc, Aidepikiw0000, Alansohn, Alberto da Calvairate, Andres, Aniten21, Asifkemalpasha, Audiosmurf, AugPi, Aughost, Auron Cloud, AxelBoldt, Baseball Watcher, Belajane123, Brandon, Btyner, CSTAR, Capitalist, Capricorn42, CarbonUnit, Cbowman713, Charles Matthews, Chris the speller, Commander Keane, Constructive editor, Cybercobra, DARTH SIDIOUS 2, Dadude3320, David R. 
Ingham, Dcirovic, Dendodge, DerHexer, Dhollm, Dicklyon, Dmyersturnbull, Ecemaml, Eighty, Enviroboy, Evan.oltmanns, Ewen, Excirial, Fdskjs, Finell, Funandtrvl, Fuzzform, Giftlite, Gigabyte, Hazard-SJ, Heron, Hq3473, In actu, Inflector, Irbisgreif, Isheden, Iwilllearn2010, Ja 62, JaGa, Jacobusmaximus34, JerroldPease-Atlanta, Jerzy, Jncraton, Jusdafax, Jóna Þórunn, Kaihsu, Kjarda, Kozuch, Krellis, Kuddle143, La goutte de pluie, Levineps, Luk, Marc Venot, Marek69, Mayfare, Miccoh, Michael Hardy, Mindlurker, Mpatel, Mutt20, Nbarth, Netalarm, Nicoli nicolivich, Noodle snacks, Omegatron, Oxymoron83, Paste, Patrick, Paul August, Pfctdayelise, Pjvpjv, Pred, Pstanton, Randywombat, Rgdboer, Rjd0060, Sakkura, Scottish lad, Skater, Sohmc, Sonett72, SpaceFlight89, Spinningspark, Spoon!, TBster2001, TejasDiscipulus2, The Pikachu Who Dared, The Thing That Should Not Be, Theoprakt, Tide rolls, Tobias Bergemann, Tom harrison, Tommy2010, Tompw, Traxs7, Tricore, Viridae, WeijiBaikeBianji, WikHead, Willking1979, Yeungchunk, Zonuleofzinn, 百家姓之四, 216 anonymous edits Equation  Source:  Contributors: AVand, Aaroncrick, Abdull, Access Denied, Addshore, Airolg10, Akanari, Alan Liefting, Alansohn, AllyUnion, Andrejj, Andres, AndrewHowse, Anonymous Dissident, Ask123, Asterion, AxelBoldt, BD2412, Bcherkas, Benbeltran, BoJosley, Bob is a bitch, Bobo192, Boing! 
said Zebedee, Bongwarrior, Brion VIBBER, Bryan Derksen, Btilm, CALR, CRGreathouse, Calvin 1998, CanadianLinuxUser, Cap601, Carl.bunderson, Cdc, Chaos, Charles Matthews, Charlielee111, Chas zzz brown, Christian List, Christian75, Cic, Complex (de), Conversion script, Corz0770, Cybercobra, Danger, Delirium, DerHexer, Dick Beldin, Discospinster, Dominus, Dysprosia, Eliyak, Eliz81, Ellywa, Enigmaman, Epbr123, Excirial, FalseAxiom, FilipeS, Firowkp, Freakoclark, Fredrik, Frogger3140, Furkhaocean, GB fan, Gagaspocket, Gesslein, Giftlite, Gordonofcartoon, Graham87, GrayFullbuster, Grim23, Hardyplants, Henrygb, Heraclesprogeny, Hhhhhannah, Hu12, IGraph, Iamthedeus, Ibbn, Indeed123, Ioscius, Island, Isnow, Iulianu, Ivan Štambuk, IvanLanin, J.delanoy, Jagged 85, JamesBWatson, Jeffrey Mall, Jiddisch, John254, Josh Parris, Jujutacular, Kan8eDie, Karadimos, Karl Dickman, Keegan, Khalid Mahmood, KnowledgeOfSelf, Kurykh, Lambiam, Linas, Macy, Mailseth, Mandarax, Mani1, Manishearth, ManoaChild, MarSch, Marek69, MarsRover, Martynas Patasius, Maurice Carbonaro, McSly, Melchoir, Michael Hardy, Mike Dillon, Mjj'sbff, MrOllie, Mxn, NOrbeck, Nagytibi, Nethac DIU, Netopalis, Nlefr, Nsaum75, Numbo3, Nuno Tavares, Obradovic Goran, Ocaasi, Ocolon, Octahedron80, Oleg Alexandrov, Olivier, OllieFury, Onorem, Orzetto, PL290, Paul August, Pb30, Pharaoh of the Wizards, PiAndWhippedCream, Pinethicket, Pizza Puzzle, PrestonH, Prestonmag, Pretzelpaws, QRX, Quiddity, Quintote, Qwertol, RDBury, Ragha joshi, Rejka, Rfdhasgfjhfgvhmavjvm, Riceplaytexas, Rick Norwood, Saalstin, Salih, Salix alba, Sardur, Scarian, Scohoust, Scrunter, Seaphoto, Septembrinol, Shoefly, Skater, Soliquid, Spens10, StradivariusTV, Suisui, Symane, Tarquin, Techman224, Terrek, The Thing That Should Not Be, Thebisch, Timir2, Timwi, Toby Bartels, Tom harrison, TravisTX, Treisijs, Unyoyega, Vchorozopoulos, Veinor, Versus22, Vicki Rosenzweig, Voyagerfan5761, Vrenator, Wayne Slam, Wiki alf, Willnaish, Xenoss, Youandme, Youssefsan, 
Zaphod Beeblebrox, Zzyzx11, 342 anonymous edits Word problem  Source:  Contributors:, Anonymi, Anonymous Dissident, Arknascar44, Bfigura, Borisshah, C S, Cambyses, Camw, Charles Matthews, Colonies Chris, Comte0, Courcelles, Cyan, DVD R W, Discospinster, Dmmaus, Dysprosia, Enauspeaker, Gandalf61, Gerbrant, Gordonofcartoon, Haonhien, I b pip, Ichudov, Iridescent, Jao, Laconic, Lambiam, Lendorien, LinkWalker, Luna Santin, Magister Mathematicae, Malo, Marfalump, Michael Hardy, Mon4, MrOllie, Neil Martin, Nick Connolly, Oleg Alexandrov, Ryulong, Sandgem Addict, Sexbear, Tom harrison, Trivialist, Visor, Wshun, Zkaradag, Zundark, 50 anonymous edits Statistical population  Source:  Contributors: Abanima, Aeon1006, Arodichevski, Avenue, BD2412, Boffob, CapitalR, Conversion script, Den fjättrade ankan, DerHexer, Dick Beldin, Dori, Giftlite, Graham87, HorsePunchKid, Irbobo, Jitse Niesen, Kembangraps, Kku, Lamro, Larry_Sanger, Latka, MCepek, Melcombe, Michael Hardy, Nbarth, Piotrus, Raul654, Ronz, Salix alba, Suisui, TakuyaMurata, Ugen64, Wildt, Wood Thrush, 42 anonymous edits


Image Sources, Licenses and Contributors


File:Loudspeaker.svg  Source:  License: Public Domain  Contributors: Bayo, Gmaxwell, Husky, Iamunknown, Mirithing, Myself488, Nethac DIU, Omegatron, Rocket000, The Evil IP address, Wouterhagens, 16 anonymous edits File:Estela C de Tres Zapotes.jpg  Source:  License: Public Domain  Contributors: Sorry, I dont understand the original text in japanse File:MAYA-g-num-0-inc-v1.svg  Source:  License: GNU Free Documentation License  Contributors: CJLL Wright File:Text figures 036.svg  Source:  License: Public Domain  Contributors: User:Skalman Image:Latex integers.svg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Alessio Damato File:Number-line.gif  Source:  License: Public Domain  Contributors: Original uploader was MathsIsFun at en.wikipedia File:Relatives Numbers Representation.png  Source:  License: Public Domain  Contributors: Thomas Douillard, thomas.douillard, with "asymptote" Image:Polynomialdeg2.svg  Source:  License: Public Domain  Contributors: N.Mori Image:Polynomialdeg3.svg  Source:  License: Public Domain  Contributors: N.Mori Image:Polynomialdeg4.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Geek3 Image:Polynomialdeg5.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Geek3 Image:Sextic_Graph.png  Source:  License: Creative Commons Attribution 3.0  Contributors: Jamme123 Image:Septic_Graph.gif  Source:  License: Public Domain  Contributors: Lanthanum-138 Image:X-intercepts.svg  Source:  License: Creative Commons Attribution 3.0  Contributors: --pbroks13talk? 
Original uploader was Pbroks13 at en.wikipedia Image:Expo02.svg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: EnEdC, Jalanpalmer Image:Root graphs.svg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: User:BrokenSegue Image:ExpIPi.gif  Source:  License: Public Domain  Contributors: Sbyrnes321 File:One3Root.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Loadmaster (David R. Tribble) Image:X^y.png  Source:^y.png  License: GNU Free Documentation License  Contributors: Original uploader was Sam Derbyshire at en.wikipedia Later version(s) were uploaded by Atropos235 at en.wikipedia. File:Rod fraction.jpg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Gisling File:Counting rod 0.png  Source:  License: Public Domain  Contributors: Takasugi Shinji File:Counting rod h9 num.png  Source:  License: Public Domain  Contributors: Takasugi Shinji File:Counting rod v6.png  Source:  License: Public Domain  Contributors: Takasugi Shinji File:Counting rod h6.png  Source:  License: Public Domain  Contributors: Takasugi Shinji File:Counting rod v4.png  Source:  License: Public Domain  Contributors: Takasugi Shinji File:Counting rod h4.png  Source:  License: Public Domain  Contributors: Takasugi Shinji File:Multiply 4 bags 3 marbles.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Dmcq File:Multiplication as scaling integers.gif  Source:  License: Public Domain  Contributors: Kieff File:Multiplication scheme 4 by 5.jpg  Source:  License: Public Domain  Contributors: User:Majopius File:Multiply field fract.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Dmcq Image:Multiplication Sign.svg  Source:  License: Public Domain  Contributors: Jim.belk File:Multiplication algorithm.GIF  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Gisling 
Image:Gelosia multiplication 45 256.png  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Original uploader was ##:en:User:Silly rabbit image:Divide20by4.svg  Source:  License: Public Domain  Contributors: Amirki File:Cake quarters.svg  Source:  License: Public Domain  Contributors: Acdx, R. S. Shaw Image:Cake fractions.svg  Source:  License: Public Domain  Contributors: User:Acdx File:Factorisatie.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Silver Spoon File:A plus b au carre.svg  Source:  License: unknown  Contributors: Alkarex Image:Mollifier illustration.png  Source:  License: Public Domain  Contributors: Oleg Alexandrov 08:56, 19 July 2007 (UTC) File:RelativeComplement.png  Source:  License: Public domain  Contributors: Herbee File:Commutative Word Origin.PNG  Source:  License: Public Domain  Contributors: Francois Servois File:Symmetry Of Addition.svg  Source:  License: Public Domain  Contributors: Weston.pace Image:Graph of example function.svg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: KSmrq File:Function machine2.svg  Source:  License: Public Domain  Contributors: Wvbailey (talk). Original uploader was Wvbailey at en.wikipedia. Later version(s) were uploaded by Threecheersfornick at en.wikipedia. Image:Function machine5.png  Source:  License: Public Domain  Contributors: Wvbailey (talk) File:Venn A intersect B.svg  Source:  License: Public Domain  Contributors: Cepheus File:Venn A subset B.svg  Source:  License: Public Domain  Contributors: Booyabazooka, Darapti, Frank C. 
Müller, Lipedia, Roomba, 1 anonymous edits File:Venn0111.svg  Source:  License: Public Domain  Contributors: CommonsDelinker, Lipedia, Waldir File:Venn0001.svg  Source:  License: Public Domain  Contributors: CommonsDelinker, Lipedia, Tony Wills, Waldir, 9 anonymous edits File:Venn0100.svg  Source:  License: Public Domain  Contributors: CommonsDelinker, Lipedia, Waldir File:Venn1010.svg  Source:  License: Public Domain  Contributors: CommonsDelinker, Lipedia, Waldir File:Venn0110.svg  Source:  License: Public Domain  Contributors: CommonsDelinker, Herbythyme, Jarekt, Lipedia, Waldir, 3 anonymous edits File:Algebraproblem.jpg  Source:  License: GNU Free Documentation License  Contributors: Sweetness46 (talk) Original uploader was Sweetness46 at en.wikipedia Image:MonkeyFaceFOILRule.JPG  Source:  License: Public domain  Contributors: Brandenads (talk) Image:Evenandodd.PNG  Source:  License: Public Domain  Contributors: Darkuranium, Fresheneesz, Greenrd



Image:US Navy 070317-N-3642E-379 During the warmest part of the day, a thermometer outside of the Applied Physics Laboratory Ice Station's (APLIS) mess tent still does not break out of the sub-freezing temperatures.jpg  Source:,_a_thermometer_outside_of_the_Applied_Physics_Laboratory_Ice_Station's_(APLIS)_mess_tent_sti  License: Public Domain  Contributors: Multichill File:AdditionRules.svg  Source:  License: Creative Commons Attribution-Share Alike  Contributors: Ezra Katz Image:Latex real numbers.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Arichnad Image:Three apples.svg  Source:  License: Public Domain  Contributors: Oleg Alexandrov File:RationalRepresentation.pdf  Source:  License: Public Domain  Contributors: TomT0m File:Diagonal argument.svg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Cronholm144 Image:Square root of 2 triangle.svg  Source:  License: Public domain  Contributors: derivative work: Pbroks13 (talk) Square_root_of_2_triangle.png: en:User:Fredrik Image:Complex conjugate picture.svg  Source:  License: GNU Free Documentation License  Contributors: Oleg Alexandrov File:90-Degree Rotations in the Complex Plane.png  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Allisonandvalerie File:Algebraicszoom.png  Source:  License: Creative Commons Attribution 3.0  Contributors: Stephen J. Brooks (talk) Image:Leadingcoeff.png  Source:  License: Creative Commons Attribution 3.0  Contributors: Stephen J. 
Brooks (talk) Image:Complex number illustration.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Original uploader was Wolfkeeper at en.wikipedia Image:Complex number illustration.png  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Kan8eDie File:Complex conjugate picture.svg  Source:  License: GNU Free Documentation License  Contributors: Oleg Alexandrov Image:Vector Addition.svg  Source:  License: Public Domain  Contributors: Booyabazooka, Kilom691 Image:Complex_number_illustration_modarg.svg  Source:  License: GNU Free Documentation License  Contributors: Complex_number_illustration.svg: Original uploader was Wolfkeeper at en.wikipedia derivative work: Kan8eDie (talk) File:ComplexMultiplication.png  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Jakob.scholbach Image:Sin1perz.png  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Kovzol File:Pentagon construct.gif  Source:  License: Public domain  Contributors: TokyoJunkie at the English Wikipedia Image:NegativeOne3Root.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Loadmaster (David R. 
Tribble) Image:Arithmetic symbols.svg  Source:  License: Public Domain  Contributors: Elembis Image:Community.png  Source:  License: GNU Free Documentation License  Contributors: User:Andrew_pmk Image:Nuvola apps kmplot.svg  Source:  License: GNU Lesser General Public License  Contributors: Booyabazooka, Michael Reschke, Rocket000 Image:Commutative diagram for morphism.svg  Source:  License: Public Domain  Contributors: User:Cepheus Image:Crystal mycomputer.png  Source:  License: unknown  Contributors: Dake, Rocket000 Image:Crypto key.svg  Source:  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: MesserWoland Image:Nuvola apps atlantik.png  Source:  License: GNU Lesser General Public License  Contributors: AVRS, Alno, Alphax, Bayo, Drilnoth, Hyju, It Is Me Here, Rocket000, Tbleher Image:Crystal_Clear_app_3d.png  Source:  License: GNU Free Documentation License  Contributors: Beao, Cwbm (commons), CyberSkull, It Is Me Here, Juliancolton, Rocket000, Wknight94, 2 anonymous edits Image:Non-cont.png  Source:  License: Public Domain  Contributors: Pontiff Greg Bard (talk). 
Original uploader was Gregbard at en.wikipedia Image:Nuvola apps edu mathematics.png  Source:  License: unknown  Contributors: Alno, Alphax, Naconkantari, Rfc1394, Rocket000, Zundark, 1 anonymous edits Image:Number theory symbol.svg  Source:  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Radiant chains, previous versions by Tob and WillT.Net Image:Stylised Lithium Atom.svg  Source:  License: GNU Free Documentation License  Contributors: User:Halfdan, User:Indolences, User:Liquid_2003 Image:Nuvola apps kalzium.svg  Source:  License: GNU Lesser General Public License  Contributors: David Vignoni, SVG version by Bobarino Image:Venn0001.svg  Source:  License: Public Domain  Contributors: CommonsDelinker, Lipedia, Tony Wills, Waldir, 9 anonymous edits Image:Fisher_iris_versicolor_sepalwidth.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: en:User:Qwfp (original); Pbroks13 (talk) (redraw) Image:TrefoilKnot 02.svg  Source:  License: Public Domain  Contributors: Kilom691, Tompw Image:Variables proporcionals.png  Source:  License: GNU Free Documentation License  Contributors: M. Romero Schmidkte Image:Academ_homothetic_rectangles.svg  Source:  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Yves Baelde Image:First Equation Ever.png  Source:  License: Public Domain  Contributors: Robert Recorde



Creative Commons Attribution-Share Alike 3.0 Unported
