

Articles

• User:Rajah2770
• Fractional calculus
• Code
• Simulation
• Plasma modeling
• Equations of motion
• Maxwell's equations
• Algorithm
• Computer programming
• Fortran

References

• Article Sources and Contributors
• Image Sources, Licenses and Contributors

Article Licenses

• License

User:Rajah2770


Dr.A.B.Rajib Hazarika

[[File:Dr.A.B.Rajib Hazarika & his two kids.jpg|alt=Dr.A.B.Rajib Hazarika with Laquit (son) and Danisha (daughter)]]

Born: Azad Bin Rajib Hazarika, July 2, 1970, Jammu, Jammu and Kashmir, India
Residence: Nagaon, Assam, India
Nationality: Indian
Ethnicity: Assamese Muslim
Citizenship: India
Education: PhD, PDF, FRAS
Alma mater: University of Jodhpur (Jai Narayan Vyas University) [1]; Institute of Advanced Study in Science & Technology [2]; Kendriya Vidyalaya [3]; Poona College of Arts, Science & Commerce
Occupation: Assistant Professor (Lecturer), Diphu Govt. College, Diphu, Assam, India
Years active: 2004 onwards
Employer: Diphu Government College, Government of Assam, Assam Education Service
Known for: Lecturer, Assistant Professor, Mathematician, Academician, Fusion, Astronomy
Home town: Nagaon, Assam, India
Salary: Rs 40000 per month
Height: 6 feet 2 inches
Weight: 100 kg
Title: Doctorate, Dr., FRAS (London), Assam Education Service, AES
Board member of: Scientific and Technical committee & Editorial review board of Natural and Applied Sciences, World Academy of Science, Engineering & Technology [4]
Religion: Sunni Islam
Spouse: Helmin Begum Hazarika
Children: Laquit Ali Hazarika (son), Danisha Begum Hazarika (daughter)
Parents: Rosmat Ali Hazarika @ Rostam Ali Hazarika @ Roufat Ali Hazarika; Anjena Begum Hazarika
Call-sign: Drabrh or Raja
Website: [5] [6] [7] [8] [9]


Dr. A. B. Rajib Hazarika (born July 2, 1970, in Jammu, Jammu and Kashmir, India) is an Assistant Professor (Lecturer) at Diphu Government College, Diphu, in Karbi Anglong district, run by the Government of Assam [10], [11]. He is also a Fellow of the Royal Astronomical Society [12], London, and a member of the International Association of Mathematical Physics, the World Academy of Science, Engineering & Technology, the Focus Fusion Society, Dense Plasma Focus, the Plasma Science Society of India, the Assam Science Society, the Assam Academy of Mathematics, the International Atomic Energy Agency, the Nuclear and Plasma Sciences Society, the Society of Industrial and Applied Mathematics, the German Academy of Mathematics and Mechanics, the Fusion Science & Technology Society, the Indian National Science Academy, the Indian Science Congress Association, the Advisory Committee of Mathematical Education, the Royal Society, and the International Biographical Centre.

Early life

Dr. A. B. Rajib Hazarika was born into the Hazarika family, a prominent family belonging to Dhing's wealthy Muslim Assamese community in Nagaon district. He was born to Anjena Begum Hazarika and Rusmat Ali Hazarika. He is the eldest of his parents' two children; the younger is a daughter, Shamim Ara Rahman (née Hazarika).

Early career

Dr. A. B. Rajib Hazarika completed his PhD in Mathematics at J N Vyas University, Jodhpur, in 1995, specializing in plasma instability; the thesis was judged "best thesis" by the Association of Indian Universities in 1998. He then held a Post-Doctoral Fellowship at the Institute of Advanced Study in Science & Technology [13] in Guwahati, Assam, in 1998, working as a Research Associate in the theory group of the Plasma Physics Division, studying the sheath phenomenon. He served as a part-time Lecturer at Nowgong College, Assam, before joining his present position at Diphu Government College, Diphu, in Karbi Anglong district [14], [15]. He is a contributor to Wikipedia [16], [17]. He is a Fellow of the Royal Astronomical Society [18], and a member of the International Association of Mathematical Physics [19], the World Academy of Science, Engineering & Technology [20], [21], the Plasma Science Society of India [22], [23], the Focus Fusion Society forum [24], Dense Plasma Focus [25], the Assam Science Society [26], and the Assam Academy of Mathematics [27].

He joined Diphu Government College in July 2004 as Lecturer in Mathematics (Gazetted officer), through the Assam Public Service Commission [28], in the Assam Education Service [29], AES-I [30], a post now redesignated as Assistant Professor.

Career

In May 1993, Dr. A. B. Rajib Hazarika was awarded a Junior Research Fellowship by the University Grants Commission, Government of India, after qualifying in the National Eligibility Test with eligibility for lecturership, and worked as a JRF (UGC, NET) in the Department of Mathematics and Statistics of J N Vyas University, Jodhpur. In May 1995 he received a Senior Research Fellowship (UGC, NET) and continued his research, completing his PhD on 27 December 1995. From 1993 onwards he taught at Kamala Nehru College for Women, Jodhpur, and in the Faculty of Science of J N Vyas University, up to the completion of his PhD. In May 1998 he joined the Plasma Physics Division of the Institute of Advanced Study in Science & Technology in Guwahati as a Research Associate (PDF) in the theory group, studying sheath phenomena under the National Fusion Programme [31] of the Government of India. He then joined Nowgong College as a part-time Lecturer, and in July 2004 took up his present position of Lecturer at Diphu Government College, since redesignated as Assistant Professor.

Research

His PhD research [32] [33] [34] [35] [36] was based on astronomy, astrophysics and geophysics, on plasma instability, with the thesis entitled "Some Problems of instabilities in partially ionized and fully ionized plasmas", which in 1998 was assessed as the best thesis of the year by the Association of Indian Universities, New Delhi. He is known for the Bhatia–Hazarika limit. His research at Diphu Govt. College [37], [38] [39] [40] [41] [42] [43] [44] includes an application for a patent at the US Patent and Trademark Office [45] [46]. He gives research guidance to students in Mathematics for MPhil. He has written six books, entitled Inventions of Dr.A.B.Rajib Hazarika on future devices; Dr.A.B.Rajib Hazarika's Pattern recognition on fusion; Application of Dr.A.B.Rajib Hazarika's conceptual devices; Green technology for next generation; Invention of Dr.A.B.Rajib Hazarika's devices; and VASIMR DANISHA: A Hall Thruster Space Odyssey [47], [48], [49]. He has derived a formula, the Hazarika constant, for VASIMR DANISHA: Ch = 1 + 4 sin^3 φ sin θ − 2 sin φ − 2 sin θ, with value 2.646.

Personal life

Dr. A. B. Rajib Hazarika owns a metallic scarlet red Tata Indigo CS made by Tata Motors and loves to drive it himself. He is married to Helmin Begum Hazarika, and they have two children, Laquit (son) and Danisha (daughter).

Quotes

• "Fakir (saint) and lakir (line) stop at nothing but at destination"
• "Expert criticizes the wrong but demonstrates the right thing"
• "Intellectuals are measured by their brain not by their age and experience"
• "Two types of persons are happy in life: one who knows everything, another who doesn't know anything"
• "Implosion in device to prove every notion wrong for fusion"
• "Meditation gives fakir (saint) long life and fusion devices the long lasting confinement"


Awards and recognition

Dr. A. B. Rajib Hazarika has received:
• Junior Research Fellowship, Government of India
• Senior Research Fellowship, Government of India
• Research Associateship, DST, Government of India
• Fellow of the Royal Astronomical Society [50]
• Member of the Advisory Committee of Mathematical Education, Royal Society, London
• Member of the Scientific and Technical committee & editorial review board on Natural and Applied Sciences of the World Academy of Science, Engineering & Technology [51]
• Leading professional of the world-2010, as a noted and eminent professional, from the International Biographical Centre, Cambridge

References

[1] http://www.iasst.in
[2] http://www.kvafsdigaru.org
[3] http://www.akipoonacollege.com
[4] http://www.waset.org/NaturalandAppliedSciences.php?page=45
[5] http://www.facebook.com/Drabrajib
[6] http://in.linkedin.com/pub/dr-a-b-rajib-hazarika/25/506/549
[7] http://en.wikipedia.org/wiki/Special:Contributions/Drabrh
[8] http://www.diphugovtcollege.org
[9] http://www.karbianglong.nic.in/diphugovtcollege.org/teaching.html
[10] http://www.karbianglong.nic.in/diphugovtcollege/teaching.html
[11] http://www.diphugovtcollege.org/DGC%20prospectus%2008-09.pdf
[12] http://www.ras.org.uk/member?recid=5531
[13] http://www.iasst.in
[14] http://www.diphugovtcollege.org/DGC%20prospectus%2008-09.pdf
[15] http://karbianglong.nic.in/diphugovtcollege/teaching.html
[16] http://en.wikipedia.org/wiki/User:Drabrh
[17] http://en.wikipedia.org/wiki/Special:Contributions/Drabrh
[18] http://www.ras.org.uk/member?recid=5531
[19] http://www.iamp.org/bulletins/old-bulletins/201001.pdf
[20] http://www.waset.org/NaturalandAppliedSciences.php?page=45
[21] http://www.waset.org/Search.php?page=68&search=
[22] http://www.plasma.ernet.in/~pssi/member/pssi_new04.doc
[23] http://www.ipr.res.in/~pssi/member/pssidir_new-04.doc
[24] http://www.focusfusion.org/index.php/forums/member/4165
[25] http://www.denseplasmafocus.org/index.php/forum/member/4165
[26] http://www.assamsciencesociety.org/member
[27] http://www.aam.org.in/member/982004
[28] http://apsc.nic.in
[29] http://aasc.nic.in/.../Education%20Department/The%20Assam%20Education%20Service%20Rules%201982.pdf
[30] http://www.diphugovtcollege.org/DGC prospests 08-09.pdf
[31] http://nfp.pssi.in
[32] http://www.iopscience.iop.org/1402-4896/51/6/012/pdf/physcr_51_6_012.pdf
[33] http://www.iopsciences.iop.org/1402-4896/53/1/011/pdf/1402-4896_53_1_011.pdf
[34] http://www.niscair.res.in/sciencecommunication/abstractingjournals/isa_1jul08.asp
[35] http://en.wiktionary.org/wiki/Wikitionary%3ASandbox
[36] http://adsabs.harvard.edu/abs/1996PhyS..53...578
[37] http://en.wikipedia.org/wiki/Special:Contributions/Drabrh/File:Drabrhdouble_trios_saiph_star01.pdf
[38] http://en.wikipedia.org/wiki/File:Drabrh_bayer_rti.pdf
[39] http://en.wikipedia.org/wiki/File:Columb_drabrh.pdf
[40] http://en.wikipedia.org/wiki/File:Drabrh_double_trios.pdf
[41] http://en.wikipedia.org/wiki/File:Drabrhiterparabolic2007.pdf
[42] http://en.wikipedia.org/wiki/File:Drabrh_mctc_feedbackloop.pdf
[43] http://en.wikipedia.org/wiki/File:Drabrh_tasso_07.pdf

[44] http://en.wikipedia.org/wiki/File:Abstracts.pdf?page=2
[45] http://upload.wikimedia.org/wikipedia/en/5/50/EfilingAck5530228.pdf
[46] http://upload.wikimedia.org/wikipedia/en/c/c4/EfilingAck3442787.pdf
[47] http://www.pothi.com
[48] http://i-proclaimbookstore.com
[49] http://ipppserver.homelinux.org:8080/view/creators/Hazarika=3ADr=2EA=2EB=2ERajib=3A=3A.html
[50] http://www.ras.org.uk/members?recid=5531
[51] http://www.waset.org/NaturalandAppliedSciences.php?page=46

External links

• (http://www.diphugovtcollege.org/)
• Dr.A.B.Rajib Hazarika's profile on the LinkedIn website (http://in.linkedin.com/pub/dr-a-b-rajib-hazarika/25/506/549)
• (http://www.facebook.com/Drabrajib)

Rajah2770 (talk) 18:12, 7 February 2011 (UTC)

Fractional calculus

Fractional calculus is a branch of mathematical analysis that studies the possibility of taking real number powers or complex number powers of the differentiation operator

D = d/dx

and the integration operator J. (Usually J is used instead of I to avoid confusion with other I-like glyphs and identities.) In this context the term powers refers to iterative application or composition, in the same sense that f 2(x) = f(f(x)). For example, one may ask the question of meaningfully interpreting

√D = D^(1/2)

as a square root of the differentiation operator (an operator half iterate), i.e., an expression for some operator that when applied twice to a function will have the same effect as differentiation. More generally, one can look at the question of defining

D^a

for real-number values of a in such a way that when a takes an integer value n, the usual power of n-fold differentiation is recovered for n > 0, and the −nth power of J when n < 0. There are various reasons for looking at this question. One is that, in this way, the semigroup of powers D^n in the discrete variable n is seen inside a continuous semigroup (one hopes) with parameter a which is a real number. Continuous semigroups are prevalent in mathematics, and have an interesting theory. Notice here that fraction is then a misnomer for the exponent, since it need not be rational, but the term fractional calculus has become traditional. Fractional differential equations are a generalization of differential equations through the application of fractional calculus.


Nature of the fractional derivative

An important point is that the fractional derivative at a point x is a local property only when a is an integer; in non-integer cases we cannot say that the fractional derivative at x of a function f depends only on the graph of f very near x, in the way that integer-power derivatives certainly do. Therefore it is expected that the theory involves some sort of boundary conditions, involving information on the function further out. To use a metaphor, the fractional derivative requires some peripheral vision. As far as the existence of such a theory is concerned, the foundations of the subject were laid by Liouville in a paper from 1832. The fractional derivative of a function to order a is often now defined by means of the Fourier or Mellin integral transforms. [1]

Heuristics

A fairly natural question to ask is whether there exists an operator H, or half-derivative, such that H^2 f = D f. It turns out that there is such an operator, and indeed for any a > 0 there exists an operator P such that P applied a-fold yields f'(x); or, to put it another way, the definition of d^n y / dx^n can be extended to all real values of n.

To delve into a little detail, start with the Gamma function Γ, which extends factorials to non-integer values and is defined such that Γ(n+1) = n!. Assuming a function f(x) that is defined for x > 0, form the definite integral from 0 to x. Call this

(J f)(x) = ∫_0^x f(t) dt.

Repeating this process gives

(J^2 f)(x) = ∫_0^x (J f)(s) ds = ∫_0^x ∫_0^s f(t) dt ds,

and this can be extended arbitrarily. The Cauchy formula for repeated integration, namely

(J^n f)(x) = (1/(n−1)!) ∫_0^x (x−t)^(n−1) f(t) dt,

leads in a straightforward way to a generalization for real n. Simply using the Gamma function to remove the discrete nature of the factorial function (recalling that Γ(n+1) = n!, or equivalently Γ(n) = (n−1)!) gives us a natural candidate for fractional applications of the integral operator:

(J^a f)(x) = (1/Γ(a)) ∫_0^x (x−t)^(a−1) f(t) dt.

This is in fact a well-defined operator. It can be shown that the J operator satisfies

J^a J^b f = J^(a+b) f;

this relationship is called the semigroup property of fractional differintegral operators. Unfortunately, the comparable process for the derivative operator D is significantly more complex, but it can be shown that D is neither commutative nor additive in general.
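On monomials the fractional integral has the closed form J^a x^k = (Γ(k+1)/Γ(k+a+1)) x^(k+a), which makes the semigroup property easy to check numerically. A minimal sketch (the helper name is ours, not standard):

```python
from math import gamma, isclose

def J_monomial(k, a):
    """Fractional integral J^a of x^k, as (coefficient, new exponent).
    Closed form: J^a x^k = Gamma(k+1)/Gamma(k+a+1) * x^(k+a)."""
    return gamma(k + 1) / gamma(k + a + 1), k + a

# Apply the half-integral J^0.5 twice to x (the monomial with k = 1).
c1, k1 = J_monomial(1, 0.5)
c2, k2 = J_monomial(k1, 0.5)
coeff_twice = c1 * c2            # total coefficient after two half-integrals

# Compare with a single whole integral: J^1 x = x^2 / 2.
c_single, k_single = J_monomial(1, 1.0)

assert isclose(coeff_twice, c_single) and isclose(k2, k_single)
print(coeff_twice, k2)
```

The two half-integrals compose to the ordinary integral, as the semigroup property predicts.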

Fractional derivative of a simple function

Let us assume that f(x) is a monomial of the form

f(x) = x^k.

The first derivative is as usual

f'(x) = (d/dx) f(x) = k x^(k−1).

[Figure: the half derivative (purple curve) of the function (blue curve) together with the first derivative (red curve).]

Repeating this gives the more general result that

(d^a/dx^a) x^k = (k!/(k−a)!) x^(k−a),

which, after replacing the factorials with the Gamma function, leads us to

(d^a/dx^a) x^k = (Γ(k+1)/Γ(k−a+1)) x^(k−a).

For k = 1 and a = 1/2 we obtain the half-derivative of the function x as

(d^(1/2)/dx^(1/2)) x = (Γ(2)/Γ(3/2)) x^(1/2) = (2/√π) x^(1/2).

Repeating this process yields

(d^(1/2)/dx^(1/2)) (2/√π) x^(1/2) = (2/√π) (Γ(3/2)/Γ(1)) x^0 = 1,

which is indeed the expected result of

(d^(1/2)/dx^(1/2)) (d^(1/2)/dx^(1/2)) x = (d/dx) x = 1.

This extension of the above differential operator need not be constrained only to real powers. For example, the (1+i)th derivative of the (1−i)th derivative yields the 2nd derivative. Also notice that setting negative values for a yields integrals. The complete fractional derivative which will yield the same result as above is (for 0 < a < 1)

D^a f(x) = (d/dx) (J^(1−a) f)(x).


For arbitrary a, since the Gamma function is undefined for arguments whose real part is a negative integer, it is necessary to apply the fractional derivative after the integer derivative has been performed. For example,

D^(3/2) x = D^(1/2) (D x) = D^(1/2) 1 = (1/Γ(1/2)) x^(−1/2).
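The chain of identities for the half-derivative of x can be verified directly from the Gamma-function formula (a sketch; the function name is ours):

```python
from math import gamma, pi, sqrt, isclose

def frac_deriv_monomial(coeff, k, a):
    """d^a/dx^a of coeff * x^k, via Gamma(k+1)/Gamma(k-a+1) * x^(k-a).
    Returns (new_coefficient, new_exponent)."""
    return coeff * gamma(k + 1) / gamma(k - a + 1), k - a

# Half-derivative of x: expect (2/sqrt(pi)) * x^(1/2).
c, k = frac_deriv_monomial(1.0, 1, 0.5)
assert isclose(c, 2 / sqrt(pi)) and k == 0.5

# Applying the half-derivative again recovers d/dx x = 1.
c2, k2 = frac_deriv_monomial(c, k, 0.5)
assert isclose(c2, 1.0) and k2 == 0.0
print(c, c2)
```

Two successive half-derivatives reproduce the ordinary first derivative, exactly as derived above.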

Laplace transform

We can also come at the question via the Laplace transform. Noting that

L{J f}(s) = (1/s) L{f}(s)

and

L{J^2 f}(s) = (1/s^2) L{f}(s),

etc., we assert

J^a f = L^(−1){ s^(−a) L{f}(s) }.

For example,

J^a (t^k) = L^(−1){ Γ(k+1)/s^(a+k+1) } = (Γ(k+1)/Γ(a+k+1)) t^(a+k),

as expected. Indeed, given the convolution rule

L{f ∗ g} = L{f} L{g}

(and shorthanding p(x) = x^(a−1) for clarity) we find that

(J^a f)(t) = (1/Γ(a)) L^(−1){ L{p} L{f} } = (1/Γ(a)) (p ∗ f)(t) = (1/Γ(a)) ∫_0^t (t−τ)^(a−1) f(τ) dτ,

which is what Cauchy gave us above. Laplace transforms "work" on relatively few functions, but they are often useful for solving fractional differential equations.
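The convolution integral can also be evaluated numerically and compared against the closed form for monomials. A sketch (the function name and the quadrature scheme are our choices): substituting u = (t−τ)^a turns the weakly singular integrand into a smooth one, so a plain midpoint rule suffices.

```python
from math import gamma, isclose

def riemann_liouville(f, t, a, n=20000):
    """(J^a f)(t) = 1/Gamma(a) * integral_0^t (t-tau)^(a-1) f(tau) dtau.
    Substituting u = (t - tau)^a gives
    (1/(a*Gamma(a))) * integral_0^(t^a) f(t - u^(1/a)) du,
    which has a smooth integrand, handled here by the midpoint rule."""
    U = t ** a
    h = U / n
    s = sum(f(t - ((i + 0.5) * h) ** (1.0 / a)) for i in range(n))
    return h * s / (a * gamma(a))

# Half-integral of f(t) = t at t = 2; closed form: J^a t = t^(1+a)/Gamma(2+a).
a, t = 0.5, 2.0
exact = t ** (1 + a) / gamma(2 + a)
approx = riemann_liouville(lambda x: x, t, a)
assert isclose(approx, exact, rel_tol=1e-6)
print(approx, exact)
```

The quadrature result matches the Gamma-function closed form, confirming that the convolution form and the Cauchy-formula generalization agree.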


Riemann–Liouville integral

The classical form of fractional calculus is given by the Riemann–Liouville integral, essentially what has been described above. The theory for periodic functions, therefore including the 'boundary condition' of repeating after a period, is the Weyl differintegral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (so, applies to functions on the unit circle integrating to 0). By contrast the Grünwald–Letnikov derivative starts with the derivative instead of the integral.

Functional calculus

In the context of functional analysis, functions f(D) more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of D. The operators arising are examples of singular integral operators; and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available, within which fractional calculus can be discussed. See also Erdélyi–Kober operator, important in special function theory (Kober 1940), (Erdélyi 1950–51).

Applications

Fractional Conservation of Mass

As described by Wheatcraft and Meerschaert (2008) [2], a fractional conservation of mass equation is needed when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear. The referenced paper gives the fractional conservation of mass equation for fluid flow.

Fractional Advection Dispersion Equation

This equation has been shown useful for modeling contaminant flow in heterogeneous porous media [3] [4] [5].

WKB approximation

In the WKB approximation for the semiclassical treatment of a one-dimensional spatial system (x, t), the inverse of the potential inside the Hamiltonian is, in appropriate units, given by the half-integral of the density of states (ref: 6).

References

• Fractional Integrals and Derivatives: Theory and Applications, by Samko, S.; Kilbas, A. A.; and Marichev, O. Hardcover: 1006 pages. Publisher: Taylor & Francis Books. ISBN 2-88124-864-0
• Theory and Applications of Fractional Differential Equations, by Kilbas, A. A.; Srivastava, H. M.; and Trujillo, J. J. Amsterdam, Netherlands, Elsevier, February 2006. ISBN 0-444-51832-0 (http://www.elsevier.com/wps/find/bookdescription.cws_home/707212/description#description)
• An Introduction to the Fractional Calculus and Fractional Differential Equations, by Kenneth S. Miller, Bertram Ross (Editor). Hardcover: 384 pages. Publisher: John Wiley & Sons; 1 edition (May 19, 1993). ISBN 0-471-58884-9
• The Fractional Calculus; Theory and Applications of Differentiation and Integration to Arbitrary Order (Mathematics in Science and Engineering, V), by Keith B. Oldham, Jerome Spanier. Hardcover. Publisher: Academic Press; (November 1974). ISBN 0-12-525550-0
• Fractional Differential Equations. An Introduction to Fractional Derivatives, Fractional Differential Equations, Some Methods of Their Solution and Some of Their Applications (Mathematics in Science and Engineering, vol. 198), by Igor Podlubny. Hardcover. Publisher: Academic Press; (October 1998). ISBN 0-12-558840-2
• Fractional Calculus. An Introduction for Physicists, by Richard Herrmann. Hardcover. Publisher: World Scientific, Singapore; (February 2011). ISBN 978-981-4340-24-3 (http://www.worldscibooks.com/physics/8072.html)
• Fractals and quantum mechanics, by N. Laskin. Chaos Vol. 10, pp. 780-790 (2000). (http://link.aip.org/link/?CHAOEH/10/780/1)
• Fractals and Fractional Calculus in Continuum Mechanics, by A. Carpinteri (Editor), F. Mainardi (Editor). Paperback: 348 pages. Publisher: Springer-Verlag Telos; (January 1998). ISBN 3-211-82913-X
• Physics of Fractal Operators, by Bruce J. West, Mauro Bologna, Paolo Grigolini. Hardcover: 368 pages. Publisher: Springer Verlag; (January 14, 2003). ISBN 0-387-95554-2
• Fractional Calculus and the Taylor-Riemann Series, Rose-Hulman Undergrad. J. Math. Vol. 6(1) (2005).
• Operator of fractional derivative in the complex plane, by Petr Zavada, Commun. Math. Phys. 192, pp. 261-285, 1998. doi:10.1007/s002200050299 (available online [6] or as the arXiv preprint [7])
• Relativistic wave equations with fractional derivatives and pseudodifferential operators, by Petr Zavada, Journal of Applied Mathematics, vol. 2, no. 4, pp. 163-197, 2002. doi:10.1155/S1110757X02110102 (available online [8] or as the arXiv preprint [9])
• Fractional differentiation by neocortical pyramidal neurons, by Brian N Lundstrom, Matthew H Higgs, William J Spain & Adrienne L Fairhall, Nature Neuroscience, vol. 11 (11), pp. 1335-1342, 2008. doi:10.1038/nn.2212 (abstract [10])
• Equilibrium points, stability and numerical solutions of fractional-order predator-prey and rabies models, by Ahmed E., A.M.A. El-Sayed, H.A.A. El-Saka. Jour. Math. Anal. Appl. 325, 452 (2007).
• Kober, Hermann (1940). "On fractional integrals and derivatives". The Quarterly Journal of Mathematics (Oxford Series) 11 (1): 193–211. doi:10.1093/qmath/os-11.1.193.
• Erdélyi, Arthur (1950–51). "On some functional transformations". Rendiconti del Seminario Matematico dell'Università e del Politecnico di Torino 10: 217–234. MR0047818.
• Recent history of fractional calculus [11], by J. T. Machado, V. Kiryakova, F. Mainardi.


Notes

[1] For the history of the subject, see the thesis (in French): Stéphane Dugowson, Les différentielles métaphysiques (histoire et philosophie de la généralisation de l'ordre de dérivation), Thèse, Université Paris Nord (1994). (http://s.dugowson.free.fr/recherche/dones/index.html)
[2] Wheatcraft, S., Meerschaert, M. (2008). "Fractional Conservation of Mass." Advances in Water Resources 31, 1377-1381.
[3] Benson, D., Wheatcraft, S., Meerschaert, M. (2000). "Application of a fractional advection-dispersion equation." Water Resources Res. 36, 1403-1412.
[4] Benson, D., Wheatcraft, S., Meerschaert, M. (2000). "The fractional-order governing equation of Lévy motion." Water Resources Res. 36, 1413-1423.
[5] Benson, D., Schumer, R., Wheatcraft, S., Meerschaert, M. (2001). "Fractional dispersion, Lévy motion, and the MADE tracer tests." Transport Porous Media 42, 211-240.
[6] http://www.springerlink.com/content/2xbape94pk99k75a/
[7] http://arxiv.org/abs/funct-an/9608002
[8] http://www.hindawi.com/GetArticle.aspx?doi=10.1155/S1110757X02110102&e=cta
[9] http://arxiv.org/abs/hep-th/0003126
[10] http://www.nature.com/neuro/journal/v11/n11/abs/nn.2212.html
[11] http://mechatronics.ece.usu.edu/foc/wcica2010tw/Recent%20History%20of%20Fractional%20Calculus-typeset.pdf

The CRONE (R) Toolbox, a Matlab and Simulink Toolbox dedicated to fractional calculus, can be downloaded at http://cronetoolbox.ims-bordeaux.fr


External links

• Eric W. Weisstein. "Fractional Differential Equation." (http://mathworld.wolfram.com/FractionalDifferentialEquation.html) From MathWorld — A Wolfram Web Resource.
• MathWorld - Fractional calculus (http://mathworld.wolfram.com/FractionalCalculus.html)
• MathWorld - Fractional derivative (http://mathworld.wolfram.com/FractionalDerivative.html)
• Fractional Calculus (http://www.mathpages.com/home/kmath616/kmath616.htm) at MathPages
• Specialized journal: Fractional Calculus and Applied Analysis (http://www.diogenes.bg/fcaa/)
• Specialized journal: Fractional Differential Equations (FDE) (http://fde.ele-math.com/)
• Specialized journal: Communications in Fractional Calculus (http://www.nonlinearscience.com/journal_2218-3892.php) (ISSN 2218-3892)
• www.nasatech.com (http://www.nasatech.com/Briefs/Oct02/LEW17139.html)
• unr.edu (http://unr.edu/homepage/mcubed/FRG.html) (Broken Link)
• Igor Podlubny's collection of related books, articles, links, software, etc. (http://www.tuke.sk/podlubny/fc_resources.html)
• GigaHedron - Richard Herrmann's collection of books, articles, preprints, etc. (http://www.gigahedron.de)
• s.dugowson.free.fr (http://s.dugowson.free.fr/recherche/dones/index.html)
• History, Definitions, and Applications for the Engineer (http://www.nd.edu/~msen/Teaching/UnderRes/FracCalc.pdf) (PDF), by Adam Loverro, University of Notre Dame
• Fractional Calculus Modelling (http://www.fracalmo.org/)
• Introductory Notes on Fractional Calculus (http://www.xuru.org/fc/TOC.asp)
• Pseudodifferential operators and diffusive representation in modeling, control and signal (http://www.laas.fr/gt-opd/opdrd-en/index.html.en)

Code


A code is a rule for converting a piece of information (for example, a letter, word, phrase, or gesture) into another form or representation (one sign into another sign), not necessarily of the same type. In communications and information processing, encoding is the process by which information from a source is converted into symbols to be communicated. Decoding is the reverse process, converting these code symbols back into information understandable by a receiver. One reason for coding is to enable communication in places where ordinary spoken or written language is difficult or impossible. For example, in semaphore the configuration of flags held by a signaller or the arms of a semaphore tower encodes parts of the message, typically individual letters and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
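The encode/decode round trip can be sketched with a tiny fragment of International Morse Code (only three letters shown here; a full table would cover the whole alphabet):

```python
# A small fragment of International Morse Code.
MORSE = {"S": "...", "O": "---", "E": "."}
REVERSE = {v: k for k, v in MORSE.items()}

def encode(text):
    """Encode letters as Morse, separating codewords with spaces."""
    return " ".join(MORSE[ch] for ch in text)

def decode(signal):
    """Decode a space-separated Morse signal back to letters."""
    return "".join(REVERSE[word] for word in signal.split(" "))

msg = encode("SOS")
print(msg)                    # ... --- ...
assert decode(msg) == "SOS"
```

The separator between codewords matters: without the spaces (in real Morse, the inter-letter pauses), the signal would not be uniquely decodable.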

Theory

Morse code, a famous type of code.

In information theory and computer science, a code is usually considered as an algorithm which uniquely represents symbols from some source alphabet by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.

Before giving a mathematically precise definition, we give a brief example. The mapping

C = { a → 0, b → 01, c → 011 }

is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001011 can be grouped into codewords as 0 – 011 – 0 – 01 – 011, and these in turn can be decoded to the sequence of source symbols acabc.

Using terms from formal language theory, the precise mathematical definition of this concept is as follows: Let S and T be two finite sets, called the source and target alphabets, respectively. A code is a total function mapping each symbol from S to a sequence of symbols over T, and the extension of this function to a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension.
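The mapping implied by the worked decoding example (a → 0, b → 01, c → 011) and its extension by concatenation can be written directly. A sketch (encoding only; this particular code is not a prefix code, so decoding it takes more care than a simple left-to-right scan):

```python
# The source-to-target mapping from the example: codewords over {0, 1}.
C = {"a": "0", "b": "01", "c": "011"}

def extension(word):
    """Extension of the code: concatenate the codeword of each source symbol."""
    return "".join(C[s] for s in word)

# "acabc" encodes to the string that groups as 0-011-0-01-011.
assert extension("acabc") == "0011001011"
print(extension("acabc"))     # 0011001011
```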


Variable-length codes

In this section we consider codes that encode each source (clear text) character by a code word from some dictionary; concatenating such code words gives us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding. A prefix code is a code with the "prefix property": no valid code word in the system is a prefix (start) of any other valid code word in the set. Huffman coding is the best-known algorithm for deriving prefix codes, so prefix codes are also widely referred to as "Huffman codes", even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS W-CDMA 3G Wireless Standard. Kraft's inequality characterizes the sets of code word lengths that are possible in a prefix code; virtually any uniquely decodable one-to-many code, not necessarily a prefix one, must satisfy it.
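Both ideas fit in a few lines. A sketch of Huffman's algorithm using Python's heapq (the symbol frequencies below are made up for illustration), followed by checks of the prefix property and Kraft's inequality:

```python
import heapq

def huffman(freqs):
    """Build a prefix code from {symbol: frequency} via Huffman's algorithm."""
    # Heap entries: (frequency, tiebreak, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, i, c2 = heapq.heappop(heap)
        # Prepend 0 to one subtree's codewords and 1 to the other's.
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]

code = huffman({"e": 45, "t": 20, "a": 15, "o": 12, "z": 8})
words = list(code.values())

# Prefix property: no codeword is a prefix of another.
assert not any(u != v and v.startswith(u) for u in words for v in words)
# Kraft's inequality: sum of 2^(-length) <= 1 (equality for a full Huffman tree).
assert sum(2 ** -len(w) for w in words) == 1
print(code)
```

More frequent symbols receive shorter codewords, which is exactly what makes variable-length codes pay off when character probabilities differ.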

Error correcting codes

Codes may also be used to represent data in a way more resistant to errors in transmission or storage. Such a "code" is called an error-correcting code, and works by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hocquenghem, Turbo, Golay, Goppa, low-density parity-check codes, and space–time codes. Error detecting codes can be optimised to detect burst errors, or random errors.
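Hamming's classic (7,4) code illustrates the idea of crafted redundancy: three parity bits protect four data bits, and any single flipped bit can be located and corrected. A self-contained sketch (bit layout follows the standard convention of parity bits at positions 1, 2 and 4):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits: positions 1..7, parity at 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parity checks; their binary value points at the flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no detected error
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                           # corrupt one bit in transit
assert hamming74_correct(code) == data
print(hamming74_correct(code))         # [1, 0, 1, 1]
```

Each parity bit covers the positions whose index has a particular binary digit set, so the three check results read out the error position directly.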

Examples

Codes in communication used for brevity

A cable code replaces words (e.g., ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and, most importantly, less expensively. Codes can be used for brevity. When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single words (commonly five-letter groups) were developed, so that telegraphers became conversant with such "words" as BYOXO ("Are you trying to weasel out of our deal?"), LIOUY ("Why do you not answer my question?"), BMULD ("You're a skunk!"), or AYYLU ("Not clearly coded, repeat more clearly."). Code words were chosen for various reasons: length, pronounceability, etc. Meanings were chosen to fit perceived needs: commercial negotiations, military terms for military codes, diplomatic terms for diplomatic codes, any and all of the preceding for espionage codes. Codebooks and codebook publishers proliferated, including one run as a front for the American Black Chamber run by Herbert Yardley between the First and Second World Wars. The purpose of most of these codes was to save on cable costs. The use of data coding for data compression predates the computer era; an early example is the telegraph Morse code, where more-frequently used characters have shorter representations. Techniques such as Huffman coding are now used by computer-based algorithms to compress large data files into a more compact form for storage or transmission.

Character encodings

Probably the most widely known data communications code so far (aka character representation) in use today is ASCII. In one or another (somewhat compatible) version, it is used by nearly all personal computers, terminals, printers, and other communication equipment. It represents 128 characters with seven-bit binary numbers—that is, as a string of seven 1s and 0s. In ASCII a lowercase "a" is always 1100001, an uppercase "A" always 1000001, and so on. There are many other encodings, which represent each character by a byte (usually referred to as code pages), an integer code point (Unicode) or a byte sequence (UTF-8).
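The seven-bit patterns quoted above are easy to confirm (a minimal sketch):

```python
# ASCII code points rendered as seven-bit binary strings.
assert format(ord("a"), "07b") == "1100001"
assert format(ord("A"), "07b") == "1000001"
# The two patterns differ in a single bit (value 32), which is why
# ASCII case conversion is a one-bit flip.
assert ord("a") ^ ord("A") == 0b0100000
print(format(ord("a"), "07b"))   # 1100001
```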


Genetic code

Biological organisms contain genetic material that is used to control their function and development. This is the DNA, which contains units named genes that can produce proteins through a code (genetic code) in which a series of triplets of four possible nucleotides are translated into one of twenty possible amino acids.

Gödel code

In mathematics, a Gödel code was the basis for the proof of Gödel's incompleteness theorem. Here, the idea was to map mathematical notation to a natural number (using a Gödel numbering).

Other

There are codes using colors, like traffic lights, the color code employed to mark the nominal value of electrical resistors, or that of the trashcans devoted to specific types of garbage (paper, glass, biological, etc.). In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from an internet retailer. In military environments, specific sounds from the cornet are used for different purposes: to mark some moments of the day, to command the infantry in the battlefield, etc. Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes. Musical scores are the most common way to encode music. Specific games, such as chess, have their own code systems to record matches (chess notation).

Cryptography

In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead. See code (cryptography). Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomatic, business, etc.) to trivial (romance, games), can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., the sole requisite being prior agreement on the meaning by both the sender and the receiver.
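By way of illustration, a codebook in this cryptographic sense is just a pre-agreed dictionary mapping plaintext words to code words; the words below are invented for the example:

```python
# A toy codebook: whole words are replaced by pre-agreed code words.
# Both parties must hold the same table in advance.
CODEBOOK = {"attack": "picnic", "dawn": "teatime", "retreat": "siesta"}
DECODE = {v: k for k, v in CODEBOOK.items()}  # inverse table for the receiver

def encode(message):
    return " ".join(CODEBOOK.get(w, w) for w in message.split())

def decode(message):
    return " ".join(DECODE.get(w, w) for w in message.split())

print(encode("attack at dawn"))    # -> "picnic at teatime"
print(decode("picnic at teatime")) # -> "attack at dawn"
```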

Other examples

Other examples of encoding include:
• Encoding (in cognition) is a basic perceptual process of interpreting incoming stimuli; technically speaking, it is a complex, multi-stage process of converting relatively objective sensory input (e.g., light, sound) into subjectively meaningful experience.
• A content format is a specific encoding format for converting a specific type of data to information.
• Text encoding uses a markup language to tag the structure and other features of a text to facilitate processing by computers. (See also Text Encoding Initiative.)
• Semantics encoding of formal language A in formal language B is a method of representing all terms (e.g. programs or descriptions) of language A using language B.
• Electronic encoding transforms a signal into a code optimized for transmission or storage, generally done with a codec.
• Neural encoding is the way in which information is represented in neurons.
• Memory encoding is the process of converting sensations into memories.
• Television encoding: NTSC, PAL and SECAM

Other examples of decoding include:
• Digital-to-analog converter, the use of an analog circuit for decoding operations
• Decoding (computer science)
• Decoding methods, methods in communication theory for decoding codewords sent over a noisy channel
• Digital signal processing, the study of signals in a digital representation and the processing methods of these signals
• Word decoding, the use of phonics to decipher print patterns and translate them into the sounds of language


Codes and acronyms

Acronyms and abbreviations can be considered codes, and in a sense all languages and writing systems are codes for human thought. International Air Transport Association airport codes are three-letter codes used to designate airports and appear on bag tags. Occasionally a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word. For example, '30' was widely used in journalism to mean "end of story", and it is sometimes used in other contexts to signify "the end".


Simulation

Simulation is the imitation of some real thing, state of affairs, or process. The act of simulating something generally entails representing certain key characteristics or behaviours of a selected physical or abstract system. Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Training simulators include flight simulators for training aircraft pilots. Simulation is also used for scientific modeling of natural systems or human systems in order to gain insight into their functioning.[1]

Wooden mechanical horse simulator during WWI.

Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged: because it may not be accessible, it may be dangerous or unacceptable to engage, it is being designed but not yet built, or it may simply not exist.[2] Key issues in simulation include acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.


Classification and terminology

Historically, simulations used in different fields developed largely independently, but 20th century studies of systems theory and cybernetics, combined with the spreading use of computers across all those fields, have led to some unification and a more systematic view of the concept. Physical simulation refers to simulation in which physical objects are substituted for the real thing (some circles[3] use the term for computer simulations modelling selected laws of physics, but this article does not). These physical objects are often chosen because they are smaller or cheaper than the actual object or system. Interactive simulation is a special kind of physical simulation, often referred to as a human-in-the-loop simulation, in which physical simulations include human operators, such as in a flight simulator or a driving simulator. Human-in-the-loop simulations can include a computer simulation as a so-called synthetic environment.[4]

Human-in-the-loop simulation of outer space.

Computer simulation

A computer simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables, predictions may be made about the behaviour of the system.[1] Computer simulation has become a useful part of modeling many natural systems in physics, chemistry and biology,[5] and human systems in economics and social science (as in computational sociology), as well as in engineering, to gain insight into the operation of those systems.

Visualization of a direct numerical simulation model.

A good example of the usefulness of using computers to simulate can be found in the field of network traffic simulation. In such simulations, the model behaviour will change each simulation according to the set of initial parameters assumed for the environment. Traditionally, the formal modeling of systems has been via a mathematical model, which attempts to find analytical solutions enabling the prediction of the behaviour of the system from a set of parameters and initial conditions. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible. Several software packages exist for running computer-based simulation modeling (e.g. Monte Carlo simulation, stochastic modeling, multimethod modeling) that make the modeling almost effortless. Modern usage of the term "computer simulation" may encompass virtually any computer-based representation.
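Monte Carlo simulation, mentioned above, is easy to illustrate: the classic sketch below estimates π by sampling random points in the unit square and checking how many fall inside a quarter circle (the sample size and seed are arbitrary choices for the example):

```python
import random

def estimate_pi(samples=100_000, seed=0):
    """Monte Carlo simulation: sample random points in the unit square and
    count the fraction landing inside the quarter circle x**2 + y**2 <= 1.
    That fraction approximates pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159; accuracy improves with more samples
```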


Computer science

In computer science, simulation has some specialized meanings: Alan Turing used the term "simulation" to refer to what happens when a universal machine executes a state transition table (in modern terminology, a computer runs a program) that describes the state transitions, inputs and outputs of a subject discrete-state machine. The computer simulates the subject machine. Accordingly, in theoretical computer science the term simulation is a relation between state transition systems, useful in the study of operational semantics. Less theoretically, an interesting application of computer simulation is to simulate computers using computers. In computer architecture, a type of simulator, typically called an emulator, is often used to execute a program that has to run on some inconvenient type of computer (for example, a newly designed computer that has not yet been built or an obsolete computer that is no longer available), or in a tightly controlled testing environment (see Computer architecture simulator and Platform virtualization). For example, simulators have been used to debug a microprogram or sometimes commercial application programs, before the program is downloaded to the target machine. Since the operation of the computer is simulated, all of the information about the computer's operation is directly available to the programmer, and the speed and execution of the simulation can be varied at will. Simulators may also be used to interpret fault trees, or test VLSI logic designs before they are constructed. Symbolic simulation uses variables to stand for unknown values. In the field of optimization, simulations of physical processes are often used in conjunction with evolutionary computation to optimize control strategies...
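Turing's sense of simulation, in which a program steps through a state transition table describing a discrete-state machine, can be sketched in a few lines; the parity-checking machine below is a made-up example, not one from the literature:

```python
# Transition table for a made-up discrete-state machine: a parity
# checker that tracks whether it has seen an even or odd number of 1s.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def simulate(inputs, state="even"):
    """Simulate the machine by stepping through the table one input
    symbol at a time, exactly as a program 'runs' the description."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(simulate("1101"))  # three 1s seen -> "odd"
```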

Simulation in education and training

Simulation is extensively used for educational purposes. It is frequently used by way of adaptive hypermedia. Simulation is often used in the training of civilian and military personnel.[6] This usually occurs when it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations they will spend time learning valuable lessons in a "safe" virtual environment. Often the convenience is to permit mistakes during training for a safety-critical system. For example, in simSchool [7] teachers practice classroom management and teaching techniques on simulated students, which avoids "learning on the job" that can damage real students. There is a distinction, though, between simulations used for training and instructional simulation.

Training simulations typically come in one of three categories:[8]
• "live" simulation (where actual players use genuine systems in a real environment);
• "virtual" simulation (where actual players use simulated systems in a synthetic environment[4]), or
• "constructive" simulation (where virtual players use simulated systems in a synthetic environment).

Constructive simulation is often referred to as "wargaming" since it bears some resemblance to table-top war games in which players command armies of soldiers and equipment that move around a board. In standardized tests, "live" simulations are sometimes called "high-fidelity", producing "samples of likely performance", as opposed to "low-fidelity", "pencil-and-paper" simulations producing only "signs of possible performance",[9] but the distinction between high, moderate and low fidelity remains relative, depending on the context of a particular comparison. Simulations in education are somewhat like training simulations. They focus on specific tasks.
The term 'microworld' is used to refer to educational simulations which model some abstract concept rather than simulating a realistic object or environment, or in some cases model a real-world environment in a simplistic way so as to help a learner develop an understanding of the key concepts. Normally, a user can create some sort of construction within the microworld that will behave in a way consistent with the concepts being modeled. Seymour Papert was one of the first to advocate the value of microworlds, and the Logo programming environment developed by Papert is one of the most famous microworlds. As another example, the Global Challenge Award online STEM learning web site uses microworld simulations to teach science concepts related to global warming and the future of energy. Other projects for simulations in education are Open Source Physics, NetSim etc.

Management games (or business simulations) have been finding favour in business education in recent years.[10] Business simulations that incorporate a dynamic model enable experimentation with business strategies in a risk-free environment and provide a useful extension to case study discussions.

Social simulations may be used in social science classrooms to illustrate social and political processes in anthropology, economics, history, political science, or sociology courses, typically at the high school or university level. These may, for example, take the form of civics simulations, in which participants assume roles in a simulated society, or international relations simulations in which participants engage in negotiations, alliance formation, trade, diplomacy, and the use of force. Such simulations might be based on fictitious political systems, or be based on current or historical events. An example of the latter would be Barnard College's "Reacting to the Past" series of educational simulations.[11] The "Reacting to the Past" series also includes simulation games that address science education. In recent years, there has been increasing use of social simulations for staff training in aid and development agencies. The Carana simulation, for example, was first developed by the United Nations Development Programme, and is now used in a heavily revised form by the World Bank for training staff to deal with fragile and conflict-affected countries.[12]


Common User Interaction Systems for Virtual Simulations

As defined earlier, virtual simulations represent a specific category of simulation that utilizes simulation equipment to create a simulated world for the user. Virtual simulations allow users to interact with a virtual world. Virtual worlds operate on platforms of integrated software and hardware components. In this manner, the system can accept input from the user (e.g., body tracking, voice/sound recognition, physical controllers) and produce output to the user (e.g., visual display, aural display, haptic display).[13] Virtual simulations use the aforementioned modes of interaction to produce a sense of immersion for the user.

Virtual Simulation Input Hardware

There is a wide variety of input hardware available to accept user input for virtual simulations. The following list briefly describes several of them:

Body Tracking
The motion capture method is often used to record the user's movements and translate the captured data into inputs for the virtual simulation. For example, if a user physically turns their head, the motion would be captured by the simulation hardware in some way and translated to a corresponding shift in view within the simulation.
• Capture suits and/or gloves may be used to capture movements of the user's body parts. The systems may have sensors incorporated inside them to sense movements of different body parts (e.g., fingers). Alternatively, these systems may have exterior tracking devices or marks that can be detected by external ultrasound, optical receivers or electromagnetic sensors. Internal inertial sensors are also available on some systems. The units may transmit data either wirelessly or through cables.
• Eye trackers can also be used to detect eye movements so that the system can determine precisely where a user is looking at any given instant.

Physical Controllers
Physical controllers provide input to the simulation only through direct manipulation by the user. In virtual simulations, tactile feedback from physical controllers is highly desirable in a number of simulation environments.
• Omnidirectional treadmills can be used to capture the user's locomotion as they walk or run.

• High-fidelity instrumentation such as instrument panels in virtual aircraft cockpits provides users with actual controls to raise the level of immersion. For example, pilots can use the actual global positioning system controls from the real device in a simulated cockpit to help them practice procedures with the actual device in the context of the integrated cockpit system.

Voice/Sound Recognition
This form of interaction may be used either to interact with agents within the simulation (e.g., virtual people) or to manipulate objects in the simulation (e.g., information). Voice interaction presumably increases the level of immersion for the user.
• Users may use headsets with boom microphones or lapel microphones, or the room may be equipped with strategically located microphones.

Current Research into User Input Systems
Research into future input systems holds a great deal of promise for virtual simulations. Systems such as brain-computer interfaces (BCIs) offer the ability to further increase the level of immersion for virtual simulation users. Lee, Keinrath, Scherer, Bischof and Pfurtscheller[14] demonstrated that naïve subjects could be trained to use a BCI to navigate a virtual apartment with relative ease. Using the BCI, the authors found that subjects were able to freely navigate the virtual environment with relatively minimal effort. It is possible that these types of systems will become standard input modalities in future virtual simulation systems.


Virtual Simulation Output Hardware

There is a wide variety of output hardware available to deliver stimuli to users in virtual simulations. The following list briefly describes several of them:

Visual Display
Visual displays provide the visual stimulus to the user.
• Stationary displays can vary from a conventional desktop display to 360-degree wrap-around screens to stereo three-dimensional screens. Conventional desktop displays can vary in size from 15 to 60+ inches. Wrap-around screens are typically utilized in what is known as a Cave Automatic Virtual Environment (CAVE). Stereo three-dimensional screens produce three-dimensional images either with or without special glasses, depending on the design.
• Head-mounted displays (HMDs) have small displays that are mounted on headgear worn by the user. These systems are connected directly into the virtual simulation to provide the user with a more immersive experience. Weight, update rates and field of view are some of the key variables that differentiate HMDs. Naturally, heavier HMDs are undesirable as they cause fatigue over time. If the update rate is too slow, the system is unable to update the displays fast enough to correspond with a quick head turn by the user. Slower update rates tend to cause simulation sickness and disrupt the sense of immersion. Field of view, the angular extent of the world that is seen at a given moment, can vary from system to system and has been found to affect the user's sense of immersion.

Aural Display
Several different types of audio systems exist to help the user hear and localize sounds spatially. Special software can be used to produce 3D audio effects to create the illusion that sound sources are placed within a defined three-dimensional space around the user.
• Stationary conventional speaker systems may be used to provide dual or multi-channel surround sound.
However, external speakers are not as effective as headphones in producing 3D audio effects.[13]
• Conventional headphones offer a portable alternative to stationary speakers. They also have the added advantages of masking real-world noise and facilitating more effective 3D audio effects.[13]

Haptic Display
These displays provide a sense of touch to the user. This type of output is sometimes referred to as force feedback.
• Tactile tile displays use different types of actuators such as inflatable bladders, vibrators, low-frequency sub-woofers, pin actuators and/or thermo-actuators to produce sensations for the user.

• End effector displays can respond to user inputs with resistance and force.[13] These systems are often used in medical applications for remote surgeries that employ robotic instruments.[15]

Vestibular Display
These displays provide a sense of motion to the user. They often manifest as motion bases for virtual vehicle simulation such as driving simulators or flight simulators. Motion bases are fixed in place but use actuators to move the simulator in ways that can produce the sensations of pitching, yawing or rolling. The simulators can also move in such a way as to produce a sense of acceleration on all axes (e.g., the motion base can produce the sensation of falling).


Clinical healthcare simulators

Medical simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures as well as medical concepts and decision making to personnel in the health professions. Simulators have been developed for training procedures ranging from basics such as blood draw, to laparoscopic surgery[16] and trauma care. They are also important in helping to prototype new devices[17] for biomedical engineering problems. Currently, simulators are applied to research and development of tools for new therapies,[18] treatments[19] and early diagnosis[20] in medicine. Many medical simulators involve a computer connected to a plastic simulation of the relevant anatomy. Sophisticated simulators of this type employ a life-size mannequin that responds to injected drugs and can be programmed to create simulations of life-threatening emergencies. In other simulations, visual components of the procedure are reproduced by computer graphics techniques, while touch-based components are reproduced by haptic feedback devices combined with physical simulation routines computed in response to the user's actions. Medical simulations of this sort will often use 3D CT or MRI scans of patient data to enhance realism. Some medical simulations are developed to be widely distributed (such as web-enabled simulations[21] that can be viewed via standard web browsers) and can be interacted with using standard computer interfaces, such as the keyboard and mouse. Another important medical application of a simulator, although perhaps denoting a slightly different meaning of "simulator", is the use of a placebo drug, a formulation that simulates the active drug in trials of drug efficacy (see Placebo (origins of technical term)).

Improving Patient Safety through New Innovations

Patient safety is a concern in the medical industry. Patients have been known to suffer injuries and even death due to management error and failure to use the best standards of care and training. According to Building a National Agenda for Simulation-Based Medical Education (Eder-Van Hook, Jackie, 2004), "a health care provider's ability to react prudently in an unexpected situation is one of the most critical factors in creating a positive outcome in medical emergency, regardless of whether it occurs on the battlefield, freeway, or hospital emergency room." Eder-Van Hook (2004) also noted that medical errors kill up to 98,000 people per year, with estimated costs between $37 and $50 million, and $17 to $29 billion per year for preventable adverse events. "Deaths due to preventable adverse events exceed deaths attributable to motor vehicle accidents, breast cancer, or AIDS" (Eder-Van Hook, 2004). With statistics like these it is no wonder that improving patient safety is a prevalent concern in the industry. New innovative simulation training solutions are now being used to train medical professionals in an attempt to reduce the number of safety concerns that have adverse effects on patients. However, according to the article Does Simulation Improve Patient Safety? Self-efficacy, Competence, Operational Performance, and Patient Safety (Nishisaki A., Keren R., and Nadkarni, V., 2007), the jury is still out. Nishisaki states that "there is good evidence that simulation training improves provider and team self-efficacy and competence on manikins. There is also good evidence that procedural simulation improves actual operational performance in clinical settings.[22] However, no evidence yet shows that crew resource management training through simulation, despite its promise, improves team operational performance at the bedside. Also, no evidence to date proves that simulation training actually improves

patient outcome. Even so, confidence is growing in the validity of medical simulation as the training tool of the future." This could be because there are not enough research studies yet conducted to effectively determine the success of simulation initiatives to improve patient safety. Examples of [recently implemented] research simulations used to improve patient care [and its funding] can be found at Improving Patient Safety through Simulation Research (US Department of Health and Human Services) http://www.ahrq.gov/qual/simulproj.htm. One such attempt to improve patient safety through the use of simulation training is in pediatric care, to deliver just-in-time and/or just-in-place service. This training consists of 20 minutes of simulated training just before workers report to shift. It is hoped that the recentness of the training will increase the positive and reduce the negative results that have generally been associated with the procedure. The purpose of this study is to determine if just-in-time training improves patient safety and operational performance of orotracheal intubation, decreases occurrences of undesired associated events, and "tests the hypothesis that high fidelity simulation may enhance the training efficacy and patient safety in simulation settings." The conclusions, as reported in Abstract P38: Just-In-Time Simulation Training Improves ICU Physician Trainee Airway Resuscitation Participation without Compromising Procedural Success or Safety (Nishisaki A., 2008), were that simulation training improved resident participation in real cases but did not sacrifice the quality of service. It could therefore be hypothesized that by increasing the number of highly trained residents through the use of simulation training, simulation training does in fact increase patient safety. This hypothesis would have to be researched for validation and the results may or may not generalize to other situations.


History of simulation in healthcare

The first medical simulators were simple models of human patients.[23] Since antiquity, these representations in clay and stone were used to demonstrate clinical features of disease states and their effects on humans. Models have been found from many cultures and continents. These models have been used in some cultures (e.g., Chinese culture) as a "diagnostic" instrument, allowing women to consult male physicians while maintaining social laws of modesty. Models are used today to help students learn the anatomy of the musculoskeletal system and organ systems.[23]

Types of models

Active models
Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous "Harvey" mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation, and electrocardiography.

Interactive models
More recently, interactive models have been developed that respond to actions taken by a student or physician. Until recently, these simulations were two-dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgements, and also to make errors. The process of iterative learning through assessment, evaluation, decision making, and error correction creates a much stronger learning environment than passive instruction.

Computer simulators


Simulators have been proposed as an ideal tool for assessment of students' clinical skills.[24] For patients, "cybertherapy" can be used for sessions simulating traumatic experiences, from fear of heights to social anxiety.[25] Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These "lifelike" simulations are expensive and lack reproducibility. A fully functional "3Di" simulator would be the most specific tool available for teaching and measurement of clinical skills. Gaming platforms have been applied to create these virtual medical environments to create an interactive method for learning and application of information in a clinical context.[26] [27]

3DiTeams learner percussing the patient's chest in a virtual field hospital

Immersive disease state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers, symptomatic effects can be delivered to a participant, allowing them to experience the patient's disease state. Such a simulator meets the goals of an objective and standardized examination for clinical competence.[28] This system is superior to examinations that use "standard patients" because it permits the quantitative measurement of competence, as well as reproducing the same objective findings.[29]

Simulation in entertainment

Entertainment simulation is a term that encompasses many large and popular industries such as film, television, video games (including serious games) and rides in theme parks. Although modern simulation is thought to have its roots in training and the military, in the 20th century it also became a conduit for enterprises which were more hedonistic in nature. Advances in technology in the 1980s and 1990s caused simulation to become more widely used and it began to appear in movies such as Jurassic Park (1993) and in computer-based games such as Atari’s Battlezone.

History

Early History (1940s and '50s)
The first simulation game may have been created as early as 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann. This was a straightforward game that simulated a missile being fired at a target. The curve of the missile and its speed could be adjusted using several knobs. In 1958 a computer game called "Tennis for Two" was created by William Higinbotham; it simulated a tennis game between two players who could both play at the same time using hand controls and was displayed on an oscilloscope.[30] This was one of the first electronic video games to use a graphical display.

Modern Simulation (1980s-present)
Advances in technology in the 1980s made computers more affordable and more capable than they were in previous decades,[31] which facilitated the rise of computer gaming. The first video game consoles released in the 1970s and early '80s fell prey to the industry crash in 1983, but in 1985 Nintendo released the Nintendo Entertainment System (NES), which became the best-selling console in video game history.[32] In the 1990s computer games became widely popular with the release of such games as The Sims and Command and Conquer and the still increasing power of desktop computers. Today, computer simulation games such as World of Warcraft are played by millions of people around the world.

Computer-generated imagery was used in film to simulate objects as early as 1976, though in 1982 the movie Tron was the first film to use computer-generated imagery for more than a couple of minutes. However, the commercial failure of the movie may have caused the industry to step away from the technology.[33] In 1993, the movie Jurassic Park became the first popular film to use computer-generated graphics extensively, integrating the simulated dinosaurs almost seamlessly into live action scenes. This event transformed the film industry; in 1995 the movie Toy Story was the first film to use only computer-generated images, and by the new millennium computer-generated graphics were the leading choice for special effects in movies.[34] Simulators have been used for entertainment since the Link Trainer in the 1930s.[35] The first modern simulator ride to open at a theme park was Disney's Star Tours in 1987, soon followed by Universal's The Funtastic World of Hanna-Barbera in 1990, which was the first ride to be done entirely with computer graphics.[36]


Examples of entertainment simulation

Computer and video games
Simulation games, as opposed to other genres of video and computer games, represent or simulate an environment accurately. Moreover, they represent the interactions between the playable characters and the environment realistically. These kinds of games are usually more complex in terms of game play.[37] Simulation games have become incredibly popular among people of all ages.[38] Popular simulation games include SimCity, Tiger Woods PGA Tour and Virtonomics.

Film
Computer-generated imagery is "the application of the field of 3D computer graphics to special effects". This technology is used for visual effects because it is high in quality, controllable, and can create effects that would not be feasible using any other technology, either because of cost, resources or safety.[39] Computer-generated graphics can be seen in many live-action movies today, especially those of the action genre. Further, computer-generated imagery has almost completely supplanted hand-drawn animation in children's movies, which are increasingly computer-generated only. Examples of movies that use computer-generated imagery include Finding Nemo, 300 and Iron Man.

Theme park rides
Simulator rides are the progeny of military training simulators and commercial simulators, but they are different in a fundamental way. While military training simulators react realistically to the input of the trainee in real time, ride simulators only feel like they move realistically and move according to prerecorded motion scripts.[36] One of the first simulator rides, Star Tours, which cost $32 million, used a hydraulic motion-based cabin. The movement was programmed by a joystick.
Today’s simulator rides, such as The Amazing Adventures of Spider-man include elements to increase the amount of immersion experienced by the riders such as: 3D imagery, physical effects (spraying water or producing scents), and movement through an environment.[40] Examples of simulation rides include Mission Space and The Simpsons Ride.

Simulation


**Simulation and Manufacturing**

Manufacturing represents one of the most important applications of simulation. This technique is a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities like factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem.[41] Another important goal of manufacturing simulations is to quantify system performance. Common measures of system performance include the following:[42]

• Throughput under average and peak loads;
• System cycle time (how long it takes to produce one part);
• Utilization of resources, labor, and machines;
• Bottlenecks and choke points;
• Queuing at work locations;
• Queuing and delays caused by material-handling devices and systems;
• WIP storage needs;
• Staffing requirements;
• Effectiveness of scheduling systems;
• Effectiveness of control systems.
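As a concrete illustration of how such measures are obtained, the following is a minimal single-machine discrete-event sketch in Python. The exponential arrival and service distributions, the parameter values, and the function name are illustrative assumptions, not drawn from any real factory model.

```python
import random

def simulate_machine(n_parts=1000, mean_interarrival=5.0, mean_service=4.0, seed=42):
    """Minimal discrete-event model of one machine fed by random arrivals.

    Returns throughput, average cycle time, and machine utilization --
    the kinds of system-performance measures listed above.
    """
    rng = random.Random(seed)
    clock_free = 0.0          # time at which the machine next becomes idle
    busy_time = 0.0           # total time the machine spends working
    arrival = 0.0
    total_cycle = 0.0
    for _ in range(n_parts):
        arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(arrival, clock_free)        # wait if the machine is busy
        service = rng.expovariate(1.0 / mean_service)
        clock_free = start + service
        busy_time += service
        total_cycle += clock_free - arrival     # queueing delay + service time
    makespan = clock_free
    return {
        "throughput": n_parts / makespan,
        "avg_cycle_time": total_cycle / n_parts,
        "utilization": busy_time / makespan,
    }

stats = simulate_machine()
```

Real manufacturing studies use dedicated discrete-event tools with event calendars, multiple stations, and material-handling models; this sketch only shows the core bookkeeping behind the measures listed above.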

**More examples in different areas**

City and urban simulation

A city simulator can be a city-building game but can also be a tool used by urban planners to understand how cities are likely to evolve in response to various policy decisions. AnyLogic is an example of a modern, large-scale urban simulator designed for use by urban planners. City simulators are generally agent-based simulations with explicit representations for land use and transportation. UrbanSim and LEAM are examples of large-scale urban simulation models that are used by metropolitan planning agencies and military bases for land use and transportation planning.

**Classroom of the future**

The "classroom of the future" will probably contain several kinds of simulators, in addition to textual and visual learning tools. This will allow students to enter the clinical years better prepared, and with a higher skill level. The advanced student or postgraduate will have a more concise and comprehensive method of retraining, or of incorporating new clinical procedures into their skill set, and regulatory bodies and medical institutions will find it easier to assess the proficiency and competency of individuals. The classroom of the future will also form the basis of a clinical skills unit for continuing education of medical personnel; and in the same way that periodic flight training assists airline pilots, this technology will assist practitioners throughout their career. The simulator will be more than a "living" textbook; it will become an integral part of the practice of medicine. The simulator environment will also provide a standard platform for curriculum development in institutions of medical education.


**Digital Lifecycle Simulation**

Simulation solutions are increasingly being integrated with CAx (CAD, CAM, CAE, ...) solutions and processes. The use of simulation throughout the product lifecycle, especially at the earlier concept and design stages, has the potential to provide substantial benefits. These benefits range from direct cost savings, such as reduced prototyping and shorter time-to-market, to better-performing products and higher margins. For some companies, however, simulation has not provided the expected benefits. The research firm Aberdeen Group has found that nearly all best-in-class manufacturers use simulation early in the design process, as compared to three or four laggards who do not.

Simulation of airflow over an engine

The successful use of simulation early in the lifecycle has been largely driven by increased integration of simulation tools with the entire CAD, CAM and PLM solution set. Simulation solutions can now function across the extended enterprise in a multi-CAD environment, and include solutions for managing simulation data and processes and for ensuring that simulation results are made part of the product lifecycle history. The ability to use simulation across the entire lifecycle has been enhanced through improved user interfaces, such as tailorable interfaces and "wizards", which allow all appropriate PLM participants to take part in the simulation process.

**Disaster Preparedness and Simulation Training**

Simulation training has become a method for preparing people for disasters. Simulations can replicate emergency situations and track how learners respond. Disaster preparedness simulations can involve training on how to handle terrorist attacks, natural disasters, pandemic outbreaks, or other life-threatening emergencies. One organization that has used simulation training for disaster preparedness is CADE (Center for Advancement of Distance Education). CADE[43] has used a video game to prepare emergency workers for multiple types of attacks. As reported by News-Medical.Net, "The video game is the first in a series of simulations to address bioterrorism, pandemic flu, smallpox and other disasters that emergency personnel must prepare for."[44] Developed by a team from the University of Illinois at Chicago (UIC), the game allows learners to practice their emergency skills in a safe, controlled environment. The Emergency Simulation Program (ESP) at the British Columbia Institute of Technology (BCIT), Vancouver, British Columbia, Canada is another example of an organization that uses simulation to train for emergency situations. ESP uses simulation to train for the following situations: forest fire fighting, oil or chemical spill response, earthquake response, law enforcement, municipal fire fighting, hazardous material handling, military training, and response to terrorist attack.[45] One feature of the simulation system is the implementation of a "Dynamic Run-Time Clock", which allows simulations to run in a 'simulated' time frame, 'speeding up' or 'slowing down' time as desired.[45] Additionally, the system allows session recordings, picture-icon based navigation, file storage of individual simulations, multimedia components, and the launching of external applications.
At the University of Québec in Chicoutimi, a research team at the outdoor research and expertise laboratory (Laboratoire d'Expertise et de Recherche en Plein Air - LERPA) specializes in using wilderness backcountry accident simulations to verify emergency response coordination. Instructionally, one benefit of emergency training through simulations is that learner performance can be tracked through the system. This allows the developer to make adjustments as necessary or alert the educator on topics that may require additional attention. Another advantage is that the learner can be guided or trained on how to respond appropriately before continuing to the next emergency segment, an aspect that may not be available in the live environment. Some emergency training simulators also allow for immediate feedback, while other simulations may provide a summary and instruct the learner to engage in the learning topic again. In a live emergency situation, emergency responders do not have time to waste. Simulation training in this environment provides an opportunity for learners to gather as much information as they can and practice their knowledge in a safe environment. They can make mistakes without risk of endangering lives and be given the opportunity to correct their errors to prepare for the real-life emergency.
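The idea of a dynamic run-time clock, mapping wall-clock time onto simulated time with an adjustable scale factor, can be sketched as follows. The class name and interface are hypothetical, not based on the actual ESP implementation.

```python
import time

class RunTimeClock:
    """Sketch of a dynamic run-time clock: simulated time advances at an
    adjustable multiple of wall-clock time, so a training session can be
    'sped up' or 'slowed down' mid-exercise."""

    def __init__(self, scale=1.0):
        self.scale = scale
        self._sim_time = 0.0
        self._last_wall = time.monotonic()

    def now(self):
        """Return current simulated time, banking elapsed wall time."""
        wall = time.monotonic()
        self._sim_time += (wall - self._last_wall) * self.scale
        self._last_wall = wall
        return self._sim_time

    def set_scale(self, scale):
        self.now()           # bank simulated time at the old scale first
        self.scale = scale
```

For example, a clock created with `scale=60.0` turns one real second into one simulated minute, and `set_scale(0.0)` freezes the scenario for a debrief.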


**Engineering, technology or process simulation**

Simulation is an important feature in engineering systems or any system that involves many processes. For example, in electrical engineering, delay lines may be used to simulate propagation delay and phase shift caused by an actual transmission line. Similarly, dummy loads may be used to simulate impedance without simulating propagation; they are used in situations where propagation is unwanted. A simulator may imitate only a few of the operations and functions of the unit it simulates (contrast with: emulation).[46] Most engineering simulations entail mathematical modeling and computer-assisted investigation. There are many cases, however, where mathematical modeling is not reliable. Simulation of fluid dynamics problems often requires both mathematical and physical simulations. In these cases the physical models require dynamic similitude. Physical and chemical simulations also have direct realistic uses,[47] rather than research uses; in chemical engineering, for example, process simulations are used to give the process parameters immediately used for operating chemical plants, such as oil refineries.
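The delay-line example can be made concrete: an ideal delay line is a pure time shift, which for a sinusoid of frequency f is equivalent to a phase shift of 2*pi*f*delay radians. The following sketch checks that equivalence numerically; all values are illustrative.

```python
import math

def delayed_sample(t, freq_hz, delay_s):
    """Ideal delay-line model: the output is the input shifted in time."""
    return math.sin(2 * math.pi * freq_hz * (t - delay_s))

def phase_shifted_sample(t, freq_hz, delay_s):
    """Equivalent model: apply the corresponding phase shift directly."""
    phase = 2 * math.pi * freq_hz * delay_s
    return math.sin(2 * math.pi * freq_hz * t - phase)
```

A real transmission line adds frequency-dependent attenuation and dispersion; this sketch covers only the pure delay/phase-shift relationship named in the text.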

**Payment and Securities Settlement System Simulations**

Simulation techniques have also been applied to payment and securities settlement systems. Among the main users are central banks, which are generally responsible for the oversight of market infrastructure and entitled to contribute to the smooth functioning of the payment systems. Central banks have been using payment system simulations to evaluate things such as the adequacy or sufficiency of liquidity available (in the form of account balances and intraday credit limits) to participants (mainly banks) to allow efficient settlement of payments.[48] [49] The need for liquidity also depends on the availability and the type of netting procedures in the systems, thus some of the studies focus on system comparisons.[50] Another application is to evaluate risks related to events such as communication network breakdowns or the inability of participants to send payments (e.g. in case of possible bank failure).[51] This kind of analysis falls under the concepts of stress testing or scenario analysis. A common way to conduct these simulations is to replicate the settlement logic of the real payment or securities settlement system under analysis and then use real observed payment data. In the case of system comparison or system development, the other settlement logics naturally need to be implemented as well. To perform stress testing and scenario analysis, the observed data needs to be altered, e.g. some payments delayed or removed. To analyze the levels of liquidity, initial liquidity levels are varied. System comparisons (benchmarking) or evaluations of new netting algorithms or rules are performed by running simulations with a fixed set of data and varying only the system setups. Inference is usually done by comparing the benchmark simulation results to the results of altered simulation setups, using indicators such as unsettled transactions or settlement delays.
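The replicate-the-settlement-logic approach can be sketched with a toy real-time gross settlement loop. The function, the queueing rule, and the data are illustrative assumptions, far simpler than any real central-bank simulator.

```python
from collections import defaultdict

def settle(payments, opening_balance):
    """Toy gross-settlement loop: each payment settles immediately if the
    sender has funds, otherwise it queues and is retried whenever a later
    settlement brings in liquidity. Returns (settled count, unsettled count),
    the kind of indicators compared across benchmark and stressed runs."""
    balance = defaultdict(lambda: opening_balance)
    queue = []
    settled = 0

    def try_pay(payment):
        nonlocal settled
        sender, receiver, amount = payment
        if balance[sender] >= amount:
            balance[sender] -= amount
            balance[receiver] += amount
            settled += 1
            return True
        return False

    for p in payments:
        if not try_pay(p):
            queue.append(p)
        progress = True
        while progress:          # incoming funds may unlock queued payments
            before = len(queue)
            queue = [q for q in queue if not try_pay(q)]
            progress = len(queue) < before
    return settled, len(queue)
```

Running the same payment data with a lower `opening_balance` is the liquidity-variation experiment described above: fewer payments settle, and the unsettled count rises.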


**Space Shuttle Countdown Simulation**

Simulation is used at Kennedy Space Center (KSC) to train and certify Space Shuttle engineers during simulated launch countdown operations. The Space Shuttle engineering community participates in a launch countdown integrated simulation before each shuttle flight. This simulation is a virtual simulation where real people interact with simulated Space Shuttle vehicle and Ground Support Equipment (GSE) hardware. The Shuttle Final Countdown Phase Simulation, also known as S0044, involves countdown processes that integrate many of the Space Shuttle vehicle and GSE systems. Some of the Shuttle systems integrated in the simulation are the Main Propulsion System, Main Engines, Solid Rocket Boosters, ground Liquid Hydrogen and Liquid Oxygen, External Tank, Flight Controls, Navigation, and Avionics.[52]

Firing Room 1 configured for space shuttle launches

The high-level objectives of the Shuttle Final Countdown Phase Simulation are:
• To demonstrate Firing Room final countdown phase operations.
• To provide training for system engineers in recognizing, reporting and evaluating system problems in a time-critical environment.
• To exercise the launch team's ability to evaluate, prioritize and respond to problems in an integrated manner within a time-critical environment.
• To provide procedures to be used in performing failure/recovery testing of the operations performed in the final countdown phase.[53]

The Shuttle Final Countdown Phase Simulation takes place at the Kennedy Space Center Launch Control Center Firing Rooms. The firing room used during the simulation is the same control room where real launch countdown operations are executed. As a result, equipment used for real launch countdown operations is engaged.
Command and control computers, application software, engineering plotting and trending tools, launch countdown procedure documents, launch commit criteria documents, hardware requirement documents, and any other items used by the engineering launch countdown teams during real launch countdown operations are used during the simulation. The Space Shuttle vehicle hardware and related GSE hardware is simulated by mathematical models (written in Shuttle Ground Operations Simulator (SGOS) modeling language [54] ) that behave and react like real hardware. During the Shuttle Final Countdown Phase Simulation, engineers command and control hardware via real application software executing in the control consoles – just as if they were commanding real vehicle hardware. However, these real software applications do not interface with real Shuttle hardware during simulations. Instead, the applications interface with mathematical model representations of the vehicle and GSE hardware. Consequently, the simulations bypass sensitive and even dangerous mechanisms while providing engineering measurements detailing how the hardware would have reacted. Since these math models interact with the command and control application software, models and simulations are also used to debug and verify the functionality of application software.[55]


**Satellite Navigation Simulators**

The only true way to test GNSS receivers (commonly known as Sat-Navs in the commercial world) is by using an RF constellation simulator. A receiver that may, for example, be used on an aircraft can be tested under dynamic conditions without the need to take it on a real flight. The test conditions can be repeated exactly, and there is full control over all the test parameters; this is not possible in the 'real world' using actual signals. For testing receivers that will use the new Galileo (satellite navigation) system, there is no alternative, as the real signals do not yet exist.

**Communication Satellite Simulation**

Modern satellite communications systems (SatCom) are often large and complex, with many interacting parts and elements. In addition, the need for broadband connectivity on a moving vehicle has increased dramatically in the past few years for both commercial and military applications. To accurately predict and deliver high quality of service, SatCom system designers have to factor in terrain as well as atmospheric and meteorological conditions in their planning. To deal with such complexity, system designers and operators increasingly turn to computer models of their systems to simulate real-world operational conditions and gain insights into usability and requirements prior to final product sign-off. Modeling improves the understanding of the system by enabling the SatCom system designer or planner to simulate real-world performance by injecting the models with multiple hypothetical atmospheric and environmental conditions.
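One small building block of such a model, the link budget, can be sketched as follows: received power is transmit power plus antenna gains, minus free-space path loss and an assumed atmospheric/rain loss term. All parameter values in the sketch are hypothetical, not real system figures.

```python
import math

def received_power_dbw(tx_power_dbw, tx_gain_dbi, rx_gain_dbi,
                       distance_m, freq_hz, rain_loss_db=0.0):
    """Textbook link-budget sketch in decibel form.

    Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c).
    The rain_loss_db term stands in for the atmospheric and meteorological
    conditions a real SatCom model would compute from weather data.
    """
    c = 299_792_458.0                         # speed of light, m/s
    fspl_db = 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)
    return tx_power_dbw + tx_gain_dbi + rx_gain_dbi - fspl_db - rain_loss_db
```

Injecting different hypothetical rain-loss values into such a function is a miniature version of the scenario sweeps described above.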

Finance simulation

In finance, computer simulations are often used for scenario planning. Risk-adjusted net present value, for example, is computed from well-defined but not always known (or fixed) inputs. By imitating the performance of the project under evaluation, simulation can provide a distribution of NPV over a range of discount rates and other variables. Simulations are frequently used in financial training to engage participants in experiencing various historical as well as fictional situations. There are stock market simulations, portfolio simulations, risk management simulations or models, and forex simulations. Using these simulations in a training program allows for the application of theory to something akin to real life. As with other industries, the use of simulations can be technology- or case-study-driven.
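The NPV-distribution idea can be sketched with a simple Monte Carlo loop: draw a discount rate per trial and collect the resulting NPVs. The cash flows and the uniform rate distribution are illustrative assumptions.

```python
import random

def npv(cashflows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def npv_distribution(cashflows, rate_low, rate_high, n=10_000, seed=1):
    """Monte Carlo sketch: sample a discount rate uniformly for each trial
    and return the list of resulting NPVs. A fuller model would also draw
    the cash flows themselves from distributions."""
    rng = random.Random(seed)
    return [npv(cashflows, rng.uniform(rate_low, rate_high)) for _ in range(n)]

# Hypothetical project: 1000 invested now, 400 returned in each of 3 years,
# discount rate uncertain between 2% and 12%.
samples = npv_distribution([-1000.0, 400.0, 400.0, 400.0], 0.02, 0.12)
```

Summarizing `samples` (mean, percentiles, fraction below zero) gives exactly the kind of NPV distribution the text describes.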

Flight simulation

Flight Simulation Training Devices (FSTD) are used to train pilots on the ground. In comparison to training in an actual aircraft, simulation based training allows for the training of maneuvers or situations that may be impractical (or even dangerous) to perform in the aircraft, while keeping the pilot and instructor in a relatively low-risk environment on the ground. For example, electrical system failures, instrument failures, hydraulic system failures, and even flight control failures can be simulated without risk to the pilots or an aircraft. Instructors can also provide students with a higher concentration of training tasks in a given period of time than is usually possible in the aircraft. For example, conducting multiple instrument approaches in the actual aircraft may require significant time spent repositioning the aircraft, while in a simulation, as soon as one approach has been completed, the instructor can immediately preposition the simulated aircraft to an ideal (or less than ideal) location from which to begin the next approach. Flight simulation also provides an economic advantage over training in an actual aircraft. Once fuel, maintenance, and insurance costs are taken into account, the operating costs of an FSTD are usually substantially lower than the operating costs of the simulated aircraft. For some large transport category airplanes, the operating costs may be several times lower for the FSTD than the actual aircraft. Some people who use simulator software, especially flight simulator software, build their own simulator at home. Some people — in order to further the realism of their homemade simulator — buy used cards and racks that run the same software used by the original machine. While this involves solving the problem of matching hardware and

software, and the problem that hundreds of cards plug into many different racks, many still find that solving these problems is well worthwhile. Some are so serious about realistic simulation that they will buy real aircraft parts, like complete nose sections of written-off aircraft, at aircraft boneyards. This permits people to simulate a hobby that they are unable to pursue in real life.


Automobile simulator

An automobile simulator provides an opportunity to reproduce the characteristics of real vehicles in a virtual environment. It replicates the external factors and conditions with which a vehicle interacts, enabling a driver to feel as if they are sitting in the cab of their own vehicle. Scenarios and events are replicated with sufficient reality to ensure that drivers become fully immersed in the experience rather than simply viewing it as an educational exercise.

A soldier tests out a heavy-wheeled-vehicle driver simulator.

The simulator provides a constructive experience for the novice driver and enables more complex exercises to be undertaken by the more mature driver. For novice drivers, truck simulators provide an opportunity to begin their career by applying best practice. For mature drivers, simulation provides the ability to enhance good driving or to detect poor practice and to suggest the necessary steps for remedial action. For companies, it provides an opportunity to educate staff in the driving skills that achieve reduced maintenance costs, improved productivity and, most importantly, to ensure the safety of their actions in all possible situations.

Marine simulators

Bearing resemblance to flight simulators, marine simulators train ships' personnel. The most common marine simulators include:
• Ship's bridge simulators
• Engine room simulators
• Cargo handling simulators
• Communication / GMDSS simulators
• ROV simulators

Simulators like these are mostly used within maritime colleges, training institutions and navies. They often consist of a replication of a ships' bridge, with operating desk(s), and a number of screens on which the virtual surroundings are projected.

Military simulations

Military simulations, also known informally as war games, are models in which theories of warfare can be tested and refined without the need for actual hostilities. They exist in many different forms, with varying degrees of realism. In recent times, their scope has widened to include not only military but also political and social factors (for example, the NationLab series of strategic exercises in Latin America).[56] While many governments make use of simulation, both individually and collaboratively, little is known about the specifics of these models outside professional circles.


Robotics simulators

A robotics simulator is used to create embedded applications for a specific robot (or for robots in general) without depending on the 'real' robot. In some cases, these applications can be transferred to the real robot (or rebuilt) without modification. Robotics simulators allow reproducing situations that cannot be 'created' in the real world because of cost, time, or the 'uniqueness' of a resource. A simulator also allows fast robot prototyping. Many robot simulators feature physics engines to simulate a robot's dynamics.
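The integration core of such a physics engine can be sketched as a point mass advanced with semi-implicit Euler steps; real simulators add rigid-body dynamics, contacts, and sensor models on top of this loop. All parameters here are illustrative.

```python
def simulate_robot(mass=10.0, force=5.0, dt=0.01, steps=100):
    """Minimal physics-engine loop: a robot modeled as a point mass driven
    by a constant force, integrated with semi-implicit Euler.

    Returns (position, velocity) after `steps` time steps of size `dt`.
    """
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        acc = force / mass           # Newton's second law: a = F / m
        vel += acc * dt              # update velocity first...
        pos += vel * dt              # ...then position (semi-implicit Euler)
    return pos, vel

pos, vel = simulate_robot()
```

After 1 simulated second the velocity matches the analytic value a*t = 0.5 m/s exactly, and the position is close to the analytic 0.5*a*t^2 = 0.25 m, with a small discretization offset that shrinks as `dt` is reduced.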

Biomechanics simulators

A biomechanics simulator is used to analyze walking dynamics, study sports performance, simulate surgical procedures, analyze joint loads, design medical devices, and animate human and animal movement. A neuromechanical simulator combines biomechanical and biologically realistic neural-network simulation, allowing the user to test hypotheses about the neural basis of behavior in a physically accurate 3-D virtual environment.

**Sales process simulators**

Simulations are useful in modeling the flow of transactions through business processes, such as in the field of sales process engineering, to study and improve the flow of customer orders through various stages of completion (say, from an initial proposal for providing goods/services through order acceptance and installation). Such simulations can help predict how improvements in methods might affect variability, cost, labor time, and the quantity of transactions at various stages in the process. A full-featured computerized process simulator can be used to depict such models, as can simpler educational demonstrations using spreadsheet software, pennies being transferred between cups based on the roll of a die, or dipping into a tub of colored beads with a scoop.[57]
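The pennies-and-cups demonstration translates directly into a small Monte Carlo model: orders advance through a series of stages, each with an assumed pass probability. The stage structure and probabilities below are illustrative, not taken from any real sales process.

```python
import random

def simulate_pipeline(n_orders=10_000, stage_pass=(0.5, 0.7, 0.9), seed=7):
    """Monte Carlo sketch of orders flowing through sales-process stages
    (e.g. proposal -> acceptance -> installation), each with an assumed
    pass probability.

    Returns a list counting how many orders reached each stage, starting
    with all orders entering the pipeline.
    """
    rng = random.Random(seed)
    counts = [0] * (len(stage_pass) + 1)   # orders reaching each stage
    for _ in range(n_orders):
        counts[0] += 1
        for i, p in enumerate(stage_pass):
            if rng.random() > p:           # order drops out at this stage
                break
            counts[i + 1] += 1
    return counts

counts = simulate_pipeline()
```

With these assumed probabilities, roughly 0.5 * 0.7 * 0.9 = 31.5% of orders complete all stages; changing a single stage's probability shows how a method improvement propagates to the counts at every later stage.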

**Simulation and games**

Strategy games — both traditional and modern — may be viewed as simulations of abstracted decision-making for the purpose of training military and political leaders (see History of Go for an example of such a tradition, or Kriegsspiel for a more recent example). Many other video games are simulators of some kind. Such games can simulate various aspects of reality, from business, to government, to construction, to piloting vehicles (see above).

Historical usage

Historically, the word had negative connotations:

…for Distinction Sake, a Deceiving by Words, is commonly called a Lye, and a Deceiving by Actions, Gestures, or Behavior, is called Simulation…
—Robert South, 1697, p. 525

However, the connection between simulation and dissembling later faded out and is now only of linguistic interest.[58]


References

[1] In the words of the Simulation article (http://www.modelbenders.com/encyclopedia/encyclopedia.html) in Encyclopedia of Computer Science, "designing a model of a real or imagined system and conducting experiments with that model".
[2] Sokolowski, J.A., Banks, C.M. (2009). Principles of Modeling and Simulation. Hoboken, NJ: Wiley. p. 6. ISBN 0470289430.
[3] For example in computer graphics (http://www.siggraph.org/s2007/attendees/papers/12.html) (http://wiki.blender.org/index.php/BSoD/Physical_Simulation).
[4] Thales defines synthetic environment as "the counterpart to simulated models of sensors, platforms and other active objects" for "the simulation of the external factors that affect them" (http://www.thalesresearch.com/Default.aspx?tabid=181) while other vendors use the term for more visual, virtual reality-style simulators (http://www.cae.com/www2004/Products_and_Services/Civil_Simulation_and_Training/Simulation_Equipment/Visual_Solutions/Synthetic_Environments/index.shtml).
[5] For a popular research project in the field of biochemistry where "computer simulation is particularly well suited to address these questions" (http://folding.stanford.edu/Pande/Main), see Folding@Home.
[6] For an academic take on a training simulator, see e.g. Towards Building an Interactive, Scenario-based Training Simulator (http://gel.msu.edu/magerko/papers/11TH-CGF-058.pdf); for medical application, Medical Simulation Training Benefits (http://www.immersion.com/medical/benefits1.php) as presented by a simulator vendor; and for military practice, A civilian's guide to US defense and security assistance to Latin America and the Caribbean (http://ciponline.org/facts/exe.htm) published by the Center for International Policy.
[7] http://www.simschool.org
[8] Classification used by the Defense Modeling and Simulation Office.
[9] "High Versus Low Fidelity Simulations: Does the Type of Format Affect Candidates' Performance or Perceptions?" (http://www.ipmaac.org/conf/03/havighurst.pdf)
[10] For example, the All India Management Association (http://www.aima-ind.org/) maintains that, playing to win, participants "imbibe new forms of competitive behavior that are ideal for today's highly chaotic business conditions" (http://www.aima-ind.org/management_games.asp), and IBM claims that "the skills honed playing massive multiplayer dragon-slaying games like World of Warcraft can be useful when managing modern multinationals".
[11] "Reacting to the Past Home Page" (http://www.barnard.columbia.edu/reacting/)
[12] "Carana," at 'PaxSims' blog, 27 January 2009 (http://paxsims.wordpress.com/2009/01/27/carana/)
[13] Sherman, W.R., Craig, A.B. (2003). Understanding Virtual Reality. San Francisco, CA: Morgan Kaufmann. ISBN 1558603530.
[14] Leeb, R., Lee, F., Keinrath, C., Scherer, R., Bischof, H., Pfurtscheller, G. (2007). "Brain-Computer Communication: Motivation, Aim, and Impact of Exploring a Virtual Apartment". IEEE Transactions on Neural Systems and Rehabilitation Engineering 15 (4): 473–481. doi:10.1109/TNSRE.2007.906956.
[15] Zahraee, A.H., Szewczyk, J., Paik, J.K., Guillaume, M. (2010). Robotic hand-held surgical device: evaluation of end-effector's kinematics and development of proof-of-concept prototypes. Proceedings of the 13th International Conference on Medical Image Computing and Computer Assisted Intervention, Beijing, China.
[16] Ahmed K, Keeling AN, Fakhry M, Ashrafian H, Aggarwal R, Naughton PA, Darzi A, Cheshire N, et al. (January 2010). "Role of Virtual Reality Simulation in Teaching and Assessing Technical Skills in Endovascular Intervention". J Vasc Interv Radiol 21.
[17] Narayan, Roger; Kumta, Prashant; Sfeir, Charles; Lee, Dong-Hyun; Choi, Daiwon; Olton, Dana (October 2004). "Nanostructured ceramics in medical devices: Applications and prospects" (http://www.ingentaconnect.com/content/tms/jom/2004/00000056/00000010/art00011). JOM 56 (10): 38–43. doi:10.1007/s11837-004-0289-x. PMID 11196953.
[18] Couvreur P, Vauthier C (July 2006). "Nanotechnology: intelligent design to treat complex disease". Pharm. Res. 23 (7): 1417–50. doi:10.1007/s11095-006-0284-8. PMID 16779701.
[19] Hede S, Huilgol N (2006). ""Nano": the new nemesis of cancer" (http://www.cancerjournal.net/article.asp?issn=0973-1482;year=2006;volume=2;issue=4;spage=186;epage=195;aulast=Hede). J Cancer Res Ther 2 (4): 186–95. doi:10.4103/0973-1482.29829. PMID 17998702.
[20] Leary SP, Liu CY, Apuzzo ML (June 2006). "Toward the emergence of nanoneurosurgery: part III—nanomedicine: targeted nanotherapy, nanosurgery, and progress toward the realization of nanoneurosurgery" (http://meta.wkhealth.com/pt/pt-core/template-journal/lwwgateway/media/landingpage.htm?issn=0148-396X&volume=58&issue=6&spage=1009). Neurosurgery 58 (6): 1009–26; discussion 1009–26. doi:10.1227/01.NEU.0000217016.79256.16. PMID 16723880.
[21] http://vam.anest.ufl.edu/wip.html
[22] Nishisaki A, Keren R, Nadkarni V (June 2007). "Does simulation improve patient safety? Self-efficacy, competence, operational performance, and patient safety" (http://linkinghub.elsevier.com/retrieve/pii/S1932-2275(07)00025-0). Anesthesiol Clin 25 (2): 225–36. doi:10.1016/j.anclin.2007.03.009. PMID 17574187.
[23] Meller, G. (1997). "A Typology of Simulators for Medical Education" (http://www.medsim.com/profile/article1.html). Journal of Digital Imaging.
[24] Murphy D, Challacombe B, Nedas T, Elhage O, Althoefer K, Seneviratne L, Dasgupta P. (May 2007). "[Equipment and technology in robotics]" (in Spanish). Arch. Esp. Urol. 60 (4): 349–55. PMID 17626526.
[25] "In Cybertherapy, Avatars Assist With Healing" (http://www.nytimes.com/2010/11/23/science/23avatar.html?_r=1&ref=science). New York Times. 2010-11-22. Retrieved 2010-11-23.
[26] Dagger, Jacob (May–June 2008). Update: "The New Game Theory" (http://www.dukemagazine.duke.edu/dukemag/issues/050608/depupd.html). 94. Duke Magazine. Retrieved 2011-02-08.
[27] Steinberg, Scott (2011-01-31). "How video games can make you smarter" (http://articles.cnn.com/2011-01-31/tech/video.games.smarter.steinberg_1_video-games-interactive-simulations-digital-world?_s=PM:TECH). Cable News Network (CNN Tech). Retrieved 2011-02-08.
[28] Vlaovic PD, Sargent ER, Boker JR, et al. (2008). "Immediate impact of an intensive one-week laparoscopy training program on laparoscopic skills among postgraduate urologists" (http://openurl.ingenta.com/content/nlm?genre=article&issn=1086-8089&volume=12&issue=1&spage=1&aulast=Vlaovic). JSLS 12 (1): 1–8. PMID 18402731.
[29] Leung J, Foster E (April 2008). "How do we ensure that trainees learn to perform biliary sphincterotomy safely, appropriately, and effectively?" (http://www.current-reports.com/article_frame.cfm?PubID=GR10-2-2-03&Type=Abstract). Curr Gastroenterol Rep 10 (2): 163–8. doi:10.1007/s11894-008-0038-3. PMID 18462603.
[30] http://www.pong-story.com/intro.htm
[31] http://homepages.vvm.com/~jhunt/compupedia/History%20of%20Computers/history_of_computers_1980.htm
[32] "Video Game Console Timeline - Video Game History - Xbox 360 - TIME Magazine" (http://www.time.com/time/covers/1101050523/console_timeline/). Time. 2005-05-23. Retrieved 2010-05-23.
[33] http://design.osu.edu/carlson/history/tron.html
[34] http://www.beanblossom.in.us/larryy/cgi.html
[35] http://www.starksravings.com/linktrainer/linktrainer.htm
[36] http://www.trudang.com/simulatr/simulatr.html
[37] http://open-site.org/Games/Video_Games/Simulation
[38] http://www.ibisworld.com/industry/retail.aspx?indid=2003&chid=1
[39] http://www.sciencedaily.com/articles/c/computer-generated_imagery.htm
[40] http://www.awn.com/mag/issue4.02/4.02pages/kenyonspiderman.php3
[41] Benedettini, O., Tjahjono, B. (2008). "Towards an improved tool to facilitate simulation modeling of complex manufacturing systems". International Journal of Advanced Manufacturing Technology 43 (1/2): 191–9. doi:10.1007/s00170-008-1686-z.
[42] Banks, J., Carson J., Nelson B.L., Nicol, D. (2005). Discrete-event system simulation (4th ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 0130887021.
[43] CADE - http://www.uic.edu/sph/cade/
[44] News-Medical.Net article - http://www.news-medical.net/news/2005/10/27/14106.aspx
[45] http://www.straylightmm.com/
[46] Federal Standard 1037C
[47] D. Passeri et al. (May 2009). "Analysis of 3D stacked fully functional CMOS Active Pixel Sensor detectors" (http://meroli.web.cern.ch/meroli/Analysisof3Dstacked.html). Journal of Instrumentation 4 (4): 4009. doi:10.1088/1748-0221/4/04/P04009.
[48] Leinonen (ed.): Simulation studies of liquidity needs, risks and efficiency in payment networks (Bank of Finland Studies E:39/2007). Simulation publications (http://pss.bof.fi/Pages/Publications.aspx)
[49] Neville Arjani: Examining the Trade-Off between Settlement Delay and Intraday Liquidity in Canada's LVTS: A Simulation Approach (Working Paper 2006-20, Bank of Canada). Simulation publications (http://pss.bof.fi/Pages/Publications.aspx)
[50] Johnson, K. - McAndrews, J. - Soramäki, K.: 'Economizing on Liquidity with Deferred Settlement Mechanisms' (Reserve Bank of New York Economic Policy Review, December 2004)
[51] H. Leinonen (ed.): Simulation analyses and stress testing of payment networks (Bank of Finland Studies E:42/2009). Simulation publications (http://pss.bof.fi/Pages/Publications.aspx)
[52] Sikora, E.A. (2010, July 27). Space Shuttle Main Propulsion System expert, John F. Kennedy Space Center. Interview.
[53] Shuttle Final Countdown Phase Simulation. National Aeronautics and Space Administration KSC Document # RTOMI S0044, Revision AF05, 2009.
[54] Shuttle Ground Operations Simulator (SGOS) Summary Description Manual. National Aeronautics and Space Administration KSC Document # KSC-LPS-SGOS-1000, Revision 3 CHG-A, 1995.
[55] Math Model Main Propulsion System (MPS) Requirements Document. National Aeronautics and Space Administration KSC Document # KSCL-1100-0522, Revision 9, June 2009.
[56] See, for example, United States Joint Forces Command "Multinational Experiment 4" (http://www.jfcom.mil/about/experiments/mne4.htm)
[57] Paul H. Selden (1997). Sales Process Engineering: A Personal Workshop. Milwaukee, WI: ASQ Quality Press. ISBN 0873894189.
[58] South, in the passage quoted, was speaking of the differences between a falsehood and an honestly mistaken statement; the difference being that in order for the statement to be a lie the truth must be known, and the opposite of the truth must have been knowingly uttered. And, from this, to the extent to which a lie involves deceptive words, a simulation involves deceptive actions, deceptive gestures, or deceptive behavior. Thus, it would seem, if a simulation is false, then the truth must be known (in order for something other than the truth to be presented in its stead); and, for the simulation to simulate. Because, otherwise, one would not know what to offer up in simulation. Bacon's essay Of Simulation and Dissimulation (http://www.authorama.com/essays-of-francis-bacon-7.html) expresses somewhat similar views; it is also significant that Samuel Johnson thought so highly of South's definition that he used it in the entry for simulation in his Dictionary of the English Language.


Further reading

• C. Aldrich (2003). Learning by Doing: A Comprehensive Guide to Simulations, Computer Games, and Pedagogy in e-Learning and Other Educational Experiences. San Francisco: Pfeifer — John Wiley & Sons. ISBN 0787977357.
• C. Aldrich (2004). Simulations and the future of learning: an innovative (and perhaps revolutionary) approach to e-learning. San Francisco: Pfeifer — John Wiley & Sons. ISBN 0787969621.
• Steve Cohen (2006). Virtual Decisions. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0805849947.
• R. Frigg, S. Hartmann (2007). "Models in Science" (http://plato.stanford.edu/entries/models-science/). Stanford Encyclopedia of Philosophy.
• S. Hartmann (1996). "The World as a Process: Simulations in the Natural and Social Sciences" (http://philsci-archive.pitt.edu/archive/00002412/). In R. Hegselmann, et al. Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Theory and Decision Library. Dordrecht: Kluwer. pp. 77–100.
• J.P. Hertel (2002). Using Simulations to Promote Learning in Higher Education. Sterling, Virginia: Stylus. ISBN 1579220525.
• P. Humphreys (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press. ISBN 0195158709.
• F. Percival, S. Lodge, D. Saunders (1993). The Simulation and Gaming Yearbook: Developing Transferable Skills in Education and Training. London: Kogan Page.
• D. Saunders, ed (2000). The International Simulation and Gaming Research Yearbook. London: Kogan Page.
• Roger D. Smith: Simulation Article (http://www.modelbenders.com/encyclopedia/encyclopedia.html), Encyclopedia of Computer Science, Nature Publishing Group, ISBN 0-333-77879-0.
• Roger D. Smith: "Simulation: The Engine Behind the Virtual World" (http://www.modelbenders.com/Bookshop/techpapers.html), eMatter, December 1999.
• R. South (1688). "A Sermon Delivered at Christ-Church, Oxon., Before the University, Octob. 14. 1688: Prov. XII.22 Lying Lips are abomination to the Lord", pp. 519–657 in South, R., Twelve Sermons Preached Upon Several Occasions (Second Edition), Volume I, Printed by S.D. for Thomas Bennet, (London), 1697.
• Eric Winsberg (1999). Sanctioning Models: The epistemology of simulation (http://www.cas.usf.edu/~ewinsb/SiC_Eric_Winsberg.pdf), in Sismondo, Sergio and Snait Gissis (eds.) (1999), Modeling and Simulation. Special Issue of Science in Context 12.
• Eric Winsberg (2001). "Simulations, Models and Theories: Complex Physical Systems and their Representations". Philosophy of Science 68: 442–454.
• Eric Winsberg (2003). "Simulated Experiments: Methodology for a Virtual World" (http://www.cas.usf.edu/~ewinsb/methodology.pdf) (PDF). Philosophy of Science 70: 105–125. doi:10.1086/367872.
• Joseph Wolfe, David Crookall (1998). "Developing a scientific knowledge of simulation/gaming" (http://sag.sagepub.com/cgi/reprint/29/1/7). Simulation & Gaming: an International Journal of Theory, Design and Research 29 (1): 7–19.
• Ellen K. Levy (2004). "Synthetic Lighting: Complex Simulations of Nature". Photography Quarterly (88): 5–9.

External links

• Bibliographies containing more references (http://www.unice.fr/sg/resources/bibliographies.htm) to be found on the website of the journal Simulation & Gaming (http://www.unice.fr/sg/).

Plasma modeling


Plasma modeling refers to solving equations of motion that describe the state of a plasma. It is generally coupled with Maxwell's equations for electromagnetic fields (or Poisson's equation for electrostatic fields). There are several main types of plasma models: single particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic, and as a system of many particles.

Single Particle Description

The single particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz Force Law. In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point.
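The superposition of fast gyration and slow drift can be seen in a short numerical sketch. The field values, charge, mass, and step sizes below are illustrative assumptions, and the Boris scheme is a standard particle pusher rather than anything prescribed by this article:

```python
import numpy as np

# Charged-particle motion in imposed E and B fields (single-particle model).
# Uses the standard Boris rotation, which conserves the gyro-speed when E = 0.
# Field values, charge/mass, and the time step are illustrative assumptions.

q, m = 1.0, 1.0
E = np.array([0.0, 0.0, 0.0])       # imposed electric field
B = np.array([0.0, 0.0, 1.0])       # uniform magnetic field along z
dt = 0.05

def boris_step(x, v):
    """Advance position and velocity by one time step dt."""
    v_minus = v + (q * E / m) * (dt / 2)          # first half electric kick
    t = (q * B / m) * (dt / 2)                    # rotation vector
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)       # magnetic rotation
    v_new = v_plus + (q * E / m) * (dt / 2)       # second half electric kick
    return x + v_new * dt, v_new

x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    x, v = boris_step(x, v)

# With E = 0 the speed is conserved and the particle gyrates around a
# fixed guiding center instead of drifting away.
print(abs(np.linalg.norm(v) - 1.0) < 1e-9)
```

With a nonzero E perpendicular to B, the same loop exhibits the slow E×B drift of the guiding center described above.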

Kinetic Description

The kinetic model is the most fundamental way to describe a plasma, producing a distribution function f(x, v, t), where the independent variables x and v are position and velocity, respectively. A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, the Vlasov equation, which contains the self-consistent collective electromagnetic field, or the Fokker–Planck equation, in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations.
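For reference, the Vlasov equation mentioned above takes the standard collisionless form for a species of charge q and mass m, with the self-consistent fields E and B:

```latex
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  + \frac{q}{m}\,\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = 0
```

Adding a collision operator on the right-hand side recovers the Boltzmann or Fokker–Planck forms.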

Fluid Description

To reduce the complexities of the kinetic description, the fluid model describes the plasma in terms of macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for these macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation. The fluid equations are not closed without the determination of transport coefficients such as mobility, diffusion coefficient, and averaged collision frequencies. To determine the transport coefficients, the velocity distribution function must be assumed or chosen, and this assumption can cause some physics to be missed.
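The velocity moments referred to above can be computed directly from a sampled distribution. A minimal sketch, assuming a 1D Maxwellian whose parameters (density 2, drift 0.5, thermal speed 1) are purely illustrative:

```python
import numpy as np

# Velocity moments of a 1D distribution function on a grid: the fluid
# quantities (density, mean velocity, and thermal spread) that the fluid
# equations evolve. The Maxwellian parameters are illustrative assumptions.

v = np.linspace(-10, 10, 2001)
dv = v[1] - v[0]
n0, u0, vth = 2.0, 0.5, 1.0
f = n0 / (np.sqrt(2 * np.pi) * vth) * np.exp(-((v - u0) ** 2) / (2 * vth**2))

density = np.sum(f) * dv                               # zeroth moment
mean_v = np.sum(v * f) * dv / density                  # first moment
mean_e = np.sum((v - mean_v) ** 2 * f) * dv / density  # second (thermal) moment

print(round(density, 3), round(mean_v, 3), round(mean_e, 3))
```

The recovered moments match the parameters used to build the Maxwellian, which is the closure assumption the text describes: choosing a distribution shape fixes the transport coefficients.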

Hybrid Kinetic/Fluid Description

Although the kinetic model describes the physics accurately, it is more complex (and in the case of numerical simulations, more computationally intensive) than the fluid model. The hybrid model is a combination of fluid and kinetic models, treating some components of the system as a fluid, and others kinetically.

Gyrokinetic Description

In the gyrokinetic model, which is appropriate to systems with a strong background magnetic field, the kinetic equations are averaged over the fast circular motion of the gyroradius. This model has been used extensively for simulation of tokamak plasma instabilities (for example, the GYRO and Gyrokinetic ElectroMagnetic codes), and more recently in astrophysical applications.


References

• Francis F. Chen (2006). Introduction to Plasma Physics and Controlled Fusion, 2nd ed. Springer. ISBN 0306413322.
• Nicholas Krall and Alvin Trivelpiece (1986). Principles of Plasma Physics. San Francisco Press. ISBN 0911302581.

Equations of motion

Equations of motion are equations that describe the behavior of a system (e.g., the motion of a particle under the influence of a force) as a function of time.[1] Sometimes the term refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations.

Equations of uniformly accelerated linear motion

The equations that apply to bodies moving linearly (in one dimension) with constant acceleration are often referred to as the "SUVAT" equations, where the five variables are represented by those letters (s = displacement, u = initial velocity, v = final velocity, a = acceleration, t = time); the five letters may be shown in a different order. The body is considered between two instants in time: one initial point and one current (or final) point. Problems in kinematics may deal with more than two instants, and several applications of the equations are then required. If a is constant, the differential a dt may be integrated over the interval from 0 to t to obtain a linear relationship for velocity; integration of the velocity then yields a quadratic relationship for position at the end of the interval.
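Written out, the two integrations just described give (taking s0 as the initial position):

```latex
v(t) = u + \int_0^{t} a\,dt' = u + at, \qquad
s(t) = s_0 + \int_0^{t} v(t')\,dt' = s_0 + ut + \tfrac{1}{2}at^2
```

These are the linear and quadratic relationships referred to in the text.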

where u is the body's initial velocity and s0 is the body's initial position, and its current state is described by: v, the velocity at the end of the interval; s, the position at the end of the interval (displacement); t, the time interval between the initial and current states; and a, the constant acceleration, or, in the case of bodies moving under the influence of gravity, g.

Note that each of the equations contains four of the five variables. Thus, in this situation it is sufficient to know three out of the five variables to calculate the remaining two.

Classic version

The equations below (often informally known as the "suvat"[2] equations) are commonly written in the following form:[3]

(1) v = u + at
(2) s = ½(u + v)t
(3) s = ut + ½at²
(4) s = vt − ½at²
(5) v² = u² + 2as
(6) a = (v − u)/t

By substituting (1) into (2), we can get (3), (4) and (5); (6) can be constructed by rearranging (1). Here s = the distance between initial and final positions (displacement) (sometimes denoted R or x), u = the initial velocity (speed in a given direction), v = the final velocity, a = the constant acceleration, and t = the time taken to move from the initial state to the final state.
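The mutual consistency of these equations is easy to check numerically. A minimal sketch, with arbitrary illustrative values of u, a and t:

```python
# Numerical cross-check that the "suvat" equations are mutually consistent
# for arbitrary (illustrative) values of initial velocity, acceleration, time.

u, a, t = 3.0, -1.5, 2.0

v = u + a * t                        # v = u + at
s1 = (u + v) / 2 * t                 # s = (u + v)t/2
s2 = u * t + 0.5 * a * t**2          # s = ut + at^2/2
s3 = v * t - 0.5 * a * t**2          # s = vt - at^2/2
lhs, rhs = v**2, u**2 + 2 * a * s1   # v^2 = u^2 + 2as

print(abs(s1 - s2) < 1e-12 and abs(s1 - s3) < 1e-12 and abs(lhs - rhs) < 1e-12)
```

All routes to the displacement agree, as the substitutions described above require.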


Examples

Many examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is the local acceleration of gravity g. While these quantities appear to be scalars, the direction of displacement, speed and acceleration is important: they can be treated as one-dimensional vectors. Choosing s to measure up from the ground, the acceleration a must be −g, since the force of gravity acts downwards and therefore so does the acceleration on the ball due to it. At the highest point the ball is momentarily at rest: therefore v = 0. Using the fifth equation, we have:

0 = u² + 2(−g)s

Substituting and cancelling minus signs gives:

s = u² / (2g)
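The worked example can be checked numerically. A minimal sketch, taking g = 9.81 m/s² and an assumed throw speed of 10 m/s:

```python
# Maximum height of a ball thrown straight up with initial speed u:
# from v^2 = u^2 + 2as with v = 0 and a = -g, the height is s = u^2 / (2g).
# g = 9.81 m/s^2 and u = 10 m/s are illustrative values.

g = 9.81
u = 10.0                     # initial upward speed, m/s

s_max = u**2 / (2 * g)       # closed form from the fifth equation

# Cross-check by stepping the motion until the velocity changes sign.
dt, s, v = 1e-5, 0.0, u
while v > 0:
    v -= g * dt
    s += v * dt

print(abs(s - s_max) < 1e-2)
```

The time-stepped height agrees with the closed-form result to within the discretization error.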

Extension

More complex versions of these equations can include s0 for the initial position of the body and v0 in place of u, for consistency:

s = s0 + v0t + ½at²

Equations of circular motion

The analogues of the above equations can be written for rotation:

ω = ω0 + αt
θ = ω0t + ½αt²
θ = ½(ω0 + ω)t
ω² = ω0² + 2αθ

where α is the angular acceleration, ω is the angular velocity, θ is the angular displacement, ω0 is the initial angular velocity, and t is the time taken to move from the initial state to the final state.
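The rotational analogues can be cross-checked the same way as the linear equations. A minimal sketch with illustrative values of ω0 and α:

```python
# Rotational analogues of the linear constant-acceleration equations,
# checked numerically. omega0 (initial angular velocity) and alpha
# (angular acceleration) are illustrative values.

omega0, alpha, t = 2.0, 0.5, 3.0

omega = omega0 + alpha * t                  # omega = omega0 + alpha*t
theta1 = omega0 * t + 0.5 * alpha * t**2    # theta = omega0*t + alpha*t^2/2
theta2 = (omega0 + omega) / 2 * t           # theta = (omega0 + omega)*t/2
lhs, rhs = omega**2, omega0**2 + 2 * alpha * theta1

print(abs(theta1 - theta2) < 1e-12 and abs(lhs - rhs) < 1e-12)
```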


Derivation

These equations assume constant acceleration and non-relativistic velocities.

Equation 2

By definition, the average velocity over the interval is

s/t = ½(u + v)

Hence:

s = ½(u + v)t

Equation 5

Using equation 2 and substituting t with (v − u)/a (equation 1 rearranged):

s = ½(u + v)(v − u)/a, so that 2as = v² − u², i.e. v² = u² + 2as

Equation 4

Using equation 1, rearranged as u = v − at, to substitute for u in equation 3 gives:

s = (v − at)t + ½at² = vt − ½at²

External links

• Equations of Motion Applet [4]

References

[1] Halliday, David; Resnick, Robert; Walker, Jearl (2004-06-16). Fundamentals of Physics (7 Sub ed.). Wiley. ISBN 0471232319.
[2] Keith Johnson (2001). Physics for you: revised national curriculum edition for GCSE (http://books.google.com/books?id=D4nrQDzq1jkC&pg=PA135&dq=suvat&hl=en&ei=aF0OTNP9IY-INoDX0b4M&sa=X&oi=book_result&ct=result&resnum=7&ved=0CEEQ6AEwBg#v=onepage&q=suvat&f=false) (4th ed.). Nelson Thornes. p. 135. ISBN 9780748762361. "You can remember the 5 symbols by 'suvat'. If you know any three of 'suvat', the other two can be found."
[3] Hanrahan, Val; Porkess, R (2003). Additional Mathematics for OCR. London: Hodder & Stoughton. p. 219. ISBN 0-340-86960-7.
[4] http://www.physics-lab.net/applets/equations-of-motion

Maxwell's equations


Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These in turn underlie modern electrical and communications technologies. Maxwell's equations have two major variants. The "microscopic" set of Maxwell's equations uses total charge and total current including the difficult-to-calculate atomic level charges and currents in materials. The "macroscopic" set of Maxwell's equations defines two new auxiliary fields that can sidestep having to know these 'atomic' sized charges and currents. Maxwell's equations are named after the Scottish physicist and mathematician James Clerk Maxwell, since in an early form they are all found in a four-part paper, "On Physical Lines of Force," which he published between 1861 and 1862. The mathematical form of the Lorentz force law also appeared in this paper. It is often useful to write Maxwell's equations in other forms; these representations are still formally termed "Maxwell's equations". A relativistic formulation in terms of covariant field tensors is used in special relativity, while, in quantum mechanics, a version based on the electric and magnetic potentials is preferred.

Conceptual description

Conceptually, Maxwell's equations describe how electric charges and electric currents act as sources for the electric and magnetic fields. Further, they describe how a time varying electric field generates a time varying magnetic field and vice versa. (See below for a mathematical description of these laws.) Of the four equations, two of them, Gauss's law and Gauss's law for magnetism, describe how the fields emanate from charges. (For the magnetic field there is no magnetic charge and therefore magnetic field lines neither begin nor end anywhere.) The other two equations describe how the fields 'circulate' around their respective sources; the magnetic field 'circulates' around electric currents and time varying electric fields in Ampère's law with Maxwell's correction, while the electric field 'circulates' around time varying magnetic fields in Faraday's law.

Gauss's law

Gauss's law describes the relationship between an electric field and the generating electric charges: The electric field points away from positive charges and towards negative charges. In the field line description, electric field lines begin only at positive electric charges and end only at negative electric charges. 'Counting' the number of field lines in a closed surface, therefore, yields the total charge enclosed by that surface. More technically, it relates the electric flux through any hypothetical closed "Gaussian surface" to the electric charge within the surface.


Gauss's law for magnetism

Gauss's law for magnetism states that there are no "magnetic charges" (also called magnetic monopoles), analogous to electric charges.[1] Instead, the magnetic field due to materials is generated by a configuration called a dipole. Magnetic dipoles are best represented as loops of current but resemble positive and negative 'magnetic charges', inseparably bound together, having no net 'magnetic charge'. In terms of field lines, this equation states that magnetic field lines neither begin nor end but make loops or extend to infinity and back. In other words, any magnetic field line that enters a given volume must somewhere exit that volume. Equivalent technical statements are that the total magnetic flux through any Gaussian surface is zero, or that the magnetic field is a solenoidal vector field.

Gauss's law for magnetism: magnetic field lines never begin nor end but form loops or extend to infinity as shown here with the magnetic field due to a ring of current.

Faraday's law

Faraday's law describes how a time varying magnetic field creates ("induces") an electric field.[1] This aspect of electromagnetic induction is the operating principle behind many electric generators: for example a rotating bar magnet creates a changing magnetic field, which in turn generates an electric field in a nearby wire. (Note: there are two closely related equations which are called Faraday's law. The form used in Maxwell's equations is always valid but more restrictive than that originally formulated by Michael Faraday.)

In a geomagnetic storm, a surge in the flux of charged particles temporarily alters Earth's magnetic field, which induces electric fields in Earth's atmosphere, thus causing surges in our electrical power grids.

Ampère's law with Maxwell's correction

Ampère's law with Maxwell's correction states that magnetic fields can be generated in two ways: by electrical current (this was the original "Ampère's law") and by changing electric fields (this was "Maxwell's correction"). Maxwell's correction to Ampère's law is particularly important: It means that a changing magnetic field creates an electric field, and a changing electric field creates a magnetic field.[1] [2] Therefore, these equations allow self-sustaining "electromagnetic waves" to travel through empty space (see electromagnetic wave equation).

An Wang's magnetic core memory (1954) is an application of Ampere's law. Each core stores one bit of data.

The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents,[3] exactly matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the previously separate fields of electromagnetism and optics.


Units and summary of equations

Maxwell's equations vary with the unit system used. Though the general form remains the same, various definitions change and different constants appear in different places. The equations in this section are given in SI units. Other units commonly used are Gaussian units (based on the cgs system[4]), Lorentz–Heaviside units (used mainly in particle physics) and Planck units (used in theoretical physics). See below for CGS-Gaussian units. For a description of the difference between the microscopic and macroscopic variants of Maxwell's equations, see the relevant sections below. In the equations given below, symbols in bold represent vector quantities, and symbols in italics represent scalar quantities. The definitions of terms used in the two tables of equations are given in another table immediately following.

Table of 'microscopic' equations
Formulation in terms of total charge and current[5]

Gauss's law:
  differential form: ∇ · E = ρ/ε0
  integral form: ∮∂V E · dA = Q/ε0
Gauss's law for magnetism:
  differential form: ∇ · B = 0
  integral form: ∮∂V B · dA = 0
Maxwell–Faraday equation (Faraday's law of induction):
  differential form: ∇ × E = −∂B/∂t
  integral form: ∮∂S E · dl = −dΦB/dt
Ampère's circuital law (with Maxwell's correction):
  differential form: ∇ × B = μ0J + μ0ε0 ∂E/∂t
  integral form: ∮∂S B · dl = μ0I + μ0ε0 dΦE/dt

Table of 'macroscopic' equations
Formulation in terms of free charge and current

Gauss's law:
  differential form: ∇ · D = ρf
  integral form: ∮∂V D · dA = Qf
Gauss's law for magnetism:
  differential form: ∇ · B = 0
  integral form: ∮∂V B · dA = 0
Maxwell–Faraday equation (Faraday's law of induction):
  differential form: ∇ × E = −∂B/∂t
  integral form: ∮∂S E · dl = −dΦB/dt
Ampère's circuital law (with Maxwell's correction):
  differential form: ∇ × H = Jf + ∂D/∂t
  integral form: ∮∂S H · dl = If + dΦD/dt


Table of terms used in Maxwell's equations

The following table provides the meaning of each symbol and the SI unit of measure:

Definitions and units

Each entry gives the symbol, its meaning (the first term is the most common), and its SI unit of measure:

E, the electric field (also called the electric field intensity): volt per meter or, equivalently, newton per coulomb
B, the magnetic field (also called the magnetic induction, the magnetic field density, or the magnetic flux density): tesla or, equivalently, weber per square meter or volt-second per square meter
D, the electric displacement field (also called the electric induction or the electric flux density): coulombs per square meter or, equivalently, newton per volt-meter
H, the magnetizing field (also called the auxiliary magnetic field, the magnetic field intensity, or simply the magnetic field): ampere per meter
∇·, the divergence operator: per meter (factor contributed by applying the operator)
∇×, the curl operator: per meter (factor contributed by applying the operator)
∂/∂t, the partial derivative with respect to time: per second (factor contributed by applying the operator)
dA, differential vector element of surface area A, with infinitesimally small magnitude and direction normal to surface S: square meters
dl, differential vector element of path length tangential to the path/curve: meters
ε0, the permittivity of free space (also called the electric constant, a universal constant): farads per meter
μ0, the permeability of free space (also called the magnetic constant, a universal constant): henries per meter or, equivalently, newtons per ampere squared
ρf, the free charge density (not including bound charge): coulombs per cubic meter
ρ, the total charge density (including both free and bound charge): coulombs per cubic meter
Jf, the free current density (not including bound current): amperes per square meter
J, the total current density (including both free and bound current): amperes per square meter
Qf, the net free electric charge within the three-dimensional volume V (not including bound charge): coulombs
Q, the net electric charge within the three-dimensional volume V (including both free and bound charge): coulombs
∮∂S E · dl, the line integral of the electric field along the boundary ∂S of a surface S (∂S is always a closed curve): joules per coulomb
∮∂S B · dl, the line integral of the magnetic field over the closed boundary ∂S of the surface S: tesla-meters
∮∂V E · dA, the electric flux (surface integral of the electric field) through the closed surface ∂V (the boundary of the volume V): joule-meters per coulomb
∮∂V B · dA, the magnetic flux (surface integral of the magnetic B-field) through the closed surface ∂V (the boundary of the volume V): tesla square-meters or webers
ΦB, the magnetic flux through any surface S, not necessarily closed: webers or, equivalently, volt-seconds
ΦE, the electric flux through any surface S, not necessarily closed: joule-meters per coulomb
ΦD, the flux of the electric displacement field through any surface S, not necessarily closed: coulombs
If, the net free electrical current passing through the surface S (not including bound current): amperes
I, the net electrical current passing through the surface S (including both free and bound current): amperes

Proof that the two general formulations are equivalent

The two alternate general formulations of Maxwell's equations given above are mathematically equivalent and related by the following relations:

D = ε0E + P,  H = B/μ0 − M
ρ = ρf + ρb,  J = Jf + Jb
ρb = −∇ · P,  Jb = ∇ × M + ∂P/∂t

where P and M are polarization and magnetization, and ρb and Jb are bound charge and current, respectively. Substituting these equations into the 'macroscopic' Maxwell's equations gives identically the microscopic equations.
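The bound-charge relation ρb = −∇ · P can be checked numerically for a sample polarization profile. A minimal sketch, assuming a 1D Gaussian polarization (an arbitrary illustrative choice):

```python
import numpy as np

# Bound charge density from polarization, rho_b = -div P, evaluated by
# finite differences for a sample 1D polarization profile P(x) = exp(-x^2).
# A uniform P would give no bulk bound charge; gradients of P do.

x = np.linspace(-3, 3, 601)
dx = x[1] - x[0]
P = np.exp(-x**2)                   # sample polarization (x-component)

rho_b = -np.gradient(P, dx)         # rho_b = -dP/dx in 1D
rho_exact = 2 * x * np.exp(-x**2)   # analytic -d/dx of exp(-x^2)

print(np.max(np.abs(rho_b - rho_exact)) < 1e-3)
```

Where P varies (the flanks of the Gaussian), bound charge appears; where P is flat, it vanishes, matching the surface-versus-bulk discussion later in the article.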

Maxwell's 'microscopic' equations

The microscopic variant of Maxwell's equations expresses the electric E field and the magnetic B field in terms of the total charge and total current present, including the charges and currents at the atomic level. It is sometimes called the general form of Maxwell's equations or "Maxwell's equations in a vacuum". Both variants of Maxwell's equations are equally general, though, as they are mathematically equivalent. The microscopic equations are most useful in waveguides, for example, when there are no dielectric or magnetic materials nearby.

Formulation in terms of total charge and current[6]

Gauss's law:
  differential form: ∇ · E = ρ/ε0
  integral form: ∮∂V E · dA = Q/ε0
Gauss's law for magnetism:
  differential form: ∇ · B = 0
  integral form: ∮∂V B · dA = 0
Maxwell–Faraday equation (Faraday's law of induction):
  differential form: ∇ × E = −∂B/∂t
  integral form: ∮∂S E · dl = −dΦB/dt
Ampère's circuital law (with Maxwell's correction):
  differential form: ∇ × B = μ0J + μ0ε0 ∂E/∂t
  integral form: ∮∂S B · dl = μ0I + μ0ε0 dΦE/dt


With neither charges nor currents

In a region with no charges (ρ = 0) and no currents (J = 0), such as in a vacuum, Maxwell's equations reduce to:

∇ · E = 0,  ∇ × E = −∂B/∂t
∇ · B = 0,  ∇ × B = μ0ε0 ∂E/∂t

These equations lead directly to E and B satisfying the wave equation, for which the solutions are linear combinations of plane waves traveling at the speed of light,

c = 1/√(μ0ε0)

In addition, E and B are mutually perpendicular to each other and the direction of motion and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. In fact, Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's correction to Ampère's law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c.
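The identification of the wave speed with the speed of light follows directly from the SI values of the two constants:

```python
import math

# The speed of the plane-wave solutions, c = 1 / sqrt(mu_0 * epsilon_0),
# evaluated from the SI values of the two universal constants.

mu_0 = 4e-7 * math.pi            # permeability of free space, H/m
eps_0 = 8.8541878128e-12         # permittivity of free space, F/m

c = 1.0 / math.sqrt(mu_0 * eps_0)

print(abs(c - 299792458.0) < 1e3)   # within 1 km/s of the accepted value
```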

Maxwell's 'macroscopic' equations

Unlike the 'microscopic' equations, "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, factor out the bound charge and current to obtain equations that depend only on the free charges and currents. These equations are more similar to those that Maxwell himself introduced. The cost of this factorization is that additional fields need to be defined: the displacement field D which is defined in terms of the electric field E and the polarization P of the material, and the magnetic-H field, which is defined in terms of the magnetic-B field and the magnetization M of the material.

Bound charge and current


When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles: their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same way, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of a polarization, P, in the material. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.[7]

Left: a schematic view of how an assembly of microscopic dipoles produces opposite surface charges, as shown at top and bottom. Right: how an assembly of microscopic current loops adds together to produce a macroscopically circulating current loop. Inside the boundaries the individual contributions tend to cancel, but at the boundaries no cancellation occurs.

Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the atoms' components, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual magnetic moment travels a large distance. These bound currents can be described using the magnetization M.[8]

The very complicated and granular bound charges and bound currents can therefore be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a scale large enough not to see the granularity of individual atoms, yet small enough that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that may be unimportant to understanding matters on a grosser scale, by calculating fields that are averaged over some suitably sized volume.

Equations
Formulation in terms of free charge and current

Gauss's law:
  differential form: ∇ · D = ρf
  integral form: ∮∂V D · dA = Qf
Gauss's law for magnetism:
  differential form: ∇ · B = 0
  integral form: ∮∂V B · dA = 0
Maxwell–Faraday equation (Faraday's law of induction):
  differential form: ∇ × E = −∂B/∂t
  integral form: ∮∂S E · dl = −dΦB/dt
Ampère's circuital law (with Maxwell's correction):
  differential form: ∇ × H = Jf + ∂D/∂t
  integral form: ∮∂S H · dl = If + dΦD/dt


Constitutive relations

In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between the displacement field D and E, and the magnetic H-field H and B. These equations specify the response of bound charge and current to the applied fields and are called constitutive relations. Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves:

D = ε0E + P,  H = B/μ0 − M

where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. Before getting to how to calculate M and P it is useful to examine some special cases.

Without magnetic or dielectric materials

In the absence of magnetic or dielectric materials, the constitutive relations are simple:

D = ε0E,  H = B/μ0

where ε0 and μ0 are two universal constants, called the permittivity of free space and permeability of free space, respectively. Substituting these back into Maxwell's macroscopic equations leads directly to Maxwell's microscopic equations, except that the currents and charges are replaced with free currents and free charges. This is expected since there are no bound charges or currents.

Isotropic linear materials

In an (isotropic[9]) linear material, where P is proportional to E and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are:

P = ε0χeE,  M = χmH

where χe and χm are the electric and magnetic susceptibilities of a given material, respectively. In terms of D and H the constitutive relations are:

D = εE,  H = B/μ

where ε and μ are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by:

ε = ε0(1 + χe),  μ = μ0(1 + χm)
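The susceptibility relations can be turned directly into material properties such as the refractive index. A minimal sketch, with illustrative (not material-specific) susceptibility values:

```python
import math

# Permittivity and permeability of an isotropic linear material from its
# susceptibilities, and the resulting refractive index and phase velocity.
# The susceptibility values are illustrative, not for any real material.

eps_0 = 8.8541878128e-12     # permittivity of free space, F/m
mu_0 = 4e-7 * math.pi        # permeability of free space, H/m
c = 1.0 / math.sqrt(eps_0 * mu_0)

chi_e, chi_m = 1.25, 0.0     # electric and magnetic susceptibilities

eps = eps_0 * (1 + chi_e)    # permittivity of the material
mu = mu_0 * (1 + chi_m)      # permeability of the material

n = math.sqrt((eps / eps_0) * (mu / mu_0))   # refractive index
v_phase = c / n                              # wave speed in the material

print(abs(n - 1.5) < 1e-12)
```

With χe = 1.25 and χm = 0 the relative permittivity is 2.25, giving n = 1.5, roughly the index of common glass.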

Substituting the constitutive relations above, Maxwell's equations in linear, dispersionless, time-invariant materials (differential form only) are:

∇ · (εE) = ρf,  ∇ × E = −∂B/∂t
∇ · B = 0,  ∇ × (B/μ) = Jf + ∂(εE)/∂t

These are formally identical to the general formulation in terms of E and B (given above), except that the permittivity of free space was replaced with the permittivity of the material, the permeability of free space was replaced with the permeability of the material, and only free charges and currents are included (instead of all charges and currents). Unless that material is homogeneous in space, ε and μ cannot be factored out of the derivative expressions on the left sides.

General case

For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B.[10] These relations may be empirical (based directly upon measurements), or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level necessary to the problem under scrutiny. In general, though, the constitutive relations can usually still be written:

D = εE,  H = B/μ

but ε and μ are not, in general, simple constants, but rather functions. Examples are:
• Dispersion and absorption, where ε and μ are functions of frequency. (Causality does not permit materials to be nondispersive; see, for example, Kramers–Kronig relations.) Neither do the fields need to be in phase, which leads to ε and μ being complex. This also leads to absorption.
• Bi-(an)isotropy, where H and D depend on both B and E:[11]

D = εE + ξH,  B = μH + ζE

• Nonlinearity where ε and μ are functions of E and B. • Anisotropy (such as birefringence or dichroism) which occurs when ε and μ are second-rank tensors,

• Dependence of P and M on E and B at other locations and times. This could be due to spatial inhomogeneity; for example in a domained structure, heterostructure or a liquid crystal, or most commonly in the situation where there are simply multiple materials occupying different regions of space). Or it could be due to a time varying medium or due to hysteresis. In such cases P and M can be calculated as:[12] [13]

in which the permittivity and permeability functions are replaced by integrals over the more general electric and magnetic susceptibilities.[14] In practice, some material properties have a negligible impact in particular circumstances, permitting neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths at which a material is transparent; and metals with finite conductivity are often approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration). Note that man-made materials can be designed to have customized permittivity and permeability, such as metamaterials and photonic crystals.

Calculation of constitutive relations

In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well, such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields. The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material, an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation. These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function.

Various approximate transport equations have evolved, for example, the Boltzmann equation, the Fokker–Planck equation or the Navier–Stokes equations. Some examples where these equations are applied are magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, and plasma modeling. An entire physical apparatus for dealing with these matters has developed. A different set of homogenization methods (evolving from a tradition in treating materials such as conglomerates and laminates) are based upon approximation of an inhomogeneous material by a homogeneous effective medium[15] [16] (valid for excitations with wavelengths much larger than the scale of the inhomogeneity).[17] [18] [19] [20]

The theoretical modeling of the continuum-approximation properties of many real materials often relies upon measurement as well,[21] for example, ellipsometry measurements.


History

Relation between electricity, magnetism, and the speed of light

The relation between electricity, magnetism, and the speed of light can be summarized by the modern equation:
c = 1/√(μ0ε0)
The left-hand side is the speed of light, and the right-hand side is a quantity related to the equations governing electricity and magnetism. Although the right-hand side has units of velocity, it can be inferred from measurements of electric and magnetic forces, which involve no physical velocities. Therefore, establishing this relationship provided convincing evidence that light is an electromagnetic phenomenon. The discovery of this relationship started in 1855, when Wilhelm Eduard Weber and Rudolf Kohlrausch determined that there was a quantity related to electricity and magnetism, "the ratio of the absolute electromagnetic unit of charge to the absolute electrostatic unit of charge" (in modern language, the value 1/√(μ0ε0)), and determined that it should have units of velocity. They then measured this ratio by an experiment which involved charging and discharging a Leyden jar and measuring the magnetic force from the discharge current, and found a value of 3.107 × 10⁸ m/s,[22] remarkably close to the speed of light, which had recently been measured at 3.14 × 10⁸ m/s by Hippolyte Fizeau in 1848 and at 2.98 × 10⁸ m/s by Léon Foucault in 1850.[22] However, Weber and Kohlrausch did not make the connection to the speed of light.[22] Towards the end of 1861, while working on Part III of his paper On Physical Lines of Force, Maxwell travelled from Scotland to London and looked up Weber and Kohlrausch's results. He converted them into a format compatible with his own writings, and in doing so he established the connection to the speed of light and concluded that light is a form of electromagnetic radiation.[23]
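The modern form of this relation is easy to check numerically from the vacuum constants (CODATA values assumed here):

```python
import math

# Speed of light from the vacuum constants: c = 1/sqrt(mu0 * eps0).
# CODATA recommended values are assumed for eps0 and mu0.
EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
MU0 = 1.25663706212e-6    # vacuum permeability (H/m)

c = 1.0 / math.sqrt(MU0 * EPS0)
print(c)  # ~2.99792458e8 m/s, bracketed by the 19th-century measurements quoted above
```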



**The term Maxwell's equations**

The four modern Maxwell's equations can be found individually throughout his 1861 paper, derived theoretically using a molecular vortex model of Michael Faraday's "lines of force" and in conjunction with the experimental result of Weber and Kohlrausch. But it wasn't until 1884 that Oliver Heaviside,[24] concurrently with similar work by Willard Gibbs and Heinrich Hertz,[25] grouped the four together into a distinct set. This group of four equations was known variously as the Hertz-Heaviside equations and the Maxwell-Hertz equations,[24] and are sometimes still known as the Maxwell–Heaviside equations.[26] Maxwell's contribution to science in producing these equations lies in the correction he made to Ampère's circuital law in his 1861 paper On Physical Lines of Force. He added the displacement current term to Ampère's circuital law and this enabled him to derive the electromagnetic wave equation in his later 1865 paper A Dynamical Theory of the Electromagnetic Field and demonstrate the fact that light is an electromagnetic wave. This fact was then later confirmed experimentally by Heinrich Hertz in 1887. The physicist Richard Feynman predicted that, "The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade."[27] The concept of fields was introduced by, among others, Faraday. Albert Einstein wrote: The precise formulation of the time-space laws was the work of Maxwell. Imagine his feelings when the differential equations he had formulated proved to him that electromagnetic fields spread in the form of polarised waves, and at the speed of light! To few men in the world has such an experience been vouchsafed ... 
it took physicists some decades to grasp the full significance of Maxwell's discovery, so bold was the leap that his genius forced upon the conceptions of his fellow-workers —(Science, May 24, 1940) Heaviside worked to eliminate the potentials (electric potential and magnetic potential) that Maxwell had used as the central concepts in his equations;[24] this effort was somewhat controversial,[28] though it was understood by 1884 that the potentials must propagate at the speed of light like the fields, unlike the concept of instantaneous action-at-a-distance like the then conception of gravitational potential.[25] Modern analysis of, for example, radio antennas, makes full use of Maxwell's vector and scalar potentials to separate the variables, a common technique used in formulating the solutions of differential equations. However the potentials can be introduced by algebraic manipulation of the four fundamental equations.

**On Physical Lines of Force**

The four modern day Maxwell's equations appeared throughout Maxwell's 1861 paper On Physical Lines of Force: 1. Equation (56) in Maxwell's 1861 paper is ∇ ⋅ B = 0. 2. Equation (112) is Ampère's circuital law with Maxwell's displacement current added. It is the addition of displacement current that is the most significant aspect of Maxwell's work in electromagnetism, as it enabled him to later derive the electromagnetic wave equation in his 1865 paper A Dynamical Theory of the Electromagnetic Field, and hence show that light is an electromagnetic wave. It is therefore this aspect of Maxwell's work which gives the equations their full significance. (Interestingly, Kirchhoff derived the telegrapher's equations in 1857 without using displacement current. But he did use Poisson's equation and the equation of continuity which are the mathematical ingredients of the displacement current. Nevertheless, Kirchhoff believed his equations to be applicable only inside an electric wire and so he is not credited with having discovered that light is an electromagnetic wave). 3. Equation (115) is Gauss's law. 4. Equation (54) is an equation that Oliver Heaviside referred to as 'Faraday's law'. This equation caters for the time varying aspect of electromagnetic induction, but not for the motionally induced aspect, whereas Faraday's original flux law caters for both aspects.[29] [30] Maxwell deals with the motionally dependent aspect of electromagnetic

induction, v × B, at equation (77). Equation (77), which is the same as equation (D) in the original eight Maxwell's equations listed below, corresponds to all intents and purposes to the modern day force law F = q(E + v × B), which sits adjacent to Maxwell's equations and bears the name Lorentz force, even though Maxwell derived it when Lorentz was still a young boy.

The difference between the B and the H vectors can be traced back to Maxwell's 1855 paper entitled On Faraday's Lines of Force, which was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday's work, and how the two phenomena were related. He reduced all of the current knowledge into a linked set of differential equations. The distinction is later clarified in his concept of a sea of molecular vortices that appears in his 1861 paper On Physical Lines of Force. Within that context, H represented pure vorticity (spin), whereas B was a weighted vorticity, weighted for the density of the vortex sea. Maxwell considered magnetic permeability μ to be a measure of the density of the vortex sea. Hence the relationship,

1. Magnetic induction current causes a magnetic current density

B = μH


Figure of Maxwell's molecular vortex model. For a uniform magnetic field, the field lines point outward from the display screen, as can be observed from the black dots in the middle of the hexagons. The vortex of each hexagonal molecule rotates counter-clockwise. The small green circles are clockwise rotating particles sandwiching between the molecular vortices.

was essentially a rotational analogy to the linear electric current relationship,

1. Electric convection current

J = ρv

where ρ is electric charge density. B was seen as a kind of magnetic current of vortices aligned in their axial planes, with H being the circumferential velocity of the vortices. With µ representing vortex density, it follows that the product of µ with vorticity H leads to the magnetic field denoted as B. The electric current equation can be viewed as a convective current of electric charge that involves linear motion. By analogy, the magnetic equation is an inductive current involving spin. There is no linear motion in the inductive current along the direction of the B vector. The magnetic inductive current represents lines of force. In particular, it represents lines of inverse square law force. The extension of the above considerations confirms that where B is to H, and where J is to ρ, then it necessarily follows from Gauss's law and from the equation of continuity of charge that E is to D. i.e. B parallels with E,

whereas H parallels with D.


**A Dynamical Theory of the Electromagnetic Field**

In 1865 Maxwell published A Dynamical Theory of the Electromagnetic Field, in which he showed that light was an electromagnetic phenomenon. Confusion over the term "Maxwell's equations" is exacerbated because it is also sometimes used for a set of eight equations that appeared in Part III of that paper, entitled "General Equations of the Electromagnetic Field,"[31] a confusion compounded by the writing of six of those eight equations as three separate equations (one for each of the Cartesian axes), resulting in twenty equations and twenty unknowns. (As noted above, this terminology is not common: modern references to the term "Maxwell's equations" refer to the Heaviside restatements.)

The eight original Maxwell's equations can be written in modern vector notation as follows:

(A) The law of total currents: Jtot = J + ∂D/∂t

(B) The equation of magnetic force: μH = ∇ × A

(C) Ampère's circuital law: ∇ × H = Jtot

(D) Electromotive force created by convection, induction, and by static electricity (this is in effect the Lorentz force): E = μ v × H − ∂A/∂t − ∇φ

(E) The electric elasticity equation: E = (1/ε) D

(F) Ohm's law: E = (1/σ) J

(G) Gauss's law: ∇ · D = ρ

(H) Equation of continuity: ∇ · J = −∂ρ/∂t, or equivalently ∇ · Jtot = 0

Notation

• H is the magnetizing field, which Maxwell called the magnetic intensity.
• J is the current density (with Jtot being the total current, including displacement current).[32]
• D is the displacement field (called the electric displacement by Maxwell).
• ρ is the free charge density (called the quantity of free electricity by Maxwell).
• A is the magnetic potential (called the angular impulse by Maxwell).
• E is called the electromotive force by Maxwell. The term electromotive force is nowadays used for voltage, but it is clear from the context that Maxwell's meaning corresponded more to the modern term electric field.

• φ is the electric potential (which Maxwell also called electric potential).
• σ is the electrical conductivity (Maxwell called the inverse of conductivity the specific resistance, now called the resistivity).

It is interesting to note the μv × H term that appears in equation (D). Equation (D) is therefore effectively the Lorentz force, similarly to equation (77) of his 1861 paper (see above). When Maxwell derives the electromagnetic wave equation in his 1865 paper, he uses equation (D) to cater for electromagnetic induction rather than Faraday's law of induction, which is used in modern textbooks. (Faraday's law itself does not appear among his equations.) However, Maxwell drops the μv × H term from equation (D) when he is deriving the electromagnetic wave equation, as he considers the situation only from the rest frame.


**A Treatise on Electricity and Magnetism**

In A Treatise on Electricity and Magnetism, an 1873 treatise on electromagnetism written by James Clerk Maxwell, eleven general equations of the electromagnetic field are listed; these include the eight that are listed in the 1865 paper.[33]

**Maxwell's equations and relativity**

Maxwell's original equations are based on the idea that light travels through a sea of molecular vortices known as the 'luminiferous aether', and that the speed of light has to be respective to the reference frame of this aether. Measurements designed to measure the speed of the Earth through the aether conflicted, though.[34] A more theoretical approach was suggested by Hendrik Lorentz along with George FitzGerald and Joseph Larmor. Both Larmor (1897) and Lorentz (1899, 1904) derived the Lorentz transformation (so named by Henri Poincaré) as one under which Maxwell's equations were invariant. Poincaré (1900) analyzed the coordination of moving clocks by exchanging light signals. He also established mathematically the group property of the Lorentz transformation (Poincaré 1905). Einstein dismissed the aether as unnecessary and concluded that Maxwell's equations predict the existence of a fixed speed of light, independent of the speed of the observer, and as such he used Maxwell's equations as the starting point for his special theory of relativity. In doing so, he established the Lorentz transformation as being valid for all matter and not just Maxwell's equations. Maxwell's equations played a key role in Einstein's famous paper on special relativity; for example, in the opening paragraph of the paper, he motivated his theory by noting that a description of a conductor moving with respect to a magnet must generate a consistent set of fields irrespective of whether the force is calculated in the rest frame of the magnet or that of the conductor.[35] General relativity has also had a close relationship with Maxwell's equations. For example, Theodor Kaluza and Oskar Klein showed in the 1920s that Maxwell's equations can be derived by extending general relativity into five dimensions. This strategy of using higher dimensions to unify different forces remains an active area of research in particle physics.

**Modified to include magnetic monopoles**

Maxwell's equations provide for an electric charge, but posit no magnetic charge. Magnetic charge has never been observed[36] and may not exist. Nevertheless, Maxwell's equations including magnetic charge (and magnetic current) are of some theoretical interest.[37] For one reason, Maxwell's equations can be made fully symmetric under interchange of electric and magnetic fields by allowing for the possibility of magnetic charges with magnetic charge density ρm and currents with magnetic current density Jm.[38] The extended Maxwell's equations (in cgs-Gaussian units) are:



Name | Without magnetic monopoles | With magnetic monopoles (hypothetical)
Gauss's law | ∇ · E = 4πρe | ∇ · E = 4πρe
Gauss's law for magnetism | ∇ · B = 0 | ∇ · B = 4πρm
Maxwell–Faraday equation (Faraday's law of induction) | −∇ × E = (1/c) ∂B/∂t | −∇ × E = (1/c) ∂B/∂t + (4π/c) Jm
Ampère's law (with Maxwell's extension) | ∇ × B = (1/c) ∂E/∂t + (4π/c) Je | ∇ × B = (1/c) ∂E/∂t + (4π/c) Je
If magnetic charges do not exist, or if they exist but not in the region studied, then the new variables are zero, and the symmetric equations reduce to the conventional equations of electromagnetism, such as ∇ · B = 0. Further, if every particle has the same ratio of electric to magnetic charge, then an E and a B field can be defined that obey the normal Maxwell's equations (having no magnetic charges or currents), with their own charge and current densities.[39]
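The last remark can be made explicit. In Gaussian units, a simultaneous rotation of the fields and sources by a free angle ξ (a sketch of the standard duality transformation):

```latex
\begin{aligned}
\mathbf{E}' &= \mathbf{E}\cos\xi + \mathbf{B}\sin\xi, &\quad \rho_e' &= \rho_e\cos\xi + \rho_m\sin\xi,\\
\mathbf{B}' &= -\mathbf{E}\sin\xi + \mathbf{B}\cos\xi, &\quad \rho_m' &= -\rho_e\sin\xi + \rho_m\cos\xi,
\end{aligned}
```

(and likewise for the current densities) leaves the extended equations form-invariant. If every particle has the same ratio ρm/ρe, choosing tan ξ = ρm/ρe makes the primed magnetic sources vanish everywhere, recovering the conventional monopole-free equations.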

**Boundary conditions using Maxwell's equations**

Like all sets of differential equations, Maxwell's equations cannot be uniquely solved without a suitable set of boundary conditions[40] [41] [42] and initial conditions.[43] For example, consider a region with no charges and no currents. One particular solution that satisfies all of Maxwell's equations in that region is that both E and B are zero everywhere. This solution is obviously false if there is a charge just outside of the region: in this particular example, all of the electric and magnetic fields in the interior are due to the charges outside of the volume. Different charges outside of the volume produce different fields on the surface of that volume and therefore different boundary conditions. In general, knowing the appropriate boundary conditions for a given region, along with the currents and charges in that region, allows one to solve for all the fields everywhere within that region. An example of this type is an electromagnetic scattering problem, where an electromagnetic wave originating outside the scattering region is scattered by a target, and the scattered electromagnetic wave is analyzed for the information it contains about the target by virtue of the interaction with the target during scattering.[44]

In some cases, like waveguides or cavity resonators, the solution region is largely isolated from the universe, for example by metallic walls, and the boundary conditions at the walls define the fields, with influence of the outside world confined to the input/output ends of the structure.[45] In other cases, the universe at large is sometimes approximated by an artificial absorbing boundary,[46] [47] [48] or, for example for radiating antennas or communication satellites, these boundary conditions can take the form of asymptotic limits imposed upon the solution.[49] In addition, for example in an optical fiber or thin-film optics, the solution region often is broken up into subregions with their own simplified properties, and the 
solutions in each subregion must be joined to each other across the subregion interfaces using boundary conditions.[50] [51] [52] A particular example of this use of boundary conditions is the replacement of a material with a volume polarization by a charged surface layer, or of a material with a volume magnetization by a surface current, as described in the section Bound charge and current. Following are some links of a general nature concerning boundary value problems: Examples of boundary value problems, Sturm–Liouville theory, Dirichlet boundary condition, Neumann boundary condition, mixed boundary condition, Cauchy boundary condition, Sommerfeld radiation condition. Needless to say, one must choose the boundary conditions appropriate to the problem being solved. See also Kempel[53] and the book by Friedman.[54]



Gaussian units

Gaussian units is a popular variant of the centimetre–gram–second system of units (cgs) for electromagnetism. In Gaussian units, Maxwell's equations are:[55]

∇ · D = 4πρf
∇ · B = 0
∇ × E = −(1/c) ∂B/∂t
∇ × H = (1/c) ∂D/∂t + (4π/c) Jf
where c is the speed of light in a vacuum. The microscopic equations are:
∇ · E = 4πρ
∇ · B = 0
∇ × E = −(1/c) ∂B/∂t
∇ × B = (1/c) ∂E/∂t + (4π/c) J
The relation between electric displacement field, electric field and polarization density is:
D = E + 4πP
And likewise the relation between magnetic induction, magnetic field and total magnetization is:
B = H + 4πM
In the linear approximation, the electric susceptibility and magnetic susceptibility are defined so that:

P = χeE,  M = χmH.

(Note: although the susceptibilities are dimensionless numbers in both cgs and SI, they differ in value by a factor of 4π.) The permittivity and permeability are:

ε = 1 + 4πχe,  μ = 1 + 4πχm,

so that

D = εE,  B = μH.

In vacuum, ε = μ = 1, therefore D = E and B = H. The force exerted upon a charged particle by the electric field and magnetic field is given by the Lorentz force equation:

F = q(E + (v/c) × B),
where q is the charge on the particle and v is the particle velocity. This is slightly different from the SI-unit expression above. For example, the magnetic field B has the same units as the electric field E. Some equations in the article are given in Gaussian units but not SI or vice-versa. Fortunately, there are general rules to convert from one to the other; see the article Gaussian units for details.
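As a check on the 4π bookkeeping, the sketch below converts an assumed, illustrative SI susceptibility to its Gaussian counterpart and verifies that both conventions give the same relative permittivity:

```python
import math

# SI:       eps_r = 1 + chi_SI
# Gaussian: eps   = 1 + 4*pi*chi_G, where chi_G = chi_SI / (4*pi)
# chi_si is an illustrative number, not a measured material property.
chi_si = 2.0
chi_gauss = chi_si / (4 * math.pi)

eps_r_si = 1 + chi_si
eps_gauss = 1 + 4 * math.pi * chi_gauss

print(eps_r_si, eps_gauss)  # both ~3.0
```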



**Alternative formulations of Maxwell's equations**

Special relativity motivated a compact mathematical formulation of Maxwell's equations, in terms of covariant tensors. Quantum mechanics also motivated other formulations. For example, consider a conductor moving in the field of a magnet.[56] In the frame of the magnet, that conductor experiences a magnetic force. But in the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The following formulation shows how Maxwell's equations take the same form in any inertial coordinate system.

**Covariant formulation of Maxwell's equations**

In special relativity, in order to more clearly express the fact that Maxwell's ('microscopic') equations take the same form in any inertial coordinate system, Maxwell's equations are written in terms of four-vectors and tensors in the "manifestly covariant" form. The purely spatial components of the following are in SI units. One ingredient in this formulation is the electromagnetic tensor, a rank-2 covariant antisymmetric tensor combining the electric and magnetic fields:

and the result of raising its indices

The other ingredient is the four-current:
J^α = (cρ, J)
where ρ is the charge density and J is the current density. With these ingredients, Maxwell's equations can be written:
∂_α F^{αβ} = μ0 J^β
and
∂_γ F_{αβ} + ∂_α F_{βγ} + ∂_β F_{γα} = 0
The first tensor equation is an expression of the two inhomogeneous Maxwell's equations, Gauss's law and Ampere's law with Maxwell's correction. The second equation is an expression of the two homogeneous equations, Faraday's law of induction and Gauss's law for magnetism. The second equation is equivalent to
ε^{αβγδ} ∂_β F_{γδ} = 0
where
ε^{αβγδ}
is the contravariant version of the Levi-Civita symbol, and
∂_β = ∂/∂x^β
is the 4-gradient. In the tensor equations above, repeated indices are summed over according to the Einstein summation convention. We have displayed the results in several common notations. Upper and lower components of a vector, v^α and v_α respectively, are interchanged with the fundamental tensor g, e.g., g = η = diag(−1, +1, +1, +1).
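As a small numerical illustration of index raising with η = diag(−1, +1, +1, +1), the sketch below builds the covariant field tensor from sample field values (one common sign convention is assumed; the numbers are arbitrary, not from the text) and checks that raising both indices flips only the time–space components while preserving antisymmetry:

```python
# Covariant field tensor F_{mu nu} for sample fields, in units with c = 1.
# Sign convention assumed here: F_{0i} = E_i, F_{ij} = -eps_{ijk} B_k.
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 4.0, 5.0, 6.0

F = [[0.0,  Ex,   Ey,   Ez],
     [-Ex,  0.0, -Bz,   By],
     [-Ey,  Bz,   0.0, -Bx],
     [-Ez, -By,   Bx,   0.0]]

eta = [[-1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

def matmul(A, B):
    """Plain 4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# F^{mu nu} = eta^{mu alpha} F_{alpha beta} eta^{beta nu}
F_up = matmul(matmul(eta, F), eta)

# Raising both indices flips the sign of the time-space (E) entries only...
assert F_up[0][1] == -F[0][1] and F_up[1][2] == F[1][2]
# ...and antisymmetry is preserved.
assert all(F_up[i][j] == -F_up[j][i] for i in range(4) for j in range(4))
```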

Alternative covariant presentations of Maxwell's equations also exist, for example in terms of the four-potential; see Covariant formulation of classical electromagnetism for details.


Potential formulation

In advanced classical mechanics and in quantum mechanics (where it is necessary) it is sometimes useful to express Maxwell's equations in a 'potential formulation' involving the electric potential (also called scalar potential), φ, and the magnetic potential, A, (also called vector potential). These are defined such that:
B = ∇ × A,  E = −∇φ − ∂A/∂t
With these definitions, the two homogeneous Maxwell's equations (Faraday's Law and Gauss's law for magnetism) are automatically satisfied and the other two (inhomogeneous) equations give the following equations (for "Maxwell's microscopic equations"):
∇²φ + ∂(∇ · A)/∂t = −ρ/ε0
(∇²A − (1/c²) ∂²A/∂t²) − ∇(∇ · A + (1/c²) ∂φ/∂t) = −μ0 J
These equations, taken together, are as powerful and complete as Maxwell's equations. Moreover, if we work only with the potentials and ignore the fields, the problem has been reduced somewhat, as the electric and magnetic fields each have three components which need to be solved for (six components altogether), while the electric and magnetic potentials have only four components altogether. Many different choices of A and φ are consistent with a given E and B, making these choices physically equivalent – a flexibility known as gauge freedom. Suitable choice of A and φ can simplify these equations, or can adapt them to suit a particular situation.
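The claim that the potential definitions automatically satisfy the homogeneous equations can be spot-checked numerically: for any smooth A, the field B = ∇ × A has zero divergence. The sketch below verifies this by central differences for an arbitrary illustrative choice of A (not any field from the text):

```python
# Finite-difference check that B = curl A satisfies div B = 0,
# i.e. Gauss's law for magnetism holds automatically.
# A(x, y, z) = (y^2, z^2, x^2) is an arbitrary smooth test potential.
h = 1e-4  # step size for central differences

def A(x, y, z):
    return (y * y, z * z, x * x)

def partial(f, axis, x, y, z):
    """Central-difference partial derivative of scalar function f."""
    p = [x, y, z]
    q = [x, y, z]
    p[axis] += h
    q[axis] -= h
    return (f(*p) - f(*q)) / (2 * h)

def curl_A(x, y, z):
    Ax = lambda x, y, z: A(x, y, z)[0]
    Ay = lambda x, y, z: A(x, y, z)[1]
    Az = lambda x, y, z: A(x, y, z)[2]
    return (partial(Az, 1, x, y, z) - partial(Ay, 2, x, y, z),
            partial(Ax, 2, x, y, z) - partial(Az, 0, x, y, z),
            partial(Ay, 0, x, y, z) - partial(Ax, 1, x, y, z))

def div_curl_A(x, y, z):
    comps = [lambda x, y, z, i=i: curl_A(x, y, z)[i] for i in range(3)]
    return sum(partial(comps[i], i, x, y, z) for i in range(3))

print(abs(div_curl_A(0.3, -0.7, 1.1)))  # ~0 (round-off only)
```

The same identity (∇ · ∇ × A = 0, and ∇ × ∇φ = 0 for Faraday's law) is what makes the two homogeneous equations hold for any choice of potentials.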

Four-potential

In the Lorenz gauge, the two equations that represent the potentials can be reduced to one manifestly Lorentz invariant equation, using four-vectors: the four-current defined by
J^α = (cρ, j)
formed from the current density j and charge density ρ, and the electromagnetic four-potential defined by
A^α = (φ/c, A)
formed from the vector potential A and the scalar potential φ. The resulting single equation, due to Arnold Sommerfeld, a generalization of an equation due to Bernhard Riemann and known as the Riemann–Sommerfeld equation[57] or the covariant form of the Maxwell equations,[58] is:

□A^α = μ0 J^α,

where □ = (1/c²) ∂²/∂t² − ∇² is the d'Alembertian operator, or four-Laplacian, sometimes written ∂², and ∂_α is the four-gradient.



**Differential geometric formulations**

In free space, where ε = ε0 and μ = μ0 are constant everywhere, Maxwell's equations simplify considerably once the language of differential geometry and differential forms is used. In what follows, cgs-Gaussian units, not SI units are used. (To convert to SI, see here.) The electric and magnetic fields are now jointly described by a 2-form F in a 4-dimensional spacetime manifold. Maxwell's equations then reduce to the Bianchi identity
dF = 0
where d denotes the exterior derivative — a natural coordinate and metric independent differential operator acting on forms — and the source equation
d*F = J
where the (dual) Hodge star operator * is a linear transformation from the space of 2-forms to the space of (4-2)-forms defined by the metric in Minkowski space (in four dimensions even by any metric conformal to this metric), and the fields are in natural units where 1/4πε0 = 1. Here, the 3-form J is called the electric current form or current 3-form satisfying the continuity equation
dJ = 0.
The current 3-form can be integrated over a 3-dimensional space-time region. The physical interpretation of this integral is the charge in that region if it is spacelike, or the amount of charge that flows through a surface in a certain amount of time if that region is a spacelike surface cross a timelike interval. As the exterior derivative is defined on any manifold, the differential form version of the Bianchi identity makes sense for any 4-dimensional manifold, whereas the source equation is defined if the manifold is oriented and has a Lorentz metric. In particular the differential form version of the Maxwell equations are a convenient and intuitive formulation of the Maxwell equations in general relativity. In a linear, macroscopic theory, the influence of matter on the electromagnetic field is described through more general linear transformation in the space of 2-forms. We call
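The continuity equation here is not an extra assumption: writing the source equation as J = d⋆F (in the natural units of this section), it follows in one line from the fact that the exterior derivative satisfies d ∘ d = 0:

```latex
\mathrm{d}J \;=\; \mathrm{d}\,(\mathrm{d}{\star}F) \;=\; (\mathrm{d}\circ\mathrm{d})({\star}F) \;=\; 0 .
```

This is the differential-form statement of charge conservation, ∂ρ/∂t + ∇ · J = 0.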
C : Λ² → Λ²,  G = C(F),
the constitutive transformation. The role of this transformation is comparable to the Hodge duality transformation. The Maxwell equations in the presence of matter then become:
dF = 0,  d C(F) = J,
where the current 3-form J still satisfies the continuity equation dJ = 0. When the fields are expressed as linear combinations (of exterior products) of basis forms θp,

the constitutive relation takes the form

where the field coefficient functions are antisymmetric in the indices and the constitutive coefficients are antisymmetric in the corresponding pairs. In particular, the Hodge duality transformation leading to the vacuum equations discussed above are obtained by taking

which up to scaling is the only invariant tensor of this type that can be defined with the metric. In this formulation, electromagnetism generalises immediately to any 4-dimensional oriented manifold or with small adaptations any manifold, requiring not even a metric. Thus the expression of Maxwell's equations in terms of differential forms leads to a further notational and conceptual simplification. Whereas Maxwell's Equations could be

written as two tensor equations instead of eight scalar equations, from which the propagation of electromagnetic disturbances and the continuity equation could be derived with a little effort, using differential forms leads to an even simpler derivation of these results.

Conceptual insight from this formulation

On the conceptual side, from the point of view of physics, this shows that the second and third Maxwell equations should be grouped together, be called the homogeneous ones, and be seen as geometric identities expressing nothing other than this: the field F derives from a more "fundamental" potential A. The first and last ones should be seen as the dynamical equations of motion, obtained via the Lagrangian principle of least action from the "interaction term" A J (introduced through gauge covariant derivatives), coupling the field to matter. Often, the time derivative in the third law motivates calling this equation "dynamical", which is somewhat misleading; in the sense of the preceding analysis, this is rather an artifact of breaking relativistic covariance by choosing a preferred time direction. To have physical degrees of freedom propagated by these field equations, one must include a kinetic term F ∧ ⋆F for A, and take into account the non-physical degrees of freedom which can be removed by the gauge transformation A → A′ = A − dα. See also gauge fixing and Faddeev–Popov ghosts.


**Geometric Algebra (GA) formulation**

In geometric algebra, Maxwell's equations are reduced to a single equation,[59]

where F and J are multivectors

and with the unit pseudoscalar I satisfying I² = −1. The GA spatial gradient operator ∇ acts on a vector field, such that

In spacetime algebra using the same geometric product the equation is simply
∇F = J;
the spacetime derivative of the electromagnetic field is its source. Here the (non-bold) spacetime gradient

is a four vector, as is the current density

For a demonstration that the equations given reproduce Maxwell's equations see the main article.

**Classical electrodynamics as the curvature of a line bundle**

An elegant and intuitive way to formulate Maxwell's equations is to use complex line bundles or principal bundles with fibre U(1). The connection ∇ on the line bundle has a curvature F = ∇², which is a two-form that automatically satisfies dF = 0 and can be interpreted as a field strength. If the line bundle is trivial, with flat reference connection d, we can write ∇ = d + A and F = dA, with A the 1-form composed of the electric potential and the magnetic vector potential. In quantum mechanics, the connection itself is used to define the dynamics of the system. This formulation allows a natural description of the Aharonov–Bohm effect. In this experiment, a static magnetic field runs through a long

magnetic wire (e.g., an iron wire magnetized longitudinally). Outside of this wire the magnetic induction is zero, in contrast to the vector potential, which essentially depends on the magnetic flux through the cross-section of the wire and does not vanish outside. Since there is no electric field either, the Maxwell tensor F = 0 throughout the space-time region outside the tube, during the experiment. This means by definition that the connection is flat there. However, as mentioned, the connection depends on the magnetic field through the tube since the holonomy along a non-contractible curve encircling the tube is the magnetic flux through the tube in the proper units. This can be detected quantum-mechanically with a double-slit electron diffraction experiment on an electron wave traveling around the tube. The holonomy corresponds to an extra phase shift, which leads to a shift in the diffraction pattern.[60][61]
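The phase shift mentioned here can be made explicit. For a particle of charge q whose path γ encircles the tube once (a standard result in SI units, not a display from this article), the holonomy contributes

```latex
\Delta\varphi \;=\; \frac{q}{\hbar}\oint_{\gamma} \mathbf{A}\cdot d\boldsymbol{\ell}
\;=\; \frac{q\,\Phi_B}{\hbar},
\qquad \text{holonomy} \;=\; e^{\,i q \Phi_B/\hbar},
```

where Φ_B is the magnetic flux through the tube; the diffraction pattern shifts by exactly this phase.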


Curved spacetime

Traditional formulation

Matter and energy generate curvature of spacetime. This is the subject of general relativity. Curvature of spacetime affects electrodynamics. An electromagnetic field having energy and momentum also generates curvature in spacetime. Maxwell's equations in curved spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with covariant derivatives. (Whether this is the appropriate generalization requires separate investigation.) The sourced and source-free equations become (cgs-Gaussian units):

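The two displayed equations, lost in extraction here, can be reconstructed in standard covariant notation (cgs-Gaussian units, as stated; ∇ denotes the covariant derivative):

```latex
% Sourced equation
\nabla_\beta F^{\alpha\beta} = \frac{4\pi}{c}\, J^\alpha

% Source-free equation; for the antisymmetric F the covariant
% derivatives in the cyclic sum reduce to partial derivatives
\nabla_\gamma F_{\alpha\beta} + \nabla_\alpha F_{\beta\gamma} + \nabla_\beta F_{\gamma\alpha} = 0
```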

Here, Γ^α_{βγ} is a Christoffel symbol that characterizes the curvature of spacetime and D_γ is the covariant derivative.

**Formulation in terms of differential forms**

The formulation of the Maxwell equations in terms of differential forms can be used without change in general relativity. The equivalence of the more traditional general relativistic formulation using the covariant derivative with the differential-form formulation can be seen as follows. Choose local coordinates x^α, which give a basis of 1-forms dx^α at every point of the open set where the coordinates are defined. Using this basis and cgs-Gaussian units we define:
• the antisymmetric field tensor F_{αβ}, corresponding to the field 2-form F = ½ F_{αβ} dx^α ∧ dx^β
• the current-vector infinitesimal 3-form J

Here g is, as usual, the determinant of the metric tensor g_{αβ}. A small computation that uses the symmetry of the Christoffel symbols (i.e., the torsion-freeness of the Levi-Civita connection) and the covariant constancy of the Hodge star operator then shows that in this coordinate neighborhood we have:
• the Bianchi identity dF = 0
• the source equation d★F = J (with the constants absorbed into the 3-form J)


• the continuity equation dJ = 0
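Collected, and with the constants absorbed into the current 3-form J as defined above, the three displayed identities read (a reconstruction; the precise 4π/c normalization depends on how J is defined):

```latex
dF = 0 \qquad\text{(Bianchi identity)}\\
d{\star}F = J \qquad\text{(source equation)}\\
dJ = d\,(d{\star}F) = 0 \qquad\text{(continuity, automatic since } d\circ d = 0\text{)}
```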

Notes

[1] J.D. Jackson, "Maxwell's Equations" video glossary entry (http:/ / videoglossary. lbl. gov/ 2009/ maxwells-equations/ ) [2] Principles of physics: a calculus-based text (http:/ / books. google. com/ books?id=1DZz341Pp50C& pg=PA809), by R.A. Serway, J.W. Jewett, page 809. [3] The quantity we would now call , with units of velocity, was directly measured before Maxwell's equations, in an 1855 experiment by Wilhelm Eduard Weber and Rudolf Kohlrausch. They charged a leyden jar (a kind of capacitor), and measured the electrostatic force associated with the potential; then, they discharged it while measuring the magnetic force from the current in the discharge-wire. Their result was 3.107 × 108 m/s, remarkably close to the speed of light. See The story of electrical and magnetic measurements: from 500 B.C. to the 1940s, by Joseph F. Keithley, p115 (http:/ / books. google. com/ books?id=uwgNAtqSHuQC& pg=PA115) [4] David J Griffiths (1999). Introduction to electrodynamics (http:/ / worldcat. org/ isbn/ 013805326X) (Third ed.). Prentice Hall. pp. 559–562. ISBN 013805326X. . [5] In some books *e.g., in U. Krey and A. Owen's Basic Theoretical Physics (Springer 2007)), the term effective charge is used instead of total charge, while free charge is simply called charge. [6] In some books *e.g., in ), the term effective charge is used instead of total charge, while free charge is simply called charge. [7] See David J. Griffiths (1999). Introduction to Electrodynamics (third ed.). Prentice Hall. for a good description of how P relates to the bound charge. [8] See David J. Griffiths (1999). Introduction to Electrodynamics (third ed.). Prentice Hall. for a good description of how M relates to the bound current. [9] The generalization to non-isotropic materials is straight forward; simply replace the constants with tensor quantities. 
[10] The free charges and currents respond to the fields through the Lorentz force law and this response is calculated at a fundamental level using mechanics. The response of bound charges and currents is dealt with using grosser methods subsumed under the notions of magnetization and polarization. Depending upon the problem, one may choose to have no free charges whatsoever. [11] In general materials are bianisotropic. TG Mackay and A Lakhtakia (2010). Electromagnetic Anisotropy and Bianisotropy: A Field Guide (http:/ / www. worldscibooks. com/ physics/ 7515. html). World Scientific. . [12] Halevi, Peter (1992). Spatial dispersion in solids and plasmas. Amsterdam: North-Holland. ISBN 978-0444874054. [13] Jackson, John David (1999). Classical Electrodynamics (3rd ed.). New York: Wiley. ISBN 0-471-30932-X. [14] Note that the 'magnetic susceptibility' term used here is in terms of B and is different from the standard definition in terms of H. [15] Aspnes, D.E., "Local-field effects and effective-medium theory: A microscopic perspective," Am. J. Phys. 50, p. 704-709 (1982). [16] Habib Ammari & Hyeonbae Kang (2006). Inverse problems, multi-scale analysis and effective medium theory : workshop in Seoul, Inverse problems, multi-scale analysis, and homogenization, June 22–24, 2005, Seoul National University, Seoul, Korea (http:/ / books. google. com/ ?id=dK7JwVPbUkMC& printsec=frontcover& dq="effective+ medium"). Providence RI: American Mathematical Society. p. 282. ISBN 0821839683. . [17] O. C. Zienkiewicz, Robert Leroy Taylor, J. Z. Zhu, Perumal Nithiarasu (2005). The Finite Element Method (http:/ / books. google. com/ ?id=rvbSmooh8Y4C& printsec=frontcover& dq=finite+ element+ inauthor:Zienkiewicz) (Sixth ed.). Oxford UK: Butterworth-Heinemann. p. 550 ff. ISBN 0750663219. . [18] N. Bakhvalov and G. Panasenko, Homogenization: Averaging Processes in Periodic Media (Kluwer: Dordrecht, 1989); V. V. Jikov, S. M. Kozlov and O. A. 
Oleinik, Homogenization of Differential Operators and Integral Functionals (Springer: Berlin, 1994). [19] Vitaliy Lomakin, Steinberg BZ, Heyman E, & Felsen LB (2003). "Multiresolution Homogenization of Field and Network Formulations for Multiscale Laminate Dielectric Slabs" (http:/ / www. ece. ucsd. edu/ ~vitaliy/ A8. pdf). IEEE Transactions on Antennas and Propagation 51 (10): 2761 ff. doi:10.1109/TAP.2003.816356. . [20] AC Gilbert (Ronald R Coifman, Editor) (2000-05). Topics in Analysis and Its Applications: Selected Theses (http:/ / books. google. com/ ?id=d4MOYN5DjNUC& printsec=frontcover& dq=homogenization+ date:2000-2009). Singapore: World Scientific Publishing Company. p. 155. ISBN 9810240945. . [21] Edward D. Palik & Ghosh G (1998). Handbook of Optical Constants of Solids (http:/ / books. google. com/ ?id=AkakoCPhDFUC& dq=optical+ constants+ inauthor:Palik). London UK: Academic Press. p. 1114. ISBN 0125444222. . [22] The story of electrical and magnetic measurements: from 500 B.C. to the 1940s, by Joseph F. Keithley, p115 (http:/ / books. google. com/ books?id=uwgNAtqSHuQC& pg=PA115) [23] "The Dictionary of Scientific Biography", by Charles Coulston Gillispie [24] but are now universally known as Maxwell's equations. However, in 1940 Einstein referred to the equations as Maxwell's equations in "The Fundamentals of Theoretical Physics" published in the Washington periodical Science, May 24, 1940.

Paul J. Nahin (2002-10-09). Oliver Heaviside: the life, work, and times of an electrical genius of the Victorian age (http:/ / books. google. com/ ?id=e9wEntQmA0IC& pg=PA111& dq=nahin+ hertz-heaviside+ maxwell-hertz). JHU Press. pp. 108–112. ISBN 9780801869099. .

[25] Jed Z. Buchwald (1994). The creation of scientific effects: Heinrich Hertz and electric waves (http:/ / books. google. com/ ?id=2bDEvvGT1EYC& pg=PA194& dq=maxwell+ faraday+ time-derivative+ vector-potential). University of Chicago Press. p. 194. ISBN 9780226078885. . [26] Myron Evans (2001-10-05). Modern nonlinear optics (http:/ / books. google. com/ ?id=9p0kK6IG94gC& pg=PA240& dq=maxwell-heaviside+ equations). John Wiley and Sons. p. 240. ISBN 9780471389316. . [27] Crease, Robert. The Great Equations: Breakthroughs in Science from Pythagoras to Heisenberg (http:/ / books. google. com/ books?id=IU04tZsVjXkC& lpg=PA133& dq="Civil War will pale into provincial insignificance"& pg=PA133#v=onepage& q="Civil War will pale into provincial insignificance"& f=false), page 133 (2008). [28] Oliver J. Lodge (November 1888). "Sketch of the Electrical Papers in Section A, at the Recent Bath Meeting of the British Association". Electrical Engineer 7: 535. [29] J. R. Lalanne, F. Carmona, and L. Servant (1999-11). Optical spectroscopies of electronic absorption (http:/ / books. google. com/ ?id=7rWD-TdxKkMC& pg=PA8& dq=maxwell-faraday+ derivative). World Scientific. p. 8. ISBN 9789810238612. . [30] Roger F. Harrington (2003-10-17). Introduction to Electromagnetic Engineering (http:/ / books. google. com/ ?id=ZlC2EV8zvX8C& pg=PR7& dq=maxwell-faraday-equation+ law-of-induction). Courier Dover Publications. pp. 49–56. ISBN 9780486432410. . [31] page 480. (http:/ / upload. wikimedia. org/ wikipedia/ commons/ 1/ 19/ A_Dynamical_Theory_of_the_Electromagnetic_Field. pdf) [32] Here it is noted that a quite different quantity, the magnetic polarization, μ0M by decision of an international IUPAP commission has been given the same name J. So for the electric current density, a name with small letters, j would be better. But even then the mathematicians would still use the large-letter-name J for the corresponding current-twoform (see below). [33] http:/ / www. mathematik. tu-darmstadt. 
de/ ~bruhn/ Original-MAXWELL. htm [34] Experiments like the Michelson-Morley experiment in 1887 showed that the 'aether' moved at the same speed as Earth. While other experiments, such as measurements of the aberration of light from stars, showed that the ether is moving relative to earth. [35] "On the Electrodynamics of Moving Bodies" (http:/ / www. fourmilab. ch/ etexts/ einstein/ specrel/ www/ ). Fourmilab.ch. . Retrieved 2008-10-19. [36] Recently, scientists have described behavior in a crystalline state of matter known as spin-ice which have macroscopic behavior like magnetic monopoles. (See http:/ / www. sciencemag. org/ cgi/ content/ abstract/ 1178868 and http:/ / www. nature. com/ nature/ journal/ v461/ n7266/ full/ nature08500. html .) The divergence of B is still zero for this system, though. [37] J.D. Jackson. "6.12". Clasical Electrodynamics (3rd ed.). ISBN 047143132x. [38] "IEEEGHN: Maxwell's Equations" (http:/ / www. ieeeghn. org/ wiki/ index. php/ Maxwell's_Equations). Ieeeghn.org. . Retrieved 2008-10-19. [39] This is known as a duality transformation. See J.D. Jackson. "6.12". Clasical Electrodynamics (3rd ed.). ISBN 047143132x.. [40] Peter Monk; ), Peter Monk (Ph.D (2003). Finite Element Methods for Maxwell's Equations (http:/ / books. google. com/ ?id=zI7Y1jT9pCwC& pg=PA1& dq=electromagnetism+ "boundary+ conditions"). Oxford UK: Oxford University Press. p. 1 ff. ISBN 0198508883. . [41] Thomas B. A. Senior & John Leonidas Volakis (1995-03-01). Approximate Boundary Conditions in Electromagnetics (http:/ / books. google. com/ ?id=eOofBpuyuOkC& pg=PA261& dq=electromagnetism+ "boundary+ conditions"). London UK: Institution of Electrical Engineers. p. 261 ff. ISBN 0852968493. . [42] T Hagstrom (Björn Engquist & Gregory A. Kriegsmann, Eds.) (1997). Computational Wave Propagation (http:/ / books. google. com/ ?id=EdZefkIOR5cC& pg=PA1& dq=electromagnetism+ "boundary+ conditions"). Berlin: Springer. p. 1 ff. ISBN 0387948740. . [43] Henning F. 
Harmuth & Malek G. M. Hussain (1994). Propagation of Electromagnetic Signals (http:/ / books. google. com/ ?id=6_CZBHzfhpMC& pg=PA45& dq=electromagnetism+ "initial+ conditions"). Singapore: World Scientific. p. 17. ISBN 9810216890. . [44] Fioralba Cakoni; Colton, David L (2006). "The inverse scattering problem for an imperfect conductor" (http:/ / books. google. com/ ?id=7DqqOjPJetYC& pg=PR6#PPA61,M1). Qualitative methods in inverse scattering theory. Springer Science & Business. p. 61. ISBN 3540288449. ., Khosrow Chadan et al. (1997). An introduction to inverse scattering and inverse spectral problems (http:/ / books. google. com/ ?id=y2rywYxsDEAC& pg=PA45). Society for Industrial and Applied Mathematics. p. 45. ISBN 0898713870. . [45] S. F. Mahmoud (1991). Electromagnetic Waveguides: Theory and Applications applications (http:/ / books. google. com/ ?id=toehQ7vLwAMC& pg=PA2& dq=Maxwell's+ equations+ waveguides). London UK: Institution of Electrical Engineers. Chapter 2. ISBN 0863412327. . [46] Jean-Michel Lourtioz (2005-05-23). Photonic Crystals: Towards Nanoscale Photonic Devices (http:/ / books. google. com/ ?id=vSszZ2WuG_IC& pg=PA84& dq=electromagnetism+ boundary+ + -element). Berlin: Springer. p. 84. ISBN 354024431X. . [47] S. G. Johnson, Notes on Perfectly Matched Layers (http:/ / math. mit. edu/ ~stevenj/ 18. 369/ pml. pdf), online MIT course notes (Aug. 2007). [48] Taflove A & Hagness S C (2005). Computational Electrodynamics: The Finite-difference Time-domain Method (http:/ / www. amazon. com/ gp/ reader/ 1580538320/ ref=sib_dp_pop_toc?ie=UTF8& p=S008#reader-link). Boston MA: Artech House. Chapters 6 & 7. ISBN 1580538320. .


[49] David M Cook (2002). The Theory of the Electromagnetic Field (http:/ / books. google. com/ ?id=bI-ZmZWeyhkC& pg=RA1-PA335& dq=electromagnetism+ infinity+ boundary+ conditions). Mineola NY: Courier Dover Publications. p. 335 ff. ISBN 0486425673. . [50] Korada Umashankar (1989-09). Introduction to Engineering Electromagnetic Fields (http:/ / books. google. com/ ?id=qp5qHvB_mhcC& pg=PA359& dq=electromagnetism+ "boundary+ conditions"). Singapore: World Scientific. p. §10.7; pp. 359ff. ISBN 9971509210. . [51] Joseph V. Stewart (2001). Intermediate Electromagnetic Theory (http:/ / books. google. com/ ?id=mwLI4nQ0thQC& printsec=frontcover& dq=intitle:Intermediate+ intitle:electromagnetic+ intitle:theory). Singapore: World Scientific. Chapter III, pp. 111 ff Chapter V, Chapter VI. ISBN 9810244703. . [52] Tai L. Chow (2006). Electromagnetic theory (http:/ / books. google. com/ ?id=dpnpMhw1zo8C& pg=PA153& dq=isbn=0763738271). Sudbury MA: Jones and Bartlett. p. 333ff and Chapter 3: pp. 89ff. ISBN 0-7637-3827-1. . [53] John Leonidas Volakis, Arindam Chatterjee & Leo C. Kempel (1998). Finite element method for electromagnetics : antennas, microwave circuits, and scattering applications (http:/ / books. google. com/ ?id=55q7HqnMZCsC& pg=PA79& dq=electromagnetism+ "boundary+ conditions"). New York: Wiley IEEE. p. 79 ff. ISBN 0780334256. . [54] Bernard Friedman (1990). Principles and Techniques of Applied Mathematics (http:/ / www. amazon. com/ Principles-Techniques-Applied-Mathematics-Friedman/ dp/ 0486664449/ ref=sr_1_1?ie=UTF8& s=books& qisbn=1207010487& sr=1-1). Mineola NY: Dover Publications. ISBN 0486664449. . [55] Littlejohn, Robert (Fall 2007). "Gaussian, SI and Other Systems of Units in Electromagnetic Theory" (http:/ / bohr. physics. berkeley. edu/ classes/ 221/ 0708/ notes/ emunits. pdf) (PDF). Physics 221A, University of California, Berkeley lecture notes. . Retrieved 2008-05-06. [56] Albert Einstein (1905) On the electrodynamics of moving bodies [57] Carver A. 
Mead (2002-08-07). Collective Electrodynamics: Quantum Foundations of Electromagnetism (http:/ / books. google. com/ ?id=GkDR4e2lo2MC& pg=PA37& dq=Riemann+ Summerfeld). MIT Press. pp. 37–38. ISBN 9780262632607. . [58] Frederic V. Hartemann (2002). High-field electrodynamics (http:/ / books. google. com/ ?id=tIkflVrfkG0C& pg=PA102& dq=d'Alembertian+ covariant-form+ maxwell-lorentz). CRC Press. p. 102. ISBN 9780849323782. . [59] Oersted Medal Lecture David Hestenes (Am. J. Phys. 71 (2), February 2003, pp. 104--121) Online:http:/ / geocalc. clas. asu. edu/ html/ Oersted-ReformingTheLanguage. html p26 [60] M. Murray (5 September 2008). "Line Bundles. Honours 1996" (http:/ / www. maths. adelaide. edu. au/ michael. murray/ line_bundles. pdf). University of Adelaide. . Retrieved 2010-11-19. [61] R. Bott (1985). "On some recent interactions between mathematics and physics". Canadian Mathematical Bulletin 28 (2): 129–164.


**References and further reading**

Journal articles

• James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)

The developments before relativity:
• Joseph Larmor (1897) "On a dynamical theory of the electric and luminiferous medium", Phil. Trans. Roy. Soc. 190, 205-300 (third and last in a series of papers with the same name).
• Hendrik Lorentz (1899) "Simplified theory of electrical and optical phenomena in moving systems", Proc. Acad. Science Amsterdam, I, 427-43.
• Hendrik Lorentz (1904) "Electromagnetic phenomena in a system moving with any velocity less than that of light", Proc. Acad. Science Amsterdam, IV, 669-78.
• Henri Poincaré (1900) "La theorie de Lorentz et la Principe de Reaction", Archives Néerlandaises, V, 253-78.
• Henri Poincaré (1901) Science and Hypothesis
• Henri Poincaré (1905) "Sur la dynamique de l'électron" (http://www.soso.ch/wissen/hist/SRT/P-1905-1.pdf), Comptes rendus de l'Académie des Sciences, 140, 1504-8.
• Macrossan, M. N. (1986). "A note on relativity before Einstein" (http://eprint.uq.edu.au/archive/00002307/). Brit. J. Phil. Sci. 37: 232–234.


**University-level textbooks**

Undergraduate
• Feynman, Richard P. (2005). The Feynman Lectures on Physics. 2 (2nd ed.). Addison-Wesley. ISBN 978-0805390650.
• Fleisch, Daniel (2008). A Student's Guide to Maxwell's Equations. Cambridge University Press. ISBN 978-0521877619.
• Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 0-13-805326-X.
• Hoffman, Banesh (1983). Relativity and Its Roots. W. H. Freeman.
• Krey, U.; Owen, A. (2007). Basic Theoretical Physics: A Concise Overview. Springer. ISBN 978-3-540-36804-5. See especially part II.
• Purcell, Edward Mills (1985). Electricity and Magnetism. McGraw-Hill. ISBN 0-07-004908-4.
• Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (2008). Foundations of Electromagnetic Theory (4th ed.). Addison Wesley. ISBN 978-0321581747.
• Sadiku, Matthew N. O. (2006). Elements of Electromagnetics (4th ed.). Oxford University Press. ISBN 0-19-5300483.
• Schwarz, Melvin (1987). Principles of Electrodynamics. Dover. ISBN 0-486-65493-1.
• Stevens, Charles F. (1995). The Six Core Theories of Modern Physics. MIT Press. ISBN 0-262-69188-4.
• Tipler, Paul; Mosca, Gene (2007). Physics for Scientists and Engineers. 2 (6th ed.). W. H. Freeman. ISBN 978-1429201339.
• Ulaby, Fawwaz T. (2007). Fundamentals of Applied Electromagnetics (5th ed.). Pearson Education. ISBN 0-13-241326-4.

Graduate
• Jackson, J. D. (1999). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X.
• Panofsky, Wolfgang K. H.; Phillips, Melba (2005). Classical Electricity and Magnetism (2nd ed.). Dover. ISBN 978-0486439242.

Older classics
• Lifshitz, Evgeny; Landau, Lev (1980). The Classical Theory of Fields (4th ed.). Butterworth-Heinemann. ISBN 0750627689.
• Lifshitz, Evgeny; Landau, Lev; Pitaevskii, L. P. (1984). Electrodynamics of Continuous Media (2nd ed.). Butterworth-Heinemann. ISBN 0750626348.
• Maxwell, James Clerk (1873). A Treatise on Electricity and Magnetism. Dover. ISBN 0-486-60637-6.
• Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973). Gravitation. W. H. Freeman. ISBN 0-7167-0344-0. Sets out the equations using differential forms.

Computational techniques
• Chew, W. C.; Jin, J.; Michielssen, E.; Song, J. (2001). Fast and Efficient Algorithms in Computational Electromagnetics. Artech House. ISBN 1-58053-152-0.
• Harrington, R. F. (1993). Field Computation by Moment Methods. Wiley-IEEE Press. ISBN 0-78031-014-4.
• Jin, J. (2002). The Finite Element Method in Electromagnetics (2nd ed.). Wiley-IEEE Press. ISBN 0-47143-818-9.
• Lounesto, Pertti (1997). Clifford Algebras and Spinors. Cambridge University Press. ISBN 0521599164. Chapter 8 sets out several variants of the equations using exterior algebra and differential forms.
• Taflove, Allen; Hagness, Susan C. (2005). Computational Electrodynamics: The Finite-Difference Time-Domain Method (3rd ed.). Artech House. ISBN 1-58053-832-0.


External links

• Mathematical aspects of Maxwell's equation are discussed on the Dispersive PDE Wiki (http://tosio.math.toronto.edu/wiki/index.php/Main_Page).

Modern treatments

• Electromagnetism (http://www.lightandmatter.com/html_books/0sn/ch11/ch11.html), B. Crowell, Fullerton College
• Lecture series: Relativity and electromagnetism (http://farside.ph.utexas.edu/~rfitzp/teaching/jk1/lectures/node6.html), R. Fitzpatrick, University of Texas at Austin
• Electromagnetic waves from Maxwell's equations (http://www.physnet.org/modules/pdf_modules/m210.pdf) on Project PHYSNET (http://www.physnet.org)
• MIT Video Lecture Series (36 x 50 minute lectures, in .mp4 format): Electricity and Magnetism (http://ocw.mit.edu/OcwWeb/Physics/8-02Electricity-and-MagnetismSpring2002/VideoAndCaptions/index.htm), taught by Professor Walter Lewin

Historical

• James Clerk Maxwell, A Treatise on Electricity And Magnetism Vols 1 and 2 (http://www.antiquebooks.net/readpage.html#maxwell) 1904—most readable edition with all corrections—Antique Books Collection, suitable for free reading online
• Maxwell, J.C., A Treatise on Electricity And Magnetism, Volume 1, 1873 (http://posner.library.cmu.edu/Posner/books/book.cgi?call=537_M46T_1873_VOL._1) - Posner Memorial Collection, Carnegie Mellon University
• Maxwell, J.C., A Treatise on Electricity And Magnetism, Volume 2, 1873 (http://posner.library.cmu.edu/Posner/books/book.cgi?call=537_M46T_1873_VOL._2) - Posner Memorial Collection, Carnegie Mellon University
• On Faraday's Lines of Force, 1855/56 (http://blazelabs.com/On Faraday's Lines of Force.pdf) - Maxwell's first paper (Parts 1 & 2), compiled by Blaze Labs Research (PDF)
• On Physical Lines of Force, 1861 - Maxwell's 1861 paper describing magnetic lines of force; predecessor to the 1873 Treatise
• Maxwell, James Clerk, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
• Catt, Walton and Davidson. "The History of Displacement Current". Wireless World, March 1979. (http://www.electromagnetism.demon.co.uk/z014.htm)
• Reprint from Dover Publications (ISBN 0-486-60636-8)

• Full text of 1904 Edition including full text search (http://www.antiquebooks.net/readpage.html#maxwell)
• A Dynamical Theory Of The Electromagnetic Field, 1865 (http://books.google.com/books?id=5HE_cmxXt2MC&vid=02IWHrbcLC9ECI_wQx&dq=Proceedings+of+the+Royal+Society+Of+London+Vol+XIII&ie=UTF-8&as_brr=1&jtp=531) - Maxwell's 1865 paper describing his 20 Equations in 20 Unknowns; predecessor to the 1873 Treatise


Other

• Feynman's derivation of Maxwell equations and extra dimensions (http://uk.arxiv.org/abs/hep-ph/0106235)
• Nature Milestones: Photons - Milestone 2 (1861) Maxwell's equations (http://www.nature.com/milestones/milephotons/full/milephotons02.html)

Algorithm

In mathematics and computer science, an algorithm (/ˈælɡəɹɪðm/) is an effective method expressed as a finite list[1] of well-defined instructions[2] for calculating a function[3]. Algorithms are used for calculation, data processing, and automated reasoning. Starting from an initial state and initial input (perhaps null),[4] the instructions describe a computation that, when executed, will proceed through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[7] A partial formalization of the concept began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability"[8] or "effective method";[9] those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939.

**Why algorithms are necessary: an informal definition**

For a detailed presentation of the various points of view around the definition of "algorithm", see Algorithm characterizations. For examples of simple addition algorithms specified in the detailed manner described there, see Algorithm examples. While there is no generally accepted formal definition of "algorithm", an informal definition could be "a set of rules that precisely defines a sequence of operations."[10] For some people, a program is only an algorithm if it stops eventually; for others, a program is only an algorithm if it stops before a given number of calculation steps.[11]

A prototypical example of an algorithm is Euclid's algorithm for determining the greatest common divisor of two integers; an example (there are others) is described by the flow chart above and as an example in a later section.

Flow chart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B − A (meaning the number b − a replaces the old b). Similarly, IF A > B, THEN A ← A − B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977.)
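The flow chart's two subtraction loops can be transcribed directly. The sketch below (Python; not part of the original article) assumes positive integer inputs, as the flow chart does:

```python
def gcd_subtraction(a, b):
    """Greatest common divisor by successive subtraction, mirroring
    the flow chart: while location B is nonzero, replace the larger
    of A, B by their difference.  Assumes a, b are positive integers."""
    A, B = a, b              # the two storage locations of the caption
    while B != 0:
        if B >= A:           # the "B >= A" test and its loop
            B = B - A
        else:                # otherwise A > B
            A = A - B
    return A                 # the g.c.d. is left in location A

print(gcd_subtraction(1599, 650))  # → 13
```

For positive inputs A never reaches 0 (it only shrinks while still larger than B), so the loop always terminates with the g.c.d. in A.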

Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller without limit ...you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit

instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.[12]

The term "enumerably infinite" means "countable using integers perhaps extending to infinity." Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be chosen from 0 to infinity. Thus an algorithm can be an algebraic equation such as y = m + n — two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion (see more at Algorithm characterizations) indicate that the word implies much more than this, something on the order of (for the addition example): Precise instructions (in language understood by "the computer")[13] for a fast, efficient, "good"[14] process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities)[15] to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively"[16] produce, in a "reasonable" time,[17] output-integer y at a specified place and in a specified format.

The concept of algorithm is also used to define the notion of decidability. That notion is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.
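Boolos and Jeffrey's idea of explicit instructions for the nth member of an enumerably infinite set can be illustrated with a deliberately trivial sketch (Python; the set of even natural numbers is an assumption chosen purely for illustration):

```python
def nth_even(n):
    """Explicit instructions, in Boolos & Jeffrey's sense, for
    producing the n-th member (n = 0, 1, 2, ...) of the enumerably
    infinite set of even natural numbers."""
    return 2 * n

# The set is "enumerably infinite" because such a rule exists
# for arbitrary finite n:
print([nth_even(n) for n in range(5)])  # → [0, 2, 4, 6, 8]
```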


Formalization

Algorithms are essential to the way computers process data. Many computer programs contain algorithms that specify the specific instructions a computer should perform (in a specific order) to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000): Minsky: "But we will also maintain, with Turing . . . that any procedure which could "naturally" be called effective, can in fact be realized by a (simple) machine. Although this may seem extreme, the arguments . . . in its favor are hard to refute".[18] Gurevich: "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine ... according to Savage [1987], an algorithm is a computational process defined by a Turing machine".[19] Typically, when an algorithm is associated with processing information, data is read from an input source, written to an output device, and/or stored for further processing. Stored data is regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case-by-case; the criteria for each case must be clear (and computable). Because an algorithm is a precise list of precise steps, the order of computation will always be critical to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom", an idea that is described more formally by flow of control. 
So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception, and it attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from the intuition of "memory" as a scratchpad. There is an example below of such an assignment.
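A minimal sketch of this imperative conception (not the article's own example): the variable bindings play the role of the scratchpad memory, and each assignment rewrites part of the state:

```python
# The machine's "state" is the set of variable bindings; each
# assignment overwrites part of that scratchpad memory.
state = {"total": 0, "i": 1}

while state["i"] <= 5:                           # flow of control, looped
    state["total"] = state["total"] + state["i"]  # assignment: set a variable
    state["i"] = state["i"] + 1                   # assignment: advance counter

print(state["total"])  # → 15, the sum 1 + 2 + 3 + 4 + 5
```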

For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming.


Termination

Some writers restrict the definition of algorithm to procedures that eventually finish. In such a category Kleene places the "decision procedure or decision method or algorithm for the question".[20] Others, including Kleene, include procedures that could run forever without stopping; such a procedure has been called a "computational method"[21] or "calculation procedure or algorithm (and hence a calculation problem) in relation to a general question which requires for an answer, not yes or no, but the exhibiting of some object".[22]

Minsky makes the pertinent observation, with regard to determining whether an algorithm will eventually terminate (from a particular starting state): "But if the length of the process isn't known in advance, then "trying" it may not be decisive, because if the process does go on forever—then at no time will we ever be sure of the answer."[18]

As it happens, no other method can do any better, as was shown by Alan Turing with his celebrated result on the undecidability of the so-called halting problem: there is no algorithmic procedure for determining whether or not arbitrary algorithms terminate from given starting states. The analysis of algorithms for their likelihood of termination is called termination analysis.

See the examples of (im-)"proper" subtraction at partial function for more about what can happen when an algorithm fails for certain of its input numbers, e.g., (i) non-termination, (ii) production of "junk" (output in the wrong format to be considered a number) or no number(s) at all (halt ends the computation with no output), (iii) wrong number(s), or (iv) a combination of these.
Kleene proposed that the production of "junk" or failure to produce a number is solved by having the algorithm detect these instances and produce, e.g., an error message (he suggested "0"), or preferably, force the algorithm into an endless loop.[23] Davis (1958) does this to his subtraction algorithm: he fixes his algorithm in a second example so that it is proper subtraction and it terminates.[24] Along with the logical outcomes "true" and "false", Kleene (1952) also proposes the use of a third logical symbol "u" (undecided)[25]; thus an algorithm will always produce something when confronted with a "proposition".

The problem of wrong answers must be solved with an independent "proof" of the algorithm, e.g., using induction: "We normally require auxiliary evidence for this [that the algorithm correctly defines a mu recursive function], e.g., in the form of an inductive proof that, for each argument value, the computation terminates with a unique value."[26]
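Minsky's point about "trying" a process can be made concrete with a procedure whose universal termination is genuinely unknown. The sketch below (ours, not from the sources cited) counts steps under the Collatz rule; the loop has halted for every starting value ever tried, yet no proof exists that it halts for all positive n, so no finite amount of testing settles the question.

```python
def collatz_steps(n: int) -> int:
    """Count the steps for n to reach 1 under the Collatz rule.

    Whether this loop terminates for every positive n is the open
    Collatz conjecture: running it answers the question for one n,
    but never for all n at once.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, but no proof covers every n
```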

Expressing algorithms

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in natural language statements, while remaining independent of a particular implementation language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are often used as a way to define or document algorithms.

There is a wide variety of representations possible, and one can express a given Turing machine program as a sequence of machine tables (see more at finite state machine and state transition table), as flowcharts (see more at state diagram), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see more at Turing machine). Sometimes it is helpful in the description of an algorithm to supplement small "flow charts" (state diagrams) with natural-language and/or arithmetic expressions written inside "block diagrams" to summarize what the "flow charts" are accomplishing.

Representations of algorithms are generally classed into three accepted levels of Turing machine description:[27]
• 1 High-level description: "...prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."
• 2 Implementation description: "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."
• 3 Formal description: Most detailed, "lowest level"; gives the Turing machine's "state table".

For an example of the simple algorithm "Add m+n" described in all three levels, see Algorithm examples.


Implementation

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

Computer algorithms

In computer systems, an algorithm is basically an instance of logic written in software by software developers to be effective for the intended "target" computer(s), in order for the target machines to produce output from given input (perhaps null).

"Elegant" (compact) programs, "good" (fast) programs: The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:

Knuth: ". . . we want good algorithms in some loosely defined aesthetic sense. One criterion . . . is the length of time taken to perform the algorithm . . .. Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc."[28]

Chaitin: ". . . a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"[29]

Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'" -- such a proof would solve the halting problem (ibid).

Algorithm versus function computable by an algorithm: For a given function, multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is . . . important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms".[30]

Unfortunately there may be a tradeoff between goodness (speed) and elegance (compactness): an elegant program may take more steps to complete a computation than one less elegant. An example using Euclid's algorithm appears below.

Flowchart examples of the canonical Böhm-Jacopini structures: the SEQUENCE (rectangles descending the page), the WHILE-DO and the IF-THEN-ELSE. The three structures are made of the primitive conditional GOTO (IF test=true THEN GOTO step xxx) (a diamond), the unconditional GOTO (rectangle), various assignment operators (rectangle), and HALT (rectangle). Nesting of these structures inside assignment-blocks results in complex diagrams (cf Tausworthe 1977:100,114).

Computers (and computors), models of computation: A computer (or human "computor"[31]) is a restricted type of machine, a "discrete deterministic mechanical device"[32] that blindly follows its instructions[33]. Melzak's and Lambek's primitive models[34] reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters,[35] (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent[36].

Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability"[37]. Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF-THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution)[38] operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1)[39]. Rarely will a programmer have to write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT[40].

Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example"[41]. But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then for the algorithm to be effective it must provide a set of rules for extracting a square root[42]. This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).

But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters"[43]. When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" (division) instruction available rather than just subtraction (or worse: just Minsky's "decrement").

Structured programming, canonical structures: Per the Church–Turing thesis any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations Turing completeness requires only four instruction types: conditional GOTO, unconditional GOTO, assignment, HALT.
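The four-instruction machine just described can be sketched as a toy interpreter. The encoding below is ours, not Minsky's notation; it implements ZERO, SUCCESSOR (INC), DECREMENT (DEC), the two GOTOs, and HALT, and uses them to add two counters by repeated decrement and increment.

```python
def run(program, registers):
    """Execute a counter-machine program, in the spirit of Minsky's model."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return registers
        if op == "ZERO":            # L <- 0
            registers[args[0]] = 0
        elif op == "INC":           # SUCCESSOR: L <- L + 1
            registers[args[0]] += 1
        elif op == "DEC":           # DECREMENT: L <- L - 1
            registers[args[0]] -= 1
        elif op == "GOTO":          # unconditional jump
            pc = args[0]
            continue
        elif op == "IFZERO":        # conditional GOTO
            if registers[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1

# Add B into A: a Turing-complete instruction set needs nothing richer.
add = [
    ("IFZERO", "B", 4),
    ("DEC", "B"),
    ("INC", "A"),
    ("GOTO", 0),
    ("HALT",),
]
print(run(add, {"A": 3, "B": 4}))  # {'A': 7, 'B': 0}
```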
Kemeny and Kurtz observe that while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language"[44]. Tausworthe augments the three Böhm-Jacopini canonical structures[45] SEQUENCE, IF-THEN-ELSE, and WHILE-DO with two more: DO-WHILE and CASE[46]. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction[47].

Canonical flowchart symbols[48]: The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program that implements it). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm-Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the diagram.


Examples

Sorting example

One of the simplest algorithms is to find the largest number in an (unsorted) list of numbers. The solution necessarily requires looking at every number in the list, but only once at each. From this follows a simple algorithm, which can be stated in English prose as a high-level description:

High-level description:
1. Assume the first item is largest.
2. Look at each of the remaining items in the list and if it is larger than the largest item so far, make a note of it.
3. The last noted item is the largest in the list when the process is complete.

(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Algorithm LargestNumber
  Input: A non-empty list of numbers L.
  Output: The largest number in the list L.
  largest ← L0
  for each item in the list (Length(L) ≥ 1), do
    if the item > largest, then
      largest ← the item
  return largest

• "←" is a loose shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.
• "return" terminates the algorithm and outputs the value that follows.
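The pseudocode above translates almost line for line into a running program; the sketch below is ours.

```python
def largest_number(L):
    """Direct transcription of the LargestNumber pseudocode."""
    if not L:
        raise ValueError("list must be non-empty")
    largest = L[0]            # largest <- L0
    for item in L:            # for each item in the list
        if item > largest:    # if the item > largest
            largest = item    # largest <- the item
    return largest            # return largest

print(largest_number([31, 41, 59, 26, 53]))  # 59
```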

An animation of the quicksort algorithm sorting an array of randomized values. The red bars mark the pivot element; at the start of the animation, the element farthest to the right hand side is chosen as the pivot.


Euclid’s algorithm

Euclid’s algorithm appears as Proposition II in Book VII ("Elementary Number Theory") of his Elements [49] . Euclid poses the problem: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including 0. And to "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s[50] . In modern words, remainder r = l - q*s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division[51] . For Euclid’s method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be 0, AND (ii) the subtraction must be “proper”, a test must guarantee that the smaller of the two numbers is subtracted from the larger (alternately, the two can be equal so their subtraction yields 0).

Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest[52]. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.

Computer (computor) language for Euclid's algorithm

The example-diagram of Euclid's algorithm from T.L. Heath 1908, with more detail added. Euclid does not go beyond a third measuring and gives no numerical examples. Nicomachus gives the example of 49 and 21: "I subtract the less from the greater; 28 is left; then again I subtract from this the same 21 (for this is possible); 7 is left; I subtract this from 21, 14 is left; from which I again subtract 7 (for this is possible); 7 will be left, but 7 cannot be subtracted from 7." Heath comments that "The last phrase is curious, but the meaning of it is obvious enough, as also the meaning of the phrase about ending "at one and the same number"" (Heath 1908:300).

Only a few instruction types are required to execute Euclid's algorithm: some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.
• A location is symbolized by upper case letter(s), e.g. S, A, etc.
• The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.

An inelegant program for Euclid's algorithm

The following algorithm is framed as Knuth's 4-step version of Euclid's and Nicomachus', but rather than using division to find the remainder it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2-4:

INPUT:
1 [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S
2 [Initialize R: make the remaining length r equal to the starting/initial/input length l]: R ← L
E0: [Ensure r ≥ s.]
3 [Ensure the smaller of the two numbers is in S and the larger in R]: IF R > S THEN the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6: GOTO step 6 ELSE swap the contents of R and S.
4 L ← R (this first step is redundant, but is useful for later discussion).
5 R ← S
6 S ← L
E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
7 IF S > R THEN done measuring so GOTO 10 ELSE measure again,
8 R ← R − S
9 [Remainder-loop]: GOTO 7.
E2: [Is the remainder 0?]: EITHER (i) the last measure was exact, the remainder in R is 0, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than the measuring number in S.
10 IF R = 0 THEN done so GOTO step 15 ELSE continue to step 11,
E3: [Interchange s and r: the nut of Euclid's algorithm. Use remainder r to measure what was previously the smaller number s; L serves as a temporary location.]
11 L ← R
12 R ← S
13 S ← L
14 [Repeat the measuring process]: GOTO 7.
OUTPUT:
15 [Done. S contains the greatest common divisor]: PRINT S
DONE:
16 HALT, END, STOP.

"Inelegant" is a translation of Knuth's version of the algorithm with a subtraction-based remainder-loop replacing his use of division (or a "modulus" instruction). Derived from Knuth 1973:2-4. Depending on the two numbers, "Inelegant" may compute the g.c.d. in fewer steps than "Elegant".
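As a sketch (ours), the numbered steps above fold into a structured program once the GOTOs become loops; the variable names follow the listing's locations R and S.

```python
def gcd_inelegant(l, s):
    """Subtraction-based remainder, following the structure of "Inelegant"."""
    R, S = l, s
    # E0: ensure the smaller number is in S, the larger in R
    if R < S:
        R, S = S, R
    while True:
        # E1: find the remainder by repeated subtraction
        while S <= R:
            R = R - S
        # E2: is the remainder 0?
        if R == 0:
            return S
        # E3: interchange s and r, the nut of Euclid's algorithm
        R, S = S, R

print(gcd_inelegant(3009, 884))  # 17
```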

An elegant program for Euclid's algorithm

The following version of Euclid's algorithm requires only 6 core instructions to do what 13 are required to do by "Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language the steps are numbered, and the instruction LET [ ] = [ ] is the assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor
6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≥ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.
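For comparison, a sketch (ours) of the Basic program with the two co-loops expressed as one structured loop; the Basic line numbers appear as comments.

```python
def gcd_elegant(a, b):
    """The two "co-loops" of "Elegant", lines 20-70 of the Basic program."""
    while b != 0:        # 20 IF B=0 THEN GOTO 80
        if a > b:        # 30 IF A > B THEN GOTO 60
            a = a - b    # 60 LET A=A-B
        else:
            b = b - a    # 40 LET B=B-A
    return a             # 80 PRINT A

print(gcd_elegant(40902, 24140))  # 34
```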


Testing the Euclid algorithms

Does an algorithm do what its author wants it to do? A few test cases usually suffice to confirm core functionality. One source[53] uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950.

But exceptional cases must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all.) What happens when one number is zero, or both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 rocket failure.

Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm"[54]. Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof[55].
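The test cases above can be checked mechanically. The sketch below (ours) runs a subtraction-based gcd against Python's library gcd on the article's three test pairs, with an explicit guard for the zero inputs on which the unguarded programs loop forever.

```python
from math import gcd as reference_gcd

def gcd_subtract(a, b):
    """Subtraction-based Euclid, guarded against non-positive inputs."""
    if a <= 0 or b <= 0:
        raise ValueError("inputs must be positive integers")
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

# the article's test cases
for a, b in [(3009, 884), (40902, 24140), (14157, 5950)]:
    assert gcd_subtract(a, b) == reference_gcd(a, b)

print(gcd_subtract(3009, 884),      # 17
      gcd_subtract(40902, 24140),   # 34
      gcd_subtract(14157, 5950))    # 1, the relatively prime pair
```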

Measuring and improving the Euclid algorithms

Elegance (compactness) versus goodness (speed) : With only 6 core instructions, "Elegant" is the clear winner compared to "Inelegant" at 13 instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis[56] indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.

Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective" -- that is, it computes the function intended by its author -- then the question becomes, can it be improved?

The compactness of "Inelegant" can be improved by the elimination of five steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm[57]; rather, it can only be done heuristically, i.e. by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from 13 to 8, which makes it "more elegant" than "Elegant", at 9 steps.

The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B=0?, A=0?, GOTO). Now "Elegant" computes the example-numbers faster; whether for any given A, B and R, S this is always the case would require a detailed analysis.


Algorithmic analysis

It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, the sorting algorithm above has a time requirement of O(n), using the big O notation with n as the length of the list. At all times the algorithm only needs to remember two values: the largest number found so far, and its current position in the input list. Therefore it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm will usually outperform a brute force sequential search when used for table lookups on sorted lists.
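The table-lookup claim above can be illustrated with a sketch (ours) of both searches; the function names are our own, and the binary search leans on the standard-library bisect module.

```python
from bisect import bisect_left

def linear_search(xs, target):
    """Brute-force sequential search: O(n) comparisons."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """Binary search on a sorted list: O(log n) comparisons,
    halving the candidate range at every step."""
    i = bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

xs = list(range(0, 2000, 2))    # sorted list: 0, 2, 4, ..., 1998
print(linear_search(xs, 1998))  # 999, after examining every element
print(binary_search(xs, 1998))  # 999, after about ten halvings
```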

Formal versus empirical

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, ultimately, most algorithms are usually implemented on particular hardware / software platforms and their algorithmic efficiency is eventually put to the test using real code. Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.

Classification

There are various ways to classify algorithms, each with its own merits.

By implementation

One way to classify algorithms is by implementation means.

• Recursion or iteration: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops, and sometimes additional data structures like stacks, to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, the Towers of Hanoi problem is well understood in its recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
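The recursive/iterative equivalence can be sketched with the Towers of Hanoi example mentioned above (the encoding is ours): in the iterative version the recursion is replaced by an explicit stack of pending subproblems.

```python
def hanoi_recursive(n, src, aux, dst, moves):
    """Natural recursive statement: move n-1 discs aside,
    move the largest disc, move the n-1 discs back on top."""
    if n == 0:
        return
    hanoi_recursive(n - 1, src, dst, aux, moves)
    moves.append((src, dst))
    hanoi_recursive(n - 1, aux, src, dst, moves)

def hanoi_iterative(n, src, aux, dst):
    """Equivalent iterative version: pending subproblems live on an
    explicit stack instead of the call stack."""
    moves = []
    stack = [("solve", n, src, aux, dst)]
    while stack:
        tag, *frame = stack.pop()
        if tag == "move":
            moves.append(tuple(frame))
        else:
            n, src, aux, dst = frame
            if n > 0:
                # pushed in reverse so they pop in the recursive order
                stack.append(("solve", n - 1, aux, src, dst))
                stack.append(("move", src, dst))
                stack.append(("solve", n - 1, src, dst, aux))
    return moves

moves = []
hanoi_recursive(3, "A", "B", "C", moves)
print(len(moves), moves == hanoi_iterative(3, "A", "B", "C"))  # 7 True
```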

• Logical: An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control.[58] The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms has a well-defined change in the algorithm.

• Serial or parallel or distributed: Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a network. Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms, and are called inherently serial problems.

• Deterministic or non-deterministic: Deterministic algorithms solve the problem with an exact decision at every step of the algorithm, whereas non-deterministic algorithms solve problems via guessing, although typical guesses are made more accurate through the use of heuristics.

• Exact or approximate: While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Approximation may use either a deterministic or a random strategy. Such algorithms have practical value for many hard problems.

• Quantum algorithms: Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.


By design paradigm

Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the others, and each of these categories includes many different types of algorithms. Some commonly found paradigms include:

• Brute-force or exhaustive search. This is the naïve method of trying every possible solution to see which is best.[59]

• Divide and conquer. A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sort: sorting can be done on each segment of data after dividing data into segments, and sorting of the entire data can be obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.

• Dynamic programming. When a problem shows optimal substructure, meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems, and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in the caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.

• The greedy method. A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the subproblems do not have to be known at each stage; instead a "greedy" choice can be made of what looks best for the moment. The greedy method extends the solution with the best possible decision (not all feasible decisions) at an algorithmic stage, based on the current local optimum and the best decision (not all possible decisions) made in a previous stage. It is not exhaustive, and does not give an accurate answer to many problems. But when it works, it is the fastest method. Popular greedy algorithms include Huffman coding and the minimal-spanning-tree algorithms of Kruskal, Prim, and Sollin.

• Linear programming. When solving a problem using linear programming, specific inequalities involving the inputs are found and then an attempt is made to maximize (or minimize) some linear function of the inputs. Many problems (such as the maximum flow for directed graphs) can be stated in a linear programming way, and then be solved by a "generic" algorithm such as the simplex algorithm. A more complex variant of linear programming is called integer programming, where the solution space is restricted to the integers.

• Reduction. This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.

• Search and enumeration. Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.

• Randomized algorithms make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proven that the fastest solutions must involve some randomness. There are two large classes of such algorithms:
1. Monte Carlo algorithms return a correct answer with high probability (e.g. RP is the subclass of these that run in polynomial time).
2. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded (e.g. ZPP).

• Heuristic algorithms: In optimization problems, heuristic algorithms do not try to find an optimal solution, but an approximate solution, for use when the time or resources needed to find a perfect solution are impractical. Examples are local search, tabu search, and simulated annealing, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgic term for the heating and cooling of metal to achieve freedom from defects. The purpose of the random variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that the random element is decreased as the algorithm settles down to a solution.
Approximation algorithms are those heuristic algorithms that additionally provide some bounds on the error. Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions". Thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended to algorithms, by regarding the algorithm itself as a "solution" to a problem.
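
The contrast drawn above between straightforward recursion and dynamic programming can be sketched in a few lines (a minimal illustration of memoization, not an example from the text; the function names are our own):

```python
from functools import lru_cache

def fib_naive(n):
    # Straightforward recursion: overlapping subproblems are recomputed,
    # so the number of calls grows exponentially with n.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # The same recursion, but each subproblem is solved once and cached
    # (memoization), reducing the work to a linear number of calls.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

On a call such as `fib_naive(30)` the naive version makes over a million recursive calls, while `fib_memo(30)` solves each of the 31 subproblems exactly once.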


By field of study

Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques. Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry, but is now used in solving a broad range of problems in many fields.

By complexity

Algorithms can be classified by the amount of time they need to complete compared to their input size. There is a wide variety: some algorithms complete in linear time relative to input size, some do so in an exponential amount of time or even worse, and some never halt. Additionally, some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. For this reason, it has proved more suitable to classify the problems themselves, rather than the algorithms, into equivalence classes based on the complexity of the best possible algorithms for them. Burgin (2005, p. 24) uses a generalized definition of algorithms that relaxes the common requirement that the output of an algorithm computing a function must be determined after a finite number of steps. He defines a super-recursive class of algorithms as "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005, p. 107). This is closely related to the study of methods of hypercomputation.
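
The point that one problem may admit algorithms of differing complexity can be sketched with searching (our own illustration; the second function assumes a sorted input):

```python
def linear_search(xs, target):
    # O(n): in the worst case every element is examined,
    # so the time grows linearly with the input size.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # O(log n) on a sorted list: each step halves the remaining
    # range, so the same problem is solved far faster.
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1
```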

Continuous algorithms

The adjective "continuous" when applied to the word "algorithm" can mean: 1. An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations – such algorithms are studied in numerical analysis; or 2. An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.[60]

Legal issues

See also: Software patents for a general overview of the patentability of software, including computer-implemented algorithms. Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).


Etymology

The word "Algorithm" or "Algorism" in some other writing versions, comes from the name Al-Khwārizmī (c. 780-850), a Persian mathematician, astronomer, geographer and a scholar in the House of Wisdom in Baghdad, whose name means "the native of Kharazm", a city that was part of the Greater Iran during his era and now is in modern day Uzbekistan[61] [62] [63] He wrote a treatise in Arabic language in the 9th century, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.[64] Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra.[65] In late medieval Latin, algorismus, the corruption of his name, simply meant the "decimal number system" that is still the meaning of modern English algorism. In 17th century French the word's form, but not its meaning, changed to algorithme. English adopted the French very soon afterwards, but it wasn't until the late 19th century that "Algorithm" took on the meaning that it has in modern English.[66]

History: Development of the notion of "algorithm"

Discrete and distinguishable symbols

Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks, or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.

Manipulation of symbols as "place holders" for numbers: algebra

The work of the ancient Greek geometers (Euclidean algorithm), Persian mathematician Al-Khwarizmi (from whose name the terms "algorism" and "algorithm" are derived), and Western European mathematicians culminated in Leibniz's notion of the calculus ratiocinator (ca 1680): A good century and a half ahead of his time, Leibniz proposed an algebra of logic, an algebra that would specify the rules for manipulating logical concepts in the manner that ordinary algebra specifies the rules for manipulating numbers.[67]

Mechanical contrivances with discrete states

The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular the verge escapement[68] that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine"[69] led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines" – the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace.[70]
Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what are now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically . . . More recently however I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine". His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc] . . .". With this machine he could analyze a "syllogism or any other simple logical argument".[71]

This machine he displayed in 1870 before the Fellows of the Royal Society.[72] Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".[73]
Jacquard loom, Hollerith punch cards, telegraphy and telephony—the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers.[74] By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the Teletype (ca. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".[75] Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed): It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned.[76]


Mathematics during the 1800s up to the mid-1900s

Symbols and rules: In rapid succession the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language".[77] But Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a " 'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules".[78] The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913). The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox.[79] The resultant considerations led to Kurt Gödel's paper (1931) — he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers. Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus[80] a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. 
Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene.[81] Church's proof[82] that the Entscheidungsproblem was unsolvable, Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no

decision about the next instruction.[83] Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine"[84] – in effect almost identical to Post's "formulation", J. Barkley Rosser's definition of "effective method" in terms of "a machine".[85] S. C. Kleene's proposal of a precursor to "Church thesis" that he called "Thesis I",[86] and a few years later Kleene's renaming his Thesis "Church's Thesis"[87] and proposing "Turing's Thesis".[88]


Emil Post (1936) and Alan Turing (1936–7, 1939)

Here is a remarkable coincidence of two men not knowing each other but describing a process of men-as-computers working on computations—and they yield virtually identical definitions. Emil Post (1936) described the actions of a "computer" (human being) as follows: "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions. His symbol space would be "a two way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke. "One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes.... "A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process will terminate only when it comes to the direction of type (C ) [i.e., STOP]".[89] See more at Post–Turing machine Alan Turing's work[90] preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'".[91] Given the prevalence of Morse code and telegraphy, ticker tape machines, and Teletypes we might conjecture that all were influences. 
Turing—his model of computation is now called a Turing machine — begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers.[92] "Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book....I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite.... "The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite... "Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided".[93] Turing's reduction yields the following: "The simple operations must therefore include: "(a) Changes of the symbol on one of the observed squares

Algorithm "(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares. "It may be that some of these change necessarily invoke a change of state of mind. The most general single operation must therefore be taken to be one of the following: "(A) A possible change (a) of symbol together with a possible change of state of mind. "(B) A possible change (b) of observed squares, together with a possible change of state of mind" "We may now construct a machine to do the work of this computer".[93] A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it: "A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Although it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematical expressible definition . . . [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing and Post] . . . We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability . . . . "† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions".[94]


J. B. Rosser (1939) and S. C. Kleene (1943)

J. Barkley Rosser boldly defined an 'effective [mathematical] method' in the following manner (boldface added): "'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–6) Rosser's footnote #5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–7) in their mechanism-models of computation. Stephen C. Kleene defined his now-famous "Thesis I", known as the Church–Turing thesis. But he did this in the following context (boldface in original): "12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273)


History after 1950

A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments around artificial intelligence). For more, see Algorithm characterizations.

Notes

[1] "Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2). [2] Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1987:2). [3] "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) . . . this limitation (to numerical functions) results in no loss of generality", (Rogers 1987:1). [4] "An algorithm has zero or more inputs, i.e., [[|quantity|quantities]] which are given to it initially before the algorithm begins" (Knuth 1973:5). [5] "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5). [6] "An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5). [7] Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a computation is carried out in a discrete stepwise fashion, without use of continuous methods or analogue devices . . . carried forward deterministically, without resort to random methods or devices, e.g., dice" Rogers 1987:2). [8] Kleene 1943 in Davis 1965:274 [9] Rosser 1939 in Davis 1965:225 [10] Stone 1973:4 [11] Stone simply requires that "it must terminate in a finite number of steps" (Stone 1973:7-8). [12] Boolos and Jeffrey 1974,1999:19 [13] cf Stone 1972:5 [14] Knuth 1973:7 states: "In practice we not only want algorithms, we want good agorithms ... one criterion of goodness is the length of time taken to perform the algorithm ... other criteria are the adaptability of the algorithm to computers, its simplicty and elegance, etc." [15] cf Stone 1973:6 [16] Stone 1973:7-8 states that there must be "a procedure that a robot [i.e. 
computer] can follow in order to determine precisely how to obey the instruction". Stone adds finiteness of the process, and definiteness (having no ambiguity in the instructions) to this definition. [17] Knuth, loc. cit [18] Minsky 1967:105 [19] Gurevich 2000:1, 3 [20] Kleene 1952:136 [21] Knuth 1997:5 [22] Boldface added, Kleene 1952:137 [23] Kleene 1952:325 [24] Davis 1958:12–15 [25] Kleene 1952:332 [26] Minsky 1967:186 [27] Sipser 2006:157 [28] Knuth 1973:7 [29] Chaitin 2005:32 [30] Rogers 1987:1-2 [31] In his essay "Calculations by Man and Machine: Conceptual Analysis" Seig 2002:390 credits this distinction to Robin Gandy, cf Wilfred Seig, et al., 2002 Reflections on the foundations of mathematics: Essays in honor of Solomon Feferman, Association for Symbolic Logic, A. K Peters Ltd, Natick, MA. [32] cf Gandy 1980:126, Robin Gandy Church's Thesis and Principles for Mechanisms appearing on pp. 123-148 in J. Barwise et al. 1980 The Kleene Symposium, North-Holland Publishing Company. [33] A "robot": "A computer is a robot that will perform any task that can be described as a sequence of instructions" cf Stone 1972:3 [34] Lambek's "abacus" is a "countably infinite number of locations (holes, wires etc.) together with an unlimited supply of counters (pebbles, beads, etc). The locations are distinguishable, the counters are not". The holes have unlimited capacity, and standing by is an agent who understands and is able to carry out the list of instructions (Lambek 1961:295). Lambek references Melzak who defines his Q-machine as "an indefinitely large number of locations . . . an indefinitely large supply of counters distributed among these locations, a program, and an operator whose sole purpose is to carry out the program" (Melzak 1961:283). B-B-J (loc. cit.) add the stipulation that the holes are "capable of holding any number of stones" (p. 46). Both Melzak and Lambek appear in The Canadian Mathematical Bulletin, vol. 4, no. 3, September


1961. [35] If no confusion will result, the word "counters" can be dropped, and a location can be said to contain a single "number". [36] "We say that an instruction is effective if there is a procedure that the robot can follow in order to determine precisely how to obey the instruction" (Stone 1972:6) [37] cf Minsky 1967: Chapter 11 "Computer models" and Chapter 14 "Very Simple Bases for Computability" pp. 255-281 in particular [38] cf Knuth 1973:3. [39] But always preceded by IF-THEN to avoid improper subtraction. [40] However, a few different assignment instructions (e.g. DECREMENT, INCREMENT and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx will be unconditional. [41] Knuth 1973:4 [42] Stone 1972:5. Methods for extracting roots are not trivial: see Methods of computing square roots. [43] Cf page 875 in Peter van Emde Boas's "Machine Models and Simulation" in Jan van Leeuwen ed., 1990, "Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity", The MIT Press/Elsevier, ISBN 0-444-88071-2 (Volume A). [44] John G. Kemeny and Thomas E. Kurtz 1985 Back to Basic: The History, Corruption, and Future of the Language, Addison-Wesley Publishing Company, Inc. Reading, MA, ISBN 0-201-13433-0. [45] Tausworthe 1977:101 [46] Tausworthe 1977:142 [47] Knuth 1973 section 1.2.1, expanded by Tausworthe 1977 at pages 100ff and Chapter 9.1 [48] cf Tausworthe 1977 [49] Heath 1908:300; Hawking's Dover 2005 edition derives from Heath. [50] " 'Let CD, measuring BF, leave FA less than itself.'
This is a neat abbreviation for saying, measure along BA successive lengths equal to CD until a point F is reached such that the length FA remaining is less than CD; in other words, let BF be the largest exact multiple of CD contained in BA" (Heath 1908:297) [51] For modern treatments using division in the algorithm see Hardy and Wright 1979:180, Knuth 1973:2 (Volume 1), plus more discussion of Euclid's algorithm in Knuth 1969:293-297 (Volume 2). [52] Euclid covers this question in his Proposition 1. [53] http:/ / aleph0. clarku. edu/ ~djoyce/ java/ elements/ bookVII/ propVII2. html [54] Knuth 1973:13-18. He credits "the formulation of algorithm-proving in terms of assertions and induction" to R. W. Floyd, Peter Naur, C. A. R. Hoare, H. H. Goldstine and J. von Neumann. Tausworthe 1977 borrows Knuth's Euclid example and extends Knuth's method in section 9.1 Formal Proofs (pages 288-298). [55] Tausworthe 1977:294 [56] cf Knuth 1973:7 (Vol. I), and his more-detailed analyses on pp. 1969:294-313 (Vol II). [57] Breakdown occurs when an algorithm tries to compact itself. Success would solve the Halting problem. [58] Kowalski 1979 [59] Sue Carroll, Taz Daughtrey (2007-07-04). Fundamental Concepts for the Software Quality Engineer (http:/ / books. google. com/ ?id=bz_cl3B05IcC& pg=PA282). pp. 282 et seq. ISBN 9780873897204. [60] Adaptation and learning in automatic systems (http:/ / books. google. com/ books?id=sgDHJlafMskC), page 54, Ya. Z. Tsypkin, Z. J. Nikolic, Academic Press, 1971, ISBN 978-0-12-702050-1 [61] Toomer 1990 [62] Hogendijk, Jan P. (1998). "al-Khwarzimi" (http:/ / www. kennislink. nl/ web/ show?id=116543). Pythagoras 38 (2): 4–5. ISSN 0033-4766. [63] Oaks, Jeffrey A. "Was al-Khwarizmi an applied algebraist?" (http:/ / facstaff. uindy. edu/ ~oaks/ MHMC. htm). University of Indianapolis. Retrieved 2008-05-30. [64] Al-Khwarizmi: The Inventor of Algebra (http:/ / books. google. co.
uk/ books?id=3Sfrxde0CXIC& printsec=frontcover& source=gbs_ge_summary_r& cad=0#v=onepage& q& f=false), by Corona Brezina (2006) [65] Foremost mathematical texts in history (http:/ / www-history. mcs. st-and. ac. uk/ Extras/ Boyer_Foremost_Text. html), according to Carl B. Boyer. [66] Etymology of algorithm at Dictionary.Reference.com (http:/ / dictionary. reference. com/ browse/ algorithm) [67] Davis 2000:18 [68] Bolter 1984:24 [69] Bolter 1984:26 [70] Bolter 1984:33–34, 204–206 [71] All quotes from W. Stanley Jevons 1880 Elementary Lessons in Logic: Deductive and Inductive, Macmillan and Co., London and New York. Republished as a googlebook; cf Jevons 1880:199–201. Louis Couturat 1914 the Algebra of Logic, The Open Court Publishing Company, Chicago and London. Republished as a googlebook; cf Couturat 1914:75–76 gives a few more details; interestingly he compares this to a typewriter as well as a piano. Jevons states that the account is to be found at Jan. 20, 1870 The Proceedings of the Royal Society. [72] Jevons 1880:199–200


[73] All quotes from John Venn 1881 Symbolic Logic, Macmillan and Co., London. Republished as a googlebook. cf Venn 1881:120–125. The interested reader can find a deeper explanation in those pages. [74] Bell and Newell diagram 1971:39, cf. Davis 2000 [75] Melina Hill, Valley News Correspondent, A Tinkerer Gets a Place in History, Valley News West Lebanon NH, Thursday March 31, 1983, page 13. [76] Davis 2000:14 [77] van Heijenoort 1967:81ff [78] van Heijenoort's commentary on Frege's Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought in van Heijenoort 1967:1 [79] Dixon 1906, cf. Kleene 1952:36–40 [80] cf. footnote in Alonzo Church 1936a in Davis 1965:90 and 1936b in Davis 1965:110 [81] Kleene 1935–6 in Davis 1965:237ff, Kleene 1943 in Davis 1965:255ff [82] Church 1936 in Davis 1965:88ff [83] cf. "Formulation I", Post 1936 in Davis 1965:289–290 [84] Turing 1936–7 in Davis 1965:116ff [85] Rosser 1939 in Davis 1965:226 [86] Kleene 1943 in Davis 1965:273–274 [87] Kleene 1952:300, 317 [88] Kleene 1952:376 [89] Turing 1936–7 in Davis 1965:289–290 [90] Turing 1936 in Davis 1965, Turing 1939 in Davis 1965:160 [91] Hodges, p. 96 [92] Turing 1936–7:116 [93] Turing 1936–7 in Davis 1965:136 [94] Turing 1939 in Davis 1965:160


References

• Axt, P. (1959) On a Subrecursive Hierarchy and Primitive Recursive Degrees, Transactions of the American Mathematical Society 92, pp. 85–105
• Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. ISBN 0-07-004357-4.
• Blass, Andreas; Gurevich, Yuri (2003). "Algorithms: A Quest for Absolute Definitions" (http://research.microsoft.com/~gurevich/Opera/164.pdf). Bulletin of European Association for Theoretical Computer Science 81. Includes an excellent bibliography of 56 references.
• Boolos, George; Jeffrey, Richard (1974, 1980, 1989, 1999). Computability and Logic (4th ed.). Cambridge University Press, London. ISBN 0-521-20402-X. cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
• Burgin, M. Super-recursive algorithms, Monographs in computer science, Springer, 2005. ISBN 0-387-95569-0
• Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
• Church, Alonzo (1936a). "An Unsolvable Problem of Elementary Number Theory" (http://jstor.org/stable/2371045). The American Journal of Mathematics 58 (2): 345–363. doi:10.2307/2371045. Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
• Church, Alonzo (1936b). "A Note on the Entscheidungsproblem" (http://jstor.org/stable/2269326). The Journal of Symbolic Logic 1 (1): 40–41. doi:10.2307/2269326.
• Church, Alonzo (1936). "Correction to a Note on the Entscheidungsproblem" (http://jstor.org/stable/2269030). The Journal of Symbolic Logic 1 (3): 101–102. doi:10.2307/2269030. Reprinted in The Undecidable, p. 110ff.
Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes. • Daffa', Ali Abdullah al- (1977). The Muslim contribution to mathematics. London: Croom Helm. ISBN 0-85664-464-1.

• Davis, Martin (1965). The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and Computable Functions. New York: Raven Press. ISBN 0486432289. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name.
• Davis, Martin (2000). Engines of Logic: Mathematicians and the Origin of the Computer. New York: W. W. Norton. ISBN 0393322297. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
• Paul E. Black, algorithm (http://www.nist.gov/dads/HTML/algorithm.html) at the NIST Dictionary of Algorithms and Data Structures.
• Dennett, Daniel (1995). Darwin's Dangerous Idea. New York: Touchstone/Simon & Schuster. ISBN 0684802902.
• Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms (http://research.microsoft.com/~gurevich/Opera/141.pdf), ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pages 77–111. Includes bibliography of 33 sources.
• Kleene, Stephen C. (1936). "General Recursive Functions of Natural Numbers". Mathematische Annalen 112 (5): 727–742. doi:10.1007/BF01565439. Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
• Kleene, Stephen C. (1943). "Recursive Predicates and Quantifiers" (http://jstor.org/stable/1990131). American Mathematical Society Transactions 54 (1): 41–73. doi:10.2307/1990131. Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis" (Kleene 1952:317) (i.e., the Church thesis).
• Kleene, Stephen C. (First Edition 1952). Introduction to Metamathematics (Tenth Edition 1991 ed.). North-Holland Publishing Company. ISBN 0720421039. Excellent (accessible, readable) reference source for mathematical "foundations".
• Knuth, Donald (1997). Fundamental Algorithms, Third Edition. Reading, Massachusetts: Addison–Wesley. ISBN 0201896834.
• Knuth, Donald (1969). Volume 2/Seminumerical Algorithms, The Art of Computer Programming First Edition. Reading, Massachusetts: Addison–Wesley.
• Kosovsky, N. K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981
• Kowalski, Robert (1979). "Algorithm=Logic+Control". Communications of the ACM 22 (7): 424–436. doi:10.1145/359131.359136. ISSN 0001-0782.
• A. A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algerifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]
• Minsky, Marvin (1967). Computation: Finite and Infinite Machines (First ed.). Prentice-Hall, Englewood Cliffs, NJ. ISBN 0131654497. Minsky expands his "...idea of an algorithm—an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
• Post, Emil (1936). "Finite Combinatory Processes, Formulation I" (http://jstor.org/stable/2269031). The Journal of Symbolic Logic 1 (3): 103–105. doi:10.2307/2269031. Reprinted in The Undecidable, p. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
• Rogers, Jr, Hartley (1987). Theory of Recursive Functions and Effective Computability. The MIT Press. ISBN 0-262-68052-1 (pbk.).
• Rosser, J.B. (1939). "An Informal Exposition of Proofs of Godel's Theorem and Church's Theorem". Journal of Symbolic Logic 4. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable)
• Scott, Michael L. (2009). Programming Language Pragmatics, Third Edition. Morgan Kaufmann Publishers: Elsevier. ISBN-13: 978-0-12-274514-9.
• Sipser, Michael (2006). Introduction to the Theory of Computation. PWS Publishing Company. ISBN 053494728X.
• Stone, Harold S. (1972). Introduction to Computer Organization and Data Structures (1972 ed.). McGraw-Hill, New York. ISBN 0070617260. Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
• Tausworthe, Robert C (1977). Standardized Development of Computer Software Part 1 Methods. Englewood Cliffs NJ: Prentice-Hall, Inc. ISBN 0-13-842195-1.
• Turing, Alan M. (1936–7). "On Computable Numbers, With An Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, Series 2 42: 230–265. doi:10.1112/plms/s2-42.1.230. Corrections, ibid, vol. 43 (1937) pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK.
• Turing, Alan M. (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society, Series 2 45: 161–228. doi:10.1112/plms/s2-45.1.161. Reprinted in The Undecidable, p. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton USA.
• United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms - 2100 Patentability (http://www.uspto.gov/web/offices/pac/mpep/documents/2100_2106_02.htm), Manual of Patent Examining Procedure (MPEP). Latest revision August 2006

Secondary references

• Bolter, David J. (1984). Turing's Man: Western Culture in the Computer Age (1984 ed.). The University of North Carolina Press, Chapel Hill NC. ISBN 0807815640, ISBN 0-8078-4108-0 (pbk.)
• Dilson, Jesse (2007). The Abacus ((1968, 1994) ed.). St. Martin's Press, NY. ISBN 031210409X, ISBN 0-312-10409-X (pbk.)
• van Heijenoort, Jean (2001). From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931 ((1967) ed.). Harvard University Press, Cambridge, MA. ISBN 0674324498, 3rd edition 1976[?], ISBN 0-674-32449-8 (pbk.)
• Hodges, Andrew (1983). Alan Turing: The Enigma ((1983) ed.). Simon and Schuster, New York. ISBN 0671492071, ISBN 0-671-49207-1. Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.


Further reading

• Jean-Luc Chabert, Évelyne Barbin, A history of algorithms: from the pebble to the microchip, Springer, 1999, ISBN 3-540-63369-3
• David Harel, Yishai A. Feldman, Algorithmics: the spirit of computing, Edition 3, Pearson Education, 2004, ISBN 0-321-11784-0
• Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms (http://www-cs-faculty.stanford.edu/~uno/aa.html). Stanford, California: Center for the Study of Language and Information.
• Knuth, Donald E. (2010). Selected Papers on Design of Algorithms (http://www-cs-faculty.stanford.edu/~uno/da.html). Stanford, California: Center for the Study of Language and Information.

External links

• Algorithms (http://www.dmoz.org/Computers/Algorithms//) at the Open Directory Project
• Weisstein, Eric W., "Algorithm (http://mathworld.wolfram.com/Algorithm.html)" from MathWorld.
• Dictionary of Algorithms and Data Structures (http://www.nist.gov/dads/) - National Institute of Standards and Technology

Algorithm repositories
• The Stony Brook Algorithm Repository (http://www.cs.sunysb.edu/~algorith/) - State University of New York at Stony Brook
• Library of Efficient Datastructures and Algorithms (LEDA) (http://www.algorithmic-solutions.com/) - previously from Max-Planck-Institut für Informatik
• Netlib Repository (http://www.netlib.org/) - University of Tennessee and Oak Ridge National Laboratory
• Collected Algorithms of the ACM (http://calgo.acm.org/) - Association for Computing Machinery
• The Stanford GraphBase (http://www-cs-staff.stanford.edu/~knuth/sgb.html) - Stanford University
• Combinatorica (http://www.combinatorica.com/) - University of Iowa and State University of New York at Stony Brook

Lecture notes
• Algorithms Course Materials (http://compgeom.cs.uiuc.edu/~jeffe/teaching/algorithms/). Jeff Erickson. University of Illinois.


Computer programming

Computer programming (often shortened to programming or coding) is the process of designing, writing, testing, debugging / troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The purpose of programming is to create a program that exhibits a certain desired behaviour. The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.

Definition

Hoc and Nguyen-Xuan define computer programming as "the process of transforming a mental plan in familiar terms into one compatible with the computer." [1] Said another way, programming is the craft of transforming requirements into something that a computer can execute.

Overview

Within software engineering, programming (the implementation) is regarded as one phase in a software development process.

There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline.[2] In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably).

The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." However, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world. Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined (e.g. United States Air Force use of AdaCore and security clearance).

Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir–Whorf hypothesis[3] in linguistics, which postulates that a particular spoken language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.


History

The Antikythera mechanism from ancient Greece was a calculator utilizing gears of various sizes and configuration to determine its operation,[4] which tracked the Metonic cycle still used in lunar-to-solar calendars, and which is consistent for calculating the dates of the Olympiads.[5]

Al-Jazari built programmable Automata in 1206. One system employed in these devices was the use of pegs and cams placed into a wooden drum at specific locations, which would sequentially trigger levers that in turn operated percussion instruments. The output of this device was a small drummer playing various rhythms and drum patterns.[6] [7]

Wired plug board for an IBM 402 Accounting Machine.

The Jacquard Loom, which Joseph Marie Jacquard developed in 1801, uses a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards.

Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming.

Development of computer programming accelerated through the Industrial Revolution. In the late 1880s, Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..."[8] To process these punched cards, first known as "Hollerith cards", he invented the tabulator and the keypunch machines. These three inventions were the foundation of the modern information processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM).
The addition of a control panel (plugboard) to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s, there were a variety of plug-board programmable machines, called unit record equipment, to perform data-processing tasks (card reading). Early computer programmers used plug-boards for the variety of complex calculations requested of the newly invented machines.

Data and instructions could be stored on external punched cards, which were kept in order and arranged in program decks.

The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions (elementary operations) of the particular machine, often in binary notation. Every model of computer would likely use different instructions (machine language) to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g., ADD X, TOTAL). Entering a program in assembly language is usually more convenient, faster, and less prone to human error than using machine language, but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.

In 1954, FORTRAN was invented; it was the first high level programming language to have a functional implementation, as opposed to just a design on paper.[9] [10] (A high-level language is, in very general terms, any programming language that allows the programmer to write programs in terms that are more abstract than assembly language instructions, i.e. at a level of abstraction "higher" than that of an assembly language.) It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, is converted into machine instructions using a special program called a compiler, which translates the FORTRAN program into machine language. In fact, the name FORTRAN stands for "Formula Translation". Many other languages were developed, including some for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape. (See computer programming in the punch card era.)

By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. (Usually, an error in punching a card meant that the card had to be discarded and a new one punched to replace it.)

As time has progressed, computers have made giant leaps in the area of processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these high-level languages usually incur greater overhead, the increase in speed of modern computers has made the use of these languages much more practical than in the past. These increasingly abstracted languages typically are easier to learn and allow the programmer to develop applications much more efficiently and with less source code. However, high-level languages are still impractical for a few programs, such as those where low-level hardware control is necessary or where maximum processing speed is vital.

Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities.


Modern programming

Quality requirements

Whatever the approach to software development may be, the final program must satisfy some fundamental properties. The following properties are among the most relevant:
• Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes correct disposal of some resources, such as cleaning up temporary files and lack of memory leaks.
• Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms, and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
• Robustness: how well a program anticipates problems not due to programmer error. This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services and network connections, and user error.
• Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
• Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behaviour of the hardware and operating system, and availability of platform specific compilers (and sometimes libraries) for the language of the source code.

• Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
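As a small illustration of the reliability and robustness properties above, consider a hypothetical averaging routine (the function name and checks are this sketch's own, not from any particular library): it validates its input rather than crashing, and avoids the kind of off-by-one slip listed under logic errors.

```python
def mean(values):
    """Return the arithmetic mean of a list of numbers.

    Robustness: reject inputs the routine cannot handle (empty or
    non-numeric data) with a clear error instead of a bare crash.
    """
    if not values:
        raise ValueError("mean() requires at least one value")
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("mean() requires numeric values")
    # Reliability: sum over the whole list; an off-by-one slice such as
    # values[:-1] would silently drop the last element.
    return sum(values) / len(values)

print(mean([2, 4, 6]))  # 4.0
```

The explicit `ValueError` for an empty list turns a latent division-by-zero fault into a documented, testable behaviour.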


Readability of source code

In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.

Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study[11] found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.

Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability.[12] Some of these factors include:
• Different indentation styles (whitespace)
• Comments
• Decomposition
• Naming conventions for objects (such as variables, classes, procedures, etc.)
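As a hypothetical before-and-after (not taken from the study cited above), the two functions below compute the same result; the second applies the factors listed: descriptive names, decomposition into a single clear expression, and a comment stating intent.

```python
# Hard to read: single-letter names, no statement of intent.
def f(a):
    r = 0
    for x in a:
        if x % 2 == 0:
            r += x
    return r

# Easier to read: a descriptive name and docstring make the purpose obvious.
def sum_of_even_numbers(numbers):
    """Return the sum of the even values in `numbers`."""
    return sum(n for n in numbers if n % 2 == 0)

print(f([1, 2, 3, 4]), sum_of_even_numbers([1, 2, 3, 4]))  # 6 6
```

Both run identically; only the reader's time to understand them differs.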

Algorithmic complexity

The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation, O(n), which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
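The choice the paragraph above describes can be sketched with two searches for the same value: a linear scan is O(n), while binary search on sorted data is O(log n), so the expert prefers the latter when the input is known to be sorted. (The function names here are illustrative, not from a specific library, though `bisect_left` is Python's standard binary-search helper.)

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): may examine every element before finding the target."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the search space each step; input must be sorted."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 100, 2))  # sorted even numbers 0..98
print(linear_search(data, 40), binary_search(data, 40))  # 20 20
```

For 50 elements the difference is negligible; for millions, the O(log n) routine does dozens of comparisons where the O(n) one may do millions, which is exactly what Big O notation is designed to express.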

Methodologies

The first step in most formal software development projects is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist a lot of differing approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis.

Nowadays many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process.

Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA. A similar technique used for database design is Entity-Relationship Modeling (ER Modeling). Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.


Measuring language usage

It is very difficult to determine which modern programming languages are most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes, FORTRAN in engineering applications, scripting languages in web development, and C in embedded applications), while some languages are regularly used to write many different kinds of applications. Also many applications use a mix of several languages in their construction and use.

Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language,[13] the number of books teaching the language that are sold (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).

Debugging

Debugging is a very important task in the software development process, because an incorrect program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static analysis tool can help detect some possible problems.

Debugging is often done with IDEs like Eclipse, KDevelop, NetBeans, Code::Blocks, and Visual Studio. Standalone debuggers like gdb are also used, and these often provide less of a visual environment, usually using a command line.

A bug, which was debugged in 1947.
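As a small, hypothetical illustration of the kind of fault described above, the function below handles an edge case (leap-year February) that is easy to get wrong; the targeted assertions at the bottom are the sort of minimal regression checks one writes while debugging, before reaching for an IDE or gdb.

```python
def days_in_month(month, leap_year=False):
    """Return the number of days in a month (1-12)."""
    if not 1 <= month <= 12:
        raise ValueError(f"invalid month: {month}")
    if month == 2:
        # A classic bug is forgetting this branch and returning 28 always.
        return 29 if leap_year else 28
    if month in (4, 6, 9, 11):
        return 30
    return 31

# Minimal checks that pin down the edge cases while debugging:
assert days_in_month(2, leap_year=True) == 29
assert days_in_month(2) == 28
assert days_in_month(9) == 30
print("all checks passed")  # prints "all checks passed"
```

A static analysis tool or debugger helps locate such faults; small executable checks like these keep them from returning.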

Programming languages

Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute.

Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.

Allen Downey, in his book How To Think Like A Computer Scientist, writes:

The details look different in different languages, but a few basic instructions appear in just about every language:
• input: Get data from the keyboard, a file, or some other device.
• output: Display data on the screen or send data to a file or other device.
• arithmetic: Perform basic arithmetical operations like addition and multiplication.
• conditional execution: Check for certain conditions and execute the appropriate sequence of statements.

• repetition: Perform some action repeatedly, usually with some variation.
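Downey's five basic instructions can be sketched in a few lines of Python; to keep the sketch self-contained, keyboard input is replaced by a literal list standing in for data from a keyboard or file.

```python
numbers = [3, 7, 2, 8]           # input: data that might come from a keyboard or file

total = 0
for n in numbers:                # repetition: act on each value in turn
    if n % 2 == 0:               # conditional execution: only even values qualify
        total = total + n * n    # arithmetic: addition and multiplication

print("sum of squared evens:", total)   # output: display the result on the screen
```

Each comment labels which of the five instructions the line demonstrates; the same skeleton could be written in almost any language.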

Many computer languages provide a mechanism to call functions provided by libraries such as in .dlls. Provided the functions in a library follow the appropriate run time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
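For example, Python's standard `ctypes` module can call into a shared library written in C, provided both sides agree on the calling convention and argument types. This sketch assumes a Unix-like system where the C math library can be located (on Windows the equivalent would be loading a .dll).

```python
import ctypes
import ctypes.util

# Load the C math library; fall back to the common Linux soname if
# find_library cannot resolve it on this system.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the calling convention: sqrt takes and returns a C double.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

The `argtypes`/`restype` declarations are exactly the "run time conventions" the text mentions: without them, the caller and the library would disagree about how arguments are passed.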


Programmers

Computer programmers are those who write computer software. Their jobs usually involve:
• Coding
• Compilation
• Debugging
• Documentation
• Integration
• Maintenance
• Requirements analysis
• Software architecture
• Software testing
• Specification

References

[1] Hoc, J.-M. and Nguyen-Xuan, A. Language semantics, mental models and analogy. J.-M. Hoc et al., Eds. Psychology of Programming. Academic Press. London, 1990, 139–156, cited through Brad A. Myers, John F. Pane, Andy Ko, Natural programming languages and environments, Communications of the ACM, v.47 n.9, September 2004 (http://dx.doi.org/10.1145/1015864.1015888)
[2] Paul Graham (2003). Hackers and Painters (http://www.paulgraham.com/hp.html). Retrieved 2006-08-22.
[3] Kenneth E. Iverson, the originator of the APL programming language, believed that the Sapir–Whorf hypothesis applied to computer languages (without actually mentioning the hypothesis by name). His Turing award lecture, "Notation as a tool of thought", was devoted to this theme, arguing that more powerful notations aided thinking about computer algorithms. Iverson K.E., "Notation as a tool of thought (http://elliscave.com/APL_J/tool.pdf)", Communications of the ACM, 23: 444–465 (August 1980).
[4] "Ancient Greek Computer's Inner Workings Deciphered (http://news.nationalgeographic.com/news/2006/11/061129-ancient-greece.html)". National Geographic News. November 29, 2006.
[5] Freeth, Tony; Jones, Alexander; Steele, John M.; Bitsakis, Yanis (July 31, 2008). "Calendars with Olympiad display and eclipse prediction on the Antikythera Mechanism" (http://www.nature.com/nature/journal/v454/n7204/full/nature07130.html). Nature 454 (7204): 614–617. doi:10.1038/nature07130. PMID 18668103.
[6] A 13th Century Programmable Robot (http://www.shef.ac.uk/marcoms/eview/articles58/robot.html), University of Sheffield
[7] Fowler, Charles B. (October 1967). "The Museum of Music: A History of Mechanical Instruments" (http://jstor.org/stable/3391092). Music Educators Journal (Music Educators Journal, Vol. 54, No. 2) 54 (2): 45–49. doi:10.2307/3391092.
[8] "Columbia University Computing History - Herman Hollerith" (http://www.columbia.edu/acis/history/hollerith.html). Columbia.edu. Retrieved 2010-04-25.
[9] "Fortran creator John Backus dies - Tech and gadgets - msnbc.com" (http://www.msnbc.msn.com/id/17704662/). MSNBC, 2007-03-20. Retrieved 2010-04-25.
[10] "CSC-302 99S : Class 02: A Brief History of Programming Languages" (http://www.math.grin.edu/~rebelsky/Courses/CS302/99S/Outlines/outline.02.html). Math.grin.edu. Retrieved 2010-04-25.
[11] James L. Elshoff, Michael Marcotty, Improving computer program readability to aid modification (http://doi.acm.org/10.1145/358589.358596), Communications of the ACM, v.25 n.8, p.512–521, Aug 1982.
[12] Multiple (wiki). "Readability" (http://docforge.com/wiki/Readability). Docforge. Retrieved 2010-01-30.
[13] Survey of Job advertisements mentioning a given language (http://www.computerweekly.com/Articles/2007/09/11/226631/sslcomputer-weekly-it-salary-survey-finance-boom-drives-it-job.htm)


Further reading

• Weinberg, Gerald M., The Psychology of Computer Programming, New York: Van Nostrand Reinhold, 1971

External links

• Programming Wikia (http://programming.wikia.com/wiki/Main_Page)
• How to Think Like a Computer Scientist (http://openbookproject.net/thinkCSpy) - by Jeffrey Elkner, Allen B. Downey and Chris Meyers


Fortran

Fortran

The Fortran Automatic Coding System for the IBM 704 (October 15, 1956), the first Programmer's Reference Manual for Fortran
Paradigm: multi-paradigm: imperative (procedural), structured, object-oriented, generic
Appeared in: 1957
Designed by: John Backus
Developer: John Backus & IBM
Stable release: Fortran 2008 (ISO/IEC 1539-1:2010) (2010)
Typing discipline: strong, static, manifest
Major implementations: Absoft, Cray, GFortran, G95, IBM, Intel, Lahey/Fujitsu, Open Watcom, Pathscale, PGI, Silverfrost, Oracle, XL Fortran, Visual Fortran, others
Influenced by: Speedcoding
Influenced: ALGOL 58, BASIC, C, PL/I, PACT I, MUMPS, Ratfor
Usual file extensions: .f, .for, .f90, .f95

Fortran (previously FORTRAN;[1] both blends derived from IBM Mathematical Formula Translating System) is a general-purpose,[2] procedural,[3] imperative programming language that is especially suited to numeric computation and scientific computing.

Originally developed by IBM at their campus in south San Jose, California[4] in the 1950s for scientific and engineering applications, Fortran came to dominate this area of programming early on and has been in continual use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics and computational chemistry. It is one of the most popular languages in the area of high-performance computing[5] and is the language used for programs that benchmark and rank the world's fastest supercomputers.

Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with previous versions. Successive versions have added support for processing of character-based data (FORTRAN 77), array programming, modular programming and object-based programming (Fortran 90 / 95), and object-oriented and generic programming (Fortran 2003).


History

In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre.[6] A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. This was an optimizing compiler, because customers were reluctant to use a high-level programming language unless its compiler could generate code whose performance was comparable to that of hand-coded assembly language.

An IBM 704 mainframe

FORTRAN code on a punched card, showing the specialized uses of columns 1-5, 6 and 73-80.

While the community was skeptical that this new method could possibly out-perform hand-coding, it reduced the number of programming statements necessary to operate a machine by a factor of 20, and quickly gained acceptance. Said creator John Backus during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy. I didn't like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."[7]

The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.

By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The development of FORTRAN paralleled the early evolution of compiler technology; indeed many advances in the theory and design of compilers were specifically motivated by the need to generate efficient code for FORTRAN programs.


FORTRAN

The initial release of FORTRAN for the IBM 704 contained 32 statements, including:
• DIMENSION and EQUIVALENCE statements
• Assignment statements
• Three-way arithmetic IF statement[8]
• IF statements for checking exceptions (ACCUMULATOR OVERFLOW, QUOTIENT OVERFLOW, and DIVIDE CHECK); and IF statements for manipulating sense switches and sense lights
• GOTO, computed GOTO, ASSIGN, and assigned GOTO
• DO loops
• Formatted I/O: FORMAT, READ, READ INPUT TAPE, WRITE, WRITE OUTPUT TAPE, PRINT, and PUNCH
• Unformatted I/O: READ TAPE, READ DRUM, WRITE TAPE, and WRITE DRUM
• Other I/O: END FILE, REWIND, and BACKSPACE
• PAUSE, STOP, and CONTINUE
• FREQUENCY statement (for providing optimization hints to the compiler)[9]
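A minimal sketch of the three-way arithmetic IF from the list above (hypothetical code, not taken from the 1956 manual; it uses the later list-directed PRINT for brevity, and compiles with a modern compiler):

```fortran
!     Three-way arithmetic IF: control passes to the first, second,
!     or third label according to whether the expression is
!     negative, zero, or positive.
      n = -5
      if (n) 10, 20, 30
   10 print *, 'NEGATIVE'
      go to 40
   20 print *, 'ZERO'
      go to 40
   30 print *, 'POSITIVE'
   40 stop
      end
```

With n = -5 the expression is negative, so control reaches label 10 and the program prints NEGATIVE.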

Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards, one line to a card, which would be fed into a card reader in a batch.

Originally programs were written in a "fixed" column format. Column 1 was the comment field: a letter C caused the entire card to be ignored by the compiler. Columns 2 to 5 were the label field: a sequence of digits here was taken as a label for the purpose of a GOTO or a FORMAT reference in a WRITE or READ statement. Column 6 was a continuation field: a non-blank character here caused the card to be taken as a continuation of the statement on the previous card. Columns 73 to 80 were ignored, so they were reserved for punching a sequence number which in theory could be used to re-order cards if a stack of cards was dropped, though in practice these were rarely so punched, as re-assembling a stack would have to be done manually. Columns 7 to 72 served as the statement field. Later compilers relaxed these restrictions.

Early FORTRAN compilers did not support recursion in subroutines. Early computer architectures did not support the concept of a stack, and when they did directly support subroutine calls, the return location was often stored in a fixed location adjacent to the subroutine code, which does not allow more than one level of calling. Although not specified in FORTRAN 77, many F77 compilers supported recursion as an option, and it became standard in Fortran 90.[10]
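The fixed column layout described above can be illustrated with a short sketch (hypothetical code, not from any historical program; it must be compiled as fixed-form source, e.g. a .f file):

```fortran
C     COLUMN 1: A "C" MARKS THE WHOLE CARD AS A COMMENT.
C     COLUMNS 2-5 HOLD STATEMENT LABELS, COLUMN 6 A CONTINUATION
C     MARK, AND COLUMNS 7-72 THE STATEMENT ITSELF.
      NSUM = 0
      DO 10 I = 1, 4
      NSUM = NSUM +
     X       I
   10 CONTINUE
      PRINT *, NSUM
      END
```

The X in column 6 joins the two cards into the single statement NSUM = NSUM + I, and label 10 (columns 2-5) terminates the DO loop, so the program prints 10.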

IBM 1401 FORTRAN

FORTRAN was provided for the IBM 1401 by an innovative 63-pass compiler that ran in only 8k of core. It kept the program in memory and loaded overlays that gradually transformed it, in place, into executable form, as described by Haines et al.[11] The executable form was not machine language; rather it was interpreted, anticipating UCSD Pascal P-code by two decades.

FORTRAN II

IBM's FORTRAN II appeared in 1958. The main enhancement was to support procedural programming by allowing user-written subroutines and functions which returned values, with parameters passed by reference. The COMMON statement provided a way for subroutines to access common (or global) variables. Six new statements were introduced:
• SUBROUTINE, FUNCTION, and END
• CALL and RETURN
• COMMON
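A sketch of these FORTRAN II additions, with hypothetical names and written to compile under a modern compiler: a user-written FUNCTION returning a value, a SUBROUTINE invoked with CALL, and a COMMON variable shared between the main program and the subroutine.

```fortran
!     Blank COMMON shares ncalls between program units; itwice
!     returns a value (implicitly INTEGER, by the I-N rule).
      common ncalls
      ncalls = 0
      k = itwice(21)
      call tally
      print *, k, ncalls
      stop
      end

      function itwice(n)
      itwice = 2 * n
      return
      end

      subroutine tally
      common ncalls
      ncalls = ncalls + 1
      return
      end
```

The program prints 42 and 1: the function result is returned through the function name, while the subroutine communicates only through COMMON.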

Over the next few years, FORTRAN II would also add support for the DOUBLE PRECISION and COMPLEX data types.

Simple FORTRAN II program

This program, for Heron's formula, reads one data card containing three 5-digit integers A, B, and C as input. If A, B, and C cannot represent the sides of a triangle in plane geometry, then the program's execution will end with an error code of "STOP 1". Otherwise, an output line will be printed showing the input values for A, B, and C, followed by the computed AREA of the triangle as a floating-point number with 2 digits after the decimal point.

C     AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C     INPUT - CARD READER UNIT 5, INTEGER INPUT
C     OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
C     INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
      READ INPUT TAPE 5, 501, IA, IB, IC
  501 FORMAT (3I5)
C     IA, IB, AND IC MAY NOT BE NEGATIVE
C     FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C     IS GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
      IF (IA) 777, 777, 701
  701 IF (IB) 777, 777, 702
  702 IF (IC) 777, 777, 703
  703 IF (IA+IB-IC) 777, 777, 704
  704 IF (IA+IC-IB) 777, 777, 705
  705 IF (IB+IC-IA) 777, 777, 799
  777 STOP 1
C     USING HERON'S FORMULA WE CALCULATE THE
C     AREA OF THE TRIANGLE
  799 S = FLOATF (IA + IB + IC) / 2.0
      AREA = SQRT( S * (S - FLOATF(IA)) * (S - FLOATF(IB)) *
     +   (S - FLOATF(IC)))
      WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA
  601 FORMAT (4H A= ,I5,5H  B= ,I5,5H  C= ,I5,8H  AREA= ,F10.2,
     +        13H SQUARE UNITS)
      STOP
      END


FORTRAN III

IBM also developed a FORTRAN III in 1958 that allowed for inline assembler code among other features; however, this version was never released as a product. Like the 704 FORTRAN and FORTRAN II, FORTRAN III included machine-dependent features that made code written in it unportable from machine to machine. Early versions of FORTRAN provided by other vendors suffered from the same disadvantage.

FORTRAN IV

Starting in 1961, as a result of customer demands, IBM began development of a FORTRAN IV that removed the machine-dependent features of FORTRAN II (such as READ INPUT TAPE), while adding new features such as a LOGICAL data type, logical Boolean expressions, and the logical IF statement as an alternative to the arithmetic IF statement. FORTRAN IV was eventually released in 1962, first for the IBM 7030 ("Stretch") computer, followed by versions for the IBM 7090 and IBM 7094. By 1965, FORTRAN IV was supposed to comply with the standard being developed by the American Standards Association's X3.4.3 FORTRAN Working Group.[12]
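The FORTRAN IV additions can be sketched together (hypothetical code, not from a period program; it compiles with a modern compiler): a LOGICAL variable, a Boolean expression, and the one-statement logical IF, which replaces many uses of the three-way arithmetic IF.

```fortran
!     LOGICAL type, relational operators and the logical IF were
!     FORTRAN IV additions.
      logical big
      n = 12
      big = n .gt. 10
      if (big .and. n .lt. 100) print *, 'BETWEEN 11 AND 99'
      end
```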

A FORTRAN coding form, formerly printed on paper and intended to be used by programmers to prepare programs for punching onto cards by keypunch operators. Now obsolete.

FORTRAN 66

Perhaps the most significant development in the early history of FORTRAN was the decision by the American Standards Association (now ANSI) to form a committee to develop an "American Standard Fortran." The resulting two standards, approved in March 1966, defined two languages, FORTRAN (based on FORTRAN IV, which had served as a de facto standard), and Basic FORTRAN (based on FORTRAN II, but stripped of its machine-dependent features). The FORTRAN defined by the first standard became known as FORTRAN 66 (although many continued to refer to it as FORTRAN IV, the language upon which the standard was largely based). FORTRAN 66 effectively became the first "industry-standard" version of FORTRAN. FORTRAN 66 included:
• Main program, SUBROUTINE, FUNCTION, and BLOCK DATA program units
• INTEGER, REAL, DOUBLE PRECISION, COMPLEX, and LOGICAL data types
• COMMON, DIMENSION, and EQUIVALENCE statements
• DATA statement for specifying initial values
• Intrinsic and EXTERNAL (e.g., library) functions
• Assignment statement
• GOTO, assigned GOTO, and computed GOTO statements
• Logical IF and arithmetic (three-way) IF statements
• DO loops
• READ, WRITE, BACKSPACE, REWIND, and ENDFILE statements for sequential I/O
• FORMAT statement
• CALL, RETURN, PAUSE, and STOP statements

• Hollerith constants in DATA and FORMAT statements, and as actual arguments to procedures
• Identifiers of up to six characters in length
• Comment lines

FORTRAN 77

After the release of the FORTRAN 66 standard, compiler vendors introduced a number of extensions to "Standard Fortran", prompting ANSI in 1969 to begin work on revising the 1966 standard. Final drafts of this revised standard circulated in 1977, leading to formal approval of the new FORTRAN standard in April 1978. The new standard, known as FORTRAN 77, added a number of significant features to address many of the shortcomings of FORTRAN 66:
• Block IF and END IF statements, with optional ELSE and ELSE IF clauses, to provide improved language support for structured programming
• DO loop extensions, including parameter expressions, negative increments, and zero trip counts
• OPEN, CLOSE, and INQUIRE statements for improved I/O capability
• Direct-access file I/O
• IMPLICIT statement
• CHARACTER data type, with vastly expanded facilities for character input and output and processing of character-based data
• PARAMETER statement for specifying constants
• SAVE statement for persistent local variables
• Generic names for intrinsic functions
• A set of intrinsics (LGE, LGT, LLE, LLT) for lexical comparison of strings, based upon the ASCII collating sequence. (ASCII functions were demanded by the U.S. Department of Defense, in their conditional approval vote.)

In this revision of the standard, a number of features were removed or altered in a manner that might invalidate previously standard-conforming programs. (Removal was the only allowable alternative to X3J3 at that time, since the concept of "deprecation" was not yet available for ANSI standards.) While most of the 24 items in the conflict list (see Appendix A2 of X3.9-1978) addressed loopholes or pathological cases permitted by the previous standard but rarely used, a small number of specific capabilities were deliberately removed, such as:
• Hollerith constants and Hollerith data, such as: GREET = 12HHELLO THERE!
• Reading into an H edit (Hollerith field) descriptor in a FORMAT specification.
• Overindexing of array bounds by subscripts, e.g.:
      DIMENSION A(10,5)
      Y = A(11,1)
• Transfer of control into the range of a DO loop (also known as "Extended Range").
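Two of the FORTRAN 77 additions above, the CHARACTER data type and the structured block IF, can be sketched together (hypothetical code, not from any particular program):

```fortran
!     CHARACTER data and the block IF / ELSE IF / ELSE / END IF
!     construct were both new in FORTRAN 77.
      character*9 grade
      mark = 72
      if (mark .ge. 80) then
         grade = 'EXCELLENT'
      else if (mark .ge. 50) then
         grade = 'PASS'
      else
         grade = 'FAIL'
      end if
      print *, grade
      end
```

Under FORTRAN 66 the same three-way classification would have required arithmetic IFs and GOTOs; the block IF makes the control flow explicit.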

Variants: Minnesota FORTRAN

Control Data Corporation computers had another version of FORTRAN 77, called Minnesota FORTRAN (MNF), designed especially for student use, with variations in output constructs, special uses of COMMONs and DATA statements, optimization code levels for compiling, detailed error listings, extensive warning messages, and debugging facilities.[13]


Transition to ANSI Standard Fortran

The development of a revised standard to succeed FORTRAN 77 would be repeatedly delayed as the standardization process struggled to keep up with rapid changes in computing and programming practice. In the meantime, as the "Standard FORTRAN" for nearly fifteen years, FORTRAN 77 would become the historically most important dialect.

An important practical extension to FORTRAN 77 was the release of MIL-STD-1753 in 1978.[14] This specification, developed by the U.S. Department of Defense, standardized a number of features implemented by most FORTRAN 77 compilers but not included in the ANSI FORTRAN 77 standard. These features would eventually be incorporated into the Fortran 90 standard.
• DO WHILE and END DO statements
• INCLUDE statement
• IMPLICIT NONE variant of the IMPLICIT statement
• Bit manipulation intrinsic functions, based on similar functions included in Industrial Real-Time Fortran (ANSI/ISA S61.1 (1976))

The IEEE 1003.9 POSIX Standard, released in 1991, provided a simple means for FORTRAN 77 programmers to issue POSIX system calls.[15] Over 100 calls were defined in the document, allowing access to POSIX-compatible process control, signal handling, file system control, device control, procedure pointing, and stream I/O in a portable manner.
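Two of the MIL-STD-1753 extensions listed above, IMPLICIT NONE and DO WHILE ... END DO, can be sketched in a short hypothetical program (they compile with any modern compiler, since both entered Fortran 90):

```fortran
!     IMPLICIT NONE forces every variable to be declared; DO WHILE
!     loops until its condition becomes false.
      program milstd
      implicit none
      integer n, nhalvings
      n = 20
      nhalvings = 0
!     count how many times n can be halved exactly
      do while (mod(n, 2) .eq. 0)
         n = n / 2
         nhalvings = nhalvings + 1
      end do
      print *, n, nhalvings
      end program milstd
```

Starting from 20 the loop halves twice (20, 10, 5), so the program prints 5 and 2.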

Fortran 90

The much delayed successor to FORTRAN 77, informally known as Fortran 90 (and prior to that, Fortran 8X), was finally released as an ISO standard in 1991 and an ANSI standard in 1992. This major revision added many new features to reflect the significant changes in programming practice that had evolved since the 1978 standard:
• Free-form source input, also with lowercase Fortran keywords
• Identifiers up to 31 characters in length
• Inline comments
• Ability to operate on arrays (or array sections) as a whole, thus greatly simplifying math and engineering computations:
  • whole, partial and masked array assignment statements and array expressions, such as X(1:N)=R(1:N)*COS(A(1:N))
  • WHERE statement for selective array assignment
  • array-valued constants and expressions
  • user-defined array-valued functions and array constructors
• RECURSIVE procedures
• Modules, to group related procedures and data together, and make them available to other program units, including the capability to limit the accessibility to only specific parts of the module
• A vastly improved argument-passing mechanism, allowing interfaces to be checked at compile time
• User-written interfaces for generic procedures
• Operator overloading
• Derived/abstract data types
• New data type declaration syntax, to specify the data type and other attributes of variables
• Dynamic memory allocation by means of the ALLOCATABLE attribute and the ALLOCATE and DEALLOCATE statements
• POINTER attribute, pointer assignment, and NULLIFY statement to facilitate the creation and manipulation of dynamic data structures
• Structured looping constructs, with an END DO statement for loop termination, and EXIT and CYCLE statements for "breaking out" of normal DO loop iterations in an orderly way
• SELECT . . . CASE construct for multi-way selection
• Portable specification of numerical precision under the user's control
• New and enhanced intrinsic procedures

Obsolescence and deletions

Unlike the previous revision, Fortran 90 did not delete any features. (Appendix B.1 says, "The list of deleted features in this standard is empty.") Any standard-conforming FORTRAN 77 program is also standard-conforming under Fortran 90, and either standard should be usable to define its behavior. A small set of features were identified as "obsolescent" and expected to be removed in a future standard.

Obsolescent feature, example, and status / fate in Fortran 95:

• Arithmetic IF-statement (obsolescent), e.g. IF (X) 10, 20, 30
• Non-integer DO parameters or control variables (deleted), e.g. DO 9 X= 1.7, 1.6, -0.1
• Shared DO-loop termination, or termination with a statement other than END DO or CONTINUE (obsolescent), e.g.
      DO 9 J= 1, 10
      DO 9 K= 1, 10
    9 L= J + K
• Branching to END IF from outside a block (deleted), e.g.
   66 GO TO 77
      . . .
      IF (E) THEN
      . . .
   77 END IF
• Alternate return (obsolescent), e.g. CALL SUBR( X, Y, *100, *200 )
• PAUSE statement (deleted), e.g. PAUSE 600
• ASSIGN statement and assigned GO TO statement (deleted), e.g.
  100 . . .
      ASSIGN 100 TO H
      . . .
      GO TO H
• Assigned FORMAT specifiers (deleted), e.g. ASSIGN F TO 606
• H edit descriptors (deleted), e.g. 606 FORMAT ( 9H1GOODBYE. )
• Computed GO TO statement (obsolescent), e.g. GO TO (10, 20, 30, 40), index
• Statement functions (obsolescent), e.g. FOIL( X, Y )= X**2 + 2*X*Y + Y**2
• DATA statements among executable statements (obsolescent), e.g.
      X= 27.3
      DATA A, B, C / 5.0, 12.0, 13.0 /
• CHARACTER* form of CHARACTER declaration (obsolescent), e.g. CHARACTER*8 STRING ! Use CHARACTER(8)
• Assumed character length functions (obsolescent)
• Fixed form source code (obsolescent): column 1 contains *, !, or C for comments; column 6 marks continuation


Fortran 95

Fortran 95 was a minor revision, mostly to resolve some outstanding issues from the Fortran 90 standard. Nevertheless, Fortran 95 also added a number of extensions, notably from the High Performance Fortran specification:
• FORALL and nested WHERE constructs to aid vectorization
• User-defined PURE and ELEMENTAL procedures
• Default initialization of derived type components, including pointer initialization
• Expanded ability to use initialization expressions for data objects
• Clear specification that ALLOCATABLE arrays are automatically deallocated when they go out of scope
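Three of these Fortran 95 features can be sketched in one short hypothetical program: an ELEMENTAL function applied element-wise to an array, a masked WHERE assignment, and a FORALL construct.

```fortran
! Minimal Fortran 95 sketch (hypothetical data and names):
! WHERE masks an array assignment, an ELEMENTAL function applies
! element-wise, and FORALL expresses an index-parallel assignment.
program f95demo
  implicit none
  integer :: i
  real :: a(5) = (/ -2., -1., 0., 1., 2. /)
  real :: b(5)

  where (a < 0.) a = 0.                ! clip negatives: a = 0,0,0,1,2
  a = twice(a)                         ! elemental call:  a = 0,0,0,2,4
  forall (i = 1:5) b(i) = real(i)**2   ! b = 1,4,9,16,25

  print *, sum(a), sum(b)              ! 6.0 and 55.0

contains

  ! elemental: defined for a scalar, usable on whole arrays
  elemental real function twice(x)
    real, intent(in) :: x
    twice = 2.*x
  end function twice

end program f95demo
```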

A number of intrinsic functions were extended (for example, a dim argument was added to the maxloc intrinsic). Several features noted in Fortran 90 to be obsolescent were removed from Fortran 95:
• DO statements using REAL and DOUBLE PRECISION variables
• Branching to an END IF statement from outside its block
• PAUSE statement
• ASSIGN and assigned GOTO statements, and assigned format specifiers
• H edit descriptor

An important supplement to Fortran 95 was the ISO technical report TR-15581: Enhanced Data Type Facilities, informally known as the Allocatable TR. This specification defined enhanced use of ALLOCATABLE arrays, prior to the availability of fully Fortran 2003-compliant Fortran compilers. Such uses include ALLOCATABLE arrays as derived type components, in procedure dummy argument lists, and as function return values. (ALLOCATABLE arrays are preferable to POINTER-based arrays because ALLOCATABLE arrays are guaranteed by Fortran 95 to be deallocated automatically when they go out of scope, eliminating the possibility of memory leakage. In addition, aliasing is not an issue for optimization of array references, allowing compilers to generate faster code than in the case of pointers.)

Another important supplement to Fortran 95 was the ISO technical report TR-15580: Floating-point exception handling, informally known as the IEEE TR. This specification defined support for IEEE floating-point arithmetic and floating-point exception handling.

Conditional compilation and varying length strings

In addition to the mandatory "Base language" (defined in ISO/IEC 1539-1 : 1997), the Fortran 95 language also includes two optional modules:
• Varying character strings (ISO/IEC 1539-2 : 2000)
• Conditional compilation (ISO/IEC 1539-3 : 1998)
which, together, comprise the multi-part International Standard (ISO/IEC 1539).

According to the standards developers, "the optional parts describe self-contained features which have been requested by a substantial body of users and/or implementors, but which are not deemed to be of sufficient generality for them to be required in all standard-conforming Fortran compilers." Nevertheless, if a standard-conforming Fortran does provide such options, then they "must be provided in accordance with the description of those facilities in the appropriate Part of the Standard."


Fortran 2003

Fortran 2003 is a major revision introducing many new features. A comprehensive summary of the new features of Fortran 2003 is available at the Fortran Working Group (WG5) official Web site.[16] From that article, the major enhancements for this revision include:
• Derived type enhancements: parameterized derived types, improved control of accessibility, improved structure constructors, and finalizers
• Object-oriented programming support: type extension and inheritance, polymorphism, dynamic type allocation, and type-bound procedures
• Data manipulation enhancements: allocatable components (incorporating TR 15581), deferred type parameters, VOLATILE attribute, explicit type specification in array constructors and allocate statements, pointer enhancements, extended initialization expressions, and enhanced intrinsic procedures
• Input/output enhancements: asynchronous transfer, stream access, user-specified transfer operations for derived types, user-specified control of rounding during format conversions, named constants for preconnected units, the FLUSH statement, regularization of keywords, and access to error messages
• Procedure pointers
• Support for IEEE floating-point arithmetic and floating-point exception handling (incorporating TR 15580)
• Interoperability with the C programming language
• Support for international usage: access to ISO 10646 4-byte characters and choice of decimal or comma in numeric formatted input/output
• Enhanced integration with the host operating system: access to command line arguments, environment variables, and processor error messages

An important supplement to Fortran 2003 was the ISO technical report TR-19767: Enhanced module facilities in Fortran. This report provided submodules, which make Fortran modules more similar to Modula-2 modules. They are similar to Ada private child subunits.
This allows the specification and implementation of a module to be expressed in separate program units, which improves packaging of large libraries, allows preservation of trade secrets while publishing definitive interfaces, and prevents compilation cascades.
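The object-oriented additions of Fortran 2003 (type extension, type-bound procedures, and polymorphic dispatch through CLASS) can be sketched with a hypothetical shapes module; the names here are illustrative, not from any library:

```fortran
! Fortran 2003 sketch: a base type with a type-bound procedure,
! extended by a child type that overrides the binding; the call
! t%area() dispatches on the dynamic type.
module shapes
  implicit none
  type :: shape
    real :: a = 1., b = 1.
  contains
    procedure :: area
  end type shape

  type, extends(shape) :: triangle
  contains
    procedure :: area => tri_area   ! override the inherited binding
  end type triangle
contains
  real function area(self)
    class(shape), intent(in) :: self
    area = self%a * self%b          ! rectangle-style area
  end function area

  real function tri_area(self)
    class(triangle), intent(in) :: self
    tri_area = 0.5 * self%a * self%b
  end function tri_area
end module shapes

program demo
  use shapes
  implicit none
  type(triangle) :: t
  t%a = 4. ; t%b = 3.
  print *, t%area()                 ! dispatches to tri_area: 6.0
end program demo
```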

Fortran 2008

The most recent standard, ISO/IEC 1539-1:2010, informally known as Fortran 2008, was approved in September 2010.[17] As with Fortran 95, this is a minor upgrade, incorporating clarifications and corrections to Fortran 2003, as well as introducing a select few new capabilities. The new capabilities include:
• Submodules – additional structuring facilities for modules; supersedes ISO/IEC TR 19767:2005
• Co-array Fortran – a parallel execution model
• The DO CONCURRENT construct – for loop iterations with no interdependencies
• The CONTIGUOUS attribute – to specify storage layout restrictions
• The BLOCK construct – can contain declarations of objects with construct scope
• Recursive allocatable components – as an alternative to recursive pointers in derived types
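Two of these Fortran 2008 additions can be sketched in a short hypothetical program: DO CONCURRENT, which asserts that loop iterations are independent, and the BLOCK construct, whose declarations have construct scope.

```fortran
! Fortran 2008 sketch: DO CONCURRENT for independent iterations,
! BLOCK for declarations with construct scope.
program f08demo
  implicit none
  integer :: i
  real :: x(4)

  do concurrent (i = 1:4)    ! no iteration depends on another
    x(i) = real(i) * 0.5
  end do

  block                      ! s exists only inside this block
    real :: s
    s = sum(x)               ! 0.5 + 1.0 + 1.5 + 2.0 = 5.0
    print *, s
  end block
end program f08demo
```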

The Final Draft International Standard (FDIS) is available as document N1830.[18]

Legacy

Since Fortran has been in use for more than fifty years, there is a vast body of Fortran in daily use throughout the scientific and engineering communities. It is the primary language for some of the most intensive supercomputing tasks, such as weather and climate modeling, computational fluid dynamics, computational chemistry, computational economics, animal breeding, plant breeding and computational physics. Even today, half a century later, many of the floating-point benchmarks to gauge the performance of new computer processors are still written in Fortran (e.g.,

CFP2006 [19], the floating-point component of the SPEC CPU2006 [20] benchmarks).


Portability

Portability was a problem in the early days because there was no agreed standard, not even IBM's reference manual, and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability. The 1966 standard provided a reference syntax and semantics, but vendors continued to provide incompatible extensions.

Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such as The PFORT Verifier, it was not until after the 1977 standard, when the National Bureau of Standards (now NIST) published FIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions.

Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic. This was addressed by Fox et al. in the context of the 1966 standard by the PORT library. The ideas therein became widely used, and were eventually incorporated into the 1990 standard by way of intrinsic inquiry functions. The widespread (now almost universal) adoption of the IEEE 754 standard for binary floating-point arithmetic has essentially removed this problem.

Access to the computing environment (e.g. the program's command line, environment variables, textual explanation of error conditions) remained a problem until it was addressed by the 2003 standard.

Large collections of "library" software that could be described as being loosely related to engineering and scientific calculations, such as graphics libraries, have been written in C, and therefore access to them presented a portability problem. This has been addressed by incorporation of C interoperability into the 2003 standard.
It is now possible (and relatively easy) to write an entirely portable program in Fortran, even without recourse to a preprocessor.
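The C interoperability mentioned above works through the intrinsic ISO_C_BINDING module and the BIND(C) attribute. A minimal sketch (the function name celsius is hypothetical, not from any library):

```fortran
! Fortran 2003 C interoperability sketch: ISO_C_BINDING supplies
! C-compatible kinds, and BIND(C) gives the function an unmangled
! name and C calling convention, so C code can declare and call it
! as:  float celsius(float f);
module convert
  use iso_c_binding, only: c_float
  implicit none
contains
  function celsius(f) bind(c, name="celsius") result(c)
    real(c_float), value :: f          ! passed by value, as in C
    real(c_float) :: c
    c = (f - 32.0_c_float) * 5.0_c_float / 9.0_c_float
  end function celsius
end module convert
```

The VALUE attribute matches C's pass-by-value convention; without it, C callers would have to pass a pointer.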

Variants

Fortran 5

Fortran 5 was a programming language marketed by Data General Corp in the late 1970s and early 1980s, for the Nova, Eclipse, and MV line of computers. It had an optimizing compiler that was quite good for minicomputers of its time. The language most closely resembles FORTRAN 66. The name is a pun on the earlier FORTRAN IV. Univac also offered a compiler for the 1100 series known as FORTRAN V. A spinoff of Univac FORTRAN V was Athena FORTRAN.


Fortran V

Fortran V was a programming language distributed by Control Data Corporation in 1968 for the CDC 6600 series. The language was based upon Fortran IV.[21]

Fortran 6

Fortran 6, or Visual Fortran 2001, was licensed to Compaq by Microsoft. Compaq licensed Compaq Visual Fortran and provided the Visual Studio 5 environment interface for Compaq v6 up to v6.1.[22]

Specific variants

Vendors of high-performance scientific computers (e.g., Burroughs, CDC, Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays. For example, one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine code instructions to keep multiple internal arithmetic units busy simultaneously. Another example is CFD, a special variant of Fortran designed specifically for the ILLIAC IV supercomputer, running at NASA's Ames Research Center. IBM Research Labs also developed an extended FORTRAN-based language called "VECTRAN" for processing of vectors and matrices.

Object-Oriented Fortran was an object-oriented extension of Fortran, in which data items can be grouped into objects, which can be instantiated and executed in parallel. It was available for Sun, Iris, iPSC, and nCUBE, but is no longer supported.

Such machine-specific extensions have either disappeared over time or have had elements incorporated into the main standards; the major remaining extension is OpenMP, which is a cross-platform extension for shared memory programming. One new extension, Co-array Fortran, is intended to support parallel programming.

FOR TRANSIT for the IBM 650

"FOR TRANSIT" was the name of a reduced version of the IBM 704 FORTRAN language, which was implemented for the IBM 650, using a translator program developed at Carnegie [23] in the late 1950s. The following comment appears in the IBM Reference Manual ("FOR TRANSIT Automatic Coding System" C28-4038, Copyright 1957, 1959 by IBM):

The FORTRAN system was designed for a more complex machine than the 650, and consequently some of the 32 statements found in the FORTRAN Programmer's Reference Manual are not acceptable to the FOR TRANSIT system. In addition, certain restrictions to the FORTRAN language have been added. However, none of these restrictions make a source program written for FOR TRANSIT incompatible with the FORTRAN system for the 704.

The permissible statements were:
• Arithmetic assignment statements, e.g. a = b
• GO TO n
• GO TO (n1, n2, ..., nm), i
• IF (a) n1, n2, n3
• PAUSE
• STOP
• DO n i = m1, m2
• CONTINUE
• END
• READ n, list
• PUNCH n, list
• DIMENSION V, V, V, ...
• EQUIVALENCE (a,b,c), (d,c), ...

Up to ten subroutines could be used in one program. FOR TRANSIT statements were limited to columns 7 through 56 only. Punched cards were used for input and output on the IBM 650. Three passes were required: to translate the source code to the "IT" language, then to compile the IT statements into SOAP assembly language, and finally to produce the object program, which could then be loaded into the machine to run the program (using punched cards for data input, and outputting results onto punched cards).

Two versions existed for the 650s with a 2000-word memory drum: FOR TRANSIT I (S) and FOR TRANSIT II, the latter for machines equipped with indexing registers and automatic floating-point decimal (bi-quinary) arithmetic. Appendix A of the manual included wiring diagrams for the IBM 533 control panel.


Fortran-based languages

Prior to FORTRAN 77, a number of preprocessors were commonly used to provide a friendlier language, with the advantage that the preprocessed code could be compiled on any machine with a standard FORTRAN compiler. Popular preprocessors included FLECS, MORTRAN, SFtran, S-Fortran, Ratfor, and Ratfiv. (Ratfor and Ratfiv, for example, implemented a remarkably C-like language, outputting preprocessed code in standard FORTRAN 66).[24] LRLTRAN was developed at the Lawrence Radiation Laboratory to provide support for vector arithmetic and dynamic storage, among other extensions to support systems programming. The distribution included the LTSS operating system. The Fortran-95 Standard includes an optional Part 3 which defines an optional conditional compilation capability. This capability is often referred to as "CoCo". Many Fortran compilers have integrated subsets of the C preprocessor into their systems. SIMSCRIPT is an application specific Fortran preprocessor for modeling and simulating large discrete systems. The F programming language was designed to be a clean subset of Fortran 95 that attempted to remove the redundant, unstructured, and deprecated features of Fortran, such as the EQUIVALENCE statement. F retains the array features added in Fortran 90, and removes control statements that were obsoleted by structured programming constructs added to both Fortran 77 and Fortran 90. F is described by its creators as "a compiled, structured, array programming language especially well suited to education and scientific computing."[25]

Code examples

The following program illustrates dynamic memory allocation and array-based operations, two features introduced with Fortran 90. Particularly noteworthy is the absence of DO loops and IF/THEN statements in manipulating the array; mathematical operations are applied to the array as a whole. Also apparent is the use of descriptive variable names and general code formatting that conform with contemporary programming style. This example computes an average over data entered interactively.

program average

  ! Read in some numbers and take the average
  ! As written, if there are no data points, an average of zero is returned
  ! While this may not be desired behavior, it keeps this example simple

  implicit none

  real, dimension(:), allocatable :: points
  integer                         :: number_of_points
  real                            :: average_points = 0., positive_average = 0., negative_average = 0.

  write (*,*) "Input number of points to average:"
  read  (*,*) number_of_points

  allocate (points(number_of_points))

  write (*,*) "Enter the points to average:"
  read  (*,*) points

  ! Take the average by summing points and dividing by number_of_points
  if (number_of_points > 0) average_points = sum(points) / number_of_points

  ! Now form average over positive and negative points only
  if (count(points > 0.) > 0) then
     positive_average = sum(points, points > 0.) / count(points > 0.)
  end if
  if (count(points < 0.) > 0) then
     negative_average = sum(points, points < 0.) / count(points < 0.)
  end if

  deallocate (points)

  ! Print result to terminal
  write (*,'(a,g12.4)') 'Average = ', average_points
  write (*,'(a,g12.4)') 'Average of positive points = ', positive_average
  write (*,'(a,g12.4)') 'Average of negative points = ', negative_average

end program average
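The masked array intrinsics used in the program above can be exercised in isolation. The following short sketch (an illustration added here, not part of the original article) shows SUM and COUNT with and without a MASK argument:

```fortran
program mask_demo
  implicit none
  real :: x(5) = (/ 1.0, -2.0, 3.0, -4.0, 5.0 /)
  print *, sum(x)                  ! sum of all elements: 3.0
  print *, count(x > 0.0)          ! number of positive elements: 3
  print *, sum(x, mask = x > 0.0)  ! sum of positive elements only: 9.0
end program mask_demo
```

Because SUM and COUNT accept a logical mask expression, the averaging program needs no explicit DO loop or element-by-element IF test.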


Humor

During the same Fortran Standards Committee meeting at which the name "FORTRAN 77" was chosen, a technical proposal was incorporated into the official distribution, bearing the title "Letter O considered harmful". This proposal purported to address the confusion that sometimes arises between the letter "O" and the numeral zero by eliminating the letter from allowable variable names. However, the method proposed was to eliminate the letter from the character set entirely (thereby retaining 48 as the number of lexical characters, which the colon had increased to 49). This was considered beneficial in that it would promote structured programming, by making it impossible to use the notorious GO TO statement as before. (Troublesome FORMAT statements would also be eliminated.) It was noted that this "might invalidate some existing programs" but that most of these "probably were non-conforming, anyway".[26] [27]

During the standards committee battle over whether the "minimum trip count" for the FORTRAN 77 DO statement should be zero (allowing no execution of the block) or one (the "plunge-ahead" DO), another facetious alternative was proposed (by Loren Meissner) to set the minimum trip count at two, on the grounds that there is no need for a loop at all if it is executed only once.
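There is a real technical point behind the joke. Under the ANSI FORTRAN 66 semantics, a DO loop body was executed at least once; FORTRAN 77 instead defined the trip count so that it can be zero. A minimal sketch of the adopted semantics, written here in modern free-form style rather than the fixed-form source of the era:

```fortran
program trip_count
  implicit none
  integer :: i, n
  n = 0
  ! FORTRAN 77 semantics: the trip count is max((n - 1 + 1) / 1, 0) = 0,
  ! so the loop body is skipped entirely when n = 0.
  do i = 1, n
     print *, 'never reached'
  end do
  print *, 'loop ran zero times'
end program trip_count
```

Under the rejected "plunge-ahead" semantics, the body would have run once even with n = 0.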


Notes

[1] The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all-caps (FORTRAN 77 was the version in which the use of lowercase letters in keywords was strictly nonstandard). The capitalization has been dropped in referring to newer versions beginning with Fortran 90. The official language standards now refer to the language as "Fortran". Because the capitalization (or lack thereof) of the word FORTRAN was never 100% consistent in actual usage, and because many hold impassioned beliefs on the issue, this article, rather than attempting to be normative, adopts the convention of using the all-caps FORTRAN in referring to versions of FORTRAN through FORTRAN 77 and the title-caps Fortran in referring to versions of Fortran from Fortran 90 onward. This convention is reflected in the capitalization of FORTRAN in the ANSI X3.9-1966 (FORTRAN 66) and ANSI X3.9-1978 (FORTRAN 77) standards and the title caps Fortran in the ANSI X3.198-1992 (Fortran 90), ISO/IEC 1539-1:1997 (Fortran 95) and ISO/IEC 1539-1:2004 (Fortran 2003) standards.

[2] Since FORTRAN 77, which introduced the CHARACTER data type.

[3] Since FORTRAN II (1958).

[4] "Math 169 Notes - Santa Clara University" (http://math.scu.edu/~dsmolars/ma169/notesfortran.html).

[5] Eugene Loh (June 18, 2010). "The Ideal HPC Programming Language" (http://queue.acm.org/detail.cfm?id=1820518). Association for Computing Machinery.

[6] Softwarepreservation.org (http://www.softwarepreservation.org/projects/FORTRAN/index.html#By_FORTRAN_project_members)

[7] Fortran creator John Backus dies - Gadgets - MSNBC.com (http://www.msnbc.msn.com/id/17704662/), MSN.com

[8] Note: It is commonly believed that this statement corresponded to a three-way branch instruction on the IBM 704. This is not true; the 704 branch instructions all contained only one destination address (e.g., TZE — Transfer AC Zero, TNZ — Transfer AC Not Zero, TPL — Transfer AC Plus, TMI — Transfer AC Minus). The machine (and its successors in the 700/7000 series) did have a three-way skip instruction (CAS — Compare AC with Storage), which was probably the origin of this belief, but using this instruction to implement the IF would consume 4 instruction words, require the constant zero in a word of storage, and take 3 machine cycles to execute; using the Transfer instructions to implement the IF could be done in 1 to 3 instruction words, required no constants in storage, and took 1 to 3 machine cycles to execute. An optimizing compiler like FORTRAN would most likely select the more compact and usually faster Transfers instead of the Compare (use of Transfers also allowed the FREQUENCY statement to optimize IFs, which could not be done using the Compare). Also, the Compare considered −0 and +0 to be different values, while Transfer Zero and Transfer Not Zero considered them to be the same.

[9] The FREQUENCY statement in FORTRAN was used originally and optionally to give branch probabilities for the three branch cases of the arithmetic IF statement, to bias the way code was generated and the order in which the generated basic blocks were arranged in memory for optimality. The first FORTRAN compiler used this weighting to perform, at compile time, a Monte Carlo simulation of the generated code's run-time behavior; it was very sophisticated for its time. This technique is documented in the original 1957 article on the first FORTRAN compiler implementation by J. Backus, et al. Many years later, the FREQUENCY statement had no effect on the code and was treated as a comment statement, since compilers no longer did this kind of compile-time simulation. Below is a part of that 1957 paper, "The FORTRAN Automatic Coding System" by Backus, et al., on the FREQUENCY statement and its use in a compile-time Monte Carlo simulation of the run time to optimize the generated code:

"… The fundamental unit of program is the basic block; a basic block is a stretch of program which has a single entry point and a single exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by an actual 'execution' of the program in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided."

[10] Ibiblio.org (http://www.ibiblio.org/pub/languages/fortran/ch1-12.html)

[11] Haines, L. H. (1965). "Serial compilation and the 1401 FORTRAN compiler" (http://domino.research.ibm.com/tchjr/journalindex.nsf/495f80c9d0f539778525681e00724804/cde711e5ad6786e485256bfa00685a03?OpenDocument). IBM Systems Journal 4 (1): 73–80. doi:10.1147/sj.41.0073. This article was reprinted, edited, in both editions of Lee, John A. N. (1967 (1st), 1974 (2nd)). Anatomy of a Compiler. Van Nostrand Reinhold.

[12] McCracken, Daniel D. (1965). "Preface". A Guide to FORTRAN IV Programming. New York: Wiley. p. v. ISBN 0-471-58281-6.

[13] Chilton Computing with FORTRAN (http://www.chilton-computing.org.uk/acd/literature/reports/p008.htm), Chilton-computing.org.uk

[14] MIL-STD-1753. DoD Supplement to X3.9-1978 (http://www.fortran.com/fortran/mil_std_1753.html). United States Government Printing Office.

[15] POSIX 1003.9-1992. POSIX FORTRAN 77 Language Interface – Part 1: Binding for System Application Program Interface API (http://standards.ieee.org/reading/ieee/std_public/description/posix/1003.9-1992_desc.html). IEEE.

[16] Fortran Working Group (WG5) (http://www.nag.co.uk/sc22wg5/). It may also be downloaded as a PDF file (ftp://ftp.nag.co.uk/sc22wg5/N1551-N1600/N1579.pdf) or gzipped PostScript file (ftp://ftp.nag.co.uk/sc22wg5/N1551-N1600/N1579.ps.gz), FTP.nag.co.uk

[17] N1836, Summary of Voting/Table of Replies on ISO/IEC FDIS 1539-1, Information technology – Programming languages – Fortran – Part 1: Base language (ftp://ftp.nag.co.uk/sc22wg5/N1801-N1850/N1836.pdf) PDF (101 KiB)

[18] N1830, Information technology – Programming languages – Fortran – Part 1: Base language (ftp://ftp.nag.co.uk/sc22wg5/N1801-N1850/N1830.pdf) PDF (7.9 MiB)

[19] http://www.spec.org/cpu2006/CFP2006/

[20] http://www.spec.org/cpu2006/

[21] Healy, MJR (1968). "Towards FORTRAN VI" (http://hopl.murdoch.edu.au/showlanguage.prx?exp=1092&language=CDC Fortran). Advanced scientific Fortran by CDC. CDC. pp. 169–172. Retrieved 2009-04-10.

[22] "Third party release notes for Fortran v6.1" (http://www.cs-software.com/software/fortran/compaq/cvf_relnotes.html#61ver_news). 15 March 2011.

[23] "Internal Translator (IT) A Compiler for the IBM 650", by A. J. Perlis, J. W. Smith, and H. R. Van Zoeren, Computation Center, Carnegie Institute of Technology

[24] This is not altogether surprising, as Brian Kernighan, one of the co-creators of Ratfor, is also co-author of The C Programming Language.

[25] "F Programming Language Homepage" (http://www.fortran.com/F/index.html).

[26] X3J3 post-meeting distribution for meeting held at Brookhaven National Laboratory in November 1976.

[27] "The obliteration of O", Computer Weekly, March 3, 1977.


References

Further reading

Articles

• Allen, F.E. (September 1981). "A History of Language Processor Technology in IBM" (http://www.research.ibm.com/journal/rd/255/ibmrd2505Q.pdf). IBM Journal of Research and Development (IBM) 25 (5).
• Backus, J. W.; H. Stern, I. Ziller, R. A. Hughes, R. Nutt, R. J. Beeber, S. Best, R. Goldberg, L. M. Haibt, H. L. Herrick, R. A. Nelson, D. Sayre, P. B. Sheridan (1957). "The FORTRAN Automatic Coding System". Western joint computer conference: Techniques for reliability (Los Angeles, California: Institute of Radio Engineers, American Institute of Electrical Engineers, ACM): 188–198. doi:10.1145/1455567.1455599.
• Chivers, Ian D.; Sleightholme, Jane (2009). "Compiler support for the Fortran 2003 standard" (http://www.fortranplus.co.uk/resources/fortran_2003_2008_compiler_support.pdf). ACM SIGPLAN Fortran Forum (ACM) 28 (1): 26–28. doi:10.1145/1520752.1520755. ISSN 10617264.
• Pigott, Diarmuid (2006). "FORTRAN - Backus et al high-level compiler (Computer Language)" (http://hopl.murdoch.edu.au/showlanguage.prx?exp=8&language=FORTRAN). The Encyclopedia of Computer Languages. Murdoch University. Retrieved 2010-05-05.
• Roberts, Mark L.; Griffiths, Peter D. (1985). "Design Considerations for IBM Personal Computer Professional FORTRAN, an Optimizing Compiler" (http://www.research.ibm.com/journal/sj/241/ibmsj2401G.pdf). IBM Systems Journal (IBM) 24 (1): 49–60. doi:10.1147/sj.241.0049.

"Core" language standards

• ANSI X3.9-1966. USA Standard FORTRAN (http://www.fh-jena.de/~kleine/history/languages/ansi-x3dot9-1966-Fortran66.pdf). American National Standards Institute. Informally known as FORTRAN 66.
• ANSI X3.9-1978. American National Standard – Programming Language FORTRAN (http://www.fortran.com/fortran/F77_std/rjcnf.html). American National Standards Institute. Also known as ISO 1539-1980, informally known as FORTRAN 77.
• ANSI X3.198-1992 (R1997) / ISO/IEC 1539:1991. American National Standard – Programming Language Fortran Extended (http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=17366). American National Standards Institute / ISO/IEC. Informally known as Fortran 90.
• ISO/IEC 1539-1:1997. Information technology – Programming languages – Fortran – Part 1: Base language (http://j3-fortran.org/doc/standing/archive/007/97-007r2/pdf/97-007r2.pdf). Informally known as Fortran 95. There are a further two parts to this standard. Part 1 has been formally adopted by ANSI.
• ISO/IEC 1539-1:2004. Information technology – Programming languages – Fortran – Part 1: Base language (http://www.dkuug.dk/jtc1/sc22/open/n3661.pdf). Informally known as Fortran 2003.
• ISO/IEC 1539-1:2010 (Final Draft International Standard). Information technology – Programming languages – Fortran – Part 1: Base language (ftp://ftp.nag.co.uk/sc22wg5/N1801-N1850/N1830.pdf). Informally known as Fortran 2008.

Related standards

• Kneis, Wilfried (October 1981). "Draft standard Industrial Real-Time FORTRAN". ACM SIGPLAN Notices (ACM Press) 16 (7): 45–60. doi:10.1145/947864.947868. ISSN 0362-1340.
• ISO 8651-1:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 1: FORTRAN (http://www.iso.org/iso/catalogue_detail?csnumber=16024). Geneva, Switzerland: ISO. 1988.

Textbooks

• Akin, Ed (2003). Object Oriented Programming via Fortran 90/95 (1st ed.). Cambridge University Press. ISBN 0-521-52408-3.
• Chapman, Stephen J. (2007). Fortran 95/2003 for Scientists and Engineers (3rd ed.). McGraw-Hill. ISBN 978-0-07-319157-7.
• Chivers, Ian; Sleightholme, Jane (2006). Introduction to Programming with Fortran (1st ed.). Springer. ISBN 1-84628-053-2.
• Etter, D. M. (1990). Structured FORTRAN 77 for Engineers and Scientists (3rd ed.). The Benjamin/Cummings Publishing Company, Inc. ISBN 0-8053-0051-1.
• Ellis, T. M. R.; Phillips, Ivor R.; Lahey, Thomas M. (1994). Fortran 90 Programming (1st ed.). Addison Wesley. ISBN 0-201-54446-6.
• Kupferschmid, Michael (2002). Classical Fortran: Programming for Engineering and Scientific Applications. Marcel Dekker (CRC Press). ISBN 0-8247-0802-4.
• McCracken, Daniel D. (1961). A Guide to FORTRAN Programming. New York: Wiley. LCCN 61-016618.
• Metcalf, Michael; John Reid, Malcolm Cohen (2011). Modern Fortran Explained. Oxford University Press. ISBN 0-19-960142-9.
• Nyhoff, Larry; Sanford Leestma (1995). FORTRAN 77 for Engineers and Scientists with an Introduction to Fortran 90 (4th ed.). Prentice Hall. ISBN 0-13-363003-X.
• Page, Clive G. (1988). Professional Programmer's Guide to Fortran77 (http://www.star.le.ac.uk/~cgp/prof77.html) (7th June 2005 ed.). London: Pitman. ISBN 0-273-02856-1. Retrieved 2010-05-04.
• Press, William H. (1996). Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing (http://www.nrbook.com/a/bookf90pdf.php). Cambridge, UK: Cambridge University Press. ISBN 0-521-57439-0.
• Sleightholme, Jane; Chivers, Ian David (1990). Interactive Fortran 77: A Hands-On Approach (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.95.9503). Computers and their applications (2nd ed.). Chichester: E. Horwood. ISBN 0-13-466764-6.


External links

• JTC1/SC22/WG5 (http://www.nag.co.uk/sc22wg5/) — The ISO/IEC Fortran Working Group
• History of Fortran (http://www.softwarepreservation.org/projects/FORTRAN/) at the Computer History Museum's Software Preservation Group
• Bemer, Bob, "Who Was Who in IBM's Programming Research? -- Early FORTRAN Days" (http://www.trailing-edge.com/~bobbemer/PRORES.HTM), January 1957, Computer History Vignettes
• Comprehensive Fortran Standards Documents (http://gcc.gnu.org/wiki/GFortranStandards) from the gfortran project
• Fortran IV - IBM System/360 and System/370 Fortran IV Language - GC28-6515-10 (http://www.fh-jena.de/~kleine/history/languages/GC28-6515-10-FORTRAN-IV-Language.pdf)
• Fortran 77, 90, 95, 2003 Information & Resources (http://www.fortranplus.co.uk/fortran_info.html)
• Fortran 77 4.0 Reference Manual (http://www.physics.ucdavis.edu/~vem/F77_Ref.pdf)
• Fortran 90 Reference Card (http://users.physik.fu-berlin.de/~goerz/blog/2008/12/fortran-90-reference-card/)

Article Sources and Contributors

114

**Article Sources and Contributors
**

User:Rajah2770 Source: http://en.wikipedia.org/w/index.php?oldid=416910398 Contributors: ArcAngel, JohnCD, MatthewVanitas, Rajah2770 Fractional calculus Source: http://en.wikipedia.org/w/index.php?oldid=415855420 Contributors: A little insignificant, Aiden Fisher, Albmont, Aleksas, Ancheta Wis, AndreaPersephone, Antipodean Contributor, Awanta, Baccyak4H, Baz.77.243.99.32, BenFrantzDale, Blotwell, Charles Matthews, ComplexZeta, CornellRunner314, Cyan, DasAuge, Dcoetzee, Docu, Doradus, Dratman, Drusus 0, Dysprosia, Escalas, Fredrik, Fryed-peach, Gene Ward Smith, Gesslein, Giftlite, Gipsonb, GreenRoot, Guaka, Hair Commodore, Isnow, JabberWok, JamesBWatson, Joe056, Joriki, Josh Cherry, Jpbowen, Kate, Kevin Baas, Kwantus, Kwertii, Lrgerma, Mat-Fiz-C, MathFacts, Meni Rosenfeld, Michael Hardy, Mike Peel, Nixeagle, OdedSchramm, Omegatron, Patrick, Paul August, Perturbationist, Physicistjedi, Pinethicket, Quuxplusone, RJFJR, Reyk, RobHar, Saganatsu, SamsaGregor, Sarregouset, Someone else, Stigin, Stotr, Sławomir Biały, Tabletop, The Anome, Wik, Wile E. 
Heresiarch, Zunaid, 98 anonymous edits Code Source: http://en.wikipedia.org/w/index.php?oldid=412459856 Contributors: 95McCartney, Adrian.benko, Aintsemic, Aldaron, Alex756, Alexf, AlphaPyro, Alphanis, Ancheta Wis, Andonic, Antaeus Feldspar, ArnoldReinhold, Asxvideos, AugPi, Avicennasis, BOBFRED102, Beao, Beta16, Blahma, Bookandcoffee, Brianjd, COMPATT, Canderson7, Captain panda, Casper2k3, Connormah, Conversion script, Courcelles, DARTH SIDIOUS 2, Danlaycock, Dcljr, Dcoetzee, Dekisugi, Denny, DerHexer, DesuDesuDesuDesu4, Dicklyon, DocWatson42, Dpv, Dust Filter, Egg, Electron, Elonka, Epbr123, Evertype, Everyking, Ewlyahoocom, FarisL, Femto, FeralDruid, Francs2000, Frap, Gail, Galoubet, Gene.arboit, Gimboid13, God of Slaughter, GreatHappenings, GregorB, Gurch, Gurchzilla, Gveret Tered, Gwernol, Hagedis, Hannan98, HappyCamper, Hermel, Hetar, Hezink8, Hirzel, Hu12, IW.HG, Icseaturtles, Incnis Mrsi, Iridescent, Isheden, Isomorphic, IvanLanin, J.delanoy, J04n, JForget, Jay, Jdude97, Jfdwolff, Jheald, JodyB, Jojhutton, Josh3580, KFP, KGasso, Keilana, Korg, Kuru, LC, Leafyplant, LeaveSleaves, Lee Daniel Crocker, Lk9, Lotje, Luk, Matilda, Matt Crypto, Mercurywoodrose, Metamagician3000, Michael Hardy, Miguel Andrade, Mike4ty4, Mikeblas, Mmernex, Morken, MrStalker, MuthuKutty, Mygerardromance, Neg, NickShaforostoff, Nickthebest, Nis81, Ohnoitsjamie, Ojigiri, OlEnglish, Oldekop, Once in a Blue Moon, OrgasGirl, Petersam, Pinkychanti, Poccil, Poor Yorick, PrologFan, Quest for Truth, Quevaal, Qxz, Razorflame, RedWolf, Reina riemann, RexNL, Roadrunner, Rochus, Rockero, Rogerd, Rror, Rxblair, SHallathome, SJP, Salvio giuliano, Sam Hocevar, Schapel, Shze, SimonP, Siroxo, Sligocki, Slon02, Smite-Meister, Solar Rocker, Sooner Dave, Splash, Stevenj, Strmore, Svick, Tabletop, Tassedethe, Technopilgrim, Thatdog, The Thing That Should Not Be, Tightill, TonyW, ToweySoftware, Tpbradbury, Trevor MacInnis, TutterMouse, Tyrol5, Ukexpat, Uriyan, WadeSimMiser, WarthogDemon, Wik, Willking1979, 
Wtmitchell, Wtshymanski, Ww, Xian777, Ylloh, Yt95, ZachPruckowski, 142 ,لیقع فشاکanonymous edits Simulation Source: http://en.wikipedia.org/w/index.php?oldid=417426801 Contributors: 16@r, 6024kingedward1, AManWithNoPlan, Aaron Solomon Adelman, Aavviof, Acdoyle2000, Addshore, Aeroman213, Aerotone, Aetheling, Agilemolecule, Agostinobruzzone, Ahanberg, Alan Fitzgerald, Albert ip, Aleenf1, Amorymeltzer, Andre Engels, AndrewDressel, Anomalocaris, Ap, Arasut, Arthena, Arthur Rubin, Asparagus, Atlant, Axlq, B7582, BIONICLE233, BMF81, BackwardsBoy, Bapho, BarryList, Barticus88, Bdalgarno, Bdel63, Behaafarid, Bel.Alena, Ben Ben, Bobbajobb, Bobblehead, Bobet, Brucevdk, C.Fred, CLandsberg85, COMPFUNK2, Caitlin Timms, Capricorn42, Cddejong, Celticsfan730, ChemGardener, Cleared as filed, Closedmouth, Commander Keane, Cortes IDS5717C, Cyclonenim, Dannyc77, Dastly75, Davehi1, David Wright, Davidcofer73, Dayorkmd, DeepAnim1, Dgibson, Dhchilies, Dirkbb, Dlacombe, Drbreznjev, Duk, Enoch the red, EnumaElish, Ephebi, Erkan Yilmaz, Eterry, Eve Hall, EyalBrill, EyeSerene, Fabricationary, FactChecker1199, Felonyboy, Fenice, Fordmadoxfraud, Fox, Franciscoesquembre, Frankenpuppy, G716, GJeffery, Gaius Cornelius, Gdawgrancid, Gene Hobbs, Geniac, Gfoz, Gh5046, Gianfranco, Giftlite, GraemeL, Graham87, Greensburger, Guyd, Gz33, Hadal, Haffner, Hesperian, Hodsondd, Holdendp, Hongooi, Ian Geoffrey Kennedy, Icairns, Icmauri84, Impact red, Indon, J.delanoy, J04n, JamesLee, Jdu, JeremyA, Jeremykemp, Jeroen12321, Jhadley6, Jillglen, Jimbojimbo2, Jlewis501, Jm0130, Jmkim dot com, Joewski, Johnpseudo, Jonathan Drain, Jonemerson, Joneskarenl, Joyous!, Jpk, Jskiles1, Juansempere, Jvs.cz, KJS77, Kahmed198, Kai.velten, Kak Dela?, Kayau, Kdakin, Kkorpinen, Kqharvey, KrakatoaKatie, Kubanczyk, Kuffner, Kuru, Kylin.Ma, LA2, LeavesFX, Leuko, Lexor, Liao, LilHelpa, Lim Sem Chow, Lindsay658, Listowy, Llacy, Loopblair, Lorna24, Lupo, M7, MC10, MKFI, MacGyverMagic, Malcolmxl5, Manop, Marasmusine, Master on EN, 
Maurice Carbonaro, Mausy5043, Maxnegro, McGeddon, Mdd, Mitaphane, Mlewis000, MrOllie, Mynocturama, Nargoth, Neparis, Nick Number, Nivix, NoNameXII, Normsch, Nv8200p, Olaf Davis, Oli Filth, Omicronpersei8, Oxymoron83, Oğuz Ergin, Petedarnell, Petrov pv, Philip Trueman, PhilipMW, Pixie2000, Poiuyt Man, Ponyo, Powo, Pranav v, Prolog, Pwaddell, R'n'B, R. fiend, RDBrown, RGeurtz, RandomXYZb, Randomran, Rettetast, RexBrynen, Rich Farmbrough, Rjwilmsi, Rlippert, Roberthu179, Roger.smith, SMcCandlish, Satish2106, Scarletb, Scoo, Sega381, Serge, Sergio.ballestrero, ShelfSkewed, SimonJETaylor, Simulation123simulation123, Slakr, Smith609, Sp3, Sphinx, Stack, StaticGull, Stephanhartmannde, Stephenb, Stephenchou0722, Stopitallready, Stricklandjs59, Tarotcards, TastyPoutine, Tawker, Tecknotrove, Tedickey, Telamon, The Anome, The Thing That Should Not Be, The demiurge, The wub, ThreePD, Thseamon, Tiddly Tom, Tim.cooke2007, Truecobb, Tuckerj1976, Valueyou, Vespristiano, VictorAnyakin, Vikom, Vsims, Vuo, Walter Görlitz, Wiki alf, WikiDao, Wikid77, WilfriedC, Will Beback Auto, Wisdom89, Yeom0609, Ymoore33, Zaphod Beeblebrox, Zennie, 467 anonymous edits Plasma modeling Source: http://en.wikipedia.org/w/index.php?oldid=393311294 Contributors: Abdulmismail, Dcoetzee, Debresser, GEMCIPS, Jcandy, Jschutkeker, Maxime.lesur, Mindgame123, Rich Farmbrough, RockScient, Rror, Tamtamar, Will Pittenger, 7 anonymous edits Equations of motion Source: http://en.wikipedia.org/w/index.php?oldid=418308300 Contributors: Aboalbiss, Abtract, Alzarian16, AndrewDressel, Angelamaher, Atomiktoaster, Awkwardusername, Bean49, Berland, Bevo, Boing! 
said Zebedee, BoomerAB, Bryan Derksen, Bubba73, Capricorn42, Catgut, Charles Matthews, Clarince63, Ctjf83, DARTH SIDIOUS 2, Darkspots, Db099221, DerHexer, DewiMorgan, Doulos Christos, Dr Greg, Drw25, EdC, Epbr123, Falcon8765, Fezanmian, Gandalf61, Gavinlobo, Gebjon, Gene Nygaard, GeoGreg, Giftlite, GigaAndy, Goldom, Grafen, Grilo-TC, Harryboyles, Henrygb, Herbee, Icairns, Inwind, It Is Me Here, JLaTondre, JRSpriggs, Jane Fairfax, Jaxl, Jorunn, Juliancolton, Kieranmrhunt, Kjhf, Kwswim08, Kylemcinnes, LOTRrules, Largoplazo, Lord fabs, Lukelukematt, Lumidek, Mentifisto, Mets501, Michael Hardy, Muwaffaq, Narendra Sisodiya, Natural Philosopher, Neil9999, Nikolai T, NotAnonymous0, Olathe, OldakQuill, Paolo.dL, Parshvam Jain, Patrick, Paul August, Peterlin, Phatmonkey, PhySusie, RMFan1, Radagast83, Ram einstein, Random18, Reach Out to the Truth, Rhyshuw1, Roadrunner, Samw, Sanpaz, Shearyears394, Snow93, Somody, Sonett72, Stevertigo, Taken567, Tarquin, Telewatho, Thehotelambush, Thorwald, Tom harrison, Turgan, VIGNERON, Venkat.nadimpalli, Vishnava, Wereon, WhiteHatLurker, WikiHaquinator, Willking1979, Wolfkeeper, Wood Thrush, Wwoods, Yaris678, Yevgeny Kats, Zbvhs, Zzyzx11, 247 anonymous edits Maxwell's equations Source: http://en.wikipedia.org/w/index.php?oldid=415344795 Contributors: 130.225.29.xxx, 16@r, 213.253.39.xxx, A.C. 
Norman, Acroterion, Ahoerstemeier, Alan Peakall, Alejo2083, Almeo, Ambuj.Saxena, Ancheta Wis, Andre Engels, Andrei Stroe, Andries, AndyBuckley, Anthony, Antixt, Anville, Anythingyouwant, Ap, Areldyb, Arestes, Army1987, Art Carlson, Asar, AugPi, Aulis Eskola, Avenged Eightfold, AxelBoldt, BD2412, Barak Sh, BehzadAhmadi, BenBaker, BenFrantzDale, Bender235, Berland, Bernardmarcheterre, Blaze Labs Research, Bob1817, Bora Eryilmaz, Brainiac2595, Brendan Moody, Brequinda, Brews ohare, Bryan Derksen, CYD, Calliopejen1, Can't sleep, clown will eat me, CanDo, Cardinality, Cassini83, Charles Matthews, Childzy, Chodorkovskiy, Cloudmichael, Coldwarrier, Colliand, Comech, Complexica, Conversion script, Cooltom95, Corkgkagj, Courcelles, Cpl.Luke, Craig Pemberton, Cronholm144, Crowsnest, D6, DAGwyn, DGJM, DJIndica, Daniel.Cardenas, David Tombe, David spector, Delirium, DeltaIngegneria, Dgrant, Dicklyon, Dilwala314, Dmr2, Donarreiskoffer, Donreed, DrSank, Dratman, DreamsReign, Drw25, Dxf04, Długosz, El C, Electrodynamicist, Eliyak, Enok.cc, Enormousdude, Farhamalik, Fgnievinski, Fibonacci, Find the way, Finell, Fir-tree, Firefly322, First Harmonic, Fizicist, Fjomeli, Fledylids, Freepopcornonfridays, Fuhghettaboutit, Gaius Cornelius, Geek1337, Gene Nygaard, Geometry guy, George Smyth XI, GeorgeLouis, Ghaspias, Giftlite, Giorgiomugnaini, Giraffedata, Glicerico, GordonWatts, Gremagor, Gseryakov, H.A.L., HaeB, Headbomb, Herbee, Heron, Hope I. 
Chen, Hpmv, Icairns, Ignorance is strength, Igor m, Ixfd64, Izno, JATerg, JNW, JRSpriggs, JTB01, JabberWok, Jaknelaps, Jakohn, Jan S., Janfri, Jao, Jasón, Jfrancis, Joconnor, Johann Wolfgang, JohnBlackburne, Johnlogic, Jordgette, Joseph Solis in Australia, Jpbowen, Jtir, JustinWick, Karada, Karl Dickman, Keenan Pepper, Kelvie, KingofSentinels, Kissnmakeup, Kooo, Kragen, Kri, Kzollman, L-H, LAUBO, Lambiam, Larryisgood, Laurascudder, LedgendGamer, Lee J Haywood, Lethe, Light current, LilHelpa, Linas, Linuxlad, Lir, Lixo2, Lockeownzj00, Loom91, Looxix, Lseixas, Luk, LutzL, MFH, MarSch, Marek69, Marmelad, Marozols, Masudr, MathKnight, Maxim Razin, Meier99, Melchoir, MeltBanana, Metacomet, Mets501, Mfrosz, Mgiganteus1, Michael Hardy, Michielsen, Micru, Miguel, Miserlou, Mjb, Mleaning, Modster, Mokakeiche, Monkeyjmp, Mpatel, Msablic, Msh210, NHRHS2010, Nabla, Nakon, Nebeleben, Neparis, NewEnglandYankee, Niteowlneils, Nmnogueira, Nousernamesleft, Nudve, Oleg Alexandrov, Omegatron, One zero one, Opspin, Orbst, Out of Phase User, Paksam, Paolo.dL, Paquitotrek, Passw0rd, Patrick, Paul August, Paul D. 
Anderson, Peeter.joot, Pervect, Pete Rolph, Peterlin, Phlie, Phys, Physchim62, Pieter Kuiper, Pigsonthewing, Pinethicket, PiratePi, PlantTrees, Pouyan12, Qrystal, Quibik, RG2, RK, RandomP, Ranveig, Rdrosson, Red King, Reddi, Reedy, Revilo314, Rgdboer, Rhtcmu, Rich Farmbrough, Rjwilmsi, Rklawton, Roadrunner, RogierBrussee, Rogper, Rossami, Rudchenko, Rudminjd, S7evyn, SJP, Sadi Carnot, Salsb, Sam Derbyshire, Sanders muc, Sannse, Sbyrnes321, Scurvycajun, SebastianHelm, Selfstudier, Sgiani, Shamanchill, Shanes, Sheliak, Shlomke, Slakr, Sonygal, Spartaz, Srleffler, Steve Quinn, Steve p, Steven Weston, Stevenj, StewartMH, Stikonas, StradivariusTV, TStein, Tarquin, TeaDrinker, Template namespace initialisation script, Tercer, That Guy, From That Show!, The Anome, The Cunctator, The Original Wildbear, The Sanctuary Sparrow, The Wiki ghost, The undertow, TheObtuseAngleOfDoom, Thincat, Thingg, Tim Shuba, Tim Starling, Tkirkman, Tlabshier, Tobias Bergemann, Tom.Reding, Toymontecarlo, Tpbradbury, Treisijs, Trelvis, Trusilver, Truthnlove, Tunheim, Urvabara, Warfvinge, Warlord88, Waveguy, Wik, Wing gundam, Woodstone, Woohookitty, Wordsmith, WriterHound, Wtshymanski, Wurzel, Wwoods, XJamRastafire, Xenonice, Xonqnopp, Yamaguchi先生, Yevgeny Kats, Youandme, Zhenyok 1, Zoicon5, ^musaz, 老陳, 792 anonymous edits Algorithm Source: http://en.wikipedia.org/w/index.php?oldid=420571217 Contributors: "alyosha", 0, 12.35.86.xxx, 128.214.48.xxx, 151.26.10.xxx, 161.55.112.xxx, 204.248.56.xxx, 24.205.7.xxx, 747fzx, 84user, 98dblachr, APH, Aarandir, Abovechief, Abrech, Acroterion, Adam Marx Squared, Adamarthurryan, Adambiswanger1, Addshore, Aekamir, Agasta, Agent phoenex, Ahy1, Alcalazar, Ale2006, Alemua, Alex43223, Alexandre Bouthors, Alexius08, Algogeek, Allan McInnes, Amberdhn, Andonic, Andre Engels, Andreas Kaufmann, Andrejj, Andres, Anomalocaris, Antandrus, Anthony, Anthony Appleyard, Anwar saadat, Apofisu, Arvindn, Athaenara, AxelBoldt, Azurgi, B4hand, Bact, Bapi mahanta, Bart133, Bb vb, 
BeavisSanchez, Benn Adam, Bethnim, Bill4341, BillC, Billcarr178, Blankfaze, Bob1312, Bobblewik, Boing! said Zebedee, Bonadea, Bongwarrior, BorgQueen, Boud, Brenont, BriEnBest, Brion VIBBER, Brutannica, Bryan Derksen, Bth, Bucephalus, CBM, CRGreathouse, Cameltrader, CarlosMenendez, Cascade07, Cbdorsett, Cedars, Chadernook, Chamal N, Charles Matthews, Charvex, Chasingsol, Chatfecter, Chinju, Chris 73, Chris Roy, Cic, Citneman, Ckatz, Clarince63, Closedmouth, Cmdieck, Colonel Warden, Conversion script, Cornflake pirate, Corti, CountingPine, Crazysane, Cremepuff222, Curps, Cybercobra, Cyberjoac, Cyrusace, DASSAF, DAndC, DCDuring, Danakil, Danger, Daven200520, David Eppstein, David Gerard, Dbabbitt, Dcoetzee, DeadEyeArrow, Deadcracker, Deeptrivia, Deflective, Delta Tango, Den fjättrade ankan, Deor, Depakote, DerHexer, Derek farn, DevastatorIIC, Dgrant, Diannaa, Diego Moya, Dinsha 89, Discospinster, Dmcq,

**Article Sources and Contributors
**

DopefishJustin, Dreftymac, Drilnoth, Drpaule, Drrevu, DslIWG,UF, Duncharris, Dwheeler, Dylan Lake, Dysprosia, EconoPhysicist, Ed Poor, Ed g2s, Editorinchief1234, Eequor, Efflux, El C, ElectricRay, Electron9, ElfMage, Ellegantfish, Eloquence, Emadfarrokhih, Epbr123, Eric Wester, Eric.ito, Erik9, Essjay, Eubulides, Everything counts, Evil saltine, EyeSerene, Fabullus, Falcon Kirtaran, Fantom, Farosdaughter, Farshadrbn, Fastfission, Fastilysock, Fernkes, Fetchcomms, FiP, FlyHigh, Fragglet, Frecklefoot, Fredrik, Friginator, Frikle, GOV, GRAHAMUK, Gabbe, Gaius Cornelius, Galoubet, Gandalf61, Gary King, Geniac, Geo g guy, Geometry guy, Ghimboueils, Gianfranco, Giantscoach55, Giftlite, Gilgamesh, Giminy, Gioto, Glass Sword, Gnowor, Gogo Dodo, Goochelaar, Goodnightmush, Googl, GraemeL, Graham87, Gregbard, Groupthink, Grubber, Gubbubu, Gurch, Guruduttmallapur, Guy Peters, Guywhite, H3l1x, Hadal, Hairy Dude, Hamid88, Hannes Eder, Hannes Hirzel, Harryboyles, Harvester, HenryLi, HereToHelp, Heron, Hexii, Hfastedge, Hiraku.n, Hmains, Hu12, Hurmata, IGeMiNix, Iames, Ian Pitchford, Imfa11ingup, Inkling, InterruptorJones, Intgr, Iridescent, Isis, Isofox, Ixfd64, J.delanoy, JForget, JIP, JSimmonz, Jacomo, Jacoplane, Jagged 85, Jaredwf, Jeff Edmonds, Jeronimo, Jersey Devil, Jerzy, Jidan, JoanneB, Johan1298, Johantheghost, Johneasley, Johnsap, Jojit fb, Jonik, Jonpro, Joosyfoo, Jorvik, Josh Triplett, Jpbowen, Jtvisona, Jundi78, Jusdafax, Jóna Þórunn, K3fka, KHamsun, Kabton14, Kanags, Kanjy, Kanzure, Kazvorpal, Keilana, Kenbei, Kevin Baas, Kh0061, Khakbaz, Kku, Kl4m, Klausness, Kntg, Kozuch, Kragen, Krellis, LC, Lambiam, LancerSix, Larry R. 
Holmgren, Ldo, Ldonna, Leszek Jańczuk, Levineps, Lexor, Lhademmor, Lightmouse, Lilwik, Ling.Nut, Lissajous, Loggerjack, Lucyin, Lumidek, Lumos3, Lupin, Luís Felipe Braga, MARVEL, MSPbitmesra, MagnaMopus, Makewater, Makewrite, Maldoddi, Malleus Fatuorum, Mange01, Mani1, ManiF, Manik762007, Marek69, Mark Dingemanse, Markaci, Markh56, Markluffel, Marysunshine, MathMartin, Mathviolintennis, Matt Crypto, MattOates, Maurice Carbonaro, Mav, Maxamegalon2000, McDutchie, Meowist, Mesimoney, Mfc, Michael Hardy, Michael Slone, Michael Snow, MickWest, Miguel, Mikael Häggström, Mikeblas, Mindmatrix, Mission2ridews, Miym, Mlpkr, Mpeisenbr, MrOllie, Mttcmbs, Multipundit, MusicNewz, MustangFan, Mutinus, Mxn, Nanshu, Napmor, Nikai, Nikola Smolenski, Nil Einne, Nmnogueira, Noisy, Nummer29, Obradovic Goran, Od Mishehu, Odin of Trondheim, Ohnoitsjamie, Onorem, OrgasGirl, Orion11M87, Ortolan88, Oskar Sigvardsson, Oxinabox, Oxymoron83, P Carn, PAK Man, PMDrive1061, Paddu, PaePae, Pascal.Tesson, Paul August, Paul Foxworthy, Paxinum, Pb30, Pcap, Pde, Penumbra2000, Persian Poet Gal, Pgr94, PhageRules1, Philip Trueman, Philipp Wetzlar, Pit, Plowboylifestyle, Poor Yorick, Populus, Possum, PradeepArya1109, Proffesershean, Quendus, Quintote, Quota, Qwertyus, R. S. 
Shaw, Raayen, RainbowOfLight, Randomblue, Raul654, Rdsmith4, Reconsider the static, Rejka, Rettetast, RexNL, Rgoodermote, Rholton, Riana, Rich Farmbrough, Rizzardi, RobertG, RobinK, Rpwikiman, Rror, RussBlau, Ruud Koot, Ryguasu, SJP, Salix alba, Salleman, SamShearman, SarekOfVulcan, Savidan, Scarian, Seanwal111111, Seb, Sesse, Shadowjams, Shipmaster, Silly rabbit, SilverStar, Sitharama.iyengar1, SlackerMom, Snowolf, Snoyes, Soler97, Sonjaaa, Sophus Bie, Sopoforic, Spankman, Speck-Made, Spellcast, Spiff, Splang, Sridharinfinity, Stephan Leclercq, Storkk, Sundar, Susurrus, Swerdnaneb, Systemetsys, TakuyaMurata, Tarquin, Taw, Tempodivalse, The Firewall, The Fish, The High Fin Sperm Whale, The Nut, The Thing That Should Not Be, The ansible, TheGWO, TheNewPhobia, Thecarbanwheel, Theodore7, Tiddly Tom, Tide rolls, Tim Marklew, Timc, Timhowardriley, Timir2, Tizio, Tlesher, Tlork Thunderhead, Tobby72, Toncek, Tony1, Torchwoodwho, Trevor MacInnis, Treyt021, TuukkaH, UberScienceNerd, Urenio, User A1, V31a, Vasileios Zografos, Vikreykja, Vildricianus, Vincent Lextrait, Wainkelly, Waltnmi, Wavelength, Wayiran, Waynefan23, Weetoddid, Wellithy, Wexcan, Who, Whosyourjudas, WhyDoIKeepForgetting, WikHead, Wiki-uk, Willking1979, WillowW, Winston365, Woohookitty, Wvbailey, Xact, Xashaiar, Yamamoto Ichiro, Yintan, Ysindbad, Zfr, Zocky, Zondor, Zoney, Zundark, 965 anonymous edits

Computer programming Source: http://en.wikipedia.org/w/index.php?oldid=419571270 Contributors: *drew, 1exec1, 206.26.152.xxx, 209.157.137.xxx, 64.24.16.xxx, 84user, ABF, AKGhetto, AbstractClass, Acalamari, Acroterion, AdamCox9, Adrignola, Adw2000, Aeram16, Aeternus, AgentCDE, Ahoerstemeier, Aitias, Akanemoto, Al Lemos, AlanH, Alansohn, Alberto Orlandini, Alex.g, AlistairMcMillan, AllCalledByGod, Alyssa3467, Amire80, Ancheta Wis, Andonic, Andrejj, Andres, AndrewHowse, Andy Dingley, AnnaFrance, Antonielly, Antonio Lopez, AquaFox, Arnabatcal, ArnoldReinhold, Arvindn, Asijut, AtticusX, Auroranorth, Avoided,
Bakabaka, Banaticus, Bangsanegara, Betterworld, Bevo, BiT, Bigk105, Blackworld2005, Bluemoose, Bobo192, Bonadea, Bookofjude, Bootedcat, Boson, Bougainville, Breadbaker444, Brianga, Brichard12, Brighterorange, Brother Dysk, Bubba hotep, Bucephalus, BurntSky, Butterflylunch, C550456, CRGreathouse, Caltas, Can't sleep, clown will eat me, CanadianLinuxUser, Cander0000, Capi, Capi crimm, Capricorn42, Captain Disdain, Centrx, Cflm001, Cgmusselman, CharlesC, CharlotteWebb, Chirp Cricket, Chocolateboy, Chovain, ChrisLoosley, Christopher Agnew, Christopher Fairman, Chriswiki, Chuck369, Ciaccona, Closedmouth, Cmtam922, Cnilep, Cnkids, Colonies Chris, Cometstyles, Conversion script, Crazytales, Cstlarry, Curps, Curvers, Cybercobra, CzarB, DARTH SIDIOUS 2, DMacks, DVD R W, Daekharel, Damian Yerrick, Damieng, DanP, Danakil, Dante Alighieri, Daverose 33, Davey-boy-10042969, DavidCary, DavidLevinson, Davidwil, Daydreamer302000, Dcljr, Deepakr, Dekisugi, DerHexer, Derek farn, Deryck Chan, Dfmclean, Dialectric, Diberri, Diego Moya, Digitize, Discospinster, Dkasak, Doc9871, Dominic7848, Donald Albury, Donhalcon, DoorsRecruiting.com, Dougofborg, Downtownee, Dr Dec, Dravir5, Drphilharmonic, Dureo, Dusto4, Dylan Lake, Dysprosia, ERcheck, Ed Poor, Edderso, Edward, Edward Z. 
Yang, Eeekster, Eiwot, El C, ElAmericano, Elektrik Shoos, Elendal, Elf, Elkman, Emperorbma, Epbr123, Ephidel, Epolk, Ericbeg, Essexmutant, Etphonehome, Extremist, F41t3r, Fakhr245, Falcon Kirtaran, Falcon8765, Fazilati, Femto, Fg, Fratrep, Frecklefoot, FreplySpang, Frosted14, FunPika, Fvw, Galoubet, Gamernotnerd, Gandalf61, Garik, Gary2863, Garzo, GeorgeBills, Geremy659, Ghewgill, Ghyll, Giftlite, Gildos, Gilliam, Glass Tomato, GoodDamon, Goodvac, Goooogler, Gploc, Gracenotes, GraemeL, Graham87, Granpire Viking Man, Gregsometimes, Grison, Grouf, Guaka, Guanaco, Gwernol, Gökhan, Habbo sg, Hadal, Hairy Dude, Hanberke, Handbuddy, Hannes Hirzel, Happyisenough, Harvester, Heltzgersworld, Henry Flower, Hermione1980, Heron, Hipocrite, Hustlestar4, Iamjakejakeisme, Igoldste, Ikanreed, Imroy, Intgr, InvertRect, Inzy, Ivan Štambuk, Ixfd64, J.delanoy, J3ff, JLaTondre, Jackal2323, Jackol, Jacob.jose, Jagged 85, Jason4789, Jason5ayers, Jatos, Jaysweet, Jedediah Smith, Jeff02, Jmundo, Joedaddy09, Jojhutton, Josh1billion, JoshCarter15, Jpbowen, Jph, Jwestbrook, Jwh335, K.Nevelsteen, KHaskell, KJS77, KMurphyP, Kaare, Keilana, Kenny sh, Kglavin, Kifcaliph, Klutzy, Konstable, Laurusnobilis, Lee J Haywood, Leedeth, Leibniz, LeinadSpoon, Lemlwik, Lerdsuwa, Levineps, Lgrover, Lightmouse, LilHelpa, LittleOldMe, LittleOldMe old, Loadmaster, Logan, Loren.wilton, Luckdao, Luk, Lumos3, Luna Santin, Lysander89, M4573R5H4K3, M4gnum0n, MER-C, MacMed, Macrakis, Macy, Mahanga, Marek69, Mark Renier, Maroux, Marquez, Maryreid, Matthew, Matthew Stannard, Mauler90, Maury Markowitz, Maximaximax, Mdd, Mentifisto, Mesimoney, Metalhead816, Mets501, MiCovran, Michael Drüing, Michael93555, MichaelBillington, Michal Jurosz, MikeDogma, Mindmatrix, Minghong, Minimac, Minna Sora no Shita, Mipadi, Mjchonoles, Mr Stephen, MrFish, MrOllie, Msikma, Mwanner, Mxn, NERIUM, NHRHS2010, Nagy, Nanshu, Narayanese, Neilc, Nephtes, Nertzy, Netkinetic, Netsnipe, Neurovelho, NewEnglandYankee, Nigholith, Nikai, Ningauble, Nk, 
Notheruser, Nothingofwater, Nubiatech, Nuno Tavares, Nwbeeson, OSXfan, Obli, Ohnoitsjamie, Optim, Orangutan, Orlady, Oxymoron83, P.jansson, Paperfork, Paul D. Buck, Paxse, Pcu123456789, Peberdah, Pedro, Philip Trueman, PhilipO, Phoe6, Piet Delport, Pilotguy, Pimpedshortie99, Pinethicket, Pkalkul, Plugwash, Pm2214, Poco a poco, Pointillist, Poor Yorick, Poweroid, Prashanthellina, Premvvnc, Prgrmr@wrk, PrimeHunter, Pt9 9, Qirex, Qrex123, Quadell, RCX, Ragib, Rajnishbhatt, Rama's Arrow, Rasmus Faber, RaveTheTadpole, Rawling, RedWolf, RedWordSmith, RexNL, Ricky15233, Rl, Rmayo, Robert Bond, Robert L, Robert Merkel, Robertinventor, Rodolfo miranda, Ronz, Rrelf, Rror, Rsg, Ruud Koot, Rwwww, S.K., S0ulfire84, SDC, Salv236, Sanbeg, Sardanaphalus, Sasajid, Satansrubberduck, SchfiftyThree, Schwarzbichler, SciAndTech, Scoop Dogy, Scwlong, Sean7341, Senator Palpatine, Seyda, Shahidsidd, Shanak, Shappy, Shirik, Simeon, SimonEast, SkyWalker, Someone42, SpacemanSpiff, Splang, Starionwolf, SteinbDJ, Stephen Gilbert, Stephenb, Stephenbooth uk, Steven Zhang, Studerby, Suisui, SunCountryGuy01, Suruena, Svemir, Tablizer, TakuyaMurata, Tanalam, Tangent747, TarkusAB, Tedickey, Teo64x, TestPilot, Texture, The Divine Fluffalizer, The Mighty Clod, The Thing That Should Not Be, The Transhumanist (AWB), TheRanger, TheTito, Thedjatclubrock, Thedp, Thingg, Thisma, Tide rolls, Tifego, Timhowardriley, Tnxman307, Tobby72, Tobias Bergemann, Tom2we, TomasBat, TonyClarke, Tpbradbury, Trusilver, Turnstep, Tushar.kute, TuukkaH, Tweak232, Twest2665, Tysto, Tyw7, Udayabdurrahman, Ukexpat, Umofomia, Uncle Dick, Unforgettableid, Uniquely Fabricated, Unknown Interval, Userabc, Versus22, Vipinhari, Viriditas, Vladimer663, WBP Boy, Wadamja, WadeSimMiser, Wangi, Whazzups, Whiskey in the Jar, Wiki alf, Wikiklrsc, Wilku997, Wing (usurped), Witchwooder, Wizard191, Wj32, Worldeng, Wre2wre, Wtf242, Wtmitchell, Wwagner, Xagyg creator programmer, YellowMonkey, Yintan, Yk Yk Yk, Youssefsan, Yummifruitbat, Zeboober, 
Zkac050, Zoul1380, Zsinj, Zvn, 1012 anonymous edits

Fortran Source: http://en.wikipedia.org/w/index.php?oldid=419937790 Contributors: 16@r, 1exec1, Aaron Schulz, Abeliavsky, AdjustShift, Agateller, Agricola44, Aivosto, AmosWolfe, Anarchivist, Andre Engels, Andrejj, Arch dude, ArnoldReinhold, Atlant, Audriusa, AxelBoldt, Bachcell, Badger Drink, Bashen, Bchabala2, Bduke, Beliavsky, BenFrantzDale, Bender235, Benjaminevans82, Betacommand, Bevo, Bitsmart, Blaxthos, Bobdc, Brion VIBBER, Broh., Bryan, Bubba73, Bunnyhop11, CRGreathouse, CYD, Captain Fortran, Casey56, Changcho, ChaosCon, Chbarts, ChongDae, Chris, Closeapple, Conversion script, Corti, Corvus cornix, Cromis, Crumley, Csabo, Cybercobra, D. F. Schmidt, DBrane, DMacks, DNewhall, DRady, Damian Yerrick, Dan100, Danakil, Dav4is, David Biddulph, David.Monniaux, Dcoetzee, Dead3y3, Dejvid, Dennette, DerHexer, Derek Ross, Derek farn, DesertAngel, Dmsar, DoctorWho42, Dpbsmith, Drilnoth, Drj, Duja, Dune, Dycedarg, Dysprosia, Eagleal, EncMstr, Epbr123, Ericamick, Eschnett, Eubulides, Eustress, Evolauxia, Ewlyahoocom, Fireuzer, Flex, Flowerheaven, Frecklefoot, Funandtrvl, Furrykef, Fæ, Gaius Cornelius, Gantlord, Gareth Owen, Geoff97, Georg Peter, GeorgeMoney, Ghettoblaster, Gjd001, Glenn, Graham87, Greg L, Greg Lindahl, Gringer, Gscshoyru, Gudeldar, H Debussy-Jones, H2g2bob, Hairy Dude, Herbee, Howardjp, I already forgot, I dream of horses, II MusLiM HyBRiD II, Infologue, Intgr, Inwind, Iridescent, IsaacGS, Isis, Iulian.serbanoiu, Ixfd64, JW1805, Jacobolus, Jamesgibbon, Jerryobject, JonEAhlquist, Jossi, Jpfagerback, Jrockley, Julesd, Jvhertum, Jwmwalrus, Jwoodger, K-car, Kajasudhakarababu, Karl-Henner, Kbh3rd, Keflavich, Kenyon, Kglavin, KieferSkunk, Kleb, Knebel, KymFarnik, LOL, LOTRrules, LanceBarber, Larsivi, Leandrod, Lerdsuwa, Liao, Ligulem, Little Mountain 5, Ljn917, Lotje, Lousyd, Luminifer, Maghnus, Malaki1874, MarcusMaximus, Mark Foskey, MarkSweep, Mathchem271828, Mathmo, MattGiuca, Matthewdunsdon, Mav,
Mboverload, Meisam, Michael Hardy, Misha Stepanov, Mjchonoles, Mortus Est, Mr Wednesday, Mr.Fortran, Muhandis, Muro de Aguas, Nanshu, NapoliRoma, Nbarth, Neuralwiki, Nikai, Nneonneo, Northumbrian, Ojcit, Okedem, Oligomous, Opelio, Ospix, Paul August, Paxsimius, Pgr94, Phil Boswell, Plasynins, Pmcjones, Pnm, Pol098, Poor Yorick, Quadrescence, RTC, Rafiko77, Rangek, RaveRaiser, Raysonho, Rchrd, Rcingham, RedAndr, RedWolf, Reedy, Rege, Rich Farmbrough, Rigmahroll, Rjwilmsi, Roadrunner, RobChafer, Robert Merkel, Robo.mind, Rockear, Rogerd, Rowan Moore, Roy Brumback, Rpropper, Rravii, Rudnei.cunha, RupertMillard, Ruud Koot, Rwwww, SMC, Salih, Saxton, Sbassi, Sccosel, Seba5618, Sequentious, Skew-t, Sleigh, Smilitude, Smithck0, SnowRaptor, SpaceFlight89, Spacerat3004, Stevenj, Strait, Stsp0269, Suruena, Svanslyck, T-bonham, TakuyaMurata, Tannin, Tarquin, Taylock, Template namespace initialisation script, The Cute Philosopher, The Recycling Troll, The Thing That Should Not Be, The morgawr, Thestarsatnight, Thom2002, Thumperward, Tiuks, Tokek, Tony Sidaway, Tripodics, Twas Now, UnicornTapestry, Urhixidur, Uriyan, User A1, UtherSRG, Van der Hoorn, Van helsing, Van.snyder, Vicarage, VictorAnyakin, Vina, Waggers, Wavelength, Wcrowe, Wdfarmer, Wernher, Wflong, Who, Widefox, Wiki Wikardo, WikiWookie, Wikiklrsc, Wikip rhyre, Wilkowiki, Williamv1138, Windharp, Woohookitty, Wormholio, Wws, Xamian, Xenfreak, Xlent, YUL89YYZ, Yakushima, Ylai, Yuletide, Yworo, Zap Rowsdower, ZeroOne, Zicraccozian, Zoltar0, 604 anonymous edits


Image Sources, Licenses and Contributors



Image:Dr.A.B.Rajib Hazarika & his kids.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Dr.A.B.Rajib_Hazarika_&_his_kids.jpg License: Public Domain Contributors: User:Rajah2770
Image:Half-derivative.svg Source: http://en.wikipedia.org/w/index.php?title=File:Half-derivative.svg License: Public Domain Contributors: User:GreenRoot
Image:International Morse Code.svg Source: http://en.wikipedia.org/w/index.php?title=File:International_Morse_Code.svg License: Public Domain Contributors: Rhey T. Snodgrass & Victor F. Camp, 1922
File:Horse simulator WWI.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Horse_simulator_WWI.jpg License: unknown Contributors: User:Scoo
File:Christer Fuglesang underwater EVA simulation for STS-116.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Christer_Fuglesang_underwater_EVA_simulation_for_STS-116.jpg License: Public Domain Contributors: NASA
File:lambda2 scherschicht.png Source: http://en.wikipedia.org/w/index.php?title=File:Lambda2_scherschicht.png License: unknown Contributors: Andreas Babucke
File:3DiTeams percuss chest.JPG Source: http://en.wikipedia.org/w/index.php?title=File:3DiTeams_percuss_chest.JPG License: unknown Contributors: Gene Hobbs
File:ugs-nx-5-engine-airflow-simulation.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Ugs-nx-5-engine-airflow-simulation.jpg License: GNU Free Documentation License Contributors: Freeformer, Normsch
Image:KSCFiringroom1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:KSCFiringroom1.jpg License: Public Domain Contributors: NASA
File:Vehicle simulator.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Vehicle_simulator.jpg License: Public Domain Contributors: Bapho, Mattes, Nv8200p, Trockennasenaffe
Image:VFPt dipole magnetic1.svg Source: http://en.wikipedia.org/w/index.php?title=File:VFPt_dipole_magnetic1.svg License: GNU Free Documentation License Contributors: User:Geek3
File:Magnetosphere rendition.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Magnetosphere_rendition.jpg License: Public Domain Contributors: NASA
Image:Magnetic core.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Magnetic_core.jpg License: Creative Commons Attribution 2.5 Contributors: Fayenatic london, Gribozavr, Uberpenguin
File:Polarization and magnetization.svg Source: http://en.wikipedia.org/w/index.php?title=File:Polarization_and_magnetization.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Marmelad
File:Molecular Vortex Model.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Molecular_Vortex_Model.jpg License: Creative Commons Attribution 3.0 Contributors: user:老陳
Image:Euclid flowchart 1.png Source: http://en.wikipedia.org/w/index.php?title=File:Euclid_flowchart_1.png License: Creative Commons Attribution 3.0 Contributors: User:Wvbailey
File:Euclid's algorithm structured blocks 1.png Source: http://en.wikipedia.org/w/index.php?title=File:Euclid's_algorithm_structured_blocks_1.png License: Creative Commons Attribution 3.0 Contributors: User:Wvbailey
File:Sorting quicksort anim.gif Source: http://en.wikipedia.org/w/index.php?title=File:Sorting_quicksort_anim.gif License: Creative Commons Attribution-Sharealike 2.5 Contributors: Berrucomons, Cecil, Chamie, Davepape, Diego pmc, Editor at Large, German, Gorgo, Howcheng, Jago84, JuTa, Kozuch, Lokal Profil, MaBoehm, Minisarm, Miya, Mywood, NH, PatríciaR, Qyd, Soroush83, Stefeck, Str4nd, W like wiki, 11 anonymous edits
File:Euclid's algorithm Book VII Proposition 2_2.png Source: http://en.wikipedia.org/w/index.php?title=File:Euclid's_algorithm_Book_VII_Proposition_2_2.png License: Creative Commons Attribution 3.0 Contributors: User:Wvbailey
File:Euclid's algorithm Inelegant program 1.png Source: http://en.wikipedia.org/w/index.php?title=File:Euclid's_algorithm_Inelegant_program_1.png License: Creative Commons Attribution 3.0 Contributors: User:Wvbailey
Image:IBM402plugboard.Shrigley.wireside.jpg Source: http://en.wikipedia.org/w/index.php?title=File:IBM402plugboard.Shrigley.wireside.jpg License: Creative Commons Attribution 2.5 Contributors: User:ArnoldReinhold
Image:PunchCardDecks.agr.jpg Source: http://en.wikipedia.org/w/index.php?title=File:PunchCardDecks.agr.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Arnold Reinhold
Image:H96566k.jpg Source: http://en.wikipedia.org/w/index.php?title=File:H96566k.jpg License: Public Domain Contributors: Courtesy of the Naval Surface Warfare Center, Dahlgren, VA., 1988.
File:Fortran acs cover.jpeg Source: http://en.wikipedia.org/w/index.php?title=File:Fortran_acs_cover.jpeg License: unknown Contributors: Uncredited
File:Ibm704.gif Source: http://en.wikipedia.org/w/index.php?title=File:Ibm704.gif License: Attribution Contributors: Lawrence Livermore National Laboratory
File:FortranCardPROJ039.agr.jpg Source: http://en.wikipedia.org/w/index.php?title=File:FortranCardPROJ039.agr.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Arnold Reinhold
File:FortranCodingForm.png Source: http://en.wikipedia.org/w/index.php?title=File:FortranCodingForm.png License: Public Domain Contributors: Original uploader was Agateller at en.wikipedia

License


Creative Commons Attribution-Share Alike 3.0 Unported http://creativecommons.org/licenses/by-sa/3.0/


This book is on the fractional derivative and the know-how of writing computer simulation codes, by Dr. A. B. Rajib Hazarika.

