
De Ce Prima Mea Conjectura Sau My First Conjecture Nu Poate Fi Demonstrata (Why My First Conjecture Cannot Be Proved)

Published by Spanu Dumitru Viorel

A text explaining why Prima mea Conjectura, or My First Conjecture, cannot be proved, with ideas for the search for a counterexample to this conjecture, which is a diophantine equation.

The text also covers Ore's conjecture and, likewise, the existence of God.



https://www.scribd.com/doc/125566226/De-Ce-Prima-Mea-Conjectura-Sau-My-First-Conjecture-Nu-Poate-Fi-Demonstrata

02/15/2013


Author: Spanu Dumitru Viorel, Romania

Why Prima mea Conjectura (My First Conjecture) cannot be proved. Ore's conjecture. About God.

http://www.cs.auckland.ac.nz/~chaitin/sciamer2.html

Randomness in Arithmetic
Scientific American 259, No. 1 (July 1988), pp. 80-85
by Gregory J. Chaitin

It is impossible to prove whether each member of a family of algebraic equations has a finite or an infinite number of solutions: the answers vary randomly and therefore elude mathematical reasoning.

What could be more certain than the fact that 2 plus 2 equals 4? Since the time of the ancient Greeks mathematicians have believed there is little---if anything---as unequivocal as a proved theorem. In fact, mathematical statements that can be proved true have often been regarded as a more solid foundation for a system of thought than any maxim about morals or even physical objects. The 17th-century German mathematician and philosopher Gottfried Wilhelm Leibniz even envisioned a ``calculus'' of reasoning such that all disputes could one day be settled with the words ``Gentlemen, let us compute!'' By the beginning of this century symbolic logic had progressed to such an extent that the German mathematician David Hilbert declared that all mathematical questions are in principle decidable, and he confidently set out to codify once and for all the methods of mathematical reasoning.

Such blissful optimism was shattered by the astonishing and profound discoveries of Kurt Gödel and Alan M. Turing in the 1930's. Gödel showed that no finite set of axioms and methods of reasoning could encompass all the mathematical properties of the positive integers. Turing later couched Gödel's ingenious and complicated proof in a more accessible form. He showed that Gödel's incompleteness theorem is equivalent to the assertion that there can be no general method for systematically deciding whether a computer program will ever halt, that is, whether it will ever cause the computer to stop running. Of course, if a particular program does cause the computer to halt, that fact can be easily proved by running the program. The difficulty lies in proving that an arbitrary program never halts.

I have recently been able to take a further step along the path laid out by Gödel and Turing. By translating a particular computer program into an algebraic equation of a type that was familiar even to the ancient Greeks, I have shown that there is randomness in the branch of pure mathematics known as number theory. My work indicates that---to borrow Einstein's metaphor---God sometimes plays dice with whole numbers!

This result, which is part of a body of work called algorithmic information theory, is not a cause for pessimism; it does not portend anarchy or lawlessness in mathematics. (Indeed, most mathematicians continue working on problems as before.) What it means is that mathematical

laws of a different kind might have to apply in certain situations: statistical laws. In the same way that it is impossible to predict the exact moment at which an individual atom undergoes radioactive decay, mathematics is sometimes powerless to answer particular questions. Nevertheless, physicists can still make reliable predictions about averages over large ensembles of atoms. Mathematicians may in some cases be limited to a similar approach.

My work is a natural extension of Turing's, but whereas Turing considered whether or not an arbitrary program would ever halt, I consider the probability that any general-purpose computer will stop running if its program is chosen completely at random. What do I mean when I say ``chosen completely at random''? Since at the most fundamental level any program can be reduced to a sequence of bits (each of which can take on the value 0 or 1) that are ``read'' and ``interpreted'' by the computer hardware, I mean that a completely random program consisting of n bits could just as well be the result of flipping a coin n times (in which a ``heads'' represents a 0 and a ``tails'' represents 1, or vice versa).
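The coin-flipping picture above can be sketched in a few lines: a "completely random program" of n bits is just n independent fair coin flips. A minimal illustration (the function name and the use of a seeded generator are my own choices, not the article's):

```python
import random

def random_program(n_bits, rng=random.Random(0)):
    """Choose a program 'completely at random': each bit is an
    independent fair coin flip (heads = 0, tails = 1)."""
    return "".join(rng.choice("01") for _ in range(n_bits))

program = random_program(8)
print(program)  # an 8-bit string such as "11010010"
```

The seed is fixed only so the sketch is reproducible; conceptually each bit comes from a genuine coin toss.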

The probability that such a completely random program will halt, which I have named Ω, can be expressed in terms of a real number between 0 and 1. (The statement Ω = 0 would mean that no random program will ever halt, and Ω = 1 would mean that every random program halts. For a general-purpose computer neither of these extremes is actually possible.) Because Ω is a real number, it can be fully expressed only as an unending sequence of digits. In base 2 such a sequence would amount to an infinite string of 0's and 1's.

Perhaps the most interesting characteristic of Ω is that it is algorithmically random: it cannot be compressed into a program (considered as a string of bits) shorter than itself. This definition of randomness, which has a central role in algorithmic information theory, was independently formulated in the mid-1960's by the late A. N. Kolmogorov and me. (I have since had to correct the definition.)

The basic idea behind the definition is a simple one. Some sequences of bits can be compressed into programs much shorter than they are, because they follow a pattern or rule. For example, a 200-bit sequence of the form 0101010101... can be greatly compressed by describing it as ``100 repetitions of 01.'' Such sequences certainly are not random. A 200-bit sequence generated by tossing a coin, on the other hand, cannot be compressed, since in general there is no pattern to the succession of 0's and 1's: it is a completely random sequence.

Of all the possible sequences of bits, most are incompressible and therefore random. Since a sequence of bits can be considered to be a base-2 representation of any real number (if one allows infinite sequences), it follows that most real numbers are in fact random. It is not difficult to show that an algorithmically random number, such as Ω, exhibits the usual statistical properties one associates with randomness. One such property is normality: every possible digit appears with equal frequency in the number. In a base-2 representation this means that as the number of digits of Ω approaches infinity, 0 and 1 respectively account for exactly 50 percent of Ω's digits.

A key technical point that must be stipulated in order for Ω to make sense is that an input program must be self-delimiting: its total length (in bits) must be given within the program itself. (This seemingly minor point, which paralyzed progress in the field for nearly a decade, is what entailed the redefinition of algorithmic randomness.) Real programming languages are self-delimiting, because they provide constructs for beginning and ending a program. Such constructs allow a program to contain well-defined subprograms, which may also have other subprograms nested in them. Because a self-delimiting program is built up by concatenating and nesting self-delimiting subprograms, a program is syntactically complete only when the last open subprogram is closed. In essence the beginning and ending constructs for programs and subprograms function respectively like left and right parentheses in mathematical expressions.
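The compression contrast described above (``100 repetitions of 01'' versus a coin-tossed string) can be demonstrated with any off-the-shelf compressor. A small sketch using Python's zlib; a practical compressor is only a crude proxy for algorithmic incompressibility, so this illustrates the idea rather than exhibits it exactly:

```python
import os
import zlib

# The patterned sequence: 100 repetitions of "01".
patterned = b"01" * 100

# A coin-tossed stand-in: 200 bytes from the OS's entropy source.
random_bytes = os.urandom(200)

print(len(zlib.compress(patterned)))     # far shorter than 200
print(len(zlib.compress(random_bytes)))  # roughly 200 or slightly more
```

The patterned input collapses to a handful of bytes (essentially the rule "repeat 01"), while the random input cannot be shrunk: with high probability it has no pattern for the compressor to exploit.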

If programs were not self-delimiting, they could not be constructed from subprograms, and summing the halting probabilities for all programs would yield an infinite number. If one considers only self-delimiting programs, not only is Ω limited to the range between 0 and 1 but also it can be explicitly calculated ``in the limit from below.'' That is to say, it is possible to calculate an infinite sequence of rational numbers (which can be expressed in terms of a finite sequence of bits) each of which is closer to the true value of Ω than the preceding number.
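The parenthesis analogy above can be made concrete: a stream of begin/end constructs forms one syntactically complete self-delimiting program exactly when the nesting depth never goes negative and returns to zero only at the very end. A minimal sketch (the tokens "B" and "E" are hypothetical stand-ins for a language's begin and end constructs):

```python
def is_complete_program(tokens):
    """True iff the token stream is exactly one self-delimiting
    program: depth never goes negative, and the outermost opener
    is closed only by the final token."""
    depth = 0
    for i, t in enumerate(tokens):
        if t == "B":
            depth += 1
        elif t == "E":
            depth -= 1
            if depth < 0:                      # closer with no opener
                return False
            if depth == 0 and i != len(tokens) - 1:
                return False                   # program ended early
    return depth == 0 and len(tokens) > 0

print(is_complete_program(list("BBEBEE")))  # True: nested subprograms
print(is_complete_program(list("BBE")))     # False: still open
print(is_complete_program(list("BEBE")))    # False: two programs, not one
```

Because completeness is detectable from the text itself, a program announces its own end, which is precisely the self-delimiting property the article requires.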

One way to do this is to systematically calculate Ω_n for increasing values of n; Ω_n is the probability that a completely random program up to n bits in size will halt within n seconds if the program is run on a given computer. Since there are 2^k possible programs that are k bits long, Ω_n can in principle be calculated by determining for every value of k between 1 and n how many of the possible programs actually halt within n seconds, multiplying that number by 2^-k and then summing all the products. In other words, each k-bit program that halts contributes 2^-k to Ω_n; programs that do not halt contribute 0.

If one were miraculously given the value of Ω with k bits of precision, one could calculate a sequence of Ω_n's until one reached a value that equaled the given value of Ω. At this point one would know all programs of a size less than k bits that halt; in essence one would have solved Turing's halting problem for all programs of a size less than k bits. Of course, the time required for the calculation would be enormous for reasonable values of k.

So far I have been referring exclusively to computers and their programs in discussing the halting problem, but it took on a new dimension in light of the work of J. P. Jones of the University of Calgary and Y. V. Matijasevic of the V. A. Steklov Institute of Mathematics in Leningrad. Their work provides a method for casting the problem as assertions about particular diophantine equations. These algebraic equations, which involve only multiplication, addition and exponentiation of whole numbers, are named after the third-century Greek mathematician Diophantos of Alexandria.

To be more specific, by applying the method of Jones and Matijasevic one can equate the statement that a particular program does not halt with the assertion that one of a particular family of diophantine equations has no solution in whole numbers. As with the original version of the halting problem for computers, it is easy to prove a solution exists: all one has to do is to plug in the correct numbers and verify that the resulting numbers on the left and right sides of the equal sign are in fact equal. The much more difficult problem is to prove that there are absolutely no solutions when this is the case.

The family of equations is constructed from a basic equation that contains a particular variable k, called the parameter, which takes on the values 1, 2, 3 and so on. Hence there is an infinitely large family of equations (one for each value of k) that can be generated from one basic equation for each of a ``family'' of programs. The mathematical assertion that the diophantine equation with parameter k has no solution encodes the assertion that the kth computer program never halts.
On the other hand, if the kth program does halt, then the equation has exactly one solution. In a sense the truth or falsehood of assertions of this type is mathematically uncertain, since it varies unpredictably as the parameter k takes on different values.

My approach to the question of unpredictability in mathematics is similar, but it achieves a much greater degree of randomness. Instead of ``arithmetizing'' computer programs that may or may not halt as a family of diophantine equations, I apply the method of Jones and Matijasevic to arithmetize a single program to calculate the kth bit in Ω_n.
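The asymmetry the article points out, between exhibiting a solution and ruling all solutions out, is easy to see in code: checking a claimed solution of a diophantine equation is purely mechanical. A tiny illustration with the Pell equation x^2 - 2y^2 = 1 (chosen only as a familiar example; it is not one of the Jones-Matijasevic equations):

```python
def verifies(x, y):
    """Check whether (x, y) solves x**2 - 2*y**2 == 1 by direct
    substitution -- the 'easy direction' described in the text."""
    return x ** 2 - 2 * y ** 2 == 1

print(verifies(3, 2))    # True:  9 - 8 == 1
print(verifies(4, 3))    # False: 16 - 18 != 1
print(verifies(17, 12))  # True:  289 - 288 == 1

# Proving that NO whole-number solution exists cannot, in general,
# be done by any such finite check -- that is the hard direction.
```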

The method is based on a curious property of the parity of binomial coefficients (whether they are even or odd numbers) that was noticed by Édouard A. Lucas a century ago but was not properly
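The parity property of binomial coefficients attributed to Lucas above is presumably his classical mod-2 result: C(n, k) is odd exactly when every base-2 digit of k is no greater than the corresponding digit of n, i.e. when k AND n equals k bitwise. A small verification sketch (my own check, not the article's construction):

```python
from math import comb

def binomial_is_odd(n, k):
    """Lucas's theorem mod 2: C(n, k) is odd iff the set bits of k
    form a subset of the set bits of n."""
    return (n & k) == k

# Exhaustive check for small n confirms the bitwise test matches
# direct computation of the binomial coefficients.
for n in range(64):
    for k in range(n + 1):
        assert binomial_is_odd(n, k) == (comb(n, k) % 2 == 1)
```

This bitwise characterization is what makes the parity of binomial coefficients so computationally tractable, and hence attractive for arithmetization.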
