
Gonzales, Analysa Marie

What Thousands of Years of Human Existence Have Amounted To

Albert Einstein once said, “The only source of knowledge is experience.” Very few times
in history did human innovation spark from originality. The way the brain works, innovation is
often nothing more than repudiation of something unoriginal that we have already come to know
or familiarize ourselves with. Einstein, however, also said “Imagination is more important than
knowledge… for knowledge is limited.” Ideas had to come from something. Quite often, some of
the greatest innovations occurred, not because someone decided to try something completely
original, but because they dared to dream, because they felt that simply being content with old
methods of doing things was ignorance at its worst. The greatest innovators in human history
were always those who carefully examined and even built upon their predecessors’ findings in
the name of bettering humanity.
As most agree that there was a beginning, that is likely the best place to start. In the
centuries preceding 3500 BC, mankind could very much be likened to the Flintstones family. Of
course, people didn’t own dinosaurs, nor did they travel in stone automobiles run by their
pedaling feet, but the Paleolithic-look of everything could not have been more accurate. The only
materials these primitive beings knew how to craft tools and weaponry out of were stones, and
even while copper had already been discovered, humans found that it was too easily bent to
produce any sufficient tools out of it. Around 3500 BC, however, the Stone Age came to an
abrupt end when the Maykop people, an ancient civilization in present day southern Russia,
discovered that melting together tin and copper enabled them to produce a metal far greater in
strength. Of course, during such times when widespread communication was unlikely (if not
impossible), the innovation's impact was felt at first by only a handful of civilizations, and it
would gradually diffuse throughout the known world in the centuries to come. Little did the people of these times
know that this single, and seemingly simple, innovation would spark the beginning of an entirely
new chapter in history, built upon the creation and evolution of the oldest applied science of
mankind. Starting with the Bronze Age, humans learned to make everything from aesthetic beads
and jewelry to better weapons and tools; and less than half a millennium later the Sumerians and
the Egyptians were already smelting iron, which would eventually lead to the production of steel,
the most valuable metal of the time. The ability to work and weld metals into stronger materials
meant a variety of things for both the ancient civilizations and today’s modern world. “Those
with tools and weaponry fashioned from the strongest available metals could easily gain the
upper hand in war. Those who held abundances of rare metals could gain economic power, and
those who engineered devices or any kind of architecture with strong metals could provide for
much safer and efficient means of getting things done,” ("Bronze Age." Encyclopædia
Britannica. 2010. Encyclopædia Britannica Online. 29 Sep. 2010). Most argue that the Iron Age
never truly ended, as mankind continues to expand upon the knowledge their ancestors gained
thousands of years ago. Some, however, will argue that we live in an age known as the Silicon
Age. Needless to say, these metallurgical advances expanded the potential and the capabilities of mankind
beyond that of any other innovation in human history and continue to serve as the building
blocks of today’s complex and innovative society.
In that same world, it is easy to forget that other intricate systems and methods created by
civilizations of ancient times still bear great resemblance to those used today. Perhaps that is
because those very systems used today are often taken for granted. Imagine, for a moment, living
in the heavily populated Indus Valley region around 5000 BC, a time long before the
introduction of sewage systems into society, a time without all the fancy underground pipes and
toilets. Today when we think of cleanliness, smelling fresh seems to be our definition of
hygiene; however, we often take for granted what sanitation systems really do for us. Taking a
peek back at civilization during such archaic times would be a real eye-opener, but the people of
today forget that they don’t need a time machine to take a gander at a society that lacked such
systems. One needs only look at those third world countries, primarily in Africa, where disease
runs rampant throughout the lands; where rivers –the very source of peoples’ drinking water-
often end up contaminated by hazardous wastes and an immeasurable amount of harmful
bacteria. It’s no wonder, then, why the famous Harrappan civilization of India is often revered
for its underground sanitation system, in systematic use by 2600 BC. Faced with the reality that
their precious drinking water was being contaminated by the very populace who thrived off of it,
the Harappan civilization engineered a system unlike anything the world had ever seen. In
conjunction with using aqueducts to transport fresh water to people out of immediate reach of the
Indus River, the Harappans installed drainage systems both inside and outside of their homes.
These drains led to an underground system of pipelines that would transport the wastes away
from the city. These “pipes” weren’t the kinds of pipes that we are accustomed to seeing today.
Rather, they were large holes with slight inclinations that would use gravity to direct the water
and the wastes down into a central area that carried it away from the city. In addition, the
Harappans purposely built elevated housing. Drains, they realized, could get stopped up, and the
possibility of flooding was a constant concern. To protect themselves from having to
wade in unclean water, they began building elevated houses that allowed for larger drains in the
roads. The interiors of their houses weren’t much different. As archeologist Jonathan Kenoyer
noted, “nearly every Harappan home had a bathing platform and a latrine,” (Discover Magazine).
Ultimately, listing superior sewerage systems as a remarkable innovation may get a good laugh
or two from some, but anyone with common sense would realize the risks and serious health
hazards associated with living in an unclean society. Today in developed countries, having
access to a toilet or a sink is considered a necessity, and the idea of not having one or the other is
simply mind boggling. Sanitation engineering has and will always be a field of utmost
importance. As some might say, it’s a dirty job but someone’s gotta do it!
Interestingly, the transportation of water to and from large populations has a long history
that tends to draw a lot from the history of other structures that were often instituted in the
process. As time passed and technological innovation evolved, many civilizations –including the
Harappan people, began to use the arch in their architecture and in the construction of their
aqueducts. The structure was found to have existed in architecture preceding even the Egyptians
and the Sumerians, and was used for practical construction projects by the Greeks and the
Etruscans, a civilization often credited with the invention and perfection of the arch due to its
famous Porta Augusta. Yet, the arch was never systematically used until the Romans discovered
the beautifully intricate science behind using such a seemingly simplistic structure. Borrowing
from the Greeks and the Etruscans, the Romans constructed absolutely stunning monuments and
temples that displayed their extravagant and refined arches. The arch, however, was not simply a
structure intended to make a city aesthetically appealing. It was a remarkable solution to a
problem that many civilizations had experienced in the early centuries. “Essentially, its usage
and installation enabled civilizations to build multiple stories and proper roofing for buildings
without the use of beams, which, until the 19th century, was seen as the only alternative method,”
(K., Alchin L. "Roman Arches." Roman Colosseum). After setting up a wooden frame, builders
would set wedge-shaped blocks, called voussoirs, on top of it, consciously setting the narrower
of the two sides of each block towards the opening of the arch so that they would lock together.
At the very tip of the arch, they placed a special voussoir known as the keystone, the single most
important piece of the arch. In doing so, they were completing a force distribution network that
would “eliminate tensile stresses in spanning an open space; all the forces would be resolved into
compressive stresses.” Or, in other words, all stresses felt on the upper voussoirs would be
directed downward into the voussoirs below them until it reached the impost, the last section of
the arch. That force would then travel down in a straight line from the impost to the bottom level
or ground the arch was set upon. The intricate and ingenious design allowed builders to
apply vast quantities of weight to arches without even blinking at the idea that the structure
would collapse.
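The force-resolution idea above can be sketched with a simplified statics model. The three-hinged idealization and the numbers below are illustrative assumptions, not part of the original account:

```python
def crown_thrust(load_w: float, span: float, rise: float) -> float:
    """Horizontal thrust at the crown of an idealized three-hinged arch
    carrying a point load at the keystone (simplified statics sketch)."""
    # Moment balance about one impost hinge for the half-arch:
    #   thrust * rise = (load_w / 2) * (span / 2)
    return (load_w / 2) * (span / 2) / rise

# A semicircular arch has a rise equal to half its span, so the force
# resolved into each impost is purely compressive and easily computed:
w = 10_000.0  # hypothetical keystone load, in newtons
print(crown_thrust(w, span=6.0, rise=3.0))  # 5000.0
```

The point of the sketch is the one the Romans exploited: whatever the load, the arch geometry resolves it into compression passed from voussoir to voussoir down into the imposts.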
Arguably, one of the single most important technological innovations of all time dates
back over three thousand years. Spanning a great length of the time over the course of human
existence, its standard use wasn't particularly noted or credited to anyone until Roger Bacon,
an Englishman in the 13th century, invented a newer way of using it. This transparent apparatus
would come to serve as the central unit of everything ranging from binoculars and scopes, to
bifocals and cameras. The lens, in all its glory, can easily be overlooked. After all, the telescope
was what finally enabled man to view other galaxies, and the microscope equally revealed a
hidden world beyond any viewing capabilities of the naked eye. In fact, historians argue, “the
lens found its first practical use in Assyria, serving three primary functions: as a means of
studying astronomy, as a magnifying glass for the sake of extravagantly detailed carvings, and
lastly as a burning-glass to start fires by concentrating sunlight,” (Optics and Optical
Engineering. Optics 1. “The History of Optics.”). This statement alone not only reveals that
those inventions such as the magnifying glass and the telescope, credited to relatively
modern science and individuals, existed long before their acclaimed debuts; but it also serves as
a reminder of the extensive application of the lens itself. Until its discovery reached the Roman
Empire, it is actually noted that individuals such as Emperor Nero found extensive use in holding
up individual emeralds to the eye during the famed gladiatorial games as a way of coping with
their myopia, which today would indicate that such emeralds must have been concave in
structure, since concave lenses diverge light to correct nearsightedness. Of course, the lens was
undoubtedly accepted as a much more convenient tool for viewing things at a long distance,
especially due to its clarity in color and translucency in comparison to the gems. Today,
scientists separate the lens into three distinct
categories: concave (curving inward at its surfaces), convex (bulging outward), or planar.
Though they refract light differently, and therefore produce images extremely diverse
in size, they all essentially work the same way. When light enters the lens, the lens itself causes
the light to converge if convex, or diverge if concave, depending upon how the light refracts
through the lens's curved surfaces. For humans, this can cause a lens to magnify or diminish the
size of an image. This, of course, has led to the lens's extensive use in correcting visual impairments.
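The converge-or-diverge behavior described here follows the standard thin-lens equation, 1/f = 1/d_o + 1/d_i, where a convex lens has a positive focal length and a concave lens a negative one. A minimal sketch with hypothetical distances in meters:

```python
def image_distance(focal_length: float, object_distance: float) -> float:
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Convex lens (f > 0): light converges to a real image on the far side.
d_i = image_distance(0.1, 0.3)
print(round(d_i, 3))         # 0.15
print(round(-d_i / 0.3, 2))  # -0.5 (image is inverted, half the object's size)
```

The same formula with a negative focal length yields a negative image distance, i.e. the virtual, diminished image a concave lens produces.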
It seems rather silly today to ponder on the fact that ancient civilizations such as the
Romans could construct and engineer marvels like the Great Coliseum, and the great Greek
minds like Pythagoras could calculate the constant pi and side lengths of geometric figures; yet
none could give a numeric solution to problems that involved subtracting a number’s entire value
from itself. Today, anyone with even the most rudimentary education could tell you that one
minus one is zero, but imagine: over two millennia ago, the Greeks were still arguing about
whether the absence of value could even be represented in a numeric form. After all, they
argued, how could a symbol for the absence of value be quantifiable if it were nothing? The
debate became one of heated intensity, and the concept of zero as a number became one of the
last to gain universal acceptance. Today, zero is viewed as a number that is absolutely essential
to mathematics; serving as both a placeholder in the base-ten numeral system, as well as a
transition value between positive and negative numbers that enables solutions to be found in
basic algebra. Without it, the development and understanding of advanced mathematics such as
calculus would have been largely incomplete if not impossible. So who exactly should the world
credit for its discovery? The answer, unfortunately, isn't entirely clear. According to Charles
Seife, author of Zero: The Biography of a Dangerous Idea, “there have been at least two
[independent] discoveries, or inventions, of zero.” The first was made in the 3rd century AD by
the Mayans who used the number to denote the beginning of time (as they viewed it) in the
newly created world on their calendar. The second discovery, however, is the one that is often
disputed. According to many historians, the second invention of the number zero occurred in
India around the 4th century AD, developing as a mere place holder in their decimal system.
Seife and other historians, however, argue that the concept of zero as a placeholder -denoted as a
dot between numbers- developed first in Babylonia between 400 and 300 BC, before it
eventually found its way to India, as well as Europe. In any case, it is evident that the first time
zero was ever accepted as a true numeral occurred during the 9th century in India; and some
three centuries later, it was brought to Europe by the famous Leonardo of Pisa (Fibonacci). A little over a
thousand years later, zero remains a figure shrouded in mystery and dispute. Sure, while the
introduction of zero into mathematics created a convenient solution to the problems early
mathematicians once faced, it also introduced a whole new series of problems (such
as the inability to divide by it). Regardless, one thing's for sure: it certainly gave mathematicians
a way to identify one another in a crowd by saying, "Let epsilon be less than zero."
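The placeholder role described above is easy to see in code: in a positional base-ten system, a zero digit carries no value of its own, yet it shifts every digit to its left. A small illustrative sketch:

```python
def from_digits(digits: list[int]) -> int:
    """Interpret a list of base-ten digits (most significant first) positionally."""
    value = 0
    for d in digits:
        value = value * 10 + d  # each step shifts earlier digits one place left
    return value

# The zero holding the tens place is all that distinguishes 105 from 15:
print(from_digits([1, 0, 5]))  # 105
print(from_digits([1, 5]))     # 15
```

Without a symbol for "nothing in this place," the two digit strings above would collapse into the same number, which is exactly the ambiguity the Babylonian dot and the Indian zero resolved.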
Speaking of order, Dmitri Mendeleev's own contributions to science provided just that
in the realm of molecular science. His name is one that lives on in every chemistry book which
often refers to the Russian scientist by his famous title: the father of the periodic table. It could
be said that it all started with a list he made, after having read through the recently published
“Law of Octaves” by John Newlands. This list essentially placed the different elements in order
of their respective atoms’ atomic masses. While it made for a nice list, it didn’t exactly provide
for any real order or insight on the topic of elements. Such a list, after all, had already existed.
Shortly after, however, Mendeleev rearranged the elements in a table in accordance to an
extremely important observation he made. Chemical and physical properties shared by elements
had a tendency to show up periodically. Through this, he constructed the table so that “elements
that shared chemical and physical properties fell in the same vertical columns,” (Roberg, Pierre
R. "Mendeleev's Periodic Table." Corrosion Science and Engineering Information Hub). While
his arrangement of the elements would later prove to be an extremely accurate blueprint for
today’s more modern periodic table, it has been said that perhaps the most significant features of
his periodic table were not the elements he did include, but rather the blank spaces he often left
between the known elements. Take scandium for example, an element which had not yet been
discovered. Mendeleev recognized that calcium, the preceding element, shared many properties
with beryllium and magnesium; meanwhile, titanium, the next logical element, was more like
carbon and silicon than boron and aluminum. It was through this reasoning that he purposely left
a space between calcium and titanium, stating that this gap and others like it in his table were
most likely elements that had yet to be discovered. He didn’t stop there, however. Having
observed the nature of the similarities between other elements, he went on to predict many of the
physical and chemical properties of those unknown elements such as scandium, gallium, and
germanium (to name a few). The first, gallium, was discovered in 1875 and Mendeleev’s
predictions were proven to have been stunningly accurate. While the modern periodic table has
since been rearranged by atomic number rather than atomic mass, the periods and groups that relate the elements
have ultimately remained the same. Let us not forget that alchemy was considered a prominent
science before Mendeleev came along. His greatest legacy, a single chart, has become both a
symbol of chemistry as well as an important tool in conceptualizing the particulate physics of
matter: quantum mechanics. His discovery of these patterns existing in nature would pave the
way for chemists and engineers alike, even one and a half centuries later, as they toiled to
uncover more about the microscopic world that makes up everything in the world around us.
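Mendeleev's gap-prediction reasoning can be mimicked in a few lines: estimate a missing element's properties from its known neighbors in the table. The masses below are approximate modern values, and the simple averaging is an illustrative assumption in the spirit of his method rather than his exact procedure:

```python
# Approximate atomic masses of the known neighbors around the gap he left:
calcium_mass, titanium_mass = 40.1, 47.9

# Predict the missing element ("eka-boron", later named scandium)
# by interpolating between its neighbors:
predicted_mass = (calcium_mass + titanium_mass) / 2
print(round(predicted_mass, 1))  # 44.0 (scandium's actual mass is about 44.96)
```

The same neighbor-interpolation logic, applied to density, oxide formulas, and melting points, is what made his forecasts for gallium and germanium so strikingly accurate.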
In 1938, the German director of the National Museum of Iraq brought close attention to a
number of artifacts, excavated years earlier, that dated back to the Parthian period of
Mesopotamia. One object in particular got worldwide attention. Dubbed "the Baghdad Battery,"
the object has since aroused endless speculation as to whether or not it indeed could have served
as a galvanic cell at a time predating 200 BC (which would have been an absolutely stunning feat).
Unable to prove if the object was used for such a purpose, however, historians still attribute the
invention of the voltaic cell, or rather the battery, to none other than the Italian physicist,
Alessandro Volta. The year was 1800. Less than a decade earlier, Luigi Galvani had published a
series of experiments in which he found that muscular contraction in a frog's legs could be caused
by touching its nerves with electrostatically charged metals. His conclusion, that “animal tissue
contained an innate vital force, or ‘animal electricity,’” (Lewis, Nancy D., "Alessandro Volta:
The Voltaic Pile.”), however, was one that Volta wished to disprove. In his series of
experiments, Volta demonstrated that the stacking of electrodes, alternating copper and zinc
disks, between electrolyte-soaked materials could produce an electric current when both the top
and the bottom of the pile were connected by a wire. Essentially, by soaking a material such as a
piece of cardboard or cloth with brine (an electrolyte), Volta was increasing the electrolyte
conductivity between the metal disks. At the bottom of the pile, Volta always placed the copper
disk which would be the positive end of the pile, while he placed a zinc disk at the very top of
the pile which would serve as the negative end. This ultimately produced the first
electrochemical cell. Over the course of the last two centuries, batteries have become a common
power source for thousands of devices, and have (like everything else, it would seem) gradually
become smaller and smaller in size without having to sacrifice the amount of power available per
unit. In everything from digital cameras and laptops, to cell phones and Game Boys, batteries
have transformed and completely changed society’s view of portability. With the capabilities to
store vast amounts of power, transportable energy and portable electronic technology has merely
continued to grow in importance and emphasis in our innovative world. Perhaps the most
amazing feat in the evolution of batteries since Volta's electrochemical pile has been the
relatively new innovations that allow for certain batteries, known as secondary batteries, to
be recharged. Granted, the battery requires the current of a device attached to an outlet to
recharge itself, but it also means that those willing to pay more for the ability to recharge can
save a hefty load of cash in the long run.
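The stacking principle behind Volta's pile is simply series addition of cell voltages. The per-cell figure below, roughly 1.1 volts for a zinc-copper pair, is a standard textbook value rather than one stated in this essay:

```python
ZN_CU_CELL_EMF = 1.1  # approximate voltage of one zinc-copper cell, in volts

def pile_voltage(n_cells: int) -> float:
    """Cells stacked in series, as in Volta's pile, add their voltages."""
    return n_cells * ZN_CU_CELL_EMF

# A pile of 20 disk pairs yields roughly 22 volts:
print(round(pile_voltage(20), 1))
```

This additivity is why Volta could reach useful voltages simply by stacking more disks, and it remains how battery packs are built today.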
The world of electronic technology pretty much skyrocketed after that point. Everything
began to rapidly evolve and industry in anything related was getting set to take off. It has often
been said that the people of today live in the Silicon Age; and the validity of that statement could
hardly be denied. The evolution and the sheer innovation of electronic technologies in the past
twenty years alone is absolutely jaw-dropping. Children who have grown up with access to all
of this can barely even imagine a world without the Xbox or their iPhones. And of course, those
who can remember times without the texting plague and the endless YouTube-gaping will likely
always reminisce about the simple life. The fact is, all those who have ever found convenience
in accessing the internet, watching a movie, or even reading an eBook owe it all to a tiny
device whose name contracts "transfer resistor": the transistor. Anyone with an
eye for anything electrical ought to know that these devices that are made out of semiconductor
materials are their best friends. Essentially, the transistor is a device that can either amplify a
signal or serve as a switch that opens or closes a circuit. Considering just how widely used they
are, it is a little sad that more people don’t thank John Bardeen, Walter Brattain, or William
Shockley every year at Thanksgiving. After all, if it weren’t for these three co-inventors of the
transistor in 1947 at Bell Labs, Lee De Forest's vacuum tube triode invention would still be
in use in computers and radios. Computers would still take up over ninety square meters of
space, and they would also require far greater energy costs to keep them working (not to mention the
costs associated with replacing the triodes, which were always burning out). Today,
transistors have simply gotten smaller and smaller, producing low heat and requiring little energy
to run highly complex circuits –all at no cost in reliability. They have transformed the world into
one of digital nature, becoming the central units in all digital circuits since then. Modern
electronics aren't running off of just a few, either. "During the late 1960's and 1970's individual
transistors were superseded by integrated circuits in which a multiple of transistors and other
components (diodes, resistors, etc.) were formed on a single tiny wafer of semi conducting
material,” (Ruben, Julian. "The Invention of the Transistor." The Orchid Grower: A Juvenile
Science Adventure Novel. Web. 29 Sept. 2010.). That means that millions of these little devices
can go into just one computer, and most of them can be found in computer microprocessors that
contain microscopically small transistors!
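The switching role described above can be modeled in a few lines: treat an idealized transistor as a gate-controlled switch and compose switches into logic. This is a conceptual sketch, not a circuit-accurate model:

```python
def transistor(gate: bool, source: bool) -> bool:
    """Idealized transistor as a switch: the source signal passes
    through only while the gate is on."""
    return source if gate else False

def and_gate(a: bool, b: bool) -> bool:
    # Two switches wired in series: a signal gets through only
    # when both gates are on.
    return transistor(b, transistor(a, True))

print(and_gate(True, True))   # True
print(and_gate(True, False))  # False
```

Every digital circuit, from a calculator to a microprocessor, is ultimately millions of such switches composed into gates like this one.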
Rewind now roughly a century, to when two inventors independently developed an engine
unlike any the world had previously seen. Deemed the internal combustion engine for the process
occurring within it, the invention burned a fossil fuel with an oxidizer in its combustion chamber,
producing large quantities of carbon dioxide and water. The increase in heat and pressure from
the gas expansion within the chamber would apply a force through a nozzle or piston that would
consequently convert the chemical energy into mechanical energy useful for moving things. Such
application would find great prominence in the transportation industry, including and especially
automobiles. Unlike the steam engines that preceded it, the internal combustion engine would
generate much more energy through the combusting of hydrocarbons that would produce vast
amounts of carbon dioxide and water. The first of the independent inventors, Jean Lenoir, was a
Belgian engineer whose success was most notable due to the fact that even though his engine
was not the first of its kind, it was the first and only one by 1859 to have been commercially
successful. Having experimented greatly with electricity, his findings led him to develop “the
first single-cylinder two-stroke engine which burnt a mixture of coal gas and air ignited by a
‘jumping spark’ ignition system” (E. F. Obert, Internal Combustion Engine, 1950). By this point
in time, magazines and newspapers were capitalizing on his great invention even though the
engine was largely inefficient in its fuel use, extremely noisy, and had a tendency to overheat
often. It wasn't until Nikolaus Otto, a German engineer, invented the first successful four-stroke
internal combustion system, that humankind saw the advent of an engine that efficiently burned
its fuel in the piston chamber. Essentially, his internal combustion engine went through a process
of four distinct steps. First, the hydrocarbon fuel would be taken into the combustion
chamber along with the oxidizer (oxygen from the intake air). The mixture would then be
placed under great pressure, and then burned until the hot gaseous products began to
expand and drive the piston, converting the energy into useful mechanical energy.
Finally, once the products had cooled, they would be exhausted
from a pipe of some sort into the air around the engine. Generally quieter and more efficient than
the two-stroke internal combustion engines, Otto’s four-stroke internal combustion system
became the central unit used in most automobiles and large boats. Today, his innovative system
is still in use.
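The efficiency advantage of Otto's cycle has a standard textbook expression not quoted in the essay: for the idealized cycle, thermal efficiency depends only on the compression ratio r and the gas's heat-capacity ratio gamma:

```python
def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    """Ideal Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma).
    gamma = 1.4 is the usual textbook value for air."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

# A typical gasoline-engine compression ratio of 8 gives roughly 56%
# in the idealized model (real engines achieve considerably less):
print(round(otto_efficiency(8.0), 3))  # 0.565
```

The formula makes the engineering pressure obvious: squeezing the mixture harder before ignition is the main lever for wringing more mechanical work out of each charge of fuel.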
Poor Alexander Graham Bell had no idea how close he was to unlocking the secrets of
today’s modern optical communications in 1880. Already famous for his amazing innovations in
communication through the telephone, his research continued on well into old age. While his
findings through his experiments did not go to waste, it is quite disheartening to find that he
discontinued his prominent research in the field of fiber optics when he experienced a single
setback. The photophone often goes unmentioned in the field of fiber optics since the modern
world credits everything having to do with the topic to modern science. Essentially, Bell's
photophone allowed him to transmit a voice signal on a beam of light by talking into a tube that
caused the mirror it was attached to, to vibrate. When focusing a beam of light (such as sunlight)
on the mirror at the same time, a receiver at the other end could "pick up the vibrations of the
beam and decode it back into a voice," (Bellis, Mary. "Alexander Graham Bell's Photophone -
Ahead of Its Time." About.com). Sadly, Bell realized that certain uncontrollable variables such as
bad weather had a negative effect on the photophone and caused interference, and ultimately
decided to discontinue his prominent research. By 1964, many other notable scientists had made
a great range of discoveries concerning the idea of using long glass or plastic rods and cables to
transmit information at a speed nearing that of light. However, none proved more critical than
Dr. C. K. Kao's proposed standard of "10 or 20 decibels of light loss per kilometer" for optical
fiber. That standard, he theorized, could be achieved if a purer form of glass could be used in
the fibrous cables. Less than six years later, a team of researchers, Robert Maurer, Peter
Schultz, and Donald Keck, began experimenting with silica fibers and a much purer form of
glass. What they found was that their newly invented silica optical fibers could carry up to 65,000
times more information than copper wire without losing large quantities of signal over even the longest of
distances. Their discoveries led to the proving of Dr. Kao's theories and consequently led to the
revolutionary innovation of fiber optics in telecommunications. Not only could the information
be transmitted at a much higher speed, but the distances over which they could be transmitted
were also significantly increased. Talk about a total communication revolution! Even today,
beyond the internet, optical fiber is used by numerous companies to transmit both telephone
signals, as well as high definition cable television signals. Perhaps the greatest advantage for
those on the demand side of the products that use optical fiber is the mere fact that the signals
experience significantly less interference than other signals, making its usage and signals much
more reliable.
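Kao's decibel figure translates directly into surviving signal power: an attenuation of x dB corresponds to a power ratio of 10^(-x/10). A small sketch of what the 20 dB/km threshold means (the modern 0.2 dB/km figure is a typical value for today's fiber, not one from this essay):

```python
def power_fraction(db_per_km: float, distance_km: float) -> float:
    """Fraction of launched optical power remaining after attenuation."""
    return 10.0 ** (-db_per_km * distance_km / 10.0)

# At 20 dB/km, one kilometer of fiber passes only 1% of the light;
# fiber at roughly 0.2 dB/km passes about 95% over the same distance.
print(power_fraction(20.0, 1.0))           # 0.01
print(round(power_fraction(0.2, 1.0), 2))  # 0.95
```

The logarithmic scale is why purity mattered so much: each reduction in dB/km multiplies, rather than adds to, the distance a signal can travel before it needs amplification.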
Overall, pinpointing the top ten innovations of all time is no easy task. To some, anything
electronic is absolutely worthy of mention. To others, the world began a lot longer ago than
when the microprocessor was invented. Whoever said great minds think alike was likely not
great since history seems to imply that true greatness came with innovative and unique methods
and ways of thinking. The legacies they have left behind for humankind are probably not done
any justice in history, either. As innovation sparks others’ inspirations, it can be said that without
certain discoveries or inventions, most of the greatest things in existence today would likely
cease to exist as well. Like a chain reaction, the powers of curiosity and discovery have continued
to inspire and enthrall those whose lives they touch.
