
INTRODUCTION TO COMPUTERS

ANIS HASSAN
Center of Excellence – Hyderabad

INTRODUCTION TO COMPUTERS
Introduction:
The main purpose of this chapter is:

- To introduce the amazing machine, the "Computer", its limitations and characteristics.
- To introduce the history of the computer.
- To learn about the various types of computers from different aspects.
- To know the advantages and impact of computers on our society.
- To give first-hand knowledge about programming languages and translators.

Computer: A computer is a machine that performs tasks, such as calculations or electronic communication, under the control of a set of instructions called a program. Programs usually reside within the computer and are retrieved and processed by the computer's electronics. The program results are stored or routed to output devices, such as video display monitors or printers. Computers perform a wide variety of activities reliably, accurately, and quickly.
Hardware:
What is a computer?
A computer is defined as "an electronic device that accepts data (input), performs arithmetic and logical operations using the data (process), and produces information (output)."
Computers process and store digital signals based on binary arithmetic.
Binary arithmetic uses only two digits, 0 and 1, to represent any number, letter
or special character.
Data -- Numbers, characters, images, or other methods of recording, in a form which can be assessed by a human or (especially) input into a computer, stored and processed there.
Information -- Data on its own has no meaning; only when interpreted by some kind of data processing system does it take on meaning and become information. For example, the number 1234.56 is data, but if it is output as your bank balance then it is information.
A set of instructions, called a program, operates the computer.
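The data/information distinction above can be made concrete with a short Python sketch (a modern illustration; the function name and the currency are invented for the example):

```python
# The number 1234.56 by itself is data: it carries no meaning.
data = 1234.56

# A program interprets the data, turning it into information.
def as_bank_balance(amount):
    # Here we *assume* the number represents a balance in dollars.
    return f"Your bank balance is ${amount:.2f}"

print(as_bank_balance(data))  # prints: Your bank balance is $1234.56
```

The same number could just as well have been a temperature or a distance; it is the interpreting program that supplies the meaning.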
Brief History:
The first large-scale electronic digital computer, ENIAC (Electronic Numerical
Integrator and Computer), was introduced in 1946. The ENIAC weighed thirty
tons, contained 18,000 vacuum tubes, and filled a 30 x 50 foot room, yet was
far less powerful than today’s personal computers.
In 1955, Bell Labs introduced its first transistor computer. Transistors are faster, smaller, and create less heat than traditional vacuum tubes, making these computers much more efficient.
MAJID AZIZ KALERI

In 1971, the Intel 4004, the first microprocessor, is designed. This single chip contains all the basic parts of a central processor.

In 1977, Apple Computer Inc., Radio Shack, and Commodore all introduce mass-market computers, beginning the PC era and the microcomputer race.

In 1980, IBM hires Paul Allen and Bill Gates to create an operating system for a new PC, called DOS. IBM allows Allen and Gates to retain the marketing rights to the operating system.

In 1981, IBM joins the personal computer race with its IBM PC, which runs the new DOS operating system.

In 1990, Microsoft releases Windows 3.0. Windows 3.0 uses a graphical user interface (GUI).

In 1995, Microsoft releases Windows 95, a complete rewrite of previous versions and one in which most desktop users will eventually spend most of their time, along with Microsoft Office 95 and the online Microsoft Network.

In 1997, Microsoft releases Microsoft Office 97.

Classification of Computers

Mainframe Computers
Minicomputers
Microcomputers
Supercomputers

Mainframe computers: are very large, often filling an entire room, and are very expensive. The price of a mainframe computer frequently runs into the millions of dollars. Large businesses, government agencies, and universities usually use this type of computer. They can store enormous amounts of information, can perform many tasks at the same time, and can communicate with many users at the same time. Mainframe computers usually have many terminals connected to them. These terminals look like small computers but they are only devices used to send and receive information from the actual computer using wires. Terminals can be located in the same room with the mainframe computer, but they can also be in different rooms, buildings, or cities.

Minicomputers: are much smaller than mainframe computers and they are also much less expensive. The cost of these computers can vary from a few thousand dollars to several hundred thousand dollars. Medium and small businesses typically use these computers. They possess most of the features found on mainframe computers, but on a more limited scale. They can still have many terminals, but not as many as the mainframes. They can store a tremendous amount of information, but again usually not as much as the mainframe.

Microcomputers: are the types of computers we are using in your classes at Floyd College. These computers are usually divided into desktop models and laptop models. They are terribly limited in what they can do when compared to the larger models discussed above because they can only be used by one person at a time, they are much slower than the larger computers, and they cannot store nearly as much information. But they are excellent when used in small businesses, homes, and school classrooms. These computers are inexpensive and easy to use. They have become an indispensable part of modern life.

Computer Tasks

Input
Storage
Processing
Output

When a computer is asked to do a job, it handles the task in a very special way. Whether it is a multimillion dollar mainframe or a thousand dollar personal computer, it has the following four components: Input, Memory, Central Processing, and Output.

It accepts the information from the user. This is called input. It stores the information until it is ready for use. It processes the information. It then returns the processed information to the user. This is called output.

Every computer has special parts to do each of the jobs listed above. The computer has an electronic brain called the Central Processing Unit, which is responsible for processing all data and instructions given to the computer. The computer has memory chips, which are designed to hold information until it is needed. The central processing unit is made up of many components, but two of them are worth mentioning at this point. These are the arithmetic and logic unit and the control unit. The arithmetic and logic unit, ALU, is responsible for mathematical calculations and logical comparisons. The control unit controls the electronic flow of information around the computer.

Input Devices

Keyboard
Mouse
Scanner
Microphone
CD-ROM
Joystick

Output Devices

Monitor
Speakers
Printer
  Impact: Daisy Wheel, Dot Matrix
  Non-Impact: Ink Jet, Laser
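The four computer tasks can be sketched in a few lines of Python (a modern illustration; the function and the sample data are invented for the example):

```python
def run_job(values):
    storage = list(values)   # storage: the data is held in memory until needed
    result = sum(storage)    # processing: arithmetic performed by the ALU
    return result            # output: the processed information is returned

user_input = [2, 3, 4]       # input: information accepted from the user
print(run_job(user_input))   # prints 9
```

Every program, however large, follows this same input-storage-processing-output cycle.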

Storage Devices

Floppy disk
Tape drive
Local drive (C:)
Network drive (Z:)
CD-ROM
Zip disk

Memory

Read Only Memory (ROM)
ROM is a small area of permanent memory that provides startup instructions when the computer is turned on. You cannot store any data in ROM. The instructions in ROM are set by the manufacturer and cannot be changed by the user. The last instruction in ROM directs the computer to load the operating system.

Every computer needs an operating system. This is a special computer program that must be loaded into memory as soon as the computer is turned on. Its purpose is to translate your instructions in English into binary so that the computer can understand your instructions. The operating system also translates the results generated by your computer into English when it is finished, so that we can understand and use the results. The operating system comes with a computer.

Random Access Memory (RAM)
This is the area of memory where data and program instructions are stored while the computer is in operation. This is temporary memory. NOTE: The data stored in RAM is lost forever when the power is turned off. For this reason it is very important that you save your work before turning off your computer. This is why we have peripheral storage devices like your computer's hard disk and floppy diskettes.

Permanent Memory (Auxiliary Storage)
Your files are stored in permanent memory only when saved to your disk in the A: drive or saved to your computer's hard disk (drive C:).

To better understand how a computer handles information, and to also understand why information is lost if the power goes off, let's take a closer look at how a computer handles information. Your computer is made of millions of tiny electric circuits. For every circuit in a computer chip, there are two possibilities: an electric current flows through the circuit, or an electric current does not flow through the circuit.

When an electric current flows through a circuit, the circuit is on. When no electricity flows, the circuit is off. An "on" circuit is represented by the number one (1) and an off circuit is represented by the number zero (0). The two numbers 1 and 0 are called bits. The word bit comes from "binary digit". Each time a computer reads an instruction, it translates that instruction into a series of bits.

On most computers every character from the keyboard is translated into eight bits, a combination of eight 1's and 0's. Each group of eight bits is called a byte. Consider the way the following characters are translated into binary code for use by the computer:

A  01000001
B  01000010
C  01000011
X  01011000
Z  01011010
1  00110001
2  00110010

Consider the column at the right -- the 1's and 0's -- which represents how the computer stores information. Each of the 1's in the second column represents a circuit that is "on". At this point it would be good to point out why information stored in RAM is lost if the power goes off. If the power goes off, these circuits can NOT be "on" any more because the electricity has been turned off, and any data represented by these circuits is lost.

Byte -- The amount of space in memory or on a disk needed to store one character. 8 bits = 1 byte. Since computers can handle such large numbers of characters at one time, metric prefixes are combined with the word byte to give some common multiples you will encounter in computer literature:

kilobyte (KB) = 1,000 bytes (Kilo means 1,000)
megabyte (MB) = 1,000,000 bytes (Mega means 1,000,000)
gigabyte (GB) = 1,000,000,000 bytes (Giga means 1,000,000,000)

As a side note, the laptop computers that you are using at Floyd College have 32 MB of RAM.

ABACUS

Fascinating facts about the invention of the abacus by the Chinese in 3000 BC.

Calculation was a need from the early days when it was necessary to account to others for individual or group actions, particularly in relation to maintaining inventories (of flocks of sheep) or reconciling finances. Early man counted by means of matching one set of objects with another set (stones and sheep).
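Returning to the character-to-binary table shown earlier: it can be reproduced in Python, whose built-in ord() function returns a character's numeric code, formatted here as eight bits (one byte):

```python
# Reproduce the encoding table: each keyboard character becomes eight bits.
for ch in "ABCXZ12":
    print(ch, format(ord(ch), "08b"))
# A 01000001
# B 01000010
# C 01000011
# X 01011000
# Z 01011010
# 1 00110001
# 2 00110010
```

Note that the digit character "1" (code 00110001) is different from the number 1 itself; the keyboard character is stored as a byte like any letter.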

The operations of addition and subtraction were simply the operations of adding or subtracting groups of objects to the sack of counting stones or pebbles. Primitive people also needed a way to calculate and store information for future use. To keep track of the number of animals killed, they collected small rocks and pebbles in a pile. Each stone stood for one animal. Later they scratched notches and symbols in stone or wood to record and store information.

Only when the process of counting and arithmetic became a more abstract process, and different sizes of groups were given a symbolic representation so that the results could be written on a "storage medium" such as papyrus or clay, did the process of calculation become a process of symbol manipulation. Early counting tables, named abaci, not only formalized this counting method but also introduced the concept of positional notation that we use today.

The next logical step was to produce the first "personal calculator" -- the abacus -- which used the same concepts of one set of objects standing in for objects in another set, but also the concept of a single object standing for a collection of objects: positional notation.

The Chinese abacus was developed about 5000 years ago. It was built out of wood and beads. It could be held and carried around easily. The abacus was so successful that its use spread from China to many other countries. The abacus does not actually do the computing, as today's calculators do. It helps people keep track of numbers as they do the computing. The abacus is still in use in some countries today. People who are good at using an abacus can often do calculations as quickly as a person who is using a calculator.

This one-for-one correspondence continued for many centuries, even up through the many years when early calculators used the placement of holes in a dial to signify a count -- such as in a rotary dial telephone. Although these machines often had the number symbol engraved alongside the dial holes, the user did not have to know the relationship between the symbols and their numeric value.

AT A GLANCE:

Invention: abacus (ab·a·cus, noun)
Function: a counting device: a mechanical device for making calculations consisting of a frame mounted with rods along which beads or balls are moved
Date: c. 3000 BC
Nationality: Chinese

The First Computer

Introduction

Seldom, if ever, in the history of technology has so long an interval separated the invention of a device and its realization in hardware as that which elapsed between Charles Babbage's description, in 1837, of the Analytical Engine, a mechanical digital computer which, viewed with the benefit of a century and a half's hindsight, anticipated virtually every aspect of present-day computers. His vision of a massive brass, steam-powered, general-purpose, mechanical computer inspired some of the great minds of the nineteenth century but failed to persuade any backer to provide the funds to actually construct it. It was only after the first electromechanical and, later, electronic computers had been built in the twentieth century that designers of those machines discovered the extent to which Babbage had anticipated almost every aspect of their work.

Charles Babbage (1792-1871) was an eminent figure in his day, elected Lucasian Professor of Mathematics at Cambridge in 1828 (the same Chair held by Newton and, in our days, Stephen Hawking); he resigned this professorship in 1839 to devote his full attention to the Analytical Engine. Babbage was a Fellow of the Royal Society and co-founder of the British Association for the Advancement of Science, the Royal Astronomical Society, and the Statistical Society of London, and was author of more than eighty papers and books on a broad variety of topics. He was a close acquaintance of Charles Darwin, Sir John Herschel, Laplace, and Alexander Humboldt.

These pages are an on-line museum celebrating Babbage's Analytical Engine. Here you will find a collection of original historical documents tracing the evolution of the Engine from the original concept through concrete design, ending in disappointment when it became clear it would never be built. You'll see concepts used every day in the design and programming of modern computers described for the very first time, often in a manner more lucid than contemporary expositions. You'll get a sense of how mathematics, science, and technology felt in the nineteenth century, and of the elegant language used in discussing those disciplines, and thereby peek into the personalities

of the first computer engineer and programmer our species managed to produce. If you are their intellectual heir, perhaps you'll see yourself and your own work through their Victorian eyes.

Since we're fortunate enough to live in a world where Babbage's dream has been belatedly realized, albeit in silicon rather than brass, we can not only read about the Analytical Engine but experience it for ourselves. These pages include a Java-based emulator for the Analytical Engine and a variety of programs for it. You can run the emulator as an applet within a Web page or as a command-line application on your own computer (assuming it is equipped with a Java runtime environment).

These pages are a museum, and its lobby is the Table of Contents, to which all other documents are linked. Rather than forcing you to follow a linear path through the various resources here, you can explore in any order you wish, returning to the Table of Contents to select the next document that strikes you as interesting. Every page has a link to the Table of Contents at the bottom, so it's easy to get back when you've finished reading a document or decided to put it aside and explore elsewhere.

The Five Generations of Computers

The history of computer development is often referred to in reference to the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. Read about each generation and the developments that led to the current devices that we use today.

First Generation - 1940-1956: Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau, in 1951.

Second Generation - 1956-1963: Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors.

Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. They moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.

Third Generation - 1964-1971: Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation - 1971-Present: Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse, and handheld devices.

Fifth Generation - Present and Beyond: Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Computer Software

System Software

System software will come provided with each computer and is necessary for the computer's operation. This software acts as an interpreter between the computer and user. It interprets your instructions into binary code and likewise interprets binary code into language the user can understand. In the past you may have used MS-DOS, or Microsoft Disk Operating System, which was a command line interface. This form of system software required specific commands to be typed. Windows 95 is a more recent version of system software and is known as a graphical interface. This means that it uses graphics or "icons" to represent various operations. You no longer have to memorize commands; you simply point to an icon and click.

Application Software

Application software is any software used for specified applications such as:

Word Processing
Spreadsheet
Database
Presentation Graphics
Communication
Tutorials
Entertainment, Games

Program Software

Program software is software used to write computer programs in specific computer languages.

THE INTERNET

The Internet is a global computer and resource network that provides the user with a huge amount of information and services, including such things as database access, file transport, electronic mail (E-Mail), discussion lists, bulletin boards, online news, weather information, airline traffic, crop production, on-line text books, Gopher and WAIS, and services offered by the World Wide Web.

Now, what does all of this mean to the practicing physician? The main advantage of the Internet is the interactive exchange of information regardless of location or medium. Researchers and commercial and federal agencies were looking for methods to interconnect computers and various attached devices to allow for this exchange and sharing of information. Physically linking computers and other devices together with cables, telephone lines, satellite links, and special electronic equipment into a computer network has been accomplished, but at a cost: proprietary, often complex, incompatible and expensive systems resulted. Interconnectivity of individual networks and different technologies to form larger networks that could span the world became prudent. Nowadays, the term "Information Superhighway" is being used to illustrate that need, but that term is more a political smoke screen than a truthful representation of the Internet. The

Internet (note the capital I) is NOT a "network of networks." There is no such thing. Nor is the Internet maintained or managed by someone or some organization. These fallacies do injustice to the Internet since they only address the physical layer and do not emphasize the functional integration of the diverse and dispersed resources that this physical layer transports.

To appreciate the current state of the Internet, it is helpful to spend some time on its roots. Interestingly, the United States Department of Defense early on recognized and funded research in the area of interconnecting diverse networks. Toward this goal, the Advanced Research Projects Agency (ARPA) of the United States in the mid-1960's sponsored universities and other research organizations in developing a protocol that would satisfy ARPA's (military) requirements: being able to withstand a nuclear attack. In the late 1960's, the Internet Protocol and Transmission Control Protocol (TCP/IP) family of protocols was completed. The TCP/IP protocol that resulted is the basis of the current Internet user community and defines how computers do such things as start an interconnection, send data back and forth, or terminate a connection.

The initial "Internet" was primarily a private information access tool for researchers and scientists at universities and federal agencies. However, the ease of interconnecting TCP/IP networks together, combined with the fact that TCP/IP networks are allowed to grow without disrupting existing networks, and the policy to make the TCP/IP protocol available to everyone in both the academic and research environment, has stimulated today's enormous popularity. Soon, networks based on the TCP/IP protocol grew from just a few hundred computers to the world's largest network of academic, commercial, government, and educational networks interconnecting millions of systems. Nowadays, Internet connections for home use are becoming available from commercial providers.

Who Pays for the Internet?

The federal government through the National Science Foundation subsidizes the Internet. Academic institutions and commercial parties also bear some of the cost to make an Internet connection available to their faculty, students, and employees. It should not come as a surprise in the current climate of cost containment that the federal subsidies are being reduced, if not phased out, and that private industry, such as CompuServe and America Online, fills this financial void. The other side of the coin is that commercialization will increase on the Internet.

How does it Work?

When we pick up the phone to make a call, a dedicated connection between our phone and the other phone is made. All communication between the two stations uses the established dedicated connection. This type of dedicated connection is called circuit-switched. Network connections rely on another type of connection: packet-switched. Compared to circuit-switched, there is no dedicated connection between the two systems. The information stream is chopped into small pieces (packets) that are placed onto the network by the network hardware of the source system. It is the responsibility of other network hardware to deliver each packet to its destination.
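The packet-switching idea can be sketched in Python (the packet layout below, a destination address plus a sequence number plus a slice of data, is invented for the example and is far simpler than a real IP packet):

```python
import random

def to_dotted_decimal(n):
    # Write a 32-bit address as four numbers separated by periods.
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def chop(message, dest, size=4):
    # The source chops the information stream into small pieces (packets).
    return [{"dest": dest, "seq": i // size, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    # The destination puts the pieces back in order by sequence number.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = chop("Hello, Internet!", to_dotted_decimal(0xC0A80001))
random.shuffle(packets)     # packets may travel different routes and arrive out of order
print(reassemble(packets))  # prints: Hello, Internet!
```

The shuffle stands in for the network: no single path is guaranteed, yet the sequence numbers let the destination rebuild the original stream.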

Each packet contains the address of the destination system; the network hardware uses that address to deliver the packet and need not know what route the packet will be traveling. Once the destination receives the packets, they must be re-assembled into the original information stream.

How then does the network know where to send the packet? The answer to this question is: every computer system that is on the Internet has a unique address referred to as the Internet address or IP number. If you wish to communicate with another computer over the Internet, you need to know the other system's IP number. (Please note that this is NOT your E-Mail address.) This unique address is a 32-bit number, but to make it more readable for us humans it is commonly written as four numbers separated by periods, also called the dotted decimal notation. When you want to connect your computer to the Internet you must obtain such a unique address.

[Figure: an Internet address written in the dotted decimal notation]

We humans are better at remembering names rather than numbers and, therefore, each computer on the Internet can also be given a name. However, since the computer really uses the IP number, the corresponding IP address needs to be found for each name, similar to the way we use a telephone book.

The TCP/IP family of protocols defines the standard of how to communicate among the interconnected networks of the Internet. This set of rules allows systems from various manufacturers, running different computer software on incompatible hardware, to exchange information, provided the TCP/IP standards were followed. These standards define how the packets look, what to do in case of errors, how computers set up a connection to exchange data, and so on. The basic TCP/IP services, which are in use today -- file transfer (the File Transport Protocol, FTP), remote access to a host computer (Telnet), and electronic mail (E-MAIL) -- are very much an essential part of any TCP/IP implementation.

The versatility of the TCP/IP protocol was illustrated in 1978 when a dumb terminal located in a mobile van driving along a California highway made a connection to a computer located in London, England. Modern computer networks are comprised of powerful personal computers instead of dumb terminals.

The History of Computer Programming Languages

Ever since the invention of Charles Babbage's difference engine in 1822, computers have required a means of instructing them to perform a specific task. This means is known as a programming language. Computer languages were first composed of a series of steps to wire a particular program; these morphed into a series of steps keyed into the computer and then executed. Later these languages acquired advanced features such as logical branching and object orientation. The computer languages of the last fifty years have come in two stages: the first major languages and the second major languages, which are in use today.

In the beginning, Charles Babbage's difference engine could only be made to execute tasks by changing the gears which executed the calculations. Thus, the earliest form of a computer language was physical motion. Eventually, physical motion was replaced by electrical signals when the US Government built the ENIAC in 1942. It followed many of the same

The ENIAC followed many of the same principles of Babbage's engine and hence could only be "programmed" by presetting switches and rewiring the entire system for each new "program" or calculation. In 1945, John Von Neumann was working at the Institute for Advanced Study. He developed two important concepts that directly affected the path of computer programming languages. The first was known as the "shared-program technique" (www.softlord.com). This technique stated that the actual computer hardware should be simple and not need to be hand-wired for each program. Instead, complex instructions should be used to control the simple hardware, allowing it to be reprogrammed much faster.

The second concept was also extremely important to the development of programming languages. Von Neumann called it "conditional control transfer" (www.softlord.com). This idea gave rise to the notion of subroutines, or small blocks of code that could be jumped to in any order, instead of a single set of chronologically ordered steps for the computer to take. The second part of the idea stated that computer code should be able to branch based on logical statements such as IF (expression) THEN, and loop, such as with a FOR statement. "Conditional control transfer" gave rise to the idea of "libraries," which are blocks of code that can be reused over and over.

In 1949, a few years after Von Neumann's work, the language Short Code appeared (www.byte.com). It was the first computer language for electronic devices, and it required the programmer to change its statements into 0's and 1's by hand. This process proved to be very tedious, but it was the first step towards the complex languages of today. In 1951, Grace Hopper wrote the first compiler, A-0 (www.byte.com). A compiler is a program that turns the language's statements into 0's and 1's for the computer to understand. This led to faster programming, as the programmer no longer had to do the work by hand.

In 1957, the first of the major languages appeared in the form of FORTRAN. Its name stands for FORmula TRANslating system. The language was designed at IBM for scientific computing. The components were very simple, and provided the programmer with low-level access to the computer's innards. Today this language would be considered restrictive, as it only included IF, DO, and GOTO statements, but at the time these commands were a big step forward. The basic types of data in use today got their start in FORTRAN; these included logical variables (TRUE or FALSE) and integer, real, and double-precision numbers.

Though FORTRAN was good at handling numbers, it was not so good at handling input and output, which mattered most to business computing. Business computing started to take off in 1959, and because of this, COBOL was developed. It was designed from the ground up as the language for businessmen. Its only data types were numbers and strings of text. It also allowed for these to be grouped into arrays and records, so that data could be tracked and organized better. It is interesting to note that a COBOL program is built in a way similar to an essay, with four or five major sections that build into an elegant whole.
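Von Neumann's ideas of branching on a logical statement, looping, and jumping to reusable subroutines survive in every modern language. A small Python sketch (the function names are invented for illustration):

```python
def absolute(x):
    # Branch on a logical statement, as in IF (expression) THEN.
    if x < 0:
        return -x
    return x

def sum_of_absolutes(values):
    total = 0
    for v in values:          # a loop, as with a FOR or DO statement
        total += absolute(v)  # a reusable block of code (a "library" routine)
    return total
```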

COBOL statements also have a very English-like grammar, making it quite easy to learn. All of these features were designed to make it easier for the average business to learn and adopt it.

In 1958, John McCarthy of MIT created the LISt Processing (or LISP) language. It was designed for Artificial Intelligence (AI) research. Because it was designed for such a highly specialized field, its syntax has rarely been seen before or since. The most obvious difference between this language and other languages is that the basic, and only, type of data is the list, denoted by a sequence of items enclosed by parentheses. LISP programs themselves are written as a set of lists, so that LISP has the unique ability to modify itself, and hence grow on its own. The LISP syntax was known as "Cambridge Polish," as it was very different from standard Boolean logic (Wexelblat, 177):

x V y - Cambridge Polish, what was used to describe the LISP program
OR(x, y) - parenthesized prefix notation, what was used in the LISP program
x OR y - standard Boolean logic

LISP remains in use today because of its highly specialized and abstract nature.

The Algol language was created by a committee for scientific use in 1958. Its major contribution is being the root of the tree that has led to such languages as Pascal, C, C++, and Java. It was also the first language with a formal grammar, known as Backus-Naur Form or BNF (McGraw-Hill Encyclopedia of Science and Technology, 454). Though Algol implemented some novel concepts, such as recursive calling of functions, the next version of the language, Algol 68, became bloated and difficult to use (www.byte.com). This led to the adoption of smaller and more compact languages, such as Pascal.

Pascal was begun in 1968 by Niklaus Wirth. Its development was mainly out of necessity for a good teaching tool. In the beginning, the language designers had no hopes for it to enjoy widespread adoption. Instead, they concentrated on developing good tools for teaching, such as a debugger and editing system, and support for the common early microprocessor machines which were in use in teaching institutions. Pascal was designed in a very orderly approach; it combined many of the best features of the languages in use at the time: COBOL, FORTRAN, and ALGOL. While doing so, many of the irregularities and oddball statements of these languages were cleaned up, which helped it gain users (Bergin, 100-101). The combination of features (input/output and solid mathematical features) made it a highly successful language. Pascal also improved the "pointer" data type, a very powerful feature of any language that implements it. It also added a CASE statement that allowed instructions to branch like a tree.
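The list-based prefix notation described above for LISP can be imitated with nested Python lists. This is only an illustrative sketch of the idea, not real LISP: the program and its data share one representation, a list whose first item is the operator.

```python
def evaluate(expr):
    """Evaluate a LISP-like expression written as nested lists,
    e.g. ["OR", x, y] -- the parenthesized prefix form OR(x, y)."""
    if not isinstance(expr, list):
        return expr  # an atom evaluates to itself
    op, *args = expr
    values = [evaluate(a) for a in args]  # lists may nest, so recurse
    if op == "OR":
        return any(values)
    if op == "AND":
        return all(values)
    if op == "NOT":
        return not values[0]
    raise ValueError("unknown operator: " + op)
```

Because the expression is itself a list, a program could build such a list and then evaluate it, which is the self-modifying flavour the text describes.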

Machine Language

All computers have an internal machine language which they execute directly. This language is coded in a binary representation and is very tedious to write. Most instructions consist of an operation code part and an address part. The operation code indicates which operation is to be carried out, while the address part of the instruction indicates which memory location is to be used as the operand of the instruction. For example, in a hypothetical computer successive bytes of a program may contain:

operation code   address    meaning
00010101         10100001   load c(129) into accumulator
00010111         10100010   add c(130) to accumulator
00010110         10100011   store c(accumulator) in location 131

where c( ) means `the contents of' and the accumulator is a special register in the CPU. This sequence of code adds the contents of location 130 to the contents of the accumulator, which has been previously loaded with the contents of location 129, and then stores the result in location 131.

Programmers using machine language have to keep careful track of which locations they are using to store data, and which locations are to form the executable program. Most computers have no way of deciding whether a particular bit pattern is supposed to represent data or an instruction. Programming errors which lead to instructions being overwritten with data, or erroneous programs which try to execute part of their data (a depressingly common occurrence in machine language programs), are very difficult to correct. If a programmer is modifying a program and decides to insert an extra data item, the addresses of other data items may be changed; the programmer will have to carefully examine the whole program, deciding which bit patterns represent the addresses which have changed, and modify them. However, the ability to interpret the same bit pattern as both an instruction and as data is a very powerful feature: it allows programs to generate other programs and have them executed.

Assembly Language

The bookkeeping involved in machine language programming is very tedious. Human beings are notoriously bad at simple repetitive tasks; computers thrive on them. Assembly languages are a more human-friendly form of machine language. Machine language commands are replaced by mnemonic commands on a one-to-one basis, and the programmer can also use symbolic addresses for data items. The assembler program takes care of converting from the mnemonic to the corresponding machine language code; it will assign machine addresses and ensure that distinct data items do not overlap in storage. For example, the short section of program above might be written in assembly language as:

operation code   address
LOAD             A
ADD              B
STORE            C

Obviously this leaves less scope for error, but since the computer does not directly understand assembly language, this has to be translated into machine language by a program called an assembler. The assembler replaces the mnemonic operation codes such as ADD with the corresponding binary codes and allocates memory addresses for all the symbolic variables the programmer uses. It is responsible for associating the symbols A, B, and C with addresses, and for ensuring that they are all distinct.

Assembly languages still require that the programmer should have a good knowledge of the internal structure of the computer: for example, different ADD instructions will be needed for different types of data item. Assembly languages are also still machine specific, and hence the program will have to be re-written if it is to be implemented on another type of computer. Nevertheless, assembly languages are still used in some time-critical programs, since they give the programmer very precise control of what exactly happens inside the computer.

High level Languages

Very early in the development of computers, attempts were made to make programming easier by reducing the amount of knowledge of the internal workings of the computer that was needed to write programs. If programs could be presented in a language that was more familiar to the person solving the problem, then fewer mistakes would be made. High-level programming languages allow the specification of a problem solution in terms closer to those used by human beings. These languages were designed to make programming far easier and less error-prone, and to remove the programmer from having to know the details of the internal structure of a particular computer. One of the first of these languages was Fortran II, which was introduced in about 1958. In Fortran II our program above would be written as:

C = A + B

which is obviously much more readable.

As with assembly languages, the computer does not understand these high-level languages directly, and hence they have to be processed by passing them through a program called a compiler, which translates them into internal machine language before they can be executed. Thus, by making the process of programming easier for the human being, another level of processing for the computer has been introduced.

Another advantage accrues from the use of high-level languages if the languages are standardized by some international body: each manufacturer can then produce a compiler to compile programs that conform to the standard into their own internal machine language.
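The assembler's two jobs, replacing mnemonics with binary operation codes and allocating a distinct address for each symbolic name, can be sketched in a few lines of Python. The opcode values are taken from the hypothetical machine example earlier in this chapter; the allocation policy (data starting at location 129) is an assumption for illustration.

```python
# Opcode table for the hypothetical machine used earlier in this chapter.
OPCODES = {"LOAD": 0b00010101, "ADD": 0b00010111, "STORE": 0b00010110}

def assemble(source, data_start=129):
    """Translate (mnemonic, symbol) pairs into (opcode, address) pairs,
    giving every symbolic name a distinct machine address."""
    symbols = {}
    machine_code = []
    for mnemonic, symbol in source:
        if symbol not in symbols:                  # first use: allocate storage
            symbols[symbol] = data_start + len(symbols)
        machine_code.append((OPCODES[mnemonic], symbols[symbol]))
    return machine_code

program = assemble([("LOAD", "A"), ("ADD", "B"), ("STORE", "C")])
```

A, B and C receive the distinct locations 129, 130 and 131, exactly the bookkeeping the programmer no longer has to do by hand.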

It should then be easy to take a program which conforms to the standard and implement it on many different computers merely by re-compiling it on the appropriate computer. This great advantage of portability of programs has been achieved for several high-level languages, and it is now possible to move programs from one computer to another without too much difficulty. Unfortunately, many compiler writers add new features of their own, which means that if a programmer uses these features then their program becomes non-portable. It is well worth becoming familiar with the standard and writing programs which obey it, so that your programs are more likely to be portable.

As with assembly language, human time is saved at the expense of the compilation time required to translate the program to internal machine language. The compilation time used in the computer is trivial compared with the human time saved, typically seconds as compared with weeks.

Many high level languages have appeared since Fortran II (and many have also disappeared!); among the most widely used have been:

COBOL     Business applications
FORTRAN   Engineering & Scientific Applications
PASCAL    General use and as a teaching tool
C & C++   General Purpose - currently most popular
PROLOG    Artificial Intelligence
JAVA      General Purpose - gaining popularity rapidly

All these languages are available on a large variety of computers.

Central Processing Unit

The Central Processing Unit (CPU) performs the actual processing of data. The data it processes is obtained, via the system bus, from the main memory, and results from the CPU are sent back to main memory via the system bus. In addition to computation, the CPU controls and co-ordinates the operation of the other major components. The CPU has two main components, namely:

The Control Unit -- controls the fetching of instructions from the main memory and the subsequent execution of these instructions. Among the other tasks carried out are the control of input and output devices and the passing of data to the Arithmetic/Logical Unit for computation.

The Arithmetic/Logical Unit (ALU) -- carries out arithmetic operations on integer (whole number) and real (with a decimal point) operands. It can also perform simple logical tests for equality and greater than and less than between operands.

It is worth noting here that the only operations that the CPU can carry out are simple arithmetic operations, comparisons between the result of a calculation and other values, and the selection of the next instruction for processing. All the rest of the apparently limitless things a computer can do are built on this very primitive base by programming! Modern CPUs are very fast: at the time of writing, the CPU of a typical PC is capable of executing many tens of millions of instructions per second.
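The division of labour described above, with the control unit fetching instructions and selecting the next one while the arithmetic/logical part does the computing, can be sketched as a toy fetch-execute loop in Python. The instruction set here is hypothetical, invented for illustration.

```python
def fetch_execute(program, memory):
    """Toy CPU loop: fetch an instruction, execute it, select the next one."""
    accumulator = 0
    pc = 0                                 # program counter: next instruction
    while pc < len(program):
        operation, operand = program[pc]   # fetch and decode
        pc += 1                            # default: the following instruction
        if operation == "LOAD":
            accumulator = memory[operand]
        elif operation == "ADD":
            accumulator += memory[operand]       # the ALU's arithmetic
        elif operation == "STORE":
            memory[operand] = accumulator
        elif operation == "JUMP_IF_ZERO" and accumulator == 0:
            pc = operand                   # a branch: select a different instruction
    return memory

result = fetch_execute([("LOAD", "a"), ("ADD", "b"), ("STORE", "c")],
                       {"a": 2, "b": 3})
```

Everything else a computer does is built by composing such primitive steps.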

Computer bus

In computer architecture, a bus is a subsystem that transfers data or power between computer components inside a computer, or between computers. Unlike a point-to-point connection, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors to physically plug devices, cards or cables together. Early computer buses were literally parallel electrical buses with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrical parallel) or daisy chain topology, or connected by switched hubs, as in the case of USB.

Memory

The memory of a computer can hold program instructions, data values, and the intermediate results of calculations. All the information in memory is encoded in fixed-size cells called bytes. A byte can hold a small amount of information, such as a single character or a numeric value between 0 and 255. The CPU will perform its operations on groups of one, two, four or eight bytes, depending on the interpretation being placed on the data and the operations required.

There are two main categories of memory, characterised by the time it takes to access the information stored there, the number of bytes which are accessed by a single operation, and the total number of bytes which can be stored. Main memory is the working memory of the CPU, with fast access and limited numbers of bytes being transferred. External memory is for the long term storage of information; access to it is much slower, and usually involves groups of several hundred bytes. Data from external memory must be transferred to the main memory before the CPU can operate on it.

Main memory

The main memory of the computer is also known as RAM, standing for Random Access Memory. It is constructed from integrated circuits and needs to have electrical power in order to maintain its information: when power is lost, the information is lost too! It can be directly accessed by the CPU. The access time to read or write any particular byte is independent of whereabouts in the memory that byte is, and currently is approximately 50 nanoseconds (a nanosecond is a thousand millionth of a second). This is broadly comparable with the speed at which the CPU will need to access data. Main memory is expensive compared to external memory, so it has limited capacity; however, the capacity available for a given price is increasing all the time. For example, many home Personal Computers now have a capacity of 16 megabytes (million bytes), while 64 megabytes is commonplace on commercial workstations. The CPU will normally transfer data to and from the main memory in groups of two, four or eight bytes, even if the operation it is undertaking only requires a single byte.
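The idea that the same fixed-size byte cells can be interpreted differently, as single characters, as small numbers, or grouped into larger values, can be illustrated with Python's standard struct module:

```python
import struct

# Two one-byte memory cells; each holds a value between 0 and 255.
raw = bytes([0x01, 0x02])

as_two_bytes = struct.unpack(">BB", raw)   # interpreted as two small numbers
as_one_word = struct.unpack(">H", raw)[0]  # the same bits as one 16-bit number
```

The bit patterns never change; only the interpretation placed on the group of bytes does.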

External Memory

External memory, which is sometimes called backing store or secondary memory, allows the permanent storage of large quantities of data. It has the important property that the information stored is not lost when the computer is switched off. Some method of magnetic recording on magnetic disks or tapes is most commonly used. More recently, optical methods which rely upon marks etched by a laser beam on the surface of a disc (CD-ROM) have become popular, although they remain more expensive than magnetic media. The capacity of external memory is high, usually measured in hundreds of megabytes or even in gigabytes (thousand million bytes) at present.

The most common form of external memory is a hard disc, which is permanently installed in the computer and will typically have a capacity of hundreds of megabytes. A hard disc is a flat, circular oxide-coated disc which rotates continuously. Information is recorded on the disc by magnetising spots of the oxide coating on concentric circular tracks. An access arm in the disc drive positions a read/write head over the appropriate track to read and write data from and to the track. This means that before accessing or modifying data the read/write head must be positioned over the correct track; the time this takes is called the seek time and is measured in milliseconds. There is also a small delay waiting for the appropriate section of the track to rotate under the head; this latency is much smaller than the seek time. Once the correct section of the track is under the head, successive bytes of information can be transferred to the main memory at rates of several megabytes per second.

The hard disc will hold all the software that is required to run the computer, from the operating system to packages like word-processing and spreadsheet programs. All the user's data and programs will also be stored on the hard disc. In addition, most computers have some form of removable storage device which can be used to save copies of important files. The most common device for this purpose is a floppy disc, which has a very limited capacity. Various magnetic tape devices can be used for storing larger quantities of data, and more recently removable optical discs have been used.

It is important to note that the CPU can only directly access data that is in main memory. To process data that resides in external memory, the CPU must first transfer it to main memory. Accessing external memory to find the appropriate data is slow (milliseconds) in relation to CPU speeds, but the rate of transfer of data to main memory is reasonably fast once it has been located. Transfers are usually of blocks of several hundred bytes or even more. This discrepancy between the speed of access to the first byte required and to subsequent bytes on the same track means that it is not economic to transfer small numbers of bytes. Notice also that the access time to data stored in secondary storage will depend on its location.
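The cost of a disc access, seek time plus rotational latency plus transfer time, explains why small transfers are uneconomic. A rough Python calculation with assumed figures (10 ms seek, 4 ms latency, 5 MB/s transfer; these numbers are illustrative, not from the text):

```python
def block_read_ms(seek_ms, latency_ms, block_bytes, bytes_per_ms):
    """Total time to read one block: position the head, wait for the
    sector to rotate under it, then transfer the bytes."""
    return seek_ms + latency_ms + block_bytes / bytes_per_ms

# 5 MB/s is 5000 bytes per millisecond.
small = block_read_ms(10, 4, 512, 5000)        # one 512-byte block
large = block_read_ms(10, 4, 512 * 100, 5000)  # one hundred blocks in a row
```

Under these assumptions, reading a hundred times as much data costs well under twice the time, because positioning the head dominates; hence transfers are made in large blocks.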

Input/Output Devices

When using a computer, the text of programs, commands to the computer and data for processing have to be entered. Also, information has to be returned from the computer to the user. This interaction requires the use of input and output devices.

The most common input devices used by the computer are the keyboard and the mouse. The keyboard allows the entry of textual information, while the mouse allows the selection of a point on the screen by moving a screen cursor to the point and pressing a mouse button. Using the mouse in this way allows selection from menus on the screen, and is the basic method of communicating with many current computing systems. Alternative devices to the mouse are tracker balls, light pens and touch sensitive screens. The most common output device is a monitor, usually a Cathode Ray Tube device, which can display text and graphics. If hard-copy output is required then some form of printer is used.

Port

In computing, a port (derived from seaport) is usually a connection through which data is sent and received. An exception is a software port (porting, derived from transport), which is software that has been "transported" to another computer system.

Parallel port

In computing, a parallel port is an interface from a computer system where data is transferred in or out in parallel, that is, on more than one wire. A parallel port carries one bit on each wire, thus multiplying the transfer rate obtainable over a single cable (contrast serial port). There are also several extra wires on the port that are used for control and status signals: to indicate when data is ready to be sent or received, initiate a reset, indicate an error condition (such as paper out), and so forth. On many modern (2005) computers the parallel port is omitted for cost savings, and it is considered to be a legacy port.

Serial port

(Figure: a male DE-9 serial port on the rear panel of a PC.)

In computing, a serial port is an interface on a computer system with which information is transferred in or out one bit at a time (contrast parallel port). Throughout most of the history of personal computers, this was accomplished using the RS-232 standard over simple cables connecting the computer to a device such as a terminal or modem. Mice, keyboards, and other devices were also often connected this way.
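The contrast between serial transfer (one bit after another on a single wire) and parallel transfer (one bit per wire, all at once) can be sketched in Python. This is a conceptual illustration only, not a device driver; the bit ordering chosen here (least-significant bit first) is an assumption.

```python
def send_serial(byte):
    """One wire: shift the byte out one bit at a time, least-significant first."""
    return [(byte >> i) & 1 for i in range(8)]

def receive_parallel(bits):
    """Eight wires: all eight bits are presented at once and combined."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value
```

A serial port delivers the eight bits of a byte in eight successive steps; a parallel port delivers the same byte in one step across eight data wires.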