Introduction to Information Technology (Course Code: ITE210)
Instructor: Dr. Moustafa Kurdi, Lebanon - Sour
Information technology (IT), as defined by the Information Technology Association of America (ITAA), is "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit, and securely retrieve information. Today the term has ballooned to encompass many aspects of computing and technology, and it is more recognizable than ever before.

The information technology umbrella can be quite large, covering many fields. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include data management, networking, engineering computer hardware, database and software design, as well as the management and administration of entire systems.

When computer and communications technologies are combined, the result is information technology, or "infotech". Information technology is a general term that describes any technology that helps to produce, manipulate, store, communicate, and/or disseminate information. When people speak of information technology as a whole, they generally mean this combined use of computers and information.

The term information system (IS) sometimes refers to a system of persons, data records, and activities that process the data and information in an organization, including the organization's manual and automated processes. Computer-based information systems are the field of study for information technology, elements of which are sometimes called an "information system" as well, a usage some consider incorrect.
Personal Computer (PC), computer in the form of a desktop or laptop device designed
for use by a single person. PCs function using a display monitor and a keyboard. Since their introduction in the 1980s, PCs have become powerful and extremely versatile tools that have revolutionized how people work, learn, communicate, and find entertainment. Many households in the United States now have PCs, thanks to affordable prices and software that has made PCs easy to use without special computer expertise. Personal computers are also a crucial component of information technology (IT) and play a key role in modern economies worldwide. The usefulness and capabilities of personal computers can be greatly enhanced by connection to the Internet and World Wide Web, as well as to smaller networks that link to local computers or databases. Personal computers can also be used to access content stored on compact discs (CDs) or digital versatile discs (DVDs), and to transfer files to personal media devices and video players. Personal computers are sometimes called microcomputers or micros. Powerful PCs designed for professional or technical use are known as workstations. Other names that reflect different roles for PCs include home computers and small-business computers. The PC is generally larger and more powerful than handheld computers, including personal digital assistants (PDAs) and gaming devices.
HOW COMPUTERS WORK
The physical computer and its components are known as hardware. Computer hardware includes the memory that stores data and program instructions; the central processing unit (CPU) that carries out program instructions; the input devices, such as a keyboard or mouse, that allow the user to communicate with the computer; the output devices, such as printers and video display monitors, that enable the computer to present information to the user; and buses (hardware lines or wires) that connect these and other computer components. The programs that run the computer are called software. Software generally is designed to perform a particular type of task—for example, to control the arm of a robot to weld a car’s body, to write a letter, to display and modify a photograph, or to direct the general operation of the computer.
PARTS OF A PERSONAL COMPUTER
The different types of equipment that make a computer function are known as hardware; the coded instructions that make a computer work are known as software.
Types of Hardware
PCs consist of electronic circuitry called a microprocessor, such as the central processing unit (CPU), that directs logical and arithmetical functions and executes computer programs. The CPU is located on a motherboard with other chips. A PC also has electronic memory known as random access memory (RAM) to temporarily store programs and data. A basic component of most PCs is a disk drive, commonly in the form of a hard disk or hard drive. A hard disk is a magnetic storage device in the form of a disk or disks that rotate. The magnetically stored information is read or modified using a drive head that scans the surface of the disk.
Removable storage devices—such as floppy drives, compact disc (CD-ROM) and digital versatile disc (DVD) drives, and additional hard drives—can be used to permanently store as well as access programs and data. PCs may have CD or DVD “burners” that allow users to write or rewrite data onto recordable discs. Other external devices to transfer and store files include memory sticks and flash drives, small solid-state devices that do not have internal moving parts. Cards are printed circuit boards that can be plugged into a PC to provide additional functions such as recording or playing video or audio, or enhancing graphics (see Graphics Card). A PC user enters information and commands with a keyboard or with a pointing device such as a mouse. A joystick may be used for computer games or other tasks. Information from the PC is displayed on a video monitor or on a liquid crystal display (LCD) video screen. Accessories such as speakers or headphones let the user listen to audio produced by the PC. Files, photographs, or documents can be printed on laser, dot-matrix, or inkjet printers. The various components of the computer system are physically attached to the PC through the bus. Some PCs have wireless systems that use infrared or radio waves to link to the mouse, the keyboard, or other components. PC connections to the Internet or local networks may be through a cable attachment or a phone line and a modem (a device that permits transmission of digital signals). Wireless links to the Internet and networks operate through a radio modem. Modems also are used to link other devices to communication systems.
Types of Software
PCs are run by software called the operating system. Widely used operating systems include Microsoft’s Windows, Apple’s Mac OS, and Linux. Other types of software called applications allow the user to perform a wide variety of tasks such as word processing; using spreadsheets; manipulating or accessing data; or editing video, photographs, or audio files. Drivers are special software programs that operate specific devices that can be either crucial or optional to the functioning of the computer. Drivers help operate keyboards, printers, and DVD drives, for example. Most PCs use software to run a screen display called a graphical user interface (GUI). A GUI allows a user to open and move files, work with applications, and perform other tasks by clicking on graphic icons with a mouse or other pointing device. In addition to text files, PCs can store digital multimedia files such as photographs, audio recordings, and video. These media files are usually in compressed digital formats such as JPEG for photographs, MP3 for audio, and MPEG for video.
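The benefit of the compressed formats mentioned above can be seen with a short sketch. This example uses Python's standard zlib module, a lossless compressor; it is only an illustration of the size savings, since the JPEG, MP3, and MPEG codecs named in the text are lossy formats that additionally discard detail the eye or ear cannot perceive.

```python
import zlib

# Repetitive "media-like" data compresses dramatically; real media
# codecs exploit similar redundancy in images, audio, and video.
raw = b"ABCD" * 1000            # 4,000 bytes of highly repetitive data
packed = zlib.compress(raw)

print(len(raw), len(packed))    # the compressed copy is far smaller
assert zlib.decompress(packed) == raw   # lossless: original recovered
```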
USES FOR PERSONAL COMPUTERS
The wide variety of tasks that PCs can perform, in conjunction with the PC’s role as a portal to the Internet and World Wide Web, has had profound effects on how people conduct their lives and work, and pursue education. In the home, PCs can help with balancing the family checkbook, keeping track of finances and investments, and filing taxes, as well as preserving family documents for easy access or indexing recipes. PCs are also a recreational device for playing computer games, watching videos with webcasting, downloading music, saving photographs, or cataloging records and books. Together with the Internet, PCs are a link to social contacts through electronic mail (email), text-messaging, personal Web pages, blogs, and chat groups. PCs can also allow quick and convenient access to news and sports information on the World Wide Web, as well as consumer information. Shopping from home over the Internet with a PC generates billions of dollars in the economy.
PCs can greatly improve productivity in the workplace, allowing people to collaborate on tasks from different locations and easily share documents and information. Many people with a PC at home are able to telecommute, working from home over the Internet. Laptop PCs with wireless connections to the Internet allow people to work in virtually any environment when away from the office. PCs can help people to be self-employed. Special software can make running a small business from home much easier. PCs can also assist artists, writers, and musicians with their creative work, or allow anyone to make their own musical mixes at home. Medical care has been improved and costs have been reduced by transferring medical records into electronic form that can be accessed through PC terminals. PCs have become an essential tool in education at all levels, from grammar school to university. Many school children are given laptop computers to help with schoolwork and homework. Classrooms of all kinds commonly use PCs. Many public libraries make PCs available to members of the public. The Internet and World Wide Web provide access to enormous amounts of information, some of it free and some of it available through subscription or fee. Online education as a form of distance education or correspondence education is a growing service, allowing people to take classes and work on degrees at their convenience using PCs and the Internet.
PCs can also be adapted to help people with disabilities, using special devices and software. Special keyboards, cursors that translate head movements, or accessories such as foot mice can allow people with limited physical movement to use a PC. PCs can also allow people with speech or auditory disabilities to understand or generate speech. Visual disabilities can be aided by speech-recognition software that allows spoken commands to work a PC or for e-mail and text to be read aloud. Text display can also be magnified for individuals with low vision.
Hardware (computer), equipment involved in the function of a computer. Computer
hardware consists of the components that can be physically handled. The function of these components is typically divided into three main categories: input, output, and storage. Components in these categories connect to microprocessors, specifically, the computer’s central
processing unit (CPU), the electronic circuitry that provides the computational ability and control of the computer, via wires or circuitry called a bus. Software, on the other hand, is the set of instructions a computer uses to manipulate data, such as a word-processing program or a video game. These programs are usually stored and transferred via the computer's hardware to and from the CPU. Software also governs how the hardware is utilized; for example, how information is retrieved from a storage device. The interaction between the input and output hardware is controlled by software called the Basic Input/Output System (BIOS). Although microprocessors are still technically considered hardware, portions of their function are closely tied to software. Software that is stored permanently in hardware in this way, such as the BIOS, is referred to as firmware.
Input hardware consists of external devices—that is, components outside of the computer’s CPU—that provide information and instructions to the computer. A light pen is a stylus with a light-sensitive tip that is used to draw directly on a computer’s video screen or to select information on the screen by pressing a clip in the light pen or by pressing the light pen against the surface of the screen. The pen contains light sensors that identify which portion of the screen it is passed over. A mouse is a pointing device designed to be gripped by one hand. It has a detection device (usually a ball, a light-emitting diode [LED], or a low-powered laser) on the bottom that enables the user to control the motion of an on-screen pointer, or cursor, by moving the mouse on a flat surface. As the device moves across the surface, the cursor moves across the screen. To select items or choose commands on the screen, the user presses a button on the mouse. A joystick is a pointing device composed of a lever that moves in multiple directions to navigate a cursor or other graphical object on a computer screen. A keyboard is a typewriter-like device that allows the user to type in text and commands to the computer. Some keyboards have special function keys or integrated pointing devices, such as a trackball or touch-sensitive regions that let the user’s finger motions move an on-screen cursor. Touch-screen displays, which are video displays with a special touch-sensitive surface, are also becoming popular with personal electronic devices—examples include the Apple iPhone and Nintendo DS video game system. Touch-screen displays are also becoming common in everyday use; examples include ticket kiosks in airports and automated teller machines (ATMs). An optical scanner uses light-sensing equipment to convert images such as a picture or text into electronic signals that can be manipulated by a computer.
For example, a photograph can be scanned into a computer and then included in a text document created on that computer. The two most common scanner types are the flatbed scanner, which is similar to an office photocopier, and the handheld scanner, which is passed manually across the image to be processed. A microphone is a device for converting sound into signals that can then be stored, manipulated, and played back by the computer. A voice recognition module is a device that converts spoken words into information that the computer can recognize and process. A modem, which stands for modulator-demodulator, is a device that connects a computer to a telephone line or cable television network and allows information to be transmitted to or received from another computer. Each computer that sends or receives information must be connected to a modem. The digital signal sent from one computer is converted by the modem into an analog signal, which is then transmitted by telephone lines or television cables to the receiving modem, which converts the signal back into a digital signal that the receiving computer can understand. A network interface card (NIC) allows the computer to access a local area network (LAN) through either a specialized cable similar to a telephone line or through a wireless (Wi-Fi) connection.
The vast majority of LANs connect through the Ethernet standard, which was introduced in 1983.
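The digital-to-analog conversion a modem performs can be sketched in miniature. The toy modulator below uses binary frequency-shift keying (FSK), one simple modulation scheme; the 1200/2200 Hz tone pair is borrowed from the old Bell 202 modem signaling scheme, and all names and parameter values here are illustrative, not a real modem implementation.

```python
import math

def fsk_modulate(bits, f_zero=1200.0, f_one=2200.0, rate=8000, baud=300):
    """Toy binary FSK modulator: each bit becomes a short audio tone,
    one frequency for a 0 bit and another for a 1 bit. A receiving
    modem would detect which tone is present to recover each bit."""
    samples_per_bit = rate // baud       # audio samples spent on each bit
    samples = []
    for i, bit in enumerate(bits):
        freq = f_one if bit else f_zero
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / rate      # time in seconds
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

wave = fsk_modulate([1, 0, 1, 1])
print(len(wave))    # 4 bits x 26 samples per bit = 104 samples
```

The receiving modem performs the inverse step, demodulation, turning the tones back into the digital bits the computer understands.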
Output hardware consists of internal and external devices that transfer information from the computer’s CPU to the computer user. Graphics adapters, which are either an add-on card (called a video card) or connected directly to the computer’s motherboard, transmit information generated by the computer to an external display. Displays commonly take one of two forms: a video screen with a cathode-ray tube (CRT) or a video screen with a liquid crystal display (LCD). A CRT-based screen, or monitor, looks similar to a television set. Information from the CPU is displayed using a beam of electrons that scans a phosphorescent surface that emits light and creates images. An LCD-based screen displays visual information on a flatter and smaller screen than a CRT-based video monitor. Laptop computers use LCD screens for their displays. Printers take text and images from a computer and print them on paper. Dot-matrix printers use tiny wires that strike an inked ribbon to form characters. Laser printers employ beams of light to draw images on a drum that then picks up fine black particles called toner. The toner is fused to a page to produce an image. Inkjet printers fire droplets of ink onto a page to form characters and pictures. Computers can also output audio via a specialized chip on the motherboard or an add-on card called a sound card. Users can attach speakers or headphones to an output port to hear the audio produced by the computer. Many modern sound cards allow users to create music and record digital audio, as well.
Storage hardware provides permanent storage of information and programs for retrieval by the computer. The two main types of storage devices are disk drives and memory. There are several types of disk drives: hard, floppy, magneto-optical, magnetic tape, and compact. Hard disk drives store information in magnetic particles embedded in a disk. Usually a permanent part of the computer, hard disk drives can store large amounts of information and retrieve that information very quickly. Floppy disk drives also store information in magnetic particles embedded in removable disks that may be floppy or rigid. Floppy disks store less information than a hard disk drive and retrieve the information at a much slower rate. While most computers still include a floppy disk drive, the technology has been gradually phased out in favor of newer technologies. Magneto-optical disk drives store information on removable disks that are sensitive to both laser light and magnetic fields. They can store up to 9.1 gigabytes (GB) of data, but they have slightly slower retrieval speeds than hard drives. They are much more rugged than floppy disks, making them ideal for data backups. However, the introduction of newer media that are both less expensive and able to store more data has made magneto-optical drives obsolete. Magnetic tape drives use magnetic tape similar to the tape used in VCR cassettes. Tape drives have a very slow read/write time, but have a very high capacity; in fact, their capacity is second only to hard disk drives. Tape drives are mainly used to back up data. Compact disc drives store information on pits burned into the surface of a disc of reflective material (see CD-ROM). CD-ROMs can store up to 737 megabytes (MB) of data. A Compact Disc-Recordable (CD-R) or Compact Disc-ReWritable (CD-RW) drive can record data onto a specialized disc, but only the CD-RW standard allows users to change the data stored on the disc.
A digital versatile disc (DVD) looks and works like a CD-ROM but can store up to 17.1 GB of data on a single disc. Like CD-ROMs, there are specialized versions of DVDs, such as DVD-Recordable (DVD-R) and DVD-ReWritable (DVD-RW), that can have data written onto them by the user. More recently, Sony led the development of a DVD successor technology called Blu-ray, which has much higher storage capacities than standard DVD media. Memory refers to the computer chips that store information for quick retrieval by the CPU. Random access memory (RAM) is used to store the information and instructions that operate the computer's programs. Typically, programs are transferred from storage on a disk drive to RAM.
RAM is also known as volatile memory because the information within the computer chips is lost when power to the computer is turned off. Read-only memory (ROM) contains critical information and software that must be permanently available for computer operation, such as the operating system that directs the computer's actions from start up to shut down. ROM is called nonvolatile memory because the memory chips do not lose their information when power to the computer is turned off. A more recent development is solid-state RAM. Unlike standard RAM, solid-state RAM can retain information even when there is no power supply. Flash drives are removable storage devices that utilize solid-state memory to store information for long periods of time. Solid-state drives (SSD) have also been introduced as a potential replacement for hard disk drives. SSDs have faster access speeds than hard disks and have no moving parts. However, they are quite expensive and cannot store as much data as a hard disk. Solid-state memory technology is also used in memory cards for digital media devices, such as digital cameras and media players. Some devices serve more than one purpose. For example, floppy disks may also be used as input devices if they contain information to be used and processed by the computer user. In addition, they can be used as output devices if the user wants to store the results of computations on them.
To function, hardware requires physical connections that allow components to communicate and interact. A bus provides a common interconnected system composed of a group of wires or circuitry that coordinates and moves information between the internal parts of a computer. A computer bus consists of two channels, one that the CPU uses to locate data, called the address bus, and another to send the data to that address, called the data bus. A bus is characterized by two features: how much information it can manipulate at one time, called the bus width, and how quickly it can transfer these data. In today’s computers, a series of buses work together to communicate between the various internal and external devices.
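Bus width has a concrete arithmetic consequence: each additional address line doubles the number of memory locations the CPU can reach. A quick sketch, with bus widths chosen for illustration and one address per byte assumed:

```python
# Each address line carries one bit, so an address bus of width w
# can form 2**w distinct addresses.
for width in (16, 32, 64):
    addressable = 2 ** width
    print(f"{width}-bit address bus -> {addressable:,} addressable bytes")

# A 32-bit address bus therefore reaches 2**32 bytes, i.e. 4 GiB:
assert 2 ** 32 == 4 * 1024 ** 3
```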
Expansion, or add-on, cards use one of three bus types to interface with the computer. The Peripheral Connection Interface (PCI) is the standard expansion card bus used in most computers. The Accelerated Graphics Port (AGP) bus was developed to create a high-speed interface with the CPU that bypassed the PCI bus. This bus was specifically designed for modern video cards, which require a large amount of bandwidth to communicate with the CPU. A newer version of PCI called PCI Express (PCIe) was designed to replace both PCI and AGP as the main bus for expansion cards. Internal storage devices use one of three separate standards to connect to the bus: parallel AT attachment (PATA), serial AT attachment (SATA), or small computer system interface (SCSI). The term AT refers to the IBM AT computer, first released in 1984. The PATA and SCSI standards were first introduced in 1986; the SATA standard was introduced in 2002 as a replacement for the PATA standard. The SCSI standard is mainly used in servers or high-end systems.
Parallel and Serial Connections
For most of the history of the personal computer, external and internal devices have communicated to each other through parallel connections. However, given the limitations of parallel connections, engineers began to develop technology based on serial connections, since these have greater data transfer rates, as well as more reliability. A serial connection is a wire or set of wires used to transfer information from the CPU to an external device such as a mouse, keyboard, modem, scanner, and some types of printers. This type of connection transfers only
one piece of data at a time. The advantage of using a serial connection is that it provides effective connections over long distances. A parallel connection uses multiple sets of wires to transfer blocks of information simultaneously. Most scanners and printers use this type of connection. A parallel connection is much faster than a serial connection, but it is limited to shorter distances between the CPU and the external device than serial connections. The best way to see the difference between parallel and serial connections is to imagine the differences between a freeway and a high-speed train line. The freeway is the parallel connection—lots of lanes for cars. However, as more cars are put onto the freeway, each individual car travels more slowly, which means more lanes have to be built at high cost if the cars are to keep moving at high speed. The train line is the serial connection; it consists of two tracks and can only take two trains at a time. However, these trains do not need to deal with traffic and can go at higher speeds than the cars on the freeway. As CPU speeds increased and engineers increased the speed of the parallel connections to keep up, the main problem of parallel connections—maintaining data integrity at high speed—became more evident. Engineers began to look at serial connections as a possible solution to the problem. This led to the development of both SATA and PCI Express, which, by using serial connections, provide high data transfer rates with simpler wiring and no data loss.
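The freeway-versus-train analogy can be restated in code. This sketch only counts clock ticks for moving one byte; it deliberately ignores the signal-integrity problems that make real parallel buses hard to speed up:

```python
def serial_transfer(byte):
    """Send a byte over one wire: eight ticks, one bit per tick (LSB first)."""
    return [(byte >> i) & 1 for i in range(8)]        # 8 ticks needed

def parallel_transfer(byte):
    """Send a byte over eight wires at once: a single tick."""
    return [[(byte >> i) & 1 for i in range(8)]]      # 1 tick needed

data = 0xA5
assert len(serial_transfer(data)) == 8      # serial: more ticks per byte
assert len(parallel_transfer(data)) == 1    # parallel: one tick per byte

# The receiver reassembles the serially sent bits into the same byte:
bits = serial_transfer(data)
assert sum(bit << i for i, bit in enumerate(bits)) == data
```

At equal clock rates the parallel link moves eight times the data per tick, which is why serial standards such as SATA and PCI Express win instead by clocking each lane far faster than a wide parallel bus can reliably run.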
The oldest external connections used by computers were the serial and parallel ports. These were included on the original IBM PC from 1981. Originally designed as an interface to connect computer to computer, the serial port was eventually used with various devices, including modems, mice, keyboards, scanners, and some types of printers. Parallel ports were mainly used with printers, but some scanners and external drives used the parallel port. The Universal Serial Bus (USB) interface was developed to replace both the serial and parallel ports as the standard for connecting external devices. Developed by a group of companies including Microsoft, Intel, and IBM, the USB standard was first introduced in 1995. Besides transferring data to and from the computer, USB can also provide a small amount of power, eliminating the need for external power cables for most peripherals. The USB 2.0 standard, which came into general usage in 2002, drastically improved the data transfer rate. A competing standard to USB was developed at the same time by Apple and Texas Instruments. Officially called IEEE 1394, it is more commonly called FireWire. It is capable of transferring data at a higher rate than the original USB standard and became the standard interface for multimedia hardware, such as video cameras. But Apple’s royalty rate and the introduction of USB 2.0—as well as the fact that Intel, one of the companies behind USB, is responsible for most motherboards and chipsets in use—meant that FireWire was unlikely to become the standard peripheral interface for PCs. Today most computers have both USB and FireWire ports connected to the motherboard. Wireless devices have also become commonplace with computers. The initial wireless interface used was infrared (IR), the same technology used in remote controls. However, this interface required that the device have a direct line of sight to the IR sensor so that the data could be transferred. It also had a high power requirement. 
Most modern wireless devices use radio frequency (RF) signals to communicate to the computer. One of the most common wireless standards used today is Bluetooth. It uses the same frequencies as the Wi-Fi standard used for wireless LANs.
THE FUTURE OF COMPUTERS
In 1965 semiconductor pioneer Gordon Moore predicted that the number of transistors contained on a computer chip would double every year. This is now known as Moore’s Law, and it has proven to be somewhat accurate. The number of transistors and the computational speed of microprocessors currently doubles approximately every 18 months. Components continue to shrink in size and are becoming faster, cheaper, and more versatile.

With their increasing power and versatility, computers simplify day-to-day life. Unfortunately, as computer use becomes more widespread, so do the opportunities for misuse. Computer hackers—people who illegally gain access to computer systems—often violate privacy and can tamper with or destroy records. Programs called viruses or worms can replicate and spread from computer to computer, erasing information or causing malfunctions. Other individuals have used computers to electronically embezzle funds and alter credit histories (see Computer Security). New ethical issues also have arisen, such as how to regulate material on the Internet and the World Wide Web. Long-standing issues, such as privacy and freedom of expression, are being reexamined in light of the digital revolution. Individuals, companies, and governments are working to solve these problems through informed conversation, compromise, better computer security, and regulatory legislation.

Computers will become more advanced and they will also become easier to use. Improved speech recognition will make the operation of a computer easier. Virtual reality, the technology of interacting with a computer using all of the human senses, will also contribute to better human and computer interfaces. Standards for virtual-reality program languages—for example, Virtual Reality Modeling Language (VRML)—are currently in use or are being developed for the World Wide Web.
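The 18-month doubling period cited for Moore's Law at the start of this section can be turned into a small numerical estimate (the starting transistor count is chosen arbitrarily for illustration):

```python
def transistor_estimate(initial_count, years, doubling_months=18):
    """Project a transistor count forward under an assumed doubling period."""
    doublings = years * 12 / doubling_months
    return initial_count * 2 ** doublings

# Six years at one doubling every 18 months is four doublings,
# so 1 million transistors grows sixteenfold:
print(transistor_estimate(1_000_000, 6))   # 16000000.0
```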
Other, exotic models of computation are being developed, including biological computing that uses living organisms, molecular computing that uses molecules with particular properties, and computing that uses deoxyribonucleic acid (DNA), the basic unit of heredity, to store data and carry out operations. These are examples of possible future computational platforms that, so far, are limited in abilities or are strictly theoretical. Scientists investigate them because of the physical limitations of miniaturizing circuits embedded in silicon. There are also limitations related to heat generated by even the tiniest of transistors. Intriguing breakthroughs occurred in the area of quantum computing in the late 1990s. Quantum computers under development use components of a chloroform molecule (a combination of chlorine and hydrogen atoms) and a variation of a medical procedure called magnetic resonance imaging (MRI) to compute at a molecular level. Scientists use a branch of physics called quantum mechanics, which describes the behavior of subatomic particles (particles that make up atoms), as the basis for quantum computing. Quantum computers may one day be thousands to millions of times faster than current computers, because they take advantage of the laws that govern the behavior of subatomic particles. These laws allow quantum computers to examine all possible answers to a query simultaneously. Future uses of quantum computers could include code breaking (see cryptography) and large database queries. Theorists of chemistry, computer science, mathematics, and physics are now working to determine the possibilities and limitations of quantum computing. Communications between computer users and networks will benefit from new technologies such as broadband communication systems that can carry significantly more data faster or more conveniently to and from the vast interconnected databases that continue to grow in number and type.
Central Processing Unit
Central Processing Unit (CPU), in computer science, microscopic circuitry that serves
as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected
processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.
HOW A CPU WORKS
A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.
As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
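The fetch-execute cycle, program counter, and conditional branching described above can be sketched in a few lines of Python. The instruction set, its names, and the negative flag below are invented purely for illustration; real CPUs encode instructions as bit patterns and carry many more flags.

```python
# Minimal sketch of a fetch-decode-execute loop with a program counter,
# an accumulator register, and a negative flag tested by a conditional
# branch. The instruction set is invented for illustration.

def run(program):
    pc = 0            # program counter: index of the next instruction
    acc = 0           # accumulator register
    negative = False  # flag set by arithmetic, tested by branches
    while pc < len(program):
        op, arg = program[pc]
        pc += 1                  # default: advance sequentially
        if op == "LOAD":
            acc = arg
        elif op == "SUB":
            acc -= arg
            negative = acc < 0   # record the sign of the result
        elif op == "JMP":        # unconditional branch
            pc = arg
        elif op == "JNEG":       # conditional branch: taken only if flag set
            if negative:
                pc = arg
        elif op == "HALT":
            break
    return acc

# Subtract 7 from 5; the negative result makes the conditional branch
# skip over the LOAD 0 instruction.
prog = [
    ("LOAD", 5),
    ("SUB", 7),
    ("JNEG", 4),   # jump past the next instruction if result was negative
    ("LOAD", 0),
    ("HALT", None),
]
print(run(prog))   # -2
```

Note how the program counter normally advances by one, and only the branch instructions overwrite it with an out-of-sequence value.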
The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock rates are measured in hertz (Hz), the number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor receives 2 billion clock pulses per second. The clock rate is one measure of the speed of a processor.
Fixed-Point and Floating-Point Numbers
Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
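The contrast between the two kinds of numbers can be sketched as follows. The prices and constants are invented for illustration; the fixed-point convention shown (an integer with an implied scale) is one common way such values are stored.

```python
# A fixed-point quantity can be stored as a plain integer with an implied
# scale (two decimal digits here), so its arithmetic is fast integer
# arithmetic with the decimal point fixed in place.
SCALE = 100
price = 1999            # represents 19.99
tax = 160               # represents 1.60
total = price + tax     # plain integer addition: 2159 represents 21.59
print(total / SCALE)    # 21.59

# A floating-point value is a significand times a power of the base,
# which is what lets it span a very wide range of magnitudes.
avogadro = 6.02e23      # scientific notation: 6.02 x 10^23
charge = 1.6e-19
print(avogadro * charge)
```

The fixed-point total stays within a narrow, predictable range, while the floating-point product crosses more than forty orders of magnitude without any special handling.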
Software, computer programs; instructions that cause the hardware—the machines—to do
work. Software as a whole can be divided into a number of categories based on the types of work done by programs. The two primary software categories are operating systems (system software), which control the workings of the computer, and application software, which addresses the multitude of tasks for which people use computers. System software thus handles such essential, but often invisible, chores as maintaining disk files and managing the screen, whereas application software performs word processing, database management, and the like. Two additional categories that are neither system nor application software, although they contain elements of both, are network software, which enables groups of computers to communicate, and language software, which provides programmers with the tools they need to write programs. See also Operating System; Programming Language. In addition to these task-based categories, several types of software are described based on their method of distribution. These include the so-called canned programs or packaged software developed and sold primarily through retail outlets; freeware and public-domain software, which is made available without cost by its developer; shareware, which is similar to freeware but usually carries a small fee for those who like the program; and the infamous vaporware, which is software that either does not reach the market or appears much later than promised.
Operating System (OS), in computer
science, the basic software that controls a computer. The operating system has three major functions: It coordinates and manipulates computer hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc, digital video disc, and tape; and it manages hardware errors and the loss of data.
HOW AN OS WORKS
Operating systems control different computer processes, such as running a spreadsheet program or
accessing information from the computer's memory. One important process is interpreting commands, enabling the user to communicate with the computer. Some command interpreters are text oriented, requiring commands to be typed in or to be selected via function keys on a keyboard. Other command interpreters use graphics and let the user communicate by pointing and clicking on an icon, an on-screen picture that represents a specific command. Beginners generally find graphically oriented interpreters easier to use, but many experienced computer users prefer text-oriented command interpreters. Operating systems are either single-tasking or multitasking. The more primitive single-tasking operating systems can run only one process at a time. For instance, when the computer is printing a document, it cannot start another process or respond to new commands until the printing is completed. All modern operating systems are multitasking and can run several processes simultaneously. In most computers, however, there is only one central processing unit (CPU; the computational and control unit of the computer), so a multitasking OS creates the illusion of several processes running simultaneously on the CPU. The most common mechanism used to create this illusion is time-slice multitasking, whereby each process is run individually for a fixed period of time. If the process is not completed within the allotted time, it is suspended and another process is run. This exchanging of processes is called context switching. The OS performs the “bookkeeping” that preserves a suspended process. It also has a mechanism, called a scheduler, that determines which process will be run next. The scheduler runs short processes quickly to minimize perceptible delay. The processes appear to run simultaneously because the user's sense of time is much slower than the processing speed of the computer. 
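Time-slice multitasking, context switching, and the scheduler's ready queue can be sketched with Python generators. Treating each process as a generator that yields after every unit of work is only an analogy for illustration; a real OS suspends processes at the hardware level.

```python
from collections import deque

# Sketch of time-slice multitasking: each "process" is a generator that
# yields after each unit of work. The scheduler runs one process for a
# fixed slice, suspends it (a context switch), and moves to the next.

def process(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(processes, slice_size=2):
    ready = deque(processes)          # the scheduler's ready queue
    log = []
    while ready:
        proc = ready.popleft()
        for _ in range(slice_size):   # run for one time slice
            try:
                log.append(next(proc))
            except StopIteration:     # process finished: do not requeue
                break
        else:
            ready.append(proc)        # suspended: back of the queue
    return log

log = round_robin([process("A", 3), process("B", 2)])
print(log)
```

Run quickly enough, the interleaved log would look to a human user like processes A and B executing at the same time.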
Operating systems can use a technique known as virtual memory to run processes that require more main memory than is actually available. To implement this technique, space on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is more time-consuming than accessing main memory, however, so performance of the computer slows.
CURRENT OPERATING SYSTEMS
Operating systems commonly found on personal computers include UNIX, Macintosh OS, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among academic computer users. Its popularity is due in large part to the growth of the interconnected computer network known as the Internet. Software for the Internet was initially designed for computers that ran UNIX. Variations of UNIX include SunOS (distributed by Sun Microsystems, Inc.), Xenix (distributed by Microsoft Corporation), and Linux (available for download free of charge and distributed commercially by companies such as Red Hat, Inc.). UNIX and its clones support multitasking and multiple users. The UNIX file system provides a simple means of organizing disk files and lets users control access to their files. The commands in UNIX are not readily apparent, however, and mastering the system is difficult. Consequently, although UNIX is popular for professionals, it is not the operating system of choice for the general public. Instead, windowing systems with graphical interfaces, such as Windows and the Macintosh OS, which make computer technology more accessible, are widely used in personal computers (PCs). However, graphical systems generally have the disadvantage of requiring more hardware—such as faster CPUs, more memory, and higher-quality monitors—than do command-oriented operating systems.
Operating systems continue to evolve. A recently developed type of OS called a distributed operating system is designed for a connected, but independent, collection of computers that share resources such as hard drives. In a distributed OS, a process can run on any computer in the network (presumably a computer that is idle) to increase that process's performance. All basic OS functions—such as maintaining file systems, ensuring reasonable behavior, and recovering data in the event of a partial failure—become more complex in distributed systems. Research is also being conducted that would replace the keyboard with a means of using voice or handwriting for input. Currently these types of input are imprecise because people pronounce and write words very differently, making it difficult for a computer to recognize the same input
from different users. However, advances in this field have led to systems that can recognize a small number of words spoken by a variety of people. In addition, software has been developed that can be taught to recognize an individual's handwriting.
Computer Program, set of instructions that directs a computer to perform some
processing function or combination of functions. For the instructions to be carried out, a computer must execute a program, that is, the computer reads the program, and then follows the steps encoded in the program in a precise order until completion. A program can be executed many different times, with each execution yielding a potentially different result depending upon the options and data that the user gives the computer. Programs fall into two major classes: application programs and operating systems. An application program is one that carries out some function directly for a user, such as word processing or game-playing. An operating system is a program that manages the computer and the various resources and devices connected to it, such as RAM (random access memory), hard drives, monitors, keyboards, printers, and modems, so that they may be used by other programs. Examples of operating systems are DOS, Windows 95, OS/2, and UNIX.
I- PROGRAM DEVELOPMENT
Software designers create new programs by using special applications programs, often called utility programs or development programs. A programmer uses another type of program called a text editor to write the new program in a special notation called a programming language. With the text editor, the programmer creates a text file, which is an ordered list of instructions, also called the program source file. The individual instructions that make up the program source file are called source code. At this point, a special applications program translates the source code into machine language, or object code—a format that the operating system will recognize as a proper program and be able to execute. Three types of applications programs translate from source code to object code: compilers, interpreters, and assemblers. The three operate differently and on different types of programming languages, but they serve the same purpose of translating from a programming language into machine language. A compiler translates text files written in a high-level programming language—such as Fortran, C, or Pascal—from the source code to the object code all at once. This differs from the approach taken by interpreted languages such as BASIC, APL, and LISP, in which a program is translated into object code statement by statement as each instruction is executed. The advantage to interpreted languages is that they can begin executing the program immediately instead of having to wait for all of the source code to be compiled. Changes can also be made to the
program fairly quickly without having to wait for it to be compiled again. The disadvantage of interpreted languages is that they are slow to execute, since the entire program must be translated one instruction at a time, each time the program is run. On the other hand, compiled languages are compiled only once and thus can be executed by the computer much more quickly than interpreted languages. For this reason, compiled languages are more common and are almost always used in professional and scientific applications. Another type of translator is the assembler, which is used for programs or parts of programs written in assembly language. Assembly language is another programming language, but it is much more similar to machine language than other types of high-level languages. In assembly language, a single statement can usually be translated into a single instruction of machine language. Today, assembly language is rarely used to write an entire program, but is instead most often used when the programmer needs to directly control some aspect of the computer’s function. Programs are often written as a set of smaller pieces, with each piece representing some aspect of the overall application program. After each piece has been compiled separately, a program called a linker combines all of the translated pieces into a single executable program. Programs seldom work correctly the first time, so a program called a debugger is often used to help find problems called bugs. Debugging programs usually detect an event in the executing program and point the programmer back to the origin of the event in the program code. Recent programming systems, such as Java, use a combination of approaches to create and execute programs. A compiler takes a Java source program and translates it into an intermediate form. Such intermediate programs are then transferred over the Internet into computers where an interpreter program then executes the intermediate form as an application program.
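The statement-by-statement execution of an interpreter can be sketched directly. The three-word statement format and its operations below are invented for illustration; real interpreters parse a full grammar, but the principle of translating and executing one statement at a time is the same.

```python
# Sketch of an interpreter: each statement is decoded and executed as it
# is reached, in the spirit of interpreted languages like BASIC.
# The "op name value" statement format is invented for illustration.

def interpret(source):
    variables = {}
    for line in source.strip().splitlines():
        op, name, value = line.split()    # "decode" one statement
        if op == "let":                   # assignment
            variables[name] = int(value)
        elif op == "add":                 # add a constant to a variable
            variables[name] += int(value)
        elif op == "print":               # value field unused here
            print(variables[name])
    return variables

program = """
let x 10
add x 5
print x 0
"""
interpret(program)   # prints 15
```

A compiler for the same language would instead translate all three statements into machine code once, before any of them ran, which is why compiled programs execute faster but cannot start instantly after an edit.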
II- PROGRAM ELEMENTS
Most programs are built from just a few kinds of steps that are repeated many times in different contexts and in different combinations throughout the program. The most common step performs some computation, and then proceeds to the next step in the program, in the order specified by the programmer. Programs often need to repeat a short series of steps many times, for instance in looking through a list of game scores and finding the highest score. Such repetitive sequences of code are called loops. One of the capabilities that makes computers so useful is their ability to make conditional decisions and perform different instructions based on the values of data being processed. If-then-else statements implement this function by testing some piece of data and then selecting one of two sequences of instructions on the basis of the result. One of the instructions in these alternatives may be a goto statement that directs the computer to select its next instruction from a different part of the program. For example, a program might compare two numbers and branch to a different part of the program depending on the result of the comparison: If x is greater than y then goto instruction #10 else continue
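The loop and conditional decision described above fit in a few lines. The list of scores is invented for illustration.

```python
# A loop with a conditional decision: scan a list of game scores and
# keep the highest one seen so far.
scores = [12, 47, 8, 47, 31]

highest = scores[0]
for score in scores:      # the loop repeats one short series of steps
    if score > highest:   # conditional decision on the data
        highest = score

print(highest)   # 47
```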
Programs often use a specific sequence of steps more than once. Such a sequence of steps can be grouped together into a subroutine, which can then be called, or accessed, as needed in different parts of the main program. Each time a subroutine is called, the computer remembers where it was in the program when the call was made, so that it can return there upon completion of the subroutine. Preceding each call, a program can specify that different data be used by the subroutine, allowing a very general piece of code to be written once and used in multiple ways. Most programs use several varieties of subroutines. The most common of these are functions, procedures, library routines, system routines, and device drivers. Functions are short subroutines that compute some value, such as computations of angles, which the computer cannot compute with a single basic instruction. Procedures perform a more complex function, such as sorting a set of names. Library routines are subroutines that are written for use by many different programs. System routines are similar to library routines but are actually found in the operating system. They provide some service for the application programs, such as printing a line of text. Device drivers are system routines that are added to an operating system to allow the computer to communicate with a new device, such as a scanner, modem, or printer. Device drivers often have features that can be executed directly as applications programs. This allows the user to directly control the device, which is useful if, for instance, a color printer needs to be realigned to attain the best printing quality after changing an ink cartridge.
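The distinction between a function and a procedure, and the way a subroutine is called with different data each time, can be sketched as follows. The names and data are invented for illustration; `math.sqrt` stands in for the kind of computation a CPU cannot do in a single basic instruction.

```python
import math

# A function is a short subroutine that computes a value; a procedure
# performs a more complex job. Each call passes different data, and
# control returns to the caller when the subroutine completes.

def hypotenuse(a, b):        # function: returns a computed value
    return math.sqrt(a * a + b * b)

def sort_names(names):       # procedure: performs a larger task in place
    names.sort()

print(hypotenuse(3, 4))      # 5.0
print(hypotenuse(5, 12))     # 13.0

roster = ["Carol", "Alice", "Bob"]
sort_names(roster)
print(roster)                # ['Alice', 'Bob', 'Carol']
```

Library routines and system routines work the same way; the only difference is where they live—in a shared library or inside the operating system—rather than in the program itself.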
III- PROGRAM FUNCTION
Modern computers usually store programs on some form of magnetic storage media that can be accessed randomly by the computer, such as the hard disk permanently located in the computer, or a portable floppy disk. Additional information on such disks, called directories, indicates the names of the various programs on the disk, when they were written to the disk, and where the program begins on the disk media. When a user directs the computer to execute a particular application program, the operating system looks through these directories, locates the program, and reads a copy into RAM. The operating system then directs the CPU (central processing unit) to start executing the instructions at the beginning of the program. Instructions at the beginning of the program prepare the computer to process information by locating free memory locations in RAM to hold working data, retrieving copies of the standard options and defaults the user has indicated from a disk, and drawing initial displays on the monitor. The application program requests a copy of any information the user enters by making a call to a system routine. The operating system converts any data so entered into a standard internal form. The application then uses this information to decide what to do next—for example, perform some desired processing function such as reformatting a page of text, or obtain some additional information from another file on a disk. In either case, calls to other system routines are used to actually carry out the display of the results or the accessing of the file from the disk. When the application reaches completion or is prompted to quit, it makes further system calls to make sure that all data that needs to be saved has been written back to disk. It then makes a final system call to the operating system indicating that it is finished.
The operating system then frees up the RAM and any devices that the application was using and awaits a command from the user to start another program.
Programming Language, in computer science, artificial language used to write a
sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise. Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as Fortran and COBOL were written to solve certain general types of programming problems—Fortran for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that they may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used programming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL, and BASIC fall into this category.
I- LANGUAGE TYPES
Programming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-level languages are C, C++, PASCAL, and Fortran. Assembly languages are intermediate languages that are very close to machine language and do not have the level of linguistic sophistication exhibited by high-level languages, but they must still be translated into machine language.
In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM), (2) a simple operation to perform, such as adding the two numbers together, (3) where in the main memory to put the result of this simple operation, and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
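Decoding a machine instruction means slicing its bit pattern into fields. The sketch below uses the 13-bit pattern quoted in the text, with an assumed layout of a 5-bit operation code followed by two 4-bit register numbers; the field layout and the opcode table are invented for illustration.

```python
# Decoding a made-up 13-bit machine instruction: a 5-bit operation code
# followed by two 4-bit register numbers. Layout and opcode values are
# invented for illustration.

OPCODES = {0b10010: "ADD"}

def decode(word):
    opcode = (word >> 8) & 0b11111   # top 5 bits: the operation
    src = (word >> 4) & 0b1111       # middle 4 bits: source register
    dst = word & 0b1111              # low 4 bits: destination register
    return OPCODES[opcode], src, dst

# 10010 1100 1011: add the contents of register 1100 (12) to the
# contents of register 1011 (11).
print(decode(0b1001011001011))   # ('ADD', 12, 11)
```

Shifting and masking like this is exactly what the CPU's instruction decoder does in hardware, millions of times per second.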
High-level languages are relatively sophisticated sets of statements utilizing words and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.
Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine language instruction. An assembly language statement is composed with the aid of easy to remember commands. The command to add the contents of the storage register A to the contents of storage register B might be written ADD B,A in a typical assembly language statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple.
Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.
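The assembler's one-statement-to-one-instruction translation can be sketched as a lookup-and-pack step. The 13-bit encoding (5-bit opcode, two 4-bit register fields) and the register numbering below are invented for illustration.

```python
# Sketch of an assembler: translate one assembly statement into one
# machine-language word. Encoding and register numbers are invented.

OPCODES = {"ADD": 0b10010}
REGISTERS = {"A": 0b1100, "B": 0b1011}

def assemble(statement):
    mnemonic, operands = statement.split(maxsplit=1)
    # "ADD B,A": add the contents of register A into register B
    dst, src = [REGISTERS[r.strip()] for r in operands.split(",")]
    return (OPCODES[mnemonic] << 8) | (src << 4) | dst

word = assemble("ADD B,A")
print(format(word, "013b"))   # 1001011001011
```

Because each mnemonic maps to a single bit pattern, this translation is a simple table lookup—which is why assembling is so much faster than compiling a high-level language.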
II- CLASSIFICATION OF HIGH-LEVEL LANGUAGES
High-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as “procedure A.” If the same sequence of operations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just a mini-program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be used in different situations. Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables—symbols for data that can be specified and changed by the user as the program is running—to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared, or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages. Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects. Objects are further grouped into classes, which define the attributes objects must have. A simple example of a class is the class Book. Objects within this class might be Novel and Short Story. 
Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object’s methods. The method performs some action to the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks. Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example: If the statement X is true, then the statement Y is false. In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.
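The Book example from the text can be sketched in Python, a language with object-oriented features. The titles and attributes are invented for illustration; note that in most object-oriented languages Novel would be modeled as a subclass of Book rather than as a single object.

```python
# A class groups data and the code that works on it; a subclass inherits
# methods from its parent class.

class Book:
    def __init__(self, title, pages):
        self.title = title        # data stored in the object
        self.pages = pages

    def describe(self):           # a method: a function tied to the object
        return f"{self.title}, {self.pages} pages"

class Novel(Book):                # Novel inherits describe() from Book
    pass

n = Novel("Dune", 412)
print(n.describe())   # Dune, 412 pages
```

The computer reaches the object's data only through its methods—calling `n.describe()`—which is the access discipline described above.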
III- LANGUAGE STRUCTURE AND COMPONENTS
Programming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea—its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program). Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable—that is, they point to another variable.
An expression is a piece of a statement that describes a series of computations to be performed on some of the program’s variables, such as X + Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived from some expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next. Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit minitranslation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands.
Data Processing, in computer science, the analysis and organization of data by the
repeated use of one or more computer programs. Data processing is used extensively in business, engineering, and science and to an increasing extent in nearly all areas in which computers are used. Businesses use data processing for such tasks as payroll preparation, accounting, record keeping, inventory control, sales analysis, and the processing of bank and credit card account statements. Engineers and scientists use data processing for a wide variety of applications, including the processing of seismic data for oil and mineral exploration, the analysis of new product designs, the processing of satellite imagery, and the analysis of data from scientific experiments. Data processing is divided into two kinds of processing: database processing and transaction processing. A database is a collection of common records that can be searched, accessed, and modified, such as bank account records, school transcripts, and income tax data. In database processing, a computerized database is used as the central source of reference data for the computations. Transaction processing refers to interaction between two computers in which one computer initiates a transaction and another computer provides the first with the data or computation required for that function. Most modern data processing uses one or more databases at one or more central sites. Transaction processing is used to access and update the databases when users need to immediately view or add information; other data processing programs are used at regular intervals to provide summary reports of activity and database status. Examples of systems that involve all of these functions are automated teller machines, credit sales terminals, and airline reservation systems.
THE DATA-PROCESSING CYCLE
The data-processing cycle represents the chain of processing events in most data-processing applications. It consists of data recording, transmission, reporting, storage, and retrieval. The original data is first recorded in a form readable by a computer. This can be accomplished in several ways: by manually entering information into some form of computer memory using a keyboard, by using a sensor to transfer data onto a magnetic tape or floppy disk, by filling in ovals on a computer-readable paper form, or by swiping a credit card through a reader. The data are then transmitted to a computer that performs the data-processing functions. This step may involve physically moving the recorded data to the computer or transmitting it electronically over telephone lines or the Internet. See Information Storage and Retrieval. Once the data reach the computer, the computer processes it. The operations the computer performs can include accessing and updating a database and creating or modifying statistical information. After processing the data, the computer reports summary results to the program’s operator.
As the computer processes the data, it stores both the modifications and the original data. This storage can be both in the original data-entry form and in carefully controlled computer data forms such as magnetic tape. Data are often stored in more than one place for both legal and practical reasons. Computer systems can malfunction and lose all stored data, and the original data may be needed to recreate the database as it existed before the crash. The final step in the data-processing cycle is the retrieval of stored information at a later time. This is usually done to access records contained in a database, to apply new data-processing functions to the data, or—in the event that some part of the data has been lost—to recreate portions of a database. Examples of data retrieval in the data-processing cycle include the analysis of store sales receipts to reveal new customer spending patterns and the application of new processing techniques to seismic data to locate oil or mineral fields that were previously overlooked.
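The cycle described above can be sketched in a few lines of code. This is a toy illustration only; the `database` dictionary, the record fields, and the report format are invented for the example:

```python
# A toy walk through the data-processing cycle: record, transmit,
# process, report, store, retrieve. All names here are illustrative.
database = {}                       # the central store of records


def record(entry):                  # 1. capture data in machine-readable form
    return dict(entry)


def process(db, entry):             # 2-3. update the database and report
    db[entry["account"]] = db.get(entry["account"], 0) + entry["amount"]
    return f"account {entry['account']} balance {db[entry['account']]}"


def retrieve(db, account):          # 5. later retrieval from storage
    return db.get(account, 0)


report = process(database, record({"account": "A-1", "amount": 50}))
process(database, record({"account": "A-1", "amount": -20}))
assert retrieve(database, "A-1") == 30
```

The stored records survive both updates, so a later retrieval reflects every transaction processed so far, just as the cycle requires.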
Information Storage and Retrieval
Information Storage and Retrieval, in computer science, term used to describe the organization, storage, location, and retrieval of encoded information in computer systems. Important factors in storing and retrieving information are the type of media, or storage device, used to store information; the media’s storage capacity; the speed of access and information transfer to and from the storage media; the number of times new information can be written to the media; and how the media interacts with the computer.
TYPES OF INFORMATION STORAGE
Information storage can be classified as permanent, semipermanent, or temporary. Information can also be classified as having been stored to or retrieved from primary or secondary memory. Primary memory, also known as main memory, is the computer’s main random access memory (RAM). All information that is processed by the computer must first pass through main memory. Secondary memory is any form of memory other than the main computer memory, including the hard disk, floppy disks, CD-ROMs, and magnetic tape.
Information is stored permanently on storage media that is written to only once, such as ROM (read-only memory) chips and CD-ROMs (compact disc read-only memory). Permanent storage media is used for archiving information or, in the case of ROM chips, for storing basic information that the computer needs to function that cannot be overwritten.
Semipermanent information storage is also often used for archival purposes, but the media used can be overwritten. A common example of a semipermanent storage material is a floppy disk. The magnetic material that serves as the storage media in a floppy disk can be written to many times when a removable tab is in place. Once this tab is removed, the disk is protected from further overwriting; the write-protection may be bypassed by placing a piece of tape over the hole left by the removal of the tab.
Temporary information storage is used as intermediate storage between permanent or semipermanent storage and a computer’s central processing unit (CPU). Temporary storage is in the form of memory chips called RAM. Information is stored in RAM while it is being used by the CPU; it is then returned to a more permanent form of memory. RAM chips are known as volatile memory because they must have power supplied to them continuously or they lose the contents of their memory.
HOW INFORMATION STORAGE AND RETRIEVAL WORKS
All information processed by a computer can be expressed as numbers in binary notation. These binary numbers are strings of bits, or 1s and 0s, that are grouped together in sequences of eight called bytes. The two values a bit can have are physically stored in a storage media in a variety of ways. For instance, they can be represented by the presence or absence of electrical charge on a pair of closely spaced conductors called a capacitor, by the direction of magnetization on the surface of a magnetic material, or by the presence or absence of a hole in a thin material, such as a plastic disk. In computer systems, the CPU processes information and maintains it temporarily in RAM in the form of strings of bits called files. When this temporary information needs to be stored permanently or semipermanently, the CPU finds unused space on the storage media—generally a hard drive, floppy disk, or magnetic tape. It then instructs a device capable of making changes to the media (called a read/write head) to start transmitting bits. The read/write head and its associated electronics convert each bit to the equivalent physical value on the media. The recording mechanism then moves to the next location on the media capable of storing a bit. When the CPU needs to access some piece of stored information, the process is reversed. The CPU determines where on the physical media the appropriate file is stored, directs the read/write head to position itself at that location on the media, and then directs it to read the information stored there.
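The bit-and-byte encoding just described is easy to demonstrate. A minimal sketch, converting one byte between its numeric value and the 8-bit pattern a storage medium would physically record:

```python
def byte_to_bits(value: int) -> str:
    # Represent one byte (0-255) as an 8-character string of 1s and 0s.
    return format(value, "08b")


def bits_to_byte(bits: str) -> int:
    # Interpret a string of 1s and 0s as a binary number.
    return int(bits, 2)


# The character 'A' has code 65, stored physically as the pattern 01000001.
bits = byte_to_bits(ord("A"))
assert bits == "01000001"
assert chr(bits_to_byte(bits)) == "A"
```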
Information is stored on many different types of media, the most common being floppy disks, hard drives, CD-ROMs, and magnetic tape. Floppy disks are most often used to store material that is not accessed frequently or to back up files contained on a computer’s hard drive. Hard drives are most often used to store information, such as application programs, that is frequently accessed by the user. CD-ROMs are a type of storage medium that is capable of being written to only once but read many times, hence the name read-only memory. CD-ROMs are useful for storing a large amount of information that does not need to be changed or updated by the user. They are usually purchased with information already written to them, although special types of CD drives, called WORM (write once, read many) drives, allow the user to write data to a blank CD once, after which it is a CD-ROM. Magnetic tape is most commonly used in situations where large databases of information need to be stored.
A floppy disk is a thin piece of magnetizable material inside a protective envelope. The size of the disk is usually given as the diameter of the magnetic media, with the two most common sizes being 5.25 inch and 3.5 inch. Although both sizes are called floppies, the name actually comes from the 5.25-inch size, in which both the envelope and the disk itself are thin enough to bend easily. Both sizes of floppies are removable disks—that is, they must be inserted into a compatible disk drive in order to be read from or written to. This drive is usually internal to, or part of, a computer. Inside the drive, a motor spins the disk inside its envelope and a read/write head moves over the surface of the disk on the end of an arm called an actuator. The head in the floppy drive is much like that in a tape recorder. To record information, the head magnetizes a small area on the surface of the disk in a certain direction. To read information stored on the disk, the disk controller—circuitry that controls the disk drive—directs the actuator to the location of the information on the disk. The head then senses the direction of magnetization of a small area on the disk and translates this into a signal that gets stored in RAM until the CPU retrieves it. Most floppy drives today are double sided, with one head on each side of the disk. This doubles the storage capacity of the disk, allowing it to be written to on either side. Information is organized on the disk by dividing the disk into tracks and sectors. Tracks are concentric circular regions on the surface of the disk; sectors are pie-shaped wedges that intersect each of the tracks, further dividing them. Before a floppy disk can be used, the computer must format it by placing special information on the disk that enables the computer to find each track and sector.
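A disk's capacity follows directly from the track-and-sector geometry described above: it is the product of sides, tracks per side, sectors per track, and bytes per sector. A short sketch, using the standard formatted values for a double-sided 3.5-inch high-density floppy:

```python
def disk_capacity(sides, tracks, sectors_per_track, bytes_per_sector):
    # Total capacity is the product of the four geometry factors.
    return sides * tracks * sectors_per_track * bytes_per_sector


# A formatted 3.5-inch high-density floppy: 2 sides, 80 tracks per side,
# 18 sectors per track, 512 bytes per sector.
assert disk_capacity(2, 80, 18, 512) == 1_474_560   # the familiar "1.44 MB"
```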
Hard drives consist of rigid circular platters of magnetizable material sealed in a metal box with associated read/write heads. They are usually internal to a computer. Most hard drives have multiple platters stacked on top of one another, each with its own read/write heads. The media in a hard drive is generally not removable from the drive assembly, although external hard drives do exist with removable hard disks. The read/write heads in a hard drive are precisely aligned with the surfaces of the hard disks, allowing thousands of tracks and dozens of sectors per track. The combination of more heads and more tracks allows hard drives to store more data and to transfer data at a higher rate than floppy disks. Accessing information on a hard disk involves moving the heads to the right track and then waiting for the correct sector to revolve underneath the heads. Seek time is the average time required to move the heads from one track to some other desired track on the disk. The time needed to move from one track to a neighboring track is often in the 1 millisecond (one-thousandth of a second) range, and the average seek time to reach arbitrary tracks anywhere on the disk is in the 6 to 15 millisecond range. Rotational latency is the average time required for the correct sector to come under the heads once they are positioned on the correct track. This time depends on how fast the disk is revolving. Today, many drives run at 60 to 120 revolutions per second or faster, yielding average rotational latencies of a few milliseconds. If a file requires more than one sector for storage, the positions of the sectors on the individual tracks can greatly affect the average access time. Typically, it takes the disk controller a small amount of time to finish reading a sector. If the next sector to be read is the neighboring sector on the track, the electronics may not have enough time to get ready to read it before it rotates under the read/write head.
If this is the case, the drive must wait until the sector comes all the way around again. This access time can be reduced by interleaving, or alternately placing, the sectors on the tracks so that sequential sectors for the same file are separated from each other by one or two sectors. When information is distributed optimally, the device controller is ready to start reading just as the appropriate sector comes under the read/write heads. After many files have been written to and erased from a disk, fragmentation can occur. Fragmentation happens when pieces of single files are inefficiently distributed in many locations on a disk. The result is an increase in the average file access time. This problem can be fixed by running a defragmentation program, which goes through the drive track by track and rearranges the sectors for each file so that they can be accessed more quickly.
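Both rotational latency and sector interleaving can be computed directly. A sketch (the eight-sector track and the 2:1 interleave factor are example values chosen for illustration):

```python
def avg_rotational_latency_ms(revs_per_second: float) -> float:
    # On average the desired sector is half a revolution away,
    # so the mean wait is half the time of one full revolution.
    return 0.5 * 1000.0 / revs_per_second


def interleave(n_sectors: int, factor: int) -> list:
    # Lay out logical sectors around the track so that consecutive
    # logical sectors sit `factor` physical positions apart, giving
    # the controller time to finish one sector before the next arrives.
    layout = [None] * n_sectors
    pos = 0
    for logical in range(n_sectors):
        while layout[pos % n_sectors] is not None:
            pos += 1                        # skip already-filled slots
        layout[pos % n_sectors] = logical
        pos += factor
    return layout


# At 120 revolutions per second, the average latency is about 4.17 ms.
assert round(avg_rotational_latency_ms(120), 2) == 4.17

# A 2:1 interleave on an eight-sector track alternates the sectors.
assert interleave(8, 2) == [0, 4, 1, 5, 2, 6, 3, 7]
```

With a factor of 1 the layout degenerates to the plain sequential order, which is exactly the case in which the controller may miss the next sector and wait a full revolution.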
Unlike floppy drives, in which the read/write heads actually touch the surface of the material, the heads in most hard disks float slightly off the surface. When the heads accidentally touch the media, either because the drive is dropped or bumped hard or because of an electrical malfunction, the surface becomes scratched. Any data stored where the head has touched the disk is lost. This is called a head crash. To help reduce the possibility of a head crash, most disk controllers park the heads over an unused track on the disk when the drive is not being used by the CPU.
While magnetic material is the dominant media for read/write information storage (files that are read from and written to frequently), other media have become popular for more permanent storage applications. One of the most common alternative information storage media is the CD-ROM. CD-ROMs are plastic disks on which individual bits are stored as pits burned onto the surface of the disk by high-powered lasers. The surface of the disk is then covered with a layer of reflecting material such as aluminum. The computer uses a CD-ROM drive to access information on the CD-ROM. The drive may be external to, or part of, the computer. A light-sensitive instrument in the drive reads the disk by watching the amount of light reflected back from a smaller laser positioned over the spinning disk. Such disks can hold large amounts of information, but can only be written to once. The drives capable of writing to CD-ROMs are called write once, read many (WORM) drives. Due to their inexpensive production costs, CD-ROMs are widely used today for storing music, video, and application programs.
Magnetic tape has served as a very efficient and reliable information storage media since the early 1950s. Most magnetic tape is made of mylar, a type of strong plastic, into which metallic particles have been embedded. A read/write head identical to those used for audio tape reads and writes binary information to the tape. Reel-to-reel magnetic tape is commonly used to store information for large mainframe or supercomputers. High-density cassette tapes, resembling audio cassette tapes, are used to store information for personal computers and mainframes. Magnetic tape storage has the advantage of being able to hold enormous amounts of data; for this reason it is used to store information on the largest computer systems. However, magnetic tape has two major shortcomings: It has a very slow data access time when compared to other forms of storage media, and access to information on magnetic tape is sequential. In sequential data storage, data are stored with the first bit at the beginning of the tape and the last bit at the end of the tape, in a linear fashion. To access a random bit of information, the tape drive has to forward or reverse through the tape until it finds the location of the bit. The bits closest to the location of the read/write head can be accessed relatively quickly, but bits far away may take considerable time to access. RAM, on the other hand, is random access, meaning that it can locate any one bit as easily as any other.
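The cost of sequential access is easy to model. A toy sketch in which each intervening tape position costs one head move, while random-access memory answers any address in constant time:

```python
class Tape:
    """Toy model of sequential access: the head must pass every
    intervening position on the tape to reach the target."""

    def __init__(self, data):
        self.data = list(data)
        self.head = 0        # current head position
        self.moves = 0       # total positions traversed so far

    def read(self, position):
        self.moves += abs(position - self.head)   # wind to the target
        self.head = position
        return self.data[position]


tape = Tape(range(10_000))
tape.read(9_999)      # far from the head: 9,999 positions traversed
tape.read(9_998)      # adjacent to the head: only 1 more move
assert tape.moves == 10_000
```

A bit near the head is cheap and a bit far away is expensive, which is exactly the asymmetry that makes tape poor for random lookups but fine for bulk archival reads.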
Other Types of Storage Media
Variations in hard and floppy disk drive technology are used in read-mostly drives, in which the same drive media may be written to multiple times, although at much slower rates than data can be read. In magneto-optical (MO) drives, a strong laser heats up and re-orients metallic crystals in the surface of the MO disk, effectively erasing any information stored on the disk. To write to the MO disk, an electromagnetic head similar to that in a floppy drive aligns, or orients, the magnetic crystals in one of two directions while the laser is on, thus storing information in a binary form. To read the disk, a light-sensitive instrument reads the light from a separate, lower-power laser that reflects light from the crystals. The crystals polarize the reflected light in one of two directions depending on which way they point. Another type of storage media, called flash memory, traps small amounts of electric charge in “wells” on the surface of a chip. Side effects of this trapped charge, such as the electric field it creates, are later used to read the stored value. To rewrite to flash memory, the charges in the wells must first be drained. Such drives are useful for storing information that changes infrequently.
Although magnetic and CD-ROM technologies continue to increase in storage density, a variety of new technologies are emerging. Redundant Arrays of Independent Disks (RAIDs) are storage systems that look like one device but are actually composed of multiple hard disks. These systems provide more storage and also read data simultaneously from many drives. The result is a faster rate of data transfer to the CPU, which is important for many very high speed computer applications, especially those involving large databases of information. Several experimental technologies offer the potential for storage densities that are thousands or millions of times better than is possible today. Some approaches use individual molecules, sometimes at superconducting temperatures, to trap very small magnetic fields or electrical charges for data storage. In other technologies, large two-dimensional data sets such as pictures are stored as holograms in cubes of material. Individual bits are not stored at any one location, but instead are spread out over a much larger area and mixed in with other bits. Loss of information from any one spot thus does not cause the irreplaceable loss of any one bit of information.
Database, any collection of data organized for storage in a computer memory and designed for easy access by authorized users. The data may be in the form of text, numbers, or encoded graphics. Since their first, experimental appearance in the 1950s, databases have become so important in industrial societies that they can be found in almost every field of information. Government, military, and industrial databases are often highly restricted, and professional databases are usually of limited interest. A wide range of commercial, governmental, and nonprofit databases are available to the general public, however, and may be used by anyone who owns or has access to the equipment that they require. Small databases were first developed or funded by the U.S. government for agency or professional use. In the 1960s, some databases became commercially available, but their use was funneled through a few so-called research centers that collected information inquiries and handled them in batches. Online databases—that is, databases available to anyone who could link up to them by computer—first appeared in the 1970s. For the home user, the equipment required may include a computer terminal and a connection to the Internet or other network, including a telephone and a modem, or a digital cable line. Modified television sets can also be equipped to receive some specifically designed database services. In some older systems, the user dialed the telephone number of a service, provided a password code for identification and billing, and typed in questions to a chosen database on the terminal’s keyboard. Databases available over the Internet and as Web pages on the World Wide Web can also require signing in and passwords. The data received may be displayed on a terminal screen, copied or downloaded to the computer, or printed out.
Computer Memory, a mechanism that stores data for use by a computer. In a computer, all data consist of numbers. A computer stores a number into a specific location in memory and later fetches the value. Most memories represent data with the binary number system. In the binary number system, numbers are represented by sequences of the two binary digits 0 and 1, which are called bits (see Number Systems). In a computer, the two possible values of a bit correspond to the on and off states of the computer's electronic circuitry. In memory, bits are grouped together so they can represent larger values. A group of eight bits is called a byte and can represent decimal numbers ranging from 0 to 255. The particular sequence of bits in the byte encodes a unit of information, such as a keyboard character. One byte typically represents a single character such as a number, letter, or symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes called words. Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes. Although the prefixes kilo-, mega-, and giga- are taken from the metric system, they have a slightly different meaning when applied to computer memories. In the metric system, kilo- means 1 thousand; mega-, 1 million; and giga-, 1 billion. When applied to computer memory, however, the prefixes are measured as powers of two, with kilo- meaning 2 raised to the 10th power, or 1,024; mega- meaning 2 raised to the 20th power, or 1,048,576; and giga- meaning 2 raised to the 30th power, or 1,073,741,824. Thus, a kilobyte is 1,024 bytes and a megabyte is 1,048,576 bytes. It is easier to remember that a kilobyte is approximately 1,000 bytes, a megabyte is approximately 1 million bytes, and a gigabyte is approximately 1 billion bytes.
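The power-of-two prefixes are straightforward to verify in a few lines:

```python
# Binary-prefix values used for memory capacity.
KILO, MEGA, GIGA = 2**10, 2**20, 2**30

assert KILO == 1_024
assert MEGA == 1_048_576
assert GIGA == 1_073_741_824

# Each step up is a factor of 1,024, not 1,000.
assert MEGA == KILO * KILO
assert GIGA == KILO * MEGA

# A byte of 8 bits can encode 2**8 = 256 distinct values (0 through 255).
assert 2**8 == 256
```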
HOW MEMORY WORKS
Computer memory may be divided into two broad categories known as internal memory and external memory. Internal memory operates at the highest speed and can be accessed directly by the central processing unit (CPU)—the main electronic circuitry within a computer that processes information. Internal memory is contained on computer chips and uses electronic circuits to store information (see Microprocessor). External memory consists of storage on peripheral devices that are slower than internal memories but offer lower cost and the ability to hold data after the computer’s power has been turned off. External memory uses inexpensive mass-storage devices such as magnetic hard drives. See also Information Storage and Retrieval. Internal memory is also known as random access memory (RAM) or read-only memory (ROM). Information stored in RAM can be accessed in any order, and may be erased or written over. Information stored in ROM may also be random-access, in that it may be accessed in any order, but the information recorded on ROM is usually permanent and cannot be erased or written over.
Random access memory is also called main memory because it is the primary memory that the CPU uses when processing information. The electronic circuits used to construct this main internal RAM can be classified as dynamic RAM (DRAM), synchronized dynamic RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all involve different ways of using transistors and capacitors to store data. In DRAM or SDRAM, the circuit for each bit consists of a transistor, which acts as a switch, and a capacitor, a device that can store a charge. To store the binary value 1 in a bit, DRAM places an electric charge on the capacitor. To store the binary value 0, DRAM removes all electric charge from the capacitor. The transistor is used to switch the charge onto the capacitor. When it is turned on, the transistor acts like a closed switch that allows electric current to flow into the capacitor and build up a charge. The transistor is then turned off, meaning that it acts like an open switch, leaving the charge on the capacitor. To store a 0, the charge is drained from the capacitor while the transistor is on, and then the transistor is turned off, leaving the capacitor uncharged. To read a value in a DRAM bit location, a detector circuit determines whether a charge is present or absent on the relevant capacitor. DRAM is called dynamic because it is continually refreshed. The memory chips themselves cannot hold values over long periods of time. Because capacitors are imperfect, the charge slowly leaks out of them, which results in loss of the stored data. Thus, a DRAM memory system contains additional circuitry that periodically reads and rewrites each data value. This replaces the charge on the capacitors, a process known as refreshing memory.

The major difference between SDRAM and DRAM arises from the way in which refresh circuitry is created. DRAM contains separate, independent circuitry to refresh memory. The refresh circuitry in SDRAM is synchronized to use the same hardware clock as the CPU. The hardware clock sends a constant stream of pulses through the CPU’s circuitry. Synchronizing the refresh circuitry with the hardware clock results in less duplication of electronics and better access coordination between the CPU and the refresh circuits.

In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value without the need for refresh. The chief advantage of SRAM lies in its speed. A computer can access data in SRAM more quickly than it can access data in DRAM or SDRAM. However, the SRAM circuitry draws more power and generates more heat than DRAM or SDRAM. The circuitry for an SRAM bit is also larger, which means that an SRAM memory chip holds fewer bits than a DRAM chip of the same size. Therefore, SRAM is used when access speed is more important than large memory capacity or low power consumption.

The time it takes the CPU to transfer data to or from memory is particularly important because it determines the overall performance of the computer. The time required to read or write one bit is known as the memory access time. Current DRAM and SDRAM access times are between 30 and 80 nanoseconds (billionths of a second). SRAM access times are typically four times faster than DRAM. The internal RAM on a computer is divided into locations, each of which has a unique numerical address associated with it.
In some computers a memory address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word. Computers also exist in which a word consists of two or eight bytes, or in which a byte consists of six or ten bits. When a computer performs an arithmetic operation, such as addition or multiplication, the numbers used in the operation can be found in memory. The instruction code that tells the computer which operation to perform also specifies which memory address or addresses to access. An address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits in the memory use the address to select the bits at the specified location in RAM and send a copy of the data back to the CPU over another set of wires called a data bus. Inside the CPU, the data passes through circuits called the data path to the circuits that perform the arithmetic operation. The exact details depend on the model of the CPU. For example, some CPUs use an intermediate step in which the data is first loaded into a high-speed memory device within the CPU called a register.
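The store-and-fetch round trip over the address and data buses can be sketched with a toy byte-addressable memory. The class and method names here are invented for illustration; a real memory controller is of course hardware, not Python:

```python
class MainMemory:
    """Toy byte-addressable RAM: the 'address bus' is the index into
    the cell array, and the 'data bus' is the value moved in or out."""

    def __init__(self, size):
        self.cells = bytearray(size)     # every cell starts at 0

    def store(self, address, value):
        self.cells[address] = value & 0xFF   # one byte (0-255) per cell

    def fetch(self, address):
        return self.cells[address]


ram = MainMemory(1024)
ram.store(0x10, 65)          # CPU sends address 0x10 and the value 65
assert ram.fetch(0x10) == 65  # a later fetch of the same address returns it
```

The mask `& 0xFF` mirrors the fact that a single byte-wide cell can only hold values 0 through 255; a wider value would occupy several consecutive cells (a word) in a real machine.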
Read-only memory is the other type of internal memory. ROM memory is used to store items that the computer needs to execute when it is first turned on. For example, the ROM memory on a PC contains a basic set of instructions, called the basic input-output system (BIOS). The PC uses BIOS to start up the operating system. BIOS is stored on computer chips in a way that causes the information to remain even when power is turned off. Information in ROM is usually permanent and cannot be erased or written over easily. A ROM is permanent if the information cannot be changed—once the ROM has been created, information can be retrieved but not changed. Newer technologies allow ROMs to be semi-permanent—that is, the information can be changed, but it takes several seconds to make the change. For example, a FLASH memory acts like a ROM because values remain stored in memory, but the values can be changed.
External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device, such as a computer's hard drive, uses a surface coated with material that can be magnetized in two possible ways. The surface rotates under a small electromagnet that magnetizes each spot on the surface to record a 0 or 1. To retrieve data, the surface passes under a sensor that determines whether the magnetism was set for a 0 or 1. Optical storage devices such as a compact disc (CD) player use lasers to store and retrieve information from a plastic disk. Magneto-optical memory devices use a combination of optical storage and retrieval technology coupled with a magnetic medium.
Memory stored on external magnetic media include magnetic tape, a hard disk, and a floppy disk. Magnetic tape is a form of external computer memory used primarily for backup storage. Like the surface on a magnetic disk, the surface of tape is coated with a material that can be magnetized. As the tape passes over an electromagnet, individual bits are magnetically encoded. Computer systems using magnetic tape storage devices employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes, and helical-scan tapes (similar to video tape). Another form of magnetic memory uses a spinning disk coated with magnetic material. As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the surface of the disk, reading and writing magnetic spots in concentric circles called tracks. Magnetic disks are classified as either hard or floppy, depending on the flexibility of the material from which they are made. A floppy disk is made of flexible plastic with small pieces of a magnetic material imbedded in its surface. The read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of a rigid metal, with the read-write head flying just above its surface on a cushion of air to prevent wear.
Optical external memory uses a laser to scan a spinning reflective disk in which the presence or absence of nonreflective pits in the disk indicates 1s or 0s. This is the same technology employed in the audio CD. Because its contents are permanently stored on it when it is manufactured, it is known as compact disc-read only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written permanently on it by a computer.
Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of the disk heating it up slightly. This allows the magnetic write-head to change the physical orientation of small grains of magnetic material (actually tiny crystals) on the surface of the disk. These tiny crystals reflect light differently depending on their orientation. By aligning the crystals in one direction a 0 can be stored, while aligning the crystals in the opposite direction stores a 1. Another, separate, low-power laser is used to read data from the disk in a way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they can be read and written to. They are, however, more expensive than CD-ROMs and are used mostly in industrial applications. MO devices are not popular consumer products.
CPU speeds continue to increase much more rapidly than memory access times decrease. The result is a growing gap in performance between the CPU and its main RAM memory. To compensate for the growing difference in speeds, engineers add layers of cache memory between the CPU and the main memory. A cache consists of a small, high-speed memory system that holds recently used values. When the CPU makes a request to fetch or store a memory value, the CPU sends the request to the cache. If the item is already present in the cache, the cache can honor the request quickly because the cache operates at higher speed than main memory. For example, if the CPU needs to add two numbers, retrieving the values from the cache can take less than one-tenth as long as retrieving the values from main memory. However, because the cache is smaller than main memory, not all values can fit in the cache at one time. Therefore, if the requested item is not in the cache, the cache must fetch the item from main memory. Cache cannot replace conventional RAM because cache is much more expensive and consumes more power. However, research has shown that even a small cache that can store only 1 percent of the data stored in main memory still provides a significant speedup for memory access. Therefore, most computers include a small, external memory cache attached to their RAM. More important, multiple caches can be arranged in a hierarchy to lower memory access times even further. In addition, most CPUs now have a cache on the CPU chip itself. The on-chip internal cache is smaller than the external cache, which is smaller than RAM. The advantage of the on-chip cache is that once a data item has been fetched from the external cache, the CPU can use the item without having to wait for an external cache access.
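The hit-or-miss behavior of a cache can be sketched with a toy model. The oldest-first eviction policy below is a simplification chosen for the example; real caches use more refined replacement schemes:

```python
class CachedMemory:
    """A small cache in front of slow main memory: recently used
    addresses are answered from the cache (a hit); others require
    a main-memory fetch (a miss)."""

    def __init__(self, backing, capacity=4):
        self.backing = backing       # the (slow) main memory
        self.capacity = capacity     # far smaller than main memory
        self.cache = {}              # address -> value, insertion-ordered
        self.hits = self.misses = 0

    def fetch(self, address):
        if address in self.cache:
            self.hits += 1                       # fast path
            return self.cache[address]
        self.misses += 1                         # slow path: go to memory
        value = self.backing[address]
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # evict the oldest entry
        self.cache[address] = value
        return value


mem = CachedMemory(backing=list(range(100)))
for addr in [5, 6, 5, 6, 5]:     # repeated accesses hit the cache
    mem.fetch(addr)
assert (mem.hits, mem.misses) == (3, 2)
```

Only the first access to each address pays the main-memory cost; the repeats are served at cache speed, which is why even a tiny cache speeds up typical access patterns.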
DEVELOPMENTS AND LIMITATIONS
Since the inception of computer memory, the capacity of both internal and external memory devices has grown steadily at a rate that leads to a quadrupling in size every three years. Computer industry analysts expect this rapid rate of growth to continue unimpeded. Computer engineers consider it possible to make multigigabyte memory chips and disks capable of storing a terabyte (one trillion bytes) of memory. Some computer engineers are concerned that the silicon-based memory chips are approaching a limit in the amount of data they can hold. However, it is expected that transistors can be made at least four times smaller before inherent limits of physics make further reductions difficult. Engineers also expect that the external dimensions of memory chips will increase by a factor of four, meaning that larger amounts of memory will fit on a single chip. Current memory chips use only a single layer of circuitry, but researchers are working on ways to stack multiple layers onto one chip. Once all of these approaches are exhausted, RAM memory may reach a limit. Researchers, however, are also exploring more exotic technologies with the potential to provide even more capacity, including the use of biotechnology to produce memories out of living cells. The memory in a computer is composed of many memory chips. While current memory chips contain megabytes of RAM, future chips will likely have gigabytes of RAM on a single chip. To add to RAM, computer users can purchase memory cards that each contain many memory chips. In addition, future computers will likely have advanced data transfer capabilities and additional caches that enable the CPU to access memory faster.
Network (computer science)
Network (computer science), a system used to link two or more computers. Network users are able to share files, printers, and other resources; send electronic messages; and run programs on other computers. A network has three layers of components: application software, network software, and network hardware. Application software consists of computer programs that interface with network users and permit the sharing of information, such as files, graphics, and video, and resources, such as printers and disks. One type of application software is called client-server. Client computers send requests for information or requests to use resources to other computers, called servers, that control data and applications. Another type of application software is called peer-to-peer. In a peer-to-peer network, computers send messages and requests directly to one another without a server intermediary. Network software consists of computer programs that establish protocols, or rules, for computers to talk to one another. These protocols are carried out by sending and receiving formatted units of data called packets. Protocols make logical connections between network applications, direct the movement of packets through the physical network, and minimize the possibility of collisions between packets sent at the same time. Network hardware is made up of the physical components that connect computers. Two important components are the transmission media that carry the computer's signals, typically on wires or fiber-optic cables, and the network adapter, which accesses the physical media that link computers, receives packets from network software, and transmits instructions and requests to other computers. Transmitted information is in the form of binary digits, or bits (1s and 0s), which the computer's electronic circuitry can process.
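The idea of a packet as a formatted unit of data can be sketched with Python's standard `struct` module. The header layout here (source, destination, sequence number, payload length) is invented for illustration; real protocols each define their own field formats.

```python
import struct

# A hypothetical packet header: source address, destination address,
# sequence number, and payload length, all in network (big-endian) order.
HEADER = struct.Struct("!HHIH")  # 2 + 2 + 4 + 2 = 10 header bytes

def make_packet(src, dst, seq, payload):
    return HEADER.pack(src, dst, seq, len(payload)) + payload

def parse_packet(data):
    src, dst, seq, length = HEADER.unpack(data[:HEADER.size])
    return src, dst, seq, data[HEADER.size:HEADER.size + length]

pkt = make_packet(src=1, dst=2, seq=42, payload=b"hello")
print(parse_packet(pkt))  # (1, 2, 42, b'hello')
```

The sender packs fields into bytes, the network carries the bytes, and the receiver unpacks the same layout: this is what "formatted" means in practice.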
A network has two types of connections: physical connections that let computers directly transmit and receive signals and logical, or virtual, connections that allow computer applications, such as e-mail programs and the browsers used to explore the World Wide Web, to exchange information. Physical connections are defined by the medium used to carry the signal, the geometric arrangement of the computers (topology), and the method used to share information. Logical connections are created by network protocols and allow data sharing between applications on different types of computers, such as an Apple Macintosh or a personal computer (PC) running the Microsoft Corporation Windows operating system, in a network. Some logical connections use client-server application software and are primarily for file and printer sharing. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite, originally developed by the United States Department of Defense, is the set of logical connections used by the Internet, the worldwide consortium of computer networks. TCP/IP, based on peer-to-peer application software, creates a connection between any two computers.
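A TCP logical connection of the kind described above can be demonstrated in miniature on a single machine using Python's standard sockets: a server thread accepts a connection and echoes one message back to the client. This is a sketch of the connection mechanics, not of any particular application protocol.

```python
import socket
import threading

def run_server(srv):
    conn, _ = srv.accept()        # wait for one client connection
    with conn:
        conn.sendall(conn.recv(1024))  # echo the received bytes back

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=run_server, args=(srv,), daemon=True).start()

# The client side: TCP hides the physical path entirely; the two
# endpoints just see a reliable byte stream.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ping")
    echoed = client.recv(1024)
print(echoed)  # b'ping'
```

The same client code would work unchanged whether the server were a Macintosh, a Windows PC, or a machine across the world, which is exactly the point of a logical connection.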
The medium used to transmit information limits the speed of the network, the effective distance between computers, and the network topology. Copper wires and coaxial cable provide transmission speeds of a few thousand bits per second for long distances and about 100 million bits per second for short distances. (A million bits is equal to one megabit, and one megabit per second is abbreviated Mbps.) Optical fibers carry 100 million to 40 billion bits of information per second over long distances. (A billion bits is equal to one gigabit, and a billion bits per second is abbreviated Gbps.) Wireless networks, often used to connect mobile, or laptop, computers, send information using infrared or radio-frequency transmitters. Infrared wireless local area networks (LANs) work only within a room, while wireless LANs based on radio-frequency transmissions can penetrate most walls. Wireless LANs using Wi-Fi technology have capacities of around 54 Mbps and operate at distances up to a few hundred meters. Wireless communications for wide area networks (WANs) use cellular radio telephone networks, satellite transmissions, or dedicated equipment to provide regional or global coverage. Although transmission speeds continue to improve, today’s wide area cellular networks run at speeds ranging from 14 to 230 kilobits per second. (A kilobit is equal to 1,000 bits, and one kilobit per second is abbreviated Kbps.) Some networks use a home’s existing telephone and power lines to connect multiple machines. HomePNA networks,
which use phone lines, can transmit data as fast as 128 Mbps, and similar speeds are available on Power Line or HomePlug networks.
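The capacities quoted above translate directly into transfer times, since a rate in bits per second divides into a payload measured in bits. A quick back-of-the-envelope sketch:

```python
# Transfer time implied by a link's capacity.
# Units: 1 Kbps = 1,000 bits/s; 1 Mbps = 1,000,000 bits/s.
def transfer_seconds(size_bytes, rate_bits_per_sec):
    return size_bytes * 8 / rate_bits_per_sec

one_megabyte = 1_000_000
print(transfer_seconds(one_megabyte, 56_000))      # 56 Kbps modem: ≈ 142.9 s
print(transfer_seconds(one_megabyte, 54_000_000))  # 54 Mbps Wi-Fi: ≈ 0.148 s
```

The thousandfold spread between a cellular-class link and an optical backbone is why the choice of medium dominates network design.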
Common topologies used to arrange computers in a network are point-to-point, bus, star, ring, and mesh. Point-to-point topology is the simplest, consisting of two connected computers. The bus topology is composed of a single link connected to many computers. All computers on this common connection receive all signals transmitted by any attached computer. The star topology connects many computers to a common hub computer. This hub can be passive, repeating any input to all computers similar to the bus topology, or it can be active, selectively switching inputs to specific destination computers. The ring topology uses multiple links to form a circle of computers. Each link carries information in one direction. Information moves around the ring in sequence from its source to its destination. On a mesh network, topology can actually change on the fly. No central device oversees a mesh network, and no set route is used to pass data back and forth between computers. Instead, each computer includes everything it needs to serve as a relay point for sending information to any other computer on the network. Thus, if any one computer is damaged or temporarily unavailable, information is dynamically rerouted to other computers—a process known as self-healing. See Computer Architecture. LANs commonly use bus, star, or ring topologies. WANs, which connect distant equipment across the country or internationally, often use special leased telephone lines as point-to-point links.
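One practical difference between these topologies is how many point-to-point links each one needs for n computers, which drives cost and fault tolerance. A sketch of the link counts:

```python
# Number of point-to-point links needed to connect n computers.
def star_links(n):
    return n - 1                 # one link from each computer to the hub

def ring_links(n):
    return n                     # links form a closed circle

def full_mesh_links(n):
    return n * (n - 1) // 2      # every pair of computers directly linked

n = 6
print(star_links(n), ring_links(n), full_mesh_links(n))  # 5 6 15
```

The quadratic growth of the full mesh explains why real mesh networks relay packets through neighbors rather than wiring every pair directly.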
When computers share physical connections to transmit information packets, a set of Media Access Control (MAC) protocols is used to allow information to flow smoothly through the network. An efficient MAC protocol ensures that the transmission medium is not idle if computers have information to transmit. It also prevents collisions due to simultaneous transmission that would waste media capacity. MAC protocols also allow different computers fair access to the medium. One type of MAC is Ethernet, which is used by bus or star network topologies. An Ethernet-linked computer first checks if the shared medium is in use. If not, the computer transmits. Since two computers can both sense an idle medium and send packets at the same time, transmitting computers continue to monitor the shared connection and stop transmitting information if a collision occurs. When used on local area networks, Ethernet typically transmits information at a rate of either 10 or 100 Mbps, but newer wide-area technologies are capable of speeds as high as 10 gigabits per second (Gbps). Computers also can use Token Ring MAC protocols, which pass a special message called a token through the network. This token gives the computer permission to send a packet of information through the network. If a computer receives the token, it sends a packet, or, if it has no packet to send, it passes the token to the next computer. Since there is only one token in the network, only one computer can transmit information at a time. Token Ring networks are now quite rare. Most LANs now use Ethernet technology. International Business Machines Corporation (IBM), the company that invented Token Ring in the early 1980s, no longer promotes the technology. In the mid-1990s a new protocol called Asynchronous Transfer Mode (ATM) was introduced. This protocol encodes data in fixed-sized packets called cells rather than the variable-sized packets used on an Ethernet network.
It was designed as a way of merging old, circuit-switched telephone networks with more modern packet-switched computer networks in order to deliver data, voice, and video over the same channel. This can now be done with other protocols as well. Capable of speeds of nearly 10 Gbps, ATM is often used in wide area networks, but never really caught on with LANs.
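ATM's fixed cell size is concrete: each cell is 53 bytes, a 5-byte header plus 48 bytes of payload. The sketch below segments a message into cells; the header contents are simplified to a bare sequence number, whereas real ATM headers carry virtual-circuit identifiers and control bits.

```python
# Segment a message into ATM-style 53-byte cells (5-byte header + 48-byte payload).
CELL_PAYLOAD = 48

def segment(message: bytes):
    cells = []
    for seq, start in enumerate(range(0, len(message), CELL_PAYLOAD)):
        chunk = message[start:start + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")  # pad the final, short cell
        header = seq.to_bytes(5, "big")             # stand-in for the real 5-byte header
        cells.append(header + chunk)
    return cells

cells = segment(b"x" * 100)       # 100 bytes -> cells of 48 + 48 + 4 (padded)
print(len(cells), len(cells[0]))  # 3 53
```

Fixed-size cells made switching hardware simple and delays predictable, which is why ATM suited mixed voice and data traffic.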
NETWORK OPERATION AND MANAGEMENT
Network management and system administration are critical for a complex system of interconnected computers and resources to remain operating. A network manager is the person or team of people responsible for configuring the network so that it runs efficiently. For example, the network manager might need to connect computers that communicate frequently to reduce interference with other computers. The system administrator is the person or team of people responsible for configuring the computer and its software to use the network. For example, the system administrator may install network software and configure a server's file system so client computers can access shared files. Networks are subject to hacking, or illegal access, so shared files and resources must be protected. A network intruder could eavesdrop on packets being sent across a network or send fictitious messages. For sensitive information, data encryption (scrambling data using mathematical equations) renders captured packets unreadable to an intruder. Most servers also use authentication schemes to ensure that a request to read or write files or to use resources is from a legitimate client and not from an intruder. See Computer Security.
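The authentication idea mentioned above, proving a request came from a legitimate client rather than an intruder, is commonly built on a keyed hash: both sides share a secret, and each request carries a tag the server recomputes and compares. A minimal sketch using Python's standard `hmac` module; the key and request text are placeholders.

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # placeholder key known to client and server

def sign(message: bytes) -> bytes:
    # Keyed hash (HMAC-SHA256) over the request
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison thwarts timing attacks
    return hmac.compare_digest(sign(message), tag)

request = b"read /shared/report.txt"
tag = sign(request)
print(verify(request, tag))                    # True: legitimate client
print(verify(b"read /secret/passwords", tag))  # False: altered or forged request
```

An eavesdropper who captures packets cannot forge new requests without the key, though encryption is still needed to keep the contents themselves private.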
FUTURE TECHNOLOGIES AND TRENDS
As of 2005, much of the Internet’s backbone—that is, its core infrastructure connecting so many of the world’s PCs—had been converted to 10 Gbps Ethernet, and researchers were working to develop even faster technologies. Ethernet speeds tend to grow by a factor of ten every half decade, and if this trend continues, speeds as high as 100 Gbps can be expected by 2008. Many researchers, however, believe that 100 Gbps is a bit further off, expecting an intermediate stop at 20 or 40 Gbps. On the wireless side, some researchers are working on a new local area standard, known as 802.11n, which would double transmission speeds for wireless devices to nearly 200 Mbps. Other researchers are developing a new standard called Unlicensed Mobile Access (UMA), which would allow wireless mobile devices to seamlessly move from local area networks to wide area cellular networks and back again. Currently, there is no way for personal digital assistant (PDA) handhelds and cell phones to move automatically between a wireless LAN and a wireless WAN or vice versa. Service is interrupted, and a manual adjustment must be made on the device for wireless service to continue.
Telecommunications, devices and systems that transmit electronic or optical signals
across long distances. Telecommunications enables people around the world to contact one another, to access information instantly, and to communicate from remote areas. Telecommunications usually involves a sender of information and one or more recipients linked by a technology, such as a telephone system, that transmits information from one place to another. Telecommunications enables people to send and receive personal messages across town, between countries, and to and from outer space. It also provides the key medium for delivering news, data, information, and entertainment. Telecommunications devices convert different types of information, such as sound and video, into electronic or optical signals. Electronic signals typically travel along a medium such as copper wire or are carried over the air as radio waves. Optical signals typically travel along a medium such as strands of glass fibers. When a signal reaches its destination, the device on the receiving end converts the signal back into an understandable message, such as sound over a telephone, moving images on a television, or words and pictures on a computer screen. Telecommunications messages can be sent in a variety of ways and by a wide range of devices. The messages can be sent from one sender to a single receiver (point-to-point) or from one sender to many receivers (point-to-multipoint). Personal communications, such as a telephone conversation between two people or a facsimile (fax) message (see Facsimile Transmission), usually involve point-to-point transmission. Point-to-multipoint telecommunications, often called broadcasts, provide the basis for commercial radio and television programming.
HOW TELECOMMUNICATIONS WORKS
Telecommunications begins with messages that are converted into electronic or optical signals. Some signals, such as those that carry voice or music, are created in an analog or wave format, but may be converted into a digital or mathematical format for faster and more efficient transmission. The signals are then sent over a medium to a receiver, where they are decoded back into a form that the person receiving the message can understand. There are a variety of ways to create and decode signals, and many different ways to transmit signals.
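The analog-to-digital conversion step can be sketched concretely: sample the wave at regular intervals, then quantize each sample to one of a small number of levels. The frequency, sampling rate, and 16-level (4-bit) depth below are illustrative values, far coarser than real telephony codecs.

```python
import math

# Digitize an analog tone: sample it, then quantize each sample to 16 levels.
def digitize(freq_hz=440, sample_rate=8000, n_samples=8, levels=16):
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        amplitude = math.sin(2 * math.pi * freq_hz * t)    # analog value in [-1, 1]
        level = round((amplitude + 1) / 2 * (levels - 1))  # map to integers 0..15
        samples.append(level)
    return samples

samples = digitize()
print(samples)  # each integer stands in for one 4-bit binary measurement
```

Each quantized level becomes a short string of 1s and 0s, which is the "multiple series of binary numbers" the transmitter actually sends.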
Creating and Receiving the Signal
Devices such as the telegraph and telephone relay messages by creating modulated electrical impulses, or impulses that change in a systematic way. These impulses are then sent along wires, through the air as radio waves, or via other media to a receiver that decodes the modulation. The telegraph, the earliest method of delivering telecommunications, works by converting the contacts (connections between two conductors that permit a flow of current) between a telegraph key and a metal conductor into electrical impulses. These impulses are sent along a wire to a receiver, which converts the impulses into short and long bursts of sound or into dots and dashes on a simple printing device. Specific sequences of dots and dashes represent letters of the alphabet. In the early days of the telegraph, these sequences were decoded by telegraph operators (see Morse Code, International). In this way, telegraph operators could transmit and receive letters that spelled words. Later versions of the telegraph could decipher letters and numbers automatically. Telegraphs have been largely replaced by other forms of telecommunications, such as electronic mail (e-mail), but they are still used in some parts of the world to send messages. The telephone uses a diaphragm (small membrane) connected to a magnet and a wire coil to convert sound into an analog or electrical waveform representation of the sound. When a person speaks into the telephone’s microphone, sound waves created by the voice vibrate the diaphragm, which in turn creates electrical impulses that are sent along a telephone wire. The receiver’s wire is connected to a speaker, which converts the modulated electrical impulses back into sound. Broadcast radio and cellular radio telephones are examples of devices that create signals by modulating radio waves. A radio wave is one type of electromagnetic radiation, a form of energy that travels in waves. 
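The telegraph's encoding scheme is easy to show directly: each letter maps to a fixed sequence of dots and dashes. The table below lists only a few entries of International Morse Code for illustration.

```python
# A few entries of International Morse Code (dots and dashes per letter).
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-", "A": ".-", "N": "-."}

def to_morse(text: str) -> str:
    # One space separates each encoded letter.
    return " ".join(MORSE[ch] for ch in text.upper())

print(to_morse("SOS"))  # ... --- ...
```

An operator keying this sequence produces the short and long bursts the receiving end decodes back into letters.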
Microwaves are also electromagnetic waves, but with shorter wavelengths and higher frequencies. In telecommunications, a transmitter creates and emits radio waves. The transmitter electronically modulates or encodes sound or other information onto the radio waves by varying either the amplitude (height) of the radio waves, or by varying the frequency (number) of the waves within an established range (see Frequency Modulation). A receiver (tuner) tuned to a specific frequency or range of frequencies will pick up the modulation added to the radio waves. A speaker connected to the tuner converts the modulation back into sound. Broadcast television works in a similar fashion. A television camera takes the light reflected from a scene and converts it into an electronic signal, which is transmitted over high-frequency radio waves. A television set contains a tuner that receives the signal and uses that signal to modulate the images seen on the picture tube. The picture tube contains an electron gun that shoots electrons onto a photo-sensitive display screen. The electrons illuminate the screen wherever they fall, thus creating moving pictures. Telegraphs, telephones, radio, and television all work by modifying electronic signals, making the signals imitate, or reproduce, the original message. This form of transmission is known as analog transmission. Computers and other types of electronic equipment, however, transmit digital information. Digital technologies convert a message into an electronic or optical form first by measuring different qualities of the message, such as the pitch and volume of a voice, many times. These measurements are then encoded into multiple series of binary numbers, or 1s and 0s. Finally, digital technologies create and send impulses that correspond to the series of 1s and 0s. 
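Amplitude modulation, varying the height of the carrier wave with the message, has a simple mathematical form: the transmitted wave is the carrier multiplied by an envelope that follows the message. The frequencies and modulation depth below are illustrative, not broadcast-band values.

```python
import math

# One sample of an amplitude-modulated wave at time t.
def am_sample(t, message_hz=440, carrier_hz=10000, depth=0.5):
    message = math.sin(2 * math.pi * message_hz * t)  # the audio signal
    carrier = math.cos(2 * math.pi * carrier_hz * t)  # the radio carrier
    return (1 + depth * message) * carrier            # carrier scaled by the envelope

# At t = 0 the message is zero and the carrier is at its peak:
print(am_sample(0.0))  # 1.0
```

An AM receiver recovers the envelope, the `(1 + depth * message)` factor, and discards the carrier; an FM transmitter instead varies the carrier's frequency and leaves its amplitude constant.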
Digital information can be transmitted faster and more clearly than analog signals, because the impulses only need to correspond to two digits and not to the full range of qualities that compose the original message, such as the pitch and volume of a human voice. While digital transmissions can be sent over wires, cables or radio waves, they must be decoded by a digital
receiver. New digital telephones and televisions are being developed to make telecommunications more efficient. Personal computers primarily communicate with each other and with larger networks, such as the Internet, by using the ordinary telephone network. Increasing numbers of computers rely on broadband networks provided by telephone and cable television companies to send text, music, and video over the Internet at high speeds. Since the telephone network functions by converting sound into electronic signals, the computer must first convert its digital data into sound. Computers do this with a device called a modem, which is short for modulator/demodulator. A modem converts the stream of 1s and 0s from a computer into an analog signal that can then be transmitted over the telephone network, as a speaker’s voice would. The modem of the receiving computer demodulates the analog sound signal back into a digital form that the computer can understand.
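A modem's modulation step can be sketched as mapping each bit to one of two audible tones and back (frequency-shift keying). The two tone frequencies below are illustrative stand-ins, not a claim about any particular modem standard.

```python
# Toy modem: map bits to tones (modulate) and tones back to bits (demodulate).
FREQ_FOR_BIT = {"0": 1070, "1": 1270}              # Hz; illustrative tone pair
BIT_FOR_FREQ = {f: b for b, f in FREQ_FOR_BIT.items()}

def modulate(bits: str):
    return [FREQ_FOR_BIT[b] for b in bits]          # digital -> analog tones

def demodulate(tones):
    return "".join(BIT_FOR_FREQ[f] for f in tones)  # analog tones -> digital

tones = modulate("1011")
print(tones)              # [1270, 1070, 1270, 1270]
print(demodulate(tones))  # 1011
```

The sending modem plays the tone sequence into the telephone line as if it were a voice; the receiving modem listens for the tones and reconstructs the original bits.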
Transmitting the Signal
Telecommunications systems deliver messages using a number of different transmission media, including copper wires, fiber-optic cables, communication satellites, and microwave radio. One way to categorize telecommunications media is to consider whether or not the media uses wires. Wire-based (or wireline) telecommunications provide the initial link between most telephones and the telephone network and are a reliable means for transmitting messages. Telecommunications without wires, commonly referred to as wireless communications, use technologies such as cordless telephones, cellular radio telephones, pagers, and satellites. Wireless communications offer increased mobility and flexibility. In the future some experts believe that wireless devices will also offer high-speed Internet access.
Wires and Cables
Wires and cables were the original medium for telecommunications and are still the primary means for telephone connections. Wireline transmission evolved from telegraph to telephone service and continues to provide the majority of telecommunications services. Wires connect telephones together within a home or business and also connect these telephones to the nearest telephone switching facility. Other wireline services employ coaxial cable, which is used by cable television to provide hundreds of video channels to subscribers. Much of the content transmitted by the coaxial cable of cable television systems is sent by satellite to a central location known as the headend. Coaxial cables run from the headend throughout a community and onward to individual residences and, finally, to individual television sets. Because signals weaken as distance from the headend increases, the coaxial cable network includes amplifiers that process and retransmit the television signals.
Fiber-optic cables use specially treated glass that can transmit signals in the form of pulsed beams of laser light. Fiber-optic cables carry many times more information than copper wires can, and they can transmit several television channels or thousands of telephone conversations at the same time. Fiber-optic technology has replaced copper wires for most transoceanic routes and in areas where large amounts of data are sent. This technology uses laser transmitters to send pulses of light via hair-thin strands of specially prepared glass fibers. New improvements promise cables that can transmit millions of telephone calls over a single fiber. Already fiber optic cables provide the high capacity, 'backbone' links necessary to carry the enormous and growing volume of telecommunications and Internet traffic.
Wireless telecommunications use radio waves, sent through space from one antenna to another, as the medium for communication. Radio waves are used for receiving AM and FM radio and for receiving television. Cordless telephones and wireless radio telephone services, such as cellular radio telephones and pagers, also use radio waves. Telephone companies use microwaves to send signals over long distances. Microwaves use higher frequencies than the radio waves used for AM, FM, or cellular telephone transmissions, and they can transmit larger amounts of data more efficiently. Microwaves have characteristics similar to those of visible light waves and transmit pencil-thin beams that can be received using dish-shaped antennas. Such narrow beams can be focused to a particular destination and provide reliable transmissions over short distances on Earth. Even higher frequencies and narrower beams provide the high-capacity links to and from satellites. The high frequencies easily penetrate the ionosphere (a layer of Earth’s atmosphere that blocks low-frequency waves) and provide a high-quality signal.
Communications satellites provide a means of transmitting telecommunications all over the globe, without the need for a network of wires and cables. They orbit Earth at a speed that enables them to stay above the same place on Earth at all times. This type of orbit is called geostationary or geosynchronous orbit because the satellite’s orbital speed operates in synchronicity with Earth’s rotation. The satellites receive transmissions from Earth and transmit them back to numerous Earth station receivers scattered within the receiving coverage area of the satellite. This relay function makes it possible for satellites to operate as “bent pipes”—that is, wireless transfer stations for point-to-point and point-to-multipoint transmissions. Communications satellites are used by telephone and television companies to transmit signals across great distances. Ship, airplane, and land navigators also receive signals from satellites to determine geographic positions.
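The geostationary altitude is not arbitrary: it follows from Kepler's third law, since an orbit whose period equals one sidereal day must satisfy r³ = GM·T²/(4π²). A quick check of the arithmetic:

```python
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
T = 86164            # one sidereal day, in seconds
R_EARTH = 6.378e6    # Earth's equatorial radius, m

# Kepler's third law solved for the orbital radius of a one-day orbit.
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(round(altitude_km))  # ≈ 35786 km above the equator
```

A satellite at roughly 35,800 km over the equator completes one orbit per day and so appears fixed in the sky, which is what lets ground antennas point at it permanently.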
Individual people, businesses, and governments use many different types of telecommunications systems. Some systems, such as the telephone system, use a network of cables, wires, and switching stations for point-to-point communication. Other systems, such as radio and television, broadcast radio signals over the air that can be received by anyone who has a device to receive them. Some systems make use of several types of media to complete a transmission. For example, a telephone call may travel by means of copper wire, fiber-optic cable, and radio waves as the call is sent from sender to receiver. All telecommunications systems are constantly evolving as telecommunications technology improves. Many recent improvements, for example, offer high-speed broadband connections that are needed to send multimedia information over the Internet.
Telegraph services use both wireline and wireless media for transmissions. Soon after the introduction of the telegraph in 1844, telegraph wires spanned the country. Telegraph companies maintained a system of wires and offices located in numerous cities. A message sent by telegraph was called a telegram. Telegrams were printed on paper and delivered to the receiving party by the telegraph company. With the invention of the radio in the early 1900s, telegraph signals could also be sent by radio waves. Wireless telegraphy made it practical for oceangoing ships as well as aircraft to stay in constant contact with land-based stations.
The telephone network also uses both wireline and wireless methods to deliver voice communications between people, and data communications between computers and people or other computers. The part of the telephone network that currently serves individual residences and many businesses operates in an analog mode, uses copper wires, and relays electronic signals that are continuous, such as the human voice. Digital transmission via fiber-optic cables is now used in some sections of the telephone network that send large amounts of calls over long distances. However, since the rest of the telephone system is still analog, these digital signals must be converted back to analog before they reach users. The telephone network is
stable and reliable, because it uses its own wire system that is powered by low-voltage direct current from the telephone company. Telephone networks modulate voice communications over these wires. A complex system of network switches maintains the telephone links between callers. Telephone networks also use microwave relay stations to send calls from place to place on the ground. Satellites are used by telephone networks to transmit telephone calls across countries and oceans.
Teletype, Telex, and Facsimile Transmission
Teletype, telex, and facsimile transmission are all methods for transmitting text rather than sounds. These text delivery systems evolved from the telegraph. Teletype and telex systems still exist, but they have been largely replaced by facsimile machines, which are inexpensive and better able to operate over the existing telephone network. The Internet increasingly provides an even more inexpensive and convenient option. The teletype, essentially a printing telegraph, is primarily a point-to-multipoint system for sending text. The teletype converts the same pulses used by telegraphs into letters and numbers, and then prints out readable text. It was often used by news media organizations to provide newspaper stories and stock market data to subscribers. Telex is primarily a point-to-point system that uses a keyboard to transmit typed text over telephone lines to similar terminals situated at individual company locations. Facsimile transmission now provides a cheaper and easier way to transmit text and graphics over distances. Fax machines contain an optical scanner that converts text and graphics into digital, or machine-readable, codes. This coded information is sent over ordinary analog telephone lines through the use of a modem included in the fax machine. The receiving fax machine’s modem demodulates the signal and sends it to a printer also contained in the fax machine.
Radios transmit and receive communications at various preset frequencies. Radio waves carry the signals heard on AM and FM radio, as well as the signals seen on a television set receiving broadcasts from an antenna. Radio is used mostly as a public medium, sending commercial broadcasts from a transmitter to anyone with a radio receiver within its range, so it is known as a point-to-multipoint medium. However, radio can also be used for private point-to-point transmissions. Two-way radios, cordless telephones, and cellular radio telephones are common examples of transceivers, which are devices that can both transmit and receive point-to-point messages. Personal radio communication is generally limited to short distances (usually a few kilometers), but powerful transmitters can send broadcast radio signals hundreds of kilometers. Shortwave radio, popular with amateur radio enthusiasts, uses a range of radio frequencies that are able to bounce off the ionosphere. This electrically charged layer of the atmosphere reflects certain frequencies of radio waves, such as shortwave frequencies, while allowing higher-frequency waves, such as microwaves, to pass through it. Amateur radio operators use the ionosphere to bounce their radio signals to other radio operators thousands of kilometers away.
Television is primarily a public broadcasting medium, using point-to-multipoint technology that is broadcast to any user within range of the transmitter. Televisions transmit news and information, as well as entertainment. Commercial television is broadcast over very high frequency (VHF) and ultrahigh frequency (UHF) radio waves and can be received by any television set within range of the transmitter. Televisions have also been used for point-to-point, two-way telecommunications. Teleconferencing, in which a television picture links two physically separated parties, is a convenient way for businesspeople to meet and communicate without the expense or inconvenience of travel. Video cameras on computers now allow personal computer users to teleconference over the Internet. Videophones, which use tiny video cameras
and rely on satellite technology, can also send private or public television images and have been used in news reporting in remote locations. Cable television is a commercial service that links televisions to a source of many different types of video programming using coaxial cable. The cable provider obtains coded, or scrambled, programming from a communications satellite, as well as from terrestrial links, including broadcast television stations. The signal may be scrambled to prevent unpaid access to the programming. The cable provider electronically unscrambles the signal and supplies the decoded signals by cable to subscribers. Television users with personal satellite dishes can access satellite programming directly without a cable installation. Personal satellite dishes are also a subscriber service. Fees are paid to the network operator in return for access to the satellite channels. Television sets outside of the United States use different standards for receiving video signals. The European Phase Alternating Line (PAL) standard generates a higher-resolution picture than the sets used in the United States, but these television sets are more expensive. Manufacturers now offer digital video and audio signal processing, which features even higher picture resolution and sound quality. The shape of the television screen is changing as well, reflecting the aspect ratio (ratio of image width to height) used for movie presentation.
Global Positioning and Navigation Systems
The United States Global Positioning System (GPS) and the Russian Global Orbiting Navigation Satellite System (GLONASS) are networks of satellites that provide highly accurate positioning information from anywhere on Earth. Both systems use a group of satellites in inclined orbits at altitudes of about 20,000 km (about 12,500 mi). These satellites constantly broadcast the time and their location above Earth. A GPS receiver picks up broadcasts from these satellites and determines its position through the process of trilateration, often loosely called triangulation. Using the time information from each satellite, the receiver calculates the time the signal takes to reach it. Factoring in this time with the speed at which radio signals travel, the receiver calculates its distance from the satellite. Finally, using the location of at least three satellites and its distance from each satellite, the receiver determines its position; in practice a fourth satellite is used to correct the receiver's clock. GPS services, originally designed for military use, are now available to civilians. Handheld GPS receivers allow users to pinpoint their location on Earth to within a few meters. One type of navigational tool used in automobiles integrates a GPS receiver with an intelligent compact disc player capable of displaying road maps and other graphical information. Upon receiving the GPS location data, the CD player can pinpoint the location visually on one of the road maps contained on disc.
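The distance step and the position fix can be sketched with toy numbers. Below, distances come from signal travel time multiplied by the speed of light, and a flat two-dimensional version of trilateration intersects three distance circles; real GPS solves the three-dimensional problem with a clock-bias term as well.

```python
C = 299_792_458  # speed of light, m/s

def distance_from_delay(delay_s):
    # Travel time times signal speed gives the satellite's distance.
    return C * delay_s

def locate(p1, d1, p2, d2, p3, d3):
    # 2-D trilateration: subtracting pairs of circle equations
    # (x - xi)^2 + (y - yi)^2 = di^2 yields two linear equations in (x, y).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver actually at (3, 4); three satellites at known positions,
# with distances measured from each:
pos = locate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
print(pos)  # ≈ (3.0, 4.0)
```

Each additional satellite adds one more equation, which is how the real system also solves for the receiver's clock error.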
Personal computers use telecommunications to provide a transmission link for the delivery of audio, video, text, software, and multimedia services. Many experts believe that the convergence of these services will generate consumer demand for new generations of high-speed, broadband networks. Currently, the delivery of most of these audio, video, and text services occurs over existing telephone connections using the Internet. Some computers connect directly to the digital portion of the telephone network using the Integrated Services Digital Network (ISDN) or Digital Subscriber Line (DSL), but this requires special equipment at user locations. Telephone and cable television companies must also upgrade their lines to handle high-speed data transmission. In many locations, companies and individuals with high-speed data requirements now have the option of securing DSL service from telephone companies or cable modem service from cable television companies. Electronic mail, or e-mail, is a key attraction of the Internet and a common form of computer telecommunications. E-mail is a text-based message delivery system that allows information such as typed messages and multimedia to be sent to individual computer users. Local e-mail messages (within a building or a company) typically reach addressees by traveling through wire-based internal networks. E-mail that must travel across town or across a country to reach its final destination usually travels through the telephone network. Instant messaging is another key feature of computer telecommunications and involves sending text, audio, or video data in real time. Other computer telecommunications technologies that businesses frequently use include automated banking terminals and devices for credit card or debit card transactions. These transactions either bill charges directly to a customer's credit card account or automatically deduct money from a customer's bank account.
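As a concrete illustration of the text-based format described above, Python's standard email module can assemble a message in the same header-plus-body wire format that e-mail systems exchange. The addresses are made up for the example, and no mail is actually sent.

```python
from email.message import EmailMessage

# Build a message; headers plus a plain-text body form the wire format.
msg = EmailMessage()
msg["From"] = "alice@example.com"   # hypothetical sender
msg["To"] = "bob@example.com"       # hypothetical recipient
msg["Subject"] = "Meeting notes"
msg.set_content("E-mail bodies were originally plain text like this.")

wire_format = msg.as_string()  # what a mail server would transmit
```

Actually sending the message would take one further step, such as handing it to an SMTP server with the standard smtplib module.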
Voice Over Internet Protocol (VOIP)
Voice Over Internet Protocol (VOIP) is a method for making telephone calls over the Internet by sending voice data in separate packets, just as e-mail is sent. Each packet is assigned a code for its destination, and the packets are then reassembled in the correct order at the receiving end. Recent technological improvements have made VOIP almost as seamless and smooth as a regular telephone call. In February 2004 the Federal Communications Commission (FCC) ruled that VOIP, like e-mail and instant messaging, is free of government regulation as long as it involves communication from one computer to another. The FCC did not rule on whether VOIP software that sends voice data from a computer directly to a regular telephone should be regulated. Such services became available in the early part of the 21st century and are expected to become widely available. They require a broadband connection to the Internet but can reduce telephone charges significantly while also offering, at no extra cost, additional services such as call waiting, caller identification, voice mail, and the ability to call from your home telephone number wherever you travel.
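The packet behavior described above, in which voice data is split into addressed packets that may arrive out of order and are reassembled by sequence number at the receiving end, can be sketched as follows. This is a toy simulation for illustration, not a real VOIP stack such as RTP or SIP.

```python
import random

def packetize(data: bytes, size: int):
    """Split a voice frame into sequence-numbered packets."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Restore the original byte stream by sorting on sequence number."""
    return b"".join(chunk for _, chunk in sorted(packets))

voice = b"a short burst of digitized voice data"
packets = packetize(voice, 8)
random.shuffle(packets)  # simulate out-of-order arrival on the network
restored = reassemble(packets)
```

Because each packet carries its own sequence number, the receiver can restore the stream no matter what order the network delivers the packets in.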
The Emergence of Broadcasting
Telephones and telegraphs are primarily private means of communication, sending signals from one point to another, but with the invention of radio, public communications, or point-to-multipoint signals, could be sent from a central transmitter to be received by anyone possessing a receiver. Italian inventor and electrical engineer Guglielmo Marconi transmitted a Morse-code telegraph signal by radio in 1895. This began a revolution in wireless telegraphy that would later result in broadcast radio capable of transmitting voice and music. Radio and wireless telegraph communication played an important role during World War I (1914-1918), allowing military personnel to communicate instantly with troops in remote locations. United States president Woodrow Wilson was impressed with the capabilities of radio but feared its potential for espionage. He banned nonmilitary radio use in the United States as the nation entered World War I in 1917, and this stifled commercial development of the medium. After the war, however, commercial radio stations began to broadcast, and by the mid-1920s millions of radio listeners tuned in to music, news, and entertainment programming. Television got its start as a mass-communication medium shortly after World War II (1939-1945). The expense of television transmission prevented its use as a two-way medium, but radio broadcasters quickly saw the potential of television to provide a new way of bringing news and entertainment programming to people. For more information on the development of radio and television, see Radio and Television Broadcasting.
The number of radio broadcasts grew quickly in the 1920s, but there was no regulation of frequency use or transmitter strength. The result was a crowded radio band of overlapping signals. To remedy this, the U.S. government created the Federal Communications Commission (FCC) in 1934 to regulate the spreading use of the broadcast spectrum. The FCC licenses broadcasters and regulates each station's location and transmitting strength, or range, in an effort to prevent interference between nearby signals.
The FCC and the U.S. government have also assumed roles in limiting the types of business practices in which telecommunications companies can engage. The U.S. Department of Justice filed an antitrust lawsuit against AT&T Corp., arguing that the company used its monopoly position to stifle competition, particularly through its control over local telephone service facilities. The lawsuit was settled in 1982, when AT&T agreed to divest its local telephone companies, thereby creating seven new independent companies. In 1996 the U.S. government enacted the Telecommunications Act of 1996 to further encourage competition in the telecommunications marketplace. This legislation removed government rules preventing local and long-distance phone companies, cable television operators, broadcasters, and wireless services from directly competing with one another. The act spurred consolidation in the industry, as regional companies joined forces to create telecommunications giants that provided telephone, wireless, cable, and Internet services. Deregulation, however, also led to overproduction of fiber-optic cable and a steep decline in the fortunes of the telecommunications industry beginning in 2000. The increased competition provided the backdrop for the bankruptcy of a leading telecommunications company, WorldCom, Inc., in 2002, when it admitted to the largest accounting fraud in the history of U.S. business.
International Telecommunications Networks
In order to provide overseas telecommunications, networks had to be developed that could link widely separated nations. The first networks to provide such linkage were telegraph networks that used undersea cables, but these networks could provide channels for only a few simultaneous communications. Shortwave radio also made possible wireless transmission of both telegraphy and voice over very long distances. To take advantage of the wideband capability of satellites to provide telecommunications service, companies from all over the world pooled resources and shared risks by creating a cooperative known as the International Telecommunications Satellite Organization, or Intelsat, in 1964. Transoceanic satellite telecommunications first became possible in 1965 with the successful launch of Early Bird, also known as Intelsat 1. Intelsat 1 provided the first international television transmission and had the capacity to handle one television channel or 240 simultaneous telephone calls. Intelsat later expanded and diversified to meet the global and regional satellite requirements of more than 200 nations and territories. In response to private satellite ventures entering the market, the managers of Intelsat converted the cooperative into a private corporation better able to compete with these emerging companies. The International Mobile Satellite Organization (Inmarsat), founded as a cooperative in 1979 (originally as the International Maritime Satellite Organization), primarily provided service to oceangoing vessels, but it later expanded operations to include service to airplanes and to users in remote land areas not served by cellular radio or wireline services. Inmarsat became a privatized, commercial venture in 1999.