
ICT

INFORMATION TECHNOLOGY
FUNDAMENTALS

WEEK 2

WEEK 2.1

WEEK 2.2

WEEK 2.3

WEEK 2.4
WEEK 2.1 – COMPUTER HARDWARE

Computer hardware is the physical part of a computer, as distinguished from the computer software or computer programs and data that operate within the hardware. The hardware of a computer is infrequently changed, in comparison with software and data, which are "soft" in the sense that they are readily created, modified or erased on the computer. Firmware is special software that rarely, if ever, needs to be changed and so is stored on hardware devices such as read-only memory (ROM) where it is not readily changed (and is therefore "firm" rather than just "soft").
WEEK 2.1 – COMPUTER HARDWARE

Personal computer hardware

A typical personal computer consists of a case or chassis, in desktop or tower form, and the following parts:
 Motherboard
 Central processing unit (CPU)
 Random Access Memory (RAM)
 Buses
 Power supply
 Video display controller
 Computer bus
 CD drive
 Hard disk
 Networking
 Modem
 Network card
 Mouse
 Keyboard
WEEK 2.2 – COMPUTER SOFTWARE

Computer software (or simply software) is that part of a computer system that consists of encoded information (or computer instructions), as opposed to the physical computer equipment (hardware) which is used to store and process this information. The term is roughly synonymous with computer program but is more generic in scope.
The term "software" was first used in this sense by John W. Tukey in 1957. In computer science and software engineering, computer software is all information processed by computer systems, including programs and data. The concept of software was first proposed by Alan Turing in an essay.
WEEK 2.2 – COMPUTER SOFTWARE

Relationship to hardware

Computer software is so called in contrast to computer hardware, which is the physical substrate required to store and execute (or run) the software. In computers, software is loaded into RAM and executed in the central processing unit. At the lowest level, software consists of a machine language specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions and data, which change the state of the computer from its preceding state.
WEEK 2.2 – COMPUTER SOFTWARE

Software is an ordered sequence of instructions for changing the state of the computer hardware. It is generally written in 'high-level languages' that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine language before they can run.
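As a small illustration of the gap between a high-level statement and the lower-level instructions it is translated into, the sketch below uses Python's standard dis module to show the bytecode the interpreter produces for a one-line function. Bytecode is the interpreter's intermediate form rather than native machine code, and the function name add is just an example:

import dis

def add(a, b):
    # One high-level statement...
    return a + b

# ...is translated into several lower-level instructions.
dis.dis(add)
# Typical output lists instructions such as LOAD_FAST a, LOAD_FAST b,
# a binary-add instruction, and RETURN_VALUE (exact names vary by Python version).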
WEEK 2.2 – COMPUTER SOFTWARE

Relationship to data

Software has historically been considered an intermediary between electronic hardware and data, which the hardware processes according to the sequence of instructions defined by the software. As computational science becomes increasingly complex, the distinction between software and data becomes less precise. Data has generally been considered as either the output or input of executed software.
WEEK 2.2 – COMPUTER SOFTWARE

However, data is not the only possible output or input. For example, (system) configuration information may also be considered input, although not necessarily considered data (and certainly not application data). The output of a particular piece of executed software may be the input for another executed piece of software. Therefore, software may be considered an interface between hardware, data, and/or (other) software.
WEEK 2.2 – COMPUTER SOFTWARE

System, programming and application software

Practical computer systems divide software into three major classes: system software, application software and programming software, although the distinction is somewhat arbitrary and often blurred.
System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more.
WEEK 2.2 – COMPUTER SOFTWARE

Programming software provides tools that help a programmer write computer programs and software in different programming languages in a more convenient way. The tools include text editors, compilers, interpreters, linkers, debuggers, and so on. An integrated development environment (IDE) merges those tools into a single software bundle, so a programmer may not need to type multiple commands for compiling, interpreting, debugging and tracing, because the IDE usually provides an advanced graphical user interface (GUI).
WEEK 2.2 – COMPUTER SOFTWARE

Application software allows humans to accomplish one or more specific tasks. Typical applications include industrial automation, office suites, business software, educational software, databases and computer games. Businesses are probably the biggest users of application software, and they use it to automate all sorts of functions. Plenty of examples can be found at the Business Software Directory.
WEEK 2.2 – COMPUTER SOFTWARE

Software program and library

A software program is usually the directly executable part of a piece of software. Software libraries include software components used by stand-alone programs, but which cannot be executed on their own. Thus, programs can include standard routines that are common to many programs, extracted from libraries, but libraries can also include stand-alone programs. Depending on the operating system, a program can be invoked by another program or by a user, and can itself call other programs.
WEEK 2.2 – COMPUTER SOFTWARE

Three layers of software


Platform software

Platform software includes the basic input/output system (often described as firmware rather than software), device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer, and users may not realize that it exists or that they have a choice of different platform software.
WEEK 2.2 – COMPUTER SOFTWARE

Application software
Application software or Applications are what most people
think of when they think of software. Typical examples include
office suites and video games. Application software is often
purchased separately from computer hardware. Sometimes
applications are bundled with the computer, but that does not
change the fact that they run as independent applications.
Applications are almost always independent programs from the
operating system, though they are often tailored for specific
platforms. Most users think of compilers, databases, and other
"system software" as applications.
WEEK 2.2 – COMPUTER SOFTWARE

User-written software

User software tailors systems to meet the user's specific needs. User software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into purchased application packages, many users may not be aware of the distinction between the purchased packages and what has been added by co-workers.
WEEK 2.3 – PEOPLEWARE

Peopleware refers to the human issues in IT projects, including productivity, personalities, teamwork and group dynamics. Typical IT roles include:

 Computer Hardware Engineer
 Computer Software Engineer
 Computer Support Specialist
 Database Administrator
 Systems Analyst
 Webmaster
 Computer Programmer
WEEK 2.4 – EMBEDDED SYSTEM

An embedded system is a special-purpose computer system, which is completely encapsulated by the device it controls. An embedded system has specific requirements and performs pre-defined tasks, unlike a general-purpose personal computer. An embedded system is a computer-controlled system. The core of any embedded system is a microprocessor, programmed to perform a few tasks (often just one task). This is in contrast to other computer systems with general-purpose hardware and externally loaded software. Embedded systems are often designed for mass production.
The first recognizably modern embedded system was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory. Each flight to the moon had two. They ran the inertial guidance systems of both the command module and the LEM.
WEEK 2.4 – EMBEDDED SYSTEM

Examples of embedded systems

 automatic teller machines (ATMs)
 avionics, such as inertial guidance systems, flight control
hardware/software and other integrated systems in aircraft
and missiles
 cellular telephones and telephone switches
 computer network equipment, including routers, timeservers
and firewalls
 computer printers
 copiers
 disk drives (floppy disk drives and hard disk drives)
 engine controllers and antilock brake controllers for
automobiles
WEEK 2.4 – EMBEDDED SYSTEM

Characteristics

Embedded systems are computer systems in the widest sense. They include all computers other than those specifically intended as general-purpose computers. Examples of embedded systems range from portable music players to real-time controls for systems like the space shuttle.
Most commercial embedded systems are designed to do some task at a low cost. Most, but not all, have real-time constraints that must be met. They may need to be very fast for some functions, but most other functions will probably not need speed. These systems meet their real-time constraints with a combination of special-purpose hardware and software tailored to the system requirements.
WEEK 2.4 – EMBEDDED SYSTEM

It is difficult to characterize embedded systems by speed or cost, but for high-volume systems, cost usually dominates the system design. Often many parts of an embedded system need low performance compared to the primary mission of the system. This allows an embedded system to be intentionally simplified to lower costs compared to a general-purpose computer accomplishing the same task, by using a CPU that is just "good enough" for these secondary functions.
WEEK 2.4 – EMBEDDED SYSTEM

For example, a digital set-top box for satellite television has to process tens of megabits of continuous data per second, but most of the processing is done by custom hardware that parses, directs, and decodes the multi-channel digital video. The embedded CPU starts the data paths at the right times and displays menu graphics, etc. for the set-top box's look and feel.
For low-volume embedded systems, personal computers can often be used, by limiting the programs or by replacing the operating system with a real-time operating system. In this case special-purpose hardware may be replaced by one or more high-performance CPUs. Still, some embedded systems may require high-performance CPUs, special hardware, and large memories to accomplish a required task.
WEEK 2.4 – EMBEDDED SYSTEM

Embedded systems reside in machines that are expected to run continuously for years without errors. Therefore the software is usually developed and tested more carefully than software for personal computers. Many embedded systems avoid mechanical moving parts such as disk drives, switches or buttons because these are unreliable compared to solid-state parts such as flash memory.

Design of embedded systems

The electronics usually use either a microprocessor or a microcontroller. Some large or old systems use general-purpose mainframe computers or minicomputers.
WEEK 2.4 – EMBEDDED SYSTEM

User interfaces
User interfaces for embedded systems vary widely, and thus
deserve some special comment.
Interface designers at PARC, Apple Computer, Boeing and HP
discovered the principle that one should minimize the number
of types of user actions. In embedded systems this principle is
often combined with a drive to lower costs.
One standard interface, widely used in embedded systems, uses two buttons (the absolute minimum) to control a menu system: one button means "next menu entry", the other means "select this menu entry". A minimal sketch of such a menu follows below.
WEEK 2.4 – EMBEDDED SYSTEM

Tools

Like typical computer programmers, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they also use a few tools that are unfamiliar to most programmers.
Software tools can come from several sources:
 software companies that specialize in the embedded market
 the GNU software development tools, ported to the target (see cross compiler)
 sometimes, development tools for a personal computer, if the embedded processor is a close relative of a common PC processor
ICT
INFORMATION TECHNOLOGY
FUNDAMENTALS

WEEK 3

WEEK 3.1

WEEK 3.2

WEEK 3.3

WEEK 3.4
WEEK 3.1 – OPERATING SYSTEM

The operating system is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.
WEEK 3.1 – OPERATING SYSTEM

Operating systems can be classified as follows:

 multi-user: Allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
 multiprocessing: Supports running a program on more than one CPU.
 multitasking: Allows more than one program to run concurrently.
 multithreading: Allows different parts of a single program to run concurrently (a minimal sketch follows below).
 real time: Responds to input instantly. General-purpose operating systems, such as DOS and UNIX, are not real-time.
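As a small illustration of multithreading (different parts of one program running concurrently), the sketch below uses Python's standard threading module; the worker function and the thread count are arbitrary examples:

import threading
import time

def worker(name, delay):
    # Each thread runs this function independently of the others.
    time.sleep(delay)
    print(f"thread {name} finished after {delay:.1f}s")

threads = [threading.Thread(target=worker, args=(i, 0.1 * i)) for i in range(3)]
for t in threads:
    t.start()      # all three parts of the program now run concurrently
for t in threads:
    t.join()       # wait for every thread to finish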
WEEK 3.1 – OPERATING SYSTEM

Operating systems provide a software platform on top of which other programs, called application programs, can run. The application programs must be written to run on top of a particular operating system. Your choice of operating system, therefore, determines to a great extent the applications you can run. For PCs, the most popular operating systems are DOS, OS/2, and Windows, but others are available, such as Linux.
WEEK 3.1 – OPERATING SYSTEM

LINUX OPERATING SYSTEM

Linux is a computer operating system and its kernel. It is one of the most prominent examples of free software and of open-source development: unlike proprietary operating systems such as Windows and Mac OS, all of its underlying source code is available to the public for anyone to freely use, modify, improve, and redistribute.
WEEK 3.1 – OPERATING SYSTEM

In the narrowest sense, the term Linux refers to the Linux kernel, but it is commonly used to describe entire Unix-like operating systems (also known as GNU/Linux) that are based on the Linux kernel combined with libraries and tools from the GNU Project and other sources. Most broadly, a Linux distribution bundles large quantities of application software with the core system, and provides more user-friendly installation and upgrades. Desktop environments such as GNOME and KDE are sometimes generically associated with Linux and are often referred to as such, but this is incorrect: a number of other operating systems, including FreeBSD, use them.
WEEK 3.1 – OPERATING SYSTEM

Initially, Linux was primarily developed and used by individual enthusiasts. Since then, Linux has gained the support of major corporations such as IBM, Sun Microsystems, Hewlett-Packard, and Novell for use in servers and is gaining popularity in the desktop market. Proponents and analysts attribute this success to its vendor independence (the opposite of vendor lock-in), low cost, security, and reliability.
Linux was originally developed for Intel 386 microprocessors and now supports all popular computer architectures (and several obscure ones). It is deployed in applications ranging from embedded systems (such as mobile phones and personal video recorders) to personal computers to supercomputers.
WEEK 3.1 – OPERATING SYSTEM

Richard Stallman, founder of the GNU project for a free operating system.

In 1983, Richard Stallman founded the GNU project, which today provides an essential part of most Linux systems (see also GNU/Linux, below). The goal of GNU was to develop a complete Unix-like operating system composed entirely of free software. By the beginning of the 1990s, GNU had produced or collected nearly all of the necessary components of this system—libraries, compilers, text editors, a Unix-like shell, and other software—except for the lowest level, the kernel. The GNU project began developing their own kernel, the Hurd, in 1990 (after an abandoned attempt called Trix).
WEEK 3.1 – OPERATING SYSTEM

According to Thomas Bushnell, the initial Hurd architect, their early plan was to adapt the BSD 4.4-Lite kernel and, in hindsight, "It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today" [1]. However, due to a lack of cooperation from the Berkeley programmers, Stallman decided instead to use the Mach microkernel, which subsequently proved unexpectedly difficult, and the Hurd's development proceeded slowly.
WEEK 3.1 – OPERATING SYSTEM

Linus Torvalds, creator of the Linux kernel.

Meanwhile, in 1991, another kernel — eventually dubbed "Linux" — was begun as a hobby by Finnish university student Linus Torvalds while attending the University of Helsinki. Torvalds originally used Minix, a simplified Unix-like system written by Andrew Tanenbaum for teaching operating system design.
However, Tanenbaum did not permit others to extend his operating system, leading Torvalds to develop a replacement for Minix. Linux started out as a terminal emulator written in IA-32 assembly and C, which was compiled into binary form and booted from a floppy disk so that it would run outside of any operating system.
WEEK 3.1 – OPERATING SYSTEM

The terminal emulator was running two threads: one for sending and one for receiving characters from the serial port. When Linus needed to read and write files to disk, this task-switching terminal emulator was extended with an entire filesystem handler. After that, it gradually evolved into an entire operating system kernel intended as a foundation for POSIX-compliant systems. The first version of the Linux kernel (0.01) was released to the Internet on September 17, 1991, with the second version following shortly thereafter in October [2]. Since then, thousands of developers from around the world have participated in the project. Eric S. Raymond's essay The Cathedral and the Bazaar discusses the development model of the Linux kernel and similar software.
WEEK 3.1 – OPERATING SYSTEM

The name "Linux" was coined, not by Torvalds, but by Ari


Lemmke. Lemmke was working for the Helsinki University of
Technology (TKK), located in Espoo near Helsinki, as an
administrator of ftp.funet.fi, an FTP server which belongs to the
Finnish University and Research Network (FUNET), which has
numerous organizations as its members, amongst them the
TKK and the University of Helsinki. He was the one to invent
the name Linux for the directory from which Torvalds' project
was first available for download. (The name Linux was derived
from Linus' Minix.)
WEEK 3.1 – OPERATING SYSTEM

Licensing
The Linux kernel, along with most of the GNU components, is
licensed under the GNU General Public License (GPL) version 2 only (not "version 2 or any later version"). The GPL requires that all source code
modifications and derived works also be licensed under the
GPL, and is sometimes referred to as a "share and share-alike"
(or copyleft) license. In 1997, Linus Torvalds stated, "Making
Linux GPL'd was definitely the best thing I ever did." Other
subsystems use other licenses, although all of them share the
property of being free/open-source; for example, several
libraries use the LGPL (a more-permissive variant of the GPL),
and the X Window System uses the permissive (non-copyleft)
MIT License.
WEEK 3.1 – OPERATING SYSTEM

A GNOME Desktop
WEEK 3.1 – OPERATING SYSTEM

A KDE Desktop
WEEK 3.2 – NETWORKS AND NETWORKING

A computer network is a system for communication between computers. These networks may be fixed (cabled, permanent) or temporary (as via modems or null modems). Wireless Internet access generally works over Wi-Fi or a cellular carrier's network.
A local area network (LAN) is a computer network covering a small local area, like a home, office, or small group of buildings such as a college. Current LANs are most likely to be based on switched Ethernet or Wi-Fi technology running at 10 to 10,000 Mbit/s. The defining characteristics of LANs, in contrast to WANs, are: a) much higher data rates, b) smaller geographic range, and c) they do not involve leased telecommunication lines.
WEEK 3.2 – NETWORKS AND NETWORKING

Metropolitan Area Networks or MANs are large computer networks usually spanning a campus or a city. They typically use wireless infrastructure or optical fiber connections to link their sites.
For instance, a university or college may have a MAN that joins together many of its local area networks (LANs) situated around a site of a fraction of a square kilometer. From its MAN it could then have several wide area network (WAN) links to other universities or the Internet.
WEEK 3.2 – NETWORKS AND NETWORKING

Some technologies used for this purpose are ATM, FDDI and
SMDS. These older technologies are in the process of being
displaced by Ethernet-based MANs (e.g. Metro Ethernet) in
most areas. MAN links between LANs have been built without
cables using either microwave, radio, or infra-red free-space
optical communication links.
DQDB, Distributed Queue Dual Bus, is the Metropolitan Area
Network standard for data communication. It is specified in the
IEEE 802.6 standard. Using DQDB, networks can be up to 30
miles long and operate at speeds of 34 to 155 Mbit/s.
Several notable networks started as MANs, such as the Internet
peering points MAE-West and MAE-East and the Sohonet
media network.
WEEK 3.2 – NETWORKS AND NETWORKING

A personal area network (PAN) is a computer network used for communication among computer devices (including telephones and personal digital assistants) close to one person. The devices may or may not belong to the person in question. The reach of a PAN is typically a few meters. PANs can be used for communication among the personal devices themselves (intrapersonal communication), or for connecting to a higher-level network and the Internet (an uplink).
WEEK 3.2 – NETWORKS AND NETWORKING

Wireless

A Bluetooth PAN is also called a piconet, and is composed of up to 8 active devices in a master-slave relationship (up to 255 devices can be connected in "parked" mode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 meters, although ranges of up to 100 meters can be reached under ideal circumstances.
WEEK 3.2 – NETWORKS AND NETWORKING

By functional relationship

Client/Server is a network architecture which separates the client (often a graphical user interface) from the server. Each instance of the client software can send requests to a server or application server. (A minimal sketch follows after the lists below.)

Properties of a server:
 Passive (Slave)
 Waiting for requests
 On requests serves them and send a reply

Properties of a client:
 Active (Master)
 Sending requests
 Waits until reply arrives
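A minimal client/server sketch using Python's standard socket module, assuming both ends run on the same machine and that port 50007 is free (the port number and messages are arbitrary examples, not part of any standard):

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007            # example address; any free port works

def server():
    # Passive side: wait for a request, then serve it and send a reply.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen()
        conn, _ = s.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"reply to: " + request)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                            # give the server a moment to start listening

# Active side: send a request and wait until the reply arrives.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect((HOST, PORT))
    c.sendall(b"hello server")
    print(c.recv(1024))                    # b'reply to: hello server'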
WEEK 3.2 – NETWORKS AND NETWORKING

A peer-to-peer (or P2P) computer network is a network that relies on the computing power and bandwidth of the participants in the network rather than concentrating it in a relatively low number of servers. P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and realtime data, such as telephony traffic, is also passed using P2P technology.
WEEK 3.2 – NETWORKS AND NETWORKING

A pure peer-to-peer network does not have the notion of clients or servers, but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server. A typical example of a non-peer-to-peer file transfer is an FTP server, where the client and server programs are quite distinct: the clients initiate the downloads/uploads and the servers react to and satisfy these requests.
WEEK 3.2 – NETWORKS AND NETWORKING

A network topology is the pattern of links connecting pairs of nodes of a network. A given node has one or more links to others, and the links can appear in a variety of different shapes. The simplest connection is a one-way link between two devices. A second return link can be added for two-way communication. Modern communications cables usually include more than one wire in order to facilitate this, although very simple bus-based networks have two-way communication on a single wire.
WEEK 3.2 – NETWORKS AND NETWORKING

Network topology is determined only by the configuration of connections between nodes; it is therefore a part of graph theory. Distances between nodes, physical interconnections, transmission rates, and/or signal types are not a matter of network topology, although they may be affected by it in an actual physical network.
WEEK 3.2 – NETWORKS AND NETWORKING

By connecting the computers at each end of a line of nodes, a ring topology can be formed. An advantage of the ring is that the number of transmitters and receivers can be cut in half, since a message will eventually loop all of the way around. When a node sends a message, the message is processed by each computer in the ring. If a computer is not the destination node, it passes the message to the next node, until the message arrives at its destination. If the message is not accepted by any node on the network, it travels around the entire ring and returns to the sender. This potentially results in a doubling of travel time for data, but since it is traveling at a significant fraction of the speed of light, the loss is usually negligible. (A small forwarding sketch follows below.)
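A toy sketch of forwarding around a ring, assuming a fixed list of node names (the four-node ring and the names are invented for the example; real ring networks add framing, addressing and error handling):

RING = ["A", "B", "C", "D"]      # nodes connected A -> B -> C -> D -> back to A

def send(ring, source, destination, message):
    # Start at the node after the source and forward until the destination
    # accepts the message or it comes back around to the sender.
    hops = 0
    i = (ring.index(source) + 1) % len(ring)
    while ring[i] != source:
        hops += 1
        if ring[i] == destination:
            return f"{destination} accepted '{message}' after {hops} hop(s)"
        i = (i + 1) % len(ring)
    return f"'{message}' returned to {source}: no node accepted it"

print(send(RING, "A", "C", "ping"))   # C accepted 'ping' after 2 hop(s)
print(send(RING, "A", "X", "ping"))   # 'ping' returned to A: no node accepted it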
WEEK 3.2 – NETWORKS AND NETWORKING

The star topology reduces the chance of network failure by connecting all of the systems to a central node. When applied to a bus-based network, this central hub rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the rest of the systems will be unaffected.
WEEK 3.2 – NETWORKS AND NETWORKING

A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree has individual peripheral nodes (i.e. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the function of the central node may be distributed.
WEEK 3.2 – NETWORKS AND NETWORKING

In a mesh topology, there are at least two nodes with two or more paths between them. A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multi-dimensional ring has a toroidal (torus) topology, for instance.
WEEK 3.2 – NETWORKS AND NETWORKING

A fully connected, complete or full mesh topology is a network topology in which there is a direct link between every pair of nodes. In a fully connected network with n nodes, there are n(n-1)/2 direct links; for example, a full mesh of 10 nodes needs 10 × 9 / 2 = 45 links. Networks designed with this topology are usually very expensive to set up, but have a high degree of reliability due to the multiple paths data can travel on. This topology is mostly seen in military applications.
WEEK 3.3 – TELECOMMUNICATION

Telecommunication refers to the communication of information at a distance. This covers many technologies including radio, telegraphy, television, telephone, data communication and computer networking.
The elements of a telecommunication system are a transmitter, a medium (line) and possibly a channel imposed upon the medium (see baseband and broadband as well as multiplexing), and a receiver. The transmitter is a device that transforms or encodes the message into a physical phenomenon: the signal. The transmission medium, by its physical nature, is likely to modify or degrade the signal on its path from the transmitter to the receiver. The receiver may therefore require a decoding mechanism to recover the message from the received signal. (A toy encode/decode sketch follows below.)
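A toy illustration of the encode/transmit/decode idea, in which the 'signal' is simply a Python string of bits and noise on the medium is ignored (purely didactic, not a model of any real transmission system):

def encode(message: str) -> str:
    # Transmitter: turn each byte of the message into 8 bits ("the signal").
    return "".join(format(b, "08b") for b in message.encode("utf-8"))

def decode(signal: str) -> str:
    # Receiver: recover the original message from the received bits.
    data = bytes(int(signal[i:i + 8], 2) for i in range(0, len(signal), 8))
    return data.decode("utf-8")

signal = encode("hello")
print(signal[:16] + "...")   # first bits of the signal on the medium
print(decode(signal))        # "hello" -- the recovered message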
WEEK 3.3 – TELECOMMUNICATION

Telecommunication can be point-to-point, point-to-multipoint or broadcasting, which is a particular form of point-to-multipoint that goes only from the transmitter to the receivers.

Examples of human (tele)communications

In a simplistic example, consider a normal conversation between two people. The message is the sentence that the speaker decides to communicate to the listener. The transmitter is the language areas in the brain, the motor cortex, the vocal cords, the larynx, and the mouth that produce those sounds called speech. The signal is the sound waves (pressure fluctuations in air particles) that can be identified as speech.
WEEK 3.3 – TELECOMMUNICATION

The channel is the air carrying those sound waves, and all the
acoustic properties of the surrounding space: echoes, ambient
noise, reverberation. Between the speaker and the listener,
there might be other devices that do or do not introduce their
own distortions of the original vocal signal (for example a
telephone, a HAM radio, an IP phone, etc.) The receiver is the
listener's ear and auditory system, the auditory nerve, and the
language areas in the listener's brain that will "decode" the
signal into information and filter out background noise.
WEEK 3.3 – TELECOMMUNICATION

Another important aspect of the channel is called the bandwidth. A low-bandwidth channel, such as a telephone, cannot carry all of the audio information that is transmitted in normal conversation, causing distortion and irregularities in the speaker's voice, as compared to normal, in-person speech.
WEEK 3.4 – MESSAGING

Message in its most general meaning is an object of communication. Depending on the context, the term may apply to both the information content and its actual presentation.
In the communications discipline, a message is information which is sent from a source to a receiver. Some common definitions include:
 Any thought or idea expressed briefly in a plain or secret language, prepared in a form suitable for transmission by any means of communication.
 An arbitrary amount of information whose beginning and end are defined or implied.
WEEK 3.4 – MESSAGING

 Record information: a stream of data expressed in plain or encrypted language (notation) and prepared in a format specified for intended transmission by a telecommunications system.
WEEK 3.4 – MESSAGING

History of messaging

 Smoke signals - ancient (short distances only)
 Wind-powered shipping - "In 1800, it took 2 years to send a message from London to Calcutta. You wrote a physical letter and entrusted it to a wind-powered ship that sailed down the western coasts of Europe and Africa, around the Cape of Good Hope, back up the eastern coast of Africa, across the Arabian Sea, etc. -- with, presumably, stops in just about every port (yes, they had multi-hop message transports back then)."
WEEK 3.4 – MESSAGING

 Semaphore - limited use
 Telegraph - (late 19th century)
 Telephone - (late 19th century - early 20th century)
 Steam shipping - "By 1914, it took 1 month to send a message from London to Calcutta. The Suez Canal had opened, and steamships powered their way through the Mediterranean to the Red Sea, and thence to India. Big improvement."[1]
 Radio, (early 20th century)
 Television - (mid 20th century)
WEEK 3.4 – MESSAGING

 Airmail - (1950s or 1960s?) about 1 week.
 Overnight mail - became popular and affordable in the 1980s, cutting international messaging to about two days.
 Text messaging - (1990s) messages sent through cellular phones.
 Electronic mail - (~1994) delivery times of around 10 minutes, depending on the number of hops, frequency of manual retrieval, etc.
 Instant messaging - a message travels in about 100 milliseconds on average, almost always less than a second. Often shortened to "IM", sometimes in combination with the type of messenger (YIM is Yahoo Instant Messenger). People message one another through many channels, including regular mail, e-mail and online messaging services.
ICT
INFORMATION TECHNOLOGY
FUNDAMENTALS

WEEK 4

WEEK 4.1

WEEK 4.2
WEEK 4.1 – RDBMS AND OO DATABASE

A relational database management system (RDBMS) is a database management system (DBMS) that is based on the relational model as introduced by Edgar F. Codd.

History of the term

Codd introduced the term in his seminal paper "A Relational Model of Data for Large Shared Data Banks". In this paper and later papers he defined what he meant by relational. One well-known definition of what constitutes a relational database system is Codd's 12 rules.
WEEK 4.1 – RDBMS AND OO DATABASE

However, many of the early implementations of the relational model did not conform to all of Codd's rules, so the term gradually came to describe a broader class of database systems. At a minimum, these systems:

 presented the data to the user as relations (a presentation in tabular form, i.e. as a collection of tables with each table consisting of a set of rows and columns, can satisfy this property)
 provided relational operators to manipulate the data in tabular form (a small sketch follows below)
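As a small illustration of data presented as relations (tables) and manipulated with relational operators, the sketch below uses Python's built-in sqlite3 module; the table name, columns and rows are invented for the example:

import sqlite3

con = sqlite3.connect(":memory:")          # throwaway in-memory database
con.execute("CREATE TABLE student (id INTEGER, name TEXT, dept TEXT)")
con.executemany("INSERT INTO student VALUES (?, ?, ?)",
                [(1, "Ada", "ICT"), (2, "Bolu", "ICT"), (3, "Chin", "MATH")])

# Selection and projection: rows and columns are addressed by name, not position.
for row in con.execute("SELECT name FROM student WHERE dept = 'ICT'"):
    print(row)                             # ('Ada',) then ('Bolu',)
con.close()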
WEEK 4.1 – RDBMS AND OO DATABASE

Current usage

There is some disagreement about what a "relational" DBMS is. The most popular definition of an RDBMS is relatively imprecise; some argue that merely presenting a view of data as a collection of rows and columns is sufficient to qualify as an RDBMS. Typically, products that qualify as an RDBMS under this interpretation implement some of Codd's 12 rules, but most popular database systems do not support them all.
WEEK 4.1 – RDBMS AND OO DATABASE

A second school of thought argues that if a database does not implement all of Codd's rules, it is not relational. This view, shared by many theorists and other strict adherents to Codd's principles, would disqualify many database systems as not "truly relational". In fact, any database that uses SQL (Structured Query Language) to access and modify data is not an RDBMS under this definition. Advocates of this philosophy refer to systems that follow some but not all of the rules as Pseudo-Relational Database Management Systems (PRDBMS). For clarification, they often refer to RDBMSs that do follow all of the rules as Truly-Relational Database Management Systems (TRDBMS).
WEEK 4.1 – RDBMS AND OO DATABASE

LIST of RDBMS Software

Proprietary software:
 InterBase
 Matisse
 Microsoft Access
 Microsoft SQL Server
 Microsoft Visual FoxPro
 Mimer SQL
 Netezza
 NonStop SQL
 Oracle

Free or open source software:
 HSQLDB
 Ingres
 MaxDB
 MonetDB
 MySQL
WEEK 4.1 – RDBMS AND OO DATABASE

When you integrate database capabilities with object programming language capabilities, the result is an object database management system (ODBMS). An ODBMS makes database objects appear as programming language objects in one or more object programming languages. An ODBMS extends the language with transparently persistent data, concurrency control, data recovery, associative queries, and other capabilities.
Object-oriented databases are designed to work well with object-oriented programming languages such as Java, C#, and C++. ODBMSs use exactly the same model as object-oriented programming languages.
WEEK 4.1 – RDBMS AND OO DATABASE

Consider an ODBMS when you have a business need for high performance on complex data. Generally, an ODBMS is a good choice when you have all three factors: business need, high performance, and complex data.

Data is stored as objects and can be interpreted only using the methods specified by its class. The relationship between similar objects is preserved (inheritance), as are references between objects. Queries can be faster because joins are often not needed (as in a relational database).
WEEK 4.1 – RDBMS AND OO DATABASE

This is because an object can be retrieved directly without a search, by following its object id (a small sketch follows below). The same programming language can be used for both data definition and data manipulation. The full power of the database programming language's type system can be used to model data structures and the relationships between the different data items. Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation. OODBs typically provide better support for versioning: an object can be viewed as the set of all its versions.
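A hedged sketch of the idea that related objects are reached by following references rather than by joins. It uses plain Python objects rather than a real ODBMS API (the class and attribute names are invented; an actual ODBMS would add persistence, transactions and queries on top of this):

class Department:
    def __init__(self, name):
        self.name = name

class Student:
    def __init__(self, name, dept):
        self.name = name
        self.dept = dept          # a direct object reference, not a foreign key

ict = Department("ICT")
ada = Student("Ada", ict)

# Navigation follows the reference directly -- no join, no key lookup.
print(ada.dept.name)              # ICT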
WEEK 4.1 – RDBMS AND OO DATABASE

Also, object versions can be treated as full-fledged objects. OODBs also provide systematic support for triggers and constraints, which are the basis of active databases. Most, if not all, object-oriented application programs that have database needs will benefit from using an OODB. Ode is an example of an OODB built on C++.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

The object-oriented approach includes analysis, design and programming in which the focus is set on 'things' rather than on operations or functions. A software program is not designed as a set of functions that interchange data through their parameters and through a shared memory or global variables; an object-oriented program consists of interacting objects. Objects maintain their own local state and define operations on that state information. They hide information about the representation of the state and hence limit access to it. The characteristics of an object-oriented design are:
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

 In an object-oriented design a software system is designed as a set of interacting objects that manage their own private state and offer services to other objects. These services are often called methods, or operations.

 Objects are specified by object classes. An object is created by instantiating an object class.

 System functionality is expressed in terms of operations or services associated with each object. Objects interact by calling on the operations defined by other objects.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

Elements of the Object-Oriented Approach

The fundamental parts of any object-oriented approach are the class and the object. While a class is a form (i.e. it identifies which attributes and operations it includes), an object is an instance of that form, with concrete values for the attributes, which performs the operations. A minimal sketch follows below.
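A minimal illustration of the class/object distinction in Python (the class name, attributes and operation are invented for the example):

class BankAccount:
    # The class is the form: it names the attributes and the operations.
    def __init__(self, owner, balance):
        self.owner = owner        # attribute
        self.balance = balance    # attribute

    def deposit(self, amount):
        # An operation (method) offered to other objects.
        self.balance += amount
        return self.balance

# Objects are instances of the class, each with its own concrete values.
a = BankAccount("Ada", 100)
b = BankAccount("Bolu", 250)
a.deposit(50)
print(a.balance, b.balance)       # 150 250 -- each object keeps its own state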
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

Unified Modeling Language


UML is a result of the evolution of object-oriented modeling languages. It was developed by Rational Software Corporation by unifying some of the leading object-oriented modeling methods: Booch (author: Grady Booch), OMT (Object Modeling Technique; author: Jim Rumbaugh) and OOSE (Object-Oriented Software Engineering; author: Ivar Jacobson). The unification work started in '94. UML 1.0 was submitted in '97 to the OMG (Object Management Group) by a group called UML Partners, which was founded by Rational Software. The current UML version is 1.4 (published in September 2001) and there is ongoing work in the OMG on a new major version, 2.0. UML is used for modeling software systems.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

Modeling includes a process of analysis and design. In analysis, the system is first described by a set of requirements, and then by identification of system parts. The design phase is tightly connected to the analysis phase; it starts from the identified system parts and continues with detailed specification of these parts and their interaction. For requirements identification UML provides support for identifying and specifying use cases. System parts are identified as packages, components and finally as objects (which are represented by classes).
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

UML Language Architecture

To be able to read and create UML models, one needs to understand the conceptual model of the UML language. The conceptual model of UML contains the following elements:

• UML basic building blocks
 Things
 Relationships
 Diagrams
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

 Rules that dictate how building blocks can be used together. There are semantic rules for what well-formed UML models are; those include naming, scope, visibility, integrity and execution. However, during development, UML models are typically not well-formed, but tend to be incomplete and inconsistent.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

• Common mechanisms that apply consistently throughout UML:

 Specifications. UML's graphical notation is used to visualize the model, but UML's specification is used to state the model's details.

 Adornments. Many of the specification details can be rendered as graphical or textual adornments (the word adornment here means something like an enhancement or decoration) to the basic notation. Every element in UML's notation starts with a basic symbol, and a variety of adornments can be added to this basic symbol.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

 Common divisions. Almost every building block in UML has the class/object concept. Graphically, UML uses the same symbol for class and object, but an object's name is underlined. There is also a separation between interface and implementation; almost every building block in UML has this interface/implementation concept.

 Extensibility mechanisms (stereotypes, tagged values and constraints)
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

Component-Based Development

The concept of building software from components is not new. A "classical" design of complex software systems always begins with the identification of system parts designated subsystems or blocks, and on a lower level modules, classes, procedures and so on. The reuse approach to software development has been used for many years. However, the recent emergence of new technologies has significantly increased the possibilities of building systems and applications from reusable components.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

Both customers and suppliers have had great expectations from component-based development (CBD), but their expectations have not always been satisfied. Experience has shown that component-based development requires a systematic approach to and focus on the component aspects of software development. Traditional software engineering disciplines must be adjusted to the new approach, and new procedures must be developed. Component-based Software Engineering (CBSE) has become recognized as such a new sub-discipline of Software Engineering.
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

The major goals of CBSE are the provision of support for the
development of systems as assemblies of components, the
development of components as reusable entities, and the
maintenance and upgrading of systems by customizing and
replacing their components. The building of systems from
components and the building of components for different
systems requires established methodologies and processes
not only in relation to the development/maintenance aspects,
WEEK 4.2 – SOFTWARE DEVELOPMENT TECHNOLOGIES

but also to the entire component and system lifecycle, including organizational, marketing, legal, and other aspects. In addition to specific CBSE objectives such as component specification or composition and technologies, there are a number of software engineering disciplines and processes which require specific methodologies for application in component-based development. Many of these methodologies are not yet established in practice; some are not even developed. The progress of software development in the near future will depend very much on the successful establishment of CBSE, and this is recognized by both industry and academia.
ICT
INFORMATION TECHNOLOGY
FUNDAMENTALS

WEEK 5

WEEK 5.1

WEEK 5.2
WEEK 5.1 – SYSTEM ARCHITECTURE

Systems architecture can best be thought of as a representation of an engineered (or to-be-engineered) system, and the process and discipline for effectively implementing the design(s) for such a system. Such a system may consist of information and/or hardware and/or software.
It is a representation because it is used to convey the informational content of the elements comprising a system, the relationships among those elements, and the rules governing those relationships.
It is a process because a sequence of steps is prescribed to produce or change the architecture, and/or a design from that architecture, of a system within a set of constraints.
It is a discipline because a body of knowledge is used to inform practitioners as to the most effective way to design the system within a set of constraints.
WEEK 5.1 – SYSTEM ARCHITECTURE

A systems architecture is primarily concerned with the internal interfaces among the system's components or subsystems, and the interface between the system and its external environment, especially the user. (This latter, special interface is known as the computer-human interface, AKA human-computer interface, or CHI; formerly called the man-machine interface.)
WEEK 5.1 – SYSTEM ARCHITECTURE

What is a System

A system is an assemblage of related elements comprising a whole, such that each element may be seen to be a part of that whole in some sense. That is, each element is seen to be related to other elements of the system and/or to the whole system. It is generally recognized that while any element of a system need not have a (direct) relationship with any other particular element of a system, any element which has no relationship with any other element of a system cannot be a part of that system.
WEEK 5.1 – SYSTEM ARCHITECTURE

A system typically consists of components (or elements) which interface in order to facilitate the 'flow' of information, matter or energy. The term is often used to describe a set of entities which 'act' on each other, and for which a mathematical model or a logical model may be constructed encompassing the elements and their allowed actions.
A system may be a set of rules for governing behavior or
organization. Laws are a system which governs human social
behavior. Grammar is a system which governs language usage
(in this case, the grammatical elements are the system
elements).
A system may be said to be any assemblage which accepts an
input, processes it, and produces an output.
A sub-system is a system which is a proper subset of another
system.
WEEK 5.1 – SYSTEM ARCHITECTURE

A system could also be a method or an algorithm. Again, an example will illustrate: there are systems of counting, as with Roman numerals, and various systems for filing papers, or catalogues, and various library systems, of which the Dewey Decimal System is an example. This still fits with the definition of components which are connected together (in this case in order to facilitate the flow of information).

System can also be used to refer to a framework, be it software or hardware, designed to allow software to run.
WEEK 5.1 – SYSTEM ARCHITECTURE

By analogy, then, a systems architecture makes use of elements of both software and hardware and is used to enable design of such a composite system. A good architecture may be viewed as a 'partitioning scheme,' or algorithm, which partitions all of the system's present and foreseeable requirements into a workable set of cleanly bounded subsystems with nothing left over. That is, it is a partitioning scheme which is exclusive, inclusive, and exhaustive.
WEEK 5.2 – INTERNET TECHNOLOGY

The Internet, or simply the Net, is the publicly accessible worldwide system of interconnected computer networks that transmit data by packet switching using a standardized Internet Protocol (IP). It is made up of thousands of smaller commercial, academic, domestic, and government networks. It carries various information and services, such as electronic mail, online chat, and the interlinked Web pages and other documents of the World Wide Web.
Contrary to some common usage, the Internet and the World Wide Web are not synonymous: the Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, etc.; the Web is a collection of interconnected documents, linked by hyperlinks and URLs, and is accessible using the Internet.
WEEK 5.2 – INTERNET TECHNOLOGY

Creation of the Internet


The USSR's launch of Sputnik spurred the U.S. to create the
Defense Advanced Research Projects Agency (DARPA) to
regain a U.S. technological lead. DARPA created the
Information Processing Technology Office to further the
research of the Semi Automatic Ground Environment program,
which had networked country-wide radar systems together for
the first time. J. C. R. Licklider was selected to head the IPTO,
and saw universal networking as a potential unifying human
revolution. Licklider recruited Lawrence Roberts to head a
project to implement a network, and Roberts based the
technology on the work of Paul Baran who had written an
exhaustive study for the U.S. Air Force that recommended
packet switching to make a network highly robust and
survivable.
WEEK 5.2 – INTERNET TECHNOLOGY

After much work, the first node went live at UCLA on October
29, 1969 on what would be called the ARPANET, the "eve"
network of today's Internet.

The first TCP/IP wide area network was operational by January 1, 1983 (this is technically the birth of the Internet), when the United States' National Science Foundation (NSF) constructed a university network backbone that would later become the NSFNet. It was then followed by the opening of the network to commercial interests in 1995.
WEEK 5.2 – INTERNET TECHNOLOGY

Important separate networks that offered gateways into, then later merged into, the Internet include Usenet, Bitnet and the various commercial and educational X.25 networks such as CompuServe and JANET. The ability of TCP/IP to work over these pre-existing communication networks allowed for a great ease of growth. Use of "Internet" as a phrase to describe a single global TCP/IP network originated around this time.
WEEK 5.2 – INTERNET TECHNOLOGY

The network gained a public face in the 1990s. In August 1991 CERN in Switzerland publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few web pages at CERN. In 1993 the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign released version 1.0 of the Mosaic web browser, and by late 1994 there was growing public interest in the previously academic/technical Internet. By 1996 the word "Internet" was common public currency, but it referred almost entirely to the World Wide Web.
WEEK 5.2 – INTERNET TECHNOLOGY

Today's Internet

Aside from the complex physical connections that make up its infrastructure, the Internet is held together by bi- or multi-lateral commercial contracts (for example peering agreements) and by technical specifications or protocols that describe how to exchange data over the network.
Indeed, the Internet is essentially defined by its interconnections and routing policies. In an often-cited, if perhaps gratuitously mathematical, definition, Seth Breidbart once described the Internet as "the largest equivalence class in the reflexive, transitive, symmetric closure of the relationship 'can be reached by an IP packet from'".
WEEK 5.2 – INTERNET TECHNOLOGY

Internet Protocols

Unlike older communications systems, the Internet protocol suite was deliberately designed to be independent of the underlying physical medium. Any communications network, wired or wireless, that can carry two-way digital data can carry Internet traffic. Thus, Internet packets flow through wired networks like copper wire, coaxial cable, and fibre optic; and through wireless networks like Wi-Fi. Together, all these networks, sharing the same high-level protocols, form the Internet.
WEEK 5.2 – INTERNET TECHNOLOGY

The Internet protocols originate from discussions within the Internet Engineering Task Force (IETF) and its working groups, which are open to public participation and review. These committees produce documents that are known as Request for Comments documents (RFCs). Some RFCs are raised to the status of Internet Standard by the IETF process.
Some of the most used protocols in the Internet protocol suite are IP, TCP, UDP, DNS, PPP, SLIP, ICMP, POP3, IMAP, SMTP, HTTP, HTTPS, SSH, Telnet, FTP, LDAP, SSL, and TLS. (A tiny sketch exercising two of them follows below.)
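A tiny sketch exercising two of these protocols from Python's standard library: DNS (name resolution) and HTTP (fetching a page). It assumes network access and uses example.com, a domain reserved for documentation, as the target:

import socket
import http.client

host = "example.com"                       # reserved example domain

# DNS: resolve the host name to an IP address.
print(socket.gethostbyname(host))

# HTTP: issue a GET request over TCP port 80 and read the status line.
conn = http.client.HTTPConnection(host, 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)            # typically: 200 OK
conn.close()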
WEEK 5.2 – INTERNET TECHNOLOGY

Internet structure

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks.

ICANN

The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet protocol addresses, and protocol port and parameter numbers.
WEEK 5.2 – INTERNET TECHNOLOGY

A globally unified namespace (i.e., a system of names in which there is one and only one holder of each name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities. The US government continues to have a privileged role in approving changes to the root zone file that lies at the heart of the domain name system.
WEEK 5.2 – INTERNET TECHNOLOGY

Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet, as such, has no governing body. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet's systems of domain names, Internet protocol addresses, and protocol port and parameter numbers.
WEEK 5.2 – INTERNET TECHNOLOGY

The World Wide Web

Through keyword-driven Internet research using search engines like Google, millions worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.
WEEK 5.2 – INTERNET TECHNOLOGY

Some companies and individuals have adopted the use of 'weblogs' or blogs, which are largely used as easily-updatable online diaries. Some commercial organizations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.
WEEK 5.2 – INTERNET TECHNOLOGY

Collaboration

This low-cost and nearly instantaneous sharing of ideas, knowledge and skills has revolutionized some areas of human activity and given rise to entirely new ones. One example of this is the collaborative development and distribution of Free/Libre/Open-Source Software (FLOSS) such as Linux, Mozilla and OpenOffice.org. See Collaborative software.
WEEK 5.2 – INTERNET TECHNOLOGY

File-sharing

A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networking.
WEEK 5.2 – INTERNET TECHNOLOGY

Language

The most prevalent language for communication on the Internet is English. This may be due to the Internet's origins, as well as English's role as a lingua franca. It may also be related to the poor capability of early computers to handle characters other than those in the basic Latin alphabet. After English (32% of web visitors), the most-requested languages on the World Wide Web are Chinese (13%), Japanese (8%), Spanish (7%), German (6%) and French (4%) (from Internet World Stats, updated November 30, 2005). By continent, 34% of the world's Internet users are based in Asia, 29% in Europe, and 23% in North America ([2] updated November 21, 2005).
WEEK 5.2 – INTERNET TECHNOLOGY

The Internet's technologies have developed enough in recent years that good facilities are available for development and communication in most widely used languages. However, some glitches such as mojibake still remain.

Internet and the workplace

With the emergence of the internet and recent high-speed connections becoming available to the public, the internet has altered the way many people work in significant ways. In contrast to the traditional 9-5 workday, where employees commute to and from work, the internet has allowed greater flexibility both in terms of working hours and work location. Today, many employees work from home by "telecommuting".
WEEK 5.2 – INTERNET TECHNOLOGY

The internet and the advent of blogs have given employees a forum from which to voice their opinions about their jobs, employers and co-workers, creating a massive amount of information and data on work that is currently being collected by the Worklifewizard.org project run by Harvard Law School's Labor & Worklife Program.
WEEK 5.2 – INTERNET TECHNOLOGY

Censorship

Some countries, such as Iran and the People's Republic of China, restrict what people in their countries can see on the Internet, especially unwanted political and religious content. Censorship is sometimes done through government-controlled censoring filters, or by means of law or culture, making the propagation of targeted materials extremely hard. However, many internet users with the necessary technical skill are able to bypass these filters, meaning that most Internet content is available regardless of where one is in the world, so long as one has the means of connecting to it.
WEEK 5.2 – INTERNET TECHNOLOGY

In the Western world, it is Germany that has the highest rate of censorship, especially of Nazi-related content. However, most countries in the Western world do not force Internet Service Providers to block sites.

There are a large number of programs available that will block what are deemed to be offensive sites (such as pornographic or violent ones) on individual computers or networks.
WEEK 5.2 – INTERNET TECHNOLOGY

A complex system

Many computer scientists see the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system" (Willinger, et al). The Internet is extremely heterogeneous. (For instance, data transfer rates and physical characteristics of connections vary widely.) The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity.
