
Lecture 1. THE ROLE OF ICT IN KEY SECTORS OF THE DEVELOPMENT OF SOCIETY.

ICTs (Information and Communication Technologies) are a diverse set of technological tools and
resources used to communicate, and to create, disseminate, store, and manage information.
Information and communications technology (ICT) refers to all the technology used to handle
 telecommunications,
 broadcast media,
 intelligent building management systems,
 audiovisual processing and transmission systems,
 network-based control and monitoring functions,
 Ethernet-based real-time encoding systems, media streaming servers, and media reception and display.
Different technologies are typically used in combination rather than as the sole delivery
mechanism.

ICT development includes many types of infrastructure and services, ranging from
telecommunications, such as voice, data, and media services, to specific applications, such as
banking, education, or health, to the implementation of electronic government (e-government).
Each of these types has its own trends that vary across countries and regions.

One of the goals of ICT4D (Information and communication technologies for development) is
to employ robust low-cost technologies that can be available for poor and low income
communities around the world. Short- and long-term negative effects of ICTs also need to be
studied. Examples of specific technologies used in developing countries include:

 Microlending and microfinance apps and organizations, such as Zidisha, Kiva Microfunds, and Milaap.
 Information sharing, such as Esoko.
 Scientific Animations Without Borders is a program based at the University of Illinois at
Urbana-Champaign, focused on ICT4D.

Applications of ICT

Agriculture

Agriculture is considered the most vital sector for ICT intervention and is the primary economic
sector: it produces the most basic of human needs - food, clothing and shelter.

Farmers in the developing countries use ICTs to access price information from national and
international markets as well as connect to policy makers and other farmers. There are also
smartphone apps that can show the user information about the status of their crops and irrigation
system remotely. In livestock farming, cattle-breeding now includes scientific crossbreeding
techniques that produce cattle with greatly improved fertility. Having a local radio/TV show will
be a great help in informing the community on updates from the agricultural sector. ICTs can
also be used for training purposes.

Recent experimental studies have assessed the role of mobile phones in farmers' access to
agricultural information from extension agents and from other farmers.

ICT4D initiatives in agriculture can be generally classified into direct interventions, when
farmers are connected to information and opportunities that can directly improve their income or
well-being, and indirect interventions – supportive, long-term programs that can improve
established agricultural services over time through capacity building, research, and training.

ICT4D not only strengthens agricultural production but also helps in market development.
Thus it supports the creation of future opportunities for the agricultural sector and the development of rural
livelihoods.

A document released by the World Bank's eTransform Africa project presents a summary of
ICT application in agriculture in the African continent. The report includes a roadmap on ICT's
application in farming, a list of African eAgriculture accomplishments called the Africa Scan,
and agricultural case studies performed in countries such as Namibia and Egypt, which focus
on livestock production and irrigation efficiency, respectively.

The Open Agriculture (OpenAG) project by MIT is an ICT-enabled project with an agricultural
development focus. In this project, users have a controlled environment agriculture
device where "every time users grow and harvest, they will contribute to a library of Climate
Recipes that can be borrowed and scaled so that users around the world can gain access to the
best and freshest foods".

Rice is the staple food of half of the world's population. In the Philippines, the FutureRice program by
the Philippine Rice Research Institute (PhilRice) is close to completing its vision of Philippine
farms of the future as of 2015. The goal is to have farms that are automated, connected to apps
for the people to save on water, harness green energy, and make use of natural fertilizers and
pesticides. The demo farms aim to prepare farmers for two probable future scenarios: natural
farming for a world where fuel has become expensive and scarce due to high demand, and high-
tech, mechanized farming to make Philippine rice competitive in the world market.

With farming equipment, farmers can significantly save time, money, and labor. For instance,
a mechanical rice transplanter – a machine used to transfer rice seedlings onto a rice paddy – can
finish one hectare in one hour compared to an entire day with 8 to 10 laborers without a
transplanter. Organic, farm-sourced waste such as carabao manure and rice straw is turned into
fertilizer through the action of microorganisms and earthworms, a process called vermicomposting.

Today, there are apps customized to the needs of farmers. Rice Crop Manager, a web and
mobile-based app developed by the International Rice Research Institute together with PhilRice,
presents farmers with a set of questions about their farm. Once all the questions are answered, the
app will generate recommendations on how the farmer can improve his yield (e.g. the app will
tell him when, how much, and how often to apply fertilizer). Rice Crop Manager can be viewed
and downloaded from Google Play as "RCM PH".

"Rice Doctor Tagalog" is a Filipino version of the mobile application. It aims to aid in the
identification and management of the rice crop issues here in the country. Leading authorities
from International Rice Research Institute, Philippine Rice Research Institute, the Indonesian
Research Institute for Rice, and the Lucid team at the University of Queensland in Australia
developed the application. IRRI said that workers, farmers, researchers, and students using Rice
Doctor can identify more than 80 pests, diseases and other disorders affecting rice with text and
images. Experts from PhilRice and students taking up development communication from the
University of the Philippines aided in the reviewing, editing and finalizing of the Filipino
translation of the summary of the signs, symptoms and management options. IRRI stated that this
recent meeting in Laguna was the next step in the Filipino translation effort carried out by the project,
Improving Technology Promotion and Delivery through Capability Enhancement of Next-Gen
Rice Extension Professionals and Other Intermediaries, under the Food Staples Sufficiency
Program. Last year, the first part of the workshop was primarily for the terms and translation of
the diagnostic questions. IRRI claims that the Filipino-translated Rice Doctor is a stepping
stone for the translation and localization of a diagnostic tool for country-specific crop
problems. Similar efforts are currently being carried out in other countries such as Bangladesh and India.

Climate change and environment

The use of ICT in weather forecasting is broad. Weather forecasting offices use mass media to
inform the public on weather updates. After Tropical Storm Ondoy in the Philippines, Filipinos
have become more curious about and aware of weather hazards. Meteorological offices are also
using advanced tools to monitor the weather and the weather systems that may affect a certain
area.
Monitoring devices include:

 Weather satellites
 Weather radars
 Automatic weather stations
 Wind profilers
 Other synoptic data or weather instruments, including the Earth Simulator, which is used to
model climate and weather conditions.

In Africa, flooding is one of the major concerns of farmers. The International Water Management
Institute launched mobile services for flood management, specifically in East Sudan. These
mobile services are considered a next-generation ICT for weather and water information. The
tool converts complex satellite sensor information to simple text messages which are sent to
farmers, informing them about the optimum use of flood water for crop production. The text
messages also warn farmers about flood events, helping them prepare their fields, and advise
them on how to mitigate flood damage by estimating the risk of future flood events.

Climate change is a global phenomenon affecting the lives of mankind. In times of calamities,
information and communication technology is needed for disaster management. Various
organisations, government agencies and small and large-scale research projects have been
exploring the use of ICT for relief operations, providing early warnings and monitoring extreme
weather events. A review of new ICTs and climate change in developing countries highlighted
that ICT can be used for (1) Monitoring: observing, detecting and predicting, and informing
science and decision making; (2) Disaster management: supporting emergency response through
communications and information sharing, and providing early warning systems; and (3)
Adaptation: supporting environmental, health and resource management activities, up-scaling
technologies and building resilience. In the Philippines, institutions like the National Disaster
Risk Reduction and Management Council help the public monitor the weather and issue advisories
on possible risks from hazardous weather. NetHope is another global organization which
contributes to disaster management and awareness through information technology. According
to ICTandclimatechange.com, ICT companies can be victims, villains or heroes of climate
change.

In 2015, the Metro Manila Development Authority (MMDA) launched a website called Be
Prepared Metro Manila. The website collates information regarding earthquake preparedness.
This was created in response to a predicted magnitude-7.2 earthquake expected to hit Metro
Manila, and it contains different infographics with precautionary measures that can be
used to monitor and prepare for earthquakes. Be Prepared Metro Manila explains how to
respond in the event of an earthquake, illustrates the valley fault system, lists down details of
emergency contacts, and opens a sign-up process for people interested in becoming volunteers. In
addition to the campaign launched by the Metro Manila Development Authority (MMDA), the
Department of Science and Technology (DOST) has also utilized ICT through the use of both
web application and mobile application for the DOST – Project Noah. According to DOST,
NOAH's mission is to undertake disaster science research and development, advance the use of
cutting edge technologies, and recommend innovative information services in government's
disaster prevention and mitigation efforts. Through the use of science and technology and in
partnership with the academe and other stakeholders, the DOST through Project NOAH is taking
a multi-disciplinary approach in developing systems, tools, and other technologies that could be
operationalized by government to help prevent and mitigate disasters.

Geographic information systems (GIS) are also used in several ICT4D applications, such as
the Open Risk Data Initiative (OpenRDI). OpenRDI aims to minimize the effect of disaster in
developing countries by encouraging them to open their disaster risk data. GIS technologies such
as satellite imagery, thematic maps, and geospatial data play a big part in disaster risk
management. One example is the HaitiData, where maps of Haiti containing layers of geospatial
data (earthquake intensity, flooding likelihood, landslide and tsunami hazards, overall damage,
etc.) are made available which can then be used by decision makers and policy makers for
rehabilitation and reconstruction of the country. The areas which are receiving priority attention
include natural resources information assessment, monitoring and management, watershed
development, environmental planning, urban services and land use planning.

Government, non-government and other organizations are encouraged to use ICT as a tool for
protecting the environment and developing sustainable systems that save natural resources, to
implement green computing and to establish surveillance systems to forecast and monitor natural
and man-made disasters.

According to research by the OECD, ICTs can be tools for dealing with environmental issues as
follows:

1. Environment surveillance: Terrestrial (earth, land, soil, water), ocean, climate and
atmospheric surveillance, data collection, storage and record technologies, remote sensing,
telemetric systems, geographic information systems (GIS) etc.
2. Environment analysis: Different computational and processing tools are required to
analyze the data collected from the environment. These include land, soil, water and
atmospheric quality assessment tools, and tools for analyzing atmospheric conditions such as
GHG emissions and pollutants.
3. Environment planning: Environment planning and policy formulation require analyzed
data, information and decision support systems.
4. Environment management and protection: Information and communication technologies
for management and protection of environment include resource and energy conservation and
management systems, GHG emission management and reduction systems and controls, pollution
control and management systems etc. ICT can reduce its own environmental impacts by
increasing system efficiency, which ultimately reduces the overall negative impact on the environment.
5. Impact and mitigating effects of ICT utilization: ICT use can mitigate the environmental
impacts directly by increasing process efficiency and as a result of dematerialization, and
indirectly by virtue of the secondary and tertiary effects resulting from ICT use on human
activities, which in turn reduce the impact of humans on the environment.
6. Environmental capacity building: ICT is used as a media to increase public awareness,
development of environment professionals, and integrating environmental issues into formal
education.

Examples: the Tropical Ecology Assessment and Monitoring Network, Atlas of Our
Changing Environment, Climate Change in Our World, and integrated ecosystem monitoring,
sensing and modelling.

Education

The use of ICTs will not by itself solve the current problems of the educational system, but
rather provides alternative solutions to the obstacles encountered in conventional education.
ICTs can deliver education and knowledge to a wider audience, even with a limited amount of
resources, unlike conventional systems of education.

ICT has been employed in many education projects and research over the world. The Hole in
the Wall (also known as minimally invasive education) is one of the projects which focuses on
the development of computer literacy and the improvement of learning. Other projects included
the utilization of mobile phone technology to improve educational outcomes.

In the Philippines, proposals have been put forward to expand the definition of ICT4E from
exclusively high-end technology to include low-end technology, that is, both digital and analog.
As a leading mobile technology user, the Philippines can take advantage of this for
student learning. One project that serves as an example is Project Mind, a collaboration of the
Molave Development Foundation, Health Sciences University of Mongolia, ESP Foundation, and
the University of the Philippines Open University (UPOU) which focuses on the viability
of Short Message Service (SMS) for distance learning. Pedagogy, Teacher Training, and
Personnel Management are some of the subgroups of ICT4E. UPOU is one of the best examples
of education transformation that empowers the potential of ICT in the Philippines' education
system. By maximizing the use of technology to create a wide range of learning, UPOU promotes
lifelong learning in a more convenient way.

Furthermore, ICTs allow learning to become student-centered rather than teacher-dominated,
such as in the case of distance-learning programs. They have multiple impacts on student
achievements and motivations, including but not limited to: confidence in computer usage,
increased autonomy when learning, improved development in language and communication
skills. However, it is not without its flaws – ICTs can easily become the focus of a program, in
which the technology is given and provided before much thought is given to the application of it.

As education is a key factor of socio-economic development, the education system of
developing countries must be aligned with modern technology. ICT can improve the quality of
education and bring better outcomes by making information easily accessible to students, helping
them to gain knowledge and skills easily, and making training more available for teachers.

Literacy

Many current initiatives to improve global, regional and national literacy rates use ICT,
particularly mobile phones and SMS. For example, in India a project titled "Mobile Learning
Games for English as Second Language Literacy" (2004-2012) aimed to enhance the literacy sub-
skills of boys and girls in low-income rural areas (and in urban slums) via mobile game-based
learning of English in non-formal, formal and informal education contexts.

A project in Niger titled "Alphabetisation de Base par Cellulaire (ABC)" (2009-2011) was
based on the observation that ‘illiterate traders in Niger were teaching themselves how to read
and write in order to be able to benefit from the lower prices that sending SMS offered compared
with calling. If mobile phones could encourage illiterate traders to become partially literate, how
useful would it be to incorporate mobile phones in adult literacy classes?’ In consequence, this
project provided mobile phones and instruction to adults (including participants from producers’
associations) on how to use mobiles in literacy programmes (including ‘functional literacy
topics’).
In Somalia, the "Dab IYO DAHAB Initiative" (2008-2011) used mobile phone technology to
‘build basic money management skills (financial skills) among youth and women so that they
could make informed decisions about their personal, households and/or small businesses’ and
was used ‘as a tool to empower Somali youth, particularly young Somali women, and more
generally, to enhance existing grassroots education, financial literacy, and poverty-reduction
initiatives’. The overall Somali community empowerment programme has been documented as
boosting job training and placement for 8,000 young people (women and men). Tests before and
after showed statistically significant improvement in skills, with the youth livelihoods programme
being linked to job placements.

Health

ICTs can be a supportive tool for developing and delivering reliable, timely, high-quality and
affordable health care and health information systems, and for providing health education and
training and improving health research.

According to the World Health Organization (WHO), 15% of the world's total population have
disabilities. This is approximately 600 million people, three out of every four of whom live in
developing countries; half are of working age, half are women, and the highest incidence and
prevalence of disabilities occurs in poor areas. With ICT, the lives of people with disabilities can be
improved, allowing them to have a better interaction in society by widening their scope of
activities.

Goals of ICT and disability work

 Give disabled people a powerful tool in their battle to gain employment
 Increase disabled people's skills, confidence, and self-esteem
 Integrate disabled people socially and economically into their communities
 Reduce physical or functional barriers and enlarge the scope of activities available to disabled persons
 Develop web content that can be accessed by persons with disabilities, especially the visually impaired and hearing impaired

At the international level, there are numerous guiding documents impacting the education
of people with disabilities, such as the Universal Declaration of Human Rights (1948),
the Convention against Discrimination in Education (1960), the Convention on the Rights of the
Child (1989), and the Convention on the Protection and Promotion of the Diversity of Cultural
Expressions (2005). The Convention on the Rights of Persons with Disabilities (CRPD) includes
policies about accessibility, non-discrimination, equal opportunity, full and effective participation
and other issues. The key statement within the CRPD (2006) relevant for ICT and people with
disabilities is within Article 9:

"To enable persons with disabilities to live independently and participate fully in all aspects of
life, States Parties shall take appropriate measures to ensure to persons with disabilities access, on
equal basis with others, to the physical environment, to transportation, to information and
communications, including information and communications technologies and systems, and other
facilities and services open or provided to the public, both in urban and rural areas. (p. 9)"

Another international policy that has indirect implications for the use of ICT by people with
disabilities is the Millennium Development Goals (MDGs). Although these do not specifically
mention the right to access ICT for people with disabilities, two key elements within the MDGs
are to reduce the number of people in poverty and to reach out to the marginalised groups without
access to ICT.

E-government and civic engagement

New forms of technology, such as social media platforms, provide spaces where individuals
can participate in expressions of civic engagement. Researchers are now realizing that activity
such as Twitter use "...that could easily be dismissed as leisure or mundane should be considered
under a broader conceptualization of development research."

Social Networking Sites (SNS) are indispensable because they provide a venue for civic engagement,
allowing users to call attention to issues that need action, since social media
platforms are effective tools for disseminating information to all their users. Social media can also
be used as a support venue for solving problems and also a means for reporting criminal activity
or calamity issues that affect the well-being of communities. Social media is also used to
encourage volunteerism by letting others know of situations in places that require civic intervention
and to organize activities to make them happen.

Civic engagement plays a large part in e-government, particularly in the area of transparency
and accountability. ICTs are used to promote openness in government and to provide a platform
for citizens to report anomalous government activities, with the aim of reducing corruption
and promoting efficiency.

Even before the advent or popularity of social media platforms, internet forums were already
present. Here, people could share their concerns about pertinent topics to seek solutions.

In third-world countries like the Philippines, the text brigade is an easy method for informing
and gathering people for whatever purpose. It usually starts with an individual sending an SMS to
his/her direct contacts about a civic engagement. Then he/she requests the recipients to send the
same message to their own contacts, and so on, until the number of people involved grows large.

The e-government action plan includes applications and services for ensuring transparency,
improving efficiency, strengthening citizen relations, making need-based initiatives, allocating
public resources efficiently and enhancing international cooperation.

Writing about ICTs for government use in 1954, W. Howard Gammon can be credited as
writing the first e-government research paper. Though not mentioning the word "e-government",
his article "The Automatic Handling of Office Paper Work" tackled tactics regarding government
processes and information systems or electronic machinery.

In the Philippines, the administration now uses social media to converse more with its citizens,
as it makes people feel more in touch with the highest official in the land. However, according to
Mary Grace P. Mirandilla-Santos, research in the Philippines suggests that the
average citizen does not actively seek information about politics and government, even during an
election campaign. Another innovation is a standard suite of city indicators that enables mayors
and citizens to compare the performance of their city with others, a valuable tool for obtaining
consistent and comparable city-level data.

Other

 Tourism: Tourism is a sector that stands to benefit from ICT. Roger
Harris was the first person to show the possible benefits the field can gain from ICT. He worked
in a remote area of Malaysia and showed how a small tourism operation could be run
there using the internet. ICT can be an important medium for developing the tourism market and
improving local livelihoods.

The tourism industry takes advantage of information and communication
technology to cater to its market through e-commerce. A journal article entitled "E-Tourism: The role of
ICT in tourism industry" enumerated several ways in which e-commerce is expected to benefit
economic development in the tourism industry. These are:

1. By allowing local businesses access to global markets.
2. By providing new opportunities to export a wider range of goods and services.
3. By improving internal efficiency within firms.
 Reducing Gender Gap: According to the ITU, which is the United Nations specialized
agency for information and communication technologies, one of their Sustainable Development
Goals (SDGs) is focused on gender equality. In 2013, Broadband Commission Working Group
on Broadband and Gender released their global report which contained their estimation that there
are currently 200 million fewer women online compared to men. The ITU claims that ICT will
play an important role both in delivering gender equality and in narrowing the growing gender gap.
Based on their studies, evidence of the benefits that women can gain through ICT, especially
through being empowered with information, is increasing. "Access to ICTs can enable women to
gain a stronger voice in their communities, their government and at the global level." "There is a
growing body of evidence on the benefits of ICTs for women's empowerment, through increasing
their access to health, nutrition, education and other human development opportunities, such as
political participation." ICT can also provide women new opportunities that involve sustainable
livelihood (including ICT-based jobs) and economic empowerment once they get to fully utilize
what ICT has to offer. One of ITU's projects that is related to this goal is the Women's Digital
Literacy Campaign. ITU partnered up with non-government organization Telecentre.org
Foundation for the campaign. They have trained over one million unskilled women to use
computers and ICT applications to open more opportunities in education and employment, in the
hope that newly developed skills and knowledge related to ICT will improve their livelihoods.
According to ITU's case study... "The Campaign has demonstrated the power of digital literacy
training to open the door to other essential skills needed to operate in a broadband environment,
including financial literacy skills, as well as ICT-enabled career training. Such training enables
women to set up online businesses, or to use broadband services, such as social networking sites,
to enhance their ongoing livelihood and economic activity."

The ITU's commitment to closing the digital gender gap is enshrined in the 2030 Agenda and the
Addis Ababa Action Plan 2015: develop gender-responsive strategies and policies, ensure access
and mitigate online threats, build content and services that meet women's needs, promote women
in the technology sector in decision-making positions, and establish multi-stakeholder
partnerships.

 Indigenous populations: According to UNESCO, indigenous people have low computer
ownership, low computer literacy, low connectivity to the Internet and low access to other digital
technologies such as cameras, film-making equipment, editing equipment, etc. Exacerbating
factors are the remoteness of many indigenous communities – often located in regions where
connectivity is difficult – and poor levels of literacy, particularly in English, the main computer
language. There is a lack of trained Indigenous ICT technicians to provide maintenance
locally. The goals of the UNESCO ICT4D Project for the Indigenous People are to preserve and
manage cultural resources, to enable recovery of their cultural self-worth and dignity, and to train
stakeholders to acquire greater mastery of ICT.
 Social Media: Social networking sites receive lots of attention in the Philippines, having
over 30 million Filipino users on Facebook alone. Sites like Facebook, Twitter and Instagram see
use in more than just socializing as the users tend to use the sites as a place of political
discussion, protests, and several other social movements. The use of social media tools to
communicate with people across the world has largely displaced old-school coffee-table talk.
Businesspeople tend to opt for a brief Skype conference with investors abroad rather than setting up
in-person meetings, to save time and money. Social media is becoming part of the daily lives of so many
people around the world that it allows businesses to reach people they haven't been able to reach
before. Businesses need to make their own presence felt in social media. Otherwise, they might
lose out on opportunities that competitors could capitalize on. In addition to the traditional
methods of campaign, political figures make different social networking sites a part of their
electoral campaigns to voice their platforms. Local government agencies and officials
release announcements, statements and bulletins via their verified social media accounts. Local
transportation and transit agencies relay information mainly through Twitter about traffic
accidents, road closings and emergencies such as floods or typhoons.
 Persons with Disability (PWD): There are plenty of barriers to accessing electronic and
information and communication technologies, and one of them is the person's disability.
Over a billion people all over the world are hindered from accessing ICT because of their disability.
Persons with disability (PWDs) are at a huge disadvantage without access to such technology
in the age of information.

Mr. Opeolu Akinola, the President of the Nigerian Association of the Blind, says "Accessibility is
ensuring that all the people in the society can access available resources irrespective of disability,
which means that persons with disability can participate and have the same choice as non-
disabled community members."

ICT is a great aid in improving the lives of PWDs by enlarging the opportunities available to
them, particularly in terms of social, cultural, political and economic integration in their
communities. UNESCO advocates the concept of knowledge societies, which includes
promoting the rights and needs of PWDs and empowering them through the effective use of ICTs
that are accessible, adaptive and affordable: by raising global awareness of disability rights,
developing innovative ICT solutions, building inclusive or assistive technologies for accessibility,
designing proper frameworks and tools, and contributing to the implementation of the UN
Convention on the Rights of Persons with Disabilities.

Lecture 2. INTRODUCTION TO COMPUTER SYSTEMS. ARCHITECTURE OF COMPUTER SYSTEMS.
2.1. Computer architecture

2.1.1. Basic terms

Computer architecture is a specification detailing how a set of software and hardware
technology standards interact to form a computer system or platform. In short, computer
architecture refers to how a computer system is designed and what technologies it is compatible
with.
 Software – Programs
 Hardware – Mechanical and electronic equipment (organized in the PC structure shown in Figure 1)

Figure 1
2.1.2 Von Neumann architecture
The term Von Neumann architecture, also known as the Von Neumann model or the Princeton
architecture, derives from a 1945 computer architecture description by the mathematician and
early computer scientist John von Neumann and others, First Draft of a Report on the EDVAC.
This describes a design architecture for an electronic digital computer (Figure 2) with
subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a
control unit containing an instruction register and program counter, a memory to store both data
and instructions, external mass storage, and input and output mechanisms. The meaning of the
term has evolved to mean a stored-program computer in which an instruction fetch and a data
operation cannot occur at the same time because they share a common bus. This is referred to as
the Von Neumann bottleneck and often limits the performance of the system.
Figure 2
2.1.3. Modified Harvard architecture

Some pure Harvard machines are specialty products. Most modern computers
instead implement a modified Harvard architecture. Those modifications are various ways to
loosen the strict separation between code and data, while still supporting the higher performance
concurrent data and instruction access of the Harvard architecture.
Split cache architecture
The most common modification builds a memory hierarchy with a CPU cache separating
instructions and data. This unifies all except small portions of the data and instruction address
spaces, presenting the programmer with the von Neumann model. Most programmers never need to be aware of the fact
that the processor core implements a (modified) Harvard architecture, although they benefit from
its speed advantages. Only programmers who write instructions into data memory need to be
aware of issues such as cache coherency and executable space protection.
Another change preserves the "separate address space" nature of a Harvard machine, but
provides special machine operations to access the contents of the instruction memory as data.
Because data is not directly executable as instructions, such machines are not always viewed as
"modified" Harvard architecture:
Read access: initial data values can be copied from the instruction memory into the data
memory when the program starts. Or, if the data is not to be modified (it might be a constant
value, such as pi, or a text string), it can be accessed by the running program directly from
instruction memory without taking up space in data memory (which is often at a premium).
Write access: a capability for reprogramming is generally required; few computers are purely
ROM based. For example, a microcontroller usually has operations to write to the flash memory
used to hold its instructions. This capability may be used for purposes including software updates
and EEPROM replacement.
Read instructions from data memory
A few Harvard architecture processors, such as the MAXQ, can execute instructions fetched
from any memory segment -- unlike the original Harvard processor, which can only execute
instructions fetched from the program memory segment. Such processors, like other Harvard
architecture processors -- and unlike pure Von Neumann architecture -- can read an instruction
and read a data value simultaneously, if they're in separate memory segments, since the processor
has (at least) two separate memory segments with independent data buses. The most obvious
programmer-visible difference between this kind of modified Harvard architecture and a pure
Von Neumann architecture is that -- when executing an instruction from one memory segment --
the same memory segment cannot be simultaneously accessed as data.
Modern uses of the Modified Harvard architecture
Outside of applications where a cacheless DSP or microcontroller is required, most modern
processors have a CPU cache which partitions instruction and data.
There are also processors which are Harvard machines by the most rigorous definition (that
program and data memory occupy different address spaces), and are only modified in the weak
sense that there are operations to read and/or write program memory as data. Similar solutions are
found in other microcontrollers such as the PIC and Z8Encore!, many families of digital signal
processors such as the TI C55x cores, and more. Because instruction execution is still restricted
to the program address space, these processors are very unlike von Neumann machines.
Having separate address spaces creates certain difficulties in programming with high-level
languages such as C, which do not directly support the notion that tables of read-only data might
be in a different address space from normal writable data (and thus need to be read using different
instructions).
The design of a Von Neumann architecture is simpler than the more modern Harvard
architecture which is also a stored-program system but has one dedicated set of address and data
buses for reading data from and writing data to memory, and another set of address and data
buses for fetching instructions.
A stored-program digital computer is one that keeps its programmed instructions, as well as its
data, in read-write, random-access memory (RAM). Stored-program computers were an
advancement over the program-controlled computers of the 1940s, such as the Colossus and the
ENIAC, which were programmed by setting switches and inserting patch leads to route data and
to control signals between various functional units. In the vast majority of modern computers, the
same memory is used for both data and program instructions, and the Von Neumann vs. Harvard
distinction applies to the cache architecture, not main memory.

2.2. Hardware

2.2.1. CPU

CPU (central processing unit) or processor – nerve center of a PC (Figure 3).

 This is built into a single chip which executes program instructions and coordinates the activities that take place within the
computer system.
 The chip itself is a small piece of silicon with a complex
electrical circuit called an integrated circuit.

Figure 3
The CPU consists of three main parts:
 The control unit examines the instructions in the user’s program, interprets each
instruction and causes the circuits and the rest of the components – monitor, disc drives, etc. – to
execute the functions specified.
 The arithmetic logic unit (ALU) performs mathematical calculations (+,-, etc.) and logical
operations (AND, OR, NOT).
 The registers are high-speed units of memory used to store and control data. One of the
registers (the program counter, or PC) keeps track of the next instruction to be performed in the
main memory. The other (the instruction register or IR) holds the instruction that is being
executed.
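To make the roles of these components concrete, below is a minimal, purely illustrative Python sketch of the fetch-decode-execute cycle (a toy instruction set invented for this example, not any real CPU). One memory list holds both instructions and data in the stored-program style, the program counter (pc) tracks the next instruction, the instruction register (ir) holds the current one, and an accumulator stands in for the ALU's working register.

# Toy stored-program machine: instructions and data share one memory.
memory = [
    ("LOAD", 10),    # 0: acc <- memory[10]
    ("ADD", 11),     # 1: acc <- acc + memory[11]  (the ALU's job)
    ("STORE", 12),   # 2: memory[12] <- acc
    ("HALT", None),  # 3: stop
    None, None, None, None, None, None,   # 4-9: unused
    5,               # 10: data operand
    7,               # 11: data operand
    0,               # 12: result is written here
]

pc = 0                # program counter: address of the next instruction
acc = 0               # accumulator register
while True:
    ir = memory[pc]   # fetch: the instruction register receives memory[pc]
    pc += 1
    op, addr = ir     # decode
    if op == "LOAD":  # execute: the control unit steers each case
        acc = memory[addr]
    elif op == "ADD":
        acc = acc + memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[12])     # prints 12, i.e. 5 + 7

Running the loop shows the same memory being read both for instructions (via pc) and for data (via the operand address), which is exactly the shared pathway behind the Von Neumann bottleneck described in section 2.1.2.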
The power and performance of a computer is partly determined by the speed of its processor.
A system clock sends out signals at fixed intervals to measure and synchronize the flow of data.
Clock speed is measured in gigahertz (GHz).
 A CPU running at 4 GHz (4 thousand million hertz, or cycles, per second) will enable your
PC to handle the most demanding applications.
2.2.2. Buses and cards

The main circuit board inside your system is called the motherboard and contains the CPU,
the memory chips, expansion slots, and controllers for peripherals, connected by buses –
electrical channels which allow devices inside the computer to communicate with each other.
Expansion slots allow users to install expansion cards, adding features like sound, memory, and
network capabilities.
Computer bus types are as follows:
 System Bus (Figure 4): A parallel bus that simultaneously transfers data in 8-, 16-, or 32-
bit channels and is the primary pathway between the CPU and memory.
 Internal Bus: Connects a local device, like internal CPU memory.
 External Bus: Connects peripheral devices to the motherboard, such as scanners or disk
drives.
 Expansion Bus: Allows expansion boards to access the CPU and RAM.
 Front side Bus: Main computer bus that determines data transfer rate speed and is the
primary data transfer path between the CPU, RAM and other motherboard devices.
 Backside Bus: Transfers secondary cache (L2 cache) data at faster speeds, allowing more
efficient CPU operations.
The system bus (Figure 4) consists of three types of buses:
Data Bus: Carries the data that needs processing
Address Bus: Determines where data should be sent
Control Bus: Determines data processing
An address bus is a computer bus architecture used to transfer data between devices that are
identified by the hardware address of the physical memory (the physical address), which is stored
in the form of binary numbers to enable the data bus to access memory storage.
The address bus is used by the CPU or a direct memory access (DMA) enabled device to
locate the physical address to communicate read/write commands. All address buses are read
and written by the CPU or DMA in the form of bits.
An address bus is part of the system bus architecture, which was developed to decrease costs
and enhance modular integration. However, most modern computers use a variety of individual
buses for specific tasks.
An individual computer contains a system bus, which connects the major components of a
computer system and has three main elements, of which the address bus is one, along with the
data bus and control bus.
An address bus is measured by the amount of memory a system can retrieve. A system with a
32-bit address bus can address 4 gibibytes of memory space. Newer computers using a 64-bit
address bus with a supporting operating system can address 16 exbibytes of memory locations,
which is virtually unlimited.
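As a quick arithmetic check of the figures above, an N-bit address bus can select 2^N distinct byte addresses. The short Python snippet below (illustrative only, assuming byte addressing) reproduces the 4 GiB and 16 EiB numbers.

# Addressable memory for a given address-bus width, assuming byte addressing.
for bits in (32, 64):
    addressable = 2 ** bits            # number of distinct byte addresses
    print(bits, "bit bus:", addressable, "bytes")

print(2 ** 32 / 2 ** 30, "GiB")        # 4.0  (the 32-bit case)
print(2 ** 64 / 2 ** 60, "EiB")        # 16.0 (the 64-bit case)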
A control bus is a computer bus that is used by the CPU to communicate with devices that are
contained within the computer. This occurs through physical connections such as cables or
printed circuits. The CPU transmits a variety of control signals to components and devices, which
in turn transmit control signals back to the CPU, using the control bus. One of the main objectives of a bus is
to minimize the lines that are needed for communication. An individual bus permits
communication between devices using one data channel. The control bus is bidirectional and
assists the CPU in synchronizing control signals to internal devices and external components. It
comprises interrupt lines, byte enable lines, read/write signals and status lines.
Although a CPU can have its own distinctive set of control signals, some controls are common
to all CPUs:
Interrupt Request (IRQ) Lines: Hardware lines used by devices to send interrupt signals to the CPU.
It allows the CPU to interrupt its current job to process the present request.
System Clock Control Line: Delivers the internal timing for various devices on the
motherboard and CPU.
Communication between the CPU and control bus is necessary for running a proficient and
functional system. Without the control bus the CPU cannot determine whether the system is
receiving or sending data. It is the control bus that regulates which direction the write and read
information need to go. The control bus contains a control line for write instructions and a control
line for read instructions. When the CPU writes data to the main memory, it transmits a signal to
the write command line. The CPU also sends a signal to the read command line when it needs to
read. This signal permits the CPU to receive or transmit data from main memory.
Figure 4
The size of a bus, called bus width, determines how much data can be transmitted at one time. It can be
compared to the number of lanes on a motorway – the larger the width, the more data can travel
along the bus. For example, a 64-bit bus can transmit 64 bits of data at a time.
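Continuing the lane analogy, peak throughput is roughly the bus width in bytes multiplied by the number of transfers per second. The Python sketch below is a rough illustration only; the 100 MHz clock and the one-transfer-per-cycle assumption are made up for the example, and real buses add protocol overhead.

# Peak throughput of a parallel bus = bytes per transfer * transfers per second.
width_bits = 64
clock_hz = 100_000_000                 # assumed: 100 MHz, one transfer per cycle
bytes_per_transfer = width_bits // 8   # 8 bytes per transfer
peak_bytes_per_s = bytes_per_transfer * clock_hz
print(peak_bytes_per_s / 10**6, "MB/s")   # 800.0 MB/s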

2.2.3. Computer memory


In computing, memory refers to the physical devices used to store programs (sequences of
instructions) or data (e.g. program state information) on a temporary or permanent basis for use in
a computer or other digital electronic device. The term primary memory is used for
information in physical systems which function at high speed (i.e. RAM, Random Access
Memory), as distinct from secondary memory, which consists of physical devices for program and
data storage that are slow to access but offer higher memory capacity. Primary memory stored
on secondary memory is called "virtual memory". An archaic synonym for memory is store.
The term "memory", meaning primary memory is often associated with addressable
semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for
example as primary memory but also other purposes in computers and other digital electronic
devices. There are two main types of semiconductor memory: volatile and non-volatile.
Examples of non-volatile memory are flash memory (sometimes used as secondary, sometimes
primary computer memory) and ROM (Read Only Memory)/PROM/EPROM/EEPROM memory
(used for firmware such as boot programs).
Examples of volatile memory are primary memory (typically dynamic RAM, DRAM), and fast
CPU cache memory (typically static RAM, SRAM, which is fast but energy-consuming and offers
lower memory capacity per unit area than DRAM).
Most semiconductor memory is organized into memory cells or bistable flip-flops, each
storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and
multiple bits per cell (called MLC, Multiple Level Cell). The memory cells are grouped into
words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be
accessed by a binary address of N bits, making it possible to store 2^N words in the
memory. This implies that processor registers normally are not considered as memory, since they
only store one word and do not include an addressing mechanism.
The term storage is often used to describe secondary memory such as tape, magnetic disks and
optical discs (CD-ROM and DVD-ROM).
RAM
The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM).
In SRAM, a bit of data is stored using the state of a flip-flop. This form of RAM is more
expensive to produce, but is generally faster and requires less power than DRAM and, in modern
computers, is often used as cache memory for the CPU.
DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a
memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor
acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or
change it. As this form of memory is less expensive to produce than static RAM, it is the
predominant form of computer memory used in modern computers.
Both static and dynamic RAM are considered volatile, as their state is lost or reset when power
is removed from the system.
ROM
Read-only memory (ROM) is a class of storage medium used in computers and other electronic
devices. Data stored in ROM cannot be modified, or can be modified only slowly or with
difficulty, so it is mainly used to distribute firmware (software that is very closely tied to specific
hardware, and unlikely to need frequent updates).
ROM is non-volatile, containing instructions and routines for the basic operations of the CPU.
The BIOS (basic input/output system) uses ROM to control communication with peripherals.
Magnetic storage
Magnetic storage and magnetic recording are terms from engineering referring to the
storage of data on a magnetized medium.
Magnetic storage uses different patterns of magnetization in a magnetizable material to store
data and is a form of non-volatile memory.
Nowadays the most popular magnetic storage is HDD.
A hard disk drive (HDD) (Figure 5) is a data storage device used for storing and retrieving
digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD
retains its data even when powered off. Data is read in a random-access manner, meaning
individual blocks of data can be stored or retrieved in any order rather than sequentially. An HDD
consists of one or more rigid ("hard") rapidly rotating disks (platters) with magnetic heads
arranged on a moving actuator arm to read and write data to the surfaces.
Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general
purpose computers by the early 1960s. Continuously improved, HDDs have maintained this
position into the modern era of servers and personal computers. More than 200 companies have
produced HDD units, though most current units are manufactured by Seagate, Toshiba and
Western Digital. Worldwide revenues for HDD shipments are expected to reach $33 billion in
2013, a decrease of approximately 12% from $37.8 billion in 2012.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified
in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000
gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is
unavailable to the user because it is used by the file system and the computer operating system,
and possibly inbuilt redundancy for error correction and recovery. Performance is specified by
the time to move the heads to a file (Average Access Time) plus the time it takes for the file to
move under its head (average latency, a function of the physical rotational speed in revolutions
per minute) and the speed at which the file is transmitted (data rate).
The two most common form factors for modern HDDs are 3.5-inch in desktop computers and
2.5-inch in laptops.
As of 2012, the primary competing technology for secondary storage is flash memory in the
form of solid-state drives (SSDs). HDDs are expected to remain the dominant medium for
secondary storage due to predicted continuing advantages in recording capacity and price per unit
of storage; but SSDs are replacing HDDs where speed, power consumption and durability are
more important considerations than price and capacity.
The information is accessed using one or more read/write heads (Figure 5). Information is
written to and read from the storage medium as it moves past devices called read/write heads that
operate very close (often tens of nanometers) over the magnetic surface. The read-and-write head
is used to detect and modify the magnetization of the material immediately under it.
Older hard disk drives used iron(III) oxide as the magnetic material, but current disks use a
cobalt-based alloy.
The read element is typically magneto-resistive while the write element is typically thin-film
inductive.
A new type of magnetic storage, called Magnetoresistive Random Access Memory or MRAM,
is being produced that stores data in magnetic bits based on the tunnel magnetoresistance (TMR)
effect. Its advantages are non-volatility, low power usage, and good shock robustness.
However, with storage density and capacity orders of magnitude smaller than an HDD,
MRAM is useful in applications where moderate amounts of storage with a need for very
frequent updates are required, which flash memory cannot support due to its limited write
endurance.
When the disk is formatted, the operating system organizes the disk surface into circular tracks
and divides each track into sectors.
The OS creates a directory which will record the specific location of files. When you save a
file, the OS moves the read/write head of the drive towards empty sectors, records the data and
writes an entry in the directory.

Figure 5

Later on, when you open that file, the OS looks for its entry in the directory, moves the
read/write heads to the correct sector, and reads the file into RAM.
The OS allows you to create one or more partitions on your HD, in effect dividing it into
several logical parts. Partitions let you install more than one operating system (e.g. Windows and
Linux) on your computer.
You may also decide to split your HD because you want to store the OS and programs on one
partition and your data files on another; this allows you to reinstall the OS when a problem
occurs, without affecting the data partition.
The average time required for the read/write heads to move and find data is called seek time
(or access time) and is measured in milliseconds (ms); most hard disks have a seek time of 7 to 14 ms.
Don't confuse this with the transfer rate – the average speed at which data is transmitted from the disk to
the CPU, measured in megabytes per second.
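Putting these figures together, the time to read a file is roughly seek time + average rotational latency + transfer time. The Python sketch below uses typical assumed values (9 ms seek, 7,200 RPM, 150 MB/s sustained rate, a 100 MB file) purely for illustration, not the specification of any particular drive.

# Rough model of HDD access time: seek + rotational latency + transfer.
seek_ms = 9.0                               # assumed average seek time
rpm = 7200
latency_ms = (60.0 / rpm) / 2 * 1000        # average latency: half a revolution
transfer_mb_per_s = 150.0                   # assumed sustained transfer rate
file_mb = 100.0

transfer_ms = file_mb / transfer_mb_per_s * 1000
total_ms = seek_ms + latency_ms + transfer_ms
print(round(latency_ms, 2), "ms latency,", round(total_ms, 1), "ms total")
# about 4.17 ms of latency and roughly 679.8 ms in total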
SSD
A solid-state drive (SSD) (Figure 6) (also known as a solid-state disk or electronic disk,
though it contains no actual "disk" of any kind, nor motors to "drive" the disks) is a data storage
device using integrated circuit assemblies as memory to store data persistently. SSD technology
uses electronic interfaces compatible with traditional block input/output (I/O) hard disk drives,
thus permitting simple replacement in common applications. Also, new I/O interfaces like SATA
Express are created to keep up with speed advancements in SSD technology.

Figure 6

SSDs have no moving mechanical components. This distinguishes them from traditional
electromechanical magnetic disks such as hard disk drives (HDDs) or floppy disks, which contain
spinning disks and movable read/write heads.
Early SSDs using RAM and similar technology
SSDs had origins in the 1950s with two similar technologies: magnetic core memory and card
capacitor read-only store (CCROS). These auxiliary memory units (as contemporaries called
them) emerged during the era of vacuum-tube computers. But with the introduction of cheaper
drum storage units their use ceased.
Later, in the 1970s and 1980s, SSDs were implemented in semiconductor memory for early
supercomputers of IBM, Amdahl and Cray; however, the prohibitively high price of the built-to-
order SSDs made them quite seldom used. In the late 1970s, General Instruments produced an
electrically alterable ROM (EAROM) which operated somewhat like the later NAND flash
memory. Unfortunately, a ten-year life was not achievable and many companies abandoned the
technology. In 1976 Dataram started selling a product called Bulk Core, which provided up to 2
MB of solid state storage compatible with Digital Equipment Corporation (DEC) and Data
General (DG) computers. In 1978, Texas Memory Systems introduced a 16 kilobyte RAM solid-
state drive to be used by oil companies for seismic data acquisition. The following year,
StorageTek developed the first RAM solid-state drive.
Compared with electromechanical disks, SSDs are typically more resistant to physical shock,
run silently, have lower access time, and less latency. However, while the price of SSDs has
continued to decline in 2012, SSDs are still about 7 to 8 times more expensive per unit of storage
than HDDs.
As of 2010, most SSDs use NAND-based flash memory, which retains data without power.
For applications requiring fast access, but not necessarily data persistence after power loss, SSDs
may be constructed from random-access memory (RAM). Such devices may employ separate
power sources, such as batteries, to maintain data after power loss.
Hybrid drives or solid state hybrid drives (SSHD) combine the features of SSDs and HDDs in
the same unit, containing a large hard disk drive and an SSD cache to improve performance of
frequently accessed data.
Optical storage
In computing and optical disc recording technologies, an optical disc (OD) is a flat, usually
circular disc which encodes binary data (bits) in the form of pits (binary value of 0 or off, due to
lack of reflection when read) and lands (binary value of 1 or on, due to a reflection when read) on
a special material (often aluminium) on one of its flat surfaces. The encoding material sits atop a
thicker substrate (usually polycarbonate) which makes up the bulk of the disc and forms a dust
defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc
surface and extending from the innermost track to the outermost track. The data is stored on the
disc with a laser or stamping machine, and can be accessed when the data path is illuminated with
a laser diode in an optical disc drive which spins the disc at speeds of about 200 to 4,000 RPM or
more, depending on the drive type, disc format, and the distance of the read head from the center
of the disc (inner tracks are read at a higher disc speed). The pits or bumps distort the reflected
laser light, hence most optical discs (except the black discs of the original PlayStation video game
console) characteristically have an iridescent appearance created by the grooves of the reflective
layer. The reverse side of an optical disc usually has a printed label, sometimes made of paper but
often printed or stamped onto the disc itself. This side of the disc contains the actual data and is
typically coated with a transparent material, usually lacquer. Unlike the 3½-inch floppy disk,
most optical discs do not have an integrated protective casing and are therefore susceptible to
data transfer problems due to scratches, fingerprints, and other environmental problems.
Optical discs are usually between 7.6 and 30 cm (3 to 12 in) in diameter, with 12 cm (4.75 in)
being the most common size. A typical disc is about 1.2 mm (0.05 in) thick, while the track pitch
(distance from the center of one track to the center of the next) is typically 1.6 µm.
Some historical facts
The optical disc was invented in 1958. In 1961 and 1969, David Paul Gregg registered patents
for an analog optical disc for video recording.
Later, in the Netherlands in 1969, Philips Research physicists began their first optical
videodisc experiments at Eindhoven. In 1975, Philips and MCA began to work together, and in
1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA
delivered the discs and Philips the players. However, the presentation was a technical and
commercial failure and the Philips/MCA cooperation ended.
In Japan and the U.S., Pioneer succeeded with the videodisc until the advent of the DVD. In
1979, Philips and Sony, in consortium, successfully developed the audio compact disc.
In the mid-1990s, a consortium of manufacturers developed the second generation of the
optical disc, the DVD.
Magnetic disks were limited in how much data they could store economically, so other storage
techniques were sought. Optical recording turned out to allow high-capacity storage devices,
which gave rise to optical discs. The first widespread application of this kind was the Compact
Disc (CD), used in audio systems.
Sony and Philips developed the first generation of CDs in the early 1980s, together with the
complete specifications for these devices. This technology made it practical to represent an
analog signal in digital form: 16-bit samples of the analog signal were taken at a rate of 44,100
samples per second, which satisfies the Nyquist criterion for the audible range up to about
20 kHz. The first version of the CD was designed to hold up to 75 minutes of music, which
required about 650 MB of storage.
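As a rough check of these figures (a back-of-envelope sketch, not part of the CD specification), the raw data rate implied by 16-bit stereo samples at 44,100 samples per second can be computed directly. Note that the ~650 MB figure usually quoted refers to the disc's computer-data capacity, where part of every sector is reserved for extra error correction; in audio mode each sector carries more user bytes, which is how roughly 75 minutes of audio fit.

    # Raw PCM data rate of CD audio: 16-bit samples, two channels, 44,100 Hz
    SAMPLE_RATE = 44_100          # samples per second, per channel
    BYTES_PER_SAMPLE = 2          # 16 bits
    CHANNELS = 2                  # stereo

    bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
    print(bytes_per_second)                    # 176400 bytes/s, about 172 KiB/s
    print(bytes_per_second * 60 * 75 / 1e6)    # roughly 794 MB of raw samples in 75 minutes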
The third generation optical disc was developed in 2000–2006, and was introduced as Blu-ray
Disc. First movies on Blu-ray Discs were released in June 2006. Blu-ray eventually prevailed in a
high definition optical disc format war over a competing format, the HD DVD. A standard Blu-
ray disc can hold about 25 GB of data (up to 400 GB – experimental), a DVD about 4.7 GB, and
a CD about 700 MB. Figure 7 below compares various optical storage media (pit width, laser
wavelength, etc.).
Figure 7.
Flash memory

Flash memory is an electronic non-volatile computer storage medium that can be electrically
erased and reprogrammed.
Flash memory developed from EEPROM (electrically erasable programmable read-only
memory).
There are two main types of flash memory, which are named after the NAND and NOR logic
gates. The internal arrangement of the individual flash memory cells resembles that of the
corresponding gates.
Whereas EPROMs had to be completely erased before being rewritten, NAND type flash
memory may be written and read in blocks (or pages) which are generally much smaller than the
entire device. NOR type flash allows a single machine word (byte) to be written—to an erased
location—or read independently.
The NAND type is primarily used in memory cards, USB flash drives, solid-state drives, and
similar products, for general storage and transfer of data. The NOR type, which allows true
random access and therefore direct code execution, is used as a replacement for the older
EPROM and as an alternative to certain kinds of ROM applications. NOR flash can emulate
ROM primarily at the machine-code level; many digital designs need ROM (or PLA) structures
for other uses, often at significantly higher speeds than (economical) flash memory can achieve.
NAND or NOR flash memory is also often used to store configuration data in numerous digital
products, a task previously made possible by EEPROM or battery-powered static RAM.
Example applications of both types of flash memory include personal computers, PDAs,
digital audio players, digital cameras, mobile phones, synthesizers, video games, scientific
instrumentation, industrial robotics, medical electronics, and so on. In addition to being non-
volatile, flash memory offers fast read access times, though not as fast as static RAM or DRAM.
Its mechanical shock resistance helps explain its popularity over hard
disks in portable devices, as does its high durability, being able to withstand high pressure,
temperature, immersion in water, etc.
Although flash memory is technically a type of EEPROM, the term "EEPROM" is generally
used to refer specifically to non-flash EEPROM which is erasable in small blocks, typically
bytes. Because erase cycles are slow, the large block sizes used in flash memory erasing give it a
significant speed advantage over non-flash EEPROM when writing large amounts of data. As of
2013 flash memory costs much less than byte-programmable EEPROM and has become the
dominant memory type wherever a system requires a significant amount of non-volatile, solid
state storage.
Some historical facts
Flash memory (both NOR and NAND types) was invented by Dr. Fujio Masuoka while
working for Toshiba circa 1980. According to Toshiba, the name "flash" was suggested by
Masuoka's colleague, Shōji Ariizumi, because the erasure process of the memory contents
reminded him of the flash of a camera. Masuoka and colleagues presented the invention at the
IEEE 1984 International Electron Devices Meeting (IEDM) held in San Francisco.
Intel Corporation saw the massive potential of the invention and introduced the first
commercial NOR type flash chip in 1988. NOR-based flash has long erase and write times, but
provides full address and data buses, allowing random access to any memory location. This
makes it a suitable replacement for older read-only memory (ROM) chips, which are used to store
program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-
top boxes. Its endurance may be from as little as 100 erase cycles for an on-chip flash memory, to
a more typical 10,000 or 100,000 erase cycles, up to 1,000,000 erase cycles. NOR-based flash was
the basis of early flash-based removable media; CompactFlash was originally based on it, though
later cards moved to less expensive NAND flash.
NAND flash has reduced erase and write times, and requires less chip area per cell, thus
allowing greater storage density and lower cost per bit than NOR flash; it also has up to ten times
the endurance of NOR flash. However, the I/O interface of NAND flash does not provide a
random-access external address bus. Rather, data must be read on a block-wise basis, with typical
block sizes of hundreds to thousands of bits. This makes NAND flash unsuitable as a drop-in
replacement for program ROM, since most microprocessors and microcontrollers require byte-
level random access. In this regard, NAND flash is similar to other secondary data storage
devices, such as hard disks and optical media, and is thus very suitable for use in mass-storage
devices, such as memory cards. The first NAND-based removable media format was SmartMedia
in 1995, and many others have followed, including MultiMediaCard, Secure Digital, Memory
Stick and xD-Picture Card. A new generation of memory card formats, including RS-MMC,
miniSD and microSD, and Intelligent Stick, feature extremely small form factors. For example,
the microSD card has an area of just over 1.5 cm², with a thickness of less than 1 mm. microSD
capacities range from 64 MB to 64 GB, as of May 2011.
Principles of operation
Flash memory stores information in an array of memory cells made from floating-gate
transistors. In traditional single-level cell (SLC) devices, each cell stores only one bit of
information. Some newer flash memory, known as multi-level cell (MLC) devices, including
triple-level cell (TLC) devices, can store more than one bit per cell by choosing between multiple
levels of electrical charge to apply to the floating gates of its cells.
The floating gate may be conductive (typically polysilicon in most kinds of flash memory) or
non-conductive (as in SONOS flash memory).
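The relationship between charge levels and bits per cell is simple to verify (a minimal sketch using the SLC/MLC/TLC terminology above):

    import math

    # bits per cell = log2(number of distinguishable charge levels)
    for name, levels in [("SLC", 2), ("MLC", 4), ("TLC", 8)]:
        print(name, levels, "levels ->", int(math.log2(levels)), "bit(s) per cell")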
Floating-gate transistor
In flash memory, each memory cell resembles a standard MOSFET, except the transistor has
two gates instead of one. On top is the control gate (CG), as in other MOS transistors, but below
this there is a floating gate (FG) insulated all around by an oxide layer. The FG is interposed
between the CG and the MOSFET channel. Because the FG is electrically isolated by its
insulating layer, any electrons placed on it are trapped there and, under normal conditions, will
not discharge for many years. When the FG holds a charge, it screens (partially cancels) the
electric field from the CG, which modifies the threshold voltage (VT) of the cell (more voltage
has to be applied to the CG to make the channel conduct). For read-out, a voltage intermediate
between the possible threshold voltages is applied to the CG, and the MOSFET channel's
conductivity tested (if it's conducting or insulating), which is influenced by the FG. The current
flow through the MOSFET channel is sensed and forms a binary code, reproducing the stored
data. In a multi-level cell device, which stores more than one bit per cell, the amount of current
flow is sensed (rather than simply its presence or absence), in order to determine more precisely
the level of charge on the FG.
NOR flash
In NOR gate flash, each cell has one end connected directly to ground, and the other end
connected directly to a bit line. This arrangement is called "NOR flash" because it acts like a
NOR gate: when one of the word lines (connected to the cell's CG) is brought high, the
corresponding storage transistor acts to pull the output bit line low. NOR flash continues to be the
technology of choice for embedded applications requiring a discrete non-volatile memory device.
The low read latencies characteristic of NOR devices allow for both direct code execution and
data storage in a single memory product.
Programming (figure 8)
A single-level NOR flash cell in its default state is logically equivalent to a binary “1” value,
because current will flow through the channel when an appropriate voltage is applied to the
control gate, so that the bitline voltage is pulled down. A NOR flash cell can be programmed, or
set to a binary “0” value, by the following procedure: an elevated on-voltage (typically >5 V) is
applied to the CG; the channel is now turned on, so electrons can flow from the source to the
drain (assuming an NMOS transistor); and the source-drain current is sufficiently high to cause
some high-energy electrons to jump through the insulating layer onto the FG, via a process called
hot-electron injection.

Figure 8

Erasing
To erase a NOR flash cell (figure 8) (resetting it to the “1” state), a large voltage of the
opposite polarity is applied between the CG and source terminal, pulling the electrons off the FG
through quantum tunneling. Modern NOR flash memory chips are divided into erase segments
(often called blocks or sectors). The erase operation can only be performed on a block-wise basis;
all the cells in an erase segment must be erased together. Programming of NOR cells, however,
can generally be performed one byte or word at a time.
Figure 9

NAND flash
NAND flash also uses floating-gate transistors, but they are connected in a way that resembles
a NAND gate: several transistors are connected in series, and only if all word lines are pulled
high (above the transistors’ VT) is the bit line pulled low. These groups are then connected via
some additional transistors to a NOR-style bit line array in the same way that single transistors
are linked in NOR flash.
Compared to NOR flash, replacing single transistors with serial-linked groups adds an extra
level of addressing. Whereas NOR flash might address memory by page then word, NAND flash
might address it by page, word and bit. Bit-level addressing suits bit-serial applications (such as
hard disk emulation), which access only 1 bit at a time. Execute-In-Place applications, on the
other hand, require every bit in a word to be accessed simultaneously. This requires word-level
addressing. In any case, both bit and word addressing modes are possible with either NOR or
NAND flash.
To read, first the desired group is selected (in the same way that a single transistor is selected
from a NOR array). Next, most of the word lines are pulled up above the VT of a programmed
bit, while one of them is pulled up to just over the VT of an erased bit. The series group will
conduct (and pull the bit line low) if the selected bit has not been programmed.
Despite the additional transistors, the reduction in ground wires and bit lines allows a denser
layout and greater storage capacity per chip. (The ground wires and bit lines are actually much
wider than the lines in the diagrams.) In addition, NAND flash is typically permitted to contain a
certain number of faults (NOR flash, as is used for a BIOS ROM, is expected to be fault-free).
Manufacturers try to maximize the amount of usable storage by shrinking the transistors below
the size at which they can be made perfectly reliable, stopping at the point where further
reductions would increase the number of faults faster than they would increase the total storage
available.
Writing and erasing
NAND flash uses tunnel injection for writing and tunnel release for erasing. NAND flash
memory forms the core of the removable USB storage devices known as USB flash drives, as
well as most memory card formats and solid-state drives available today.
Memory wear
Another limitation is that flash memory has a finite number of program-erase cycles (typically
written as P/E cycles). Most commercially available flash products are guaranteed to withstand
around 100,000 P/E cycles before the wear begins to deteriorate the integrity of the storage.
Micron Technology and Sun Microsystems announced an SLC NAND flash memory chip rated
for 1,000,000 P/E cycles on 17 December 2008.
The guaranteed cycle count may apply only to block zero (as is the case with TSOP NAND
devices), or to all blocks (as in NOR). This effect is partially offset in some chip firmware or file
system drivers by counting the writes and dynamically remapping blocks in order to spread write
operations between sectors; this technique is called wear leveling. Another approach is to perform
write verification and remapping to spare sectors in case of write failure, a technique called bad
block management (BBM). For portable consumer devices, these wearout management
techniques typically extend the life of the flash memory beyond the life of the device itself, and
some data loss may be acceptable in these applications. For high reliability data storage, however,
it is not advisable to use flash memory that would have to go through a large number of
programming cycles. This limitation is meaningless for ‘read-only’ applications such as thin
clients and routers, which are programmed only once or at most a few times during their
lifetimes.
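The idea behind wear leveling can be illustrated with a toy model (a minimal sketch, not any particular controller's algorithm): keep a per-block erase counter and direct each new write to the least-worn free physical block, remapping the logical address as you go.

    # Toy wear-leveling model: logical blocks are remapped so that erases
    # are spread evenly over the physical blocks (hypothetical, simplified).
    class WearLevelingFlash:
        def __init__(self, num_physical_blocks):
            self.erase_count = [0] * num_physical_blocks   # wear per physical block
            self.mapping = {}                               # logical -> physical
            self.free = set(range(num_physical_blocks))     # unallocated physical blocks

        def write(self, logical_block, data):
            # Pick the least-worn free block for the new copy of the data.
            target = min(self.free, key=lambda b: self.erase_count[b])
            self.free.remove(target)
            old = self.mapping.get(logical_block)
            if old is not None:
                # The old copy is erased and returned to the free pool.
                self.erase_count[old] += 1
                self.free.add(old)
            self.mapping[logical_block] = target
            # (Actual data storage is omitted in this sketch.)

    flash = WearLevelingFlash(num_physical_blocks=8)
    for i in range(100):
        flash.write(logical_block=0, data=b"payload")   # repeated writes to one logical block
    print(flash.erase_count)                             # wear is spread across physical blocks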
In December 2012, Taiwanese engineers from Macronix announced, ahead of the 2012 IEEE
International Electron Devices Meeting, that they had worked out how to improve NAND flash
endurance from 10,000 to 100 million program/erase cycles, using a “self-healing” process in
which the flash chip carries “onboard heaters that could anneal small groups of memory cells.”
The built-in thermal annealing replaces the usual erase cycle with a local high temperature
process that not only erases the stored charge, but also repairs the electron induced stress in the
chip, giving write cycles of at least 100 million.
The result is a chip that can be erased and rewritten over and over, even when it should
theoretically have worn out. As promising as Macronix’s breakthrough could be for the mobile
industry, however, there are no plans for a commercial product to be released in the near future.
Read disturb
The method used to read NAND flash memory can cause nearby cells in the same memory
block to change over time (become programmed). This is known as read disturb. The threshold
number of reads is generally in the hundreds of thousands of reads between intervening erase
operations. If reading continually from one cell, that cell will not fail but rather one of the
surrounding cells on a subsequent read. To avoid the read disturb problem the flash controller
will typically count the total number of reads to a block since the last erase. When the count
exceeds a target limit, the affected block is copied over to a new block, erased, then released to
the block pool. The original block is as good as new after the erase. If the flash controller does
not intervene in time, however, a read disturb error will occur with possible data loss if the errors
are too numerous to correct with ECC.
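A controller's read-disturb bookkeeping can be sketched in the same spirit (the threshold and structure here are hypothetical, purely illustrative): count reads per block and relocate the block's data once the count exceeds a limit.

    # Toy read-disturb management: relocate a block's data after too many reads.
    READ_DISTURB_LIMIT = 100_000     # hypothetical threshold

    read_count = {}                  # physical block -> reads since last erase

    def relocate(block):
        # A real controller would copy valid pages to a fresh block, erase this
        # one and return it to the free pool; here we only reset the counter.
        read_count[block] = 0

    def read_page(block, page):
        read_count[block] = read_count.get(block, 0) + 1
        if read_count[block] > READ_DISTURB_LIMIT:
            relocate(block)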
Low-level access
The low-level interface to flash memory chips differs from those of other memory types such
as DRAM, ROM, and EEPROM, which support bit-alterability (both zero to one and one to zero)
and random access via externally accessible address buses.
NOR memory has an external address bus for reading and programming. For NOR memory,
reading and programming are random-access, and unlocking and erasing are block-wise. For
NAND memory, reading and programming are page-wise, and unlocking and erasing are block-
wise.
NOR memories
Reading from NOR flash is similar to reading from random-access memory, provided the
address and data bus are mapped correctly. Because of this, most microprocessors can use NOR
flash memory as execute in place (XIP) memory, meaning that programs stored in NOR flash can
be executed directly from the NOR flash without needing to be copied into RAM first. NOR flash
may be programmed in a random-access manner similar to reading. Programming changes bits
from a logical one to a zero. Bits that are already zero are left unchanged. Erasure must happen a
block at a time, and resets all the bits in the erased block back to one. Typical block sizes are 64,
128, or 256 KB.
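These rules (programming can only clear bits from 1 to 0; only a whole block can be erased back to all 1s) can be captured in a few lines (a minimal sketch, using one of the typical block sizes above):

    # Sketch of NOR flash program/erase semantics (illustrative only).
    BLOCK_SIZE = 64 * 1024                       # one of the typical block sizes

    block = bytearray([0xFF] * BLOCK_SIZE)       # erased state: all bits are 1

    def program(offset, data):
        # Programming can only clear bits (1 -> 0); it can never set them.
        for i, byte in enumerate(data):
            block[offset + i] &= byte            # AND: already-zero bits stay zero

    def erase():
        # Erasure works only on the whole block and resets every bit to 1.
        for i in range(BLOCK_SIZE):
            block[i] = 0xFF

    program(0, b"\x12\x34")      # 0xFF & 0x12 = 0x12, 0xFF & 0x34 = 0x34
    program(0, b"\x0F")          # 0x12 & 0x0F = 0x02: bits can only be cleared
    erase()                      # back to all 0xFF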
Bad block management is a relatively new feature in NOR chips. In older NOR devices not
supporting bad block management, the software or device driver controlling the memory chip
must correct for blocks that wear out, or the device will cease to work reliably.
The specific commands used to lock, unlock, program, or erase NOR memories differ for each
manufacturer. To avoid needing unique driver software for every device made, special Common
Flash Memory Interface (CFI) commands allow the device to identify itself and its critical
operating parameters.
Besides its use as random-access ROM, NOR flash can also be used as a storage device, by
taking advantage of random-access programming. Some devices offer read-while-write
functionality so that code continues to execute even while a program or erase operation is
occurring in the background. For sequential data writes, NOR flash chips typically have slow
write speeds, compared with NAND flash.
NAND memories
NAND flash architecture was introduced by Toshiba in 1989. These memories are accessed
much like block devices, such as hard disks or memory cards. Each block consists of a number of
pages. The pages are typically 512 or 2,048 or 4,096 bytes in size. Associated with each page are
a few bytes (typically 1/32 of the data size) that can be used for storage of an error correcting
code (ECC) checksum.
Typical block sizes include:
32 pages of 512+16 bytes each for a block size of 16 KB
64 pages of 2,048+64 bytes each for a block size of 128 KB
64 pages of 4,096+128 bytes each for a block size of 256 KB
128 pages of 4,096+128 bytes each for a block size of 512 KB.
While reading and programming are performed on a page basis, erasure can only be performed
on a block basis. The number of operations (NOP) is the number of times a page can be
programmed between erases; for MLC flash this number is always one, while for SLC flash it is
four.
NAND devices also require bad block management by the device driver software, or by a
separate controller chip. SD cards, for example, include controller circuitry to perform bad block
management and wear leveling. When a logical block is accessed by high-level software, it is
mapped to a physical block by the device driver or controller. A number of blocks on the flash
chip may be set aside for storing mapping tables to deal with bad blocks, or the system may
simply check each block at power-up to create a bad block map in RAM. The overall memory
capacity gradually shrinks as more blocks are marked as bad.
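The logical-to-physical remapping described above can be sketched as a simple lookup table that skips blocks marked bad (illustrative only; a real controller also persists this table in flash):

    # Toy logical-to-physical block mapping that avoids factory-marked bad blocks.
    bad_blocks = {3, 17}                      # e.g. read from the bad block markers

    physical_pool = [b for b in range(32) if b not in bad_blocks]
    logical_to_physical = {}

    def map_logical(logical_block):
        if logical_block not in logical_to_physical:
            logical_to_physical[logical_block] = physical_pool.pop(0)
        return logical_to_physical[logical_block]

    print(map_logical(0))   # 0
    print(map_logical(1))   # 1
    print(map_logical(3))   # 2 -- never physical block 3, which is marked bad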
NAND relies on ECC to compensate for bits that may spontaneously fail during normal device
operation. A typical ECC will correct a one-bit error in each 2048 bits (256 bytes) using 22 bits
of ECC code, or a one-bit error in each 4096 bits (512 bytes) using 24 bits of ECC code. If the
ECC cannot correct the error during read, it may still detect the error. When doing erase or
program operations, the device can detect blocks that fail to program or erase and mark them bad.
The data is then written to a different, good block, and the bad block map is updated.
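One way to see where the 22- and 24-bit figures may come from: a Hamming-style code that must identify any single failing bit within the protected area needs enough check bits to encode that bit's position and, in the classic NAND scheme, its complement as well, i.e. roughly 2 × log₂(number of protected bits). A quick check of that rule of thumb (a hedged approximation, not the exact code used by any specific controller):

    import math

    # Approximate check-bit count for single-bit correction over n data bits,
    # assuming the code stores the failing bit's address and its complement.
    def ecc_bits(n_data_bits):
        return 2 * math.ceil(math.log2(n_data_bits))

    print(ecc_bits(2048))   # 22 bits for 256 bytes
    print(ecc_bits(4096))   # 24 bits for 512 bytes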
Most NAND devices are shipped from the factory with some bad blocks. These are typically
marked according to a specified bad block marking strategy. By allowing some bad blocks, the
manufacturers achieve far higher yields than would be possible if all blocks had to be verified
good. This significantly reduces NAND flash costs and only slightly decreases the storage
capacity of the parts.
When executing software from NAND memories, virtual memory strategies are often used:
memory contents must first be paged or copied into memory-mapped RAM and executed there
(leading to the common combination of NAND + RAM). A memory management unit (MMU) in
the system is helpful, but this can also be accomplished with overlays. For this reason, some
systems will use a combination of NOR and NAND memories, where a smaller NOR memory is
used as software ROM and a larger NAND memory is partitioned with a file system for use as a
non-volatile data storage area.
NAND sacrifices the random-access and execute-in-place advantages of NOR. NAND is best
suited to systems requiring high capacity data storage. It offers higher densities, larger capacities,
and lower cost. It has faster erases, sequential writes, and sequential reads.
Distinction between NOR and NAND flash
NOR and NAND flash differ in two important ways:
the connections of the individual memory cells are different
the interface provided for reading and writing the memory is different (NOR allows random-
access for reading, NAND allows only page access)
These two are linked by the design choices made in the development of NAND flash. A goal
of NAND flash development was to reduce the chip area required to implement a given capacity
of flash memory, and thereby to reduce cost per bit and increase maximum chip capacity so that
flash memory could compete with magnetic storage devices like hard disks.
NOR and NAND flash get their names from the structure of the interconnections between
memory cells. In NOR flash, cells are connected in parallel to the bit lines, allowing cells to be
read and programmed individually. The parallel connection of cells resembles the parallel
connection of transistors in a CMOS NOR gate. In NAND flash, cells are connected in series,
resembling a NAND gate. The series connections consume less space than parallel ones, reducing
the cost of NAND flash. It does not, by itself, prevent NAND cells from being read and
programmed individually.
When NOR flash was developed, it was envisioned as a more economical and conveniently
rewritable ROM than contemporary EPROM and EEPROM memories. Thus random-access
reading circuitry was necessary. However, it was expected that NOR flash ROM would be read
much more often than written, so the write circuitry included was fairly slow and could only
erase in a block-wise fashion. On the other hand, applications that use flash as a replacement for
disk drives do not require word-level write addressing, which would only add complexity and
cost unnecessarily.
Because of the series connection and removal of wordline contacts, a large grid of NAND
flash memory cells will occupy perhaps only 60% of the area of equivalent NOR cells (assuming
the same CMOS process resolution, for example, 130 nm, 90 nm, or 65 nm). NAND flash’s
designers realized that the area of a NAND chip, and thus the cost, could be further reduced by
removing the external address and data bus circuitry. Instead, external devices could
communicate with NAND flash via sequential-accessed command and data registers, which
would internally retrieve and output the necessary data. This design choice made random-access
of NAND flash memory impossible, but the goal of NAND flash was to replace hard disks, not to
replace ROMs.
Capacity
Multiple chips are often arrayed to achieve higher capacities for use in consumer electronic
devices such as multimedia players or GPSs. The capacity of flash chips generally follows
Moore’s Law because they are manufactured with many of the same integrated circuits
techniques and equipment.
Consumer flash storage devices typically are advertised with usable sizes expressed as a small
integral power of two (2, 4, 8, etc.) and a designation of megabytes (MB) or gigabytes (GB); e.g.,
8 GB, 16 GB. Device packaging uses “decimal prefixes”, meaning 1,000,000 bytes and
1,000,000,000 bytes, respectively. This includes SSDs marketed as hard drive replacements, in
accordance with traditional hard drives, which also use decimal prefixes. Thus, an SSD marked
as “64 GB” is actually at least 64 × 1,000³ bytes (64,000,000,000 bytes), or often a bit more. Most users will
have slightly less capacity than this available for their files, due to the space taken by file system
metadata.
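The gap between the decimal prefixes on the packaging and the binary units an operating system may report is easy to quantify (a quick calculation, assuming the drive exposes exactly 64 × 1,000³ bytes):

    # "64 GB" on the package, interpreted with decimal and binary prefixes.
    advertised_bytes = 64 * 1000**3          # 64,000,000,000 bytes

    print(advertised_bytes / 1000**3)        # 64.0  GB  (decimal gigabytes)
    print(advertised_bytes / 1024**3)        # ~59.6 GiB (binary gibibytes)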
The flash memory chips inside them are sized in strict binary multiples, but the actual total
capacity of the chips is not usable at the drive interface. It is considerably larger than the
advertised capacity in order to allow for distribution of writes (wear leveling), for sparing, for
error correction codes, and for other metadata needed by the device’s internal firmware.
In March 2006, Samsung announced flash hard drives with a capacity of 4 GB, essentially the
same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung
announced an 8 GB chip produced using a 40 nm manufacturing process. In January 2008,
SanDisk announced availability of their 16 GB MicroSDHC and 32 GB SDHC Plus cards.
More recent flash drives (as of 2013) have much greater capacities, holding 64, 128, and 256
GB.
There are still flash chips manufactured with capacities under or around 1 MB, e.g., for BIOS-
ROMs and embedded applications.
Transfer rates
NAND flash memory cards are much faster at reading than writing so it is the maximum read
speed that is commonly advertised.
As a chip wears out, its erase/program operations slow down considerably, requiring more
retries and bad block remapping. Transferring multiple small files, each smaller than the chip-
specific block size, could lead to a much lower rate. Access latency also influences performance,
but less so than with their hard drive counterpart.
The speed is sometimes quoted in MB/s (megabytes per second), or as a multiple of that of a
legacy single speed CD-ROM, such as 60×, 100× or 150×. Here 1× is equivalent to 150 kB/s. For
example, a 100× memory card gives 150 kB/s × 100 = 15,000 kB/s (about 15 MB/s).
Performance also depends on the quality of memory controllers. Even when the only change to
manufacturing is die-shrink, the absence of an appropriate controller can result in degraded
speeds.
Applications
Serial flash
Serial flash is a small, low-power flash memory that uses a serial interface, typically Serial
Peripheral Interface Bus (SPI), for sequential data access. When incorporated into an embedded
system, serial flash requires fewer wires on the PCB than parallel flash memories, since it
transmits and receives data one bit at a time. This may permit a reduction in board space, power
consumption, and total system cost.
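The bit-serial nature of SPI flash access can be illustrated by building a read command frame: most SPI NOR parts use a READ opcode (commonly 0x03) followed by a 24-bit address, though the exact opcode and address width depend on the device. A minimal sketch, with no real driver API assumed:

    # Sketch: build an SPI NOR "read" command frame and list the bits
    # that would be shifted out one at a time on the data line.
    READ_OPCODE = 0x03                        # common, but device-specific

    def read_frame(address):
        # Opcode followed by a 24-bit address, most significant byte first.
        return bytes([READ_OPCODE,
                      (address >> 16) & 0xFF,
                      (address >> 8) & 0xFF,
                      address & 0xFF])

    frame = read_frame(0x012345)
    print(frame.hex())                        # '03012345'
    bits = [(byte >> i) & 1 for byte in frame for i in range(7, -1, -1)]
    print(bits[:8])                           # first byte, MSB first: 0,0,0,0,0,0,1,1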
Firmware storage
With the increasing speed of modern CPUs, parallel flash devices are often much slower than
the memory bus of the computer they are connected to. By comparison, modern SRAM offers
access times below 10 ns, while DDR2 SDRAM offers access times below 20 ns. Because of this, it is
often desirable to shadow code stored in flash into RAM; that is, the code is copied from flash
into RAM before execution, so that the CPU may access it at full speed. Device firmware may be
stored in a serial flash device, and then copied into SDRAM or SRAM when the device is
powered-up. Using an external serial flash device rather than on-chip flash removes the need for
significant process compromise (a process that is good for high-speed logic is generally not good
for flash and vice-versa). Once it is decided to read the firmware in as one big block it is common
to add compression to allow a smaller flash chip to be used. Typical applications for serial flash
include storing firmware for hard drives, Ethernet controllers, DSL modems, wireless network
devices, etc.
Flash memory as RAM
As of 2012, there have been attempts to use flash memory as the main computer memory, in
place of DRAM. In this role it is slower than conventional DRAM, but it uses up to ten times less
power and is also significantly cheaper.

2.2.4. Input/output devices


In computing, input/output or I/O (Figure 10) is the communication between an information
processing system (such as a computer) and the outside world, possibly a human or another
information processing system. Inputs are the signals or data received by the system, and outputs
are the signals or data sent from it. The term can also be used as part of an action; to "perform
I/O" is to perform an input or output operation. I/O devices are used by a person (or other system)
to communicate with a computer. For instance, a keyboard or a mouse may be an input device for
a computer, while monitors and printers are considered output devices for a computer. Devices
for communication between computers, such as modems and network cards, typically serve for
both input and output.
Note that the designation of a device as either input or output depends on the perspective.
Mice and keyboards take as input physical movement that the human user outputs and convert it
into signals that a computer can understand. The output from these devices is input for the
computer. Similarly, printers and monitors take as input signals that a computer outputs. They
then convert these signals into representations that human users can see or read. For a human user
the process of reading or seeing these representations is receiving input. These interactions
between computers and humans are studied in a field called human–computer interaction.

Figure 10
An input device is any peripheral (piece of computer hardware equipment) used to provide
data and control signals to an information processing system such as a computer or other
information appliance. Examples of input devices include keyboards, mice, scanners, digital
cameras and joysticks.
Many input devices can be classified according to:
 modality of input (e.g. mechanical motion, audio, visual, etc.)
 whether the input is discrete (e.g. key presses) or continuous (e.g. a mouse's position, which,
though digitized into a discrete quantity, is fast enough to be considered continuous)
 the number of degrees of freedom involved (e.g. two-dimensional traditional mice, or
three-dimensional navigators designed for CAD applications)
Pointing devices, which are input devices used to specify a position in space, can further be
classified according to:
Whether the input is
 direct or
 indirect.
With direct input, the input space coincides with the display space, i.e. pointing is done in the
space where visual feedback or the pointer appears. Touchscreens and light pens involve direct
input. Examples involving indirect input include the mouse and trackball.
Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g. with a
mouse that can be lifted and repositioned)
Direct input is almost necessarily absolute, but indirect input may be either absolute or
relative. For example, digitizing graphics tablets that do not have an embedded screen involve
indirect input and sense absolute positions and are often run in an absolute input mode, but they
may also be set up to simulate a relative input mode like that of a touchpad, where the stylus or
puck can be lifted and repositioned.
A keyboard is a typewriter-style device, which uses an arrangement of buttons or keys, to act
as mechanical levers or electronic switches. Following the decline of punch cards and paper tape,
interaction via teleprinter-style keyboards became the main input device for computers.
Figure 11

A keyboard typically has characters engraved or printed on the keys and each press of a key
typically corresponds to a single written symbol. However, to produce some symbols requires
pressing and holding several keys simultaneously or in sequence. While most keyboard keys
produce letters, numbers or signs (characters), other keys or simultaneous key presses can
produce actions or execute computer commands.
Despite the development of alternative input devices, such as the mouse, touchscreen, pen
devices, character recognition and voice recognition, the keyboard remains the most commonly
used and most versatile device used for direct (human) input into computers.
Some historical facts
While typewriters are the definitive ancestor of all key-based text entry devices, the computer
keyboard as a device for electromechanical data entry and communication derives largely from
the utility of two devices: teleprinters (or teletypes) and keypunches. It was through such devices
that modern computer keyboards inherited their layouts.
As early as the 1870s, teleprinter-like devices were used to simultaneously type and transmit
stock market text data from the keyboard across telegraph lines to stock ticker machines to be
immediately copied and displayed onto ticker tape. The teleprinter, in its more contemporary
form, was developed from 1903 to 1910 by American mechanical engineer Charles Krum and his
son Howard, with early contributions by electrical engineer Frank Pearne. Earlier models were
developed separately by individuals such as Royal Earl House and Frederick G. Creed.
Earlier, Herman Hollerith developed the first keypunch devices, which soon evolved to
include keys for text and number entry akin to normal typewriters by the 1930s.
The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint
communication for most of the 20th century, while the keyboard on the keypunch device played a
strong role in data entry and storage for just as long. The development of the earliest computers
incorporated electric typewriter keyboards: the development of the ENIAC computer
incorporated a keypunch device as both the input and paper-based output device, while the
BINAC computer also made use of an electromechanically controlled typewriter for both data
entry onto magnetic tape (instead of paper) and data output.
From the 1940s until the late 1960s, typewriters were the main means of data entry and output
for computing, becoming integrated into what were known as computer terminals. Because of the
limitations of terminals based upon printed text in comparison to the growth in data storage,
processing and transmission, a general move toward video-based computer terminals was
effected by the 1970s, starting with the Datapoint 3300 in 1967.
The keyboard remained the primary, most integrated computer peripheral well into the era of
personal computing until the introduction of the mouse as a consumer device in 1984. By this
time, text-only user interfaces with sparse graphics gave way to comparatively graphics-rich
icons on screen. However, keyboards remain central to human-computer interaction to the
present, even as mobile personal computing devices such as smartphones and tablets adapt the
keyboard as an optional virtual, touchscreen-based means of data entry.
Keyboard types
Standard
Standard "full-travel" alphanumeric keyboards have keys that are on three-quarter inch centers
(0.750 inches, 19.05 mm), and have a key travel of at least 0.150 inches (3.81 mm). Desktop
computer keyboards, such as the 101-key US traditional keyboards or the 104-key Windows
keyboards, include alphabetic characters, punctuation symbols, numbers and a variety of function
keys. The internationally common 102/105 key keyboards have a smaller left shift key and an
additional key with some more symbols between that and the letter to its right (usually Z or Y).
Also the enter key is usually shaped differently. Computer keyboards are similar to electric-
typewriter keyboards but contain additional keys.
Laptop-size
Keyboards on laptops and notebook computers usually have a shorter travel distance for the
keystroke and a reduced set of keys. They may not have a numerical keypad, and the function
keys may be placed in locations that differ from their placement on a standard, full-sized
keyboard.
Handheld
Handheld ergonomic keyboards are designed to be held like a game controller, and can be
used as such, instead of laid out flat on a table surface. Typically handheld keyboards hold all the
alphanumeric keys and symbols that a standard keyboard would have, but they are accessed by
pressing two keys at once: one key acts as a function key, similar to the way the Shift key gives
access to capital letters on a standard keyboard. Handheld keyboards allow the user to move
around a room or lean back in a chair while still being able to type in front of or away from the
computer. Some variations of handheld ergonomic keyboards also include a trackball mouse,
combining mouse movement and typing in one handheld device.
Thumb-sized
Smaller external keyboards have been introduced for devices without a built-in keyboard, such
as PDAs, and smartphones. Small keyboards are also useful where there is a limited workspace.
A chorded keyboard allows users to press several keys simultaneously. For example, the
GKOS keyboard has been designed for small wireless devices. Other two-handed alternatives
more akin to a game controller, such as the AlphaGrip, are also used to input data and text.
A thumb keyboard (thumbboard) is used in some personal digital assistants such as the Palm
Treo and BlackBerry and some Ultra-Mobile PCs such as the OQO.
Numeric keyboards contain only numbers, mathematical symbols for addition, subtraction,
multiplication, and division, a decimal point, and several function keys. They are often used to
facilitate data entry with smaller keyboards that do not have a numeric keypad, commonly those
of laptop computers. These keys are collectively known as a numeric pad, numeric keys, or a
numeric keypad, and it can consist of the following types of keys: Arithmetic operators,
Numerical digits, Arrow keys, Navigation keys, Num Lock and Enter key.
Software
Software keyboards or on-screen keyboards often take the form of computer programs that
display an image of a keyboard on the screen. Another input device such as a mouse or a
touchscreen can be used to operate each virtual key to enter text. Software keyboards have
become very popular in touchscreen enabled cell phones, due to the additional cost and space
requirements of other types of hardware keyboards. Microsoft Windows, Mac OS X, and some
varieties of Linux include on-screen keyboards that can be controlled with the mouse. With a
software keyboard, the mouse pointer is maneuvered over the on-screen keys; clicking a key
enters the corresponding character at the insertion point.
Projection (as by laser)
Projection keyboards project an image of keys, usually with a laser, onto a flat surface. The
device then uses a camera or infrared sensor to "watch" where the user's fingers move, and will
count a key as being pressed when it "sees" the user's finger touch the projected image. Projection
keyboards can simulate a full size keyboard from a very small projector. Because the "keys" are
simply projected images, they cannot be felt when pressed. Users of projected keyboards often
experience increased discomfort in their fingertips because of the lack of "give" when typing. A
flat, non-reflective surface is also required for the keys to be projected. Most projection
keyboards are made for use with PDAs and smartphones due to their small form factor.
Optical keyboard technology
Also known as photo-optical keyboard, light responsive keyboard, photo-electric keyboard
and optical key actuation detection technology.
Optical keyboard technology utilizes light-emitting devices and photo sensors to optically
detect actuated keys. Most commonly the emitters and sensors are located in the perimeter,
mounted on a small PCB. The light is directed from side to side of the keyboard interior and it
can only be blocked by the actuated keys. Most optical keyboards require at least 2 beams (most
commonly vertical beam and horizontal beam) to determine the actuated key. Some optical
keyboards use a special key structure that blocks the light in a certain pattern, allowing only one
beam per row of keys (most commonly horizontal beam).
Layout
Alphabetic
The 104-key PC US English QWERTY keyboard layout evolved from the standard typewriter
keyboard, with extra keys for computing.
The Dvorak Simplified Keyboard layout arranges keys so that frequently used keys are easiest
to press, which reduces muscle fatigue when typing common English.
There are a number of different arrangements of alphabetic, numeric, and punctuation symbols
on keys. These different keyboard layouts arise mainly because different people need easy access
to different symbols, either because they are inputting text in different languages, or because they
need a specialized layout for mathematics, accounting, computer programming, or other
purposes. The United States keyboard layout is used as default in the currently most popular
operating systems: Windows, Mac OS X and Linux. The common QWERTY-based layout was
designed early in the era of mechanical typewriters, so its ergonomics were compromised to
allow for the mechanical limitations of the typewriter.
As the letter-keys were attached to levers that needed to move freely, inventor Christopher
Sholes developed the QWERTY layout to reduce the likelihood of jamming. With the advent of
computers, lever jams are no longer an issue, but nevertheless, QWERTY layouts were adopted
for electronic keyboards because they were widely used. Alternative layouts such as the Dvorak
Simplified Keyboard are not in widespread use.
The QWERTZ layout is widely used in Germany and much of Central Europe. The main
difference between it and QWERTY is that Y and Z are swapped, and most special characters
such as brackets are replaced by diacritical characters.
A different situation arises with "national" layouts. Keyboards designed for typing in
Spanish have some characters shifted, to make room for Ñ ñ; similarly, those for French and
other European languages may have a special key for the character Ç ç. The AZERTY layout is
used in France, Belgium and some neighbouring countries. It differs from the QWERTY layout
in that the A and Q are swapped, the Z and W are swapped, and the M is moved from the right of
N to the right of L (where colon/semicolon is on a US keyboard). The digits 0 to 9 are on the
same keys, but to be typed the shift key must be pressed. The unshifted positions are used for
accented characters.
Keyboards in many parts of Asia may have special keys to switch between the Latin character
set and a completely different typing system. Japanese layout keyboards can be switched between
various Japanese input methods and the Latin alphabet by signaling the operating system's input
interpreter of the change, and some operating systems (namely the Windows family) interpret the
character "\" as "¥" for display purposes without changing the bytecode which has led some
keyboard makers to mark "\" as "¥" or both. In the Arab world, keyboards can often be switched
between Arabic and Latin characters.
In bilingual regions of Canada and in the French-speaking province of Québec, keyboards can
often be switched between an English and a French-language keyboard; while both keyboards
share the same QWERTY alphabetic layout, the French-language keyboard enables the user to
type accented vowels such as "é" or "à" with a single keystroke. Using keyboards for other
languages leads to a conflict: the image on the key does not correspond to the character. In such
cases, each new language may require an additional label on the keys, because the standard
keyboard layouts do not share even similar characters of different languages (see the example in
the figure above).
Layout changing software
The character code produced by any key press is determined by the keyboard driver software.
A key press generates a scancode which is interpreted as an alphanumeric character or control
function. Depending on operating systems, various application programs are available to create,
add and switch among keyboard layouts. Many programs are available, some of which are
language specific.
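The role of the layout can be illustrated with a toy mapping (a minimal sketch; the scancode value is given only for illustration): the driver translates the same physical key into different characters depending on the active layout.

    # Toy illustration: the same physical key (scancode) maps to different
    # characters depending on the layout selected in the keyboard driver.
    SCANCODE_Q = 0x10     # illustrative scancode for the key left of W on a US keyboard

    layouts = {
        "QWERTY (US)": {SCANCODE_Q: "q"},
        "AZERTY (FR)": {SCANCODE_Q: "a"},   # the same key produces a different letter
    }

    for name, table in layouts.items():
        print(name, "->", table[SCANCODE_Q])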
The arrangement of symbols of specific language can be customized. An existing keyboard
layout can be edited, and a new layout can be created using this type of software.
For example, Ukelele for Mac OS X, the Microsoft Keyboard Layout Creator for Windows, and
the open-source Avro Keyboard provide the ability to customize the keyboard layout as desired.
Control processor
Computer keyboards include control circuitry to convert key presses into key codes (usually
scancodes) that the computer's electronics can understand. The key switches are connected via the
printed circuit board in an electrical X-Y matrix where a voltage is provided sequentially to the Y
lines and, when a key is depressed, detected sequentially by scanning the X lines.
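The scanning described above can be modeled with a small loop: drive one Y line at a time and sample every X line; a closed contact at an intersection means the key there is down (a simplified sketch, with arbitrary matrix dimensions; real keyboards do this in the controller's firmware):

    # Toy model of X-Y matrix scanning.
    NUM_Y = 8     # driven lines (columns)
    NUM_X = 16    # sensed lines (rows)

    pressed = {(2, 5), (7, 0)}   # keys currently held down, as (x, y) intersections

    def scan_matrix():
        down = []
        for y in range(NUM_Y):              # energize one Y line at a time
            for x in range(NUM_X):          # then sample every X line
                if (x, y) in pressed:       # a pressed key closes the intersection
                    down.append((x, y))
        return down

    print(scan_matrix())   # [(7, 0), (2, 5)]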
The first computer keyboards were for mainframe computer data terminals and used discrete
electronic parts. The first keyboard microprocessor was introduced in 1972 by General
Instruments, but keyboards have been using the single-chip 8048 microcontroller variant since it
became available in 1978. The keyboard switch matrix is wired to its inputs, it converts the
keystrokes to key codes, and, for a detached keyboard, sends the codes down a serial cable (the
keyboard cord) to the main processor on the computer motherboard. This serial keyboard cable
communication is only bi-directional to the extent that the computer's electronics controls the
illumination of the caps lock, num lock and scroll lock lights.
One test for whether the computer has crashed is pressing the caps lock key. The keyboard
sends the key code to the keyboard driver running in the main computer; if the main computer is
operating, it commands the light to turn on. All the other indicator lights work in a similar way.
The keyboard driver also tracks the Shift, alt and control state of the keyboard.
Some lower-quality keyboards have multiple or false key entries due to inadequate electrical
designs. These are caused by inadequate keyswitch "debouncing" or by a keyswitch matrix layout
that does not allow multiple keys to be depressed at the same time; both circumstances are
explained below.
When pressing a keyboard key, the key contacts may "bounce" against each other for several
milliseconds before they settle into firm contact. When released, they bounce some more until
they revert to the uncontacted state. If the computer were watching for each pulse, it would see
many keystrokes for what the user thought was just one. To resolve this problem, the processor in
a keyboard (or computer) "debounces" the keystrokes, by aggregating them across time to
produce one "confirmed" keystroke.
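A common way to debounce in firmware is to accept a state change only after the raw contact reading has been stable for several consecutive samples; a minimal sketch (the sample count is an arbitrary illustrative value, not a standard):

    # Toy debouncer: report a key state change only after N identical samples.
    STABLE_SAMPLES = 5            # illustrative; real firmware tunes this to a few ms

    def debounce(raw_samples):
        stable_state = raw_samples[0]
        run = 0
        events = []
        for sample in raw_samples[1:]:
            run = run + 1 if sample != stable_state else 0
            if run >= STABLE_SAMPLES:          # new level has persisted long enough
                stable_state = sample
                run = 0
                events.append(stable_state)
        return events

    # Bouncy press: the contact chatters before settling closed (1), then opens.
    samples = [0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    print(debounce(samples))   # [1, 0] -- one confirmed press, one confirmed release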
Some low-quality keyboards also suffer problems with rollover (that is, when multiple keys are
pressed at the same time, or when keys are pressed so fast that several keys are down within the
same few milliseconds). Early "solid-state" keyswitch keyboards did not have this problem because
the keyswitches are electrically isolated from each other, and early "direct-contact" keyswitch
keyboards avoided this problem by having isolation diodes for every keyswitch. These early
keyboards had "n-key" rollover, which means any number of keys can be depressed and the
keyboard will still recognize the next key depressed. But when three keys are pressed (electrically
closed) at the same time in a "direct contact" keyswitch matrix that doesn't have isolation diodes,
the keyboard electronics can see a fourth "phantom" key which is the intersection of the X and Y
lines of the three keys. Some types of keyboard circuitry will register a maximum number of keys
at one time. "Three-key" rollover, also called "phantom key blocking" or "phantom key lockout",
will only register three keys and ignore all others until one of the three keys is lifted. This is
undesirable, especially for fast typing (hitting new keys before the fingers can release previous
keys), and games (designed for multiple key presses).
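The phantom-key effect can be reproduced in a toy matrix model: with three keys held at three corners of a rectangle in the X-Y grid and no isolation diodes, current can sneak through the pressed keys and make the fourth corner read as pressed too (a simplified sketch of the electrical effect, not a controller algorithm):

    # Ghosting in a diode-less key matrix: three corners of a rectangle
    # held down make the fourth corner read as pressed as well.
    def with_phantoms(pressed):
        seen = set(pressed)
        for (x1, y1) in pressed:
            for (x2, y2) in pressed:
                # If keys (x1,y1), (x2,y2) and (x1,y2) are all down, the scan
                # also sees the fourth corner (x2,y1) of the rectangle.
                if x1 != x2 and y1 != y2 and (x1, y2) in pressed:
                    seen.add((x2, y1))
        return seen

    held = {(1, 1), (1, 4), (3, 1)}          # three real key presses
    print(sorted(with_phantoms(held)))       # includes the phantom key (3, 4)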
As direct-contact membrane keyboards became popular, the available rollover of keys was
optimized by analyzing the most common key sequences and placing these keys so that they do
not potentially produce phantom keys in the electrical key matrix (for example, simply placing
three or four keys that might be depressed simultaneously on the same X or same Y line, so that a
phantom key intersection/short cannot happen), so that blocking a third key usually isn't a
problem. But lower-quality keyboard designs and unknowledgeable engineers may not know
these tricks, and it can still be a problem in games due to wildly different or configurable layouts
in different games.
Key types
Alphanumeric
A Hebrew keyboard lets the user type in both Hebrew and the Latin alphabet.
Alphabetical, numeric, and punctuation keys are used in the same fashion as a typewriter
keyboard to enter their respective symbol into a word processing program, text editor, data
spreadsheet, or other program. Many of these keys will produce different symbols when modifier
keys or shift keys are pressed. The alphabetic characters become uppercase when the shift key or
Caps Lock key is depressed. The numeric characters become symbols or punctuation marks when
the shift key is depressed. The alphabetical, numeric, and punctuation keys can also have other
functions when they are pressed at the same time as some modifier keys.
The Space bar is a horizontal bar in the lowermost row, which is significantly wider than other
keys. Like the alphanumeric characters, it is also descended from the mechanical typewriter. Its
main purpose is to enter the space between words during typing. It is large enough so that a
thumb from either hand can use it easily. Depending on the operating system, when the space bar
is used with a modifier key such as the control key, it may have functions such as resizing or
closing the current window, half-spacing, or backspacing. In computer games and other
applications the key has myriad uses in addition to its normal purpose in typing, such as jumping
and adding marks to check boxes. In certain programs for playback of digital video, the space bar
is used for pausing and resuming the playback.
Modifier keys
Modifier keys are special keys that modify the normal action of another key, when the two are
pressed in combination. For example, <Alt> + <F4> in Microsoft Windows will close the
program in an active window. In contrast, pressing just <F4> will probably do nothing, unless
assigned a specific function in a particular program. By themselves, modifier keys usually do
nothing.
The most widely used modifier keys include the Control key, Shift key and the Alt key. The
AltGr key is used to access additional symbols for keys that have three symbols printed on them.
On Macintosh and Apple keyboards, the modifier keys are the Option key and the Command key.
On MIT computer keyboards, the Meta key is used as a modifier, and for Windows
keyboards, there is a Windows key. Compact keyboard layouts often use a Fn key. "Dead keys"
allow placement of a diacritic mark, such as an accent, on the following letter (e.g., the Compose
key).
The Enter/Return key typically causes a command line, window form or dialog box to operate
its default function, which is typically to finish an "entry" and begin the desired process. In word
processing applications, pressing the enter key ends a paragraph and starts a new one.
Cursor keys
Navigation keys or cursor keys include a variety of keys which move the cursor to different
positions on the screen. Arrow keys are programmed to move the cursor in a specified direction;
page scroll keys, such as the Page Up and Page Down keys, scroll the page up and down. The
Home key is used to return the cursor to the beginning of the line where the cursor is located; the
End key puts the cursor at the end of the line. The Tab key advances the cursor to the next tab
stop.
The Insert key is mainly used to switch between overtype mode, in which the cursor
overwrites any text that is present on and after its current location, and insert mode, where the
cursor inserts a character at its current position, forcing all characters past it one position further.
The Delete key discards the character ahead of the cursor's position, moving all following
characters one position "back" towards the freed place. On many notebook computer keyboards
the key labeled Delete (sometimes Delete and Backspace are printed on the same key) serves the
same purpose as a Backspace key. The Backspace key deletes the preceding character.
Lock keys lock part of a keyboard, depending on the settings selected. The lock keys are
scattered around the keyboard. Most styles of keyboards have three LEDs indicating which locks
are enabled, in the upper right corner above the numeric pad. The lock keys include Scroll lock,
Num lock (which allows the use of the numeric keypad), and Caps lock.
System commands
The SysRq and Print screen commands often share the same key. SysRq was used in earlier
computers as a "panic" button to recover from crashes (and it is still used in this sense to some
extent by the Linux kernel; see Magic SysRq key). The Print screen command originally captured
the entire screen and sent it to the printer; today it usually places a screenshot in the
clipboard. The Break key/Pause key no longer has a well-defined purpose. Its origins go back to
teleprinter users, who wanted a key that would temporarily interrupt the communications line.
The Break key can be used by software in several different ways, such as to switch between
multiple login sessions, to terminate a program, or to interrupt a modem connection.
In programming, especially old DOS-style BASIC, Pascal and C, Break is used (in
conjunction with Ctrl) to stop program execution. In addition to this, Linux and variants, as well
as many DOS programs, treat this combination the same as Ctrl+C. On modern keyboards, the
break key is usually labeled Pause/Break. In most Windows environments, the key combination
Windows key+Pause brings up the system properties.
The Escape key (often abbreviated Esc) is used to initiate an escape sequence. As most
computer users no longer are concerned with the details of controlling their computer's
peripherals, the task for which the escape sequences were originally designed, the escape key was
appropriated by application programmers, most often to "escape" or back out of a mistaken
command. This use continues today in Microsoft Windows's use of escape as a shortcut in dialog
boxes for No, Quit, Exit, Cancel, or Abort.
A common application today of the Esc key is as a shortcut key for the Stop button in many
web browsers. On machines running Microsoft Windows, prior to the implementation of the
Windows key on keyboards, the typical practice for invoking the "start" button was to hold down
the control key and press escape. This process still works in Windows 2000, XP, Vista, 7, and 8.
There are two Enter keys: one among the alphanumeric keys and the other on the numeric
keypad. Pressing Enter confirms the current entry or command so that the computer carries it
out. Another function is to start a new paragraph: when one has finished typing a paragraph and
wants a second one, pressing Enter inserts the paragraph break.
The Shift key takes its name from the word "shift", meaning change. Pressing Shift together
with a letter key produces the capital letter. Shift is also used to type the upper symbol on keys
that carry two characters: for example, on the key where the apostrophe is accompanied by a
quotation mark on top, pressing the key alone produces the apostrophe, while pressing it together
with Shift produces the quotation mark.
The Menu key or Application key is a key found on Windows-oriented computer keyboards. It
is used to launch a context menu with the keyboard rather than with the usual right mouse button.
The key's symbol is usually a small icon depicting a cursor hovering above a menu. On some
Samsung keyboards the cursor in the icon is not present, showing the menu only. This key was
created at the same time as the Windows key. This key is normally used when the right mouse
button is not present on the mouse. Some Windows public terminals do not have a Menu key on
their keyboard to prevent users from right-clicking (however, in many Windows applications, a
similar functionality can be invoked with the Shift+F10 keyboard shortcut).
Miscellaneous
Multimedia buttons on some keyboards give quick access to the Internet or control the volume
of the speakers.
Many, but not all, computer keyboards have a numeric keypad to the right of the alphabetic
keyboard which contains numbers, basic mathematical symbols (e.g., addition, subtraction, etc.),
and a few function keys. On Japanese/Korean keyboards, there may be Language input keys.
Some keyboards have power management keys (e.g., power key, sleep key and wake key);
Internet keys to access a web browser or e-mail; and/or multimedia keys, such as volume
controls or keys that can be programmed by the user to launch specified software or a command,
such as launching a game or minimizing all windows.
Numeric keys
The numeric keys are used to type numbers when calculating. Symbols used in calculations,
such as the addition, subtraction, multiplication and division signs, are also located in this group
of keys. The Enter key in this group acts as the equals sign.
Multiple layouts
It is possible to install multiple keyboard layouts within an operating system and switch
between them, either through features implemented within the OS, or through an external
application. Microsoft Windows, Linux, and Mac provide support to add keyboard layouts and
choose from them.
Connection types
There are several ways of connecting a keyboard to a system unit (more precisely, to its
keyboard controller) using cables, including the standard AT connector commonly found on
motherboards, which was eventually replaced by the PS/2 and the USB connection. Prior to the
iMac line of systems, Apple used the proprietary Apple Desktop Bus for its keyboard connector.
Wireless keyboards have become popular for their increased user freedom. A wireless
keyboard often includes a required combination transmitter and receiver unit that attaches to the
computer's keyboard port. The wireless aspect is achieved either by radio frequency (RF) or by
infrared (IR) signals sent and received from both the keyboard and the unit attached to the
computer. A wireless keyboard may use an industry-standard RF protocol such as Bluetooth. With
Bluetooth, the transceiver may be built into the computer. However, a wireless keyboard needs
batteries to work and may pose a security problem due to the risk of data "eavesdropping" by
hackers. Wireless solar keyboards charge their batteries from small solar panels using sunlight or
standard artificial lighting. An early example of a consumer wireless keyboard is that of the
Olivetti Envision.
Alternative text-entering methods
An on-screen keyboard controlled with the mouse can be used by users with limited mobility.
Optical character recognition (OCR) is preferable to rekeying for converting existing text that
is already written down but not in machine-readable format (for example, a Linotype-composed
book from the 1940s). In other words, to convert the text from an image to editable text (that is, a
string of character codes), a person could re-type it, or a computer could look at the image and
deduce what each character is. OCR technology has already reached an impressive state (for
example, Google Book Search) and promises more for the future.
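As a rough illustration of this process, the short Python sketch below hands a scanned page image to the open-source Tesseract engine; the pytesseract and Pillow packages and the file name page.png are assumptions made for the example, not part of the lecture.

# A minimal OCR sketch, assuming the Tesseract engine plus the pytesseract
# and Pillow packages are installed; "page.png" is a hypothetical scanned image.
from PIL import Image
import pytesseract

image = Image.open("page.png")               # load the scanned page
text = pytesseract.image_to_string(image)    # let the engine deduce each character
print(text)                                  # the recovered, editable string of characters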
Speech recognition converts speech into machine-readable text (that is, a string of character
codes). This technology has also reached an advanced state and is implemented in various
software products. For certain uses (e.g., transcription of medical or legal dictation; journalism;
writing essays or novels) it is starting to replace the keyboard; however, it does not threaten to
replace keyboards entirely anytime soon. It can, however, interpret commands (for example,
"close window" or "undo word") in addition to text. Therefore, it has theoretical potential to
replace keyboards entirely (whereas OCR replaces them only for a certain kind of task).
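A minimal speech-to-text sketch in Python follows, assuming the third-party SpeechRecognition package and a hypothetical recording dictation.wav; recognize_google sends the audio to an online recognition service.

# A hedged speech recognition sketch; the package and file name are assumptions.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("dictation.wav") as source:
    audio = recognizer.record(source)          # read the whole recording into memory
text = recognizer.recognize_google(audio)      # convert speech to a string of characters
print(text)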
Pointing devices can be used to enter text or characters in contexts where using a physical
keyboard would be inappropriate or impossible. These accessories typically present characters on
a display, in a layout that provides fast access to the more frequently used characters or character
combinations. Popular examples of this kind of input are Graffiti, Dasher and on-screen virtual
keyboards.
Physical injury
The use of any keyboard may cause serious injury (that is, carpal tunnel syndrome or other
repetitive strain injury) to hands, wrists, arms, neck or back (figure 30).
The risks of injuries can be reduced by taking frequent short breaks to get up and walk around
a couple of times every hour. As well, users should vary tasks throughout the day, to avoid
overuse of the hands and wrists. When inputting at the keyboard, a person should keep the
shoulders relaxed with the elbows at the side, with the keyboard and mouse positioned so that
reaching is not necessary. The chair height and keyboard tray should be adjusted so that the
wrists are straight, and the wrists should not be rested on sharp table edges. Wrist or palm rests
should not be used while typing.
Some adaptive technology ranging from special keyboards, mouse replacements and pen tablet
interfaces to speech recognition software can reduce the risk of injury. Pause software reminds
the user to pause frequently. Switching to a much more ergonomic mouse, such as a vertical
mouse or joystick mouse may provide relief. Switching from using a mouse to using a stylus pen
with graphic tablet or a trackpad can lessen the repetitive strain on the arms and hands.
Figure 12
Pointing devices
Keyboard devices are the most commonly used input devices today. A pointing device is any
human interface device that allows a user to input spatial data to a computer. In the case of mice
and touchpads, this is usually achieved by detecting movement across a physical surface. Analog
devices, such as 3D mice, joysticks, or pointing sticks, function by reporting their angle of
deflection. Movements of the pointing device are echoed on the screen by movements of the
pointer, creating a simple, intuitive way to navigate a computer's GUI.
A computer mouse
A mouse is an input device that functions by detecting two-dimensional motion relative to its
supporting surface. Physically, a mouse consists of an object held under one of the user's hands,
with one or more buttons. The mouse's motion typically translates into the motion of a pointer on
a display, which allows for fine control of a graphical user interface.
Some historical facts
The trackball, a related pointing device, was invented as part of a post-World War II-era radar
plotting system called Comprehensive Display System (CDS) by Ralph Benjamin when working
for the British Royal Navy Scientific Service in 1946. Benjamin's project used analog computers
to calculate the future position of target aircraft based on several initial input points provided by a
user with a joystick. Benjamin felt that a more elegant input device was needed and invented a
ball tracker called a roller ball for this purpose. Although it was patented in 1947, only a
prototype was built, and the device was kept as a military secret.
Another early trackball was built by Tom Cranston, Fred Longstaff and Kenyon Taylor as part
of the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in
1952. DATAR was similar in concept to Benjamin's display, but used a digital computer to
calculate tracks, and sent the resulting data to other ships in a task force using pulse-code
modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was
not patented, as it was a secret military project as well.
Independently, Douglas Engelbart at the Stanford Research Institute (now SRI International)
invented the first mouse prototype in 1963, with the assistance of his lead engineer Bill English.
They christened the device the mouse as early models had a cord attached to the rear part of the
device looking like a tail and generally resembling the common mouse. Engelbart never received
any royalties for it, as his employer SRI held the patent, which ran out before it became widely
used in personal computers. The invention of the mouse was just a small part of Engelbart's much
larger project, aimed at augmenting human intellect via the Augmentation Research Center.
Several other experimental pointing-devices developed for Engelbart's oN-Line System (NLS)
exploited different body movements – for example, head-mounted devices attached to the chin or
nose – but ultimately the mouse won out because of its speed and convenience. The first mouse, a
bulky device (pictured) used two wheels perpendicular to each other: the rotation of each wheel
translated into motion along one axis.
Filed in 1967, Engelbart received patent US 3,541,541 on November 17, 1970 for an "X-Y
Position Indicator for a Display System". At the time, Engelbart envisaged that users would hold
the mouse continuously in one hand and type on a five-key chord keyset with the other. The
concept was preceded in the 19th century by the telautograph, which also anticipated the fax
machine.
Specific uses
Other uses of the mouse's input occur commonly in special application-domains. In interactive
three-dimensional graphics, the mouse's motion often translates directly into changes in the
virtual camera's orientation. For example, in the first-person shooter genre of games (see below),
players usually employ the mouse to control the direction in which the virtual player's "head"
faces: moving the mouse up will cause the player to look up, revealing the view above the
player's head. A related function makes an image of an object rotate, so that all sides can be
examined.
When mice have more than one button, software may assign different functions to each button.
Often, the primary (leftmost in a right-handed configuration) button on the mouse will select
items, and the secondary (rightmost in a right-handed configuration) button will bring up a menu of alternative
actions applicable to that item. For example, on platforms with more than one button, the Mozilla
web browser will follow a link in response to a primary button click, will bring up a contextual
menu of alternative actions for that link in response to a secondary-button click, and will often
open the link in a new tab or window in response to a click with the tertiary (middle) mouse
button.
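The same idea can be sketched in a few lines of Python using the standard tkinter toolkit; the bindings below merely illustrate assigning different functions to the primary and secondary buttons and are not the code of any particular browser.

# A minimal sketch of button-specific functions with tkinter.
import tkinter as tk

def select_item(event):
    print("primary button: select the item at", event.x, event.y)

def show_context_menu(event):
    print("secondary button: show a menu of alternative actions")

root = tk.Tk()
root.bind("<Button-1>", select_item)        # primary (left) button
root.bind("<Button-3>", show_context_menu)  # secondary (right) button on most platforms
root.mainloop()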
Optical mice
Optical mice make use of one or more light-emitting diodes (LEDs) and an imaging array of
photodiodes to detect movement relative to the underlying surface, rather than internal moving
parts as does a mechanical mouse. A laser mouse is an optical mouse that uses coherent (laser)
light.
The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the
modern optical mouse works on most opaque surfaces; it is usually unable to detect movement on
specular surfaces like glass. Laser diodes are also used for better resolution and precision. Battery
powered, wireless optical mice flash the LED intermittently to save power, and only glow
steadily when movement is detected.
Inertial and gyroscopic mice
Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning
fork or other accelerometer (US Patent 4787051, published in 1988) to detect rotary movement
for every axis supported. The most common models (manufactured by Logitech and Gyration)
work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user
requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm".
Usually cordless, they often have a switch to deactivate the movement circuitry between use,
allowing the user freedom of movement without affecting the cursor position. A patent for an
inertial mouse claims that such mice consume less power than optically based mice, and offer
increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless
keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a
flat work surface, potentially alleviating some types of repetitive motion injuries related to
workstation posture.
3D mice
Also known as bats, flying mice, or wands, these devices generally function through
ultrasound and provide at least three degrees of freedom. Probably the best known example
would be 3Dconnexion/Logitech's SpaceMouse from the early 1990s. In the late 1990s Kantek
introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which
enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base
station. Despite a certain appeal, it was finally discontinued because it did not provide sufficient
resolution.
A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing
device (that is, it can determine its orientation and direction of movement), Wii Remote can also
detect its spatial position by comparing the distance and position of the lights from the IR emitter
using its integrated IR camera (since the nunchuk accessory lacks a camera, it can only tell its
current heading and orientation). The obvious drawback to this approach is that it can only
produce spatial coordinates while its camera can see the sensor bar.
A mouse-related controller called the SpaceBall has a ball placed above the work surface that
can easily be gripped. With spring-loaded centering, it sends both translational as well as angular
displacements on all six axes, in both directions for each. In November 2010 a German company
called Axsotic introduced a new concept of 3D mouse called the 3D Spheric Mouse. This new
concept of a true six degree-of-freedom input device uses a ball to rotate in 3 axes without any
limitations.
Tactile mice
In 2000, Logitech introduced a "tactile mouse" that contained a small actuator to make the
mouse vibrate. Such a mouse can augment user-interfaces with haptic feedback, such as giving
feedback when crossing a window boundary. To surf by touch requires the user to be able to feel
depth or hardness; this ability was realized with the first electrorheological tactile mice but never
marketed.
Ergonomic mice
As the name suggests, this type of mouse is intended to provide optimum comfort and avoid
injuries such as carpal tunnel syndrome, arthritis and other repetitive strain injuries. It is designed
to fit natural hand position and movements, to reduce discomfort.
Gaming mice
These mice are specifically designed for use in computer games. They typically employ a wide
array of controls and buttons and have designs that differ radically from traditional mice. It is also
common for gaming mice, especially those designed for use in real-time strategy games such as
StarCraft or League of Legends, to have a relatively high sensitivity, measured in dots per inch
(DPI). Some advanced mice from gaming manufacturers also allow users to customize the weight
of the mouse by adding or subtracting weights to allow for easier control. Ergonomic quality is
also an important factor in gaming mice, as extended gameplay times may render further use of
the mouse to be uncomfortable. Gaming mice are held by gamers in three styles of grip:
 Palm Grip: the hand rests on the mouse, with extended fingers.
 Claw Grip: palm rests on the mouse, bent fingers.
 Finger-Tip Grip: bent fingers, palm doesn't touch the mouse.
Composite devices
Input devices, such as buttons and joysticks, can be combined on a single physical device that
could be thought of as a composite device. Many gaming devices have controllers like this.
Technically mice are composite devices, as they both track movement and provide buttons for
clicking, but composite devices are generally considered to have more than two different forms of
input. Examples: game controller, gamepad (or joypad), paddle (game controller), jog dial/shuttle
(or knob).
Imaging and video input devices
Video input devices are used to digitize images or video from the outside world into the
computer. The information can be stored in a multitude of formats depending on the user's
requirement. Examples: digital camera, digital camcorder, portable media player, webcam,
Microsoft Kinect Sensor, image scanner, fingerprint scanner, barcode reader, 3D scanner, laser
rangefinder, eye gaze tracker.
Medical Imaging: computed tomography, magnetic resonance imaging, positron emission
tomography, medical ultrasonography
Audio input devices
Like video devices, audio devices are used to either capture or create sound. In
some cases, an audio output device can be used as an input device, in order to capture produced
sound. Examples: microphone, MIDI keyboard or other digital musical instrument.
Output devices
An output device is any piece of computer hardware equipment used to communicate the
results of data processing carried out by an information processing system (such as a computer)
which converts the electronically generated information into human-readable form.
Display devices
A display device is an output device that visually conveys text, graphics, and video
information. Information shown on a display device is called soft copy because the information
exists electronically and is displayed for a temporary period of time. Display devices include
CRT monitors, LCD monitors and displays, gas plasma monitors, and televisions.
Tactile
Haptic technology, or haptics, is a tactile feedback technology which takes advantage of the
sense of touch by applying forces, vibrations, or motions to the user. Several printers and wax jet
printers have the capability of producing raised line drawings. There are also handheld devices
that use an array of vibrating pins to present a tactile outline of the characters or text under the
viewing window of the device.
Audio
Speech output systems can be used to read screen text to computer users. Special software
programs called screen readers attempt to identify and interpret what is being displayed on the
screen, and speech synthesizers convert that data into vocalized sound.
Examples: speakers, headphones, screen (monitor), printer, voice output communication aid,
automotive navigation system, braille embosser, projector, plotter, television, radio.
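As a small hedged example of speech output, the snippet below vocalizes a line of text with the third-party pyttsx3 package (an assumption; any speech synthesizer interface would serve).

# A minimal text-to-speech sketch, assuming pyttsx3 is installed.
import pyttsx3

engine = pyttsx3.init()                             # pick a platform speech synthesizer
engine.say("The document has finished printing.")   # queue the text to be vocalized
engine.runAndWait()                                 # speak and wait until finished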
3D Printing
Additive manufacturing or 3D printing is a process of making a three-dimensional solid object
of virtually any shape from a digital model. 3D printing is achieved using an additive process,
where successive layers of material are laid down in different shapes. 3D printing is also
considered distinct from traditional machining techniques, which mostly rely on the removal of
material by methods such as cutting or drilling (subtractive processes).
A 3D printer is a limited type of industrial robot that is capable of carrying out an additive
process under computer control.
While 3D printing technology has been around since the 1980s, it was not until the early 2010s
that the printers became widely available commercially. The first working 3D printer was created
in 1984 by Chuck Hull of 3D Systems Corp. Since the start of the 21st century there has been a
large growth in the sales of these machines, and their price has dropped substantially. According
to Wohlers Associates, a consultancy, the market for 3D printers and services was worth $2.2
billion worldwide in 2012, up 29% from 2011.
The 3D printing technology is used for both prototyping and distributed manufacturing with
applications in architecture, construction (AEC), industrial design, automotive, aerospace,
military, engineering, civil engineering, dental and medical industries, biotech (human tissue
replacement), fashion, footwear, jewelry, eyewear, education, geographic information systems,
food, and many other fields. One study has found that open source 3D printing could become a
mass market item because domestic 3D printers can offset their capital costs by enabling
consumers to avoid costs associated with purchasing common household objects.
Modeling
Additive manufacturing takes virtual blueprints from computer aided design (CAD) or
animation modeling software and "slices" them into digital cross-sections for the machine to
successively use as a guideline for printing. Depending on the machine used, material or a
binding material is deposited on the build bed or platform until material/binder layering is
complete and the final 3D model has been "printed."
A standard data interface between CAD software and the machines is the STL file format. An
STL file approximates the shape of a part or assembly using triangular facets. Smaller facets
produce a higher quality surface. PLY is a scanner generated input file format, and VRML (or
WRL) files are often used as input for 3D printing technologies that are able to print in full color.
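To make the STL idea concrete, the hedged Python sketch below writes a single triangular facet in the ASCII STL format by hand; real CAD exporters emit thousands of such facets.

# A single-facet ASCII STL file written manually; the vertices are arbitrary.
triangle = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]    # three vertices of one facet

with open("part.stl", "w") as f:
    f.write("solid example\n")
    f.write("  facet normal 0 0 1\n")            # normal vector of the facet
    f.write("    outer loop\n")
    for x, y, z in triangle:
        f.write(f"      vertex {x} {y} {z}\n")
    f.write("    endloop\n")
    f.write("  endfacet\n")
    f.write("endsolid example\n")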
Printing
To perform a print, the machine reads the design from an STL file and lays down successive
layers of liquid, powder, paper or sheet material to build the model from a series of cross
sections. These layers, which correspond to the virtual cross sections from the CAD model, are
joined or automatically fused to create the final shape. The primary advantage of this technique is
its ability to create almost any shape or geometric feature.
Printer resolution describes layer thickness and X-Y resolution in dpi (dots per inch), or
micrometers. Typical layer thickness is around 100 µm (250 DPI), although some machines such
as the Objet Connex series and 3D Systems' ProJet series can print layers as thin as 16 µm (1,600
DPI). X-Y resolution is comparable to that of laser printers. The particles (3D dots) are around 50
to 100 µm (510 to 250 DPI) in diameter.
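The DPI figures quoted above follow from the 25,400 micrometres in an inch, as the small Python check below shows.

# Convert a feature size in micrometres to dots per inch (25,400 µm per inch).
def microns_to_dpi(microns):
    return 25400.0 / microns

print(microns_to_dpi(100))   # about 254 DPI - the "around 250 DPI" layer thickness
print(microns_to_dpi(16))    # about 1588 DPI - the "1,600 DPI" thin layers
print(microns_to_dpi(50))    # about 508 DPI - the smallest quoted particle size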
Construction of a model with contemporary methods can take anywhere from several hours to
several days, depending on the method used and the size and complexity of the model. Additive
systems can typically reduce this time to a few hours, although it varies widely depending on the
type of machine used and the size and number of models being produced simultaneously.
Traditional techniques like injection molding can be less expensive for manufacturing polymer
products in high quantities, but additive manufacturing can be faster, more flexible and less
expensive when producing relatively small quantities of parts. 3D printers give designers and
concept development teams the ability to produce parts and concept models using a desktop size
printer.
Finishing
Though the printer-produced resolution is sufficient for many applications, printing a slightly
oversized version of the desired object in standard resolution and then removing material with a
higher-resolution subtractive process can achieve greater precision.
Some additive manufacturing techniques are capable of using multiple materials in the course
of constructing parts. Some are able to print in multiple colors and color combinations
simultaneously. Some also utilize supports when building. Supports are removable or dissolvable
upon completion of the print, and are used to support overhanging features during construction.
Several different 3D printing processes have been invented since the late 1970s. The printers
were originally large, expensive, and highly limited in what they could produce.
A large number of additive processes are now available. They differ in the way layers are
deposited to create parts and in the materials that can be used. Some methods melt or soften
material to produce the layers, e.g. selective laser melting (SLM) or direct metal laser sintering
(DMLS), selective laser sintering (SLS), fused deposition modeling (FDM), while others cure
liquid materials using different sophisticated technologies, e.g. stereolithography (SLA). With
laminated object manufacturing (LOM), thin layers are cut to shape and joined together (e.g.
paper, polymer, metal). Each method has its own advantages and drawbacks, and some
companies consequently offer a choice between powder and polymer for the material from which
the object is built. Some companies use standard, off-the-shelf business paper as the build
material to produce a durable prototype. The main considerations in choosing a machine are
generally speed, cost of the 3D printer, cost of the printed prototype, and cost and choice of
materials and color capabilities.
Printers that work directly with metals are expensive. In some cases, however, less expensive
printers can be used to make a mould, which is then used to make metal parts.
Standard applications of 3D printers include design visualization, prototyping/CAD, metal
casting, architecture, education, geospatial, healthcare, and entertainment/retail.
Industrial uses
Rapid prototyping
Industrial 3D printers have existed since the early 1980s and have been used extensively for
rapid prototyping and research purposes. These are generally larger machines that use proprietary
powdered metals, casting media (e.g. sand), plastics, paper or cartridges, and are used for rapid
prototyping by universities and commercial companies.
Rapid manufacturing
Advances in RP technology have introduced materials that are appropriate for final
manufacture, which has in turn introduced the possibility of directly manufacturing finished
components. One advantage of 3D printing for rapid manufacturing lies in the relatively
inexpensive production of small numbers of parts.
Rapid manufacturing is a new method of manufacturing and many of its processes remain
unproven. 3D printing is now entering the field of rapid manufacturing and was identified as a
"next level" technology by many experts in a 2009 report. One of the most promising processes
looks to be the adaptation of laser sintering (LS), one of the better-established rapid prototyping
methods. As of 2006, however, these techniques were still very much in their infancy, with many
obstacles to be overcome before RM could be considered a realistic manufacturing method.
Mass customization
Companies have created services where consumers can customize objects using simplified
web based customization software, and order the resulting items as 3D printed unique objects.
This now allows consumers to create custom cases for their mobile phones. Nokia has released
the 3D designs for its case so that owners can customize their own case and have it 3D printed.
Mass production
The current slow print speed of 3D printers limits their use for mass production. To reduce this
overhead, several fused filament machines now offer multiple extruder heads. These can be used
to print in multiple colors, with different polymers, or to make multiple prints simultaneously.
This increases their overall print speed during multiple instance production, while requiring
less capital cost than duplicate machines since they can share a single controller.
Distinct from the use of multiple machines, multi-material machines are restricted to making
identical copies of the same part, but can offer multi-color and multi-material features when
needed. The print speed increases proportionately to the number of heads. Furthermore, the
energy cost is reduced due to the fact that they share the same heated print volume. Together,
these two features reduce overhead costs.
Domestic and hobbyist uses
As of 2012, domestic 3D printing has mainly captivated hobbyists and enthusiasts and has not
quite gained recognition for practical household applications. A working clock has been made
and gears have been printed for home woodworking machines among other purposes. 3D printing
is also used for ornamental objects. Web sites associated with home 3D printing tend to include
backscratchers, coathooks, doorknobs etc.
As of 2013, 3D printers have been used to help animals. A 3D printed foot let a crippled
duckling walk again. Stylish 3D printed hermit crab shells let the crabs inhabit a new style of home.
Printers have also made decorative pieces for humans such as necklaces, rings, bags etc.
Clothing
3D printing has spread into the world of clothing with fashion designers experimenting with
3D-printed bikinis, shoes, and dresses. In commercial production Nike is using 3D printing to
prototype and manufacture the 2012 Vapor Laser Talon football shoe for players of American
football, and New Balance is 3D manufacturing custom-fit shoes for athletes.
Research into new applications
Future applications for 3D printing might include creating open-source scientific equipment or
other science-based applications like reconstructing fossils in paleontology, replicating ancient
and priceless artifacts in archaeology, reconstructing bones and body parts in forensic pathology,
and reconstructing heavily damaged evidence acquired from crime scene investigations. The
technology is even being explored for building construction.
As of 2012, 3D printing technology has been studied by biotechnology firms and academia for
possible use in tissue engineering applications in which organs and body parts are built using
inkjet techniques. In this process, layers of living cells are deposited onto a gel medium or sugar
matrix and slowly built up to form three-dimensional structures including vascular systems.
Several terms have been used to refer to this field of research: organ printing, bio-printing, body
part printing, and computer-aided tissue engineering, among others.
A proof-of-principle project at the University of Glasgow, UK, in 2012 showed that it is
possible to use 3D printing techniques to create chemical compounds, including new ones. They
first printed chemical reaction vessels, then used the printer to squirt reactants into them as
"chemical inks" which would then react. They have produced new compounds to verify the
validity of the process, but have not pursued anything with a particular application. Cornell
Creative Machines Lab has confirmed that it is possible to produce customized food with 3D
Hydrocolloid Printing.
The use of 3D scanning technologies allows the replication of real objects without the use of
moulding techniques that in many cases can be more expensive, more difficult, or too invasive to
be performed, particularly for precious or delicate cultural heritage artifacts where direct contact
with the molding substances could harm the original object's surface.
An additional use being developed is building printing, or using 3D printing to build buildings.
This could allow faster construction for lower costs, and has been investigated for construction of
off-Earth habitats. For example, the Sinterhab project is researching a lunar base constructed by
3D printing using lunar regolith as a base material. Instead of adding a binding agent to the
regolith, researchers are experimenting with microwave sintering to create solid blocks from the
raw material.
Employing additive layer technology offered by 3D printing, Terahertz devices which act as
waveguides, couplers and bends have been created. The complex shape of these devices could not
be achieved using conventional fabrication techniques. Commercially available professional
grade printer EDEN 260V was used to create structures with minimum feature size of 100 µm.
The printed structures were later DC sputter coated with gold (or any other metal) to create a
Terahertz Plasmonic Device.
China has committed almost $500 million towards the establishment of 10 national 3-D
printing development institutes. In 2013, Chinese scientists began printing ears, livers and
kidneys, with living tissue. Researchers in China have been able to successfully print human
organs using specialized 3D bio printers that use living cells instead of plastic. Researchers at
Hangzhou Dianzi University actually went as far as inventing their own 3D printer for the
complex task, dubbed the “Regenovo” which is a "3D bio printer." Xu Mingen, Regenovo's
developer, said that it takes the printer under an hour to produce either a mini liver sample or a
four to five inch ear cartilage sample. Xu also predicted that fully functional printed organs may
be possible within the next ten to twenty years. In the same year, researchers at the University of
Hasselt in Belgium successfully printed a new jawbone for an 83-year-old Belgian woman, who
is now able to chew, speak and breathe normally again.
In Bahrain, large-scale 3D printing using a sandstone-like material has been used to create
unique coral-shaped structures, which encourage coral polyps to colonize and regenerate
damaged reefs. These structures have a much more natural shape than other structures used to
create artificial reefs, and have a neutral pH which concrete does not.
Some of the recent developments in 3D printing were revealed at the 3DPrintshow in London,
which took place in November 2013. One part of the show focused on ways in which 3D printing
can advance the medical field. The underlying theme of these advances was that these printers
can be used to create parts printed to specifications that meet each individual, which
makes the process safer and more efficient. One of these advances is the use of 3D printers to
produce casts that are created to mimic the bones that they are supporting. These custom-fitted
casts are open, which allows the wearer to scratch any itches and wash the damaged area, and
also provides ventilation. One of the best features is that they can be recycled
to create more casts.
Effects of 3D printing
Additive manufacturing, still in its infancy today, requires manufacturing firms to
be flexible, ever-improving users of all available technologies in order to remain competitive.
Advocates of additive manufacturing also predict that this arc of technological development will
counter globalisation, as end users will do much of their own manufacturing rather than engage in
trade to buy products from other people and corporations. The real integration of the newer
additive technologies into commercial production, however, is more a matter of complementing
traditional subtractive methods rather than displacing them entirely.
Space exploration
As early as 2010, work began on applications of 3D printing in zero or low gravity
environments. The primary concept involves creating basic items such as hand tools or other
more complicated devices "on demand" versus using valuable resources such as fuel or cargo
space to carry the items into space.
Additionally, NASA is conducting tests to assess the potential of 3D printing to make space
exploration cheaper and more efficient. Rocket parts built using this technology have passed
NASA firing tests. In July 2013, two rocket engine injectors performed as well as traditionally
constructed parts during hot-fire tests which exposed them to temperatures approaching 6,000
degrees Fahrenheit (3,316 degrees Celsius) and extreme pressures. NASA is also preparing to
launch a 3D printer into space; the agency hopes to demonstrate that, with the printer making
spare parts on the fly, astronauts need not carry large loads of spares with them.
Firearms
In 2012, the U.S.-based group Defense Distributed disclosed plans to design "a working plastic
gun that could be downloaded and reproduced by anybody with a 3D printer." Defense Distributed
has also designed a 3D printable AR-15 type rifle lower receiver (capable of lasting more than
650 rounds) and a 30 round M16 magazine. Soon after Defense Distributed succeeded in
designing the first working blueprint to produce a plastic gun with a 3D printer in May 2013, the
United States Department of State demanded that they remove the instructions from their website.
After Defense Distributed released their plans, questions were raised regarding the effects that
3D printing and widespread consumer-level CNC machining may have on gun control
effectiveness.
Lecture 3. SOFTWARE. OPERATING SYSTEMS.

3.1. General information.


Computer software, or just software, is a collection of computer programs and related data
that provides the instructions for telling a computer what to do and how to do it. Software refers
to one or more computer programs and data held in the storage of the computer. In other words,
software is a set of programs, procedures, algorithms and its documentation concerned with the
operation of a data processing system.
The term is used to contrast with computer hardware, which denotes the physical tangible
components of computers. It may be used as an adjective to mean "non-tangible component" or
as a group noun to mean "all computer programs taken as a whole". Computer hardware
and software require each other and neither can be realistically used without the other.
At the lowest level, executable code consists of machine language instructions specific to an
individual processor – typically a central processing unit (CPU). A machine language consists of
groups of binary values signifying processor instructions that change the state of the computer
from its preceding state. For example, an instruction may change the value stored in a particular
storage location inside the computer – an effect that is not directly observable to the user. An
instruction may also (indirectly) cause something to appear on a display of the computer system –
a state change which should be visible to the user. The processor carries out the instructions in the
order they are provided, unless it is instructed to "jump" to a different instruction, or interrupted.
Software is usually written in high-level programming languages that are easier and more
efficient for humans to use (closer to natural language) than machine language. High-level
languages are compiled or interpreted into machine language object code. Software may also be
written in an assembly language, essentially, a mnemonic representation of a machine language
using a natural language alphabet. Assembly language must be assembled into object code via an
assembler.
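As a rough illustration of how a high-level statement is translated into simpler low-level instructions, Python's standard dis module can display the bytecode produced for a small function; bytecode is not true machine language, but the principle of one statement becoming several elementary instructions is the same.

# Show the low-level instructions generated for a one-line high-level function.
import dis

def add(a, b):
    return a + b

dis.dis(add)   # prints instructions such as LOAD_FAST and a binary-add operation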
Types of software
Based on the goal, computer software can be divided into:
 Application software uses the computer system to perform useful work or provide
entertainment functions beyond the basic operation of the computer itself.
 System software is designed to operate the computer hardware, to provide basic
functionality, and to provide a platform for running application software.
System software includes: Operating system, an essential collection of computer programs that
manages resources and provides common services for other software. Supervisory programs, boot
loaders, shells and window systems are core parts of operating systems. In practice, an operating
system comes bundled with additional software (including application software) so that a user
can potentially do some work with a computer that only has an operating system. Device driver, a
computer program that operates or controls a particular type of device that is attached to a
computer. Each device needs at least one corresponding device driver; thus a computer needs
more than one device driver. Utilities, software designed to assist users in maintenance and care
of their computers.
 Embedded software is computer software, written to control machines or devices that are
not typically thought of as computers.
It is typically specialized for the particular hardware that it runs on and has time and memory
constraints. This term is sometimes used interchangeably with firmware, although firmware can
also be applied to ROM-based code on a computer, on top of which the OS runs, whereas
embedded software is typically the only software on the device in question.
A characteristic feature is that few or none of the functions of embedded software are
initiated or controlled via a human interface; they are driven through machine interfaces instead.
Manufacturers 'build in' embedded software in the electronics in cars, telephones, modems,
robots, appliances, toys, security systems, pacemakers, televisions and set-top boxes, and digital
watches, for example. This software can be very simple, such as lighting controls running on an
8-bit microprocessor and a few kilobytes of memory, or can become very sophisticated in
applications such as airplanes, missiles, and process control systems.
 A programming tool or software development tool is a program or application that
software developers use to create, debug, maintain, or otherwise support other programs and
applications.
The term usually refers to relatively simple programs that can be combined to
accomplish a task, much as one might use multiple hand tools to fix a physical object.
 Malicious software or malware, computer software developed to harm and disrupt
computers.
As such, malware is undesirable. Malware is closely associated with computer-related crimes,
though some malicious programs may have been designed as practical jokes.

3.2. Operating Systems


An Operating System, or OS, is low-level software that enables a user and higher-level
application software to interact with a computer’s hardware and the data and other programs
stored on the computer.
An OS performs basic tasks, such as recognizing input from the keyboard, sending output to
the display screen, keeping track of files and directories on the disk, and controlling peripheral
devices such as printers.
Other Services
1. Program Execution
OS provides an environment where the user can conveniently run programs. The user does not
have to worry about memory allocation or CPU scheduling.
2. I/O Operations
Each program requires input and produces output. The OS hides some of the details of the
underlying hardware for such I/O. All the user sees is that the I/O has been performed, without
those details.
3. Communications
There are instances where processes need to communicate with each other to exchange
information. It may be between processes running on the same computer or running on different
computers. The OS provides these services to application programs, making inter-process
communication possible and relieving the user of having to worry about how this is accomplished.
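A minimal sketch of such inter-process communication, using Python's standard multiprocessing module, is shown below; the operating system moves the message between the two processes, and the programmer only calls send and recv.

# Two processes exchange a message through an OS-provided pipe.
from multiprocessing import Process, Pipe

def worker(conn):
    conn.send("hello from the child process")   # hand the message to the OS
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    child = Process(target=worker, args=(child_conn,))
    child.start()
    print(parent_conn.recv())                   # receive the message in the parent
    child.join()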
Application programs and OS.
Operating systems provide a software platform on top of which other programs, called
application programs, can run.
The choice of operating system, therefore, determines to a great extent the applications a user
can run. For example, the DOS operating system contains commands such as COPY and
RENAME for copying files and changing the names of files, respectively. The commands are
accepted and executed by a part of the operating system. Similarly, the UNIX operating system
has the commands cp and mv to copy and rename files.
UNIX
UNIX was one of the first operating systems to be written, in 1971.
Advantages of UNIX are:
1. Multitasking – multiple programs can run at one time.
2. Multi-user – allows more than a single user to work at any given time. This is accomplished
by sharing processing time between each user.
3. Safe – prevents one program from accessing memory or storage space allocated to another
program, and enables file protection, requiring users to have permission to perform certain
functions, such as accessing a directory, file, or disk drive.
Linux
Linux (or GNU/Linux) is a Unix-like operating system that was developed without any actual
Unix code, unlike BSD and its variants. Linux can be used on a wide range of devices from
supercomputers to wristwatches. The Linux kernel is released under an open source license, so
anyone can read and modify its code. It has been modified to run on a large variety of electronics.
Although estimates suggest that Linux is used on 1.82% of all personal computers, it has been
widely adopted for use in servers and embedded systems (such as cell phones). The Linux kernel
is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's
Android.
The GNU project is a mass collaboration of programmers who seek to create a completely free
and open operating system that is similar to Unix but with completely original code. It was
started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux
variants. Thousands of pieces of software for virtually every operating system are licensed under
the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus
Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted
information about his project on a newsgroup for computer students and programmers. He
received a wave of support and volunteers who ended up creating a full-fledged kernel.
Programmers from GNU took notice, and members of both projects worked to integrate the
finished GNU parts with the Linux kernel in order to create a full-fledged operating system.
Microsoft Windows
Microsoft Windows is a family of proprietary operating systems designed by Microsoft
Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9
percent total usage share on Web connected computers. The newest version is Windows 8 for
workstations and Windows Server 2012 for servers. Windows 7 recently overtook Windows XP
as the most used OS.
Microsoft Windows originated in 1985 as an operating environment running on top of MS-
DOS, which was the standard operating system shipped on most Intel architecture personal
computers at the time. In 1995, Windows 95 was released, which used MS-DOS only as a
bootstrap loader. For backwards compatibility, Win9x could run real-mode MS-DOS and 16-bit
Windows 3.x drivers. Windows ME, released in 2000, was the last version in the Win9x family.
Later versions have all been based on the Windows NT kernel. Current versions of Windows run
on IA-32 and x86-64 microprocessors, although Windows 8 will support ARM architecture. In
the past, Windows NT supported non-Intel architectures.
Server editions of Windows are widely used. In recent years, Microsoft has expended
significant capital in an effort to promote the use of Windows as a server operating system.
However, Windows' usage on servers is not as widespread as on personal computers, as Windows
competes against Linux and BSD for server market share.
Other
There have been many operating systems that were significant in their day but are no longer
so, such as AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to
Apple's Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMint. Some are still used in
niche markets and continue to be developed as minority platforms for enthusiast communities and
specialist applications. OpenVMS, formerly from DEC, is still under active development by
Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for
operating systems education or to do research on operating system concepts. A typical example
of a system that fulfills both roles is MINIX, while for example Singularity is used purely for
research.

Lecture 4. HUMAN-COMPUTER INTERACTION.

Human-computer interaction (HCI) is the study of how people interact with computers and to
what extent computers are or are not developed for successful interaction with human beings.
Human-machine interface (HMI) is used to refer to the user interface in a manufacturing or
process-control system.
User Interface - the visual part of a computer application or operating system through which a
user interacts with a computer or software. It determines how commands are given to the computer
or the program and how information is displayed on the screen.
Usability
In the early days of computer science, designers and developers paid much less attention to
making hardware and software products usable or "user friendly."
Yet, requests from a growing subset of users for easy-to-use devices eventually focused
researchers' attention on usability.
Usability - the extent to which a product can be used by specified users to achieve specified
goals with effectiveness, efficiency, and satisfaction in a specified context of use.
(the International Organization for Standardization, ISO)
Thus, usability defines a set of criteria such as efficiency, safety, and utility, which are related
mainly to computer systems.
Types of Interface
The main types of user interfaces are:
• Command Line Interface: the user must know the machine and program-specific
instructions or codes.
• Menu Interface : user chooses the commands from lists displayed on the screen.
• Graphical User Interface (GUI): user gives commands by selecting and clicking on icons
displayed on the screen.
• Touch user interfaces are graphical user interfaces using a touchpad or touchscreen
display as a combined input and output device.
They supplement or replace other forms of output with haptic feedback methods. Used in
computerized simulators etc.
• Gesture interfaces are graphical user interfaces which accept input in the form of
hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
• Motion tracking interfaces monitor the user's body motions and translate them into
commands. (Currently being developed by Apple)
• Voice user interfaces, which accept input and provide output by generating voice
prompts.
The user input is made by pressing keys or buttons, or responding verbally to the interface.
Command line interface (CLI)
Command line interfaces do not make use of images, icons or graphics. All the user sees is
a plain black screen with a command prompt.
• There are over 270 different commands that can be entered at the command prompt.
Commands have to be entered precisely without spelling mistakes or else the operating
system will return an error.
• Remembering commands and the exact way to enter them can be difficult and so
Command Line Interface Operating Systems are considered hard to use.
The main feature of a CLI is that the keyboard is used to type a variety of different
commands into a command prompt. Because they use no graphics, CLIs require very little
computer power.
Examples of some commands (a scripted equivalent of several of them is sketched after this list):
• copy Copies files from one location to another
• del Deletes one or more files
• format Deletes all the data on a hard disk
• md Creates a new folder
• rename Renames a file or folder
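The hedged Python sketch below performs several of the same operations through the standard library, showing that such commands are simply requests to the operating system's file services; the file and folder names are hypothetical.

# Programmatic equivalents of md, copy, rename and del; the names are made up.
import os
import shutil

os.mkdir("projects")                                     # md: create a new folder
shutil.copy("report.txt", "projects/report.txt")         # copy: copy a file to another location
os.rename("projects/report.txt", "projects/final.txt")   # rename: rename a file
os.remove("old_notes.txt")                               # del: delete a file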
Graphical User Interface (GUI)
This is often abbreviated to WIMP (Window, Icon, Menu, Pointer). GUI operating systems were
first created by Xerox in 1973. The idea was further developed by Apple on their Macintosh
system.
Instead of typing in commands, the user can use a mouse to point and click objects on the screen.
For example: A user can erase a file by right clicking and then selecting delete.
Touch user interface
Touchscreen technology allows people to use their fingers to select icons and options straight
from the device's screen. We call this type of interface Post-WIMP.

Lecture 5. DATABASE SYSTEMS.


5.1. Basic terms
A database is a structured collection of data. The data are typically organized to model
relevant aspects of reality in a way that supports processes requiring this information. For
example, modeling the availability of rooms in hotels in a way that supports finding a hotel with
vacancies.

Database management systems (DBMSs) are specially designed applications that interact with
the user, other applications, and the database itself to capture and analyze data. A general-purpose
database management system (DBMS) is a software system designed to allow the definition,
creation, querying, update, and administration of databases.
Well-known DBMSs include MySQL, MariaDB, PostgreSQL, SQLite, Microsoft SQL Server,
Oracle, SAP, dBASE, FoxPro, IBM DB2, LibreOffice Base and FileMaker Pro. A database is not
generally portable across different DBMSs, but different DBMSs can interoperate by using
standards such as SQL and ODBC or JDBC to allow a single application to work with more than
one database.
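As a small hedged example of definition, creation, querying and update through a DBMS, the Python sketch below uses SQLite (one of the systems listed above) through the built-in sqlite3 module and standard SQL; the hotel data is hypothetical.

# Define a table, insert and update rows, then query for hotels with vacancies.
import sqlite3

conn = sqlite3.connect(":memory:")       # an in-memory database for the example
cur = conn.cursor()
cur.execute("CREATE TABLE hotels (name TEXT, city TEXT, free_rooms INTEGER)")
cur.execute("INSERT INTO hotels VALUES ('Hotel Example', 'Astana', 12)")
cur.execute("UPDATE hotels SET free_rooms = 11 WHERE name = 'Hotel Example'")
cur.execute("SELECT name FROM hotels WHERE city = ? AND free_rooms > 0", ("Astana",))
print(cur.fetchall())                    # hotels in the city that have vacancies
conn.close()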

5.2. History
With the progress of technology in the areas of processors, computer memory, computer
storage and computer networks, the sizes, capabilities, and performance of databases and their
respective DBMSs have grown by orders of magnitude.
The development of database technology can be divided into three eras based on data model or
structure (Figure 1):
 navigational,
 SQL/relational,
 and post-relational.
The two main early navigational data models were the hierarchical model, epitomized by
IBM's IMS system, and the Codasyl model (Network model), implemented in a number of
products such as IDMS.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by
insisting that applications should search for data by content, rather than by following links. The
relational model is made up of ledger-style tables, each used for a different type of entity. It was
not until the mid-1980s that computing hardware became powerful enough to allow relational
systems (DBMSs plus applications) to be widely deployed. By the early 1990s, however,
relational systems were dominant for all large-scale data processing applications, and they remain
dominant today (2013) except in niche areas. The dominant database language is the standard
SQL for the relational model, which has influenced database languages for other data models.
Object databases were invented in the 1980s to overcome the inconvenience of object-
relational impedance mismatch, which led to the coining of the term "post-relational" and also to the
development of hybrid object-relational databases.
The next generation of post-relational databases in the 2000s became known as NoSQL
databases, introducing fast key-value stores and document-oriented databases. A competing "next
generation" known as NewSQL databases attempted new implementations that retained the
relational/SQL model while aiming to match the high performance of NoSQL compared to
commercially available relational DBMSs.
Figure 1
1970s relational DBMS
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was
primarily involved in the development of hard disk systems. He was unhappy with the
navigational model of the Codasyl approach, notably the lack of a "search" facility. In 1970, he
wrote a number of papers that outlined a new approach to database construction that eventually
culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
In this paper, he described a new system for storing and working with large databases. Instead
of records being stored in some sort of linked list of free-form records as in Codasyl, Codd's idea
was to use a "table" of fixed-length records, with each table used for a different type of entity. A
linked-list system would be very inefficient when storing "sparse" databases where some of the
data for any one record could be left empty. The relational model solved this by splitting the data
into a series of normalized tables (or relations), with optional elements being moved out of the
main table to where they would take up room only if needed. Data may be freely inserted, deleted
and edited in these tables, with the DBMS doing whatever maintenance is needed to present a table
view to the application/user.
The relational model also allowed the content of the database to evolve without constant
rewriting of links and pointers. The relational part comes from entities referencing other entities
in what is known as one-to-many relationship, like a traditional hierarchical model, and many-to-
many relationship, like a navigational (network) model. Thus, a relational model can express both
hierarchical and navigational models, as well as its native tabular model, allowing for pure or
combined modeling in terms of these three models, as the application requires.
For instance, a common use of a database system is to track information about users, their
name, login information, various addresses and phone numbers. In the navigational approach all
of these data would be placed in a single record, and unused items would simply not be placed in
the database. In the relational approach, the data would be normalized into a user table, an
address table and a phone number table (for instance). Records would be created in these optional
tables only if the address or phone numbers were actually provided.
Linking the information back together is the key to this system. In the relational model, some
bit of information was used as a "key", uniquely defining a particular record. When information
was being collected about a user, information stored in the optional tables would be found by
searching for this key. In the relational model, related records are linked together with a
"key"(Figure 32).

Figure 32

For instance, if the login name of a user is unique, addresses and phone numbers for that user
would be recorded with the login name as its key. This simple "re-linking" of related data back
into a single collection is something that traditional computer languages are not designed for.
Just as the navigational approach would require programs to loop in order to collect records,
the relational approach would require loops to collect information about any one record. Codd's
solution to the necessary looping was a set-oriented language, a suggestion that would later
spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he
demonstrated that such a system could support all the operations of normal databases (inserting,
updating etc.) as well as providing a simple system for finding and returning sets of data in a
single operation.
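As a minimal sketch of this idea (not taken from Codd's paper; the table and column names are invented), the following Python code uses the standard sqlite3 module to keep users and their phone numbers in separate normalized tables linked by a key, and then re-links them with a single set-oriented SQL query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (login TEXT PRIMARY KEY, name TEXT);
CREATE TABLE phones (login TEXT REFERENCES users(login), number TEXT);
""")
conn.execute("INSERT INTO users VALUES ('aiman', 'Aiman S.')")
# Phone rows exist only for users who actually supplied a number.
conn.execute("INSERT INTO phones VALUES ('aiman', '+7 701 000 0000')")
conn.execute("INSERT INTO phones VALUES ('aiman', '+7 727 000 0000')")

# One set-oriented query re-links the related records through the key.
rows = conn.execute("""
    SELECT users.name, phones.number
    FROM users JOIN phones ON phones.login = users.login
""").fetchall()
print(rows)   # both phone numbers come back, linked by the 'login' key
conn.close()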
Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael
Stonebraker. They started a project known as INGRES using funding that had already been
allocated for a geographical database project and student programmers to produce code.
Beginning in 1973, INGRES delivered its first test products which were generally ready for
widespread use in 1979. INGRES was similar to System R in a number of ways, including the
use of a "language" for data access, known as QUEL. Over time, INGRES moved to the
emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one,
Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there
are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations
usually called relational are actually SQL DBMSs.
2000s NoSQL and NewSQL databases
The next generation of post-relational databases in the 2000s became known as NoSQL
databases, including fast key-value stores and document-oriented databases. XML databases are a
type of structured document-oriented database that allows querying based on XML document
attributes.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations
by storing denormalized data, and are designed to scale horizontally.
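To make the contrast with the relational approach concrete, here is a toy, purely illustrative key-value/document sketch in Python: each record is one denormalized document stored under a single key, no schema is enforced, and a lookup is a single get by key rather than a join. Real systems such as Redis or MongoDB expose similar operations over the network.

# A toy in-memory "document store": one denormalized document per key.
store = {}

def put(key, document):
    store[key] = document            # no fixed schema is enforced

def get(key):
    return store.get(key)

# Documents stored under different keys need not share the same fields.
put("user:1", {"name": "Dana", "phones": ["+7 701 000 0000"]})
put("user:2", {"name": "Erik", "city": "Astana"})

print(get("user:1"))                 # one lookup by key, no join needed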
In recent years there has been a high demand for massively distributed databases with high
partition tolerance, but according to the CAP theorem it is impossible for a distributed system to
simultaneously provide consistency, availability and partition tolerance guarantees. A distributed
system can satisfy any two of these guarantees at the same time, but not all three. For that reason
many NoSQL databases use what is called eventual consistency to provide both availability
and partition tolerance guarantees with a reduced level of data consistency.
The most popular NoSQL systems include: MongoDB, Couchbase, Riak, Oracle NoSQL
Database, memcached, Redis, CouchDB, Hazelcast, Apache Cassandra and HBase. Note that all
are open-source software products.
A number of new relational databases continuing use of SQL but aiming for performance
comparable to NoSQL are known as NewSQL.
Database design and modeling
The first task of a database designer is to produce a conceptual data model that reflects the
structure of the information to be held in the database. A common approach to this is to develop
an entity-relationship model, often with the aid of drawing tools. Another popular approach is the
Unified Modeling Language. A successful data model will accurately reflect the possible state of
the external world being modeled: for example, if people can have more than one phone number,
it will allow this information to be captured. Designing a good conceptual data model requires a
good understanding of the application domain; it typically involves asking deep questions about
the things of interest to an organisation, like "can a customer also be a supplier?", or "if a product
is sold with two different forms of packaging, are those the same product or different products?",
or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe
even three)?". The answers to these questions establish definitions of the terminology used for
entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the
analysis of workflow in the organization. This can help to establish what information is needed in
the database, and what can be left out. For example, it can help when deciding whether the
database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to
translate this into a schema that implements the relevant data structures within the database. This
process is often called logical database design, and the output is a logical data model expressed in
the form of a schema. Whereas the conceptual data model is (in theory at least) independent of
the choice of database technology, the logical data model will be expressed in terms of a
particular database model supported by the chosen DBMS. (The terms data model and database
model are often used interchangeably, but in this article we use data model for the design of a
specific database, and database model for the modelling notation used to express that design.)
The most popular database model for general-purpose databases is the relational model, or
more precisely, the relational model as represented by the SQL language. The process of creating
a logical database design using this model uses a methodical approach known as normalization.
The goal of normalization is to ensure that each elementary "fact" is only recorded in one place,
so that insertions, updates, and deletions automatically maintain consistency.
The final stage of database design is to make the decisions that affect performance, scalability,
recovery, security, and the like. This is often called physical database design. A key goal during
this stage is data independence, meaning that the decisions made for performance optimization
purposes should be invisible to end-users and applications. Physical design is driven mainly by
performance requirements, and requires a good knowledge of the expected workload and access
patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control
to database objects as well as defining security levels and methods for the data itself.

Lecture 6. DATA ANALYSIS. DATA MANAGEMENT.


Data Analysis - the process of evaluating data using analytical and logical reasoning to examine
each component of the data provided.
This form of analysis is just one of the many steps that must be completed when conducting a
research experiment. Data from various sources is gathered, reviewed, and then analyzed to form
some sort of finding or conclusion.
There are a variety of specific data analysis methods, some of which include
 data mining,
 text analytics,
 business intelligence, and
 data visualizations.
Data Mining - sifting through very large amounts of data for useful information.
Data Mining uses
 artificial intelligence techniques,
 neural networks, and
 advanced statistical tools (such as cluster analysis)
to reveal
 trends,
 patterns, and
 relationships,
which might otherwise have remained undetected.
In contrast to an expert system (which draws inferences from the given data on the basis of a
given set of rules) data mining attempts to discover hidden rules underlying the data. Also called
data surfing.
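As a small illustration of the kind of statistical tool mentioned above, the sketch below runs a cluster analysis (k-means) on a handful of two-dimensional points; it assumes the third-party NumPy and scikit-learn packages are installed, and the data are made up for the example.

import numpy as np
from sklearn.cluster import KMeans

# Six points that visibly form two groups.
X = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.1],
              [8.0, 8.0], [9.0, 9.5], [8.5, 9.1]])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)           # cluster assigned to each point
print(model.cluster_centers_)  # coordinates of the two cluster centers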

Data mining has been used in many applications. The following are some notable examples of
usage:
Games
Since the early 1960s, with the availability of oracles for certain combinatorial games, also
called tablebases (e.g. for 3x3-chess with any beginning configuration, small-board dots-and-
boxes, small-board hex, and certain endgames in chess, dots-and-boxes, and hex), a new area for
data mining has been opened. This is the extraction of human-usable strategies from these
oracles. Current pattern recognition approaches do not seem to fully acquire the high level of
abstraction required to be applied successfully. Instead, extensive experimentation with the
tablebases – combined with an intensive study of tablebase answers to well-designed problems,
and with knowledge of prior art (i.e., pre-tablebase knowledge) – is used to yield insightful
patterns. Berlekamp (in dots-and-boxes, etc.) and John Nunn (in chess endgames) are notable
examples of researchers doing this work, though they were not – and are not – involved in
tablebase generation.
Business
In business, data mining is the analysis of historical business activities, stored as static data in
data warehouse databases. The goal is to reveal hidden patterns and trends. Data mining software
uses advanced pattern recognition algorithms to sift through large amounts of data to assist in
discovering previously unknown strategic business information. Examples of how businesses
use data mining include performing market analysis to identify new product bundles,
finding the root cause of manufacturing problems, preventing customer attrition, acquiring new
customers, cross-selling to existing customers, and profiling customers with greater accuracy.

 In today’s world raw data is being collected by companies at an exploding rate. For
example, Walmart processes over 20 million point-of-sale transactions every day. This
information is stored in a centralized database, but would be useless without some type of
data mining software to analyze it. If Walmart analyzed their point-of-sale data with data
mining techniques they would be able to determine sales trends, develop marketing
campaigns, and more accurately predict customer loyalty.
 Categorization of the items available in an e-commerce site is a fundamental problem. A
correct item categorization system is essential for the user experience, as it helps determine the
items relevant to the user for search and browsing. Item categorization can be formulated as a
supervised classification problem in data mining, where the categories are the target classes
and the features are the words composing some textual description of the items. One
approach is to first find groups of similar items and place them together in a latent
group. Given a new item, it is first classified into a latent group (coarse-level
classification), and then a second round of classification finds the specific category to which the
item belongs.
 Every time a credit card or a store loyalty card is used, or a warranty card is
filled in, data is collected about the user's behavior. Many people find the amount of
information stored about them by companies such as Google, Facebook, and Amazon
disturbing and are concerned about privacy. Although there is the potential for this personal
data to be used in harmful or unwanted ways, it is also being used to make people's lives better.
For example, Ford and Audi hope to one day collect information about customer driving
patterns so they can recommend safer routes and warn drivers about dangerous road
conditions.
 Data mining in customer relationship management applications can contribute
significantly to the bottom line. Rather than randomly contacting a prospect or customer
through a call center or sending mail, a company can concentrate its efforts on prospects that
are predicted to have a high likelihood of responding to an offer. More sophisticated methods
may be used to optimize resources across campaigns so that one may predict to which
channel and to which offer an individual is most likely to respond (across all potential
offers). Additionally, sophisticated applications could be used to automate mailing. Once the
results from data mining (potential prospect/customer and channel/offer) are determined, this
"sophisticated application" can either automatically send an e-mail or a regular mail. Finally,
in cases where many people will take an action without an offer, "uplift modeling" can be
used to determine which people have the greatest increase in response if given an offer.
Uplift modeling thereby enables marketers to focus mailings and offers on persuadable
people, and not to send offers to people who will buy the product without an offer. Data
clustering can also be used to automatically discover the segments or groups within a
customer data set.
 Businesses employing data mining may see a return on investment, but they also
recognize that the number of predictive models can quickly become very large. For example,
rather than using one model to predict how many customers will churn, a business may
choose to build a separate model for each region and customer type. In situations where a
large number of models need to be maintained, some businesses turn to more automated data
mining methodologies.
 Data mining can be helpful to human resources (HR) departments in identifying the
characteristics of their most successful employees. Information obtained – such as
universities attended by highly successful employees – can help HR focus recruiting efforts
accordingly. Additionally, Strategic Enterprise Management applications help a company
translate corporate-level goals, such as profit and margin share targets, into operational
decisions, such as production plans and workforce levels.
 Market basket analysis relates to the use of data mining in retail sales. If a clothing store
records the purchases of customers, a data mining system could identify those customers
who favor silk shirts over cotton ones. Although some explanations of relationships may be
difficult, taking advantage of them is easier. The example deals with association rules within
transaction-based data. Not all data are transaction based, and logical or inexact rules may
also be present within a database.
 Market basket analysis has been used to identify the purchase patterns of the Alpha
Consumer. Analyzing the data collected on this type of user has allowed companies to
predict future buying trends and forecast supply demands.
 Data mining is a highly effective tool in the catalog marketing industry. Catalogers have
a rich database of customer transaction history for millions of customers dating back
a number of years. Data mining tools can identify patterns among customers and help
identify the most likely customers to respond to upcoming mailing campaigns.
 Data mining for business applications can be integrated into a complex modeling and
decision making process. LIONsolver uses Reactive business intelligence (RBI) to advocate
a "holistic" approach that integrates data mining, modeling, and interactive visualization into
an end-to-end discovery and continuous innovation process powered by human and
automated learning.
 In the area of decision making, the RBI approach has been used to mine knowledge that
is progressively acquired from the decision maker, and then self-tune the decision method
accordingly. The relation between the quality of a data mining system and the amount of
investment that the decision maker is willing to make was formalized by providing an
economic perspective on the value of “extracted knowledge” in terms of its payoff to the
organization. This decision-theoretic classification framework was applied to a real-world
semiconductor wafer manufacturing line, where decision rules for effectively monitoring and
controlling the semiconductor wafer fabrication line were developed.
 An example of data mining related to an integrated-circuit (IC) production line is
described in the paper "Mining IC Test Data to Optimize VLSI Testing." Experiments
mentioned demonstrate the ability to apply a system of mining historical die-test data to
create a probabilistic model of patterns of die failure. These patterns are then utilized to
decide, in real time, which die to test next and when to stop testing. This system has been
shown, based on experiments with historical test data, to have the potential to improve
profits on mature IC products. Other examples of the application of data mining
methodologies in semiconductor manufacturing environments suggest that data mining
methodologies may be particularly useful when data is scarce, and the various physical and
chemical parameters that affect the process exhibit highly complex interactions. Another
implication is that on-line monitoring of the semiconductor manufacturing process using data
mining may be highly effective.

Science and engineering

In recent years, data mining has been used widely in the areas of science and engineering, such
as bioinformatics, genetics, medicine, education and electrical power engineering.

 In the study of human genetics, sequence mining helps address the important goal of
understanding the mapping relationship between the inter-individual variations in
human DNA sequence and the variability in disease susceptibility. In simple terms, it aims to
find out how the changes in an individual's DNA sequence affect the risk of developing
common diseases such as cancer, which is of great importance to improving methods of
diagnosing, preventing, and treating these diseases. One data mining method that is used to
perform this task is known as multifactor dimensionality reduction.
 In the area of electrical power engineering, data mining methods have been widely used
for condition monitoring of high voltage electrical equipment. The purpose of condition
monitoring is to obtain valuable information on, for example, the status of the insulation (or
other important safety-related parameters). Data clustering techniques – such as the self-
organizing map (SOM) – have been applied to vibration monitoring and analysis of
transformer on-load tap-changers (OLTCs). Using vibration monitoring, it can be observed
that each tap change operation generates a signal that contains information about the
condition of the tap changer contacts and the drive mechanisms. Obviously, different tap
positions will generate different signals. However, there was considerable variability
amongst normal condition signals for exactly the same tap position. SOM has been applied
to detect abnormal conditions and to hypothesize about the nature of the abnormalities.
 Data mining methods have been applied to dissolved gas analysis (DGA) in power
transformers. DGA, as a diagnostic tool for power transformers, has been available for many
years. Methods such as SOM have been applied to analyze generated data and to determine
trends which are not obvious to the standard DGA ratio methods (such as Duval Triangle).
 In educational research, data mining has been used to study the factors leading
students to choose to engage in behaviors which reduce their learning, and to understand
factors influencing university student retention. A similar example of social application of
data mining is its use in expertise finding systems, whereby descriptors of human expertise
are extracted, normalized, and classified so as to facilitate the finding of experts, particularly
in scientific and technical fields. In this way, data mining can facilitate institutional memory.
 Other examples include data mining methods applied to biomedical data facilitated by domain
ontologies, the mining of clinical trial data, and traffic analysis using SOM.
 In adverse drug reaction surveillance, the Uppsala Monitoring Centre has, since 1998,
used data mining methods to routinely screen for reporting patterns indicative of emerging
drug safety issues in the WHO global database of 4.6 million suspected adverse drug
reaction incidents. Recently, similar methodology has been developed to mine large
collections of electronic health records for temporal patterns associating drug prescriptions to
medical diagnoses.
 Data mining has been applied to software artifacts within the realm of software
engineering: Mining Software Repositories.

Human rights

Data mining of government records – particularly records of the justice system (i.e., courts,
prisons) – enables the discovery of systemic human rights violations in connection to generation
and publication of invalid or fraudulent legal records by various government agencies.
Medical data mining
Some machine learning algorithms can be applied in the medical field as second-opinion diagnostic
tools and as tools for the knowledge extraction phase in the process of knowledge discovery in
databases. One of these classifiers, called the Prototype Exemplar Learning Classifier (PEL-C),
is able to discover syndromes as well as atypical clinical cases.
In 2011, in the case of Sorrell v. IMS Health, Inc., the Supreme Court of the United
States ruled that pharmacies may share information with outside companies. This practice was
authorized under the 1st Amendment of the Constitution, protecting the "freedom of speech."
However, the passage of the Health Information Technology for Economic and Clinical Health
Act (HITECH Act) helped to initiate the adoption of the electronic health record (EHR) and
supporting technology in the United States. The HITECH Act was signed into law on February
17, 2009 as part of the American Recovery and Reinvestment Act (ARRA) and helped to open
the door to medical data mining. Prior to the signing of this law, it was estimated that only 20% of
United States-based physicians were using electronic patient records. Søren Brunak notes that "the
patient record becomes as information-rich as possible" and thereby "maximizes the data mining
opportunities." Hence, electronic patient records further expand the possibilities for medical data
mining, thereby opening the door to a vast source of medical data analysis.
Spatial data mining
Spatial data mining is the application of data mining methods to spatial data. The end objective
of spatial data mining is to find patterns in data with respect to geography. So far, data mining
and Geographic Information Systems (GIS) have existed as two separate technologies, each with
its own methods, traditions, and approaches to visualization and data analysis. Particularly, most
contemporary GIS have only very basic spatial analysis functionality. The immense explosion in
geographically referenced data occasioned by developments in IT, digital mapping, remote
sensing, and the global diffusion of GIS emphasizes the importance of developing data-driven
inductive approaches to geographical analysis and modeling.
Data mining offers great potential benefits for GIS-based applied decision-making. Recently, the
task of integrating these two technologies has become of critical importance, especially as
various public and private sector organizations possessing huge databases with thematic and
geographically referenced data begin to realize the huge potential of the information contained
therein. Among those organizations are:

 offices requiring analysis or dissemination of geo-referenced statistical data


 public health services searching for explanations of disease clustering
 environmental agencies assessing the impact of changing land-use patterns on climate
change
 geo-marketing companies doing customer segmentation based on spatial location.

Challenges in Spatial mining: Geospatial data repositories tend to be very large. Moreover,
existing GIS datasets are often splintered into feature and attribute components that are
conventionally archived in hybrid data management systems. Algorithmic requirements differ
substantially for relational (attribute) data management and for topological (feature) data
management. Related to this is the range and diversity of geographic data formats, which present
unique challenges. The digital geographic data revolution is creating new types of data formats
beyond the traditional "vector" and "raster" formats. Geographic data repositories increasingly
include ill-structured data, such as imagery and geo-referenced multi-media.
There are several critical research challenges in geographic knowledge discovery and data
mining. Miller and Han offer the following list of emerging research topics in the field:

 Developing and supporting geographic data warehouses (GDWs): Spatial properties
are often reduced to simple aspatial attributes in mainstream data warehouses. Creating an
integrated GDW requires solving issues of spatial and temporal data interoperability –
including differences in semantics, referencing systems, geometry, accuracy, and position.
 Better spatio-temporal representations in geographic knowledge discovery: Current
geographic knowledge discovery (GKD) methods generally use very simple representations
of geographic objects and spatial relationships. Geographic data mining methods should
recognize more complex geographic objects (i.e., lines and polygons) and relationships (i.e.,
non-Euclidean distances, direction, connectivity, and interaction through attributed
geographic space such as terrain). Furthermore, the time dimension needs to be more fully
integrated into these geographic representations and relationships.
 Geographic knowledge discovery using diverse data types: GKD methods should be
developed that can handle diverse data types beyond the traditional raster and vector models,
including imagery and geo-referenced multimedia, as well as dynamic data types (video
streams, animation).

Temporal data mining

Data may contain attributes generated and recorded at different times. In this case finding
meaningful relationships in the data may require considering the temporal order of the attributes.
A temporal relationship may indicate a causal relationship, or simply an association.
Sensor data mining
Wireless sensor networks can be used for facilitating the collection of data for spatial data
mining for a variety of applications such as air pollution monitoring. A characteristic of such
networks is that nearby sensor nodes monitoring an environmental feature typically register
similar values. This kind of data redundancy due to the spatial correlation between sensor
observations inspires techniques for in-network data aggregation and mining. By measuring
the spatial correlation between data sampled by different sensors, a wide class of specialized
algorithms can be developed to implement more efficient spatial data mining algorithms.
Visual data mining
In the process of turning from analog into digital, large data sets have been generated, collected,
and stored. Visual data mining aims to discover the statistical patterns, trends and information
hidden in these data in order to build predictive models. Studies suggest visual data mining is
faster and much more intuitive than traditional data mining.
Music data mining
Data mining techniques, and in particular co-occurrence analysis, have been used to discover
relevant similarities among music corpora (radio lists, CD databases) for purposes including
classifying music into genres in a more objective manner.
Surveillance
Data mining has been used by the U.S. government. Programs include the Total Information
Awareness (TIA) program, Secure Flight (formerly known as Computer-Assisted Passenger
Prescreening System (CAPPS II)), Analysis, Dissemination, Visualization, Insight, Semantic
Enhancement (ADVISE), and the Multi-state Anti-Terrorism Information Exchange
(MATRIX). These programs have been discontinued due to controversy over whether they
violate the 4th Amendment to the United States Constitution, although many programs that were
formed under them continue to be funded by different organizations or under different names.
In the context of combating terrorism, two particularly plausible methods of data mining are
"pattern mining" and "subject-based data mining".
Pattern mining
"Pattern mining" is a data mining method that involves finding existing patterns in data. In this
context patterns often means association rules. The original motivation for searching association
rules came from the desire to analyze supermarket transaction data, that is, to examine customer
behavior in terms of the purchased products. For example, an association rule "beer ⇒ potato
chips (80%)" states that four out of five customers that bought beer also bought potato chips.
In the context of pattern mining as a tool to identify terrorist activity, the National Research
Council provides the following definition: "Pattern-based data mining looks for patterns
(including anomalous data patterns) that might be associated with terrorist activity — these
patterns might be regarded as small signals in a large ocean of noise." Pattern mining includes
new areas such as Music Information Retrieval (MIR), where patterns seen both in the temporal
and non-temporal domains are imported to classical knowledge discovery search methods.
Subject-based data mining
"Subject-based data mining" is a data mining method involving the search for associations
between individuals in data. In the context of combating terrorism, the National Research
Council provides the following definition: "Subject-based data mining uses an initiating
individual or other datum that is considered, based on other information, to be of high interest,
and the goal is to determine what other persons or financial transactions or movements, etc., are
related to that initiating datum."
Knowledge Grid
Knowledge discovery "On the Grid" generally refers to conducting knowledge discovery in an
open environment using grid computing concepts, allowing users to integrate data from various
online data sources, as well as make use of remote resources, for executing their data mining tasks.
The earliest example was the Discovery Net, developed at Imperial College London, which won
the "Most Innovative Data-Intensive Application Award" at the ACM SC02 (Supercomputing
2002) conference and exhibition, based on a demonstration of a fully interactive distributed
knowledge discovery application for a bioinformatics application. Other examples include work
conducted by researchers at the University of Calabria, who developed a Knowledge Grid
architecture for distributed knowledge discovery, based on grid computing.

Data visualization - the presentation of data in a pictorial or graphical format.


It enables decision makers to see analytics presented visually, so they can grasp difficult concepts
or identify new patterns.
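For instance, a few lines of Python with the matplotlib library (an assumed dependency; the figures are invented) can turn a small table of numbers into a bar chart that is easier to scan than the raw values.

import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 135, 90, 160]        # made-up figures for illustration

plt.bar(months, sales)
plt.title("Monthly sales")
plt.ylabel("Units sold")
plt.savefig("sales.png")           # or plt.show() in an interactive session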

Lecture 7. NETWORKS AND TELECOMMUNICATIONS.


7.1. General information about networks

A computer network, or simply a network, is a collection of computers and other hardware
interconnected by communication channels that allow sharing of resources and information.
Today, computer networks are the core of modern communication. All modern aspects of the
public switched telephone network (PSTN) are computer-controlled, and telephony increasingly
runs over the Internet Protocol, although not necessarily the public Internet. The scope of
communication has increased significantly in the past decade, and this boom in communications
would not have been possible without the progressively advancing computer network. Computer
networks, and the technologies needed to connect and communicate through and between them,
continue to drive computer hardware, software, and peripherals industries. This expansion is
mirrored by growth in the numbers and types of users of networks, from the researcher to the
home user.
 Facilitate communications
Using a network, people can communicate efficiently and easily via email, instant messaging,
chat rooms, telephone, video telephone calls, and video conferencing.
 Permit sharing of files, data, and other types of information
In a network environment, authorized users may access data and information stored on other
computers on the network. The capability of providing access to data and information on shared
storage devices is an important feature of many networks.
 Share network and computing resources
In a networked environment, each computer on a network may access and use resources
provided by devices on the network, such as printing a document on a shared network printer.
Distributed computing uses computing resources across a network to accomplish tasks.
 May be insecure
A computer network may be used by computer hackers to deploy computer viruses or
computer worms on devices connected to the network, or to prevent these devices from normally
accessing the network.
7.2. History

Before the advent of computer networks that were based upon some type of
telecommunications system, communication between calculation machines and early computers
was performed by human users by carrying instructions between them. Many of the social
behaviors seen in today's Internet were demonstrably present in the 19th century and arguably in
even earlier networks using visual signals.
 In September 1940, George Stibitz used a Teletype machine to send instructions for a
problem set from his Model at Dartmouth College to his Complex Number Calculator in New
York and received results back by the same means. Linking output systems like teletypewriters to
computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962,
J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Computer
Network", a precursor to the ARPANET.
 Early networks of communicating computers included the military radar system Semi-
Automatic Ground Environment (SAGE), started in the late 1950s.
 The commercial airline reservation system semi-automatic business research environment
(SABRE) went online with two connected mainframes in 1960.
 In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for
distributed users of large computer systems. The same year, at Massachusetts Institute of
Technology, a research group supported by General Electric and Bell Labs used a computer to
route and manage telephone connections.
 Throughout the 1960s Leonard Kleinrock, Paul Baran and Donald Davies independently
conceptualized and developed network systems which used packets that could be used in a
network between computer systems.
 1965 Thomas Marill and Lawrence G. Roberts created the first wide area network
(WAN). This was an immediate precursor to the ARPANET, of which Roberts became program
manager.
 The first widely used telephone switch that used true computer control was introduced by
Western Electric in 1965.
 In 1969 the University of California at Los Angeles, the Stanford Research Institute,
University of California at Santa Barbara, and the University of Utah were connected as the
beginning of the ARPANET network using 50 kbit/s circuits.
 Commercial services using X.25 were deployed in 1972, and later used as an underlying
infrastructure for expanding TCP/IP networks. The initial expansion of the ARPANET is shown in Figure 1.

Figure 1. Initial expansion of the ARPANET: (a) Dec. 1969; (b) July 1970; (c) Mar. 1971; (d) Apr. 1972; (e) Sept. 1972

 1973 Bob Kahn poses Internet problem---how to connect ARPANET, packet radio
network, and satellite network
 1974 Vint Cerf, Bob Kahn publish initial design of Internet protocols (including TCP) to
connect multiple networks
 Christmas Day Lockup - Harvard IMP hardware problem leads it to broadcast zero-length
hops to any ARPANET destination, causing all other IMPs to send their traffic to Harvard
 1978 TCP (NCP) split to TCP/IP. New applications kept the process going forward. The
Internet Becomes a Network of Networks
 1980 ARPANET grinds to a complete halt on 27 October because of an accidentally-
propagated status-message virus
 1981 BITNET (Because It’s Time NETwork) between CUNY and Yale. Store and
forward network for email.
 1983 Name server developed at Univ of Wisconsin, no longer requiring users to know the
exact path to other systems
 1986 NSF builds NSFNET as backbone, links 6 supercomputer centers, 56 kbps; this
allows an explosion of connections, especially from universities
 1987 10,000 hosts
 1988 NSFNET backbone upgrades to 1.5 Mbps. The Internet worm burrows through the Net,
affecting 6,000 hosts
 1989 100,000 hosts. Growth of the Internet
 1990 ARPANET ceases to exist
 1991 NSF lifts restrictions on the commercial use of the Net; Berners-Lee of European
Organization for Nuclear Research (CERN) released World Wide Web
 1992 1 million hosts (RFC 1300: Remembrances of Things Past)
 1994 NSF reverts back to research network (vBNS); the backbone of the Internet consists
of multiple private backbones
 2000 Backbones run at 10 Gbps; more than 400 million computers in 150 countries
 2012 Internet Users 2,405,518,376

7.3. Classification of networks

Networks may be classified according to a wide variety of characteristics, such as the
geographical scope, medium used to transport the data, communications protocol used, topology,
benefit, and organizational scope.
Computer network types by geographical scope
 Near field (NFC)
 Body (BAN)
 Personal (PAN)
 Near-me (NAN)
 Local (LAN)
 Campus (CAN)
 Backbone
 Metropolitan (MAN)
 Wide (WAN)
 Internet
 Interplanetary Internet
Networks are often classified by their physical or organizational extent or their purpose. Usage,
trust level, and access rights differ between these types of networks.
Personal area network
A personal area network (PAN) is a computer network used for communication among
computers and different information technology devices close to one person. Some examples of
devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs,
scanners, and even video game consoles. A PAN may include wired and wireless devices. The
reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB
and Firewire connections while technologies such as Bluetooth and infrared communication
typically form a wireless PAN.
Local area network
A local area network (LAN) is a network that connects computers and devices in a limited
geographical area such as home, school, computer laboratory, office building, or closely
positioned group of buildings. Each computer or device on the network is a node. Current wired
LANs are most likely to be based on Ethernet technology, although new standards like ITU-T
G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone
lines and power lines).
A sample LAN is depicted in the accompanying diagram. All interconnected devices must
understand the network layer (layer 3), because they are handling multiple subnets (the different
colors). Those inside the library, which have only 10/100 Mbit/s Ethernet connections to the user
device and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches"
because they only have Ethernet interfaces and must understand IP. It would be more correct to
call them access routers, where the router at the top is a distribution router that connects to the
Internet and academic networks' customer access routers.
The defining characteristics of LANs, in contrast to WANs (Wide Area Networks), include
their higher data transfer rates, smaller geographic range, and no need for leased
telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data
transfer rates up to 10 Gbit/s. IEEE has projects investigating the standardization of 40 and 100
Gbit/s. LANs can be connected to a wide area network by using routers.
Campus area network
A Campus area network (CAN) is a computer network made up of an interconnection of
LANs within a limited geographical area. The networking equipment (switches, routers) and
transmission media (optical fiber, copper plant, Cat5 cabling etc.) are almost entirely owned by
the campus tenant / owner (an enterprise, university, government etc.).
In the case of a university campus network, the network is likely to link a
variety of campus buildings including, for example, academic colleges or departments, the
university library, and student residence halls.
Backbone network
A backbone network (Figure 2) is part of a computer network infrastructure that
interconnects various pieces of network, providing a path for the exchange of information
between different LANs or subnetworks. A backbone can tie together diverse networks in the
same building, in different buildings in a campus environment, or over wide areas. Normally, the
backbone's capacity is greater than that of the networks connected to it.

Figure 2
A large corporation which has many locations may have a backbone network that ties all of
these locations together, for example, if a server cluster needs to be accessed by different
departments of a company which are located at different geographical locations. The equipment
that ties these departments together constitutes the network backbone. Network performance
and network congestion are critical parameters taken into account when
designing a network backbone.
A specific case of a backbone network is the Internet backbone, which is the set of wide-area
network connections and core routers that interconnect all networks connected to the Internet.
Metropolitan area network
A Metropolitan area network (MAN) is a large computer network that usually spans a city or a
large campus.
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area such
as a city or country, or even spans intercontinental distances, using a communications channel that
combines many types of media such as telephone lines, cables, and air waves. A WAN often uses
transmission facilities provided by common carriers, such as telephone companies. WAN
technologies generally function at the lower three layers of the OSI reference model: the physical
layer, the data link layer, and the network layer.
Enterprise private network
An enterprise private network is a network built by an enterprise to interconnect various
company sites, e.g., production sites, head offices, remote offices, shops, in order to share
computer resources.
Virtual private network

Figure 3

A virtual private network (Figure 3) (VPN) is a computer network in which some of the links
between nodes are carried by open connections or virtual circuits in some larger network (e.g.,
the Internet) instead of by physical wires. The data link layer protocols of the virtual network are
said to be tunneled through the larger network when this is the case. One common application is
secure communications through the public Internet, but a VPN need not have explicit security
features, such as authentication or content encryption. VPNs, for example, can be used to separate
the traffic of different user communities over an underlying network with strong security features.
Virtual Network
Not to be confused with a Virtual Private Network, a Virtual Network defines data traffic
flows between virtual machines within a hypervisor in a virtual computing environment. Virtual
Networks may employ virtual security switches, virtual routers, virtual firewalls and other virtual
networking devices to direct and secure data traffic.
Internetwork
An Internetwork is the connection of multiple computer networks via a common routing
technology using routers. The Internet is an aggregation of many connected internetworks
spanning the Earth.
Communication media
Computer networks can be classified according to the hardware and associated software
technology that is used to interconnect the individual devices in the network, such as electrical
cable (HomePNA, power line communication, G.hn), optical fiber, and radio waves (wireless
LAN).
A well-known family of communication media is collectively known as Ethernet. It is defined
by IEEE 802 and utilizes various standards and media that enable communication between
devices. Wireless LAN technology is designed to connect devices without wiring. These devices
use radio waves or infrared signals as a transmission medium.
Wired technologies
The order of the following wired technologies is, roughly, from slowest to fastest transmission
speed.
 Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair
cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of
two insulated copper wires twisted into pairs. Computer networking cabling (wired Ethernet as
defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice
and data transmission. The use of two wires twisted together helps to reduce crosstalk and
electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10
billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP)
and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in
various scenarios.
 Coaxial cable is widely used for cable television systems, office buildings, and other
work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by
an insulating layer (typically a flexible material with a high dielectric constant), which itself is
surrounded by a conductive layer. The insulation helps minimize interference and distortion.
Transmission speed ranges from 200 million bits per second to more than 500 million bits per
second.
 ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power
lines) to create a high-speed (up to 1 Gigabit/s) local area network.
 An optical fiber is a glass fiber. It uses pulses of light to transmit data. Some advantages
of optical fibers over metal wires are less transmission loss, immunity from electromagnetic
radiation, and very fast transmission speed, up to trillions of bits per second. One can use
different colors of light to increase the number of messages being sent over a fiber optic cable.
Wireless technologies
 Terrestrial microwave – Terrestrial microwave communication uses Earth-based
transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low-
gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced
approximately 48 km (30 mi) apart.
 Communications satellites – The satellites communicate via microwave radio waves,
which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically
in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems
are capable of receiving and relaying voice, data, and TV signals.
 Cellular and PCS systems use several radio communications technologies. The systems
divide the region covered into multiple geographic areas. Each area has a low-power transmitter
or radio relay antenna device to relay calls from one area to the next area.
 Radio and spread spectrum technologies – Wireless local area networks use a high-
frequency radio technology similar to digital cellular and a low-frequency radio technology.
Wireless LANs use spread spectrum technology to enable communication between multiple
devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-
wave technology.
 Infrared communication can transmit signals for small distances, typically no more than
10 meters. In most cases, line-of-sight propagation is used, which limits the physical positioning
of communicating devices.
 A global area network (GAN) is a network used for supporting mobile communications
across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile
communications is handing off user communications from one local coverage area to the next. In
IEEE Project 802, this involves a succession of terrestrial wireless LANs.
Organizational scope
Networks are typically managed by organizations which own them. According to the owner's
point of view, networks are seen as intranets or extranets. A special case of network is the
Internet, which has no single owner but a distinct status when seen by an organizational entity –
that of permitting virtually unlimited global connectivity for a great multitude of purposes.
Intranets and extranets
Intranets and extranets are parts or extensions of a computer network, usually a LAN.
An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web
browsers and file transfer applications, that is under the control of a single administrative entity.
That administrative entity closes the intranet to all but specific, authorized users. Most
commonly, an intranet is the internal network of an organization. A large intranet will typically
have at least one web server to provide users with organizational information.
An extranet is a network that is limited in scope to a single organization or entity and also has
limited connections to the networks of one or more other, usually but not necessarily trusted,
organizations or entities. For example, a company's customers may be given access to some part of
its intranet while at the same time the customers may not be considered trusted from a security standpoint.
Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of
network, although an extranet cannot consist of a single LAN; it must have at least one
connection with an external network.
Internet
The Internet is a global system of interconnected governmental, academic, corporate, public,
and private computer networks. It is based on the networking technologies of the Internet
Protocol Suite. It is the successor of the Advanced Research Projects Agency Network
(ARPANET) developed by DARPA of the United States Department of Defense.
The Internet is also the communications backbone underlying the World Wide Web
(WWW). Participants in the Internet use a diverse array of several hundred
documented, and often standardized, protocols compatible with the Internet Protocol Suite and an
addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and
address registries. Service providers and large enterprises exchange information about the
reachability of their address spaces through the Border Gateway Protocol (BGP), forming a
redundant worldwide mesh of transmission paths.
Network topology
Common layouts
A network topology is the layout of the interconnections of the nodes of a computer network.
Common layouts are:
 A bus network (Figure 4): all nodes are connected along a common medium. This was
the layout used in the original Ethernet, called 10BASE5 and 10BASE2.

Figure 4

 A star network (Figure 5): all nodes are connected to a special central node. This is the
typical layout found in a Wireless LAN, where each wireless client connects to the central
Wireless access point.

Figure 5

 A ring network (Figure 6): each node is connected to its left and right neighbour node,
such that all nodes are connected and that each node can reach each other node by traversing
nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a
topology.
Figure 6

 A mesh network (Figure 7): each node is connected to an arbitrary number of neighbours
in such a way that there is at least one traversal from any node to any other.

Figure 7

 A fully connected network (Figure 8): each node is connected to every other node in the
network.

Figure 8
Note that the physical layout of the nodes in a network may not necessarily reflect the network
topology. As an example, with FDDI, the network topology is a ring (actually two counter-
rotating rings), but the physical topology is a star, because all neighboring connections are routed
via a central physical location.
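One way to see the difference between these layouts is to write each topology down as an adjacency list; the short Python sketch below builds a ring and a star over the same five (arbitrarily named) nodes.

def ring(nodes):
    # Each node is linked to its left and right neighbour.
    return {n: {nodes[(i - 1) % len(nodes)], nodes[(i + 1) % len(nodes)]}
            for i, n in enumerate(nodes)}

def star(center, leaves):
    # Every leaf is linked only to the central node.
    topology = {leaf: {center} for leaf in leaves}
    topology[center] = set(leaves)
    return topology

nodes = ["A", "B", "C", "D", "E"]
print(ring(nodes))            # A-B-C-D-E-A
print(star("hub", nodes))     # hub in the middle, A..E around it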
Overlay network
An overlay network (Figure 9) is a virtual computer network that is built on top of another
network. Nodes in the overlay are connected by virtual or logical links, each of which
corresponds to a path, perhaps through many physical links, in the underlying network. The
topology of the overlay network may (and often does) differ from that of the underlying one.
For example, many peer-to-peer networks are overlay networks because they are organized as
nodes of a virtual system of links run on top of the Internet. The Internet was initially built as an
overlay on the telephone network.
The most striking example of an overlay network, however, is the Internet itself: At the IP
layer, each node can reach any other by a direct connection to the desired IP address, thereby
creating a fully connected network; the underlying network, however, is composed of a mesh-like
interconnect of subnetworks of varying topologies (and, in fact, technologies). Address resolution
and routing are the means which allow the mapping of the fully connected IP overlay network to
the underlying ones.

Figure 9

Overlay networks have been around since the invention of networking when computer systems
were connected over telephone lines using modems, before any data network existed.
Another example of an overlay network is a distributed hash table, which maps keys to nodes
in the network. In this case, the underlying network is an IP network, and the overlay network is a
table (actually a map) indexed by keys.
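A very rough sketch of that idea in Python: hash each key and use the hash value to pick the node responsible for it (real distributed hash tables use consistent hashing and replication, which this toy version omits; the node and key names are invented).

import hashlib

nodes = ["node-a", "node-b", "node-c"]

def node_for(key):
    # Hash the key and map the digest onto one of the participating nodes.
    digest = hashlib.sha1(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

for key in ["song.mp3", "report.pdf", "photo.jpg"]:
    print(key, "->", node_for(key))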
Overlay networks have also been proposed as a way to improve Internet routing, such as
through quality of service guarantees to achieve higher-quality streaming media.
An overlay network can be incrementally deployed on end-hosts running the overlay protocol
software, without cooperation from Internet service providers. The overlay has no control over
how packets are routed in the underlying network between two overlay nodes, but it can control,
for example, the sequence of overlay nodes a message traverses before reaching its destination.
For example, Akamai Technologies manages an overlay network that provides reliable,
efficient content delivery (a kind of multicast). Academic research includes end system multicast
and overcast for multicast; RON (resilient overlay network) for resilient routing; and OverQoS
for quality of service guarantees, among others.
7.4. Basic hardware components

Apart from the physical communications media themselves as described above, networks
comprise additional basic hardware building blocks interconnecting their terminals, such as
network interface cards (NICs), hubs, bridges, switches, and routers.
Network interface cards
A network card, network adapter, or NIC (network interface card) is a piece of computer
hardware designed to allow computers to physically access a networking medium. It provides a
low-level addressing system through the use of MAC addresses.
Each Ethernet network interface has a unique MAC address which is usually stored in a small
memory device on the card, allowing any device to connect to the network without creating an
address conflict. Ethernet MAC addresses are composed of six octets. Uniqueness is maintained
by the IEEE, which manages the Ethernet address space by assigning 3-octet prefixes to
equipment manufacturers. The list of prefixes is publicly available. Each manufacturer is then
obliged to both use only their assigned prefix(es) and to uniquely set the 3-octet suffix of every
Ethernet interface they produce.
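The split between the IEEE-assigned prefix and the manufacturer-assigned suffix can be seen by taking an address apart, as in this small Python sketch (the MAC address shown is an example value, not a real device).

mac = "00:1A:2B:3C:4D:5E"      # example address, not a real device
octets = mac.split(":")

oui = octets[:3]               # first 3 octets: prefix assigned by the IEEE
device = octets[3:]            # last 3 octets: set uniquely by the manufacturer

print("OUI (manufacturer prefix):", "-".join(oui))
print("Device-specific part:", "-".join(device))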
Repeaters and hubs
A repeater is an electronic device that receives a signal, cleans it of unnecessary noise,
regenerates it, and retransmits it at a higher power level, or to the other side of an obstruction, so
that the signal can cover longer distances without degradation. In most twisted pair Ethernet
configurations, repeaters are required for cable that runs longer than 100 meters. A repeater with
multiple ports is known as a hub.
Repeaters work on the Physical Layer of the OSI model. Repeaters require a small amount of
time to regenerate the signal. This can cause a propagation delay which can affect network
communication when there are several repeaters in a row. Today, repeaters and hubs have been
made mostly obsolete by switches (see below).
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the
OSI model. Bridges broadcast to all ports except the port on which the broadcast was received.
However, bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which
MAC addresses are reachable through specific ports. Once the bridge associates a port and an
address, it will send traffic for that address to that port only.
A bridge learns the association of ports and addresses by examining the source address of frames
that it sees on various ports. Once a frame arrives through a port, its source address is stored and
the bridge assumes that MAC address is associated with that port. The first time that a previously
unknown destination address is seen, the bridge will forward the frame to all ports other than the
one on which the frame arrived.
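The learning behaviour just described can be sketched in a few lines of Python (a simplified model for illustration, not the implementation of any particular bridge): the table is filled from source addresses, and frames for unknown destinations are flooded to every other port.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports        # e.g. [1, 2, 3]
        self.mac_table = {}       # MAC address -> port on which it was last seen

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: associate the source address with the arrival port.
        self.mac_table[src_mac] = in_port
        # Forward: use the learned port if known, otherwise flood to all other ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge([1, 2, 3])
print(bridge.handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", 1))  # unknown: flood to [2, 3]
print(bridge.handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", 2))  # learned: send to [1]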
Bridges come in three basic types:
 Local bridges: Directly connect LANs.
 Remote bridges: Can be used to create a wide area network (WAN) link between LANs.
Remote bridges, where the connecting link is slower than the end networks, largely have been
replaced with routers.
 Wireless bridges: Can be used to join LANs or connect remote stations to LANs.
Switches
A network switch is a device that forwards and filters OSI layer 2 frames (chunks of data
communication) between ports (connected cables) based on the MAC addresses in the frames. A
switch is distinct from a hub in that it only forwards frames to the ports involved in the
communication rather than to all connected ports. A switch breaks up the collision domain but
still forms a single broadcast domain. Switches make forwarding decisions for frames on the
basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for
devices, and cascading additional switches. Some switches are capable of routing based on Layer
3 addressing or additional logical levels; these are called multi-layer switches. The term switch is
used loosely in marketing to encompass devices including routers and bridges, as well as devices
that may distribute traffic on load or by application content (e.g., a Web URL identifier).
Routers
A router is an internetworking device that forwards packets between networks by processing
information found in the datagram or packet (Internet protocol information from Layer 3 of the
OSI model). In many situations, this information is processed in conjunction with the routing
table (also known as forwarding table). Routers use routing tables to determine the interface on
which to forward each packet; this can include the “null” interface, also known as the “black hole”
interface, because data can go into it but no further processing is done on that data.
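As a simplified illustration of such a forwarding decision (not the algorithm of any particular router), the Python sketch below performs a longest-prefix match against a small, hypothetical routing table using the standard ipaddress module; a destination that matches no route is sent to a "null" (black hole) interface in this sketch:

import ipaddress

# Hypothetical routing table: prefix -> outgoing interface.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("192.168.0.0/24"): "eth2",
}

def forward(destination: str) -> str:
    # Longest-prefix match; unmatched packets go to the "null" interface here.
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    if not matches:
        return "null"
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(forward("10.1.2.3"))     # eth1 (the /16 route is more specific than the /8)
print(forward("203.0.113.5"))  # null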
Firewalls
A firewall is an important aspect of a network with respect to security. It typically rejects
access requests from unsafe sources while allowing actions from recognized ones. The vital role
firewalls play in network security grows in parallel with the constant increase in ‘cyber’ attacks
for the purpose of stealing/corrupting data, planting viruses, etc.
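In its simplest packet-filtering form, a firewall can be pictured as a list of rules checked in order. The Python sketch below (hypothetical addresses and rules, not a real firewall configuration) allows recognised sources and rejects everything else by default:

import ipaddress

# Hypothetical rules, checked from top to bottom: (source network, action).
rules = [
    (ipaddress.ip_network("192.168.1.0/24"), "allow"),  # trusted internal subnet
    (ipaddress.ip_network("203.0.113.0/24"), "deny"),   # known-bad address range
]

def decide(source: str) -> str:
    addr = ipaddress.ip_address(source)
    for network, action in rules:
        if addr in network:
            return action
    return "deny"  # default policy: reject anything not explicitly recognised

print(decide("192.168.1.10"))  # allow
print(decide("198.51.100.7"))  # deny (default policy)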
Network performance
Network performance refers to the service quality of a telecommunications product as seen by
the customer. It should not be seen merely as an attempt to get “more through” the network.
There are many different ways to measure the performance of a network, as each network is
different in nature and design. Performance can also be modelled instead of measured; one
example of this is using state transition diagrams to model queuing performance in a circuit-
switched network. These diagrams allow the network planner to analyze how the network will
perform in each state, ensuring that the network will be optimally designed.
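One classical example of such a queuing model for circuit-switched systems is the Erlang B formula, which gives the probability that an arriving call finds all circuits in a group busy. The Python sketch below computes it iteratively; the offered traffic and circuit count are arbitrary example values:

def erlang_b(traffic_erlangs: float, channels: int) -> float:
    # Blocking probability for a circuit group, built up one channel at a time.
    b = 1.0
    for m in range(1, channels + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

print(round(erlang_b(5.0, 10), 4))  # chance that a call finds all 10 circuits busy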
Network security
In the field of networking, the area of network security consists of the provisions and policies
adopted by the network administrator to prevent and monitor unauthorized access, misuse,
modification, or denial of the computer network and network-accessible resources. Network
security is the authorization of access to data in a network, which is controlled by the network
administrator. Users are assigned an ID and password that allows them access to information and
programs within their authority. Network security covers a variety of computer networks, both
public and private, that are used in everyday jobs, conducting transactions and communications
among businesses, government agencies and individuals.

Lecture 8. CYBERSAFETY
8.1. Malicious software

Malware (malicious software) is a general name for all programs that are harmful. Malware
includes computer viruses, computer worms, Trojan horses, most rootkits, spyware, dishonest
adware and other malicious or unwanted software.
A computer virus is a computer program that can replicate itself and spread from one
computer to another. The term "virus" is also commonly, but erroneously, used to refer to other
types of malware, including but not limited to adware and spyware programs that do not have a
reproductive ability.
Viruses are sometimes confused with worms and Trojan horses, which are technically
different. A worm can exploit security vulnerabilities to spread itself automatically to other
computers through networks, while a Trojan horse is a program that appears harmless but hides
malicious functions. Worms and Trojan horses, like viruses, may harm a computer system's data
or performance. Some viruses and other malware have symptoms noticeable to the computer
user, but many are surreptitious or simply do nothing to call attention to themselves. Some
viruses do nothing beyond reproducing themselves.
An example of a virus which is not malware, but is putatively benevolent, is Fred Cohen's
theoretical compression virus. However, antivirus professionals do not accept the concept of
benevolent viruses, as any desired function can be implemented without involving a virus
(automatic compression, for instance, is available under the Windows operating system at the
choice of the user). Any virus will by definition make unauthorised changes to a computer, which
is undesirable even if no damage is done or intended.
History
Academic work
The first academic work on the theory of computer viruses (although the term "computer
virus" was not used at that time) was done in 1949 by John von Neumann who gave lectures at
the University of Illinois about the "Theory and Organization of Complicated Automata". The
work of von Neumann was later published as the "Theory of self-reproducing automata". In his
essay von Neumann described how a computer program could be designed to reproduce
itself. Von Neumann's design for a self-reproducing computer program is considered the world's
first computer virus, and he is considered to be the theoretical father of computer virology.
In 1972 Veith Risak, directly building on von Neumann's work on self-replication, published
his article “Self-reproducing automata with minimal information exchange”. The article describes
a fully functional virus written in assembler language for a SIEMENS 4004/35 computer system.
In 1980 Jürgen Kraus wrote his diploma thesis "Self-reproduction of programs" at the
University of Dortmund. In his work Kraus postulated that computer programs can behave in a
way similar to biological viruses.
In 1984 Fred Cohen from the University of Southern California wrote his paper "Computer
Viruses - Theory and Experiments". It was the first paper to explicitly call a self-reproducing
program a "virus", a term introduced by Cohen's mentor Leonard Adleman. In 1987, Fred Cohen
published a demonstration that there is no algorithm that can perfectly detect all possible viruses.
Virus programs
The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early
1970s. Creeper was an experimental self-replicating program written by Bob Thomas at BBN
Technologies in 1971. Creeper gained access via the ARPANET and copied itself to the remote
system where the message, "I'm the creeper, catch me if you can!" was displayed. The Reaper
program was created to delete Creeper.
A program called "Elk Cloner" was the first personal computer virus to appear "in the wild"—
that is, outside the single computer or lab where it was created. Written in 1981 by Richard
Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk. This
virus, created as a practical joke when Skrenta was still in high school, was injected in a game on
a floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the personal
computer and displaying a short poem beginning "Elk Cloner: The program with a personality."
Macro viruses have become common since the mid-1990s. Most of these viruses are written in
the scripting languages for Microsoft programs such as Word and Excel and spread throughout
Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also
available for Mac OS, most could also spread to Macintosh computers. Although most of these
viruses did not have the ability to send infected email messages, those that did took
advantage of the Microsoft Outlook COM interface.
A virus may also send a web address link as an instant message to all the contacts on an
infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the
link to the website, the virus hosted at the site may be able to infect this new computer and
continue propagating.
Viruses that spread using cross-site scripting were first reported in 2002, and were
academically demonstrated in 2005. There have been multiple instances of the cross-site scripting
viruses in the wild, exploiting websites such as MySpace and Yahoo!

8.2. Classification

In order to replicate itself, a virus must be permitted to execute code and write to memory. For
this reason, many viruses attach themselves to executable files that may be part of legitimate
programs. If a user attempts to launch an infected program, the virus' code may be executed
simultaneously. Viruses can be divided into two types based on their behavior when they are
executed. Nonresident viruses immediately search for other hosts that can be infected, infect
those targets, and finally transfer control to the application program they infected.
Resident viruses do not search for hosts when they are started. Instead, a resident virus loads
itself into memory on execution and transfers control to the host program. The virus stays active
in the background and infects new hosts when those files are accessed by other programs or the
operating system itself.
Nonresident viruses can be thought of as consisting of a finder module and a replication
module. The finder module is responsible for finding new files to infect. For each new executable
file the finder module encounters, it calls the replication module to infect that file.
Resident viruses contain a replication module that is similar to the one that is employed by
nonresident viruses. This module, however, is not called by a finder module. The virus loads the
replication module into memory when it is executed instead and ensures that this module is
executed each time the operating system is called to perform a certain operation. The replication
module can be called, for example, each time the operating system executes a file. In this case the
virus infects every suitable program that is executed on the computer.
Resident viruses are sometimes subdivided into a category of fast infectors and a category of
slow infectors. Fast infectors are designed to infect as many files as possible. A fast infector, for
instance, can infect every potential host file that is accessed. This poses a special problem when
using anti-virus software, since a virus scanner will access every potential host file on a computer
when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is
present in memory the virus can "piggy-back" on the virus scanner and in this way infect all files
that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this
method is that infecting many files may make detection more likely, because the virus may slow
down a computer or perform many suspicious actions that can be noticed by anti-virus software.
Slow infectors, on the other hand, are designed to infect hosts infrequently. Some slow infectors,
for instance, only infect files when they are copied. Slow infectors are designed to avoid
detection by limiting their actions: they are less likely to slow down a computer noticeably and
will, at most, infrequently trigger anti-virus software that detects suspicious behavior by
programs. The slow infector approach, however, does not seem very successful.
Stealth
While some antivirus software employ various techniques to counter stealth mechanisms, once
the infection occurs any recourse to clean the system is unreliable. In Microsoft Windows
operating systems, the NTFS file system is proprietary. Direct access to files without using the
Windows OS is undocumented. This leaves antivirus software little alternative but to send a read
request to Windows OS files that handle such requests. Some viruses trick antivirus software by
intercepting its requests to the OS. A virus can hide itself by intercepting the request to read the
infected file, handling the request itself, and returning an uninfected version of the file to the
antivirus software. The interception can occur by code injection of the actual operating system
files that would handle the read request. Thus, an antivirus software attempting to detect the virus
will either not be given permission to read the infected file, or, the read request will be served
with the uninfected version of the same file.
File hashes stored in Windows, to identify altered Windows files, can be overwritten so that
the System File Checker will report that system files are originals.
The only reliable method to avoid stealth is to boot from a medium that is known to be clean.
Security software can then be used to check the dormant operating system files. Most security
software relies on virus signatures or employs heuristics, instead of also using a database of
file hashes for Windows OS files. Using file hashes to scan for altered files would guarantee
removing an infection. The security software can identify the altered files, and request Windows
installation media to replace them with authentic versions.
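The file-hash idea can be illustrated with a short Python sketch (a simplified check against a hypothetical database of known-good hashes; it does not reflect how System File Checker or any particular security product actually works):

import hashlib

def sha256_of(path: str) -> str:
    # Compute the SHA-256 hash of a file, reading it in blocks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical database of known-good hashes for protected files.
known_good = {
    "C:/Windows/System32/example.dll": "0123abcd...",  # placeholder value, not a real hash
}

def is_altered(path: str) -> bool:
    # A file whose current hash differs from the recorded one has been modified.
    return sha256_of(path) != known_good.get(path)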
Self-modification
Most modern antivirus programs try to find virus-patterns inside ordinary programs by
scanning them for so-called virus signatures. Unfortunately, the term is misleading, in that
viruses do not possess unique signatures in the way that human beings do. Such a virus signature
is merely a sequence of bytes that an antivirus program looks for because it is known to be part of
the virus. A better term would be "search strings". Different antivirus programs will employ
different search strings, and indeed different search methods, when identifying viruses. If a virus
scanner finds such a pattern in a file, it will perform other checks to make sure that it has found
the virus, and not merely a coincidental sequence in an innocent file, before it notifies the user
that the file is infected. The user can then delete, or (in some cases) "clean" or "heal" the infected
file. Some viruses employ techniques that make detection by means of signatures difficult but
probably not impossible. These viruses modify their code on each infection. That is, each infected
file contains a different variant of the virus.
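To make the idea of a search string concrete, the Python sketch below scans a file for made-up byte patterns (purely illustrative, not real signatures); a virus that changes its code on each infection defeats exactly this kind of fixed pattern, which is what the techniques below address:

# Hypothetical signature database: name -> byte sequence known to occur in the virus body.
signatures = {
    "Example.TestVirus": b"EXAMPLE-VIRUS-MARKER-1234",  # made-up pattern, not a real signature
}

def scan_file(path: str):
    # Return the names of any signatures found in the file's raw bytes.
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in signatures.items() if pattern in data]

As described above, a real scanner performs further checks after such a match before reporting the file as infected.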
Polymorphic code
Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like
regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself,
which is decoded by a decryption module. In the case of polymorphic viruses, however, this
decryption module is also modified on each infection. A well-written polymorphic virus therefore
has no parts which remain identical between infections, making it very difficult to detect directly
using signatures. Antivirus software can detect it by decrypting the viruses using an emulator, or
by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus
has to have a polymorphic engine (also called mutating engine or mutation engine) somewhere in
its encrypted body.
Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus
significantly. For example, a virus can be programmed to mutate only slightly over time, or it can
be programmed to refrain from mutating when it infects a file on a computer that already contains
copies of the virus. The advantage of using such slow polymorphic code is that it makes it more
difficult for antivirus professionals to obtain representative samples of the virus, because bait files
that are infected in one run will typically contain identical or similar samples of the virus. This
will make it more likely that the detection by the virus scanner will be unreliable, and that some
instances of the virus may be able to avoid detection.
Avoiding bait files and other undesirable hosts
A virus needs to infect hosts in order to spread further. In some cases, it might be a bad idea to
infect a host program. For example, many antivirus programs perform an integrity check of their
own code. Infecting such programs will therefore increase the likelihood that the virus is
detected. For this reason, some viruses are programmed not to infect programs that are known to
be part of antivirus software. Another type of host that viruses sometimes avoid are bait files. Bait
files (or goat files) are files that are specially created by antivirus software, or by antivirus
professionals themselves, to be infected by a virus. These files can be created for various reasons,
all of which are related to the detection of the virus:
 Antivirus professionals can use bait files to take a sample of a virus (i.e. a copy of a
program file that is infected by the virus). It is more practical to store and exchange a small,
infected bait file, than to exchange a large application program that has been infected by the
virus.
 Antivirus professionals can use bait files to study the behavior of a virus and evaluate
detection methods. This is especially useful when the virus is polymorphic. In this case, the virus
can be made to infect a large number of bait files. The infected files can be used to test whether a
virus scanner detects all versions of the virus.
 Some antivirus software employ bait files that are accessed regularly. When these files are
modified, the antivirus software warns the user that a virus is probably active on the system.
Since bait files are used to detect the virus, or to make detection possible, a virus can benefit
from not infecting them. Viruses typically do this by avoiding suspicious programs, such as small
program files or programs that contain certain patterns of "garbage instructions".
A related strategy to make baiting difficult is sparse infection. Sometimes, sparse infectors do
not infect a host file that would be a suitable candidate for infection in other circumstances. For
example, a virus can decide on a random basis whether to infect a file or not, or a virus can only
infect host files on particular days of the week.

8.3. Vulnerability and countermeasures

Anti-virus software and other preventive measures


Many users install anti-virus software that can detect and eliminate known viruses after the
computer downloads or runs the executable. There are two common methods that an anti-virus
software application uses to detect viruses. The first, and by far the most common, method of
virus detection is using a list of virus signature definitions. This works by examining the content
of the computer's memory (its RAM, and boot sectors) and the files stored on fixed or removable
drives (hard drives, floppy drives), and comparing those files against a database of known virus
"signatures". The disadvantage of this detection method is that users are only protected from
viruses that pre-date their last virus definition update. The second method is to use a heuristic
algorithm to find viruses based on common behaviors. This method has the ability to detect novel
viruses that anti-virus security firms have yet to create a signature for.
Some anti-virus programs are able to scan opened files in addition to sent and received email
messages "on the fly" in a similar manner. This practice is known as "on-access scanning". Anti-
virus software does not change the underlying capability of host software to transmit viruses.
Users must update their software regularly to patch security holes. Anti-virus software also needs
to be regularly updated in order to recognize the latest threats.
One may also minimize the damage done by viruses by making regular backups of data (and
the operating systems) on different media, that are either kept unconnected to the system (most of
the time), read-only or not accessible for other reasons, such as using different file systems. This
way, if data is lost through a virus, one can start again using the backup (which should preferably
be recent).
Virus removal
One possibility on Windows Me, Windows XP, Windows Vista and Windows 7 is a tool
known as System Restore, which restores the registry and critical system files to a previous
checkpoint. Often a virus will cause a system to hang, and a subsequent hard reboot will render a
system restore point from the same day corrupt. Restore points from previous days should work
provided the virus is not designed to corrupt the restore files and does not exist in previous
restore points. Some viruses disable System Restore and other important tools such as Task
Manager and Command Prompt. An example of a virus that does this is CiaDoor. Many such
viruses can be removed by rebooting the computer, entering Windows safe mode, and then using
system tools.
Many websites run by anti-virus software companies provide free online virus scanning, with
limited cleaning facilities (the purpose of the sites is to sell anti-virus products). Some websites
allow a single suspicious file to be checked by many antivirus programs in one operation.
Additionally, several capable antivirus software programs are available for free download from
the internet (usually restricted to non-commercial use), and Microsoft provides a free anti-malware
utility that runs as part of their regular Windows update regime.
Operating system reinstallation
Reinstalling the operating system is another approach to virus removal. It involves either
reformatting the computer's hard drive and installing the OS and all programs from original
media, or restoring the entire partition with a clean backup image. User data can be restored by
booting from a live CD, or putting the hard drive into another computer and booting from its
operating system, using great care not to infect the second computer by executing any infected
programs on the original drive; and once the system has been restored precautions must be taken
to avoid reinfection from a restored executable file.
These methods are simple to do, may be faster than disinfecting a computer, and are
guaranteed to remove any malware. If the operating system and programs must be reinstalled
from scratch, the time and effort to reinstall, reconfigure, and restore user preferences must be
taken into account.

Lecture 9. INTERNET TECHNOLOGIES.


9.1. Evolution of web design

Web design encompasses many different skills and disciplines in the production and
maintenance of websites. The different areas of web design include web graphic design; interface
design; authoring, including standardised code and proprietary software; user experience design;
and search engine optimization. Often many individuals will work in teams covering different
aspects of the design process, although some designers will cover them all.
The term web design is normally used to describe the design process relating to the front-end
(client side) design of a website including writing mark up. Web design partially overlaps web
engineering in the broader scope of web development. Web designers are expected to have an
awareness of usability and if their role involves creating mark up then they are also expected to
be up to date with web accessibility guidelines.
History
1988—2001
Although web design has a fairly recent history, it can be linked to other areas such as graphic
design. However, web design can also be seen from a technological standpoint. It has become a large
part of people’s everyday lives. It is hard to imagine the Internet without animated graphics, different
styles of typography, backgrounds and music.
The start of the web and web design
In 1989, whilst working at CERN, Tim Berners-Lee proposed to create a global hypertext
project, which later became known as the World Wide Web. Throughout 1991 to 1993 the World
Wide Web was born. Text only pages could be viewed using a simple line-mode browser. In
1993 Marc Andreessen and Eric Bina created the Mosaic browser. At the time there were
multiple browsers however the majority of them were Unix-based and were naturally text heavy.
There had been no integrated approach to graphical design elements such as images or sounds.
The Mosaic browser broke this mould. The W3C was created in October 1994, to "lead the
World Wide Web to its full potential by developing common protocols that promote its evolution
and ensure its interoperability." This discouraged any one company from monopolizing a
proprietary browser and programming language, which could have altered the effect of the World
Wide Web as a whole. The W3C continues to set standards, which can today be seen with
JavaScript. In 1994 Andreessen formed Mosaic Communications Corp., which later became known as
Netscape Communications and released the Netscape 0.9 browser. Netscape created its own HTML tags
without regard to the traditional standards process. For example, Netscape 1.1 included tags for
changing background colors and formatting text with tables on web pages. Throughout 1996 to
1999 the browser wars began, as Microsoft and Netscape fought for ultimate browser dominance.
During this time there were many new technologies in the field, notably Cascading Style Sheets,
JavaScript, and Dynamic HTML. On the whole, the browser competition did lead to many positive
creations and helped web design evolve at a rapid pace.
In 1996, Microsoft released its first competitive browser, which was complete with its own
features and tags. It was also the first browser to support style sheets, which at the time was seen
as an obscure authoring technique. The HTML markup for tables was originally intended for
displaying tabular data. However designers quickly realized the potential of using HTML tables
for creating the complex, multi-column layouts that were otherwise not possible. At this time,
design and good aesthetics seemed to take precedence over good mark-up structure, and little
attention was paid to semantics and web accessibility. HTML sites were limited in their design
options, even more so with earlier versions of HTML. To create complex designs, many web
designers had to use complicated table structures or even use blank spacer GIF images to stop
empty table cells from collapsing. CSS was introduced in December 1996 by the W3C to support
presentation and layout; this allowed HTML code to be semantic rather than both semantic and
presentational, and improved web accessibility (see tableless web design).
In 1996, Flash (originally known as FutureSplash) was developed. At the time, the Flash
content development tool was relatively simple compared to now, using basic layout and drawing
tools, a limited precursor to ActionScript, and a timeline, but it enabled web designers to go
beyond the point of HTML, animated GIFs and JavaScript. However, because Flash required a
plug-in, many web developers avoided using it for fear of limiting their market share from lack of
compatibility. Instead, designers reverted to gif animations (if they didn't forego using motion
graphics altogether) and JavaScript for widgets. But the benefits of Flash made it popular enough
among specific target markets to eventually work its way to the vast majority of browsers, and
powerful enough to be used to develop entire sites.
End of the first browser wars
During 1998 Netscape released Netscape Communicator code under an open source licence,
enabling thousands of developers to participate in improving the software. However, these developers
decided to scrap the existing code and start again from the beginning, which guided the development
of the open source Mozilla browser and soon expanded into a complete application platform.
and promoted browser compliance with HTML and CSS standards by creating Acid1, Acid2, and
Acid3 tests. 2000 was a big year for Microsoft. Internet Explorer had been released for Mac; this
was significant as it was the first browser that fully supported HTML 4.01 and CSS 1, raising the
bar in terms of standards compliance. It was also the first browser to fully support the PNG image
format. During this time Netscape was sold to AOL and this was seen as Netscape’s official loss
to Microsoft in the browser wars.
2001—2012
Since the start of the 21st century the web has become more and more integrated into people's
lives; as this has happened, the technology of the web has also moved on. There have also been
significant changes in the way people use and access the web, and this has changed how sites are
designed.
Modern browsers
Since the end of the browser wars there have been new browsers coming onto the scene.
Many of these are open source, meaning that they tend to have faster development and are more
supportive of new standards. The new options are considered by many to be better than
Microsoft's Internet Explorer.
New standards
The W3C has released new standards of HTML (HTML5) and CSS (CSS3), as well as new
JavaScript APIs, each as a new but individual standard. However, while the term HTML5 strictly
refers only to the new version of HTML and some of the JavaScript APIs, it has become
common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript).

9.2. Technologies and techniques

Web designers use a variety of different tools depending on what part of the production
process they are involved in. These tools are updated over time by newer standards and software
but the principles behind them remain the same. Web graphic designers use vector and raster
graphics packages for creating web formatted imagery or design prototypes. Technologies used
for creating websites include standardised mark-up, which could be hand-coded or generated by
WYSIWYG editing software. There is also proprietary software based on plug-ins that bypasses
the client’s browser version. These are often WYSIWYG but with the option of using the
software’s scripting language. Search engine optimisation tools may be used to check search
engine ranking and suggest improvements.
Other tools web designers might use include mark up validators and other testing tools for
usability and accessibility to ensure their web sites meet web accessibility guidelines.
Marketing and communication design
Marketing and communication design on a website may identify what works for its target
market. This can be an age group or particular strand of culture; thus the designer may understand
the trends of its audience. Designers may also understand the type of website they are designing,
meaning, for example, that (B2B) business-to-business website design considerations might differ
greatly from a consumer targeted website such as a retail or entertainment website. Careful
consideration might be made to ensure that the aesthetics or overall design of a site do not clash
with the clarity and accuracy of the content or the ease of web navigation, especially on a B2B
website. Designers may also consider the reputation of the owner or business the site is
representing to make sure they are portrayed favourably.
User experience design and interactive design
User understanding of the content of a website often depends on user understanding of how the
website works. This is part of the user experience design. User experience is related to layout,
clear instructions and labeling on a website. How well a user understands how they can interact
on a site may also depend on the interactive design of the site. If a user perceives the usefulness
of that website, they are more likely to continue using it. Users who are skilled and well versed
with website use may find a more unique, yet less intuitive or less user-friendly website interface
useful nonetheless. However, users with less experience are less likely to see the advantages or
usefulness of a less intuitive website interface. This drives the trend for a more universal user
experience and ease of access to accommodate as many users as possible regardless of user skill.
Much of the user experience design and interactive design are considered in the user interface
design.
Advanced interactive functions may require plug-ins if not advanced coding language skills.
Choosing whether or not to use interactivity that requires plug-ins is a critical decision in user
experience design. If the plug-in doesn't come pre-installed with most browsers, there's a risk that
the user will have neither the know how, nor the patience to install a plug-in just to access the
content. If the function requires advanced coding language skills, it may be too costly in either
time or money to code compared to the amount of enhancement the function will add to the user
experience. There's also a risk that advanced interactivity may be incompatible with older
browsers or hardware configurations. Publishing a function that doesn't work reliably is
potentially worse for the user experience than making no attempt. It depends on the target
audience if it's likely to be needed or worth any risks.
Page layout
Part of the user interface design is affected by the quality of the page layout. For example, a
designer may consider if the site's page layout should remain consistent on different pages when
designing the layout. Page pixel width may also be considered vital for aligning objects in the
layout design. The most popular fixed-width websites generally have the same set width to match
the current most popular browser window, at the current most popular screen resolution, on the
current most popular monitor size. Most pages are also center-aligned for concerns of aesthetics
on larger screens.
Fluid layouts increased in popularity around 2000 as an alternative to HTML-table-based
layouts and grid-based design in both page layout design principle, and in coding technique, but
were very slow to be adopted. This was due to considerations of screen reading devices and
windows varying in sizes which designers have no control over. Accordingly, a design may be
broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas)
that are sent to the browser and which will be fitted into the display window by the browser, as
best it can. As the browser does recognize the details of the reader's screen (window size, font
size relative to window etc.) the browser can make user-specific layout adjustments to fluid
layouts, but not fixed-width layouts. Although such a display may often change the relative
position of major content units, sidebars may be displaced below body text rather than to the side
of it. This is a more flexible display than a hard-coded grid-based layout that doesn't fit the device
window. In particular, the relative position of content blocks may change while leaving the
content within the block unaffected. This also minimizes the user's need to horizontally scroll the
page.
Responsive Web Design is a newer approach, based on CSS3, and a deeper level of per-device
specification within the page's stylesheet through an enhanced use of the CSS @media rule
(media queries).
Typography
Web designers may choose to limit the variety of website typefaces to only a few which are of
a similar style, instead of using a wide range of typefaces or type styles. Most browsers recognize
a specific number of safe fonts, which designers mainly use in order to avoid complications.
Font downloading was later included in the CSS3 fonts module and has since been
implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5. This has subsequently increased
interest in web typography, as well as the usage of font downloading.
Most layouts on a site incorporate negative space to break the text up into paragraphs and also
avoid center-aligned text.
Motion graphics
The page layout and user interface may also be affected by the use of motion graphics. The
choice of whether or not to use motion graphics may depend on the target market for the website.
Motion graphics may be expected or at least better received with an entertainment-oriented
website. However, a website target audience with a more serious or formal interest (such as
business, community, or government) might find animations unnecessary and distracting if only
for entertainment or decoration purposes. This doesn't mean that more serious content couldn't be
enhanced with animated or video presentations that are relevant to the content. In either case,
motion graphic design may make the difference between more effective visuals or distracting
visuals.
Quality of code
Website designers may consider it to be good practice to conform to standards. This is usually
done via a description specifying what the element is doing. Failure to conform to standards may
not make a website unusable or error prone, but standards can relate to the correct layout of pages
for readability as well as making sure coded elements are closed appropriately. This includes avoiding
errors in code, using a more organized layout for code, and making sure IDs and classes are identified
properly.
Poorly-coded pages are sometimes colloquially called tag soup. Validating via W3C can only be
done when a correct DOCTYPE declaration is made, which is used to highlight errors in code.
The system identifies the errors and areas that do not conform to web design standards. This
information can then be corrected by the user.
Occupations
There are two primary jobs involved in creating a website: the web designer and web
developer, who often work closely together on a website. The web designers are responsible for
the visual aspect, which includes the layout, coloring and typography of a web page. Web
designers will also have a working knowledge of using a variety of languages such as HTML,
CSS, JavaScript, PHP and Flash to create a site, although the extent of their knowledge will differ
from one web designer to another. Particularly in smaller organizations one person will need the
necessary skills for designing and programming the full web page, while larger organizations
may have a web designer responsible for the visual aspect alone.
Further jobs, which under particular circumstances may become involved during the creation
of a website include:
 Graphic designers to create visuals for the site such as logos, layouts and buttons
 Internet marketing specialists to help maintain web presence through strategic solutions
on targeting viewers to the site, by using marketing and promotional techniques on the internet
 SEO writers to research and recommend the correct words to be incorporated into a
particular website and make the website more accessible and easier to find on numerous search engines
 Internet copywriter to create the written content of the page to appeal to the targeted
viewers of the site
 User experience (UX) designer incorporates aspects of user focused design considerations
which include information architecture, user centered design, user testing, interaction design, and
occasionally visual design.

9.3 HTML

9.3.1. Markup Languages

Markup languages are used to create web documents. Markup languages use instructions,
known as markup tags, to format and link text files.
HTML (Hyper Text Markup Language), which allows us to describe how information will be
displayed on web pages (uses pre-defined tags).
XML, which stands for EXtensible Markup Language (enables us to define our own tags).
VoiceXML, which makes Web content accessible via voice and phone (is used to create voice
applications that run on the phone).

9.3.2. Some historical facts


The first publicly available description of HTML was a document called "HTML Tags", first
mentioned on the Internet by Berners-Lee in late 1991. It describes 18 elements comprising the
initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly
influenced by SGMLguid, an in-house SGML based documentation format at CERN. Eleven of
these elements still exist in HTML 4.
Hyper Text Markup Language is a markup language that web browsers use to interpret and
compose text, images and other material into visual or audible web pages. Default characteristics
for every item of HTML markup are defined in the browser, and these characteristics can be
altered or enhanced by the web page designer's additional use of CSS. Many of the text elements
are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn
covers the features of early text formatting languages such as that used by the RUNOFF
command developed in the early 1960s for the CTSS (Compatible Time-Sharing System)
operating system: these formatting commands were derived from the commands used by
typesetters to manually format documents. However, the SGML concept of generalized markup is
based on elements (nested annotated ranges with attributes) rather than merely print effects, with
also the separation of structure and processing; HTML has been progressively moved in this
direction with CSS.
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such
by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal
for an HTML specification: "Hypertext Markup Language (HTML)" Internet-Draft by Berners-
Lee and Dan Connolly, which included an SGML Document Type Definition to define the
grammar. The draft expired after six months, but was notable for its acknowledgement of the
NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's
philosophy of basing standards on successful prototypes. Similarly, Dave Raggett's competing
Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing
already-implemented features like tables and fill-out forms.
After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML
Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to
be treated as a standard against which future implementations should be based.
Further development under the auspices of the IETF was stalled by competing interests. Since
1996, the HTML specifications have been maintained, with input from commercial software
vendors, by the World Wide Web Consortium (W3C). However, in 2000, HTML also became an
international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with
further errata published through 2001. In 2004 development began on HTML5 in the Web
Hypertext Application Technology Working Group (WHATWG), which became a joint
deliverable with the W3C in 2008.

9.3.3. HTML tags and attributes

HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets
(like <html>), within the web page content. HTML tags most commonly come in pairs like <h1>
and </h1>, although some tags, known as empty elements, are unpaired, for example <img>. The
first tag in a pair is the start tag, the second tag is the end tag (they are also called opening tags
and closing tags). In between these tags web designers can add text, tags, comments and other
types of text-based content.
The purpose of a web browser is to read HTML documents and compose them into visible or
audible web pages. The browser does not display the HTML tags, but uses the tags to interpret
the content of the page.
HTML elements form the building blocks of all websites. HTML allows images and objects to
be embedded and can be used to create interactive forms. It provides a means to create structured
documents by denoting structural semantics for text such as headings, paragraphs, lists, links,
quotes and other items. It can embed scripts in languages such as JavaScript which affect the
behavior of HTML webpages.
Web browsers can also refer to Cascading Style Sheets (CSS) to define the appearance and
layout of text and other material. The W3C, maintainer of both the HTML and the CSS standards,
encourages the use of CSS over explicit presentational HTML markup.
Markup
HTML markup consists of several key components, including elements (and their attributes),
character-based data types, character references and entity references. Another important
component is the document type declaration, which triggers standards mode rendering.
The following is an example of the classic Hello world program, a common test employed for
comparing programming languages, scripting languages and markup languages. This example is
made using 9 lines of code:
<!DOCTYPE html>
<html>
<head>
<title>Hello HTML</title>
</head>
<body>
<p>Hello World!</p>
</body>
</html>
(The text between <html> and </html> describes the web page, and the text between <body>
and </body> is the visible page content. The markup text '<title>Hello HTML</title>' defines
the browser page title.)
This Document Type Declaration is for HTML5. If the <!DOCTYPE html> declaration is not
included, various browsers will revert to "quirks mode" for rendering.
Elements
HTML documents are composed entirely of HTML elements that, in their most general form
have three components: a pair of tags, a "start tag" and "end tag"; some attributes within the start
tag; and finally, any textual and graphical content between the start and end tags, perhaps
including other nested elements. The HTML element is everything between and including the
start and end tags.
The general form of an HTML element is therefore: <tag attribute1="value1"
attribute2="value2">content</tag>. Some HTML elements are defined as empty elements and
take the form <tag attribute1="value1" attribute2="value2" >. Empty elements may enclose no
content, for instance, the BR tag or the inline IMG tag. The name of an HTML element is the
name used in the tags. Note that the end tag's name is preceded by a slash character, "/", and that
in empty elements the end tag is neither required nor allowed. If attributes are not mentioned,
default values are used in each case.
Element examples
Header of the HTML document:<head>...</head>. Usually the title should be included in the
head, for example:
<head>
<title>The Title</title>
</head>
Headings: HTML headings are defined with the <h1> to <h6> tags:
<h1>Heading1</h1>
<h2>Heading2</h2>
<h3>Heading3</h3>
<h4>Heading4</h4>
<h5>Heading5</h5>
<h6>Heading6</h6>
Paragraphs:
<p>Paragraph 1</p> <p>Paragraph 2</p>
Line breaks:<br />. The difference between <br /> and <p> is that 'br' breaks a line without
altering the semantic structure of the page, whereas 'p' sections the page into paragraphs. Note
also that 'br' is an empty element in that, while it may have attributes, it can take no content and it
may not have an end tag.
<p>This <br /> is a paragraph <br /> with <br /> line breaks</p>
Comments:
<!-- This is a comment -->
Comments can help understanding of the markup and do not display in the webpage.
There are several types of markup elements used in HTML.
Structural markup describes the purpose of text
For example, <h2>Golf</h2> establishes "Golf" as a second-level heading. Structural markup
does not denote any specific rendering, but most web browsers have default styles for element
formatting. Content may be further styled using Cascading Style Sheets (CSS).
Presentational markup describes the appearance of the text, regardless of its purpose
For example <b>boldface</b> indicates that visual output devices should render "boldface" in
bold text, but gives little indication what devices that are unable to do this (such as aural devices
that read the text aloud) should do. In the case of both <b>bold</b> and <i>italic</i>, there are
other elements that may have equivalent visual renderings but which are more semantic in nature,
such as <strong>strong text</strong> and <em>emphasised text</em> respectively. It is easier to
see how an aural user agent should interpret the latter two elements. However, they are not
equivalent to their presentational counterparts: it would be undesirable for a screen-reader to
emphasize the name of a book, for instance, but on a screen such a name would be italicized.
Most presentational markup elements have become deprecated under the HTML 4.0
specification, in favor of using CSS for styling.
Hypertext markup makes parts of a document into links to other documents
An anchor element creates a hyperlink in the document and its href attribute sets the link's
target URL. For example the HTML markup,
<a href="http://www.wikipedia.org/">Wikipedia</a>, will render the word "Wikipedia" as a
hyperlink. To render an image as a hyperlink, an 'img' element is inserted as content into the 'a'
element. Like 'br', 'img' is an empty element with attributes but no content or closing tag. <a
href="http://example.org"><img src="image.gif" alt="descriptive text" width="50" height="50"
border="0"></a>.
Attributes
Most of the attributes of an element are name-value pairs, separated by "=" and written within
the start tag of an element after the element's name. The value may be enclosed in single or
double quotes, although values consisting of certain characters can be left unquoted in HTML
(but not XHTML). Leaving attribute values unquoted is considered unsafe. In contrast with
name-value pair attributes, there are some attributes that affect the element simply by their
presence in the start tag of the element, like the ismap attribute for the img element.
Data types
HTML defines several data types for element content, such as script data and stylesheet data,
and a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length,
languages, media descriptors, colors, character encodings, dates and times, and so on. All of these
data types are specializations of character data.
Delivery
HTML documents can be delivered by the same means as any other computer file. However,
they are most often delivered either by HTTP from a web server or by email.
HTTP
The World Wide Web is composed primarily of HTML documents transmitted from web
servers to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used
to serve images, sound, and other content, in addition to HTML. To allow the Web browser to
know how to handle each document it receives, other information is transmitted along with the
document. This meta data usually includes the MIME type (e.g. text/html or
application/xhtml+xml) and the character encoding.
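As a small illustration, the Python sketch below fetches a page with the standard library and prints the advertised MIME type and character encoding (the URL is a placeholder; any reachable web server you are permitted to fetch would do):

from urllib.request import urlopen

# Placeholder URL; substitute any page you are permitted to fetch.
with urlopen("http://example.org/") as response:
    print(response.headers.get_content_type())     # e.g. text/html
    print(response.headers.get_content_charset())  # e.g. utf-8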
In modern browsers, the MIME type that is sent with the HTML document may affect how
the document is initially interpreted. A document sent with the XHTML MIME type is expected
to be well-formed XML; syntax errors may cause the browser to fail to render it. The same
document sent with the HTML MIME type might be displayed successfully, since some browsers
are more lenient with HTML.
The W3C recommendations state that XHTML 1.0 documents that follow guidelines set forth
in the recommendation's Appendix C may be labeled with either MIME Type. The current
XHTML 1.1 Working Draft also states that XHTML 1.1 documents should be labeled with either
MIME type.

Tag Description
<!--...--> Defines a comment
<!DOCTYPE> Defines the document type
<a> Defines a hyperlink
<abbr> Defines an abbreviation
<acronym> Not supported in HTML5. Defines an acronym
<address> Defines contact information for the author/owner of a document
<applet> Not supported in HTML5. Deprecated in HTML 4.01. Defines an
embedded applet
<area> Defines an area inside an image-map
<article> Defines an article
<aside> Defines content aside from the page content
<audio> Defines sound content
<b> Defines bold text
<base> Specifies the base URL/target for all relative URLs in a document
<basefont> Not supported in HTML5. Deprecated in HTML 4.01. Specifies a
default color, size, and font for all text in a document
<bdi> Isolates a part of text that might be formatted in a different
direction from other text outside it
<bdo> Overrides the current text direction
<big> Not supported in HTML5. Defines big text
<blockquote> Defines a section that is quoted from another source
<body> Defines the document's body
<br> Defines a single line break
<button> Defines a clickable button
<canvas> Used to draw graphics, on the fly, via scripting (usually
JavaScript)
<caption> Defines a table caption
<center> Not supported in HTML5. Deprecated in HTML 4.01. Defines
centered text
<cite> Defines the title of a work
<code> Defines a piece of computer code
<col> Specifies column properties for each column within a <colgroup>
element
<colgroup> Specifies a group of one or more columns in a table for formatting
<command> Defines a command button that a user can invoke
<datalist> Specifies a list of pre-defined options for input controls
<dd> Defines a description of an item in a definition list
<del> Defines text that has been deleted from a document
<details> Defines additional details that the user can view or hide
<dfn> Defines a definition term
<dir> Not supported in HTML5. Deprecated in HTML 4.01. Defines a
directory list
<div> Defines a section in a document
<dl> Defines a definition list
<dt> Defines a term (an item) in a definition list
<em> Defines emphasized text
<embed> Defines a container for an external (non-HTML) application
<fieldset> Groups related elements in a form
<figcaption> Defines a caption for a <figure> element
<figure> Specifies self-contained content
<font> Not supported in HTML5. Deprecated in HTML 4.01. Defines
font, color, and size for text
<footer> Defines a footer for a document or section
<form> Defines an HTML form for user input
<frame> Not supported in HTML5. Defines a window (a frame) in a
frameset
<frameset> Not supported in HTML5. Defines a set of frames
<h1> to <h6> Defines HTML headings
<head> Defines information about the document
<header> Defines a header for a document or section
<hgroup> Groups heading (<h1> to <h6>) elements
<hr> Defines a thematic change in the content
<html> Defines the root of an HTML document
<i> Defines a part of text in an alternate voice or mood
<iframe> Defines an inline frame
<img> Defines an image
<input> Defines an input control
<ins> Defines a text that has been inserted into a document
<kbd> Defines keyboard input
<keygen> Defines a key-pair generator field (for forms)
<label> Defines a label for an <input> element
<legend> Defines a caption for a <fieldset>, <figure>, or <details> element
<li> Defines a list item
<link> Defines the relationship between a document and an external
resource (most often used to link to style sheets)
<map> Defines a client-side image-map
<mark> Defines marked/highlighted text
<menu> Defines a list/menu of commands
<meta> Defines metadata about an HTML document
<meter> Defines a scalar measurement within a known range (a gauge)
<nav> Defines navigation links
<noframes> Not supported in HTML5. Defines an alternate content for users
that do not support frames
<noscript> Defines an alternate content for users that do not support client-
side scripts
<object> Defines an embedded object
<ol> Defines an ordered list
<optgroup> Defines a group of related options in a drop-down list
<option> Defines an option in a drop-down list
<output> Defines the result of a calculation
<p> Defines a paragraph
<param> Defines a parameter for an object
<pre> Defines preformatted text
<progress> Represents the progress of a task
<q> Defines a short quotation
<rp> Defines what to show in browsers that do not support ruby
annotations
<rt> Defines an explanation/pronunciation of characters (for East
Asian typography)
<ruby> Defines a ruby annotation (for East Asian typography)
<s> Defines text that is no longer correct
<samp> Defines sample output from a computer program
<script> Defines a client-side script
<section> Defines a section in a document
<select> Defines a drop-down list
<small> Defines smaller text
<source> Defines multiple media resources for media elements (<video>
and <audio>)
<span> Defines a section in a document
<strike> Not supported in HTML5. Deprecated in HTML 4.01. Defines
strikethrough text
<strong> Defines important text
<style> Defines style information for a document
<sub> Defines subscripted text
<summary> Defines a visible heading for a <details> element
<sup> Defines superscripted text
<table> Defines a table
<tbody> Groups the body content in a table
<td> Defines a cell in a table
<textarea> Defines a multiline input control (text area)
<tfoot> Groups the footer content in a table
<th> Defines a header cell in a table
<thead> Groups the header content in a table
<time> Defines a date/time
<title> Defines a title for the document
<tr> Defines a row in a table
<track> Defines text tracks for media elements (<video> and <audio>)
<tt> Not supported in HTML5. Defines teletype text
<u> Defines text that should be stylistically different from normal text
<ul> Defines an unordered list
<var> Defines a variable
<video> Defines a video or movie
<wbr> Defines a possible line-break
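As a brief illustration, a minimal HTML5 page that uses a handful of the elements listed above might be structured as follows (the titles, file name and text are made-up sample content):

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Example page</title>
</head>
<body>
  <header>
    <h1>Site title</h1>
    <nav><a href="#news">News</a></nav>
  </header>
  <section id="news">
    <h2>First item</h2>
    <p>Some <strong>important</strong> text and an image: <img src="photo.jpg" alt="Photo"></p>
  </section>
  <footer>Contact information</footer>
</body>
</html>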
An HTML table consists of the <table> element and one or more <tr>, <th>, and <td>
elements.
The <tr> element defines a table row, the <th> element defines a table header, and the <td>
element defines a table cell.
A more complex HTML table may also include <caption>, <col>, <colgroup>, <thead>,
<tfoot>, and <tbody> elements.
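For example, a small table that combines these elements might be written as follows (the caption and cell values are made-up sample data):

<table>
  <caption>Monthly visitors</caption>
  <thead>
    <tr>
      <th>Month</th>
      <th>Visitors</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>January</td>
      <td>1200</td>
    </tr>
    <tr>
      <td>February</td>
      <td>1350</td>
    </tr>
  </tbody>
</table>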
Attributes

Attribute (possible values): Description

align (left, center, right): Not supported in HTML5. Deprecated in HTML 4.01. Specifies the alignment of a table according to surrounding text.
bgcolor (rgb(x,x,x), #xxxxxx, colorname): Not supported in HTML5. Deprecated in HTML 4.01. Specifies the background color for a table.
border (1, ""): Specifies whether the table cells should have borders or not.
cellpadding (pixels): Not supported in HTML5. Specifies the space between the cell wall and the cell content.
cellspacing (pixels): Not supported in HTML5. Specifies the space between cells.
frame (void, above, below, hsides, lhs, rhs, vsides, box, border): Not supported in HTML5. Specifies which parts of the outside borders should be visible.
rules (none, groups, rows, cols, all): Not supported in HTML5. Specifies which parts of the inside borders should be visible.
summary (text): Not supported in HTML5. Specifies a summary of the content of a table.
width (pixels, %): Not supported in HTML5. Specifies the width of a table.
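As a sketch of how these attributes were written in HTML 4.01 (the values below are arbitrary; as the table notes, most of these attributes are not supported in HTML5, where CSS is used instead):

<table border="1" cellpadding="5" cellspacing="0" width="100%" summary="Sales by month">
  <tr>
    <th>Month</th>
    <th>Sales</th>
  </tr>
  <tr>
    <td>March</td>
    <td>90</td>
  </tr>
</table>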
Lecture 10. CLOUD AND MOBILE TECHNOLOGIES.

Mobile Cloud Computing (MCC) is the combination of cloud computing, mobile computing
and wireless networks to bring rich computational resources to mobile users, network operators,
as well as cloud computing providers. The ultimate goal of MCC is to enable execution of rich
mobile applications on a plethora of mobile devices, with a rich user experience. MCC provides
business opportunities for mobile network operators as well as cloud providers. More
comprehensively, MCC can be defined as "a rich mobile computing technology that leverages
unified elastic resources of varied clouds and network technologies toward unrestricted
functionality, storage, and mobility to serve a multitude of mobile devices anywhere, anytime
through the channel of Ethernet or Internet regardless of heterogeneous environments and
platforms based on the pay-as-you-use principle."
MCC uses computational augmentation approaches (computations are executed remotely instead of on the device) by which resource-constrained mobile devices can utilize the computational resources of varied cloud-based resources. In MCC, there are four types of cloud-based resources, namely distant immobile clouds, proximate immobile computing entities, proximate mobile computing entities, and hybrid (a combination of the other three models). Giant clouds such as Amazon EC2 are in the distant immobile group, whereas cloudlets or surrogates are members of the proximate immobile computing entities. Smartphones, tablets, handheld devices, and wearable computing devices are part of the third group of cloud-based resources, which is proximate mobile computing entities.
Challenges
In the MCC landscape, an amalgam of mobile computing, cloud computing, and communication
networks (to augment smartphones) creates several complex challenges such as Mobile
Computation Offloading, Seamless Connectivity, Long WAN Latency, Mobility Management,
Context-Processing, Energy Constraint, Vendor/data Lock-in, Security and Privacy, Elasticity
that hinder MCC success and adoption.
Although significant research and development in MCC is available in the literature, efforts in the following domains are still lacking:

 Architectural issues: A reference architecture for a heterogeneous MCC environment is a crucial requirement for unleashing the power of mobile computing towards unrestricted ubiquitous computing.
 Energy-efficient transmission: MCC requires frequent transmissions between the cloud platform and mobile devices; due to the stochastic nature of wireless networks, the transmission protocol should be carefully designed.
 Context-awareness issues: Context-aware and socially-aware computing are inseparable
traits of contemporary handheld computers. To achieve the vision of mobile computing
among heterogeneous converged networks and computing devices, designing resource-
efficient environment-aware applications is an essential need.
 Live VM migration issues: Executing resource-intensive mobile applications via Virtual Machine (VM) migration-based application offloading involves encapsulating the application in a VM instance and migrating it to the cloud, which is a challenging task due to the additional overhead of deploying and managing VMs on mobile devices.
 Mobile communication congestion issues: Mobile data traffic is growing tremendously as mobile users increasingly demand cloud resources, which puts pressure on mobile network operators and demands future efforts to enable smooth communication between mobile and cloud endpoints.
 Trust, security, and privacy issues: Trust is an essential factor for the success of the burgeoning MCC paradigm, because data, along with code, components, applications, or even complete VMs, is offloaded to the cloud for execution. Moreover, just as with software and mobile application piracy, MCC application development models are also affected by the piracy issue. Pirax is known to be the first specialized framework for controlling application piracy in the MCC environment.

What is the difference between cloud computing and mobile computing?


Both cloud computing and mobile computing have to do with using wireless systems to transmit
data. Beyond this, these two terms are quite different.
Cloud computing relates to the specific design of new technologies and services that allow data
to be sent over distributed networks, through wireless connections, to a remote secure location
that is usually maintained by a vendor. Cloud service providers usually serve multiple clients.
They arrange access between the client's local or closed networks, and their own data storage and
data backup systems. That means that the vendor can intake data that is sent to them and store it
securely, while delivering services back to a client through these carefully maintained
connections.
Mobile computing relates to the emergence of new devices and interfaces. Smartphones and
tablets are mobile devices that can do a lot of what traditional desktop and laptop computers do.
Mobile computing functions include accessing the Internet through browsers, supporting
multiple software applications with a core operating system, and sending and receiving different
types of data. The mobile operating system, as an interface, supports users by providing intuitive
icons, familiar search technologies and easy touch-screen commands.
While mobile computing is largely a consumer-facing service, cloud computing is something
that is used by many businesses and companies. Individuals can also benefit from cloud
computing, but some of the most sophisticated and advanced cloud computing services are
aimed at enterprises. For example, big businesses and even smaller operations use specific cloud
computing services to make different processes like supply-chain management, inventory
handling, customer relationships and even production more efficient. An emerging picture of the
difference between cloud computing and mobile computing involves the emergence of smart
phone and tablet operating systems and, on the cloud end, new networking services that may
serve these and other devices.

Lecture 11. MULTIMEDIA TECHNOLOGIES.

Multimedia is content that uses a combination of different content forms such as text, audio,
images, animations, video and interactive content. Multimedia contrasts with media that use only
rudimentary computer displays such as text-only or traditional forms of printed or hand-
produced material.
Multimedia can be recorded and played, displayed, interacted with or accessed
by information content processing devices, such as computerized and electronic devices, but can
also be part of a live performance. Multimedia devices are electronic media devices used to store
and experience multimedia content. Multimedia is distinguished from mixed media in fine art;
for example, by including audio it has a broader scope. In the early years of multimedia the term
"rich media" was synonymous with interactive multimedia, and "hypermedia" was a application
of multimedia.
The term multimedia was coined by singer and artist Bob Goldstein (later 'Bobb Goldsteinn') to
promote the July 1966 opening of his "LightWorks at L'Oursin" show at Southampton, Long
Island. Goldstein was perhaps aware of an American artist named Dick Higgins, who had two
years previously discussed a new approach to art-making he called "intermedia".
On August 10, 1966, Richard Albarino of Variety borrowed the terminology, reporting:
"Brainchild of songscribe-comic Bob ('Washington Square') Goldstein, the 'Lightworks' is the
latest multi-media music-cum-visuals to debut as discothèque fare." Two years later, in 1968, the
term "multimedia" was re-appropriated to describe the work of a political consultant, David
Sawyer, the husband of Iris Sawyer—one of Goldstein's producers at L'Oursin.
In the intervening forty years, the word has taken on different meanings. In the late 1970s, the
term referred to presentations consisting of multi-projector slide shows timed to an audio track.
However, by the 1990s 'multimedia' took on its current meaning.
In the 1993 first edition of Multimedia: Making It Work, Tay Vaughan declared "Multimedia is
any combination of text, graphic art, sound, animation, and video that is delivered by computer.
When you allow the user – the viewer of the project – to control what and when these elements
are delivered, it is interactive multimedia. When you provide a structure of linked elements
through which the user can navigate, interactive multimedia becomes hypermedia."
The German language society Gesellschaft für deutsche Sprache recognized the word's
significance and ubiquitousness in the 1990s by awarding it the title of German 'Word of the
Year' in 1995. The institute summed up its rationale by stating "Multimedia has become a central
word in the wonderful new media world".
In common usage, multimedia refers to an electronically delivered combination of media
including video, still images, audio, and text in such a way that can be accessed interactively.
Much of the content on the web today falls within this definition as understood by millions.
Some computers which were marketed in the 1990s were called "multimedia" computers because
they incorporated a CD-ROM drive, which allowed for the delivery of several hundred
megabytes of video, picture, and audio data. That era also saw a boost in the production of
educational multimedia CD-ROMs.
The term "video", if not used exclusively to describe motion photography, is ambiguous in
multimedia terminology. Video is often used to describe the file format, delivery format, or
presentation format instead of "footage" which is used to distinguish motion photography
from "animation" of rendered motion imagery. Multiple forms of information content are often
not considered modern forms of presentation such as audio or video. Likewise, single forms of
information content with single methods of information processing (e.g. non-interactive audio)
are often called multimedia, perhaps to distinguish static media from active media. In the fine
arts, for example, Leda Luss Luyken's ModulArt brings two key elements of musical
composition and film into the world of painting: variation of a theme and movement of and
within a picture, making ModulArt an interactive multimedia form of art. Performing arts may
also be considered multimedia considering that performers and props are multiple forms of both
content and media.
Multimedia presentations may be viewed in person on stage, projected, transmitted, or played
locally with a media player. A broadcast may be a live or recorded multimedia presentation.
Broadcasts and recordings can be either analog or digital electronic media technology. Digital
online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-
demand.
Multimedia games and simulations may be used in a physical environment with special
effects, with multiple users in an online network, or locally with an offline computer, game
system, or simulator.
The various formats of technological or digital multimedia may be intended to enhance the users'
experience, for example to make it easier and faster to convey information. Or in entertainment
or art, to transcend everyday experience.

A lasershow is a live multimedia performance.


Enhanced levels of interactivity are made possible by combining multiple forms of media
content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling
applications with collaborative end-user innovation and personalization on multiple forms of
content over time. Examples of these range from multiple forms of content on websites, like user-updated photo galleries with both images (pictures) and titles (text), to simulations whose coefficients, events, illustrations, animations or videos are modifiable, allowing the multimedia
"experience" to be altered without reprogramming. In addition to seeing and hearing, haptic
technology enables virtual objects to be felt. Emerging technology involving illusions of taste
and smell may also enhance the multimedia experience.
Multimedia may be broadly divided into linear and non-linear categories:

 Linear active content often progresses without any navigational control for the viewer, such as a cinema presentation;
 Non-linear uses interactivity to control progress as with a video game or self-
paced computer-based training. Hypermedia is an example of non-linear content.

Multimedia presentations can be live or recorded:

 A live multimedia presentation may allow interactivity via an interaction with the
presenter or performer.
 A recorded presentation may allow interactivity via a navigation system;

Virtual reality uses multimedia content. Applications and delivery platforms of multimedia are
virtually limitless.
Multimedia finds its application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatial-temporal applications. Several examples are as follows:
Creative industries
Creative industries use multimedia for a variety of purposes ranging from fine arts, to entertainment, to commercial art, to journalism, to media and software services provided for any of the industries listed below. An individual multimedia designer may cover the spectrum throughout their career. Requests for their skills range from technical, to analytical, to creative.
Commercial uses
Much of the electronic old and new media used by commercial artists and graphic designers is
multimedia. Exciting presentations are used to grab and keep attention in advertising. Business to
business, and interoffice communications are often developed by creative services firms for
advanced multimedia presentations beyond simple slide shows to sell ideas or liven up training.
Commercial multimedia developers may be hired to design for governmental services and
nonprofit services applications as well.
Entertainment and fine arts
Multimedia is heavily used in the entertainment industry, especially to develop special effects in
movies and animations (VFX, 3D animation, etc.). Multimedia games are a popular pastime and
are software programs available either as CD-ROMs or online. Some video games also use
multimedia features. Multimedia applications that allow users to actively participate instead of
just sitting by as passive recipients of information are called interactive multimedia. In the arts
there are multimedia artists, whose minds are able to blend techniques using different media that
in some way incorporates interaction with the viewer. One of the most relevant could be Peter
Greenaway who is melding cinema with opera and all sorts of digital media. Another approach
entails the creation of multimedia that can be displayed in a traditional fine arts arena, such as
an art gallery. Although multimedia display material may be volatile, the survivability of the
content is as strong as any traditional media. Digital recording material may be just as durable
and infinitely reproducible with perfect copies every time.
Education
In education, multimedia is used to produce computer-based training courses (popularly called
CBTs) and reference books like encyclopedias and almanacs. A CBT lets the user go through a
series of presentations, text about a particular topic, and associated illustrations in various
information formats. Edutainment is the combination of education with entertainment, especially
multimedia entertainment.
Learning theory in the past decade has expanded dramatically because of the introduction of
multimedia. Several lines of research have evolved, e.g. cognitive load and multimedia learning.
From multimedia learning (MML) theory, David Roberts has developed a large group lecture
practice using PowerPoint and based on the use of full-slide images in conjunction with a
reduction of visible text (all text can be placed in the 'notes view' section of PowerPoint). The method has been applied and evaluated in 9 disciplines. In each experiment, students' engagement and active learning have been approximately 66% greater than with the same material being delivered using bullet points, text and speech, corroborating a range of theories
presented by multimedia learning scholars like Sweller and Mayer. The idea of media
convergence is also becoming a major factor in education, particularly higher education. Defined
as separate technologies such as voice (and telephony features), data (and productivity
applications) and video that now share resources and interact with each other, media
convergence is rapidly changing the curriculum in universities all over the world.
Journalism
Newspaper companies all over are trying to embrace the new phenomenon by implementing its
practices in their work. While some have been slow to come around, other major newspapers like
The New York Times, USA Today and The Washington Post are setting the precedent for the
positioning of the newspaper industry in a globalized world.
News reporting is not limited to traditional media outlets. Freelance journalists can make use of
different new media to produce multimedia pieces for their news stories. It engages global
audiences and tells stories with technology, which develops new communication techniques for
both media producers and consumers. The Common Language Project, later renamed to The
Seattle Globalist, is an example of this type of multimedia journalism production.
Multimedia reporters who are mobile (usually driving around a community with cameras, audio
and video recorders, and laptop computers) are often referred to as mojos, from mobile journalist.
Engineering
Software engineers may use multimedia in computer simulations for anything from
entertainment to training such as military or industrial training. Multimedia for software
interfaces are often done as a collaboration between creative professionals and software
engineers.
Mathematical and scientific research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation.
For example, a scientist can look at a molecular model of a particular substance and manipulate
it to arrive at a new substance. Representative research can be found in journals such as
the Journal of Multimedia.
Medicine
In medicine, doctors can get trained by looking at a virtual surgery or they can simulate how the
human body is affected by diseases spread by viruses and bacteria and then develop techniques
to prevent it. Multimedia applications such as virtual surgeries also help doctors to get practical
training.

Lecture 12. TECHNOLOGY SMART.


SMART technologies are technologies (including physical and logical applications in all formats) that are capable of adapting automatically and modifying their behavior to fit the environment, and of sensing things with sensors, thus providing data to analyze and infer from, drawing conclusions from rules. They are also capable of learning, that is, of using experience to improve performance, and of anticipating, thinking and reasoning about what to do next, with the ability to self-generate and self-sustain.
SMART technologies allow sensors, databases, and wireless access to collaboratively sense,
adapt, and provide for users within the environment. Such technologies are currently found in
housing designs for elderly and educational environments similar to sensors and information
feeds within museums.
The term also denotes a technological convergence between an object and a computer. This enables the object to connect to the internet and extend its role beyond its traditional functions.

Smart cities are no longer the wave of the future. They are here now and growing quickly as the
Internet of Things (IoT) expands and impacts municipal services around the globe.

While there are many definitions of a smart city, in general, a smart city utilizes IoT sensors,
actuators and technology to connect components across the city, and it impacts every layer of a
city, from underneath the streets, to the air that citizens are breathing. Data from all segments is
analyzed, and patterns are derived from the collected data.

There are key technologies that make a smart city work. Here are the top six:

1. Smart energy
Both residential and commercial buildings in smart cities are more efficient, using less energy,
and the energy used is analyzed and data collected. Smart grids are part of the development of a
smart city, and smart streetlights are an easy entry point for many cities, since LED lights save
money and pay for themselves within a few years.

"Lighting is ubiquitous—it's everywhere that people work, travel, shop, dine, and relax. Digital
communications and energy-efficient LED lighting are revolutionizing urban lighting
infrastructures already in place, transforming them into information pathways with the capacity
to collect and share data and offer new insights that enable, and really drive, the smart city," said
Susanne Seitinger, PhD., Philips Lighting, professional systems.

Overall energy usage is also part of a smart city. "Many may have experienced this already with
the installation of smart meters at their homes. But with the rise of home solar power systems
and electric vehicles, hardware and software technology will allow for the potential of better grid
management, optimization of power production through different sources and distributed energy
production. Furthermore, buildings that monitor their energy usage actively and report this data
to utilities can reduce their costs. This will ultimately lead to lower pollution and much better
efficiency as cities become more urbanized," said Herman Chandi, co-founder
of CommunityLogiq.

And there are also smart grids and smart meters. "Smart grid solutions play an important role in
the development of smart cities. From prepaid energy applications to advanced metering
infrastructure, there are several solutions to enhance energy services. With a smart grid, you can
improve outage detection, speed of data capture, continuing and disaster recovery, field service
operations and overall grid modernization techniques," said Mike Zeto, general manager and
executive director of AT&T Smart Cities.

2. Smart transportation

A smart city supports multi-modal transportation, smart traffic lights and smart parking.

"One of the key areas that we have seen a lot of activity on has to do with mobility. Anything
around transportation, traffic monitoring, parking," said Sanjay Khatri, director of product
marketing and IoT services for Jasper. "These are areas where cities are seeing a very fast return
on investment. It not only helps to reduce the cost of monitoring parking and making sure that
they are collecting fines, it's also reducing congestion."

By making parking smarter, people spend less time looking for parking spots and circling city
blocks. Smart traffic lights have cameras that monitor traffic flow so that it's reflected in the
traffic signals, Khatri said.

Even city buses are becoming connected, so that people have real time information on when a
bus will arrive at a bus stop. In Australia, traffic lights are prioritized based on the bus schedules
so that traffic flows more freely during rush hours, Khatri said.

Chandi said, "it's using sensors to collect data about the movement of people, all forms of
vehicles and bikes. A smart city is one that greatly reduces vehicle traffic and allows people and
goods to be moved easily through various means. Intelligent traffic systems are an example of
this and the achievement of autonomous vehicle transportation would be a prime example of
success for a smart city, as this could reduce vehicle related deaths. All these efforts would
reduce pollution as well as time stuck in traffic, resulting in a healthier population."

3. Smart data

The massive amounts of data collected by a smart city must be analyzed quickly in order to make
it useful. Open data portals are one option that some cities have chosen in order to publish city
data online, so that anyone can access it and use predictive analytics to assess future patterns.
Companies such as CommunityLogiq are working with cities to help them analyze data, and
they're in the Startup in Residence (STiR) program for the city of San Francisco.

"The pervasiveness of technology and the expansion of open data policies is about to unleash an
economic growth engine for urban innovation that we have never seen. We are moving from
analyzing data that exists within city hall, to generating new data from sensors that are deployed
all across cities for use by multiple departments and people for multiple uses," said John Gordon,
chief digital officer at Current, powered by GE.

Even the data collected by streetlights can be used to benefit citizens. "Hidden within the
exponential volumes of data collected from connected lighting systems and other IoT devices are
valuable insights and information about how citizens interact with cities. For instance, traffic
data captured by streetlights can uncover a prime location for a new restaurant in a revitalized
neighborhood. Predictive analytics helps cities filter and translate data into relevant and
actionable information that makes city life better, easier, and more productive," Seitinger said.

4. Smart infrastructure

Cities will be able to plan better with a smart city's ability to analyze large amounts of data. This
will allow for pro-active maintenance and better planning for future demand. Being able to test
for lead content in water in real time when the data shows a problem is emerging could prevent
public health issues, Chandi said.

Having a smart infrastructure means that a city can move forward with other technologies and
use the data collected to make meaningful changes in future city plans.

5. Smart mobility

"Mobility refers to both the technology and the data which travels across the technology. The
ability to seamlessly move in and out of many different municipal and private systems is
essential if we are to realize the promise of smart cities. Building the smart city will never be a
project that is "finished." Technology needs to be interoperable and perform to expectations
regardless of who made it or when it was made. Data also needs to be unconstrained as it moves
between systems, with all due attention to intellectual property, security and privacy concerns.
For this, public policy and legal technology needs to be state of the art," said Tom Blewitt,
director of principal engineers, UL.

6. Smart IoT devices


And finally, one of the key components that ties everything together in a smart city is IoT
devices.

"Whether we like it or not, sensors and actuators in our cities are here to stay. Fusing sensor
information into our daily life and integrating it all with third party social networks will knit the
fabric of society closer together, while leaving city leaders to grapple with serious privacy and
security challenges," said Carl Piva, vice president of strategic programs at TM Forum.

Sensors are essential in a smart city, said Scott Allen, CMO of FreeWave Technologies. Allen
said that a smart city has "a wide range of reporting devices such as sensors, visibility devices
and other end points that create the data that makes a smart city work."

Blewitt said, " In a smart city, information will increasingly be obtained directly from
purposefully deployed sensors or indirectly from sensors deployed for another purpose but which
gather and share useful information. With this information, freely exchanged, complex city
systems can be managed in real-time and, with sufficient integration, to minimize unintended
consequences. As dependence on sensors grows, so too will the need that they be reliable and
that the systems to which they are connected will be able to tolerate the inevitable failures."

Beacons are another part of IoT, and one of the problems with a smart city is the vast amount of
information. Too much information can be overwhelming. Information received at a time when
one is unable to take advantage of it is essentially noise, Blewitt said.

"As cities move from millions to billions and then trillions of devices transmitting usable and
potentially unusable information, bandwidth efficiency and capacity could be challenged. Short
range notification that a user-selected need can be fulfilled nearby, whether it is the location of a
subway station or a service, provides convenience without tying up some of the bandwidth of the
carrier data networks. Perhaps this will have the side benefit of a reduction in the number of
signs and therefore the visual clutter that they cause on our city streets," he said.

Each of these technologies work together to make a smart city even smarter. As the world's
population grows, and more people move into urban areas, the need for smarter cities will
increase to make the best use of available resources.

The photo shows the Nike Lunar TR1+, a smart shoe. Sensors are embedded to capture movements in real time, which are then monitored through its app.
It is an example of an object that has been merged with computer chips and connected to the internet. The traditional purpose of a shoe, which is to protect the foot, has been extended to track and analyze movements, thus making it smart.

Lecture 13. E-TECHNOLOGIES. ELECTRONIC BUSINESS. ELECTRONIC TRAINING. ELECTRONIC GOVERNMENT.
13.1. Overview

E-learning (or eLearning) refers to the use of electronic media and information and
communication technologies (ICT) in education. E-learning is broadly inclusive of all forms of
educational technology in learning and teaching. E-learning is inclusive of, and is broadly
synonymous with multimedia learning, technology-enhanced learning (TEL), computer-based
instruction (CBI), computer-based training (CBT), computer-assisted instruction or computer-
aided instruction (CAI), internet-based training (IBT), web-based training (WBT), online
education, virtual education, virtual learning environments (VLE) (which are also called learning
platforms), m-learning, and digital educational collaboration. These alternative names emphasize
a particular aspect, component or delivery method.
E-learning includes numerous types of media that deliver text, audio, images, animation, and
streaming video, and includes technology applications and processes such as audio or video tape,
satellite TV, CD-ROM, and computer-based learning, as well as local intranet/extranet and web-
based learning. Information and communication systems, whether free-standing or based on
either local networks or the Internet in networked learning, underlie many e-learning processes.
E-learning can occur in or out of the classroom. It can be self-paced, asynchronous learning or
may be instructor-led, synchronous learning. E-learning is suited to distance learning and flexible
learning, but it can also be used in conjunction with face-to-face teaching, in which case the term
blended learning is commonly used.
Background
E-learning is an inclusive term that describes educational technology that electronically or
technologically supports learning and teaching. Bernard Luskin, a pioneer of e-learning,
advocates that the "e" should be interpreted to mean "exciting, energetic, enthusiastic, emotional,
extended, excellent, and educational" in addition to "electronic." This broad interpretation focuses
on new applications and developments, and also brings learning and media psychology into
consideration. Parks suggested that the "e" should refer to "everything, everyone, engaging,
easy".
The worldwide e-learning industry is economically significant, and was estimated in 2000 to
be over $48 billion according to conservative estimates. Developments in internet and multimedia
technologies are the basic enabler of e-learning, with consulting, content, technologies, services
and support being identified as the five key sectors of the e-learning industry. Information and
communication technologies (ICT) are used extensively by young people.
13.2. History
In 1960, the University of Illinois initiated a classroom system based on linked computer
terminals where students could access informational resources on a particular course while
listening to the lectures that were recorded via some form of remotely-linked device like
television or audio device.
In the early 1960s, Stanford University psychology professors Patrick Suppes and Richard C.
Atkinson experimented with using computers to teach math and reading to young children in
elementary schools in East Palo Alto, California. Stanford's Education Program for Gifted Youth
is descended from those early experiments. In 1963, Bernard Luskin installed the first computer in a community college for instruction and, working with Stanford and others, developed computer-assisted instruction. In 1970, Luskin completed his landmark UCLA dissertation, working with the Rand Corporation to analyze obstacles to computer-assisted instruction.
Early e-learning systems, based on Computer-Based Learning/Training often attempted to
replicate autocratic teaching styles whereby the role of the e-learning system was assumed to be
for transferring knowledge, as opposed to systems developed later based on Computer Supported
Collaborative Learning (CSCL), which encouraged the shared development of knowledge.
The Open University in Britain and the University of British Columbia (where WebCT, now incorporated into Blackboard Inc., was first developed) began a revolution of using the Internet to
deliver learning, making heavy use of web-based training and online distance learning and online
discussion between students. Practitioners such as Harasim (1995) put heavy emphasis on the use
of learning networks.
With the advent of the World Wide Web in the 1990s, teachers began using emerging technologies to employ multi-object oriented sites, which are text-based online virtual reality systems, to create course websites along with simple sets of instructions for their students. As the Internet became popular, correspondence schools like the University of Phoenix became highly interested in virtual education, setting up a name for itself in 1980.
According to a 2008 study conducted by the U.S. Department of Education, during the 2006-2007 academic year about 66% of postsecondary public and private schools participating in student financial aid programs offered some distance learning courses; records show only 77% of enrollment in for-credit courses was in courses with an online component. In 2008, the Council
of Europe passed a statement endorsing e-learning's potential to drive equality and education
improvements across the EU.

13.3. Educational approach

Synchronous and asynchronous


Synchronous learning occurs in real-time, with all participants interacting at the same time,
while asynchronous learning is self-paced and allows participants to engage in the exchange of
ideas or information without depending on other participants' involvement at the same time.
Synchronous learning involves the exchange of ideas and information with one or more
participants during the same period of time. A face-to-face discussion is an example of
synchronous communications. In e-learning environments, examples of synchronous
communications include online real-time live teacher instruction and feedback, Skype
conversations, or chat rooms or virtual classrooms where everyone is online and working
collaboratively at the same time.
Asynchronous learning may use technologies such as email, blogs, wikis, and discussion
boards, as well as web-supported textbooks, hypertext documents, audio video courses, and social
networking using web 2.0. At the professional educational level, training may include virtual
operating rooms. Asynchronous learning is particularly beneficial for students who have health problems or child care responsibilities, for whom regularly leaving the home to attend lectures is difficult. They have the opportunity to complete their work in a low-stress environment and
within a more flexible timeframe. In asynchronous online courses, students proceed at their own
pace. If they need to listen to a lecture a second time, or think about a question for a while, they
may do so without fearing that they will hold back the rest of the class. Through online courses,
students can earn their diplomas more quickly, or repeat failed courses without the
embarrassment of being in a class with younger students. Students also have access to an
incredible variety of enrichment courses in online learning, and can participate in college courses,
internships, sports, or work and still graduate with their class.
Linear learning
Computer-based learning or training (CBT) refers to self-paced learning activities delivered on
a computer or handheld device such as a tablet or smartphone. CBT often delivers content via
CD-ROM, and typically presents content in a linear fashion, much like reading an online book or
manual. For this reason, CBT is often used to teach static processes, such as using software or
completing mathematical equations. Computer-based training is conceptually similar to web-
based training (WBT), the primary difference being that WBTs are delivered via Internet using a
web browser.
Collaborative learning
Computer-supported collaborative learning (CSCL) uses instructional methods designed to
encourage or require students to work together on learning tasks. CSCL is similar in concept to the term "e-learning 2.0".
Higher education
In the United States, e-learning has become a predominant form of post-secondary education.
During the fall 2011 term, 6.7 million students enrolled in at least one online course. The Sloan
report, based on a poll of academic leaders, indicated that students are as satisfied with on-line
classes as with traditional ones.
Although massive open online courses (MOOCs) may have limitations that preclude them
from fully replacing college education, such programs have significantly expanded. MIT,
Stanford and Princeton University offer classes to a global audience, but not for college credit.
University-level programs, like edX founded by Massachusetts Institute of Technology and
Harvard University, offer a wide range of disciplines at no charge.
Coursera, an online-enrollment platform, is now offering education for millions of people
around the world. A certificate is issued by Coursera to students who complete the course with an adequate performance.

13.4. Advantages and disadvantages

For many students, e-learning is the most convenient way to pursue a degree in higher
education. A lot of these students are attracted to a flexible, self-paced method of education to
attain their degree. It is important to note that many of these students could be working their way
through college, supporting themselves or battling with serious illness. To these students, it would
be extremely difficult to find time to fit college in their schedule. Thus, these students are more
likely and more motivated to enroll in an e-learning class. Moreover, in asynchronous e-learning
classes, students are free to log on and complete work any time they wish. They can work on and
complete their assignments at the times when they think most cogently, whether it be early in the
morning or late at night.
However, many teachers have a harder time keeping their students engaged in an e-learning
class. A disengaged student is usually an unmotivated student, and an engaged student is a
motivated student. One reason why students are more likely to be disengaged is that the lack of
face-to-face contact makes it difficult for teachers to read their students' nonverbal cues,
including confusion, boredom or frustration. These cues are helpful to a teacher in deciding
whether to speed up, introduce new material, slow down or explain a concept in a different way.
If a student is confused, bored or frustrated, he or she is unlikely to be motivated to succeed in
that class.
Other advantages and disadvantages
Key advantages of e-learning include:
 Improved open access to education, including access to full degree programs
 Better integration for non-full-time students, particularly in continuing education,
 Improved interactions between students and instructors,
 Provision of tools to enable students to independently solve problems,
 Acquisition of technological skills through practice with tools and computers.
Key disadvantages of e-learning, that have been found to make learning less effective than
traditional class room settings, include:
 Potential distractions that hinder true learning,
 Ease of cheating,
 Bias towards tech-savvy students over non-technical students,
 Teachers' lack of knowledge and experience to manage virtual teacher-student interaction,
 Lack of social interaction between teacher and students,
 Lack of direct and immediate feedback from teachers,
 Asynchronous communication hinders the fast exchange of questions,
 Danger of procrastination.

Lecture 14. INFORMATION TECHNOLOGIES IN THE PROFESSIONAL SPHERE. INDUSTRIAL ICT.

1. Software for solving the tasks of the specialized professional sphere.
An application package (PPP) is a set of programs designed to solve the tasks of a certain class (a functional subsystem or business application).
The following types of PPP are distinguished:
· general purpose (universal);
· method-oriented;
· problem-oriented;
· packages for global networks;
· packages for organizing (administering) the computing process.
General-purpose PPP are versatile software products designed to automate the development and operation of the user's functional tasks and of information systems as a whole. This class includes:
· text editors (word processors) and graphic editors;
· spreadsheets;
· database management systems (DBMS);
· integrated packages;
· Case-technologies;
· shells of expert systems and artificial intelligence systems.
PPP for creating text documents, graphics and illustrations are called editors.
Text editors are designed to handle text and mainly perform the following functions:
recording text to a file;
inserting, deleting and replacing characters, lines and text fragments;
spell checking;
formatting the text with different fonts;
text alignment;
preparing tables of contents, splitting the text into pages;
searching for and replacing words and expressions;
including simple illustrations in the text;
printing the text.
The most widely used text editors are Microsoft Word, WordPerfect (currently owned by Corel), ChiWriter, Multi-Edit and others.
Graphic editors are intended for processing graphic documents, including diagrams, illustrations, drawings and tables. They allow the user to control the size and font of figures, to move figures and letters, and to form any image. Among the best-known graphic editors are the packages CorelDRAW, Adobe Photoshop and Adobe Illustrator.
Publishing systems combine the capabilities of text and graphic editors with advanced capabilities for laying out pages with graphic materials and then printing them. These systems are targeted for use in publishing and are called typesetting systems. Examples of such systems are PageMaker from Adobe and Ventura Publisher from Corel Corporation.
Spreadsheets. Spreadsheets are PPP for processing tables.
The data in a table is stored in cells located at the intersections of rows and columns. The cells can store numbers, formulas and character data. Formulas define the dependence of the value of one cell on the contents of other cells. Changing the contents of a cell results in a change of the values in the dependent cells.
The most popular PPP of this class are products such as Microsoft Excel, Lotus 1-2-3, Quattro Pro and others.
Database Management Systems. To create databases, the internal machine information support uses a special kind of PPP - database management systems (DBMS).
A database is a set of specially organized data sets stored on disk.
Database management includes data entry, data correction and data manipulation, that is, deletion, retrieval, updating, etc. A developed DBMS ensures that applications working with it are independent of the specific organization of information in the database. Depending on how the data is organized, the following types of DBMS are distinguished: network, hierarchical, distributed and relational database management systems.
Among the available DBMS, the most widely used are Microsoft Access, Microsoft FoxPro, Paradox (from Borland), and modern DBMS from the companies Oracle, Informix, Sybase, etc.
2. Modern IT trends in the professional sphere: medicine, power, etc.
Computers have long been used in medicine. Many modern diagnostic methods are based on computer technology. Examination methods such as ultrasound or computed tomography are generally unthinkable without a computer. Computers are also penetrating more and more actively into "older" methods of examination and diagnostics: cardiograms and blood tests, examinations of the fundus of the eye and of dental health. It is hard to find an area of medicine in which computers are not being applied ever more widely.
However, the use of computers in medicine is not limited to diagnostics. They are increasingly being used in the treatment of various diseases, from constructing an optimal treatment plan to controlling medical equipment during procedures.
The information economy has changed many aspects of economic reality, including the function of money, which has gradually turned from a universal equivalent of effort into a means of calculation. Virtual banks and payment systems are the fruit of the development of information technologies in economics and business. In economics and business, information technology is applied to processing, sorting and aggregating data, to organizing the interaction of actors and computer technology, and to meeting information needs for operational communications, etc.
It is understood that investment decisions in the development of information technologies, like other management decisions, should take into account economic feasibility. It turns out that this benefit is conveniently calculated using the very same information technologies. There are models for calculating the total economic impact, which make it possible to take into account, among other things, the additional benefits of introducing information technologies, the scalability and flexibility of the systems, as well as potential risks.
3. Use of search engines and electronic resources for professional purposes.
A search engine is a computer system designed to search for information. One of the most well-known applications of search engines is web services for searching for text or graphic information on the World Wide Web. There are also systems that can search for files on FTP servers, for goods in online stores, and for information in Usenet newsgroups.
To search for information using a search engine, the user formulates a search query. The search engine's job is to find, at the user's request, documents that contain either the specified keywords or words related to the keywords in some way. The search engine then generates a search results page. Such a search listing may comprise various types of results, such as web pages, images and audio files. Some search engines also extract information from appropriate databases and web directories.
The more documents relevant to the user's request a search engine returns, the better it is. Search results may be less relevant due to the nature of the algorithms or due to human error. The most popular search engine in the world is Google.
According to their search and service methods, four types of search systems are distinguished: systems using crawlers, systems controlled by a person, hybrid systems and meta-systems. A search system architecture typically includes:
· a crawler that collects information from Internet sites or from other documents;
· an indexer that provides fast search over the stored information; and
· the search engine itself - a graphical interface for the user.

Lecture 15. PROSPECTS OF DEVELOPMENT OF ICT.


Social implications of ICT development: In 2020, large areas of our lives will be digitized
It is expected that within six years, or at the latest within fifteen years, so between 2015 and
2024, more than 95 percent of the adult population in Germany, Europe and the USA will actively and
regularly use the Internet and its services. In global terms, however, it will be at least another 20
years, probably longer than that, until more than 75 percent of the world population actively use
the Internet several times per week. Overcoming the digital divide will therefore continue to be a
huge challenge for decades to come.
The fact is that, despite the extensive, rapid dissemination of the Internet and its services,
especially the social network, large segments of the population will not yet have the skills to use
these technological facilities. In this context, the skills of an individual must mean first and
foremost treating their own personal data with care.
In the future, users, when dealing with their digital data in all kinds of usage contexts, will be
supported by tools for administering (multiple) identities on the Internet, which will be widely
disseminated in Germany and throughout Europe in as little as six to ten years. A worldwide
unified solution for identity management (authentication and integrity) between any number of
communications elements will be available in the distant future, but not until 2020 at the earliest,
and potentially at a much later date, or possibly not even at all. Whether and to what extent each
individual will have full control over the use of their personal data on the Internet is still
unknown: It can be assumed that, internationally and in particular for the USA, this ambitious
goal will be reached in six or, at the latest, ten years, i. e. by 2019. In Germany, however, the
idea that the individual has complete control over the use of their personal data on the Internet or
that this is guaranteed (the right to informational self-determination) will seem utopian.
Nevertheless, there will never be state censorship of access to Internet content in Germany,
Europe or the USA – the individual’s right to digital self-determination will remain protected.
This point is more critical in respect of limitations on freedom of opinion by exercise of state
influence in an international context. In many countries, this democratic barrier can already be
deemed to be broken today.
In summary, it can be seen that the area of conflict between openness and transparency arising from the evolution of the Internet will continue to develop dynamically. In the future, this will require
scientific and political solutions – the shaping of this future has already begun. It should be noted
that, because of their complexity and the inherently long period of time it takes to implement
them, fundamental and pivotal decisions, e.g., regarding IT security or broadband expansion,
will have to be initiated today if they are to take effect in the foreseeable future.
By 2020 at the latest, Internet use will be largely mobile
One of the central developments, which in the next few years will add considerable
momentum to digital life, is the trend towards mobile use of the Internet and its services: It can
be assumed that there will be a large number of natively mobile applications and services that will substantially increase the intensity of mobile Internet use in Germany in the next six to ten years (as already suggested by the flood of applications in connection with position- and location-
based services). This development will mainly be driven by further technical advances,
especially in the development of terminal equipment and expansion of the network
infrastructure.
The intensity of this mobile use in particular will rise hugely in the coming years: In six to ten
years, 75 percent of cell phone users in Germany will access the Internet on a daily basis through
their mobile device. A similar development can also be seen in the rest of Europe and the USA.
A series of application scenarios and content will decisively expedite mobile Internet use:
• the merging of work and living spaces,
• location-based services,
• media use, and
• mobile commerce.
The merging of work and living spaces will be expedited by the fact that, by 2024 at the
latest, employees in Germany will universally use one and the same wireless device, which
administers several telephone numbers (including for private telephony at home, on the move or
at work). In the USA, this trend will take hold somewhat earlier, and in Europe as a whole,
similarly by 2024. For the further development of location-based services it is vital that
navigational and positioning systems (e. g., Galileo, GPS) are established as fixed components of
every mobile device (e.g., cell phones or digital cameras) in the next five years.
In the following six to ten years, i.e., by 2019 at the latest, 75 percent of cell phone users in Germany and Europe will access location-based services on a daily basis through their mobile device – in the USA this trend will take hold with a five-year delay, i.e., by 2024 at the latest. With
regard to media use, the following scenario can be observed: Not until 2020 will more than 75
percent of the population in Germany and Europe use a multimedia mobile device as the
unifying element for conventional media (books, newspapers, magazines, television and Internet)
for displaying text, images, music and videos. It will take a relatively long time, until 2020 or later, before it is also possible to use a single standard technology worldwide to pay at retail
outlets and restaurants through mobile devices (mobile wallet).
2. ICT innovation policy: In 2020, the boundaries between countries and also between subject disciplines will be obsolete
Evidently, Europe will not manage over the next few years and decades to catch up with the
USA and its general competitive lead in the ICT industry. Nevertheless, targeted investments in
research and development and in software expertise mean that Europe will take a leading role in
some segments of the global ICT industry in as little as six to ten years. Leadership opportunities
will lie in the areas of telecommunications services, telecommunications infrastructure, but also
in IT services and software.
Globalization and technical advances will lead to radical changes in value chains. First, the
number of parties involved in the processes will drastically increase around the world; value
chains will become value networks. Second, the competition will bring about a move from
“walled gardens” to open systems, which will include customers and users in the innovation
process to a much greater degree. This holds great potential to improve one's own opportunities
and close gaps. Open Innovation refers to the ability to include heterogeneous participants from
the outside world in the innovation process and to link up with innovation networks. By as early
as 2015, and by 2019 at the latest, Open Innovation will have taken root in leading German
companies as the standard. In Europe, this process will take five years longer and will be
complete in 2024. In as few as six to ten years, the cross-disciplinary collaboration of engineers
on the one hand, and social scientists, designers and artists on the other will be a prevailing
method in the innovation process of business in Germany and Europe.
In a relatively short period, globalization will give rise to considerable challenges: Although
it is unlikely that the integrity and functionality of critical ICT infrastructures in Germany will be
compromised in the future due to dependence on international system suppliers, the potential
threat posed by such a scenario cannot be ruled out altogether. This problem will apply similarly
in the USA and the rest of Europe.
3. Infrastructure development and key technologies
The availability of stationary broadband not only has a positive impact on the ICT and media industry, but far beyond this, on the economy as a whole, on media use in particular, and on society in general. From 2020, which is to say, in about ten years' time, 100 MBit/s will be available for both uploads and downloads nationwide in Germany for stationary Internet use. An international comparison shows wide-ranging differences in broadband infrastructure development: While development in Europe is generally in line with that in Germany and 100 MBit/s will be available Europe-wide from 2020, in some countries of the world this state of affairs is already on the brink of becoming reality, i.e., from 2010. In the USA too, nationwide provision of 100 MBit/s can be expected five years earlier than in Germany. For many years to come, access networks based on optical fibers will only be available in urban areas in Germany. Not until 2025 will fiber-to-the-home be used Germany-wide. On this point, many European countries will have overtaken Germany by five whole years and will already have nationwide optical fiber-based broadband networks by 2020.
In addition to the availability of infrastructure, the use of these networks is a key indicator of a country's sustainability. Parallel to the availability of 100 MBit/s for stationary Internet, from 2020 at the earliest, 95 percent of Internet users in Germany will have broadband connections with a speed of at least 100 MBit/s for upload and download, although this may not be the case until 2030. The further development of average bandwidths for stationary Internet access will progress rapidly in Germany, even though these high bandwidths will not always be (able to be) used nationwide at the same time: For example, assuming average use of 36 MBit/s in six years' time, i.e., in 2015, this will increase to 101 MBit/s in 2020, rising to 195 MBit/s in 2025 and 406 MBit/s in 2030, according to the average expectations of the Delphi experts surveyed. With the immense potential of mobile applications and services, mobile broadband will also be developed nationwide in the coming years. From 2015, 50 MBit/s will be available Germany-wide for mobile broadband upload and download. In parallel to this, 50 MBit/s will also be available in the USA and Europe in six years. With the development of high-speed mobile networks, users' use of these networks will also increase in the coming years: In Germany, average bandwidths of 7 MBit/s will be used for mobile Internet access in 2015. Five years after that, in 2020, the average bandwidths used will already have reached 20 MBit/s; in 2025 they will be 47 MBit/s, and in 2030, 84 MBit/s. Location-based services will develop in close co-evolution with the mobile
broadband networks and their usage. This presupposes a viable high-performance infrastructure solution: In 2019, Galileo will be the standard for positioning and localization services in Europe. The Internet of Things is also seen as an infrastructure with huge spillover effects. In 2019, RFID will be the standard technology worldwide, will be used everywhere in production and logistics and, for example, will have replaced the barcode in the consumer goods sector in Germany. The wide range of applications and the use of embedded systems will have a sustained impact on the economy as a key technology for the future. From 2020, these so-called "autonomous intelligent embedded systems", which learn from other intelligent systems and communicate with them on an automated and completely independent basis, will be the basic standard of various applications and products.
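As a toy illustration of the replacement of the barcode by RFID mentioned above: a barcode such as an EAN number normally identifies only the product type, whereas an RFID tag can carry an electronic product code that identifies the individual item. The sketch below contrasts the two lookups; all identifiers, product data and function names are invented for this example.

# Toy contrast between barcode-style and RFID-style identification.
PRODUCTS_BY_EAN = {           # barcode level: one entry per product type
    "4012345678901": {"name": "Ballpoint pen", "price_eur": 1.20},
}

ITEMS_BY_EPC = {              # RFID level: one entry per individual item
    "epc-4012345678901-0001": {"ean": "4012345678901", "serial": "0001",
                               "produced": "2019-05-04"},
}

def scan_barcode(ean):
    """Barcode scan: resolves only to the product type."""
    return PRODUCTS_BY_EAN[ean]

def scan_rfid(epc):
    """RFID scan: resolves to one specific item and then to its product type."""
    item = ITEMS_BY_EPC[epc]
    return {**PRODUCTS_BY_EAN[item["ean"]], **item}

print(scan_barcode("4012345678901"))
print(scan_rfid("epc-4012345678901-0001"))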
Another much-vaunted future trend is cloud computing. This development, which is also referred to as a "net-centric approach", will give rise to huge changes in both private and business applications in the coming years. From 2025 at the latest, more than 75 percent of private data in Germany, such as private documents, pictures and music, and of business data, such as business documents or company databases, will be located on the Internet. Ten years before that, from 2015, software will no longer reside statically on local computers or mobile end devices, but will instead be provided on an "on-demand" basis as "webware" in and via the Internet.
As part of these changes, the structure of the Internet will also be modernized: In 2019, IPv6 will replace the current standard (IPv4) and be established as the norm (a brief address-handling sketch follows at the end of this section). The current Internet protocol (IP) will not be replaced as the base technology of the Internet until after 2030, if at all. Internet usage will also change radically in the coming years. A key development here will be the transition from the traditional Internet to the semantic web. In 2019, semantic web technologies will be an integral part of the Internet, and usage and quality for the user will change substantially. Five years after this, in 2024, suppliers of these semantic technologies will have brought about a shift in power in the Internet markets and will have replaced the original offerings and suppliers. The changes in mobile and stationary infrastructures, the
mutating and expanding areas of application for ICT and the new forms of using the Internet and
its services will also bring with them constant developments in hardware and in particular in
memory and chip technologies: By 2019 at the latest, traditional silicon-based memories and
processors will have been pushed to their performance limits due to increasing miniaturization
and conventional photolithographic technology will be replaced as the standard technology for
the production of chips, e.g., by technologies such as nano-imprint or maskless lithography.
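As a brief sketch of the IPv6 transition mentioned above, the following example uses Python's standard ipaddress module to compare the two protocol versions; the addresses come from the reserved documentation ranges and are illustrative only.

import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")        # IPv4 documentation address
v6 = ipaddress.ip_address("2001:db8::1")      # IPv6 documentation address

print(v4.version, v6.version)                 # 4 6
print(2 ** 32)                                # about 4.3 billion possible IPv4 addresses
print(2 ** 128)                               # about 3.4e38 possible IPv6 addresses

# Networks are handled uniformly regardless of the protocol version.
net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)                      # 2 ** 64 addresses in a single /64

The vastly larger address space is the main practical difference, which is why IPv6 can replace IPv4 as the standard while the Internet protocol as such remains the base technology of the Internet.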
4. ICT driver of innovation in key industries: As a driver of innovation in key industries,
ICT has immense potential to achieve or secure leading positions worldwide
Especially in key industries, ICT will accelerate growth and drive innovation in the coming years: in the media sector, the energy industry, the automotive industry and the healthcare sector.
In the course of the convergence processes in media use initiated by digitization, and thus in the media sector, there will be complex changes for recipients as well as for media professionals in
the coming years: In 2024, the Internet will be the number one medium for entertainment in
Germany, Europe and many other countries of the world. In Germany and Europe, the
conventional, “traditional” media consumption formats will continue to predominate: “Media
snacks,” i. e., short formats in the form of three-minute clips, such as those already found on
YouTube, or entertainment content based on user-generated content will only be used in certain
contexts and will by no means dominate media use. Also, public service broadcasters will continue to be responsible for democratic opinion-shaping processes in Germany. There is no risk from readily available, high-quality information. Changes are expected in the use of media: From 2020, it will be normal for 75 percent of media users in Germany to access one and the same media content by means of various devices (e.g., newspaper articles on a mobile device, television broadcasts on the PC, or Internet content on the television). In parts of Europe, this
media convergence trend will become reality five years earlier than in Germany, from 2015.
Traditional print media, such as newspapers and magazines, will remain much as they are to
begin with. Ultimately, they will be supplemented and their use expanded convergently. For
example, newspapers and magazines in Germany will also continue to be available in traditional print
formats in the coming decades, and not just as digital versions on the Internet. If at all, then from
2020 at the earliest, 75 percent of the populations in Germany and Europe will use individually
compiled daily e-newspapers in parallel with the conventional paper version. The use of
electronic media will also change: In 2024, more than half of the population in Germany will use
on-demand media and services in their daily media consumption instead of conventional linear
television. In the USA and Europe, television viewers will already be renouncing fixed and
scheduled programs by 2019.
In 2017, it will be equally normal for more than half of the Internet users in Germany, Europe
and the USA to pay for retrieving from the Internet professionally produced media content
(films, electronic newspapers and magazines, music, etc.). Only outside these regions will user payment for digital content not become established until 2020. Yet another revenue element in addition to direct payment for media content will change: advertising. From 2017,
consumer opinions and experiences of Internet communities and consumer portals will have a
greater influence on the success of products and brands in Europe and Germany than the current,
immensely important traditional adverts.
In the area of electronic television media, the coming years will bring a number of changes in
the technology: From 2017, high-definition television (HDTV) will be the standard quality of
television transmission in Germany – in parts of Europe and other countries, this is already the
case, or will be very soon. 3D television will be available across Germany and Europe from 2030
at the earliest – internationally, this development will take place five years earlier, from 2025.
Resource efficiency through ICT: Green IT and e-energy to safeguard our future
Not least climate change requires a rethink or an adjustment of energy systems in Germany. A
possible solution to counteract climate change could be the implementation of ICT innovations:
Already today, but at the latest in five years, ICT infrastructures in energy supply will be
indispensable for ensuring energy efficiency and reliable provision in Germany. In Europe, by 2019 at the latest, reliable provision will no longer be possible at all without ICT infrastructures. In
addition to the guarantee of energy efficiency and reliable provision, ICT offers high efficiency
in the e-energy sector: By 2020 at the latest, by using ICT in diverse application industries
(traffic, telematics, energy, house building, etc.), CO2 emissions will have been reduced by a
further 15 percent worldwide. Social awareness of the importance of sustainable use of energy
resources will lead to a holistic nationwide modernization of the technical infrastructure, devices
and services in Germany and Europe from 2020. In the USA and in many other countries of the
world, this modernization will kick in from 2015, which is five years earlier. The use of new ICT components will lower the energy consumption of communications networks in Germany alone by over 90 percent compared with current consumption values by 2025 to 2030. In the USA and Europe, this
potential will be exploited five years earlier. The potential in ICT-supported renovation of
buildings is also high: From 2020, ICT-based concepts in intelligent buildings (“smart homes”)
will contribute to savings of more than 30 percent on energy consumption compared to 2009. In
Europe, this trend will take five to ten years longer. A specific example in this context is green
technologies and their use in buildings: In 2019, energy-saving IT components, automated device switches and the elimination of standby functions will be the standard in more than 75
percent of buildings (private households and commercial buildings) in Germany. Such a high
degree of penetration will not be seen in Europe until six years later, from 2025.
Supporting the demographic change: ICT promotes independence and support
In 2024, the medical healthcare standard in Germany, the USA, and many European countries will be "round-the-clock" care of individuals (senior citizens, patients) in their own home by means of ICT systems.
Five years before this, in 2019, entirely new forms of prevention, diagnostics and treatment
will be available in Germany thanks to ICT combined with vital functions monitoring. Five years
later, intelligent electronic medical implants will link to and interchange with ICT systems and
will be used by more than 25 percent of the population.
ICT will increase security and efficiency in vehicles
ICT innovations will also play a decisive role in one of the most important sectors: transport
in general, and the automotive sector in particular. Sustainable mobility concepts will become
much more attractive over the next few years. From 2020, this will impact on private vehicle
purchases. The expected high potential of new systems for vehicle communication in reducing
accident rates and traffic jams will be exploited. From 2025, there will be a common
communications infrastructure in Germany that links security applications, traffic applications,
and commercial services. Ten years before this, from 2015, the Internet will become the central means of communications access in the vehicle for journey-related information (e.g.,
route planning, traffic information, danger warnings) on Germany’s roads. Five to ten years after
this, 50 percent of all new cars in Germany will exchange information about traffic, the environment, etc. among each other and thus enable real car-2-car networking (a toy message-exchange sketch follows at the end of this lecture). The technological course being followed in Germany and Europe has been confirmed by the experts. In addition, the introduction of commercial services provides a possibility of refinancing some of the investments that must be made in the infrastructure. Autonomous driving, however, remains a
distant dream. It will not be until after 2030 that driving in the car of the future, without the
“driver” actively controlling the vehicle, will become reality in some subsections of the traffic
system.
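To make the car-2-car networking mentioned above more concrete, here is a toy sketch in which a vehicle broadcasts a hazard message that only nearby vehicles receive; the classes, message fields and the 300 m range are invented for illustration and do not correspond to any real vehicle-communication standard.

from dataclasses import dataclass, field

@dataclass
class HazardMessage:
    sender_id: str
    position_km: float      # position along a one-dimensional road
    description: str

@dataclass
class Vehicle:
    vehicle_id: str
    position_km: float
    received: list = field(default_factory=list)

    def broadcast(self, description, others, range_km=0.3):
        """Send a hazard message to every other vehicle within range."""
        message = HazardMessage(self.vehicle_id, self.position_km, description)
        for other in others:
            if other is not self and abs(other.position_km - self.position_km) <= range_km:
                other.received.append(message)

cars = [Vehicle("car-A", 10.0), Vehicle("car-B", 10.2), Vehicle("car-C", 12.5)]
cars[0].broadcast("icy road ahead", cars)
for car in cars:
    print(car.vehicle_id, [m.description for m in car.received])
# car-A []   car-B ['icy road ahead']   car-C []  (out of range)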
