
COMPARATIVE EDUCATION

UNIT 1: COMPARATIVE EDUCATION

ORIGIN OF COMPARATIVE EDUCATION:

In the beginning, Comparative Education was not really comparative but descriptive, as people were mostly concerned with describing the educational system of each country without necessarily comparing one educational system with another. However, the 19th century witnessed an increased interest in the study of Comparative Education, as education started to be studied in a comparative form. As a matter of fact, what can be regarded as serious study in the field of Comparative Education can be traced to the early 19th century, after the Napoleonic wars. Since there was no war among the Europeans, there was peace among them, and they needed something that could enhance their interaction with one another. A consideration was therefore given to the study of Comparative Education as a strong channel through which the youths of various European countries could be more unified.

To this end, John Griscom traveled to Europe between 1818 and 1819 and, on his return, published his findings on educational institutions in the countries he visited, such as Great Britain, France, Switzerland, Italy and Holland. In the same vein, Victor Cousin, a representative of the French Minister of Education, visited Prussia in 1831 and, also on returning home, published his findings on Prussian educational institutions and practices. His findings were later translated into English and enhanced educational development in France, England and America. Another pioneer in the field of Comparative Education was Horace Mann of America, who, after a six-month visit to Europe, published his findings in 1843 on educational institutions and practices in England, Scotland, Ireland, France, Germany and Holland. His report was purely a comparison of school organization and methods of instruction. Matthew Arnold of England visited both France and Germany in 1859 and 1865. On his return home, he made some remarks, particularly on the educational institutions and practices of both countries. Like others, he advised that some useful aspects of the educational systems of France and Germany should be integrated into the system of education in England.
What can be viewed as the second generation in the study of Comparative Education can be traced to Sir Michael Sadler, who in one of his publications, "How Far Can We Learn Anything of Practical Value from the Study of Foreign Systems of Education?", published in 1900, went further than the pioneers before him, who were more utilitarian and straightforward in their descriptions of the foreign educational systems they studied.

While contributing to the development of Comparative Education study, Kandel, cited by Hans (1958), observed that: "The chief value of a comparative approach to educational problems lies in an analysis of the causes which have produced them, in a comparison of the differences between the various systems and the reasons underlying them and finally, in a study of the solutions attempted. In other words, the comparative approach demands first an appreciation of the intangible, impalpable spiritual and cultural forces which underlie an educational system; the factors and forces outside the school matter even more than what goes on inside it."

In the same vein, Friedrich Schneider, a German-speaking scholar and Director of the Institute of Comparative Education, Salzburg, began editing the International Review of Education in four languages in 1930. In his 1947 publication, he gave the following as the factors that can influence the educational theory and practice of any country:
(a) National character
(b) Geographical space
(c) Culture
(d) Sciences
(e) Philosophy
(f) Economic life and politics
(g) Religion
(h) History
(i) Foreign influences and
(j) The development of pedagogies
Like others, he applied the historical approach to the problems of education in all the countries he visited.

In his own contribution to the development of Comparative Education, Sergius Hessen, a Russian philosopher, looked at Comparative Education from a philosophical point of view. In his book published in 1928, he selected four problems as an educational policy focus: (a) compulsory education, (b) the school and the State, (c) the school and the Church, and (d) the school and economic life. Hessen was perhaps the first philosopher of education to apply a philosophical approach to the subject.

Also, the Comparative Education Society, introduced by Brickman, came into being at a conference in New York in 1956. This society assists in the publication of a journal called "The Comparative Education Review". In addition, it holds national as well as regional conferences and seminars. In 1961, a similar society was established in Europe with the launching of a new society in London. Membership of the society was extended to experts in the field of Comparative or International Education in tertiary institutions and international organizations. Like others, it holds its conferences every two years and publishes the proceedings. Meanwhile, similar societies have been established in Canada, Korea and Japan. Today, the discipline is one of the subjects offered in universities and colleges of education worldwide. The Society for Comparative Education was founded in Nigeria in 1983, while the World Congress on the discipline came into being in 1982 for cooperation among the people involved in the study of the subject as well as the general development of Comparative Education.

1.2 NEED FOR COMPARATIVE EDUCATION:

1.3 SCOPE OF COMPARATIVE EDUCATION:

The term "scope", according to the Longman Dictionary of Contemporary English, can mean: (a) the area within the limits of a question, subject, action, etc.; (b) space or chance for action or thought. From the above, the scope of Comparative Education means the area or areas covered by the discipline. The scope of the subject also connotes the various subjects or disciplines from which Comparative Education draws its information, directly or indirectly. A critical look at the various definitions of the discipline no doubt reveals that Comparative Education is an interdisciplinary subject, since it relies on other subjects to accomplish its objectives.

As an interdisciplinary subject, its scope covers the historical development of education right from Greek and Roman civilization. It also includes the historical development of non-formal education in any country of study. The scope of the discipline extends to the purpose or purposes of the education systems of the countries being studied and to an investigation into the similarities as well as the differences existing in their educational practices. The subjects from which Comparative Education draws its contents include the following:
(a) History of Education
(b) Philosophy of Education
(c) Sociology of Education
(d) Anthropology
(e) Economics
(f) Geography
(g) Psychology
(h) Statistics
(i) Literature
(j) Political geography
(k) Political science and
(l) International relations
The above clearly shows that the subject is not independent of other subjects; it is a discipline that relates to other subjects for the accomplishment of its aims and objectives. It may reasonably be concluded that the interdisciplinary nature of the subject has contributed to its breadth.

1.4 MEANING OF COMPARATIVE EDUCATION:

Naturally, human beings are in the habit of comparing the things around them, particularly when such things exist in different places. This may be done as a result of man's desire to know the relationship existing between, or among, the things being compared. Man may also engage in this kind of exercise when he wants to choose between two things before him. The idea of comparison is not peculiar to the people in the
business of education alone. Children at home or anywhere make comparisons between their parents, because one of them may be more loving than the other. School pupils also compare their teachers, particularly when the teachers are not with them. The parents themselves can compare their children morally and academically. Comparison can take place wherever we have two or more things at the same time, either for the purpose of having a better understanding of the relationship existing between them or for the purpose of making a better choice.

Like other concepts, Comparative Education attracts varied interpretations or definitions. In other words, there are as many definitions as there are Educational Comparativists. Adeyinka (1994) gives the following definitions for the concept:
(a) A study of two or more education systems.
(b) A study of how the philosophy, objectives and aims, policy and practice of education in other countries influence the general development, policy and practice of education in a particular country.
(c) A study of how the development of education in the past, across the ages and continents, has influenced the development of education in particular countries.
(d) A study of the school systems of two or more countries, and of the administrative machineries set up to implement or to control the implementation of government policies at various levels of education systems.

Comparative Education, according to Good (1962), is a field of study dealing with the comparison of current educational theory and practice in different countries for the purpose of broadening and deepening understanding of educational problems beyond the boundaries of one's own country. From the above definitions, the study of Comparative Education allows the person involved to have a better understanding of systems of education outside his own country.

To Kandel (1957), Comparative Education is the comparison of various philosophies of education based not only on theories but on the actual practices which prevail. From this definition, Kandel is of the opinion that Comparative Education goes beyond the comparison of educational philosophies to include the comparison of actual educational practices. Perhaps, from this definition, Comparative Education can be regarded as being pragmatic.

In his own contribution to the concept, Mallinson (1975) defines the subject as: "a systematic examination of other cultures and other systems of education deriving from those cultures in order to discover resemblances and differences, and why variant solutions have been attempted (and with what result) to problems that are often common to all." In his own remark on the concept, Adejumobi (1994) defines it as a critical study of educational similarities and differences prevailing within a particular society or culture or among various societies and cultures. From the definition given by Adejumobi, it is obvious that the
idea of comparing educational systems is not peculiar to countries or societies alone; it can as well take place within a country or society. In the same vein, Osokoya (1992) observed that: "Comparative Education could be the comparison of educational theory and practice within a society, state, region and nations ... scholars could engage in the comparison of educational programmes, theories and practices even within one society." Therefore, there could be a comparative study of educational programmes within the local governments of a state, between the states of a country and between the countries of a continent. Alabi et al. (1998) see Comparative Education as "a way of comparing and contrasting different educational systems at national, intra-national as well as international levels." The major implication of their definition is that the comparison of educational philosophies, systems and practices is not peculiar to two cultures or countries alone; it can also be localized, as has been rightly pointed out by the other scholars in the field. In his own reaction to the concept, Awolola (1986) defines the subject as the study of the aims and objectives of education, the curriculum, methods of teaching, teacher-student relationships, the school calendar, modes of discipline, the design of school buildings and school administration, among others, at the international or national level.

1.5 PURPOSE OF COMPARATIVE EDUCATION:

Comparative Education, like other disciplines offered in educational institutions, is not a purposeless subject. In other words, the subject has some goals which it aims at achieving. While giving the purpose of Comparative Education, Hans (1992) concludes that: "The analytical study of these factors from a historical perspective and the comparison of attempted solutions of resultant
problems are the main purpose of Comparative Education." It can be concluded from the above that Comparative Education tries to compare educational problems, as well as the solutions applied to such problems, with a view to improving one's own educational practices. The purpose of Comparative Education was also given by Mallinson (1975) when he noted that: "To become familiar with what is being done in some countries ... and why it is done, is a necessary part of the training of all students of educational issues of the day. Only in that way will they be properly fitted to study and understand their own systems and plan intelligently for the future which, given the basic cultural changes that have taken place with such astonishing speed throughout the nineteenth and twentieth centuries, is going to be one where we are thrown into ever closer contact with other peoples and other cultures." From the above, it is evident that the study of Comparative Education assists learners to understand their educational systems better.

In his own contribution to the purpose of Comparative Education, Marc-Antoine Jullien de Paris (1817), cited in Hans (1992), notes that the purpose of Comparative Education is to perfect national systems with modifications and changes which local circumstances and conditions would demand. Like those of other Educational Comparativists, the purpose given above is a pointer to the fact that the study of Comparative Education assists in the flexibility of the educational system of one's country. In the same vein, Kandel, cited by Hans (1992), was of the opinion that the primary purpose of Comparative Education is to discover not only the differences existing in the education systems of two countries but also the factors that bring about such differences. Also, to Hans (1992), the purpose of Comparative Education is to discover the underlying principles which govern the development of all national education systems.

1.6 SIGNIFICANCE OF COMPARATIVE EDUCATION:

UNIT 2: METHODS/APPROACHES OF COMPARATIVE EDUCATION

2.1 Approaches to the Study of Comparative Education

Awolola (1986) identified eight approaches to the study of Comparative Education. They are:
(a) Problem or Thematic approach
(b) Case study approach
(c) Area study approach
(d) Historical approach
(e) Descriptive approach
(f) Philosophical approach
(g) International approach and
(h) Gastronomic approach

2.2 Thematic or Problem Approach

Here, the investigator will first of all identify a particular educational problem in his own country. Then he will look for another country that has the same problem. The researcher will study the educational problem of the other country in relation to its culture, and will examine not only the problem but also the solution that country has applied to it. From this, he will think of how he will be able to solve his own country's educational problem as well. It should be noted that cultural, economic and socio-political factors vary from one country to another, as a result of which educational problems and their solutions may not necessarily be the same.

2.3 Case Study Approach

In this approach, an Educational Comparativist from Nigeria can, for example, go to Iraq to study the primary level of education of that country. His report, it is believed, will be comprehensive enough for his readers to understand. If it is possible for the researcher, he can take the entire educational system of the country and compare it with his own. The problem with this approach is that, as a human being, the investigator may not be totally objective in his report.

2.4 Area Study Approach

The word "area" here could refer to a village, a town or a country, depending on the Educational Comparativist who wants to carry out the study. Under this approach, the Comparativist will engage himself in the educational practices of only one country, if it is a country that he has chosen. The investigator will involve himself in several activities, as a result of which he will arrive at a body of generalizations on the educational system he is studying. The study under this approach is always based on geographical, linguistic or racial boundaries. Bereday (1958) is of the opinion that "one of the oldest and clearest ways of introducing the subject (Comparative Education) is to study one geographical area at a time". He therefore identified the following stages in the area study approach:

(a) Descriptive Stage - At this stage, an Educational Comparativist makes a description of an educational system as well as its practices. The researcher has to start by reading extensively, reviewing the available literature on the educational system of the country being studied. To enable the investigator to have an on-the-spot assessment, he can personally visit the country whose educational system he is studying.

(b) Interpretation Stage - At this stage, the investigator collates and analyses the data gathered from various sources to enable him to do justice to the educational system of the area being studied.

(c) Juxtaposition Stage - At this stage, the investigator puts the results obtained from the interpretation stage side by side with the educational system of his own country.

(d) Comparative Stage - At this stage, the researcher objectively compares and contrasts the educational practices of the country being studied with those of his own. It is at this stage of the study
that whatever hypotheses the researcher might have formulated will be accepted or rejected.

2.5 Historical Approach

Under this approach, an investigator takes a village, town or country and examines its educational development from the day education was first introduced into the place up to the time of the study. This approach enables the researcher to identify the factors responsible for the current educational system of the place being studied. However, the problem with this approach is that greater emphasis is always placed on the past.

2.6 Descriptive Approach

Here, the investigator has to describe everything he finds on the ground. The things to be described could include the number of schools, student enrolment, the number of teachers, the number of school buildings (including classrooms) and the number of subjects offered. However, the approach is not very popular among modern Educational Comparativists.

2.7 International Approach

This is an approach whereby all the variations existing from one area to another within the same country are taken into consideration while comparing the system of education of a foreign country with one's own.

2.8 Gastronomic Approach

This is a method whereby both the diet and the eating habits of the people in a particular country are related to their educational practices. The approach is not very popular among modern Educational Comparativists.

2.9 The Field Study Approach

This approach is not new in the area of the subject. On this approach, Brickman (1966), cited by Alabi and Oyelade (1998), observed that: "Visitation of foreign countries, whether for the purpose of commerce, conversation, curiosity or conflict, goes back to ancient history. Travelers in all historical periods must have brought back facts and impressions concerning the cultures of the other countries they had visited. Included in their reports must have been comments relating to the young and their upbringing. They may also have made some remarks regarding the similarities and differences in the ways of educating children. Some, indeed, may have arrived at conclusions involving the expression of value judgments." In using this approach for studying Comparative Education, Halls (1965), cited by Alabi and Oyelade (1998), identifies three stages in the field study approach. They are:
1. Preparatory stage
2. Investigatory and analytical stage as well as
3. Evaluatory and comparative stage.

Preparatory Stage

This is the stage in which the investigator has to prepare himself very well before traveling to his country of interest. He has to familiarize himself with the country he wants to visit by reading very extensively about it.

Investigatory and Analytical Stage

At this stage, the researcher has to formulate some hypotheses on the educational practices of the country he wants to study. The formulation of these hypotheses will give him a focus on what to look for.

Evaluatory and Comparative Stage

At this stage, the investigator, after coming back from his travel, examines the educational practices of the country he has visited in relation to those of his own country, with a view to establishing the similarities as well as the differences existing in the educational practices of the two countries. It is also at this stage that the hypotheses formulated earlier will either be rejected or accepted. The field study approach, unlike the area study approach, concerns itself with the study of the educational systems of many countries at the same time. It also involves visiting the foreign countries of interest to enable the investigator to make an objective comparison between the foreign educational practices and those of his country.

2.10 The Scientific Approach

This is an approach in which the study of Comparative Education is carried out empirically by formulating hypotheses, defining the important concepts, and setting out the variables as well as the conditions for establishing the validity of the hypotheses formulated. Since in any scientific research data collection and its interpretation with the help of statistical analysis are very important, these must not be lacking in the study of Comparative Education, so as to enhance the quality and credibility of the results of the investigation.

2.11 The Integrated Approach

This is an approach in which other disciplines, such as history, philosophy, geography, economics, anthropology and statistics, are integrated into the study of Comparative Education because of their usefulness. As has already been stated, it is not possible for Comparative Education as a discipline to stand on its own, as it has to draw from other subjects, including the disciplines mentioned above.

2.12 The Philosophical Approach

A Russian philosopher by the name of Sergius Hessen was the first to apply a philosophical approach to the study of Comparative Education, in his book published in 1928 and titled "Kritische Vergleichung des Schulwesens der anderen Kulturstaaten". In the book, he chose four main philosophical problems: (a) compulsory education, (b) the school and the State, (c) the school and the Church, and (d) the school and economic life. He analysed the underlying principles and later followed this with a critical account of modern legislation in many countries. Kosemani (1995) believes that the philosophical approach is a step forward in solving the problems of the national character approach. According to him, there are two major problems involved in the application of the philosophical approach to the study of Comparative Education:
(a) Differences in emphasis, as a result of which it may be difficult to use the same criterion (national ideology) for the comparison.
(b) There are many countries without clear-cut national ideologies.
From the above, it could be deduced that with the philosophical approach, hypotheses could be formulated, tested and empirically validated for a better explanation of the educational practices of various countries.

2.13 The Comparative Approach

In this approach, the reader must not be left to do the comparison of the various educational practices by himself; rather, the comparison and conclusion have to be made by the investigator. Data on the educational practices to be compared must have been gathered and reviewed. In addition, hypotheses should have been formulated to assist in the gathering of the data. Then, the educational practices of the country under study are put side by side with the educational practices of the other country slated for comparison. The next stage after juxtaposition is the comparison of the educational practices of the countries that have been put side by side. It is at the stage of comparison that the hypotheses formulated earlier will be rejected or accepted.

UNIT 3: COMPARATIVE STUDY IN EDUCATION

3.1 EDUCATIONAL SYSTEM OF PAKISTAN

Education in Pakistan is overseen by the government's Ministry of Education and the provincial governments, whereas the federal government mostly assists in curriculum development, accreditation and the financing of research. Article 25-A of the Constitution of Pakistan obligates the state to provide free and compulsory quality
education to children of the age group 5 to 16 years: "The State shall provide free and compulsory education to all children of the age of five to sixteen years in such a manner as may be determined by law."

The education system in Pakistan is generally divided into five levels: primary (grades one through five); middle (grades six through eight); high (grades nine and ten, leading to the Secondary School Certificate or SSC); intermediate (grades eleven and twelve, leading to a Higher Secondary (School) Certificate or HSC); and university programs leading to undergraduate and graduate degrees. The literacy rate ranges from 97% in Islamabad to 20% in the Kohlu District. Between 2000 and 2004, Pakistanis in the age group 55-64 had a literacy rate of almost 30%, those aged 45-54 had a literacy rate of nearly 40%, those aged 25-34 had a literacy rate of 50%, and those aged 15-24 had a literacy rate of more than 60%. These data indicate that, with every passing generation, the literacy rate in Pakistan has risen by around 10%. Literacy rates vary regionally, particularly by sex; in tribal areas, female literacy is 7.5%. Moreover, English is fast spreading in Pakistan, with 18 million Pakistanis having a command of the English language, which makes Pakistan the 9th largest English-speaking nation in the world and the 3rd largest in Asia. On top of that, Pakistan produces about 445,000 university graduates and 10,000 computer science graduates per year. Despite these statistics, Pakistan still has one of the highest illiteracy rates in the world.

SOCIAL STRUCTURE

The educational system in Pakistan is divided into five major levels. Pre-university education consists of four levels: the primary level (grades one to five), the middle level (grades six to eight), the high level (grades nine and ten, culminating in matriculation), and the intermediate level (grades eleven and twelve, leading to a diploma in arts or science). There is also a university level, which leads to undergraduate and graduate degrees. The Pakistani educational system is highly centralized. The Ministry of Education is in charge of coordinating all institutions
involved in academic and technical education up to the intermediate level. For education programs above that level, there is a government-designated university in each of the four Pakistani provinces of Sindh, Punjab, Balochistan and the North West Frontier. These universities are responsible for coordinating the instruction and examinations of all post-secondary institutions in their respective provinces. Apart from the Ministry of Education, other ministries may oversee certain degree programs of relevance to their activities. Private and non-profit schools and universities have begun to appear in Pakistan. These include the Lahore University of Management Sciences and the Aga Khan Medical University in Karachi. As privately funded universities, they provide an opportunity for higher education to a small percentage of people who do not have a chance to pursue their studies at publicly funded universities, which have limited annual admissions.

ADMINISTRATIVE STRUCTURE

Education is administered at two levels: national and provincial. At the national level, the Central Ministry of Education and Scientific Research is responsible for the formulation of national education policy. It also helps the provincial governments with the development and implementation of the Five Year Plans of Education. In this regard, it acts mainly in an advisory capacity. An annex presents the organizational structure of the Central Government. At the provincial level, there is a Department of Education in each of the four provinces. Although the organization of educational administration varies considerably from province to province, it is the provincial authorities that are solely responsible for the public school system and that provide assistance for, and supervision of, private institutions within their respective jurisdictions. It should be noted that besides the public schools, there is a large number of private (non-government) schools. They operate at all levels, but their number is particularly significant at the post-primary level. Universities are based (in a way only) on the British pattern and enjoy a great deal of autonomy. There are both central and provincial universities. Establishment of private universities is forbidden by law.

There is a University Grants Commission to advise the Central Government on university affairs, particularly with regard to planning, development and financing on the one hand, and the maintenance and improvement of academic standards on the other.

UNIT 4: COMPARATIVE PERSPECTIVES

4.1 COMPARISON, CAPACITY & PARTICIPATION ISSUES IN PAKISTAN'S EDUCATION SYSTEM

Pakistan's education system focuses strongly on primary education. Despite this concentration, however, there are still many children between 5 and 9 years of age who are not attending school, and it would appear that the primary system needs to expand if universal primary enrolment is to be achieved. Other countries reviewed have significantly larger proportions of children of primary age in their primary education programmes. At the same time, the system's ability to accommodate students who wish to continue their education beyond the primary level is relatively low, which has both economic and social implications for Pakistan's future. Balancing growth at the primary level with growth at the higher levels of education should be a priority. Other countries provide greater opportunities for students to proceed beyond primary-level education. In fact, both Sri Lanka and Iran provide as many student places per grade at the lower secondary level as they do at the primary level; Pakistan's percentage is much lower (46%). The average number of upper secondary places is very low (28%) compared to lower secondary. As a result, a smaller percentage of students in Pakistan than in other countries are able to continue their education at the upper secondary level.

How close, then, is Pakistan to achieving universal primary education? The Net Enrolment Ratio (NER) provides the answer. The NER is the ratio of the number of students at a level of education who are of the official age for that level to the population of that age. A value of 100% means that universal primary education has been achieved.
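The NER described above is a simple ratio, which can be sketched as a short computation. The enrolment and population figures below are illustrative assumptions, not official statistics:

```python
def net_enrolment_ratio(enrolled_official_age, population_official_age):
    """Net Enrolment Ratio (NER): percentage of the official-age
    population for a level that is actually enrolled at that level."""
    return 100.0 * enrolled_official_age / population_official_age

# Illustrative figures for primary-age (5-9) children, not official data:
ner = net_enrolment_ratio(enrolled_official_age=12_400_000,
                          population_official_age=20_000_000)
print(f"Primary NER: {ner:.0f}%")  # prints "Primary NER: 62%"
```

A value of 100% would mean that every child of official primary age is enrolled at the primary level, i.e. universal primary education.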

Pakistan's NER at the primary level is 62%. In other words, 62% of children five to nine years of age were attending primary education in 2005/06. The NER for primary education ranged from almost 80% in the Islamabad Capital Territory (ICT) to under 50% in Balochistan. An NER was not available for Azad Jammu and Kashmir (AJK) or for the Federally Administered Northern Areas (FANA). Some 35% of children 10 to 12 years of age were studying at the middle elementary level of education; 23% of children aged 13 and 14 were studying at the secondary level; and fewer than 10% of teenagers 15 and 16 years of age were studying at the higher secondary level.

SCHOOL ADMINISTRATION
Pakistan has both public and private sector educational institutions, and a larger proportion of its youth attends private institutions than in many other countries. Private institutions enroll 31% of students in basic education (pre-primary through higher secondary). In urban centers, private schools account for more students (51%) than the public sector (49%). The situation is reversed in rural areas, however, where over 80% of students attend public schools. In comparison with other countries, private basic education in Pakistan enrolls more students than in most. In fact, only 10 countries with relatively large populations have a higher percentage of students in private primary education, led by the Netherlands (69%) and Lebanon (66%).

GIRLS' EDUCATION
The Gender Parity Index (GPI) is defined as the ratio of females to males; a GPI of 1 generally indicates parity between the sexes. Pakistan's school-age population has a larger number of boys than girls. In 2006, there were 14 million girls studying in basic education in Pakistan, compared to 18.3 million boys. In other words, there were over 4 million more boys than girls.
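Using the enrolment figures just given, the GPI for basic education follows directly from its definition (the function name here is an illustrative assumption):

```python
def gender_parity_index(female_enrolment, male_enrolment):
    """GPI: ratio of female to male enrolment; 1.0 indicates parity."""
    return female_enrolment / male_enrolment

# Pakistan, 2006 (figures from the text): 14 million girls, 18.3 million boys.
gpi = gender_parity_index(14_000_000, 18_300_000)
print(round(gpi, 2))  # 0.77
```

A GPI well below 1, as here, indicates that girls are under-represented relative to boys.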
SCHOOL FACILITIES
Many schools are in need of better facilities to improve the teaching environment. For instance, 9% of primary schools do not

have a blackboard, 24% do not have textbooks available for pupils, and 46% do not have desks for their students. Private primary schools are better equipped with desks and blackboards, but overall, almost a quarter of primary schools in both the public and private sectors do not have any textbooks.

UNIT 5: EDUCATIONAL SYSTEMS OF JAPAN
Pre-School Education in Japan (3-5 Years)
Pre-school education in Japan normally takes place in kindergarten schools. Its primary objective is the all-round development of Japanese children. The curriculum for this level of education includes: Health, Social Studies, Nature Study, Language, Music, Art, Arithmetic, Writing, Reading, Songs, Tales and Physical Education.
Japanese Elementary Education (6 Years)
Elementary education in Japan was initially organized as a four-year programme. In 1886, this level of education was made free and compulsory, and in 1908 compulsory elementary education was extended from the original four years to six. According to the 1972 Education Reform, education in Japan aims at the following: (a) to help people acquire the abilities needed for building a satisfactory and spontaneous life, and (b) to help people adapt to social reality and solve difficulties creatively. From these general aims, the following specific objectives are derived: (a) the development of basic abilities in young people side by side with a set of specified vocational skills, (b) the preparation of students to cope flexibly with rapid progress in science and technology, and (c) the preparation of students for social life.

Japanese elementary schools are designed for children aged 6-12 years. About 97% of Japanese children attend public elementary schools, while only about 3% go to private elementary schools. The curriculum for this level of education includes: Japanese Language, Social Studies, Arithmetic, Science, Music, Art and Craft, Home Education, Physical Education, Moral Education (in public schools) and Religious Education (in private schools).
Secondary Education in Japan
This level of education is divided into two cycles:
(a) Lower or Junior Secondary: This cycle of secondary education is free and compulsory, like elementary education. It lasts for 3 years and is designed for children aged 12 to 15 years. About 97% of Japanese children of this age attend public junior secondary schools, while about 3% go to private ones. The primary objective of this level of education is the all-round development of Japanese children and continuity with Japanese elementary education. The curriculum includes: Japanese Language, Social Studies, Mathematics, General Science, Vocational and Home Education, Foreign Language, and vocational subjects such as Agriculture, Industry, Commerce and Fisheries.
(b) Upper Secondary: It should be noted that upper secondary schooling, like pre-school education, is neither free nor compulsory for Japanese children. The primary objective of this cycle of secondary education is to give general and specialized education to the students. Students wishing to enter an upper secondary school have to sit an entrance examination, and in addition to paying school fees, students have to buy textbooks recommended by the Ministry of Education. There are basically two types of upper secondary school in Japan: 3-year full-time upper secondary schools, and 4-year part-time and correspondence upper secondary schools.
This cycle of secondary education is terminal in the sense that its graduates can either enter the workforce with their qualification or seek admission into a tertiary institution. The upper secondary

education curriculum includes: Japanese Language, Social Studies, Ethics, Civics, Political Science, Economics, Mathematics, Physical Education, Fine Arts, Music, Handicrafts, Geography and, finally, Home Education for girls.
Teacher Education in Japan
Teachers for kindergarten schools are trained in teacher training institutions, while secondary school teachers are trained in the universities. In the same vein, teachers for Japanese higher institutions are trained in the universities. It should be noted that before a person can be appointed to teach in any public tertiary institution, he or she is expected to have at least a Master's degree in the area in which he or she wants to teach.
Adult Education
Adult education in Japan is regarded as social education. This kind of education is organized by the Ministry of Education for Japanese citizens who are not in the formal school system. To this end, the Ministry of Education provides correspondence courses in vocational, technical, agricultural, fishery as well as forestry subjects. The Ministry's efforts are complemented by radio and television programmes, particularly in the area of general education. Non-governmental bodies also assist in Japanese adult education.
Special Education
The education of special children received official recognition in Japan in 1973. Like other levels of education, special education has its own objectives, which include the following: (a) to identify the affected children and give them appropriate educational programmes, (b) to establish national centres for research and training, (c) to integrate, whenever possible, the handicapped with other children, and (d) to render other possible assistance to handicapped
children.
Tertiary Education
In Japan, there are three categories of tertiary institution: (a) the university, (b) the junior college and (c) the college of technology. In the university, degrees are awarded and students spend between 4 and 6 years, depending on their course of study. At the junior college, unlike the universities, degrees are not awarded. Colleges of technology are the third form of tertiary institution in Japan; in these institutions, educational technology and engineering education are provided. This kind of education is primarily designed for graduates of junior or lower secondary education and has a duration of five years.
Administration and Financing of Education
All public schools in Japan are highly centralized. Generally, schools are administered in the following order: (a) the Ministry of Education at the top, (b) the Prefectural Education Boards, and (c) the Municipal Education Boards at the grassroots or local level. At the national level, the Ministry of Education, Science and Culture assists in the preparation of the education budget, the formulation of educational laws, and the maintenance of educational standards. The Ministry of Education, Science and Culture is also saddled with the responsibility of approving the establishment of higher institutions and of supervising the various tertiary institutions in the whole of Japan.

UK
The Elementary School
This school is designed for children aged 5-14 years. In order to ensure that the majority of children attended elementary school, elementary education in England was not only tuition-free but also compulsory for all children between the ages of 5 and 14 years.
Secondary School

This school was designed for children who had already completed elementary education and whose parents were rich enough to pay the school fees. Unlike elementary education, it was not compulsory at all. After graduation, secondary education offered its products clerical jobs, among others.
Preparatory and Public Schools
These schools were very expensive and were meant for children of the upper class (the aristocrats). These preparatory and public schools gave birth to the establishment of both the University of Oxford and the University of Cambridge.
Types of Schools in England
The following types of schools are in existence in England.
Nursery Education
A nursery school is a school designed for children aged 3-5 years. A nursery school also serves as a temporary home for children whose parents are working. Nursery education can be dated back to 1850, through the efforts of Friedrich Froebel as well as Maria Montessori. It should be noted that day-nurseries, where the children of working parents are kept, are not the same thing as nursery schools: real nursery education is for children aged 3-5 years, is part of the school system, and is tuition-free. The Hadow Report of 1933 and the Plowden Report of 1967 greatly enhanced the development and improvement of nursery education in England.
Primary Education
The Balfour-Morant Act of 1904 gave the following as the objectives of primary education in England: (a) recognising that the child from 7 to 11 is a total being whose character, intelligence and physical abilities are to be moulded and trained; (b) arousing in the pupils a lively interest in man's ideals, achievements, literature, history and language; (c) developing in the pupils an awareness of their limitations; and (d) demonstrating to the pupils how to acquire knowledge and learn for themselves. Primary schools in England could be divided into: (a) elementary and (b) higher elementary schools.
According to statistics, about 93 percent of children up to the age of twelve years were in elementary school. The Fisher Education Act of 1918 made primary education compulsory for children up to the age of fourteen years, and it also recommended the reorganization of primary education. Simply put, primary education in England can be described as the education of young children below the age of eleven years. For the purpose of administration, all public primary schools were administered by the Local Education Authorities. It was also the responsibility of the Local Education Authorities to control all forms of secular education in the privately owned (voluntary) primary schools. There are also some primary schools in England called Direct Grant Schools, in which parents pay school fees. The primary school head is always given a contract appointment, and inspectors visit schools only on request. Primary school subjects include: History, Geography, Nature Study, Crafts, Arts and Physical Education, French, and Religious Education. The extra-curricular activities include gymnastics, swimming and music, among others. The 1944 Act raised the compulsory school age in England to 15 years. It also recommended that the number of pupils in each class be reduced to make classes more manageable. The Plowden Report of 1967 recommended a change in the age at which pupils may transfer, from eleven to twelve years. It also recommended junior schools for pupils aged 8-12 or 9-13 years, to enable the Local Authorities to reorganize the secondary schools better.
Secondary Education
A secondary school in England may be a day or boarding school which offers each of its scholars, up to and beyond the age of 16, a general education (physical, mental and moral) given through a complete graded course of instruction of wider scope and more advanced degree than that in elementary schools.
Four types of secondary education can be identified in England: (a) Secondary Modern Schools, (b) Secondary Grammar Schools,
(c) Technical High Schools and (d) Comprehensive Schools.
(a) Secondary Modern Schools are designed for students who are not academically inclined after their primary education. They cater for the secondary education of academically weak students up to the age of fifteen (15) years.
(b) Secondary Grammar Schools are designed for students who are academically inclined after their primary education. In addition to giving sound formal education to the students, these schools also serve as the custodian of English tradition.
(c) Technical High Schools are provided for students who have an intention of working in industry later in life. In other words, these schools are established to cater for the needs of commerce and industry. The products of these schools are admitted into the faculties of engineering for engineering courses in the British universities.
(d) Comprehensive Schools are established to cater for children aged 11-18 years. The students in these schools offer the same subjects up to their second year. At the end of their third year, the students are expected to choose, apart from English Language and Mathematics, three subjects which they would like to study in their last two years. From the third year onwards, the students are also exposed to one vocation or another which they may want to take up later in life. To assist the students, careers officers are provided by the school for the purpose of counselling the students on their future vocation.
Technical or Further Education in England
These are institutions provided for young persons for the purpose of assisting them to develop their various aptitudes and training them to become responsible adults. Such institutions offer, among others, physical, practical as well as vocational training.
Increased interest in the development of British industry after World War II, and in the training of skilled manpower in the area of technology, greatly contributed to the development of further or technical education in England. At the end of the course, the students could be presented for
the examinations of the City and Guilds of London Institute or other related professional examinations. For children under the age of sixteen years, tuition is free, while those who are above the age of 16 and are working have to pay fees. Technical colleges or further education institutions are run on both part-time and full-time bases.
Teacher Education in England
Teacher education is the professional training designed for teachers of all categories, from the nursery school to the university. Perhaps the first teachers' college for the training of secondary school teachers was the College of Preceptors, which was founded in 1846. With effect from 1904, the Local Education Authorities were allowed to establish their own teachers' colleges, and from 1921 the British universities included teacher education programmes in their curricula. For the degree in education, students would spend three years, with the fourth year devoted to teaching practice, after which a university diploma or certificate in education would be awarded. In 1943, the Board of Education recommended that more teachers' colleges be founded so as to solve the problem of the inadequate supply of qualified teachers. The McNair Report of 1944, among others, recommended that universities and teacher training colleges should work hand in hand for the general improvement of teacher education. Before the Education Act of 1944, primary school teachers were of four categories: (a) certificated, (b) uncertificated, (c) supplementary and (d) specialist teachers. In the case of secondary schools, the teachers were expected to specialize in a particular subject. While the teachers for the old elementary schools were trained in two-year colleges after their secondary education, the secondary school teachers were trained in a one-year teachers' diploma course at the university departments after graduating either from the Faculty of
Arts or Science. On the other hand, the teachers of independent public schools were degree holders.
Adult Education in England
Adult education in England can be described as education designed for people who have left school: the provision of adequate facilities for leisure-time occupation, in organized cultural training and recreative activities, for persons who are above compulsory school age and can benefit from such educational programmes. The beginnings of adult education in both England and Wales can be traced to the activities of British philanthropists, who initiated the idea by first establishing Sunday schools for the literacy education of both children and adults. Many children and adults profited from this kind of education, and the success recorded led to the establishment of the London Mechanics' Institution for the training of mechanics in 1823. Within a very short time, similar institutions were sited in both England and Wales. The formation of the Workers' Educational Association, which was affiliated to Oxford University, also contributed to the development of adult education in England; the association used to organize tutorial classes for its members. With effect from 1907, the Board of Education in England started to assist the university tutorial classes for the general enhancement of adult education. Also, for the promotion of adult education, an Adult Education Committee was set up in 1921; primarily, the committee was to assist in the co-ordination of all the voluntary adult education agencies. To crown it all, the Open University was founded between 1960 and 1970 to provide better education for both adults and workers. With Open University education, workers in particular were able to improve their working conditions, even though such Open University education was not tuition-free.
University Education in England
Higher institutions in both England and Wales include the universities, the colleges of education and the polytechnics. The most famous universities are the University of Oxford, founded in 1185, and the University of Cambridge, founded in 1230. These two oldest universities were
founded by the Church of England. In order to break the monopoly of Oxford and Cambridge, the University of London was established in 1828. The universities charge school fees, from which they are financed in addition to financial aid from the public. Each university is autonomous in respect of admission, examinations and the award of degrees, among others.
Administration of Education in England
The Education Minister is appointed by the Prime Minister. The Minister has Permanent Secretaries to help him carry out his official duties. The Minister is the controller as well as the director of the Education Board, and he has the power to reorganize the schools at any time. The Local Education Authorities are elected bodies for the purpose of controlling the schools under them; they also have voluntary schools under them. They employ teachers and pay their salaries as well. Each school has the power to select the textbooks for the use of its pupils. However, unlike primary and secondary schools, higher institutions are not under the control of the Ministry of Education.
Financing of Education
Money is voted for the Ministry of Education from the national revenue by Parliament. The Minister of Education also disburses part of the money in the form of grants to the Local Education Authorities.

USA
The education levels in America include: (a) Nursery Education, (b) Elementary or Primary Education, (c) Secondary Education, (d) Teacher Education and (e) University and Adult Education.
Nursery Education
At the beginning, between 1868 and 1873, nursery education was part of the primary school. By 1888, nursery education had spread to

many places in America. The Lanham Act of 1940 also enhanced the development of nursery education in America by providing federal government subventions for nursery education. Later, individuals who had an interest in the education of children started to partake in the running of nursery schools, and the churches also participated in running nursery schools.
Primary Education
Primary education in America is the education given to children over a duration of six years. The purposes of American primary education include: (a) turning out well-adjusted citizens and (b) helping the children to be active participants in the building of their own lives and to understand the roles expected of them in establishing a better American society. The primary school subjects include: Mathematics, Science, Geography, History, Social Studies, English Language, English Literature, French, German and Spanish. However, religious subjects are not included in the school curriculum, as Americans have freedom of worship. In the primary schools, the promotion of pupils is based on continuous assessment and not on any promotion examination. It is the duty of the Local School Boards to provide some of the school materials. A public primary school is headed by the principal, its administrative head. While the primary school teachers are expected to have a university degree, the principals are in addition expected to have a master's degree in educational administration and supervision. The exclusion of religious instruction from the public school curriculum was one of the reasons that forced the Catholics to establish their own schools, where religious instruction was included in the curriculum. By implication, there are both
private and public primary schools in America.
Secondary Schools in America
Secondary education in America is the type of education given to adolescents on the basis of three years in the junior secondary school and three years in the senior secondary school. This can be referred to as the 3-3 secondary education system. The aims and objectives of American secondary education include: (a) creating a strong egalitarian society where everybody will have equal opportunity, (b) preparing students for survival in the future, and (c) preparing students for their colleges and universities. Some states in America provide free secondary education and free textbooks for their citizens, particularly up to the age of sixteen years. The products of the primary schools are admitted into the secondary schools, and there are both public and private secondary schools in America. The Land Ordinance of 1785, which made it mandatory for each township to set aside its sixteenth section for the use of education, as well as the Northwest Ordinance of 1787, greatly enhanced the development of education in America. However, the problem of sub-standard secondary schools and the desire to provide secondary education for many American children led to the introduction of the junior high school. In the junior high school, the students are expected to spend three years after their primary education, that is, between the ages of 12 and 15 years. After successfully completing junior secondary education, the students start their senior high school education, which is meant for students who are academically inclined. The public senior high schools are tuition-free. The provision of learning materials for schools and the general financing of schools are the responsibilities of the local school districts. In America, private high schools or secondary schools are also allowed by the constitution. However, unlike in the public high schools,
tuition is not free and the teaching of religious education is allowed. It is on record that although America has started operating the 6-3-3-4 education system, the old 8-4 system (eight years of primary education and four years of secondary education) is still in operation.
Teacher Education in the United States of America
Teacher education in America, as elsewhere, refers to the professional training given to would-be teachers. The aims and objectives of American teacher education include: (a) preparing teachers for the needs and aspirations of America as a democratic nation, and (b) preparing teachers who will later assist in the training of American children for the purpose of promoting their culture. The establishment of Jefferson College in Washington, among others, in the 1800s marked the beginning of teacher education in America. The preparation of primary school teachers is done by the normal schools, which are recognized by the State Boards of Education for this purpose. The subjects offered in these training institutions include: Administration, Psychology, Philosophy and History of Education. On the other hand, the secondary school teachers are expected to be university degree holders after a period of four years either in a college or in a university. In most cases, teachers' appointments are on a contract basis, renewable yearly, provided the teacher concerned is still interested in working in his school. At the same time, the school district board of education has the constitutional power to terminate the contract appointment of any of its teachers.
American University Education
In America, higher education is provided in the colleges of education, higher technical institutes and universities. In 1862, the American Government passed the Morrill Act, which made it compulsory for Americans to make land available to the American

Federal Government for the development of universities and higher institutions of learning. There are two major categories of higher education in America: (a) the state universities and colleges, which are maintained by the states, and (b) the independent universities and colleges, which are run by various churches and private individuals. In these private colleges and universities, high school fees are charged. A degree programme lasts for four years.
Adult Education in America
The beginning of adult education in America can be traced to the establishment of the Lyceum in Massachusetts in 1826. The Smith-Lever Act of 1914 as well as the founding of the Adult Education Association of the United States in 1951 also greatly contributed to the development of adult education in America. Adult education in America is run by private individuals such as lawyers, physicians, architects, teachers and musicians for the purpose of self-culture, community instruction as well as the mutual discussion of matters of common public interest. In 1906, university extension was started, and this has since spread to most of the universities in America.
Technical Education
There had been some technical institutions as early as the middle of the 19th century, but there was no serious attempt to promote technical education until the Moscow Technical School performed creditably well at the international exhibitions in the 1870s. Thereafter, more technical institutions began to spring up in America. The Morrill Act of 1862 also assisted in the development of technical education, and private individuals started founding both commercial and business colleges. The Smith-Hughes Act, among other things, recommended that a Federal Board of Vocational Education be set up. It was on the
basis of this that the Federal Board of Vocational Education was established, and a substantial amount of money was set aside by the federal government for the general promotion of vocational and technical education throughout America.
Administration of Education in America
Education in America is decentralized. Therefore, it is the responsibility of each state as well as of private individuals to take care of their schools. In 1867, the National Office of Education was set up; it is headed by the Education Commissioner, who is an appointee of the President of America. The federal government assists the state governments in the funding of technical and vocational education, and the state universities are financially aided by the Federal Government. At the state level, there is a state department of education under the headship of an Education Director, who is elected by the people within the state for a period of two to four years. Locally, each local government has a local board of education, usually headed by a Superintendent of Schools in the district. His duties include appointing teachers and other personnel who will be working with him; he also works on the finances of the schools founded by the local government.
Finance of Education in America
In the whole of America, less than 60 per cent of the total cost of both public primary and secondary schools comes from the taxes levied by the local school boards. Also, the state government sets aside about 40 per cent of its annual budget for the running of the public schools; the bulk of this money is generated from state taxes as well as the taxes paid by the state's workers. In the private schools, from the primary school up to the university, the students pay school fees in addition to the taxes paid by their parents. Also, some well-to-do individuals in America assist the private schools financially.

NIGERIA
Teacher Education at the Primary School Level
The history of teacher training institutions in Nigeria can be dated back to 1859, when the first teacher training college was founded in Abeokuta by the Church Missionary Society (Fafunwa, 1974). The college was moved to Lagos in 1867 and later transferred to Oyo in 1896, where it became St. Andrew's College, Oyo. Other Christian missions, such as the Baptist, the Wesleyan Methodist and the Presbyterian Church of Scotland, among others, also founded teacher training colleges. The students for the early teacher training institutions were taken from Standard VI for a two-year professional programme. Such pupils were expected to have been pupil teachers for about two years, to have passed the pupil teacher examination and to have acted as assistant teachers. The elementary training institutions for the lower primary school teachers lasted for a period of two years, leading to the award of the Grade III Teachers' Certificate, while the higher elementary training institutions also lasted two years, leading to the award of the Grade II Teachers' Certificate. However, both the Grade III and Grade II teachers' colleges have been phased out in many states of Nigeria, as the Nigeria Certificate in Education has become the minimum teaching qualification in all primary schools. In other words, only the colleges of education now produce the lowest cadre of teachers for the Nigerian primary schools.
Education Curriculum
The curriculum of the primary school teachers' institutions includes, among others, national service with an emphasis on military training and nation building, Ujamaa political education, school organization, educational psychology, adult education, youth leadership, academic subjects as well as teaching methodology.


Teacher Education at the Secondary School Level
In Nigeria, the Christian missions did not pay much attention to the training of secondary school teachers. They were mostly concerned with the training of primary school teachers. Any education beyond the primary level was regarded as superfluous, as they only needed interpreters and a few Nigerians who could serve them. However, the establishment of the Yaba Higher College in 1932 brought about the introduction of the diploma in education programme, which took care of secondary school teachers. Also, the University College, Ibadan, which was founded in 1948, introduced the diploma programme in education in the 1957/58 academic year. The university, in addition to its efforts on teacher training, started a one-year associateship course for Nigerian Grade II teachers in 1961, immediately after independence. The University of Nigeria, Nsukka, also introduced a degree programme in education in September 1961, with about fifty students. The first set of education students at Nsukka graduated in June 1964. The University of Ibadan introduced a degree in education in 1963, and Ahmadu Bello University in 1967. As of 2006, there were over 40 universities in Nigeria. Perhaps of all these universities, it is only in the universities of agriculture and technology that degrees in education are not offered. Holders of the SSCE or its equivalent spend four years, while holders of the GCE A' Level or the Nigeria Certificate in Education (NCE) or its equivalent spend three years for the first degree. Also, master's and doctorate degrees in education are now available in almost all the conventional universities. After independence, Advanced Teachers' Colleges were founded, initially by the Federal Government, but later the states started establishing their own Grade I colleges. Such colleges are now (a) federal colleges of education and (b) state colleges of education. A few are also owned by private individuals.
The duration in these colleges ranges from three to four years, depending on the qualification with which a candidate is admitted. Candidates with five GCE or equivalent passes spend only three years. The programme leads to the award of the Nigeria Certificate in

Education.

Teacher Education for Teachers in the Higher Institutions
Higher education, according to the National Policy on Education (1981), covers the post-secondary section of the national education system, given in universities, polytechnics and colleges of technology, including such courses as are given by the colleges of education, the advanced teacher training colleges, correspondence colleges and such institutions as may be allied to them. In Nigeria, teacher education for higher education teachers depends largely on the universities. However, the qualification required of Nigerian higher education teachers depends on the type of higher institution in which one works. The teachers working in the Nigerian universities are trained in the Nigerian universities or elsewhere. Before a teacher can be employed to teach in a university, he must have at least a master's degree in the relevant discipline. Also, teachers for polytechnics and colleges of technology are trained in the universities or polytechnics. A first degree holder or its equivalent could be appointed. However, as in the universities, master's and doctorate degree holders are preferred in the colleges of education. Moreover, a professional certificate in education is a must for all lecturers in the colleges of education, particularly for the few among them who did not study education.

AFGHANISTAN
Afghanistan lies in Central Asia, the largest continent. The country is bounded on the North by the U.S.S.R. (Western Turkistan), on the extreme North East by China (Eastern Turkistan), on the East and

South by Pakistan and on the West by Iran. It has an estimated area of 251,773 sq. miles (652,090 sq. km). Its extreme length from East to West is 770 miles, while its greatest width from North to South is over 500 miles. Pushto is the mother language. Next to Pushto is Persian, which is spoken by a considerable number of people. There are also 18 other dialects spoken by various groups, and Urdu is also spoken and understood in the bazaars of Kabul and Kandahar. The capital is Kabul, and the country is divided into 24 provinces, each under a governor. Two main historical periods, the pre-Islamic and the Islamic, have influenced the development of Afghanistan's educational system and have deeply influenced Afghan thinking as well as Afghan cultural patterns. The pre-Islamic period lasted from ancient times up to the seventh century A.D. During this period, the education curriculum was centered on the Vedas (Aryan religious books and the earliest Hindu sacred writings) and, later on, the Buddha's teachings. The primary aim of education during this period was to provide moral enlightenment for citizens. Grammar and astronomy were also given some attention. Education was for boys and men only and took place at the courts of the royal palaces. During the Islamic period, the Arabs reached Afghanistan and Islam became the predominant religion. The mosque became the center for education and the mullas (religious leaders) were the teachers. Instruction centered on Muhammad's teachings as found in the Koran, and included Islamic history and literature as well as grammar, logic and philosophy. Islamic education reached its peak in Afghanistan during the eleventh century. At this time, geography and mathematics were included in the curriculum. In 1904, the first modern school was established in Kabul over the objections of the Islamic clerics (mullas), and it was named Habibiyyah School after its founder, Habibullah Khan, who ruled the country between 1901 and 1919.
Habibiyyah School was at first patterned after the Aligarh Muslim University, India and it offered both religious and secular subjects. Habibullah also


founded teacher training colleges, a military academy and a school for army officers. The first vocational schools and a girls' school were opened, and a number of primary schools for boys were started in rural and urban areas. During this time, Habibiyyah School became a high school patterned after the French Lycee. Three more schools were established in 1923. This event laid the groundwork for co-education, which was just beginning to take root in the 1960s. The French educational pattern was introduced to Afghanistan in the 1920s through contact with Turkey. Students were sent abroad to study in France, Germany, Italy and Turkey. The Afghan graduates from these foreign universities were recruited to staff Afghan high schools. After 1929, students were sent to study in the United States and Japan, and after World War II, teachers from England and America were recruited to teach in Afghanistan. The United States became the predominant source of foreign educational assistance to Afghanistan. In 1954, Teachers College of Columbia University accepted a contract from the U.S. Agency for International Development to assist the government of Afghanistan in improving education through assistance to teacher education.

Educational Structure of Afghanistan
The education system in Afghanistan is divided into four general sections: primary, secondary, vocational and post-secondary. Secondary schools exist in Kabul and in the provincial capitals. Technical, commercial and medical schools also exist for higher education.

Primary Education
Primary education is compulsory and lasts for a period of six years, taught in the mother languages, Persian and Pushto: Persian in the first three classes (1 to 3) and Pushto in the second three (classes 4 to 6). The pupils are also taught Arabic in order to read the Koran, since 99% of the population are Muslims and the main religion is Islam. Schools are not co-educational; separate schools for girls were established. Primary education takes place either in village schools or in primary schools. The village school (grades 1 through 3) usually has only one teacher for its three grades. This teacher is always the village religious leader, and the village mosque serves as the school. In the primary school (grades 1 through 6), there is one teacher for each of the first three grades. In grades 4, 5 and 6, there is a special teacher for each subject. Primary teachers in the major cities are usually graduates of the teacher training colleges (grade 12). Outside the major cities, primary school teachers are most often graduates of the middle schools or the emergency teacher training colleges, and a small percentage are graduates of primary school only. The age of primary school pupils ranges from seven to 19 years. By 1966, there were 1,000 primary schools with 450,000 pupils in Afghanistan. The curriculum of primary schools in grades 1, 2 and 3 includes: the Koran, theology, reading of the mother tongue (either Pushto or Dari), handwriting, arithmetic, natural science and hygiene, drawing and handicraft, and physical education. The curriculum in grades 4, 5 and 6 also includes a second language (Dari or Persian) and history and geography, in addition to the subjects taught in the first three grades. Teaching in primary schools is based on memorization and rote learning.

Secondary Education
Under secondary education, there is a unit called the middle school and another called the Lycee. The middle school (grades 7, 8 and 9) prepares students for admission to the Lycee or for vocational training.
Students who successfully pass the primary school examination are qualified for admission into secondary schools; they may also gain admission into the vocational schools in Kabul, which train youths, or into the technical school. The middle school teachers should have been trained in the Higher Teacher College at Kabul (grades 13 and 14). The Lycee is the equivalent of an American high school, with grades 10, 11 and 12. Its main purpose, among others, is to prepare students for university education. By 1966, there were 150,000 students in a few hundred secondary schools in Afghanistan. The curriculum for middle schools includes: the Koran, theology, Pushto, Dari, Arabic, a foreign language (German or French), algebra, geometry, chemistry, physics, biology, history, geography, economics, drawing and physical education. The main emphasis is on mathematics, science, history, geography and languages. Also, the curriculum for the Lycee includes: the Koran, theology, Pushto, Dari, a foreign language, algebra, trigonometry, calculus, geometry, geography and logic. The main emphasis is on mathematics, natural sciences, social science and languages. As in the primary schools, the teaching method is memorization and rote learning.

Vocational Education in Afghanistan
After completing education in the middle school, students who are interested and qualified may go to the vocational schools in Kabul, which train youths in agriculture, commerce, theology, teaching, secretarial studies, and arts and crafts. In the agricultural Lycee, students are specially trained to develop agricultural production for the country. Vocational training is also provided in mechanical and crafts schools, which begin after primary education and continue through grade 10. These schools train students for mechanical, technical and craft occupations.

In the Islamic schools (grades 10 through 12), students concentrate on the Islamic religion. They are prepared to help in the interpretation of the law in the judicial department of the government, teach religion in the schools, serve as officials in the mosques or go on to further religious study at the university. In the same vein, the special schools for training teachers (grades 10 to 12) offer a three-year programme (two years of general studies and one year of professional studies). Due to the increasing demand for primary teachers, students at the emergency teacher training colleges are paid a small monthly allowance and provided free tuition, room, board, clothing and books. The technical school, the Afghan Institute of Technology (grades 10 through 13), is designed to train technicians; its training involves mechanics and mathematics. By 1966, there were 42 vocational schools with 13,201 students in Afghanistan.

Post-Secondary Education in Afghanistan
To further commercial training, there is the Institute of Industrial Management, which provides a three-year programme for students after the completion of grade 12 of the commercial Lycee. The institute trains students for managerial positions in industry, banks and public administration. Kabul University was established in 1946, and its first faculty was that of medicine. The university is composed of the following faculties: Islamic Law, Letters, Law and Political Sciences, Economics, Sciences, Medicine, Pharmacy, Education (closely associated with the Institute of Education), Agriculture and Engineering. The Polytechnic Institute is also part of the university. There is co-education in all the faculties except Engineering and Islamic Law. In 1963, a College of Medicine was formally established in Jalalabad.


A six-year course beyond grade 12 is required for the M.D. degree (M.B.B.S.). Another post-secondary institution, the Academy of Teacher Training, was established in 1964. The academy serves as a demonstration school for teacher education trainees, who are university graduates and spend one year teaching and guiding the DMA students. These teacher education students learn how to direct and supervise the DMA students preparing to become teachers.

Adult Education
Adult education is organized by the Ministry of Education and is designed for workers, so it takes place after the day's work.

Administrative Organization
Under Article 34 of the Afghan constitution adopted in 1964, it is the government's responsibility to prepare and implement a universal education programme. All matters dealing with education are under the jurisdiction of the Royal Afghan Ministry of Education. The Minister of Education, who is also a member of the Prime Minister's cabinet, is the chief administrative officer, and he is assisted by two deputy ministers. In addition, the presidents of the various departments are under the deputy ministers. Afghanistan is divided politically into provinces, and each province has an educational director who is responsible to the central ministry and who is the chief administrative officer for all provincial education matters. The president of the University of Kabul is directly responsible to the Education Minister.


EDUCATIONAL RESEARCH

Defining a Research Problem
Defining a research problem is the fuel that drives the scientific process; it is the foundation of any research method and experimental design, from true experiment to case study (Martyn Shuttleworth, 2008). The research problem is one of the first statements made in any research paper and, as well as defining the research area, should include a quick synopsis of how the hypothesis was arrived at. Operationalization is then used to give some indication of the exact definitions of the variables and the type of scientific measurements used. This leads to the proposal of a viable hypothesis. As an aside, when scientists are putting forward proposals for research funds, the quality of their research problem often makes the difference between success and failure.

STRUCTURING THE RESEARCH PROBLEM

Look at any scientific paper, and you will see the research problem, written almost like a statement of intent. Defining a research problem is crucial in defining the quality of the answers, and determines the exact research method used. A quantitative experimental design uses deductive reasoning to arrive at a testable hypothesis. Qualitative research designs use inductive reasoning to propose a research statement.

DEFINING A RESEARCH PROBLEM Formulating the research problem begins during the first steps of the scientific process. As an example, a literature review and a study of previous experiments, and research, might throw up some vague areas of interest. Many scientific researchers look at an area where a previous researcher generated some interesting results, but never followed up. It could be an interesting area of research, which nobody else has fully explored. A scientist may even review a successful experiment, disagree with the results, the tests used, or the methodology, and decide to refine the research process, retesting the hypothesis. This is called the conceptual definition, and is an overall view of the problem. A science report will generally begin with an overview of the previous research and real-world observations. The researcher will then state how this led to defining a research problem.


THE OPERATIONAL DEFINITIONS
The operational definition is the determination of the scalar properties of the variables. For example, temperature, weight and time are usually well known and defined, with only the exact scale used needing definition. If a researcher is measuring abstract concepts, such as intelligence, emotions and subjective responses, then a system of measuring numerically needs to be established, allowing statistical analysis and replication. For example, intelligence may be measured with IQ, and human responses could be measured with a questionnaire scored from 1 (strongly disagree) to 5 (strongly agree). Behavioral biologists and social scientists might design an ordinal scale for measuring and rating behavior. These measurements are always subjective, but allow statistics and replication of the whole research method. This is all an essential part of defining a research problem.

EXAMPLES OF DEFINING A RESEARCH PROBLEM
An anthropologist might find references to a relatively unknown tribe in Papua New Guinea. Through inductive reasoning, she arrives at the research problem and asks, "How do these people live, and how does their culture relate to nearby tribes?" She has found a gap in knowledge, and she seeks to fill it, using a qualitative case study, without a hypothesis. The Bandura Bobo Doll Experiment is a good example of using deductive reasoning to arrive at a research problem and hypothesis. Anecdotal evidence showed that violent behavior amongst children was increasing. Bandura believed that higher levels of violent adult role models on television were a contributor to this rise. This was expanded into a hypothesis, and operationalization of the variables and a scientific measurement scale led to a robust experimental design.

Data analysis
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making.
Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, in different business, science, and social science domains.


Data mining is a particular data analysis technique that focuses on modeling and knowledge discovery for predictive rather than purely descriptive purposes. Business intelligence covers data analysis that relies heavily on aggregation, focusing on business information. In statistical applications, some people divide data analysis into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data and CDA on confirming or falsifying existing hypotheses. Predictive analytics focuses on application of statistical or structural models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data. All are varieties of data analysis. Data integration is a precursor to data analysis, and data analysis is closely linked to data visualization and data dissemination. The term data analysis is sometimes used as a synonym for data modeling.

Types of data
Data can be of several types:

Quantitative data: the data is a number. Often this is a continuous decimal number to a specified number of significant digits; sometimes it is a whole counting number.
Categorical data: the data is one of several categories.
Qualitative data: the data is a pass/fail or the presence or lack of a characteristic.

STATISTICAL ANALYSIS

Measures of Central Tendency


Introduction
A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are sometimes called measures of central location. They are also classed as summary statistics. The mean (often called the average) is most likely the measure of central tendency that you are most familiar with, but there are others, such as the median and the mode. The mean, median and mode are all valid measures of central tendency but, under different conditions, some measures of central tendency become more appropriate to use than others. In the following sections, we will look at the mean, mode and median, and learn how to calculate them and under what conditions they are most appropriate to be used.

Mean (Arithmetic)
The mean (or average) is the most popular and well-known measure of central tendency. It can be used with both discrete and continuous data, although its use is most often with continuous data. The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. So, if we have n values in a data set and they have values x1, x2, ..., xn, then the sample mean, usually denoted by x̄ (pronounced "x bar"), is:

x̄ = (x1 + x2 + ... + xn) / n

This formula is usually written in a slightly different manner using the Greek capital letter Σ (pronounced "sigma"), which means "sum of ...":

x̄ = Σx / n

You may have noticed that the above formula refers to the sample mean. So, why have we called it a sample mean? This is because, in statistics, samples and populations have very different meanings, and these differences are very important, even if, in the case of the mean, they are calculated in the same way. To acknowledge that we are calculating the population mean and not the sample mean, we use the Greek lower-case letter μ (pronounced "mu"):

μ = Σx / N
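As a quick check of the formula above, here is a minimal Python sketch; the data values are illustrative, not taken from the guide:

```python
# Sample mean: sum all the values, then divide by how many there are.
data = [4, 8, 15, 16, 23, 42]

sample_mean = sum(data) / len(data)
print(sample_mean)  # 18.0
```

For a population the computation is identical; only the interpretation (and the symbol, μ rather than x̄) changes.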


The mean is essentially a model of your data set: a single value that summarizes the whole set. You will notice, however, that the mean is not often one of the actual values that you have observed in your data set. However, one of its important properties is that it minimises error in the prediction of any one value in your data set. That is, it is the value that produces the lowest amount of error from all other values in the data set. An important property of the mean is that it includes every value in your data set as part of the calculation. In addition, the mean is the only measure of central tendency where the sum of the deviations of each value from the mean is always zero.

When not to use the mean
The mean has one main disadvantage: it is particularly susceptible to the influence of outliers. These are values that are unusual compared to the rest of the data set, being especially small or large in numerical value. For example, consider the wages of staff at a factory below:

The mean salary for these ten staff is $30.7k. However, inspecting the raw data suggests that this mean value might not be the best way to accurately reflect the typical salary of a worker, as most workers have salaries in the $12k to 18k range. The mean is being skewed by the two large salaries. Therefore, in this situation we would like to have a better measure of central tendency. As we will find out later, taking the median would be a better measure of central tendency in this situation. Another time when we usually prefer the median over the mean (or mode) is when our data is skewed (i.e. the frequency distribution for our data is skewed). If we consider the normal distribution - as this is the most frequently assessed in statistics - when the data is perfectly normal then the mean, median and mode are identical. Moreover, they all represent the most typical value in the data set. However, as the data becomes skewed the mean loses its ability to provide the best central location for the data as the skewed data is dragging it away from the typical value. However, the median best retains this position and is not as strongly influenced by the skewed values. This is explained in more detail in the skewed distribution section later in this guide. Median The median is the middle score for a set of data that has been arranged in order of magnitude. The median is less affected by outliers and skewed data. In order to calculate the median, suppose we have the data below:
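The original wage table did not survive extraction, so the figures below are illustrative only, but they reproduce the effect described above: one large salary drags the mean upward while the median stays near the typical worker.

```python
import statistics

# Nine salaries in the $12k-$18k range plus one outlier (all in $k); illustrative data.
salaries = [12, 13, 14, 15, 15, 16, 17, 17, 18, 90]

print(statistics.mean(salaries))    # 22.7 -> pulled up by the single outlier
print(statistics.median(salaries))  # 15.5 -> still typical of most workers
```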

We first need to rearrange that data into order of magnitude (smallest first):

Our median mark is the middle mark - in this case 56 (highlighted in bold). It is the middle mark because there are 5 scores before it and 5 scores after it. This works fine when you have an odd number of scores but what happens when you have an even number of scores? What if you had only 10 scores? Well, you simply have to take the middle two scores and average the result. So, if we look at the example below:

We again rearrange that data into order of magnitude (smallest first):

Only now we have to take the 5th and 6th scores in our data set and average them to get a median of 55.5.

Mode
The mode is the most frequent score in our data set. On a chart, it is represented by the highest bar. You can, therefore, sometimes consider the mode as being the most popular option. An example of a mode is presented below:


Normally, the mode is used for categorical data where we wish to know which is the most common category as illustrated below:


We can see above that the most common form of transport, in this particular data set, is the bus. However, one of the problems with the mode is that it is not unique, so it leaves us with problems when we have two or more values that share the highest frequency, such as below:


We are now stuck as to which mode best describes the central tendency of the data. This is particularly problematic when we have continuous data, as we are more likely not to have any one value that is more frequent than the others. For example, consider measuring 30 people's weights (to the nearest 0.1 kg). How likely is it that we will find two or more people with exactly the same weight, e.g. 67.4 kg? The answer is: probably very unlikely. Many people might be close, but with such a small sample (30 people) and a large range of possible weights, you are unlikely to find two people with exactly the same weight, that is, to the nearest 0.1 kg. This is why the mode is very rarely used with continuous data. Another problem with the mode is that it will not provide us with a very good measure of central tendency when the most common mark is far away from the rest of the data in the data set, as depicted in the diagram below:


In the above diagram the mode has a value of 2. We can clearly see, however, that the mode is not representative of the data, which is mostly concentrated around the 20 to 30 value range. To use the mode to describe the central tendency of this data set would be misleading.

Skewed Distributions and the Mean and Median
We often test whether our data is normally distributed, as this is a common assumption underlying many statistical tests. An example of a normally distributed set of data is presented below:


When you have a normally distributed sample, you can legitimately use either the mean or the median as your measure of central tendency. In fact, in any symmetrical distribution the mean, median and mode are equal. However, in this situation, the mean is widely preferred as the best measure of central tendency because it is the measure that includes all the values in the data set in its calculation, and any change in any of the scores will affect the value of the mean. This is not the case with the median or mode. However, when our data is skewed, for example, as with the right-skewed data set below:


we find that the mean is being dragged in the direction of the skew. In these situations, the median is generally considered to be the best representative of the central location of the data. The more skewed the distribution, the greater the difference between the median and mean, and the greater emphasis should be placed on using the median as opposed to the mean. A classic example of the above right-skewed distribution is income (salary), where higher earners provide a false representation of the typical income if expressed as a mean and not a median. If you expected to be dealing with a normal distribution, but tests of normality show that the data is non-normal, it is customary to use the median instead of the mean. This is more a rule of thumb than a strict guideline, however. Sometimes, researchers wish to report the mean of a skewed distribution if the median and mean are not appreciably different (a subjective assessment) and if it allows easier comparisons to previous research to be made.

Summary of when to use the mean, median and mode
Please use the following summary table to know what the best measure of central tendency is with respect to the different types of variable.


Type of Variable                 Best measure of central tendency
Nominal                          Mode
Ordinal                          Median
Interval/Ratio (not skewed)      Mean
Interval/Ratio (skewed)          Median
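Python's built-in statistics module computes all three measures directly; a short sketch on illustrative exam marks:

```python
import statistics

marks = [55, 56, 56, 58, 60, 62, 62, 62, 65, 70, 90]  # illustrative data

print(statistics.mean(marks))    # arithmetic mean (about 63.3)
print(statistics.median(marks))  # middle value of the sorted list: 62
print(statistics.mode(marks))    # most frequent value: 62
```

Note that the single high mark of 90 pulls the mean above both the median and the mode, in line with the skewness discussion above.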

Central tendency
In statistics, the term central tendency relates to the way in which quantitative data tend to cluster around some value.[1] A measure of central tendency is any of a number of ways of specifying this "central value". In practical statistical analyses, the terms are often used before one has chosen even a preliminary form of analysis: thus an initial objective might be to "choose an appropriate measure of central tendency". In the simplest cases, the measure of central tendency is an average of a set of measurements, the word average being variously construed as mean, median, or other measure of location, depending on the context. However, the term is applied to multidimensional data as well as to univariate data, and in situations where a transformation of the data values for some or all dimensions would usually be considered necessary: in the latter cases, the notion of a "central location" is retained in converting an "average" computed for the transformed data back to the original units. In addition, there are several different kinds of calculations for central tendency, where the kind of calculation depends on the type of data (level of measurement). Both "central tendency" and "measure of central tendency" apply to either statistical populations or to samples from a population.

Basic measures of central tendency
The following may be applied to individual dimensions of multidimensional data, after transformation, although some of these involve their own implicit transformation of the data.

Arithmetic mean: the sum of all measurements divided by the number of observations in the data set
Median: the middle value that separates the higher half from the lower half of the data set
Mode: the most frequent value in the data set
Geometric mean: the nth root of the product of the data values
Harmonic mean: the reciprocal of the arithmetic mean of the reciprocals of the data values
Weighted mean: an arithmetic mean that incorporates weighting to certain data elements
Distance-weighted estimator: the measure uses weighting coefficients for xi that are computed as the inverse mean distance between xi and the other data points
Truncated mean: the arithmetic mean of data values after a certain number or proportion of the highest and lowest data values have been discarded
Midrange: the arithmetic mean of the maximum and minimum values of a data set
Midhinge: the arithmetic mean of the two quartiles
Trimean: the weighted arithmetic mean of the median and two quartiles
Winsorized mean: an arithmetic mean in which extreme values are replaced by values closer to the median
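Several of the measures listed above are available in Python's statistics module (geometric_mean and harmonic_mean require Python 3.8+); the truncated mean and midrange are simple enough to sketch by hand on illustrative data:

```python
import statistics

data = [1, 2, 4, 8, 16]  # illustrative data

print(statistics.geometric_mean(data))  # 5th root of 1*2*4*8*16 = 4.0
print(statistics.harmonic_mean(data))   # 5 / (1/1 + 1/2 + 1/4 + 1/8 + 1/16)

# Truncated mean: drop the lowest and highest values, then average the rest.
trimmed = sorted(data)[1:-1]
print(sum(trimmed) / len(trimmed))      # (2 + 4 + 8) / 3

# Midrange: arithmetic mean of the minimum and maximum.
print((min(data) + max(data)) / 2)      # (1 + 16) / 2 = 8.5
```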

Standard deviation
In statistics and probability theory, the standard deviation (represented by the Greek letter σ) shows how much variation or "dispersion" exists from the average (mean, or expected value). A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data points are spread out over a large range of values.
The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though practically less robust, than the average absolute deviation.[1][2] A useful property of standard deviation is that, unlike variance, it is expressed in the same units as the data.
In addition to expressing the variability of a population, standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. The reported margin of error is typically about twice the standard deviation (the radius of a 95 percent confidence interval). In science, researchers commonly report the standard deviation of experimental data, and only effects that fall far outside the range of standard deviation are considered statistically significant; normal random error or variation in the measurements is in this way distinguished from causal variation. Standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment.
Basic examples
Consider a population consisting of the following eight values:

2, 4, 4, 4, 5, 5, 7, 9

These eight data points have the mean (average) of 5:

(2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5

To calculate the population standard deviation, first compute the difference of each data point from the mean, and square the result of each:

(2 − 5)² = 9, (4 − 5)² = 1, (4 − 5)² = 1, (4 − 5)² = 1, (5 − 5)² = 0, (5 − 5)² = 0, (7 − 5)² = 4, (9 − 5)² = 16

Next compute the average of these values, and take the square root:

√((9 + 1 + 1 + 1 + 0 + 0 + 4 + 16) / 8) = √4 = 2
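The hand calculation can be reproduced with Python's statistics module, which also shows the population/sample distinction discussed next: pstdev divides by n, while stdev divides by n − 1. The eight values used here form a population with mean 5, matching the worked example above.

```python
import statistics

# An eight-value population with mean 5, as in the worked example.
data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(data))    # 5
print(statistics.pstdev(data))  # population standard deviation: divide by n = 8 -> 2.0
print(statistics.stdev(data))   # sample standard deviation: divide by n - 1 = 7 -> about 2.138
```

The larger sample value reflects the n − 1 denominator, which compensates for estimating the mean from the same data.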
This quantity is the population standard deviation; it is equal to the square root of the variance. The formula is valid only if the eight values we began with form the complete population. If they instead were a random sample drawn from some larger, "parent" population, then we should have used 7 (which is n − 1) instead of 8 (which is n) in the denominator of the last formula, and the quantity thus obtained would have been called the sample standard deviation. See the section Estimation below for more details.
As a slightly more complicated real-life example, the average height for adult men in the United States is about 70", with a standard deviation of around 3". This means that most men (about 68%, assuming a normal distribution) have a height within 3" of the mean (67" to 73"), one standard deviation, and almost all men (about 95%) have a height within 6" of the mean (64" to 76"), two standard deviations. If the standard deviation were zero, then all men would be exactly 70" tall. If the standard deviation were 20", then men would have much more variable heights, with a typical range of about 50" to 90". Three standard deviations account for 99.7% of the sample population being studied, assuming the distribution is normal (bell-shaped).
THE NORMAL CURVE
As discussed in the previous chapter, the normal curve is one of a number of possible models of probability distributions. Because it is widely used and an important theoretical tool, it is given special status as a separate chapter. The normal curve is not a single curve; rather it is an infinite number of possible curves, all described by the same algebraic expression:

Y = [1 / (σ√(2π))] e^(−(X − μ)² / (2σ²))

Upon viewing this expression for the first time, the initial reaction of the student is usually to panic. Don't. In general it is not necessary to "know" this formula to appreciate and use the normal curve. It is, however, useful to examine this expression for an understanding of how the normal curve operates. First, some symbols in the expression are simply numbers. These symbols include "2", "π", and "e". The latter two are irrational numbers whose decimal expansions never end, with π equaling 3.1416... and e equaling 2.7183.... As discussed in the chapter on the review of algebra, it is possible to raise a "funny number", in this case "e", to a "funny power". The second set of symbols which are of some interest includes the symbol "X", which is a variable corresponding to the score value. The height of the curve at any point is a function of X. Thirdly, the final two symbols in the equation, "μ" and "σ", are called PARAMETERS, or values which, when set to particular numbers, define which of the infinite number of possible normal curves one is dealing with. The concept of parameters is very important and considerable attention will be given them in the rest of this chapter.

A FAMILY OF DISTRIBUTIONS
The normal curve is called a family of distributions. Each member of the family is determined by setting the parameters (μ and σ) of the model to particular values (numbers). Because the parameter μ can take on any value, positive or negative, and the parameter σ can take on any positive value, the family of normal curves is quite large, consisting of an infinite number of members. This makes the normal curve a general-purpose model, able to describe a large number of naturally occurring phenomena, from test scores to the size of the stars.
Similarity of Members of the Family of Normal Curves
All the members of the family of normal curves, although different, have a number of properties in common. These properties include: shape, symmetry, tails approaching but never touching the X-axis, and area under the curve.
All members of the family of normal curves share the same bell shape, given the X-axis is scaled properly. Most of the area under the curve falls in the middle. The tails of the distribution (ends) approach the X-axis but never touch it, with very little of the area under them.
All members of the family of normal curves are bilaterally symmetrical. That is, if any normal curve were drawn on a two-dimensional surface (a piece of paper), cut out, and folded through the third dimension, the two sides would be exactly alike. Human beings are approximately bilaterally symmetrical, with a right and left side.
All members of the family of normal curves have tails that approach, but never touch, the X-axis. The implication of this property is that no matter how far one travels along the number line, in either the positive or negative direction, there will still be some area under any normal curve. Thus, in order to draw the entire normal curve one must have an infinitely long line.
Because most of the area under any normal curve falls within a limited range of the number line, only that part of the line segment is drawn for a particular normal curve. All members of the family of normal curves have a total area of one (1.00) under the curve, as do all probability models or models of frequency distributions. This property, in addition to the property of symmetry, implies that the area in each half of the distribution is .50 or one half. AREA UNDER A CURVE Because area under a curve may seem like a strange concept to many introductory statistics students, a short intermission is proposed at this point to introduce the concept.

Area is a familiar concept. For example, the area of a square is s², or side squared; the area of a rectangle is length times height; the area of a right triangle is one-half base times height; and the area of a circle is πr². It is valuable to know these formulas if one is purchasing such things as carpeting, shingles, etc. Areas may be added or subtracted from one another to find some resultant area. For example, suppose one had an L-shaped room and wished to purchase new carpet. One could find the area by taking the total area of the larger rectangle and subtracting the area of the rectangle that was not needed, or one could divide the area into two rectangles, find the area of each, and add the areas together. Both procedures are illustrated below:
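The two carpet procedures can be checked numerically. The room dimensions below are hypothetical, chosen only to illustrate that both procedures give the same answer.

```python
# Hypothetical L-shaped room: a 5 x 4 rectangle with a 2 x 2 corner removed.

# Procedure 1: area of the full rectangle minus the unneeded corner rectangle.
area_by_subtraction = 5 * 4 - 2 * 2   # 20 - 4 = 16

# Procedure 2: split the L-shape into a 5 x 2 strip and a 3 x 2 strip and add.
area_by_addition = 5 * 2 + 3 * 2      # 10 + 6 = 16

print(area_by_subtraction, area_by_addition)  # both give 16
```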

Finding the area under a curve poses a slightly different problem. In some cases there are formulas which directly give the area between any two points; finding these formulas is what integral calculus is all about. In other cases the areas must be approximated. All of the above procedures share a common theoretical underpinning, however. Suppose a curve was divided into equally spaced intervals on the X-axis and a rectangle drawn corresponding to the height of the curve at each of the intervals. The rectangles may be drawn either smaller than the curve, or larger, as in the two illustrations below:

In either case, if the areas of all the rectangles under the curve were added together, the sum of the areas would be an approximation of the total area under the curve. In the case of the smaller rectangles, the approximation would be too small; in the case of the larger rectangles, too big. Taking the average would give a better approximation, but mathematical methods provide a better way.


A better approximation may be achieved by making the intervals on the X-axis smaller. Such an approximation is illustrated below, more closely matching the actual area under the curve.
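The effect of shrinking the intervals can be seen numerically. As a sketch, the rectangles below approximate the area under the standard normal curve between −1 and +1 (which, as shown later in the chapter, is about .68); the interval counts and function names are illustrative choices, not from the text.

```python
import math

def height(x):
    """Standard normal curve (mu = 0, sigma = 1) evaluated at x."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def rectangle_area(n):
    """Approximate the area between -1 and +1 using n left-edge rectangles."""
    width = 2.0 / n
    return sum(height(-1 + i * width) * width for i in range(n))

for n in (4, 40, 400):
    print(n, round(rectangle_area(n), 4))  # approximations approach about 0.6827
```

Each refinement brings the rectangle sum closer to the true area, which is what taking the intervals "infinitely small" accomplishes exactly.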

The actual area under the curve may be calculated by making the intervals infinitely small (no distance between the intervals) and then computing the area. If this last statement seems a bit bewildering, you share the bewilderment with millions of introductory calculus students. At this point the introductory statistics student must say "I believe" and trust the mathematician, or enroll in an introductory calculus course.
DRAWING A MEMBER OF THE FAMILY OF NORMAL CURVES
The standard procedure for drawing a normal curve is to draw a bell-shaped curve and an X-axis. A tick is placed on the X-axis corresponding to the highest point (middle) of the curve. Three ticks are then placed to both the right and left of the middle point. These ticks are equally spaced and include all but a very small portion of the area under the curve. The middle tick is labeled with the value of μ; sequential ticks to the right are labeled by adding the value of σ. Ticks to the left are labeled by subtracting the value of σ from μ for the three values. For example, if μ = 52 and σ = 12, then the middle value would be labeled with 52, points to the right would have the values of 64 (52 + 12), 76, and 88, and points to the left would have the values 40, 28, and 16. An example is presented below:
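The tick-labeling procedure amounts to stepping in units of σ from three below μ to three above. It can be sketched in a line of Python, using the μ = 52, σ = 12 example:

```python
mu, sigma = 52, 12

# Seven tick labels: mu in the middle, three sigma-steps to each side.
ticks = [mu + k * sigma for k in range(-3, 4)]
print(ticks)  # [16, 28, 40, 52, 64, 76, 88]
```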


DIFFERENCES IN MEMBERS OF THE FAMILY OF NORMAL CURVES
Differences in members of the family of normal curves are a direct result of differences in values for parameters. The two parameters, μ and σ, each change the shape of the distribution in a different manner. The first, μ, determines where the midpoint of the distribution falls. Changes in μ, without changes in σ, result in moving the distribution to the right or left, depending upon whether the new value of μ is larger or smaller than the previous value, but do not change the shape of the distribution. An example of how changes in μ affect the normal curve is presented below:

Changes in the value of σ, on the other hand, change the shape of the distribution without affecting the midpoint, because σ affects the spread or dispersion of scores. The larger the value of σ, the more dispersed the scores; the smaller the value, the less dispersed. Perhaps the easiest way to understand how σ affects the distribution is graphically. The distribution below demonstrates the effect of increasing the value of σ:

Since this distribution was drawn according to the procedure described earlier, it appears similar to the previous normal curve, except for the values on the X-axis. This procedure effectively changes the scale and hides the real effect of changes in σ. Suppose the second distribution was drawn on a rubber sheet instead of a sheet of paper and stretched to twice its original length in order to make the two scales similar. Drawing the two distributions on the same scale results in the following graphic:

Note that the shape of the second distribution has changed dramatically, being much flatter than the original distribution. It must not be as high as the original distribution because the total area under the curve must remain constant, that is, 1.00. The second curve is still a normal curve; it is simply drawn on a different scale on the X-axis. A different effect on the distribution may be observed if the size of σ is decreased. Below, the new distribution is drawn according to the standard procedure for drawing normal curves:

Now both distributions are drawn on the same scale, as outlined immediately above, except in this case the sheet is stretched before the distribution is drawn and then released in order that the two distributions are drawn on similar scales:


Note that the distribution is much higher in order to maintain the constant area of 1.00, and the scores are much more closely clustered around the value of μ, or the midpoint, than before.
An interactive exercise is provided to demonstrate how the normal curve changes as a function of changes in μ and σ. The exercise starts by presenting a curve with μ = 70 and σ = 10. The student may change the value of μ from 50 to 90 by moving the scroll bar on the bottom of the graph. In a similar manner, the value of σ can be adjusted from 5 to 15 by changing the scroll bar on the right side of the graph.
FINDING AREA UNDER NORMAL CURVES
Suppose that when ordering shoes to restock the shelves in the store, one knew that female shoe sizes were normally distributed with μ = 7.0 and σ = 1.1. Don't worry about where these values came from at this point; there will be plenty about that later. If the area under this distribution between 7.75 and 8.25 could be found, then one would know the proportion of size eight shoes to order. The values of 7.75 and 8.25 are the real limits of the interval of size eight shoes.
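A reader without the companion Normal Curve Area program can obtain the same areas from Python's statistics.NormalDist, whose cdf method gives the area below a score. This is offered as a stand-in for the program described below, not as the program itself.

```python
from statistics import NormalDist

# Female shoe sizes: normally distributed with mu = 7.0 and sigma = 1.1.
shoe = NormalDist(mu=7.0, sigma=1.1)

below = shoe.cdf(7.75)                     # area below 7.75
between = shoe.cdf(8.25) - shoe.cdf(7.75)  # area between the real limits of size eight
above = 1.0 - shoe.cdf(8.25)               # area above 8.25

print(round(below, 4), round(between, 4), round(above, 4))
```

The "between" value, roughly .12, is the proportion of size eight shoes to order; the three areas together account for the whole curve.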


Finding the areas on the curve above is easy; simply enter the value of mu, sigma, and the score or scores into the correct boxes and click on a button on the display and the area appears. The following is an example of the use of the Normal Curve Area program and the reader should verify how the program works by entering the values in a separate screen. To find the area below 7.75 on a normal curve with mu =7.0 and sigma=1.1 enter the following information and click on the button pointed to with the red arrow.

To find the area between scores, enter the low and high scores in the lower boxes and click on the box pointing to the "Area Between."

The area above a given score could be found on the above program by subtracting the area below the score from 1.00, the total area under the curve, or by entering the value as a "Low Score" on the bottom boxes and a corresponding very large value for a "High Score." The following illustrates the latter method. The value of "12" is more than three sigma units from the mu of 7.0, so the area will include all but the smallest fraction of the desired area.


FINDING SCORES FROM AREA
In some applications of the normal curve, it will be necessary to find the scores that cut off some proportion or percentage of the area of the normal distribution. For example, suppose one wished to know what two scores cut off the middle 75% of a normal distribution with μ = 123 and σ = 23. In order to answer questions of this nature, the Normal Curve Area program can be used as follows:

The results can be visualized as follows:


In a similar manner, the score value which cuts off the bottom proportion of a given normal curve can be found using the program. For example, a score of 138.52 cuts off .75 of a normal curve with mu = 123 and sigma = 23. This area was found using the Normal Curve Area program in the following manner.
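Both score-from-area questions can also be answered without the program, using NormalDist.inv_cdf, the inverse of the cumulative area function. A sketch for the μ = 123, σ = 23 examples:

```python
from statistics import NormalDist

dist = NormalDist(mu=123, sigma=23)

# Scores cutting off the middle 75%: leave .125 in each tail.
low, high = dist.inv_cdf(0.125), dist.inv_cdf(0.875)
print(round(low, 2), round(high, 2))   # roughly 96.5 and 149.5

# Score cutting off the bottom .75 of the distribution.
print(round(dist.inv_cdf(0.75), 2))    # close to the 138.52 quoted above
```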

The results can be visualized as follows:

THE STANDARD NORMAL CURVE
The standard normal curve is a member of the family of normal curves with μ = 0.0 and σ = 1.0. The value of 0.0 was selected because the normal curve is symmetrical around μ and the number system is symmetrical around 0.0. The value of 1.0 for σ is simply a unit value. The X-axis on a standard normal curve is often relabeled and called Z scores.
There are three areas on a standard normal curve that all introductory statistics students should know. The first is that the total area below 0.0 is .50, as the standard normal curve is symmetrical like all normal curves. This result generalizes to all normal curves in that the total area below the value of μ is .50 on any member of the family of normal curves.

The second area that should be memorized is between Z-scores of -1.00 and +1.00. It is .68 or 68%.

The total area between plus and minus one sigma unit on any member of the family of normal curves is also .68. The third area is between Z-scores of -2.00 and +2.00 and is .95 or 95%.


This area (.95) also generalizes to plus and minus two sigma units on any normal curve. Knowing these areas allows computation of additional areas. For example, the area between a Z-score of 0.0 and 1.0 may be found by taking 1/2 the area between Z-scores of -1.0 and 1.0, because the distribution is symmetrical between those two points. The answer in this case is .34 or 34%. A similar logic and answer is found for the area between 0.0 and -1.0 because the standard normal distribution is symmetrical around the value of 0.0. The area below a Z-score of 1.0 may be computed by adding .34 and .50 to get .84. The area above a Z-score of 1.0 may now be computed by subtracting the area just obtained from the total area under the distribution (1.00), giving a result of 1.00 - .84 or .16 or 16%. The area between -2.0 and -1.0 requires additional computation. First, the area between 0.0 and -2.0 is 1/2 of .95 or .475. Because the .475 includes too much area, the area between 0.0 and -1.0 (.34) must be subtracted in order to obtain the desired result. The correct answer is .475 - .34 or .135.
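These memorized areas can be verified with the error function from Python's math module: Φ(z), the area below z on the standard normal curve, equals (1 + erf(z/√2))/2. The helper name phi is an illustrative label.

```python
import math

def phi(z):
    """Area below z on the standard normal curve."""
    return (1.0 + math.erf(z / math.sqrt(2))) / 2.0

print(round(phi(1.0) - phi(-1.0), 3))   # area between -1 and +1: about .683
print(round(phi(2.0) - phi(-2.0), 3))   # area between -2 and +2: about .954
print(round(1.0 - phi(1.0), 3))         # area above +1: about .159
print(round(phi(-1.0) - phi(-2.0), 3))  # area between -2 and -1: about .136
```

The memorized values .68, .95, and .135 are rounded versions of these exact computations.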

Using a similar kind of logic to find the area between Z-scores of .5 and 1.0 will result in an incorrect answer because the curve is not symmetrical around .5. The correct answer must be something less than .17, because the desired area is on the smaller side of the total divided area. Because of this difficulty, the areas can be found using the program included in this text. Entering the following information will produce the correct answer:


The result can be seen graphically in the following:

The following formula is used to transform a given normal distribution into the standard normal distribution. It was much more useful when the area between and below a score was only available in tables of the standard normal distribution. It is included here both for historical reasons and because it will appear in a different form later in this text:

Z = (X − μ) / σ

CONCLUSION
The normal curve is an infinite number of possible probability models called a family of distributions. Each member of the family is described by setting the parameters (μ and σ) of the distribution to particular values. The members of the family are similar in that they share the same shape, are symmetrical, and have a total area underneath of 1.00. They differ in where the midpoint of the distribution falls, determined by μ, and in the variability of scores around the midpoint, determined by σ. The area between any two scores, and the scores which cut off a given area on any given normal distribution, can be easily found using the program provided with this text.

EDUCATIONAL PSYCHOLOGY
Personal development
Personal development includes activities that improve awareness and identity, develop talents and potential, build human capital and facilitate employability, enhance quality of life and contribute to the realization of dreams and aspirations. The concept is not limited to self-help but includes formal and informal activities for developing others, in roles such as teacher, guide, counsellor, manager, coach, or mentor. Finally, as personal development takes place in the context of institutions, it refers to the methods, programs, tools, techniques, and assessment systems that support human development at the individual level in organizations.[1] At the level of the individual, personal development includes the following activities:

improving self-awareness
improving self-knowledge
building or renewing identity
developing strengths or talents
improving wealth
spiritual development
identifying or improving potential
building employability or human capital
enhancing lifestyle or the quality of life
improving health
fulfilling aspirations
initiating a life enterprise or personal autonomy
defining and executing personal development plans
improving social abilities

The concept covers a wider field than self-development or self-help: personal development also includes developing other people. This may take place through roles such as those of a teacher or mentor, either through a personal competency (such as the skill of certain managers in developing the potential of employees) or a professional service (such as providing training, assessment or coaching).
Beyond improving oneself and developing others, personal development is a field of practice and research. As a field of practice it includes personal development methods, learning programs, assessment systems, tools and techniques. As a field of research, personal development topics increasingly appear in scientific journals, higher education reviews, management journals and business books.
Any sort of development, whether economic, political, biological, organizational or personal, requires a framework if one wishes to know whether change has actually occurred. In the case of personal development, an individual often functions as the primary judge of improvement, but validation of objective improvement requires assessment using standard criteria. Personal development frameworks may include goals or benchmarks that define the end-points, strategies or plans for reaching goals, measurement and assessment of progress, levels or stages that define milestones along a development path, and a feedback system to provide information on changes.
Social change refers to an alteration in the social order of a society. It may refer to the notion of social progress or sociocultural evolution, the philosophical idea that society moves forward by dialectical or evolutionary means. It may refer to a paradigmatic change in the socio-economic structure, for instance a shift away from feudalism and towards capitalism. Accordingly, it may also refer to social revolution, such as the Socialist revolution presented in Marxism, or to other social movements, such as Women's suffrage or the Civil rights movement. Social change may be driven by cultural, religious, economic, scientific or technological forces.

Moral Development
Moral development focuses on the emergence, change, and understanding of morality from infancy through adulthood. In the field of moral development, morality is defined as principles for how individuals ought to treat one another, with respect to justice, others' welfare, and rights. In order to investigate how individuals understand morality, it is essential to measure their beliefs, emotions, attitudes, and behaviors that contribute to moral understanding. The field of moral development studies the role of peers and parents in facilitating moral development, the role of conscience and values, socialization and cultural influences, empathy and altruism, and positive development. The interest in morality spans many disciplines (e.g., philosophy, economics, biology, and political science) and specializations within psychology (e.g., social, cognitive, and cultural). Moral developmental psychology research focuses on questions of origins and change in morality across the lifespan.
Definition
Moral development is the process through which children develop proper attitudes and behaviors toward other people in society, based on social and cultural norms, rules, and laws.
Description
Moral development is a concern for every parent. Teaching a child to distinguish right from wrong and to behave accordingly is a goal of parenting. Moral development is a complex issue that, since the beginning of human civilization, has been a topic of discussion among some of the world's most distinguished psychologists, theologians, and culture theorists. It was not studied scientifically until the late 1950s.
Piaget's Theory of Moral Reasoning
Jean Piaget, a Swiss psychologist, explored how children developed moral reasoning. He rejected the idea that children learn and internalize the rules and morals of society by being given the rules and forced to adhere to them. Through his research on how children formed their judgments about moral behavior, he recognized that children learn morality best by having to deal with others in groups. He reasoned that there was a process by which children conform to society's norms of what is right and wrong, and that the process was active rather than passive.
Piaget found two main differences in how children thought about moral behavior. Very young children's thinking is based on how actions affected them or what the results of an action were. For example, young children will say that, when trying to reach a forbidden cookie jar, breaking 10 cups is worse than breaking one. They also recognize the sanctity of rules. For example, they understand that they cannot make up new rules to a game; they have to play by what the rule book says or what is commonly known to be the rules.
Piaget called this "moral realism with objective responsibility." It explains why young children are concerned with outcomes rather than intentions. Older children look at motives behind actions rather than consequences of actions. They are also able to examine rules, determining whether they are fair or not, and apply these rules and their modifications to situations requiring negotiation, assuring that everyone affected by the rules is treated fairly. Piaget felt that the best moral learning came from these cooperative

decision-making and problem-solving events. He also believed that children developed moral reasoning quickly and at an early age.
Kohlberg's Theory of Moral Development
Lawrence Kohlberg, an American psychologist, extended Piaget's work in cognitive reasoning into adolescence and adulthood. He felt that moral development was a slow process and evolved over time. Still, his six stages of moral development, drafted in 1958, mirror Piaget's early model. Kohlberg believed that individuals made progress by mastering each stage, one at a time. A person could not skip stages. He also felt that the only way to encourage growth through these stages was by discussion of moral dilemmas and by participation in consensus democracy within small groups. Consensus democracy was rule by agreement of the group, not majority rule. This would stimulate and broaden the thinking of children and adults, allowing them to progress from one stage to another.
PRECONVENTIONAL LEVEL. The child at the first and most basic level, the preconventional level, is concerned with avoiding punishment and getting needs met. This level has two stages and applies to children up to 10 years of age.
Stage one is the Punishment-Obedience stage. Children obey rules because they are told to do so by an authority figure (parent or teacher), and they fear punishment if they do not follow rules. Children at this stage are not able to see someone else's side.
Stage two is the Individual, Instrumentation, and Exchange stage. Here, behavior is governed by moral reciprocity. The child will follow rules if there is a known benefit to him or her. Children at this stage also mete out justice in an eye-for-an-eye manner or according to Golden Rule logic. In other words, if one child hits another, the injured child will hit back. This is considered equitable justice. Children in this stage are very concerned with what is fair. Children will also make deals with each other and even adults.
They will agree to behave in a certain way for a payoff: "I'll do this, if you will do that." Sometimes, the payoff is in the knowledge that behaving correctly is in the child's own best interest. They receive approval from authority figures or admiration from peers, avoid blame, or behave in accordance with their concept of self. They are just beginning to understand that others have their own needs and drives.
CONVENTIONAL LEVEL. This level broadens the scope of human wants and needs. Children in this level are concerned about being accepted by others and living up to their expectations. This level begins around age 10 but lasts well into adulthood, and is the level at which most adults remain throughout their lives.

Stage three, Interpersonal Conformity, is often called the "good boy/good girl" stage. Here, children do the right thing because it is good for the family, peer group, team, school, or church. They understand the concepts of trust, loyalty, and gratitude. They abide by the Golden Rule as it applies to people around them every day. Morality is acting in accordance with what the social group says is right and moral.
Stage four is the Law and Order, or Social System and Conscience stage. Children and adults at this stage abide by the rules of the society in which they live. These laws and rules become the backbone for all right and wrong actions. Children and adults feel compelled to do their duty and show respect for authority. This is still moral behavior based on authority, but reflects a shift from the social group to society at large.
POST-CONVENTIONAL LEVEL. Some teenagers and adults move beyond conventional morality and enter morality based on reason, examining the relative values and opinions of the groups with which they interact. Few adults reach this stage.
Correct behavior is governed by the fifth stage, the Social Contract and Individual Rights stage. Individuals in this stage understand that codes of conduct are relative to their social group. This varies from culture to culture and subgroup to subgroup. With that in mind, the individual enters into a contract with fellow human beings to treat them fairly and kindly and to respect authority when it is equally moral and deserved. They also agree to obey laws and social rules of conduct that promote respect for individuals and value the few universal moral values that they recognize. Moral behavior and moral decisions are based on the greatest good for the greatest number.
Stage six is the Principled Conscience or the Universal/Ethical Principles stage.
Here, individuals examine the validity of society's laws and govern themselves by what they consider to be universal moral principles, usually involving equal rights and respect. They obey laws and social rules that fall in line with these universal principles, but not others they deem as aberrant. Adults here are motivated by individual conscience that transcends cultural, religious, or social convention rules. Kohlberg recognized this last stage but found so few people who lived by this concept of moral behavior that he could not study it in detail. Social Values in Islam Every aspect of human relationship is governed by social values. In Islam all values affecting man are based upon the Qur'anic concept that each human being is endowed by the Almighty God with the highest potential for doing good to himself and to his society, and so he is capable of achieving the highest level of moral and spiritual development and that his personality must be respected. The Qur'an takes note of diversities of race, color, language, wealth etc., which serve their own useful purposes in the social scheme, and describes them as

signs of Allah for those who hear and possess knowledge. The Qur'an states that Allah has divided mankind into tribes and nations for greater facility of intercourse. Neither membership in a tribe nor citizenship in a state confers any privilege, nor are they a source of honor. The true source of honor in the sight of Allah (God) is a righteous life. In his Farewell Address, Prophet Muhammad (PBUH) said, "You are all brothers, and are all equal. None of you can claim any privilege or any superiority over any other." Islam has established a universal brotherhood. It is stressed that a true brotherhood can be established only by virtue of mankind's relationship with one another through Allah. Other factors, e.g. common interests, common pursuits, common occupations, may help to foster friendship and brotherhood.
Islam considers the family as the basic unit of human society. The foundation of a family is laid through marriage. The relationship between husband, wife and children should be strong and everlasting. Prophet Muhammad (PBUH) noted once, "The best among you is he who treats the members of his family best." In order to maintain harmony within the family, Islam looked down upon divorce and considered it as most obnoxious in the sight of Allah, but if the relationship between the husband and wife is no longer endurable, divorce can be resorted to, with the object of providing the opportunity for a better and decent life. Great stress is laid on the proper upbringing and training of children. Infanticide, which was a common practice during certain periods of human history, is prohibited. Special concern was given to the proper upbringing of girls. Prophet Muhammad (PBUH) said, "A person who is blessed with a daughter or daughters and makes no discrimination between them and his sons and brings them up with kindness and affection will be as close to paradise as my forefinger and middle finger are to each other."
While stressing kindness and affection toward children and uniformly treating all children tenderly, he did not approve of undue indulgence. The Qur'an lays great stress on kindness toward neighbors, and likewise on the treatment of the needy and the wayfarer. Orphans have been made the objects of particular care. Their proper upbringing and the due administration of their property must be ensured. Detailed directions are laid down with regard to the guardianship of minors. Another feature of Islam is that it aims at merging all sections of society into a single community, so that all persons may feel themselves to be members of the same family. Islam encourages simple ways of life and dispensing with artificial ceremonial and superficial standards of living. Islam recognizes that there must be diversity of all kinds in a healthy society, and that it is not only futile but also harmful to covet that in which others excel. Each must exercise his or her own capacities and talents and strive to promote both the individual and the common good. Begging is prohibited except in cases of extreme need. Various aspects of good manners are insisted upon. In Surah 31, Vs. 18, Allah says: "Turn not thy face away from people in pride, nor walk in the earth haughtily; surely Allah loves not any arrogant boaster. Moderate thy pace when walking and soften thy voice when speaking." As for group activities, Islam recognizes three types of public associations: first, those formed

for the purpose of promoting the general welfare, in other words, charitable associations and the like; second, those whose object is to promote the spread and propagation of knowledge and investigation and research into the sciences, arts, philosophies, etc.; third, those established for the purpose of the peaceful settlement of disputes and for removing causes of friction, whether in domestic, national, political or international spheres, and thereby promoting peace among mankind. When people are gathered together for a common purpose, they should behave in an orderly manner, and should not leave or disperse without permission. All people should behave with dignity, and particular attention must be paid to the maintenance of order in public places. Persons using public places must take care that no undue inconvenience is occasioned to others using the same, nor should any person be exposed to risk or injury. The obligation is laid upon everyone to urge others toward goodness and to seek to restrain them from evil, but with kindness and affection. Spying, backbiting and undue suspicion must be avoided. It is the duty of every Muslim constantly to seek increase of knowledge. Prophet Muhammad (PBUH) went so far as to add, "A word of wisdom is the lost property of a Muslim. He should seize it wherever he finds it." The Prophet (PBUH) was very insistent upon kindness toward animals. On one occasion he noticed a dove in anguish, flying around agitatedly, and discovered that somebody had caught its offspring. He was very annoyed and asked the person to restore the offspring to the mother immediately. Perhaps the most comprehensive directive within the domain of social values is: "Help one another in righteousness and virtue, but help not one another in transgression."
When Prophet Muhammad (PBUH) said on one occasion, "Help your brother, whether oppressor or oppressed," he was asked, "We understand what you mean by going to the help of a brother who is oppressed, but how shall we help a brother who is an oppressor?" The Prophet (PBUH) replied, "By restraining him from oppressing others." Regarding moral and spiritual values, an essential element in the effort toward the achievement of moral and spiritual excellence is the certainty that however low a person may have fallen, it is always possible for him to rise. Islam teaches that Allah has created mankind in accord with the nature designed by Him. It is true that each person is subject to the influences of heredity, upbringing, and environment, but these can, where necessary, be corrected or eliminated. Evil comes from outside and can be kept out or, having entered, can be discarded. As for vice and virtue, Islam considers the lowest grade of vice to be conduct that causes injury to others, for instance, all aggression against the person, property, interests or honor of a fellow being. Most of these are crimes; the rest are civil wrongs, and they are punishable. All of them are moral offenses. As for virtue, there are three grades of virtue prescribed by Islam. The first (lowest) is described as equity, or equitable dealing. This means to do good equal to the good one receives from others. Furthermore, it means that when one suffers a wrong, one should not impose, or insist upon the imposition of, a penalty in excess of the wrong suffered. The second is conscious beneficence: the doing of greater good in return for good, the doing of good without expectation of any return, and the forgiveness of wrong if, in the circumstances, it may reasonably be expected that forgiveness would help the wrongdoer reform

himself. The third is instinctive beneficence, which flows out from one as love and affection flow out toward one's kindred. Once this quality has been cultivated to the point of being instinctive rather than deliberately acquired, it is the highest moral quality; it manifests itself toward a wrongdoer not only in forgiveness but also in benevolence. Social customs may constitute a hindrance in the way of moral development. In the scale of values, moral progress must be placed higher than conformity to social customs and habits, which have no value beyond the fact that they have been observed over a long period of time. Such customs become burdensome impositions and should be discarded. All avenues from which evil might enter should be watched and guarded. Islam inspires faith in the vivid realization of the existence of a Beneficent Creator, without partners, associates or equals. Islam teaches that each human being can and should establish direct communion with Allah through faith, through acceptance of Divine Guidance and through righteous conduct. On the basis of man's relationship to his fellow beings through God, the Creator of all, it lays the foundations of a true universal brotherhood, excluding privilege and discrimination based on color, race, nationality, office, status or wealth. In short, Islam sets forth and places at man's disposal a most effective means of achieving the purpose of life. Of all Allah's numberless bounties bestowed upon mankind, it is one of the greatest and most precious, and it is indispensable for the beneficent growth of man in the epoch now unfolding before him. The social values of the Islamic community ensure strong and lasting relationships and interaction among people, based on equality, doing what is right, and giving consideration to the rights and privileges of others.
We can see that the social, moral and spiritual values propagated by Islam, the essence of which was observed during the rise and expansion of Islam in its early history, are applicable to the present and the future for all mankind.

Nine events of instruction

Definition

The "nine events of instruction" is an instructional design model put together by Gagne. It is a behaviorist model that also draws from cognitivism.

The conditions of learning

Essential to Gagne's ideas of instruction are what he calls "conditions of learning." He breaks these down into internal and external conditions. The internal conditions deal with the previously learned capabilities of the learner, or in other words, what the learner knows prior to the instruction. The external conditions deal with the stimuli (a purely behaviorist term) that are presented externally to the learner, for example, what instruction is provided to the learner. (Corry, 1996)


Gagne's most essential ingredients of teaching are:


presenting the knowledge or demonstrating the skill
providing practice with feedback
providing learner guidance

These elements have to be designed differently according to the type of learning level (learning goal) to be achieved. For Gagne, instructional design means first identifying the goal (a learning outcome) and then constructing the learning hierarchy, i.e. doing a task analysis of the skills needed to perform a measurable activity that demonstrates the learning goal.

The nine events of instruction

Gagne's 9 general steps of instruction for learning are:

1. Gain attention:
   o e.g. present a good problem, a new situation, use a multimedia advertisement, ask questions.
   o This helps to ground the lesson, and to motivate.
2. Describe the goal:
   o e.g. state what students will be able to accomplish and how they will be able to use the knowledge; give a demonstration if appropriate.
   o Allows students to frame the information, i.e. process it better.
3. Stimulate recall of prior knowledge:
   o e.g. remind the student of prior knowledge relevant to the current lesson (facts, rules, procedures or skills). Show how knowledge is connected; provide the student with a framework that helps learning and remembering. Tests can be included.
4. Present the material to be learned:
   o e.g. text, graphics, simulations, figures, pictures, sound, etc. Chunk information (avoid memory overload, aid recall).
5. Provide guidance for learning:
   o e.g. the presentation of content is different from instructions on how to learn. Use a different channel (e.g. side-boxes).
6. Elicit performance ("practice"):
   o Let the learner do something with the newly acquired behavior, practice skills or apply knowledge. At the very least, use MCQs.
7. Provide informative feedback:
   o Show the correctness of the trainee's response, analyze the learner's behavior, maybe present a good (step-by-step) solution to the problem.


8. Assess performance:
   o Test whether the lesson has been learned. Also give general progress information from time to time.
9. Enhance retention and transfer:
   o e.g. inform the learner about similar problem situations, provide additional practice. Put the learner in a transfer situation. Maybe let the learner review the lesson.

The way Gagne's theory is put into practice is as follows. First, the instructor determines the objectives of the instruction. These objectives must then be categorized into one of the five domains of learning outcomes. Each of the objectives must be stated in performance terms using one of the standard verbs (i.e. states, discriminates, classifies, etc.) associated with the particular learning outcome. The instructor then uses the conditions of learning for the particular learning outcome to determine the conditions necessary for learning. Finally, the events of instruction necessary to promote the internal process of learning are chosen and put into the lesson plan. The events in essence become the framework for the lesson plan or steps of instruction. (Corry, 1996)

Meaning of personality development

Personality development is the development of the organized pattern of behaviors and attitudes that makes a person distinctive. It occurs through the ongoing interaction of temperament, character, and environment; it encompasses the growth of habitual patterns of behaviour in childhood and adolescence, and improvement in all spheres of an individual's life, be it with friends, in the office or in any other environment.

The Origin of Personality

Mothers, nurses and pediatricians are well aware that infants begin to express themselves as individuals from the time of birth. The fact that each child appears to have a characteristic temperament from his earliest days has also been suggested by Sigmund Freud and Arnold Gesell. In recent years, however, many psychiatrists and psychologists appear to have lost sight of this fact. Instead they have tended to emphasize the


influence of the child's early environment when discussing the origin of the human personality. As physicians who have frequent occasion to examine the family background of disturbed children, we began many years ago to encounter reasons to question the prevailing one-sided emphasis on environment. We found that some children with severe psychological problems had a family upbringing that did not differ essentially from the environment of other children who developed no severe problems. On the other hand, some children were found to be free of serious personality disturbances although they had experienced severe family disorganization and poor parental care. Even in cases where parental mishandling was obviously responsible for a child's personality difficulties, there was no consistent or predictable relation between the parents' treatment and the child's specific symptoms. Domineering, authoritarian handling by the parents might make one youngster anxious and submissive and another defiant and antagonistic. Such unpredictability seemed to be the direct consequence of omitting an important factor from the evaluation: the child's own temperament, that is, his own individual style of responding to the environment. It might be inferred from these opinions that we reject the environmentalist tendency to emphasize the role of the child's surroundings and the influence of his parents (particularly his mother) as major factors in the formation of personality, and that instead we favor the constitutionalist concept of personality as largely inborn. Actually, we reject both the "nurture" and the "nature" concepts. Either by itself is too simplistic to account for the intricate play of forces that form the human character. It is our hypothesis that personality is shaped by the constant interplay of temperament and environment.

Big Five personality traits In contemporary psychology, the "Big Five" factors (or Five Factor Model; FFM) of personality are five broad domains or dimensions of personality that are used to describe human personality.


The Big Five framework of personality traits (Costa & McCrae, 1992) has emerged as a robust model for understanding the relationship between personality and various academic behaviors.[1] The Big Five factors are:

Openness (inventive/curious vs. consistent/cautious)
Conscientiousness (efficient/organized vs. easy-going/careless)
Extraversion (outgoing/energetic vs. solitary/reserved)
Agreeableness (friendly/compassionate vs. cold/unkind)
Neuroticism (sensitive/nervous vs. secure/confident)

Acronyms commonly used to refer to the five traits collectively are OCEAN, NEOAC, or CANOE. Beneath each factor, a cluster of correlated specific traits is found; for example, extraversion includes such related qualities as gregariousness, assertiveness, excitement seeking, warmth, activity and positive emotions.

The five factors

The factors of the Big Five and their constituent traits can be summarized as follows:[3]

Openness to experience (inventive/curious vs. consistent/cautious). Appreciation for art, emotion, adventure, unusual ideas, curiosity, and variety of experience. Openness reflects the degree of intellectual curiosity, creativity and a preference for novelty and variety. Some disagreement remains about how to interpret the openness factor, which is sometimes called "intellect" rather than openness to experience.

Conscientiousness (efficient/organized vs. easy-going/careless). A tendency to show self-discipline, act dutifully, and aim for achievement; planned rather than spontaneous behavior; organized and dependable.

Extraversion (outgoing/energetic vs. solitary/reserved). Energy, positive emotions, surgency, assertiveness, sociability, the tendency to seek stimulation in the company of others, and talkativeness.

Agreeableness (friendly/compassionate vs. cold/unkind). A tendency to be compassionate and cooperative rather than suspicious and antagonistic towards others.

Neuroticism (sensitive/nervous vs. secure/confident). The tendency to experience unpleasant emotions easily, such as anger, anxiety, depression, or vulnerability. Neuroticism also refers to the degree of emotional stability and impulse control, and is sometimes referred to by its low pole, "emotional stability".

The Big Five model is a comprehensive, empirical, data-driven research finding.[4] Identifying the traits and structure of human personality has been one of the most fundamental goals in all

of psychology. The five broad factors were discovered and defined by several independent sets of researchers.[4] These researchers began by studying known personality traits and then factor-analyzing hundreds of measures of these traits (in self-report and questionnaire data, peer ratings, and objective measures from experimental settings) in order to find the underlying factors of personality. The initial model was advanced by Ernest Tupes and Raymond Christal in 1961,[5] but failed to reach an academic audience until the 1980s. In 1990, J.M. Digman advanced his five-factor model of personality, which Goldberg extended to the highest level of organization.[6] These five over-arching domains have been found to contain and subsume most known personality traits and are assumed to represent the basic structure behind all personality traits.[7] These five factors provide a rich conceptual framework for integrating all the research findings and theory in personality psychology. The Big Five traits are also referred to as the "Five Factor Model" or FFM,[8] and as the Global Factors of personality.[9] At least four sets of researchers have worked independently for decades on this problem and have identified generally the same Big Five factors: Tupes and Christal were first, followed by Goldberg at the Oregon Research Institute,[10][11][12][13][14] Cattell at the University of Illinois,[15][16][17][18] and Costa and McCrae at the National Institutes of Health.[19][20][21][22] These four sets of researchers used somewhat different methods in finding the five traits, and thus each set of five factors has somewhat different names and definitions. However, all have been found to be highly inter-correlated and factor-analytically aligned.[23][24][25][26][27] Because the Big Five traits are broad and comprehensive, they are not nearly as powerful in predicting and explaining actual behavior as are the more numerous lower-level traits.
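The factor-analytic procedure described above can be illustrated in miniature. The sketch below is a hedged, synthetic illustration, not real personality data: six observed "items" are generated from two latent factors plus noise, and an eigendecomposition of their correlation matrix, together with the common Kaiser criterion (eigenvalue greater than 1), recovers the number of broad factors. All variable names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=(n, 2))                 # two hidden "traits"
# Each row gives one item's loadings on the two latent factors.
loadings = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                     [0.0, 1.0], [0.1, 0.9], [0.0, 0.8]])
items = latent @ loadings.T + 0.3 * rng.normal(size=(n, 6))

# Eigendecomposition of the item correlation matrix: large eigenvalues
# indicate how many broad factors underlie the observed items.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigvals > 1.0).sum())           # Kaiser criterion
print(n_factors)  # two dominant factors emerge
```

The same logic, scaled up from six toy items to hundreds of measured traits, is what allowed the independent research groups above to converge on five broad factors.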
Many studies have confirmed that, in predicting actual behavior, the more numerous facet or primary-level traits are far more effective (e.g. Mershon & Gorsuch, 1988;[28] Paunonen & Ashton, 2001[29]). When scored for individual feedback, these traits are frequently presented as percentile scores. For example, a Conscientiousness rating in the 80th percentile indicates a relatively strong sense of responsibility and orderliness, whereas an Extraversion rating in the 5th percentile indicates an exceptional need for solitude and quiet. Although these trait clusters are statistical aggregates, exceptions may exist on individual personality profiles. On average, people who register high in Openness are intellectually curious, open to emotion, interested in art, and willing to try new things. A particular individual, however, may have a high overall Openness score and be interested in learning and exploring new cultures but have no great interest in art or poetry.
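The percentile feedback described above is straightforward to compute against a normative sample. A minimal sketch, assuming a hypothetical set of normative raw scores; real inventories norm against large published samples, so the function and data below are illustrative only:

```python
def percentile_rank(score, norm_sample):
    """Percent of normative scores strictly below `score`.

    A simple percentile-rank convention for illustration; published
    inventories use their own norm tables and conventions.
    """
    below = sum(1 for s in norm_sample if s < score)
    return 100.0 * below / len(norm_sample)

# Hypothetical normative sample of raw trait scores (10 to 50).
norms = list(range(10, 51))
print(round(percentile_rank(43, norms), 1))  # a raw 43 sits near the 80th percentile
```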


The most frequently used measures of the Big Five comprise either items that are self-descriptive sentences[30] or, in the case of lexical measures, items that are single adjectives.[31] Due to the length of sentence-based and some lexical measures, short forms have been developed and validated for use in applied research settings where questionnaire space and respondent time are limited, such as the 40-item balanced International English Big-Five Mini-Markers[32] or a very brief (10-item) measure of the Big Five domains.[33]

Openness to experience

Main article: Openness to experience

Openness is a general appreciation for art, emotion, adventure, unusual ideas, imagination, curiosity, and variety of experience. People who are open to experience are intellectually curious, appreciative of art, and sensitive to beauty. They tend to be, compared to closed people, more creative and more aware of their feelings. They are more likely to hold unconventional beliefs. Another characteristic of the open cognitive style is a facility for thinking in symbols and abstractions far removed from concrete experience. People with low scores on openness tend to have more conventional, traditional interests. They prefer the plain, straightforward, and obvious over the complex, ambiguous, and subtle. They may regard the arts and sciences with suspicion or view these endeavors as uninteresting. Closed people prefer familiarity over novelty; they are conservative and resistant to change.[34]

Sample openness items

I have a rich vocabulary.
I have a vivid imagination.
I have excellent ideas.
I am quick to understand things.
I use difficult words.
I spend time reflecting on things.
I am full of ideas.
I am not interested in abstractions. (reversed)
I do not have a good imagination. (reversed)
I have difficulty understanding abstract ideas. (reversed)[35]
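Items marked "(reversed)" are reverse-keyed: on the usual 1-5 Likert scale, a response r to a reversed item contributes 6 - r to the raw trait score. A minimal scoring sketch; the responses, scale length, and reversed positions below are assumptions for illustration, not any inventory's published key:

```python
LIKERT_MAX = 5  # assumed 1-5 agreement scale

def score_scale(responses, reversed_positions):
    """Sum a trait scale, flipping reverse-keyed items."""
    total = 0
    for i, r in enumerate(responses):
        # A reverse-keyed response of 1 ("strongly disagree") counts as 5.
        total += (LIKERT_MAX + 1 - r) if i in reversed_positions else r
    return total

responses = [4, 5, 4, 4, 3, 5, 4, 2, 1, 2]          # hypothetical answers
openness_raw = score_scale(responses, {7, 8, 9})     # last three items reversed
print(openness_raw)  # prints 42
```

Disagreeing with "I do not have a good imagination" thus raises the Openness score, exactly as intended by the reverse keying.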

Conscientiousness

Main article: Conscientiousness

Conscientiousness is a tendency to show self-discipline, act dutifully, and aim for achievement against measures or outside expectations. The trait shows a preference for planned rather than spontaneous behavior. It influences the way in which we control, regulate,

and direct our impulses.[citation needed] In a study conducted at Michigan State University, R.E. Lucas and his colleagues found that the average level of conscientiousness rises among young adults and then declines among older adults.[36]

Sample conscientiousness items

I am always prepared.
I pay attention to details.
I get chores done right away.
I like order.
I follow a schedule.
I am exacting in my work.
I leave my belongings around. (reversed)
I make a mess of things. (reversed)
I often forget to put things back in their proper place. (reversed)
I shirk my duties. (reversed)[35]

Extraversion

Main article: Extraversion and introversion

Extraversion is characterized by positive emotions, surgency, and the tendency to seek out stimulation and the company of others. The trait is marked by pronounced engagement with the external world. Extraverts enjoy being with people, and are often perceived as full of energy. They tend to be enthusiastic, action-oriented individuals who are likely to say "Yes!" or "Let's go!" to opportunities for excitement. In groups they like to talk, assert themselves, and draw attention to themselves.[citation needed] Introverts have lower social engagement and activity levels than extraverts. They tend to seem quiet, low-key, deliberate, and less involved in the social world. Their lack of social involvement should not be interpreted as shyness or depression. Introverts simply need less stimulation than extraverts and more time alone. They may be very active and energetic, simply not socially.[citation needed]

Sample extraversion items

I am the life of the party.
I don't mind being the center of attention.
I feel comfortable around people.
I start conversations.
I talk to a lot of different people at parties.


I don't talk a lot. (reversed)
I keep in the background. (reversed)
I have little to say. (reversed)
I don't like to draw attention to myself. (reversed)
I am quiet around strangers. (reversed)[35]

Agreeableness

Main article: Agreeableness

Agreeableness is a tendency to be compassionate and cooperative rather than suspicious and antagonistic towards others. The trait reflects individual differences in general concern for social harmony. Agreeable individuals value getting along with others. They are generally considerate, friendly, generous, helpful, and willing to compromise their interests with others. Agreeable people also have an optimistic view of human nature. Although agreeableness is positively correlated with good teamwork skills, it is negatively correlated with leadership skills. Those who voice their opinions in a team environment tend to move up the corporate rankings, whereas those who do not tend to remain in the same position, usually labelled as the followers of the team.[37] Disagreeable individuals place self-interest above getting along with others. They are generally unconcerned with others' well-being, and are less likely to extend themselves for other people. Sometimes their skepticism about others' motives causes them to be suspicious, unfriendly, and uncooperative.[citation needed]

Sample agreeableness items

I am interested in people.
I sympathize with others' feelings.
I have a soft heart.
I take time out for others.
I feel others' emotions.
I make people feel at ease.
I am not really interested in others. (reversed)
I insult people. (reversed)
I am not interested in other people's problems. (reversed)
I feel little concern for others. (reversed)[35]

Neuroticism

Main article: Neuroticism


Neuroticism is the tendency to experience negative emotions, such as anger, anxiety, or depression. It is sometimes called emotional instability, or is reversed and referred to as emotional stability. According to Eysenck's (1967) theory of personality, neuroticism is interlinked with low tolerance for stress or aversive stimuli.[38] Those who score high in neuroticism are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening, and minor frustrations as hopelessly difficult. Their negative emotional reactions tend to persist for unusually long periods of time, which means they are often in a bad mood. These problems in emotional regulation can diminish the ability of a person scoring high on neuroticism to think clearly, make decisions, and cope effectively with stress.[citation needed] Lacking contentment in one's life achievements can correlate with high neuroticism scores and increase a person's likelihood of falling into clinical depression.[39] At the other end of the scale, individuals who score low in neuroticism are less easily upset and are less emotionally reactive. They tend to be calm, emotionally stable, and free from persistent negative feelings. Freedom from negative feelings does not mean that low scorers experience a lot of positive feelings.[citation needed] Research suggests that extraversion and neuroticism are negatively correlated.[38]

Sample neuroticism items

I am easily disturbed.
I change my mood a lot.
I get irritated easily.
I get stressed out easily.
I get upset easily.
I have frequent mood swings.
I often feel blue.
I worry about things.
I am relaxed most of the time. (reversed)
I seldom feel blue. (reversed)[35]

Personality Development - Attitude, The Zing Thing

Attitude is one of the vital ingredients an individual should possess to reach today's skyrocketing altitudes of success. Having a negative attitude can be fatal and leads nowhere. Attitude alone can determine success in one's career. "The greatest discovery of our generation is that human beings can alter their lives by altering their attitudes. As you think, so shall you be."

William James

What is Attitude?

It goes without saying that management education plays an important role in determining one's career in today's corporate world, as the acquisition of this knowledge pumps a new ray of confidence and poise into individuals and makes them ready to meet the upcoming challenges and opportunities in both personal and professional life. But it has often been observed that the most critical and important aspect that makes or mars one's career, beyond the boundaries of the management concepts and skills taught at different B-Schools, is attitude. The attitude of a person decides his/her success. Irrespective of how much knowledge one gathers in life, it would be extremely hard to climb up the ladder without the right attitude towards life and the matters associated with it. A right attitude is a mixture of the WIN principle, where W stands for hard work, I stands for innovation and N stands for never giving up at any stage. The attitude of a person can also be defined as the established set of ways he/she responds to a particular person or situation, based on his/her beliefs, values and the perceptions and assumptions he/she holds of that person or situation. A person's attitude is readily apparent and understandable through his/her behavior. It is further said that one's attitude determines one's behavior. Attitude comes from judgment. Most attitudes in individuals are a result of social learning from the environment. The body language of a person is nothing but the outcome of his/her mental attitude. Depending on one's attitude, one reacts in a way, either verbal or non-verbal, that is understood by others. Studies and research in this field have revealed that the mind does not dictate the manner of reacting to a situation; rather, in every situation one has the option to react in the way one wants. If a person feels happy about something, it is because he/she has chosen to be happy about it and not because it was dictated by the mind.
It has been seen that an individual can react both in a happy and in an unhappy manner to the same situation over a period of time. This clearly shows that attitude does not come directly from the mind, nor does it exist from birth; it can be developed over time. So, since we have the power to decide, it's advisable to react in a positive manner most of the time. Just as yawning, crying and laughing are infectious, attitude too is very infectious. The first thing people pick up in a face-to-face communication or interview is the other person's attitude. Long before uttering a

word, one's attitude can infect the other who sees you with the same behavior. It's not what happens to you that counts; rather, it is how you react to what happens to you, when you confront an unexpected problem of any kind, that counts in the long run. Also, researchers from leading American universities have found that people who have a positive attitude show significantly fewer signs of aging; they are less likely to become frail, and are stronger and healthier than those who have a negative attitude. Research in this field also found that a person with a positive attitude towards life improves his/her personal health, and it is more likely that they will succeed in life. A leading researcher in the field of attitude, Dr. Glen Ostir, said, "I believe that there is a connection between mind and body, and that our thoughts and attitudes/emotions affect physical functioning, and overall health, whether through direct mechanisms such as immune function or indirect mechanisms such as social support networks." Barrie Hawkins, author of How to Generate Great Ideas, believes that the difference between more and less creative people is their attitude towards the problems they face and how they view them. It was positive attitude that gave Alexander the motivation to conquer the whole world. It was this attitude of the great Mahatma Gandhi, and his belief in non-violence, that brought India freedom from over 200 years of British rule. It was this attitude of Azim Premji and Narayana Murthy that helped them establish global brands like Wipro and Infosys, respectively, from nowhere. There are many such instances which show that a person is able to make it big only because he/she has the right attitude to do that particular thing. Attitude is everything. Ajay Chowdhry, Chairman and CEO, HCL Infosystems, believes that people with a great attitude turn out to be great managers. He further said, "Your attitude can make or mar your career."
The top of the pyramid is very narrow; there's only room for the very best in the business, and so having the right attitude gives you an edge over others to reach there and stay there longer.

Attitude at Workplace

Having the right attitude at the workplace is of utmost importance in today's cut-throat competitive world, where everyone is pushing the other to secure a better position. It has been seen time and again that people do get carried away in offices. Being politically correct at one's workplace does not just mean doing the right thing at the right place. It specifically means respecting your colleagues and seniors, learning from them, and, importantly, containing your temper when there is a difference of opinion with a senior. The following rules help in carrying the right attitude
92

to Never be Rigid with Your

workplace. Views

It's very common that we all have some preconceived notions and thoughts that guide our thinking pattern, i.e., the way we think of particular people or situations. Stereotyping won't push you forward in your career; it's important to keep your mind open and flexible to everything and everyone. Preconceiving is a strict no-no at the workplace.
Get Out of Your Comfort Zone

People are quite often drawn towards those who seem to be like them in some way. This is not a positive trait in today's globalized workplace. You have to move beyond your comfort zone, i.e., you should make an effort to get to know colleagues of a different ethnicity, religion or even nationality in the organization.
Never Be Harsh with Your Humor

This is the most common mistake committed by an employee at the workplace. One should be extremely cautious about the kind of humor he/she shares with colleagues. Many a time, it has been seen that harsh jokes lead to soured relationships. It generally happens with the young who join an organization straight from college; the professional environment is often a huge change for them. One should never crack jokes at the expense of women or individuals with disabilities, or aimed at any particular person. There should be no room for such humor at the workplace.
Always Be Curious

Entering a workplace always gives a tremendous amount of opportunity to learn new things, experience a new culture and get acquainted with new people. One must inculcate the habit of building relationships with others who are different from oneself. Always ask questions with an intention to learn more. If the approach towards learning is honest and sincere, then others around you will extend their helping hands and will respect and appreciate your personal views.
Respect Opinions

Be extremely careful about how you interpret things like news and current events. If you feel the need to voice your opinions on ongoing politics or any such events at work, make sure you do so in a way that demonstrates to others that, as a thorough professional, you are open to hearing others' opinions too.
Always Remember the Golden Rule
As the old saying goes, "do unto others as you would have others do unto you": you should always behave with others in the manner in which you would like them to behave with you. If one behaves and treats others with due respect, care, love and dignity, the overall professional experience will be much more positive and the results more fruitful. One can also attain a positive mental framework by becoming more solution-oriented, by seeking the valuable lesson in every setback, and by taking some time out to write down every detail of a problem and then taking the most logical step to solve it.
Conclusion
Attitude is all about how individuals think about themselves and the situations around them. It's how they feel about the past, the present and the future. A recent Harvard University study found that when a person gets a job, 85% of the time it's because of their attitude, and only 15% because of smartness and how many facts and figures they know. The attitude of the person to learn more is of utmost importance. Attitude is the power by which an individual can achieve and reach the pinnacle of success. As a tall building relies on its strong foundation, so does attitude serve as the foundation of success in one's career and life. Jeff Keller, President, Attitude Is Everything Inc., said, "Success, it's a matter of having a positive attitude and applying motivational principles on a daily basis."
Learning disability
Learning disability is a classification including several areas of functioning in which a person has difficulty learning in a typical manner, usually caused by an unknown factor or factors. While learning disability and learning disorder are often used interchangeably, the two differ.
Learning disability is when a person has significant learning problems in an academic area. These problems, however, are not enough to warrant an official diagnosis. Learning disorder, on the other hand, is an official clinical diagnosis, whereby the individual meets certain criteria as determined by a professional (psychologist, pediatrician, etc.). The difference is in the degree, frequency, and intensity of reported symptoms and problems, and thus the two should not be confused. The unknown factor is the disorder that affects the brain's ability to receive and process information. This disorder can make it problematic for a person to learn as quickly or in the same way as someone who is not affected by a learning disability. People with a learning disability have trouble performing specific types of skills or completing tasks if left to figure things out by themselves or if taught in conventional ways. Some forms of learning disability are incurable; however, with appropriate cognitive/academic interventions, many can be overcome. Individuals with learning disabilities can face unique challenges that are often pervasive throughout the lifespan. Depending on the type and severity of the disability, interventions may be used to help the individual learn strategies that will foster future success. Some interventions can be quite simple, while others are intricate and complex. Teachers and parents will be a part of the intervention in terms of how they aid the individual in successfully completing different tasks. School psychologists quite often help to design the intervention and coordinate its execution with teachers and parents. Social support improves learning for students with learning disabilities.
Definitions
In the 1980s, the National Joint Committee on Learning Disabilities (NJCLD) defined the term learning disability as: a heterogeneous group of disorders manifested by significant difficulties in the acquisition and use of listening, speaking, reading, writing, reasoning or mathematical abilities. These disorders are intrinsic to the individual and presumed to be due to central nervous system dysfunction. Even though a learning disability may occur concomitantly with other handicapping conditions (e.g.
sensory impairment, mental retardation, social and emotional disturbance) or environmental influences (e.g. cultural differences, insufficient/inappropriate instruction, psychogenic factors), it is not the direct result of those conditions or influences. The NJCLD used the term to indicate a discrepancy between a child's apparent capacity to learn and his or her level of achievement.[1] The 2002 LD Roundtable produced the following definition: "Concept of LD: Strong converging evidence supports the validity of the concept of specific learning disabilities (SLD). This evidence is particularly impressive because it converges across different indicators and methodologies. The central concept of SLD involves disorders
of learning and cognition that are intrinsic to the individual. SLD are specific in the sense that these disorders each significantly affect a relatively narrow range of academic and performance outcomes. SLD may occur in combination with other disabling conditions, but they are not due primarily to other conditions, such as mental retardation, behavioral disturbance, lack of opportunities to learn, or primary sensory deficits."[2][3] The term "learning disability" does not exist in DSM-IV, but it has been proposed that it be added to DSM-5, incorporating the conditions "learning disorder not otherwise specified" and "disorder of written expression".[4]
Types of learning disabilities
Learning disabilities can be categorized either by the type of information processing that is affected or by the specific difficulties caused by a processing deficit.
By stage of information processing
Learning disabilities fall into broad categories based on the four stages of information processing used in learning: input, integration, storage, and output.[5]
Input: This is the information perceived through the senses, such as visual and auditory perception. Difficulties with visual perception can cause problems with recognizing the shape, position and size of items seen. There can be problems with sequencing, which can relate to deficits with processing time intervals or temporal perception. Difficulties with auditory perception can make it difficult to screen out competing sounds in order to focus on one of them, such as the sound of the teacher's voice. Some children appear to be unable to process tactile input. For example, they may seem insensitive to pain or dislike being touched.
Integration: This is the stage during which perceived input is interpreted, categorized, placed in a sequence, or related to previous learning.
Students with problems in these areas may be unable to tell a story in the correct sequence, unable to memorize sequences of information such as the days of the week, able to understand a new concept but be unable to generalize it to other areas of learning, or able to learn facts but be unable to put the facts together to see the "big picture." A poor vocabulary may contribute to problems with comprehension. Storage: Problems with memory can occur with short-term or working memory, or with long-term memory. Most memory difficulties occur in the area of short-term memory, which can make it difficult to learn new material without many more repetitions than is usual. Difficulties with visual memory can impede learning to spell. Output: Information comes out of the brain either through words, that is, language output, or through muscle activity, such as gesturing, writing or drawing. Difficulties with language output can create problems with spoken language, for example, answering a
question on demand, in which one must retrieve information from storage, organize one's thoughts, and put the thoughts into words before speaking. It can also cause trouble with written language for the same reasons. Difficulties with motor abilities can cause problems with gross and fine motor skills. People with gross motor difficulties may be clumsy, that is, they may be prone to stumbling, falling, or bumping into things. They may also have trouble running, climbing, or learning to ride a bicycle. People with fine motor difficulties may have trouble buttoning shirts, tying shoelaces, or with handwriting.
By function impaired
Deficits in any area of information processing can manifest in a variety of specific learning disabilities. It is possible for an individual to have more than one of these difficulties. This is referred to as comorbidity or co-occurrence of learning disabilities.[6] In the UK, the term dual diagnosis is often used to refer to the co-occurrence of learning difficulties.
Reading disorder
This is the most common learning disability. Of all students with specific learning disabilities, 70% to 80% have deficits in reading. The term "developmental dyslexia" is often used as a synonym for reading disability; however, many researchers assert that there are different types of reading disabilities, of which dyslexia is one. A reading disability can affect any part of the reading process, including difficulty with accurate or fluent word recognition (or both), word decoding, reading rate, prosody (oral reading with expression), and reading comprehension. Before the term "dyslexia" came to prominence, this learning disability used to be known as "word blindness." Common indicators of reading disability include difficulty with phonemic awareness (the ability to break up words into their component sounds) and difficulty with matching letter combinations to specific sounds (sound-symbol correspondence).
Disorder of written expression
Speech and language disorders can also be called dysphasia/aphasia. The DSM-IV-TR criterion for a Disorder of Written Expression is writing skills (as measured by a standardized test or functional assessment) that fall substantially below those expected based on the individual's chronological age, measured intelligence, and age-appropriate education (Criterion A). This difficulty must also cause significant impairment to academic achievement and to tasks that require the composition of written text (Criterion B), and if a sensory deficit is present, the difficulties with writing skills must exceed those typically associated with the sensory deficit (Criterion C).[7]

Individuals with a diagnosis of a Disorder of Written Expression typically have a combination of difficulties with written expression, as evidenced by grammatical and punctuation errors within sentences, poor paragraph organization, multiple spelling errors, and excessively poor handwriting. A disorder in spelling or handwriting without other difficulties of written expression does not generally qualify for this diagnosis. If poor handwriting is due to an impairment in motor coordination, a diagnosis of Developmental Dyspraxia should be considered. The term "dysgraphia" has been used as an overarching term for all disorders of written expression. Others, such as the International Dyslexia Association, use the term "dysgraphia" to refer only to difficulties with handwriting.
Math disability (ICD-10 and DSM-IV codes F81.2-3/315.1)
Sometimes called dyscalculia, a math disability can cause difficulties such as learning math concepts (such as quantity, place value, and time), memorizing math facts, organizing numbers, and understanding how problems are organized on the page. Dyscalculics are often referred to as having a poor "number sense".[8]
Non ICD-10/DSM
Nonverbal learning disability: Nonverbal learning disabilities often manifest in motor clumsiness, poor visual-spatial skills, problematic social relationships, difficulty with maths, and poor organizational skills. These individuals often have specific strengths in the verbal domains, including early speech, a large vocabulary, early reading and spelling skills, excellent rote memory and auditory retention, and eloquent self-expression.[9]
Disorders of speaking and listening: Difficulties that often co-occur with learning disabilities include difficulty with memory, social skills and executive functions (such as organizational skills and time management).

Educational Therapy is a form of therapy used to treat individuals with learning differences, disabilities, and challenges. This form of therapy offers a wide range of intensive interventions that are designed to remediate learning problems. These interventions are individualized and unique to the specific learner. This type of therapy helps the student strengthen the ability to learn. The student engages in activities that help academics as well as teach processing, focusing, and memory skills. The difference between traditional tutoring and educational therapy is dramatic. Traditional tutoring deals specifically with academics. Educational therapy deals with processing of information as well as academics. The educational therapist uses a variety of methodologies and teaching materials to help the student reach academic success.

Processing is the way students think and learn. All students learn differently and process information in a unique manner. Information is taken in through the five senses. Some students learn better by watching (visual learning) while others learn better by hearing (auditory learning). The students who seem to do worse in the traditional school setting learn best by doing (kinesthetic learning). If these students are taught to strengthen their weakest learning systems, then learning becomes easier and more efficient. Some students have focusing problems. Attention deficits make the student less available for learning. If the student isn't attending to the information being presented, then the student isn't learning. Traditional methods involve medicating the student, but educational therapists are able to work with students and teach them how to focus and attend. Students today are expected to hold vast amounts of information in their memory banks. Many students are weak in this area as well. Memory skills can be strengthened like any skill, which in turn affects academics in a positive manner. Cross-lateral kinesthetic exercises may be used to strengthen proprioception skills. These physical exercises are thought to strengthen cognitive skills.[1] By addressing the processing of information, focusing issues, and memory skills, as well as academics, the educational therapist is better able to treat the underlying problem of the learning difference that is keeping the student from succeeding in the academic arena. This sometimes seems illogical to people, as they feel that the only way to fix an academic problem is to offer more academics. This is rarely a long-term solution to the problem of poor academics, since piling on more academics only fatigues and burdens an already frustrated student. Educational therapy is better equipped to deal with the problem of processing information. This in turn leads to stronger academics.
Educational therapy addresses the underlying learning skills that affect academics. These skills include visual and auditory processing, attention and focusing, as well as memory skills. The student is taught only the skills in which he/she is weak. Each student is different and has unique strengths and weaknesses. Therefore, educational therapy is best equipped to help students with learning differences reach their highest potential. In the UK in the 1960s, Irene Caspari, Principal Psychologist at the Tavistock Centre, London, became a leading trainer and exponent of a more psychoanalytic version of educational therapy, leaving money for the establishment of a 'Forum for the Advancement of Educational Therapy'. It was Caspari's belief that a child might learn more effectively when an academic learning program went hand in hand with 'expression work' which tapped into a child's deeper feelings, and that it therefore behoved the therapist to be aware of, and to work with, such feelings as well as with his/her own relationship with the child as a learner.[2]


Mental retardation (MR) is a generalized disorder appearing before adulthood, characterized by significantly impaired cognitive functioning and deficits in two or more adaptive behaviors. It has historically been defined as an Intelligence Quotient score under 70.[1] Once focused almost entirely on cognition, the definition now includes both a component relating to mental functioning and one relating to individuals' functional skills in their environment. As a result, a person with a below-average intelligence quotient (BAIQ) may not be considered mentally retarded. Syndromic mental retardation is intellectual deficits associated with other medical and behavioral signs and symptoms. Non-syndromic mental retardation refers to intellectual deficits that appear without other abnormalities. The terms used to describe this condition are subject to a process called the euphemism treadmill. This means that whatever term is chosen for this condition, it eventually becomes perceived as an insult. The terms "mental retardation" and "mentally retarded" were invented in the middle of the 20th century to replace the previous set of terms, which were deemed to have become offensive. By the end of the 20th century, these terms themselves had come to be widely seen as disparaging and politically incorrect and in need of replacement.[2] The term "intellectual disability" or "intellectually challenged" is now preferred by most advocates in most English-speaking countries. Clinically, however, mental retardation is a subtype of intellectual disability, which is a broader concept and includes intellectual deficits that are too mild to properly qualify as mental retardation, too specific (as in specific learning disability), or acquired later in life through acquired brain injuries or neurodegenerative diseases like dementia. Intellectual disabilities may appear at any age. Developmental disability is any disability that is due to problems with growth and development.
This term encompasses many congenital medical conditions that have no mental or intellectual components, although it, too, is sometimes used as a euphemism for MR.[3] Because of its specificity and lack of confusion with other conditions, mental retardation is still the term most widely used and recommended in professional medical settings, such as formal scientific research and health insurance paperwork.
COMPUTER IN EDUCATION
What is Computer Aided Learning (CAL)?
Definition of CAL
CAL is an abbreviation of Computer Aided Learning and is one of the most commonly used acronyms within education. It is difficult to say exactly when the term CAL was first employed; however, since the mid-1980s CAL has been increasingly used to describe the use of technology in teaching. But what exactly does Computer Aided Learning refer to? Well, there is, despite the ever-increasing interest in the use of technology
within education, no clear definition of the term CAL. It does not refer to a given standardized set of rules, HCI ideals or generic specification. So, in the absence of a formal description, perhaps we should concern ourselves less with the meaning of CAL and more with the context in which the term is used. There are two common contexts of usage: CAL as Computer Based Learning, and CAL as integrative technology.
What CAL is not - CBL!
In the absence of a classical definition, CAL has often been used to describe the development and application of educational technology in a variety of circumstances. From the mid-1980s until the early 1990s the term CAL was often used to refer to the development of either a single computer program or a series of programs which replaced the more traditional methods of instruction, in particular the lecture. This was in fact a natural progression from an early misguided strategy, propounded by Government literature (for example the pamphlet Higher Education: a New Framework, 1991), which, through ignorance, encouraged the development of computer programs with the explicit aim of replacing current methods, as opposed to their incorporation within the traditional setting with support to or from existing methods. More attention was being paid to solving the staff-to-student ratio crisis than to improving the quality of student education through the re-evaluation of the current methods of instruction, which would have resulted in a coherent instructional strategy within which CAL would form a part. Under these circumstances, whereby a computer program replaces a specific part or the whole of a lecture course with no provision or support from other methods, we are actually encouraging Computer Based Learning (CBL). CBL involves the development of a computer program with no provision, intentional or otherwise, for the re-evaluation of the current methods of teaching and the subject itself.
CAL produced under these conditions is actually a computer program whose content consists of little more than lecture notes. Thus Computer Based Learning is exactly that: we are using the medium of the computer as the primary means of knowledge exposition, with no support from or reference to other methods of instruction; the computer is the sole basis for learning. Under these circumstances, where a lecture has either been replaced or added
to by a program (i.e. a bolt-on computer application) which has been developed under a strategy lacking in re-evaluation, then only the medium of instruction has changed. The lecturer has simply reproduced their lecture notes and displayed them in another format. However, CBL does have its place within the curriculum, mainly as CBL in the classic sense where lecture notes are displayed in electronic format, e.g. a web page. There are several advantages in comparison to the more traditional methods, in particular the standard lecture and textbook. For example, a web page may be accessed at any time and over any distance; there are no limits on access, unlike a library book; the entire content of the course (the lecturer's perception of the topic) is completely available; and the content can be easily modified and updated. However, such advantages are in the main concerned with resources rather than actual learning. Only the fact that the entire content of the course is presented has a bearing on the quality of learning, and the communication between tutor and student is one-sided, with little opportunity for the student to express their views on the topic. So learning technology used in this context is CBL rather than CAL. However, is it possible to change CBL into CAL? The answer is yes. A small step in the right direction is to add some form of formative or even summative assessment where the student can check their conception of the topic and hence their progress through the material, for example simple multiple choice questions (MCQs). However, this is only a start. What is actually required is subject-based re-evaluation which will determine the learning outcomes expected of the technology and their methods of assessment. This will remove the material from a CBL context and place it within a CAL one.
CAL as an Integrative Technology?
CAL has also been used to describe a relatively more integrative approach whereby the program does not actually replace a lecture but is introduced into the course as a learning resource. Here the students experience directed learning (directed by the lecturer) or self study which takes place outside the main curriculum hours, (i.e. the primary contact
hours between student and tutor), and thus beyond any level of support from traditional methods. In fact the term CAL used in this context describes little more than an add-on or bolt-on resource for student self-study, whose success in terms of usage is dependent upon a number of student-centered factors, not least their self-discipline and motivation. So although there appears to be an attempt to integrate the program to form part of a learning environment built on multiple instructional media, the truth of the matter is that it has been bolted on and is more akin to CBL. The contexts outlined above refer to two common types of CAL misuse, whereby the term is used erroneously to describe educational methods which are little more than CBL. Both contexts involve a computer program which is essentially on its own - not supported within a framework of instructional methods which would only have come about through a general re-evaluation of the educational strategy employed to teach a particular subject.
CAL - The Hard Truth of It
In the absence of a clear definition, what should the term CAL actually refer to? The use of a single program within the classroom? In the past, developers have used the term CAL when describing the creation and performance of a computer program, particularly those used as a library-type resource. This is in fact a common misinterpretation of CAL and an example of its misuse. Computer Aided Learning describes an educational environment where a computer program, or an application as they are commonly known, is used to assist the user in learning a particular subject. The key issue is the word "assist", which means that the program is not alone in this aim and that there are other methods involved. CAL, according to my usage of the term, refers to an overall integrative approach of instructional methods and is actually part of the bigger picture.
When used in this context CAL describes an integrated approach to teaching a subject in which learning technology forms a part and which only comes about after re-assessment of the current teaching methods. This is Computer Aided Learning - in that the computer, being a program of some sort, is an aid to an overall learning strategy which in itself is a conglomeration of other methods of instruction, (e.g. the lecture, tutorial
sheets, text books etc.). So CAL is not a single computer program but part of an educational strategy devised to teach a particular subject. The relative part each method of instruction plays within the strategy is determined through the re-evaluation of the subject being taught. Re-evaluation examines the educational objectives of the subject and their associated learning outcomes and determines the success with which the current methods of teaching and assessment are achieving them in terms of knowledge gain. Thus re-evaluation helps to determine the areas or objectives of the subject in which the traditional methods are failing and where the computer program can help. Furthermore, re-evaluation determines the level at which the program operates: the educational objectives which the program alone cannot teach, and thus the level of required support from the other methods. Thus re-evaluation results in a coherent educational strategy within which each instructional method complements and supports the others, ensuring that no area of the subject is overlooked.
4 Advantages and disadvantages of Computer Based Learning
There are many advantages and disadvantages when it comes to computer based learning as part of computer aided learning; the main benefits and drawbacks are highlighted below.
4.1 Advantages
Computer based learning is ideal for distance learning, such as the Open University, as you don't need a lot of teacher contact (see e-learning above); this is also good for education in the Australian bush or in Africa, where the countries are vast and learners may have very little contact with anyone.
The student can learn at their own pace, which is different from the traditional approach where everyone learns together at the same pace, which can leave people behind and can be bad if a person has learning difficulties. The traditional approach assumes that everyone is the same and at the same level, so everyone has to keep to the teaching level; computer based/aided learning can be tailored to the individual, so you can start teaching from where they are.


Computer systems are sometimes interactive, which makes learning fun. Some people find it easier to learn when the process of doing so is fun, as you will see later when we discuss the idea of the new generation brought up on Sesame Street and computer games; Sesame Street helped children to learn because it was fun. Computer based learning generates positive learning attitudes in students. Computer based learning takes less time than traditional methods. Computer based environments are sometimes used to simulate real situations such as operations, earthquakes etc.; this is ideal as the student can experience the situations in a safe environment (please see the later section about computer aided learning in the medical world). Computer based learning applications can have multimedia built in, so not only can you read the text but you can also see videos and hear sounds; learning a new language such as German becomes easier as you can hear what a word should sound like. Programs can be put onto CD-ROM, DVD or the internet so people can get hold of the course materials easily.
4.2 Disadvantages
You need a computer, which needs power, so this is not ideal for places with a lack of power, such as some parts of the third world. Computers are expensive, so people in the third world, where the countries are big with very few schools and computer aided learning environments would therefore be ideal, would not be able to afford them; this creates a digital divide. Computer based learning lowers the teacher's role.
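Several of the advantages described above (self-pacing, interactivity, immediate feedback) can be seen in even the smallest computer based learning program. Below is a minimal sketch in Python of a self-paced multiple-choice drill; the question data and the `check_answer`/`run_drill` helper names are invented for illustration, not taken from any particular CBL package.

```python
# Minimal sketch of a self-paced multiple-choice drill, one of the
# simplest forms of computer based learning. Questions and helper
# names are invented for illustration.

QUESTIONS = [
    # (prompt, options, index of the correct option)
    ("Which device is an input device?",
     ["Printer", "Mouse", "Monitor"], 1),
    ("Which medium gives permanent (hardcopy) output?",
     ["Printer", "RAM", "Speaker"], 0),
]

def check_answer(question, choice):
    """Return True if the chosen option index is the correct one."""
    _prompt, _options, correct = question
    return choice == correct

def run_drill(answers):
    """Score a list of option indices against QUESTIONS.

    The learner can repeat the drill as often as needed, which is
    where the self-pacing of computer based learning comes from.
    """
    score = sum(check_answer(q, a) for q, a in zip(QUESTIONS, answers))
    return score, len(QUESTIONS)

if __name__ == "__main__":
    score, total = run_drill([1, 0])
    print(f"Score: {score}/{total}")  # immediate feedback to the learner
```

A real drill would read the learner's choices interactively and explain wrong answers; the point here is only that scoring and feedback require very little machinery.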

FUTURE


In weighing up whether computer aided learning is worth doing, we have a problem. On the one hand, we have a new generation that will respond better to the interactive style of teaching that computer based learning offers. This generation was brought up on computer games, MTV, computers and the internet, has not known anything else, and is seen as being bored with traditional teaching methods. On the other hand, you still need a teacher there to teach social skills, or we will have a society of people with no social skills who spend too much time on computers not talking to one another; in fact we already have this in some aspects of life today. In subjects of a complex nature such as science and engineering, where it is better to see what happens rather than hear or read about it in a book, computer aided learning is a must, because people will understand better what is happening. Education seems to be going down a computer based learning route more and more; we need to test whether these methods work or not, and we also need a happy medium between the new styles and the traditional approach if computer aided/based learning is going to work and bring up well-balanced people, not people who are fixed to computers all day. If computer aided/based learning is pushed forward as the only teaching method, then the role of the teacher needs to change to that of a mentor guiding students along in their individual learning tasks. On the whole I think the classroom of tomorrow will be very different from the classroom of today. Will it be totally computer based? Will we still have lectures in universities? We will have to see.
Unit 2: FUNDAMENTALS OF INFORMATION SYSTEM
2.1 Introduction to computer system

Introduction to Computer Systems

These notes provide a general introduction to computer systems. A computer system is made up of both hardware and software. Software is another term for computer program. Software controls the computer and makes it do useful work. Without software a computer is useless, akin to a car without someone to drive it. Hardware refers to the physical components that make up a computer system. These include the computer's processor, memory, monitor, keyboard, mouse, disk drive, printer and so on. In these notes we take a brief look at the functions of the different hardware components. In addition we describe some of the essential software required for the operation of a computer system.

1 Hardware

The hardware of a computer system is made up of a number of electronic devices connected together. Figures 1 and 1A are block diagrams of a typical computer system.

[Figure 1: Typical computer system — the computer unit connected to a mouse, monitor and keyboard, printer, speakers, CD-ROM, tape and disk unit, with a modem linking it to a phone socket.]

[Figure 1A: Typical computer system: Processor and Memory (RAM) — the processor and memory communicate over the system bus with the modem, mouse, monitor and keyboard, printer and disk unit.]

A computer has two major internal components that are of particular interest to us, namely its processor and its memory. There will also be a power supply unit (not shown) to provide power for the system. The term device is used to describe any piece of hardware that we connect to a computer, such as a keyboard, monitor, disk drive, printer and so on. Such devices are also sometimes described as peripheral devices or simply peripherals. They may be classified as input/output (I/O) devices and storage devices. As the name suggests, I/O devices are responsible for communicating with the computer, providing input for the computer to process and arranging to display output for computer users. The keyboard and mouse are commonly used input devices. The monitor is the commonest output device, followed by the printer for hardcopy (permanent) output. Storage devices are used to store information in a computer system. The memory is used to store information inside the computer while the computer is switched on. Disk storage is the commonest form of external storage, followed by tape storage. External storage devices can store information indefinitely or, more realistically, for some number of years. A very important component of a computer system is the system bus. This is used to transfer information between all system components. It is crucial to understand that all information is represented inside a computer system in binary form, i.e. using the binary digits 1 and 0. The hardware of a computer system has no other way of representing information. Thus when you press a key on a computer's keyboard, a

binary number (code) which represents the symbol on that key is transmitted to the computer, and not the symbol itself (for example, 'A') displayed on the key. Similarly, when a computer transmits a character to be displayed on the monitor, it is the binary code representing that character that is sent to the monitor. The monitor hardware takes this binary code and displays the corresponding symbol on the screen. To reiterate, all information is transmitted and manipulated inside a computer system in the form of binary numbers. A binary digit (1 or 0) is called a bit and a group of 8 bits is called a byte. When describing storage capacity, the byte and multiples of bytes are the units used. A kilobyte (Kb) is 2^10 (1024) bytes, a megabyte (Mb) is 2^20 bytes (1024 Kb), a gigabyte (Gb) is 2^30 bytes (1024 Mb) and a terabyte (Tb) is 2^40 bytes (1024 Gb). When describing transmission speeds, the number of bits per second (bps) is the unit used. A typical modem can handle speeds of up to 56 Kbps, i.e. 56 kilobits per second or approximately 56,000 bps.

1.1 The Processor

The processor, as its name suggests, is the unit that does the work of the computer system, i.e. it executes computer programs. Software is composed of instructions, which are executed (obeyed) by the processor. These instructions tell the processor when and what to read from a keyboard; what to display on a screen; what to store and retrieve from a disk drive and so on. A computer program is a set of such instructions that carries out a meaningful task. It is worth remembering at this stage that the processor can only perform a limited range of operations. It can do arithmetic, compare numbers and perform input/output (read information and display or store it). It has no magical powers. It is instructive to bear in mind that all computer programs are constructed from sequences of instructions based on such primitive operations.
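The binary codes and storage units described above can be checked with a short sketch (Python is used here purely for illustration; the unit names follow the text's 1024-based definitions):

```python
# Pressing the 'A' key sends the binary code for 'A', not the symbol itself.
code = ord('A')                            # the numeric code transmitted
assert code == 65
assert format(code, '08b') == '01000001'   # the 8 bits (one byte) on the wire

# Storage units as powers of two, as defined in the text.
Kb = 2 ** 10                     # kilobyte: 1024 bytes
Mb = 2 ** 20                     # megabyte: 1024 Kb
Gb = 2 ** 30                     # gigabyte: 1024 Mb
Tb = 2 ** 40                     # terabyte: 1024 Gb
assert Mb == 1024 * Kb and Gb == 1024 * Mb and Tb == 1024 * Gb

# Transmission speed is quoted in bits per second: a 56 Kbps modem
# moves roughly 56,000 bits, i.e. about 7,000 bytes, each second.
bits_per_second = 56_000
assert bits_per_second // 8 == 7000
```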
The processor itself is made up of a number of components such as the arithmetic logic unit (ALU) and the control unit (CU). The ALU carries out arithmetic operations (e.g. addition and subtraction) and logical operations (e.g. and, or, xor) while the CU controls the execution of instructions. Traditionally, the processor is referred to as the central processing unit or CPU. With the advent of microprocessors, the term MPU or microprocessor unit is also used. A microprocessor is simply a processor contained on a single silicon chip. In addition to the ALU and CU, the processor has a small number (usually less than 100) of storage locations to store information that is currently being processed. These locations are called registers and, depending on the processor, a register may typically store 8, 16, 32 or 64 bits. The register size of a particular processor allows us to classify the processor. Processors with a register size of n bits are called n-bit processors, so that processors with 8-bit registers are called 8-bit processors; similarly there are 16-bit, 32-bit and 64-bit processors. An n-bit processor is said to have an n-bit word size, so a 32-bit processor has a 32-bit word size. The greater the number of bits, the more powerful the processor, since it will be able to process a larger unit of information in a single operation. For example, a 32-bit processor will be able to add two 32-bit numbers in a single operation whereas an 8-bit processor will only be able to add two 8-bit numbers in a single operation. An n-bit processor will usually be capable of transferring n bits to or from memory in a single operation. This number of bits is also referred

to as the memory word size. So, while a byte refers to an 8-bit quantity, a word can mean 8, 16, 32, 64 or some other number of bits. On some machines a word is taken to mean a 16-bit quantity and the term long word is used to refer to a 32-bit quantity. An alternative method of classifying a processor is to use the width of the data bus (described later), in which case an n-bit processor is one operating with a data bus of n bits. This means that the CPU can transfer n bits to another device in a single operation. Using this classification, the Intel 8088 microprocessor is an 8-bit processor since it uses an 8-bit data bus, although its CPU registers are in fact 16-bit registers. Similarly, the Motorola 68000 is classified as a 16-bit processor, even though its CPU registers are 32-bit registers. Sometimes a combination of the two classifications is used, where the 8088 might be described as an 8/16-bit processor and the Motorola 68000 as a 16/32-bit processor. In these notes we use the register size as the method for classifying the processor. The data bus width is very important in a computer system, since it determines the amount of information that can be transferred to or from the CPU in a single operation. This means, for example, that the Motorola 68000 would have to transfer two 16-bit items to the CPU to fill a 32-bit register, since its data bus width is 16 bits. As we shall see later, I/O devices and memory operate at very slow speeds compared to the speed of the CPU. As a result, the CPU is frequently delayed by these slower devices, waiting for information to be transferred along the data bus. So, the more information we can transfer in a single operation between an I/O device and the CPU, the less time the CPU will spend waiting for information to process. This in turn means that we should strive to have the data bus as wide as possible. An important component not shown in Figure 1.1 is the CPU clock.
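The word-size idea above can be made concrete with a toy model. Python integers are unbounded, so we mimic an n-bit register by masking results to n bits; `add_n_bit` is a made-up helper for illustration, not a real processor instruction:

```python
def add_n_bit(a, b, n):
    """Add two values as an n-bit processor would, keeping only n bits."""
    mask = (1 << n) - 1          # e.g. 0xFF for n=8
    return (a + b) & mask

# An 8-bit processor adding 200 + 100 overflows: only the low 8 bits remain.
assert add_n_bit(200, 100, 8) == 44        # 300 mod 256

# A 32-bit processor handles the same sum in one operation, with no truncation.
assert add_n_bit(200, 100, 32) == 300
```

This is why a larger word size makes a processor more powerful: the 8-bit machine would need several operations (and carry handling) to produce the result the 32-bit machine gets in one.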
The clock controls the rate at which activities are carried out by the CPU. It generates a stream of cycles or ticks, and an action can only be carried out on the occurrence of a clock tick. Obviously, the more cycles per second, the more actions the CPU can carry out. The speed of the clock is measured in millions of cycles per second. One cycle per second is one hertz (Hz), a kilohertz (KHz) is 1000 Hz, a megahertz (MHz) is 1000 KHz and a gigahertz (GHz) is 1000 MHz. Currently, PCs are being marketed with clock rates ranging from 2 to 4 GHz, and the rate continues to increase.

1.2 Bus System

The processor must be able to communicate with all devices. They are connected together by a communications channel called a bus. A bus is composed of a set of communication lines or wires. A simple bus configuration is shown in Figure 1.2. We refer to this bus as the system bus, as it connects the various components in a computer system. Internally, the CPU has a CPU bus for transferring information between its components (e.g. the control unit, the ALU and the registers).

Figure 1.2: The system bus: the processor communicates with all devices via the system bus

Information is transferred from one device to another on the bus. For example, information keyed in at the keyboard is passed along the bus to the processor. The processor executes programs made up of instructions, which are stored in the computer's memory. These instructions are transferred to the processor using the bus. As indicated in Figure 1.2, the lines of the bus may be classified into three groups. One group of lines, the data lines, is used to carry the actual data along the bus from one device to another. A second group of lines, the address lines, allows the CPU to specify where the data is going to or coming from, i.e. which memory location is to be accessed or which I/O device is to be used. The third group of lines, the control lines, carries control signals that allow the CPU to control the transfer of information along the bus. For example, the CPU must be able to indicate whether information is to be transferred from memory or to memory; it must be able to signal when to start the transfer and so on. We will refer to these groups of lines as separate buses in these notes, so we refer to the data bus, address bus and control bus as separate entities. It is important to realise that a computer system may have a number of separate bus systems, so that information can be transferred between more than one pair of components at the same time. For example, it is common to have one bus for communicating between memory and the CPU at high speeds. This bus is called a CPU-memory bus. In addition, this bus would be connected to a second I/O bus via a bus adapter, as illustrated in Figure 1.3. This second bus would be used for the slower I/O devices.
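The three groups of lines can be pictured with a deliberately simplified sketch: every transfer pairs data with an address and a control signal saying which direction it travels. This is only a toy model (`bus_transfer` is invented for illustration), not how bus hardware is actually built:

```python
memory = {0: 0, 1: 0, 2: 0}      # a few addressable locations

def bus_transfer(address, control, data=None):
    """Model one bus transfer: control is 'read' or 'write'."""
    if control == 'write':       # CPU -> memory: data lines carry the value in
        memory[address] = data
        return None
    return memory[address]       # memory -> CPU: data lines carry the value out

bus_transfer(2, 'write', 65)     # address lines select location 2
assert bus_transfer(2, 'read') == 65
```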

[Figure 1.3: CPU-memory bus and I/O bus — the processor and main memory share a fast CPU-memory bus, which connects through a bus adapter to a slower I/O bus serving a number of I/O controllers.]

This allows the processor more efficient access to memory, as the CPU-memory bus can operate at very high speeds. These high speeds are only possible if the physical bus length is quite short. Thus, by providing a second I/O bus to accommodate the various I/O devices that may be connected to the computer, the length of the CPU-memory bus can be kept shorter than it would be if the I/O devices were directly attached to a single system bus. On the other hand, to keep the cost of a computer system low, a single bus running at a slower speed may be used to connect all devices to the CPU. In order to attach any device to a computer, it must be connected to the computer's bus system. This means that we need a unit that connects the device to the bus. The terms device controller and device interface are used to refer to such a unit. So, for example, a disk controller would be used to connect a disk drive to the system bus, and the term I/O controller refers to the controller for any I/O device to be connected to the bus system. A computer system will have some standard interfaces, such as a serial interface, which can be used with a number of different I/O devices. The serial interface, for example, can be used to attach a printer, a mouse or a modem (a device for communications over a telephone line) to the computer. So, if you wish to construct a new type of I/O device, you could follow the standard laid down for the serial interface (the RS-232 standard) and you could then attach your device to the computer using the serial interface.

1.3 Memory

Memory is used to store the information (programs and data) that the computer is currently using. It is sometimes called main or primary memory. One form of memory is called RAM: random access memory. This means that any location in memory may be accessed in the same amount of time as any other location. Memory access means one of two things: either the CPU is reading from a memory location or the CPU is writing to a memory location. When the CPU reads from a memory location, the contents of the memory location are copied to a CPU register. When the CPU writes to a memory location, the CPU copies the contents of a CPU register to the memory location, overwriting the previous contents of the location. The CPU cannot carry out any other operations on memory locations. RAM is a form of short-term or volatile memory. Information stored in short-term storage is lost when the computer is switched off (or when power fails, e.g. if you pull out the power lead!). There is therefore a requirement for permanent or long-term storage, which is also referred to as secondary storage or auxiliary storage. This role is fulfilled by disk and tape storage. RAM consists of a large number of storage locations or cells, each one capable of storing a small amount of information, typically a single byte. These cells are numbered or addressed starting at zero, up to some maximum number determined by the amount of RAM present, as illustrated in Figure 1.4. Currently (2003), PCs typically have 256 Mb to 512 Mb of RAM installed, but the figure is constantly being revised upwards.
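The read/write behaviour just described can be sketched with a toy model of RAM as a run of numbered byte cells; reading copies a cell's contents and writing overwrites them. This is only an illustration of the idea, not of real memory hardware:

```python
ram = bytearray(16)              # 16 one-byte cells, addresses 0..15

ram[4] = 200                     # "write": overwrite the cell at address 4
value = ram[4]                   # "read": copy the cell's contents
assert value == 200
assert ram[4] == 200             # the cell itself is unchanged by the read

# Cells can also be grouped: two consecutive cells give 16 bits,
# an unsigned range of 0 to 2**16 - 1 (65,535).
ram[8:10] = (65535).to_bytes(2, 'little')   # store 65535 in cells 8 and 9
assert int.from_bytes(ram[8:10], 'little') == 65535

# Text is one character per byte: 2000 characters need 2000 cells.
page = 'A' * 2000
assert len(page.encode('ascii')) == 2000
```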

The address of a memory cell is used when we wish to access that particular memory location. This means that we must know the address of a cell in memory before we can access its contents. A byte is a small unit of storage, capable of storing unsigned numbers in the range 0 to 255. In order to allow you to store larger quantities in memory, the hardware allows you to treat a number of consecutive cells as a unit. For example, by using two consecutive cells, 16 bits are available for storing information, giving an unsigned number range from 0 to 2^16 - 1 (65,535). By using 4 consecutive cells, 32 bits are available, allowing numbers in excess of 1 billion to be manipulated. What about text, such as that on this page? In the case of text, each character is stored separately in a single byte. So if there are 2000 characters on a page, then 2000 consecutive bytes could be used to store the text. There are two major forms of RAM, called static RAM (SRAM) and dynamic RAM (DRAM). SRAM is the more expensive of the two, as it is more complex to manufacture, but it is considerably faster to access than DRAM. DRAM has an access time in the range of 20-60 nanoseconds upwards, while SRAM access times range from 4 or 5 nanoseconds to 20 nanoseconds. It is not uncommon for a computer system to have a small amount of SRAM and a larger volume of DRAM making up its total RAM capacity. The SRAM is used to construct a

cache memory, which stores frequently accessed information and so speeds up memory access for the system. There are other forms of primary memory such as ROM, PROM, EPROM, EEPROM and flash memory. ROM (Read Only Memory) is the same as RAM in so far as any location can be read from at random, but it cannot be written to. ROM is pre-programmed by the manufacturer and its contents cannot be changed, hence its name: read only. This means that ROM is a form of permanent storage. However, since the user cannot store information in ROM, its usefulness is restricted. ROM is typically used to store programs and data that are required to start up (boot) a computer system. When a computer is powered on, its RAM will contain no useful information, but the processor is designed to run programs that it finds in memory. One major use of ROM is to store the initial program used by the processor when the machine is started. This use is described in the section on booting up a computer in the second half of these notes. Another use of ROM in personal computers is to store operating system subprograms for carrying out I/O and other activities. The term firmware is used for the combination of ROM and the software stored on it. PROM stands for programmable ROM, which means that the memory chip manufacturer provides a form of ROM that can be programmed via the use of a special hardware device. This allows computer system designers to place their own programs on the PROM chip. If their programs do not operate correctly, the designer can program another PROM chip, as opposed to getting the memory manufacturer to do it, as is the case when a designer uses ROM. EPROM is a form of ROM that is erasable, which means that the contents of the EPROM chip can be erased in their entirety and the chip can be reprogrammed (a limited number of times). As in the case of PROM, EPROM can only be programmed and erased (via exposure to ultraviolet light) by a special hardware device, outside the computer system.
EEPROM is electrically erasable PROM. EEPROM can be erased inside a computer system using an electrical current. Its major advantage is that it does not have to be removed from the computer system. In recent years work has advanced on such non-volatile RAM (NVRAM) devices. Flash memory is one such device. This memory can be accessed like RAM (read and written), but is non-volatile, i.e. it is a form of permanent storage. At the time of writing, flash memory is available in the 512 Mb to 1 Gb range. One disadvantage of current NVRAMs is that they cannot be written to as quickly as ordinary RAM. However, they are much faster to access than disk storage systems and they consume less power, so that in small portable computer systems they offer an alternative low-powered option to disk storage. However, NVRAMs are more expensive than disk storage devices. NVRAM should not be confused with a device called a RAM card, which is made up of normal RAM with a battery power supply. A RAM card can be removed from a computer and is about half the size of a floppy disk. At the moment they are available in the kilobyte to megabyte storage range. Because of the battery power supply, RAM cards retain their contents when removed from a computer.

1.4 Permanent Storage Devices

Long-term storage is also described using the terms secondary, auxiliary, mass, and external. The two commonest forms of secondary storage are disk and tape storage.

Disk Storage

Disk storage is the most popular form of secondary storage. It is more versatile than tape storage. It is faster to access, as information on any part of the disk can be accessed (direct access) quickly, independently of its position on the disk. Its disadvantage is that it is more expensive than tape storage. The surface of a disk is divided into tracks and each track is divided into sectors (blocks). There may be from 40 to hundreds of tracks on a disk surface. Each sector of a track will typically have a capacity of from 32 to 1024 (1 Kb) bytes. Information is stored on or read from a disk magnetically, using a read/write head. To access information on a disk, the head must be moved to the correct track (the time taken to do this is called the seek time); the correct sector must rotate around to the head (the time taken to do this is called the rotational delay or latency); and finally the information may be transferred (transfer time). On a typical hard disk, the average seek time is less than 20 ms (milliseconds). Based on a disk rotation speed of 3600 revolutions per minute, the average rotational delay is the time for half of one rotation, about 8 ms. The transfer time is so small, compared to the seek time and latency, that it can be ignored. Rotation speeds now range from 3000 to 9000 revolutions per minute. Note: it is approximately 100,000 times slower to access information on disk than to access information in RAM. This is because of the electromechanical nature of the disk drive, involving disk rotation and read/write head movement. While the speeds used in disk drives are quite fast in human terms, in CPU terms they are extremely slow. For example, the CPU can access information stored in RAM in the order of 20 to 100 nanoseconds.
The CPU can access information in its registers in a few nanoseconds. So from the CPU's perspective, if information has to be fetched from disk, which takes of the order of 28 milliseconds, then a long wait ensues. As a result of the mismatch in speed between the CPU and disks, much work is concerned with making disk I/O as efficient as possible. For example, you can arrange to do disk I/O so that when you read something from disk, you read a big chunk (at least one sector). Then, when you need another piece of information, it may have been read into memory already as part of the large chunk. You can also arrange information on disk so that it is stored on the same track or neighbouring tracks, which means that the seek time can be significantly reduced. The physical size of disk drives has decreased dramatically over the years. Only a few years ago, a disk drive of 100 Mb capacity would have been larger than a domestic washing machine. Nowadays a multi-gigabyte disk drive fits easily inside a laptop or notebook computer. The cost of disk storage has fallen in a similar manner. The shrinking size and low cost of disk drives has led to the use of systems with several disk drives or arrays of disk drives. In addition, to increase availability of data, redundant arrays of independent disks (RAID) systems have been developed. Current disk sizes for PCs range from 6 to 60 Gb, and the amount is constantly increasing.
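The access-time arithmetic above (seek time plus rotational delay, with transfer time ignored) can be worked through in a few lines; `avg_rotational_delay_ms` is a made-up helper name for this sketch:

```python
def avg_rotational_delay_ms(rpm):
    """Average latency: time for half a rotation, in milliseconds."""
    ms_per_rotation = 60_000 / rpm     # 60,000 ms per minute
    return ms_per_rotation / 2

# At 3600 rpm each rotation takes 60000/3600 ~ 16.7 ms, so the average
# delay is about 8 ms, matching the figure quoted in the text.
latency = avg_rotational_delay_ms(3600)
assert abs(latency - 8.33) < 0.01

# With a 20 ms seek, a disk access costs roughly 28 ms, while RAM access
# is tens of nanoseconds: around five orders of magnitude slower.
access_ms = 20 + latency
ram_ns = 50
assert (access_ms * 1e6) / ram_ns > 100_000   # disk time in ns vs RAM time in ns
```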

In a RAID system, information is distributed over a number of disk drives in such a fashion that if one of the disk drives is removed from the system (due to failure), the information can still be accessed. A simple version of a RAID system is called mirroring, whereby two disks, whose contents are mirror images of each other, are maintained. Whenever information is stored (updated) on one disk, it is automatically stored (updated) on its mirror disk. In the event of one of the disks failing, the second disk can be used to access the information. In this case we have 100% redundancy, i.e. a complete copy of all information is stored on the second drive. This increases the availability of data in the system at the expense of a second disk drive. Using clever software, however, similar availability can be achieved in a system without the overhead of 100% redundancy. For example, a RAID system might be composed of 9 disk drives where 8 of the disk drives are used to store information and 1 is used to store redundant information. This redundant information can be used to reconstruct the data from any of the drives, in the event of a drive failing. In this case, we have only a little more than 11% redundancy, but the system can operate successfully (albeit more slowly) without information loss if a disk drive becomes faulty. In brief, the decreasing cost and size of disk drives is leading to computer systems having very large storage capacities with very high data availability.

Tape Storage

Tape storage is cheap, with a large capacity, e.g. 50 Mb upwards for a typical tape. Video tapes can store 2 Gb to 8 Gb (billion bytes). The disadvantage of tape as a storage medium is that tape is a sequential storage medium. This means that to access the nth item of information, you have to skip over the first n-1 items, in the same fashion as fast-forwarding to play music from the middle of an audio tape cassette. This makes tape very slow to access in comparison with disks.
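Returning briefly to the RAID example above: one common way to hold the "redundant information" on the extra drive is byte-wise XOR parity (the scheme used in RAID levels 4 and 5 — the text does not name a specific scheme, so this is one illustrative choice). XOR-ing all data drives gives the parity drive, and XOR-ing the survivors with the parity recreates any single failed drive:

```python
from functools import reduce

data_drives = [b'\x12\x34', b'\xab\xcd', b'\x0f\xf0']   # toy "drive" contents

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

parity = xor_blocks(data_drives)          # contents of the redundant drive

# Drive 1 fails; rebuild its contents from the remaining drives plus parity.
survivors = [data_drives[0], data_drives[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_drives[1]
```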
Typically, tape storage is used to keep a backup of the information stored on a disk. Thus, in the event of a loss of information from disk, you can retrieve it from your tape backup. Tape is also used to transfer information (data and software) between computers. Tape is especially popular in large computer installations where large amounts of data have to be kept for years. On personal computers it is more common to use disks as a form of backup storage and as a means of transferring information between computers.

[Aside: Important Principle: Always have a Backup. A backup is a second copy of information stored on disk or tape. This crucial principle is basic common sense. Much time is spent entering data and programs (days, weeks, even years). But it must be remembered that computer storage media can be easily damaged, lost or stolen. In addition, users may inadvertently delete information. All computer users lose information at some stage. The seriousness of this is greatly reduced or even eliminated if regular backups are taken. In the event of information loss, you simply use the backup copy. If the information is particularly valuable (in terms of time spent to enter it, or economically) then it may be a wise precaution to have several backups. Backup copies should be stored separately from the main copy, to avoid a disaster destroying all copies at the same time.]

There are a number of different types of tape available: reel tapes, cartridge tapes, digital audio tapes (DAT) and optical tapes. The capacity ranges from tens of megabytes for reel tapes, hundreds of megabytes for cartridge tapes and several gigabytes for DAT tapes, up to a terabyte for optical tapes. The huge capacity of optical tapes is useful for organisations storing huge amounts of data, such as weather forecasting services. CD-ROM (Compact Disk Read-Only Memory) is another form of secondary storage that is increasing in popularity. It is a low-cost storage medium with a very large capacity. Unlike disk storage, CD-ROM is a WORM (Write Once Read Many times) device, i.e. it is a read-only storage device. This means that, like ROM, the disk comes with information already stored on it. Thus one of the main uses of CD-ROM is to disseminate information such as library catalogues, reports, manuals, journals, directories and software. It has also become a very popular medium for computer games. Many software vendors and computer manufacturers, such as Sun and Apple, distribute their software and manuals on CD-ROM. Many publishers now use CD-ROM, especially for educational material, and it is possible to buy encyclopaedias and history texts in CD-ROM form. The CD-ROM has sufficient capacity not only to store the written text, but also video and audio material, which require large amounts of storage; for example, a digital version of a passport-size photograph requires up to a megabyte of storage. CD-ROM uses the same technology as the compact audio disk or CD, and such disks can also be used in a CD-ROM drive. Optical scanning techniques, using lasers, are employed with CD-ROMs, which allow massive amounts of data to be stored in a compact area. A CD-ROM drive is about the same size as a floppy disk drive. CD-ROM is currently more reliable and durable than magnetic media (disks and tapes). In terms of capacity, a single CD-ROM may store up to 600 megabytes. In terms of text this is equivalent to about 200 books of 1000 pages each. A disadvantage of CD-ROM is that it takes longer to access information compared to a hard disk. However, clever software tailored for particular applications often means that this is not a serious problem.
Video disks are similar to CD-ROM (but have a larger capacity) and are used for similar applications. Recordable and rewritable CD storage (CD-R and CD-RW) is now becoming more widely used. This storage combines the reliability and storage capacity of CD-ROM with the flexibility of magnetic disks, in that users can store their own information on them. They are still slower to access than conventional hard disks. Magneto-optical (MO) disks combine the use of magnetic and optical principles to store information. MO disks have a smaller capacity than CD-ROMs (e.g. a 3.5-inch MO disk stores 128 Mb) and are quite expensive in comparison to conventional hard disks. CD-ROMs are now being replaced by DVD-ROMs (Digital Versatile Disks). DVD-ROM capacity ranges from 4.7 Gb upwards (4.7 x 2 or 4.7 x 4 Gb). These are also used for distributing films, as a rival to video tapes. A current PC will typically have a DVD drive which is also capable of reading conventional CDs. DVD-RAMs are also available, which allow users to store files. Taking short-term and long-term storage together, we can represent the relative capacity and access times in the form of a storage hierarchy, as illustrated in Figure 1.5:

[Figure 1.5: Storage hierarchy of a computer system — registers and cache at the top (fast CPU access, low capacity), then main memory and disk storage, with tape storage at the bottom (slow CPU access, very large capacity).]

At the top of the hierarchy we have storage on the CPU chip (i.e. registers). This is the fastest form of storage in terms of the CPU accessing it. It also has the smallest capacity. Register capacity ranges from hundreds of bytes to a few kilobytes. We then have cache memory, with a capacity of typically less than 1 Mb. Nowadays we also have CPU cache memory, i.e. cache memory on the CPU chip itself. This is in the low kilobyte range, e.g. 8 Kb to 64 Kb at the moment. Cache memory has an access time of typically less than 20 ns. The next level is that of main memory (RAM), with a capacity in the megabyte range and access times of 30-60 ns. Disk storage is in the gigabyte capacity range, with typical access times of milliseconds. Tape storage provides from gigabyte to terabyte storage capacity, with access times as slow as seconds for reel tapes (old technology). It should be noted that the access time for the newer optical tapes is much better.

1.5 I/O Devices

In this section we survey some of the commonly used I/O devices encountered in computer systems.

Input Devices

The keyboard and mouse are the most widely used input devices at the moment. The QWERTY keyboard, so called because the keys 'q', 'w', 'e', 'r', 't' and 'y' are adjacent, is the commonest form of keyboard. But other types of keyboard are available, some being specially designed for people with special needs. It should be noted that the layout of keys on the QWERTY keyboard owes its origins to typewriter designers, who were actually trying to slow down the speed at which a typist could type. The reason was that the old lever-based typewriters were liable to have their levers interlock if two keys were pressed in rapid succession. Typewriter designers laid out the keys in a fashion that made it difficult to type quickly, the QWERTY layout being the product of this design. Because so many people trained on such keyboards, the layout still remains with us today, many years after the engineering problem it was designed to alleviate disappeared. It is worth noting that in some non-English-speaking countries the layout is slightly different, giving rise to the QWERTZ and AZERTY keyboard layouts. The mouse is used as a pointing device and to select options from menus. A tracker ball is used for the same purposes as a mouse and is popular on laptop computers. Another input device is the light pen, which can be used to point at a monitor, serving a similar function to a mouse. A touch-sensitive screen is a method of input based on touching a specially designed screen in particular places. It is typically used in applications such as tourist information systems, where information can be obtained by touching menu options displayed on the screen. A very common requirement for business is the processing of payments.
Take an insurance company, for example: very many customers return payment for their insurance with some form of printed statement from the company. In order to automate the processing of such payments, a form of input called optical character recognition (OCR) was developed. An OCR device can scan a document and recognise characters. Originally, text had to be printed in a special OCR font for OCR to operate, but nowadays OCR can handle almost any font. The advantage of OCR for companies is that when statements are returned with payments, they can be scanned in and the customer accounts automatically credited. A less sophisticated but similar device is an optical mark reader, which can scan a specially designed form and recognise the presence of marks in particular positions. One use for such a device is in lottery game machines, where a user marks numbers on a pre-printed form which is then read by an optical mark reader connected to a lottery computer. Magnetic ink character recognition (MICR) is similar to OCR but this time the characters are not scanned optically. Instead they are scanned magnetically, as they have been printed with magnetised ink, each character having a very distinct shape. This is used on cheques by banks, to encode bank account numbers. Barcode scanners are very popular input devices in supermarkets and stores. These devices scan barcodes which identify products; this is a form of OCR. The barcode is translated to a number which can be used by the computer to identify the product and look up its price in a database. In addition the software can keep track of stock levels by recording the number of

sales of each item.
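The barcode lookup and stock tracking just described can be sketched as a simple table lookup. The barcode numbers, product names, prices and stock figures below are invented for illustration.

```python
# Hypothetical product database mapping barcode numbers to product records.
products = {
    5012345678900: {"name": "Tea bags 80pk", "price": 2.49, "stock": 120},
    5098765432109: {"name": "Milk 1L", "price": 0.89, "stock": 35},
}

def scan(barcode):
    """Look up a scanned barcode, decrement stock, and return the price."""
    item = products[barcode]
    item["stock"] -= 1        # the software keeps track of stock levels
    return item["price"]

# A till scans two items and totals the sale.
total = scan(5012345678900) + scan(5098765432109)
```

The scanner itself only delivers the number; everything else (pricing, stock control, reordering reports) is ordinary database software built on lookups like this one.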

Image scanners are devices which scan an image (document, photograph) and produce a digital version of the image, i.e. the image is stored as a sequence of binary numbers. Special software can then display the digital version of the image on a monitor. They effectively photocopy the image into the computer. This type of technology is very useful for storing legal documents, application forms and anywhere there is a requirement to access the contents of an original document very quickly. The term document image processing (DIP) is used to describe the application of this technique and it is becoming an important application in insurance and banking organisations. [Aside: It should be noted that digital images require large amounts of storage. To alleviate this problem, various data compression techniques may be used. Data compression software can reduce storage requirements dramatically, with savings ranging from 10% to 90% depending on the type of data being compressed. Some PCs use this type of software to effectively double their hard disk storage capacity, i.e. all data stored on the hard disk is compressed, so that an 80Mb disk appears as if it has 160Mb capacity. Data compression is also used by software vendors, who typically compress their software when distributing it on floppy disks, since it reduces the number of floppy disks required. Data compression is also very important in data communications, since by compressing data, it can be transmitted in less time. This is important because users are charged for either transmission time or for the amount of data transmitted, or both. Compression techniques reduce both costs. Fax machines have hardware to compress the images being transmitted and, due to the nature of most faxes (lots of blank lines, i.e. white space), reductions of up to 90% can be achieved, i.e. an image requiring 1Mb can be compressed to 100Kb for transmission. The receiving fax machine automatically decompresses the image as it receives it.]
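The effect of compression on highly repetitive data, such as the mostly blank fax pages mentioned in the aside, can be demonstrated with Python's standard zlib library. The "page" below is invented: mostly spaces with a short line of text repeated.

```python
import zlib

# A fax-like page: each line is mostly white space, repeated many times.
page = (b" " * 70 + b"INVOICE 1234\n") * 1000   # ~83 Kb of mostly blanks

compressed = zlib.compress(page)
ratio = len(compressed) / len(page)             # fraction of original size

# Decompression recovers the original exactly (lossless compression).
restored = zlib.decompress(compressed)
```

On repetitive data like this the compressed form is a small fraction of the original, in line with the up-to-90% reductions the aside quotes for fax transmission; data with little repetition compresses far less.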
A whole range of cards are available, such as ATM and credit cards, which encode information magnetically. These cards can be read by card readers and allow you to carry out various transactions such as paying for goods or obtaining cash. Voice Input is perhaps the most exciting form of computer input. While some devices and applications are available, a good deal of work remains to be done before we will easily be able to use computer software without the need for a keyboard and mouse. Output Devices Monitors are the commonest output device for a computer system. They range from the lowly dumb terminal screen to the high quality bit-mapped colour screen of workstations. A basic monitor displays up to 24 lines of 80 columns of standard characters. Advanced monitors range from monochrome to full colour and are bit-mapped, which means that each point (usually called a pixel, which stands for picture element) on the screen corresponds to at least one bit in memory. By modifying the bits in memory, the image on the screen is modified. A colour screen may have up to 24 bits in memory corresponding to each pixel, since the colour of the pixel must be recorded. Such monitors vary in size and in the number of colours they support. Printers are the commonest hardcopy output device. They range from cheap low quality dot-matrix to high speed, high quality laser printers, with a variety of intermediate quality devices available.
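The memory a bit-mapped display requires follows directly from the pixel count and the number of bits stored per pixel. A quick sketch, with the screen resolution chosen purely for illustration:

```python
def video_memory_bytes(width, height, bits_per_pixel):
    """Memory needed to hold one full-screen bit-mapped image."""
    return width * height * bits_per_pixel // 8   # 8 bits per byte

# A hypothetical 1024 x 768 screen:
mono = video_memory_bytes(1024, 768, 1)           # 1 bit per pixel (mono)
full_colour = video_memory_bytes(1024, 768, 24)   # 24 bits per pixel
```

A monochrome screen of this size needs just under 100Kb, while the same screen at 24 bits per pixel needs about 2.25Mb, which shows why full-colour bit-mapped displays demand so much more memory than character or monochrome displays.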

[Aside: A word of caution is appropriate regarding the management and use of printers. Paper inevitably jams in printers at some stage, no matter whether it's an expensive laser printer or a cheap dot-matrix one. There are few more irate users than one who has spent a few hours preparing a document, only to find that they cannot get it printed! So, if you have anything to do with managing or installing a computer system, be warned: make sure your users know the basics of clearing paper jams or face the consequences!] Connecting a Printer to a Computer Computers are connected to printers by cables using plugs and sockets, as illustrated in Figure 1.6. The sockets are usually called interfaces or ports. Since we use these ports to send information into or out of a computer, they are also called input/output ports or I/O ports. The cable used to connect the printer to the computer is often called a line. There are two types of cable which may be used: one is called a serial cable and the other a parallel cable. The parallel cable is made up of many lines running in parallel, hence its name. A different interface (socket) is required for each, so you have a serial interface for a serial line and a parallel interface for a parallel line. Most computers and printers have both types of interface, allowing you to use whichever one you please. The serial line and interface are made according to an international standard referred to as the RS232 standard. Hence a serial line and the interface for a serial line (i.e. a serial interface) are often referred to as an RS232 line and an RS232 interface.

Figure 1.6: Connecting a computer to a printer (a cable, with a plug at each end, runs from a socket on the computer to a socket on the printer)

You also use a serial interface if you wish to communicate with other computers over a telephone line. A device called a modem is used for such communication. It connects your

computer to the telephone line via the serial port. In fact a whole variety of I/O devices may be connected to a computer using a serial interface. For example, on multi-user computers which use computer terminals, each terminal (keyboard and screen) is connected to the computer via a serial line.

Plotters are output devices used for graphical output such as architectural and engineering drawings produced by CAD (Computer Aided Design) packages. They can handle a range of paper sizes and operate at various speeds. Embedded Computer Systems To date we have been describing a conventional computer system as might be used in an office or at home. However, strange as it may seem at first, the vast majority of computer processors are not used in such computer systems. One of the world's largest customers for processors is General Motors, the U.S. automobile manufacturer, where they are installed inside cars. In the home, appliances such as microwave ovens, washing machines, sound systems, alarms and so on are usually controlled by a microprocessor. The computer system used in such applications is called an embedded computer system, as it is a component of another system. Embedded processors are much the same (and sometimes are the same) as those used in a conventional computer system. For example, Patriot missiles are guided by a VAX processor (a pretty suicidal task for a processor!). For many embedded systems, only very simple microprocessors are required: in many cases primitive 4-bit processors are still used, while 8-bit microprocessors are very commonly used. The obvious difference between an embedded computer system and a conventional one is in the type of I/O device used. Embedded systems take their input from a range of devices such as switches and sensors, including temperature, pressure, light, humidity, sound and vibration sensors. The output of such systems typically goes to switches which can activate lights and other devices. These I/O devices are usually electrical devices and use analog electrical signals. A computer, on the other hand, uses digital (binary) signals. The conventional I/O devices described earlier all use digital signals, so they can be directly connected to an appropriate device controller.
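Once an analog sensor signal has been digitised, software must still scale the raw number into meaningful units. The sketch below assumes a hypothetical 10-bit converter with a 5 V reference and a linear temperature sensor producing 10 mV per degree Celsius; all of these figures are invented for illustration.

```python
def adc_to_celsius(raw, bits=10, v_ref=5.0):
    """Convert a raw converter reading to a temperature.

    Assumes a hypothetical linear sensor producing 10 mV per degree C,
    read through a converter of the given resolution and reference voltage.
    """
    volts = raw / (2**bits - 1) * v_ref   # raw count -> voltage
    return volts / 0.010                  # 10 mV per degree C (assumed)

reading = adc_to_celsius(51)              # raw count of 51 on a 10-bit scale
```

The embedded program then acts on the scaled value, for example switching a heater output on when the temperature falls below some threshold.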
Sensors and switches must be connected to an analog-to-digital (A/D) converter for input to the computer system and to a digital-to-analog (D/A) converter for output from the computer system. Smart Cards A smart card resembles an ATM card but actually contains a microprocessor on the card itself. A common example is that of a phone card. The card will also have memory, which in the case of a phone card records the number of units left on the card. Smart cards are being used in various applications, such as for security purposes, where the card can encrypt confidential information. Another application is that of an electronic purse, where the card can be topped up with money from an ATM-like machine and then used for shop purchases, with money transferred from the card to the shop where the purchase is made. 1.6 Classifying Computers A few years ago computers were easily classified as being one of mainframe, minicomputer or microcomputer. Mainframe computers were physically large and powerful systems capable of supporting hundreds of users. They had relatively large amounts of RAM (e.g. 1-2 Mb) and disk storage (e.g. 100 to 400 Mb). Minicomputers were smaller, less powerful machines than mainframes, but were still multi-user machines. Microcomputers were small humble machines

with 8-bit or 16-bit processors, 16Kb to 256Kb of RAM and 5 to 10 Mb of disk storage, used mainly for games and basic word processing. Developments in microprocessor technology, however, mean that today's desktop microcomputer will easily have more RAM and disk storage than the above-mentioned mainframe, as well as having a more powerful CPU. Minicomputers today are really no more than very powerful microcomputers. The differences between such a machine and a desktop model are more likely to be in the software being used than in the hardware; for example, a multi-user operating system such as Unix would typically be used, as opposed to the Windows systems of PCs. Mainframe computers are still powerful machines (physically much smaller than their ancestors) with hundreds of megabytes of RAM and terabytes of disk storage. They have very powerful CPUs that allow them to cope with large numbers of users. Supercomputer is the term used for the most powerful computer available at any time. These are typically tailored for very fast processing of what are known as number crunching applications. Such applications require a tremendous number of arithmetic calculations to be carried out. Weather forecasting is the classic example of such an application, where equations taking account of huge numbers of observations have to be solved. Other applications are to be found in astrophysics and some branches of chemical analysis and modelling. Supercomputers at the moment can carry out trillions of operations per second! The Cray is perhaps the best-known supercomputer, named after its designer, Seymour Cray. 1.7 Computer Networks The trend at the time of writing these notes is for organisations to install distributed computer systems in a move away from large mainframe systems (referred to as downsizing). In a distributed system, computers are connected together to form networks. Networks often provide services (e.g.
e-mail, printing, database access) on one machine for all users of the network. The machine providing the service is called a server. The machines (users) using the service are called clients. A typical organisation might provide each user with their own desktop machine, connected on a network, to a central file server machine. The file server is simply a microcomputer with high capacity disk drives, dedicated to the task of storing user files and applications software. Users can load software from the file server and run it on their own machines. In addition, they can have shared access to data stored on the file server. Because the machines are networked, it is easy to provide electronic mail (e-mail) applications to allow users to communicate with each other. One of the problems with a distributed system is that of management. With a centralised mainframe-based system, it is easier for the system manager to keep track of software and data, as well as users! With a distributed system, users may store software and data on their local hard disks (even if they are advised not to do so!). This can cause problems in keeping data consistent (i.e. everybody should have access to the same data) and problems due to different versions of a software package being used. A computer network which is local to a building or campus is called a local area network or LAN. The advantages of such a system include decreased cost (it is cheaper to install a network of PCs than a powerful mainframe) and increased reliability, since users are not dependent on a single mainframe computer. LANs are to be found in offices, schools, colleges, hospitals and most large organisations. They are suitable for networking within a building complex or

campus area. They do not provide for new applications that could not be carried out on a single mainframe with terminals, but they have significant cost and reliability advantages. Wide area computer networks (WANs) are interconnected computer systems where the distance between the machines making up the network is anything from a few kilometres to the other side of the globe. Many WANs are based on phone lines for their connections. WANs are widely used in banking and the airline industry. The financial markets are also heavily dependent on WANs. They provide for remote database access (accessing a database in a computer system that may be hundreds or thousands of miles away), which is the basis for airline reservations and home banking applications. They also provide global e-mail for users. The connection of two networks is called internetworking. The term internetwork or internet is used to describe the composite network. This may involve the connection of a LAN to a WAN, a LAN to another LAN, or a WAN to another WAN. The term Internet is now often employed to refer to a specific global network of computers that is widely used by people all over the world to communicate with each other. The Internet has hundreds of millions of computers connected to it at the moment and the number of users is growing at a phenomenal rate. Organisations are connecting their LANs or WANs to the Internet and individual users can also access the Internet from home, using a modem. The term information superhighway is being applied to this network and its planned high capacity successor. The information superhighway will allow users to employ the network for all of their communication requirements, e.g. e-mail, voice mail, fax, tele-conferencing, television and radio programs and so on. It will be based on optical fibre links, which are capable of transferring vast amounts of information very quickly.
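The client/server pattern underlying these networks can be sketched with a minimal loopback example using Python's standard socket library: a server that returns an upper-cased copy of whatever a client sends. The upper-casing "service" is invented purely for illustration; a real server would offer file storage, e-mail or database access.

```python
import socket
import threading

def serve(sock):
    """Server: accept one client, perform a task on its behalf, reply."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

# Set up the server on the loopback interface; port 0 asks the OS
# to pick any free port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client: connect to the server, request the service, read the reply.
client = socket.create_connection(server.getsockname())
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
```

The same request/reply shape scales from this toy exchange up to a file server holding user files for a whole LAN: clients ask, the server does the work and answers.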

System software System software is computer software designed to operate and control the computer hardware and to provide a platform for running application software. Device drivers such as computer BIOS and device firmware provide basic functionality to operate and control the hardware connected to or built into the computer. The operating system (prominent examples being z/OS, Microsoft Windows, Mac OS X and Linux) allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It also provides a platform to run high-level system software and application software. Window systems are components of a graphical user interface (GUI), and more specifically of a desktop environment, which supports the implementation of window managers, and provides basic support for graphics hardware,

pointing devices such as mice, and keyboards. The mouse cursor is also generally drawn by the windowing system. Utility software helps to analyze, configure, optimize and maintain the computer. Servers are computer programs running to serve the requests of other programs, the "clients". The server performs some computational task on behalf of clients, which may run on either the same computer or on other computers connected through a network. In some publications, the term system software is also used to designate software development tools (like a compiler, linker or debugger). In contrast to system software, software that allows users to do things like create text documents, play games, listen to music, or surf the web is called application software.

Application software Application software, also known as an application or an app, is computer software designed to help the user to perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install one. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user. The system software serves the application, which in turn serves the user. Similar relationships apply in other fields. For example, a shopping mall does not provide the merchandise a shopper is seeking, but provides space and services for retailers that serve the shopper. A bridge may similarly support rail tracks which support trains, allowing the trains to transport passengers. Application software applies the power of a particular computing platform or system software to a particular purpose. Some applications are available in versions for several different platforms; others have narrower requirements and are thus described as, for example, a geography application for Windows, an Android application for education, or a Linux game. Sometimes a new and popular application arises which only runs on one platform, increasing the desirability of that platform. This is called a killer application. What is the difference between system software and application software? System software is any computer software which manages and controls computer hardware so that application software can perform a task. Operating systems, such as Microsoft Windows, Mac OS X or Linux, are prominent examples of system software. System software

contrasts with application software, which comprises programs that enable the end-user to perform specific, productive tasks, such as word processing or image manipulation. System software performs tasks like transferring data from memory to disk, or rendering text onto a display device. Specific kinds of system software include loading programs, operating systems, device drivers, programming tools, compilers, assemblers, linkers, and utility software. Software libraries that perform generic functions also tend to be regarded as system software, although the dividing line is fuzzy; while a C runtime library is generally agreed to be part of the system, an OpenGL or database library is less obviously so. If system software is stored on non-volatile memory such as integrated circuits, it is usually termed firmware. Application software, by contrast, is a subclass of computer software that employs the capabilities of a computer directly and thoroughly for a task that the user wishes to perform. This should be contrasted with system software, which is involved in integrating a computer's various capabilities but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user. Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite.
Microsoft Office and OpenOffice.org, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface that has some commonality, making it easier for the user to learn and use each application. Often they also have some capability to interact with each other in ways beneficial to the user. For example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application. User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player or microwave oven. Computer programming languages

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely. The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description. The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard), while other languages, such as Perl 5 and earlier, have a dominant implementation that is used as a reference.
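The contrast between the imperative and functional styles of description mentioned above can be shown with the same small computation written both ways; Python happens to support both styles.

```python
numbers = [1, 2, 3, 4, 5]

# Imperative style: a sequence of commands that mutate state step by step.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n          # accumulate the squares of the even numbers

# Functional style: describe the result as an expression, with no mutation.
functional_total = sum(n * n for n in numbers if n % 2 == 0)
```

Both fragments compute the same value; the imperative version says how to build it, command by command, while the functional version says what the value is.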

Web languages Used for creating and editing pages on the web. Can do anything from putting plain text on a webpage to accessing and retrieving data from a database. Vary greatly in terms of power and complexity. HTML Hyper Text Markup Language. The core language of the world wide web, used to define the structure and layout of web pages by using various tags and attributes. Although a fundamental language of the web, HTML is static - content created with it does not change. HTML is used to specify the content a webpage will contain, not how the page functions. XML Extensible Markup Language. A language developed by the W3C which works like HTML but, unlike HTML, allows for custom tags that are defined by programmers. XML allows for the transmission of data between applications and organizations through the use of its custom tags. Javascript A language developed by Netscape used to provide dynamic and interactive content on webpages. With Javascript it is possible to communicate with HTML, create animations, create calculators, validate forms, and more. Javascript is often confused with Java, but they are two different languages. VBScript Visual Basic Scripting Edition. A language developed by Microsoft that works only in Microsoft's

Internet Explorer web browser and web browsers based on the Internet Explorer engine, such as FlashPeak's Slim Browser. VBScript can be used to print dates, make calculations, interact with the user, and more. VBScript is based on Visual Basic, but it is much simpler. PHP Hypertext Preprocessor (it's a recursive acronym). A powerful language used for many tasks such as data encryption, database access, and form validation. PHP was originally created in 1994 by Rasmus Lerdorf. Java A powerful and flexible language created by Sun Microsystems that can be used to create applets (programs that are executed from within another program) that run inside webpages, as well as software applications. Things you can do with Java include interacting with the user, creating graphical programs, reading from files, and more. Java is often confused with Javascript, but they are two different languages. Software languages Used for creating executable programs. Can create anything from simple console programs that print some text to the screen to entire operating systems. Vary greatly in terms of power and complexity. C An advanced programming language used for software application development. Originally developed by Dennis Ritchie at Bell Labs in the 1970s and designed to be a systems programming language, it has since proven itself able to be used for various software applications such as business programs, engineering programs, and even games. The UNIX operating system is written in C. C++ A descendant of the C language. The difference between the two languages is that C++ is object-oriented. C++ was developed by Bjarne Stroustrup at Bell Labs and is a very popular language for graphical applications. Visual Basic A language developed by Microsoft based on the BASIC language. Visual Basic is used for creating Windows applications.
The VBScript language (also developed by Microsoft) is based on Visual Basic.

The different generations of languages There are currently five generations of computer programming languages. With each generation, language syntax has become easier to understand and more human-readable.

First generation languages (abbreviated as 1GL) Represent the very early, primitive computer languages that consisted entirely of 1s and 0s - the actual language that the computer understands (machine language). Second generation languages (2GL) Represent a step up from the first generation languages. Allow for the use of symbolic names instead of just numbers. Second generation languages are known as assembly languages. Code written in an assembly language is converted into machine language (1GL). Third generation languages (3GL) With the languages introduced by the third generation of computer programming, words and commands (instead of just symbols and numbers) were being used. These languages therefore had syntax that was much easier to understand. Third generation languages are known as "high level languages" and include C, C++, Java, and Javascript, among others. Fourth generation languages (4GL) The syntax used in 4GLs is very close to human language, an improvement on the previous generation of languages. 4GLs are typically used to access databases and include SQL and ColdFusion, among others. Fifth generation languages (5GL) Fifth generation languages are currently being used for neural networks. A neural network is a form of artificial intelligence that attempts to imitate how the human mind works.
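The same trivial computation looks quite different at each generation. In the sketch below, the machine-code and assembly forms are purely illustrative (they are not for any real processor), while the final line is a third-generation, high-level statement:

```python
# 1GL (machine language, illustrative): 10111000 00000101 ...
# 2GL (assembly language, illustrative):
#     MOV AX, 5      ; load the value 5
#     ADD AX, 3      ; add 3 to it
# 3GL (high-level language): words and familiar notation instead of codes.
result = 5 + 3
```

A fourth-generation equivalent would be closer still to human language, e.g. an SQL query such as SELECT price FROM products, which states what is wanted rather than how to compute it.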

Procedure-oriented programming A type of programming where a structured method of creating programs is used. With procedure-oriented programming, a problem is broken up into parts and each part is then broken up into further parts. All these parts are known as procedures. They are separate but work together when needed. A main program centrally controls them all. Some procedure-oriented languages are COBOL, FORTRAN, and C. Object-oriented programming A type of programming where data types representing data structures are defined by the programmer, as well as their properties and the things that can be done with them. With object-oriented programming, programmers can also create relationships between data structures, and create new data types based on existing ones by having one data type inherit characteristics from another. In object-oriented programming, data types defined by the programmer are called classes (templates for a real-world object to be used in a program). For example, a programmer can create a data type that represents a car - a car class. This class can contain the properties of a car (color, model, year, etc.) and functions that specify what the car does (drive, reverse, stop, etc.). Some object-oriented languages are C++, Java, and PHP.
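The car class described above can be sketched directly. The property and method names below are invented for illustration, and a second class is added to show inheritance:

```python
class Car:
    """A class: a template for real-world car objects."""
    def __init__(self, color, model, year):
        self.color = color        # properties of the car
        self.model = model
        self.year = year
        self.speed = 0

    def drive(self):              # things the car can do
        self.speed += 10

    def stop(self):
        self.speed = 0

class ElectricCar(Car):
    """A new data type inheriting characteristics from Car."""
    def __init__(self, color, model, year, battery_kwh):
        super().__init__(color, model, year)
        self.battery_kwh = battery_kwh   # an extra property of its own

# Each object created from the class is one particular car.
my_car = ElectricCar("red", "Roadster", 2020, 75)
my_car.drive()
```

ElectricCar inherits the properties and behaviour of Car and adds one property of its own, which is exactly the "new data types based on existing ones" relationship described above.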

Operating system An operating system (OS) is a set of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system. Application programs require an operating system to function. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems include Android, BSD, iOS, GNU/Linux, Mac OS X, Microsoft Windows, Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share roots in UNIX.

Types Real-time A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts. Multi-user A multi-user operating system allows multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, as opposed to

a multi-user operating system, are usable by a single user at a time. Being able to use multiple accounts on a Windows operating system does not make it a multi-user system. Rather, only the network administrator is the real user. But for a UNIX-like operating system, it is possible for two users to login at a time and this capability of the OS makes it a multi-user operating system. Multi-tasking vs. Single-tasking When only a single program is allowed to run at a time, the system is grouped under a singletasking system. However, when the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multi-tasking can be of two types: pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16bit versions of Microsoft Windows used cooperative multi-tasking. 32-bit versions, both Windows NT and Win9x, used pre-emptive multi-tasking. Mac OS prior to OS X used to support cooperative multitasking. Distributed Further information: Distributed system A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system. Embedded Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources. 
They are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems.
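The co-operative multi-tasking model described earlier, where each program voluntarily hands control back to the system, can be sketched with Python generators standing in for tasks and a round-robin loop standing in for the scheduler. This is a toy model of the scheduling idea, not how a real kernel is implemented:

```python
# Toy co-operative multi-tasking: each "task" is a generator that does a
# little work, records it, then yields control back to the scheduler.
def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # voluntary yield point, as in 16-bit Windows or classic Mac OS

def run(tasks):
    """Round-robin scheduler: resume each task in turn until all finish."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)           # run the task up to its next yield
            tasks.append(current)   # not finished yet: put it back in the queue
        except StopIteration:
            pass                    # task completed; drop it

log = []
run([task("A", 2, log), task("B", 3, log)])
print(log)  # ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

The tasks interleave only at their yield points; a task that never yields would monopolize the "CPU", which is exactly the weakness of the co-operative model that pre-emptive multi-tasking removes.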

Database management system

A database management system (DBMS) is a software package of computer programs that controls the creation, maintenance, and use of a database. It allows database administrators (DBAs) and other specialists to conveniently develop databases for an organization's various applications. A database is an integrated collection of data records, files, and other objects, and a DBMS allows different user application programs to concurrently access the same database. DBMSs may use a variety of database models, such as the relational model or the object model, to conveniently describe and support applications. A DBMS typically supports query languages: dedicated, high-level database languages that considerably simplify the writing of database application programs. Database languages also simplify the organization of the database as well as the retrieval and presentation of information from it. A DBMS provides facilities for controlling data access, enforcing data integrity, managing concurrency control, and recovering the database after failures and restoring it from backup files, as well as maintaining database security.
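The facilities listed above can be seen in miniature with Python's built-in SQLite engine: a declarative query language (SQL), integrity enforcement, and transactional commits. The table and column names below are invented for illustration:

```python
import sqlite3

# An in-memory database: no server needed, but the same SQL concepts apply.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT NOT NULL, dept TEXT)")
con.executemany("INSERT INTO staff (name, dept) VALUES (?, ?)",
                [("Ada", "IT"), ("Grace", "IT"), ("Alan", "HR")])
con.commit()  # make the transaction durable

# A query describes *what* data is wanted; the DBMS decides how to fetch it.
rows = con.execute(
    "SELECT dept, COUNT(*) FROM staff GROUP BY dept ORDER BY dept").fetchall()
print(rows)  # [('HR', 1), ('IT', 2)]

# Data integrity: the NOT NULL constraint makes the engine reject a bad row.
rejected = False
try:
    con.execute("INSERT INTO staff (name, dept) VALUES (NULL, 'IT')")
except sqlite3.IntegrityError:
    rejected = True
print("bad row rejected:", rejected)  # True
```

Note that the application never specifies an access path or file layout; the DBMS handles storage, indexing, and constraint checking behind the query language.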

Telecommunication

Telecommunication is the transmission of information over significant distances in order to communicate. In earlier times, telecommunications involved the use of visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags, and optical heliographs, or audio messages such as coded drumbeats, lung-blown horns, and loud whistles. In modern times, telecommunications involves the use of electrical devices such as the telegraph, telephone, and teleprinter; of radio and microwave communications; of fiber optics and their associated electronics; and of orbiting satellites and the Internet. A revolution in wireless telecommunications began in the first decade of the 1900s with pioneering developments in radio communications by Nikola Tesla and Guglielmo Marconi; Marconi won the Nobel Prize in Physics in 1909 for his efforts. Other notable pioneering inventors and developers in the field of electrical and electronic telecommunications include Charles Wheatstone and Samuel Morse (telegraph), Alexander Graham Bell (telephone), Edwin Armstrong and Lee de Forest (radio), and John Logie Baird and Philo Farnsworth (television).

Internet

The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (often called TCP/IP, although not all applications use TCP) to serve billions of users worldwide. It is a network of networks consisting of millions of private, public, academic, business, and government networks, of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support email. Most traditional communications media, including telephone, music, film, and television, are being reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV). Newspaper, book, and other print publishing are adapting to Web site technology or being reshaped into blogging and web feeds. The Internet has enabled and accelerated new forms of human interaction through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and for small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies and to the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2011, more than 2.2 billion people, nearly a third of Earth's population, use the services of the Internet.[1] The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overarching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

Intranet

An intranet is a computer network that uses Internet Protocol technology to share information, operational systems, or computing services within an organization. The term is used in contrast to internet, a network between organizations, and instead refers to a network within an organization. Sometimes the term refers only to the organization's internal website, but it may be a more extensive part of the organization's information technology infrastructure and may be composed of multiple local area networks. The objective is to organize each individual's desktop with minimal cost, time, and effort so as to be more productive, cost-efficient, timely, and competitive. An intranet may host multiple private websites and constitute an important component and focal point of internal communication and collaboration. Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data. An intranet can be understood as a private analog of the Internet, or as a private extension of the Internet confined to an organization. The first intranet websites and home pages began to appear in organizations in 1996-1997, although the term itself had become commonplace among early adopters, such as universities and technology corporations, by 1992.

Intranets are sometimes contrasted with extranets. While intranets are generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties.[1] Extranets extend a private network onto the Internet with special provisions for authentication, authorization, and accounting (the AAA protocol). In many organizations, intranets are protected from unauthorized external access by means of a network gateway and firewall. For smaller companies, an intranet may be created simply by using private IP address ranges, such as 192.168.0.0/16. In these cases, the intranet can be directly accessed only from a computer in the local network; however, companies may provide access to off-site employees through a virtual private network or other access methods requiring user authentication and encryption.
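The private address ranges mentioned above, such as 192.168.0.0/16, can be worked with directly in Python's standard ipaddress module; a minimal sketch, with illustrative addresses:

```python
import ipaddress

# The intranet's private range from the text: 192.168.0.0/16.
intranet = ipaddress.ip_network("192.168.0.0/16")

def on_intranet(addr: str) -> bool:
    """True if the given address falls inside the intranet's private range."""
    return ipaddress.ip_address(addr) in intranet

print(on_intranet("192.168.4.17"))  # True: inside the private range
print(on_intranet("8.8.8.8"))       # False: a public Internet address

# The library also knows the standard private ranges generically:
print(ipaddress.ip_address("10.0.0.1").is_private)  # True
```

Addresses in these ranges are not routed on the public Internet, which is why a host outside the local network cannot reach the intranet directly.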

Email

Electronic mail, commonly known as email or e-mail, is a method of exchanging digital messages from an author to one or more recipients. Modern email operates across the Internet or other computer networks. Some early email systems required that the author and the recipient both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model: email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need to connect only briefly, typically to an email server, for as long as it takes to send or receive messages. An Internet email message consists of three components: the message envelope, the message header, and the message body. The message header contains control information, including, minimally, an originator's email address and one or more recipient addresses. Usually descriptive information is also added, such as a subject header field and a message submission date/time stamp. Originally a text-only (7-bit ASCII and others) communications medium, email was extended to carry multi-media content attachments, a process standardized in RFC 2045 through RFC 2049; collectively, these RFCs have come to be called Multipurpose Internet Mail Extensions (MIME). Electronic mail predates the inception of the Internet and was in fact a crucial tool in creating it,[2] but the history of modern, global Internet email services reaches back to the early ARPANET. Standards for encoding email messages were proposed as early as 1973 (RFC 561). The conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today.
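The message structure described above, header fields for the originator, recipients, subject, and date, plus a body, with MIME allowing non-text content, can be built with Python's standard email library. The addresses and attachment bytes below are hypothetical:

```python
from email.message import EmailMessage
from email.utils import formatdate

# Build the header: originator, recipient, and descriptive fields.
msg = EmailMessage()
msg["From"] = "author@example.org"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Store-and-forward demo"
msg["Date"] = formatdate(localtime=True)

# The body: originally email carried only plain text like this.
msg.set_content("A plain text body, as in early ARPANET mail.")

# MIME (RFC 2045-2049) lets the same message carry multi-media content
# alongside the text body; the bytes here are a stand-in, not a real image.
msg.add_attachment(b"not a real image", maintype="image", subtype="png",
                   filename="diagram.png")

print(msg["Subject"])
print(msg.get_content_type())  # multipart/mixed once an attachment is added
```

Adding the attachment converts the message into a MIME multipart container, which is exactly the mechanism the RFCs standardized for carrying mixed content in a medium that began as 7-bit text.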

Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP), first published as Internet standard 10 (RFC 821) in 1982. In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself.

Multimedia

Multimedia is media and content that uses a combination of different content forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective describing a medium as having multiple content forms. It is used in contrast to media that use only rudimentary computer display, such as text-only, or to traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, or interactive content forms. Multimedia is usually recorded and played, displayed, or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia (as an adjective) also describes electronic media devices used to store and experience multimedia content. Multimedia is distinguished from mixed media in fine art; by including audio, for example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.
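The content forms listed above (text, audio, still images, video) map directly onto MIME content types, the labels software uses to identify multimedia data. Python's standard mimetypes module illustrates the mapping; the file names are invented, and only the extensions matter:

```python
import mimetypes

# Each file extension maps to a MIME type of the form form/subtype,
# mirroring the multimedia content forms: text, audio, image, video.
for name in ["report.txt", "song.mp3", "photo.jpeg", "clip.mp4"]:
    kind, _ = mimetypes.guess_type(name)
    print(name, "->", kind)
# report.txt -> text/plain
# song.mp3 -> audio/mpeg
# photo.jpeg -> image/jpeg
# clip.mp4 -> video/mp4
```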

Virtual reality

Virtual reality (VR) is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced, haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Furthermore, virtual reality covers remote communication environments which provide the virtual presence of users, via the concepts of telepresence and telexistence or a virtual artifact (VA), either through standard input devices such as a keyboard and mouse or through multimodal devices such as a wired glove, the Polhemus, and omnidirectional treadmills. The simulated environment can be similar to the real world in order to create a lifelike experience, for example in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution, and communication bandwidth; however, the technology's proponents hope that such limitations will be overcome as processor, imaging, and data communication technologies become more powerful and cost-effective over time. Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves, and miniaturization have helped popularize the notion. In the book The Metaphysics of Virtual Reality, Michael R. Heim identifies seven different concepts of virtual reality: simulation, interaction, artificiality, immersion, telepresence, full-body immersion, and network communication. People often identify VR with head-mounted displays and data suits.

Microsoft Office

Microsoft Office is a proprietary commercial office suite of desktop applications, servers, and services for the Microsoft Windows and Mac OS X operating systems, introduced by Microsoft on August 1, 1989. Initially a marketing term for a bundled set of applications, the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Over the years, Office applications have grown substantially closer together through shared features such as a common spell checker, OLE data integration, and the Microsoft Visual Basic for Applications scripting language. Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand.

Components

Word

Microsoft Word is a word processor and was previously considered the main program in Office. Its proprietary DOC format is considered a de facto standard, although Word 2007 can also use a newer XML-based, Microsoft Office-optimized format called .DOCX, which has been controversially standardized by Ecma International as Office Open XML; its SP2 update supports PDF and, to a limited extent, ODF.[5] Word is also available in some editions of Microsoft Works. It is available for the Windows and Mac platforms. The first version of Word, released in the autumn of 1983, was for the MS-DOS operating system and had the distinction of introducing the mouse to a broad population; Word 1.0 could be purchased with a bundled mouse, though none was required. Following the precedents of LisaWrite and MacWrite, Word for Macintosh attempted to add closer WYSIWYG features into its package. Word for Mac, released in 1985, was the first graphical version of Microsoft Word. Despite its bugginess, it became one of the most popular Mac applications.

Excel

Microsoft Excel is a spreadsheet program that originally competed with the dominant Lotus 1-2-3 but eventually outsold it. It is available for the Windows and Mac platforms. Microsoft released the first version of Excel for the Mac in 1985, and the first Windows version (numbered 2.05 to line up with the Mac version and bundled with a standalone Windows run-time environment) in November 1987.

Outlook/Entourage

Microsoft Outlook (not to be confused with Outlook Express) is a personal information manager and e-mail communication software. The replacement for Windows Messaging, Microsoft Mail, and Schedule+ starting in Office 97, it includes an e-mail client, calendar, task manager, and address book. On the Mac, Microsoft offered several versions of Outlook in the late 1990s, but only for use with Microsoft Exchange Server.
With Office 2001, Microsoft introduced an alternative application with a slightly different feature set, called Microsoft Entourage; Outlook was reintroduced in Office 2011, replacing Entourage.[6]

PowerPoint

Microsoft PowerPoint is a popular presentation program for Windows and Mac. It is used to create slideshows, composed of text, graphics, movies, and other objects, which can be displayed on-screen and navigated through by the presenter, or printed out on transparencies or slides.
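The XML-based .DOCX format discussed under Word is, concretely, a ZIP container holding XML parts. This sketch builds a toy archive in memory with a main document part at its conventional path and reads it back; a real file produced by Word contains several additional parts and XML namespaces:

```python
import io
import zipfile

# Write a minimal .docx-style container: a ZIP archive whose main
# document part lives at word/document.xml (the toy XML here omits the
# namespace declarations a real Word file would carry).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml",
                "<w:document><w:body><w:p>Hello</w:p></w:body></w:document>")

# Read it back: any ZIP tool can list and extract the XML parts.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    xml = zf.read("word/document.xml").decode()

print(names)                          # ['word/document.xml']
print(xml.startswith("<w:document>")) # True
```

This container-of-XML design is what made the format amenable to standardization as Office Open XML: the parts are plain text inside an ordinary ZIP archive, readable without Word itself.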
