Learning Objects and the Teaching of Java Programming

by Jane Yin-Kim Yau

A Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of Masters by Research in Computer Science

University of Warwick, Department of Computer Science September 2004

Table of Contents
List of Tables
List of Figures
Declaration
Publications
Abstract

1 Introduction
1.1 Motivation
1.2 Outline of Thesis

2 Background Literature
2.1 Learning Objects
2.1.1 Origin of the term ‘Learning Objects’
2.1.2 Background Literature on Learning Objects
2.1.3 Learning Object Repositories
2.1.4 Problems with Learning Objects
2.1.5 Related Literature on Learning Objects
2.2 XML
2.2.1 Applications of XML
2.2.2 XSL, XSLT, XPath and XSL-FO
2.2.3 Issues regarding the usage of XML
2.3 Standards Initiatives for Web-based Learning
2.3.1 DCMI
2.3.2 IEEE LTSC
2.3.3 IMS
2.3.4 ADL
2.3.5 CanCore
2.3.6 Singapore ECC
2.4 Web-based Adaptive Learning and Testing
2.4.1 Adaptive Learning
2.4.2 Adaptive Testing
2.4.3 Data Models for Adaptive Learning and Testing
2.4.4 Web-based Education Predecessors
2.4.5 Metadata for Adaptive Content
2.5 Summary


3 Teaching Java as an Introductory Programming Language
3.1 Why is Java used as an Introductory Language?
3.2 Suitability of Teaching Java as an Introductory Language
3.3 Difficulties of Learning Programming
3.4 Approaches for Teaching Java Programming
3.4.1 Fundamentals-first
3.4.2 Objects-first
3.4.3 GUIs-first
3.5 Our Research Investigations
3.5.1 Literature Survey
3.5.2 Research Questions and Results
3.5.3 Student Questionnaire
3.5.4 Research Questions and Results
3.6 Summary

4 Adaptive Learning and Testing with Learning Objects
4.1 Architecture of our software tool OCTA
4.2 Logical Structure of our software tool
4.2.1 Problems
4.2.2 Roadmaps
4.2.3 Handling Functions
4.3 Graphical User Interface
4.4 Learning Content Development
4.4.1 Ideas behind construction of learning paths
4.4.2 Roadmaps for different learning paths
4.4.3 Pre-Test Roadmap
4.4.4 Proficiency Test Roadmap
4.4.5 Lesson Instruction Roadmap
4.4.6 Performance Test Roadmap
4.4.7 Metadata for our Learning Objects
4.5 Summary

5 Evaluation
5.1 Methodology
5.1.1 Details of Student Volunteers
5.1.2 Constraints of the Evaluation
5.2 Research Questions and Evaluation Results
5.2.1 Pre-Test Roadmap
5.2.2 Proficiency Roadmap
5.2.3 Lesson Instruction Roadmap
5.2.4 Performance Test Roadmap
5.2.5 Web-based learning approach
5.3 Summary

6 Conclusion
6.1 Summary of Thesis Work
6.2 Research Contributions
6.3 Future Work

References

Appendix A Publication – Introducing Java: the Case for Fundamentals-first
Appendix B Publication – Adaptive Learning and Testing with Learning Objects
Appendix C Learning Object Metadata
Appendix D CanCore Learning Object Metadata Application Profile
Appendix E SingCore Learning Object Metadata Application Profile
Appendix F Full Details of Textbooks
Appendix G Student Questionnaire

List of Tables

2.1 Simple DCM
2.2 Mapping from Simple DCM to LOM
3.1 Bloom’s Taxonomy
3.2 Topics and their Reasons for Selection
3.3 Details of Student Volunteers
3.4 Difficulty Levels of Topics rated by Students
3.5 Percentage of Students and their Ratings
3.6 Arithmetic Mean of Students’ Perceived Difficulty Levels
4.1 Additional Metadata
4.2 Objects of the Instruction Learning Mode
4.3 Objects of the Assessment Learning Mode
4.4 Objects of the Collaboration Learning Mode
4.5 Objects of the Practice Learning Mode
4.6 Assigned Difficulty Levels for Java Topics
4.7 Marks for Different Pre-Test Questions
4.8 How Questions are selected in Proficiency Tests
4.9 Marks for Different Proficiency Test Questions
4.10 Marks for Different Lesson Instruction Questions
4.11 Marks for Different Performance Test Questions
4.12 Educational Category of the IEEE LOM standard
5.1 Details of Student Volunteers
5.2 Syllabi of Core Computer Science Modules
5.3 Students’ Proficiency Levels in the Pre-Tests
5.4 Students’ Proficiency Levels in the Proficiency Tests

List of Figures

2.1 Student.DTD
2.2 A Student XML File
2.3 XSL Stylesheet Declaration
2.4 XSL Transformation Template
2.5 Link between Stylesheet and XML Document
2.6 XPath Expression
2.7 Metadata for Adaptive Content
2.8 Example of Adaptive Content Metadata
3.1 Ordering of Topics by the Three Approaches
3.2 Difference between Objects-first and Fundamentals-first
3.3 Most Common Ordering of Procedural Constructs
4.1 Architecture of OCTA
4.2 XML Tag for Problem
4.3 A Problem Example
4.4 A Screenshot of a Screen in OCTA
4.5 XML Tag for Screen
4.6 XML Tag for Screen with the Additional Metadata
4.7 XML Tag for Administrative part of Logic
4.8 XML Tag for Sequencing part of Logic
4.9 An Example of a Problem
4.10 XML Tag for Roadmap
4.11 How a Roadmap Works
4.12 A Screenshot showing the Available Problems
4.13 A Screenshot showing the Initial View for Students
4.14 A Screenshot showing the Available Roadmaps
4.15 A Screenshot of a Student Screen
4.16 Pre-Test Roadmap
4.17 Proficiency Test Roadmap
4.18 Lesson Instruction Roadmap
4.19 Performance Test Roadmap
4.20 The Overall Problem.DTD
5.1 Average Assigned Proficiency Levels in Pre-Tests
5.2 Average Assigned Proficiency Levels in Proficiency Tests

Declaration

I confirm that this thesis is my own work and has not been submitted for a degree at another university.

Publications

This thesis has not been published elsewhere in its present form; however, work based on that presented here has been published, as follows:

• A preliminary version of Sections 3.4 and 3.5 formed the basis of Introducing Java: the Case for Fundamentals-first, which is published in the International Conference on Education and Information Systems, July 2004. This is included in Appendix A.

• A preliminary version of Sections 4.2 and 4.4 formed the basis of Adaptive Learning and Testing with Learning Objects, which will appear in the Proceedings of the International Conference on Computers in Education, December 2004. This is included in Appendix B.

Abstract

E-learning, whose popularity has grown with the widespread use of the Internet, is a new approach to web-based education. The large amount of learning material on the Internet can be reused by many educational institutions worldwide, provided that there exists an effective way of finding and disseminating this material. A new concept, the learning object – a small unit of learning material in a uniform format – has been developed to ensure that learning materials are interoperable and reusable between different web-based learning systems.

This thesis addresses the possibility of developing learning materials for the Java programming language as learning objects, and how these can be incorporated into a web-based adaptive learning and testing system. A literature survey was conducted to evaluate the different approaches to teaching Java, and to attempt to establish an agreed ordering of the relative difficulty levels of basic Java topics adopted by each of the three approaches. A student questionnaire investigation was then performed to compare the professionals’ apparent ordering of topics in the published books with students’ perceptions of the difficulty levels of these topics, and hence to determine whether the two orderings agree. We have extended an existing web-based learning tool to use learning objects to teach students introductory Java programming, in which each student is provided with an individual adaptive learning experience. We have evaluated the tool and show that it appears effective for students learning introductory Java programming.


Chapter 1

Introduction


The use of the Internet has become more and more popular in our everyday life, whether in business, in the workplace or in education. This has increased the use of learning and teaching materials made available on the web in primary, secondary, further and especially higher education. Many educational institutions worldwide make the content of their courses publicly accessible via the web, not only to their students but also to others; see, for example, Massachusetts Institute of Technology (2004). Web-based education brings about many benefits, for example classroom independence and platform independence (Brusilovsky, 1999), and this creates the opportunity for the same topic to be taught at many educational institutions, provided there exists an effective way of finding and disseminating the material. Web-based learning systems are also increasingly employed to facilitate student learning; however, systems at different institutions may contain learning materials which are incompatible with one another. Therefore, a common way of combining this material must be defined in order to allow reuse (Wu, 2002).

A new concept, learning objects, has been developed to facilitate fast and effective retrieval of the vast amount of online learning material, for example amongst different web-based and offline learning systems. A learning object is a piece of self-contained pedagogic data which can be used and reused in many different contexts and which has a set of self-describing metadata to facilitate search and retrieval

(IEEE LTSC, 2004). This reusability allows substantial reductions in the cost of developing online learning materials (Downes, 2001) and since learning objects can be repurposed for different courses or disciplines, this further reduces the cost of development (Smith, 2004). A common standard for metadata also exists which allows learning objects to be imported from and exported to different web-based learning environments. XML (W3.Org, 2004), the eXtensible Markup Language, is a system for exchanging and storing information on the web, and has become the common language for developing course materials by many educational institutions (Evanoff, 2003).
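To make the idea of self-describing metadata concrete, a learning object's record might be expressed in XML along the following lines. This is a hypothetical sketch for illustration only: the element names are invented, loosely modelled on the IEEE LOM vocabulary discussed later in this chapter, and are not taken from any actual standard or from our system.

```xml
<!-- Hypothetical metadata record for a Java learning object.  -->
<!-- Element names are illustrative, not from a real standard. -->
<learningObject>
  <title>Introduction to Java Variables</title>
  <description>A short lesson on declaring and initialising
    variables in Java.</description>
  <keyword>Java</keyword>
  <keyword>variables</keyword>
  <format>text/html</format>
</learningObject>
```

Because such a record is plain XML, any system that understands the chosen vocabulary can index, search and exchange the object, which is precisely what makes learning objects interoperable.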

Computer programming, an essential part of all computer science degree courses, is arguably one of the most complex subjects for the teacher to teach and for the learner to learn (Jenkins, 2002). Some students may know very little about programming, since prior computing knowledge, or at least computer programming knowledge, is usually not required to enrol on such a course (UCAS, 2004). However, some students may have previous programming experience, gained either as part of their education or in their leisure time. Many universities have also introduced advanced technologies such as web-based learning systems to help facilitate student learning. The more sophisticated of these systems can personalize the learning

content and test questions for each individual learner to achieve maximum performance. Generally speaking, adaptive learning customizes for each learner the selection and presentation of learning material according to their previous and current knowledge, learning aims and objectives, learning or cognitive styles, and special needs (Hockemeyer & Dietrich, 2002). Ideally, each individual learner's needs are met by adapting different learning objects for different students; this can be facilitated by providing a repository from which learning objects may be selected (Adamchik & Gunawardena, 2003). Similarly, adaptive testing can effectively assess and distinguish students of different abilities by dynamically adjusting its proficiency level to suit each student (Wainer et al., 2000).

1.1 Motivation

Given the advantages of web-based education (Brusilovsky, 1999), and since a much wider and more diverse population of students will use web-based learning materials than conventional classroom teaching materials (Meisalo et al., 2002), it is critical for the courseware to be adaptive, and this is also beneficial in helping students effectively learn introductory programming. Pillay (2003) points out that “One-on-one tutoring has proven to be the most effective means of assisting first time programmers overcome learning difficulties”. Related work includes Boyle (2003), who has constructed many learning objects for introductory programming in Java in order to tackle students’ difficulties in learning programming. Given these many factors, we felt that it would be

appropriate to examine how learning objects can be used for teaching introductory programming. As part of ongoing work in computer-assisted learning technologies (Joy et al., 2002), an authoring tool called Online Computer Teaching Aid (OCTA) was developed to help students learn computer science course material. Its features are common to many computer-based teaching aids; however, it aims to provide each student with an adaptive learning experience based on their previous performance. Different learning and testing strategies have been designed and developed to meet the needs of students with different levels of programming knowledge and abilities. A repository of learning objects is created and these objects are marked up with a commonly used metadata standard. This demonstrates how learning objects can be used in different learning contexts and how learners can be provided with their own individually personalized learning and testing environments. For the purpose of this thesis, an existing online non-adaptive Java programming course has been imported into OCTA and converted into an adaptive one that uses learning objects.

1.2 Outline of Thesis

This thesis addresses the possibility of developing learning materials for the Java programming language as learning objects and how these can be incorporated into a web-based adaptive learning and testing system.

Issues regarding whether learning objects can be imported into or exported from the system are addressed, and additional tags are included in our metadata to allow easy search and retrieval for the users of the system.

Chapter 2 of this thesis provides background literature on learning objects. It describes the origin of the term ‘learning object’, clarifies the different definitions given for learning objects, examines learning object repositories, and addresses problems associated with them. The second part of the chapter describes the XML technology and some applications for which XML is currently being used; we then look into some of XML’s counterparts such as XSL, XSLT, XPath and XSL-FO. Several web-based learning initiatives that have developed standards and specifications for learning objects are then discussed in detail, along with a description of many of these standards and specifications. Finally, the last part of the chapter focuses on adaptive learning and testing technologies and examines their advantages and problems.

Chapter 3 begins by investigating why Java is used as an introductory language at degree level and evaluates its suitability, and describes the reasons why many students find programming difficult to learn. Three different approaches – Fundamentals-first, Objects-first and GUIs-first – which are adopted to teach the Java programming language both at university and in textbooks, are then investigated; an overview of each of these approaches is given, as well as their advantages and disadvantages. The chapter then discusses our two investigations – a Literature Survey and a Student Questionnaire – which were performed to ascertain whether there exists an agreed ordering of topics by difficulty level in basic

Java across the three approaches. Our research questions and their corresponding results are then discussed in detail, and a detailed analysis of the research results is provided.

Chapter 4 provides an overview of the design and implementation of our web-based software tool, OCTA. The physical architecture is first discussed, then the internal logical structure and its handling functions, and thereafter the graphical user interface of the software is illustrated. The system’s sequencing features, which allow a non-adaptive course to be converted into an adaptive one, are then discussed. The ideas behind the construction of the learning content and individual learning paths are then examined, along with four implemented modules – Pre-Test, Proficiency Test, Lesson Instruction and Performance Test. Finally, we discuss how our learning objects are made conformant to a learning object metadata standard.

Chapter 5 begins by introducing the purpose of our evaluation and then provides a detailed account of our evaluation methodology. A number of students volunteered to evaluate our software, and we subsequently recorded their progress and their comments regarding the software. These were analyzed to provide answers to our research questions amongst the four modules and within our web-based learning approach.

Chapter 6 summarizes the thesis work, and discusses research contributions and future work related to the web-based learning system.

Chapter 2

Background Literature

This chapter provides detailed information concerning learning objects, XML, standards initiatives for web-based learning and, finally, adaptive learning and testing techniques. These are integral to our research, as they provide numerous benefits and ensure that learning materials are interoperable and reusable between our web-based learning tool and other systems.

2.1 Learning Objects

Within the field of e-learning, a recent important development has been the concept of learning objects.

2.1.1 Origin of the term ‘Learning Objects’

According to Johnson (2003), the name learning object originates from two areas of professional practice. The first fundamental idea behind the construction of learning objects derives from the object-oriented paradigm of computer science. In object-oriented programming, components called objects are constructed, centred on the grounds that they could be reused in multiple contexts and by many different users (Wiley, 2002): “bits of code are bundled into reusable bundles that have a discrete functionality and simple properties” (Johnson, 2003). New programs can be generated simply by combining and reusing these software objects, so there is no need to write each program completely from scratch (Ibid). The second idea derives from learning objectives, which “offer simple statements of desired learning and performance outcomes that consider behaviours to be demonstrated as a result of a learning intervention, the conditions under which the learning is to be demonstrated, and the degree of mastery that will be expected from that performance” (Ibid). If a learning object is created around a ‘single lesson’, then automatic computer agents can be constructed to combine adapted lessons into a personalized course for each individual learner (Ibid).

2.1.2 Background Literature on Learning Objects

There is not yet an agreed definition of the phrase “learning object”, and many developers have various opinions on what the definition should be. The ambiguity and misinterpretation of the term can be reflected by the following statement

from Bennett et al. (2002), who stated that “a learning object can be as small as a grain of sand or as large as an ocean!” A more concrete, though still relatively broad, definition has been given by the IEEE Learning Technology Standards Committee (LTSC), which defines a learning object as “Any entity, digital or non-digital, which can be used, re-used or referenced during technology supported learning” (IEEE LTSC, 2004). Examples of learning objects include multimedia content, instructional content, learning objectives, instructional software and software tools, and persons, organizations, or events referenced during technology supported learning (Ibid). Learning objects can be found in a variety of environments such as “computer-based training systems, interactive learning environments, intelligent computer-aided instruction systems, distance learning systems and collaborative learning environments” (Ibid). Mohan & Greer (2003) define a learning object as “an item of content, or an entity of learning capable of being reused from one course to another”; their working definition is “a digital learning resource that facilitates a single learning objective and which may be reused in a different context”. They argue that by using learning objects to deliver either partially or completely the

course content, developers can produce courses more efficiently; this removes the need to prepare all course materials from nothing. Such course material is also made more readily available and more easily updated. Wu (2002) maintained that learning objects from a wide range of resources can be used to construct courses, and that these resources can be reused not only within one organization but, more importantly, between organizations. The digital aspect of learning objects implies that they can be used and reused simultaneously, regardless of which learning contexts they are being used in; this is contrary to conventional learning materials, which can only be in one place at a time (Wiley et al., 2000). A learning object is defined by Wei & Kin (2003) as “instructional materials designed to be self-sufficient, independent chunks of information achieving a particular instructional goal, and hence can be reused in different learning situations”, and by Johnson (2003) as “any grouping of materials that is structured in a meaningful way and is tied to an educational objective”. A much narrower definition, by Wiley (2002), is “any digital resource that can be reused to support learning”. In order for learning objects to be used effectively, the presentation of learning objects must be separated from the content, so that different users who import the same learning materials can adapt them to their own web-based learning systems (Ibid). By selecting different learning objects to form individual courses, the needs of different learners, according to their requirements and interests, can be further accommodated. For example, a student may find an interactive learning object with a voiceover instruction to be effective if they learn particularly well by auditory means. Similarly, Smith (2004) suggests that students with different learning styles can be supported using this customizable learning objects approach, and Semmens (2004) argues that learning paths can be formed by combining independent learning objects to give students a more personalized and rewarding online learning experience.

Despite these various definitions, the same advantages of constructing learning materials as learning objects are common between them. Longmire (2000) points out six of these advantages, as follows:

• Flexibility – when materials have been designed to be used in multiple contexts, they can be reused much more easily, and the learning impact is much greater.

• Interoperability – learning objects can be used within different learning systems.

• Customization – as learning objects are designed to be modular, it is much easier to customize them and to combine them into sequences to construct a more personalized learning experience for each individual learner.

• Ease of updates, searches and content management – these are facilitated by metadata tags.

• Facilitation of competency-based learning – the metadata tags allow learning objects to be described, searched for, identified and retrieved when required. Using these tags, if learners have gaps in some particular knowledge, they are able to search for the learning objects which fill that gap.

• Increased value of content – every time a learning object is reused, the value of its content is increased.

A summary of the functionality of learning objects is given by Semmens (2004), who explains that the concept of learning objects amounts to “firstly breaking educational material down into modular ‘chunks’ (objects), where each object can then have its defining properties described (or tagged) using metadata constructs”. Using such tags, learning objects can be successfully stored, identified and retrieved when required. A common format for describing the content must exist to achieve interoperability between different web-based learning environments in order for learning objects to be reused (Mohan & Greer, 2003) and to allow the same objects to be engaged across a range of hardware and software platforms (Wu, 2002). Currently, there are many existing specifications for learning objects and it is very important to standardize them for learning objects to be interoperable both locally and worldwide (Mohan & Greer,

2003).

For the purpose of this thesis, we define learning objects as “pedagogic software components which are interoperable, exchangeable and reusable between web-based learning environments.”

2.1.3 Learning Object Repositories

Learning object repositories “provide mechanisms to encourage the discovery, exchange and re-use of learning objects” (Richards et al., 2002), and also allow users to obtain rights to learning objects and to use them over the web (Mohan & Greer, 2003). The number of global repositories has increased as a result of the large growth in the number of learning objects made available. The sizes of these repositories vary from a few courses to thousands, and the number of learning objects available to the public in them increases with time (Ibid). Different learning object specifications may also be used within different repositories. These global repositories are usually built on a client/server approach which employs brokerage services and provides peer-to-peer access to the learning objects’ local repositories. The learning objects in the global repositories are usually stored elsewhere on the web, but links are provided to allow access.
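As an illustration of how a repository catalogue might describe its holdings, each object could carry a small metadata record such as the following. This is a hypothetical fragment: the element names and the URL are our own invention for illustration and do not come from any particular repository or specification.

```xml
<!-- Hypothetical catalogue entry; a learner with a gap in  -->
<!-- "loops" could locate this object by querying <topic>.  -->
<repositoryEntry id="lo-042">
  <topic>loops</topic>
  <discipline>Computer Science</discipline>
  <difficulty>beginner</difficulty>
  <location>http://example.org/objects/lo-042</location>
</repositoryEntry>
```

A repository then needs only to index these records; the object itself, as noted above, may be stored elsewhere on the web.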

Codewitz (www.codewitz.org) is an international project, originated in Finland, for better programming skills. Its goal is to provide students and teachers of computer programming with various illustration, animation and visualization aids. They refer to these aids as “learning objects”, and their primary objective is to “help the learners to better understand and master, and the teachers to better explain and illustrate, the problems connected to the use of basic and advanced structures in computer programming” (Ibid). Each of the learning objects in this project must satisfy one specific learning objective, must be reusable, must not be associated with any other learning object or resource, and must also be standalone and browser capable. Their learning objects are stored in a repository which they refer to as the Material Bank. At present, only partners of the project have access to these learning objects.

MERLOT, the Multimedia Educational Resource for Learning and Online Teaching (www.merlot.org), originated in Canada and is a global repository containing about 7500 learning objects (Mohan & Greer, 2003) at the time of writing. A variety of disciplines are covered in MERLOT, including Biology, Business, Engineering, History, Mathematics, Psychology and World Languages. The learning objects in this repository are free and accessible to all. They are primarily designed for students in higher education and are made available via URLs which point to their location. Each learning object is assigned to one of the following categories:

• Simulation
• Animation
• Tutorial
• Drill and Practice
• Quiz/Test
• Lecture/Presentation
• Case Study
• Collection
• Reference Material

CAREO, the Campus Alberta Repository for Educational Objects (www.careo.org), also originated in Canada and contains about 3000 learning objects (Mohan & Greer, 2003) at the time of writing. Its learning objects include learning materials from many different areas, for example Learning English as a Second Language, Ball and Socket Joint, Photosynthesis Process and Earth at Night. The learning resources in this repository are catalogued using the IMS Metadata Specification (IMS Global Learning Consortium, 2004) (Ibid), and membership is again offered on a free basis. In contrast with the above repositories, the TeleCampus (telecampus.edu) global repository has over 66,000 courses and programs available on a commercial basis (Mohan & Greer, 2003).

2.1.4 Problems with Learning Objects

The previous sections have described advantages in the use of learning objects; however, learning objects are not without their associated problems.

• The lack of technical experience may deter some technologically inexperienced staff from creating learning objects, as they would be more comfortable using software with which they are familiar. It can also be time-consuming to create a high-quality learning object, and subsequently the workload of the author will be increased (Smith, 2004). Also, since the definitions and the size of learning objects are fairly unclear, it is not certain how much content a single learning object should contain, and it is difficult to construct learning objects which are independent of each other and designed with no context in mind (Ibid).

• Effective pedagogy is lost if the author does not have a clear educational goal when designing a learning object.

• Access to digital materials is now very easy; however, permission must be obtained and correct attribution must be provided if learning materials created by someone else are used in your learning objects (Ibid).

• The requirement for learning objects to be reusable has meant that many authors have had to reformat all their existing learning content before it can be reused. For example, a PDF user manual for a piece of software may need to be deconstructed into several smaller components and converted into XML before being stored in a database, so that it can be reused in a learning objects system. A high percentage of current digital educational materials cannot be reused until they have been decomposed, and the workload for authors is therefore increased (Wiley, 2003).

• Generally, learning objects are designed in a highly decontextualised manner to promote reusability in diverse learning contexts, and educators are required to deconstruct learning objects into components in order to rebuild the materials according to their individual learning needs. Difficulties, and the associated financial expenses, arise when decomposing learning objects because it is difficult to index extremely decontextualised materials for human discovery and use, and computers are unable to make meaning of these materials (Ibid).

2.1.5 Related Literature on Learning Objects

Online tutorial courses were developed by Wu (2002) to demonstrate how learning objects can be reused in different learning contexts and how learning content can be customized for individual learners. He defined a Learning Object Mark-up Language which incorporated metadata not only for instructional learning content but also for exercises and tests. Reusable tests were also encapsulated in the learning object framework of Cesarini et al. (2004), and Griffiths et al. (2004) have examined ways in which existing course materials can be converted into learning objects.

2.2 XML

XML, the eXtensible Markup Language, was developed by the World Wide Web Consortium (W3.Org, 2004) and is a system for exchanging and storing information on the web. It is a subset of SGML, the Standard Generalized Markup Language, with SGML’s complex options removed whilst retaining the most important functions, allowing web documents to be implemented easily; “SGML is more customizable (thus flexible and more "powerful") at the expense of being (much) more expensive to implement” (Cover Pages, 2002). XML’s features are similar to those of HTML, also a subset of SGML, but without the restriction of having to use pre-defined tags. The main difference between the two is that HTML focuses on displaying information whereas XML is typically used for describing and storing information (W3 Schools, 2004). XML is a meta-markup language, meaning that it has its own system of text markup which can give meaningful names to the data it describes; the advantage is that the data can be both human- and machine-readable. Tags are placed around areas of content that we wish to mark up with a particular property or attribute; tags are generally implemented as an open and closed pair, with the tag name indicative of the property. For example, the HTML tag pair <title> and </title> is placed around an area of text which we wish the browser to display

These tags also act as metadata to facilitate data retrieval. Figure 2. Typically. age)> (#PCDATA)> (#PCDATA)> (#PCDATA)> Figure 2. An alternative to DTD is to use an XML Schema which can be used for applications which require a more powerful and meaningful . course. third and fourth lines declare that the name.1 Student. a course and an age. but not tags.dtd The first line declares the element student as having a name. 2003). A validating parser is used to verify any document to its DTD and will alert if there are any discrepancies. parsed character data. The second. course and age must contain PCDATA – that is.1 shows an example of a DTD called student.in the title bar. which can be a mixture of text and symbols. a Document Type Definition (DTD) is used to define the structure of the data and elements and describes the types of elements within XML documents.dtd defines a student and his/her course: <! ELEMENT student <! ELEMENT name <! ELEMENT course <! ELEMENT age (name. XML is a platform-independent language and therefore data can be transmitted between many incompatible formats (Evanoff.

0” encoding = “ISO-8859-1”?> <!DOCTYPE student SYSTEM “student.validation method (Harold et al. The following list gives some of the most popular and useful applications of XML in society today: • Integrating Business Applications – “…XML has emerged as the major Web and Business transformation technology today…” (Kneiling. 2002) XML can interoperate between the several architectural layers – application.2. information and data which are the underpinning elements in any modern business enterprise. Its adaptability and portability have led many different enterprises including e-Business and e-Commerce organisations to adopt .2 shows an example XML document which conforms to the dtd above: <?xml version = “1.1 Applications of XML XML technology today is used by many different companies and industries ranging from business corporations to computer firms and can be used for fields varying from telecommunications to multimedia.dtd”> <student> <name>Jane Yau</name> <course>Computer Science</course> <age>23</age> </student> Figure 2. Figure 2. 2002).2 A Student XML file 2. integration.

manage and publish any content onto the web.XML (Ibid). which is then transcoded so that it is compatible with the device’s data format and then displayed (Ibid). A Web Portal such as Yahoo can contain many XML features for example. Wireless Markup Language. • Mobile Technology Applications – a subset of XML. the user authentication application which communicates with the login form. this is known as Web Services. • Unicode Applications – an increased use of websites from people of all over the world means that many sites are nowadays represented in many different languages. Businesses also find XML easier to debug because the exchange of data is easier to read (Schloss. 2000). this is known as a Web Portal. • Web Services and Portal Applications – XML is used to develop many computer applications as it can create. The data that is received by these devices are written in XML. . is used to create the pages which can be displayed on a mobile device such as a PDA handheld computer or a mobile phone. 2001). weather information provided by another weather forecast website or stock market information collected from markets from all over the world (Germann. A database can also be created to store and manage these applications thus allowing them to communicate with one another. XML allows different languages or character sets to be represented by using Unicode text within its documents (Ibid).

2003). HTML or printable format. It is used to convert data in XML documents into a more readable format required by the end user. is a language used to express stylesheets and is an extension of HTML’s concept of stylesheet standard formatting rules but is specific to XML. and mathematical content. 2. languages. 2003) and its three components contained within it have all become W3C . XSLT. a GUI interface. Convera RetrievalWare search engine uses an ‘intelligent’ parser to search the XML metadata in a high accurate and efficient way. XPath and XSL-FO XSL. different types of media – video or image (Jarrard. 2001). for applications where semantics plays more of a key role such as scientific software or voice synthesis” (W3. such as a web browser. • MathML Application – an application of XML.2 XSL. It is used generally to “encode the presentation of mathematical notation for high quality visual display. It also uses a semantic (thesaurus-based) network which translates searched items into different similar meanings and which will be performed the search. XML facilitates other application formats to be searched such as different file formats.• Search Engine Application – for example. the eXtensible Stylesheet Language. is a system for representing mathematical and scientific content on the web and the reuse of such content is encouraged.Org. an application program or engine. XSL (Dingli.2.

if it matches.0” xmlns:xsl = “http://www. When all the elements have been checked by the template rules. so that XML documents can be represented on the web. These components are XSL Transformations (XSLT). and output XML data to different media. XML documents are fed through an XSLT processor which compares the elements of this input document to the template-rule patterns in a stylesheet.org/1999/XSL/Transform”> Figure 2. define parts of an XML document. Xpath and XSL Formatting Objects (XSL-FO). <xsl:stylesheet version = “1. for example XML requires closing tags and HTML does not. Rules are defined in XSLT to transform one XML document into an HTML document.3 shows an XSL Stylesheet which is declared in accordance to the W3C XSL recommendation. like displaying negative numbers in red. and Figure 2. A pattern and a template are defined for each template rule. like screens. format XML data based on the data value. The template from that rule is written to the output file. filter and sort XML data.3 XSL Stylesheet Declaration .w3. the output file is formed into an HTML document. Template rules are contained in an XSLT document or stylesheet.recommendations (W3 Schools. Figure 2. Note that XHTML is the XML version of HTML: it is XML-compliant.4 allows an XSL Style Sheet with a transformation template to be created. Altogether they form “a set of languages that can transform XML into XHTML. 2004b). paper or voice ” (Ibid).

<?xml version="1.0" xmlns:xsl = "http://www.org/1999/XSL/Transform"> <xsl:template match="/"> <html> <body> <h2>Student Database</h2> <table border="1"> <tr bgcolor="#9acd32"> <th align="left">name</th> <th align="left">course</th> <th align="left">age</th> </tr> <xsl:for-each select="student"> <tr> <td><xsl:value-of select="name"/></td> <td><xsl:value-of select="course"/></td> <td><xsl:value-of select="age"/></td> </tr> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet> Figure 2.0" encoding = "ISO-8859-1"?> <xsl:stylesheet version = "1.4 XSL Transformation Template .w3.
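A stylesheet such as Figure 2.4 would normally be applied by an XSLT processor. Purely as an illustration (this sketch is not part of any tool described in this thesis, and hand-codes the template's effect rather than running real XSLT, which Python's standard library does not provide), the same table rows can be produced with the standard xml.etree module:

```python
import xml.etree.ElementTree as ET

# The student document of Figure 2.2 (DOCTYPE omitted for brevity).
XML_DOC = """<?xml version="1.0"?>
<student>
    <name>Jane Yau</name>
    <course>Computer Science</course>
    <age>23</age>
</student>"""

def transform(xml_text: str) -> str:
    """Mimic the template rule of Figure 2.4: emit one table row of
    <td> cells (name, course, age) per student element."""
    root = ET.fromstring(xml_text)
    # xsl:for-each select="student" visits each student element; in the
    # example document the root element itself is the single student.
    students = [root] if root.tag == "student" else root.findall("student")
    rows = []
    for s in students:
        cells = "".join(f"<td>{s.findtext(tag)}</td>"
                        for tag in ("name", "course", "age"))
        rows.append("<tr>" + cells + "</tr>")
    return "<table>" + "".join(rows) + "</table>"

print(transform(XML_DOC))
```

Running the sketch prints a single-row table containing Jane Yau's record, mirroring what the stylesheet's xsl:value-of instructions select.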

To link the stylesheet into the XML document, we only need to add one extra line of code into the document after the XML heading. Figure 2.5 illustrates this:

<?xml-stylesheet type = "text/xsl" href = "student.xsl"?>

Figure 2.5 Link between Stylesheet and XML document

The purpose of XSL-FO is to convert an XML document into a document which humans find comfortable to read, such as a readable printed version. XSL-FO defines the font, size, colour and other formatting options for XML documents. XSL-FO documents are simply XML files which contain additional output information, such as the output layout and output contents. The transformation is done in three stages. First, a stylesheet which conforms to the DTD of the XML document is developed to create the expected output document. Second, the input XML document is transformed into the XSL-FO vocabulary using the XSLT stylesheet. Finally, this document is passed through a formatter, which turns it into a printable form such as PDF (Dingli, 2003).

XPath is a "set of syntax rules for defining parts of an XML document" (Ibid). Paths are used to define XML elements, and XPath contains a library of standard functions, primarily for "working with strings, numbers and Boolean expressions". XSLT documents cannot be created without XPath knowledge, and XPath is therefore a major element in the W3C XSLT standard. Figure 2.6 shows an XPath expression which selects all the students in our example above who are over 21 years old:

./student[age>21]

Figure 2.6 XPath Expression

2.2.3 Issues Regarding the Usage of XML

Many standards and specifications nowadays are written in XML to promote the widest possible adoption. Searching and retrieving information stored in XML files is easily facilitated by the structure and constraints that XML files follow (Wu, 2002). At this time, there appear not to be any significant issues regarding compatibility, despite the different platforms on which the learning materials may have been constructed.
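An expression like that of Figure 2.6 can also be tried outside an XPath engine. Note that Python's standard xml.etree module implements only a small XPath subset, and numeric predicates such as [age>21] are not part of it; so in the sketch below (illustrative only – the <class> wrapper element holding two students is invented for the example) the comparison is applied in plain Python:

```python
import xml.etree.ElementTree as ET

# Invented wrapper document with two students, for demonstration only.
DOC = """<class>
    <student><name>Jane Yau</name><age>23</age></student>
    <student><name>Another Student</name><age>19</age></student>
</class>"""

root = ET.fromstring(DOC)
# Equivalent of the XPath ./student[age>21]: ElementTree's limited XPath
# dialect has no numeric comparison, so the predicate is applied manually.
over_21 = [s.findtext("name")
           for s in root.findall("student")
           if int(s.findtext("age")) > 21]
print(over_21)
```

Only the first student satisfies the age predicate, so the printed list contains a single name.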

2.3 Standards Initiatives for Web-based Learning

Several standards initiatives for Web-based learning exist (Training Place, 2004), such as the Dublin Core Metadata Initiative (DCMI), the Learning Technology Standards Committee (LTSC) of the Institute of Electrical and Electronic Engineers (IEEE), the Instructional Management System (IMS) Global Learning Consortium, Advanced Distributed Learning (ADL), the Canadian Core Initiative (CanCore) and the Singapore E-Learning Competency Centre (ECC). Some of these initiatives have slightly different interpretations of the definition of learning objects, and have subsequently proposed different metadata standards and specifications for learning objects. However, similarities between some of their specifications can be observed, and these are discussed in this section. Note that all these specifications and standards are written in XML to promote the widest adoption.

2.3.1 DCMI

The Dublin Core Metadata Initiative developed the Dublin Core Metadata (DCM) (DCMI, 2003) in Dublin, Ohio in 1995, from where the name originated. It is a straightforward yet effective standard for cross-domain information resource description. It adopts a minimalist approach, not specific to education or linked to e-learning aspects (Lim, 2003), and uses an element set to describe 'anything that has identity'; it has been used for a variety of web resources, including learning objects. The standard has two levels: Simple DCM, which contains the fifteen elements shown in Table 2.1; and Qualified DCM, which consists of Simple DCM plus an additional element, Audience, along with a group of modifications which make it more helpful for resource discovery.

1.1   <dc:title>        Name of the resource.
1.2   <dc:creator>      Person/s responsible for developing the content.
1.3   <dc:subject>      Key words or key phrases describing the content's topic.
1.4   <dc:description>  A paragraph, abstract or table of contents which describes the content of the resource.
1.5   <dc:publisher>    Person/s responsible for distributing the resource.
1.6   <dc:contributor>  Person/s who have contributed to the resource.
1.7   <dc:date>         Creation date or the available date of the resource.
1.8   <dc:type>         Resource type, such as collection, dataset, event, image, interactive resource, service, software, sound, text or physical object.
1.9   <dc:format>       Digital or physical appearance of the resource, for example "media-type", "duration" or "size".
1.10  <dc:identifier>   A unique resource identifier, such as a Uniform Resource Identifier (URI).
1.11  <dc:source>       The references used to develop the resource.
1.12  <dc:language>     The language of the content of the resource, for example en for English.
1.13  <dc:relation>     A reference to a connected resource.
1.14  <dc:coverage>     The extent of the resource's content; for example, dates or place names can be specified.
1.15  <dc:rights>       Rights information, such as Copyright and Intellectual Property Rights.

Table 2.1 Simple DCM

2.3.2 IEEE LTSC

The IEEE, the Institute of Electrical and Electronic Engineers, is a non-profit technical professional association and a leading authority in technical areas ranging from computer engineering and aerospace to consumer electronics; it produces 30% of the world's published literature in computers and electrical engineering. The Learning Technology Standards Committee (LTSC) is a special committee chartered by the IEEE Computer Society Standards Activity Board to "develop accredited technical standards, recommended practices and guides for learning technology" (IEEE LTSC, 2004). A role of the LTSC is to review specifications created by initiatives such as IMS, ADL and AICC, to make contrasts between these specifications, and sometimes to combine them to form standards which are more beneficial and general enough to fit the requirements of any learning organization (Wu, 2002).

The Learning Object Metadata (LOM) working group works very closely with the DCMI to develop interoperable metadata for learning objects. They have collaboratively developed the LOM standard, which inherited all 15 elements of Simple DCM and added an extra 61 metadata tags to cover "a wide variety of characteristics attributable to learning objects, and places these elements in interrelationships that are both hierarchical and iterative" (Friesen, 2003). This approach is known as "structuralist" rather than "minimalist" (Ibid). Version 1484.12.1 of this standard is a full IEEE accredited standard, approved in June 2002, and it has been the most commonly used standard for web-based learning materials (Makela, 2004). The key purposes of this standard (IEEE LTSC, 2004) are as follows:

• To allow learning objects to be searched, evaluated, obtained and used by learners and instructors.
• To allow learning objects to be exchanged across any web-based learning systems.
• To allow learning objects to be distributed.
• To allow learning objects to be developed in units that can be joined and decomposed in meaningful ways.
• To allow personalized lessons to be automatically and dynamically constructed for an individual learner.
• To allow academic institutions such as education, learning and training organizations – whether public, private, non-profit, not-for-profit and for profit, or government-owned – to express learning materials in a standardized format.
• To allow a common standard for data to be collected and shared by researchers.
• "To define a standard that is simple yet extensible to multiple domains and jurisdictions so as to be most easily and broadly adopted and applied" (Ibid).

Nine categories are contained in LOM: General, Lifecycle, Meta-Metadata, Technical, Educational, Rights, Relation, Annotation and Classification. The following provides an overview of these categories (IEEE LTSC, 2002); see Appendix C for the full LOM Specification:

1. The General category describes the general information of the learning object as a whole.
2. The Lifecycle category describes the features related to the learning object's history and current state, and the changes made during its evolution.
3. The Meta-Metadata category describes information about the metadata record itself.
4. The Technical category describes the learning object's technical requirements and characteristics.
5. The Educational category describes the learning object's educational and pedagogic characteristics.
6. The Rights category describes the learning object's intellectual property rights and conditions of use.
7. The Relation category describes the relationship between the learning object and other related learning objects.
8. The Annotation category provides comments on the learning object's educational use, and records when and by whom these comments were created.
9. The Classification category describes where the learning object falls within a particular classification system.
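To make the categories concrete, the following sketch builds a heavily abbreviated, hypothetical LOM-style record in Python. The element names and the sample lesson title are illustrative only; the real LOM binding in Appendix C defines the authoritative structure:

```python
import xml.etree.ElementTree as ET

# Hypothetical, abbreviated record touching three of the nine categories.
lom = ET.Element("lom")
general = ET.SubElement(lom, "general")
ET.SubElement(general, "title").text = "Introductory Java Lesson"
ET.SubElement(general, "language").text = "en"
technical = ET.SubElement(lom, "technical")
ET.SubElement(technical, "format").text = "text/html"
educational = ET.SubElement(lom, "educational")
ET.SubElement(educational, "learningResourceType").text = "lecture"

record = ET.tostring(lom, encoding="unicode")
print(record)
```

Serializing the element tree yields an XML record whose nesting mirrors the category/element hierarchy described above.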

Table 2.2 is taken from IEEE LTSC (2002) and shows how the elements of Simple DCM are mapped to LOM:

DC.Identifier        1.1: General.Identifier
DC.Title             1.2: General.Title
DC.Language          1.3: General.Language
DC.Description       1.4: General.Description
DC.Subject           1.5: General.Keyword, or 9: Classification with 9.1: Classification.Purpose equal to "Discipline" or "Idea"
DC.Coverage          1.6: General.Coverage
DC.Type              5.2: Educational.LearningResourceType
DC.Date              2.3.3: LifeCycle.Contribute.Date when 2.3.1: LifeCycle.Contribute.Role has a value of "Publisher"
DC.Creator           2.3.2: LifeCycle.Contribute.Entity when 2.3.1: LifeCycle.Contribute.Role has a value of "Author"
DC.OtherContributor  2.3.2: LifeCycle.Contribute.Entity, with the type of contribution specified in 2.3.1: LifeCycle.Contribute.Role
DC.Publisher         2.3.2: LifeCycle.Contribute.Entity when 2.3.1: LifeCycle.Contribute.Role has a value of "Publisher"
DC.Format            4.1: Technical.Format
DC.Rights            6.3: Rights.Description
DC.Relation          7.2.2: Relation.Resource.Description
DC.Source            7.2: Relation.Resource when the value of 7.1: Relation.Kind is "IsBasedOn"

Table 2.2 Mapping from Simple DCM to LOM

LOM Application Profiles

An Application Profile is "an assemblage of metadata elements selected from one or more metadata schemas and combined in a compound schema" (Duval et al., 2002). Many organizations have developed their own Application Profiles, adapted from the IEEE LOM (Friesen, 2003). Four categories of this adaptation can be defined, as follows:

• "Those that combine elements from LOM with elements from other metadata specifications and standards" (Ibid).
• "Those that emphasize the reduction of the number of LOM elements and the choices they present" (Ibid).
• "Those that focus on the definition of element extensions and other customizations specifically for the LOM" (Ibid).
• "Those that both simplify and undertake customized extensions of the LOM" (Ibid).

2.3.3 IMS

IMS, the Instructional Management System Global Learning Consortium, is another worldwide, non-profit organization; it has more than 50 contributing members and affiliates, who come from a variety of backgrounds in the global e-learning community, such as "hardware and software vendors, educational institutions, publishers, government agencies, systems integrators, multimedia content providers, and other consortia" (IMS Global Learning Consortium, 2004b). IMS has created many specifications which have been used as de facto standards worldwide, and these can be implemented by companies free of charge. They are as follows:

• The IMS Learning Resource Metadata (LRM) Specification (IMS Global Learning Consortium, 2004a) provides a structure of defined elements to describe and catalogue learning resources, to increase the efficiency of finding and using a resource. The specification contains 86 elements and has been intended to realign with the IEEE LOM standard since the latter's approval in 2002; the two are therefore very similar.

• The IMS Enterprise Specification (IMS Global Learning Consortium, 1999) facilitates an interoperable way for schools, government agencies and software vendors to manage access to electronic resources across their existing infrastructure. The specification defines a standardized set of structures so that data can be exchanged between different systems.

• The IMS Content Packaging Specification (IMS Global Learning Consortium, 2003a) facilitates the collection and packaging of instructional content in an electronic form. Its objectives are to allow authors to construct learning content, administrators to manage and distribute content, and learners to interact with and learn from the content.

• The IMS Question and Test Interoperability Specification (IMS Global Learning Consortium, 2002) facilitates the interoperability and exchange of question and test data between different learning environments. It describes a structure which represents questions as Items and test data as Assessments, together with their corresponding Results Reports; Items, Assessments and Results Reports are freely exchangeable between content authors, content libraries and collections. For example, Lee et al. (2004) have introduced a system for setting and sharing web-based assessments using this specification, which allows assessments to be exchanged between different Learning Management Systems.

• The IMS Learner Information Package (IMS Global Learning Consortium, 2001) is a specification which describes learners' characteristics. Its goals are to record and manage learning-related history, goals and accomplishments, to discover learning opportunities for learners, and to engage a learner in a learning experience.

• The IMS Simple Sequencing Specification (IMS Global Learning Consortium, 2003b) represents the information required to sequence learning activities in various ways. A learning activity is an "instructional event(s) embedded in a content resource" (Ibid), and can be, for example, an instruction, knowledge or assessment item. Learning objects are used to contain the learning materials, therefore there are no issues regarding interoperability and reusability (Wang & Li, 2004). Three data models are used in this specification, as follows:

1. Activity Tree – used to sequence the learning activities.
2. Sequencing Definition Model – defines the "rules for progressing through the learning objects" (Abdullah & Davis, 2003).
3. Tracking Model – contains the "results of the learners' interactions with the content" (Ibid).

2.3.4 ADL

ADL, Advanced Distributed Learning, is an initiative sponsored by the US Office of the Secretary of Defence. Its objective is to "provide access to high quality education and training, tailored to individual needs, delivered cost-effectively anywhere and anytime" (ADL, 2004), and its initiative is to establish and circulate a global online learning environment which allows the interoperability and reusability of learning tools and course content between government, industry and academia. ADL interprets learning objects as a collection of Sharable Content Objects (SCOs) and has developed its own learning object model, called the Sharable Content Object Reference Model (SCORM) (ADL, 2003a). SCORM was developed collaboratively by several e-learning partners, such as the IEEE LTSC and the IMS Global Learning Consortium (Cover Pages, 2003), and it focuses exclusively on the simplification of the LOM standard (Friesen, 2004). The model was established to overcome previous problems whereby e-learning systems had been developed with different architectures and e-learning content had been created in different formats by different developers, which meant that interoperability and content reuse could not be achieved. SCORM can also be used for dynamic content within a web-based intelligent learning environment (Kazi, 2004). Each SCO has three distinct features, as follows:

• It should provide useful standalone learning material.
• It should be the smallest component of possible use for a different course.
• It must be created so that it can be stored and catalogued by a SCORM-compliant Learning Management System (LMS) – software used to design, deliver and manage an online course.

Course Structure Formats (CSF) are used to specify learning materials in SCORM. A CSF describes course elements, the course structure and external references, which together represent a course and its intended behaviour (Ibid). It also enables the transition of course material from one LMS to another. The SCORM Run-Time Environment has three components: Launch, the Application Program Interface (API) and the Data Model. "The Launch mechanism defines a common way for LMSs to start Web-based learning resources" (ADL, 2003b). The API is the communication mechanism for informing the LMS of the state of the learning resource (e.g. initialized, finished or in an error condition), and is used for getting and setting data (e.g. the score, time limits, etc.) between the LMS and the Sharable Content Object (SCO). The Data Model is a standard set of data elements used to define the information being communicated, such as the status of the learning resource.

2.3.5 CanCore

CanCore, the Canadian Core Initiative, developed the CanCore Learning Object Metadata (LOM) Application Profile (CanCore, 2003), which contains 39 elements drawn from LOM and the IMS LRM and is fully compatible with both. It is a simplification of the IEEE LOM standard (Friesen, 2004), and it aims to provide an interoperable way of exchanging educational resources in Canada and elsewhere. Its main objective is to maximise the potential for Semantic Interoperability, which relates to "the meanings that are embedded in this exchanged information and to the effective and consistent interpretation of these meanings" (Ibid). (See Appendix D for the full CanCore LOM Application Profile.)

2.3.6 Singapore ECC

The Singapore E-learning Competency Centre (ECC) developed SingCore, the Singapore Learning Resource Identification Specification (E-learning Competency Centre, 2004), which is also an Application Profile adapted from the IEEE LOM and the IMS LRM. Its purpose was to develop an Asian-based metadata standard to be used in Asia and to form a framework for the Asian E-learning Network. There are 8 categories and 42 elements in this specification; it contains a streamlined subset of elements, with its main vocabularies defined in the context of Singapore's educational environment, such as language requirements (Asian languages/Chinese) (Lim, 2003). (See Appendix E for the full SingCore Specification.)

2.4 Web-based Adaptive Learning and Testing

The two key issues regarding web-based education which must be addressed are adaptive learning and adaptive testing.

2.4.1 Adaptive Learning

"The next generation of e-learning is Adaptive learning" (Training Place, 2004), and Brusilovsky (1999) points out that the development of advanced web-based pedagogical applications which can provide adaptivity and intelligence is a challenging research goal. Adaptive learning principles essentially customize the presented material according to the changing skill levels of the student. Brusilovsky (1999) argues that it is very important to use adaptive learning within web-based education; it is particularly important for distance learning students for the following reasons:

• These students usually work at home, where customised and intelligent assistance for each student is usually not available.
• The diversity of these students is much greater (Meisalo et al., 2002), which means that there may be a large range of proficiency levels within them.

One challenging aspect of adaptive learning systems is whether the adaptive learning content can be standardized to allow reuse between different adaptive learning systems (Albert et al., 2001). This is a critical factor which will determine the widespread acceptance of adaptive learning systems (Santos et al., 2003).
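As a minimal sketch of the customization idea (entirely illustrative: the lesson pool, thresholds and skill scale below are invented, not taken from any system described in this thesis), an adaptive system might map a student's estimated skill level to material of matching difficulty:

```python
# Hypothetical pool of Java lessons, tagged by difficulty band.
LESSONS = {
    "basic":        ["variables", "loops"],
    "intermediate": ["methods", "arrays"],
    "advanced":     ["inheritance", "polymorphism"],
}

def next_lessons(estimated_skill):
    """Pick the difficulty band matching an estimated skill in [0, 1]."""
    if estimated_skill < 0.4:
        return LESSONS["basic"]
    if estimated_skill < 0.8:
        return LESSONS["intermediate"]
    return LESSONS["advanced"]

# As the skill estimate changes, the presented material changes with it.
print(next_lessons(0.2), next_lessons(0.9))
```

The point of the sketch is only that the selection is driven by a continuously re-estimated model of the student, rather than by a fixed syllabus.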

2.4.2 Adaptive Testing

Computer Adaptive Testing (CAT) is an assessment method, commonly shorter than a traditional paper-and-pencil test, which dynamically adjusts the difficulty of the questions posed to suit each student's proficiency, and which aims to "assess a learner's competency by posing a minimum number of questions" (Gouli et al., 2002). It provides a more efficient way of measuring the examinee's competence, as it administers the questions which ascertain the most important information about the examinee (Wainer et al., 2001). As a result of this, a student who is performing well is asked progressively harder questions, whereas questions of the same proficiency level may be repeated for weaker students (Joy et al., 2000); careless slips and guesswork may occur due to inaccurate repetitive testing (Abdullah, 2003). The decision to stop the test is also adapted dynamically to the learner's performance in the assessment (Gouli et al., 2002).

Advantages of using Adaptive Testing as opposed to non-adaptive testing include the following:

• Fewer questions are needed to achieve satisfactory accuracy in the evaluation of the learner, therefore less time is required to administer adaptive tests. From the student's point of view, such tests are also less tedious, as students do not need to answer numerous questions that may be too easy or too difficult (Gouli et al., 2002).
• It can be used for helping students with examination revision by identifying strong and weak areas of knowledge (this process can be repeated), so that work can be carried out on the problem areas and continued until the students have mastered the subject to a satisfactory level. A mock examination can also be performed under timed conditions (Ibid).

Algorithms of Adaptive Testing

Typically, an Adaptive Testing algorithm (Gouli et al., 2001) works as follows:

1. An estimation of the learner's initial proficiency is not available, therefore a moderate level of knowledge is assumed and a question of moderate difficulty is presented.
2. If a correct answer to the question is given, the learner's estimated proficiency is updated and set to a higher level; otherwise it is set to a lower level.
3. The next question to be selected is based on the learner's estimated proficiency, so the difficulty level of the questions selected matches the learner's estimated proficiency. After each answer, the learner's proficiency is re-estimated again.
4. "As this process progresses, the distance between the estimated proficiency and the true proficiency of the learner is gradually becoming smaller" (Ibid). After a number of questions, the assessment will hopefully reach an accurate estimation of the learner's actual proficiency.
5. The assessment procedure is terminated when specified termination criteria, "related to the length of the test or the desired accuracy, are carried out" (Ibid).

Related Literature on Adaptive Testing

Lim et al. (2003) implemented a Mathematics Online Adaptive Assessment system which was able to generate different questions, and a different number of questions, to test each user's level of proficiency. 81% of their tested students felt that it had helped them to achieve better marks in their mathematics, and commented that they wished there were similar tutoring systems for the other subjects in their institution.

2.4.3 Data Models for Adaptive Learning and Testing

Data Models are typically used in adaptive learning and testing technologies to store both user information and learning materials, and to personalize learning content for the users. Again, these Data Models should be standardized to "guarantee contents and service interoperability" (Santos et al., 2003). The following describes each of the typical Data Models and the Adaptation Engine (Ibid; Abdullah & Davis, 2003):

• Student Model – stores students' information, for example personal data, preferences, current skills, learning style, knowledge and experience, and learning disabilities.
• Domain Model – stores the learning contents of the courses, describes the structure of the information, and uses concept relationships to link the contents together.
• Environmental Model – contains a "description of the capabilities of the hardware and software used by the student in a particular learning session" (Santos et al., 2003).
• Adaptation Model – includes pedagogical rules which define the relationship between the Domain Model and the Student Model.
• Adaptation Engine – an inference control mechanism which uses and updates these Data Models to provide an individual learning experience for each learner.

2.4.4 Web-based Education Predecessors

There are two primary predecessors to many web-based adaptive and intelligent educational systems (Brusilovsky, 1999), which are:

• Intelligent Tutoring Systems
• Adaptive Hypermedia Systems

Intelligent Tutoring Systems

Similar definitions exist for Intelligent Tutoring Systems (ITS), also known as Adaptive Tutoring Systems (ATS); a similar definition of an ATS has also been described by Albert & Hockemeyer (2002). The main objective of an ITS is to use "knowledge about the domain, the student, and about teaching strategies to support flexible individualised learning and tutoring" (Brusilovsky, 1999). For example, an ITS developed by Specht et al. (2002) uses learning objects which conform to standard specifications and is able to develop individualised courseware dependent on the students' current state of knowledge, their preferences and their learning styles. The user's knowledge and goals are inferred from the user's actions with learning materials, evaluations of completed assessments, the instructor's reviews of homework, and the learner's self-assessment and estimation. Using this information, the system is able to adapt the course presentation and navigation for the learner.

Three principal ITS technologies and their objectives (Ibid) are as follows:

• Curriculum Sequencing (or Instructional Planning Technology) aims to provide students with the “most suitable individually planned sequence of knowledge units to learn and a sequence of learning tasks (examples, questions, problems)” (Ibid). There are two types of sequencing: Active and Passive.
• Intelligent Analysis of Student’s Solutions aims to help students arrive at their final answers to educational problems. If the solution is incorrect, the Solution Analyser has to identify what missing or incorrect knowledge has caused the error. Extensive error feedback is also provided by the Analyser, and the Student Model is updated.
• Interactive Problem Solving Support (Interactive Tutors) aims to provide help to students on each step of problem solving. The ‘Tutor’ watches and understands the students and then updates the Student Model. Subsequently, using this information, students are given the appropriate level of help, ranging from offering them a hint to providing them with the full answer.

Adaptive Hypermedia Systems

Adaptive Hypermedia (AH) Systems “focus on constructing explicit models that represent various aspects of information related to decision making, such as user’s prior knowledge, preferences, pedagogical knowledge, learning domain, etc” (Wang & Li, 2004). Using this user information, the content and the links of hypermedia pages can be adapted to the users (Brusilovsky, 1999). Again, the interoperability and reusability of learning materials within AH systems remain a challenging issue for many authors (Wang & Li, 2004). An alternative technology to AH is IMS Simple Sequencing (SS); the differences between these two technologies were examined by Abdullah & Davis (2003), as follows:
• AH systems are user centred – they use intelligence to help each individual learner in navigating a complex information space so that they can achieve their chosen learning goal.
• IMS SS has no intelligence and cannot distinguish between users, “but simply applies a set of rules decided by an author to sequence a learner towards a pre-determined learning objective” (Ibid), i.e. it is instructor centred. It ensures that all activities that an instructor deems important are completed by the learner.

Two principal AH technologies and their objectives (Brusilovsky, 1999) are as follows:

• Adaptive Presentation extracts user information such as goals and knowledge and then adapts the content of a hypermedia page to the user. For example, novice users receive more detailed and additional information, whereas expert users will receive less. Using this technology, web pages are adaptively constructed from components for each user.
• Adaptive Navigational Support “supports the student in hyperspace orientation and navigation by changing the appearance of visible links” (Ibid); its main objective is to find the best learning path through the learning material for the student (the same objective as the ITS Curriculum Sequencing technology). This technology can be further broken down into Direct Guidance, Adaptive Link Annotation, Adaptive Link Hiding and Adaptive Link Sorting.

Related Literature on AH Systems
Gouli et al. (2001) have developed an AH system which is targeted at distance learners. At the beginning of the interaction, domain knowledge is restricted to the learners – this is a strategy which is more appropriate for novices (Ibid).

2.4.5 Metadata for Adaptive Content

We have described previous standards and specifications for learning objects, such as LOM; however, these primarily describe static learning content. Albert et al. (2001) have proposed a framework which extends the IMS Learning Resource Metadata (LRM) to describe the adaptive features of digital learning resources. They have added a new, optional element Adaptivity within the Educational category of the IMS LRM, shown in Figure 2.7:

Adaptivity?
    Adaptivitytype*
        Name=<langstring>
        Ref=<URI>?
        Langstring

Figure 2.7 Metadata for Adaptive Content

Each learning resource may contain any number of Adaptivitytype elements, each of which describes one aspect or type of adaptivity. The Adaptivitytype element contains a mandatory name, an optional ref to specify a URI, and a langstring which contains the adaptive information. Figure 2.8 below shows the metadata for a learning object which is best suited for a learner whose preferred learning style is Visual:

<adaptivity>
    <adaptivitytype name="learningstyle">
        Visual
    </adaptivitytype>
</adaptivity>

Figure 2.8 Example of Adaptive Content Metadata

2.5 Summary
In this chapter, we have reviewed the new concept of learning objects and how these can be used for the new generation of web-based education. XML technologies, the language used for interoperable and exchangeable data within the web, are discussed. We also examined web-based learning standards initiatives along with their standards and/or specifications, and finally adaptive learning and testing technologies.

Chapter 3
Teaching Java as an Introductory Programming Language

This chapter primarily suggests effective ways in which Java can be taught as an introductory programming language. The motivation for this investigation is the popularity and widespread use of Java as an introductory language at degree level (Hadjerrouit, 1998a), and the fact that many students on Java courses are learning programming for the first time. Difficulties in teaching and learning programming have resulted in three different approaches for teaching Java programming. We have conducted two investigations to evaluate these approaches, and the results from these investigations are subsequently discussed.

3.1 Why is Java used as an Introductory Language?
Java has become the language of choice for introductory programming courses in many university computing departments in recent years (Barikzai, 2000; Fleury, 2000; Reges, 2003). Tyma (1998) claims that Java “takes the best ideas developed over the past 10 years and incorporates them into one powerful and highly supported language”. There are three main reasons for Java’s popularity:
• It is portable and flexible – Java is freely available on Sun’s Solaris™ and Microsoft Windows™ platforms and from third party sources for Linux™ and

Macintosh™. Universities using any of these platforms have no restrictions if they wish to adopt Java. It also means students have the flexibility of choosing between working at home or at university – they do not need to convert their code to adapt it to different operating systems (Gibbons, 1998).
• It is a powerful object-oriented language with web programming, networking, and concurrent programming capabilities, which makes it ideal for use in industry as it can meet many current requirements demanded by companies and developers (Lewis, 2000; Hadjerrouit, 1998a), and it provides a firm foundation for software design. It was also pointed out that “Java’s commercial and pedagogical success is largely due to the pioneering efforts of previous languages on which it stands” (Ibid).
• Java is “increasingly becoming a paradigm for learning concurrency, interactive computing and object-oriented design” (Hadjerrouit, 1998b).

Students may want to learn Java in order to enter the “highly-rewarding and exciting world of web programming and design” (Callear, 2000), and to anticipate in return a career as a high-paid programmer in the IT industry; this is known as Extrinsic Motivation (Jenkins, 2001). Other categories of students being motivated to undertake programming include Intrinsic, which is a “deep interest in computing (or specifically programming)”, Achievement, which is the “desire to perform well for personal satisfaction”, and Social, which is the “desire to please family or friends” (Jenkins, 2001).

3.2 Suitability of Teaching Java as an Introductory Language
Teaching introductory programming involves teaching the general principles underlying programming, such as Problem Solving Skills, Algorithmic Thinking and Structured Programming (Hadjerrouit, 1998a), and ensuring that an appropriate introductory language is chosen (Jenkins, 2002). When choosing a language, we need to consider its Pedagogic Suitability. Jenkins (2002) identifies the following pedagogic questions and some further technical questions:
• Is it easy to learn?
• Does it demonstrate the paradigm of the language effectively?
• Are there helpful and suitable textbooks for the language?
Then, if necessary, the technical features should be considered, for example:
• Is the language available on the appropriate platforms?
• Is it in commercial use?
• Are there jobs which require this programming language?

However, there are problems associated with this. Students will tend not to study a language that is commercially unviable, as it would affect their chances of getting a highly paid programming job. Therefore, Jenkins (2001) pointed out that a commercially popular language may have to be adopted as the basis for teaching programming. Much controversy has arisen regarding whether Java should be taught as an introductory programming language, and many different authors have suggested that there are other languages which may be more suitable, as follows:
• Jenkins (2003) argues that Python™ (www.python.org) is very easy to learn, as “one of its original design intentions was to be a learning environment for children”. The amount of code required to write programs in Python is much shorter, and therefore it is easier for students to learn.
• Steffler (2004) argues that the object-oriented language Smalltalk™ (smalltalk.org), a powerful object-oriented language with clear syntax, is an easier language to teach compared with Java and C++™ (www.cprogramming.com).
• Fekete et al. (2000) argue that the simple language Blue™ (Koelling, 1999) would be ideal to teach at university level, since it is especially designed for novice learners.

• Hadjerrouit (1998a) argues that the general-purpose programming language Simula™ (Dahl, 2001) is easy to learn, as it has simple basic constructs and syntax, and better support for programming principles.

Similarly, Burton & Bruhn (2003) argue that students should learn two other, simpler paradigms of programming languages before learning an object-oriented language such as Java: firstly, a simpler procedural language such as C™ (www.cprogramming.com), and secondly, a visual language such as Visual Basic™ (visualbasic.about.com). This route provides students with the fundamental concepts of any programming language and will make it easier for them when they need to tackle the abstract concepts of object-orientation. However, Gibbons (1998) points out that Java is an excellent choice for teaching introductory programming if Structured Programming is taught from it as a first course. He argues that many programming-related skills, such as procedural constructs, should be learnt before object-orientation. Once the students are equipped with these skills, they can proceed to learning the Object-Orientation aspects of Java as a second course.

3.3 Difficulties of Learning Programming
A number of authors have suggested that there are many difficulties with learning programming. For example, Jenkins (2002) argues that programming is invariably one of the hardest subjects to learn; Proulx (2000) states that the “first programming course is a major stumbling block for many students interested in computer science”; and Allison et al. (2002) point out that many computing departments face challenges when teaching first time programming to students. The reasons for this must be discovered so that educators can teach programming successfully and students can learn it effectively. The following describes the factors contributing to this:
• Academic requirements for admission to programming courses are relatively low, and previous general computing, programming and/or mathematical knowledge are not always required. This is a problem because learning programming requires a good mathematical background (Burton & Bruhn, 2003; Hadjerrouit, 1998a), as well as problem solving abilities, general computer operating skills, and the ability to compile, run and debug code (Ibid).
• The challenging aspects of learning to program may de-motivate students, and it is pointed out that if students are not motivated, they will not learn and, subsequently, they will not succeed (Jenkins, 2001). For example, Meisalo et al. (2002) report that 32% of their students who enrolled on their first year computer science distance learning course discontinued their programming courses because they found the exercises or the theory too difficult, or had failed their retake exams.
• Lecturing programming to a diverse group of students creates further problems (Davis et al., 2001; Röβling & Freisleben, 2002), and increasing course participation means that there is less time to support each individual learner (Jefferies & Barrett, 2000).
• Two different learning approaches are required to be applied concurrently in order to learn programming effectively: the Surface Learning approach is required to remember the syntax details of a programming language, and the Deep Learning approach is required for the act of programming, by analyzing and applying these syntax details to complete programs (Jenkins & Davy, 2001).
• Traditional lectures are typically used to teach programming; however, they can usually only convey material which requires the Surface Learning approach, and are not an effective way of delivering programming concepts (Ibid).
• The diverse gap between students with and without programming (and general computer) experience may cause proficient users to be over-confident and feel that no effort is required to pass the course. As for the novice users, they may feel very insecure and disadvantaged against the proficient students.

This problem was experienced by Sanders & Mueller (2000), who subsequently tried to resolve the gap by enrolling the students on an introductory course prior to undertaking programming modules. Similarly, in order to solve this problem, the institution of Shackelford & LeBlanc Jr. (1997) began a ten-week introductory course to Computing for its students.

3.4 Approaches for Teaching Java Programming
As a result of our research, we have discovered three principal approaches for teaching Java: Fundamentals-first, Objects-first and GUI-first (Graphical User Interface). This section aims to provide an overview of these three approaches and to discuss the advantages and disadvantages of each. Figure 3.1 illustrates how the ordering of Java topics is affected by each of these approaches.

Fundamentals-first: Basic Concepts → Object-Orientation → Graphical User Interfaces
Objects-first: Object-Orientation → Basic Concepts → Graphical User Interfaces
GUI-first: Graphical User Interfaces → Object-Orientation → Basic Concepts

Figure 3.1 Ordering of Topics by the Three Approaches

3.4.1 Fundamentals-first
The Fundamentals-first approach concentrates initially on Basic Concepts, before any language-specific programming such as the object-orientation aspects of Java. Proponents of this approach, for example Smolarski (2003), argue that students should grasp all the introductory concepts of programming before moving onto the specific technical features of the language. This is beneficial for them, as the gain of applicable foundational knowledge can equip them with the ability to shift to a new programming language and/or paradigm if necessary, as they would have been “well-grounded in language-independent fundamentals” (Ibid). This approach is also more motivating for students to pursue the course further, as they are faced with easier topics to begin with, and their confidence can be built up in this way.

This approach appears to be consistent with the viewpoints of three other authors. The first is that of professional educators who teach the fundamental principles first and, when the students have mastered these, then progress to teaching design and problem solving (Sanders & Mueller, 2000). The second is the Classic Instruction Methodology used for introductory programming, which allows a gentle learning curve by starting with a simple program and steadily moving onto more complex programming, thus allowing time for the learner to grasp each concept and incrementally build up their knowledge (Cooper et al., 2003).

The third is Anderson et al.’s A Revision of Bloom’s Taxonomy (Anderson et al., 2001). Its revised framework has two dimensions, as follows:
• Cognitive Process, which consists of six categories in ascending order of cognitive complexity: Remember, Understand, Apply, Analyze, Evaluate and Create.
• Knowledge, which consists of four categories: Factual, Conceptual, Procedural and Metacognitive. These categories lie on a scale from concrete (Factual), an easier-to-attain low-level skill, to abstract (Metacognitive), a harder-to-master high-level skill.

The Cognitive Process Dimension: 1. Remember, 2. Understand, 3. Apply, 4. Analyze, 5. Evaluate, 6. Create
The Knowledge Dimension: A. Factual Knowledge, B. Conceptual Knowledge, C. Procedural Knowledge, D. Metacognitive Knowledge

Table 3.1 Bloom’s Taxonomy (taken from Anderson et al., 2001)

This framework suggests that a topic which is cognitively simpler, and whose knowledge is more concrete, such as Assignment, should be taught first. A

topic such as Classes appears to be more cognitively complex, requiring more abstract and analytical thinking skills, and should be taught after an easier topic such as Assignment (Sanders & Mueller, 2000). This framework also suggests that procedural programming must precede object-oriented programming, as the former is much more concrete than the latter. Note that Algorithmic thinking is also considered a paradigm in its own right, as an algorithm can consist of elements such as selection and repetition, which are contained in object-oriented programming. Having procedural experience can be regarded as a benefit, if not a prerequisite, for learning object-oriented programming, and it was pointed out that students who have experience in the procedural paradigm will learn object-oriented programming much more capably and effectively (Ibid). There are two reasons for this, as follows:
• The object-oriented paradigm primarily contains modeling structure and relationships which are already present in the procedural paradigm. Evidence for this point is given by the definition of Burton & Bruhn (2003) that “an object consists of a collection of variables (attributes) and procedures (behaviors) bundled permanently together (encapsulated) as a unit”.
• A firm understanding of Algorithms and Structured Programming will help students tackle the object-oriented paradigm, and the procedural paradigm contains both of these components.
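The overlap between the two paradigms can be made concrete with a short sketch (our own illustration, not code from the cited authors). The procedural version separates the data from the procedure that manipulates it, while the object-oriented version bundles the variable (attribute) and the procedure (behavior) together as a unit, exactly as in Burton & Bruhn’s definition:

```java
// Procedural style: the data (an int) and the procedure that operates on
// it are kept separate; the caller passes the data in and receives it back.
class ProceduralStyle {
    static int deposit(int balance, int amount) {
        return balance + amount;
    }
}

// Object-oriented style: the variable (attribute) and the procedure
// (behavior) are encapsulated together as one unit.
class Account {
    private int balance;               // attribute

    Account(int openingBalance) {
        balance = openingBalance;
    }

    void deposit(int amount) {         // behavior
        balance += amount;
    }

    int getBalance() {
        return balance;
    }
}
```

The structure being modelled (a balance that can grow) is identical in both versions; only the packaging differs, which is why procedural experience transfers so readily to object-oriented programming.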

3.4.2 Objects-first
The Objects-first approach advocates that Objects should be taught right from the beginning, and the establishment of Object-Oriented Thinking is the primary focus (Cooper et al., 2003). It concentrates on Object-Oriented Programming Principles, and focuses on Objects and Inheritance before introducing any of the procedural elements. This means that Classes and Methods are always taught before basic procedural topics such as Assignment, and students are required to write multiple classes and methods initially (Lewis, 2000). These procedural elements are in any case always kept in the context of an object-oriented design. The Unified Modeling Language (UML) is often used to develop a visual and intuitive model of objects and their relationships, which is then translated into code afterwards (Burton & Bruhn, 2003).

Proponents of this approach argue that object-oriented programming is a new programming paradigm which requires a new accompanying teaching approach, i.e. an Objects-first approach (Ibid). However, Lewis (2000) argues that object-orientation is not new, as its underlying concepts (and specifically object-orientation) also existed in the Simula language, which was developed between 1962 and 1967 and used modern object-oriented concepts such as classes, subclasses and polymorphic functions. There is also another object-oriented language called Smalltalk (smalltalk.org), which was developed around 25 years ago and used objects for all its program entities. It is also pointed out that the underlying

concepts and goals of a procedural and an object-oriented design are nonetheless essentially the same, except that a procedural design does not use the same terms, notations and relationships as an object-oriented design. Therefore, object-orientation and procedural concepts are not mutually exclusive.

There is much controversy over whether this approach should be used for teaching students introductory programming. First hand experience was reported by Sanders & Mueller (2000), who discovered that their programming course, which adopted the Objects-first approach, was inaccessible to many students who had the potential to cope with the material. They subsequently altered the order of their course to adopt the Fundamentals-first approach.

3.4.3 GUI-first
The GUI-first approach commences by introducing students to Graphical User Interfaces (GUIs) and Java Applets to illustrate object-oriented programming concepts and properties common to all Java classes (Decker & Hirshfield, 2000; Proulx et al., 2002). It allows students to learn how to develop GUI programs from the beginning, to help them understand the functions of Classes and their components, before learning object-oriented programming and fundamental procedural elements. The advantages of this approach are as follows:

• The general student learning experience is enhanced using GUIs as visual aids, and the students may become more motivated and will gain more satisfaction if they can see that their running program is displayed in the form of a GUI, as opposed to a static textual alternative (Hadjerrouit, 1998a).
• Students are able to view the organization of Java code within the GUI (such as statements, brackets, parentheses, data declarations and method declarations) and syntax details (new class creation and method calls), which may help them understand how the programs are constructed (Ibid).
• GUIs can be used to explore the behavior of classes and objects in response to different actions. This gradually illustrates the most important concepts in object-oriented programming and enables students to realise the power of object-oriented programming techniques (Proulx et al., 2002).
• Students are able to create a mental model of classes, instance objects of classes and the organization of classes (member data, member functions and information hiding aspects) using GUIs, as these can be very abstract (Ibid).
• This approach may lead students who enroll on a programming course which adopts it to think that there is more hands-on programming in the course than pure abstract theory. Whether this is the case or not, it may be helpful for recruitment (Gibbons, 1998).

However, there are also disadvantages to this approach, as follows:
• Students must visualize the concepts from an object-oriented point of view from the beginning, before they do any hands-on programming (Ibid).
• This approach lacks emphasis on Algorithmic Thinking, Structured Programming and Object-oriented Design.
• Using Java’s Abstract Windowing Toolkit (AWT) to build GUIs involves learning about many other concepts, which adds complexity to the programming course; Hong (1998) argues that instead of concentrating on trying to solve problems, students are distracted by the details of the AWT. Also, to create a simple program for text input in Java can be very difficult (Proulx et al., 2002).
• Creating a GUI is too time-consuming, and students in introductory courses will not be able to create them without assistance, which may deter academic staff from adopting this approach (Ibid). However, a counter argument is that a toolkit such as the Java Power Tools (Rasala & Proulx, 2001) decreases the time and effort required to build a GUI (Proulx et al., 2002).
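The objection about AWT complexity is easy to see in code. Even the minimal sketch below (our own illustration, not an example from the cited authors) of a window containing a single label already requires the beginner to understand inheritance, superclass constructors, event-listener callbacks and anonymous inner classes:

```java
import java.awt.Frame;
import java.awt.Label;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;

// A minimal AWT window: note how many concepts beyond "print a message"
// are needed — inheritance (extends Frame), a superclass constructor call,
// a callback object for the close button, and component layout.
class HelloGui extends Frame {
    HelloGui() {
        super("Hello");                          // superclass constructor
        add(new Label("Hello, world!"));         // add a component
        addWindowListener(new WindowAdapter() {  // anonymous inner class
            @Override
            public void windowClosing(WindowEvent e) {
                dispose();                       // close the window
            }
        });
        pack();                                  // lay out the components
    }

    public static void main(String[] args) {
        new HelloGui().setVisible(true);
    }
}
```

By contrast, the console equivalent of this program is a single println statement, which illustrates why some authors reserve GUIs for later in the course.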

3.5 Our Research Investigations
This section discusses our research investigations – the Literature Survey and the Student Questionnaire – and the answers provided by two sets of corresponding research questions. The aim of the first investigation was to evaluate each of the approaches to teaching Java, and to attempt to establish a commonly agreed ordering of the relative difficulty levels of the basic Java topics adopted by each of the three approaches. This follows the general rule that the ordering of topics within a course must proceed from easy to difficult, and especially that some simpler topics must be taught first if they are prerequisites for the more complex topics (Callear, 2000). The primary objective of the second investigation was to compare the professionals’ apparent ordering of topics in the published books with students’ perceptions of the difficulty levels of these topics, and hence to determine whether the students’ perceptions agree with the professionals’ ordering.

The motivation for this was that the relative difficulty levels within topics are significantly important in an adaptive learning and testing environment (Wainer et al., 2000), and we wished to form an introductory Java course from these basic Java topics and import it into our adaptive system. Nine basic Java topics were selected for this investigation and are used as the basis for comparing the ordering between the approaches. Table 3.2 shows the topics and the reasons for their selection.

Topic – Reason for Selection
Comments – Important part of a well-documented program
Assignment (including Variables and Primitive Data Types), Expressions (including Arithmetic Operators) – Underlying Building Blocks of Programs
If-Statements (including If-Else-Statements), For-Loops, Arrays – Essential elements of both Structured and Object-Oriented Programming
Classes (and Objects), Methods – Essence of Object-Oriented Programming
I/O (Input & Output) – Permits Reading In and Printing or Outputting Values

Table 3.2 Topics and their Reasons for Selection

3.5.1 Literature Survey
We selected thirty recently published academic Java programming books from the University of Warwick library for our first investigation, and followed a fixed set of criteria for selecting these textbooks: they must be aimed at students learning Java as an introductory language, and must either not assume programming experience, or assume only basic knowledge of programming concepts or some familiarity with programming in other languages. The advantages of conducting a literature survey for this investigation are as follows:
• The linear nature inherent in printed works can help us easily determine the apparent approach to teaching Java and the direction of the learning curve in each book; hence an ordering of topics can be identified. The relative difficulty levels of topics within the books can also be indicated by this linear structure.
• The selected literature is all currently in print; many of the books are recommended textbooks and they have frequently been loaned out and used by students studying Java courses. Also, all the published literature has gone through a reviewing process, which gives us confidence in the quality of the data.

3.5.2 Research Questions and Results
This section describes the results from the literature survey, which provide answers to a number of research questions.

How popular are the approaches used for introductory programming textbooks, and which approach is adopted by each book?
Of the thirty textbooks surveyed, 18 adopt the Fundamentals-first approach, 7 are Objects-first and 5 are GUI-first. The textbooks which adopt these approaches are shown in the following three lists respectively (see Appendix D for full details of these textbooks):

Fundamentals-first
1. Introduction to Java Programming
2. Teach Yourself Java 1.1 in 21 Days
3. Java an Object-Oriented Language
4. An Introduction to Object-Oriented Programming with Java
5. Programming for Everyone in Java
6. Java Outside In
7. Introducing Java for Scientists and Engineers
8. Programming with Java
9. An Introduction to Computer Science using Java
10. Java 1.1 Certification Study Guide

11. Essential Java 2 Fast – How to develop applications and applets with Java 2
12. Java with Object-Oriented Programming and World Wide Web Applications
13. Onto Java
14. An Introduction to Programming using Java – An OO Approach
15. Object Oriented Programming in Java
16. Thinking in Java – The Definitive Introduction to OO Programming in the language of WWW
17. Object Oriented Programming with Java
18. An Introduction to Programming and Object Oriented Design using Java

Objects-first
1. Programming the Internet with Java
2. Java Gently – Programming Principles Explained
3. The Java Tutorial – Object-Oriented Programming for the Internet
4. The Java Programming Language
5. The Essence of Java Programming
6. Java and Object Orientation – An Introduction
7. Core Java 1.2 Volume I – Fundamentals

GUI-first
1. Just Java 2
2. Exploring Java
3. An Introduction to Programming with Java (Programming.java)
4. Java – How to Program

5. Java – Object Oriented Problem-Solving

What are the differences between the ordering of topics within the two most popular approaches?
For this research question, we analyzed the differences between the two most popular approaches – Fundamentals-first and Objects-first. We conducted the investigation by separating the books into two categories according to the approach they adopt, and then individually examined each book. During our investigation, we also established many similarities between these approaches: regardless of the adopted approach, Comments and Output are the first topics to be taught, and there is an additional focus on Concepts of Object-Oriented Programming at the beginning of the textbooks of both approaches, preceding the nine basic topics. Following this, Assignment, Expressions, If-Statements, For-Loops and Arrays were always taught one after the other (though sometimes one before another). The topics covered by the books therefore fall into five groups, as follows:
1. Concepts of Object-Oriented Programming
2. Primitive
3. Procedural Constructs

4. Object-Orientation
5. Advanced

We observed that the main difference between these two approaches is the ordering of the Procedural Constructs and Object-Orientation groups: in the Objects-first approach, Object-Orientation precedes Procedural Constructs, and vice versa for the Fundamentals-first approach. Methods are usually taught after Classes in both approaches, and Input is usually taught last out of the basic Java topics. Figure 3.2 illustrates this difference between the two approaches (the top flow of events represents Fundamentals-first and the bottom flow represents Objects-first):

Fundamentals-first: CONCEPTS OF OOP & PRIMITIVE → PROCEDURAL CONSTRUCTS → OBJECT-ORIENTATION → ADVANCED
Objects-first: CONCEPTS OF OOP & PRIMITIVE → OBJECT-ORIENTATION → PROCEDURAL CONSTRUCTS → ADVANCED

Figure 3.2 Difference between Objects-first and Fundamentals-first

Do any of the two most popular approaches share the same ordering between any of the topics?
This section attempts to provide answers as to whether any parts of the ordering of topics are shared by the authors of the two different approaches. Each of the five groups is discussed in detail: Concepts of Object-Oriented Programming is discussed first, then the Primitive topics are discussed together with the Advanced topic, then Procedural Constructs, and finally Object-Orientation.

Concepts of Object-Oriented Programming
A high percentage (86%) of the different categories of books introduced Concepts of Object-Oriented Programming at the very beginning. These Object-Oriented Concepts include the notion of identifying “things” (Objects), creating them as instances of “categories of things” (Classes), setting up operations for them (Methods) and representing information about them (Attributes) (Garside & Mariani, 1998). This methodology is agreed by many authors; for example, Smolarski (2003) pointed out that students should commence with learning the basic concepts before learning any detailed fundamentals of programming and object-oriented programming.
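These four notions – Objects, Classes, Methods and Attributes – map directly onto Java syntax. The small fragment below (our own illustrative example, not taken from Garside & Mariani) labels each of them:

```java
// "Category of things" -> a class; "things" -> objects (instances of it);
// operations -> methods; information about them -> attributes (fields).
class Book {                          // Class: a category of things
    private String title;             // Attribute: information about a Book

    Book(String title) {              // constructing an instance yields an Object
        this.title = title;
    }

    String describe() {               // Method: an operation on the object
        return "Book: " + title;
    }
}
```

A statement such as `new Book("Java Gently")` then creates one concrete “thing” belonging to the Book category, which is usually the first point where the vocabulary becomes tangible for students.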

Behforooz (1981) pointed out that students must learn concepts and problem-solving techniques before learning any of the specific details of the syntax of a programming language, and Van Roy et al. (2003) argue that concepts must be learnt first before learning a particular language or paradigm.

Primitive (Comments and Output) and Advanced (Input)

Following the Concepts of Object-Oriented Programming, the topics Comments and Output were the next two most popular topics chosen by many of the authors. A possible reason why Comments and Output are taught first is that they require easier-to-attain low-level skills: commenting only requires factual knowledge, and outputting only requires a single simple statement, System.out.println(), which allows many different primitive data types in Java to be displayed. Meisalo et al. (2002) performed a study (referred to as Study A below) to investigate the various programming topics that students have difficulty with, and found that none of the students surveyed found the introductory topics (Algorithms and Basics of Java) difficult.

We observed that the topics Input and Output were not grouped together to form one topic but were split up and taught separately, and that Input is usually introduced last of the nine topics. The reasons for this have been suggested by a number of authors. Input may be taught last because it is much more complex: there is a collection of input methods which read in different data types, such as read(), readString() and readDouble(), which read in bytes, strings and doubles, respectively. A number of authors have also suggested that novice students find Input much more complex. Bishop (1998) points out that Input requires students to be aware of more complex topics such as packages, subclasses, abstract classes, constructor parameters, creating and passing objects, and handling exceptions. This is because the java.io package must be imported; System.in belongs to the abstract class InputStream, and in order to input, an object must be created by supplying System.in as a constructor parameter to a subclass, InputStreamReader, and this object is then passed to another class called BufferedReader, as keyboard input works best when it is buffered; the student must also ensure that all IOExceptions are caught. First-hand experience whilst teaching Java was reported by Hong (1998), who encountered problems whilst attempting to teach Input; he pointed out that this was mainly due to the syntax and the large amount of complex detail within it. Similarly, a study performed by Sayers et al. (2003) (referred to as Study B below) shows that the students rated the Input topic to be much more difficult than the Output topic. Given these reasons, it appears that Input and Output should not be grouped together and taught as one topic.

Procedural Constructs

As discussed previously, whether Procedural Constructs or Object-Orientation is introduced first depends on the adopted approach to teaching Java. However, we observed that there is a commonly agreed ordering of the Procedural Constructs in both approaches. A high percentage (73%) of the books surveyed taught the same linear order of topics: Assignment, Expressions, If-Statements, For-Loops and Arrays, as shown in Figure 3.3:

[Figure 3.3 Most Common Ordering of Procedural Constructs: Assignment, Expressions, If-Statements, For-Loops, Arrays]

Note that the Assignment topic also includes Variables and Primitive Data Types, the Expressions topic also includes Arithmetic Operators, and the If-Statements topic also includes If-Else-Statements. The reason for this ordering may become apparent if we discuss two other studies (Study A and Study B) which also support it. Only a small percentage (24%) of the students surveyed in Study A found Variables, which are contained in the Assignment topic, difficult. This result is supported by Smolarski (2003), who pointed out that the topic Data Types is simple enough to be covered early on in an introductory programming course, and is also mirrored in Study B, which reported that the students surveyed considered Variables & Data Types to be the easiest topic.

We observed that the topics Assignment and Expressions may be prerequisite topics for If-Statements, For-Loops and Arrays, which explains the precedence of these two topics before the other three. This is because the use of both an If-Statement and a For-Loop involves conditions, and these conditions are pre-determined by assigned Variable values, with a specific Type declared, and by Expressions, which are formed by Arithmetic Operators; Arrays also use Assignment to store values.

The If-Statements topic may be taught before the For-Loops topic because a For-Loop is regarded as more complex than an If-Statement: an If-Statement usually has only one or more conditions which need to be satisfied, whereas a For-Loop has a start condition, a stop condition and an update part. This is also supported by Study B, which indicated that the students rated the topic If-Statements to be easier than the topic For-Loops. However, it is inconsistent with Study A, which reported that a higher percentage (48%) of students considered If-Statements and Logical Operations to be a difficult topic, whereas a lower percentage (24%) of students had the same opinion regarding For-Loops. This inconsistency may be explained by the fact that their If-Statements topic included relatively more complex Logical Operations concepts, such as AND, OR, XOR and NOT, and therefore their students found this topic much more difficult.
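The contrast between the single condition of an if-statement and the three parts of a for-loop can be seen in a small sketch (the method names are illustrative):

```java
public class ControlFlowExample {
    // An if-statement: a single condition, pre-determined by an assigned
    // variable value with a specific type declared.
    static boolean isTeenager(int age) {
        if (age >= 13 && age <= 19) {
            return true;
        }
        return false;
    }

    // A for-loop: a start part, a stop condition and an update part.
    static int sumFirstN(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(isTeenager(14));  // prints "true"
        System.out.println(sumFirstN(3));    // prints "3" (0 + 1 + 2)
    }
}
```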

A possible reason why most of the books taught Arrays last is that it is the most difficult topic out of the Procedural Constructs. This result was supported by Study A, which reported that the highest percentage (72%) of their students found Arrays the most difficult, and Study B also found that Arrays was the most difficult topic out of the Procedural Constructs. It is also supported by Jenkins (1998), who emphasizes that students always seem to experience difficulties with Arrays; he suggests that a possible reason for this is that Arrays are often defined as "Variables that can hold multiple values", which can be very confusing for students.

Object-Orientation (Classes and Methods)

The majority of the books which adopt the Objects-first approach taught Classes before Methods, and a high percentage (60%) of the books which adopt the Fundamentals-first approach adopt this same order; the remaining 40% took the reverse order. Arguments can be made for teaching Methods either before or after Classes, as the Classes topic contains a special kind of Method called a Constructor.
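The interdependence is easy to see in a minimal class (names illustrative): a constructor is written with method-like syntax, so whichever topic is taught first must at least mention the other.

```java
public class Student {
    private String name;  // an attribute of the class

    // Constructor: a special kind of method with no return type,
    // invoked when an instance of the class is created
    public Student(String name) {
        this.name = name;
    }

    // An ordinary method
    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        Student s = new Student("Jane");
        System.out.println(s.getName()); // prints "Jane"
    }
}
```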

3.5.3 Student Questionnaire

For our second investigation, many first, second and third year undergraduate students from the Department of Computer Science at the University of Warwick were asked to complete a Student Questionnaire (see Appendix E) to provide the following details:

• Their Course (Computer Science, Computer Systems Engineering, Computer & Business Studies, Computer & Management Sciences, or Mathematics & Computer Science).
• Year of Study (First, Second, Third).
• Age.
• Gender.
• Their Computing Background (computer-related qualifications attained, for example GCSE, A-Level, HND or Industry Placement).
• Programming and Scripting Languages that they had studied before (for example Visual Basic, C, C++, C#, SML and Python).
• Their Understanding of the nine basic Java topics before they began their programming course (Excellent, Average, Poor or None for each topic).
• Their Perceived Difficulty of the nine basic Java topics (on a scale from 1 being easy to 10 being difficult).

We decided that the most effective way to do this was to hand out the questionnaires to students in the last five minutes of lectures1, where students are able to fill them in immediately and subsequently return them to us. To ensure a large sample, we selected for each year of study one module which had the highest number of student participation, as follows:

1. First Year Core module CS126 – Design of Information Structures.
2. Second Year Core module CS231 – Human Computer Interaction.
3. Third Year Optional2 module CS324 – Computer Graphics.

Two of these modules were Core first and second year modules and the other is an Optional third year module at the University. There are around 450 undergraduate students in total in the department (around 200 first year, 150 second year and 100 third year), and therefore we had hoped that we would get a reasonably large questionnaire sample. We decided that we would hand out the questionnaires in no more than one lecture for each of these modules, to avoid duplication of the same students filling in more than one questionnaire, and we subsequently chose a lecture time which was likely to have the highest student turnout; for example, we avoided early morning lectures and late afternoon ones. However, we did not manage to get a very large data sample, and there are three main reasons for this, as follows:

1 Note that permission was given by the module convenor to do this.
2 Note that there are no core third year modules in our department except the individual Computer Science Project.

• Student participation was generally not very high in lectures, since attendance at lectures is not compulsory and is usually not monitored.
• Second year Computer Science students had a project deadline and were not able to attend many of their lectures.

78 students participated in the student questionnaire; the year of study and gender of these students are provided in Table 3.3.

Year of Study    Male Students    Female Students    Total No. of Students
1                43               6                  49
2                7                1                  8
3                19               2                  21

Table 3.3 Details of Student Volunteers

There are limitations within our research sample which mean that it may not be fully representative of the student population. These limitations are as follows:

• Students who joined our University generally come from a similar educational background, as we require significantly high entry requirements (although computing qualifications are not required).
• The proficiency levels of students in programming become less diverse as they progress to the subsequent years. This is because Computer Science students at our University follow a similar course structure and there are many core modules which students must undertake (University of Warwick, 2004a).
• The sample was reasonably small and there are a small number of degree courses available at the University. The number of male students is also significantly higher than the number of female students who filled out our questionnaires, as there are more male participants than female ones within our courses, as is the case with many other computing courses elsewhere (Jenkins, 2001).

3.5.4 Research Questions and Results

This section describes the results from the student questionnaire, which provide us with answers to a number of research questions.

What are the Students' Perceived Difficulty Levels of the Nine Basic Java Topics?

In the student questionnaire, we asked the students to indicate with a number between 1 and 10 (1 being easy and 10 being difficult) what they perceived to be the difficulty levels of the nine basic Java topics. The data is presented in tabular form since it is necessary to compare two series of results; to allow a correlation to be more easily ascertained, the data is also displayed in a bar chart. Table 3.4 shows the number of students and the level of difficulty which they rated against each of the nine basic Java topics:

[Table 3.4: the number of students giving each difficulty rating from 1 to 10 for each of the nine basic Java topics]

Table 3.4 Difficulty Levels of Topics rated by Students

The ten ratings can be grouped into five categories: Very Easy, Easy, Average, Difficult and Very Difficult. Table 3.5 shows the percentage of students in each of these categories against each of the nine basic Java topics:

[Table 3.5: the percentage of students rating each of the nine basic Java topics (Comments, Assignment, Expressions, If-Statements, For-Loops, Arrays, Classes, Methods, Input/Output) as Very Easy, Easy, Average, Difficult or Very Difficult]

Table 3.5 Percentage of Students and their Ratings
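The grouping of the ten numeric ratings into five categories can be sketched as follows; the thesis does not state the exact boundaries, so an even split into pairs (1-2, 3-4, 5-6, 7-8, 9-10) is assumed here purely for illustration:

```java
public class RatingCategory {
    // Maps a 1-10 difficulty rating to one of five categories. The exact
    // boundaries are not stated in the text; an even split into pairs is
    // assumed for this sketch.
    static String category(int rating) {
        if (rating <= 2) return "Very Easy";
        if (rating <= 4) return "Easy";
        if (rating <= 6) return "Average";
        if (rating <= 8) return "Difficult";
        return "Very Difficult";
    }

    public static void main(String[] args) {
        System.out.println(category(1));  // prints "Very Easy"
        System.out.println(category(10)); // prints "Very Difficult"
    }
}
```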

Can an ordering of students' perceived difficulty levels within the topics be established?

The arithmetic mean of the students' perceived difficulty levels can be calculated from each student's rating of each topic on the scale of 1 to 10, where 1 is Easy and 10 is Difficult. Table 3.6 illustrates this:

[Table 3.6: the arithmetic mean of the perceived difficulty of each topic, listed from easiest to most difficult: Comments, Assignment, Expressions, If-Statements, Input/Output, Arrays, Methods, Classes]

Table 3.6 Arithmetic Mean of Students' Perceived Difficulty Levels

Is there any consistency between the students' perceived difficulty levels and the similarities shared by the authors of the two most popular approaches?

We evaluated the results from the student questionnaire and observed a number of consistencies between the similarities shared by the authors of the two most popular approaches and the students' perceived difficulty levels, as follows:

• Students perceived Comments to be the least difficult topic.
• The perceived difficulty levels of Assignment, Expressions and If-Statements are similar, which suggests there may not be a definite difficulty ordering between them. These are also the next least difficult perceived topics after Comments.
• Students perceived the topic Arrays to be more difficult than the topic For-Loops, which was perceived to be more difficult than the If-Statements topic.
• Methods and Classes are perceived to be the two most difficult topics, and the Classes topic is perceived to be the most difficult of all.

The only inconsistency observed from the student questionnaire was the perceived difficulty of the topic Input and Output, which was perceived to be of average difficulty within the nine basic topics. A reason why the students may have indicated this is that Input and Output were grouped together as one topic, and the students were rating the two as a whole. Lewis (2000) also pointed out that students may not find the topic Input very difficult, due to the way that they were taught. One of the teaching approaches for Input is to provide the students with a class or template with all the necessary declarations to set up the

InputStreamReader and the BufferedReader, so that the students are only expected to 'fill in the gaps'.

Is there any consistency between this and a specific teaching Java approach?

The students' perception of the ordering of difficulty levels within the basic Java topics is very consistent with the topic ordering in the Fundamentals-first approach, which means that it is not consistent with the Objects-first or the GUIs-first approaches. However, various teachers of object-oriented languages suggest that these languages are best taught using the Objects-first approach, although it is claimed that there is no scientific evidence which supports this (Kolling & Rosenberg, 1998). It was also pointed out that object-orientation is perceived to be much more difficult than structured programming because software tools, teaching support material and teachers' experience are less mature compared with those of structured programming (ibid.). One of the main reasons for authors to adopt the GUIs-first approach is to encourage students to enjoy programming (Kluit et al., 2001).

Does having procedural programming experience act as a barrier when learning object-oriented programming?

We evaluated the results from the student questionnaire and attempted to provide answers to this research question. We separated the questionnaires of students who had studied a procedural language from those who had not, and examined them in detail. A high percentage (77%) of the students who participated in the questionnaire had studied a procedural language such as C, whereas the remaining 23% had not; in fact, 44% had studied the procedural language BASIC (Beginners' All-Purpose Symbolic Instruction Code). However, there is no significant evidence to indicate that the students who had previously studied a procedural language rated the object-orientation topics in Java to be more difficult than the students who had not studied one before. Also, the results indicate that some students who had studied an advanced computer-related subject at school and had learnt a procedural programming language before they entered university rated the Java topics to be easier than those students who did not have a previous understanding of the basic concepts of programming and/or programming experience. Hadjerrouit (1998a) also claims that the core features of Java can be efficiently taught to students with C or C++ knowledge.

3.6 Summary

This chapter has described many issues regarding teaching Java as an introductory programming language, including the reasons why it has become the language of choice in many university departments and its suitability as an introductory language at degree level. We have isolated three approaches to teaching Java programming, and the advantages and disadvantages of these have been addressed. We performed the Literature Survey investigation to evaluate these approaches and established a generally agreed ordering of topics between them, which was supported by our Student Questionnaire investigation. Finally, the results from both of these investigations provided us with answers to several research questions regarding the teaching of the Java programming language.

Chapter 4

Adaptive Learning and Testing with Learning Objects

To assist us with our research, one of our main activities was to enhance an existing web-based teaching tool, OCTA (Online Computer Teaching Aid), which was constructed by Boyatt et al. (2003) in the Department of Computer Science at the University of Warwick. It has been developed as part of ongoing work in web-based computer-assisted learning technologies (Joy et al., 2002) to help students learn course material. However, many of its features can be enhanced; therefore our aims in this thesis are as follows:

• To import a non-adaptive introductory Java programming course and to convert it into an adaptive one.
• To construct different Areas of adaptive learning and testing for students of different proficiencies.
• To incorporate learning objects into a repository, from which these learning objects can be selected and reused in the four different Areas.
• To ensure that these learning objects are compatible, interoperable and reusable for other web-based learning environments, using an open standard or specification for learning objects.

This chapter first gives an overview of the architecture of our software tool, including its physical and logical structure, and the graphical user interface. Then the development of the learning content, the ideas behind the construction of

learning paths, four Learning and Assessment modules, and the Metadata for our Learning Objects, is discussed.

4.1 Architecture of our software tool OCTA

The following describes the overall architecture of the system, and Figure 4.1 illustrates it:

"[The] architecture consisted of a three-tier approach. There was a User-Interface Tier, which was provided via a Web Interface. This communicated with a Business Logic Tier and the final tier was Data Storage. The whole application used Java as the primary programming language, with Apache Tomcat providing the Servlet Technology. To store the teaching material a PostgreSQL Relational Database was used" (Boyatt et al., 2003).

[Figure 4.1 Architecture of OCTA: User Interface, Business Logic, Data Storage]

Each of these tiers is outlined below:

• The User Interface is where the students view the software and where learning materials and questions are presented. This is implemented by HTML pages which are dynamically generated by means of Java Servlet Technology™ (Sun Microsystems, 2004a).
• The Business Logic is where students' answers to questions are verified, and is written in XML and Java.
• Two categories of data are stored in the database: User Data and Learning Materials Data. User Data is information relating to specific students and their progress through the system (logs are generated), and Learning Materials Data includes learning content and questions. The Java software uses the PostgreSQL JDBC™ (Java Database Connectivity) (Sun Microsystems, 2004b) driver to access the PostgreSQL™ (www.postgresql.org) database. This is because these technologies provide an interoperable way for the three tiers to communicate with one another.

4.2 Logical Structure of our Software Tool

Our software contains features common to many web-based teaching aids; for example, questions can be added, and the marks obtained by the students can be recorded for future reference. However, OCTA aims to provide an adaptive learning and testing environment, based on the students' previous performance, by using two entities called Problems and Roadmaps. These entities are primarily constructed using XML DTDs, but their functions are implemented using Java.

4.2.1 Problems

A Problem primarily contains learning material and/or an individual question or a set of questions on a particular topic. For example, the learning materials in the Assignment topic can be presented as an individual Problem. We can view Problems as learning objects because Problems are generally written by a subject specialist and may be very small, specific and reusable. Two essential components are contained within each Problem, as follows:

• One or more Screens, which are the actual physical windows that students view when using the software.
• A Logic component, which specifies in what order the Screens are presented.

The XML tag for Problem is defined in Figure 4.2:

<!ELEMENT problem ( screen+, logic )>
<!ATTLIST problem name ID #REQUIRED>

Figure 4.2 XML Tag for Problem

An example of a Problem containing four Screens and a Logic part is shown in Figure 4.3:

[Figure 4.3 A Problem Example: a Problem containing four Screens and a Logic component]

Screens

A Screen can contain any or all of the following elements: HTML, Text gaps (a Text gap allows a text string to be inputted), Multiple Choice Questions, or Images. Figure 4.4 shows a typical Screen that a student views: the text is written in HTML, the question is in Multiple Choice Question format, and an image is at the top right corner of the Screen.

Figure 4.4 A Screenshot of a Screen in OCTA

The XML tag for Screen is defined in Figure 4.5:

<!ELEMENT screen ( elem+ )>
<!ATTLIST screen name ID #REQUIRED>
<!ELEMENT elem ( html|textgap|mchoice|image )>
<!ATTLIST elem name ID #REQUIRED>
<!ELEMENT html (#PCDATA)>
<!ELEMENT textgap (#PCDATA)>
<!ELEMENT mchoice (choice+)>
<!ELEMENT choice (#PCDATA)>
<!ATTLIST choice value CDATA #REQUIRED>
<!ELEMENT image EMPTY>
<!ATTLIST image src CDATA #REQUIRED>

Figure 4.5 XML Tag for Screen

Note that the element types for a Screen are extensible, so that new types of questions or components of a Screen can be added if the need arises. The following are examples of this (Boyatt et al., 2003):

• "Drag-and-drop" style or "fill-in-the-blank" questions.
• Questions which require a response within a fixed period of time.
• Sound or video clips.
• 3D Visualisations.
• Entering and displaying mathematical or chemical formulae.

Additional Metadata

We have implemented the following metadata tags for the Assessment aspect of our learning objects, as shown in Table 4.1:

Metadata            Reason for Implementation
Difficulty Level    To indicate, using a number between 1 and 9, the difficulty level of the topic.
Assessment Area     To indicate the area of assessment, for example Types within the Assignment topic.
Material Covered    To indicate the material covered within an assessment topic, for example One-Dimensional Arrays in the Arrays topic.
Assessment Level    To indicate the level of proficiency for the question, such as Novice, Intermediate and Advanced.
Answer              To indicate whether the selected answer is correct. Note that answers can be stored in particular Screens.

Table 4.1 Additional Metadata

These metadata tags are defined in the Screen section, as shown in Figure 4.6.

<!ELEMENT screen (elem+)>
<!ATTLIST screen name ID #REQUIRED>
<!ATTLIST screen difficultyLevel CDATA #IMPLIED>
<!ATTLIST screen assessmentArea CDATA #IMPLIED>
<!ATTLIST screen materialCovered CDATA #IMPLIED>
<!ATTLIST screen assessmentLevel CDATA #IMPLIED>
<!ATTLIST screen answer CDATA #IMPLIED>

Figure 4.6 XML Tag for Screen with the Additional Metadata

Logic

Two parts are contained in the Logic component: an Administrative part and a Sequencing part. The Administrative part contains three tags: Initial Snapshot, Setvalue and Mark. Initial Snapshot indicates the initial settings at the beginning of a user session, i.e. the initial screen to be displayed and the student's initial mark. The Setvalue action is used for assigning values to elements, and the Mark action modifies (Add/Subtract/Set) the mark for a Problem. The XML for the Administrative part of the Logic component is defined as shown in Figure 4.7:

<!ELEMENT logic ( initial_snapshot, match+ )>
<!ELEMENT initial_snapshot ( setvalue* )>
<!ATTLIST initial_snapshot screen IDREF #REQUIRED>
<!ATTLIST initial_snapshot mark CDATA #REQUIRED>
<!ELEMENT setvalue (#PCDATA)>
<!ATTLIST setvalue elem IDREF #REQUIRED>
<!ATTLIST setvalue arg CDATA #REQUIRED>
<!ELEMENT mark (#PCDATA)>
<!ATTLIST mark method ( add|subtract|set ) #REQUIRED>
<!ATTLIST mark arg CDATA #REQUIRED>

Figure 4.7 XML Tag for Administrative part of Logic

Sequencing, the second part of the Logic component, provides a deterministic sequence of interactions within the Problem between the system and the student. Three actions, Match, Jump and Terminate, are used to specify how the student moves between the Screens:

• The Match action checks whether the student's answer is correct.
• The Jump action specifies which Screen to jump to.
• The Terminate action stops the Problem.

The appearance of each Problem may essentially be identical; however, it is these logical interconnections within the Problems which offer each learner an adaptive learning experience.

The XML for the Sequencing part of the Logic component is defined as shown in Figure 4.8:

<!ELEMENT match ( match*, ( jump|terminate )? )>
<!ATTLIST match elem IDREF #IMPLIED>
<!ATTLIST match arg CDATA #IMPLIED>
<!ATTLIST match screen IDREF #IMPLIED>
<!ATTLIST match markarg CDATA #IMPLIED>
<!ATTLIST match markmethod CDATA #IMPLIED>
<!ATTLIST match method CDATA #IMPLIED>
<!ELEMENT jump ( mark?, setvalue* )>
<!ATTLIST jump screen IDREF #REQUIRED>
<!ELEMENT terminate ( mark?, setvalue* )>

Figure 4.8 XML Tag for Sequencing part of Logic

An example Problem is given in Figure 4.9. The name of the Problem is Assignment, and it contains a multiple choice question (Question1) with three answers (Answer1, Answer2 and Answer3). The Logic part instructs the system to display the appropriate answer Screen depending on the chosen option. If the answer is incorrect, the test will terminate.

<?xml version = "1.0"?>
<!DOCTYPE problem SYSTEM "..dtds/problem.dtd">
<problem name = "Assignment">
  <screen name = "Question1">
    <elem name = "html1"><html><![CDATA[<div><p>
      Suppose your age is 14. Which of the following is the declaration and
      assignment of your age in number to a memory location called age?
    </p></div>]]></html></elem>
    <elem name = "mchoice"><mchoice>
      <choice value = "Choice1"><![CDATA[<p> char age = 14; </p>]]></choice>
      <choice value = "Choice2"><![CDATA[<p> int age = 14; </p>]]></choice>
      <choice value = "Choice3"><![CDATA[<p> string age = 14; </p>]]></choice>
    </mchoice></elem>
  </screen>
  <screen name = "Answer1">
    <elem name = "html2"><html><![CDATA[<div><p>
      Incorrect. (Further Explanation)
    </p></div>]]></html></elem>
  </screen>
  <screen name = "Answer2">
    <elem name = "html3"><html><![CDATA[<div><p>
      Correct. (Further Explanation)
    </p></div>]]></html></elem>
  </screen>
  <screen name = "Answer3">
    <elem name = "html4"><html><![CDATA[<div><p>
      Incorrect. (Further Explanation)
    </p></div>]]></html></elem>
  </screen>
  <logic>
    <initial_snapshot screen = "Question1" mark = "0"/>
    <match screen = "Question1" method = "equals">
      <match elem = "mchoice" arg = "Choice1" method = "equals">
        <jump screen = "Answer1"/>
      </match>
      <match elem = "mchoice" arg = "Choice2" method = "equals">
        <jump screen = "Answer2">
          <mark method = "add" arg = "10"/>
        </jump>
      </match>
      <match elem = "mchoice" arg = "Choice3" method = "equals">
        <jump screen = "Answer3"/>
      </match>
    </match>
    <match screen = "Answer1" method = "equals">
      <terminate/>
    </match>
    <match screen = "Answer2" method = "equals">
      <terminate/>
    </match>
    <match screen = "Answer3" method = "equals">
      <terminate/>
    </match>
  </logic>
</problem>

Figure 4.9 An Example of a Problem

4.2.2 Roadmaps

A Roadmap is the logic mechanism which links together individual Problems to form a package of learning material, which can be regarded as a course, and/or a set of assessment questions, which can be regarded as a test. There are two components within the Roadmap: Initial and Jump. The Initial component specifies any number of Problems which the author wishes to display when the student logs onto the system. The Jump component specifies a collection of jumps between Problems; note that the collection of jumps must be non-empty. Jumps are only validated when a Problem has been completed; if a Problem is abandoned, the user is returned to the previous position in the Roadmap. The XML for the Roadmap is defined as shown in Figure 4.10:

<!ELEMENT roadmap ( initial+, jump* )>
<!ATTLIST roadmap name ID #REQUIRED>
<!ELEMENT initial EMPTY>
<!ATTLIST initial problem CDATA #REQUIRED>
<!ELEMENT jump EMPTY>
<!ATTLIST jump from CDATA #REQUIRED>
<!ATTLIST jump to CDATA #REQUIRED>

Figure 4.10 XML Tag for Roadmap
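As an illustration, a hypothetical Roadmap conforming to this DTD might look like the following (the Problem names are taken from the screenshots described later in this chapter; the Roadmap name is invented for the example):

```xml
<roadmap name = "Section1-Pre-Test">
  <initial problem = "Pre-Test1-Assignment"/>
  <initial problem = "Pre-Test2-Expressions"/>
  <jump from = "Pre-Test1-Assignment" to = "Pre-Test2-Expressions"/>
</roadmap>
```

Here both Problems are offered when the student logs on, and completing the first also permits a jump to the second.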

The Roadmap works by storing students' responses to Problems, i.e. their interaction history or performance, in the database, so that it can select the next suitable Problem to be presented to the student. This constructs customized courses for each student, and many of the Problems (or learning objects) can be shared by the different Roadmaps to form different courses. Figure 4.11 depicts how the Roadmap works:

[Figure 4.11 How a Roadmap Works: a Roadmap links Problems together and records students' interactions in the database]

The position within a particular Problem of each Roadmap is saved automatically if a student terminates a Problem or logs out. This is so that students can terminate a Problem and return to it at a later stage without having to start from the beginning again. However, the student also has the option to restart the Problem if they wish. Figure 4.12 shows the Problems which are available to the current student, Pre-Test1 and Pre-Test2, and that Pre-Test2 can be continued:

Figure 4.12 A Screenshot showing the Available Problems

4.2.3 Handling Functions

Problems and Roadmaps are added to the database before they are used by students. The software has been constructed so that new Problems and Roadmaps can be added quite easily using Java-implemented functions, as follows:

• Add, Delete and List Problems.
• Add, Delete and List Available Roadmaps.

For example, to add an Assignment Problem called Assignment.xml, we apply the function Add Problem to the Assignment XML file, which will add all of the elements of the Problem into the database.
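A sketch of what such a function might do internally; the class and method names here are illustrative, not OCTA's actual API. It parses a Problem XML document and counts the Screen elements that a real Add Problem implementation would insert into the database:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class ProblemLoader {
    // Parses a Problem document and returns the number of Screens it
    // contains; a real Add Problem function would walk these elements
    // and insert each one into the database.
    static int countScreens(String problemXml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(problemXml)));
        return doc.getElementsByTagName("screen").getLength();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<problem name=\"Assignment\">"
                   + "<screen name=\"Question1\"/>"
                   + "<screen name=\"Answer1\"/>"
                   + "<logic/></problem>";
        System.out.println(countScreens(xml)); // prints "2"
    }
}
```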

4.3 The Graphical User Interface

The system consists of a standard layout: a central content area encapsulated with a thin black border. The title of the screen is displayed in the top left hand corner, and the top right hand corner always displays the OCTA logo. The graphical user interface is minimalistic, so as to allow the students to focus on the learning materials and not become distracted.

Login Screen

Once the web address for the software has been entered into the web browser, the Login Screen is the first screen to be displayed. Four different types of users can log into the system: Student, Administrator, Marker or Author. To log in, the user name and password are entered in the respective textboxes and the Login button is clicked. The screen will be redisplayed if the user name and/or password are incorrect; otherwise the software will automatically recognize, via the user name, the type of user and will load the appropriate view.

Student View

The initial view for students is shown in Figure 4.13. It shows all the Roadmaps available, and a student may select one which he/she wishes to attempt by clicking its link.

Figure 4.13 A Screenshot showing the Initial View for Students

For example, if a student clicks on Section1-Pre-Test, he/she is subsequently directed to another screen which displays the Problems of that Roadmap which are available to the student. For example, in Figure 4.14, two Problems are available for the student – Pre-Test1-Assignment and Pre-Test2-Expressions.

Figure 4.14 A Screenshot showing two Available Roadmaps

Once the student has selected one of the available Problems – for example, Pre-Test1-Assignment – the first question in that test is presented to the student, as shown in Figure 4.15.

Figure 4.15 A Screenshot of a Student Screen

Note that each screen of the learning materials or questions consists of a Proceed button, located at the bottom right hand corner, and an Index button, located at the left hand corner. The Proceed button is activated once an answer has been selected; when it is clicked, the student's answer is evaluated. Clicking the Index button returns the user to the Roadmap Selection screen.

4.4 Learning Content Development

Our main contribution to the software is the construction of learning objects to form a Java programming course which provides personalized learning paths for students with different levels of proficiency. Related work includes Pickard (2004), who also developed learning objects, using different adaptive testing algorithms, to help students learn introductory Java programming. We used the learning materials and test questions from our existing Java course (Yau, 2002) and subsequently converted these into reusable learning objects. The motivation for using this course was that it was aimed at students prior to their entry into higher education or university, and it was particularly chosen to illustrate the need for personalized content, as the target users of the system may have different levels of programming experience. These learning objects were then incorporated into our software tool and standardized metadata were attached.

4.4.1 Ideas behind the Construction of Learning Paths

A Learning Path is a group of "learning objects arranged together to achieve a specific instructional purpose. A path can be viewed as a comprehensive program that provides an in-depth understanding of a subject, or as a collection of e-learning objects that can be mixed and matched to meet specific learning needs" (ASTD & Smartforce, 2002). Learning paths are usually subdivided into learning objects for instructional design purposes, and it is a good rule to construct learning objects around four different learning modes – Instruction, Assessment, Collaboration and Practice (Ibid). The following details of these learning modes are adapted from ASTD & Smartforce (2002). This section gives a description of the learning objects which can be constructed around these learning modes.

Instruction Learning Mode

Table 4.2 shows the types of learning objects which can be constructed from the Instruction learning mode.

Lessons: Consist of text, graphics and animation and are used to explain concepts and theories.
Workshops: Hands-on training provided by instructors to learners, including demonstrations, exercises, questions, web tours and video broadcasts.
Seminars: Consist of a combination of video, audio and interactive slides where instructors speak directly to learners.
Articles: Brief and text-oriented documents used as supplementary study material.
White Papers: Detailed and text-oriented documents used to address complex topics.
Case Studies: Consist of detailed analyses of business, industry, research, or software product implementation; for example, the current developments in a third world country.

Table 4.2 Objects of the Instruction Learning Mode

Assessment Learning Mode

Table 4.3 shows the types of learning objects which can be constructed from the Assessment learning mode.

Pre-Assessment: Used to evaluate the depth of the learner's knowledge and to determine the scope of their learning needs, before learning begins.
Proficiency Assessment: Used for learners to test their knowledge to determine the degree of the learning materials they have retained.
Performance Test: Scored tests used to evaluate whether the learner is able to successfully complete a specific task.
Certification Prep Test: Evaluates whether learners have successfully assimilated the learning materials and mastered the skills gained during the learning stage. (Used towards the end of study.)

Table 4.3 Objects of the Assessment Learning Mode

Collaboration Learning Mode

Table 4.4 shows the types of learning objects which can be constructed from the Collaboration learning mode.

Mentored Exercises: Provide a feedback mechanism of the work between the learner and the instructor.
Chats: Learners are able to share their knowledge and experiences on a personal level.
Discussion Boards: Learners are able to communicate instantly or leave messages with anyone globally about ideas that interest them.
Online Meetings: Materials generated from these, such as documents, presentations and web pages, can be shared amongst learners regardless of their location.

Table 4.4 Objects of the Collaboration Learning Mode

Practice Learning Mode

Table 4.5 shows the types of learning objects which can be constructed from the Practice learning mode.

Role-play Simulation: Learners can interact with a realistic simulation of a business scenario.
Software Simulation: Learners can replicate GUI environments, which allows them to practice and learn how to solve complex tasks associated with the software.
Hardware Simulation: Learners can learn how to install and configure hardware components or use test instruments in a simulated environment.
Coding Simulation: Learners can practice complex coding tasks, and they can view the correct code if they cannot perform the tasks.
Conceptual Simulation: Learners can practice the application of ideas and understand relationships between certain kinds of information.
Business-modeling Simulation: Learners can learn how to solve complex exercises which will help them gain technical business skills.
Online Lab: Learners can learn how to remotely configure live network devices in real time over the internet.
Research Project: Learners can learn how to undertake a detailed research exercise on a specific subject field and analyze their findings.

Table 4.5 Objects of the Practice Learning Mode

Adopted Learning Paths

Given these different types of learning objects which can be constructed around the different learning modes, we decided that the Instruction and Assessment learning modes are the most important for our students. Three Assessment objects and one Instruction object were chosen – Pre-Assessment, Proficiency Assessment, Lesson Instruction and Performance Test. Having many assessment learning objects which can be reused in the different assessments also improves test security, as a student cannot perform well in a test by learning minimal material (Wainer et al., 2000).

4.4.2 Roadmaps for Different Learning Paths

This section discusses the development of our four different Roadmaps, one for each of the chosen learning objects. The reasons for their selection are as follows:

• The Pre-Test was chosen to establish students' levels of proficiency before they commence learning. This is targeted at students of all proficiency levels.
• The Proficiency Test was chosen to establish students' levels of proficiency in minimal time. This is targeted at average to proficient students.
• The Lesson Instruction was chosen to facilitate student learning. This is targeted at novice to average students.
• The Performance Test was chosen to determine whether students have effectively assimilated the learning materials in the learning stage. This is targeted at novice to average students.

A learning object repository has been constructed which contains the learning objects used to form the Roadmaps. These learning objects are learning materials or test questions (mainly multiple choice questions), which have been taken from the nine topics of the introductory Java course (Yau, 2002).³ These nine basic Java topics and their levels of difficulty (as established in the previous chapter) are given in Table 4.6. Note that Output and Input have been assigned difficulty levels 3 and 4 because we wish to teach them after Assignment and Expressions.

³ Note that this course was originally targeted at students before they enter higher education.

TOPIC: DIFFICULTY LEVEL
Assignment (including Variables and Data Types): 1
Expressions (including Arithmetic Operators): 2
Output: 3
Input: 4
If-Statements (including If-Else-Statements): 5
For-Loops: 6
Arrays: 7
Methods: 8
Classes (and Objects): 9

Table 4.6 Assigned Difficulty Levels for Java Topics

4.4.3 Pre-Test Roadmap

The motivation for this Roadmap was to provide a way of assessing students to ascertain their level of understanding and to locate them at the appropriate level of instruction. Gaps in knowledge are identified, which helps students to realise their weaker areas. This process is conducted so that students can spend more time in the learning stage (Arroyo et al., 2001).

The Pre-Test Roadmap contains nine Pre-Tests, one for each of the nine Java topics. It commences with the easiest topic – Assignment – and gradually increases in difficulty, topic by topic, until it reaches the most difficult topic – Classes. The aim of this assessment is to establish the students' proficiency levels in each of the nine topics. Figure 4.16 shows a simplified view of the algorithm of each Pre-Test, which contains six questions: Questions 1 and 2 are of Novice level, Questions 3 and 4 are of Intermediate level, and Questions 5 and 6 are of Advanced level. Students are only presented with the next question (which is of the same or a higher proficiency level) if the previous question is answered correctly; this is shown by the horizontal arrows in the figure. Otherwise, if a question has been answered incorrectly, the Pre-Test is terminated and the student's level of proficiency is established for that topic, i.e. No Previous Knowledge, Novice, Intermediate or Advanced; this is shown by the vertical arrows. At this point, the student is redirected to the Pre-Test Roadmap Selection screen, where he/she is able to select the next Pre-Test. The section is completed when all nine Pre-Tests have been completed and a level of proficiency has been established for each topic.

Figure 4.16 Pre-Test Roadmap (questions Q1–Q6 in sequence, with exits to the No Previous Knowledge, Novice, Intermediate and Advanced levels)

A different number of marks is allocated for correctly answering questions of different levels of proficiency; this distinguishes learners of different levels of proficiency (Wainer et al., 2000). Table 4.7 shows the number of marks allocated for each type of question in this section:

Type of Question: Number of Marks
Novice: 4
Intermediate: 5
Advanced: 6

Table 4.7 Marks for Different Pre-Test Questions

The maximum number of marks allocated for each Pre-Test is 30; therefore the maximum mark for the nine Pre-Tests is 270. Information regarding each student's progress through the Pre-Test Roadmap is recorded in the database as follows:

• Each completed question within a topic, whether it was answered correctly, and the number of marks awarded for it.
• The established proficiency level for each completed assessment topic and the total number of marks awarded for the section.

4.4.4 Proficiency Test Roadmap

This Roadmap is similar to the Pre-Test Roadmap; however, it aims to provide a quicker and more interesting means of assessment, which may be more appealing for students who are not required to answer numerous questions (Gouli et al., 2001). It is especially targeted at students who have previous programming experience and may not wish to participate in the learning stage, but only to establish their proficiency levels in basic Java. However, this Roadmap can also be used by proficient learners before the learning stage to establish their level of proficiency, or alternatively after the learning stage to establish which level of proficiency they have reached.

Similarly to the Pre-Test Roadmap, the Proficiency Test Roadmap contains nine Proficiency Tests, one for each of the nine topics, starting from Assignment and ending with Classes. Each Proficiency Test contains five questions of different levels of proficiency (Beginner, Novice, Intermediate, Advanced and Expert). The next question to be presented depends on the student's performance, as described in Table 4.8, and through this process the student's level of proficiency is established. Figure 4.17 illustrates how each of these tests works.

Figure 4.17 Proficiency Test Roadmap (the student starts at the Intermediate question; correct answers move towards the Expert question, incorrect answers towards the Beginner question and No Knowledge)

The student begins the test at the Intermediate proficiency level, because this level is assumed when the learner's initial proficiency level is not available (Wainer et al., 2000).

Intermediate question – If answered correctly: a more difficult question is presented (Advanced). If answered incorrectly: a less difficult question is presented (Novice).
Advanced question – If answered correctly: a more difficult question is presented (Expert). If answered incorrectly: the test is terminated and the student's proficiency level is Intermediate.
Expert question – If answered correctly: the test is terminated and the student's proficiency level is Expert. If answered incorrectly: the test is terminated and the student's proficiency level is Advanced.
Novice question – If answered correctly: the test is terminated and the student's proficiency level is Novice. If answered incorrectly: a less difficult question is presented (Beginner).
Beginner question – If answered correctly: the test is terminated and the student's proficiency level is Beginner. If answered incorrectly: the test is terminated and the student does not have knowledge in this topic.

Table 4.8 How Questions are Selected in Proficiency Tests

A different number of marks is also allocated for correctly answering questions of different levels of proficiency. Table 4.9 shows the number of marks allocated for each type of question in this section:

Type of Question: Number of Marks
Beginner: 3
Novice: 4
Intermediate: 5
Advanced: 6
Expert: 7

Table 4.9 Marks for Different Proficiency Test Questions
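The selection rule of Table 4.8 can be sketched as a small state machine. The following Java rendering is our own illustration, not the thesis implementation; it consumes the student's answers in order (true = correct) and returns the established level. Note that at most three of the five questions are ever posed on a single run:

```java
import java.util.Iterator;
import java.util.List;

// Illustrative state machine for the question-selection rule in Table 4.8.
public class ProficiencyTest {
    static String establishLevel(List<Boolean> answers) {
        Iterator<Boolean> it = answers.iterator();
        String question = "Intermediate";            // assumed starting level
        while (true) {
            boolean correct = it.next();
            switch (question) {
                case "Intermediate":
                    question = correct ? "Advanced" : "Novice";
                    break;
                case "Advanced":
                    if (correct) { question = "Expert"; } else { return "Intermediate"; }
                    break;
                case "Expert":
                    return correct ? "Expert" : "Advanced";
                case "Novice":
                    if (correct) { return "Novice"; } else { question = "Beginner"; }
                    break;
                default: // Beginner question
                    return correct ? "Beginner" : "No knowledge";
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(establishLevel(List.of(true, true, true)));    // Expert
        System.out.println(establishLevel(List.of(true, false)));         // Intermediate
        System.out.println(establishLevel(List.of(false, false, false))); // No knowledge
    }
}
```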

Similar information to that recorded for the Pre-Test Roadmap, regarding each student's progress through the Proficiency Test Roadmap, will be stored in the database.

4.4.5 Lesson Instruction Roadmap

The motivation for this Roadmap is to utilize learning objects which contain textbook content and code examples to present key programming concepts, and to demonstrate how learning objects can be selected from a repository to form different learning paths for individual learners. Adamchik (2003) pointed out that this is an effective and low-cost way of transferring programming knowledge from the teacher to the student.

We have implemented this Roadmap consistently with the ideas of Active and Passive Sequencing, which are branches of Curriculum Sequencing. Active Sequencing "implies a learning goal (a subset of domain concepts or topics to be mastered) and systems with Active Sequencing can build the best individual path to achieve the goal" (Brusilovsky, 1999). Passive Sequencing, on the other hand, is a reactive technology: for example, if a student is unable to solve a problem, then a subset of the available learning materials is offered to the student (Ibid).

The Lesson Instruction Roadmap contains lessons for the nine Java topics, starting from Assignment and ending with Classes. Each lesson contains many screens of learning materials regarding the topic. These consist of factual information, examples, code examples and tips for writing code, and review questions, which are multiple choice questions. We have implemented the learning stage to meet three specific requirements, as follows:

• Students have different levels of proficiency, and therefore learning materials have been constructed according to three different proficiency levels (Novice, Intermediate and Advanced) (Wu, 2002).
• Some students may require more detailed explanations and examples than others (even though their proficiency levels may be the same). Therefore we have implemented additional supplementary learning materials in each topic, which students have the option to view, or alternatively to omit if they understand the material (Ibid).
• Review questions are implemented because these help students become more involved in the learning process and encourage higher interactivity between the system and the student (ASTD et al., 2002).

Students are required to attempt two review questions at the end of each topic, which are of Intermediate or Advanced level. The level is dependent on whether the student required supplementary materials to help them fully understand the topic. Figure 4.18 illustrates this:

Figure 4.18 Lesson Instruction Roadmap (the student starts with the primary learning materials and either proceeds directly to Advanced Questions 1 and 2, or consults the supplementary learning materials and then answers Intermediate Questions 1 and 2)

Two different amounts of marks are allocated for students who correctly answer the two questions of different levels of proficiency, as shown in Table 4.10:

Type of Question: Number of Marks
Intermediate: 5
Advanced: 6

Table 4.10 Marks for Different Lesson Instruction Questions

The maximum numbers of marks allocated for each lesson and for the Lesson Instruction Roadmap are 12 and 108 respectively. Information about each student's progress through the Lesson Instruction Roadmap is stored, including each assimilated topic, whether supplementary materials were consulted, and the number of marks awarded for each topic.
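Under our reading of Figure 4.18 and Table 4.10, the marks for a single lesson could be computed as in the following sketch (the names and signature are hypothetical, not the actual OCTA code). Two correct Advanced answers give the stated per-lesson maximum of 12, and nine lessons at 12 marks each give the Roadmap maximum of 108:

```java
// Hypothetical sketch: marks for one lesson's two review questions.
// Our reading of Figure 4.18: students who needed the supplementary
// materials answer the Intermediate questions (5 marks each, Table 4.10),
// the rest answer the Advanced questions (6 marks each).
public class LessonReview {
    static int marksForLesson(boolean usedSupplementary,
                              boolean q1Correct, boolean q2Correct) {
        int perQuestion = usedSupplementary ? 5 : 6;
        int marks = 0;
        if (q1Correct) marks += perQuestion;
        if (q2Correct) marks += perQuestion;
        return marks;
    }

    public static void main(String[] args) {
        System.out.println(marksForLesson(false, true, true)); // prints 12
    }
}
```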

4.4.6 Performance Test Roadmap

The motivation for this Roadmap is to determine whether students are able to complete a specific task based on the skills acquired during the learning stage, and to provide feedback to the students on their overall performance. The aim of this is to distinguish students of different levels of proficiency (Wainer et al., 2000). This type of assessment can be regarded as a summative final exam (ASTD et al., 2002). There are also secondary aims of this assessment, as follows:

• To help students learn and realize the applications of Java. Therefore we allow the students to refer to the Lesson Instruction Roadmap if they struggle with this assessment.
• After the assessment, topics which students have difficulties with, if any, are established, and the students are recommended to return to the Lesson Instruction Roadmap to learn those topics again.

The task for this Roadmap is to complete a Noughts and Crosses game by providing the correct answers to multiple choice questions, which subsequently combine to form the code for the implementation of the game. Our Noughts and Crosses game task consists of 27 questions which are divided into six sections, and together these examine each of the basic Java topics. An overall score has been allocated for the task, and questions from the various topics are assigned different weighted scores according to their level of difficulty. The maximum number of marks allocated for this section is 135, and Table 4.11 shows the number of marks allocated for each topic. (Note that Input and Output are not tested in this section.)

Topic: Number of Marks
Assignment: 2
Expressions: 3
If-Statements: 4
For-Loops: 5
Arrays: 6
Methods: 7
Classes: 8

Table 4.11 Marks for Different Performance Test Questions

The six sections of the Noughts and Crosses game are illustrated in Figure 4.19:

Section 1 Declaration of Variables (Tests Assignment)
Section 2 NextPlayer Method (Tests Expressions, If-Statements and Methods)
Section 3 MakePlay Method (Tests Assignment, For-Loops and Methods)
Section 4 IsWin Method (Tests If-Statements, Arrays and Methods)
Section 5 NewGame Method (Tests Assignment, If-Statements, Arrays and Methods)
Section 6 Main Method and NoughtAndCrosses Class (Tests Methods and Classes)

Figure 4.19 Performance Test Roadmap

4.4.7 Metadata for our Learning Objects

We decided that it would be most appropriate to create a Learning Object Metadata Application Profile which is conformant to the IEEE Learning Object Metadata (LOM) standard. This approach allows us to maintain the metadata elements which are essential for our learning objects and to incorporate a streamlined subset of elements from the IEEE LOM – the Educational category – which are the most important for our use. These are shown in Table 4.12:

5.1 InteractivityType
5.2 LearningResourceType
5.3 InteractivityLevel
5.4 SemanticDensity
5.5 IntendedEndUserRole
5.6 Context
5.7 TypicalAgeRange
5.8 Difficulty
5.9 TypicalLearningTime
5.10 Description
5.11 Language

Table 4.12 Educational Category of the IEEE LOM standard

The overall Problem DTD, including the additional metadata and the Learning Object Application Profile, is displayed in Figure 4.20. (Note that the Logic part is excluded.)

<!ELEMENT problem ( learningObject, screen+, logic )>
<!ATTLIST problem name ID #REQUIRED>
<!ELEMENT learningObject EMPTY>
<!ATTLIST learningObject interactivityType CDATA #IMPLIED>
<!ATTLIST learningObject learningResourceType CDATA #IMPLIED>
<!ATTLIST learningObject interactivityLevel CDATA #IMPLIED>
<!ATTLIST learningObject semanticDensity CDATA #IMPLIED>
<!ATTLIST learningObject intendedEndUserRole CDATA #IMPLIED>
<!ATTLIST learningObject context CDATA #IMPLIED>
<!ATTLIST learningObject typicalAgeRange CDATA #IMPLIED>
<!ATTLIST learningObject difficulty CDATA #IMPLIED>
<!ATTLIST learningObject typicalLearningTime CDATA #IMPLIED>
<!ATTLIST learningObject description CDATA #IMPLIED>
<!ATTLIST learningObject language CDATA #IMPLIED>
<!ELEMENT screen (elem+)>
<!ATTLIST screen name ID #REQUIRED>
<!ATTLIST screen difficultyLevel CDATA #IMPLIED>
<!ATTLIST screen assessmentArea CDATA #IMPLIED>
<!ATTLIST screen materialCovered CDATA #IMPLIED>
<!ATTLIST screen assessmentLevel CDATA #IMPLIED>
<!ATTLIST screen answer CDATA #IMPLIED>

Figure 4.20 The Overall Problem DTD
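A hypothetical Problem fragment conforming to this DTD might look as follows. All attribute values are illustrative rather than taken from the actual repository, and the required logic element is omitted here, as in Figure 4.20:

```xml
<problem name="Pre-Test1-Assignment">
  <learningObject interactivityType="active"
                  learningResourceType="exam"
                  interactivityLevel="medium"
                  difficulty="easy"
                  typicalLearningTime="PT10M"
                  language="en"/>
  <screen name="Q1" difficultyLevel="novice"
          assessmentArea="Assignment" answer="b">
    <elem>...</elem>
  </screen>
</problem>
```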

4.5 Summary

We have described our existing novel software OCTA, which uses Problems and Roadmaps to adapt learning materials to users, and the enhancements constructed upon it for the purpose of this thesis. A non-adaptive Java programming course has been imported into OCTA and converted into an adaptive one which uses reusable learning objects. We have examined ways to construct learning paths and have selected four appropriate ones – Pre-Test, Proficiency Test, Lesson Instruction and Performance Test. Two extra categories of metadata have been incorporated into our learning objects – the metadata for Assessment and the Educational category of the IEEE LOM standard.

Chapter 5 – Evaluation

A fully working model of our web-based adaptive learning and testing tool OCTA has been described in the previous chapter; however, the quality of our pedagogic tool must be ensured through an evaluation process. Evaluation is a very important process and is a "means to investigate, provide evidence, learn, share and make judgements about what we do and how we do it" (Dempster, 2004). There are new issues and responsibilities which have to be considered and addressed with e-learning, since it is still a new technology. Answers should be provided as to whether "learning has been enabled, enriched or enhanced through the use of technology" (Ibid) as compared with the traditional learning approach. An important element to consider when evaluating e-learning is that evaluating good educational design is equally as important as evaluating the technological approach, and "effective evaluation should include a close scrutiny of the pedagogical rationale and outcomes of the curriculum and assessment design which e-learning is intended to support" (Ibid).

The aims of our evaluation are as follows:

• To determine the effectiveness of the learning materials within our four Roadmaps and whether each Roadmap is effective for its given purposes.

• To establish whether our Roadmaps are successful in assisting students with learning introductory Java programming, and to examine how accurately the test questions can assess the students' knowledge.
• To establish whether the students' learning experience has been enhanced through the use of our web-based tool.

5.1 Methodology

To help us with the evaluation process, we asked a number of students to spend time working through OCTA, which contains the four Roadmaps, and subsequently provide us with feedback. Advertisements were placed in both the Computer Science and the Mathematics departments, seeking first year computer science or mathematics (or combined courses) undergraduate students to volunteer for this evaluation assignment. The assignment required each student to spend approximately one hour on the software, and each student was paid £5 for the assignment. We decided that this payment would encourage a higher number of student volunteers and also encourage a higher quantity and quality of constructive feedback.

Each student performed the evaluation assignment individually in a laboratory, and each was given their own user name and password for the system so that their progress could be tracked individually. Prior to the evaluation assignments, students were asked to complete the Student Questionnaire (see Appendix G) to provide their educational background information. They were also asked a number of questions to determine how confident they were with basic Java; this allowed us to select appropriate Roadmaps for them, for example:

• Students who felt competent were asked to perform the Proficiency Test and the Performance Test Roadmaps.
• Students who were not so confident were asked to perform the Pre-Test Roadmap and work with the Lesson Instruction Roadmap.

After login, students were asked to provide feedback on the learning content after the completion of each test or lesson within a Roadmap. The feedback was recorded and analyzed later. The number of Roadmaps that students had sufficient time to complete was dependent on how fast they could read and answer questions, and on how detailed their feedback was. Each student had sufficient time to complete at least one Roadmap, and many of them managed to complete two Roadmaps.

5.1.1 Details of Student Volunteers

A relatively small number (twelve) of students volunteered to perform the evaluation. The possible contributing factors for this small number included the timing of our evaluation, which took place during or shortly after the students' exam period; students may have been busy with revision or may have wanted to relax after their exams. It was also approaching the end of term when we placed the advertisements, and students may have been busy with packing and/or moving home. However, despite the small number of volunteers, the quality of the feedback was extremely helpful and constructive, and we felt that it was not necessary to obtain more volunteers to perform further evaluation. This is also supported by Dempster (2004), who points out that a well-aimed, small evaluation which can retrieve many details about a system is more effective than a large, expensive, aimless one which does not gather much information about the usage of the system.

Table 5.1 shows the details of the twelve student volunteers, referred to as Students A – L to preserve their anonymity. Their course, gender, computing background and selected Roadmaps are displayed. Nine of these students, Students A – I, were first year undergraduates at the time of the evaluation, and Students J – L were PhD students. Six of the undergraduate students were enrolled on BSc Computer Science, and the remaining three were enrolled on BSc Maths with Computing, MEng Computer Systems Engineering and BSc Computer & Management Sciences respectively. Note that PhD students were asked to perform our evaluation to determine whether our Roadmaps would be useful for students with a high proficiency level in Java.

Student: Course / Gender / Computing Background / Selected Roadmaps
A: BSc Computer Science / M / None / Proficiency Test & Performance Test
B: BSc Computer Science / M / GCSE & A-Level / Proficiency Test & Performance Test
C: BSc Computer Science / M / GCSE / Lesson Instruction
D: BSc Computer Science / M / GCSE & A-Level / Pre-Test
E: BSc Computer Science / M / GCSE & A-Level / Pre-Test & Performance Test
F: BSc Computer Science / F / A-Level / Pre-Test & Performance Test
G: BSc Maths with Computing / F / GCSE & A-Level / Pre-Test
H: MEng Computer Systems Engineering / M / None / Proficiency Test & Lesson Instruction
I: BSc Computer & Management Sciences / M / A-Level / Lesson Instruction & Performance Test
J: PhD Computer Science / F / Computing Certificate / Proficiency Test & Lesson Instruction
K: PhD Computer Science / F / None / Pre-Test
L: PhD Computer Science / F / None / Pre-Test & Lesson Instruction

Table 5.1 Details of Student Volunteers

Given the different computing backgrounds of these students before they entered university, it would appear that there was a diverse range of programming abilities amongst them. A number of them had undertaken GCSE and/or A-Level computing whereas some had not; moreover, these GCSE or A-Level courses involved varying degrees of programming, ranging from none to programming in languages such as Basic or Visual Basic. It is also interesting to note that, although both Students A and H had not completed GCSE or A-Level in computing, Student A had self-studied the C++ programming language to a reasonably proficient level, whereas Student H did not have any understanding of the basic concepts of programming.

By the end of the first year, however, the diversity of the students' programming abilities should be significantly reduced: despite the various computer science and mixed degree courses, first year undergraduate students are required to undertake two Java-based core modules, which are CS118 Programming for Computer Scientists and CS126 Design of Information Structures (University of Warwick, 2004a). The full details of the modules within each of the various undergraduate degree courses can be found on our Computer Science Department website (University of Warwick, 2004b, 2004c). The syllabi of these two modules are shown in Table 5.2:

CS118 Programming for Computer Scientists: Fundamentals of Programming; Object-Oriented Programming; Design, Construction and Testing of Programs using Java.
CS126 Design of Information Structures (with Java): Simple Types and their Properties; Abstract Data Types; Algorithms, including Searching and Sorting Algorithms.

Table 5.2 Syllabi of Core Computer Science Modules

5.1.2 Constraints of the Evaluation

There are a number of constraints within our evaluation process, as follows:

• Our learning materials were originally constructed as part of an undergraduate final year project by a single author; issues about the quality of the materials may therefore arise during the evaluation. The difficulty level of the material within each topic was also assigned by a single author and there may be issues regarding its accuracy and preciseness.

• The materials were targeted at high school or college students; however, given our time constraints, it was not possible to seek school students to perform our evaluation, and there was not sufficient time to develop new learning materials from scratch for this thesis which target university students. It was therefore decided that it would be appropriate to test the learning materials on first year computer science undergraduates, since their programming experience is usually relatively limited at this stage.

• It would not have been ideal to ask students to perform the evaluation assignment for more than an hour, as the students may have become overloaded with information or become unmotivated or disinterested, and this would have affected the results of our evaluation. This prohibited the possibility of asking students to complete all four Roadmaps and compare them. We therefore asked different students to evaluate the different Roadmaps and provide feedback individually.

5. We anticipate that this may result in students providing us similar feedback regarding the learning materials between the different Roadmaps. This is especially the case between the Pre-Test and the Proficiency Test Roadmaps as the learning content is very similar but the main difference is the adaptive testing algorithm structure.• The four Roadmaps were relatively similar to some extent because they each had learning materials from the same nine Java topics and questions were reused in the different assessments.2 Research Questions and Evaluation Results .

"Pedagogical effectiveness must be evidenced in terms of student learning" (Dempster, 2004), and our evaluation process aims to provide answers to a number of research questions. This section presents our evaluation results for the four Roadmaps – Pre-Test, Proficiency Test, Lesson Instruction and Performance Test – and for our web-based learning approach.

5.2.1 Pre-Test Roadmap

Recall that this Roadmap contains nine Pre-Tests and was designed and targeted at novice students. Each Pre-Test has six questions – two novice, two intermediate and two advanced. If a student answers each question in a Pre-Test correctly, all six questions are presented; otherwise the test terminates and the student's proficiency level is established.

Is the apparent ordering of topics from the evaluation of the Pre-Test Roadmap consistent with the perceived ordering of topics from our research investigation?
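Before turning to this question, the Pre-Test termination rule described above can be sketched in Java for concreteness. This is our own minimal interpretation rather than OCTA's actual source code: the class, method and level names are invented, and we assume the established level is simply that of the hardest question answered correctly before the first miss.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the Pre-Test termination rule (not OCTA source code).
public class PreTest {

    public enum Level { NO_KNOWLEDGE, NOVICE, INTERMEDIATE, ADVANCED }

    // A test question at a given difficulty level.
    public record Question(String text, Level level, String answer) {}

    // Questions are presented in ascending difficulty. The test terminates at
    // the first incorrect answer; the established proficiency level is assumed
    // to be that of the hardest question answered correctly so far.
    public static Level run(List<Question> questions,
                            Predicate<Question> answersCorrectly) {
        Level established = Level.NO_KNOWLEDGE;
        for (Question q : questions) {
            if (!answersCorrectly.test(q)) {
                return established;   // terminate early: level established
            }
            established = q.level();  // hardest level answered correctly
        }
        return established;           // all six questions were presented
    }
}
```

Under this interpretation, a student who answers the two novice and two intermediate questions but misses the first advanced question would be assigned the intermediate level.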

The nine Pre-Tests within this Roadmap were completed by six students and their proficiency levels in each topic were assigned. Four of these students were first year undergraduates and the remaining two were PhD students; three were male and three were female. Table 5.3 shows the six students' assigned proficiency levels in the nine topics and the mean proficiency level of each topic (note that No Knowledge = 0, Novice = 1, Intermediate = 3 and Advanced = 5):

Table 5.3 Students' Proficiency Levels in the Pre-Tests

Topic | Student D | Student E | Student F | Student H | Student K | Student L | Mean Proficiency Level
1 Assignment | 5 | 1 | 5 | 5 | 5 | 1 | 3.67
2 Expressions | 3 | 3 | 3 | 5 | 5 | 3 | 3.67
3 Output | 1 | 5 | 5 | 5 | 1 | 0 | 2.83
4 Input | 5 | 5 | 3 | 5 | 0 | 5 | 3.83
5 If-Statements | 5 | 5 | 1 | 5 | 5 | 0 | 3.5
6 For-Loops | 5 | 5 | 5 | 5 | 5 | 0 | 4.17
7 Arrays | 5 | 5 | 0 | 5 | 5 | 0 | 3.33
8 Methods | 1 | 5 | 5 | 1 | 5 | 1 | 3.0
9 Classes | 5 | 0 | 3 | 3 | 5 | 3 | 3.17

Figure 5.1 illustrates the average assigned proficiency level of each topic in ascending order:

[Bar chart: the mean assigned proficiency level of each of the nine topics, plotted in ascending order.]

Figure 5.1 Average Assigned Proficiency Levels in Pre-Tests

As is evident from the bar chart above, the mean assigned proficiency level of each topic appears not to be consistent with the perceived ordering of topics from our research investigation. There are two possible explanations for this, as follows:

• Our previous research investigations ascertained inaccurate results on the students' perceived difficulty levels in basic Java. This is unlikely, given the rigour of both our Literature Survey and Student Questionnaire investigations.

• The learning materials were originally targeted at high school or college students and relatively easy test questions were used. This may also mean that the questions in the more difficult topics were made easier to compensate, and that difficult components were omitted; the complexity of the more difficult topics may therefore not be reflected. Similar questions were also used, to ensure that novice students fully understood the material. Our learning materials can be adapted more suitably for university students by ensuring that all complex components within topics are included and by providing a larger variety of questions.

Is the Pre-Test Roadmap accurate in establishing students’ proficiency levels? If not, why not?

A way to measure whether this Roadmap is accurate in establishing the students’ proficiency levels is to determine whether the students agreed with their assigned proficiency level of each topic. Note that this may not be wholly accurate since a student is not necessarily the best judge of their own ability. The six student volunteers who completed this Roadmap mostly agreed that each Pre-Test was accurate in reflecting their level of proficiency in the various topics. However, there were four exceptions to this – Student F did not agree with her assigned proficiency level in the Arrays and If-Statements topics; Student H did not agree with his assigned level in Methods and Student K did not agree with her assigned level in Output.

Students F, H and K were asked to provide reasons for this disagreement and the main reason for this was the Misunderstanding of a Question. Two factors may result in this, as follows:

• A poorly phrased or ambiguous question can lead to an incorrect answer. Abdullah (2003) highlights the importance of the clarity of test questions and how these should be phrased according to the target audience.

• Too many similar questions can demotivate and distract students, which can lead to incorrect answers. Abdullah (2003) also emphasised that demotivation and boredom can cause careless slips, and it is therefore important to have a large variety of questions to keep students motivated in the assessment. Indeed, a number of students commented that they would prefer more varied questions as this would be more interesting.

From this, it is possible to conclude that the testing algorithm appears to function correctly and that the students' disagreement with their assigned proficiency levels was a result of the quality of the learning materials.

Does the Pre-Test Roadmap encourage and motivate novice students?

The students' experiences within this Pre-Test Roadmap suggest that it is encouraging and motivating for novice students. For example, one student commented that the test was easy to start with and that the difficulty of the questions was suitably paced. Another student commented that the difficulty curve was very gentle, and this encouraged him to continue with the test questions. This is consistent with Abdullah (2003), who accentuates the importance of maintaining the confidence of students attempting test questions, especially at the beginning of a test, where a question of a lower difficulty level than the student is capable of may often be presented. Student F – a female student – was assigned high levels of proficiency for the topics; however, she commented that she felt the questions were relatively easy but that she did not wish to attempt more difficult questions in case she answered them incorrectly.

Is there a sufficient number of questions used to establish students’ proficiency levels in this Roadmap?

We asked the student volunteers to comment on whether a sufficient number of questions was used to determine the level of proficiency for a topic. The majority of the students who agreed with their assigned proficiency levels felt that there was a sufficient number of questions. This is consistent with the view of Abdullah (2003), who points out that a test should terminate if new information regarding the student's knowledge would not be revealed by further questioning. However, the three students who disagreed with their assigned levels for one or two of the topics commented that there should have been more questions. They argued that the test did not provide a fair representation of their level on those topics, because they were not given a second chance to demonstrate what they knew. It is possible to conclude that the difficulty levels of the material within the topics are not precise enough, and that this resulted in the software inaccurately assigning students' proficiency levels; it was not a problem with the adaptive test algorithm structure.

Is there a difference in the proficiency levels between PhD students and undergraduate students?

We observed a significant difference between the proficiency levels of the PhD students and the first year undergraduate students: the assigned proficiency levels of the PhD students were unexpectedly much lower than those of the undergraduate students. Many of the undergraduate students commented that they found the questions reasonably easy, given that they had been relearning and revising for their recent Java exams. The PhD students, on the other hand, commented that it had been a long time since they studied Java and that they could not remember many of Java's syntax details. From this, we can conclude that although PhD students may be more proficient in basic Java, their Java knowledge was not fresh in their minds, and this is why their assigned proficiency levels in the topics were lower. In contrast, the less proficient first year undergraduate students performed much better in the Roadmaps, due to their recent Java exam revision.

5.2.2 Proficiency Test Roadmap

Recall that this Roadmap contains nine Proficiency Tests and was designed and targeted at more proficient students. Each Proficiency Test has five questions – one each at beginner, novice, intermediate, advanced and expert level. The first question presented is at intermediate level and, depending on whether the student answers it correctly, an appropriate question is selected next.
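The branching just described can be sketched as follows. This is a hedged interpretation rather than OCTA's actual algorithm: we assume a simple "staircase" – climb while the student answers correctly, descend while they answer incorrectly – with the assigned level being the hardest question answered correctly (and 0 if no question is answered correctly). The class and method names are our own.

```java
import java.util.function.IntPredicate;

// Hypothetical sketch of the Proficiency Test branching (not OCTA source code).
public class ProficiencyTest {

    // Levels: 1 = Beginner, 2 = Novice, 3 = Intermediate, 4 = Advanced, 5 = Expert.
    // answersCorrectly.test(level) simulates the student's answer to the single
    // question at that difficulty level. Returns the assigned proficiency level:
    // the hardest question answered correctly, or 0 if none was.
    public static int assess(IntPredicate answersCorrectly) {
        int level = 3; // the first presented question is at intermediate level
        if (answersCorrectly.test(level)) {
            // climb towards the expert question until one is missed
            while (level < 5 && answersCorrectly.test(level + 1)) {
                level++;
            }
            return level;
        }
        // descend towards the beginner question until one is answered correctly
        while (level > 1) {
            level--;
            if (answersCorrectly.test(level)) {
                return level;
            }
        }
        return 0; // no question answered correctly
    }
}
```

For example, a student who can answer everything up to the advanced question would be asked the intermediate, advanced and expert questions in turn and be assigned level 4.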

Is the apparent ordering of topics from the evaluation of the Proficiency Test Roadmap consistent with the perceived ordering of topics from our research investigation?

The nine Proficiency Tests within this Roadmap were completed by five students and their proficiency levels in each topic were assigned. Four of these students were first year undergraduates and the other was a PhD student; three were male and two were female. Table 5.4 shows the five students' assigned proficiency levels in the nine topics and the mean proficiency level of each topic (note that Beginner = 1, Novice = 2, Intermediate = 3, Advanced = 4 and Expert = 5):

Table 5.4 Students' Proficiency Levels in the Proficiency Tests

Topic | Student A | Student B | Student G | Student I | Student J | Mean Proficiency Level
1 Assignment | 5 | 3 | 5 | 5 | 2 | 4.0
2 Expressions | 5 | 5 | 5 | 5 | 5 | 5.0
3 Output | 5 | 5 | 5 | 2 | 5 | 4.4
4 Input | 5 | 5 | 2 | 5 | 5 | 4.4
5 If-Statements | 5 | 5 | 5 | 5 | 5 | 5.0
6 For-Loops | 5 | 2 | 5 | 4 | 5 | 4.2
7 Arrays | 5 | 5 | 5 | 3 | 5 | 4.6
8 Methods | 2 | 5 | 5 | 5 | 5 | 4.4
9 Classes | 5 | 3 | 5 | 5 | 5 | 4.6

Figure 5.2 illustrates the average assigned proficiency level of each topic in ascending order:

[Bar chart: the mean assigned proficiency level of each of the nine topics, plotted in ascending order.]

Figure 5.2 Average Assigned Proficiency Levels in the Proficiency Tests

Similar to the Pre-Test Roadmap, the mean assigned proficiency level of each topic appears not to be consistent with the perceived ordering of topics from our research investigation, and the two possible reasons are again inaccurate results from our research investigation and/or the low level of difficulty of the questions. Table 5.4 shows the large number of Expert (5) levels which were assigned to these students. This appears to indicate that these students have a high level of proficiency in basic Java. Hence, it is possible to conclude that the test questions are too easy for the students, and consequently the test is unable to accurately establish the students' proficiency levels.

Is the Proficiency Test Roadmap accurate in establishing students' proficiency levels? If not, why not?

The results for this question are similar to those for the students who performed the Pre-Test Roadmap. All five of the students agreed with their assigned proficiency levels in most of the topics; however, they each disagreed with at least one of their assigned proficiency levels – Student A did not agree with his assigned level in Methods, Student B did not agree with his levels in Assignment and For-Loops, Student G did not agree with her level in Input, Student I did not agree with his level in Output and Student J did not agree with her level in Assignment. The main reason for this disagreement was again the Misunderstanding of a Question, and more factors were revealed in this Roadmap, as follows:

• It is more difficult to read, understand and perform logic on a question which is displayed on the screen than on one which is on paper.

• Student A was a non-native English speaker and he had many problems understanding the questions.

It is possible to conclude that the adaptive testing algorithm appears to function correctly, and that the students' disagreement with their assigned proficiency levels was a result of the quality of the learning materials and the display medium.

Does the Proficiency Test Roadmap encourage and motivate proficient students?

The proficient students' experiences with this Roadmap show that it is encouraging and motivating for them. For example, one student commented that the test was able to assign the proficiency level for each topic very quickly and that he did not have to attempt many questions. Another student commented that the questions of a higher difficulty level were much more complex, and that this would distinguish the more proficient students from the rest. Student J – a PhD student – also commented that only a few questions had to be answered before a proficiency level was assigned, that it was inspiring to attempt the different tests in the nine topics, and that if she did not perform well on one topic, she was motivated to try harder in the next.

Is there a sufficient number of questions used to establish students' proficiency levels in this Roadmap?

Similar to the Pre-Test Roadmap, the students commented that there was generally a sufficient number of questions to determine the level of proficiency for each topic, except in the instances where they disagreed with their assigned proficiency level. Again, it is possible to conclude that it would be more ideal to present further questions to students who answer a question incorrectly before terminating the test.

5.2.3 Lesson Instruction Roadmap

Recall that this Roadmap contains two types of learning materials – primary materials, which are essential for students to learn in this Roadmap, and supplementary materials, which are optional. Questions of different levels of proficiency are presented to students depending on whether they select the supplementary materials.

Is the Lesson Instruction Roadmap effective in helping students learn Java, and are the questions presented to the students at the appropriate level of proficiency?

Students C, G, J and L completed this Roadmap and positive feedback regarding its effectiveness was received from all of them, as follows:

• Student C, a proficient student, commented that the primary learning materials were very easy to follow and very effective for gaining an understanding of basic Java, and that the option to view supplementary materials was very useful for novice students. Since the Roadmap provided detailed information about Java's syntax, it can be used as an online reference tool, which he preferred to having to consult textbooks. He pointed out that this teaching aid would be helpful for providing background knowledge prior to undertaking the CS118 module. He also commented that the test questions were

simpler than textbook questions and were at the right level of difficulty, which is encouraging.

• Student G, enrolled on BSc Maths with Computing, commented that the materials were very easy to follow and that they helped her reiterate many trivial concepts, such as the different types of quotes for different simple data types, which can be both confusing and hard to remember. She pointed out that this Roadmap was very suitably paced and that she was able to follow the material very easily, because the prerequisite topics were introduced before the new complex concepts within topics such as Methods and Classes. This view is consistent with that of Jenkins (2001), who argues that novice students may not be able to understand a new concept if they do not understand previous learning materials. She also pointed out that the option to return to previous topics to review some concepts was ideal, as this allowed her to refer back when learning a complex topic which requires prerequisite knowledge of a simpler topic.

• Student J, a PhD student, commented that both the primary and supplementary learning materials helped her gain a better understanding of many topics, especially Classes, which makes it easier to gain knowledge of Java. One of her comments was interesting: she pointed out that the proficiency level of the questions was appropriate for students who have not previously learnt programming; however, she felt that the questions were relatively easy, and they

may be too easy for students who have learnt procedural programming. This view is consistent with that of Jenkins (2001), who pointed out that students with procedural programming experience are able to learn object-oriented programming much more easily.

• Finally, Student L, another PhD student, commented that it was a long time since she had studied Java and that she found the test questions relatively difficult. However, she commented that the supplementary learning materials were very useful and that the answer explanations helped her ascertain why she had answered a question correctly or incorrectly. She found the easier questions more effective in helping her to reiterate her Java knowledge step by step, and she found it useful that she was able to undertake learning at her own pace.

It is evident that this Roadmap appears to be effective for students learning programming, especially novice students. The quality of both the primary and supplementary learning materials can be enhanced, as can the accuracy and preciseness of the difficulty levels of the test questions within this Roadmap. This will ensure that both the learning materials and test questions are adapted correctly to individual students.

Is the ordering of topics appropriate for teaching Java?

We asked the students to comment on whether the ordering of topics was appropriate for teaching Java and most of the students commented that the order was effective for learning Java, as follows:

• Student L commented that the Assignment and Expressions topics were very easy and that it was encouraging to begin learning programming with these topics. She commented that the Expressions topic was slightly more difficult than Assignment, but that this was a steady incremental change of difficulty between the two topics, which motivated her to continue learning. She also commented that Classes was the most difficult topic and that it was therefore ideal that it was presented last.

• Student G commented that one change could be made to the topic ordering, namely presenting the Classes topic before the Methods topic, as this would make the relationship between Classes, Objects and Methods clearer. On the contrary, Student C commented that the Methods topic should precede Classes, since Classes cannot be constructed properly without knowing Methods details such as public and private entities. He also commented that Methods was an easier topic than Classes and that it would be preferable to learn Methods before Classes. Similarly, Student J pointed out that there is no problem presenting the Methods topic before Classes providing

that the wording used to explain both of these topics is phrased clearly, to ensure that students gain a clear understanding of both of them. The same also applies to the Input topic, where some details of Classes are involved.

The overall feedback from students indicated that their preferred ordering was consistent with the ordering of topics established in our research investigation.

Does this Roadmap cover sufficient concepts of basic Java?

Most of the students found that this Roadmap provided good background knowledge of Java and would be beneficial to use before learning Java at degree level. However, the students suggested that additional complex concepts and additional topics could be added to cover other aspects of Java, such as Inheritance, Polymorphism, Method Overriding, Copying Arrays, and Switch and Nested-If Statements.

5.2.4 Performance Test Roadmap

Recall that this Roadmap contains test questions which form the code for a Noughts and Crosses game, and that it acts as a reviewing process for the Lesson Instruction Roadmap.

Does the Performance Test Roadmap effectively assess the nine basic Java topics?

Students A, B, E, F and I attempted the Performance Test Roadmap and each provided positive feedback regarding the Roadmap's effectiveness for assessing the basic Java topics (excluding Input and Output, as these were not included). This Roadmap was effective for both novice and proficient students. For example, a proficient student commented that this Roadmap was able to determine very quickly his overall knowledge of each of the basic Java topics and that it acted as a "quick and efficient review check". On the other hand, a novice student commented that even though she answered some questions incorrectly, she was able to ascertain why, as the explanations were very good. Also, at the end of the test, she was able to find out which were her strong and weak topics, so that additional work could be done on the weak ones. One student, however, suggested that there should be more test questions on Classes and Objects, as these are really important. Additionally, one student commented that this Roadmap could also be used as a small section of assessed coursework towards the end of the CS118 module. It is

possible to conclude that this Roadmap can effectively assess the nine basic Java topics, and that any errors arising from this are caused by the content or quality of the learning materials.

Is this Roadmap interesting for students for assessing Java?

Most of the students commented that the Performance Test Roadmap provided a more interesting way of assessing Java compared to the other three Roadmaps, and that the game exercise was more motivating for them. This view is supported by a number of authors, as follows:

• There are many advantages in using the design and implementation of computer games for students to learn programming (Jones, 2000); for example, students' problem-solving skills can be enhanced and little effort seems to be needed to motivate students. Game exercises can "offer many learning opportunities for early programmers" (Ross, 2002) and they "combine fun and education in an attractive problem domain for learning how to program" (Gibson, 2003).

• Excellent examples of object-oriented design and programming are demonstrated when designing and implementing computer games. Similarly, Lorenzen & Heilman (2002) implemented computer games as Java applets and included them in all programming assignments at their department, in order to add fun to the course and to keep the students interested.

5.2.5 Web-based Learning Approach

This section addresses the pedagogical effectiveness of our web-based learning software tool. It examines whether any of the results of the evaluation process have been affected by the design of our web-based tool, and whether it is beneficial compared with traditional learning methods.

Are any of the results of the evaluation process affected by the design of our web-based tool?

Most of the students commented that the layout was very simple, clear, effective and easy to navigate, and that they would use the software again. They also commented that there was a sufficient amount of learning material on each page – not too little and not too much – and that there were no distractions on the page to cause them to lose their concentration. Students suggested that it would be ideal if animations and pictures were added to the tool, as these can help explain complex concepts. It is possible to conclude that, given the simple features of the web-based tool, students were not distracted by it and their learning performance could be maximised. This conclusion is consistent with the view of Ryan (2001), who points out that screens should be kept simple and should not be overloaded with information, as this can frustrate and demotivate students and affect their learning process.

5.3 Summary

We have asked students to evaluate the four Roadmaps within OCTA and the feedback received from them was mostly positive. The Pre-Test and Proficiency Test algorithms appear to be suitable for novice and proficient students respectively. The Lesson Instruction Roadmap appears effective for helping students learn programming, especially novice students, and the Performance Test Roadmap appears useful as a review check for the Lesson Instruction Roadmap.

Chapter 6

Conclusions

This chapter discusses the Summary of Thesis Work, Research Contributions and Future Work.

6.1 Summary of Thesis Work

Four main activities were performed for the completion of this thesis. We first surveyed the pedagogical literature on learning objects, the applications and usage of XML, the standards initiatives for web-based learning, and adaptive learning and testing. Secondly, we examined ways to teach Java programming more effectively. Via a Literature Survey, we isolated three approaches to teaching Java and ascertained a general difficulty ordering of topics in basic Java. Results from a Student Questionnaire investigation then allowed us to compare the professionals' apparent ordering of topics with the students' perceived difficulty levels of these topics, hence allowing us to determine a suitable topic ordering for students.

Our third main activity was to enhance our novel software OCTA. An overview of its physical and logical structure was presented, as well as its two main components – Problems and Roadmaps – which are the underlying mechanism for constructing personalized learning content for students. We have designed, implemented and incorporated into our software four modules for learning Java programming. We have incorporated learning objects into OCTA, using two categories of metadata for our learning materials – one is an Assessment category and the other is the Educational category of the IEEE LOM standard. Our learning materials and test questions can be reused in the different modules, which demonstrates the reusability of learning objects, and the learning and testing materials are adapted to individual learners by our software. Our system is also able to import learning objects conformant with the Learning Object Metadata standard, and our objects, with the appropriate metadata, can also be exported to other web-based learning environments. Finally, we asked students to evaluate our four Roadmaps; it appears that the learning materials are effective in assisting students with the learning of Java programming and that the different tests are accurate in assigning students' proficiency levels. The students' learning experiences appear to have been enhanced through the use of our web-based tool.
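As an illustration of this tagging, a learning object carrying the LOM Educational category might be described roughly as follows. This is a hedged sketch only: the element names follow the Educational category of the IEEE 1484.12.1 (LOM) standard and its XML binding, but the title and values shown are invented examples, and our Assessment category – which is our own addition rather than part of the standard – is omitted.

```xml
<!-- Illustrative fragment only: invented values; Educational category per IEEE LOM -->
<lom xmlns="http://ltsc.ieee.org/xsd/LOM">
  <general>
    <title>
      <string language="en">For-Loops: Lesson Instruction</string>
    </title>
  </general>
  <educational>
    <interactivityType>
      <value>mixed</value>
    </interactivityType>
    <learningResourceType>
      <value>exercise</value>
    </learningResourceType>
    <difficulty>
      <value>easy</value>
    </difficulty>
    <context>
      <value>higher education</value>
    </context>
  </educational>
</lom>
```

A record along these lines is what would allow another LOM-conformant environment to import one of our learning objects and select it by difficulty or resource type.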

6.2 Research Contributions

There are three main research contributions arising from this thesis. The first is the identification of three principal approaches for teaching Java, and an ordering of topics in basic Java which may be effectively presented to students. These can also be applied to the teaching of other object-oriented languages such as C++.

Our second contribution has been our exercise in the development of reusable learning objects, with the appropriate metadata, from online learning materials. This supports the feasibility of transferring traditional learning materials into reusable learning objects. The possibility of using reusable learning objects to form a web-based course has also been addressed, as well as the ability to form different personalized courses for different students using these learning objects.

Our third main contribution has been the evaluation of the novel OCTA architecture. The existing software OCTA, developed at the University of Warwick, has been used to construct four small simple modules to teach students Java programming, to illustrate how learning objects can be reused and to provide each student with individual learning content. Through this process, we were able to ascertain that OCTA is effective in assisting students with learning.

6.3 Future Work

Our learning objects will be offered as a contribution to the Codewitz International Project for Better Programming Skills⁴ to allow a larger community to access these learning objects. Future directions of research include the following:

• The development of learning objects to incorporate other learning situations into OCTA, to provide different learning opportunities for students. Different learning modes of learning objects can also be constructed to meet the requirements of students with different learning styles; for example, Role-Play Simulation objects can be created for students with an active learning style.

• The OCTA Roadmap framework can be enhanced to incorporate student models containing, for example, learning goals. This will allow learning materials and test questions to be adapted to users according to their own criteria.

4 Note that the Department of Computer Science at the University of Warwick is a partner of this project.

References

Abdullah, S. (1999) Pre-modelling for Examination Revision through Adaptive Testing. Human Centred Technology Postgraduate Workshop 1999, University of Sussex.

Abdullah, S. (2003) Student Modelling by Adaptive Testing – A Knowledge Based Approach. PhD thesis, University of Kent at Canterbury.

Abdullah, S. & Davis, H. (2003) Is Simple Sequencing Simple Adaptive Hypermedia? Hypertext and Hypermedia Conference 2003.

Adamchik, V. & Gunawardena, A. (2003) A Learning Objects Approach to Teaching Programming. International Conference on Information Technology: Computers and Communications 2003.

ADL (2003a) SCORM Overview. http://www.adlnet.org/index.cfm?fuseaction=scormabt (Accessed 18/09/04)

ADL (2003b) The ADL SCORM Specification v 1.2. http://www.staffs.ac.uk/COSE/cosenew/SCORM.doc (Accessed 26/10/03)

ADL (2004) ADL Overview. http://www.adlnet.org/ (Accessed 14/07/04)

Albert, D. & Hockemeyer, C. (2002) Applying Demand Analysis of a Set of Test Problems for Developing Adaptive Courses. International Conference on Computers in Education 2002.

Albert, D., Hockemeyer, C., Conlan, O. & Wade, V. (2001) Reusing Adaptive Learning Resources. International Conference on Computers in Education 2001.

Allison, Orton & Powell (2002) A Virtual Learning Environment for Introductory Programming. LTSN Centre for Information and Computer Science, 2002.

Anderson, L. et al (2001) A Taxonomy for Learning, Teaching and Assessing – A Revision of Bloom's Taxonomy of Educational Objectives. Addison Wesley Longman, Inc.

Arroyo, I., Conejo, R., Guzman, E. & Woolf, B. (2001) An Adaptive Web-based Component for Cognitive Ability Estimation. University of Massachusetts, 2001.

ASTD & Smartforce (2002) A Field Guide to Learning Objects. www.learningcircuits.org/2002/jul2002/smartforce.pdf (Accessed 07/04/04)

Barikzai, S. (2003) Designing Reusable Java Teaching Modules. http://www.ics.ltsn.ac.uk/pub/jicc7/Safia2.doc (Accessed 10/07/04)

& Boyle. C. University of Warwick. Bennett.Barr. (2002) The Promise and Pitfalls of Learning Objects. D. Bradley. & Skepper.. S. M. (2004) Student Evaluation of the use of learning objects in introductory programming.. . Hansen. T. reusable learning objects. EDUCAUSE Information Resources Library. Holden. S. Unpublished MEng Dissertation. Brose. 1998. Addison Wesley. S. & Metros. SIGCSE Technical Symposium on Computer Science Education 1999... Hypermedia & Telecommunications 2004. F. Design principles for authoring dynamic. (2003). Bishop. Boyatt. T. V. (1999) An Exploration of Novice Programming Errors in an Object-Oriented Environment . J (1998) Java Gently Programming Principles Explained.. Diaz. World Conference on Educational Multimedia. Phillips. A (1981) A One-Year Introductory Course for Computer Science Undergraduate Program. Technical Symposium on Computer Science Education 1981. Boyle. & Greening. T. McArthur. K. Australian Journal of Educational Technology 2003. (2003) Online Computer Teaching Aid Project Report. C. Behforooz. 2002. D. R.

Special Issue on Intelligent Systems and Teleteaching. Callear. & Monga. CanCore (2004) Canadian Core Learning Resource Metadata Application Profile . & Pausch. (2004) Designing a Simple Sequence for RLOs. Burton.. LTSN Centre for Information and Computer Science 2000. P.ca/indexen.Brusilovsky. L. J. The IASTED International Conference on Web-Based Education 2004. (2004) Learning Objects and Tests. SIGCSE Technical Symposium on Computer Science Education 2003. IEEE Computer Society Technical Committee on Learning Technology newsletter. Mckinney. http://www. W. SIGCSE Technical Symposium on Computer Science Education. April 2004.html (Accessed 18/09/04) Cesarini. & Bruhn. . M.. Sawywe. Kunstliche Intelligenz. Mazzoni. Dann. & Zhang. P. S. (2003) Teaching Objects First in Introductory Computer Science. R. M. R. Cooper. (2003) Teaching Programming in the OOP Era. P (1999) Adaptive and Intelligent Technologies for Web-based Education. 2003. S...cancore. J. Chen. S (2000) Teaching Programming: Some Lessons From Prolog .

J.warwick. Carr. University of Sheffield. http://www.uio.ifi. (2003) XML Transformations.no/~olejohan/birth-of-oo.ac.uk/go/cap/resources/eguides (Accessed 27/06/04) Dingli.html (Accessed 29/07/04) Cover Pages (2003) SCORM Initiative. http://xml. Centre of Academic Excellence.Cover Pages (2002) Standard Generalized Markup Language.dcs. 2000. S.ltsn. (2000) Programming. Cooke. Brooks/Cole.org/scorm. R. www. S.uk/pub/conf2001/papers/Davis.pdf (Accessed 31/08/04) Davis.html (Accessed 26/10/03) Dahl. E. (2001) The Birth of Object Orientation: the Simula Languages . Decker.shef.ac.ics.ac. A. L.coverpages.org/sgml.. Dempster.java – An Introduction to Programming Using Java. (2004) CAP e-Learning guides: Evaluating E-Learning.coverpages. (2001) Managing Diversity: Experiences Teaching Programming Principles. H. University of Warwick. http://xml. O.htm (Accessed 10/07/04).uk/~alexiei/WebSite/University/ (Accessed 26/10/03) . http://heim. & Hirshfield. www.. & White.

Kay.edu/numsse/Fall2003/691N/Lecture%203. Available at: http://netserver. & Weibel. & Wirmalaratne.Downes.html (Accessed 29/07/04) Dublin Core Metadata Initiative (2003) Dublin Core Metadata Element Set. (2003) Introduction to XML. S. W. International Review of Research in Open and Distance Learning. M. (2001) Learning Objects: resources for distance education worldwide. S. E. Sutton.wvu.org/dlib/april02/weibel/04weibel. L. Kingston. J. SIGCSE Technical Symposium on Computer Science Education 2000. http://www.dlib. S. K. (2000) Supporting Reflection in Introductory Computer Science. (2000) Programming in Java: Student-Constructed Rules.org/documents/dces/ (Accessed 31/10/03) Duval. Hodgins.. http://dublincore. J.irrodl..cerc. (2002) Metadata Principles and Practicalities. . A.1/downes. D-Lib Magazine. http://www.html (Accessed 31/08/04) Evanoff.pdf 18/06/04) (Accessed Fekete.. SIGCSE Technical Symposium on Computer Science Education 2000. A. Fleury.org/content/v2..

(2003) Semantic & Syntactic Interoperability for Learning Object Metadata. Georgouli.exegenix. Gibson. Gibbons. http://www. (1998) Java: First Contact. G (1998) Structured Programming in Java. (2002) The Design of a ‘Motivating’ Intelligent Assessment System . .html (Accessed 07/04/04) Garside. 2002. R. College of St. Course Technology. International Conference on Intelligent Tutoring Systems. J (2003) A Noughts and Crosses Java Applet to Teach Programming to Primary School Children. K.com/resources/xml/ (Accessed 23/10/03) Gibbons. www. N. SIGCSE Technical Symposium on Computer Science Education 1998. (2001) The basics of XML: The way to understanding XML applications. 1998. Scholastica. J.cancore. Germann. Principles and Practice in Programming in Java 2003. Metadata in Practice 2004. T (2002) Using Graphics in the First Year of Programming with C++. R. & Mariani.ca/semantic_and_syntactic_interoperability.Friesen. 2002.

O’Reilly. J. M. Watkins.Gouli.. et al (2002) XML in a Nutshell: A Desktop Reference Guide. Innovation and Technology in Computer Science Eduction 1998. K. Stubbs. (2001) Adaptive Assessment Improving Interaction in an Educational Hypermedia System. G. & Hodson. H. M. (2004) Converting Existing Course Materials into Learning Objects: An Exemplar in a School of Computing . Griffiths. Conference on Human Computer Interaction 2001. (2002) Adaptive eLearning and the Learning Grid. International Conference in Advanced Learning Technologies 2004. LEGE-WG International Workshop on Educational Models for GRID Based Services.. R.. Kornilakis. Hadjerrouit (1998b) A Constructivist Framework for Integrating the Java Paradigm into the Undergraduate Curriculum. Hockemeyer. E. & Dietrich. C. P. Harold. & Grigoriadou.. Papanikolaou. A. . SIGCSE Technical Symposium on Computer Science Education 1998. Hadjerrouit (1998a) Java as First Programming Language: A Critical Evaluation .

imsglobal. ltsc.pdf (Accessed 07/04/04) IEEE LTSC (2004) WG12 – Learning Object Metadata.ieee.html (Accessed 20/06/04) IMS Global Learning Consortium (2002) IMS Question & Test Interoperability: ASI Best Practice & Implementation Guide .org/crossroads/xrds4-4/introjava.org/enterprise/enbest03.ieee.org/wg12/ (Accessed 19/06/04) IMS Global Learning Consortium (1999) IMS Enterprise Best Practice and Implementation (Accessed 28/08/04) Guide.org/profiles/lipinfo01.org/question/qtiv1p2/imsqti_asi_bestv1p2.imsglobal. http://www. http://ltsc.org/wg12/files/LOM_1484_12_1_v1_Final_Draft. http://www. ACM Electronic Publication.html IMS Global Learning Consortium (2001) IMS Learner Information Packaging Information Model Specification.Hong.html (Accessed 18/06/04) IEEE LTSC (2002) Draft Standard for Learning Object Metadata. (Accessed http://www.imsglobal. (1998) The Use of Java as an Introductory Programming Language . J.html 20/06/04) . http://www.acm.

imsglobal.imsglobal. http://www.imsglobal.org/aboutims.html (Accessed 27/08/04) IMS Global Learning Consortium (2004b) About IMS.html 23/10/03) .org/metadata/mdv1p3pd/imsmd_bestv1p3pd.1-2002 Standard for Learning Object Metadata.html (Accessed 28/08/04) IMS Global Learning Consortium (2003b) IMS Simple Sequencing Best Practice and Implementation Guide. http://www. (Accessed http://xml.gov/presentations/loc/xml_search_engine_vendors.org/content/packaging/cpv1p1p3/imscp_bestv1p1p3.12.html (Accessed 29/08/04) IMS Global Learning Consortium (2004a) IMS Meta-data Best Practice Guide for IEEE 1484. http://www.cfm (Accessed 18/09/04) Jarrard.IMS Global Learning Consortium (2003a) IMS Content Packaging Best Practice Guide. J. http://www. (2001) Convera RetrievalWare Search Engine.org/simplesequencing/ssv1p0/imsss_bestv1p0.imsglobal.

http://www. Jenkins.Jefferies.html (Accessed 10/07/04) Jenkins. T (1998) A Participative Approach to Teaching Programming.ltsn. Innovation and Technology in Computer Science Education 1998. L.ac.ltsn. Johnson. Elusive Vision: Challenges Impeding the Learning Object Economy . T (2003) The First Language – A Case for Python? . T (2001) The Motivation of Students of Programming.uk/pub/conf2002/Jefferies. 2001. LTSN Centre for Information and Computer Science. T (2002) On the Difficulty of Learning to Program . Jenkins.pdf (Accessed 03/08/04). Jenkins.uk/pub/italics/issue1/tjenkins/003. J (2001) Diversity and Motivation in Introductory Programming.ics. R (2002) Size matters – Teaching Initial Programming to Large groups of students.nmc. T & Davy. A & Barrett. Jenkins. . (2003).org/pdf/Elusive_Vision. http://www. www. University of Kent.html (Accessed 10/07/04). Masters Thesis.ics. LTSN Centre for Information and Computer Science. 2003.ac. 2002.

Jones. Joy. (2000) Design and Implementation of Computer Games: A Capstone Course for Undergraduate Computer Science Education . (1998) Visual Programming with Java: Evaluation of an Introductory Programming Course . Journal of Object-oriented programming 1999. M. Muzykantskii.. F. M. Sint. International Conference on Advanced Learning Technologies 2004. Kneiling. (2002) An Infrastructure for Web-based Computer-Assisted Learning. SIGCSE Technical Symposium on Computer Science Education 2000. R. & Evans. (1999) The Blue Language. Innovation and Technology in Computer Science Education 1998. M. & Wester. M. (2004) A Conceptual Framework for Web-based Intelligent Learning Environments using SCORM-2004. ACM Journal on Educational Resources in Computing. S. .. P.. L. Kluit. S. (2002) XML and Web Integrating Services for Enterprise Applications . Kazi. J. Address (Accessed 22/10/03) Koelling. B. Rawles.

Innovation and Technology in Computer Science Education 2001. SIGCSE Technical Symposium on Computer Science Education 2000. Lee. (2000) Myths about Object-Orientation and Its Pedagogy. (2003) e-Learning and e-Assessment: Impacts and Benefits of MOL and ICAM . K. K. .Kolling. Makela. SIGCSE Technical Symposium on Computer Science Education 2002. C. (2004) Setting and Sharing Web-Based Assessments. The IASTED International Conference on Web-Based Education 2004. Ala-Mutka. (2001) Guidelines for Teaching Object Orientation with Java. Lewis. J.. http://pnclink. W. & Peltonen. Lim. P. M. & Soon-Ng.pdf (Accessed 29/08/04) Lorenzen. K. Tham. (2004) An Implementation of a LO Repository with Version Control. E-learning Competency Centre. T. & Rosenberg. & Heilman.org/annual/annual2003/programe/presenpdf/110821. W. A. J.. S. April 2004. Lim. A. (2002) CS1 and CS2: Write Computer Games in Java!. International Conference on Computers in Education 2003. & Goh. (2003) Meta-data Implementations in Singapore. J. IEEE Computer Society Technical Committee on Learning Technology newsletter.

P. P. Mohan.html (Accessed 18/09/04) Meisalo. Suhonen. & Jones. International Conference on Advanced Learning Technologies 2004. World Conference on Educational Multimedia. & Greer. V. Mohan. SIGCSE Technical Symposium on Computer Science Education 2003.. R.edu/index. J. . P. K.Massachusetts Institute of Technology (2004) MIT OpenCourseware. Hypermedia and Telecommunications 2003.. (2004) Reusable Online Learning Resources: Problems. J. & Sutinen E. (2003) Developing Intelligent Programming Tutors for Novice Programmers. Innovation and Technology in Computer Science Education 2002. (2004) Learning Objects for Introductory Computer Programming. (2002) Formative Evaluation Scheme for a Web-based Course Design. (2003) Reusable Learning Objects: Current Status and Future Directions. Pickard.. April 2004. Torvinen. Solutions and Opportunities. Pillay. Fisher. http://ocw.mit. N. IEEE Computer Society Technical Committee on Learning Technology newsletter. S.

Rabb. . Informing Science. (2000) Conservatively Radical Java in CS1. R. V. SIGCSE Technical Symposium on Computer Science Education 2000. S. & Freisleben. Rasala. (2001) Java Power Tools: Model Software for Teaching Object-Oriented Design. Innovation and Technology in Computer Science Education 2002. N. (2002) Learning Object Repository Technologies for TeleLearning: The Evolution of POOL and CanCore . June 2002. (2000) Programming Patterns and Design Patterns in the Introductory Computer Science Course. SIGCSE Technical Symposium on Computer Science Education 2000. G. & Friesen. R (2002) Objects from the Beginning. B. Rößling. SIGCSE Technical Symposium on Computer Science Education 2001. (2000) Experiences in Using Animations in Introductory Computer Science Lectures.. Proulx. V.. SIGCSE Technical Symposium on Computer Science Education 2000.. J. R.Proulx. J & Rasala. McGreal. Reges. V. G. Raab. & Proulx. Richards.

Ryan. & Mueller. (2001) The Human-Computer Interface: Challenges for Educational Multimedia and Web Designers. Santos. SIGCSE Technical Symposium on Computer Science Education 2000. J. Available at: http://www. (2003) On the Use of E-learning Standards in Adaptive Learning Systems. & Hagan. Sayers. International Conference on Advanced Learning Technologies 2003. (2003) Teaching Java Programming: Determining the needs of First Year Students. Schloss. L. H. (2002) Guiding Students through Programming Puzzles: Value and Examples of Java Game Assignments .Ross. & Llamas. J. LTSN Centre for Information and Computer Science. M.net.. SIGCSE Technical Symposium on Computer Science Education 2001. I. Anidao. (2000) A Fundamentals-based Curriculum for First Year Computer Science. (2000) Ten best bets for XML applications. 2003. B.digitalearth. M. SIGCSE Technical Symposium on Computer Science Education 2002. C.. Nicell. C. S. Sanders.cn/GISRelatedITIssues/XML/ (Accessed 18/06/04) .

Pesin. April 2004.standards (Accessed 18/09/04) Smith. L. http://www.nmc. 2003. Klemke.. R. Shackelford.. & LeBlanc Jr. (1997) Introducing Computer Science Fundamentals Before Programming.sg/cocoon/ecc/website/standards/singcore. Specht. R. University of Debrecen. Kravcik.Semmens. (2003) A First Course in Computer Science: Languages and Goals.org. NMC: The New Media Consortium.org/guidelines/NMC%20LO %20Guidelines. E-learning Competency Centre (2004) Singapore Learning Resource Identification Specification.pdf (Accessed 27/06/04) Smolarski. (2004) The Potential for Learning Objects to Support Flexible Learning in Higher Education. . M. IEEE Computer Society Technical Committee on Learning Technology newsletter.. R (2004) Guidelines for Authors of Learning Objects. R. Adaptive Hypermedia 2002. P. R. D.ecc. http://www.. (2002) Adaptive Learning Environment for Teaching and Learning in WINDS . M. Frontiers in Education Conference 1997. & Huttenhain.

htm (Accessed 03/09/04) Sun Microsystems (2004a) Java Servlet Technology.trainingplace. http://java. http://www.com/source/research/adaptivelearning.ucas.Steffler.ac. http://java.com/products/servlet/index.sun. WhySmallTalk.warwick.com/products/jdbc/ (Accessed 23/09/07) Training Place (2004) Adaptive Learning.dcs. Tyma.com/smalltalk_reasons/index. www.uk/undergraduate (Accessed 18/09/04) . www. (Accessed http://www.htm 08/08/04).whysmalltalk.uk (Accessed 18/09/04) University of Warwick (2004a) Courses Taught in Computer Science. (1998) Why are we using Java again?. P. UCAS (2004) Universities & Colleges Admissions Services. J (2004) Reasons to Develop in SmallTalk.sun.jsp (Accessed 23/09/04) Sun Microsystems (2004b) JDBC Technology. Communications of the ACM 1998.ac.

B.uk/undergraduate/modules/cs126. J. M.ac. International Conference on Computers in Education 2003. & Magnusson. (2000) Computerized Adaptive Testing: A Primer (Second Edition) Lawrence Erlbaum Associates Inc.ac.University of Warwick (2004b) CS118 Programming for Computer Scientists .html 14/09/04) University of Warwick (2004c) CS126 Design of Information Structures. (2003) Designing Learning Objects for an Advanced IT Course. H.html 14/09/04) Van Roy.warwick. Y. Wang. (Accessed http://www. et al..dcs. SIGCSE Technical Symposium on Computer Science Education 2003. Department of Computer Science. IEEE Computer Society Technical Committee on Learning Technology newsletter. & Kin. Wei. Armstrong. H. P. . P. & Li. (2003) The Role of Language Paradigms in Teaching Programming. T.warwick.. Department of Computer Science. April 2004. (Accessed http://www.: New Jersey.uk/undergraduate/modules/cs118. (2004) Considering Model-based Adaptivity for Learning Objects. Flatt.dcs. Wainer.

(2003) Learning Objects: Difficulties and Opportunities . a metaphor. (2002) Connecting Learning Objects on Instructional Design Theory: a definition. Information Disciplines.usask.w3. http://www. Utah State University. Masters Thesis. (2002) Designing a Reusable and Adaptive E-Learning System .html (Accessed 23/10/03) .usu. 1997. C. http://www.Org (2003) Mathematical Markup Language (MathML). D.org/Math/whatIsMathML. http://wiley. http://www.Weisert.edu/docs/lo_do.pdf (Accessed 07/04/04) W3. http://www.ed.org/granularity.pdf (Accessed 29/08/03) Wiley.pdf (Accessed 20/06/04) Wu. D. www. H. Inc.org/XML/ (Accessed 18/06/04) W3.doc (Accessed 19/06/04) Wiley.w3. and taxonomy. University of Saskatchewan.cs..reusability.Org (2004) Extensible Markup Language (XML). (1997) Learning to Program: It Starts with Procedural.reusability.ca/faculty/cooke/HGthesis. Gibbons & Recker (2000) A Reformation of the Issue of Learning Object Granularity and its implications for the design of Learning Objects .org/read/chapters/wiley. Wiley.

J.geocities.w3schools.com/xsl/xsl_languages. & Joy. (2004a) Introducing Java: the Case for Fundamentals-first. M. (2004b) Adaptive Learning and Testing with Learning Objects .W3 Schools (2004a) XML Tutorial. International Conference on Education and Information Systems: Technologies and Applications 2004. to appear in Proceedings of International Conference Computers in Education 2004. J. http://www.com/xml/ (Accessed 18/06/04) W3 Schools (2004b) XSL Languages. J. http://www. uk. (2002) Learning Java by Jane Yau: An Introductory Java Programming Course.w3schools. M.asp (Accessed 31/07/04) Yau. . & Joy. Yau.com/jane_yau/ (Accessed 07/04/04) Yau.
