Data Structures and Algorithms
Table of Contents
Front Page
Course Outline
1. Introduction
2. Programming Strategies
   2.1 Objects and ADTs
       2.1.1 An Example: Collections
   2.2 Constructors and destructors
   2.3 Data Structure
   2.4 Methods
   2.5 Pre- and post-conditions
   2.6 C conventions
   2.7 Error Handling
   2.8 Some Programming Language Notes
3. Data Structures
   3.1 Arrays
   3.2 Lists
   3.3 Stacks
       3.3.1 Stack Frames
   3.4 Recursion
       3.4.1 Recursive Functions
       3.4.2 Example: Factorial
4. Searching
   4.1 Sequential Searches
   4.2 Binary Search
   4.3 Trees
5. Complexity
   5. Complexity (PS)
6. Queues
   6.1 Priority Queues
   6.2 Heaps
7. Sorting
   7.1 Bubble
   7.2 Heap
   7.3 Quick
   7.4 Bin
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/ds_ToC.html (1 of 3) [3/23/2004 2:23:29 PM]
   7.5 Radix
8. Searching Revisited
   8.1 Red-Black trees
       8.1.1 AVL trees
   8.2 General n-ary trees
   8.3 Hash Tables
9. Dynamic Algorithms
   9.1 Fibonacci Numbers
   9.2 Binomial Coefficients
   9.3 Optimal Binary Search Trees
   9.4 Matrix Chain Multiplication
   9.5 Longest Common Subsequence
   9.6 Optimal Triangulation
10. Graphs
   10.1 Minimum Spanning Tree
   10.2 Dijkstra's Algorithm
11. Huffman Encoding
12. FFT
13. Hard or Intractable Problems
   13.1 Eulerian or Hamiltonian Paths
   13.2 Travelling Salesman's Problem
14. Games

Appendices
A. ANSI C
B. Source code listings
C. Getting these notes
Slides
Slides from 1998 lectures (PowerPoint).
Course Management
Key Points from Lectures
Workshops
Past Exams
Tutorials
Texts
Texts available in UWA library
Other online courses and texts
Algorithm Animations
© John Morris, 1998
Data Structures and Algorithms
John Morris, Electrical and Electronic Engineering, University of Western Australia These notes were prepared for the Programming Languages and System Design course in the BE(Information Technology) course at the University of Western Australia. The course covers:
- Algorithm Complexity
  - Polynomial and Intractable Algorithms
- Classes of Efficient Algorithms
  - Divide and Conquer
  - Dynamic
  - Greedy
- Searching
  - Lists
  - Trees
    - Binary
    - Red-Black
    - AVL
    - B-trees and other m-way trees
    - Optimal Binary Search Trees
  - Hash Tables
- Queues
  - Heaps and Priority Queues
- Sorting
  - Quick
  - Heap
  - Bin and Radix
- Graphs
  - Minimum Spanning Tree
  - Dijkstra's Algorithm
- Huffman Encoding
- Fast Fourier Transforms
- Matrix Chain Multiplication
- Intractable Problems
- Alpha-Beta search
The algorithm animations were mainly written by Woi Ang with contributions by Chien-Wei Tan, Mervyn Ng, Anita Lee and John Morris.

Table of Contents
© John Morris, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/index.html [3/23/2004 2:23:31 PM]
PLDS210 Programming Languages and Data Structures
Course Synopsis
This course will focus on data structures and algorithms for manipulating them. Data structures for storing information in tables, lists, trees, queues and stacks will be covered. Some basic graph and discrete transform algorithms will also be discussed. You will also be introduced to some basic principles of software engineering: good programming practice for "long-life" software. For a full list of topics to be covered, view the table of contents page for the lecture notes.
Lectures  1998
There are two lectures every week:

Monday 12 pm, E273
Tuesday 12 pm, AG11
Lecture Notes
A set of notes for this course is available on the Web. From the table of contents page you can jump to any section of the course. There is a home page set up for student information: http://www.ee.uwa.edu.au/internal/ug.courses.html There you will find an entry for course information; you can follow the links to this page and the notes themselves. You can also go directly to the PLSD210 page: http://www.ee.uwa.edu.au/~plsd210/ds/plds210.html
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/plds210.html (1 of 5) [3/23/2004 2:23:35 PM]
Note that the Web pages use the string plds (programming languages and data structures) - a historical accident, which we retain because this label describes the content more accurately!
Printed Notes
For a ridiculously low price, you can obtain a pre-printed copy of the notes from the bookshop. You are strongly advised to do so, as this will enable you to avoid laboriously taking notes in lectures and concentrate on understanding the material. (It will also save you a large amount of time printing each page from a Web browser!)

The printed notes accurately represent the span of the course: you will be specifically advised if examinable material not appearing in these notes is added to the course. (But note that anything appearing in laboratory exercises and assignments is automatically considered examinable: this includes the feedback notes!) However, the Web notes are undergoing constant revision and improvement (comments are welcome!) so you are advised to browse through the Web copies for updated pages. You'll be advised in lectures if there is a substantial change to any section.

Textbooks

The material on data structures and algorithms may be found in many texts: lists of reference books in the library are part of the Web notes. The Web notes are, of necessity, abbreviated and should not be considered a substitute for studying the material in texts.

Web browsers

Web browsers have varying capabilities: the notes were checked with Netscape 2, but should read intelligently with other browsers. If you have problems, I would be interested to know about them, but please note that updating these notes, adding the animations, tutoring and marking your assignments for this course have priority: problems with other browsers, your home computer, etc, will only be investigated if time permits.
Using the notes
The notes make use of the hypertext capabilities of Web browsers: you will find highlighted links to subsidiary information scattered throughout the text. Occasionally these links will point to Web resources which may be located off campus and take some time to download: you may find it productive to use the "Stop" facility on the browser to abort the current fetch - you can try again later when the Net is less heavily loaded. In all cases, the browser's "Back" command should take you back to the original page.

Program source
Example source code for programs will sometimes pop up in a separate window. This is to enable you to scan the code while referring to the notes in the main page. You will probably need to move the source code page out of the way of the main page. When you have finished with the source code page, select File:Close to close the window. Selecting File:Exit will close the window and exit from Netscape - possibly not your intention!
Tutorials - 1997
Exercises for the tutorials and laboratory sessions are also found in the Web pages.

Tutorial Times

Time           Location  Groups      Weeks
Thursday 9 am  E273      2ic,2it13   4-13
Thursday 2 pm  E269      rest        4-13
The first tutorial will be in the fourth week of semester. As long as one tutorial group does not become ridiculously overloaded, you may go to whichever tutorial suits you best.
Laboratory Sessions  1998
There will be two formal introductory laboratory sessions early in the semester - watch these pages for the final details. These sessions will be in laboratory G.50. After the first two laboratories, a tutor will be available in G.50 every week at times to be advertised. The tutor will advise on any problems related to the whole course: assignments, lecture material, etc.

You will be able to complete the assignment on any machine which has an ANSI C compiler. Assignments will be submitted electronically: submit programs on the SGI machines; the NT systems in 1.51 may also be used - refer to the submission instructions. Note that you are expected to write ANSI standard C which will run on any machine: programs which won't run on our SGIs risk failure!

In 1998, Java programs written to an acceptable standard will also be accepted. (The standard required for C is set out explicitly: ensure that you understand how to translate the important elements of this to Java before starting work in Java. Seek feedback if uncertain!)
Assessment
Assignments 20%
Written Exam (3 hrs) 80%

As with all other courses with a practical component, the practical assignments are compulsory. Failure to obtain a satisfactory grade in the practical component of the course may cause you to be given a 0 for this component of PLSD210. Since this will make it virtually impossible to obtain more than a faculty pass for the year, failure to do the practical assignments will not only cause you to miss some feedback which may well be useful to you in the written exam, but may cause you to fail the whole unit.

A "satisfactory" grade in assignments is more than 40% overall. Any less will put your whole year at risk. A much safer course is to do the assignments conscientiously, making sure that you understand every aspect of them: assume that the effort put into them will improve your examination mark also.
Assignments  1998
Four assignment exercises will be set for the semester. You should be able to complete most of the first two assignments during the initial laboratory sessions. The 3rd and 4th are more substantial. Completed assignments (which should include a summary report, the program code and any relevant output) should be submitted by following the submission instructions at the end of the Web page. Performance on the assignments will be 20% of your overall assessment for the unit.
Assignments 1 & 2
These will be relatively short and should require only 1 or 2 hours extra work to complete. They contribute 6% of your final assessment. These assignments will provide some feedback on what is expected for the remaining two assignments. You may even find that you can use the (corrected) code from these assignments in the later assignments.
Assignments 3 & 4
For these two assignments, you will be expected to implement one algorithm and test another. You will be assigned an algorithm to implement as assignment 3. You may obtain from one of your class colleagues an implementation of any other algorithm and test it for assignment 4. You must submit them by the dates shown on the assignment sheets. They will constitute the remaining 14% of your assignment assessment.
A minimum standard must be obtained in the assignments to pass the unit as a whole. Failure to attempt the assignments will put you at a severe disadvantage in the exam.

Assignment reports
Each assignment submission should be accompanied by a summary report. The report should be clear and concise: it is unlikely that you will need to write more than 2 A4 pages (or about 120 lines of text).
Report Format
The report should be in plain ASCII text. The 'native form' of any word processor will be rejected. If you prefer to use a word processor to prepare your report, then ensure that you export a plain text file for submission when you have finished: all word processors have this capability. This allows you to concentrate on the content of the report, rather than the cosmetics of its format. However, the general standards for report structure and organisation (title, authors, introduction, body grouped into related paragraphs, conclusion, etc) expected for any other unit apply here also.

Communication

This course attempts to be "paperless" as much as possible! Assignments will be submitted electronically and comments will be emailed back to you. Please ensure that your reports include email addresses of all authors.

The preferred method for communication with the lecturer and tutor(s) is, at least initially, email. All routine queries will be handled this way: we will attempt to respond to all email messages by the next day. If you have more complex problems, email for an appointment (suggest a few times when you will be free). You may of course try to find me in my office at any time (but early in the morning is likely to be a waste of time), but emailing for an appointment first ensures you some priority and enables you to avoid wasting a trip to the 4th floor when there may be zero probability of success!
Continue on the lecture notes.
© John Morris, 1996
Data Structures and Algorithms: Introduction
Data Structures and Algorithms
1. Introduction
This course is designed to teach you how to program efficiently. It assumes that

- you know the basics of programming in C,
- can write, debug and run simple programs in C, and
- have some simple understanding of object-oriented design.
An introduction to object-oriented programming using ANSI standard C may be found in the companion Object First course.
Good Programs
There are a number of facets to good programs: they must

a. run correctly
b. run efficiently
c. be easy to read and understand
d. be easy to debug and
e. be easy to modify.

What does correct mean? We need to have some formal notion of the meaning of correct: thus we define it to mean "run in accordance with the specifications".
The first of these is obvious - programs which don't run correctly are clearly of little use. "Efficiently" is usually understood to mean in the minimum time - but occasionally there will be other constraints, such as memory use, which will be paramount. As will be demonstrated later, better running times will generally be obtained from use of the most appropriate data structures and algorithms, rather than through "hacking", i.e. removing a few statements by some clever coding - or even worse, programming in assembler! This course will focus on solving problems efficiently: you will be introduced to a number of fundamental data structures and algorithms (or procedures) for manipulating them.

The importance of the other points is less obvious. The early history of many computer installations is, however, testimony to their importance. Many studies have quantified the enormous costs of failing to build software systems that had all the characteristics listed. (A classic reference is Boehm's text.) Unfortunately, much recent evidence suggests that these principles are still not well understood! Any perusal of the Risks forum will soon convince you that there is an enormous amount of poor software in use.

The discipline of software engineering is concerned with building large software systems which perform as their users expected, are reliable and easy to maintain. This course will introduce some software engineering principles but we will concentrate on the creation of small programs only. By using well-known, efficient techniques for solving problems, not only do you
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/introduction.html (1 of 2) [3/23/2004 2:23:41 PM]
produce correct and fast programs in the minimum time, but you make your programs easier to modify. Another software engineer will find it much simpler to work with a wellknown solution than something that has been hacked together and "looks a bit like" some textbook algorithm.
Key terms
correct
    A correct program runs in accordance with its specifications.
algorithm
    A precisely specified procedure for solving a problem.

Continue on to Programming Strategies
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Aside - Software Engineering
Data Structures and Algorithms
Software Engineering
The discipline of Software Engineering was founded when it was discovered that large numbers of software projects

- exceeded their budgets,
- were late,
- were riddled with errors and
- did not satisfy their users' needs.
The term is believed to have been coined by a NATO study group in 1967. The first software engineering conference was the NATO Software Engineering Conference held in Garmisch, Germany in 1968.
Software Engineering References
One of the classic early texts is Boehm's book:

B.W. Boehm, "Software Engineering Economics", Prentice-Hall, 1981
The Risks Forum
A continuing saga of problems with software systems is chronicled in the "Risks" section of the ACM journal "Software Engineering Notes". The Risks section has appeared in every issue for more than ten years, i.e. there has been no shortage of material to keep it alive for all that time!
Notes
Clever programming
By the end of this course, you should understand that hacking is far from clever: there are much more effective strategies for making programs run faster!

Back to Introduction
Back to the Table of Contents
© John Morris, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/softeng.html [3/23/2004 2:23:43 PM]
Data Structures and Algorithms: Programming Strategies
Data Structures and Algorithms
2. Programming Strategies
It is necessary to have some formal way of constructing a program so that it can be built efficiently and reliably. Research has shown that this can best be done by decomposing a program into suitable small modules, which can themselves be written and tested before being incorporated into larger modules, which are in turn constructed and tested. The alternative is to create what was often called "spaghetti code", because of its tangle of statements and jumps. Many expensive, failed projects have demonstrated that, however much you like to eat spaghetti, using it as a model for program construction is not a good idea!

It's rather obvious that if we split any task into a number of smaller tasks which can be completed individually, then the management of the larger task becomes easier. However, we need a formal basis for partitioning our large task into smaller ones. The notion of abstraction is extremely useful here. Abstractions are high level views of objects or functions which enable us to forget about the low level details and concentrate on the problem at hand.

To illustrate, a truck manufacturer uses a computer to control the engine operation - adjusting fuel and air flow to match the load. The computer is composed of a number of silicon chips, their interconnections and a program. These details are irrelevant to the manufacturer - the computer is a black box to which a host of sensors (for engine speed, accelerator pedal position, air temperature, etc) are connected. The computer reads these sensors and adjusts the engine controls (air inlet and fuel valves, valve timing, etc) appropriately. Thus the manufacturer has a high level or abstract view of the computer. He has specified its behaviour with statements like:

"When the accelerator pedal is 50% depressed, air and fuel valves should be opened until the engine speed reaches 2500 rpm."
He doesn't care how the computer calculates the optimum valve settings - for instance it could use either integer or floating point arithmetic - he is only interested in behaviour that matches his specification.

In turn, the manager of a transport company has an even higher level or more abstract view of a truck. It's simply a means of transporting goods from point A to point B in the minimum time allowed by the road traffic laws. His specification contains statements like:

"The truck, when laden with 10 tonnes, shall need no more than 20 l/100 km of fuel when travelling at 110 kph."

How this specification is achieved is irrelevant to him: it matters little whether there is a control computer or some mechanical engineer's dream of cams, rods, gears, etc.

There are two important forms of abstraction: functional abstraction and structural abstraction. In functional abstraction, we specify a function for a module, i.e.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/strategies.html (1 of 2) [3/23/2004 2:24:04 PM]
"This module will sort the items in its input stream into ascending order based on an ordering rule for the items and place them on its output stream."

As we will see later, there are many ways to sort items - some more efficient than others. At this level, we are not concerned with how the sort is performed, but simply that the output is sorted according to our ordering rule.

The second type of abstraction - structural abstraction - is better known as object orientation. In this approach, we construct software models of the behaviour of real world items, i.e. our truck manufacturer, in analysing the performance of his vehicle, would employ a software model of the control computer. For him, this model is abstract: it could mimic the behaviour of the real computer by simply providing a behavioural model with program statements like:

if ( pedal_pos > 50.0 ) {
    set_air_intake( 0.78*pedal_pos );
    set_fuel_valve( 0.12 + 0.32*pedal_pos );
}

Alternatively, his model could incorporate details of the computer and its program. However, he isn't concerned: the computer is a "black box" to him and he's solely concerned with its external behaviour. To simplify the complexity of his own model (the vehicle as a whole), he doesn't want to concern himself with the internal workings of the control computer; he wants to assume that someone else has correctly constructed a reliable model of it for him.
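The pedal/valve fragment can be made into a self-contained sketch. Here set_air_intake and set_fuel_valve are stand-in functions invented for illustration: they simply record the most recent setting so the model's external behaviour can be inspected.

```c
/* Stand-ins for the real actuator interfaces: they just record
   the most recent settings (illustrative names, not real APIs). */
static double air_intake = 0.0, fuel_valve = 0.0;

static void set_air_intake( double setting ) { air_intake = setting; }
static void set_fuel_valve( double setting ) { fuel_valve = setting; }

/* One step of the behavioural model from the text */
void control_step( double pedal_pos )
{
    if ( pedal_pos > 50.0 ) {
        set_air_intake( 0.78 * pedal_pos );
        set_fuel_valve( 0.12 + 0.32 * pedal_pos );
    }
}
```

Any implementation with this external behaviour satisfies the manufacturer's specification - which is exactly the point of the abstraction.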
Key terms
hacking
    Producing a computer program rapidly, without thought and without any design methodology.

Continue on to Objects and ADTs
Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Objects and ADTs
Data Structures and Algorithms
2.1 Objects and ADTs
In this course, we won't delve into the full theory of object-oriented design. We'll concentrate on the precursor of OO design: abstract data types (ADTs). A theory for the full object-oriented approach is readily built on the ideas for abstract data types.

An abstract data type is a data structure and a collection of functions or procedures which operate on the data structure. To align ourselves with OO theory, we'll call the functions and procedures methods, and the data structure and its methods a class, i.e. we'll call our ADTs classes. However, our classes do not have the full capabilities associated with classes in OO theory. An instance of the class is called an object. Objects represent objects in the real world and appear in programs as variables of a type defined by the class. These terms have exactly the same meaning in OO design methodologies, except that objects there have additional properties, such as inheritance, that we will not discuss here.

It is important to note that object orientation is a design methodology. As a consequence, it is possible to write OO programs using languages such as C, Ada and Pascal. The so-called OO languages such as C++ and Eiffel simply provide some compiler support for OO design: this support must be provided by the programmer in non-OO languages.
2.2 An Example: Collections
Programs often deal with collections of items. These collections may be organised in many ways and use many different program structures to represent them, yet, from an abstract point of view, there will be a few common operations on any collection. These might include:

create   Create a new collection
add      Add an item to a collection
delete   Delete an item from a collection
find     Find an item matching some criterion in the collection
destroy  Destroy the collection
2.2.1 Constructors and destructors
The create and destroy methods - often called constructors and destructors - are usually implemented for any abstract data type. Occasionally, the data type's use or semantics are such that there is only ever one object of that type in a program. In that case, it is possible to hide even the object's `handle'
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/objects.html (1 of 4) [3/23/2004 2:24:14 PM]
from the user. However, even in these cases, constructor and destructor methods are often provided. Of course, specific applications may call for additional methods, e.g. we may need to join two collections (form a union in set terminology) - or may not need all of these. One of the aims of good program design would be to ensure that additional requirements are easily handled.
2.2.2 Data Structure
To construct an abstract software model of a collection, we start by building the formal specification. The first component of this is the name of a data type - this is the type of objects that belong to the collection class. In C, we use typedef to define a new type which is a pointer to a structure:

typedef struct collection_struct *collection;

Note that we are defining a pointer to a structure only; we have not specified details of the attributes of the structure. We are deliberately deferring this - the details of the implementation are irrelevant at this stage. We are only concerned with the abstract behaviour of the collection. In fact, as we will see later, we want to be able to substitute different data structures for the actual implementation of the collection, depending on our needs.

The typedef declaration provides us with a C type (class in OO design parlance), collection. We can declare objects of type collection wherever needed. Although C forces us to reveal that the handle for objects of the class is a pointer, it is better to take an abstract view: we regard variables of type collection simply as handles to objects of the class and forget that the variables are actually C pointers.
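For concreteness, here is one way the deferred structure might look inside the implementation file. The field names and the array-based representation are illustrative assumptions, not the course's actual collection.c:

```c
#include <stdlib.h>

/* Hidden in the implementation file: clients see only the opaque
   pointer type, never these fields (field names are assumptions). */
struct collection_struct {
    void **items;     /* array of pointers to the stored objects */
    int item_cnt;     /* number of items currently held          */
    int max_items;    /* capacity fixed at construction          */
    int item_size;    /* size of each stored object, in bytes    */
};

typedef struct collection_struct *collection;

collection ConsCollection( int max_items, int item_size )
{
    collection c = malloc( sizeof( struct collection_struct ) );
    if ( c == NULL ) return NULL;
    c->items = calloc( max_items, sizeof( void * ) );
    if ( c->items == NULL ) { free( c ); return NULL; }
    c->item_cnt = 0;
    c->max_items = max_items;
    c->item_size = item_size;
    return c;
}
```

Because the struct is defined only in the .c file, a client that tries to touch c->item_cnt directly will not even compile: the abstraction is enforced by the compiler.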
2.2.3 Methods
Next, we need to define the methods:

collection ConsCollection( int max_items, int item_size );
void AddToCollection( collection c, void *item );
void DeleteFromCollection( collection c, void *item );
void *FindInCollection( collection c, void *key );

Note that we are using a number of C "hacks" here. C - even in ANSI standard form - is not exactly the safest programming language in the sense of the support it provides for the engineering of quality software. However, its portability and extreme popularity mean that it is a practical choice for even large software engineering projects. Unfortunately, C++, because it is based on C, isn't much better. Java, the latest fad in the software industry, shows some evidence that its designers have learned from experience (or actually read some of the literature in programming language research!) and has eliminated some of the more dangerous features of C.
Just as we defined our collection object as a pointer to a structure, we assume that the objects which belong in this collection are themselves represented by pointers to data structures. Hence in AddToCollection, item is typed void *. In ANSI C, void * will match any pointer - thus AddToCollection may be used to add any object to our collection. Similarly, key in FindInCollection is typed void *, as the key which is used to find any item in the collection may itself be some object. FindInCollection returns a pointer to the item which matches key, so it also has the type void *.

The use of void * here highlights one of the deficiencies of C: it doesn't provide the capability to create generic objects, cf. the ability to define generic packages in Ada or templates in C++. Note there are various other "hacks" to overcome C's limitations in this area. One uses the preprocessor. You might like to try to work out an alternative approach and try to convince your tutor that it's better than the one set out here!
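A minimal sketch of the void * idiom (the function name is invented for illustration, not part of collection.h): because the callee cannot know what a void * points at, the caller must supply the object size.

```c
#include <string.h>

/* Generic byte-wise equality test: works for any object type,
   at the price of losing all compile-time type checking. */
int generic_equal( void *a, void *b, int size )
{
    return memcmp( a, b, size ) == 0;
}
```

Any pointer type may be passed as a or b; the compiler will not complain even if the two arguments point at objects of different types, which is both the power and the danger of the idiom.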
2.2.4 Pre and postconditions
No formal specification is complete without pre- and post-conditions. A useful way to view these is as forming a contract between the object and its client. The pre-conditions define a state of the program which the client guarantees will be true before calling any method, whereas the post-conditions define the state of the program that the object's method will guarantee to create for you when it returns.

Again C (unlike Eiffel, for example) provides no formal support for pre- and post-conditions. However, the standard does define an assert function which can (and should!) be used to verify pre- and post-conditions [man page for assert]. We will see how this is used when we examine an implementation of our collection object. Thus pre- and post-conditions should be expressed as comments accompanying the method definition. Adding pre- and post-conditions to the collection object would produce:

Select to load collection.h

Aside

In order to keep the discussion simple at this stage, a very general specification of a collection has been implied by the definitions used here. Often, we would restrict our specification in various ways: for example, by not permitting duplicates (items with the same key) to be added to the collection. With such a collection, the pre- and post-conditions can be made more formal:

Select to load ucollection.h

Note how the pre- and post-conditions now use the FindInUCollection function to more precisely define the state of the object before and after the method has been invoked. Such formal pre- and post-conditions are obviously much more useful than the informal English ones previously specified. They are also easier to translate to appropriate assertions, as will be seen when the implementation is constructed.
2.2.5 C conventions
This specification - which is all a user or client of this object needs to see (he isn't interested in the implementation details) - would normally be placed in a file with a .h (h = header) suffix to its name. For the collection, we would place the specifications in files called collection.h and Ucollection.h and use the C #include facility to import them into programs which needed to use them. The implementation or body of the class is placed in a file with a .c suffix.

References

Some additional sources of information on Object Orientation:
- What is Object-Oriented Software?
- Basic Principles and Concepts of Object-Orientation - an extensive set of notes on OO concepts. Unfortunately most of the bibliography links seem to be outdated.
- Object Oriented Programming - notes for a class at Indiana University (based on Objective-C).
- Object Oriented Programming Languages - summary of OO programming languages (with links to full details).
Key terms
abstract data type (ADT)
    A data structure and a set of operations which can be performed on it. A class in object-oriented design is an ADT. However, classes have additional properties (inheritance and polymorphism) not normally associated with ADTs.

Continue on to Error Handling
Continue on to Arrays
Back to the Table of Contents
© John Morris, 1998
Eiffel
Eiffel is an object-oriented language based on Ada, designed by Bertrand Meyer. It provides formal support for pre- and postconditions by allowing a programmer to insert assertions at appropriate points in the code.
Reference
Bertrand Meyer, "Eiffel: The Language", Prentice Hall, 1992. ISBN 0-13-247925-7. There is also an Eiffel Home Page.
C hacks
C allows you to define a void pointer (void *), which will match a pointer to any type. When you invoke a function whose formal parameter is a void *, you can use any pointer as the actual argument. This allows you to build generic functions which will operate on a variety of data types. However, it bypasses the compiler's type checking and allows you to inadvertently make mistakes that a compiler for a strongly typed language such as Ada would detect for you.
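As a sketch of this technique (the names find_generic and int_eq are ours, not from these notes), a search routine can be written once for any element type by passing a void * base and a comparison function; note that the casts are entirely unchecked by the compiler:

```c
#include <stddef.h>

/* Return a pointer to the first of the n elements (each 'size' bytes)
   at 'base' for which 'match' returns non-zero, or NULL if none does. */
void *find_generic( void *base, size_t n, size_t size,
                    int (*match)(const void *, const void *),
                    const void *target )
{
    char *p = (char *)base;     /* char * permits byte arithmetic */
    size_t i;
    for( i = 0; i < n; i++ )
        if ( match( p + i*size, target ) )
            return p + i*size;
    return NULL;
}

/* A matcher for ints - the casts from void * are unchecked! */
int int_eq( const void *a, const void *b )
{
    return *(const int *)a == *(const int *)b;
}
```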
Hacking
This term probably arises from the MIT expression for what we know in English as a "student prank". MIT students refer to these assaults on the conventions of society as "hacks".
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/C_hacks.html [3/23/2004 2:24:20 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/ANSI_C.html
Data Structures and Algorithms
Appendix A: Languages A.1 ANSI C
Function prototypes ANSI C Compilers
A.2 C++
A.3 Java
Designed by a group within Sun Microsystems, Java has eliminated some of the more dangerous features of C (to the undoubted disappointment of some hackers, who probably achieve their daily highs from discovering new ways to program dangerously in C!). A host of texts on Java have now appeared, possibly setting a new record for the rate of textbook production on any one subject!
References
Back to the Table of Contents
Ada
Ada was designed in response to a US Department of Defense initiative which sought a common higher-order language for all defense applications. Jean Ichbiah's team at Honeywell Bull won the competition for the new language, which was named after Ada Augusta, daughter of Lord Byron and Countess of Lovelace. Ada was Babbage's assistant and thus may claim the title of the first computer programmer. Ada is now an ANSI and ISO standard: Reference Manual for the Ada Programming Language, ANSI/MIL-STD-1815A-1983, Feb, 1983. The language reference manual may also be found as an appendix in some texts, eg J.G.P. Barnes, "Programming in Ada plus Language Reference Manual", 3rd ed, Addison-Wesley, 1991. ISBN 0-201-56539-0. The Ada initiative predates the advent of object-oriented design. However, Ada does support many OO design strategies: it provides excellent support for the construction of Abstract Data Types through its package and private data type facilities. An object-oriented Ada, "Ada 95", has been defined. An online version of the Ada Language Reference Manual is available. It is also available from a number of other sites: any Internet search engine should be able to locate the nearest one for you. A full set of Ada resources - the Ada rationale, the Ada Information Clearing House (Ada IC), etc - is available at the Swiss Federal Institute of Technology in Lausanne (EPFL)'s Ada Resources page.
Data Structures and Algorithms  xx
Data Structures and Algorithms
This section not complete yet!
Back to the Table of Contents
assert
ASSERT(3V)              C LIBRARY FUNCTIONS              ASSERT(3V)

NAME
     assert - program verification

SYNOPSIS
     #include <assert.h>

     assert(expression)

DESCRIPTION
     assert() is a macro that indicates expression is expected to
     be true at this point in the program.  If expression is false
     (0), it displays a diagnostic message on the standard error
     output and exits (see exit(2V)).

     Compiling with the cc(1V) option -DNDEBUG, or placing the
     preprocessor control statement

          #define NDEBUG

     before the ``#include <assert.h>'' statement effectively
     deletes assert() from the program.

SYSTEM V DESCRIPTION
     The System V version of assert() calls abort(3) rather than
     exit().

SEE ALSO
     cc(1V), exit(2V), abort(3)

DIAGNOSTICS
     Assertion failed: file f line n
          The expression passed to the assert() statement at line
          n of source file f was false.

SYSTEM V DIAGNOSTICS
     Assertion failed: expression, file f, line n
          The expression passed to the assert() statement at line
          n of source file f was false.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/collection.h
/* Specification for Collection */

typedef struct t_Collection *Collection;

Collection ConsCollection( int max_items, int item_size );
/* Construct a new Collection
   Precondition:  max_items > 0
   Postcondition: returns a pointer to an empty Collection */

void AddToCollection( Collection c, void *item );
/* Add an item to a Collection
   Precondition:  (c is a Collection created by a call to ConsCollection) &&
                  (existing item count < max_items) && (item != NULL)
   Postcondition: item has been added to c */

void DeleteFromCollection( Collection c, void *item );
/* Delete an item from a Collection
   Precondition:  (c is a Collection created by a call to ConsCollection) &&
                  (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c */

void *FindInCollection( Collection c, void *key );
/* Find an item in a Collection
   Precondition:  (c is a Collection created by a call to ConsCollection) &&
                  (key != NULL)
   Postcondition: returns an item identified by key if one exists,
                  otherwise returns NULL */
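A client of this specification must supply the item type and (for the implementation shown later) an ItemKey function; a hypothetical client might look like this (the part type is our invention, not part of the notes):

```c
/* Hypothetical client-side item for the Collection: the key must be
   a contiguous block of bytes within the item, because the
   implementation compares keys with memcmp. */
typedef struct {
    long part_no;       /* the identifying key */
    char name[32];
} part;

/* Supplied by the client: return a pointer to the item's key */
void *ItemKey( void *item )
{
    return &((part *)item)->part_no;
}
```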
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/ucollection.h
/* Specification of a collection which contains unique items only.
   Assumes that the items of which the collection is composed supply
   an ItemKey method which returns the pointer to an identifying key
   for the item */

typedef struct u_collection_struct *u_collection;

u_collection ConsUCollection( int max_items );
/* Construct a new collection
   Precondition:  None
   Postcondition: returns a pointer to an empty collection */

void AddToUCollection( u_collection c, void *item );
/* Add an item to a collection
   Precondition:  (c was created by a call to ConsUCollection) &&
                  (item != NULL) &&
                  (FindInUCollection(c,ItemKey(item)) == NULL)
   Postcondition: FindInUCollection(c,ItemKey(item)) != NULL */

void DeleteFromUCollection( u_collection c, void *item );
/* Delete an item from a collection
   Precondition:  (c was created by a call to ConsUCollection) &&
                  (item != NULL) &&
                  (FindInUCollection(c,ItemKey(item)) != NULL)
   Postcondition: FindInUCollection(c,ItemKey(item)) == NULL */

void *FindInUCollection( u_collection c, void *key );
/* Find an item in a collection
   Precondition:  (c is a collection created by a call to ConsUCollection) &&
                  (key != NULL)
   Postcondition: returns an item identified by key if one exists,
                  otherwise returns NULL */
Data Structures and Algorithms: Error Handling
Data Structures and Algorithms
2.7 Error Handling
No program or program fragment can be considered complete until appropriate error handling has been added. Unexpected program failures are a disaster: at best, they cause frustration because the program user must repeat minutes or hours of work, but in life-critical applications even the most trivial program error, if not processed correctly, has the potential to kill someone. If an error is fatal, in the sense that a program cannot sensibly continue, then the program must be able to "die gracefully". This means that it must:
- inform its user(s) why it died, and
- save as much of the program state as possible.
2.7.1 Defining Errors

The first step in determining how to handle errors is to define precisely what is considered to be an error. Careful specification of each software component is part of this process. The preconditions of an ADT's methods will specify the states of a system (the input states) which a method is able to process. The postconditions of each method should clearly specify the result of processing each acceptable input state. Thus, if we have a method:

int f( some_class a, int i )
/* PRECONDITION:  i >= 0 */
/* POSTCONDITION:
     if ( i == 0 )
         return 0 and a is unaltered
     else
         return 1 and update a's ith element by .... */
This specification tells us that:

- i == 0 is a meaningless input that f should flag by returning 0 but otherwise ignore,
- f is expected to handle correctly all positive values of i, and
- the behaviour of f is not specified for negative values of i.

That is, it also tells us that it is an error for a client to call f with a negative value of i.
Thus, a complete specification will specify:

- all the acceptable input states, and
- the action of a method when presented with each acceptable input state.
By specifying the acceptable input states in preconditions, it also divides responsibility for errors unambiguously:
- The client is responsible for the preconditions: it is an error for the client to call the method with an unacceptable input state, and
- the method is responsible for establishing the postconditions and for reporting errors which occur in doing so.
2.7.2 Processing errors

Let's look at an error which must be handled by the constructor for any dynamically allocated object: the system may not be able to allocate enough memory for the object. A good way to create a disaster is to do this:

X ConsX( .... )
{
    X x = malloc( sizeof(struct t_X) );
    if ( x == NULL )
    {
        printf("Insuff mem\n");
        exit( 1 );
    }
    else
        .....
}

Not only is the error message so cryptic that it is likely to be of little help in locating the cause of the error (the message should at least be "Insuff mem for X"!), but the program will simply exit, possibly leaving the system in some unstable, partially updated, state. This approach has other potential problems:
- What if we've built this code into some elaborate GUI program with no provision for "standard output"? We may not even see the message as the program exits!
- We may have used this code in a system, such as an embedded processor (a control computer), which has no way of processing an output stream of characters at all.
- The use of exit assumes the presence of some higher level program, eg a Unix shell, which will capture and process the error code 1.
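One portable alternative (a sketch of the general direction, using a hypothetical type X) is for the constructor to report failure through its return value and leave all I/O to the caller, who knows its own environment:

```c
#include <stdlib.h>

typedef struct t_X { int value; } *X;

/* Construct an X; return NULL on failure instead of printing and
   exiting - the caller decides how to report the error (alert box,
   log message, communications channel, ...). */
X ConsX( int value )
{
    X x = malloc( sizeof(struct t_X) );
    if ( x == NULL )
        return NULL;
    x->value = value;
    return x;
}
```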
As a general rule, I/O is non-portable!

A function like printf will produce error messages in the "terminal" window of your modern workstation, but if you are running a GUI program like Netscape, where will the messages go? So the same function may not produce useful diagnostic output for two programs running in different environments on the same processor! How can we expect it to be useful if we transport this program to another system altogether, eg a Macintosh or a Windows machine? Before looking at what we can do in ANSI C, let's look at how some other languages tackle this
problem. Continue on to Ada Exceptions Back to the Table of Contents
Data Structures and Algorithms: Arrays
Data Structures and Algorithms
3 Data Structures
In this section, we will examine some fundamental data structures: arrays, lists, stacks and trees.
3.1 Arrays
The simplest way to implement our collection is to use an array to hold the items. Thus the implementation of the collection object becomes:

/* Array implementation of a collection */
#include <assert.h>      /* Needed for assertions */
#include "collection.h"  /* import the specification */

struct t_collection {
    int item_cnt;
    int max_cnt;         /* Not strictly necessary */
    int item_size;       /* Needed by FindInCollection */
    void *items[];
};
Points to note:

a. We have imported the specification of this object into the implementation - this enables the compiler to verify that the implementation and the specification match. Although it's not necessary to include the specification (cf function prototypes), it is much safer to do so as it enables the compiler to detect some common errors and ensures that the specification and its implementation remain consistent when the object is changed.
b. items is typed as an array of void * in the struct. It is an array of items which happen to be pointers - but remember that we are trying to hide this from users of the class. Many C programmers would write the equivalent void ** here.

A question:

- Why is the attribute max_cnt not strictly necessary? Hint: it's related to the pre- and postconditions specified for methods on this object.
The implementations of the methods are: Select here to load collection.c
Points to note:

a. ConsCollection uses the memory allocator calloc to dynamically allocate memory from the program's heap for the collection. Two calls are necessary: one to allocate space for the "header" structure itself and one to allocate space for the array of item pointers.
b. assert calls have been added for the preconditions (cf full description of assert). Note that the preconditions here are expressed as a number of conditions linked by &&. Since assert requires a single boolean expression as its argument, one assert would suffice. However, we have chosen to implement each individual condition as a separate assert. This is done to assist debugging: if the preconditions are not satisfied, it is more helpful to know which one of multiple conditions has not been satisfied!
c. memcmp is a standard function which compares blocks of memory byte by byte [man page for memcmp].
d. The use of memcmp and ItemKey severely constrains the form of the key: it must be a contiguous string of characters in the item. There are ways of providing more flexible keys (eg ones having multiple fields within item, or ones calculated from item). These rely on C capabilities which will be discussed in a later section.
e. There is no treatment of errors, eg if no memory is available on the heap for calloc. This is a serious shortcoming. No software without a consistent strategy for detecting, reporting and recovering from errors can be considered well engineered. Such software is difficult to debug and prone to crashes from faults which are difficult to correct because there is no indication of the source of the error. Error handling is addressed in a later section.
Key terms
hacking producing a computer program rapidly, without thought and Continue on to Lists Back to the Table of Contents
Data Structures and Algorithms: Ada exceptions
Data Structures and Algorithms
Ada Exceptions
Ada defines an EXCEPTION which may be processed in an exception handler at any level in the program above that where the exception was generated or RAISEd.

PACKAGE adtX IS
    TYPE X IS PRIVATE;
    out_of_range : EXCEPTION;
    PROCEDURE f( a: IN OUT X; b: INTEGER );
END adtX;

PACKAGE BODY adtX IS
    PROCEDURE f( a: IN OUT X; b: INTEGER ) IS
    BEGIN
        ......
        IF b < some_limit THEN
            -- Normal processing
        ELSE
            RAISE out_of_range;
        END IF;
    END f;
END adtX;

This package exports the exception out_of_range, which may be caught in any routine that uses f.

WITH adtX; USE adtX;        -- Import adtX
PROCEDURE g( ... ) IS
BEGIN
    ...
    f( a, n );              -- Invoke method f
    ...                     -- Continue here if exception not raised
    ....                    -- Return from here if no errors
EXCEPTION
    WHEN out_of_range =>
        ...                 -- Process the exception
END g;

In this example, the exception was processed in the procedure, g, which called the function, f, in which it was raised. The code processing the exception is any set of Ada statements: it could even raise another exception.
If the exception is not 'caught', it is propagated up the call stack until it encounters an exception handler prepared to process it. (If there are no exception handlers, then it will propagate to the highest level and cause the program to abort. However, an implementation would be expected to print out the name of the exception causing the abort.) Because they are propagated to arbitrarily high levels of an Ada program, it is easy to arrange for Ada exceptions to be caught at some level where there is an appropriate interface for dealing with them. For example, in a GUI program, the routines which handle interaction with a user through the windows, mouse events, keyboard input, etc, are generally at the highest level in the program. These routines "know" how to pop up the alert box that tells the user that a problem has occurred and force him or her to take some action to correct the problem. Alternatively, in an embedded processor, they would "know" to send a message via a communications channel to some master processor. Lower level, reusable code should be able to function correctly in any environment - GUI, text terminal, embedded system, etc. Ada's ability to propagate exceptions to a level at which the program knows sufficient about the environment to output the appropriate messages makes life simple for the writer of reusable software. Exceptions are defined which correspond to all the errors that could occur. Reusable code simply raises the exceptions. The users of the code then have the flexibility to decide when (ie at what level) to process the exceptions. An added benefit of Ada's exception mechanism is that it provides a uniform method of handling errors. Left to their own devices, programmers are able to define a large grab-bag of styles of error raising and processing; for example, we can:
- use the return values of functions,
- add a call-by-reference error parameter to a function,
- set a global variable,
- call an error handling module,
- notify a separate process,
- ...
In Ada, a disciplined group of programmers will use Ada's inbuilt exception handling uniformly to propagate exceptions to some agreed level in programs, where code which "knows" the current environment can handle the problem appropriately. Ada further standardises behaviour by predefining a number of exceptions for commonly encountered problems, such as constraint_error, raised when an attempt is made to assign a value to a variable that is outside the permitted range for its type.
Key terms
Exception An exception is raised by a program or program module when some event occurs which should
be handled by some other program or module. Continue on to C++ exception handling Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/collection.c
/* Array implementation of a Collection */
#include <stdio.h>       /* Definition of NULL */
#include <stdlib.h>      /* calloc, free */
#include <string.h>      /* memcmp */
#include <assert.h>      /* Needed for assertions */
#include "collection.h"  /* import the specification */

extern void *ItemKey( void * );

struct t_Collection {
    int item_cnt;
    int max_items;       /* Not strictly necessary */
    int size;            /* Needed by FindInCollection */
    void **items;
};
Collection ConsCollection( int max_items, int item_size )
/* Construct a new Collection
   Precondition:  (max_items > 0) && (item_size > 0)
   Postcondition: returns a pointer to an empty Collection */
{
    Collection c;
    assert( max_items > 0 );
    assert( item_size > 0 );
    c = (Collection)calloc( 1, sizeof(struct t_Collection) );
    c->items = (void **)calloc( max_items, sizeof(void *) );
    c->size = item_size;
    c->max_items = max_items;
    return c;
}

void DeleteCollection( Collection c )
{
    assert( c != NULL );
    assert( c->items != NULL );
    free( c->items );
    free( c );
}

void AddToCollection( Collection c, void *item )
/* Add an item to a Collection
   Precondition:  (c is a Collection created by a call to ConsCollection) &&
                  (existing item count < max_items) && (item != NULL)
   Postcondition: item has been added to c */
{
    assert( c != NULL );
    assert( c->item_cnt < c->max_items );
    assert( item != NULL );
    c->items[c->item_cnt++] = item;
    /* Postcondition */
    assert( FindInCollection( c, ItemKey( item ) ) != NULL );
}

void DeleteFromCollection( Collection c, void *item )
/* Delete an item from a Collection
   Precondition:  (c is a Collection created by a call to ConsCollection) &&
                  (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c */
{
    int i;
    assert( c != NULL );
    assert( c->item_cnt >= 1 );
    assert( item != NULL );
    for( i = 0; i < c->item_cnt; i++ )
    {
        if ( item == c->items[i] )
        {
            /* Found the item to be deleted,
               shuffle all the rest down
               (stop at item_cnt-1 so we never read past
                the last valid item) */
            while( i < c->item_cnt - 1 )
            {
                c->items[i] = c->items[i+1];
                i++;
            }
            c->item_cnt--;
            break;
        }
    }
}

void *FindInCollection( Collection c, void *key )
/* Find an item in a Collection
   Precondition:  (c is a Collection created by a call to ConsCollection) &&
                  (key != NULL)
   Postcondition: returns an item identified by key if one exists,
                  otherwise returns NULL */
{
    int i;
    assert( c != NULL );
    assert( key != NULL );
    for( i = 0; i < c->item_cnt; i++ )
    {
        if ( memcmp( ItemKey( c->items[i] ), key, c->size ) == 0 )
            return c->items[i];
    }
    return NULL;
}
Data Structures and Algorithms: Memory Allocators
Data Structures and Algorithms
Memory Allocators
C implementations provide a number of memory allocation functions which allocate memory from the program's heap. This is usually an area of memory above the program and data blocks which grows upwards in memory as memory is allocated by program requests. The two most commonly used C functions are malloc and calloc. Full descriptions may be found in the Unix man pages for malloc and calloc.
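The difference between the two matters in the collection code: calloc(n, size) returns n*size bytes cleared to zero, so counters such as item_cnt start at 0 without an explicit assignment, whereas malloc returns uninitialised memory. A minimal sketch (make_counters is our name, not from the notes):

```c
#include <stdlib.h>

/* Allocate n integer counters, all initialised to 0 by calloc.
   (malloc would return the same amount of uninitialised memory.) */
int *make_counters( int n )
{
    return calloc( n, sizeof(int) );
}
```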
Fig 1 A typical program's use of memory. Back to the Table of Contents
memcmp
MEMORY(3)               C LIBRARY FUNCTIONS              MEMORY(3)

NAME
     memory, memccpy, memchr, memcmp, memcpy, memset - memory
     operations

SYNOPSIS
     #include <memory.h>

     ... others omitted ...

     int memcmp(s1, s2, n)
     char *s1, *s2;
     int n;

DESCRIPTION
     These functions operate as efficiently as possible on memory
     areas (arrays of characters bounded by a count, not
     terminated by a null character).  They do not check for the
     overflow of any receiving memory area.

     memcmp() compares its arguments, looking at the first n
     characters only, and returns an integer less than, equal to,
     or greater than 0, according as s1 is lexicographically less
     than, equal to, or greater than s2.

     See full man page for other descriptions.

NOTES
     For user convenience, all these functions are declared in
     the <memory.h> header file.
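FindInCollection uses memcmp in exactly this count-bounded way; a small sketch (the name keys_equal is ours):

```c
#include <string.h>

/* Compare two keys of key_size bytes; no terminating '\0' is
   assumed - memcmp looks at exactly key_size bytes. */
int keys_equal( const void *k1, const void *k2, size_t key_size )
{
    return memcmp( k1, k2, key_size ) == 0;
}
```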
Data Structures and Algorithms: Introduction
Data Structures and Algorithms
3.2 Lists
The array implementation of our collection has one serious drawback: you must know the maximum number of items in your collection when you create it. This presents problems in programs in which this maximum number cannot be predicted accurately when the program starts up. Fortunately, we can use a structure called a linked list to overcome this limitation.
3.2.1 Linked lists
The linked list is a very flexible dynamic data structure: items may be added to it or deleted from it at will. A programmer need not worry about how many items a program will have to accommodate: this allows us to write robust programs which require much less maintenance. A very common source of problems in program maintenance is the need to increase the capacity of a program to handle larger collections: even the most generous allowance for growth tends to prove inadequate over time! In a linked list, each item is allocated space as it is added to the list. A link is kept with each item to the next item in the list. Each node of the list has two elements:

1. the item being stored in the list, and
2. a pointer to the next item in the list.

The last node in the list contains a NULL pointer to indicate that it is the end or tail of the list. As items are added to a list, memory for a node is dynamically allocated. Thus the number of items that may be added to a list is limited only by the amount of memory available.
Handle for the list
The variable (or handle) which represents the list is simply a pointer to the node at the head of the list.
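The node and handle described above can be sketched as follows (the names are ours; the collection_ll.c listing may differ in detail):

```c
#include <stddef.h>

struct t_node {
    void *item;              /* the item stored in this node   */
    struct t_node *next;     /* link to the next node, or NULL */
};

typedef struct t_node *List; /* the handle: a pointer to the head node */

/* An empty list is simply a NULL handle */
int ListIsEmpty( List head )
{
    return head == NULL;
}
```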
Adding to a list
The simplest strategy for adding an item to a list is to:
a. allocate space for a new node,
b. copy the item into it,
c. make the new node's next pointer point to the current head of the list, and
d. make the head of the list point to the newly allocated node.
This strategy is fast and efficient, but each item is added to the head of the list. An alternative is to create a structure for the list which contains both head and tail pointers:

struct fifo_list {
    struct node *head;
    struct node *tail;
};

The code for AddToCollection is now trivially modified to make a list in which the item most recently added to the list is the list's tail. The specification remains identical to that used for the array implementation: the max_items parameter to ConsCollection is simply ignored [7]. Thus we only need to change the implementation; as a consequence, applications which use this object will need no changes. The ramifications for the cost of software maintenance are significant: the data structure is changed, but since the details (the attributes of the object or the elements of the structure) are hidden from the user, there is no impact on the user's program.

Select here to load collection_ll.c

Points to note:

a. This implementation of our collection can be substituted for the first one with no changes to a client's program. With the exception of the added flexibility that any number of items may be added to our collection, this implementation provides exactly the same high level behaviour as the previous one.
b. The linked list implementation has exchanged flexibility for efficiency: on most systems, the system call to allocate memory is relatively expensive. Pre-allocation in the array-based implementation is generally more efficient. More examples of such trade-offs will be found later.

The study of data structures and algorithms will enable you to make the implementation decision which most closely matches your users' specifications.
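Steps a-d of the add-to-head strategy can be sketched as follows (AddToHead is a hypothetical name; the real collection_ll.c wraps this logic behind the AddToCollection interface):

```c
#include <stdlib.h>

struct t_node {
    void *item;
    struct t_node *next;
};

/* Add item at the head of the list; returns the new head,
   or the old head unchanged if allocation fails. */
struct t_node *AddToHead( struct t_node *head, void *item )
{
    struct t_node *n = malloc( sizeof(struct t_node) );  /* a. allocate  */
    if ( n == NULL )
        return head;        /* caller should treat this as an error */
    n->item = item;         /* b. copy the item (pointer) into it   */
    n->next = head;         /* c. point at the current head         */
    return n;               /* d. the new node becomes the head     */
}
```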
3.2.2 List variants
Circularly Linked Lists

By ensuring that the tail of the list is always pointing to the head, we can build a circularly linked list. If the external pointer (the one in struct t_node in our implementation) points to the current "tail" of the list, then the "head" is found trivially via tail->next, permitting us to have either LIFO or FIFO lists with only one external pointer. In modern processors, the few bytes of memory saved in this way would probably not be regarded as significant. A circularly linked list would more likely be used in an application which required "round-robin" scheduling or processing.

Doubly Linked Lists
Doubly linked lists have a pointer to the preceding item as well as one to the next. They permit scanning or searching of the list in both directions. (To go backwards in a simple list, it is necessary to go back to the start and scan forwards.) Many applications require searching backwards and forwards through sections of a list: for example, searching for a common name like "Kim" in a Korean telephone directory would probably need much scanning backwards and forwards through a small region of the whole list, so the backward links become very useful. In this case, the node structure is altered to have two links:

struct t_node {
    void *item;
    struct t_node *previous;
    struct t_node *next;
} node;

Lists in arrays

Although this might seem pointless (why impose a structure which has the overhead of the "next" pointers on an array?), this is just what memory allocators do to manage available space. Memory is just an array of words. After a series of memory allocations and deallocations, there are blocks of free memory scattered throughout the available heap space. In order to be able to reuse this memory, memory allocators will usually link freed blocks together in a free list by writing pointers to the next free block in the block itself. An external free list pointer points to the first block in the free list. When a new block of memory is requested, the allocator will generally scan the free list looking for a freed block of suitable size and delete it from the free list (relinking the free list around the deleted block). Many variations of memory allocators have been proposed: refer to a text on
operating systems or implementation of functional languages for more details. The entry in the index under garbage collection will probably lead to a discussion of this topic.
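The free-list idea can be sketched with array indices standing in for the pointers a real allocator writes into freed blocks (all names here are ours, and a real allocator must also track block sizes):

```c
#define NSLOTS 4

/* next_free[i] holds the index of the next free slot after slot i;
   -1 marks the end of the free list. */
static int next_free[NSLOTS];
static int free_head;

void init_free_list( void )
{
    int i;
    for( i = 0; i < NSLOTS - 1; i++ )
        next_free[i] = i + 1;     /* link slot i to slot i+1 */
    next_free[NSLOTS-1] = -1;
    free_head = 0;
}

/* Unlink and return the first free slot, or -1 if none remain */
int alloc_slot( void )
{
    int s = free_head;
    if ( s != -1 )
        free_head = next_free[s];
    return s;
}

/* Link slot s back onto the head of the free list */
void free_slot( int s )
{
    next_free[s] = free_head;
    free_head = s;
}
```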
Key terms
Dynamic data structures Structures which grow or shrink as the data they hold changes. Lists, stacks and trees are all dynamic structures. Continue on to Stacks Back to the Table of Contents
Data Structures and Algorithms: C++ exceptions
Data Structures and Algorithms
C++ Exceptions
C++ defines a mechanism that is somewhat similar to Ada's exception mechanism. C++ allows you to execute a throw statement when an error is detected. When a throw is invoked, control jumps immediately to a catch routine. First you define a try block which encloses the "normal" (no exception) code. The try block is followed by a number of catch blocks: if code in the try block throws an exception, it is caught by one of the catch blocks.

try {
    classX x;
    x.mangle( 2 );
    x.straighten( 2 );
}
catch( const char *string ) {
    .....
}
catch( RangeErr &re ) {
    ....
}
catch( ... ) {    // catches any other error
    ....
}

classX's constructor, mangle and straighten methods contain throw statements which are executed when problems are encountered:

classX::mangle( int degree )
{
    if ( degree > MAX_MANGLE )
        throw "Can't mangle this much!";
    ....    // Normal code for mangle
}

The throw causes control to jump straight out of the mangle method to the catch block, skipping all subsequent statements in the try block. However, like much of C++, the rules which are used to associate a throw with the correct catch are too complex to contemplate. Stroustrup attempted to make the throw like a general method invocation with parameters, overloading, etc, etc.

Historical note: many early C++ compilers did not implement the throw/catch pair (possibly because of the complexity alluded to above!) and some textbooks avoid it, or relegate it to the last few pages!
Basically, like a lot of C++ features, the throw/catch mechanism is best used in a very simple way, e.g. by providing a single catch block with a single int or enum parameter! Ada's simple and clean mechanism may lack some power (an exception handler can't be passed a parameter), but it's a lot easier to understand and use!

Continue on to Java Exceptions
Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Notes
Data Structures and Algorithms
Notes
1. By the end of this course, you should understand that hacking is far from clever: there are much more effective strategies for making programs run faster!
2. Boehm, Software Engineering Economics
3. Software Engineering Notes
4. Ada LRM
5. B Meyer, Eiffel
6. Some compilers, e.g. the Metrowerks Macintosh C compiler, have an option Require function prototypes which can be turned on. If it is on, then the compiler will issue errors if the specification is not included - because the function prototypes (the formal specification of the methods of our objects) are in the specification. Other compilers, e.g. GNU gcc, will only issue warnings if the function prototypes are absent.
7. Or possibly used as "advice" to the system - enabling it to pre-allocate space in some efficient way, e.g. in a contiguous block in one page of memory.
8. Maintenance is well known to be the most costly phase of any large software development project; refer to any text on Software Engineering.
9. top is equivalent to x = pop(s); push(s,x);
10. In fact, adding and deleting from the head of a linked list is the simplest implementation and produces exactly the LIFO semantics of a stack.
11. In most operating systems, allocation and de-allocation of memory is a relatively expensive operation, so there is a penalty for the flexibility of linked list implementations.
12. Pronounce this "big-Oh n" - or sometimes "Oh n".
13. You will find that not many people will be happy with a prediction that 90% of the time, this computer will calculate the [new position of the aircraft's flaps / the fastest rate at which the brakes can be applied ...] to prevent [the aircraft crashing / hitting the car in front ...].
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/notes.html [3/23/2004 2:26:08 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/collection_ll.c
/* Linked list implementation of a collection */

#include <stdlib.h>     /* calloc */
#include <stdio.h>      /* NULL */
#include <string.h>     /* memcmp */
#include <assert.h>     /* Needed for assertions */
#include "collection.h" /* import the specification */

extern void *ItemKey( void * );

struct t_node {
    void *item;
    struct t_node *next;
} node;

struct t_Collection {
    int size;              /* Needed by FindInCollection */
    struct t_node *node;
};

Collection ConsCollection( int max_items, int item_size )
/* Construct a new collection
   Precondition: (max_items > 0) && (item_size > 0)
   Postcondition: returns a pointer to an empty collection
*/
{
    Collection c;
    /* Although redundant, this assertion should be retained
       as it tests compliance with the formal specification */
    assert( max_items > 0 );
    assert( item_size > 0 );
    c = (Collection)calloc( 1, sizeof(struct t_Collection) );
    c->node = (struct t_node *)0;
    c->size = item_size;
    return c;
}

void AddToCollection( Collection c, void *item )
/* Add an item to a Collection
   Precondition: (c is a Collection created by a call to ConsCollection) &&
      (existing item count < max_items) && (item != NULL)
   Postcondition: item has been added to c
*/
{
    struct t_node *new;
    assert( c != NULL );
    assert( item != NULL );
    /* Allocate space for a node for the new item */
    new = (struct t_node *)malloc( sizeof(struct t_node) );
    /* Attach the item to the node */
    new->item = item;
    /* Make the existing list `hang' from this one */
    new->next = c->node;
    /* The new item is the new head of the list */
    c->node = new;
    assert( FindInCollection( c, ItemKey( item ) ) != NULL );
}

void DeleteFromCollection( Collection c, void *item )
/* Delete an item from a Collection
   Precondition: (c is a Collection created by a call to ConsCollection) &&
      (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c
*/
{
    struct t_node *node, *prev;
    assert( c != NULL );
    /* The requirement that the Collection has at least one item
       is expressed a little differently */
    assert( c->node != NULL );
    assert( item != NULL );

    /* Select node at head of list */
    prev = node = c->node;
    /* Loop until we've reached the end of the list */
    while ( node != NULL ) {
        if ( item == node->item ) {
            /* Found the item to be deleted,
               re-link the list around it */
            if ( node == c->node )
                /* We're deleting the head */
                c->node = node->next;
            else
                prev->next = node->next;
            /* Free the node */
            free( node );
            break;
        }
        prev = node;
        node = node->next;
    }
}

void *FindInCollection( Collection c, void *key )
/* Find an item in a Collection
   Precondition: (c is a Collection created by a call to ConsCollection) &&
      (key != NULL)
   Postcondition: returns an item identified by key if one exists,
      otherwise returns NULL
*/
{
    struct t_node *node;
    assert( c != NULL );
    assert( key != NULL );
    /* Select node at head of list */
    node = c->node;
    while ( node != NULL ) {
        if ( memcmp( key, ItemKey(node->item), c->size ) == 0 ) {
            return node->item;
        }
        node = node->next;
    }
    return NULL;
}
Data Structures and Algorithms: Stacks
Data Structures and Algorithms
3.3 Stacks
Another way of storing data is in a stack. A stack is generally implemented with only two principal operations (apart from a constructor and destructor methods):

push
    adds an item to a stack
pop
    extracts the most recently pushed item from the stack

Other methods, such as

top
    returns the item at the top without removing it [9]
isempty
    determines whether the stack has anything in it

are sometimes added.
A common model of a stack is a plate or coin stacker. Plates are "pushed" onto the top and "popped" off the top. Stacks form Last-In-First-Out (LIFO) queues and have many applications, from the parsing of algebraic expressions to ...
A formal specification of a stack class would look like:

typedef struct t_stack *stack;

stack ConsStack( int max_items, int item_size );
/* Construct a new stack
   Precondition: (max_items > 0) && (item_size > 0)
   Postcondition: returns a pointer to an empty stack
*/

void Push( stack s, void *item );
/* Push an item onto a stack
   Precondition: (s is a stack created by a call to ConsStack) &&
      (existing item count < max_items) &&
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/stacks.html (1 of 3) [3/23/2004 2:26:27 PM]
      (item != NULL)
   Postcondition: item has been added to the top of s
*/

void *Pop( stack s );
/* Pop an item off a stack
   Precondition: (s is a stack created by a call to ConsStack) &&
      (existing item count >= 1)
   Postcondition: top item has been removed from s
*/

Points to note:

a. A stack is simply another collection of data items and thus it would be possible to use exactly the same specification as the one used for our general collection. However, collections with the LIFO semantics of stacks are so important in computer science that it is appropriate to set up a limited specification appropriate to stacks only.
b. Although a linked list implementation of a stack is possible (adding and deleting from the head of a linked list produces exactly the LIFO semantics of a stack), the most common applications for stacks have a space constraint, so that using an array implementation is a natural and efficient one. (In most operating systems, allocation and de-allocation of memory is a relatively expensive operation, so there is a penalty for the flexibility of linked list implementations.)
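Point (b) can be sketched as a simple array implementation of the specification above. This is only a sketch, not the course's own implementation (see the source listings in Appendix B): errors are reduced to assertions, and item_size is unused because we store pointers to items.

```c
#include <stdlib.h>
#include <assert.h>

struct t_stack {
    int top;          /* index of the next free slot */
    int max_items;
    void **items;     /* array of pointers to the pushed items */
};
typedef struct t_stack *stack;

stack ConsStack( int max_items, int item_size )
{
    stack s;
    assert( max_items > 0 );
    assert( item_size > 0 );   /* item_size unused here: we store pointers */
    s = (stack)malloc( sizeof(struct t_stack) );
    s->top = 0;
    s->max_items = max_items;
    s->items = (void **)malloc( max_items * sizeof(void *) );
    return s;
}

void Push( stack s, void *item )
{
    assert( s != NULL && item != NULL );
    assert( s->top < s->max_items );   /* space restriction of the array */
    s->items[s->top++] = item;
}

void *Pop( stack s )
{
    assert( s != NULL && s->top >= 1 );
    return s->items[--s->top];         /* most recently pushed item */
}
```

Note how the fixed-size array makes push and pop constant-time operations with no per-item allocation.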
3.3.1 Stack Frames
Almost invariably, programs compiled from modern high level languages (even C!) make use of a stack frame for the working memory of each procedure or function invocation. When any procedure or function is called, a number of words - the stack frame - is pushed onto a program stack. When the procedure or function returns, this frame of data is popped off the stack.

As a function calls another function, first its arguments, then the return address and finally space for local variables are pushed onto the stack. Since each function runs in its own "environment" or context, it becomes possible for a function to call itself - a technique known as recursion. This capability is extremely useful and extensively used - because many problems are elegantly specified or solved in a recursive way.
Program stack after executing a pair of mutually recursive functions:

function f( int x, int y ) {
    int a;
    if ( term_cond ) return ...;
    a = .....;
    return g( a );
}

function g( int z ) {
    int p, q;
    p = ...;
    q = ...;
    return f( p, q );
}

Note how all of function f and g's environment (their parameters and local variables) is found in the stack frame. When f is called a second time from g, a new frame for the second invocation of f is created.
Key terms
push, pop
    Generic terms for adding something to, or removing something from, a stack.
context
    The environment in which a function executes: includes argument values, local variables and global variables. All the context except the global variables is stored in a stack frame.
stack frame
    The data structure containing all the data (arguments, local variables, return address, etc) needed each time a procedure or function is called.

Continue on to Recursion
Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Trees
Data Structures and Algorithms
4.3 Trees
4.3.1 Binary Trees
The simplest form of tree is a binary tree. A binary tree consists of

a. a node (called the root node) and
b. left and right subtrees.

Both the subtrees are themselves binary trees. You now have a recursively defined data structure. (It is also possible to define a list recursively: can you see how?)
A binary tree

The nodes at the lowest levels of the tree (the ones with no subtrees) are called leaves. In an ordered binary tree,

1. the keys of all the nodes in the left subtree are less than that of the root,
2. the keys of all the nodes in the right subtree are greater than that of the root,
3. the left and right subtrees are themselves ordered binary trees.
Data Structure
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/trees.html (1 of 3) [3/23/2004 2:26:44 PM]
The data structure for the tree implementation simply adds left and right pointers in place of the next pointer of the linked list implementation. [Load the tree struct.]

The AddToCollection method is, naturally, recursive. [Load the AddToCollection method.]

Similarly, the FindInCollection method is recursive. [Load the FindInCollection method.]
Analysis
Complete Trees

Before we look at more general cases, let's make the optimistic assumption that we've managed to fill our tree neatly, i.e. that each leaf is the same 'distance' from the root.
This forms a complete tree, whose height is defined as the number of links from the root to the deepest leaf.
A complete tree

First, we need to work out how many nodes, n, we have in such a tree of height, h.

Now,
    n = 1 + 2^1 + 2^2 + .... + 2^h

From which we have,
    n = 2^(h+1) - 1
and
    h = floor( log2 n )

Examination of the Find method shows that in the worst case, h+1 or ceiling( log2 n ) comparisons are needed to find an item. This is the same as for binary search. However, Add also requires ceiling( log2 n ) comparisons to determine where to add an item. Actually adding the item takes a constant number of operations, so we say that a binary tree requires O(log n) operations for both adding and finding an item - a considerable improvement over binary search for a dynamic structure which often requires addition of new items. Deletion is also an O(log n) operation.
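These two identities are easy to check numerically. The helper functions below (our names, not from the course code) compute the node count from the height and recover the height from the node count:

```c
#include <assert.h>

/* Nodes in a complete binary tree of height h:
   n = 1 + 2^1 + ... + 2^h = 2^(h+1) - 1 */
int complete_tree_nodes( int h )
{
    return (1 << (h + 1)) - 1;
}

/* Height from the node count: h = floor( log2 n ),
   computed by repeated halving to avoid floating point */
int complete_tree_height( int n )
{
    int h = 0;
    while ( n > 1 ) {
        n /= 2;
        h++;
    }
    return h;
}
```

For example, a complete tree of height 3 has 15 nodes, and a 15-node complete tree has height 3.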
General binary trees However, in general addition of items to an ordered tree will not produce a complete tree. The worst case occurs if we add an ordered list of items to a tree. What will happen? Think before you click here! This problem is readily overcome: we use a structure known as a heap. However, before looking at heaps, we should formalise our ideas about the complexity of algorithms by defining carefully what O(f(n)) means.
Key terms
Root Node
    Node at the "top" of a tree - the one from which all operations on the tree commence. The root node may not exist (a NULL tree with no nodes in it) or have 0, 1 or 2 children in a binary tree.
Leaf Node
    Node at the "bottom" of a tree - farthest from the root. Leaf nodes have no children.
Complete Tree
    Tree in which each leaf is at the same distance from the root. A more precise and formal definition of a complete tree is set out later.
Height
    Number of nodes which must be traversed from the root to reach a leaf of a tree.

Continue on to Complexity (PS)
Continue on to Complexity (HTML)
Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Java Exceptions
Data Structures and Algorithms
Java Exceptions
Java, somewhat unfortunately, follows C++ and has the try, throw, catch mechanism. However, Java defines a class hierarchy for errors, all of which are specialisations of the Exception class. Java exceptions, like Ada exceptions, are thrown from one class method to the invoking method, until a try block is reached. However, Java methods explicitly list the exceptions that they throw:

void readFromFile( String s ) throws IOException, InterruptedException {
    ......
}

try {
    readFromFile( "abc" );
}
catch( FileNotFoundException e ) {
    .....
}
catch( IOException e ) {
    ....
}
catch( Exception e ) {
    // catches any other error (Exception is the "superclass")
    ....
}
finally {
    // Clean-up code - always executed
}

The finally block is executed whether an exception is thrown or not. Despite inheriting a mess, Java's designers have managed to simplify it to a useable system. catch appears (my text is not clear on this point - a problem with Java's instant fame!) to accept a single parameter belonging to the exception class. As in Ada, many exceptions are pre-defined, but you can also define your own.

Note that the lack of a precise definition of Java illustrates a perennial problem in the computer industry: the tendency to instantly adopt "fads" before they have a chance to be defined thoroughly and unambiguously. This has led to enormous porting problems and enormous, avoidable costs for users. No professional engineer would design any system using screws with non-standard threads!
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/java_throw.html (1 of 2) [3/23/2004 2:26:50 PM]
Continue on to C ADT errors Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Programming Language Capabilities
Data Structures and Algorithms
2.8 Programming Languages
This section contains some notes on the capabilities of programming languages. The first sub-section discusses the ability to pass a function as an argument to another function - an important capability which enables us to create flexible generic ADTs in ANSI C. The remaining sub-sections give brief overviews of the object-oriented capabilities of C++, Java and Ada - three of the more important programming languages.
- Functions as data types in C
- C++ classes
- Java classes
- ADTs in Ada
Continue on to Arrays Back to the Table of Contents
© John Morris, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/prog_languages.html [3/23/2004 2:27:05 PM]
Data Structures and Algorithms: Functions as Data Types
Data Structures and Algorithms
Functions as Data Types
2.7.1 C functions
C allows a function to be used as a data item. This makes it possible to pass functions as arguments to other functions. This capability, although not often used, is extremely useful when it is appropriate. For example, as we initially defined the collections, even though we were careful to design our collection so that it would handle any type of data, we limited ourselves to collections of only one type of data in any one program. This is caused by the need to define an external function for comparing items. Ideally, we would like to specify a general comparison function for the objects in the collection when we construct the collection.

In C, this is easy (although the syntax is definitely non-intuitive!). We want to have a general comparison function which tells us in what order objects stored in our collection should be ranked, i.e. we need a function:

int ItemCmp( void *a, void *b );

which returns -1, 0 or +1 depending on whether a is less than, equal to or greater than b. (Note that we're allowing a very general notion of 'less than': the ItemCmp can order items according to any rule which might be appropriate to these items.)

So we add to our collection structure a comparison function:

struct t_collection {
    int item_cnt;
    int (*ItemCmp)( void *, void * );
    ....
};

ItemCmp is a pointer to a function which has the prototype:

int ItemCmp( void *, void * );

The parentheses are necessary to distinguish this declaration from a function prototype and an invocation of the function! The ConsCollection function now becomes:

collection ConsCollection( int max_items,
                           int (*ItemCmp)( void *, void * ) );

A use of the collection now looks like:

#include "widget.h"    /* import the ADT for widgets */

int WidgetComp( widget, widget );
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/C_functions.html (1 of 2) [3/23/2004 2:27:13 PM]
collection LotsOfWidgets;

LotsOfWidgets = ConsCollection( large_no, WidgetComp );

In the body of the ADT, the ItemCmp function is used by dereferencing the pointer to the function and using it normally: in FindInCollection, we might have:

int FindInCollection( collection c, void *a ) {
    .....
    if ( (*(c->ItemCmp))( c->items[i], a ) == 0 ) {
        /* Found match ... */
        ....
    }

In the example above, an excessive number of parentheses has been used, because I simply don't want to bother to look up the precedence rules: why risk making a silly mistake, when a few extra parentheses will ensure that the compiler treats the code as you intended? However, C permits a 'short-cut' which doesn't require dereferencing the pointer to the function.

In the source code examples, an ItemCmp function has been added to a tree collection:

New collection specification
New tree implementation
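The standard C library uses exactly this technique: qsort takes a pointer to a comparison function with the same negative / zero / positive convention as ItemCmp. A small self-contained example (the function names here are ours, for illustration only):

```c
#include <stdlib.h>
#include <assert.h>

/* Comparison function with the shape qsort expects:
   negative, zero or positive - just like ItemCmp above */
static int IntCmp( const void *a, const void *b )
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids overflow that x - y could cause */
}

/* Sort an array of ints by passing the comparison function as data */
static void SortInts( int *a, int n )
{
    qsort( a, n, sizeof(int), IntCmp );
}
```

Passing a different comparison function to the same qsort call changes the ordering without touching the sorting code - the same flexibility we want for our generic collections.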
Key terms
Continue on to ADTs in Ada
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/coll_a.h
/* Specification for collection */

typedef struct t_collection *collection;

collection ConsCollection( int max_items,
                           int (*ItemCmp)(void *, void *) );
/* Construct a new collection
   Precondition: max_items > 0
   Postcondition: returns a pointer to an empty collection
*/

void AddToCollection( collection c, void *item );
/* Add an item to a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
      (existing item count < max_items) && (item != NULL)
   Postcondition: item has been added to c
*/

void DeleteFromCollection( collection c, void *item );
/* Delete an item from a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
      (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c
*/

void *FindInCollection( collection c, void *key );
/* Find an item in a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
      (key != NULL)
   Postcondition: returns an item identified by key if one exists,
      otherwise returns NULL
*/
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/coll_at.c
/* Binary tree implementation of a collection */

#include <stdlib.h>  /* calloc */
#include <stdio.h>   /* NULL */
#include <assert.h>  /* Needed for assertions */
#include "coll_a.h"  /* import the specification */

struct t_node {
    void *item;
    struct t_node *left;
    struct t_node *right;
} node;

struct t_collection {
    /* Note that size is not needed any longer! */
    int (*ItemCmp)( void *, void * );
    struct t_node *node;
};

collection ConsCollection( int max_items,
                           int (*ItemCmp)(void *, void *) )
/* Construct a new collection
   Precondition: max_items > 0
   Postcondition: returns a pointer to an empty collection
*/
{
    collection c;
    /* Although redundant, this assertion should be retained
       as it tests compliance with the formal specification */
    assert( max_items > 0 );
    c = (collection)calloc( 1, sizeof(struct t_collection) );
    c->node = (struct t_node *)0;
    c->ItemCmp = ItemCmp;
    return c;
}

static void AddToTree( struct t_node **t, struct t_node *new,
                       int (*ItemCmp)(void *, void *) )
{
    struct t_node *base;
    base = *t;
    /* If it's a null tree, just add it here */
    if ( base == NULL ) {
        *t = new;
        return;
    }
    else {
        if ( ItemCmp( new->item, base->item ) < 0 ) {
            /* New item is smaller: it belongs in the left subtree */
            AddToTree( &(base->left), new, ItemCmp );
        }
        else
            AddToTree( &(base->right), new, ItemCmp );
    }
}

void AddToCollection( collection c, void *item )
/* Add an item to a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
      (existing item count < max_items) &&
      (item != NULL)
   Postcondition: item has been added to c
*/
{
    struct t_node *new;
    assert( c != NULL );
    assert( item != NULL );
    /* Allocate space for a node for the new item */
    new = (struct t_node *)malloc( sizeof(struct t_node) );
    /* Attach the item to the node */
    new->item = item;
    new->left = new->right = (struct t_node *)0;
    /* Add the new node to the tree rooted at the collection's root */
    AddToTree( &(c->node), new, c->ItemCmp );
}

void DeleteFromTree( struct t_node **t, void *item )
{
    /* (body not given in these notes) */
}

void DeleteFromCollection( collection c, void *item )
/* Delete an item from a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
      (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c
*/
{
    assert( c != NULL );
    /* The requirement that the collection has at least one item
       is expressed a little differently */
    assert( c->node != NULL );
    assert( item != NULL );
    DeleteFromTree( &(c->node), item );
}

void *FindInTree( struct t_node *t, void *key )
{
    /* (body not given in these notes) */
    return NULL;
}

void *FindInCollection( collection c, void *key )
/* Find an item in a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
      (key != NULL)
   Postcondition: returns an item identified by key if one exists,
      otherwise returns NULL
*/
{
    assert( c != NULL );
    assert( key != NULL );
    /* Search the tree from the root node */
    return FindInTree( c->node, key );
}
Data Structures and Algorithms  Ada ADTs
Data Structures and Algorithms
6.2 ADTs in Ada
Ada was designed in the late 70's - just before object orientation was "discovered". However, at that time the value of abstract data types was well understood and Ada provides good support for this concept. Two Ada constructs are needed for defining an ADT: the data type and its methods are placed in an Ada package. For safety and information hiding, the data type is made private. Although a package's "client" can see the structure of the data type, the compiler prevents access to individual attributes of the type: thus effectively implementing the information hiding principle. The client can see the information, but can't do anything with it! (I believe that the reason for exposing the structure of the private type is purely pragmatic: compilers and linkers need to know how much space an ADT in a separately compiled package - for which only the specification might be available - requires.)

An Ada package for complex numbers would be implemented:

PACKAGE complex_numbers IS
  TYPE complex IS PRIVATE;

  I : CONSTANT complex;                                     -- 'i'

  FUNCTION "-"( a : complex ) RETURN complex;               -- Unary minus

  FUNCTION "+"( a : complex; b : complex ) RETURN complex;
  FUNCTION "-"( a : complex; b : complex ) RETURN complex;
  FUNCTION "*"( a : complex; b : complex ) RETURN complex;
  FUNCTION "="( a : complex; b : complex ) RETURN boolean;
PRIVATE
  TYPE complex IS RECORD
    real, imag : FLOAT;
  END RECORD;

  I : CONSTANT complex := (0.0, 1.0);
END complex_numbers;

The body or implementation would usually be placed in a separate file and compiled separately:

PACKAGE BODY complex_numbers IS

  FUNCTION "-"( a : complex ) RETURN complex IS             -- Unary minus
  BEGIN
    RETURN complex'(-a.real, -a.imag);
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/ada_adt.html (1 of 2) [3/23/2004 2:27:38 PM]
  END "-";

  FUNCTION "+"( a : complex; b : complex ) RETURN complex IS
  BEGIN
    RETURN complex'(a.real+b.real, a.imag+b.imag);
  END "+";

  FUNCTION "-"( a : complex; b : complex ) RETURN complex IS
  BEGIN
    RETURN complex'(a.real-b.real, a.imag-b.imag);
  END "-";

  FUNCTION "*"( a : complex; b : complex ) RETURN complex IS
  BEGIN
    RETURN complex'(a.real*b.real - a.imag*b.imag,
                    a.real*b.imag + a.imag*b.real);
  END "*";

  FUNCTION "="( a : complex; b : complex ) RETURN boolean IS
  BEGIN
    RETURN (a.real = b.real) AND (a.imag = b.imag);
  END "=";

END complex_numbers;

Note that Ada provides excellent operator overloading capabilities, which enable us to write mathematically "natural" code, e.g.:

a, b, c, z : complex;
...
IF a = b THEN
  c := z - a;
  z := -z;
END IF;

You can also observe that Ada is an extreme case of the principle that programs are read many times more often than they are written (for modern languages at least - nothing is ever likely to match the verbosity of COBOL!). Keywords appear everywhere: IS, RETURN, END, etc.
Continue on to Sorting
Back to the Table of Contents
© John Morris, 1996
Data Structures and Algorithms: Sorting
Data Structures and Algorithms
7 Sorting
Sorting is one of the most important operations performed by computers. In the days of magnetic tape storage before modern databases, it was almost certainly the most common operation performed by computers as most "database" updating was done by sorting transactions and merging them with a master file. It's still important for presentation of data extracted from databases: most people prefer to get reports sorted into some relevant order before wading through pages of data!
7.1 Bubble, Selection, Insertion Sorts
There are a large number of variations of one basic strategy for sorting. It's the same strategy that you use for sorting your bridge hand. You pick up a card, start at the beginning of your hand and find the place to insert the new card, insert it and move all the others up one place.

/* Insertion sort for integers */

void insertion( int a[], int n ) {
/* Precondition: a contains n items to be sorted */
    int i, j, v;
    /* Initially, the first item is considered 'sorted' */
    /* i divides a into a sorted region, x < i, and an
       unsorted one, x >= i */
    for ( i = 1; i < n; i++ ) {
        /* Select the item at the beginning of the
           as yet unsorted section */
        v = a[i];
        /* Work backwards through the array, finding where v should go */
        j = i;
        /* If this element is greater than v, move it up one */
        while ( a[j-1] > v ) {
            a[j] = a[j-1];
            j = j - 1;
            if ( j <= 0 ) break;
        }
        /* Stopped when a[j-1] <= v, so put v at position j */
        a[j] = v;
    }
}

Insertion Sort Animation
This animation was written by Woi Ang.
Please email comments to: morris@ee.uwa.edu.au
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/sorting.html (1 of 3) [3/23/2004 2:27:45 PM]
Bubble Sort
Another variant of this procedure, called bubble sort, is commonly taught:

/* Bubble sort for integers */
#define SWAP(a,b)   { int t; t=a; a=b; b=t; }

void bubble( int a[], int n )
/* Precondition: a contains n items to be sorted */
{
    int i, j;
    /* Make n passes through the array */
    for ( i = 0; i < n; i++ ) {
        /* From the first element to the end
           of the unsorted section */
        for ( j = 1; j < (n-i); j++ ) {
            /* If adjacent items are out of order, swap them */
            if ( a[j-1] > a[j] ) SWAP( a[j-1], a[j] );
        }
    }
}
Analysis
Each of these algorithms requires n-1 passes: each pass places one item in its correct place. (The nth is then in the correct place also.) The ith pass makes either i or n - i comparisons and moves. So the total number of comparisons and moves is (at worst)

    (n-1) + (n-2) + ... + 1 = n(n-1)/2

or O(n^2). But we already know we can use heaps to get an O(n log n) algorithm, so these algorithms are only suitable for small problems where their simple code makes them faster than the more complex code of the O(n log n) algorithm. As a rule of thumb, expect to find an O(n log n) algorithm faster for n > 10 - but the exact value depends very much on individual machines! They can be used to squeeze a little bit more performance out of fast sort algorithms - see later.
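The n(n-1)/2 figure can be checked by instrumenting bubble sort with a comparison counter. The counter is our addition; otherwise this is the same algorithm as above (bubble sort makes every comparison regardless of the input order):

```c
#include <assert.h>

/* Bubble sort that returns the number of comparisons made:
   the inner loops perform (n-1) + (n-2) + ... + 1 = n(n-1)/2
   comparisons, whatever the initial order of the data */
long bubble_comparisons( int a[], int n )
{
    long count = 0;
    int i, j, t;
    for ( i = 0; i < n; i++ ) {
        for ( j = 1; j < (n - i); j++ ) {
            count++;                       /* one comparison per test below */
            if ( a[j-1] > a[j] ) {
                t = a[j-1]; a[j-1] = a[j]; a[j] = t;
            }
        }
    }
    return count;
}
```

For n = 5 the count is 5*4/2 = 10, matching the formula.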
Key terms
Bubble, Insertion, Selection Sorts
    Simple sorting algorithms with O(n^2) complexity - suitable for sorting small numbers of items only.

Continue on to Heap Sort
Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Quick vs Heap Sort
Data Structures and Algorithms
Quick Sort  the last drop of performance
The recursive calls in quick sort are generally expensive on most architectures - the overhead of any procedure call is significant and reasonable improvements can be obtained with equivalent iterative algorithms.

Two things can be done to eke a little more performance out of your processor when sorting:

a. Quick sort - in its usual recursive form - has a reasonably high constant factor relative to a simpler sort such as insertion sort. Thus, when the partitions become small (n < ~10), a switch to insertion sort for the small partition will usually show a measurable speed-up. (The point at which it becomes effective to switch to the insertion sort is extremely sensitive to architectural features and needs to be determined for any target processor: although a value of ~10 is a reasonable guess!)
b. Write the whole algorithm in an iterative form. This is left for a tutorial exercise!

Continue on to Bin and radix sorting
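Point (a) can be sketched as follows. This is only a sketch: the Hoare-style partitioning scheme and the cutoff value of 10 are our choices for illustration (the cutoff should really be tuned per machine, as noted above):

```c
#include <assert.h>

#define CUTOFF 10    /* partitions smaller than this go to insertion sort */

/* Insertion sort on the closed range a[lo..hi] */
static void insertion_range( int a[], int lo, int hi )
{
    int i, j, v;
    for ( i = lo + 1; i <= hi; i++ ) {
        v = a[i];
        for ( j = i; j > lo && a[j-1] > v; j-- )
            a[j] = a[j-1];
        a[j] = v;
    }
}

/* Quick sort on a[lo..hi], handing small partitions to insertion sort */
void quicksort_hybrid( int a[], int lo, int hi )
{
    int i, j, pivot, t;
    if ( hi - lo < CUTOFF ) {              /* small partition: */
        insertion_range( a, lo, hi );      /* switch to the simpler sort */
        return;
    }
    pivot = a[(lo + hi) / 2];
    i = lo; j = hi;
    while ( i <= j ) {                     /* Hoare-style partition */
        while ( a[i] < pivot ) i++;
        while ( a[j] > pivot ) j--;
        if ( i <= j ) {
            t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }
    if ( lo < j ) quicksort_hybrid( a, lo, j );
    if ( i < hi ) quicksort_hybrid( a, i, hi );
}
```

The recursion bottoms out in insertion sort instead of descending all the way to single-element partitions, saving many of the most expensive (smallest) recursive calls.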
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort_perf.html [3/23/2004 2:28:01 PM]
Data Structures and Algorithms: Heap Sort
Data Structures and Algorithms
7.2 Heap Sort
We noted earlier, when discussing heaps, that, as well as their use in priority queues, they provide a means of sorting:

1. construct a heap,
2. add each item to it (maintaining the heap property!),
3. when all items have been added, remove them one by one (restoring the heap property as each one is removed).

Addition and deletion are both O(log n) operations. We need to perform n additions and deletions, leading to an O(n log n) algorithm. We will look at another efficient sorting algorithm, Quicksort, and then compare it with Heap sort.
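The idea can be sketched compactly in C. This version (ours, not the course listing) builds a max-heap in place with a sift-down operation and then repeatedly removes the maximum - the in-place variant rather than adding items one at a time:

```c
#include <assert.h>

/* Restore the heap property for the subtree rooted at i in a[0..n-1]
   (max-heap: every parent >= its children) */
static void sift_down( int a[], int n, int i )
{
    int child, t;
    while ( (child = 2*i + 1) < n ) {
        if ( child + 1 < n && a[child+1] > a[child] )
            child++;                    /* pick the larger child */
        if ( a[i] >= a[child] )
            break;                      /* heap property already holds */
        t = a[i]; a[i] = a[child]; a[child] = t;
        i = child;                      /* continue down the tree */
    }
}

void heap_sort( int a[], int n )
{
    int i, t;
    /* 1. Build the heap by sifting down each internal node */
    for ( i = n/2 - 1; i >= 0; i-- )
        sift_down( a, n, i );
    /* 2. Repeatedly move the maximum to the end of the array
          and restore the heap property: n deletions, O(log n) each */
    for ( i = n - 1; i > 0; i-- ) {
        t = a[0]; a[0] = a[i]; a[i] = t;
        sift_down( a, i, 0 );
    }
}
```

Step 1 is O(n) and step 2 performs n sift-downs of O(log n) each, giving the O(n log n) total noted above.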
Animation
The following animation uses a slight modification of the above approach to sort directly using a heap. You will note that it places all the items into the array first, then takes items at the bottom of the heap and restores the heap property, rather than restoring the heap property as each item is entered, as the algorithm above suggests. (This approach is described more fully in Cormen et al.) Note that the animation shows the data stored in an array (as it is in the implementation of the algorithm) and also in tree form, so that the heap structure can be clearly seen. Both representations are, of course, equivalent.
Heap Sort Animation
This animation was written by Woi Ang.
Continue on to Quick Sort
© John Morris, 1998
Please email comments to: morris@ee.uwa.edu.au Back to the Table of Contents
3.4 Recursion
Many examples of the use of recursion may be found: the technique is useful both for the definition of mathematical functions and for the definition of data structures. Naturally, if a data structure may be defined recursively, it may be processed by a recursive function!
recur: from the Latin re- (back) + currere (to run); to happen again, esp. at repeated intervals.
3.4.1 Recursive functions
Many mathematical functions can be defined recursively:
- factorial
- Fibonacci
- Euclid's GCD (greatest common divisor)
- Fourier Transform
Many problems can be solved recursively, eg games of all types from simple ones like the Towers of Hanoi problem to complex ones like chess. In games, the recursive solutions are particularly convenient because, having solved the problem by a series of recursive calls, you want to find out how you got to the solution. By keeping track of the move chosen at any point, the program call stack does this housekeeping for you! This is explained in more detail later.
3.4.2 Example: Factorial
One of the simplest examples of a recursive definition is that for the factorial function:

factorial( n ) = if ( n = 0 ) then 1
                 else n * factorial( n-1 )

A natural way to calculate factorials is to write a recursive function which matches this definition:

int fact( int n ) {
    if ( n == 0 ) return 1;
    else return n * fact( n-1 );
}

Note how this function calls itself to evaluate the next term. Eventually it will reach the termination condition and exit. However, before it reaches the termination condition, it will have pushed n stack frames onto the program's run-time stack.
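Those n stack frames are easy to avoid: the same definition can be computed with a simple loop in constant stack space. This is just an illustrative sketch (the name `fact_i` is invented here); an unsigned long is used because factorials overflow int very quickly.

```c
unsigned long fact(int n) {
    /* Recursive version: pushes one stack frame per call until n == 0 */
    if (n == 0) return 1;
    return n * fact(n - 1);
}

unsigned long fact_i(int n) {
    /* Iterative version: a single frame, constant stack space */
    unsigned long f = 1;
    while (n > 0)
        f *= n--;
    return f;
}
```

Both compute the same values; the difference is only in the stack usage, which is the theme taken up in the next paragraphs.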
The termination condition is obviously extremely important when dealing with recursive functions. If it is omitted, then the function will continue to call itself until the program runs out of stack space, usually with moderately unpleasant results! Failure to include a correct termination condition in a recursive function is a recipe for disaster!
Another commonly used (and abused!) example of a recursive function is the calculation of Fibonacci numbers. Following the definition:

fib( n ) = if ( n = 0 ) then 1
           if ( n = 1 ) then 1
           else fib( n-1 ) + fib( n-2 )

one can write:

int fib( int n ) {
    if ( (n == 0) || (n == 1) ) return 1;
    else return fib(n-1) + fib(n-2);
}

Short and elegant, it uses recursion to provide a neat solution that is actually a disaster! We shall revisit this and show why it is such a disaster later.
Data structures may also be recursively defined. One of the most important classes of structure, trees, allows recursive definitions which lead to simple (and efficient) recursive functions for manipulating them. But in order to see why trees are valuable structures, let's first examine the problem of searching.
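A quick way to see the disaster coming is to count the calls the naive version makes; the global counter here is an illustrative addition, not part of the original function.

```c
static long calls = 0;   /* counts invocations of fib */

long fib(int n) {
    calls++;             /* every call, including repeated sub-problems */
    if (n == 0 || n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
```

Because fib(n-1) and fib(n-2) each recompute the same smaller sub-problems from scratch, the number of calls grows like the Fibonacci numbers themselves: fib(10) already makes 177 calls, and the count roughly doubles with each increment of n.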
Key terms
Termination condition
The condition which terminates a series of recursive calls, and prevents the program from running out of space for stack frames!
Continue on to Searching
© John Morris, 1998
Back to the Table of Contents
Recursively Defined Lists
We can define a list as:
a. empty, or
b. containing a node and a link to a list.
A list can be scanned using a recursive function, eg to count the number of items in a list:

int ListCount( List l ) {
    if ( l == NULL ) return 0;
    else return 1 + ListCount( l->next );
}

However, it turns out to be much faster to write this function without the recursive call:

int ListCount( List l ) {
    int cnt = 0;
    while ( l != NULL ) {
        cnt++;
        l = l->next;
    }
    return cnt;
}

The overhead of calling a function is quite large on any machine, so the second, iterative version executes faster. (Another factor is that modern machines rely heavily on the cache for performance: the iterative code doesn't use so much memory for the call stack and so makes much better use of the cache.)
Back to trees
Table of Contents
© John Morris, 1996
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/tree_struct.c
/* Binary tree implementation of a collection */

struct t_node {
    void *item;
    struct t_node *left;
    struct t_node *right;
};

typedef struct t_node *Node;

struct t_collection {
    int size;        /* Needed by FindInCollection */
    Node root;
};
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/tree_add.c
/* Binary tree implementation of a collection */

static void AddToTree( Node *t, Node new ) {
    Node base;
    base = *t;
    /* If it's a null tree, just add it here */
    if ( base == NULL ) {
        *t = new;
        return;
    }
    else {
        if ( KeyLess( ItemKey( new->item ), ItemKey( base->item ) ) )
            AddToTree( &(base->left), new );
        else
            AddToTree( &(base->right), new );
    }
}

void AddToCollection( Collection c, void *item ) {
    Node new;
    assert( c != NULL );
    assert( item != NULL );
    /* Allocate space for a node for the new item */
    new = (Node)malloc( sizeof(struct t_node) );
    /* Attach the item to the node */
    new->item = item;
    new->left = new->right = (Node)0;
    AddToTree( &(c->root), new );   /* root, as declared in t_collection */
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/tree_find.c
/* Binary tree implementation of a collection */

/* Now we need to know whether one key is less than, equal to
   or greater than another */
extern int KeyCmp( void *a, void *b );
/* Returns -1, 0, +1 for a < b, a == b, a > b */

void *FindInTree( Node t, void *key ) {
    if ( t == (Node)0 ) return NULL;
    switch ( KeyCmp( key, ItemKey( t->item ) ) ) {
        case -1 : return FindInTree( t->left, key );
        case  0 : return t->item;
        case +1 : return FindInTree( t->right, key );
    }
    return NULL;   /* not reached if KeyCmp returns only -1, 0, +1 */
}

void *FindInCollection( Collection c, void *key ) {
    /* Find an item in a collection
       Pre-condition:
         (c is a collection created by a call to ConsCollection) &&
         (key != NULL)
       Post-condition: returns an item identified by key if one exists,
         otherwise returns NULL */
    assert( c != NULL );
    assert( key != NULL );
    /* Start at the root of the tree */
    return FindInTree( c->root, key );
}
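The comparison-driven descent in FindInTree is easier to experiment with in a self-contained form. The sketch below uses plain int keys instead of the course's void* items and external KeyCmp; the names `insert` and `find` are illustrative only.

```c
#include <stdlib.h>

/* A minimal stand-alone model of the binary search tree above */
struct node { int key; struct node *left, *right; };

static struct node *insert(struct node *t, int key) {
    if (t == NULL) {                   /* null tree: add the node here */
        t = malloc(sizeof *t);
        t->key = key;
        t->left = t->right = NULL;
    } else if (key < t->key)
        t->left = insert(t->left, key);
    else
        t->right = insert(t->right, key);
    return t;
}

static struct node *find(struct node *t, int key) {
    if (t == NULL) return NULL;        /* empty subtree: not found */
    if (key == t->key) return t;
    return key < t->key ? find(t->left, key)
                        : find(t->right, key);
}
```

Each step discards one subtree, so the search cost is proportional to the height of the tree, which is the point developed in the next section on unbalanced trees.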
Unbalanced Trees
If items are added to a binary tree in order then the following unbalanced tree results:
The worst-case search of this tree may require up to n comparisons: thus a binary tree's worst-case searching time is O(n). Later, we will look at red-black trees, which provide us with a strategy for avoiding this pathological behaviour.
Key terms
Balanced binary tree
A binary tree in which each leaf is the same distance from the root.
Back to Trees
Back to the Table of Contents
© John Morris, 1998
6.2 Heaps
Heaps are based on the notion of a complete tree, for which we gave an informal definition earlier. Formally:
A binary tree is completely full if it is of height h and has 2^(h+1) - 1 nodes.
A binary tree of height h is complete iff
a. it is empty, or
b. its left subtree is complete of height h-1 and its right subtree is completely full of height h-2, or
c. its left subtree is completely full of height h-1 and its right subtree is complete of height h-1.
A complete tree is filled from the left:
- all the leaves are on the same level, or on two adjacent levels, and
- all nodes at the lowest level are as far to the left as possible.
Heaps
A binary tree has the heap property iff
a. it is empty, or
b. the key in the root is larger than that in either child, and both subtrees have the heap property.
A heap can be used as a priority queue: the highest priority item is at the root and is trivially extracted. But if the root is deleted, we are left with two subtrees, and we must efficiently re-create a single tree with the heap property. The value of the heap structure is that we can both extract the highest priority item and insert a new one in O(log n) time. How do we do this?
Let's start with this heap. A deletion will remove the T at the root.
To work out how we're going to maintain the heap property, use the fact that a complete tree is filled from the left: the position which must become empty is the one occupied by the M. So put the M in the vacant root position.
This has violated the condition that the root must be greater than each of its children. So interchange the M with the larger of its children.
The left subtree has now lost the heap property. So again interchange the M with the larger of its children.
This tree is now a heap again, so we're finished. We need to make at most h interchanges of the root of a subtree with one of its children to fully restore the heap property. Thus deletion from a heap is O(h) or O(log n).
Addition to a heap
To add an item to a heap, we follow the reverse procedure: place it in the next leaf position and move it up. Again, we require O(h) or O(log n) exchanges.
Storage of complete trees
The properties of a complete tree lead to a very efficient storage mechanism using n sequential locations in an array. If we number the nodes from 1 at the root and place:
- the left child of node k at position 2k, and
- the right child of node k at position 2k+1,
then the 'fill from the left' nature of the complete tree ensures that the heap can be stored in consecutive locations in an array. Viewed as an array, we can see that the nth node is always in index position n.
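These index relations can be encoded directly as macros (using the 1-based node numbering of the text; PARENT is the natural companion of LEFT and RIGHT):

```c
#define LEFT(k)   (2*(k))      /* left child of node k */
#define RIGHT(k)  (2*(k)+1)    /* right child of node k */
#define PARENT(k) ((k)/2)      /* parent of node k (integer division) */
```

Note that PARENT(LEFT(k)) and PARENT(RIGHT(k)) both give back k, since integer division discards the +1; this is why navigating the heap needs no pointers at all.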
The code for extracting the highest priority item from a heap is, naturally, recursive. Once we've extracted the root (highest priority) item and swapped the last item into its place, we simply call MoveDown recursively until we get to the bottom of the tree. (See the listing of heap_delete.c in the source listings.)
Note the macros LEFT and RIGHT which simply encode the relation between the index of a node and its left and right children. Similarly the EMPTY macro encodes the rule for determining whether a subtree is empty or not. Inserting into a heap follows a similar strategy, except that we use a MoveUp function to move the newly added item to its correct place. (For the MoveUp function, a further macro which defines the PARENT of a node would normally be added.) Heaps provide us with a method of sorting, known as heapsort. However, we will examine and analyse the simplest method of sorting first.
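A MoveUp function, which the text mentions but the listings omit, might look like the sketch below. It works on a simplified 1-based int-array heap rather than the course's Collection of void* items with ItemCmp, so the name `move_up` and the model are assumptions for illustration.

```c
#define PARENT(k) ((k) / 2)

void move_up(int a[], int k) {
    /* a[1..k] is a heap except possibly at position k:
       move a[k] up until its parent is no smaller */
    while (k > 1 && a[PARENT(k)] < a[k]) {
        int t = a[k];            /* swap with the parent */
        a[k] = a[PARENT(k)];
        a[PARENT(k)] = t;
        k = PARENT(k);           /* continue up the tree */
    }
}
```

An insertion therefore places the new item in the next free leaf position and calls move_up on it, giving the O(log n) bound claimed above.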
Animation
In the animation, note that both the array representation (used in the implementation of the algorithm) and the (logical) tree representation are shown. This is to demonstrate how the tree is restructured to make a heap again after every insertion or deletion. Priority Queue Animation This animation was written by Woi Ang. Please email comments to: morris@ee.uwa.edu.au
Key terms
Complete tree
A balanced tree in which the distance from the root to any leaf is either h or h-1.
Continue on to Sorting
© John Morris, 1998
Back to the Table of Contents
5. Complexity
Rendering mathematical symbols with HTML is really painful!
Please don't suggest latex2html .. its tendency to put every symbol in an individual GIF file makes it equally painful!
Please load the postscript file instead  you will need a postscript viewer. Continue on to Queues
© John Morris, 1998
Back to the Table of Contents
4 Searching
Computer systems are often used to store large amounts of data from which individual records must be retrieved according to some search criterion. Thus the efficient storage of data to facilitate fast searching is an important issue. In this section, we shall investigate the performance of some searching algorithms and the data structures which they use.
4.1 Sequential Searches
Let's examine how long it will take to find an item matching a key in the collections we have discussed so far. We're interested in:
a. the average time,
b. the worst-case time, and
c. the best possible time.
However, we will generally be most concerned with the worst-case time, as calculations based on worst-case times can lead to guaranteed performance predictions. Conveniently, the worst-case times are generally easier to calculate than average times.
If there are n items in our collection, whether it is stored as an array or as a linked list, then it is obvious that in the worst case, when there is no item in the collection with the desired key, n comparisons of the key with keys of the items in the collection will have to be made.
To simplify analysis and comparison of algorithms, we look for a dominant operation and count the number of times that dominant operation has to be performed. In the case of searching, the dominant operation is the comparison. Since the search requires n comparisons in the worst case, we say this is an O(n) (pronounced "big-Oh-n" or "Oh-n") algorithm. The best case, in which the first comparison returns a match, requires a single comparison and is O(1).
The average time depends on the probability that the key will be found in the collection; this is something that we would not expect to know in the majority of cases. Thus in this case, as in most others, estimation of the average time is of little utility. If the performance of the system is vital, ie it's part of a life-critical system, then we must use the worst case in our design calculations, as it represents the best guaranteed performance.
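For an array, the sequential search just analysed is only a few lines; this sketch uses int keys, and the name `seq_search` is illustrative rather than the course's API.

```c
/* Sequential search: returns the index of key in a[0..n-1], or -1.
   Worst case (key absent): n comparisons -> O(n) */
int seq_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)      /* the dominant operation */
            return i;
    return -1;
}
```

Counting executions of the comparison in the loop gives exactly the n worst-case comparisons described above.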
4.2 Binary Search
However, if we place our items in an array and sort them in either ascending or descending order on the key first, then we can obtain much better performance with an algorithm called binary search. In binary search, we first compare the key with the item in the middle position of the array. If there's a match, we can return immediately. If the key is less than the middle key, then the item sought must lie in the lower half of the array; if it's greater then the item sought must lie in the upper half of the array. So we repeat the procedure on the lower (or upper) half of the array.
Our FindInCollection function can now be implemented:

static void *bin_search( Collection c, int low, int high, void *key ) {
    int mid, cmp;
    /* Termination check */
    if ( low > high ) return NULL;
    mid = (high + low) / 2;
    cmp = memcmp( ItemKey( c->items[mid] ), key, c->size );
    /* memcmp only guarantees the sign of its result,
       so test that rather than switching on exactly -1/+1 */
    if ( cmp == 0 )
        /* Match, return item found */
        return c->items[mid];
    else if ( cmp > 0 )
        /* key is less than mid, search lower half */
        return bin_search( c, low, mid-1, key );
    else
        /* key is greater than mid, search upper half */
        return bin_search( c, mid+1, high, key );
}

void *FindInCollection( Collection c, void *key ) {
    /* Find an item in a collection
       Pre-condition:
         c is a collection created by ConsCollection,
         c is sorted in ascending order of the key,
         key != NULL
       Post-condition: returns an item identified by key if one exists,
         otherwise returns NULL */
    int low, high;
    low = 0; high = c->item_cnt - 1;
    return bin_search( c, low, high, key );
}

Points to note:
a. bin_search is recursive: it determines whether the search key lies in the lower or upper half of the array, then calls itself on the appropriate half.
b. There is a termination condition (two of them, in fact!):
   i. if low > high, then the partition to be searched has no elements in it, and
   ii. if there is a match with the element in the middle of the current partition, then we can return immediately.
c. AddToCollection will need to be modified to ensure that each item added is placed in its correct place in the array. The procedure is simple:
   i. search the array until the correct spot to insert the new item is found,
   ii. move all the following items up one position, and
   iii. insert the new item into the empty position thus created.
d. bin_search is declared static. It is a local function and is not used outside this class: if it were not declared static, it would be exported and be available to all parts of the program. The static declaration also allows other classes to use the same name internally. static reduces the visibility of a function and should be used wherever possible to control access to functions!

Analysis
Each step of the algorithm divides the block of items being searched in half. We can divide a set of n items in half at most log2(n) times. Thus the running time of a binary search is proportional to log n, and we say this is an O(log n) algorithm.
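For comparison, binary search can also be written iteratively over a plain sorted int array; this sketch (the name `bin_search_i` is invented here) avoids both the recursion overhead and the course's Collection machinery.

```c
/* Iterative binary search on a sorted int array.
   Returns the index of key, or -1 if it is not present. */
int bin_search_i(const int a[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {                  /* partition still non-empty */
        int mid = low + (high - low) / 2;  /* avoids overflow of low+high */
        if (a[mid] == key) return mid;
        if (a[mid] < key) low = mid + 1;   /* search upper half */
        else              high = mid - 1;  /* search lower half */
    }
    return -1;
}
```

Each iteration halves the partition, so the loop executes at most about log2(n) times, matching the analysis above.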
Binary search requires a more complex program than our original search, and thus for small n it may run slower than the simple linear search. However, for large n:
Thus at large n, log n is much smaller than n; consequently an O(log n) algorithm is much faster than an O(n) one.
[Figure: plot of n and log n vs n]
We will examine this behaviour more formally in a later section. First, let's see what we can do about the insertion (AddToCollection) operation. In the worst case, insertion may require n operations to insert into a sorted list:
1. We can find the place in the list where the new item belongs using binary search in O(log n) operations.
2. However, we have to shuffle all the following items up one place to make way for the new one. In the worst case, the new item is the first in the list, requiring n move operations for the shuffle!
A similar analysis will show that deletion is also an O(n) operation.
If our collection is static, ie it doesn't change very often (if at all), then we may not be concerned with the time required to change its contents: we may be prepared for the initial build of the collection and the occasional insertion and deletion to take some time. In return, we will be able to use a simple data structure (an array) which has little memory overhead.
However, if our collection is large and dynamic, ie items are being added and deleted continually, then we can obtain considerably better performance using a data structure called a tree.
Key terms
Big Oh
A notation formally describing the set of all functions which are bounded above by a nominated function.
Binary search
A technique for searching an ordered list in which we first check the middle item and, based on that comparison, "discard" half the data. The same procedure is then applied to the remaining half until a match is found or there are no more items left.
Continue on to Trees
© John Morris, 1998
Back to the Table of Contents
8.2 Red-Black Trees
A red-black tree is a binary search tree with one extra attribute for each node: the colour, which is either red or black. We also need to keep track of the parent of each node, so that a red-black tree's node structure would be:

struct t_red_black_node {
    enum { red, black } colour;
    void *item;
    struct t_red_black_node *left, *right, *parent;
};

For the purpose of this discussion, the NULL nodes which terminate the tree are considered to be the leaves and are coloured black.

Definition of a red-black tree
A red-black tree is a binary search tree which has the following red-black properties:
1. Every node is either red or black.
2. Every leaf (NULL) is black.
3. If a node is red, then both its children are black.
4. Every simple path from a node to a descendant leaf contains the same number of black nodes.
Property 3 implies that on any path from the root to a leaf, red nodes must not be adjacent. However, any number of black nodes may appear in a sequence.
A basic red-black tree
Basic red-black tree with the sentinel nodes added. Implementations of the red-black tree algorithms will usually include the sentinel nodes as a convenient means of flagging that you have reached a leaf node. They are the NULL black nodes of property 2.
The number of black nodes on any path from, but not including, a node x down to a leaf is called the black-height of the node, denoted bh(x). We can prove the following lemma:
Lemma
A red-black tree with n internal nodes has height at most 2 log(n+1). (For a proof, see Cormen, p. 264.)
This demonstrates why the red-black tree is a good search tree: it can always be searched in O(log n) time.
As with heaps, additions and deletions from red-black trees destroy the red-black property, so we need to restore it. To do this we need to look at some operations on red-black trees.

Rotations
A rotation is a local operation in a search tree that preserves the in-order traversal key ordering. Note that in both trees, an in-order traversal yields: A x B y C.
The left_rotate operation may be encoded:

void left_rotate( Tree T, node x ) {
    node y;
    y = x->right;
    /* Turn y's left subtree into x's right subtree */
    x->right = y->left;
    if ( y->left != NULL )
        y->left->parent = x;
    /* y's new parent was x's parent */
    y->parent = x->parent;
    /* Set the parent to point to y instead of x */
    /* First see whether we're at the root */
    if ( x->parent == NULL )
        T->root = y;
    else if ( x == (x->parent)->left )
        /* x was on the left of its parent */
        x->parent->left = y;
    else
        /* x must have been on the right */
        x->parent->right = y;
    /* Finally, put x on y's left */
    y->left = x;
    x->parent = y;
}

Insertion
Insertion is somewhat complex and involves a number of cases. Note that we start by inserting the new node, x, in the tree just as we would for any other binary tree, using the tree_insert function. This new node is labelled red, and possibly destroys the red-black property. The main loop moves up the tree, restoring the red-black property.

void rb_insert( Tree T, node x ) {
    node y;
    /* Insert in the tree in the usual way */
    tree_insert( T, x );
    /* Now restore the red-black property */
    x->colour = red;
    while ( (x != T->root) && (x->parent->colour == red) ) {
        if ( x->parent == x->parent->parent->left ) {
            /* If x's parent is a left, y is x's right 'uncle' */
            y = x->parent->parent->right;
            if ( y->colour == red ) {
                /* case 1 - change the colours */
                x->parent->colour = black;
                y->colour = black;
                x->parent->parent->colour = red;
                /* Move x up the tree */
                x = x->parent->parent;
            }
            else {
                /* y is a black node */
                if ( x == x->parent->right ) {
                    /* and x is to the right */
                    /* case 2 - move x up and rotate */
                    x = x->parent;
                    left_rotate( T, x );
                }
                /* case 3 */
                x->parent->colour = black;
                x->parent->parent->colour = red;
                right_rotate( T, x->parent->parent );
            }
        }
        else {
            /* repeat the "if" part with right and left exchanged */
        }
    }
    /* Colour the root black */
    T->root->colour = black;
}
Here's an example of the insertion operation.
Animation
RedBlack Tree Animation This animation was written by Linda Luo, Mervyn Ng, Anita Lee, John Morris and Woi Ang. Please email comments to: morris@ee.uwa.edu.au
Examination of the code reveals only one loop. In that loop, the node at the root of the subtree whose redblack property we are trying to restore, x, may be moved up the tree at least one level in each iteration of the loop. Since the tree originally has O(log n) height, there are O(log n) iterations. The tree_insert routine also has O(log n) complexity, so overall the rb_insert routine also has O(log n) complexity.
Key terms
Red-black trees
Trees which remain balanced, and thus guarantee O(log n) search times, in a dynamic environment. Or, more importantly, since any tree can be rebalanced (but at considerable cost), red-black trees can be rebalanced in O(log n) time.
Continue on to AVL Trees
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/heap_delete.c
/* Extract the highest priority item from the heap */

/* The C array is indexed from 0, so node k's children are at 2k+1
   and 2k+2 here, although the text numbers the nodes from 1 */
#define LEFT(k)     (2*(k) + 1)
#define RIGHT(k)    (2*(k) + 2)
#define EMPTY(c,k)  ((k) >= c->item_cnt)
#define SWAP(i,j)   { void *x = c->items[i]; \
                      c->items[i] = c->items[j]; \
                      c->items[j] = x; }

void MoveDown( Collection c, int k ) {
    int larger, right, left;
    left = LEFT(k);
    right = RIGHT(k);
    if ( !EMPTY(c,left) )   /* Termination condition: k has no children */
    {
        larger = left;
        if ( !EMPTY(c,right) ) {
            if ( ItemCmp( c->items[right], c->items[larger] ) > 0 )
                larger = right;
        }
        if ( ItemCmp( c->items[k], c->items[larger] ) < 0 ) {
            SWAP( k, larger );
            MoveDown( c, larger );
        }
    }
}

void *HighestPriority( Collection c )
/* Return the highest priority item
   Pre-condition:
     (c is a collection created by a call to ConsCollection) &&
     (existing item count >= 1) && (item != NULL)
   Post-condition: item has been deleted from c */
{
    int cnt;
    void *save;
    assert( c != NULL );
    assert( c->item_cnt >= 1 );
    /* Save the root */
    save = c->items[0];
    /* Put the last item in the root */
    cnt = c->item_cnt;
    c->items[0] = c->items[cnt-1];
    /* Adjust the count */
    c->item_cnt--;
    /* Move the new root item down if necessary */
    MoveDown( c, 0 );
    return save;
}
6 Queues
Queues are dynamic collections which have some concept of order. This can be based on order of entry into the queue, giving us First-In-First-Out (FIFO) or Last-In-First-Out (LIFO) queues. Both of these can be built with linked lists: the simplest "add-to-head" implementation of a linked list gives LIFO behaviour. A minor modification, adding a tail pointer and adjusting the addition method implementation, will produce a FIFO queue.
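The tail-pointer modification can be sketched as below; the names `queue`, `enqueue` and `dequeue` are illustrative, not the course's API.

```c
#include <stdlib.h>

/* Linked-list FIFO queue: add at the tail, remove from the head.
   Both operations are O(1). */
struct qnode { void *item; struct qnode *next; };
struct queue { struct qnode *head, *tail; };

void enqueue(struct queue *q, void *item) {
    struct qnode *n = malloc(sizeof *n);
    n->item = item;
    n->next = NULL;
    if (q->tail == NULL)
        q->head = q->tail = n;   /* queue was empty */
    else {
        q->tail->next = n;       /* link after the current tail */
        q->tail = n;
    }
}

void *dequeue(struct queue *q) {
    if (q->head == NULL) return NULL;   /* empty queue */
    struct qnode *n = q->head;
    void *item = n->item;
    q->head = n->next;
    if (q->head == NULL) q->tail = NULL;  /* queue is now empty */
    free(n);
    return item;
}
```

Dropping the tail pointer and adding at the head instead would give the LIFO (stack) behaviour described above.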
Performance
A straightforward analysis shows that for both these cases, the time needed to add or delete an item is constant and independent of the number of items in the queue. Thus we class both addition and deletion as an O(1) operation. For any given real machine+operating system+language combination, addition may take c1 seconds and deletion c2 seconds, but we aren't interested in the value of the constant, it will vary from machine to machine, language to language, etc. The key point is that the time is not dependent on n  producing O(1) algorithms. Once we have written an O(1) method, there is generally little more that we can do from an algorithmic point of view. Occasionally, a better approach may produce a lower constant time. Often, enhancing our compiler, runtime system, machine, etc will produce some significant improvement. However O(1) methods are already very fast, and it's unlikely that effort expended in improving such a method will produce much real gain!
6.1 Priority Queues
Often the items added to a queue have a priority associated with them: this priority determines the order in which they exit the queue  highest priority items are removed first. This situation arises often in process control systems. Imagine the operator's console in a large automated factory. It receives many routine messages from all parts of the system: they are assigned a low priority because they just report the normal functioning of the system  they update various parts of the operator's console display simply so that there is some confirmation that there are no problems. It will make little difference if they are delayed or lost. However, occasionally something breaks or fails and alarm messages are sent. These have high priority because some action is required to fix the problem (even if it is mass evacuation because nothing can stop the imminent explosion!). Typically such a system will be composed of many small units, one of which will be a buffer for messages received by the operator's console. The communications system places messages in the buffer so that communications links can be freed for further messages while the console software is processing the message. The console software extracts messages from the buffer and updates
appropriate parts of the display system. Obviously we want to sort messages on their priority so that we can ensure that the alarms are processed immediately and not delayed behind a few thousand routine messages while the plant is about to explode.
As we have seen, we could use a tree structure, which generally provides O(log n) performance for both insertion and deletion. Unfortunately, if the tree becomes unbalanced, performance will degrade to O(n) in pathological cases. This will probably not be acceptable when dealing with dangerous industrial processes, nuclear reactors, flight control systems and other life-critical systems.

Aside
The great majority of computer systems would fall into the broad class of information systems, which simply store and process information for the benefit of people who make decisions based on that information. Obviously, in such systems, it usually doesn't matter whether it takes 1 or 100 seconds to retrieve a piece of data; this simply determines whether you take your coffee break now or later. However, as we'll see, using the best known algorithms is usually easy and straightforward: if they're not already coded in libraries, they're in textbooks. You don't even have to work out how to code them! In such cases, it's just your reputation that's going to suffer if someone (who has studied his or her algorithms text!) comes along later and says "Why on earth did X (you!) use this O(n^2) method? There's a well-known O(n) one!" Of course, hardware manufacturers are very happy if you use inefficient algorithms: it drives the demand for new, faster hardware and keeps their profits high!

There is a structure which will provide guaranteed O(log n) performance for both insertion and deletion: it's called a heap.
Key terms
FIFO queue
A queue in which the first item added is always the first one out.
LIFO queue
A queue in which the item most recently added is always the first one out.
Priority queue
A queue in which the items are sorted so that the highest priority item is always the next one to be extracted.
Life-critical systems
Systems on which we depend for safety and which may result in death or injury if they fail: medical monitoring, industrial plant monitoring and control, and aircraft control systems are examples of life-critical systems.
Real-time systems
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/queues.html (2 of 3) [3/23/2004 2:50:24 PM]
Data Structures and Algorithms: Queues
Systems in which time is a constraint. A system which must respond to some event (eg the change in attitude of an aircraft caused by some atmospheric event like windshear) within a fixed time to maintain stability or continue correct operation (eg the aircraft systems must make the necessary adjustments to the control surfaces before the aircraft falls out of the sky!). Continue on to Heaps
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Red-Black Trees
Data Structures and Algorithms
8.2 Red-Black Tree Operation
Here's an example of insertion into a red-black tree (taken from Cormen, p269).
Here's the original tree .. Note that in the following diagrams, the black sentinel nodes have been omitted to keep the diagrams simple.
The tree insert routine has just been called to insert node "4" into the tree. This is no longer a red-black tree: there are two successive red nodes on the path 11 - 2 - 7 - 5 - 4. Mark the new node, x, and its uncle, y. y is red, so we have case 1 ...
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/red_black_op.html (1 of 3) [3/23/2004 2:50:41 PM]
Change the colours of nodes 5, 7 and 8.
Move x up to its grandparent, 7. x's parent (2) is still red, so this isn't a red-black tree yet. Mark the uncle, y. In this case, the uncle is black, so we have case 2 ...
Move x up and rotate left.
Still not a red-black tree .. the uncle is black, but x's parent is to the left ..
Change the colours of 7 and 11 and rotate right ..
This is now a red-black tree, so we're finished! O(log n) time!
Back to Red Black Trees
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: AVL Trees
8.3 AVL Trees
An AVL tree is another balanced binary search tree. Named after their inventors, Adelson-Velskii and Landis, they were the first dynamically balanced trees to be proposed. Like red-black trees, they are not perfectly balanced, but pairs of sub-trees differ in height by at most 1, maintaining an O(log n) search time. Addition and deletion operations also take O(log n) time.

Definition of an AVL tree
An AVL tree is a binary search tree which has the following properties:
1. The sub-trees of every node differ in height by at most one.
2. Every sub-tree is an AVL tree.
Balance requirement for an AVL tree: the left and right subtrees differ by at most 1 in height.
You need to be careful with this definition: it permits some apparently unbalanced trees! For example, here are some trees:

Tree | AVL tree?
Yes. Examination shows that each left sub-tree has a height 1 greater than each right sub-tree.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/AVL.html (1 of 3) [3/23/2004 2:51:00 PM]
No. The sub-tree with root 8 has height 4 and the sub-tree with root 18 has height 2.
Insertion

As with the red-black tree, insertion is somewhat complex and involves a number of cases. Implementations of AVL tree insertion may be found in many textbooks: they rely on adding an extra attribute, the balance factor, to each node. This factor indicates whether the tree is left-heavy (the height of the left sub-tree is 1 greater than that of the right sub-tree), balanced (both sub-trees are the same height) or right-heavy (the height of the right sub-tree is 1 greater than that of the left sub-tree). If the balance would be destroyed by an insertion, a rotation is performed to correct the balance.

A new item has been added to the left subtree of node 1, causing its height to become 2 greater than 2's right subtree (shown in green). A right-rotation is performed to correct the imbalance.
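The right-rotation just described can be sketched in C. The node structure and function name below are assumptions for illustration, and the balance-factor bookkeeping is omitted for brevity; the sketch only shows how the pointers move.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative AVL node; balance factor omitted for brevity */
typedef struct avl_node {
    int key;
    struct avl_node *left, *right;
} avl_node;

/* Right rotation: the left child b becomes the new root of this subtree.
   Used when the left subtree has become too tall. */
avl_node *rotate_right(avl_node *a) {
    avl_node *b = a->left;
    a->left = b->right;   /* b's right subtree moves under a */
    b->right = a;         /* a becomes b's right child */
    return b;             /* b is the new subtree root */
}
```

A left rotation is the mirror image; the double-rotation cases compose the two.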
Key terms
AVL trees
Trees which remain balanced, and thus guarantee O(log n) search times, in a dynamic environment. Or, more importantly, since any tree can be rebalanced, but at considerable cost, AVL trees
can be rebalanced in O(log n) time.
Continue on to General n-ary Trees
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Quick Sort
7.3 Quick Sort
Quicksort is a very efficient sorting algorithm invented by C.A.R. Hoare. It has two phases:
- the partition phase and
- the sort phase.
As we will see, most of the work is done in the partition phase  it works out where to divide the work. The sort phase simply sorts the two smaller problems that are generated in the partition phase. This makes Quicksort a good example of the divide and conquer strategy for solving problems. (You've already seen an example of this approach in the binary search procedure.) In quicksort, we divide the array of items to be sorted into two partitions and then call the quicksort procedure recursively to sort the two partitions, ie we divide the problem into two smaller ones and conquer by solving the smaller ones. Thus the conquer part of the quicksort routine looks like this:
void quicksort( int a[], int low, int high )
    {
    int pivot;
    /* Termination condition! */
    if ( high > low )
        {
        pivot = partition( a, low, high );
        quicksort( a, low, pivot-1 );
        quicksort( a, pivot+1, high );
        }
    }
Initial Step - First Partition
Sort Left Partition in the same way

For the strategy to be effective, the partition phase must ensure that all the items in one part (the lower part) are less than all those in the other (upper) part. To do this, we choose a pivot element and arrange that all the items in the lower part are less than the pivot and all those in the upper part greater than it. In the most general case, we don't know anything about the items to be sorted, so any choice of the pivot element will do: the first element is a convenient one.

As an illustration of this idea, you can view this animation, which shows a partition algorithm in which items to be sorted are copied from the original array to a new one: items smaller than the pivot are placed to the left of the new array and items greater than the pivot are placed on the right. In the final step, the pivot is dropped into the remaining slot in the middle.

QuickSort Animation
This animation was based on a suggestion made by Jeff Rohl; it was written by Woi Ang.

Observe that the animation uses two arrays for the items being sorted: thus it requires O(n) additional space to operate. However, it's possible to partition the array in place. The next page shows a conventional implementation of the partition phase which swaps elements in the same array and thus avoids using extra space.
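The copying partition shown in the animation might be sketched as follows. The function name, the int keys and the fixed-size scratch buffer are assumptions for this illustration; the real point is the O(n) extra space it needs.

```c
#include <assert.h>

/* Partition a[low..high] via a scratch buffer: items below the pivot
   (a[low]) are copied to the left end, other items to the right end,
   and the pivot drops into the single slot left in the middle.
   Returns the final index of the pivot. */
int partition_copy(int a[], int low, int high) {
    int tmp[64];                 /* assumed big enough for the demo */
    int pivot = a[low];
    int l = 0, r = high - low;   /* next free slots at each end of tmp */
    int i;
    for (i = low + 1; i <= high; i++) {
        if (a[i] < pivot) tmp[l++] = a[i];
        else              tmp[r--] = a[i];
    }
    tmp[l] = pivot;              /* l == r now: the middle slot */
    for (i = low; i <= high; i++)
        a[i] = tmp[i - low];
    return low + l;
}
```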
Key terms
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort.html (1 of 2) [3/23/2004 2:51:09 PM]
Divide and Conquer Algorithms Algorithms that solve (conquer) problems by dividing them into smaller subproblems until the problem is so small that it is trivially solved. in place In place sorting algorithms don't require additional temporary space to store elements as they sort; they use the space originally occupied by the elements. Continue on to Quick sort: Partition in place
© John Morris, 1998
Back to the Table of Contents
Quick Sort: Partition in place
Most implementations of quick sort make use of the fact that you can partition in place by keeping two pointers: one moving in from the left and a second moving in from the right. They are moved towards the centre until the left pointer finds an element greater than the pivot and the right one finds an element less than the pivot. These two elements are then swapped. The pointers are then moved inward again until they "cross over". The pivot is then swapped into the slot to which the right pointer points and the partition is complete.

int partition( int *a, int low, int high )
    {
    int left, right;
    int pivot_item;
    pivot_item = a[low];
    left = low;
    right = high;
    while ( left < right ) {
        /* Move left while item < pivot */
        while( a[left] <= pivot_item ) left++;
        /* Move right while item > pivot */
        while( a[right] > pivot_item ) right--;
        if ( left < right ) SWAP(a,left,right);
        }
    /* right is final position for the pivot */
    a[low] = a[right];
    a[right] = pivot_item;
    return right;
    }
Note that the above code does not check that left does not exceed the array bound. You need to add this check before performing the swaps, both the one in the loop and the final one outside the loop.
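One way to add that check is sketched below: the same partition routine, restricted to int keys for illustration, with the left scan guarded so it cannot run past the end of the partition.

```c
#include <assert.h>

#define SWAP(a, i, j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* partition with the bound check added: a sketch using int keys */
int partition(int a[], int low, int high) {
    int left = low, right = high;
    int pivot_item = a[low];
    while (left < right) {
        /* Move left while item <= pivot, but never past high */
        while (left <= high && a[left] <= pivot_item)
            left++;
        /* Move right while item > pivot */
        while (a[right] > pivot_item)
            right--;
        if (left < right)
            SWAP(a, left, right);
    }
    /* right is the final position for the pivot */
    a[low] = a[right];
    a[right] = pivot_item;
    return right;
}
```

Without the `left <= high` guard, a partition in which every item is less than or equal to the pivot lets `left` walk off the end of the array.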
partition ensures that all items less than the pivot precede it and returns the position of the pivot. This meets our condition for dividing the problem: all the items in the lower half are known to be less than the pivot and all items in the upper half are known to be greater than it.

Note that we have used our ItemCmp function in the partition function. This assumes that there is an external declaration for ItemCmp and that, in any one program, we only want to sort one type of object. Generally this will not be acceptable, so the formal specification for quicksort in the Unix and ANSI C libraries includes a function, compar, which is supplied to qsort when it is called. Passing the function compar, which defines the ordering of objects, when qsort is called avoids this problem, in the same way that we passed an ItemCmp function to ConsCollection.
Analysis
The partition routine examines every item in the array at most once, so it is clearly O(n). Usually, the partition routine will divide the problem into two roughly equal-sized partitions. We know that we can divide n items in half log₂ n times. This makes quicksort an O(n log n) algorithm, equivalent to heapsort. However, we have made an unjustified assumption: see if you can identify it before you continue.

QuickSort Animation
This animation uses the partition in place approach; it was written by Chien Wei Tan.
Please email comments to morris@ee.uwa.edu.au
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort1a.html (1 of 2) [3/23/2004 2:51:26 PM]
Key terms
Divide and Conquer Algorithms Algorithms that solve (conquer) problems by dividing them into smaller subproblems until the problem is so small that it is trivially solved.
Continue on to Quick Sort (cont)
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Bin Sort
7.4 Bin Sort
Assume that
1. the keys of the items that we wish to sort lie in a small fixed range and
2. there is only one item with each value of the key.
Then we can sort with the following procedure:
1. Set up an array of "bins", one for each value of the key, in order,
2. Examine each item and use the value of the key to place it in the appropriate bin.
Now our collection is sorted and it only took n operations, so this is an O(n) operation. However, note that it will only work under very restricted conditions.

Constraints on bin sort

To understand these restrictions, let's be a little more precise about the specification of the problem and assume that there are m values of the key. To recover our sorted collection, we need to examine each bin. This adds a third step to the algorithm above,
3. Examine each bin to see whether there's an item in it.
which requires m operations. So the algorithm's time becomes:
T(n) = c1*n + c2*m
and it is strictly O(n + m). Now if m <= n, this is clearly O(n). However if m >> n, then it is O(m). For example, if we wish to sort 10^4 32-bit integers, then m = 2^32 and we need 2^32 operations (and a rather large memory!). For n = 10^4:
n log n ~ 10^4 x 13 ~ 2^13 x 2^4 ~ 2^17
So quicksort or heapsort would clearly be preferred. An implementation of bin sort might look like:

#define EMPTY -1 /* Some convenient flag */
void bin_sort( int *a, int *bin, int n ) {
    int i;
    /* Precondition: for 0<=i<n : 0 <= a[i] < M */
    /* Mark all the bins empty */
    for(i=0;i<M;i++) bin[i] = EMPTY;
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/binsort.html (1 of 3) [3/23/2004 2:51:30 PM]
    for(i=0;i<n;i++)
        bin[ a[i] ] = a[i];
}

main() {
    int a[N], bin[M];   /* for all i: 0 <= a[i] < M */
    ....                /* Place data in a */
    bin_sort( a, bin, N );
}
If there are duplicates, then each bin can be replaced by a linked list. The third step then becomes:
3. Link all the lists into one list.
We can add an item to a linked list in O(1) time. There are n items requiring O(n) time. Linking a list to another list simply involves making the tail of one list point to the other, so it is O(1). Linking m such lists obviously takes O(m) time, so the algorithm is still O(n+m).

In contrast to the other sorts, which sort in place and don't require additional memory, bin sort requires additional memory for the bins and is a good example of trading space for performance. Although memory tends to be cheap in modern processors, so that we would normally use memory rather profligately to obtain performance, memory consumes power, and in some circumstances, eg computers in spacecraft, power might be a higher constraint than performance.

Having highlighted this constraint, there is a version of bin sort which can sort in place:

#define EMPTY -1 /* Some convenient flag */
void bin_sort( int *a, int n ) {
    int i;
    /* Precondition: for 0<=i<n : 0 <= a[i] < n */
    for(i=0;i<n;i++)
        while ( a[i] != i )
            SWAP( a[i], a[a[i]] );
}

However, this assumes that there are n distinct keys in the range 0 .. n-1. In addition to this restriction, the SWAP operation is relatively expensive, so that this version trades space for time. The bin sorting strategy may appear rather limited, but it can be generalised into a strategy known as Radix sorting.
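Putting the pieces together, a complete runnable version of the simple bin sort might look like this sketch. Here the bin count m is passed as a parameter rather than using the global M of the fragment above; that change, and compacting the result back into a, are assumptions made purely for illustration.

```c
#include <assert.h>

#define EMPTY -1   /* Some convenient flag */

/* Simple bin sort: each key 0 <= a[i] < m occurs at most once.
   The sorted result is written back into a. */
void bin_sort(int *a, int *bin, int n, int m) {
    int i, j;
    for (i = 0; i < m; i++)            /* O(m): mark all bins empty */
        bin[i] = EMPTY;
    for (i = 0; i < n; i++)            /* O(n): drop each item in its bin */
        bin[a[i]] = a[i];
    for (i = j = 0; i < m; i++)        /* O(m): scan the bins in order */
        if (bin[i] != EMPTY)
            a[j++] = bin[i];
}
```

The three loops make the O(n + m) cost visible directly in the code.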
Bin Sort Animation
This animation was written by Woi Ang.
Continue on to Radix sorting
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Radix Sort
7.5 Radix Sorting
The bin sorting approach can be generalised in a technique that is known as radix sorting.

An example
Assume that we have n integers in the range (0, n²) to be sorted. (For a bin sort, m = n², and we would have an O(n+m) = O(n²) algorithm.) Sort them in two phases:
1. Using n bins, place a_i into bin a_i mod n,
2. Repeat the process using n bins, placing a_i into bin floor(a_i/n), being careful to append to the end of each bin.
This results in a sorted list.
As an example, consider the list of integers:
36 9 0 25 1 49 64 16 81 4
n is 10 and the numbers all lie in (0,99). After the first phase, we will have:

Bin      0   1      2   3   4      5    6       7   8   9
Content  0   1 81   -   -   64 4   25   36 16   -   -   9 49
Note that in this phase, we placed each item in a bin indexed by the least significant decimal digit. Repeating the process will produce:

Bin      0         1    2    3    4    5   6    7   8    9
Content  0 1 4 9   16   25   36   49   -   64   -   81   -
In this second phase, we used the leading decimal digit to allocate items to bins, being careful to add each item to the end of the bin. We can apply this process to numbers of any size expressed to any suitable base or radix.
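The two-phase example above can be sketched directly in C. The function name and the fixed per-bin capacity are assumptions for this base-10, two-digit illustration; appending to each bin is what keeps every pass stable.

```c
#include <assert.h>

/* Radix sort for keys in (0, 99): two stable bin-sort passes,
   first on a[i] mod 10, then on a[i] / 10. */
void radix_sort_2digit(int a[], int n) {
    int bins[10][32];    /* 32: assumed bound on items per bin for the demo */
    int count[10];
    int pass, i, b, j, k;
    for (pass = 0; pass < 2; pass++) {
        for (b = 0; b < 10; b++) count[b] = 0;
        for (i = 0; i < n; i++) {
            b = (pass == 0) ? a[i] % 10 : a[i] / 10;
            bins[b][count[b]++] = a[i];   /* append: keeps the pass stable */
        }
        for (k = 0, b = 0; b < 10; b++)   /* concatenate bins back into a */
            for (j = 0; j < count[b]; j++)
                a[k++] = bins[b][j];
    }
}
```

Running it on the list from the text (36 9 0 25 1 49 64 16 81 4) reproduces the two bin tables above.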
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/radixsort.html (1 of 3) [3/23/2004 2:51:41 PM]
7.5.1 Generalised Radix Sorting

We can further observe that it's not necessary to use the same radix in each phase. Suppose that the sorting key is a sequence of fields, each with bounded ranges, eg the key is a date using the structure:

typedef struct t_date {
    int day;
    int month;
    int year;
} date;
If the ranges for day and month are limited in the obvious way, and the range for year is suitably constrained, eg 1900 < year <= 2000, then we can apply the same procedure except that we'll employ a different number of bins in each phase. In all cases, we'll sort first using the least significant "digit" (where "digit" here means a field with a limited range), then using the next significant "digit", placing each item after all the items already in the bin, and so on.

Assume that the key of the item to be sorted has k fields, f_i, i = 0..k-1, and that each f_i has s_i discrete values. Then a generalised radix sort procedure can be written:

radixsort( A, n ) {
    for(i=0;i<k;i++) {
        for(j=0;j<s_i;j++)                       /* O(s_i) */
            bin[j] = EMPTY;
        for(j=0;j<n;j++)                         /* O(n)   */
            move A_j to the end of bin[A_j->f_i]
        for(j=0;j<s_i;j++)                       /* O(s_i) */
            concatenate bin[j] onto the end of A;
    }
}

Total time: O(sum over i of (2*s_i + n)) = O(k*n + sum of the s_i)
Now if, for example, the keys are integers in (0, b^k - 1), for some constant k, then the keys can be
viewed as k-digit base-b integers. Thus, s_i = b for all i and the time complexity becomes O(n+kb) or O(n). This result depends on k being constant. If k is allowed to increase with n, then we have a different picture. For example, it takes log₂ n binary digits to represent an integer < n. If the key length were allowed to increase with n, so that k = log n, then we would have:
an O(n log n) sort. Another way of looking at this is to note that if the range of the key is restricted to (0, b^k - 1), then we will be able to use the radix sort approach effectively if we allow duplicate keys when n > b^k. However, if we need to have unique keys, then k must increase to at least log_b n. Thus, as n increases, we need to have log n phases, each taking O(n) time, and the radix sort has the same complexity as quick sort!
Sample code
This sample code sorts arrays of integers on various radices: the number of bits used for each radix can be set with the call to SetRadices. The Bins class is used in each phase to collect the items as they are sorted. ConsBins is called to set up a set of bins: each bin must be large enough to accommodate the whole array, so RadixSort can be very expensive in its memory usage!
- RadixSort.h
- RadixSort.c
- Bins.h
- Bins.c

Please email comments to: morris@ee.uwa.edu.au
Back to the Table of Contents
Radix Sort Animation This animation was written by Woi Ang. Continue on to Search Trees
© John Morris, 1998
Data Structures and Algorithms  xx
NAME
qsort - quicker sort

SYNOPSIS
qsort(base, nel, width, compar)
char *base;
int nel, width;
int (*compar)();

DESCRIPTION
qsort() is an implementation of the quicker-sort algorithm. It sorts a table of data in place.
base points to the element at the base of the table. nel is the number of elements in the table. width is the size, in bytes, of each element in the table. compar is the name of the comparison function, which is called with two arguments that point to the elements being compared. The function must return an integer less than, equal to, or greater than zero, according as the first argument is to be considered less than, equal to, or greater than the second.

NOTES
The pointer to the base of the table should be of type pointer-to-element, and cast to type pointer-to-character. The comparison function need not compare every byte, so arbitrary data may be contained in the elements in addition to the values being compared. The order in the output of two items which compare as equal is unpredictable.

SEE ALSO
sort(1V), bsearch(3), lsearch(3), string(3)

EXAMPLE
The following program sorts a simple array:

static int intcompare(i,j)
int *i, *j;
{
    return(*i - *j);
}

main()
{
    int a[10];
    int i;
    a[0] = 9;
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort_man.html (1 of 2) [3/23/2004 2:51:43 PM]
    a[1] = 8;
    a[2] = 7;
    a[3] = 6;
    a[4] = 5;
    a[5] = 4;
    a[6] = 3;
    a[7] = 2;
    a[8] = 1;
    a[9] = 0;

    qsort(a, 10, sizeof(int), intcompare);
    for (i=0; i<10; i++)
        printf(" %d", a[i]);
    printf("\n");
}
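The man page example is written in old K&R style. A sketch of the equivalent comparison function in ANSI C, matching the standard library's qsort prototype, might look like this (the `(i > j) - (i < j)` idiom is an assumption chosen to avoid the overflow that `*i - *j` can suffer on extreme values):

```c
#include <assert.h>
#include <stdlib.h>

/* ANSI C comparison function: qsort passes const void * pointers,
   which we convert back to int before comparing */
int intcompare(const void *pi, const void *pj) {
    int i = *(const int *)pi;
    int j = *(const int *)pj;
    return (i > j) - (i < j);   /* -1, 0 or +1; no overflow risk */
}

/* The call itself is unchanged:
       qsort(a, 10, sizeof(int), intcompare);            */
```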
Back to Quicksort Table of Contents
© John Morris, 1996
Data Structures and Algorithms: Quick Sort (2)
7.3 Quick Sort (cont)
Quick Sort - The Facts!
Quick Sort is generally the best known sorting algorithm, but it has a serious limitation, which must be understood before using it in certain applications.

What happens if we apply the qsort routine on the previous page to an already sorted array? This is certainly a case where we'd expect the performance to be quite good! However, the first attempt to partition the problem into two problems will return an empty lower partition: the first element is the smallest. Thus the first partition call simply chops off one element and calls qsort for a partition with n-1 items! This happens a further n-2 times! Each partition call still requires O(n) operations, and we have generated O(n) such calls. In the worst case, quicksort is an O(n²) algorithm!

Can we do anything about this? A number of variations to the simple quicksort will generally produce better results: rather than choose the first item as the pivot, some other strategies work better.

Median-of-3 Pivot
For example, the median-of-3 pivot approach selects three candidate pivots and uses the median one. If the three pivots are chosen from the first, middle and last positions, then it is easy to see that for the already sorted array, this will produce an optimum result: each partition will be exactly half (± one element) of the problem and we will need exactly ceiling(log n) recursive calls.

Random pivot
Some qsorts will simply use a randomly chosen pivot. This also works fine for sorted arrays: on average the pivot will produce two equal-sized partitions and there will be O(log n) of them.

However, whatever strategy we use for choosing the pivot, it is possible to propose a pathological case in which the problem is not divided equally at any partition stage. Thus quicksort must always be treated as potentially O(n²).
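The median-of-3 choice might be sketched like this. The function name is an assumption; it simply returns the index (first, middle or last) holding the median of the three candidate pivots.

```c
#include <assert.h>

/* Return the index (low, mid or high) whose element is the median
   of a[low], a[mid] and a[high] */
int median_of_3(const int a[], int low, int high) {
    int mid = low + (high - low) / 2;
    if ((a[low]  <= a[mid] && a[mid] <= a[high]) ||
        (a[high] <= a[mid] && a[mid] <= a[low]))
        return mid;                       /* middle element is the median */
    if ((a[mid]  <= a[low] && a[low] <= a[high]) ||
        (a[high] <= a[low] && a[low] <= a[mid]))
        return low;                       /* first element is the median */
    return high;                          /* otherwise the last one is */
}
```

For an already sorted array the middle index is always returned, so each partition splits the problem in half.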
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort2.html (1 of 2) [3/23/2004 2:51:48 PM]
Why bother with quicksort then? Heap sort is always O(n log n): why not just use it?
Continue on to Comparing quick and heap sort
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Quick vs Heap Sort
Comparing Quick and Heap Sorts
Empirical studies show that generally quick sort is considerably faster than heapsort. The following counts of compare and exchange operations were made for three different sorting algorithms running on the same data:

n    Operation    Quick   Heap    Insert
100  Comparison   712     2,842   2,595
100  Exchange     148     581     899
200  Comparison   1,682   9,736   10,307
200  Exchange     328     1,366   3,503
500  Comparison   5,102   53,113  62,746
500  Exchange     919     4,042   21,083
Thus, when an occasional "blowout" to O(n²) is tolerable, we can expect that, on average, quick sort will provide considerably better performance, especially if one of the modified pivot choice procedures is used.

Most commercial applications would use quicksort for its better average performance: they can tolerate an occasional long run (which just means that a report takes slightly longer to produce on full moon days in leap years) in return for shorter runs most of the time.

However, quick sort should never be used in applications which require a guarantee of response time, unless it is treated as an O(n²) algorithm in calculating the worst-case response time. If you have to assume O(n²) time, then, if n is small, you're better off using insertion sort, which has simpler code and therefore smaller constant factors. And if n is large, you should obviously be using heap sort, for its guaranteed O(n log n) time.

Life-critical (medical monitoring, life support in aircraft and space craft) and mission-critical (monitoring and control in industrial and research plants handling dangerous materials, control for aircraft, defence, etc) software will generally have a response time as part of the system specifications. In all such systems, it is not acceptable to design based on average performance: you must always allow for the worst case, and thus treat quicksort as O(n²).

So far, our best sorting algorithm has O(n log n) performance: can we do any better? In general, the answer is no. However, if we know something about the items to be sorted, then we may be able to do better. But first, we should look at squeezing the last drop of performance out of quicksort.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort3.html (1 of 2) [3/23/2004 2:52:00 PM]
Continue on to The last drop of performance!
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/Bins.h
/* Bins.h
   Possible bin array for RadixSort */

#define TYPE int

typedef struct t_bins *Bins;

Bins ConsBins( int n_bins, int items_per_bin );
/* Construct an array of n_bins bins,
   each with items_per_bin spaces */

int AddItem( Bins b, TYPE item, int bin_index );
/* Add item to bin bin_index
   Pre: b != NULL && item != NULL &&
        bin_index >= 0 && bin_index < n_bins */

TYPE *MergeBins( Bins b, TYPE *list );
/* Merge the bins by copying all the elements
   in the bins into list; return a pointer to list */

void DeleteBins( Bins b );
/* Destructor .. frees all space used by b */
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/Bins.c
/* Bins.c
   Possible bin array for RadixSort */

#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#include "Bins.h"

struct t_bins {
    int n_bins, max_items;
    int *bin_cnt;
    TYPE **bin_pts;
};

/* Construct an array of n_bins bins, each with items_per_bin spaces */
Bins ConsBins( int n_bins, int items_per_bin ) {
    Bins b;
    int i;
#ifdef ONE_LARGE
    int max;
    TYPE *bins;
#endif
    /* fprintf(stdout, "ConsBins %d/%d ", n_bins, items_per_bin );
       fflush( stdout ); */
    b = (Bins)malloc( sizeof( struct t_bins ) );
    if ( b != NULL ) {
        b->n_bins = n_bins;
        b->max_items = items_per_bin;
        b->bin_pts = (TYPE **)malloc( n_bins*sizeof(TYPE *) );
        b->bin_cnt = (int *)calloc( n_bins, sizeof(int) );
        if ( b->bin_pts != NULL ) {
#ifdef ONE_LARGE
            /* Allocate a single large bin */
            max = n_bins*items_per_bin*sizeof(TYPE);
            bins = malloc( max );
            if( bins == NULL ) {
                printf("ConsBins: insuff mem %d bytes needed\n", max );
                return NULL;
            }
            /* Divide it into n_bins, each holding items_per_bin items */
            for(i=0;i<n_bins;i++) {
                b->bin_pts[i] = bins;
                bins += items_per_bin;
            }
#else
            /* Allocate n_bins individual bins */
            for(i=0;i<n_bins;i++) {
                b->bin_pts[i] = (TYPE *)malloc( items_per_bin*sizeof(TYPE) );
                if( b->bin_pts[i] == NULL ) {
                    printf("ConsBins: insuff mem after %d bins\n", i );
                    b = NULL;
                    break;
                }
            }
#endif
        }
    }
    else {
        fprintf( stdout, "Insuff mem\n");
    }
    return b;
}

int AddItem( Bins b, TYPE item, int bin_index ) {
    /* Add item to bin bin_index
       Pre: b != NULL && item != NULL &&
            bin_index >= 0 && bin_index < n_bins */
    int k;
    assert( b != NULL );
    assert( bin_index >= 0 );
    assert( bin_index < b->n_bins );
    k = b->bin_cnt[bin_index];
    assert( (k>=0) && (k<b->max_items) );
    assert( (b->bin_pts[bin_index]) != NULL );
    (b->bin_pts[bin_index])[k] = item;
    b->bin_cnt[bin_index]++;
    return 1;
}

TYPE *MergeBins( Bins b, TYPE *list ) {
    /* Merge the bins by copying all the elements in bins 0..n_bins-1
       into list; return a pointer to list
       (This pointer can be used in the next phase!) */
    int j, k;
    TYPE *lp;
    assert( b != NULL );
    assert( list != NULL );
    lp = list;
    for( j = 0; j<b->n_bins; j++ ) {
        for(k=0;k<b->bin_cnt[j];k++) {
            *lp++ = (b->bin_pts[j])[k];
        }
    }
    return list;
}

void FreeUnusedBins( Bins b ) {
    /* Free bins 0 .. n_bins-1 in preparation for next phase */
    int k;
    assert( b != NULL );
#ifdef ONE_LARGE
    free( b->bin_pts[0] );
#else
    for(k=0;k<b->n_bins;k++) {
        assert( b->bin_pts[k] != NULL );
        free( b->bin_pts[k] );
    }
#endif
    free( b->bin_pts );
}

void DeleteBins( Bins b ) {
    /* Destructor .. frees all space used by b */
    assert( b != NULL );
    FreeUnusedBins( b );
    free( b->bin_cnt );
    free( b );
}
Data Structures and Algorithms: Search Trees
8 Searching Revisited
Before we examine some more searching techniques, we need to consider some operations on trees, in particular, means of traversing trees.

Tree operations

A binary tree can be traversed in a number of ways:

preorder
1. Visit the root
2. Traverse the left subtree,
3. Traverse the right subtree

inorder
1. Traverse the left subtree,
2. Visit the root
3. Traverse the right subtree

postorder
1. Traverse the left subtree,
2. Traverse the right subtree
3. Visit the root
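Each of these traversals is a short recursive function. The node structure below is an assumption for illustration; the inorder case is shown, with the key appended to an output string at the "visit the root" step.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative binary tree node with a single-character key */
typedef struct node {
    char key;
    struct node *left, *right;
} node;

/* Inorder traversal: left subtree, root, right subtree.
   Keys are appended to out, so an ordered BST yields its keys in sorted order. */
void inorder(const node *t, char *out) {
    if (t == NULL) return;
    inorder(t->left, out);
    strncat(out, &t->key, 1);   /* "visit the root": emit this node's key */
    inorder(t->right, out);
}
```

Preorder and postorder are obtained by moving the `strncat` line before or after the two recursive calls.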
If we traverse the standard ordered binary tree inorder, then we will visit all the nodes in sorted order.

Parse trees
If we represent the expression: A*(((B+C)*(D*E))+F) as a tree:
then traversing it postorder will produce:
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/search_trees.html (1 of 3) [3/23/2004 2:53:06 PM]
A B C + D E * * F + *

which is the familiar reverse-polish notation used by a compiler for evaluating the expression.

Search Trees

We've seen how to use a heap to maintain a balanced tree for a priority queue. What about a tree used to store information for retrieval (but not removal)? We want to be able to find any item quickly in such a tree based on the value of its key. The search routine on a binary tree:

tree_search(tree T, Key key) {
    if (T == NULL) return NULL;
    if (key == T->root)
        return T->root;
    else if (key < T->root)
        return tree_search( T->left, key );
    else
        return tree_search( T->right, key );
}

is simple and provides us with an O(log n) searching routine as long as we can keep the tree balanced. However, if we simply add items to a tree, producing an unbalanced tree is easy!
This is what happens if we add the letters A B C D E F in that order to a tree: Not exactly well balanced!
Key terms
Preorder tree traversal
Traversing a tree in the order: root - left - right
Inorder tree traversal
Traversing a tree in the order: left - root - right
Postorder tree traversal
Traversing a tree in the order: left - right - root
Continue on to Red-Black Trees
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: n-ary trees
8.2 General n-ary trees
If we relax the restriction that each node can have only one key, we can reduce the height of the tree.

An m-way search tree
a. is empty or
b. consists of a root containing j (1<=j<m) keys, k_1..k_j, and a set of sub-trees, T_i (i = 0..j), such that
   i. if k is a key in T_0, then k <= k_1
   ii. if k is a key in T_i (0<i<j), then k_i <= k <= k_(i+1)
   iii. if k is a key in T_j, then k > k_j and
   iv. all T_i are non-empty m-way search trees or all T_i are empty
Or in plain English ...

b. A node generally has m-1 keys and m children.
   Each node has alternating sub-tree pointers and keys:

   sub-tree | key | sub-tree | key | ... | key | sub-tree

   i. All keys in a sub-tree to the left of a key are smaller than it.
   ii. All keys in the node between two keys are between those two keys.
   iii. All keys in a sub-tree to the right of a key are greater than it.
   iv. This is the "standard" recursive part of the definition.
The height of a complete m-ary tree with n nodes is ceiling(log_m n).

A B-tree of order m is an m-way tree in which

a. all leaves are on the same level, and
b. all nodes except for the root and the leaves have at least m/2 children and at most m children. The root has at least 2 children and at most m children.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/n_ary_trees.html (1 of 2) [3/23/2004 2:53:17 PM]
A variation of the B-tree, known as a B+-tree, treats all the keys in nodes other than the leaves as dummies. All keys are duplicated in the leaves. This has the advantage that if all the leaves are linked together sequentially, the entire tree may be scanned without visiting the higher nodes at all.
Key terms
n-ary trees (or n-way trees)
    Trees in which each node may have up to n children.
B-tree
    Balanced variant of an n-way tree.
B+-tree
    B-tree in which all the leaves are linked to facilitate fast in-order traversal.

Continue on to Hash Tables
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Hash Tables
Data Structures and Algorithms
8.3 Hash Tables
8.3.1 Direct Address Tables

If we have a collection of n elements whose keys are unique integers in (1,m), where m >= n, then we can store the items in a direct address table, T[m], where T[i] is either empty or contains one of the elements of our collection.

Searching a direct address table is clearly an O(1) operation: for a key, k, we access T[k],

- if it contains an element, return it,
- if it doesn't, then return a NULL.
There are two constraints here:

1. the keys must be unique, and
2. the range of the key must be severely bounded.
If the keys are not unique, then we can simply construct a set of m lists and store the heads of these lists in the direct address table. The time to find an element matching an input key will still be O(1). However, if each element of the collection has some other distinguishing feature (other than its key), and if the maximum number of duplicates is n_dupmax, then searching for a specific element is O(n_dupmax). If duplicates are the exception rather than the rule, then n_dupmax is much smaller than n and a direct address table will provide good performance. But if n_dupmax approaches n, then the time to find a specific element is O(n) and a tree structure will be more efficient.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/hash_tables.html (1 of 5) [3/23/2004 2:54:18 PM]
The range of the key determines the size of the direct address table and may be too large to be practical. For instance, it's not likely that you'll be able to use a direct address table to store elements which have arbitrary 32-bit integers as their keys for a few years yet!

Direct addressing is easily generalised to the case where there is a function,

h(k) => (1,m)

which maps each value of the key, k, to the range (1,m). In this case, we place the element in T[h(k)] rather than T[k], and we can search in O(1) time as before.
8.3.2 Mapping functions
The direct address approach requires that the function, h(k), is a one-to-one mapping from each k to integers in (1,m). Such a function is known as a perfect hashing function: it maps each key to a distinct integer within some manageable range and enables us to trivially build an O(1) search time table.

Unfortunately, finding a perfect hashing function is not always possible. Let's say that we can find a hash function, h(k), which maps most of the keys onto unique integers, but maps a small number of keys on to the same integer. If the number of collisions (cases where multiple keys map onto the same integer) is sufficiently small, then hash tables work quite well and give O(1) search times.

Handling the collisions

In the small number of cases where multiple keys map to the same integer, elements with different keys may be stored in the same "slot" of the hash table. It is clear that when the hash function is used to locate a potential match, it will be necessary to compare the key of that element with the search key. But there may be more than one element which should be stored in a single slot of the table. Various techniques are used to manage this problem:

1. chaining,
2. overflow areas,
3. rehashing,
4. using neighbouring slots (linear probing),
5. quadratic probing,
6. random probing, ...
Chaining One simple scheme is to chain all collisions in lists attached to the appropriate slot. This allows an unlimited number of collisions to be handled and doesn't require a priori knowledge of how many elements are contained in the collection. The tradeoff is the same as with linked lists versus array implementations of collections: linked list overhead in space and, to a lesser extent, in time.
Rehashing

Rehashing schemes use a second hashing operation when there is a collision. If there is a further collision, we rehash until an empty "slot" in the table is found. The rehashing function can either be a new function or a re-application of the original one. As long as the functions are applied to a key in the same order, then a sought key can always be located.

Linear probing

One of the simplest rehashing functions is +1 (or -1), ie on a collision, look in the neighbouring slot in the table. It calculates the new address extremely quickly and may be extremely efficient on a modern RISC processor due to efficient cache utilisation (cf. the discussion of linked list efficiency).

In the diagram, h(j) = h(k), so the next hash function, h1, is used; a second collision occurs, so h2 is used.

Clustering

Linear probing is subject to a clustering phenomenon. Rehashes from one location occupy a block of slots in the table which "grows" towards slots to which other keys hash. This exacerbates the collision problem and the number of rehashes can become large.

Quadratic probing

Better behaviour is usually obtained with quadratic probing, where the secondary hash function depends on the rehash index:

address = h(key) + c i^2

on the i-th rehash. (A more complex function of i may also be used.) Since keys which are mapped to the same value by the primary hash function follow the same sequence of addresses, quadratic probing shows secondary clustering. However, secondary clustering is not nearly as severe as the clustering shown by linear probes.

Rehashing schemes use the originally allocated table space and thus avoid linked list overhead, but
require advance knowledge of the number of items to be stored. However, the collision elements are stored in slots to which other key values map directly; thus the potential for multiple collisions increases as the table becomes full.

Overflow area

Another scheme divides the pre-allocated table into two sections: the primary area to which keys are mapped and an area for collisions, normally termed the overflow area.
When a collision occurs, a slot in the overflow area is used for the new element and a link from the primary slot established as in a chained system. This is essentially the same as chaining, except that the overflow area is preallocated and thus possibly faster to access. As with rehashing, the maximum number of elements must be known in advance, but in this case, two parameters must be estimated: the optimum size of the primary and overflow areas.
Of course, it is possible to design systems with multiple overflow tables, or with a mechanism for handling overflow out of the overflow area, which provide flexibility without losing the advantages of the overflow scheme.

Summary: Hash Table Organization

Organization  | Advantages                                  | Disadvantages
--------------|---------------------------------------------|------------------------------------------
Chaining      | Unlimited number of elements                | Overhead of multiple linked lists
              | Unlimited number of collisions              |
Rehashing     | Fast rehashing                              | Maximum number of elements must be known
              | Fast access through use of main table space | Multiple collisions may become probable
Overflow area | Fast access                                 | Two parameters which govern performance
              | Collisions don't use primary table space    | need to be estimated
Key Terms
hash table
    Tables which can be searched for an item in O(1) time using a hash function to form an address from the key.
hash function
    Function which, when applied to the key, produces an integer which can be used as an address in a hash table.
collision
    When a hash function maps two different keys to the same table address, a collision is said to occur.
linear probing
    A simple rehashing scheme in which the next slot in the table is checked on a collision.
quadratic probing
    A rehashing scheme in which a higher (usually 2nd) order function of the hash index is used to calculate the address.
clustering
    Tendency for clusters of adjacent slots to be filled when linear probing is used.
secondary clustering
    Collision sequences generated by addresses calculated with quadratic probing.
perfect hash function
    Function which, when applied to all the members of the set of items to be stored in a hash table, produces a unique set of integers within some suitable range.

Continue on to Hashing Functions
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms  Linked List Performance
Data Structures and Algorithms
Linked Lists vs Arrays
This section will be easier to understand if you are familiar with the memory hierarchy on modern processors.

Performance considerations

Array-based implementations of collections will generally show slightly better performance than linked list implementations. This will only be in the constant factor: scanning both arrays and linked lists is basically an O(n) operation. However, a number of factors contribute to the slightly better performance of arrays:

1. The address of the next element in an array may be calculated from the current address and the element size, both held in registers:

   (next)address := (current)address + size

   Both addresses may use a single register. Since no memory accesses are required, this is a single-cycle operation on a modern RISC processor.

2. Using arrays, information is stored in consecutive memory locations: this allows the long cache lines of modern processors to be used effectively. The part of the cache line which is "pre-fetched" by accessing the current element of an array contains part of the next array element. Thus no part of the cache or the CPU-to-memory bandwidth is "wasted" by not being used.

In contrast, with linked list implementations:

- There is additional overhead for pointers (and the overhead normally introduced by memory allocators like malloc). This means that fewer elements can fit into a single cache line.
- There is no guarantee that successive elements in a list occupy successive memory locations. This leads to waste of memory bandwidth, because elements pre-fetched into the cache are not necessarily used.

Back to the Table of Contents
© John Morris, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/ll_time.html [3/23/2004 2:54:30 PM]
Data Structures and Algorithms: Hash Functions
Data Structures and Algorithms
8.3.3 Hashing Functions
Choosing a good hashing function, h(k), is essential for hash-table based searching. h should distribute the elements of our collection as uniformly as possible to the "slots" of the hash table. The key criterion is that there should be a minimum number of collisions.

If the probability that a key, k, occurs in our collection is P(k), then if there are m slots in our hash table, a uniform hashing function, h(k), would ensure that, for each slot j:

sum of P(k) over all k with h(k) = j  =  1/m
Sometimes, this is easy to ensure. For example, if the keys are randomly and uniformly distributed in (0,r], then

h(k) = floor((m k)/r)

will provide uniform hashing.

Mapping keys to natural numbers

Most hashing functions will first map the keys to some set of natural numbers, say (0,r]. There are many ways to do this: for example, if the key is a string of ASCII characters, we can simply add the ASCII representations of the characters mod 255 to produce a number in (0,255) - or we could xor them, or we could add them in pairs mod 2^16-1, or ...

Having mapped the keys to a set of natural numbers, we then have a number of possibilities.

1. Use a mod function: h(k) = k mod m.

   When using this method, we usually avoid certain values of m. Powers of 2 are usually avoided, for k mod 2^b simply selects the b low order bits of k. Unless we know that all the 2^b possible values of the lower order bits are equally likely, this will not be a good choice, because some bits of the key are not used in the hash function.

   Prime numbers which are close to powers of 2 seem to be generally good choices for m. For example, if we have 4000 elements, and we have chosen an overflow table organization, but wish to have the probability of collisions quite low, then we might choose m = 4093. (4093 is the largest prime less than 4096 = 2^12.)
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/hash_func.html (1 of 2) [3/23/2004 2:54:39 PM]
2. Use the multiplication method:

   - Multiply the key by a constant A, 0 < A < 1,
   - Extract the fractional part of the product,
   - Multiply this value by m.

   Thus the hash function is:

   h(k) = floor(m * (kA - floor(kA)))

   In this case, the value of m is not critical and we typically choose a power of 2 so that we can get the following efficient procedure on most digital computers:

   - Choose m = 2^p.
   - Multiply the w bits of k by floor(A * 2^w) to obtain a 2w-bit product.
   - Extract the p most significant bits of the lower half of this product.

   It seems that A = (sqrt(5)-1)/2 = 0.6180339887... is a good choice (see Knuth, "Sorting and Searching", v. 3 of "The Art of Computer Programming").
3. Use universal hashing:

   A malicious adversary can always choose the keys so that they all hash to the same slot, leading to an average O(n) retrieval time. Universal hashing seeks to avoid this by choosing the hashing function randomly from a collection of hash functions (cf Cormen et al, p 229). This makes the probability that the hash function will generate poor behaviour small and produces good average performance.
Key terms
Universal hashing A technique for choosing a hashing function randomly so as to produce good average performance. Continue on to Dynamic Algorithms
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms  Memory Hierarchy
Data Structures and Algorithms
The Memory Hierarchy in Modern Processors
All the figures in this section are typical of commercially available high-performance processors in September, 1996. There is a distinct possibility that they will be somewhat out of date by the time you are reading this (even if it is only October, 1996!). Even in 1996, some very large machines were larger or faster than the figures below would indicate.

One of the most important considerations in understanding the performance capabilities of a modern processor is the memory hierarchy. We can classify memory based on its "distance" from the processor: here distance is measured by the number of machine cycles required to access it. As memory becomes further away from the main processor (ie becomes slower to access) the number of words in a typical system increases. Some indicative numbers for 1996 processors would be:

Name          | Access Time (cycles) | Number of words
--------------|----------------------|----------------
Register      | 1                    | 32
Cache Level 1 | 2                    | 16x10^3
Cache Level 2 | 5                    | 0.25x10^6
Main memory   | 30                   | 10^8
Disc          | 10^6                 | 10^9
In 1996, high performance processors had clock frequencies of 200-400 MHz, or cycle times of 2.5-5.0 nanoseconds.

Registers

Registers are a core part of the processor itself. A RISC processor performs all operations except loads and stores on operands stored in the registers. In a typical RISC processor, there will be 32 32-bit integer registers and 32 64-bit floating point registers. (True 64-bit machines with 64-bit registers are starting to appear.)

Cache
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/mem_hierarchy.html (1 of 3) [3/23/2004 2:54:42 PM]
Cache memory sits between the processor and main memory. It stores recently accessed memory words "closer" to the processor than the main memory. Cache is transparent to a programmer (or compiler writer!): cache hardware will intercept all memory requests from the processor and satisfy the request from the cache if possible; otherwise the request is forwarded to the main memory.

Many high performance systems will have a number of levels of cache: a small level 1 cache "close" to the processor (typically needing 2 cycles to access) and as many as 2 more levels of successively slower and larger caches built from high performance (but expensive!) memory chips.

For state-of-the-art (2 cycle access) performance, a processor needs to have the level 1 cache on the same die as the processor itself. Sizes of 16 Kwords (64 Kbytes, often separated into instruction and data cache) were common.

The bus between the cache and main memory is a significant bottleneck: system designers usually organise the cache as a set of "lines" of, typically, 8 words. A whole line of 8 contiguous memory locations will be fetched into the cache each time it is updated. (8 words will be fetched in a 4-cycle burst: 2 words in each cycle on a 64-bit bus.) This means that when one memory word is fetched into the cache, 7 of its neighbours will also be fetched. A program which is able to use this factor (by, for instance, keeping closely related data in contiguous memory locations) will make more effective use of the cache and see a reduced effective memory access time.

At least one processor (DEC's Alpha) has a larger level 2 cache on the processor die. Other systems place the level 2 cache on a separate die within the same physical package (such packages are sometimes referred to as multi-chip modules).

Level x Cache

Systems with more than 1 Mbyte of Level 2 cache, built from fast, but expensive and less dense, static RAM (SRAM) memory devices, are becoming common.
The SRAM memories have access times of 10 ns (or slightly less), but the total access time in a system would be of the order of 5 cycles or more.

Main memory

High density dynamic RAM (DRAM) technology provides the cheapest and densest semiconductor memory available. Chip access times are about 60 ns, but system access times will be 25-30 cycles (and perhaps more for high clock frequency systems). DRAM manufacturers have tended to concentrate on increasing capacity rather than increasing speed, so that access times measured in processor cycles are increasing as processor clock speeds increase faster than DRAM access times decrease. However, DRAM capacities are increasing at similar rates to processor clock speeds.

Disc

Some forecasters have been suggesting for some years now that magnetic discs will soon be obsolete. Although the cost and density gap between DRAM and disc memory has been narrowing, some heroic
efforts by disc manufacturers have seen disc capacities increase and prices drop, so that the point at which magnetic disc becomes obsolete is still a way off. Discs with 4 GByte (10^9 words) are commonplace, and access times are of the order of 10 ms or 10^6 processor cycles. The large gap between access times (a factor of 10^4) for the last two levels of the hierarchy is probably one of the factors driving DRAM research and development towards higher density rather than higher speed. However, work on cache-DRAMs and synchronous DRAM is pushing DRAM access time down.

Back to the Table of Contents
© John Morris, 1998
Data Structures and Algorithms: Dynamic Algorithms
Data Structures and Algorithms
9 Dynamic Algorithms
Sometimes, the divide and conquer approach seems appropriate but fails to produce an efficient algorithm. We all know the algorithm for calculating Fibonacci numbers:

int fib( int n ) {
    if ( n < 2 ) return n;
    else return fib(n-1) + fib(n-2);
}

This algorithm is commonly used as an example of the elegance of recursion as a programming technique. However, when we examine its time complexity, we find it's far from elegant!

Analysis

Let tn be the time required to calculate fn, where fn is the nth Fibonacci number. Then, by examining the function above, it's clear that

tn = tn-1 + tn-2

and t1 = t2 = c, where c is a constant. Therefore

tn = c fn

Now,
thus

tn = O(fn) = O(1.618^n)

since fn grows as phi^n, where phi = (1+sqrt(5))/2 = 1.618...

So this simple function will take exponential time! As we will see in more detail later, algorithms which run in exponential time are to be avoided at all costs!

An Iterative Solution

However, this simple alternative:

int fib( int n ) {
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/dynamic.html (1 of 2) [3/23/2004 2:54:46 PM]
    int k, f = 1, f1, f2;   /* initialising f handles the n == 2 case */
    if ( n < 2 ) return n;
    else {
        f1 = f2 = 1;
        for(k=2;k<n;k++) {
            f = f1 + f2;
            f2 = f1;
            f1 = f;
        }
        return f;
    }
}

runs in O(n) time. This algorithm solves the problem of calculating f0 and f1 first, calculates f2 from these, then f3 from f2 and f1, and so on. Thus, instead of dividing the large problem into two (or more) smaller problems and solving those problems (as we did in the divide and conquer approach), we start with the simplest possible problems. We solve them (usually trivially) and save these results. These results are then used to solve slightly larger problems which are, in turn, saved and used to solve larger problems again.

Free Lunch?

As we know, there's never one! Dynamic algorithms obtain their efficiency by solving and storing the answers to small problems. Thus they usually trade space for increased speed. In the Fibonacci case, the extra space is insignificant (the two variables f1 and f2), but in some more complex dynamic algorithms, we'll see that the space used is significant.
Key terms
Dynamic Algorithm A general class of algorithms which solve problems by solving smaller versions of the problem, saving the solutions to the small problems and then combining them to solve the larger problem. Continue on to Binomial Coefficients
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/binom.html
Data Structures and Algorithms
9.2 Binomial Coefficients
As with the Fibonacci numbers, the binomial coefficients can be calculated recursively, making use of the relation:

nCm = n-1Cm-1 + n-1Cm

A similar analysis to that used for the Fibonacci numbers shows that the time complexity using this approach is also the binomial coefficient itself.

However, we all know that if we construct Pascal's triangle, the nth row gives all the values nCm, m = 0..n:

1
1  1
1  2  1
1  3  3  1
1  4  6  4  1
1  5 10 10  5  1
1  6 15 20 15  6  1
1  7 21 35 35 21  7  1
Each entry takes O(1) time to calculate and there are O(n^2) of them. So this calculation of the coefficients takes O(n^2) time. But it uses O(n^2) space to store the coefficients.

Continue on to Optimal Binary Search Trees
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/binom.html [3/23/2004 2:54:56 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/opt_bin.html
Data Structures and Algorithms
9.3 Optimal Binary Search Trees
Up to this point, we have assumed that an optimal search tree is one in which the probability of occurrence of all keys is equal (or is unknown, in which case we assume it to be equal). Thus we concentrated on balancing the tree so as to make the cost of finding any key at most log n.

However, consider a dictionary of words used by a spelling checker for English language documents. It will be searched many more times for 'a', 'the', 'and', etc than for the thousands of uncommon words which are in the dictionary just in case someone happens to use one of them. Such a dictionary needs to be large: the average educated person has a vocabulary of 30 000 words, so it needs ~100 000 words in it to be effective. It is also reasonably easy to produce a table of the frequency of occurrence of words: words are simply counted in any suitable collection of documents considered to be representative of those for which the spelling checker will be used.

A balanced binary tree is likely to end up with a word such as 'miasma' at its root, guaranteeing that in 99.99+% of searches, at least one comparison is wasted!

If key, k, has relative frequency, rk, then in an optimal tree,

sum(dk rk)

where dk is the distance of the key, k, from the root (ie the number of comparisons which must be made before k is found), is minimised.

We make use of the property:

Lemma: Sub-trees of optimal trees are themselves optimal trees.

Proof: If a sub-tree of a search tree is not an optimal tree, then a better search tree will be produced if the sub-tree is replaced by an optimal tree.

Thus the problem is to determine which key should be placed at the root of the tree. Then the process can be repeated for the left and right sub-trees. However, a divide-and-conquer approach would choose each key as a candidate root and repeat the process for each sub-tree. Since there are n choices for the root and O(n) choices for roots of the two sub-trees, this leads to an O(n^n) algorithm.
An efficient algorithm can be generated by the dynamic approach. We calculate the O(n) best trees consisting of just two elements (the neighbours in the sorted list of keys).
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/opt_bin.html (1 of 4) [3/23/2004 2:55:20 PM]
In the figure, there are two possible arrangements for the tree containing F and G. The cost for (a) is 5x1 + 7x2 = 19 and for (b) 7x1 + 5x2 = 17. Thus (b) is the optimum tree and its cost is saved as c(f,g). We also store g as the root of the best f-g sub-tree in best(f,g). Similarly, we calculate the best cost for all n-1 sub-trees with two elements, c(g,h), c(h,i), etc.
The sub-trees containing two elements are then used to calculate the best costs for sub-trees of 3 elements. This process is continued until we have calculated the cost and the root for the optimal search tree with n elements.

There are O(n^2) such sub-tree costs. Each one requires n operations to determine, if the costs of the smaller sub-trees are known. Thus the overall algorithm is O(n^3).

Code for optimal binary search tree

Note some C 'tricks' to handle dynamically-allocated two-dimensional arrays using pre-processor macros for C and BEST!

This Java code may be easier to comprehend for some! It uses this class for integer matrices.

The data structures used may be represented:
After the initialisation steps, the data structures used contain the frequencies, rf_i, in c_ii (the costs of single element trees), 'max' everywhere below the diagonal, and zeroes in the positions just above the diagonal (to allow for the trees which don't have a left or right branch):
In the first iteration, all the positions below the diagonal (c_i,i+1) will be filled in with the optimal costs of two-element trees from i to i+1. In subsequent iterations, the optimal costs of k+1 element trees (c_i,i+k) are filled in using previously calculated costs of smaller trees.
Continue on to Matrix Chain Multiplication
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/optbin.c
/* Optimal Binary Search Tree */

#define MAX_FREQ   1.0e38
#define C(i,j)     cost[i*(n+1)+j]
#define BEST(i,j)  best[i*(n+1)+j]
int *opt_bin( double *rf, int n ) {
    int i, j, k;
    int *best;
    double *cost, t;

    /* Allocate best array */
    best = (int *)calloc( (n+1)*(n+1), sizeof(int) );
    /* Allocate cost array */
    cost = (double *)calloc( (n+1)*(n+1), sizeof(double) );

    /* Initialise all subtrees to maximum */
    for(i=0;i<n;i++)
        for(j=(i+1);j<(n+1);j++)
            C(i,j) = MAX_FREQ;
    /* but we know the cost of single-item trees */
    for(i=0;i<n;i++)
        C(i,i) = rf[i];
    /* Initialise to allow for nodes with only one child
       (i starts at 1 so that C(i,i-1) stays in bounds) */
    for(i=1;i<(n+1);i++)
        C(i,i-1) = 0;

    /* For subtrees ending at j */
    for(j=1;j<n;j++) {
        /* Consider trees starting at 1 to n-j */
        for(i=1;i<=(n-j);i++) {
            /* Check each key in i..i+j as a possible root */
            for(k=i;k<=(i+j);k++) {
                t = C(i,k-1) + C(k+1,i+j);
                if ( t < C(i,i+j) ) {
                    C(i,i+j) = t;
                    BEST(i,i+j) = k;
                }
            }
            /* Add the cost of each key because we're pushing
               the whole tree down one level */
            t = 0;
            for(k=i;k<=(i+j);k++)
                t = t + rf[k];
            C(i,i+j) = C(i,i+j) + t;
        }
    }
    return best;
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/optbin.c [3/23/2004 2:55:23 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Java/opt_bin/OBSearch.java
import java.lang.*; import java.awt.*; class OBSearch extends Object { int n; IntMatrix cost, best; int[] rel_freq; DrawingPanel drawingPanel; OBSAnim obsAnim; AlgAnimFrame frame; BinTree binTree; static final int horizSpace = 34; static final int vertSpace = 19; public OBSearch( int n, int[] freq, String[] label, DrawingPanel drawingPanel, AlgAnimFrame frame ) { this.drawingPanel = drawingPanel; obsAnim = new OBSAnim(drawingPanel); obsAnim.setFonts(drawingPanel.getHugeFont(), drawingPanel.getBigFont()); this.frame = frame; this.n = n; cost = new IntMatrix( n ); best = new IntMatrix( n ); rel_freq = new int[n]; for(int j=0; j<n; j++) rel_freq[j] = freq[j]; cost.setDiag( freq ); for(int k=0;k<n;k++) best.setElem( k, k, k ); cost.setLT( Integer.MAX_VALUE ); best.setTitle("Roots Matrix"); String[] bestLabel = new String[label.length]; for (int i = 0; i < label.length; i++) bestLabel[i] = new String(label[i] + ":" + i); best.setRowLabels(bestLabel); best.setColLabels(bestLabel); cost.setTitle("Costs Matrix"); cost.setRowLabels(label); cost.setColLabels(label); obsAnim.setCostMat(cost); obsAnim.setBestMat(best); drawingPanel.repaint(); binTree = new BinTree(obsAnim, drawingPanel, n*2); obsAnim.setFreq(freq); obsAnim.setTree(binTree); } int CostSubTree( int j, int k ) { int c; if ( (j<0)  (j>k)  (j>=n)  (k>=n) ) c = 0; else { c = cost.elem(j,k); } return c; } /**/ // Evaluate trees of size k public void TreeEvaluate( int tree_size ) { /**/int line = 1; /**/frame.Highlight(line); int left, right, c_root, k, c, best_cost, best_root;
        /**/obsAnim.setText2("Optimal subtree: ");
        // For trees starting at 0 .. n-tree_size
        for(left=0;left<=(n-tree_size);left++) { /**/frame.Highlight(line + 4);
            right = left+tree_size-1; /**/frame.Highlight(line + 5);
            /**/for (int l = left; l <= right; l++)
            /**/    obsAnim.highlightFreq(l);
            /**/binTree = new BinTree(obsAnim, drawingPanel, n*2);
            /**/obsAnim.setTree(binTree);
            /**/obsAnim.setCom2(
            /**/    "Checking all possible subtrees from: " +
            /**/    toString(left) + " to " + toString(right) + "...",
            /**/    drawingPanel.getOffset(),
            /**/    drawingPanel.getOffset() + 320);
            /**/drawingPanel.repaint(); drawingPanel.delay();
            best_cost = cost.elem(left,right); // Best cost
            /**/frame.Highlight(line + 6);
            best_root = left; /**/frame.Highlight(line + 7);
            // If left tree has k nodes, right tree has tree_size - k - 1
            for(c_root=left;c_root<=right;c_root++) { /**/frame.Highlight(line + 9);
                // Check each candidate root
                c = CostSubTree(left, c_root-1) + CostSubTree(c_root+1, right);
                /**/frame.Highlight(line + 11);
                /**/binTree = new BinTree(obsAnim, drawingPanel, n*2);
                /**/obsAnim.setTree(binTree);
                /**/binTree.insertNodeAt(new Node(toString(c_root),
                /**/    rel_freq[c_root]), 1);
                /**/formSubTree( left, c_root, right, 1 );
                /**/obsAnim.setText("Subtree " + (c_root-left+1),
                /**/    binTree.getNodeAt(1).getX() - 30,
                /**/    binTree.getNodeAt(1).getY() - 35);
                /**/binTree.redraw();
                /**/binTree.highlightLeftRightSubtree(this, left,
                /**/    c_root, right);
                /**/frame.waitStep();
                /**/frame.Highlight(line + 13);
                if ( c < best_cost ) {
                    /**/obsAnim.setOptree(binTree);
                    best_cost = c; /**/frame.Highlight(line + 14);
                    best_root = c_root; /**/frame.Highlight(line + 15);
                }
                /**/ else {
                /**/    obsAnim.discardBinTree();
                /**/}
                /**/frame.Highlight(line + 16);
                /**/obsAnim.hideCom();
                /**/frame.waitStep(); obsAnim.hideText();
            }
            /**/frame.Highlight(line + 17);
            /**/binTree = obsAnim.moveOpt2Tree();
            /**/obsAnim.setCom("Adding cost and root to matrices...",
            /**/    300, 350);
            /**/binTree.tree2Matrices(left, right);
            /**/frame.waitStep();
            // Update the cost, best matrices
            best.setElem(left,right,best_root); /**/frame.Highlight(line + 20);
            cost.setElem(left,right,best_cost); /**/frame.Highlight(line + 21);
            // Add the cost of each key
            c = 0; /**/frame.Highlight(line + 23);
            for(k=left;k<=right;k++)
                c = c + rel_freq[k]; /**/frame.Highlight(line + 24);
            /**/frame.Highlight(line + 25);
            cost.incElem(left,right,c); /**/frame.Highlight(line + 26);
            /**/obsAnim.hideCom2();
            /**/cost.setHighlight(left, right);
            /**/best.setHighlight(left, right);
            /**/obsAnim.setCom("Optimal cost for subtree: " +
            /**/    toString(left) +
            /**/    " to " + toString(right),
            /**/    drawingPanel.getOffset(),
            /**/    drawingPanel.getOffset() + 10 + (right - 1)*vertSpace);
            /**/obsAnim.setCom2(
            /**/    "Root for this optimal " +
            /**/    "subtree is: " + toString(best_root) +
            /**/    ", represented by " + best_root + " in root matrix...",
            /**/    drawingPanel.getOffset() + 300,
            /**/    drawingPanel.getOffset() + 10 + (right - 1)*vertSpace);
            /**/drawingPanel.repaint(); drawingPanel.delay();
            /**/frame.waitStep();
            /**/obsAnim.hideCom();
            /**/obsAnim.hideCom2();
            /**/for (int l = left; l <= right; l++)
            /**/    obsAnim.restoreFreq(l);
            /**/cost.restoreHighlight(left, right);
            /**/best.restoreHighlight(left, right);
            /**/drawingPanel.repaint(); drawingPanel.delay();
        }
        /**/frame.Highlight(line + 27);
        /**/frame.Highlight(line + 28);
    }

    public void BuildOptTree(DrawingPanel drawingPanel, AlgAnimFrame frame) {
        /**/int line = 31; /**/frame.Highlight(line);
        int root;
        // Build all the subtrees in turn
        cost.setDiag( rel_freq ); /**/frame.Highlight(line + 3);
        for(int k=0;k<n;k++) best.setElem( k, k, k );
        for( int k=2; k<=n; k++ ) {
            this.TreeEvaluate( k );
            /**/frame.waitSkip();
        }
        root = best.elem(0,n-1);
        /**/best.setHighlight2(0,n-1);
        /**/obsAnim.setCom("Root for the whole tree is: " + root + "...",
        /**/    drawingPanel.getOffset() + 280,
        /**/    drawingPanel.getOffset() + 10 + (n - 2)*vertSpace);
        /**/binTree = new BinTree(obsAnim, drawingPanel, n*2);
        /**/obsAnim.setTree(binTree);
        /**/binTree.animateInsertNode(drawingPanel.getOffset() + 280,
        /**/    drawingPanel.getOffset() + 10 + (n - 2)*vertSpace, 1);
        /**/binTree.insertNodeAt(new Node(toString(root),
        /**/    rel_freq[root]), 1);
        binTree.redraw();
        /**/printTree( 0, root, n-1, 1 );
    }

    void printTree( int left, int root, int right, int parentPosn ) {
        frame.waitStep();
        int left_child, right_child, i;
        if ( left < root) {
            left_child = best.elem( left, root-1 );
            /**/obsAnim.setCom("Left subtree root of " + root +
            /**/    " is best[" + left + ", " + (root-1) + "] = " +
            /**/    left_child,
            /**/    drawingPanel.getOffset() + 280 + (left)*horizSpace,
            /**/    drawingPanel.getOffset() + 10 + (root - 2)*vertSpace);
            /**/best.setHighlight2( left, root-1 );
            /**/drawingPanel.repaint(); drawingPanel.delay();
            /**/binTree.animateInsertNode(
            /**/    drawingPanel.getOffset() + 280 + (left)*horizSpace,
            /**/    drawingPanel.getOffset() + 10 + (root - 2)*vertSpace,
                    binTree.left(parentPosn));
            binTree.insertNodeAt(new Node(toString(left_child),
                rel_freq[left_child]), binTree.left(parentPosn));
            binTree.redraw();
            printTree( left, left_child, root-1, binTree.left(parentPosn) );
        }
        if ( root < right) {
            right_child = best.elem( root+1, right );
            /**/obsAnim.setCom("Right subtree root of " + root +
            /**/    " is best[" + (root+1) + ", " + right + "] = "
            /**/    + right_child,
            /**/    drawingPanel.getOffset() + 280 + (root)*horizSpace,
            /**/    drawingPanel.getOffset() + 10 + (right - 2)*vertSpace);
            /**/best.setHighlight2( root+1, right );
            /**/drawingPanel.repaint(); drawingPanel.delay();
            /**/binTree.animateInsertNode(
            /**/    drawingPanel.getOffset() + 280 + (root)*horizSpace,
            /**/    drawingPanel.getOffset() + 10 + (right - 2)*vertSpace,
                    binTree.right(parentPosn));
            binTree.insertNodeAt(new Node(toString(right_child),
                rel_freq[right_child]), binTree.right(parentPosn));
            binTree.redraw();
            printTree( root+1, right_child, right, binTree.right(parentPosn) );
        }
    }

    String toString(int i) {
        return new String("" + (char)('A' + i));
    }

    void formSubTree( int left, int root, int right, int parentPosn ) {
        int left_child, right_child, i;
        if ( left < root) {
            left_child = best.elem( left, root-1 );
            binTree.insertNodeAt(new Node(toString(left_child),
                rel_freq[left_child]), binTree.left(parentPosn));
            formSubTree( left, left_child, root-1, binTree.left(parentPosn) );
        }
        if ( root < right) {
            right_child = best.elem( root+1, right );
            binTree.insertNodeAt(new Node(toString(right_child),
                rel_freq[right_child]), binTree.right(parentPosn));
            formSubTree( root+1, right_child, right, binTree.right(parentPosn) );
        }
    }
} // class OBSearch
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Java/opt_bin/IntMatrix.java
/* IntMatrix.java */

import java.io.*;
import java.awt.*;

class IntMatrix implements DrawingObj {
    int rows, columns;
    int[][] elems;
    /* Drawing characteristics */
    int cell_size;
    Font font = new Font("Courier", Font.PLAIN, 12);
    //Font boxFont = new Font("Courier", Font.PLAIN, 12);
    String[] rowLabels, colLabels;
    String title = null;
    boolean[][] highlight;
    boolean[][] highlight2;
    static final int horizSpace = 32;
    static final int vertSpace = 17;
    private Color fg, bg;
    private int x, y;

    public IntMatrix( int rows, int columns ) {
        int j, k;
        this.rows = rows;
        this.columns = columns;
        elems = new int[rows][columns];
        highlight = new boolean[rows][columns];
        highlight2 = new boolean[rows][columns];
        for(j=0; j<rows; j++)
            for(k=0; k<columns; k++) {
                elems[j][k] = Integer.MAX_VALUE;
                highlight[j][k] = false;
                highlight2[j][k] = false;
            }
        x = y = 0;
    }

    /* Construct a square matrix */
    public IntMatrix( int rows ) {
        int j, k;
        this.columns = this.rows = rows;
        elems = new int[rows][columns];
        highlight = new boolean[rows][columns];
        highlight2 = new boolean[rows][columns];
        for(j=0;j<rows;j++)
            for(k=0;k<columns;k++) {
                elems[j][k] = Integer.MAX_VALUE;
                highlight[j][k] = false;
                highlight2[j][k] = false;
            }
        x = y = 0;
    }

    public int elem( int i, int j ) {
        return elems[i][j];
    }

    public void setElem( int i, int j, int value ) {
        elems[i][j] = value;
    }

    public void incElem( int i, int j, int value ) {
        elems[i][j] = elems[i][j] + value;
    }
    public void setDiag( int[] value ) {
        int j;
        for(j=0;j<columns;j++) elems[j][j] = value[j];
    }

    public void setLT( int value ) {
        int j, k;
        for(j=0;j<rows;j++)
            for(k=0; k<j; k++) elems[k][j] = value;
    }

    public void printMatrix() {
        int j, k, x;
        for(j=0;j<rows;j++) {
            for(k=0;k<columns;k++) {
                x = elems[k][j];
                if( x == Integer.MAX_VALUE ) System.out.print(" #");
                else System.out.print( x );
                System.out.print( ' ' );
            }
            System.out.println();
        }
    }

    public void setHighlight(int j, int k) {
        this.highlight[j][k] = true;
        this.highlight2[j][k] = false;
    }

    public void setHighlight2(int j, int k) {
        this.highlight2[j][k] = true;
        this.highlight[j][k] = false;
    }

    public void restoreHighlight(int j, int k) {
        this.highlight[j][k] = false;
    }

    public void restoreHighlight2(int j, int k) {
        this.highlight2[j][k] = false;
    }

    public void restoreAll() {
        for (int i = 0; i < columns; i++)
            for (int j = 0; j < rows; j++) {
                this.highlight[i][j] = false;
                this.highlight2[i][j] = false;
            }
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public void setRowLabels(String[] strs) {
        if (strs.length != rows) {
            System.out.println("Row labels do not match the number of rows!");
            return;
        }
        rowLabels = new String[rows];
        for (int i = 0; i < rows; i++) {
            if (strs[i].length() > 4) strs[i] = strs[i].substring(0, 4);
            if (strs[i].length() < 4) {
                String blank = new String();
                for (int j = 0; j < 4-strs[i].length(); j++)
                    blank = blank.concat(" ");
                strs[i] = blank + strs[i];
            }
            rowLabels[i] = new String(strs[i]);
        }
    }

    public void setColLabels(String[] strs) {
        if (strs.length != columns) {
            System.out.println(
                "Column labels do not match the number of columns!");
            return;
        }
        colLabels = new String[columns];
        for (int i = 0; i < columns; i++) {
            if (strs[i].length() > 4) strs[i] = strs[i].substring(0, 4);
            if (strs[i].length() < 4) {
                String blank = new String();
                for (int j = 0; j < 4-strs[i].length(); j++)
                    blank = blank.concat(" ");
                strs[i] = blank + strs[i];
            }
            colLabels[i] = new String(strs[i]);
        }
    }

    public void drawBox(Graphics g, int x, int y, String str,
                        Color fg, Color bg, Font font) {
        g.setColor(bg);
        g.fillRect(x, y, horizSpace, vertSpace);
        g.setColor(Color.black);
        g.drawRect(x, y, horizSpace, vertSpace);
        g.setColor(fg);
        g.setFont(font);
        g.drawString(str, x + 2, y + vertSpace - 4);
    }

    public void move(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }

    public int getY() { return y; }

    public void setColor(Color fg, Color bg) {
        this.fg = fg;
        this.bg = bg;
    }

    public void draw(Graphics g) {
        drawMatrix(g, x, y, fg, bg);
    }
    public void drawMatrix(Graphics g, int x, int y, Color fg, Color bg) {
        int j, k, elem;
        int posnX = x, posnY = y;
        // draw colLabels
        if (colLabels != null && colLabels.length == columns) {
            posnX += horizSpace + 2;
            for (int i = 0; i < columns; i++) {
                drawBox(g, posnX, posnY, colLabels[i], bg, fg, font);
                posnX += horizSpace + 2;
            }
        }
        posnX = x;
        // draw rowLabels
        if (rowLabels != null && rowLabels.length == rows) {
            posnY += vertSpace + 2;
            for (int i = 0; i < rows; i++) {
                drawBox(g, posnX, posnY, rowLabels[i], bg, fg, font);
                posnY += vertSpace + 2;
            }
        }
        posnY = y + vertSpace + 2;
        for(j=0;j<rows;j++) {
            posnX = x + horizSpace + 2;
            for(k=0;k<columns;k++) {
                elem = elems[k][j];
                if (j < k)
                    drawBox(g, posnX, posnY, " ", fg, Color.lightGray, font);
                else if( elem == Integer.MAX_VALUE )
                    drawBox(g, posnX, posnY, " ", fg, bg, font);
                else {
                    String blank = new String();
                    if (elem < 1000) blank = new String(" ");
                    if (elem < 100) blank = new String("  ");
                    if (elem < 10) blank = new String("   ");
                    if (highlight[k][j])
                        drawBox(g, posnX, posnY, blank+elem,
                                Color.white, Color.black, font);
                    else if (highlight2[k][j])
                        drawBox(g, posnX, posnY, blank+elem,
                                Color.white, Color.gray, font);
                    else
                        drawBox(g, posnX, posnY, blank+elem, fg, bg, font);
                }
                posnX += horizSpace + 2;
            }
            posnY += vertSpace + 2;
        }
        if (title != null && title.length() > 0) {
            posnY += 5;
            posnX = x + (columns/2 - 1)*(horizSpace + 2);
            new ComBox(posnX, posnY, title, Color.black,
                       Color.green, font).draw(g);
        }
    }
} // class IntMatrix
Data Structures and Algorithms: Matrix Chain Multiplication
Data Structures and Algorithms
Matrix Chain Multiplication
Problem

We are given a sequence of matrices to multiply:

A1 A2 A3 ... An

Matrix multiplication is associative, so

A1 ( A2 A3 ) = ( A1 A2 ) A3

that is, we can generate the product in two ways. The cost of multiplying an n x m matrix by an m x p one is O(nmp) (or O(n³) for two n x n ones). A poor choice of parenthesisation can be expensive: eg if we have

Matrix   Rows   Columns
A1         10     100
A2        100       5
A3          5      50

the cost for ( A1 A2 ) A3 is

A1A2       10x100x5  =  5000   => A1A2 (10x5)
(A1A2)A3   10x5x50   =  2500   => A1A2A3 (10x50)
Total                   7500

but for A1 ( A2 A3 )

A2A3       100x5x50  = 25000   => A2A3 (100x50)
A1(A2A3)   10x100x50 = 50000   => A1A2A3 (10x50)
Total                  75000

clearly demonstrating the benefit of calculating the optimum order before commencing the product calculation!
Optimal Substructure
As with the optimal binary search tree, we can observe that if we divide a chain of matrices to be multiplied into two optimal subchains: (A1 A2 A3 ... Aj) (Aj+1 ... An ) then the optimal parenthesisations of the subchains must themselves be composed of optimal chains. If they were not, then we could replace them with cheaper parenthesisations. This property, known as optimal substructure, is a hallmark of dynamic algorithms: it enables us to solve the small problems (the substructure) and use those solutions to generate solutions to larger
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/mat_chain.html (1 of 2) [3/23/2004 2:55:51 PM]
problems.

For matrix chain multiplication, the procedure is now almost identical to that used for constructing an optimal binary search tree. We gradually fill in two matrices:

- one containing the costs of multiplying all the subchains. The diagonal below the main diagonal contains the costs of all pairwise multiplications: cost[1,2] contains the cost of generating product A1A2, etc. The diagonal below that contains the costs of triple products: eg cost[1,3] contains the cost of generating product A1A2A3, which we derived from comparing cost[1,2] and cost[2,3], etc.

- one containing the index of the last array in the left parenthesisation (similar to the root of the optimal subtree in the optimal binary search tree, but there's no root here; the chain is divided into left and right subproducts), so that best[1,3] might contain 2 to indicate that the left subchain contains A1A2 and the right one is A3 in the optimal parenthesisation of A1A2A3.
As before, if we have n matrices to multiply, it will take O(n) time to generate each of the O(n²) costs and entries in the best matrix, for an overall complexity of O(n³) time, at a cost of O(n²) space.
Animation
Matrix Chain Multiplication Animation This animation was written by Woi Ang.
Please email comments to: morris@ee.uwa.edu.au
Key terms
optimal substructure
a property of optimisation problems in which the subsolutions that make up an optimal solution to the problem itself are themselves optimal solutions to their subproblems. This property permits the construction of dynamic algorithms to solve the problem.
Continue on to Longest Common Subsequence
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Longest Common Subsequence
Data Structures and Algorithms
Longest Common Subsequence
Another problem that has a dynamic solution is that of finding the longest common subsequence.

Problem
Given two sequences of symbols, X and Y, determine the longest subsequence of symbols that appears in both X and Y.

Reference
Cormen, Section 16.3
Lecture notes by Kirk Pruhs, University of Pittsburgh
Pseudocode from John Stasko's notes for CS3158 at Georgia Tech
Key terms
optimal substructure
a property of optimisation problems in which the subsolutions that make up an optimal solution to the problem itself are themselves optimal solutions to their subproblems. This property permits the construction of dynamic algorithms to solve the problem.
Continue on to Optimal Triangulation
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/lcom_subseq.html [3/23/2004 2:55:52 PM]
Data Structures and Algorithms: Optimal Triangulation
Data Structures and Algorithms
Optimal Triangulation
Triangulation (dividing a surface up into a set of triangles) is the first step in the solution of a number of engineering problems: thus finding optimal triangulations is an important problem in itself.

Problem
Any polygon can be divided into triangles. The problem is to find the optimum triangulation of a convex polygon based on some criterion, eg a triangulation which minimises the perimeters of the component triangles.

Reference
Cormen, Section 16.4
Key terms
convex polygon a convex polygon is one in which any chord joining two vertices of the polygon lies either wholly within or on the boundary of the polygon.
Continue on to Graph Algorithms
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/opt_tri.html [3/23/2004 2:55:59 PM]
Data Structures and Algorithms: Graph Algorithms
Data Structures and Algorithms
10 Graphs
10.1 Minimum Spanning Trees
Greedy Algorithms

Many algorithms can be formulated as a finite series of guesses, eg in the Travelling Salesman Problem, we try (guess) each possible tour in turn and determine its cost. When we have tried them all, we know which one is the optimum (least cost) one. However, we must try them all before we can be certain that we know which is the optimum one, leading to an O(n!) algorithm. Intuitive strategies, such as building up the salesman's tour by adding the city which is closest to the current city, can readily be shown to produce suboptimal tours.

As another example, an experienced chess player will not take an opponent's pawn with his queen (even though that move produces the maximal immediate gain, the capture of a piece) if his opponent is guarding that pawn with another pawn. In such games, you must look many moves ahead to ensure that the one you choose is in fact the optimal one. All chess players know that shortsighted strategies are good recipes for disaster!

There is a class of algorithms, the greedy algorithms, in which we can find a solution by using only knowledge available at the time the next choice (or guess) must be made. The problem of finding the Minimum Spanning Tree is a good example of this class.

The Minimum Spanning Tree Problem

Suppose we have a group of islands that we wish to link with bridges so that it is possible to travel from one island to any other in the group. Further suppose that (as usual) our government wishes to spend the absolute minimum amount on this project (because other factors, like the cost of using and maintaining these bridges, will probably be the responsibility of some future government). The engineers are able to produce a cost for a bridge linking each possible pair of islands. The set of bridges which will enable one to travel from any island to any other at minimum capital cost to the government is the minimum spanning tree.
We will need some definitions first:

Graphs

A graph is a set of vertices and edges which connect them. We write:

G = (V,E)

where V is the set of vertices and E the set of edges,
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/mst.html (1 of 5) [3/23/2004 2:56:10 PM]
E = { (vi,vj) } where vi and vj are in V.

Paths

A path, p, of length k through a graph is a sequence of connected vertices:

p = <v0,v1,...,vk>

where, for all i in [0,k-1], (vi,vi+1) is in E.

Cycles

A graph contains no cycles if there is no path of non-zero length through the graph, p = <v0,v1,...,vk>, such that v0 = vk.

Spanning Trees

A spanning tree of a graph, G, is a set of V-1 edges that connect all vertices of the graph.

Minimum Spanning Tree

In general, it is possible to construct multiple spanning trees for a graph, G. If a cost, cij, is associated with each edge, eij = (vi,vj), then the minimum spanning tree is the set of edges, Espan, forming a spanning tree, such that:

C = sum( cij | all eij in Espan )

is a minimum.

Kruskal's Algorithm

This algorithm creates a forest of trees. Initially the forest consists of n single node trees (and no edges). At each step, we add one edge (the cheapest one) so that it joins two trees together. If it were to form a cycle, it would simply link two nodes that were already part of a single connected tree, so that this edge would not be needed.

The basic algorithm looks like this:

Forest MinimumSpanningTree( Graph g, int n, double **costs ) {
    Forest T;
    Queue q;
    Edge e;
    int i;

    T = ConsForest( g );
    q = ConsEdgeQueue( g, costs );
    for(i=0;i<(n-1);i++) {
        do {
            e = ExtractCheapestEdge( q );
        } while ( Cycle( e, T ) );  /* reject edges that would form a cycle */
        AddEdge( T, e );
    }
    return T;
}

The steps are:

1. The forest is constructed, with each node in a separate tree.
2. The edges are placed in a priority queue.
3. Until we've added n-1 edges:
   1. Extract the cheapest edge from the queue.
   2. If it forms a cycle, reject it.
   3. Else add it to the forest. Adding it to the forest will join two trees together.

Every step will have joined two trees in the forest together, so that at the end, there will only be one tree in T.

We can use a heap for the priority queue. The trick here is to detect cycles: for this, we need a union-find structure.

Union-find structure

To understand the union-find structure, we need to look at a partition of a set.

Partitions

A partition is a set of subsets of the elements of a set.
- Every element of the set belongs to one of the sets in the partition.
- No element of the set belongs to more than one of the subsets.

or

- Every element of the set belongs to one and only one of the sets of the partition.
The forest of trees is a partition of the original set of nodes. Initially all the subsets have exactly one node in them. As the algorithm progresses, we form a union of two of the trees (subsets), until
eventually the partition has only one subset containing all the nodes.

A partition of a set may be thought of as a set of equivalence classes. Each subset of the partition contains a set of equivalent elements (the nodes connected into one of the trees of the forest). This notion is the key to the cycle detection algorithm. For each subset, we denote one element as the representative of that subset or equivalence class. Each element in the subset is, in some sense, equivalent and represented by the nominated representative.

As we add elements to a tree, we arrange that all the elements point to their representative. As we form a union of two sets, we simply arrange that the representative of one of the sets now points to any one of the elements of the other set. So the test for a cycle reduces to: for the two nodes at the ends of the candidate edge, find their representatives. If the two representatives are the same, the two nodes are already in a connected tree and adding this edge would form a cycle. The search for the representative simply follows a chain of links.

Each node will need a representative pointer. Initially, each node is its own representative, so the pointer is set to NULL. As the initial pairs of nodes are joined to form a tree, the representative pointer of one of the nodes is made to point to the other, which becomes the representative of the tree. As trees are joined, the representative pointer of the representative of one of them is set to point to any element of the other. (Obviously, representative searches will be somewhat faster if one of the representatives is made to point directly to the other.)

Equivalence classes also play an important role in the verification of software.

See the diagrams of Kruskal's algorithm in operation.

Greedy operation

At no stage did we try to look ahead more than one edge: we simply chose the best one at any stage. Naturally, in some situations, this myopic view would lead to disaster!
The simplistic approach often makes it difficult to prove that a greedy algorithm leads to the optimal solution. Proof by contradiction is a common technique: we demonstrate that if we didn't make the greedy choice now, a non-optimal solution would result. Proving the MST algorithm is, happily, one of the simpler proofs by contradiction!

Data structures for graphs

You should note that we have discussed graphs in an abstract way: specifying that they contain nodes and edges and using operations like AddEdge, Cycle, etc. This enables us to define an abstract data type without considering implementation details, such as how we will store the attributes of a graph! This means that a complete solution to, for example, the MST problem can be specified before we've
even decided how to store the graph in the computer. However, representation issues can't be deferred forever, so we need to examine ways of representing graphs in a machine. As before, the performance of the algorithm will be determined by the data structure chosen.

Minimum Spanning Tree Animation
This animation was written by Mervyn Ng.
Please email comments to: morris@ee.uwa.edu.au
Key terms
Greedy algorithms
Algorithms which solve a problem by making the next step based on local knowledge alone, without looking ahead to determine whether the next step is the optimal one.

Equivalence Classes
The set of equivalence classes of a set is a partition of the set such that all the elements in each subset (or equivalence class) are related to every other element in the subset by an equivalence relation.

Union Find Structure
A structure which enables us to determine efficiently whether two elements belong to the same set or not.

Kruskal's Algorithm
One of the two algorithms commonly used for finding a minimum spanning tree; the other is Prim's algorithm.

Back to the Table of Contents
Proving the MST algorithm
© John Morris, 1998
Graph Representations
Data Structures and Algorithms: Kruskal's Algorithm
Data Structures and Algorithms
Kruskal's Algorithm
The following sequence of diagrams illustrates Kruskal's algorithm in operation.
gh is shortest. Either g or h could be the representative, g chosen arbitrarily.
ci creates two trees. c chosen as representative for second.
fg is next shortest. Add it, choose g as representative.
ab creates a 3rd tree
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/krusk.html (1 of 3) [3/23/2004 2:56:31 PM]
Add cf, merging two trees. c is chosen as the representative.
gi is next cheapest, but a cycle would be created. c is the representative of both.
Add cd instead
hi would make a cycle
Add ah instead
bc would create a cycle. Add de instead to complete the spanning tree: all trees joined, c is the sole representative.
Back to Minimum Spanning Tree
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Proving MST
Data Structures and Algorithms
Proving Greedy Algorithms
The very nature of greedy algorithms makes them difficult to prove. We choose the step that maximises the immediate gain (in the case of the minimum spanning tree, the smallest possible addition to the total cost so far) without thought for the effect of this choice on the remainder of the problem. So the commonest method of proving a greedy algorithm correct is proof by contradiction: we show that if we didn't make the "greedy" choice, then, in the end, we will find that we should have made that choice.
The Minimum Spanning Tree Algorithm
At each step in the MST algorithm, we choose the cheapest edge that would not create a cycle. We can easily establish that any edge creating a cycle should not be added: the cycle-completing edge is more expensive than any previously added edge, and the nodes which it connects are already joined by some path. Thus it is redundant and can be left out. Each edge that we add must join two subtrees. If the next cheapest edge, ex, would join two subtrees, Ta and Tb, then we must, at some later stage, use a more expensive edge, ey, to join Ta to Tb, either directly or by joining a node of one of them to a node that is now connected to the other. But we can join Ta to Tb (and any nodes which are now connected to them) more cheaply by using ex, which proves the proposition that we should choose the cheapest edge at each stage.
Complexity
The steps in Kruskal's algorithm are:

Initialise the forest                            O(V)
Sort the edges                                   O(E log E)
Until we've added V-1 edges            O(V) x
    check whether an edge forms a cycle  O(V)  = O(V²)
Total                                            O(V + E log E + V²)
Since E = O(V²)                                  O(V² log V)
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/greedy_proof.html (1 of 2) [3/23/2004 2:56:39 PM]
Thus, we may refer to Kruskal's algorithm as an O(n² log n) algorithm, where n is the number of vertices. However, note that if E is similar to V, then the complexity is O(n²). The alternative MST algorithm, Prim's algorithm, can be made to run in O(E + V log V) time, using Fibonacci heaps. Because of its wide practical application, the MST problem has been extensively studied and, for sparse (E approx= V) graphs, an even faster O(E log log V) algorithm is known (cf Fredman and Tarjan, "Fibonacci heaps and their uses in improved network optimization algorithms", JACM, 34(3), 596-615 (1987)). This emphasises the point that no good software engineer tries to reinvent wheels: he keeps a good algorithms text in his library and makes sure to refer to it before attempting to program a new problem!

Return to Minimum Spanning Tree
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Graph Representations
Data Structures and Algorithms
Graph Representations
Node Representations

Usually the first step in representing a graph is to map the nodes to a set of contiguous integers: (0, V-1) is the most convenient in C programs; other, more flexible, languages allow you greater choice! The mapping can be performed using any type of search structure: binary trees, m-way trees, hash tables, etc.

Adjacency Matrix

Having mapped the vertices to integers, one simple representation for the graph uses an adjacency matrix. Using a V x V matrix of booleans, we set aij = true if an edge connects i and j. Edges can be undirected, in which case if aij = true then aji = true also, or directed, in which case aij need not equal aji unless there are two edges, one in either direction, between i and j. The diagonal elements, aii, may either be ignored or, in cases such as state machines, where the presence or absence of a connection from a node to itself is relevant, set to true or false as required.

When space is a problem, bit maps can be used for the adjacency matrix. In this case, an ADT for the adjacency matrix improves the clarity of your code immensely by hiding the bit twiddling that this space saving requires! In undirected graphs, only one half of the matrix needs to be stored, but you will need to calculate the element addresses explicitly yourself. Again, an ADT can hide this complexity from a user!

If the graph is dense, ie most of the nodes are connected by edges, then the O(V²) cost of initialising an adjacency matrix is matched by the cost of inputting and setting the edges. However, if the graph is sparse, ie E is closer to V, then an adjacency list representation may be more efficient.

Adjacency List Representation

Adjacency lists are lists of nodes that are connected to a given node. For each node, a linked list of nodes connected to it can be set up. Adding an edge to a graph will generate two entries in adjacency lists: one in the list for each of its extremities.
Traversing a graph

Depth-first Traversal

A depth-first traversal of a graph uses an additional array to flag nodes that it has visited already. Using the adjacency matrix structure:

struct t_graph {
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/graph_rep.html (1 of 4) [3/23/2004 2:56:44 PM]
    int n_nodes;
    graph_node *nodes;
    int *visited;
    adj_matrix am;
    };

static int search_index = 0;

void search( graph g ) {
    int k;
    for(k=0;k<g->n_nodes;k++) g->visited[k] = FALSE;
    search_index = 0;
    for(k=0;k<g->n_nodes;k++) {
        if ( !g->visited[k] ) visit( g, k );
        }
    }

The visit function is called recursively:

void visit( graph g, int k ) {
    int j;
    g->visited[k] = ++search_index;
    for(j=0;j<g->n_nodes;j++) {
        if ( adjacent( g->am, k, j ) ) {
            if ( !g->visited[j] ) visit( g, j );
            }
        }
    }

This procedure checks each of the V^2 entries of the adjacency matrix, so is clearly O(V^2).

Using an adjacency list representation, the graph structure and the visit function change slightly:

struct t_graph {
    int n_nodes;
    graph_node *nodes;
    AdjListNode *adj_list;
    int *visited;
    };

void search( graph g ) {
    ... /* As adjacency matrix version */
    }

void visit( graph g, int k ) {
    AdjListNode al_node;
    int j;
    g->visited[k] = ++search_index;
    al_node = ListHead( g->adj_list[k] );
    while( al_node != NULL ) {
        j = ANodeIndex( ListItem( al_node ) );
        if ( !g->visited[j] ) visit( g, j );
        al_node = ListNext( al_node );
        }
    }

Note that I've assumed the existence of a List ADT with methods,
- ListHead
- ListItem
- ListNext

and also an AdjListNode ADT with an

- ANodeIndex
method.

The complexity of this traversal can be readily seen to be O(V+E), because it sets visited for each node and then visits each edge twice (each edge appears in two adjacency lists).

Breadth-first Traversal

To scan a graph breadth-first, we use a FIFO queue.

static queue q;

void search( graph g ) {
    int k;
    q = ConsQueue( g->n_nodes );
    for(k=0;k<g->n_nodes;k++) g->visited[k] = 0;
    search_index = 0;
    for(k=0;k<g->n_nodes;k++) {
        if ( !g->visited[k] ) visit( g, k );
        }
    }

void visit( graph g, int k ) {
    AdjListNode al_node;
    int j;
    AddIntToQueue( q, k );
    g->visited[k] = 1;                 /* C hack, 0 = false! */
    while( !Empty( q ) ) {
        k = QueueHead( q );            /* Removes the head of the queue */
        g->visited[k] = ++search_index;
        al_node = ListHead( g->adj_list[k] );
        while( al_node != NULL ) {
            j = ANodeIndex( ListItem( al_node ) );
            if ( !g->visited[j] ) {
                AddIntToQueue( q, j );
                g->visited[j] = 1;     /* Mark as seen; its index is set when it leaves the queue */
                }
            al_node = ListNext( al_node );
            }
        }
    }
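The depth-first traversal above can be exercised on a small fixed graph. This self-contained sketch (the 5-node graph and the names are invented for the example) records the order in which nodes are numbered:

```c
#include <string.h>
#include <assert.h>

/* Depth-first traversal of a small fixed graph, using the same
   visited/search_index scheme as the notes. Edges: 0-1, 0-2, 1-3;
   node 4 is isolated. */
#define N_NODES 5

static int adj[N_NODES][N_NODES] = {
    {0,1,1,0,0},
    {1,0,0,1,0},
    {1,0,0,0,0},
    {0,1,0,0,0},
    {0,0,0,0,0},
};
static int visited[N_NODES];
static int order[N_NODES];      /* order[i] = i-th node visited */
static int search_index = 0;

static void visit( int k ) {
    int j;
    visited[k] = ++search_index;
    order[search_index - 1] = k;
    for( j = 0; j < N_NODES; j++ )
        if ( adj[k][j] && !visited[j] ) visit( j );
}

void dfs_search( void ) {
    int k;
    memset( visited, 0, sizeof visited );
    search_index = 0;
    for( k = 0; k < N_NODES; k++ )
        if ( !visited[k] ) visit( k );
}
```

Starting from node 0, the traversal numbers 0, 1, 3, 2 and only then picks up the isolated node 4 from the outer loop.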
Key terms
Adjacency Matrix
    A structure for representing a graph in which the presence of arcs between nodes is indicated by an entry in a matrix.
Adjacency Lists
    An alternative structure for representing a graph in which the arcs are stored as lists of connections between nodes.
Breadth-first Traversal
    Traversing a graph by visiting all the nodes attached directly to a starting node first.
Depth-first Traversal
    Traversing a graph by visiting all the nodes attached to a node attached to a starting node before visiting a second node attached to the starting node.

Continue on to Dijkstra's Algorithm
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/graph_rep.html (4 of 4) [3/23/2004 2:56:44 PM]
Data Structures and Algorithms: Graph Algorithms
Prim's Algorithm
Prim's algorithm is very similar to Kruskal's: whereas Kruskal's "grows" a forest of trees, Prim's algorithm grows a single tree until it becomes the minimum spanning tree. Both algorithms use the greedy approach - they add the cheapest edge that will not cause a cycle. But rather than choosing the cheapest edge that will connect any pair of trees together, Prim's algorithm only adds edges that join nodes to the existing tree. (In this respect, Prim's algorithm is very similar to Dijkstra's algorithm for finding shortest paths.)

Prim's algorithm works efficiently if we keep a list d[v] of the cheapest weights which connect a vertex, v, which is not in the tree, to any vertex already in the tree. A second list pi[v] keeps the index of the node already in the tree to which v can be connected with cost, d[v].

int *MinimumSpanningTree( Graph g, int n, double **costs ) {
    Queue q;
    int i, u, v;
    double d[n];
    int *pi;
    q = ConsEdgeQueue( g, costs );
    pi = ConsPredList( n );
    for(i=0;i<n;i++) {
        d[i] = INFINITY;
        }
    /* Choose 0 as the "root" of the MST */
    d[0] = 0; pi[0] = 0;
    while ( !Empty( q ) ) {
        u = Smallest( q );
        for each v in g->adj[u] {      /* pseudo-code: scan u's adjacency list */
            if ( (v in q) && costs[u][v] < d[v] ) {
                pi[v] = u;
                d[v] = costs[u][v];
                }
            }
        }
    return pi;
    }

The steps are:
1. The edge queue is constructed.
2. A predecessor list, with an entry for each node, is constructed.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/prim.html (1 of 2) [3/23/2004 2:56:48 PM]
3. "Best" distances to each node are set to infinity.
4. Choose node 0 as the "root" of the MST (any node will do, as the MST must contain all nodes).
5. While the edge queue is not empty:
   1. Extract the cheapest edge, u, from the queue,
   2. Relax all of u's neighbours - if the distance of a neighbour, v, from the closest node in the MST formed so far is larger than costs[u][v], then update d[v] to costs[u][v] and set v's predecessor to u.
6. Return the predecessor list.

The time complexity is O(VlogV + ElogV) = O(ElogV), making it the same as Kruskal's algorithm. However, Prim's algorithm can be improved using Fibonacci Heaps (cf Cormen) to O(E + VlogV).
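The pseudo-code above leaves the queue operations abstract. For a dense graph, a simple O(V^2) array scan can stand in for the priority queue. The sketch below (fixed-size graph, invented names and weights) returns the total MST weight and fills in the predecessor list:

```c
#include <assert.h>

/* Array-based Prim: d[v] holds the cheapest known connection of v to the
   growing tree, pi[v] the tree endpoint of that connection. O(V^2) overall.
   A sketch with an invented 4-node graph, not the course's code. */
#define PRIM_V   4
#define NO_EDGE  1.0e9       /* stands in for INFINITY */

double prim_mst( double costs[PRIM_V][PRIM_V], int pi[PRIM_V] ) {
    double d[PRIM_V], total = 0.0;
    int in_tree[PRIM_V], i, u, v;

    for( i = 0; i < PRIM_V; i++ ) {
        d[i] = NO_EDGE; pi[i] = -1; in_tree[i] = 0;
    }
    d[0] = 0.0; pi[0] = 0;            /* node 0 is the root */

    for( i = 0; i < PRIM_V; i++ ) {
        u = -1;                       /* pick the cheapest node not yet in the tree */
        for( v = 0; v < PRIM_V; v++ )
            if ( !in_tree[v] && (u < 0 || d[v] < d[u]) ) u = v;
        in_tree[u] = 1;
        total += d[u];
        for( v = 0; v < PRIM_V; v++ ) /* relax u's neighbours */
            if ( !in_tree[v] && costs[u][v] < d[v] ) {
                d[v] = costs[u][v];
                pi[v] = u;
            }
    }
    return total;
}
```

On the 4-node example in the test (edges 0-1:1, 0-2:4, 1-2:2, 1-3:6, 2-3:3), the tree picks edges 0-1, 1-2 and 2-3 for a total weight of 6.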
Key terms
Predecessor list
    A data structure for defining a graph by storing a predecessor for each node with that node. Thus it uses a single array of integers to define a subgraph of a graph.
Fibonacci Heaps
    See Cormen, chapter 21.
Proving the MST algorithm
Graph Representations
Data Structures and Algorithms: Dijkstra's Algorithm
10.2 Dijkstra's Algorithm
Dijkstra's algorithm (named after its discoverer, E. W. Dijkstra) solves the problem of finding the shortest path from a point in a graph (the source) to a destination. It turns out that one can find the shortest paths from a given source to all points in a graph in the same time, hence this problem is sometimes called the single-source shortest paths problem. The somewhat unexpected result that all the paths can be found as easily as one further demonstrates the value of reading the literature on algorithms!

This problem is related to the spanning tree one. The graph representing all the paths from one vertex to all the others must be a spanning tree - it must include all vertices. There will also be no cycles, as a cycle would define more than one path from the selected vertex to at least one other vertex. In what follows, the graph is G = (V,E), where V is a set of vertices and E is a set of edges.
Dijkstra's algorithm keeps two sets of vertices:

- S, the set of vertices whose shortest paths from the source have already been determined, and
- V-S, the remaining vertices.

The other data structures needed are:

- d, an array of best estimates of the shortest path to each vertex, and
- pi, an array of predecessors for each vertex.

The basic mode of operation is:
1. Initialise d and pi.
2. Set S to empty.
3. While there are still vertices in V-S:
   i. Sort the vertices in V-S according to the current best estimate of their distance from the source,
   ii. Add u, the closest vertex in V-S, to S,
   iii. Relax all the vertices still in V-S connected to u.

Relaxation

The relaxation process updates the costs of all the vertices, v, connected to a vertex, u, if we could improve the best estimate of the shortest path to v by including (u,v) in the path to v.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/dijkstra.html (1 of 3) [3/23/2004 2:56:52 PM]
The relaxation procedure proceeds as follows:

initialise_single_source( Graph g, Node s )
    for each vertex v in Vertices( g )
        g.d[v] := infinity
        g.pi[v] := nil
    g.d[s] := 0;

This sets up the graph so that each node has no predecessor (pi[v] = nil) and the estimates of the cost (distance) of each node from the source (d[v]) are infinite, except for the source node itself (d[s] = 0).

Note that we have also introduced a further way to store a graph (or part of a graph - as this structure can only store a spanning tree): the predecessor subgraph - the list of predecessors of each node,

pi[j], 1 <= j <= |V|

The edges in the predecessor subgraph are (pi[v],v).

The relaxation procedure checks whether the current best estimate of the shortest distance to v (d[v]) can be improved by going through u (i.e. by making u the predecessor of v):

relax( Node u, Node v, double w[][] )
    if d[v] > d[u] + w[u,v] then
        d[v] := d[u] + w[u,v]
        pi[v] := u

The algorithm itself is now:

shortest_paths( Graph g, Node s )
    initialise_single_source( g, s )
    S := {}                    /* Make S empty */
    Q := Vertices( g )         /* Put the vertices in a PQ */
    while not Empty(Q)
        u := ExtractCheapest( Q );
        AddNode( S, u );       /* Add u to S */
        for each vertex v in Adjacent( u )
            relax( u, v, w )

Operation of Dijkstra's algorithm

As usual, the proof of a greedy algorithm is the trickiest part.
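Translating the pseudo-code into C, with the priority queue replaced by a simple array scan (adequate for small, dense graphs), gives a sketch like the following; the 4-node weight matrix in the test is invented for the example:

```c
#include <assert.h>

/* Array-based Dijkstra: at each step pick the cheapest vertex not yet
   in S, then relax its neighbours exactly as in relax() above.
   Sketch only - the graph size and names are invented. */
#define DIJ_V   4
#define DIJ_INF 1.0e9

void dijkstra( double w[DIJ_V][DIJ_V], int s, double d[DIJ_V], int pi[DIJ_V] ) {
    int in_s[DIJ_V], i, u, v;

    for( i = 0; i < DIJ_V; i++ ) {     /* initialise_single_source */
        d[i] = DIJ_INF; pi[i] = -1; in_s[i] = 0;
    }
    d[s] = 0.0; pi[s] = s;

    for( i = 0; i < DIJ_V; i++ ) {
        u = -1;                        /* ExtractCheapest by linear scan */
        for( v = 0; v < DIJ_V; v++ )
            if ( !in_s[v] && (u < 0 || d[v] < d[u]) ) u = v;
        in_s[u] = 1;
        for( v = 0; v < DIJ_V; v++ )   /* relax( u, v, w ) */
            if ( !in_s[v] && d[u] + w[u][v] < d[v] ) {
                d[v] = d[u] + w[u][v];
                pi[v] = u;
            }
    }
}
```

The test uses directed edges 0->1:1, 0->2:4, 1->2:2, 1->3:6, 2->3:1; relaxation shortens the route to node 2 via node 1, and to node 3 via node 2.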
Animation
In this animation, a number of cases have been selected to show all aspects of the operation of Dijkstra's algorithm. Start by selecting the data set (or you can just work through the first one, which appears by default). Then select either step or run to execute the algorithm. Note that it starts by assigning a weight of infinity to all nodes, and then selecting a source and assigning a weight of zero to it. As nodes are added to the set for which shortest paths are known, their colour is changed to red. When a node is selected, the weights of its neighbours are relaxed: nodes turn green and flash as they are being relaxed. Once all nodes are relaxed, their predecessors are updated; arcs are turned green when this happens. The cycle of selection, weight relaxation and predecessor update repeats itself until the shortest paths to all nodes have been found.

This animation was written by Mervyn Ng and Woi Ang. Please email comments to: morris@ee.uwa.edu.au

An alternative animation of Dijkstra's algorithm may give you a different insight!
Key terms
single-source shortest paths problem
    A descriptive name for the problem of finding the shortest paths to all the nodes in a graph from a single designated source. This problem is commonly known by the algorithm used to solve it - Dijkstra's algorithm.
predecessor list
    A structure for storing a path through a graph.
Continue on to Operation of Dijkstra's algorithm
Continue on to Huffman Encoding
Data Structures and Algorithms: Edsger Dijkstra
Edsger Dijkstra
Edsger Dijkstra made many more contributions to computer science than the algorithm that is named after him. He is currently Professor of Computer Sciences at the University of Texas: this is a link to his home page.

Back to Dijkstra's Algorithm
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/e_w_dijkstra.html [3/23/2004 2:56:54 PM]
Data Structures and Algorithms: Dijkstra's Algorithm - Predecessors
10.2.1 Predecessor Lists
The predecessor list is an array of indices, one for each vertex of a graph. Each vertex's entry contains the index of its predecessor in a path through the graph. In this example, the red arrows show the predecessor relations, so the predecessor list would be:

Vertex          s    u    v    x    y
Predecessor     s*   x    x    s    x
* Any convention to indicate a vertex with no predecessor will do: it can point to itself, as here, or be set to -1.
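Once the predecessor list is filled in, the path from the source to any vertex can be read off by chasing predecessors back from the destination and reversing. A small sketch, using the self-pointing convention above and (as an assumption for the example) the indices s=0, u=1, v=2, x=3, y=4 for the table's vertices:

```c
#include <assert.h>

/* Recover the source..dest path from a predecessor list, where the
   source is marked by pointing to itself. Returns the number of
   vertices in the path. Sketch only; assumes paths shorter than 64. */
int path_from_pred( const int pi[], int dest, int path[] ) {
    int tmp[64], n = 0, i, v = dest;

    while ( pi[v] != v ) {       /* walk back until the self-loop (source) */
        tmp[n++] = v;
        v = pi[v];
    }
    tmp[n++] = v;                /* the source itself */
    for( i = 0; i < n; i++ )     /* reverse into source-first order */
        path[i] = tmp[n - 1 - i];
    return n;
}
```

For the table above, the path to v runs v -> x -> s, which reverses to s, x, v.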
Back to Dijkstra's algorithm
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/dij_pred.html [3/23/2004 2:57:06 PM]
Data Structures and Algorithms: Dijkstra's Algorithm - Operation
Operation of Dijkstra's Algorithm
This sequence of diagrams illustrates the operation of Dijkstra's Algorithm.
Initial graph: all nodes have infinite cost except the source.
Choose the closest node to s. As we initialised d[s] to 0, it's s. Add it to S. Relax all nodes adjacent to s. Update the predecessor (red arrows) for all nodes updated.
Choose the closest node, x. Relax all nodes adjacent to x. Update the predecessors for u, v and y.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/dijop.html (1 of 2) [3/23/2004 2:57:14 PM]
Now y is the closest, add it to S. Relax v and adjust its predecessor.
u is now closest, choose it and adjust its neighbour, v.
Finally, add v. The predecessor list now defines the shortest path from each node to s.
Back to Dijkstra's Algorithm
Data Structures and Algorithms: Dijkstra's Algorithm - Proof
Proof of Dijkstra's Algorithm
We use a proof by contradiction again. But first, we assert the following lemmas:

Lemma 1
    Shortest paths are composed of shortest paths.
The proof of this is based on the notion that if there were a shorter path than any sub-path, then the shorter path should replace that sub-path to make the whole path shorter.

Lemma 2
    If s ->..-> u -> v is a shortest path from s to v, then after u has been added to S and relax(u,v,w) called, d[v] = delta(s,v) and d[v] is not changed thereafter.
The proof follows from the fact that at all times d[v] >= delta(s,v). For formal proofs, see Cormen or any one of the texts which cover this important algorithm.

Denote the distance of the shortest path from s to u as delta(s,u). After running Dijkstra's algorithm, we assert that d[u] = delta(s,u) for all u. Note that once u is added to S, d[u] is not changed and should be delta(s,u).

Proof by contradiction

Suppose that u is the first vertex added to S for which d[u] != delta(s,u). We note:
1. u cannot be s, because d[s] = 0.
2. There must be a path from s to u. If there were not, d[u] would be infinity.
3. Since there is a path, there must be a shortest path.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/dijproof.html (1 of 2) [3/23/2004 2:57:20 PM]
Let s -(p1)-> x -> y -(p2)-> u be the shortest path from s to u, where x is within S and y is the first vertex not within S.

When x was inserted into S, d[x] = delta(s,x) (since we hypothesise that u was the first vertex for which this was not true).

Edge (x,y) was relaxed at that time, so that

d[y] = delta(s,y) <= delta(s,u) <= d[u]

Now both y and u were in V-S when u was chosen, so d[u] <= d[y]. Thus the two inequalities must be equalities:

d[y] = delta(s,y) = delta(s,u) = d[u]

So d[u] = delta(s,u), contradicting our hypothesis. Thus when each u was inserted, d[u] = delta(s,u). QED

Continue on to Huffman Encoding
Data Structures and Algorithms: Huffman Encoding
11 Huffman Encoding
This problem is that of finding the minimum length bit string which can be used to encode a string of symbols. One application is text compression: what's the smallest number of bits (hence the minimum size of file) we can use to store an arbitrary piece of text?

Huffman's scheme uses a table of frequency of occurrence for each symbol (or character) in the input. This table may be derived from the input itself or from data which is representative of the input. For instance, the frequency of occurrence of letters in normal English might be derived from processing a large number of text documents and then used for encoding all text documents.

We then need to assign a variable-length bit string to each character that unambiguously represents that character. This means that no character's encoding may be a prefix of any other character's encoding. If the characters to be encoded are arranged in a binary tree, an encoding for each character is found by following the tree from the root to the character in the leaf: the encoding is the string of symbols on each branch followed. For example, using the encoding tree for ETASNO:

String     Encoding
TEA        10 00 010
SEA        011 00 010
TEN        10 00 110

Notes:
1. As desired, the highest frequency letters - E and T - have two digit encodings, whereas all the others have three digit encodings.
2. Encoding would be done with a lookup table.

A divide-and-conquer approach might have us asking which characters should appear in the left and right subtrees and trying to build the tree from the top down. As with the optimal binary search tree, this will lead to an exponential time algorithm.

A greedy approach places our n characters in n sub-trees and starts by combining the two least weight nodes into a tree which is assigned the sum of the two leaf node weights as the weight for its root node.

Operation of the Huffman algorithm.

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/huffman.html (1 of 3) [3/23/2004 2:57:36 PM]
The time complexity of the Huffman algorithm is O(nlogn). Using a heap to store the weight of each tree, each iteration requires O(logn) time to determine the cheapest weight and insert the new weight. There are O(n) iterations, one for each item.
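The greedy combination loop can be sketched without building the full tree: repeatedly merge the two smallest weights, and the accumulated merge costs equal the total number of bits in the encoded output. For brevity this sketch uses a linear scan where a real implementation would use the heap just described, so it is O(n^2) rather than O(nlogn); the frequencies in the test are invented:

```c
#include <assert.h>

/* Total cost of Huffman-merging n weights: repeatedly combine the two
   smallest, accumulating each combined weight. A linear scan stands in
   for the O(log n) heap operations. */
double huffman_cost( double w[], int n ) {
    double cost = 0.0;
    int a, b, i;

    while ( n > 1 ) {
        a = 0; b = 1;                  /* indices of the two smallest weights */
        if ( w[b] < w[a] ) { a = 1; b = 0; }
        for( i = 2; i < n; i++ ) {
            if ( w[i] < w[a] )      { b = a; a = i; }
            else if ( w[i] < w[b] )   b = i;
        }
        w[a] += w[b];                  /* merge: new sub-tree weight */
        cost += w[a];
        w[b] = w[--n];                 /* delete the slot that held b */
    }
    return cost;
}
```

The returned cost is the weighted path length of the tree, i.e. the length in bits of the encoded data.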
Decoding Huffmanencoded Data
Curious readers are, of course, now asking "How do we decode a Huffmanencoded bit string? With these variable length strings, it's not possible to break up an encoded string of bits into characters!"
The decoding procedure is deceptively simple. Starting with the first bit in the stream, one then uses successive bits from the stream to determine whether to go left or right in the decoding tree. When we reach a leaf of the tree, we've decoded a character, so we place that character onto the (uncompressed) output stream. The next bit in the input stream is the first bit of the next character.
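The walk down the decoding tree can be sketched directly. The sketch below builds the tree for the ETASNO codes used earlier (T=10, E=00, A=010, S=011, N=110); the helper names are invented for the example:

```c
#include <stdlib.h>
#include <assert.h>

/* Decoding tree: internal nodes have children, leaves carry a symbol.
   Helper names are invented for this sketch. */
typedef struct HuffNode {
    char sym;
    struct HuffNode *child[2];
} HuffNode;

HuffNode *huff_node( void ) {
    return calloc( 1, sizeof(HuffNode) );
}

/* Hang symbol sym at the end of the path spelled by code ("010" etc) */
void huff_insert( HuffNode *root, const char *code, char sym ) {
    for( ; *code; code++ ) {
        int b = *code - '0';
        if ( !root->child[b] ) root->child[b] = huff_node();
        root = root->child[b];
    }
    root->sym = sym;
}

/* Follow bits down the tree; emit a symbol and restart at each leaf */
void huff_decode( const HuffNode *root, const char *bits, char *out ) {
    const HuffNode *p = root;
    for( ; *bits; bits++ ) {
        p = p->child[*bits - '0'];
        if ( !p->child[0] && !p->child[1] ) {   /* reached a leaf */
            *out++ = p->sym;
            p = root;
        }
    }
    *out = '\0';
}
```

Decoding the bit string "1000010" with this tree yields "TEA", matching the encoding table above.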
Transmission and storage of Huffmanencoded Data
If your system is continually dealing with data in which the symbols have similar frequencies of occurrence, then both encoders and decoders can use a standard encoding table/decoding tree. However, even text data from various sources will have quite different characteristics. For example, ordinary English text will generally have 'e' at the root of the tree, with short encodings for 'a' and 't', whereas C programs would generally have ';' at the root, with short encodings for other punctuation marks such as '(' and ')' (depending on the number and length of comments!). If the data has variable frequencies, then, for optimal encoding, we have to generate an encoding tree for each data set and store or transmit the encoding with the data. The extra cost of transmitting the encoding tree means that we will not gain an overall benefit unless the data stream to be encoded is quite long, so that the savings through compression more than compensate for the cost of transmitting the encoding tree as well.

Huffman Encoding & Decoding Animation
This animation was written by Woi Ang.

Sample Code
A full implementation of the Huffman algorithm is available from Verilib. Currently, there is a Java version there; C and C++ versions will soon be available also.

Please email comments to: morris@ee.uwa.edu.au
Other problems
Optimal Merge Pattern
We have a set of files of various sizes to be merged. In what order and combinations should we merge them? The solution to this problem is basically the same as the Huffman algorithm - a merge tree is constructed with the largest files nearest its root.

Continue on to Fast Fourier Transforms
Data Structures and Algorithms: Huffman Encoding
Operation of the Huffman algorithm
These diagrams show how a Huffman encoding tree is built using a straightforward greedy algorithm which combines the two smallestweight trees at every step.
Initial data, sorted by frequency.
Combine the two lowest frequencies, F and E, to form a sub-tree of weight 14. Move it into its correct place.
Again combine the two lowest frequencies, C and B, to form a sub-tree of weight 25. Move it into its correct place.
Now the sub-tree with weight 14 and D are combined to make a tree of weight 30. Move it to its correct place.
Now the two lowest weights are held by the "25" and "30" sub-trees, so combine them to make one of weight 55. Move it after the A.
Finally, combine the A and the "55" sub-tree to produce the final tree. The encoding table is:
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/huffop.html (1 of 2) [3/23/2004 2:57:43 PM]
Data Structures and Algorithms: Huffman Encoding
A    0
C    100
B    101
F    1100
E    1101
D    111
Back to Huffman encoding
Data Structures and Algorithms: Fast Fourier Transforms
12 Fast Fourier Transforms
Fourier transforms have wide application in scientific and engineering problems, for example, they are extensively used in signal processing to transform a signal from the time domain to the frequency domain. Here, we will use them to generate an efficient solution to an apparently unrelated problem  that of multiplying two polynomials. Apart from demonstrating how the Fast Fourier Transform (FFT) algorithm calculates a Discrete Fourier Transform and deriving its time complexity, this approach is designed to reinforce the following points:
- 'Better' solutions are known to many problems for which, intuitively, it would not appear possible to find a better solution.
- As a consequence, unless you have read extensively in any problem area already, you should consult the literature before attempting to solve any numerical or data processing problem presented to you.
Because of the limitations of HTML in handling mathematical equations, the notes for this section were prepared with LaTeX and are available as a PostScript file. Continue on to Hard Problems
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/fft.html [3/23/2004 2:57:45 PM]
Data Structures and Algorithms: Hard Problems
13 Hard or Intractable Problems
If a problem has an O(n^k) time algorithm (where k is a constant), then we class it as having polynomial time complexity and as being efficiently solvable. If there is no known polynomial time algorithm, then the problem is classed as intractable. The dividing line is not always obvious. Consider two apparently similar problems:

Euler's problem (often characterised as the Bridges of Königsberg - a popular 18th century puzzle) asks whether there is a path through a graph which traverses each edge only once.

Hamilton's problem asks whether there is a path through a graph which visits each vertex exactly once.
Euler's problem
The 18th century German city of Königsberg was situated on the river Pregel. Within a park built on the banks of the river, there were two islands joined by seven bridges. The puzzle asks whether it is possible to take a tour through the park, crossing each bridge only once.
An exhaustive search requires starting at every possible point and traversing all the possible paths from that point - an O(n!) problem. However, Euler showed that an Eulerian path existed iff:

- it is possible to go from any vertex to any other by following the edges (the graph must be connected), and
- every vertex must have an even number of edges connected to it, with at most two exceptions (which constitute the starting and ending points).
It is easy to see that these are necessary conditions: to complete the tour, one needs to enter and leave every point except the start and end points. The proof that these are sufficient conditions may be found in the literature. Thus we now have an O(n) problem to determine whether a path exists.

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/hard.html (1 of 7) [3/23/2004 2:58:05 PM]
Transform the map into a graph in which the nodes represent the "dry land" points and the arcs represent the bridges.
We can now easily see that the Bridges of Königsberg does not have a solution. A quick inspection shows that it does have a Hamiltonian path.
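Euler's two conditions translate directly into code: given each vertex's degree, and assuming connectivity has been checked separately, we only need to count odd-degree vertices. A minimal sketch; the Königsberg land masses have degrees 5, 3, 3 and 3:

```c
#include <assert.h>

/* Degree test for an Eulerian path: at most two vertices of odd degree.
   Connectivity is assumed to have been verified separately. */
int euler_path_possible( const int degree[], int n ) {
    int i, odd = 0;
    for( i = 0; i < n; i++ )
        if ( degree[i] % 2 != 0 ) odd++;
    return odd == 0 || odd == 2;
}
```

With four odd-degree vertices, Königsberg fails the test, while any simple cycle (all degrees even) passes.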
However, there is no known efficient algorithm for determining whether a Hamiltonian path exists. But if a path were found, then it could be verified to be a solution in polynomial time: we simply verify that each edge in the path is actually an edge (O(e) if the edges are stored in an adjacency matrix) and that each vertex is visited only once (O(n^2) in the worst case).
Classes P and NP
What does NP mean? Euler's problem lies in the class P: problems solvable in Polynomial time. Hamilton's problem is believed to lie in class NP (Non-deterministic Polynomial). Note that I wrote "believed" in the previous sentence: no-one has yet succeeded in proving that efficient (ie polynomial time) algorithms don't exist!

At each step in the algorithm, you guess which possibility to try next. This is the non-deterministic part: it doesn't matter which possibility you try next. There is no information used from previous attempts (other than not trying something that you've already tried) to determine which alternative should be tried next. However, having made a guess, you can determine in polynomial time whether it is a solution or not. Since nothing from previous trials helps you to determine which alternative should be tried next, you are forced to investigate all possibilities to find a solution. So the only systematic thing you can do is use some strategy for
systematically working through all possibilities, eg setting out all permutations of the cities for the travelling salesman's tour.

Many other problems lie in class NP. Some examples follow.

Composite Numbers
    Determining whether a number can be written as the product of two other numbers is the composite numbers problem. If a solution is found, it is simple to verify it, but no efficient method of finding the solution exists.
Assignment
    Assignment of compatible roommates: assume we have a number of students to be assigned to rooms in a college. They can be represented as the vertices on a graph, with edges linking compatible pairs. If we have two per room, a class P algorithm exists, but if three are to be fitted in a room, we have a class NP problem.
Boolean satisfiability
    Given an arbitrary boolean expression in n variables:
        a1 op a2 op ... op an
    where the op are boolean operators (and, or, ...), can we find an assignment of (true,false) to the ai so that the expression is true? This problem is equivalent to the circuit-satisfiability problem, which asks whether we can find a set of inputs which will produce a true at the output of a circuit composed of arbitrary logic gates. A solution can only be found by trying all 2^n possible assignments.
Map colouring
    The three-colour map colouring problem asks if we can colour a map with three colours so that no adjoining countries have the same colour. Once a solution has been guessed, then it is readily proved. [This problem is easily answered if there are only 2 colours - there must be no point at which an odd number of countries meet - or 4 colours - there is a proof that 4 colours suffice for any map.]
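The 2^n search in the satisfiability example can be sketched directly; the formula is supplied as a function pointer, and the two tiny example formulas are invented:

```c
#include <assert.h>

/* Brute-force satisfiability: try all 2^n assignments of the variables.
   This is exactly the exponential search the text describes. */
typedef int (*Formula)( const int vals[] );

int satisfiable( Formula f, int n ) {
    unsigned long m;
    int vals[32], i;
    for( m = 0; m < (1ul << n); m++ ) {
        for( i = 0; i < n; i++ )         /* bit i of m assigns variable i */
            vals[i] = (int)((m >> i) & 1);
        if ( f( vals ) ) return 1;       /* found a satisfying assignment */
    }
    return 0;
}

/* Example formulas (invented for the sketch): */
int f_sat( const int v[] )   { return (v[0] || v[1]) && !v[0]; }  /* v0=0, v1=1 works */
int f_unsat( const int v[] ) { return v[0] && !v[0]; }            /* a contradiction */
```

Every guess is cheap to check, but nothing learned from one assignment helps choose the next - hence the exponential total cost.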
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/hard.html (3 of 7) [3/23/2004 2:58:05 PM]
Data Structures and Algorithms: Hard Problems
This problem has a graph equivalent: each vertex represents a country and an edge is drawn between two vertices if they share a common border. Its solution has a more general application. If we are scheduling work in a factory, each vertex can represent a task to be performed; tasks are linked by an edge if they share a common resource, eg require a particular machine. A colouring of the vertices with 3 colours then provides a 3-shift schedule for the factory.

Many problems are reducible to others: map colouring can be reduced to graph colouring. A solution to a graph colouring problem is effectively a solution to the equivalent map colouring or scheduling problem. The map or graph-colouring problem may be reduced to the boolean satisfiability problem. To give an informal description of this process, assume the three colours are red, blue and green. Denote the partial solution, "A is red", by ar, so that we have a set of boolean variables:

ar    A is red
ab    A is blue
ag    A is green
br    B is red
bb    B is blue
bg    B is green
cr    C is red
...

Now a solution to the problem may be found by finding values for ar, ab, etc which make the expression true:

((ar and not ab and not ag) and ( (bb and (cb and (dg ....

Thus solving the map colouring problem is equivalent to finding an assignment to the variables which results in a true value for the expression - the boolean satisfiability problem.

There is a special class of problems in NP: the NP-complete problems. All the problems in NP are efficiently reducible to them. By efficiently, we mean in polynomial time, so the term polynomially reducible provides a more precise definition. In 1971, Cook was able to prove that the boolean satisfiability problem was NP-complete. Proofs now exist showing that many problems in NP are efficiently reducible to the satisfiability problem. Thus
we have a large class of problems which are all related to each other: finding an efficient solution to one will result in an efficient solution for them all. An efficient solution has so far eluded a very large number of researchers, but there is also no proof that these problems cannot be solved in polynomial time, so the search continues.

Class NP problems are solvable by non-deterministic algorithms: these algorithms consist of deterministic steps alternating with non-deterministic steps in which a random choice (a guess) must be made. A non-deterministic algorithm must, given a possible solution:

- have at least one set of guessing steps which lead to the acceptance of that solution, and
- always reject an invalid solution.
We can also view this from the other aspect: that of trying to determine a solution. At each guessing stage, the algorithm randomly selects another element to add to the solution set: this is basically building up a "game" tree. Various techniques exist for pruning the tree - backtracking when an invalid solution is found and trying another branch - but this is where the exponential time complexity starts to enter!

Travelling salesman

It's possible to cast this problem - which is basically an optimality one: we're looking for the best tour - into a yes-no one also, by simply asking: can we find a tour with a cost less than x? By asking this question until we find a tour with a cost x for which the answer is provably no, we have found the optimal tour. This problem can also be proved to be in NP. (It is reducible to the Hamiltonian circuit problem.)

Various heuristics have been developed to find near-optimal solutions with efficient algorithms. One simple approach is to find the minimum spanning tree. One possible tour simply traverses the MST twice, so we can find a tour which is at most twice as long as the optimum tour in polynomial time. Various heuristics can now be applied to reduce this tour, eg by taking short-cuts. An algorithm due to Christofides can be shown to produce a tour which is no more than 50% longer than the optimal tour.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/hard.html (5 of 7) [3/23/2004 2:58:05 PM]
It starts with the MST and singles out all cities which are linked to an odd number of cities. These are linked in pairs by a variant of the procedure used to find compatible roommates.
This can then be improved by taking shortcuts.
Another strategy which works well in practice is to divide the "map" into many small regions and to generate the optimum tour by exhaustive search within those small regions. A greedy algorithm can then be used to link the regions. While this algorithm will produce tours as little as 5% longer than the optimum tour in acceptable times, it is still not guaranteed to produce the optimal solution.
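The MST-based bound described above can be made concrete with a short sketch. Everything here is illustrative: the 5-city distance matrix, the function names and the simple O(n^2) Prim implementation are assumptions for the example, not part of the course code. The tour is obtained by walking the MST in preorder, which "short-cuts" the doubled tree walk, so its cost is at most twice the MST weight and hence at most twice the optimum.

```c
#include <assert.h>

#define N 5
/* Hypothetical symmetric distance matrix for a 5-city instance */
static double d[N][N] = {
    { 0,  2,  9, 10,  7},
    { 2,  0,  6,  4,  3},
    { 9,  6,  0,  8,  5},
    {10,  4,  8,  0,  6},
    { 7,  3,  5,  6,  0}
};

static int parent[N];   /* MST parent of each city */
static int tour[N];     /* preorder walk of the MST = approximate tour */
static int tour_len;

/* Prim's algorithm on the complete graph: O(n^2) */
static void build_mst(void) {
    double best[N];
    int in_tree[N] = {0};
    int i, k;
    for (i = 0; i < N; i++) { best[i] = d[0][i]; parent[i] = 0; }
    in_tree[0] = 1;
    for (k = 1; k < N; k++) {
        int u = -1;
        for (i = 0; i < N; i++)
            if (!in_tree[i] && (u < 0 || best[i] < best[u])) u = i;
        in_tree[u] = 1;
        for (i = 0; i < N; i++)
            if (!in_tree[i] && d[u][i] < best[i]) { best[i] = d[u][i]; parent[i] = u; }
    }
}

/* Preorder walk: visiting cities in this order short-cuts the doubled
   tree walk, so the tour costs at most twice the MST weight */
static void walk(int u) {
    int v;
    tour[tour_len++] = u;
    for (v = 1; v < N; v++)
        if (parent[v] == u) walk(v);
}

double approx_tsp(void) {
    double cost = 0;
    int i;
    build_mst();
    tour_len = 0;
    walk(0);
    for (i = 0; i < N; i++)
        cost += d[tour[i]][tour[(i + 1) % N]];
    return cost;
}
```

For this matrix the MST weighs 14, so any tour produced this way costs at most 28; short-cut heuristics of the kind described above would then try to reduce it further.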
Key terms
Polynomial Time Complexity
    Problems which have solutions with time complexity O(n^k), where k is a constant, are said to have polynomial time complexity.
Class P
    Set of problems which have solutions with polynomial time complexity.
Non-deterministic Polynomial (NP)
    A problem which can be solved by a series of guessing (non-deterministic) steps, but whose solution can be verified as correct in polynomial time, is said to lie in class NP.
Eulerian Path
    Path which traverses each arc of a graph exactly once.
Hamiltonian Path
    Path which passes through each node of a graph exactly once.
NP-Complete Problems
    Set of problems which are all related to each other in the sense that if any one of them can be shown to be in class P, all the others are also in class P.
© John Morris, 1998
Data Structures and Algorithms: Games
Data Structures and Algorithms
14 Games
Naive Solutions
A naive program attempting to play a game like chess will:

a. Determine the number of moves which can be made from the current position,
b. For each of these moves:
   i. Apply the move to the current position,
   ii. Calculate a "score" for the new position,
   iii. If the maximum search "depth" has been reached, return with this score as the score for this move,
   iv. else recursively call the program with the new position.
c. Choose the move with the best score and return its score and the move generating it to the calling routine.

Because there are usually at least 20 possible moves from any given chess position, to search to a depth of m requires ~20^m moves. Since good human players usually look 10 or more moves ahead, the simple algorithm would severely tax the capabilities of even the fastest modern computer. However, with a little cunning, the number of moves which needs to be searched can be dramatically reduced, enabling a computer to search deeper in a reasonable time and, as recent events have shown, enabling a computer to finally be a match for even the best human players.
Alpha-Beta Algorithm

The Alpha-Beta algorithm reduces the number of moves which need to be explored by "cutting off" regions of the game tree which cannot produce a better result than has already been obtained in some part of the tree which has already been searched.
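The cut-off idea can be illustrated on a tiny fixed game tree. This is a sketch under assumptions: the 8 leaf scores, the branching factor of 2 and the function names are all invented for the example. Alpha is the best score the maximising player is already assured of, beta the best the minimising player is assured of; as soon as alpha >= beta the remaining children of a node cannot affect the result and are skipped.

```c
#include <assert.h>

/* A fixed game tree: depth 3, branching factor 2, so 8 leaves.
   Scores are from the point of view of the player to move at the root. */
static int leaves[8] = {3, 5, 6, 9, 1, 2, 0, -1};
int nodes_visited;

/* Plain minimax: explores every node of the tree */
int minimax(int idx, int depth, int maximizing) {
    int l, r;
    nodes_visited++;
    if (depth == 0) return leaves[idx];
    l = minimax(idx * 2,     depth - 1, !maximizing);
    r = minimax(idx * 2 + 1, depth - 1, !maximizing);
    return maximizing ? (l > r ? l : r) : (l < r ? l : r);
}

/* Alpha-beta: stops searching a node's children once alpha >= beta,
   because the opponent will never let the game reach this node */
int alphabeta(int idx, int depth, int alpha, int beta, int maximizing) {
    int c, v;
    nodes_visited++;
    if (depth == 0) return leaves[idx];
    for (c = 0; c < 2; c++) {
        v = alphabeta(idx * 2 + c, depth - 1, alpha, beta, !maximizing);
        if (maximizing) { if (v > alpha) alpha = v; }
        else            { if (v < beta)  beta  = v; }
        if (alpha >= beta) break;       /* cut off remaining children */
    }
    return maximizing ? alpha : beta;
}
```

On this 15-node tree both return the same value (5), but alpha-beta examines only 11 nodes; on deep chess trees with good move ordering the saving is far more dramatic.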
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/games.html [3/23/2004 2:58:21 PM]
Data Structures and Algorithms: Source Listings
Appendix B: Source Code Listings
This section collects references to all the source code listings inserted in other parts of the notes in one place.

Listing          Description
collection.h     Generic collection specification
collection.c     Array implementation of a collection
collection_ll.c  Linked list implementation of a collection
coll_a.h         Collection with ordering function set on construction
coll_at.c        Collection with ordering function set on construction
binsearch.c      Binary search
tree_struct.c    Implementation for a tree
tree_add.c       Trees
tree_find.c      Trees
heap_delete.c    Heaps
RadixSort.h      Radix Sort
RadixSort.c      Radix Sort
Bins.h           Radix Sort
Bins.c           Radix Sort
optbin.c         Optimal binary search tree
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/listings.html [3/23/2004 2:58:22 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/binsearch.c
static void *bin_search( Collection c, int low, int high, void *key )
{
    int mid, cmp;
    /* Termination check */
    if (low > high) return NULL;
    mid = (high+low)/2;
    cmp = memcmp( key, ItemKey(c->items[mid]), c->size );
    if ( cmp == 0 )
        /* Match, return item found */
        return c->items[mid];
    else if ( cmp < 0 )
        /* key is less than mid, search lower half */
        return bin_search( c, low, mid-1, key );
    else
        /* key is greater than mid, search upper half */
        return bin_search( c, mid+1, high, key );
}

void *FindInCollection( Collection c, void *key )
/* Find an item in a collection
   Pre-condition:  c is a collection created by ConsCollection,
                   c is sorted in ascending order of the key,
                   key != NULL
   Post-condition: returns an item identified by key if one exists,
                   otherwise returns NULL
*/
{
    int low, high;
    low = 0; high = c->item_cnt-1;
    return bin_search( c, low, high, key );
}
Data Structures and Algorithms: Availability
Getting these notes
Notes
A gzipped tar file of all the notes and animations is available here. It's about 6 Mbytes, so please don't ask for it to be emailed to you, especially if you're using a free mail host such as hotmail: it won't fit! If you place these notes on a public (or semi-public) server anywhere, then please leave the attributions to all the authors in the front page. I'd appreciate it if you'd also let me know where they've been placed: it's nice to know that your work is getting lots of exposure ;).
Animations
The animations alone (including the Java source code used to generate them) are available here.
Problems?
If your decompression program (most will work: gunzip on Linux and WinZip under Windows are known to be fine) complains about a corrupted file, I suggest fetching it again, making doubly sure that the transfer is taking place in binary mode. Some browsers try to be too smart about file types and try to decompress everything automatically. If this is happening and causing problems for you, then try a different browser, or try to download the file onto your machine without decompression first. If you have problems accessing the files, email me giving me as much information about the problem as possible and I will try to help, but don't expect much from a simple "I can't download your files".

John Morris
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/avail.html [3/23/2004 2:58:26 PM]
Data Structures and Algorithms: Slides
PowerPoint Slides
1998 Lectures
The files in the table below are gzipped files of PowerPoint slides. You will need a PowerPoint viewer to look at them. These are the actual slides from the 1998 lectures: expect some improvements, error corrections and changes in the order in which topics are presented. However, the 1999 lectures will mainly use the same material.

Please note that the "information density" on lecture slides is very low: printing out all the slides on single pages will consume a large number of trees for the amount of information thus gained. The lecture notes themselves have a much higher information density. However, running through the slides with a viewer may be a valuable way of refreshing your memory about major points made in lectures. If you must print them out, it is strongly suggested that you use PowerPoint's "6-up" facility!

Lists | Stacks | Searching | Complexity | Sorting | Bin Sort | Searching (2) | Searching (3) | Hash Tables | Dynamic Algorithms | Dynamic Algorithms | Minimum Spanning Trees | Equivalence Classes | Graph Representations | Dijkstra's Algorithm | Huffman Encoding | Fourier Transforms | Hard Problems | Games | Experimental Design | Functions
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/ppt/index.html (1 of 2) [3/23/2004 2:58:28 PM]
Data Structures and Algorithms: Timetable
Course Management

Workshops
Before starting on the assignment exercises, it's worthwhile to consider the design of experiments first.
1999 Workshops
- Lab Schedule 1999
- There is no assignment 2.
- Assignments 3 & 4 - 1999
- Submission instructions
1998 Workshops
You might find that browsing through previous years' workshops and the feedback notes helps you to determine what is expected!

- Workshop/Assignment 1 - 1998
- Workshop/Assignment 2 - 1998
- Assignments 3 & 4 - 1998
1997 Workshops
- Workshop 1 - Collections
- Workshop 2 - Searching
- Workshop 3 - QuickSort vs RadixSort
- Workshop 4 - Red-Black Trees
1996 Workshops
- Workshop 1 - Collections
- Workshop 1 - Feedback
- Workshop 2 - Searching
- Workshop 2 - Feedback
- Workshop 3 - Minimum Spanning Trees
- Workshop 3 - Feedback
- Workshop 4 - Minimum Spanning Trees
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/timetable.html (1 of 2) [3/23/2004 3:00:15 PM]
- Workshop 4 - Feedback
Past Exams
1997
Tutorial Exercises
1. Arrays or Linked Lists? Overheads, Complexity
2. Asymptotic behaviour, ADT Design
3. Sheet 3
   - B+ trees
   - stable sorting
   - a puzzle
   - AVL trees
   - dynamic memory allocation
   - equivalence classes
4. Sheet 4
   - Heap Sort
   - Quick Sort
   - Radix Sort
   - Hash Tables
   - Search Trees
5. Sheet 5
   - MST
6. Sheet 6
   - Hard problems
Data Structures and Algorithms: Experimental Design
Experimental Design
Designing Experiments
Designing a good experiment is not a trivial task: it needs some thought and care if the results of your work are going to lead to valid conclusions. While there are many variations of good strategies, the following basic steps will generally lead to a good procedure:

1. Form a hypothesis which your experiment(s) will test,
2. Design some experiments to test this hypothesis,
3. Run the experiments and collect the data,
4. Analyze your data:
   - Transform your raw experimental data into a form which permits readily verifying or disproving the hypothesis,
   - Estimate the experimental error in the original data and transformed data,
   - Compare the transformed data with expectations produced by the hypothesis.
5. If the experimental data and expected results are in accord (within the experiment's error limits), then you have a basis for claiming the original hypothesis to be true.
6. If there is a discrepancy, you have a number of strategies:
   - Analyse the experiments for sources of error,
   - Repeat the experiments to confirm the results (not usually very profitable for computer-based experiments!),
   - Redesign the experiments, or
   - Form a new hypothesis.
Of course, if you are doing original research (that is, you are testing some new hypothesis for the first time), then you now need to form alternative hypotheses and see whether the experimental data would match them also, and repeat the experiments (or formulate new ones) to ensure that you can reproduce the original results. This cycle is potentially never-ending! However, for the experiments you will need to do in this course, the results are well-known and one 'pass' through this process should suffice!
Rules
There are a few very important rules:

1. You must report all experimental results, unless they are clearly wrong (eg your program had an error).
2. You may not reject any measurement unless you can provide an ironclad argument to justify its rejection.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/exp_design.html (1 of 3) [3/23/2004 3:00:28 PM]
An example of an acceptable reason for rejecting or ignoring a result is:

    The computer's timer has a minimum resolution of 10ms, so all runs with times less than 1s were ignored because they could have had errors >1%.

Of course, a really careful experimenter would not discount these results either, but confirm that they also matched the hypothesis, making allowance for the known error. However, in the interests of efficiency, this level of precision will not be expected here!

Reporting results

It is usual to transform your results to some form which makes it easy to verify that they conform to the hypothesis. The most common strategy is linearisation: plotting results on a graph with axes designed to produce a straight line. Another good strategy is normalisation: reduction of all results to a constant value.

The normalisation strategy is a good one to employ here: assume that you are verifying the time complexity of quicksort, which is usually an O(n log n) algorithm.

- Your hypothesis is that quicksort takes n log n time, or more formally: T(n) <= c n log n, where T(n) is the time to sort n items and c is a constant.
- So time a series of sorts of increasingly larger collections of items.
- Divide the times by n log n. If you have a set of constants, c +/- some suitably small variation, then the running time may be expressed as T(n) = c n log n, verifying your hypothesis.
Note that you will need to ensure the values of n chosen for the experiment span a suitably wide range, so that c n log n, the expected running time, also spans a reasonable range. For the sort experiment, this will be satisfied if the largest value of n is 10 times the smallest one, giving times differing by a factor of ~30 (10 log2 10). However, if you were timing a search algorithm with O(log n) time complexity, then values of n ranging over one order of magnitude will only produce times varying by a factor of just over 3 (log2 n). This will generally be insufficient to provide a convincing verification (over such a small range, other functions of n will also fit quite well!), so you would need to try to have values of n ranging over, say, three orders of magnitude, to give times ranging over one order of magnitude (log2 1000 ~ 10).

Lecture slides

You will find some additional points and examples in the lecture slides.
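The normalisation procedure above might be coded like this. It is only a sketch: the choice of the library qsort as the algorithm under test, the integer log2 helper and all function names are assumptions for illustration.

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Integer approximation of log2(n): good enough for normalisation */
double log2n(int n) {
    double l = 0;
    while (n > 1) { n /= 2; l += 1; }
    return l;
}

/* Time one qsort of n random integers, in seconds */
double time_sort(int n) {
    int *a = malloc(n * sizeof(int));
    clock_t t0, t1;
    int i;
    for (i = 0; i < n; i++) a[i] = rand();
    t0 = clock();
    qsort(a, n, sizeof(int), cmp_int);
    t1 = clock();
    free(a);
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

/* If qsort is O(n log n), this ratio should come out as roughly
   the same constant c for every n tried */
double normalised(int n) {
    return time_sort(n) / (n * log2n(n));
}
```

Printing normalised(n) for a range of n spanning an order of magnitude or more, and checking that the values are constant within experimental error, is exactly the verification described in the text.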
Data Structures and Algorithms: Table of Contents
Laboratories  1999
Preliminary: Watch the Web pages for changes!

Tutorials
Thursday, 9 am AG13 Thursday, 2 pm E269
Laboratories
A tutor will be available in the laboratory on three afternoons every week in G.50 to assist you with assignments for PLSD210 (as well as other second year IT subjects): watch the Web pages for the actual times.
Assignments
Actual assignment specifications and deadlines will be set at the beginning of semester.

Assignment                  Due
Assignment 1                August 30, 10pm
Assignment 2                September 13, 10pm
Assignment 3 (preliminary)  September 23 (for feedback only)
Assignment 3 (final)        October 13, 10pm (soft)
Assignment 3 (final)        October 20, 10pm (hard)
Assignment 4                October 20, 10pm
Please note the 10pm deadlines: if you haven't finished by then, you have 24 hours before you incur any further penalty. It is strongly suggested that you go home, sleep and tackle any remaining problems with a clear head! It's likely that clear thinking will compensate for your late penalty, whereas 'hacking' will only add to it! Assignment 3 (preliminary) is strongly advised: it will not be graded, but failure to have your specification checked before proceeding will almost certainly lose marks for assignment 3 itself!
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/labs_1999.html (1 of 2) [3/23/2004 3:00:35 PM]
Data Structures and Algorithms  Assignments 3 & 4
Assignments 3 & 4  1999
Algorithm Implementation and Verification
The basic aims of this assignment are to:

a. give you some experience in implementing an algorithm of reasonable complexity,
b. ensure that you understand how to rigorously verify that a function is correct, and
c. determine its time complexity.

For variety, and to give you some practical exposure to more than one algorithm, you will implement an algorithm chosen as specified below as your assignment 3 exercise and test another one (which you will obtain from a classmate who has completed it for assignment 3) for your assignment 4. The list of algorithms to be implemented (and verified) is:

1. Hash Table Lookup,
2. Radix Sort,
3. Fast Fourier Transform,
4. Dijkstra's Algorithm,
5. Kruskal's Algorithm,
6. Prim's Algorithm,
7. Optimal Binary Search Tree,
8. Matrix Chain Multiplication,
9. Red-Black tree,
10. AVL tree.
Code - of varying degrees of suitability - may be found for most of these either in textbooks or on the Web. You will need to take that code and rework it into a suitable class structure: simple copies of other people's code are unlikely to gain much credit. For sorting and searching algorithms, obviously an extension of the Collection class will work well. For graph algorithms, you should make a general-purpose Graph class. For the FFT, a class of numerical datasets, called something like DataSet, would do, although you could probably use the Collection class also. Matrix chain multiplication is obviously a method on a class of matrices which takes an array of matrices and works out how to most efficiently multiply them, then multiplies them! However, it is likely that you will be able to use someone else's code very effectively by simply
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/a34_1999.html (1 of 4) [3/23/2004 3:00:39 PM]
"wrapping" it in an appropriate method. It makes sense to team up with one other member of the class so that you can exchange code that you have written for assignment 3 to do assignment 4. You must verify and measure the time complexity of a different algorithm from the one you implemented for assignment 3.
Language
It is required that you use a language with a suitable standard. (Suitable means approved by a recognised standards body, not a so-called industry standard!) ANSI standard C is the obvious candidate. However, algorithms coded in Java are acceptable. (C++ will be allowed if you can convince us that you know enough about the ANSI standard to follow it faithfully! The final approval was in November, 1997. So, if you have a copy of the C++ standard and can convince me that your program conforms to it, I will accept it!) Java also has an ISO standard now and will be allowed for those who want to get some practice with it.
Preliminary Submission
In order that you can get fixed and stable specifications of the class containing the method you'll be verifying for assignment 4, you are required to submit, for feedback and approval only, a copy of the specifications of any classes that you will be implementing in mid-September. These specifications will be checked and approved so that:

i. you can proceed with confidence,
ii. your partner, who needs a concrete specification of your class to design and write verification and timing programs, can proceed.

This preliminary submission will not be marked and you will have a chance to correct any shortcomings, but you will be penalised if it is late, as another member of the class is probably depending on it. Only the specifications are needed: well annotated .h file(s). However, you may submit for comment anything that you have done up to that point.
Report
Assignment 3 Since someone else is going to perform some rigorous tests on your algorithm, you only need to perform some rudimentary tests yourself. It would be extremely unprofessional to hand over for verification something that you had not tested yourself at all! Your assignment 3 report should include, as a minimum, 1. brief description of the structure of your code, 2. suitable references or acknowledgments for any code that you might have obtained (in any form) from others and
3. a brief description of your preliminary tests. If your classmate has finished the verification of your code before you need to submit, then it will do no harm to submit his or her "test certificate" (ie report) along with your submission in order to strengthen your case for a good mark. However, the markers will generally try to pair up code and tests in their assessment of submissions. Assignment 4 This report will need to be reasonably substantial as it will need to describe the analysis of the algorithm tested for equivalence classes, the results of the tests (hopefully all as expected!) and the time complexity measurements. The report should be plain ASCII text  not the native form of any word processor.
Choosing your algorithm
Using the second last digit of your student number (ie leaving out the final check digit), choose the appropriate algorithm from the numbered list (or the radix sort challenge!). If your number is xxxxx0x, your problem number is 10. For testing, you can team up with anyone doing a different algorithm for assignment 3. If you have problems (your original team-mate withdraws, gets a job with a six-digit income, breaks a leg, etc), you can obtain the same, or any other, algorithm from someone else.
Special Challenges
Radix Sort

The challenge of designing a practical, general-purpose radix sort that runs faster than the library quicksort is open to anyone. General-purpose means that, just like our collections that support sorts and searches on items with any form of key, your radix sort should also support any set of radices for the keys of items to be sorted. If you're attempting this challenge and you're unsure of the significance of this, it would probably be a good idea to seek advice.

Bonus: 1.5 times your mark for assignment 3 if you can demonstrate that your radix sort is really faster than the library quicksort on random data that I will generate!

Red-black and AVL trees

For a team of three: the algorithm in Cormen (and some other texts) is not the most efficient: implement both Cormen's version and that found in Weiss' text, as well as AVL trees. Three algorithms or variants = three people!
Bonus: 1.5 times for the team if you can produce three pieces of code in a consistent style (to make comparisons valid) and, as part of your verification exercise, show exactly how efficient each one is. Obviously, to get the full 1.5 bonus, your report needs to have some intelligent analysis of the differences (if any) observed! Quick sorting We would expect the library implementation of quick sort to be pretty damned quick  certainly faster than a naively coded C variant. Can you code yourself a version of qsort that is as good as the library versions? Bonus: 1.5 times if you're faster than the library quicksort on at least two different machines and you produce a report showing how various speedup efforts improved the performance. 1.4 times if you're close on two machines without resorting to assembler  ie the same code is used on both machines! Prim's and Kruskal's Algorithms Various improvements over the 'standard' algorithms are known. If you can implement one successfully, and demonstrate that it's faster than somebody else's standard one, a bonus of 1.5 will apply also! Other problems If you expect an HD, then I will provide you with an appropriate bonus challenge for any problem if requested. Submission Instructions will be available soon!
Data Structures and Algorithms  Submission
Submitting Assignments
You can use the same procedure that you used for CLP110 assignments (or the similar facility that should be available for the Unix machines by second semester):

a. go to one of the NT machines in 1.51 (there are some in other labs also),
b. ftp the files you want to submit from wherever you prepared them,
c. use the submission program (submit) you can see sitting on the desktop.

The submission program tells you at each step what it wants you to do, so you shouldn't have any trouble with it. If you stuff it up (to use a favourite expression of one of your classmates), the submission program will not let you try again with the same assignment code, so I have created assignment codes xA (where x is the assignment number). We will ignore submissions in the original area if we find one in the "A" area; eg if you submit assignment 3 and find that you've made an error, simply make a complete new submission to the 3A area. When we find the one in the 3A area, we will throw away the one in the 3 area. This will make the process almost paperless and hopefully a little more efficient. An annotated copy of the report file will be emailed back to you with your mark.

If you have problems, email me with some details of the problem. You can either print out your submission and hand it in normally or copy it to a floppy disc and hand that in. In any case, if you email me advising of a problem, you can hand it in before Tuesday's lecture with no penalty.
DO NOT (that's NOT) SUBMIT by EMAIL
The obligatory FAQ section
No Submit Icon on your desktop? Follow these simple instructions:
- The submit program lives in W:\BIN\SUBMIT.EXE
- One can make a new shortcut to point to it by:
  - Open Explorer (press the Windows key and E),
  - If the W:\ drive doesn't appear, click on View\GoTo and type in W:\,
  - Find w:\bin\submit.exe,
  - Drag the icon to the desktop: a shortcut will be made on your desktop.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/submit.html (1 of 2) [3/23/2004 3:00:46 PM]
Data Structures and Algorithms  Workshop 1
Workshop 1  1998
Familiarisation
The basic aims of this workshop are to
- familiarise you with techniques to time an algorithm written in ANSI C and
- experimentally confirm the time complexity of addition and searching operations for various data structures.
You will need to write a simple program to insert data into a tree, measuring the average time to add an item and to find a randomly chosen item in the tree.

1. Download and save into your directory from the download window:
   a. Collection.h
   b. Collection.c
   c. tree_struct.c
   d. tree_add.c
   e. tree_find.c and
   f. the test program tc.c.
2. Modify the collection implementation so that it uses a tree structure rather than the original array. Edit out the original structure, find and add methods and load in the new ones. Of course, you will be able to leave the specification of the Collection class untouched.
3. Compile into an executable:
       gcc -o tc tc.c collection.c
   Note that you will need to use an ANSI C compiler as the programs are written in ANSI C. On some Unix machines, cc only accepts the older "K&R C".
4. Run the program and verify that it runs as expected. Examine the test program listing to determine how it runs!
5. Now we want to find out how efficiently it runs:
   i. Modify the test program so that it generates and then inserts a large number, n, of integers into the collection. Note that you will need to be careful about the choice of the set of numbers that you generate. Compare what happens to your times when you use the set of integers 1 to n, to when you use the rand() function to generate a set of random numbers to add. Once you have created a collection with n items in it, determine how long it takes, on average, to find an item in the collection. Again you will need to generate a set of "probe" data which you search for in the collection. (Searching for the same item multiple times may cause problems and give you misleading answers - why?)

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/w1_1998.html (1 of 3) [3/23/2004 3:00:49 PM]
Timing
Timing an individual function call has some traps: read these notes for some guidelines.

   ii. Determine the average time to search the collection for a randomly generated integer, again by finding the time to search for n randomly generated integers. Use the random number generator, rand(), to generate random integers.
   iii. Modify the program so that it prints out the insertion and searching times for a range of values of n. Suitable values will produce run times between about 1 and 20 seconds. About 10 values should enable you to determine the characteristics of the time vs n curve.

Warning: Note carefully that the test program, tc.c, is a test program, designed to demonstrate that the collection code is correct. You will need to reorganise it quite a bit to perform the timing analyses efficiently.
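The batch-timing idea behind these steps can be sketched as follows. The dummy operation here is a stand-in for a real insertion or search on your Collection; its name, and the helper's name, are assumptions for the example.

```c
#include <assert.h>
#include <time.h>

static volatile long sink;   /* stops the compiler optimising the loop away */
void dummy_insert(int i) { sink += i; }

/* clock() typically ticks only every 10ms or so - far coarser than one
   insertion or search - so time a whole batch of n calls and divide */
double avg_time(void (*op)(int), int n) {
    clock_t t0, t1;
    int i;
    t0 = clock();
    for (i = 0; i < n; i++) op(i);
    t1 = clock();
    return ((double)(t1 - t0) / CLOCKS_PER_SEC) / n;
}
```

Choosing n so that the whole batch runs for at least a second or two keeps the timer-resolution error below about 1%, as discussed in the experimental design notes.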
Report
Prepare a brief report which summarises your results. (The report should be plain ASCII text  not the native form of any word processor.) This report should start by forming a hypothesis about your expected results. This should be followed by the actual results. The conclusion should provide a convincing argument as to why these results confirm your original hypotheses. It should also highlight and attempt to explain any discrepancies. You are expected to measure the addition and searching times. Your report should also discuss the difference (if any) in results observed when a sequence of integers, 1, 2, .. is used for the test data compared to a randomly chosen list of integers. If you design your program efficiently, you will be able to get it to generate a table of data for you which you can paste directly into the report.
Submission Instructions will be available soon!
Data Structures and Algorithms  Workshop 2
Workshop 2  1998
Sorting
The basic aims of this workshop are to
- compare various sorting routines.
You will need to write a simple program to generate lists of data to be sorted and then to time various sorting algorithms for a sufficient range of sizes of the data list to verify that the expected time complexity is observed. You should augment your Collection class so that you can add two sorting methods:
- HeapSort and
- QuickSort
So that you can verify that your sort algorithms are operating correctly, add another method:
    int Sorted( Collection c );
which verifies that the data is, in fact, sorted after the sort algorithm has been applied. In order to make this code as general-purpose and useful as possible, you should ensure that the Collection constructor has an additional argument, the comparison function. This will be discussed in the lecture on Aug 10, so you can either defer worrying about it while you design the rest of the system (it involves a straightforward addition to the design) or test your tutor's memory of the definitely non-intuitive syntax required - alternatively read your C text or the lecture notes. In order to understand the difference in the results that you obtain, you should instrument your algorithms to count the number of comparisons and the number of exchanges made. Note that although you can use the machine's library quicksort routine, qsort, initially, you will need to implement a version of your own by downloading the code from the notes in order to instrument the algorithm. Think carefully about how you will add the instrumentation code and how you will obtain the statistics when the sort has completed. This should disturb the "normal" code as little as possible, so that the instrumented code runs in ordinary programs with no changes and still sorts correctly.
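A minimal sketch of such a Sorted method, assuming the array-of-pointers Collection representation and memcmp key comparison used in binsearch.c elsewhere in these notes. In the real exercise the comparison function supplied to the constructor would replace memcmp; the struct fields, the ItemKey macro and the test helper here are assumptions.

```c
#include <string.h>
#include <assert.h>

struct t_Collection {
    void **items;      /* array of pointers to items */
    int item_cnt;      /* number of items currently stored */
    int size;          /* number of bytes in a key */
};
typedef struct t_Collection *Collection;

/* In the course code ItemKey extracts the key from an item;
   here we assume the item is its own key */
#define ItemKey(x) (x)

/* Return 1 (true) if every adjacent pair of items is in
   non-descending key order, 0 otherwise */
int Sorted(Collection c) {
    int i;
    for (i = 0; i + 1 < c->item_cnt; i++)
        if (memcmp(ItemKey(c->items[i]),
                   ItemKey(c->items[i + 1]), c->size) > 0)
            return 0;
    return 1;
}

/* Convenience wrapper for testing with single-byte keys */
int bytes_sorted(unsigned char *a, int n) {
    void *items[64];
    struct t_Collection c;
    int i;
    for (i = 0; i < n && i < 64; i++) items[i] = &a[i];
    c.items = items; c.item_cnt = n; c.size = 1;
    return Sorted(&c);
}
```

A check like this is deliberately O(n) and independent of the sort being tested, so it can be run after every sort without disturbing the instrumented code.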
As a final part of this assignment, you should test the library quicksort algorithm to see if you can infer what type of pivot selection is used. (You can limit this to "naive" or otherwise - but anyone who can devise a technique which reliably detects what strategy is used under the "otherwise" heading is assured of 100% for this assignment, as long as no major design crimes are committed!) Check the qsort library routine on another type of machine as a demonstration that your code is ANSI standard compliant and portable. If it is, then this shouldn't take more than 10 minutes: log into another machine (you can use any other one you like), recompile, rerun and you should have an answer! (Although using one of the fancy GUI compilers on a Windows machine will take more than 10 minutes to set the project up!)
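One simple probe for a naive pivot choice is to time qsort on random data and on already-sorted data of the same size: a first- or last-element pivot degrades to quadratic time on sorted input, while a smarter strategy gives comparable times. A sketch (using ANSI C's clock(); the helper names are invented):

```c
#include <stdlib.h>
#include <time.h>

int cmp_int( const void *a, const void *b )
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Time one qsort call on an n-element array; returns CPU seconds. */
double TimeQsort( int *a, int n )
{
    clock_t t0 = clock();
    qsort( a, n, sizeof(int), cmp_int );
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

Called once on an array filled with rand() values and once on an array filled with 0, 1, 2, ..., a dramatic blow-up in the second time suggests a naive pivot; similar times suggest "otherwise".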
Report
Prepare a brief report which summarises your results. It should contain, as a minimum:
- your raw experimental results,
- the statistics obtained by instrumenting the sorting routines,
- your inferences from this data, and
- your inference as to the quality of the library quicksort routine.
(The report should be plain ASCII text - not the native form of any word processor.)

This report should start by forming a hypothesis about your expected results. This should be followed by the actual results. The conclusion should provide a convincing argument as to why these results confirm your original hypotheses. It should also highlight and attempt to explain any discrepancies. If you design your program efficiently, you will be able to get it to generate a table of data for you which you can paste directly into the report.

Submission Instructions will be available soon!
Data Structures and Algorithms - Assignments 3 & 4
Data Structures and Algorithms
Assignments 3 & 4 - 1998
Algorithm Implementation and Verification
The basic aims of this assignment are to:
a. give you some experience in implementing an algorithm of reasonable complexity,
b. ensure that you understand how to rigorously verify that a function is correct, and
c. determine its time complexity.

For variety - and to give you some practical exposure to more than one algorithm - you will implement an algorithm chosen as specified below as your assignment 3 exercise and test another one (which you will obtain from a classmate who has completed it for assignment 3) for your assignment 4.

The algorithms to be implemented (and verified) are:
1. Hash Table Lookup,
2. Radix Sort,
3. Fast Fourier Transform,
4. Dijkstra's Algorithm,
5. Kruskal's Algorithm,
6. Prim's Algorithm,
7. Optimal Binary Search Tree,
8. Matrix Chain Multiplication,
9. Red-Black tree,
10. AVL tree.
Code - of varying degrees of suitability - may be found for most of these either in textbooks or on the Web. You will need to take that code and rework it into a suitable class structure: simple copies of other people's code are unlikely to gain much credit. For sorting and searching algorithms, obviously an extension of the Collection class will work well. For graph algorithms, you should make a general-purpose Graph class. For the FFT, a class of numerical datasets, called something like DataSet, would do, although you could probably use the Collection class also. Matrix chain multiplication is obviously a method on a class of matrices which takes an array of matrices and works out how to most efficiently multiply them, then multiplies them! However, it is likely that you will be able to use someone else's code very effectively by simply
"wrapping" it in an appropriate method. It makes sense to team up with one other member of the class so that you can exchange code that you have written for assignment 3 to do assignment 4. You must verify and measure the time complexity of a different algorithm from the one you implemented for assignment 3.
Language
It is required that you use a language with a suitable standard. ("Suitable" means approved by a recognised standards body - not a so-called industry standard!) ANSI standard C is the obvious candidate; however, algorithms coded in Java are acceptable. (C++ would be allowed if the standard had been approved long enough ago for us all to know what was actually in it! The final approval was in November 1997. I may be persuaded to allow C++ if you have a copy of the C++ standard and can convince me that your program conforms to it!)
Preliminary Submission
In order that you can get fixed and stable specifications of the class containing the method you'll be verifying for assignment 4, you are required to submit - for feedback and approval only - a copy of the specifications of any classes that you will be implementing in mid-September. These specifications will be checked and approved so that:
i. you can proceed with confidence, and
ii. your partner, who needs a concrete specification of your class to design and write verification and timing programs, can proceed.

This preliminary submission will not be marked and you will have a chance to correct any shortcomings, but you will be penalised if it is late - as another member of the class is probably depending on it. Only the specifications are needed - well-annotated .h file(s). However, you may submit for comment anything that you have done up to that point.
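As a rough model of the level of annotation expected in the .h file (the class, method names and conditions below are invented for illustration - yours will reflect your own algorithm), a specification header might look like:

```c
/* graph.h - specification of a hypothetical Graph class.
   All names and operations here are illustrative only.     */

#ifndef GRAPH_H
#define GRAPH_H

typedef struct t_graph *Graph;

Graph ConsGraph( int n_nodes );
/* Construct an empty graph
   Pre:  n_nodes > 0
   Post: returns a graph with n_nodes nodes and no edges,
         or NULL if there is insufficient memory            */

void AddEdge( Graph g, int src, int dest, double cost );
/* Add an edge to the graph
   Pre:  g != NULL, 0 <= src, dest < n_nodes, cost > 0
   Post: GetCost( g, src, dest ) == cost                    */

double GetCost( Graph g, int src, int dest );
/* Pre:  g != NULL, 0 <= src, dest < n_nodes
   Post: returns the cost of the edge src -> dest, or an
         "infinite" sentinel cost if no such edge exists    */

#endif
```

The point is that each method carries explicit pre- and post-conditions, so your partner can write verification and timing programs against the header alone.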
Report
Assignment 3

Since someone else is going to perform some rigorous tests on your algorithm, you only need to perform some rudimentary tests yourself. It would be extremely unprofessional to hand over for verification something that you had not tested yourself at all! Your assignment 3 report should include, as a minimum:
1. a brief description of the structure of your code,
2. suitable references or acknowledgments for any code that you might have obtained (in any form) from others, and
3. a brief description of your preliminary tests.
If your classmate has finished the verification of your code before you need to submit, then it will do no harm to submit his or her "test certificate" (ie report) along with your submission in order to strengthen your case for a good mark. However, the markers will generally try to pair up code and tests in their assessment of submissions.

Assignment 4

This report will need to be reasonably substantial, as it will need to describe the analysis of the algorithm tested for equivalence classes, the results of the tests (hopefully all as expected!) and the time complexity measurements. The report should be plain ASCII text - not the native form of any word processor.
Choosing your algorithm
Using the second-last digit of your student number (ie leaving out the final check digit), choose the appropriate algorithm from the numbered list (or the radix sort challenge!). If your number is xxxxx0x, your problem number is 10. For testing, you can team up with anyone doing a different algorithm for assignment 3. If you have problems (your original teammate withdraws, gets a job with a six-digit income, breaks a leg, etc), you can obtain the same, or any other, algorithm from someone else.
Special Challenges
Radix Sort

The challenge of designing a practical, general-purpose radix sort that runs faster than the library quicksort is open to anyone. General-purpose means that, just as our collections support sorts and searches on items with any form of key, your radix sort should also support any set of radices for the keys of items to be sorted. If you're attempting this challenge and you're unsure of the significance of this, it would probably be a good idea to seek advice.

Bonus: 1.5 times your mark for assignment 3 if you can demonstrate that your radix sort is really faster than the library quicksort on random data that I will generate!

Red-black trees

The algorithm in Cormen (and some other texts) is not the most efficient: a bonus will apply for an implementation of the most efficient version.

Bonus: 1.3 times your mark for assignment 3 for an efficient implementation; 1.5 times if you can demonstrate how much more efficient it is than Cormen's version.
Other problems

If you expect an HD, then I will provide you with an appropriate bonus challenge for any problem if requested.

Submission Instructions will be available soon!
Data Structures and Algorithms - Workshop 1
Data Structures and Algorithms
Workshop 1 - 1997
Familiarisation
1. Download and save into your directory from the download window:
   a. collection.h,
   b. collection.c and
   c. the test program testcol.c.
2. Compile into an executable:

       gcc -o testcol testcol.c collection.c

   Note that you will need to use an ANSI C compiler, as the programs are written in ANSI C. On some Unix machines, cc only accepts the older "K&R C".
3. Run the program and verify that it runs as expected. Examine the test program listing to determine how it runs!
4. Now we want to find out how efficiently it runs:
   i. Modify the test program so that it inserts a large number, n, of integers into the collection. You will need to determine n by experimentation: it will need to be large enough so that each run measured in the next section takes several seconds. A value of 10^5 would probably be a good start! Make a small function to generate a set of unique integers - the numbers from 1 to n will do fine for this exercise. By timing a run for suitably large n, determine the average time to insert an integer into the collection. You will need to use the Unix functions:
      - ftime for the Suns,
      - clock for the SG machines
      to time your programs. Note that the two functions have a different idea of how to tell
you about the time. Read the manual page carefully! You will need to time quite a few runs for this course, so spend a little time to design a simple, efficient timing routine!
   ii. Determine the average time to search the collection for a randomly generated integer, again by finding the time to search for n randomly generated integers. Use the random number generator, rand(), to generate random integers.
   iii. Modify the program so that it prints out the insertion and searching times for a range of values of n. Suitable values will produce run times between about 1 and 20 seconds. About 10 values should enable you to determine the characteristics of the time vs n curve.
5. Now download a linked list version of the same program, collection_ll.c, and use the same program to print out the time vs n values for this implementation. (You may need to change the range of the values of n!)

Submission Instructions
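As a starting point for the timing routine, a portable sketch using ANSI C's clock() is shown below. (This is an assumption on my part - the course machines used ftime and the SG clock call, so substitute whichever your system provides; the resolution of clock() is platform dependent, which is another reason to time runs of several seconds.)

```c
#include <time.h>

/* Return elapsed CPU seconds since the last call (measured from
   program start on the first call). Call once before the timed
   section (discarding the result) and once after it. */
double TimeSinceLastCall( void )
{
    static clock_t last = 0;
    clock_t now = clock();
    double diff = (double)(now - last) / CLOCKS_PER_SEC;
    last = now;
    return diff;
}
```

Building this once, as a tiny self-contained tool, means every later workshop can reuse the same tested routine.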
Data Structures and Algorithms - Workshop 2
Data Structures and Algorithms
Workshop 2
Searching
1. Download and save into your directory from the download window:
   a. binsearch.c
2. Copy your original collection.c to collection_bs.c and replace FindInCollection with the binary search version.
3. Modify AddToCollection to insert new items in sorted order.
4. Time add and find operations and verify that you have the expected behaviour of the time vs n plot.
5. Compare the absolute times with those obtained for add and find operations in the previous workshop.

Submit the assignment using the same procedure as before.
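The sorted-order insertion in step 3 can be sketched as follows. (For illustration only, this assumes a collection backed by a plain int array - your Collection stores items and keys, so adapt accordingly.)

```c
#include <string.h>

/* Insert x into a[0..n-1], which is already in ascending order,
   shifting larger elements up with memmove. Returns the new count.
   Assumes the array has room for at least n+1 elements. */
int InsertSorted( int *a, int n, int x )
{
    int lo = 0, hi = n;
    while ( lo < hi ) {                /* binary search for position */
        int mid = (lo + hi) / 2;
        if ( a[mid] < x ) lo = mid + 1;
        else              hi = mid;
    }
    memmove( &a[lo+1], &a[lo], (n - lo) * sizeof(int) );
    a[lo] = x;
    return n + 1;
}
```

Note the consequence for your timing plots: finding the position is O(log n), but the shift makes each addition O(n) overall - the price paid for an O(log n) FindInCollection.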
© John Morris, 1996
Data Structures and Algorithms - Workshop 3
Data Structures and Algorithms
Workshop 3 & 4
These two assignments should be undertaken as a collaborative effort. You are expected to write the implementation of one assignment and the test program for the other. Thus you should pair up with another member of the class and:
a. provide them with an implementation of either the radix sort or the red-black tree for them to test, and
b. get an implementation of the other algorithm for testing yourself.

Thus you are expected to design and code one algorithm and one extensive test of an algorithm written by someone else. Joint submissions by your team will be preferred, but in cases of difficulty (your teammate wins the lottery and, naturally, decides that there are more fun things in life than red-black trees), you should make sure that you submit one algorithm implementation and one testing program. For your testing program, you can obtain an implementation from any other member of the class if necessary.

Testing means both verifying that the algorithm is correct (so perform an equivalence class analysis of the input to make sure that you cover all the tests) and verifying that its time complexity is as expected. The person performing the testing will generally be expected to be responsible for the bulk of the submission report. However, the implementor may, of course, contribute any (short) notes to the report.
Workshop 3
Sorting Algorithms
Build programs to sort arrays of 32-bit integers using:
1. quicksort (use the standard library function qsort unless you think that you can do better!), and
2. radix sort.

For the radix sort, you will find it helpful to build some simple classes to manage bins, etc. Think about how to structure a solution to the problem before diving in to write code! You may, of course, use any suitable radix for sorting: base 16, base 32, base 69, etc, should all work reasonably well.
(You may be able to think of reasons why certain bases will perform better than others - see below for some hints.)

Since the qsort function asks you to supply the compar function, you should use the same approach in defining your radix sort function, ie you should also pass it a compar function and the size of the elements that are being sorted. This is partly to ensure that you eliminate some systematic variation from your timing experiments (ie you time both functions under as nearly the same ground rules as possible), but also to give you some practice in using a function as a data type.

If you time the performance of these two for different values of n, then you should expect to find some interesting results! The most interesting results will come for very large n. You should know what to expect for small n. Write a report that interprets your results. Some of the phenomena that you observe will be related to the way that the operating system treats virtual memory and to aspects of the MIPS architecture of the processors. You might find some useful insights if you interpret your data in terms of factors that you may have learnt in other second year courses!

Submit both assignments using the submission procedure as before. Hopefully it will be working without error messages before you need it again!
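To make the qsort-style interface concrete, here is a sketch of a radix sort with the same parameter shape. The restrictions are loudly noted: this version fixes radix 256 and assumes unsigned int keys, so the compar and width parameters are accepted purely for interface compatibility (a truly general version would use them, or take a digit-extraction function).

```c
#include <stdlib.h>
#include <string.h>

/* Same comparison-function type that qsort uses. */
typedef int (*CmpFn)( const void *, const void * );

/* Minimal LSD radix sort, radix 256, for arrays of unsigned int.
   compar and width are unused here - kept only so that this routine
   can be timed under the same ground rules as qsort. */
void RadixSort( void *base, size_t nel, size_t width, CmpFn compar )
{
    unsigned int *a = (unsigned int *)base;
    unsigned int *tmp = malloc( nel * sizeof(unsigned int) );
    size_t count[256], i;
    int shift;
    (void)width; (void)compar;
    for( shift = 0; shift < 32; shift += 8 ) {
        memset( count, 0, sizeof count );
        for( i = 0; i < nel; i++ )           /* histogram this digit  */
            count[(a[i] >> shift) & 0xff]++;
        for( i = 1; i < 256; i++ )           /* prefix sums: bin ends */
            count[i] += count[i-1];
        for( i = nel; i-- > 0; )             /* stable scatter to bins */
            tmp[--count[(a[i] >> shift) & 0xff]] = a[i];
        memcpy( a, tmp, nel * sizeof(unsigned int) );
    }
    free( tmp );
}
```

Each pass is O(n + radix), so four passes over 8-bit digits sort n 32-bit keys in O(n) - which is where the interesting comparison with qsort for very large n comes from.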
Workshop 4 - Red-Black Trees
Your problem scenario is a situation where you have to maintain a searchable collection of items. The items in the collection are always changing - in fact you have, on average, one new item and one deletion for every 10 searches. Write a general-purpose class that puts items into a balanced tree - use the red-black tree algorithm, unless you are absolutely convinced that you can get better performance from an AVL tree or any other dynamically balanced tree.

The testing of this class should include measurement of the black height of the tree as randomly valued key items are added. It should also time searches and additions to measure the time complexity of the add and find operations. These should confirm the theoretical expectations.
Submission
For full marks, each member of the class should be responsible for one implementation and one test exercise. Assignments are compulsory and failure to submit one is the same as failing the course. Thus a late assignment may not get you any marks, but it will allow you to sit the exam.
Basically you should use the same submission procedure as before. However, note that there are now four assignment labels: radix_i, radix_t, rb_i and rb_t. If you and your teammate have a complete submission (implementation, test and report), submit it under the ?_t label ("t" = testing). If you are submitting an implementation by yourself because of some administrative problem (your testing partner won the lottery), submit it under the ?_i label ("i" = implementation): this should hopefully be the rarer case - and if you are able to arrange a last minute test, you can submit the tested code, report, etc, under the ?_t label. I will look at ?_t submissions first, so if there is something there from you, I will assume it supersedes anything I might find elsewhere. Of course, all this depends on getting the system problems sorted out - look in this space for new instructions if the old ones won't work! If you want the feedback to get back to both partners, make sure to include both email addresses in the report!
Due Date
Monday, October 20, 5pm
Data Structures and Algorithms - Workshop 1
Data Structures and Algorithms
Workshop 1
Familiarisation
1. Download and save into your directory from the download window:
   a. collection.h,
   b. collection.c and
   c. the test program testcol.c.
2. Compile into an executable:

       gcc -o testcol testcol.c collection.c

   Note that you will need to use an ANSI C compiler, as the programs are written in ANSI C. On some Unix machines, cc only accepts the older "K&R C".
3. Run the program and verify that it runs as expected. Examine the test program listing to determine how it runs!
4. Now we want to find out how efficiently it runs:
   i. Modify the test program so that it inserts a large number, n, of random integers into the collection. You will need to determine n by experimentation: it will need to be large enough so that the run times measured in the next section are each several seconds. A value of 10^5 would probably be a good start! Use the random number generator, rand(), to generate random integers. By timing a run for suitably large n, determine the average time to insert an integer into the collection. You will need to use the Unix functions:
      - ftime for the Suns,
      - clock for the SG machines
      to time your programs. Note that the two functions have a different idea of how to tell you about the time. Read the manual page carefully! You will need to time quite a few runs for this course, so spend a little time to design a simple, efficient timing routine!
   ii. Determine the average time to search the collection for a randomly generated integer, again by finding the time to search for n randomly generated integers.
   iii. Modify the program so that it prints out the insertion and searching times for a range of values of n. Suitable values will produce run times between about 1 and 20 seconds. About 10 values should enable you to determine the characteristics of the time vs n
curve.
5. Now download a linked list version of the same program, collection_ll.c, and use the same program to print out the time vs n values for this implementation. (You may need to change the range of the values of n!)

Submission Instructions
Data Structures and Algorithms - Assignment 1
Data Structures and Algorithms
Feedback from assignment 1
1. Toolbuilding

If you are to be productive software engineers, you must get yourselves into the habit of building tools. Even the most trivial piece of code costs something to:
- design,
- write, and
- test and debug.
You must learn to write for reuse: the main benefit is in the time saved in test and debug - if it's been tested thoroughly before, you can reuse it with confidence!

In this assignment, you needed lists of random integers (or - as at least one of you worked out - any integers) for testing. So make a function which constructs such lists (in the simplest way possible, of course!), e.g.

    int *ConsIntList( int n )
    {
        int *list = (int *)malloc( n*sizeof(int) );
        int i;
        for( i = 0; i < n; i++ )
            list[i] = rand();
        return list;
    }

This function can be called as many times as needed to generate test lists of appropriate sizes. Once written and checked, it allows you to concentrate on other aspects of the problem.

Timing is a similar problem: you are presented with the problem that each machine has a slightly different method of telling the time. So build a generic timing function that returns a time to the best accuracy that the current machine will allow, e.g.

    double TimeNow()
    {
        double t;
    #ifdef IRIX
        /* Irix code to calculate t in secs */
    #endif
    #ifdef SUNOS
        /* Sun code to calculate t in secs */
    #endif
        return t;
    }

Note that this function returns a double: it is designed to return seconds. This allows it to get the maximum resolution on any machine. The #ifdef's are for serious programmers who want their code to run without alteration on any machine: each compiler will predefine some suitable string which will allow it to be identified. You will need to check the manuals for individual compilers to find out what the correct values for these strings are.

One of you made a starttime()/stoptime() pair and used a static variable to store the time when the starttime function was called. A good idea, which can be done effectively in a single function:

    /* timer.c */
    static double last = 0.0;

    double TimeSinceLastCall()
    {
        double t, diff;
    #ifdef IRIX
        /* calculate t in secs as above */
        ..
    #endif
        diff = t - last;
        last = t;
        return diff;
    }

This function can be called at the start of a timed section of code (and the return value thrown away) and again at the end of the timed section. The second return value is the running time of the timed section.

2. Efficiency

The assignment specification required you to run the same test for quite a few values of n. So you should have built a function which took n as an argument and called it from a simple loop in main, so that one run would get you all the results - once everything was working! This is akin to the "toolbuilding" approach above.

3. Errors

Unix is a time-sharing OS: even if you're apparently the only user on the machine, many things
are usually happening. At the minimum, there's the clock interrupt that keeps the machine's time correct. Usually, there'll also be printer daemons (which wake up periodically to check whether you want to print anything), network daemons (which look to see if anyone else is trying to log in to this machine), etc, etc. Even the simplest operating system (eg DOS) will have timer interrupts to perturb your benchmark timing! As a consequence, very accurate timing is impossible.

If you're expecting a straight line (and you can usually transform your results so that they should be a straight line), a statistician would take the plot and determine the correlation coefficient, which would give a measure of the probability that the results deviate from a straight line (taking into account experimental error). However, you can just run a straight line through your results: if all the points lie close to it and appear randomly on both sides, then it's a straight line. With a little extra effort, you could use your calculator - or get a spreadsheet - to calculate the standard deviation from the mean of the normalised results. This will give you a simple measure of whether your results lie on a straight line or not.

4. Reporting your results

Having determined the form of your results, it was essential that you report your final conclusions in the O notation. This is the standard way of describing the performance of an algorithm. Reporting the slope of the straight line is a nice touch (especially if you add the correlation coefficient, which tells how good a straight line it is!), but your ultimate conclusion should still be O(1) or O(n).

5. Explaining your results

The alert among you noted that the AddToCollection routine appeared to be O(n) because of the FindInCollection routine in the postcondition. Having noted that, it's a simple matter to remove it and rerun your experiment.
You could either simply comment it out or - better - read the man page for assert and discover that if you recompile with:

    gcc -DNDEBUG *.c

all the assert's will be automagically removed!

6. Designing experiments

Many people searched their collections for the same items as the ones in the list used to construct the collection - in the same order! So the first item is found after 1 operation, the second after 2, etc (or the reverse for LIFO lists). Duplicates complicate the analysis and reduce the times. A better design either searched for an item that was guaranteed not to be in the list (to give worst case time complexity) or generated a new random sequence of items to search for. In the second case, it's important to ensure a low probability that the randomly
generated item will be found, ie the range of the random numbers must be much greater than the number of items.

Users of DOS/Windows machines

Please make sure that the extra CRs needed by DOS, etc, are removed before you submit. Unix files should have LFs only at the end of a line. The same applies to reports produced by your word processor: export them as plain text without the extra CRs. Reports which are not plain text will not be accepted - there are a large number of word processors out there: it's much more productive (and therefore beneficial for you!) if the tutors spend time marking your report's content rather than trying to work out which WP will read it!
Data Structures and Algorithms - Assignment 2
Data Structures and Algorithms
Feedback from assignment 2
1. Toolbuilding

Many of you failed to take note of the suggestion about building tools for the timing routine in the feedback for the last assignment. You need to get into the habit of building transportable, reusable tools, so that you can concentrate on the more difficult aspects of the next problem, rather than repeating machine-specific code in every program!

2. Post-Conditions

No-one seems to have noticed that the postcondition for AddToCollection should have been augmented with a "CollectionSorted" assertion! This is simply done with an auxiliary routine:

    int CollectionSorted( collection c )
    {
        int i;
        for( i = 0; i < (c->item_cnt - 1); i++ )
        {
            /* item i should be less than or equal to item i+1 */
            if ( !(memcmp( ItemKey( c->items[i] ),
                           ItemKey( c->items[i+1] ),
                           c->size ) <= 0) )
                return FALSE;
        }
        return TRUE;
    }

3. Ordering Items

The memcmp( ItemKey(..), ItemKey(..), size ) approach to defining the order of items makes some assumptions about the structure of the item's key. A much more useful approach is to provide an ItemCmp function, which can provide a general ordering function. This function can order the items according to any rule - not just one which assumes lexicographical ordering of keys of fixed size. Note that if you use one ordering function when adding, you must use exactly the same function when searching.

4. Adding on the key
When adding, quite a few of you compared items using the value of item, which is the handle (C pointer) for the object. You must use ItemKey and memcmp as in bin_search (or, better, an ItemCmp function).

5. Normalising

The computer can calculate log n too! You need to include the <math.h> header and compile with:

    gcc ..... -lm

The -lm loads the math library. Some trivial modifications to your program enable one run to do the experiments and process the results for you too!

6. Read the textbooks

Even for a very simple insertion sort, some of you produced code with double loops, multiple compare calls, etc. Simple, proven code for standard problems like this can be found in almost all texts on data structures and algorithms! Spend some time reading before hacking!

7. Designing experiments
- Think first: When searching, most of you searched an n-item collection n times. The best experiments searched each collection - no matter how big it was - a constant, m ( != n ), times. Then all you have to do is find a value of m large enough to give a time greater than the timer resolution for the smallest collection. This enables you to find quite accurate times even for small collections (use m >> n). This helps when trying to verify log relationships, as you need values of n spanning quite a wide range, because dlogn/dn decreases with n. Some of you were 'misled' when T(n)/logn appeared to become constant. In fact, to verify a log relationship, a set of logarithmically spaced values of n is best: n = 1000, 2000, 4000, 8000, 16000, 32000, ... would have given a clear result!
- Perturbing results: When designing any experiment, it's important to eliminate all the sources of error you can. Here you are timing certain operations (adds, finds, ..): make sure that you are timing only those operations! Thus all unnecessary code should be removed. This includes function calls - put the timing code inside the AddAll, etc, functions. Remove all the code that was originally there to verify the functions, eg the if ( ip == &list[i] ) after the FindInCollection call. These things don't perturb your results by very much, but you can eliminate these sources of error trivially, so why not do it?
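The constant-m idea can be sketched as below. (Find here is a stand-in linear search over an int array - substitute your own FindInCollection and timer routine; the names are illustrative only.)

```c
#include <stdlib.h>
#include <time.h>

/* Stand-in for FindInCollection: linear search over an int array. */
int *Find( int *a, int n, int key )
{
    int i;
    for( i = 0; i < n; i++ )
        if ( a[i] == key ) return &a[i];
    return NULL;
}

/* Time m searches of an n-item array - the same m for every n - and
   return the average seconds per search. Keys come from rand(), whose
   range is much larger than n, so most searches fail (worst case). */
double AvgSearchTime( int *a, int n, long m )
{
    long j;
    clock_t t0 = clock();
    for( j = 0; j < m; j++ )
        (void)Find( a, n, rand() );
    return ( (double)(clock() - t0) / CLOCKS_PER_SEC ) / m;
}
```

Because m is fixed, even the smallest collections accumulate a total time well above the timer resolution, and the per-search averages are directly comparable across the whole range of n.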
As many of you will have found, the cache on a modern processor causes your results to 'creep' slowly towards the predicted result. If you're doing the experiment carefully and want to ensure that you've actually proven your hypothesis, then you will need to eliminate all potential sources of error!

Users of DOS/Windows machines

Please make sure that the extra CRs needed by DOS, etc, are removed before you submit. Unix files should have LFs only at the end of a line. The same applies to reports produced by your word processor: export them as plain text without the extra CRs. Reports which are not plain text will not be accepted - there are a large number of word processors out there: it's much more productive (and therefore beneficial for you!) if the tutors spend time marking your report's content rather than trying to work out which WP will read it!
Table of Contents
© John Morris, 1996
Data Structures and Algorithms - Workshop 2
Data Structures and Algorithms
Workshop 3 - Minimum Spanning Trees
For the assignments which follow from this workshop, you are expected to produce a program which reads a description of a problem from a file, calculates the minimum spanning tree and prints out the tree and its cost.

Rules

1. Nodes are labelled with an arbitrary string. In the present problem, this string will not be longer than 100 characters. However, it would be a mistake to submit a program in which this limit cannot be trivially changed!
2. Node labels have no spaces in them.
3. For simplicity, edges have the same cost in both directions, ie the cost of edge "abc"->"pqr" is the same as that for "pqr"->"abc". A program which can be trivially extended to handle asymmetric costs will attract a small bonus. You should indicate in your report the changes that are necessary.
4. All edge costs are positive.
5. Unspecified edges are assumed to be impossible. (Your program should use a suitable representation!)
6. You may print out the resulting MST in any intelligible format, but your report should obviously explain the format.

Procedure

1. Find a partner and decide how to split the work for this assignment between yourself and your colleague.
2. In the tutorial session preceding the lab session, start the design of the ADTs that you will need for this assignment. By now, you should interpret "design" as meaning "design and formally specify".
3. Get the full design checked off by the lecturer or tutor before proceeding to implement your solution. Note that, for graphs, there are a number of ways of implementing the graph structure. You should understand by now that the specification and the implementation details are quite separate. You can derive a specification by looking at the requirements of the problem, which specify the abstract operations that will be needed.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/workshop3.html (1 of 3) [3/23/2004 3:01:35 PM]
4. For assignment 3, formally test each ADT used by performing an equivalence class analysis on it and generating a program (or programs) to check each class. Note that while you may have constructed the full MST program at this point, it is not required for this submission.
5. For assignment 4,
   a. Submit a program which will read the test file and find and print out the MST.
   b. Design (or automatically generate) additional test sets to formally test the whole program.
   c. Confirm that the running time of the whole algorithm is as expected. Running time of the algorithm does not include the time to load the data from the test file.
   For (b) and (c), you may generate the test data within the test programs themselves, ie it is not necessary to read data from a file. A script which automatically runs the test program with a set of test files is also acceptable.
6. Submissions
   a. Both submissions should be accompanied by an appropriate report.
   b. You should make one submission with your partner. Either one of you can make the actual submission: just make sure that you have collected all the relevant files into the submission directory.
   c. Your report (and the program file prologues) should clearly identify the contributions of each partner to the joint submission.

File format

Lines         Content                 Format      Notes
1             n                       %d          Number of nodes
2 .. n+1      label_i                 %s          Node label
n+2 .. EOF    label_i label_j c_ij    %s %s %g    Edge descriptor and weight

Notes:
Fields are separated by spaces. The remainder of a line is to be ignored and may be used for comments. Costs are real numbers: you may assume costs are less than 10^6.
A sample of the format may be found in mst.test.
Submission

Follow the same submission procedure used for the previous assignments.

Dates

Designs checked and signed off: 5pm, Thu 19th Sept. Designs submitted by this deadline will be checked and returned by the following day.
Assignment 3 (ADTs programmed and verified): 5pm, Tue 8th Oct.
Assignment 4 (full program verified; time complexity verified): 5pm, Thu 24th Oct.

Proceeding to the coding stage without having your design checked is not likely to be productive. For a quick review and sign-off, you may bring your design to me any time before the deadline: otherwise, submit it using the normal submission procedure as the "3des" assignment. The submission procedure automatically sends me an email message when you have submitted - I will attempt to review all designs within 24 hours and will email you my comments.
Data Structures and Algorithms - Assignment 3
Data Structures and Algorithms
Feedback from assignment 3
1. Separating the classes

Many of you failed to place each class in a separate file! Separate files allow:

a. separate development - once the specification is decided, you can individually work on separate parts of the whole problem;
b. separate verification - each class can be verified independently of the others.

Of course, sometimes one class depends on another, so complete independence can't be achieved. However, the testing strategy becomes crystal clear: test all the classes which don't depend on any others first, then test the classes that depend only on this first group, and so on.

2. Equivalence Classes

Very few of you had any decent approach to proving individual methods of classes correct. Some methods are trivial: simply put some data in an object and verify that you can get it back! Such tests can be performed entirely automatically: the program sets the object's attributes and compares the values returned by 'projector' functions. By using a program for this, you make use of the machine's ability to mechanically compare large amounts of data accurately (once the test program is correct, of course!).

Generally, there will be a large number of equivalence classes - and therefore test cases. These can be handled in three ways:

1. Use program code to generate the test data sets. For instance, if you want to test 0, 1, 2, ..., n, n+1 items where the items are random numbers, write a function to generate the appropriately sized arrays of random numbers.
2. Use data structures to hold the tests. For example, for node labels, make an array of strings:

char *labels[] = { "a", "aa", "aaa", "b", "" };
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/feedback3.html (1 of 3) [3/23/2004 3:01:39 PM]
#define N_LABELS (sizeof(labels)/sizeof(char *))
Note how C allows you to put an arbitrary number of items in an array, using [], and #define a symbol which gives the number of items. This means that as you discover a need for more tests, they are trivially added to labels and no other part of the program needs changing!

3. Put the test data in files - prepared with a text editor, or another program. This would be a good approach for testing the MST itself:
   - determine what cases need testing,
   - produce a number of files with the appropriate data in them,
   - run the program reading from each file in turn (give the files names like "graph1", "graph2", etc, so that a program can automatically read them all!) or
   - write a Unix shell script to run the program with each file as input and capture the test output.

3. Presenting Verification Results

The best way to do this is with a table:

Class: a brief description of the equivalence class.
Representative: the value of the data, the name of the data set, etc.
Test: the name of the test; its location or file; the name of the program or function; etc.

Class                    Representative    Test         Expected Result      Result
No data                  -                 no_data.c    Assertion raised     Assertion raised
Empty data set           -                 null_data.c  NULL return          NULL
n > max                  10^6              large_n.c    Assertion raised     Assertion raised
Single datum             data_1            testx.c      Same data returned   OK
2 points out of order    data_2_out        testx.c      Order reversed       OK
2 points in order        data_2_in         testx.c      Order unchanged      OK

Some examples of the sorts of entries that could be made in each column are shown.
You can obviously vary the columns (particularly the second and third) to suit the style of test that you are making.
Table of Contents
© John Morris, 1996
Data Structures and Algorithms - Assignment 4
Data Structures and Algorithms
Feedback from assignment 4
1. Testing the MST

Proving your MST algorithm has two parts:

- proving that a spanning tree is produced, and
- proving that it's the minimum spanning tree.

The first can be done easily by inspection or by a simple program. Proving that you've found the MST is not quite so simple: you could rely on the formal proof of the algorithm and simply attempt to prove that the operations (cycle, get-cheapest, etc) on which the MST algorithm relies are performing correctly, or you could do something like exhaustively generate all the possible trees and thus demonstrate that the tree produced by your program was indeed an MST.

For maximum marks, you were expected to address both parts of the problem in your report and do something about the first part. One of the postconditions of your MST method should have been: the tree produced is a spanning tree. Adding an additional method SpanningTree( Graph g ) to your graph class and using it as the postcondition for MST is the best approach. You can then simply run the MST method on a number of carefully chosen graphs, and if the postcondition assertion is never raised (except perhaps for deliberate error inputs, such as disjoint graphs which don't have a spanning tree at all!) then you can assert that your function is producing a spanning tree at least!

2. Complexity of MST

Incredibly, quite a number of people started their experiments with a faulty hypothesis. There is no excuse for this - the correct expression can be found in any one of a number of texts. It's also in the PLDS210 notes on the Web.
Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/feedback4.html (1 of 2) [3/23/2004 3:01:41 PM]
© John Morris, 1996
Data Structures and Algorithms: Past Exams
Data Structures and Algorithms
Past Exams
1997
November, 1997 Final Exam

Note that the material on abstract data types which used to be in the PLSD210 course was moved to the CLP110 course in 1997 and will not be examined directly in PLSD210. However, CLP110 is a prerequisite for this course, so an understanding of the basic principles of object-oriented design and abstract data types is expected!

Back to the Table of Contents
© John Morris, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Exams/index.html [3/23/2004 3:01:50 PM]
Data Structures and Algorithms  Tutorials
Data Structures and Algorithms Tutorials
Tutorial 1
1.
Arrays or Linked Lists?
An array implementation of a collection requires O(n) time to search it (assuming it's not ordered). A linked list also requires O(n) time to search. Yet one of these will be quite a bit faster on a high-performance modern processor. Which one? Why?

Hint: Part of the answer is found in the next question and part in IPS205 - the computer architecture section.
2.
Overheads
The storage requirements for a typical modern RISC processor are:

Type      Space (bytes)
integer   4
pointer   4
float     4
double    8
A typical implementation of malloc will use an extra 4 bytes every time it allocates a block of memory. Calculate the overheads for storing various numbers of items of the types listed, using the array and list implementations of our collection object. Overhead here means that if a data structure requires 1140 bytes to store 1000 bytes of data, the overhead is 14%. Fill in the table:

Item type                  Number of items    Array    List
integer                    100
                           1000
double                     100
                           1000
struct { int x, y;         100
         double z[20]; }   1000

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tutorials.html (1 of 3) [3/23/2004 3:01:57 PM]
3.
Complexity
Modern processors have clock speeds in excess of 100MHz, so a RISC processor may be executing more than 1x10^8 machine instructions per second. This means it can process of the order of 1x10^7 "operations" per second, where an "operation" is loosely defined here as something like one iteration of a very simple loop.

Assuming that your patience allows you to wait
a. one second,
b. one minute,
c. one hour,
d. one day,
calculate how large a problem you can solve if it is
i. O(log2 n)
ii. O(sqrt(n))
iii. O(n)
iv. O(n log2 n)
v. O(n^2)
vi. O(2^n)
vii. O(n!)
viii. O(n^n)

Numbers beyond the range of your calculator can simply be reported as "> 10^x" or "< 10^x", where x is determined by your calculator.

To try this in reverse, assume that to be certain of beating Kasparov in the next "Man vs machine" chess challenge, we would need to look ahead 40 moves. How long will it take one of today's computers to calculate each move? For simplicity, assume that, on average, the number of possible moves is the same for every move: but if you know of any other estimate for the number of moves in chess, then use that. And if you don't know western chess, substitute Chinese chess or Go (and the appropriate
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tutorials.html (2 of 3) [3/23/2004 3:01:57 PM]
Data Structures and Algorithms  Tutorials
current champion's name!). Tutorials (cont)
© John Morris, 1996
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tutorials.html (3 of 3) [3/23/2004 3:01:57 PM]
Data Structures and Algorithms: Tutorial Problems 2
Data Structures and Algorithms
Tutorial Problems: Part 2
Tutorial 2
q
Asymptotic behaviour
a. Threshold values

For what values of n is 4 x 10^6 n^2 > 10 x 2^n?

b. Algorithm comparison

Algorithm A requires 200 machine cycles for each iteration and requires n log n iterations to solve a problem of size n. A simpler algorithm, B, requires 25 machine cycles for each iteration and requires n^2 iterations to solve a problem of size n. Under what conditions will you prefer algorithm A over algorithm B?
Tutorial 3
q
Simple ADT Design
A double-ended queue or deque is one that has both LIFO and FIFO behaviour, ie you can add an item to the head or the tail of a list and extract an item from the head or the tail. Taking the following specification for the Collection class, modify it to handle a deque. Note:
- There are quite a few ways that a software engineer could do this: see how many you can devise!
- A software engineer would probably try to ensure that code using the original specification continued to function correctly.
Similarly, modify the implementation to handle a deque.

/* Specification for Collection */
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tut2.html (1 of 5) [3/23/2004 3:02:03 PM]
typedef struct t_Collection *Collection;

Collection ConsCollection( int max_items, int item_size );
/* Construct a new Collection
   Precondition: max_items > 0
   Postcondition: returns a pointer to an empty Collection */

void AddToCollection( Collection c, void *item );
/* Add an item to a Collection
   Precondition: (c is a Collection created by a call to ConsCollection) &&
                 (existing item count < max_items) && (item != NULL)
   Postcondition: item has been added to c */

void DeleteFromCollection( Collection c, void *item );
/* Delete an item from a Collection
   Precondition: (c is a Collection created by a call to ConsCollection) &&
                 (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c */

void *FindInCollection( Collection c, void *key );
/* Find an item in a Collection
   Precondition: (c is a Collection created by a call to ConsCollection) &&
                 (key != NULL)
   Postcondition: returns an item identified by key if one exists,
                  otherwise returns NULL */
/* Linked list implementation of a collection */

#include <stdlib.h>      /* calloc */
#include <stdio.h>       /* NULL */
#include <assert.h>      /* Needed for assertions */
#include "collection.h"  /* import the specification */

extern void *ItemKey( void * );

struct t_node {
    void *item;
    struct t_node *next;
};

struct t_collection {
    int size;            /* Needed by FindInCollection */
    struct t_node *node;
};

collection ConsCollection( int max_items, int item_size )
/* Construct a new collection
   Precondition: (max_items > 0) && (item_size > 0)
   Postcondition: returns a pointer to an empty collection */
{
    collection c;
    /* Although redundant, this assertion should be retained
       as it tests compliance with the formal specification */
    assert( max_items > 0 );
    assert( item_size > 0 );
    c = (collection)calloc( 1, sizeof(struct t_collection) );
    c->node = (struct t_node *)0;
    c->size = item_size;
    return c;
}

void AddToCollection( collection c, void *item )
/* Add an item to a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
                 (existing item count < max_items) && (item != NULL)
   Postcondition: item has been added to c */
{
    struct t_node *new;
    assert( c != NULL );
    assert( item != NULL );
    /* Allocate space for a node for the new item */
    new = (struct t_node *)malloc( sizeof(struct t_node) );
    /* Attach the item to the node */
    new->item = item;
    /* Make the existing list `hang' from this one */
    new->next = c->node;
    /* The new item is the new head of the list */
    c->node = new;
    assert( FindInCollection( c, ItemKey( item ) ) != NULL );
}

void DeleteFromCollection( collection c, void *item )
/* Delete an item from a collection
   Precondition: (c is a collection created by a call to ConsCollection) &&
                 (existing item count >= 1) && (item != NULL)
   Postcondition: item has been deleted from c */
{
    struct t_node *node, *prev;
    assert( c != NULL );
    /* The requirement that the collection has at least
       one item is expressed a little differently */
    assert( c->node != NULL );
    assert( item != NULL );
    /* Select node at head of list */
    prev = node = c->node;
    /* Loop until we've reached the end of the list */
    while ( node != NULL ) {
        if ( item == node->item ) {
            /* Found the item to be deleted, relink the list around it */
            if ( node == c->node )
                /* We're deleting the head */
                c->node = node->next;
            else
                prev->next = node->next;
            /* Free the node */
            free( node );
            break;
        }
        prev = node;
        node = node->next;
    }
}
Key terms
deque
A double-ended queue - one to which items can be added at both the head and the tail, and from which items can be extracted from the head or the tail.

Continue on to Tutorials: Part 3
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Tutorial Problems 3
Data Structures and Algorithms
Tutorial Problems: Part 3
Tutorial 3
4.
B+tree Design
You are constructing a database for the Tax Office of a medium-sized Pacific nation. The primary key for records is a 9-digit tax file number. (This Tax Office is so preoccupied with a new tax scheme that their Prime Minister is touting as the answer to all their economic woes, that it hasn't learnt about binary representations for numbers yet and still uses one byte per decimal digit. They get really uptight when anyone mentions the year 2000 problem!)

The records for this database will be stored on discs which have an average access time of 2.5ms to read a block of 4Kbytes. Each disc uses a 32-bit integer to address blocks within the disc. The database spreads over multiple discs, so a 16-bit disc identifier has to be added to each block address. It takes 150ns to read a word from the computer's main memory.

The population is 17x10^6 taxpayers and 0.78x10^3 politicians (1.8x10^3 millionaires have never needed to submit a tax return). There are also records for 1.2x10^6 foreign residents who pay some tax and 2.8x10^6 companies, trusts and other tax avoidance structures.

a. How much space will you use to store the indices for this database? Work out the minimum and maximum values for the space - in either disc blocks or bytes. (Don't worry about the poor taxpayers' records, which are humungous as they include details such as the cost of the beer consumed by every travelling salesman in every bar in the country - in order to calculate FBT.)
b. How long will it take to find a taxpayer's record?
c. How much disc space has been wasted in the indices?
d. Many compilers align fields on word boundaries for efficiency. For example, a 9-byte field is padded out to 12 bytes (the next multiple of the 4-byte word length) so that the next field lies on a word boundary. Would this be a good thing to do in this case? Would it be worth going to some effort to prevent the compiler from padding out short (< 4-byte) fields?
5.
Stable Sorting
A stable sorting algorithm is one which leaves equal keys in the same order as they appeared in the original collection. Work out which of the algorithms that you have studied so far will produce stable sorts.
6.
A Numerical Puzzle
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tut3.html (1 of 3) [3/23/2004 3:02:09 PM]
You are given an array containing n integers. You are asked to determine if the list contains two numbers that sum to some arbitrary integer, k. For example, if the list is 8, 4, 1 and 6 and k = 10, the answer is "yes", because 4+6 = 10.

a. Find a way to solve this problem that is O(n^2).
b. Can you do better? What is the time complexity of the better method? Hint: Consider sorting the array first.

7.
Dynamically Balanced Trees
Show that any AVL tree can be coloured as a red-black tree.
8.
Dynamic Memory Allocation
Modern object-oriented software makes extensive use of the malloc and free operations. Unfortunately, they are generally quite expensive (in time) and thus efficient routines are important. free places freed blocks into free lists. In order to re-use memory as much as possible, malloc will generally search the free lists for a suitable block before requesting another one from the operating system (a very expensive exercise!). A best-fit algorithm searches for the smallest free block into which the requested block will fit.

Suggest a number of ways in which free could organise the free lists and give the time complexities for
a. free to add a block to the free lists,
b. malloc to find a "best-fit" block from these lists.
Describe any drawbacks that each method might have.
9.
Equivalence Classes
You are required to test some sorting routines:
a. an insertion sort,
b. a heap sort and
c. a quick sort.

i. Propose a simple specification for a general purpose sorting function.
ii. Using the specification, derive a set of tests that you would perform to verify the function.
iii. Would your knowledge of the algorithms used by each sorting method cause you to extend the test set?

For the purpose of drawing up examples, use records whose keys are integers.
Key terms
stable sort
In a stable sort, the order of equal keys in the original input is retained in the output.

Continue on to Tutorials: Part 4
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Tutorial Problems 4
Data Structures and Algorithms
Tutorial Problems: Part 4
Tutorial 4
Heap Sort

1. What would be the (a) minimum, (b) maximum number of elements in a heap of height h?
2. Where in a heap would I find the smallest element?
3. Is an array that is sorted in reverse order a heap?
4. Is { 23, 17, 14, 6, 13, 10, 1, 5, 7, 12 } a heap?
5. Convert the Heapify function to an iterative form.
6. How could I make a FIFO queue with a heap? Is this likely to be a practical thing to do?
7. I have k sorted lists containing a total of n elements. Give an O(n log k) algorithm to merge these lists into a single sorted list. Hint: Use a heap for the merge.
8. A d-ary heap has d children for each node. (An ordinary heap is a binary heap.) Show how to store a d-ary heap in an array.

Quick Sort

1. What is the space complexity of the "standard" recursive quicksort? Hint: Consider the stack frames used.
2. Convert a recursive quicksort to an iterative one.
3. Suggest a simple strategy that can be used to turn any sort into a stable sort.

Radix Sort

In-place sorting algorithms do not require any more space than that occupied by the records themselves.

1. I wish to sort a collection of items whose keys can only take the values 0 or 1. Devise an O(n) algorithm for sorting these items in place. You may use auxiliary storage of constant (O(1)) size.
2. Can this approach be used to sort n records with b-bit keys in O(bn) time? Explain your answer.

A sort?

An integer array of n elements has a majority element if there is an element that occurs more than n/2
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tut4.html (1 of 2) [3/23/2004 3:02:16 PM]
times in the array. For example, the array [ 2,2,5,1,5,5,2,5,5 ] has a majority element, but the array [ 2,2,5,1,5,5,1,5 ] does not. Design an algorithm to determine whether or not an array contains a majority element, and to return that element if it exists.

Hash Tables

1. Collisions in linked lists
   a. What is the worst case performance of a hash table in which collisions are stored in linked lists attached to the primary table?
   b. Could I improve this by keeping the items in the linked lists in sorted order?
   c. Could I use any other auxiliary structure to improve the worst case performance?
2. I have a linked list of items with very long keys, k. The hash value of each key, h(k), is stored with each item. How might I make use of the hash value to speed up a search? Will this strategy work with a search tree? If yes, under what conditions?

Search Trees

1. How many binary search trees can I make from the list A B C D E?
2. When inserting a new node into a red-black tree, we set its colour to red. We could have set its colour to black without violating the "if a node is red, its children are black" property. Why was it set to red?
Key terms
stable sort
In a stable sort, the order of equal keys in the original input is retained in the output.

Continue on to Tutorials: Part 5
© John Morris, 1998
Back to the Table of Contents
Data Structures and Algorithms: Tutorial Problems 5
Data Structures and Algorithms
Tutorial Problems: Part 5
Graphs

1. Is the minimum spanning tree of a graph unique? Provide an example to prove your answer.
2. A minimum spanning tree has already been computed for a graph. Suppose that a new node is added (along with appropriate costs). How long will it take to recompute the minimum spanning tree?
3. Let T be an MST of a graph G and let L be the sorted list of the edge weights of T. Show that for any other MST T' of G, the list L is also the sorted list of edge weights of T'.
4. You are given a directed graph, G = (V,E). Each edge (u,v) in G has a value r(u,v) associated with it, with 0 <= r(u,v) <= 1. r(u,v) can be interpreted as the probability that a channel from u to v will not fail, ie it's a measure of the reliability of the channel. Devise an efficient algorithm to find the most reliable path between two given vertices.
5. Suppose that all the edge weights in a graph are integers between 1 and |E|.
   a. Does this improve the time to find the MST?
   b. Does this improve the running time of Dijkstra's algorithm?
6. Explain how to modify Dijkstra's algorithm so that if there is more than one minimum path from u to v, the path with the fewest number of edges is chosen.

Continue on to Tutorials: Part 6
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tut5.html [3/23/2004 3:02:21 PM]
Data Structures and Algorithms
Tutorial Problems: Part 6
Hard Problems

1. I'm just about to depart on a sales trip to all of my company's customers. My company is paying for my air fares, so I don't need to worry about the cost, but I'd really like to get the maximum number of frequent flyer points! How long will it take me to work out how to get a free flight to [Tahiti, Maldives, Santorini, Hawaii, Samoa - select one only]? Assume that frequent flyer points are calculated simply on the distance between airports. You need to determine a trip that will earn you the points necessary to get a free flight to your chosen destination. Is this really a hard problem?

Continue on to Tutorials: Part 7
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tut6.html [3/23/2004 3:02:26 PM]
Data Structures and Algorithms  Workshop 1
Data Structures and Algorithms
Workshop 1 - 1999
Familiarisation
The basic aims of this workshop are to
- familiarise you with techniques to time an algorithm written in ANSI C, and
- experimentally confirm the time complexity of addition and searching operations for various data structures.

You will need to write a simple program to insert data into a tree, measuring the average time to add an item and to find a randomly chosen item in the tree.

1. Download and save into your directory from the download window:
   a. Collection.h
   b. Collection.c
   c. tree_struct.c
   d. tree_add.c
   e. tree_find.c and
   f. the test program tc.c.
2. Modify the collection implementation so that it uses a tree structure rather than the original array. Edit out the original structure, find and add methods and load in the new ones. Of course, you will be able to leave the specification of the Collection class untouched.
3. Compile into an executable:
   gcc -o tc tc.c collection.c
   Note that you will need to use an ANSI C compiler as the programs are written in ANSI C. On some Unix machines, cc only accepts the older "K&R C".
4. Run the program and verify that it runs as expected. Examine the test program listing to determine how it runs!
5. Now we want to find out how efficiently it runs:
   i. Modify the test program so that it generates and then inserts a large number, n, of
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/w1_1999.html (1 of 3) [3/23/2004 3:02:31 PM]
integers into the collection. Note that you will need to be careful about the choice of the set of numbers that you generate. Compare what happens to your times when you use the set of integers 1 to n with what happens when you use the rand() function to generate a set of random numbers to add. Once you have created a collection with n items in it, determine how long it takes, on average, to find an item in the collection. Again you will need to generate a set of "probe" data which you search for in the collection. (Searching for the same item multiple times may cause problems and give you misleading answers - why?)
Timing
Timing an individual function call has some traps: read these notes for some guidelines.

ii. Determine the average time to search the collection for a randomly generated integer, again by finding the time to search for n randomly generated integers. Use the random number generator, rand(), to generate random integers.
iii. Modify the program so that it prints out the insertion and searching times for a range of values of n. Suitable values will produce run times between about 1 and 20 seconds. About 10 values should enable you to determine the characteristics of the time vs n curve.

Warning: Note carefully that the test program, tc.c, is a test program - designed to demonstrate that the collection code is correct. You will need to reorganise it quite a bit to perform the timing analyses efficiently.
Report
Prepare a brief report which summarises your results. (The report should be plain ASCII text - not the native form of any word processor.)

This report should start by forming a hypothesis about your expected results. This should be followed by the actual results. The conclusion should provide a convincing argument as to why these results confirm your original hypothesis. It should also highlight and attempt to explain any discrepancies.

You are expected to measure the addition and searching times. Your report should also discuss the difference (if any) in results observed when a sequence of integers, 1, 2, ..., is used for the test data compared to a randomly chosen list of integers.

If you design your program efficiently, you will be able to get it to generate a table of data for you which you can paste directly into the report.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/w1_1999.html (2 of 3) [3/23/2004 3:02:31 PM]
Data Structures and Algorithms - Workshop 1
Submission

Instructions will be available soon!
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/tc.c
/** Test the Collection class **/

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "Collection.h"
#include "timer.h"

int KeyCmp( void *a, void *b ) {
    int av, bv;
    av = *(int *)a; bv = *(int *)b;
    /* printf("KC a %d b %d\n", av, bv ); */
    if ( av < bv ) return -1;
    else if ( av > bv ) return +1;
    return 0;
}

void AddAll( Collection c, int *list, int n ) {
    int i;
    for(i=0;i<n;i++) {
        /* printf("Add %d ..\n", list[i] ); */
        AddToCollection( c, &list[i] );
        /* printf("Find %d ..\n", list[i] );
        if ( FindInCollection( c, &list[i] ) ) {}
        else {
            printf("Add failure item %d, value %d\n", i, list[i] );
        } */
    }
}

int Check( Collection c, int *list, int n ) {
    int i, *ip, found;
    found = 0;
    for(i=0;i<n;i++) {
        if ( (ip=FindInCollection( c, &list[i] )) != NULL ) {
            if ( *ip == list[i] ) found++;
        }
    }
    return found;
}

int *GenList( int n, int random ) {
    int i, *ip;
    ip = (int *)malloc( n*sizeof(int) );
    for(i=0;i<n;i++) {
        ip[i] = random?rand():i;
    }
    return ip;
}

#define LOW   1e2
#define HIGH  1e4
#define N_REP 100
#define TESTS 1e4
void main() {
    Collection c, cs[N_REP];
    int cnt, n, random, k;
    int *list, *list2;
    double dt, mdt;
    printf("Tree Complexity\n");
    for(random=0;random<2;random++) {
        for( n = LOW; n < HIGH; n = n * 2 ) {
            printf("%6d: ", n ); fflush( stdout );
            list = GenList( n, !random );
            if ( list == NULL ) {
                printf("Insuff space for list\n");
                continue;
            }
            (void)Elapsed();
            for(k=0;k<N_REP;k++) {
                cs[k] = c = ConsCollection( 100, KeyCmp );
                if( c == NULL ) break;
                else AddAll( c, list, n );
            }
            if ( c != NULL ) {
                dt = Elapsed();
                mdt = (dt / n) / N_REP;
                printf(" %10.3f %12.4e %12.4e %12.4e",
                       dt, mdt, mdt/log((double)n), mdt/n );
            }
            for(k=0;k<N_REP;k++) {
                if( cs[k] != NULL ) DeleteCollection( cs[k] );
            }
#ifdef EXTRA
            list2 = GenList( TESTS, TRUE );
            if ( list2 != NULL ) {
                (void)Elapsed();
                AddAll( c, list2, TESTS );
                dt = Elapsed();
                mdt = dt/TESTS;
                printf(" %10.3f %12.4e %12.4e %12.4e\n",
                       dt, mdt, mdt/log((double)n), mdt/n );
                free( list2 );
            } else {
                printf("Insuff space for extra list!\n");
            }
#endif
            free( list );
            printf("\n");
        }
    }
}
Data Structures and Algorithms - Timing Programs
Data Structures and Algorithms
Timing a function
Most systems seem to have implementations of the ANSI C routine clock(). You can find its specifications with the man 3 clock command. clock() returns the processor time used by your program so far, measured in ticks; the difference between two calls gives the time taken in between. So, to time a function, simply call the function to be timed in between two clock() calls:

    long t;
    t = clock();
    /* Call function to be timed */
    x = f( ..... );
    /* Calculate time since previous clock call */
    t = clock() - t;
    /* Convert to seconds */
    printf("CPU time %g\n", (double)t/CLOCKS_PER_SEC );

A good tool-building approach would have you build another (trivial) function, say,

    double TimeUsed();

which returned the time difference in seconds and prevented your program from needing to worry about the conversion from ticks to seconds. For an example of a simple function which can readily be included in your program, see timer.h and timer.c.

However, be careful to note that the minimum resolution of the clock function will almost invariably be more than 1 tick. On the SGI machines, it's actually 10ms. This means that you have to ensure that your test function runs for much longer than 10ms to get accurate times. Thus you will usually have to call the function under test quite a few times in order to find an accurate time:

    long t;
    double tN;
    t = clock();
    /* Call function to be timed N times */
    for(i=0;i<N;i++) {
        x = f( ..... );
    }
    /* Calculate time since previous clock call */
    tN = clock() - t;
    /* Convert to seconds */
    tN = tN/CLOCKS_PER_SEC;
    /* Calculate the average */
    printf("CPU time %g\n", tN/N );

You will need to determine a suitable value of N by experimentation .. it will obviously vary with the complexity of the function being tested!
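One way to pick N automatically rather than by hand is to keep doubling it until the test runs long enough to swamp the clock resolution. This is a sketch of ours, not course code; the name calibrate and the "NULL means no-op" convention are assumptions made here for illustration.

```c
#include <time.h>

/* Double the repetition count until calling f() n times takes at
   least min_seconds of CPU time; return that n.  f may be NULL in
   this sketch (treated as a no-op) purely to make testing easy. */
long calibrate(void (*f)(void), double min_seconds)
{
    long n = 1;
    for (;;) {
        long i;
        clock_t t0 = clock();
        for (i = 0; i < n; i++)
            if (f != NULL)
                f();
        if ((double)(clock() - t0) / CLOCKS_PER_SEC >= min_seconds)
            return n;
        n *= 2;
    }
}
```

The returned n can then be used for the real measurement, dividing the total elapsed time by n as above.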
Table of Contents
© John Morris, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/timing.html (2 of 2) [3/23/2004 3:02:42 PM]
man page: rand()
RAND(3V)              C LIBRARY FUNCTIONS              RAND(3V)

NAME
     rand, srand - simple random number generator

SYNOPSIS
     srand(seed)
     int seed;

     rand()

DESCRIPTION
     rand() uses a multiplicative congruential random number generator with period 2**32 to return successive pseudo-random numbers in the range from 0 to (2**31)-1.

     srand() can be called at any time to reset the random-number generator to a random starting point. The generator is initially seeded with a value of 1.

SYSTEM V DESCRIPTION
     rand() returns successive pseudo-random numbers in the range from 0 to (2**15)-1.

SEE ALSO
     drand48(3), random(3)

NOTES
     The spectral properties of rand() leave a great deal to be desired. drand48(3) and random(3) provide much better, though more elaborate, random-number generators.

BUGS
     The low bits of the numbers generated are not very random; use the middle bits. In particular the lowest bit alternates between 0 and 1.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/rand.html [3/23/2004 3:02:44 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/timer.h
/* timer.h */
double Elapsed( void );   /* Returns time in seconds since last call */
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/timer.c
/* timer.c */
#include <time.h>

static double last = 0.0;

double Elapsed() {
    double t, diff;
    t = clock();
    diff = t - last;
    last = t;
    return diff/CLOCKS_PER_SEC;
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/testcol.c
/** Test the collection class **/

#include <stdio.h>
#include "collection.h"

int *ItemKey( int *item ) {
    return item;
}

void AddAll( collection c, int *list, int n ) {
    int i;
    for(i=0;i<n;i++) {
        AddToCollection( c, &list[i] );
        if ( FindInCollection( c, &list[i] ) ) {}
        else {
            printf("Add failure item %d, value %d\n", i, list[i] );
        }
    }
}

void CheckAll( collection c, int *list, int n ) {
    int i, *ip;
    for(i=0;i<n;i++) {
        if ( (ip=FindInCollection( c, &list[i] )) != NULL ) {
            if ( ip == &list[i] ) {}
            else {
                printf("Find mismatch: list[%d] = %d ", i, list[i] );
                printf(" @ %d/ %d @ %d\n", &list[i], *ip, ip );
            }
        }
        else {
            printf("Find failure item %d, value %d\n", i, list[i] );
        }
    }
}

void DeleteAll_1( collection c, int *list, int n ) {
    int i;
    for(i=0;i<n;i++) {
        DeleteFromCollection( c, &list[i] );
        if ( FindInCollection( c, &list[i] ) ) {
            printf("Delete failure item %d, value %d\n", i, list[i] );
        }
    }
}

void DeleteAll_2( collection c, int *list, int n ) {
    int i;
    for(i=n-1;i>=0;i--) {
        DeleteFromCollection( c, &list[i] );
        if ( FindInCollection( c, &list[i] ) ) {
            printf("Delete failure item %d, value %d\n", i, list[i] );
        }
    }
}

void main() {
    collection c;
    int list[] = { 2, 3, 45, 67, 89, 99 };
#define N (sizeof(list)/sizeof(int))
    c = ConsCollection( 100, sizeof( int ) );
    AddAll( c, list, N );
    printf("Added %d items\n", N );
    CheckAll( c, list, N );
    printf("Checked %d items\n", N );
    DeleteAll_1( c, list, N );
    printf("Deleted all items\n" );
    AddAll( c, list, N );
    printf("Added %d items\n", N );
    DeleteAll_2( c, list, N );
    printf("Deleted all items\n" );
}
Time functions
TIME(3V)              C LIBRARY FUNCTIONS              TIME(3V)

NAME
     time, ftime - get date and time

SYNOPSIS
     #include <sys/types.h>
     #include <sys/time.h>

     time_t time(tloc)
     time_t *tloc;

     #include <sys/timeb.h>

     int ftime(tp)
     struct timeb *tp;

DESCRIPTION
     time() returns the time since 00:00:00 GMT, Jan. 1, 1970, measured in seconds.

     If tloc is non-NULL, the return value is also stored in the location to which tloc points.

     ftime() fills in the structure pointed to by tp, as defined in <sys/timeb.h>:

          struct timeb {
                  time_t         time;
                  unsigned short millitm;
                  short          timezone;
                  short          dstflag;
          };

     The structure contains the time since the epoch in seconds, up to 1000 milliseconds of more-precise interval, the local time zone (measured in minutes of time westward from Greenwich), and a flag that, if nonzero, indicates that Daylight Saving time applies locally during the appropriate part of the year.

RETURN VALUES
     time() returns the value of time on success. On failure, it returns (time_t)-1.

     On success, ftime() returns no useful value. On failure, it returns -1.
SEE ALSO
     date(1V), gettimeofday(2), ctime(3V)
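For coarse, whole-second wall-clock measurements, time() alone is often enough. This is a sketch of ours (the function name is an assumption, not part of the man page); for sub-second work use clock() as described in the timing notes.

```c
#include <time.h>

/* Whole seconds elapsed since 'start' (from an earlier time(NULL)).
   Resolution is one second, so this only suits long-running tests. */
long elapsed_seconds(time_t start)
{
    return (long)(time(NULL) - start);
}
```

Typical use: record time(NULL) before the test loop and print elapsed_seconds() afterwards.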
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/time.html (2 of 2) [3/23/2004 3:03:39 PM]
Data Structures and Algorithms
Timing on the SGI machines

In their wisdom, the designers of IRIX decided not to implement the ftime routine found on the Suns! Use the IRIX routine clock() instead. You can find its specifications with the man 3 clock command.
Table of Contents
© John Morris, 1996
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Labs/clock.html [3/23/2004 3:03:40 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/source/mst.test
9                   /* Number of nodes */
a b c d e f g h i   /* Node labels */
a b  4.0            /* Edge a->b has cost 4.0 */
b c  8.0
c d  7.0
d e  9.0
d f 14.0
e f 10.0
c f  4.0
g f  2.0
c i  2.0
g i  6.0
i h  7.0
g h  1.0
b h 11.0
a h  8.0
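A reader for data in this shape might look like the following. This is our sketch, not course code: the names are ours, and we assume the /* ... */ comments have been stripped from the input first.

```c
#include <stdio.h>

/* One edge of a weighted graph with single-character node labels. */
typedef struct {
    char from, to;
    double cost;
} Edge;

/* Hypothetical reader for data shaped like mst.test (comments
   removed): node count, node labels, then "from to cost" triples.
   Returns the number of edges read, or -1 on a malformed header. */
int ReadEdges(FILE *fp, Edge *edges, int max)
{
    int n, i, count = 0;
    char label[8], from[8], to[8];
    double cost;

    if (fscanf(fp, "%d", &n) != 1)
        return -1;
    for (i = 0; i < n; i++)            /* skip the label list */
        if (fscanf(fp, "%7s", label) != 1)
            return -1;
    while (count < max &&
           fscanf(fp, "%7s %7s %lf", from, to, &cost) == 3) {
        edges[count].from = from[0];
        edges[count].to   = to[0];
        edges[count].cost = cost;
        count++;
    }
    return count;
}
```

The resulting edge array is a convenient starting point for the MST exercises later in the course.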
THE UNIVERSITY OF WESTERN AUSTRALIA
DEPARTMENT OF ELECTRICAL AND ELECTRONIC ENGINEERING
SECOND SEMESTER EXAMINATIONS NOVEMBER 1997

623.210 PROGRAMMING LANGUAGES AND SOFTWARE DESIGN 210

Time allowed: 3 Hours
Reading time: 10 minutes

This paper contains Section A and Section B (7 pages).

Instructions

Answer as many questions as you can in the time allowed. There are 96 marks on this paper. A reasonable target is 85 marks. Write answers to Section A on the question paper and hand it in with your answers to Section B.
Name
Student Number
SECOND SEMESTER EXAMINATIONS NOVEMBER 1996 PROGRAMMING LANGUAGES AND SOFTWARE DESIGN 210 623.210 PAGE 2
Section A
Short Answer Questions - Total 46 marks
Write your answers on this sheet in the space provided. The answers to each part in Section A should require no more than two concise sentences. In some cases, a single word or expression will suffice. If you need more space or wish to change your answer, use your answer book. Make sure to number the question clearly in your answer book.
Marks for each question are shown in brackets.

1. Program efficiency: √n is O(n^2). True or false? [1 mark]

2. What is meant by the statement "f is Θ(g)"? [2 marks]

3. Arrange the following functions of n in the order in which they grow as n approaches infinity (slowest growing function first): n^2, 1.0005^n, n^1000, √(n^3), log2(n) [3 marks]

4. What is meant by the statement: "This algorithm runs in O(1) time"? [2 marks]

5. Other than time, name a factor which sometimes needs to be considered in determining the complexity of an algorithm. [1 mark]

6. Give an example of an algorithm in which this factor might be important. [1 mark]
7. Programming strategy. If you were working on a large software project, would you insist that code be written in ANSI standard C? Give two reasons for your answer. [2 marks]

8. If I was constructing a software model for a class of vehicles, what files would I create? Provide a one line description of the contents of each file. [3 marks]

9. Why do I ensure that the details of the attributes of a software class are hidden in the implementation file so that users of the class cannot access them? (At least two reasons - one sentence each.) [2 marks]

10. In defining function f, the following comment has been added to the specification:

        double f( int n );    /* Precond: n >= 0 */

    What should be added to the implementation of this function? [2 marks]

11. Searching. All the questions in this subsection are related: read all the questions before answering any of them.
    I need to search a large (>10^6) collection of items. Arrange the following searching algorithms in order of their expected running time - slowest first. If you would expect two algorithms to take about the same time, then group them together. Consider the actual searching time only - assume that any necessary data structures have been built already.
    (a) Linear search
    (b) Hash table lookup
    (c) Red-Black tree search
    (d) Binary search
    [3 marks]
12. List two of the problems encountered in obtaining good performance with hash table lookup schemes. [2 marks]

13. What are the best and worst time complexities found with hash table lookup schemes? [2 marks]

14. What time complexity can I always obtain as long as I have a rule for ordering items in my collections? [1 mark]

15. What is the time complexity to add an item to (a) a collection set up for binary searching and (b) a red-black tree? [2 marks]

16. What are the best and worst time complexities for searching a binary tree? [2 marks]

17. Under what conditions would you obtain these best and worst times? (One sentence each - a simple diagram or two might help!) [2 marks]
18. When would I need to use a complex tree-balancing algorithm, such as the one to build a red-black tree? [2 marks]

19. Sorting. All the questions in this subsection are related: read all the questions before answering any of them.
    I need to sort a large (>10^6) collection of items. Arrange the following sorting algorithms in order of their expected running time - slowest first. If you would expect two algorithms to take about the same time, then group them together. Assume that you have so much memory on your computer that memory will not be a factor.
    (a) Quick sort
    (b) Insertion sort
    (c) Radix sort
    (d) Heap sort
    [3 marks]

20. When would you use insertion or bubble sort effectively? Explain your answer. [2 marks]

21. I can obtain better performance from another algorithm. What is it? [1 mark]

22. Give two restrictions on the use of this algorithm. [2 marks]

23. Graphs. Why is it necessary to be able to distinguish between problems which map to the travelling salesman's problem and the minimum spanning tree problem? [2 marks]
24. What data structure would you use to determine whether adding an edge to a graph causes a cycle? Write one sentence describing how this structure is used. [2 marks]

25. What is the time complexity of the cycle determining operation? [1 mark]

26. Hard problems. Give the time complexity of a typical intractable algorithm. [1 mark]

27. Can I solve such a problem for (a) small n? (b) large n? In each case, add to your yes or no answer a phrase describing the quality of the best answer that you can obtain with a practical computer. [3 marks]

28. Verifying functions. Why is the concept of equivalence classes useful in verifying functions? [2 marks]
Section B
QUESTION B1 (15 marks)
You are developing a private network for your company, which has a very large number of outlets all over the country for its mechanically produced, sterile (untouched by human hands!) hamburgers. Each outlet must be connected to the network so that management can arrange to ship all the packaging (the most significant component of your company's output) to the outlets just as it is needed. This network is to be just like the Internet, with nets of multiple redundant links connecting all the outlets. Nodes will be placed in strategic locations. From each node it is possible to have multiple links to other nodes. Network nodes receive large numbers of messages (not all relevant to the company's operations - see below) and are responsible for forwarding them to their correct destination.

Links use a variety of technologies - copper wire, optical fibre and satellite. (The chairman of the board hasn't really understood the Internet yet, so there is a very low bandwidth link between his secretary's desk and his: his secretary copies the messages onto pieces of paper and carries them into his office.) All of these links have different bandwidths and thus different costs associated with sending a message over them. In cases of extreme packaging shortage, outlets have to communicate directly with each other to arrange emergency supplies.

As part of the network design process, you have to determine:

(a) The most efficient routing for messages from any outlet to any other. Which algorithm would you use for this? What is its complexity?

(b) The most efficient route for a broadcast message (which emanates from the chairman's office) to reach all nodes. Which algorithm would you use for this? What is its complexity?

(c) The chairman hasn't heard of Internet television yet, so insists on visiting each outlet once a year to encourage the workers. He's getting rather frail now, so it's important that the most efficient way for him to do this is found.
For some reason, this pure public relations exercise has landed on your desk too - perhaps because you have the only up-to-date database containing all the outlet locations. You are required to plan his route. Which algorithm would you use for this? How long would it take you to compute the chairman's route?

(d) You once accidentally showed a few of your colleagues how the network could run all the chat programs - because it was using standard Internet protocols. Within two days, the prototype network became clogged with messages that seemed to contain little more than "Hi" followed by a name and questions about the weather at other outlets around the country. When the chairman heard about this, he thought it was magnificent that all his employees were talking to each other and refused your request to junk all chat packets from the network. You were forced to add a filter which tagged all chat packets as "non-urgent" and packaging supply and other messages which were actually relevant to the company's operations with 57 other levels of urgency. Which algorithm should you use at each network node to ensure that messages relating to the company's operations take precedence over the weather? If there are, on average, n messages waiting for forwarding at each node at any one time and it takes approximately c microseconds to allocate space for a new message, compare its urgency level with another and decide to swap their positions, approximately how long will it take to receive each message at a node?

QUESTION B2 (10 marks)
Your P9 computer is able to analyse one million chess moves per second. A genetic engineer has succeeded in combining some of Kasparov's DNA with some recovered from Einstein's fingerprints in a cloned monkey which can now - with absolute reliability - think 10 moves ahead. Assume there are, on average, about 20 possible moves at each position. Assume also that you are able to purchase and connect together, without loss of efficiency, as many of your P9's as you need. You have 100 seconds for each move. How many P9's will you need in order to at least draw with this monkey?
QUESTION B3 (25 marks)
Design a software module for supporting operations on a class of graphs. This class must provide all the methods necessary to calculate a minimum spanning tree (MST). (Provision of methods to support other common graph algorithms will attract a small bonus - a maximum of 5 marks which will be used to compensate for other flaws in your answer and increase your chance of obtaining the maximum 25 marks for this question.)

Rules:
i) Graphs consist of a set of nodes and edges.
ii) Initially a graph will be constructed with no nodes and no edges.
iii) Nodes and edges are to be added separately.
iv) The number of nodes and edges in the graph at any one time needs to be available.

(a) Provide a complete formal software definition for the graph class. This should be in the form of a program module that would be accepted by an ANSI C compiler. (Minor syntactic errors will be ignored.)

(b) Suggest a set of data structures which could be used effectively internally in the graph structure to handle the nodes, edges and any other information needed by the class to support operations on it. Obviously, the structures you mention should be sufficient to implement the minimum spanning tree algorithm.

(c) Describe how the cycle determining step of the MST algorithm will work. You may do this by:
i) simply describing the algorithm step by step in natural language (with appropriate references to the actual data structures to be used), or
ii) providing suitably annotated actual code (the comments should be sufficiently detailed to enable the algorithm to be understood), or
iii) any combination of (i) and (ii).
It is strongly suggested that appropriate diagrams to show the working of the algorithm should be used to augment your description.
Data Structures and Algorithms: Tutorial Problems 7
Data Structures and Algorithms
Tutorial Problems: Part 7
Under construction!

1. Continue on to Tutorials: Part 7
© John Morris, 1998
Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Tutorials/tut7.html [3/23/2004 3:03:55 PM]
Data Structures and Algorithms - Texts
Data Structures and Algorithms
Texts
The following is a (non-exhaustive) list of texts in the UWA library which cover aspects of this course. Not all the texts cover all the material - you will need to search a little for some of the topics. Since there are many texts here, it's probably simpler to note a few representative catalogue numbers and simply look at the shelves in that area! For instance, 005.73 obviously has a decent block of texts. Texts highlighted in red have been used as sources for some of the material in this course.
Brown, Marc H.
  Algorithm animation / Marc H. Brown.
  Cambridge, Mass : M.I.T. Press, c1988.
  FIZ 006.6 1988 ALG

Harel, David, 1950-
  Algorithmics : the spirit of computing / David Harel.
  Wokingham, England ; Reading, Mass : Addison-Wesley, c1987.
  FIZ 004 1987 ALG

Sedgewick, Robert, 1946-
  Algorithms / Robert Sedgewick.
  Reading, Mass : Addison-Wesley, c1983.
  SRR 517.6 1983 ALG  (DUE 22-11-96)

Sedgewick, Robert, 1946-
  Algorithms / Robert Sedgewick.
  Reading, Mass : Addison-Wesley, c1988.
  FIZ Reserve 517.6 1988 ALG (two copies)

Kingston, Jeffrey H. (Jeffrey Howard)
  Algorithms and data structures : design, correctness, analysis /
  Sydney : Addison-Wesley, 1990.
  FIZ 005.73 1990 ALG  (DUE 30-08-96)
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/texts.html (1 of 4) [3/23/2004 3:04:04 PM]
Wirth, Niklaus, 1934-
  Algorithms + data structures = programs / Niklaus Wirth.
  Englewood Cliffs, N.J : Prentice-Hall, c1976.
  FIZ 005.1 1976 ALG (two copies)

Moret, B. M. E. (Bernard M. E.)
  Algorithms from P to NP / B.M.E. Moret, H.D. Shapiro.
  Redwood City, CA : Benjamin/Cummings, c1991-
  FIZ 005.1 1991 ALG

Sedgewick, Robert, 1946-
  Algorithms in C / Robert Sedgewick.
  Reading, Mass : Addison-Wesley Pub. Co., c1990.
  SRR 005.133 1990 ALG

Collected algorithms from ACM.
  New York, N.Y : Association for Computing Machinery, 1975-
  R 005.1 FIZ Reference; MICROFICHE MP 430 FIZ Microform

Moffat, David V., 1944-
  Common algorithms in Pascal with programs for reading / David V. Moffat.
  Englewood Cliffs, N.J : Prentice-Hall, c1984.
  FIZ 005.133 1984 COM

Baase, Sara.
  Computer algorithms : introduction to design and analysis / Sara Baase.
  Reading, Mass : Addison-Wesley Pub. Co., c1978.
  FIZ 005.1 1978 COM

Walker, Henry M., 1947-
  Computer science 2 : principles of software engineering, data ...
  Glenview, Ill : Scott, Foresman, c1989.
  FIZ 005.1 1989 COM

Garey, Michael R.
  Computers and intractability : a guide to the theory of NP-completeness.
  San Francisco : W. H. Freeman, c1979.
  FIZ 005.1 1979 COM  (DUE 03-09-96)
Aho, Alfred V.
  Data structures and algorithms / Alfred V. Aho, John E. Hopcroft, ...
  Reading, Mass : Addison-Wesley, c1983.
  FIZ 005.73 1983 DAT

Aho, Alfred V.
  The design and analysis of computer algorithms / Alfred V. Aho, ...
  Reading, Mass : Addison-Wesley Pub. Co., [1974]
  FIZ 005.1 1974 DES (two copies)

Mehlhorn, Kurt, 1949-
  [Effiziente Algorithmen. English]
  Data structures and algorithms / Kurt Mehlhorn.
  Berlin ; New York : Springer, 1984.
  FIZ 005.73 1984 DAT V.1, V.2, V.3

Brassard, Gilles, 1955-
  Fundamentals of algorithmics / Gilles Brassard and Paul Bratley.
  Englewood, N.J. : Prentice Hall, c1996.
  FIZ 517.6 1996 FUN  (DUE 22-11-96)

Horowitz, Ellis.
  Fundamentals of computer algorithms / Ellis Horowitz, Sartaj ...
  Potomac, Md : Computer Science Press, c1978.
  FIZ 005.12 1978 FUN  (DUE 30-08-96)

Gonnet, G. H. (Gaston H.)
  Handbook of algorithms and data structures : in Pascal and C / ...
  Wokingham, England ; Reading, Mass : Addison-Wesley Pub. Co.
  SRR 005.133 1991 HAN

Cormen, Thomas H.
  Introduction to algorithms / Thomas H. Cormen, Charles E. ...
  Cambridge, Mass : MIT Press ; New York : McGraw-Hill, c1990.
  FIZ Reserve 005.1 1990 INT; FIZ 3-day 005.1 1990 INT

Tremblay, Jean-Paul, 1938-
  An Introduction to computer science : an algorithmic approach / ...
  New York : McGraw-Hill, c1979.
  FIZ 005.1 1979 INT
Machtey, Michael.
  An introduction to the general theory of algorithms / Michael Machtey.
  New York : North Holland, c1978.
  FIZ 005.13 1978 INT

Greene, Daniel H., 1955-
  Mathematics for the analysis of algorithms / Daniel H. Greene, ...
  Boston : Birkhauser, c1981.
  FIZ 517.6 1981 GRE

Reinelt, G. (Gerhard)
  The traveling salesman : computational solutions for TSP ...
  Berlin ; New York : Springer-Verlag, c1994.
  FIZ P 004.05 P27

Budd, Timothy A.
  Classic data structures in C++ / Timothy A. Budd.
  Reading, Mass. : Addison-Wesley Pub. Co., c1994.
  FIZ 005.73 1994 CLA

Standish, Thomas A., 1941-
  Data structure techniques / Thomas A. Standish.
  Reading, MA : Addison-Wesley, c1980.
  FIZ 005.73 1980 DAT
Table of Contents
© John Morris, 1996
Data Structures & Algorithms - Courses
Data Structures & Algorithms - Online courses
This is a partial list of online course material and tutorials for data structures and algorithms.

1. Thomas Niemann's text on sorting and searching
2. Updated version of Thomas Niemann's text
Continue on to algorithm animations Back to the Table of Contents
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/www_ds.html [3/23/2004 3:04:06 PM]
Sorting and Searching Algorithms: A Cookbook
Sorting and Searching Algorithms: A Cookbook
by Thomas Niemann
This is a collection of algorithms for sorting and searching. Descriptions are brief and intuitive, with just enough theory thrown in to make you nervous. I assume you know C, and that you are familiar with concepts such as arrays and pointers. The first section introduces basic data structures and notation. The next section presents several sorting algorithms. This is followed by techniques for implementing dictionaries, structures that allow efficient search, insert, and delete operations. The last section illustrates algorithms that sort data and implement dictionaries for very large files. Source code for each algorithm, in ANSI C, is included.

This document has been translated into Russian. If you are interested in translating, please send me email. Special thanks go to Pavel Dubner, whose numerous suggestions were much appreciated. The following files may be downloaded:
   PDF format (153k)
   source code for the above (16k)
Permission to reproduce this document, in whole or in part, is given provided the original web site listed below is referenced, and no additional restrictions apply. Source code, when part of a software project, may be used freely without reference to the author. Thomas Niemann Portland, Oregon http://members.xoom.com/thomasn/s_man.htm
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_man.htm (1 of 2) [3/23/2004 3:04:11 PM]
Visit my Home Page.
Sorting and Searching Algorithms: A Cookbook
Thomas Niemann
Preface
This is a collection of algorithms for sorting and searching. Descriptions are brief and intuitive, with just enough theory thrown in to make you nervous. I assume you know C, and that you are familiar with concepts such as arrays and pointers. The first section introduces basic data structures and notation. The next section presents several sorting algorithms. This is followed by techniques for implementing dictionaries, structures that allow efficient search, insert, and delete operations. The last section illustrates algorithms that sort data and implement dictionaries for very large files. Source code for each algorithm, in ANSI C, is available at the site listed below.

Permission to reproduce this document, in whole or in part, is given provided the original web site listed below is referenced, and no additional restrictions apply. Source code, when part of a software project, may be used freely without reference to the author.
Thomas Niemann
Portland, Oregon
email: thomasn@jps.net
home: http://members.xoom.com/thomasn/s_man.htm
By the same author: A Guide to Lex and Yacc, at http://members.xoom.com/thomasn/y_man.htm.
CONTENTS

1. INTRODUCTION
2. SORTING
   2.1 Insertion Sort
   2.2 Shell Sort
   2.3 Quicksort
   2.4 Comparison
3. DICTIONARIES
   3.1 Hash Tables
   3.2 Binary Search Trees
   3.3 Red-Black Trees
   3.4 Skip Lists
   3.5 Comparison
4. VERY LARGE FILES
   4.1 External Sorting
   4.2 B-Trees
5. BIBLIOGRAPHY
1. Introduction
Arrays and linked lists are two basic data structures used to store information. We may wish to search, insert or delete records in a database based on a key value. This section examines the performance of these operations on arrays and linked lists.
Arrays
Figure 1-1 shows an array, seven elements long, containing numeric values. To search the array sequentially, we may use the algorithm in Figure 1-2. The maximum number of comparisons is 7, and occurs when the key we are searching for is in A[6].
index:  0   1   2   3   4   5   6
value:  4   7  16  20  37  38  43
       Lb          M          Ub

Figure 1-1: An Array
int function SequentialSearch (Array A, int Lb, int Ub, int Key);
begin
   for i = Lb to Ub do
      if A[i] = Key then
         return i;
   return -1;
end;

Figure 1-2: Sequential Search
int function BinarySearch (Array A, int Lb, int Ub, int Key);
begin
   do forever
      M = (Lb + Ub) / 2;
      if (Key < A[M]) then
         Ub = M - 1;
      else if (Key > A[M]) then
         Lb = M + 1;
      else
         return M;
      if (Lb > Ub) then
         return -1;
end;

Figure 1-3: Binary Search
If the data is sorted, a binary search may be done (Figure 1-3). Variables Lb and Ub keep track of the lower bound and upper bound of the array, respectively. We begin by examining the middle element of the array. If the key we are searching for is less than the middle element, then it must reside in the top half of the array. Thus, we set Ub to (M – 1). This restricts our next iteration through the loop to the top half of the array. In this way, each iteration halves the size of the array to be searched. For example, the first iteration will leave 3 items to test. After the second iteration, there will be one item left to test. Therefore it takes only three iterations to find any number. This is a powerful method. Given an array of 1023 elements, we can narrow the search to 511 elements in one comparison. After another comparison, we're looking at only 255 elements. In fact, we can search the entire array in only 10 comparisons. In addition to searching, we may wish to insert or delete entries. Unfortunately, an array is not a good arrangement for these operations. For example, to insert the number 18 in Figure 1-1, we would need to shift A[3]…A[6] down by one slot. Then we could copy number 18 into A[3]. A similar problem arises when deleting numbers. To improve the efficiency of insert and delete operations, linked lists may be used.
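As a concrete companion to the pseudocode in Figure 1-3, here is a minimal C sketch of the same loop. The function name binary_search and the plain int-array interface are illustrative assumptions, not part of the original source.

```c
#include <assert.h>

/* Binary search on a sorted array, in the style of Figure 1-3.
   Returns the index of key, or -1 if it is not present. */
int binary_search(const int a[], int lb, int ub, int key)
{
    while (lb <= ub) {
        int m = (lb + ub) / 2;      /* middle element */
        if (key < a[m])
            ub = m - 1;             /* key must be in the lower half */
        else if (key > a[m])
            lb = m + 1;             /* key must be in the upper half */
        else
            return m;               /* found */
    }
    return -1;                      /* not present */
}
```

With the seven-element array of Figure 1-1, searching for 20 returns index 3, and searching for 18 returns -1.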
Linked Lists
X → [18]

[4] → [7] → [16] → [20] → [37] → [38] → [43] → #
              ↑
              P
Figure 1-4: A Linked List

In Figure 1-4 we have the same values stored in a linked list. Assuming pointers X and P, as shown in the figure, value 18 may be inserted as follows:
X->Next = P->Next;
P->Next = X;
Insertion and deletion operations are very efficient using linked lists. You may be wondering how pointer P was set in the first place. Well, we had to do a sequential search to find the insertion point. Although we improved our performance for insertion/deletion, it was done at the expense of search time.
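The two pointer statements above can be sketched as a small C program. The Node type and the helper name insert_after are illustrative assumptions for this sketch, not the original source.

```c
#include <assert.h>
#include <stddef.h>

/* A singly linked list node holding a numeric value. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Insert node x immediately after node p, as in Figure 1-4:
   x takes over p's successor, and p now points at x. */
void insert_after(Node *p, Node *x)
{
    x->next = p->next;
    p->next = x;
}
```

Given P pointing at node 16 and X pointing at a new node 18, insert_after(P, X) splices 18 between 16 and 20 in constant time.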
Timing Estimates
Several methods may be used to compare the performance of algorithms. One way is simply to run several tests for each algorithm and compare the timings. Another way is to estimate the time required. For example, we may state that search time is O(n) (big-O of n). This means that search time, for large n, is proportional to the number of items n in the list. Consequently, we would expect search time to triple if our list increased in size by a factor of three. The big-O notation does not describe the exact time that an algorithm takes, but only indicates an upper bound on execution time within a constant factor. If an algorithm takes O(n^2) time, then execution time grows no worse than the square of the size of the list.
n            lg n   n lg n         n^1.25          n^2
1            0      0              1               1
16           4      64             32              256
256          8      2,048          1,024           65,536
4,096        12     49,152         32,768          16,777,216
65,536       16     1,048,576      1,048,576       4,294,967,296
1,048,576    20     20,971,520     33,554,432      1,099,511,627,776
16,777,216   24     402,653,184    1,073,741,824   281,474,976,710,656

Table 1-1: Growth Rates

Table 1-1 illustrates growth rates for various functions. A growth rate of O(lg n) occurs for algorithms similar to the binary search. The lg (logarithm, base 2) function increases by one when n is doubled. Recall that we can search twice as many items with one more comparison in the binary search. Thus the binary search is an O(lg n) algorithm. If the values in Table 1-1 represented microseconds, then an O(lg n) algorithm may take 20 microseconds to process 1,048,576 items, an O(n^1.25) algorithm might take 33 seconds, and an O(n^2) algorithm might take up to 12 days! In the following chapters a timing estimate for each algorithm, using big-O notation, will be included. For a more formal derivation of these formulas you may wish to consult the references.
Summary
As we have seen, sorted arrays may be searched efficiently using a binary search. However, we must have a sorted array to start with. In the next section various ways to sort arrays will be examined. It turns out that sorting is computationally expensive, and considerable research has been done to make sorting algorithms as efficient as possible. Linked lists improved the efficiency of insert and delete operations, but searches were sequential and time-consuming. Algorithms exist that do all three operations efficiently, and they will be discussed in the section on dictionaries.
2. Sorting
Several algorithms are presented, including insertion sort, shell sort, and quicksort. Sorting by insertion is the simplest method, and doesn't require any additional storage. Shell sort is a simple modification that improves performance significantly. Probably the most efficient and popular method is quicksort; it is the method of choice for large arrays.
2.1 Insertion Sort
One of the simplest methods to sort an array is an insertion sort. An example of an insertion sort occurs in everyday life while playing cards. To sort the cards in your hand you extract a card, shift the remaining cards, and then insert the extracted card in the correct place. This process is repeated until all the cards are in the correct sequence. Both average and worst-case time is O(n^2). For further reading, consult Knuth [1998].
Theory
Starting near the top of the array in Figure 2-1(a), we extract the 3. Then the elements above are shifted down until we find the correct place to insert the 3. This process repeats in Figure 2-1(b) with the next number. Finally, in Figure 2-1(c), we complete the sort by inserting 2 in the correct place.
(a) 4 3 1 2  →  3 4 1 2   (extract 3, shift 4 down, insert 3)
(b) 3 4 1 2  →  1 3 4 2   (extract 1, shift 3 and 4 down, insert 1)
(c) 1 3 4 2  →  1 2 3 4   (extract 2, shift 3 and 4 down, insert 2)
Figure 2-1: Insertion Sort

Assuming there are n elements in the array, we must index through n – 1 entries. For each entry, we may need to examine and shift up to n – 1 other entries, resulting in an O(n^2) algorithm. The insertion sort is an in-place sort. That is, we sort the array in place; no extra memory is required. The insertion sort is also a stable sort. Stable sorts retain the original ordering of keys when identical keys are present in the input data.
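The extract-shift-insert process described above can be sketched in C as follows. This is a hedged sketch specialized to int; the name insertion_sort is an assumption (the original ins.c uses a typedef T and a comparison operator compGT).

```c
#include <assert.h>

/* Insertion sort: for each element, shift larger values up
   one slot and drop the extracted element into the gap. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int t = a[i];                    /* extract the element */
        int j = i;
        while (j > 0 && a[j - 1] > t) {  /* shift larger values up */
            a[j] = a[j - 1];
            j--;
        }
        a[j] = t;                        /* insert in the correct place */
    }
}
```

Running this on the array 4 3 1 2 of Figure 2-1 yields 1 2 3 4.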
Implementation
Source for the insertion sort algorithm may be found in file ins.c. Typedef T and comparison operator compGT should be altered to reflect the data stored in the table.
2.2 Shell Sort
Shell sort, developed by Donald L. Shell, is a non-stable in-place sort. Shell sort improves on the efficiency of insertion sort by quickly shifting values to their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). For further reading, consult Knuth [1998].
Theory
In Figure 2-2(a) we have an example of sorting by insertion. First we extract 1, shift 3 and 5 down one slot, and then insert the 1, for a count of 2 shifts. In the next frame, two shifts are required before we can insert the 2. The process continues until the last frame, where a total of 2 + 2 + 1 = 5 shifts have been made. In Figure 2-2(b) an example of shell sort is illustrated. We begin by doing an insertion sort using a spacing of two. In the first frame we examine the pair 3-1. Extracting 1, we shift 3 down one slot for a shift count of 1. Next we examine the pair 5-2. We extract 2, shift 5 down, and then insert 2. After sorting with a spacing of two, a final pass is made with a spacing of one. This is simply the traditional insertion sort. The total shift count using shell sort is 1 + 1 + 1 = 3. By using an initial spacing larger than one, we were able to quickly shift values to their proper destination.
(a) insertion sort, spacing 1:
    3 5 1 2 4 → 1 3 5 2 4 (2 shifts) → 1 2 3 5 4 (2 shifts) → 1 2 3 4 5 (1 shift); total 5 shifts

(b) shell sort, spacing 2 then 1:
    3 5 1 2 4 → 1 5 3 2 4 (1 shift) → 1 2 3 5 4 (1 shift) → 1 2 3 4 5 (1 shift); total 3 shifts
Figure 2-2: Shell Sort

Various spacings may be used to implement shell sort. Typically the array is sorted with a large spacing, the spacing reduced, and the array sorted again. On the final sort, spacing is one. Although the shell sort is easy to comprehend, formal analysis is difficult. In particular, optimal spacing values elude theoreticians. Knuth has experimented with several values and recommends that spacing h for an array of size N be based on the following formula:

Let h(1) = 1, h(s+1) = 3·h(s) + 1, and stop with h(t) when h(t+2) ≥ N
Thus, values of h are computed as follows:

h(1) = 1
h(2) = (3 × 1) + 1 = 4
h(3) = (3 × 4) + 1 = 13
h(4) = (3 × 13) + 1 = 40
h(5) = (3 × 40) + 1 = 121

To sort 100 items we first find h(s) such that h(s) ≥ 100. For 100 items, h(5) is selected. Our final value (h(t)) is two steps lower, or h(3). Therefore our sequence of h values will be 13-4-1. Once the initial h value has been determined, subsequent values may be calculated using the formula h(s-1) = h(s) / 3.
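The spacing rule above can be sketched as a small C helper that finds the starting spacing directly; the function name initial_spacing is an illustrative assumption. It uses the fact that if h is the current spacing, the spacing two steps higher is 9h + 4.

```c
#include <assert.h>

/* Largest starting spacing h(t) for an array of n elements,
   per the rule: h(1) = 1, h(s+1) = 3*h(s) + 1, stop with h(t)
   when h(t+2) >= n.  Note h(t+2) = 3*(3*h + 1) + 1 = 9*h + 4. */
int initial_spacing(int n)
{
    int h = 1;
    while (9 * h + 4 < n)   /* advance while h(t+2) is still below n */
        h = 3 * h + 1;
    return h;
}
```

For 100 items this returns 13, matching the 13-4-1 sequence derived above.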
Implementation
Source for the shell sort algorithm may be found in file shl.c. Typedef T and comparison operator compGT should be altered to reflect the data stored in the array. The central portion of the algorithm is an insertion sort with a spacing of h.
2.3 Quicksort
Although the shell sort algorithm is significantly better than insertion sort, there is still room for improvement. One of the most popular sorting algorithms is quicksort. Quicksort executes in O(n lg n) on average, and O(n^2) in the worst case. However, with proper precautions, worst-case behavior is very unlikely. Quicksort is a non-stable sort. It is not an in-place sort, as stack space is required. For further reading, consult Cormen [1990].
Theory
The quicksort algorithm works by partitioning the array to be sorted, then recursively sorting each partition. In Partition (Figure 2-3), one of the array elements is selected as a pivot value. Values smaller than the pivot value are placed to the left of the pivot, while larger values are placed to the right.
int function Partition (Array A, int Lb, int Ub);
begin
   select a pivot from A[Lb]…A[Ub];
   reorder A[Lb]…A[Ub] such that:
      all values to the left of the pivot are ≤ pivot
      all values to the right of the pivot are ≥ pivot
   return pivot position;
end;

procedure QuickSort (Array A, int Lb, int Ub);
begin
   if Lb < Ub then
      M = Partition (A, Lb, Ub);
      QuickSort (A, Lb, M - 1);
      QuickSort (A, M + 1, Ub);
end;

Figure 2-3: Quicksort Algorithm

In Figure 2-4(a), the pivot selected is 3. Indices are run starting at both ends of the array. One index starts on the left and selects an element that is larger than the pivot, while another index starts on the right and selects an element that is smaller than the pivot. In this case, numbers 4 and 1 are selected. These elements are then exchanged, as is shown in Figure 2-4(b). This process repeats until all elements to the left of the pivot are ≤ the pivot, and all items to the right of the pivot are ≥ the pivot. QuickSort recursively sorts the two subarrays, resulting in the array shown in Figure 2-4(c).
(a) 4 2 3 5 1   (pivot = 3; 4 and 1 selected for exchange)
(b) 1 2 3 5 4   (after the exchange)
(c) 1 2 3 4 5   (after recursively sorting both partitions)
Figure 2-4: Quicksort Example

As the process proceeds, it may be necessary to move the pivot so that correct ordering is maintained. In this manner, QuickSort succeeds in sorting the array. If we're lucky the pivot selected will be the median of all values, equally dividing the array. For a moment, let's assume
that this is the case. Since the array is split in half at each step, and Partition must eventually examine all n elements, the run time is O(n lg n). To find a pivot value, Partition could simply select the first element (A[Lb]). All other values would be compared to the pivot value, and placed either to the left or right of the pivot as appropriate. However, there is one case that fails miserably. Suppose the array was originally in order. Partition would always select the lowest value as a pivot and split the array with one element in the left partition, and Ub – Lb elements in the other. Each recursive call to quicksort would only diminish the size of the array to be sorted by one. Therefore n recursive calls would be required to do the sort, resulting in an O(n^2) run time. One solution to this problem is to randomly select an item as a pivot. This would make it extremely unlikely that worst-case behavior would occur.
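A compact C sketch of the partition-and-recurse scheme of Figure 2-3, using the middle element as pivot (one of the precautions discussed above). Names are illustrative assumptions, not the original qui.c source.

```c
#include <assert.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Quicksort with the middle element as pivot: scan inward from
   both ends, exchange out-of-place pairs, then recurse on the
   two partitions. */
void quick_sort(int a[], int lb, int ub)
{
    if (lb >= ub)
        return;
    int pivot = a[(lb + ub) / 2];   /* middle element as pivot */
    int i = lb, j = ub;
    while (i <= j) {
        while (a[i] < pivot) i++;   /* find element >= pivot */
        while (a[j] > pivot) j--;   /* find element <= pivot */
        if (i <= j)
            swap(&a[i++], &a[j--]); /* exchange and advance */
    }
    quick_sort(a, lb, j);           /* sort left partition */
    quick_sort(a, i, ub);           /* sort right partition */
}
```

On the array 4 2 3 5 1 of Figure 2-4 this produces 1 2 3 4 5.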
Implementation
The source for the quicksort algorithm may be found in file qui.c. Typedef T and comparison operator compGT should be altered to reflect the data stored in the array. Several enhancements have been made to the basic quicksort algorithm:

• The center element is selected as a pivot in partition. If the list is partially ordered, this will be a good choice. Worst-case behavior occurs when the center element happens to be the largest or smallest element each time partition is invoked.

• For short arrays, insertSort is called. Due to recursion and other overhead, quicksort is not an efficient algorithm to use on small arrays. Consequently, any array with fewer than 12 elements is sorted using an insertion sort. The optimal cutoff value is not critical and varies based on the quality of generated code.

• Tail recursion occurs when the last statement in a function is a call to the function itself. Tail recursion may be replaced by iteration, resulting in a better utilization of stack space. This has been done with the second call to QuickSort in Figure 2-3.

• After an array is partitioned, the smallest partition is sorted first. This results in a better utilization of stack space, as short partitions are quickly sorted and dispensed with.
Included in file qsort.c is the source for qsort, an ANSI C standard library function usually implemented with quicksort. Recursive calls were replaced by explicit stack operations. Table 2-1 shows timing statistics and stack utilization before and after the enhancements were applied.
count    time before (µs)   time after (µs)   stack before   stack after
16       103                51                540            28
256      1,630              911               912            112
4,096    34,183             20,016            1,908          168
65,536   658,003            470,737           2,436          252

Table 2-1: Effect of Enhancements on Speed and Stack Utilization
2.4 Comparison
In this section we will compare the sorting algorithms covered: insertion sort, shell sort, and quicksort. There are several factors that influence the choice of a sorting algorithm:

• Stable sort. Recall that a stable sort will leave identical keys in the same relative position in the sorted output. Insertion sort is the only algorithm covered that is stable.

• Space. An in-place sort does not require any extra space to accomplish its task. Both insertion sort and shell sort are in-place sorts. Quicksort requires stack space for recursion, and therefore is not an in-place sort. Tinkering with the algorithm considerably reduced the amount of time required.

• Time. The time required to sort a dataset can easily become astronomical (Table 1-1). Table 2-2 shows the relative timings for each method. The time required to sort a randomly ordered dataset is shown in Table 2-3.

• Simplicity. The number of statements required for each algorithm may be found in Table 2-2. Simpler algorithms result in fewer programming errors.
method           statements   average time   worst-case time
insertion sort   9            O(n^2)         O(n^2)
shell sort       17           O(n^1.25)      O(n^1.5)
quicksort        21           O(n lg n)      O(n^2)

Table 2-2: Comparison of Methods
count    insertion     shell       quicksort
16       39 µs         45 µs       51 µs
256      4,969 µs      1,230 µs    911 µs
4,096    1.315 sec     .033 sec    .020 sec
65,536   416.437 sec   1.254 sec   .461 sec

Table 2-3: Sort Timings
3. Dictionaries
Dictionaries are data structures that support search, insert, and delete operations. One of the most effective representations is a hash table. Typically, a simple function is applied to the key to determine its place in the dictionary. Also included are binary search trees and red-black trees. Both tree methods use a technique similar to the binary search algorithm to minimize the number of comparisons during search and update operations on the dictionary. Finally, skip lists illustrate a simple approach that utilizes random numbers to construct a dictionary.
3.1 Hash Tables
Hash tables are a simple and effective method to implement dictionaries. Average time to search for an element is O(1), while worst-case time is O(n). Cormen [1990] and Knuth [1998] both contain excellent discussions on hashing.
Theory
A hash table is simply an array that is addressed via a hash function. For example, in Figure 3-1, HashTable is an array with 8 elements. Each element is a pointer to a linked list of numeric data. The hash function for this example simply divides the data key by 8, and uses the remainder as an index into the table. This yields a number from 0 to 7. Since the range of indices for HashTable is 0 to 7, we are guaranteed that the index is valid.
HashTable
HashTable
0 → 16 → #
1 → #
2 → #
3 → 11 → 27 → 19 → #
4 → #
5 → #
6 → 22 → 6 → #
7 → #
Figure 3-1: A Hash Table

To insert a new item in the table, we hash the key to determine which list the item goes on, and then insert the item at the beginning of the list. For example, to insert 11, we divide 11 by 8 giving a remainder of 3. Thus, 11 goes on the list starting at HashTable[3]. To find a
number, we hash the number and chain down the correct list to see if it is in the table. To delete a number, we find the number and remove the node from the linked list. Entries in the hash table are dynamically allocated and entered on a linked list associated with each hash table entry. This technique is known as chaining. An alternative method, where all entries are stored in the hash table itself, is known as direct or open addressing and may be found in the references.

If the hash function is uniform, or equally distributes the data keys among the hash table indices, then hashing effectively subdivides the list to be searched. Worst-case behavior occurs when all keys hash to the same index. Then we simply have a single linked list that must be sequentially searched. Consequently, it is important to choose a good hash function. Several methods may be used to hash key values. To illustrate the techniques, I will assume unsigned char is 8 bits, unsigned short int is 16 bits, and unsigned long int is 32 bits.

• Division method (tablesize = prime). This technique was used in the preceding example. A HashValue, from 0 to (HashTableSize − 1), is computed by dividing the key value by the size of the hash table and taking the remainder. For example:
typedef int HashIndexType;

HashIndexType Hash(int Key) {
    return Key % HashTableSize;
}
Selecting an appropriate HashTableSize is important to the success of this method. For example, a HashTableSize of two would yield even hash values for even Keys, and odd hash values for odd Keys. This is an undesirable property, as all keys would hash to the same value if they happened to be even. If HashTableSize is a power of two, then the hash function simply selects a subset of the Key bits as the table index. To obtain a more random scattering, HashTableSize should be a prime number not too close to a power of two.

• Multiplication method (tablesize = 2^n). The multiplication method may be used for a HashTableSize that is a power of 2. The Key is multiplied by a constant, and then the necessary bits are extracted to index into the table. Knuth recommends using the fractional part of the product of the key and the golden ratio, or (√5 − 1)/2. For example, assuming a word size of 8 bits, the golden ratio is multiplied by 2^8 to obtain 158. The product of the 8-bit key and 158 results in a 16-bit integer. For a table size of 2^5 the 5 most significant bits of the least significant word are extracted for the hash value. The following definitions may be used for the multiplication method:
/* 8-bit index */
typedef unsigned char HashIndexType;
static const HashIndexType K = 158;

/* 16-bit index */
typedef unsigned short int HashIndexType;
static const HashIndexType K = 40503;

/* 32-bit index */
typedef unsigned long int HashIndexType;
static const HashIndexType K = 2654435769;

/* w = bitwidth(HashIndexType), size of table = 2**m */
static const int S = w - m;
HashIndexType HashValue = (HashIndexType)(K * Key) >> S;
For example, if HashTableSize is 1024 (2^10), then a 16-bit index is sufficient and S would be assigned a value of 16 – 10 = 6. Thus, we have:
typedef unsigned short int HashIndexType;

HashIndexType Hash(int Key) {
    static const HashIndexType K = 40503;
    static const int S = 6;
    return (HashIndexType)(K * Key) >> S;
}
• Variable string addition method (tablesize = 256). To hash a variable-length string, each character is added, modulo 256, to a total. A HashValue, range 0-255, is computed.
typedef unsigned char HashIndexType;

HashIndexType Hash(char *str) {
    HashIndexType h = 0;
    while (*str) h += *str++;
    return h;
}
• Variable string exclusive-or method (tablesize = 256). This method is similar to the addition method, but successfully distinguishes similar words and anagrams. To obtain a hash value in the range 0-255, all bytes in the string are exclusive-or'd together. However, in the process of doing each exclusive-or, a random component is introduced.
typedef unsigned char HashIndexType;
unsigned char Rand8[256];

HashIndexType Hash(char *str) {
    unsigned char h = 0;
    while (*str) h = Rand8[h ^ *str++];
    return h;
}
Rand8 is a table of 256 unique 8-bit random numbers. The exact ordering is not critical.
The exclusive-or method has its basis in cryptography, and is quite effective (Pearson [1990]).

• Variable string exclusive-or method (tablesize ≤ 65536). If we hash the string twice, we may derive a hash value for an arbitrary table size up to 65536. The second time the string is hashed, one is added to the first character. Then the two 8-bit hash values are concatenated together to form a 16-bit hash value.
typedef unsigned short int HashIndexType;
unsigned char Rand8[256];

HashIndexType Hash(char *str) {
    HashIndexType h;
    unsigned char h1, h2;

    if (*str == 0) return 0;
    h1 = *str; h2 = *str + 1; str++;
    while (*str) {
        h1 = Rand8[h1 ^ *str];
        h2 = Rand8[h2 ^ *str];
        str++;
    }
    /* h is in range 0..65535 */
    h = ((HashIndexType)h1 << 8) | (HashIndexType)h2;
    /* use division method to scale */
    return h % HashTableSize;
}
Assuming n data items, the hash table size should be large enough to accommodate a reasonable number of entries. As seen in Table 3-1, a small table size substantially increases the average time to find a key. A hash table may be viewed as a collection of linked lists. As the table becomes larger, the number of lists increases, and the average number of nodes on each list decreases. If the table size is 1, then the table is really a single linked list of length n. Assuming a perfect hash function, a table size of 2 has two lists of length n/2. If the table size is 100, then we have 100 lists of length n/100. This considerably reduces the length of the list to be searched. There is considerable leeway in the choice of table size.
size   time (µs)     size   time (µs)
1      869           128    9
2      432           256    6
4      214           512    4
8      106           1024   4
16     54            2048   3
32     28            4096   3
64     15            8192   3

Table 3-1: HashTableSize vs. Average Search Time (µs), 4096 entries
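The chaining scheme described in this section can be sketched in a few lines of C: hash by the division method and insert at the head of the chosen list. The names HASH_SIZE, ht_insert, and ht_find are assumptions for illustration, not the original has.c source.

```c
#include <assert.h>
#include <stdlib.h>

#define HASH_SIZE 8   /* table size 8, as in Figure 3-1 */

typedef struct HNode {
    int key;
    struct HNode *next;
} HNode;

static HNode *table[HASH_SIZE];   /* each slot heads a linked list */

/* Insert a key at the beginning of its list (chaining). */
void ht_insert(int key)
{
    HNode *n = malloc(sizeof *n);
    int i = key % HASH_SIZE;      /* division method */
    n->key = key;
    n->next = table[i];
    table[i] = n;
}

/* Return 1 if key is in the table, 0 otherwise. */
int ht_find(int key)
{
    for (HNode *p = table[key % HASH_SIZE]; p; p = p->next)
        if (p->key == key)
            return 1;
    return 0;
}
```

Inserting 11 places it on the list at table[3] (11 mod 8 = 3), exactly as in the worked example above.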
Implementation
Source for the hash table algorithm may be found in file has.c. Typedef T and comparison operator compEQ should be altered to reflect the data stored in the table. The hashTableSize must be determined and the hashTable allocated. The division method was used in the hash function. Function insertNode allocates a new node and inserts it in the table. Function deleteNode deletes and frees a node from the table. Function findNode searches the table for a particular value.
3.2 Binary Search Trees
In the Introduction, we used the binary search algorithm to find data stored in an array. This method is very effective, as each iteration reduced the number of items to search by one-half. However, since data was stored in an array, insertions and deletions were not efficient. Binary search trees store data in nodes that are linked in a tree-like fashion. For randomly inserted data, search time is O(lg n). Worst-case behavior occurs when ordered data is inserted. In this case the search time is O(n). See Cormen [1990] for a more detailed description.
Theory
A binary search tree is a tree where each node has a left and right child. Either child, or both children, may be missing. Figure 3-2 illustrates a binary search tree. Assuming k represents the value of a given node, then a binary search tree also has the following property: all children to the left of the node have values smaller than k, and all children to the right of the node have values larger than k. The top of a tree is known as the root, and the exposed nodes at the bottom are known as leaves. In Figure 3-2, the root is node 20 and the leaves are nodes 4, 16, 37, and 43. The height of a tree is the length of the longest path from root to leaf. For this example the tree height is 2.
        20
       /  \
      7    38
     / \   / \
    4  16 37 43
Figure 3-2: A Binary Search Tree

To search a tree for a given value, we start at the root and work down. For example, to search for 16, we first note that 16 < 20 and we traverse to the left child. The second comparison finds that 16 > 7, so we traverse to the right child. On the third comparison, we succeed.
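The search walk just described can be sketched in C. The TNode type and function name are illustrative assumptions (the original bin.c uses a typedef T with comparison operators compLT and compEQ).

```c
#include <assert.h>
#include <stddef.h>

typedef struct TNode {
    int data;
    struct TNode *left, *right;
} TNode;

/* Walk from the root: smaller keys are on the left,
   larger keys on the right.  Returns the node, or NULL. */
TNode *tree_find(TNode *root, int key)
{
    while (root != NULL) {
        if (key < root->data)
            root = root->left;
        else if (key > root->data)
            root = root->right;
        else
            return root;            /* found */
    }
    return NULL;                    /* not present */
}
```

On the tree of Figure 3-2, searching for 16 follows the path 20 → 7 → 16, three comparisons as in the text.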
4 → 7 → 16 → 20 → 37 → 38 → 43   (each node has only a right child)

Figure 3-3: An Unbalanced Binary Search Tree

Each comparison results in reducing the number of items to inspect by one-half. In this respect, the algorithm is similar to a binary search on an array. However, this is true only if the tree is balanced. Figure 3-3 shows another tree containing the same values. While it is a binary search tree, its behavior is more like that of a linked list, with search time increasing proportional to the number of elements stored.
Insertion and Deletion
Let us examine insertions in a binary search tree to determine the conditions that can cause an unbalanced tree. To insert an 18 in the tree in Figure 3-2, we first search for that number. This causes us to arrive at node 16 with nowhere to go. Since 18 > 16, we simply add node 18 as the right child of node 16 (Figure 3-4).
        20
       /  \
      7    38
     / \   / \
    4  16 37 43
         \
          18

Figure 3-4: Binary Tree After Adding Node 18
Now we can see how an unbalanced tree can occur. If the data is presented in an ascending sequence, each node will be added to the right of the previous node. This will create one long chain, or linked list. However, if data is presented for insertion in a random order, then a more balanced tree is possible. Deletions are similar, but require that the binary search tree property be maintained. For example, if node 20 in Figure 3-4 is removed, it must be replaced by node 37. This results in the tree shown in Figure 3-5. The rationale for this choice is as follows. The successor for node 20 must be chosen such that all nodes to the right are larger. Therefore we need to select the smallest valued node to the right of node 20. To make the selection, chain once to the right (node 38), and then chain to the left until the last node is found (node 37). This is the successor for node 20.
        37
       /  \
      7    38
     / \     \
    4  16    43
         \
          18

Figure 3-5: Binary Tree After Deleting Node 20
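The insertion walk described above, search until the walk falls off the tree, then attach a new leaf, can be sketched as follows. Names are illustrative assumptions, not the original bin.c source.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct BNode {
    int data;
    struct BNode *left, *right;
} BNode;

/* Insert key into the tree rooted at root and return the
   (possibly new) root.  An empty spot becomes a new leaf. */
BNode *tree_insert(BNode *root, int key)
{
    if (root == NULL) {             /* nowhere to go: attach leaf here */
        BNode *n = malloc(sizeof *n);
        n->data = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->data)
        root->left = tree_insert(root->left, key);
    else
        root->right = tree_insert(root->right, key);
    return root;
}
```

Inserting 18 into a tree containing 20, 7, and 16 walks 20 → 7 → 16 and attaches 18 as the right child of 16, as in Figure 3-4.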
Implementation
Source for the binary search tree algorithm may be found in file bin.c. Typedef T and comparison operators compLT and compEQ should be altered to reflect the data stored in the tree. Each Node consists of left, right, and parent pointers designating each child and the parent. Data is stored in the data field. The tree is based at root, and is initially NULL. Function insertNode allocates a new node and inserts it in the tree. Function deleteNode deletes and frees a node from the tree. Function findNode searches the tree for a particular value.
3.3 Red-Black Trees

Binary search trees work best when they are balanced, that is, when the path length from root to any leaf is within some bounds. The red-black tree algorithm is a method for balancing trees. The name derives from the fact that each node is colored red or black, and the color of the node is instrumental in determining the balance of the tree. During insert and delete operations, nodes may be rotated to maintain tree balance. Both average and worst-case search time is O(lg n). See Cormen [1990] for details.
Theory
A red-black tree is a balanced binary search tree with the following properties:

1. Every node is colored red or black.
2. Every leaf is a NIL node, and is colored black.
3. If a node is red, then both its children are black.
4. Every simple path from a node to a descendant leaf contains the same number of black nodes.

The number of black nodes on a path from root to leaf is known as the black height of a tree. These properties guarantee that any path from the root to a leaf is no more than twice as long as any other path. To see why this is true, consider a tree with a black height of two. The shortest distance from root to leaf is two, where both nodes are black. The longest distance from root to leaf is four, where the nodes are colored (root to leaf): red, black, red, black. It is not possible to insert more black nodes as this would violate property 4, the black-height requirement. Since red nodes must have black children (property 3), having two red nodes in a row is not allowed. The longest path we can construct consists of an alternation of red and black nodes, or twice the length of a path containing only black nodes. All operations on the tree must maintain the properties listed above. In particular, operations that insert or delete items from the tree must abide by these rules.
Insertion
To insert a node, we search the tree for an insertion point, and add the node to the tree. The new node replaces an existing NIL node at the bottom of the tree, and has two NIL nodes as children. In the implementation, a NIL node is simply a pointer to a common sentinel node that is colored black. After insertion, the new node is colored red. Then the parent of the node is examined to determine if the red-black tree properties have been violated. If necessary, we recolor the node and do rotations to balance the tree. By inserting a red node with two NIL children, we have preserved the black-height property (property 4). However, property 3 may be violated. This property states that both children of a red node must be black. Although both children of the new node are black (they're NIL), consider the case where the parent of the new node is red. Inserting a red node under a red parent would violate this property. There are two cases to consider:

• Red parent, red uncle: Figure 3-6 illustrates a red-red violation. Node X is the newly inserted node, with both parent and uncle colored red. A simple recoloring removes the red-red violation. After recoloring, the grandparent (node B) must be checked for validity, as its parent may be red. Note that this has the effect of propagating a red node up the tree. On completion, the root of the tree is marked black. If it was originally red, then this has the effect of increasing the black-height of the tree.
• Red parent, black uncle: Figure 3-7 illustrates a red-red violation, where the uncle is colored black. Here the nodes may be rotated, with the subtrees adjusted as shown. At this point the algorithm may terminate, as there are no red-red conflicts and the top of the subtree (node A) is colored black. Note that if node X was originally a right child, a left rotation would be done first, making the node a left child.
Each adjustment made while inserting a node causes us to travel up the tree one step. At most one rotation (2 if the node is a right child) will be done, as the algorithm terminates in this case. The technique for deletion is similar.
[Diagram: grandparent B (black) with red parent A, red uncle C, and newly inserted red node X. After recoloring, A and C become black and B becomes red.]

Figure 3-6: Insertion – Red Parent, Red Uncle
[Diagram: grandparent B (black) with red parent A, black uncle C, and newly inserted red node X (subtrees α, β, γ, δ, ε). After rotation, A (black) becomes the subtree root, with red children X and B; C remains black below B.]

Figure 3-7: Insertion – Red Parent, Black Uncle
Implementation
Source for the red-black tree algorithm may be found in file rbt.c. Typedef T and comparison operators compLT and compEQ should be altered to reflect the data stored in the tree. Each Node consists of left, right, and parent pointers designating each child and the parent. The node color is stored in color, and is either RED or BLACK. The data is stored in the data field. All leaf nodes of the tree are sentinel nodes, to simplify coding. The tree is based at root, and initially is a sentinel node. Function insertNode allocates a new node and inserts it in the tree. Subsequently, it calls insertFixup to ensure that the red-black tree properties are maintained. Function deleteNode deletes a node from the tree. To maintain red-black tree properties, deleteFixup is called. Function findNode searches the tree for a particular value.
3.4 Skip Lists
Skip lists are linked lists that allow you to skip to the correct node. The performance bottleneck inherent in a sequential scan is avoided, while insertion and deletion remain relatively efficient. Average search time is O(lg n). Worst-case search time is O(n), but this is extremely unlikely. An excellent reference for skip lists is Pugh [1990].
Theory
The indexing scheme employed in skip lists is similar in nature to the method used to look up names in an address book: to look up a name, you index to the tab representing the first character of the desired entry. In Figure 3-8, for example, the topmost list represents a simple linked list with no tabs. Adding tabs (middle figure) facilitates the search. In this case, level-1 pointers are traversed first. Once the correct segment of the list is found, level-0 pointers are traversed to find the specific entry.
[Diagram: three linked lists over the keys abe, art, ben, bob, cal, cat, dan, don: a plain level-0 list; the same list with added level-1 pointers; and a list with level-2 pointers as well.]

Figure 3-8: Skip List Construction
The indexing scheme may be extended as shown in the bottom figure, where we now have an index to the index. To locate an item, level-2 pointers are traversed until the correct segment of the list is identified. Subsequently, level-1 and level-0 pointers are traversed. During insertion the number of pointers required for a new node must be determined. This is easily resolved using a probabilistic technique. A random number generator is used to toss a computer coin. When inserting a new node, the coin is tossed to determine if it should be level-1. If you win, the coin is tossed again to determine if the node should be level-2. Another win, and the coin is tossed to determine if the node should be level-3. This process repeats until you lose. If only one level (level-0) is implemented, the data structure is a simple linked list with O(n) search time. However, if sufficient levels are implemented, the skip list may be viewed as a tree with the root at the highest level, and search time is O(lg n). The skip list algorithm has a probabilistic component, and thus a probabilistic bound on the time required to execute. However, this bound is quite tight in normal circumstances. For example, to search a list containing 1000 items, the probability that search time will be 5 times the average is about 1 in 1,000,000,000,000,000,000.
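The coin-tossing step described above can be sketched as follows (the MAXLEVEL value and the function name are illustrative assumptions; the course's skl.c may differ):

```c
#include <stdlib.h>

#define MAXLEVEL 15   /* cap on index levels (an assumed setting) */

/* Toss a fair coin repeatedly: each win promotes the new node one
 * more level, so a node reaches level n with probability 1/2^n. */
int randomLevel(void) {
    int level = 0;
    while (level < MAXLEVEL && rand() < RAND_MAX / 2)
        level++;
    return level;
}
```

About half of all nodes stay at level 0, a quarter reach level 1, an eighth reach level 2, and so on, which is what gives the structure its tree-like O(lg n) search behavior.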
Implementation
Source for the skip list algorithm may be found in file skl.c. Typedef T and comparison operators compLT and compEQ should be altered to reflect the data stored in the list. In addition, MAXLEVEL should be set based on the maximum size of the dataset. To initialize, initList is called. The list header is allocated and initialized. To indicate an empty list, all levels are set to point to the header. Function insertNode allocates a new node, searches for the correct insertion point, and inserts it in the list. While searching, the update array maintains pointers to the upper-level nodes encountered. This information is subsequently used to establish correct links for the newly inserted node. The newLevel is determined using a random number generator, and the node allocated. The forward links are then established using information from the update array. Function deleteNode deletes and frees a node, and is implemented in a similar manner. Function findNode searches the list for a particular value.
3.5 Comparison
We have seen several ways to construct dictionaries: hash tables, unbalanced binary search trees, red-black trees, and skip lists. There are several factors that influence the choice of an algorithm:

• Sorted output. If sorted output is required, then hash tables are not a viable alternative. Entries are stored in the table based on their hashed value, with no other ordering. For binary trees, the story is different. An in-order tree walk will produce a sorted list. For example:
void WalkTree(Node *P) {
    if (P == NIL) return;
    WalkTree(P->Left);
    /* examine P->Data here */
    WalkTree(P->Right);
}

WalkTree(Root);
To examine skip list nodes in order, simply chain through the level0 pointers. For example:
Node *P = List.Hdr->Forward[0];
while (P != NIL) {
    /* examine P->Data here */
    P = P->Forward[0];
}
• Space. The amount of memory required to store a value should be minimized. This is especially true if many small nodes are to be allocated.

  ♦ For hash tables, only one forward pointer per node is required. In addition, the hash table itself must be allocated.
  ♦ For red-black trees, each node has a left, right, and parent pointer. In addition, the color of each node must be recorded. Although this requires only one bit, more space may be allocated to ensure that the size of the structure is properly aligned. Therefore each node in a red-black tree requires enough space for 3-4 pointers.
  ♦ For skip lists, each node has a level-0 forward pointer. The probability of having a level-1 pointer is ½. The probability of having a level-2 pointer is ¼. In general, the number of forward pointers per node is
n = 1 + 1/2 + 1/4 + ... = 2.
• Time. The algorithm should be efficient. This is especially true if a large dataset is expected. Table 3-2 compares the search time for each algorithm. Note that worst-case behavior for hash tables and skip lists is extremely unlikely. Actual timing tests are described below.

• Simplicity. If the algorithm is short and easy to understand, fewer mistakes may be made. This not only makes your life easy, but the maintenance programmer entrusted with the task of making repairs will appreciate any efforts you make in this area. The number of statements required for each algorithm is listed in Table 3-2.
method            statements   average time   worst-case time
hash table            26          O(1)           O(n)
unbalanced tree       41          O(lg n)        O(n)
red-black tree       120          O(lg n)        O(lg n)
skip list             55          O(lg n)        O(n)

Table 3-2: Comparison of Dictionaries

Average time for insert, search, and delete operations on a database of 65,536 (2^16) randomly input items may be found in Table 3-3. For this test the hash table size was 10,009 and 16 index levels were allowed for the skip list. Although there is some variation in the timings for the four methods, they are close enough so that other considerations should come into play when selecting an algorithm.
method            insert   search   delete
hash table           18        8       10
unbalanced tree      37       17       26
red-black tree       40       16       37
skip list            48       31       35

Table 3-3: Average Time (µs), 65536 Items, Random Input
order           count    hash table   unbalanced tree   red-black tree   skip list
random input       16        4              3                 2              5
                  256        3              4                 4              9
                4,096        3              7                 6             12
               65,536        8             17                16             31
ordered input      16        3              4                 2              4
                  256        3             47                 4              7
                4,096        3          1,033                 6             11
               65,536        7         55,019                 9             15

Table 3-4: Average Search Time (µs)

Table 3-4 shows the average search time for two sets of data: a random set, where all values are unique, and an ordered set, where values are in ascending order. Ordered input creates a worst-case scenario for unbalanced tree algorithms, as the tree ends up being a simple linked list. The times shown are for a single search operation. If we were to search for all items in a database of 65,536 values, a red-black tree algorithm would take .6 seconds, while an unbalanced tree algorithm would take 1 hour.
4. Very Large Files
The previous algorithms have assumed that all data reside in memory. However, there may be times when the dataset is too large to fit in memory, and alternative methods are required. In this section, we will examine techniques for sorting (external sorts) and implementing dictionaries (B-trees) for very large files.
4.1 External Sorting
One method for sorting a file is to load the file into memory, sort the data in memory, then write the results. When the file cannot be loaded into memory due to resource limitations, an external sort is applicable. We will implement an external sort using replacement selection to establish initial runs, followed by a polyphase merge sort to merge the runs into one sorted file. I highly recommend you consult Knuth [1998], as many details have been omitted.
Theory
For clarity, I'll assume that data is on one or more reels of magnetic tape. Figure 4-1 illustrates a 3-way polyphase merge. Initially, in phase A, all data is on tapes T1 and T2. Assume that the beginning of each tape is at the bottom of the frame. There are two sequential runs of data on T1: 4-8, and 6-7. Tape T2 has one run: 5-9. At phase B, we've merged the first run from tapes T1 (4-8) and T2 (5-9) into a longer run on tape T3 (4-5-8-9). Phase C simply renames the tapes, so we may repeat the merge again. In phase D we repeat the merge, with the final output on tape T3.

[Diagram: tape contents of T1, T2, and T3 at phases A through D, showing the runs merging onto T3.]
Figure 4-1: Merge Sort

Several interesting details have been omitted from the previous illustration. For example, how were the initial runs created? And, did you notice that they merged perfectly, with no extra runs on any tapes? Before I explain the method used for constructing initial runs, let me digress for a bit. In 1202, Leonardo Fibonacci presented the following exercise in his Liber Abbaci (Book of the Abacus): "How many pairs of rabbits can be produced from a single pair in a year's time?" We may assume that each pair produces a new pair of offspring every month, each pair becomes fertile at the age of one month, and that rabbits never die. After one month, there will be 2 pairs of rabbits; after two months there will be 3; the following month the original pair and the pair born during the first month will both usher in a new pair, and there will be 5 in all; and so on. This series, where each number is the sum of the two preceding numbers, is known as the Fibonacci sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

Curiously, the Fibonacci series has found widespread application to everything from the arrangement of flowers on plants to studying the efficiency of Euclid's algorithm. There's even a Fibonacci Quarterly journal. And, as you might suspect, the Fibonacci series has something to do with establishing initial runs for external sorts. Recall that we initially had one run on tape T2, and 2 runs on tape T1. Note that the numbers {1,2} are two sequential numbers in the Fibonacci series. After our first merge, we had one run on T1 and one run on T2. Note that the numbers {1,1} are two sequential numbers in the Fibonacci series, only one notch down. We could predict, in fact, that if we had 13 runs on T2, and 21 runs on T1 {13,21}, we would be left with 8 runs on T1 and 13 runs on T3 {8,13} after one pass. Successive passes would result in run counts of {5,8}, {3,5}, {2,3}, {1,1}, and {0,1}, for a total of 7 passes. This arrangement is ideal, and will result in the minimum number of passes. Should data actually be on tape, this is a big savings, as tapes must be mounted and rewound for each pass. For more than 2 tapes, higher-order Fibonacci numbers are used.

Initially, all the data is on one tape. The tape is read, and runs are distributed to other tapes in the system. After the initial runs are created, they are merged as described above. One method we could use to create initial runs is to read a batch of records into memory, sort the records, and write them out. This process would continue until we had exhausted the input tape. An alternative algorithm, replacement selection, allows for longer runs. A buffer is allocated in memory to act as a holding place for several records. Initially, the buffer is filled. Then, the following steps are repeated until the input is exhausted:

• Select the record with the smallest key that is ≥ the key of the last record written.
• If all keys are smaller than the key of the last record written, then we have reached the end of a run. Select the record with the smallest key for the first record of the next run.
• Write the selected record.
• Replace the selected record with a new record from input.
Figure 4-2 illustrates replacement selection for a small file. The beginning of the file is to the right of each frame. To keep things simple, I've allocated a 2-record buffer. Typically, such a buffer would hold thousands of records. We load the buffer in step B, and write the record with the smallest key (6) in step C. This is replaced with the next record (key 8). We select the smallest key ≥ 6 in step D. This is key 7. After writing key 7, we replace it with key 4. This process repeats until step F, where our last key written was 8, and all keys are less than 8. At this point, we terminate the run, and start another.

Step   Input         Buffer   Output
A      5-3-4-8-6-7
B      5-3-4-8       6-7
C      5-3-4         8-7      6
D      5-3           8-4      7-6
E      5             3-4      8-7-6
F                    5-4      3 | 8-7-6
G                    5        4-3 | 8-7-6
H                             5-4-3 | 8-7-6

Figure 4-2: Replacement Selection

This strategy simply utilizes an intermediate buffer to hold values until the appropriate time for output. Using random numbers as input, the average length of a run is twice the length of the buffer. However, if the data is somewhat ordered, runs can be extremely long. Thus, this method is more effective than doing partial sorts. When selecting the next output record, we need to find the smallest key ≥ the last key written. One way to do this is to scan the entire list, searching for the appropriate key. However, when the buffer holds thousands of records, execution time becomes prohibitive. An alternative method is to use a binary tree structure, so that we only compare lg n items.
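The selection loop can be sketched as a toy in-memory version with the figure's 2-record buffer (the function and names are illustrative; the real implementation in ext.c reads records from files and uses a binary tree for selection):

```c
#include <limits.h>

#define BUFRECS 2    /* 2-record buffer, as in the figure */

/* Produce runs from in[0..n-1] into out[], setting runEnd[i] = 1
 * when out[i] is the last record of a run. Returns count written. */
int makeRuns(const int *in, int n, int *out, int *runEnd) {
    int buf[BUFRECS];
    int i = 0, filled = 0, written = 0;
    int last = INT_MIN;                  /* key of last record written */

    while (i < n && filled < BUFRECS)    /* fill the buffer */
        buf[filled++] = in[i++];

    while (filled > 0) {
        /* select the smallest key >= the last key written */
        int best = -1, j;
        for (j = 0; j < filled; j++)
            if (buf[j] >= last && (best < 0 || buf[j] < buf[best]))
                best = j;
        if (best < 0) {                  /* all keys smaller: end run */
            runEnd[written - 1] = 1;
            last = INT_MIN;              /* start the next run */
            continue;
        }
        out[written] = buf[best];        /* write selected record */
        runEnd[written++] = 0;
        last = buf[best];
        if (i < n)
            buf[best] = in[i++];         /* replace from input */
        else
            buf[best] = buf[--filled];   /* input exhausted: shrink */
    }
    if (written > 0)
        runEnd[written - 1] = 1;         /* close the final run */
    return written;
}
```

Feeding it the figure's file (records read in the order 7, 6, 8, 4, 3, 5) produces the two runs 6-7-8 and 3-4-5 seen in the walkthrough.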
Implementation
Source for the external sort algorithm may be found in file ext.c. Function makeRuns calls readRec to read the next record. Function readRec employs the replacement selection algorithm (utilizing a binary tree) to fetch the next record, and makeRuns distributes the records in a Fibonacci distribution. If the number of runs is not a perfect Fibonacci number, dummy runs are simulated at the beginning of each file. Function mergeSort is then called to do a polyphase merge sort on the runs.
4.2 B-Trees
Dictionaries for very large files typically reside on secondary storage, such as a disk. The dictionary is implemented as an index to the actual file and contains the key and record address of data. To implement a dictionary we could use red-black trees, replacing pointers with offsets from the beginning of the index file, and use random access to reference nodes of the tree. However, every transition on a link would imply a disk access, and would be prohibitively expensive. Recall that low-level disk I/O accesses disk by sectors (typically 256 bytes). We could equate node size to sector size, and group several keys together in each node to minimize the number of I/O operations. This is the principle behind B-trees. Good references for B-trees include Knuth [1998] and Cormen [1990]. For B+-trees, consult Aho [1983].
Theory
Figure 4-3 illustrates a B-tree with 3 keys/node. Keys in internal nodes are surrounded by pointers, or record offsets, to keys that are less than or greater than the key value. For example, all keys less than 22 are to the left and all keys greater than 22 are to the right. For simplicity, I have not shown the record address associated with each key.
[Diagram: a two-level B-tree. Root: 22. Internal nodes: 10 16 and 26. Leaves: 4 6 8, 12 14, 18 20, 24, 28 30.]

Figure 4-3: B-Tree

We can locate any key in this 2-level tree with three disk accesses. If we were to group 100 keys/node, we could search over 1,000,000 keys in only three reads. To ensure this property holds, we must maintain a balanced tree during insertion and deletion. During insertion, we examine the child node to verify that it is able to hold an additional node. If not, then a new sibling node is added to the tree, and the child's keys are redistributed to make room for the new node. When descending for insertion and the root is full, then the root is spilled to new children, and the level of the tree increases. A similar action is taken on deletion, where child nodes may be absorbed by the root. This technique for altering the height of the tree maintains a balanced tree.
                   B-Tree             B*-Tree            B+-Tree            B++-Tree
data stored in     any node           any node           leaf only          leaf only
on insert, split   1 x 1 → 2 x ½      2 x 1 → 3 x 2/3    1 x 1 → 2 x ½      3 x 1 → 4 x ¾
on delete, join    2 x ½ → 1 x 1      3 x 2/3 → 2 x 1    2 x ½ → 1 x 1      3 x ½ → 2 x ¾

Table 4-1: B-Tree Implementations

Several variants of the B-tree are listed in Table 4-1. The standard B-tree stores keys and data in both internal and leaf nodes. When descending the tree during insertion, a full child node is first redistributed to adjacent nodes. If the adjacent nodes are also full, then a new node is created, and ½ the keys in the child are moved to the newly created node. During deletion, children that are ½ full first attempt to obtain keys from adjacent nodes. If the adjacent nodes are also ½ full, then two nodes are joined to form one full node. B*-trees are similar, only the nodes are kept 2/3 full. This results in better utilization of space in the tree, and slightly better performance.
[Diagram: a B+-tree. Root: 22. Internal nodes: 10 16 and 26. Leaves: 4 6 8, 10 12 14, 16 18 20, 22 24, 26 28 30.]

Figure 4-4: B+-Tree

Figure 4-4 illustrates a B+-tree. All keys are stored at the leaf level, with their associated data values. Duplicates of the keys appear in internal parent nodes to guide the search. Pointers have a slightly different meaning than in conventional B-trees. The left pointer designates all keys less than the value, while the right pointer designates all keys greater than or equal to (GE) the value. For example, all keys less than 22 are on the left pointer, and all keys greater than or equal to 22 are on the right. Notice that key 22 is duplicated in the leaf, where the associated data may be found. During insertion and deletion, care must be taken to properly update parent nodes. When modifying the first key in a leaf, the last GE pointer found while descending the tree will require modification to reflect the new key value. Since all keys are in leaf nodes, we may link them for sequential access.

The last method, B++-trees, is something of my own invention. The organization is similar to B+-trees, except for the split/join strategy. Assume each node can hold k keys, and the root node holds 3k keys. Before we descend to a child node during insertion, we check to see if it is full. If it is, the keys in the child node and two nodes adjacent to the child are all merged and redistributed. If the two adjacent nodes are also full, then another node is added, resulting in four nodes, each ¾ full. Before we descend to a child node during deletion, we check to see if it is ½ full. If it is, the keys in the child node and two nodes adjacent to the child are all merged and redistributed. If the two adjacent nodes are also ½ full, then they are merged into two nodes, each ¾ full. Note that in each case, the resulting nodes are ¾ full. This is halfway between ½ full and completely full, allowing for an equal number of insertions or deletions in the future. Recall that the root node holds 3k keys.
If the root is full during insertion, we distribute the keys to four new nodes, each ¾ full. This increases the height of the tree. During deletion, we inspect the child nodes. If there are only three child nodes, and they are all ½ full, they are gathered into the root, and the height of the tree decreases. Another way of expressing the operation is to say we are gathering three nodes, and then scattering them. In the case of insertion, where we need an extra node, we scatter to four nodes. For deletion, where a node must be deleted, we scatter to two nodes. The symmetry of the operation allows the gather/scatter routines to be shared by insertion and deletion in the implementation.
Implementation
Source for the B++-tree algorithm may be found in file btr.c. In the implementation-dependent section, you'll need to define bAdrType and eAdrType, the types associated with B-tree file offsets and data file offsets, respectively. You'll also need to provide a callback function that is used by the B++-tree algorithm to compare keys. Functions are provided to insert/delete keys, find keys, and access keys sequentially. Function main, at the bottom of the file, provides a simple illustration for insertion. The code provided allows for multiple indices to the same data. This was implemented by returning a handle when the index is opened. Subsequent accesses are done using the supplied handle. Duplicate keys are allowed. Within one index, all keys must be the same length. A binary search was implemented to search each node. A flexible buffering scheme allows nodes to be retained in memory until the space is needed. If you expect access to be somewhat ordered, increasing the bufCt will reduce paging.
5. Bibliography
Aho, Alfred V. and Jeffrey D. Ullman [1983]. Data Structures and Algorithms. Addison-Wesley, Reading, Massachusetts.

Cormen, Thomas H., Charles E. Leiserson and Ronald L. Rivest [1990]. Introduction to Algorithms. McGraw-Hill, New York.

Knuth, Donald E. [1998]. The Art of Computer Programming, Volume 3, Sorting and Searching. Addison-Wesley, Reading, Massachusetts.

Pearson, Peter K. [1990]. Fast Hashing of Variable-Length Text Strings. Communications of the ACM, 33(6):677-680, June 1990.

Pugh, William [1990]. Skip Lists: A Probabilistic Alternative to Balanced Trees. Communications of the ACM, 33(6):668-676, June 1990.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_int.htm
Introduction
Arrays and linked lists are two basic data structures used to store information. We may wish to search, insert or delete records in a database based on a key value. This section examines the performance of these operations on arrays and linked lists.
Arrays
Figure 1-1 shows an array, seven elements long, containing numeric values. To search the array sequentially, we may use the algorithm in Figure 1-2. The maximum number of comparisons is 7, and occurs when the key we are searching for is in A[6].
Figure 1-1: An Array
int function SequentialSearch (Array A, int Lb, int Ub, int Key);
begin
    for i = Lb to Ub do
        if A[i] = Key then
            return i;
    return -1;
end;
Figure 1-2: Sequential Search
If the data is sorted, a binary search may be done (Figure 1-3). Variables Lb and Ub keep track of the lower bound and upper bound of the array, respectively. We begin by examining the middle element
of the array. If the key we are searching for is less than the middle element, then it must reside in the top half of the array. Thus, we set Ub to (M - 1). This restricts our next iteration through the loop to the top half of the array. In this way, each iteration halves the size of the array to be searched. For example, the first iteration will leave 3 items to test. After the second iteration, there will be 1 item left to test. Therefore it takes only three iterations to find any number.

This is a powerful method. Given an array of 1023 elements, we can narrow the search to 511 items in one comparison. Another comparison, and we're looking at only 255 elements. In fact, we can search the entire array in only 10 comparisons.

In addition to searching, we may wish to insert or delete entries. Unfortunately, an array is not a good arrangement for these operations. For example, to insert the number 18 in Figure 1-1, we would need to shift A[3]...A[6] down by one slot. Then we could copy number 18 into A[3]. A similar problem arises when deleting numbers. To improve the efficiency of insert and delete operations, linked lists may be used.

int function BinarySearch (Array A, int Lb, int Ub, int Key);
begin
    do forever
        M = (Lb + Ub)/2;
        if (Key < A[M]) then
            Ub = M - 1;
        else if (Key > A[M]) then
            Lb = M + 1;
        else
            return M;
        if (Lb > Ub) then
            return -1;
end;
Figure 1-3: Binary Search
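The pseudocode above translates almost directly into C; this sketch (an illustration, not taken from the course source) returns the index of the key, or -1 if it is absent:

```c
/* Binary search a sorted array a[lb..ub] for key.
 * Returns the index of key, or -1 if it is not present. */
int binarySearch(const int *a, int lb, int ub, int key) {
    while (lb <= ub) {
        int m = lb + (ub - lb) / 2;   /* middle element; this form
                                         avoids overflow of lb + ub */
        if (key < a[m])
            ub = m - 1;               /* key must be in a[lb..m-1] */
        else if (key > a[m])
            lb = m + 1;               /* key must be in a[m+1..ub] */
        else
            return m;                 /* found */
    }
    return -1;                        /* bounds crossed: not found */
}
```

Moving the bounds check into the loop condition also handles an initially empty range cleanly.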
Linked Lists
In Figure 1-4, we have the same values stored in a linked list. Assuming pointers X and P, as shown in the figure, value 18 may be inserted as follows:

X->Next = P->Next;
P->Next = X;

Insertion and deletion operations are very efficient using linked lists. You may be wondering how pointer P was set in the first place. Well, we had to do a sequential search to find the insertion point
X. Although we improved our performance for insertion/deletion, it has been at the expense of search time.
Figure 1-4: A Linked List
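The two-assignment insert, together with the sequential search that sets P, might be sketched like this (type and function names are illustrative, with a dummy header node assumed at the front of the list):

```c
#include <stdlib.h>

typedef struct LNode {
    int Data;
    struct LNode *Next;
} LNode;

/* Sequential search for the insertion point: the last node whose
 * successor's key is still less than key (list is kept sorted). */
LNode *findInsertPoint(LNode *head, int key) {
    LNode *P = head;
    while (P->Next != NULL && P->Next->Data < key)
        P = P->Next;
    return P;
}

/* Insert value after node P: the two pointer assignments from the
 * text. Returns the new node, or NULL if allocation fails. */
LNode *insertAfter(LNode *P, int value) {
    LNode *X = malloc(sizeof *X);
    if (X == NULL) return NULL;
    X->Data = value;
    X->Next = P->Next;
    P->Next = X;
    return X;
}
```

The insert itself is O(1); the cost has simply moved into the sequential search that locates P.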
Timing Estimates
Several methods may be used to compare the performance of algorithms. One way is simply to run several tests for each algorithm and compare the timings. Another way is to estimate the time required. For example, we may state that search time is O(n) (big-oh of n). This means that search time, for large n, is proportional to the number of items n in the list. Consequently, we would expect search time to triple if our list increased in size by a factor of three. The big-O notation does not describe the exact time that an algorithm takes, but only indicates an upper bound on execution time within a constant factor. If an algorithm takes O(n^2) time, then execution time grows no worse than the square of the size of the list.

n             lg n      n lg n         n^1.25              n^2
1                0           0              1                    1
16               4          64             32                  256
256              8       2,048          1,024               65,536
4,096           12      49,152         32,768           16,777,216
65,536          16   1,048,576      1,048,576        4,294,967,296
1,048,576       20  20,971,520     33,554,432    1,099,511,627,776
16,777,216      24 402,653,184  1,073,741,824  281,474,976,710,656

Table 1-1: Growth Rates

Table 1-1 illustrates growth rates for various functions. A growth rate of O(lg n) occurs for algorithms similar to the binary search. The lg (logarithm, base 2) function increases by one when n is doubled.
Recall that we can search twice as many items with one more comparison in the binary search. Thus the binary search is an O(lg n) algorithm. If the values in Table 1-1 represented microseconds, then an O(lg n) algorithm may take 20 microseconds to process 1,048,576 items, an O(n^1.25) algorithm might take 33 seconds, and an O(n^2) algorithm might take up to 12 days! In the following chapters a timing estimate for each algorithm, using big-O notation, will be included. For a more formal derivation of these formulas you may wish to consult the references.
Summary
As we have seen, sorted arrays may be searched efficiently using a binary search. However, we must have a sorted array to start with. In the next section various ways to sort arrays will be examined. It turns out that this is computationally expensive, and considerable research has been done to make sorting algorithms as efficient as possible. Linked lists improved the efficiency of insert and delete operations, but searches were sequential and time-consuming. Algorithms exist that do all three operations efficiently, and they will be discussed in the section on dictionaries.
Sorting
Several algorithms are presented, including insertion sort, shell sort, and quicksort. Sorting by insertion is the simplest method, and doesn't require any additional storage. Shell sort is a simple modification that improves performance significantly. Probably the most efficient and popular method is quicksort, which is the method of choice for large arrays.
Insertion Sort
One of the simplest methods to sort an array is an insertion sort. An example of an insertion sort occurs in everyday life while playing cards. To sort the cards in your hand you extract a card, shift the remaining cards, and then insert the extracted card in the correct place. This process is repeated until all the cards are in the correct sequence. Both average and worst-case time is O(n^2). For further reading, consult Knuth [1998].
Theory
Starting near the top of the array in Figure 2-1(a), we extract the 3. Then the above elements are shifted down until we find the correct place to insert the 3. This process repeats in Figure 2-1(b) with the next number. Finally, in Figure 2-1(c), we complete the sort by inserting 2 in the correct place.
Figure 2-1: Insertion Sort
Assuming there are n elements in the array, we must index through n - 1 entries. For each entry, we may need to examine and shift up to n - 1 other entries, resulting in an O(n^2) algorithm. The insertion sort is an in-place sort: we sort the array in place, and no extra memory is required. The insertion sort is also a stable sort. Stable sorts retain the original ordering of keys when identical keys are present in the input data.
Implementation
An ANSI C implementation for insertion sort is included. Typedef T and comparison operator compGT should be altered to reflect the data stored in the table.
/* insert sort */

#include <stdio.h>
#include <stdlib.h>

typedef int T;          /* type of item to be sorted */
typedef int tblIndex;   /* type of subscript */

#define compGT(a,b) (a > b)

void insertSort(T *a, tblIndex lb, tblIndex ub) {
    T t;
    tblIndex i, j;

    /**************************
     *  sort array a[lb..ub]  *
     **************************/
    for (i = lb + 1; i <= ub; i++) {
        t = a[i];

        /* Shift elements down until */
        /* insertion point found.    */
        for (j = i - 1; j >= lb && compGT(a[j], t); j--)
            a[j+1] = a[j];

        /* insert */
        a[j+1] = t;
    }
}

void fill(T *a, tblIndex lb, tblIndex ub) {
    tblIndex i;
    srand(1);
    for (i = lb; i <= ub; i++) a[i] = rand();
}

int main(int argc, char *argv[]) {
    tblIndex maxnum, lb, ub;
    T *a;

    /* command-line:
     *
     *   ins maxnum
     *
     *   ins 2000
     *       sorts 2000 records
     */
    maxnum = atoi(argv[1]);
    lb = 0; ub = maxnum - 1;
    if ((a = malloc(maxnum * sizeof(T))) == 0) {
        fprintf(stderr, "insufficient memory (a)\n");
        exit(1);
    }
    fill(a, lb, ub);
    insertSort(a, lb, ub);
    return 0;
}
Shell sort, developed by Donald L. Shell, is a non-stable in-place sort. Shell sort improves on the efficiency of insertion sort by quickly shifting values to their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). For further reading, consult Knuth [1998].
Theory
In Figure 2-2(a) we have an example of sorting by insertion. First we extract 1, shift 3 and 5 down one slot, and then insert the 1, for a count of 2 shifts. In the next frame, two shifts are required before we can insert the 2. The process continues until the last frame, where a total of 2 + 2 + 1 = 5 shifts have been made.

In Figure 2-2(b) an example of shell sort is illustrated. We begin by doing an insertion sort using a spacing of two. In the first frame we examine the pair 3-1. Extracting 1, we shift 3 down one slot for a shift count of 1. Next we examine the pair 5-2. We extract 2, shift 5 down, and then insert 2. After sorting with a spacing of two, a final pass is made with a spacing of one. This is simply the traditional insertion sort. The total shift count using shell sort is 1 + 1 + 1 = 3. By using an initial spacing larger than one, we were able to quickly shift values to their proper destination.
Figure 2-2: Shell Sort
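The shift counts above can be checked mechanically. The sketch below counts shifts for an insertion pass with an arbitrary spacing; the input sequence 3 5 1 2 4 is an assumption read off the pairings described in the text, and the function name is illustrative:

```c
#include <assert.h>

/* Count the element shifts made when insertion-sorting a[0..n-1] with
 * the given spacing (spacing == 1 is the plain insertion sort of
 * Figure 2-2(a)). */
int countShifts(int *a, int n, int spacing) {
    int shifts = 0;
    int i, j, t;
    for (i = spacing; i < n; i++) {
        t = a[i];
        for (j = i - spacing; j >= 0 && a[j] > t; j -= spacing) {
            a[j + spacing] = a[j];
            shifts++;
        }
        a[j + spacing] = t;
    }
    return shifts;
}
```

On the assumed input, a single spacing-1 pass performs 5 shifts, while a spacing-2 pass followed by a spacing-1 pass performs 2 + 1 = 3, matching the totals quoted above.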
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_shl.htm (1 of 2) [3/23/2004 3:06:40 PM]
Various spacings may be used to implement a shell sort. Typically the array is sorted with a large spacing, the spacing reduced, and the array sorted again. On the final sort, spacing is one. Although the shell sort is easy to comprehend, formal analysis is difficult. In particular, optimal spacing values elude theoreticians. Knuth has experimented with several values and recommends that spacing h for an array of size N be based on the following formula:

    Let h_1 = 1, h_(s+1) = 3h_s + 1, and stop with h_t when h_(t+2) >= N.

Thus, values of h are computed as follows:

    h_1 = 1
    h_2 = (3 x 1) + 1 = 4
    h_3 = (3 x 4) + 1 = 13
    h_4 = (3 x 13) + 1 = 40
    h_5 = (3 x 40) + 1 = 121

To sort 100 items we first find an h_s such that h_s >= 100. For 100 items, h_5 is selected. Our final value (h_t) is two steps lower, or h_3. Therefore our sequence of h values will be 13-4-1. Once the initial h value has been determined, subsequent values may be calculated using the formula h_(s-1) = floor(h_s / 3).
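The rule can be coded directly; this sketch mirrors the increment computation used in the shell sort implementation (the function name is illustrative):

```c
#include <assert.h>

/* Largest starting increment per Knuth's rule: grow h by h = 3h + 1
 * while h < n, then step back twice (h /= 3, h /= 3), which implements
 * "stop with h_t when h_(t+2) >= N". */
int startIncrement(int n) {
    int h = 1;
    if (n < 14) return 1;   /* tiny arrays: plain insertion sort */
    while (h < n) h = 3 * h + 1;
    h /= 3;
    h /= 3;
    return h;
}
```

For n = 100 this yields 13, so the increments used are 13, 4, 1, as in the worked example above.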
Implementation
An ANSI-C implementation for shell sort is included. Typedef T and comparison operator compGT should be altered to reflect the data stored in the array. The central portion of the algorithm is an insertion sort with a spacing of h.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_shl.txt
/* shell sort */
#include <stdio.h>
#include <stdlib.h>

typedef int T;          /* type of item to be sorted */
typedef int tblIndex;   /* type of subscript */
#define compGT(a,b) (a > b)

void shellSort(T *a, tblIndex lb, tblIndex ub) {
    tblIndex n, h, i, j;
    T t;

   /**************************
    *  sort array a[lb..ub]  *
    **************************/

    /* compute largest increment */
    n = ub - lb + 1;
    h = 1;
    if (n < 14)
        h = 1;
    else if (sizeof(tblIndex) == 2 && n > 29524)
        h = 3280;
    else {
        while (h < n) h = 3*h + 1;
        h /= 3;
        h /= 3;
    }

    while (h > 0) {

        /* sort-by-insertion in increments of h */
        for (i = lb + h; i <= ub; i++) {
            t = a[i];
            for (j = i-h; j >= lb && compGT(a[j], t); j -= h)
                a[j+h] = a[j];
            a[j+h] = t;
        }

        /* compute next increment */
        h /= 3;
    }
}

void fill(T *a, tblIndex lb, tblIndex ub) {
    tblIndex i;
    srand(1);
    for (i = lb; i <= ub; i++) a[i] = rand();
}

int main(int argc, char *argv[]) {
    tblIndex maxnum, lb, ub;
    T *a;

    /* command-line:
     *
     *   shl maxnum
     *
     *   shl 2000
     *       sorts 2000 records
     */

    maxnum = atoi(argv[1]);
    lb = 0; ub = maxnum - 1;

    if ((a = malloc(maxnum * sizeof(T))) == 0) {
        fprintf (stderr, "insufficient memory (a)\n");
        exit(1);
    }

    fill(a, lb, ub);
    shellSort(a, lb, ub);

    return 0;
}
Quicksort
Although the shell sort algorithm is significantly better than insertion sort, there is still room for improvement. One of the most popular sorting algorithms is quicksort. Quicksort executes in O(n lg n) on average, and O(n^2) in the worst case. However, with proper precautions, worst-case behavior is very unlikely. Quicksort is a non-stable sort. It is not an in-place sort, as stack space is required. For further reading, consult Cormen [1990].
Theory
The quicksort algorithm works by partitioning the array to be sorted, then recursively sorting each partition. In Partition (Figure 2-3), one of the array elements is selected as a pivot value. Values smaller than the pivot value are placed to the left of the pivot, while larger values are placed to the right.

    int function Partition (Array A, int Lb, int Ub);
      begin
        select a pivot from A[Lb]...A[Ub];
        reorder A[Lb]...A[Ub] such that:
          all values to the left of the pivot are <= pivot
          all values to the right of the pivot are >= pivot
        return pivot position;
      end;

    procedure QuickSort (Array A, int Lb, int Ub);
      begin
        if Lb < Ub then
          M = Partition (A, Lb, Ub);
          QuickSort (A, Lb, M - 1);
          QuickSort (A, M + 1, Ub);
      end;
Figure 2-3: Quicksort Algorithm
In Figure 2-4(a), the pivot selected is 3. Indices are run starting at both ends of the array. One index starts on the left and selects an element that is larger than the pivot, while another index starts on the right and selects an element that is smaller than the pivot. In this case, numbers 4 and 1 are selected. These elements are then exchanged, as is shown in Figure 2-4(b). This process repeats until all elements to the left of the pivot are <= the pivot, and all elements to the right of the pivot are >= the pivot.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_qui.htm (1 of 3) [3/23/2004 3:06:56 PM]
QuickSort recursively sorts the two subarrays, resulting in the array shown in Figure 2-4(c).
Figure 2-4: Quicksort Example
As the process proceeds, it may be necessary to move the pivot so that correct ordering is maintained. In this manner, QuickSort succeeds in sorting the array. If we're lucky the pivot selected will be the median of all values, equally dividing the array. For a moment, let's assume that this is the case. Since the array is split in half at each step, and Partition must eventually examine all n elements, the run time is O(n lg n).

To find a pivot value, Partition could simply select the first element (A[Lb]). All other values would be compared to the pivot value, and placed either to the left or right of the pivot as appropriate. However, there is one case that fails miserably. Suppose the array was originally in order. Partition would always select the lowest value as a pivot and split the array with one element in the left partition, and Ub - Lb elements in the other. Each recursive call to quicksort would only diminish the size of the array to be sorted by one. Therefore n recursive calls would be required to do the sort, resulting in an O(n^2) run time. One solution to this problem is to randomly select an item as a pivot. This would make it extremely unlikely that worst-case behavior would occur.
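The random-pivot idea can be sketched as follows. This uses a simplified Lomuto-style partition for clarity; it illustrates the idea only and is not the tuned implementation from the text:

```c
#include <assert.h>
#include <stdlib.h>

static void swapInt(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[lb..ub] around a randomly chosen pivot and return the
 * pivot's final position.  Random selection makes ordered input no
 * longer a systematic worst case. */
static int randomPartition(int *a, int lb, int ub) {
    int p = lb + rand() % (ub - lb + 1);    /* random pivot index */
    int pivot, i, j;
    swapInt(&a[p], &a[ub]);                 /* park pivot at the end */
    pivot = a[ub];
    i = lb - 1;
    for (j = lb; j < ub; j++)
        if (a[j] <= pivot) { i++; swapInt(&a[i], &a[j]); }
    swapInt(&a[i + 1], &a[ub]);
    return i + 1;
}

void randomQuickSort(int *a, int lb, int ub) {
    if (lb < ub) {
        int m = randomPartition(a, lb, ub);
        randomQuickSort(a, lb, m - 1);
        randomQuickSort(a, m + 1, ub);
    }
}
```

With the pivot chosen at random, hitting the one-element split on every level requires an unlucky random draw each time, which is why worst-case behavior becomes extremely unlikely.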
Implementation
An ANSI-C implementation for quicksort is included. Typedef T and comparison operator compGT should be altered to reflect the data stored in the array. Several enhancements have been made to the basic quicksort algorithm:
- The center element is selected as a pivot in partition. If the list is partially ordered, this will be a good choice. Worst-case behavior occurs when the center element happens to be the largest or smallest element each time partition is invoked.
- For short arrays, insertSort is called. Due to recursion and other overhead, quicksort is not an efficient algorithm to use on small arrays. Consequently, any array with fewer than 12 elements is sorted using an insertion sort. The optimal cutoff value is not critical and varies based on the quality of generated code.
- Tail recursion occurs when the last statement in a function is a call to the function itself. Tail recursion may be replaced by iteration, resulting in a better utilization of stack space. This has been done with the second call to QuickSort in Figure 2-3.
- After an array is partitioned, the smallest partition is sorted first. This results in a better utilization of stack space, as short partitions are quickly sorted and dispensed with.
Also included is an ANSI-C implementation of qsort, a standard C library function usually implemented with quicksort. Recursive calls were replaced by explicit stack operations. Table 2-1 shows timing statistics and stack utilization before and after the enhancements were applied.

                  time (µs)             stacksize
    count      before      after     before    after
    16            103         51        540       28
    256         1,630        911        912      112
    4,096      34,183     20,016      1,908      168
    65,536    658,003    460,737      2,436      252

Table 2-1: Effect of Enhancements on Speed and Stack Utilization
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_qui.txt
/* quicksort */
#include <stdio.h>
#include <stdlib.h>

typedef int T;          /* type of item to be sorted */
typedef int tblIndex;   /* type of subscript */
#define compGT(a,b) (a > b)

void insertSort(T *a, tblIndex lb, tblIndex ub) {
    T t;
    tblIndex i, j;

   /**************************
    *  sort array a[lb..ub]  *
    **************************/

    for (i = lb + 1; i <= ub; i++) {
        t = a[i];

        /* Shift elements down until */
        /* insertion point found.    */
        for (j = i-1; j >= lb && compGT(a[j], t); j--)
            a[j+1] = a[j];

        /* insert */
        a[j+1] = t;
    }
}

tblIndex partition(T *a, tblIndex lb, tblIndex ub) {
    T t, pivot;
    tblIndex i, j, p;

   /*******************************
    *  partition array a[lb..ub]  *
    *******************************/

    /* select pivot and exchange with 1st element */
    p = lb + ((ub - lb)>>1);
    pivot = a[p];
    a[p] = a[lb];

    /* sort lb+1..ub based on pivot */
    i = lb+1;
    j = ub;
    while (1) {
        while (i < j && compGT(pivot, a[i])) i++;
        while (j >= i && compGT(a[j], pivot)) j--;
        if (i >= j) break;
        t = a[i];
        a[i] = a[j];
        a[j] = t;
        j--; i++;
    }

    /* pivot belongs in a[j] */
    a[lb] = a[j];
    a[j] = pivot;

    return j;
}

void quickSort(T *a, tblIndex lb, tblIndex ub) {
    tblIndex m;

   /**************************
    *  sort array a[lb..ub]  *
    **************************/

    while (lb < ub) {

        /* quickly sort short lists */
        if (ub - lb <= 12) {
            insertSort(a, lb, ub);
            return;
        }

        /* partition into two segments */
        m = partition (a, lb, ub);

        /* sort the smallest partition    */
        /* to minimize stack requirements */
        if (m - lb <= ub - m) {
            quickSort(a, lb, m - 1);
            lb = m + 1;
        } else {
            quickSort(a, m + 1, ub);
            ub = m - 1;
        }
    }
}

void fill(T *a, tblIndex lb, tblIndex ub) {
    tblIndex i;
    srand(1);
    for (i = lb; i <= ub; i++) a[i] = rand();
}

int main(int argc, char *argv[]) {
    tblIndex maxnum, lb, ub;
    T *a;

    /* command-line:
     *
     *   qui maxnum
     *
     *   qui 2000
     *       sorts 2000 records
     */

    maxnum = atoi(argv[1]);
    lb = 0; ub = maxnum - 1;

    if ((a = malloc(maxnum * sizeof(T))) == 0) {
        fprintf (stderr, "insufficient memory (a)\n");
        exit(1);
    }

    fill(a, lb, ub);
    quickSort(a, lb, ub);

    return 0;
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_qsort.txt
/* qsort() */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

typedef int T;          /* type of item to be sorted */
#define MAXSTACK (sizeof(size_t) * CHAR_BIT)

static void exchange(void *a, void *b, size_t size) {
    size_t i;
    int *ia = a, *ib = b;
    char *ca, *cb;

   /******************
    *  exchange a,b  *
    ******************/

    /* swap int-sized chunks, then any remaining bytes */
    for (i = sizeof(int); i <= size; i += sizeof(int)) {
        int t = *ia;
        *ia++ = *ib;
        *ib++ = t;
    }
    ca = (char *)ia; cb = (char *)ib;
    for (i = i - sizeof(int) + 1; i <= size; i++) {
        char t = *ca;
        *ca++ = *cb;
        *cb++ = t;
    }
}

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *)) {
    void *lbStack[MAXSTACK], *ubStack[MAXSTACK];
    int sp;
    unsigned int offset;

   /********************
    *  ANSI-C qsort()  *
    ********************/

    lbStack[0] = (char *)base;
    ubStack[0] = (char *)base + (nmemb-1)*size;
    for (sp = 0; sp >= 0; sp--) {
        char *lb, *ub, *m;
        char *P, *i, *j;

        lb = lbStack[sp];
        ub = ubStack[sp];

        while (lb < ub) {

            /* select pivot and exchange with 1st element */
            offset = (ub - lb) >> 1;
            P = lb + offset - offset % size;
            exchange (lb, P, size);

            /* partition into two segments */
            i = lb + size;
            j = ub;
            while (1) {
                while (i < j && compar(lb, i) > 0) i += size;
                while (j >= i && compar(j, lb) > 0) j -= size;
                if (i >= j) break;
                exchange (i, j, size);
                j -= size;
                i += size;
            }

            /* pivot belongs in A[j] */
            exchange (lb, j, size);
            m = j;

            /* keep processing smallest segment, and stack largest */
            if (m - lb <= ub - m) {
                if (m + size < ub) {
                    lbStack[sp] = m + size;
                    ubStack[sp++] = ub;
                }
                ub = m - size;
            } else {
                if (m - size > lb) {
                    lbStack[sp] = lb;
                    ubStack[sp++] = m - size;
                }
                lb = m + size;
            }
        }
    }
}

void fill(T *lb, T *ub) {
    T *i;
    srand(1);
    for (i = lb; i <= ub; i++) *i = rand();
}

int Comp(const void *a, const void *b) {
    return *(T *)a - *(T *)b;
}

int main(int argc, char *argv[]) {
    int maxnum;
    int *a, *lb, *ub;

    /* command-line:
     *
     *   qsort maxnum
     *
     *   qsort 2000
     *       sorts 2000 records
     */

    maxnum = atoi(argv[1]);

    if ((a = malloc(maxnum * sizeof(T))) == 0) {
        fprintf (stderr, "insufficient memory (a)\n");
        exit(1);
    }
    lb = a;
    ub = a + maxnum - 1;

    fill(lb, ub);
    qsort(a, maxnum, sizeof(T), Comp);

    return 0;
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_cm1.htm
Comparison
In this section we will compare the sorting algorithms covered: insertion sort, shell sort, and quicksort. There are several factors that influence the choice of a sorting algorithm:
- Stable sort. Recall that a stable sort will leave identical keys in the same relative position in the sorted output. Insertion sort is the only algorithm covered that is stable.
- Space. An in-place sort does not require any extra space to accomplish its task. Both insertion sort and shell sort are in-place sorts. Quicksort requires stack space for recursion, and therefore is not an in-place sort. Tinkering with the algorithm considerably reduced the amount of time required.
- Time. The time required to sort a dataset can easily become astronomical (Table 1-1). Table 2-2 shows the relative timings for each method. The time required to sort a randomly ordered dataset is shown in Table 2-3.
- Simplicity. The number of statements required for each algorithm may be found in Table 2-2. Simpler algorithms result in fewer programming errors.

    method          statements   average time   worst-case time
    insertion sort       9       O(n^2)         O(n^2)
    shell sort          17       O(n^1.25)      O(n^1.5)
    quicksort           21       O(n lg n)      O(n^2)

Table 2-2: Comparison of Sorting Methods
    count      insertion       shell    quicksort
    16             39 µs       45 µs        51 µs
    256         4,969 µs    1,230 µs       911 µs
    4,096      1.315 sec    .033 sec     .020 sec
    65,536   416.437 sec   1.254 sec     .461 sec

Table 2-3: Sort Timings
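Timings like these are machine- and compiler-dependent. A minimal harness of the following shape (a sketch using the standard library qsort for brevity; the function name is illustrative) reproduces the relative comparison on any system:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int cmpInt(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Fill a[0..n-1] with pseudo-random keys, sort with the library qsort,
 * and return elapsed CPU seconds.  Running this for several values of
 * n shows the growth rates behind Table 2-3. */
double timeSort(int *a, int n) {
    clock_t t0, t1;
    int i;
    srand(1);
    for (i = 0; i < n; i++) a[i] = rand();
    t0 = clock();
    qsort(a, n, sizeof(int), cmpInt);
    t1 = clock();
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}
```

Substituting insertSort, shellSort, or quickSort for the qsort call times each method from the text on identical input.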
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_dic.htm
Dictionaries
Dictionaries are data structures that support search, insert, and delete operations. One of the most effective representations is a hash table. Typically, a simple function is applied to the key to determine its place in the dictionary. Also presented are binary trees and red-black trees. Both tree methods use a technique similar to the binary search algorithm to minimize the number of comparisons during search and update operations on the dictionary. Finally, skip lists illustrate a simple approach that utilizes random numbers to construct a dictionary.
Hash Tables
Hash tables are a simple and effective method to implement dictionaries. Average time to search for an element is O(1), while worst-case time is O(n). Cormen [1990] and Knuth [1998] both contain excellent discussions on hashing.
Theory
A hash table is simply an array that is addressed via a hash function. For example, in Figure 3-1, HashTable is an array with 8 elements. Each element is a pointer to a linked list of numeric data. The hash function for this example simply divides the data key by 8, and uses the remainder as an index into the table. This yields a number from 0 to 7. Since the range of indices for HashTable is 0 to 7, we are guaranteed that the index is valid.
Figure 3-1: A Hash Table
To insert a new item in the table, we hash the key to determine which list the item goes on, and then insert the item at the beginning of the list. For example, to insert 11, we divide 11 by 8 giving a remainder of 3. Thus, 11 goes on the list starting at HashTable[3]. To find a number, we hash the number and chain down the correct list to see if it is in the table. To delete a number, we find the number and remove the node from the linked list.

Entries in the hash table are dynamically allocated and entered on a linked list associated with each hash table entry. This technique is known as chaining. An alternative method, where all entries are stored in the hash table itself, is known as direct or open addressing and may be found in the references.
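The bucket arithmetic in the example is ordinary modular division; a tiny sketch (the function name is illustrative):

```c
#include <assert.h>

/* The example's table has 8 chains; a key's chain is its remainder
 * modulo the table size. */
#define HASH_TABLE_SIZE 8

int bucketOf(int key) {
    return key % HASH_TABLE_SIZE;
}
```

For key 11 this returns 3, placing 11 on the chain at HashTable[3], exactly as described above.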
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_has.htm (1 of 5) [3/23/2004 3:07:24 PM]
If the hash function is uniform, or equally distributes the data keys among the hash table indices, then hashing effectively subdivides the list to be searched. Worst-case behavior occurs when all keys hash to the same index. Then we simply have a single linked list that must be sequentially searched. Consequently, it is important to choose a good hash function. Several methods may be used to hash key values. To illustrate the techniques, I will assume unsigned char is 8 bits, unsigned short int is 16 bits and unsigned long int is 32 bits.
- Division method (tablesize = prime). This technique was used in the preceding example. A HashValue, from 0 to (HashTableSize - 1), is computed by dividing the key value by the size of the hash table and taking the remainder. For example:

      typedef int HashIndexType;

      HashIndexType Hash(int Key) {
          return Key % HashTableSize;
      }

  Selecting an appropriate HashTableSize is important to the success of this method. For example, a HashTableSize of two would yield even hash values for even Keys, and odd hash values for odd Keys. This is an undesirable property, as all keys would hash to the same value if they happened to be even. If HashTableSize is a power of two, then the hash function simply selects a subset of the Key bits as the table index. To obtain a more random scattering, HashTableSize should be a prime number not too close to a power of two.
- Multiplication method (tablesize = 2^n). The multiplication method may be used for a HashTableSize that is a power of 2. The Key is multiplied by a constant, and then the necessary bits are extracted to index into the table. Knuth recommends using the fractional part of the product of the key and the golden ratio, or (sqrt(5) - 1)/2. For example, assuming a word size of 8 bits, the golden ratio is multiplied by 2^8 to obtain 158. The product of the 8-bit key and 158 results in a 16-bit integer. For a table size of 2^5 the 5 most significant bits of the least significant word are extracted for the hash value. The following definitions may be used for the multiplication method:

      /* 8-bit index */
      typedef unsigned char HashIndexType;
      static const HashIndexType K = 158;

      /* 16-bit index */
      typedef unsigned short int HashIndexType;
      static const HashIndexType K = 40503;

      /* 32-bit index */
      typedef unsigned long int HashIndexType;
      static const HashIndexType K = 2654435769;

      /* w=bitwidth(HashIndexType), size of table=2**m */
      static const int S = w - m;
      HashIndexType HashValue = (HashIndexType)(K * Key) >> S;

  For example, if HashTableSize is 1024 (2^10), then a 16-bit index is sufficient and S would be assigned a value of 16 - 10 = 6. Thus, we have:

      typedef unsigned short int HashIndexType;

      HashIndexType Hash(int Key) {
          static const HashIndexType K = 40503;
          static const int S = 6;
          return (HashIndexType)(K * Key) >> S;
      }
- Variable string addition method (tablesize = 256). To hash a variable-length string, each character is added, modulo 256, to a total. A HashValue, range 0-255, is computed.

      typedef unsigned char HashIndexType;

      HashIndexType Hash(char *str) {
          HashIndexType h = 0;
          while (*str) h += *str++;
          return h;
      }
- Variable string exclusive-or method (tablesize = 256). This method is similar to the addition method, but successfully distinguishes similar words and anagrams. To obtain a hash value in the range 0-255, all bytes in the string are exclusive-or'd together. However, in the process of doing each exclusive-or, a random component is introduced.

      typedef unsigned char HashIndexType;
      unsigned char Rand8[256];

      HashIndexType Hash(char *str) {
          unsigned char h = 0;
          while (*str) h = Rand8[h ^ *str++];
          return h;
      }

  Rand8 is a table of 256 8-bit unique random numbers. The exact ordering is not critical. The exclusive-or method has its basis in cryptography, and is quite effective (Pearson [1990]).
- Variable string exclusive-or method (tablesize <= 65536). If we hash the string twice, we may derive a hash value for an arbitrary table size up to 65536. The second time the string is hashed, one is added to the first character. Then the two 8-bit hash values are concatenated together to form a 16-bit hash value.

      typedef unsigned short int HashIndexType;
      unsigned char Rand8[256];

      HashIndexType Hash(char *str) {
          HashIndexType h;
          unsigned char h1, h2;

          if (*str == 0) return 0;
          h1 = *str; h2 = *str + 1;
          str++;
          while (*str) {
              h1 = Rand8[h1 ^ *str];
              h2 = Rand8[h2 ^ *str];
              str++;
          }

          /* h is in range 0..65535 */
          h = ((HashIndexType)h1 << 8) | (HashIndexType)h2;

          /* use division method to scale */
          return h % HashTableSize;
      }

Assuming n data items, the hash table size should be large enough to accommodate a reasonable number of entries. As seen in Table 3-1, a small table size substantially increases the average time to find a key. A hash table may be viewed as a collection of linked lists. As the table becomes larger, the number of lists increases, and the average number of nodes on each list decreases. If the table size is 1, then the table is really a single linked list of length n. Assuming a perfect hash function, a table size of 2 has two lists of length n/2. If the table size is 100, then we have 100 lists of length n/100. This considerably reduces the length of the list to be searched. There is considerable leeway in the choice of table size.

    size  time      size  time
       1   869       128     9
       2   432       256     6
       4   214       512     4
       8   106      1024     4
      16    54      2048     3
      32    28      4096     3
      64    15      8192     3
Table 3-1: HashTableSize vs. Average Search Time (µs), 4096 entries
Implementation
An ANSI-C implementation of a hash table is included. Typedef T and comparison operator compEQ should be altered to reflect the data stored in the table. The hashTableSize must be determined and the hashTable allocated. The division method was used in the hash function. Function insertNode allocates a new node and inserts it in the table. Function deleteNode deletes and frees a node from the table. Function findNode searches the table for a particular value.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_has.txt
/* hash table */
#include <stdio.h>
#include <stdlib.h>

/* modify these lines to establish data type */
typedef int T;                  /* type of item to be stored */
typedef int hashTableIndex;     /* index into hash table */
#define compEQ(a,b) (a == b)

typedef struct Node_ {
    struct Node_ *next;         /* next node */
    T data;                     /* data stored in node */
} Node;

Node **hashTable;
int hashTableSize;

hashTableIndex hash(T data) {

   /***********************************
    *  hash function applied to data  *
    ***********************************/

    return (data % hashTableSize);
}

Node *insertNode(T data) {
    Node *p, *p0;
    hashTableIndex bucket;

   /************************************************
    *  allocate node for data and insert in table  *
    ************************************************/

    /* insert node at beginning of list */
    bucket = hash(data);
    if ((p = malloc(sizeof(Node))) == 0) {
        fprintf (stderr, "out of memory (insertNode)\n");
        exit(1);
    }
    p0 = hashTable[bucket];
    hashTable[bucket] = p;
    p->next = p0;
    p->data = data;
    return p;
}

void deleteNode(T data) {
    Node *p0, *p;
    hashTableIndex bucket;

   /********************************************
    *  delete node containing data from table  *
    ********************************************/

    /* find node */
    p0 = 0;
    bucket = hash(data);
    p = hashTable[bucket];
    while (p && !compEQ(p->data, data)) {
        p0 = p;
        p = p->next;
    }
    if (!p) return;
    /* p designates node to delete, remove it from list */
    if (p0)
        /* not first node, p0 points to previous node */
        p0->next = p->next;
    else
        /* first node on chain */
        hashTable[bucket] = p->next;

    free (p);
}

Node *findNode (T data) {
    Node *p;

   /*******************************
    *  find node containing data  *
    *******************************/

    p = hashTable[hash(data)];
    while (p && !compEQ(p->data, data))
        p = p->next;
    return p;
}

int main(int argc, char **argv) {
    int i, *a, maxnum, random;

    /* command-line:
     *
     *   has maxnum hashTableSize [random]
     *
     *   has 2000 100
     *       processes 2000 records, tablesize=100, sequential numbers
     *   has 4000 200 r
     *       processes 4000 records, tablesize=200, random numbers
     */

    maxnum = atoi(argv[1]);
    hashTableSize = atoi(argv[2]);
    random = argc > 3;

    if ((a = malloc(maxnum * sizeof(*a))) == 0) {
        fprintf (stderr, "out of memory (a)\n");
        exit(1);
    }

    if ((hashTable = malloc(hashTableSize * sizeof(Node *))) == 0) {
        fprintf (stderr, "out of memory (hashTable)\n");
        exit(1);
    }

    /* initialize all chains to empty */
    for (i = 0; i < hashTableSize; i++) hashTable[i] = NULL;

    if (random) {
        /* fill "a" with unique random numbers */
        for (i = 0; i < maxnum; i++) a[i] = rand();
        printf ("ran ht, %d items, %d hashTable\n", maxnum, hashTableSize);
    } else {
        for (i = 0; i < maxnum; i++) a[i] = i;
        printf ("seq ht, %d items, %d hashTable\n", maxnum, hashTableSize);
    }

    for (i = 0; i < maxnum; i++) {
        insertNode(a[i]);
    }

    for (i = maxnum-1; i >= 0; i--) {
        findNode(a[i]);
    }

    for (i = maxnum-1; i >= 0; i--) {
        deleteNode(a[i]);
    }

    return 0;
}
Binary Search Trees
In the introduction we used the binary search algorithm to find data stored in an array. This method is very effective, as each iteration reduced the number of items to search by one-half. However, since data was stored in an array, insertions and deletions were not efficient. Binary search trees store data in nodes that are linked in a tree-like fashion. For randomly inserted data, search time is O(lg n). Worst-case behavior occurs when ordered data is inserted. In this case the search time is O(n). See Cormen [1990] for a more detailed description.
Theory
A binary search tree is a tree where each node has a left and right child. Either child, or both children, may be missing. Figure 3-2 illustrates a binary search tree. Assuming k represents the value of a given node, then a binary search tree also has the following property: all children to the left of the node have values smaller than k, and all children to the right of the node have values larger than k. The top of a tree is known as the root, and the exposed nodes at the bottom are known as leaves. In Figure 3-2, the root is node 20 and the leaves are nodes 4, 16, 37, and 43. The height of a tree is the length of the longest path from root to leaf. For this example the tree height is 2.
Figure 3-2: A Binary Search Tree
To search a tree for a given value, we start at the root and work down. For example, to search for 16, we first note that 16 < 20 and we traverse to the left child. The second comparison finds that 16 > 7, so we traverse to the right child. On the third comparison, we succeed. Each comparison results in reducing the number of items to inspect by one-half. In this respect, the algorithm is similar to a binary search on an array. However, this is true only if the tree is balanced. For example, Figure 3-3 shows another tree containing the same values. While it is a binary search tree, its behavior is more like that of a linked list, with search time increasing proportional to the number of elements stored.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_bin.htm (1 of 3) [3/23/2004 3:07:52 PM]
Figure 3-3: An Unbalanced Binary Search Tree
Insertion and Deletion
Let us examine insertions in a binary search tree to determine the conditions that can cause an unbalanced tree. To insert an 18 in the tree in Figure 3-2, we first search for that number. This causes us to arrive at node 16 with nowhere to go. Since 18 > 16, we simply add node 18 to the right child of node 16 (Figure 3-4).

Now we can see how an unbalanced tree can occur. If the data is presented in an ascending sequence, each node will be added to the right of the previous node. This will create one long chain, or linked list. However, if data is presented for insertion in a random order, then a more balanced tree is possible.

Deletions are similar, but require that the binary search tree property be maintained. For example, if node 20 in Figure 3-4 is removed, it must be replaced by node 37. This results in the tree shown in Figure 3-5. The rationale for this choice is as follows. The successor for node 20 must be chosen such that all nodes to the right are larger. Therefore we need to select the smallest valued node to the right of node 20. To make the selection, chain once to the right (node 38), and then chain to the left until the last node is found (node 37). This is the successor for node 20.
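The successor rule ("chain once right, then left to the end") can be sketched directly. The node type below is a stripped-down version of the one in the implementation, with the parent pointer omitted for brevity:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal binary-tree node (illustrative; the text's Node also
 * carries a parent pointer). */
typedef struct TNode_ {
    struct TNode_ *left;
    struct TNode_ *right;
    int data;
} TNode;

/* In-order successor of a node with a right subtree: step once to the
 * right child, then follow left links to the smallest value there. */
TNode *successor(TNode *z) {
    TNode *y = z->right;
    while (y->left != NULL)
        y = y->left;
    return y;
}
```

On the fragment from the example (node 20 with right child 38, whose left child is 37), this returns node 37, the replacement chosen when node 20 is deleted.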
Figure 3-4: Binary Tree After Adding Node 18
Figure 3-5: Binary Tree After Deleting Node 20
Implementation
An ANSI-C implementation for a binary search tree is included. Typedef T and comparison operators compLT and compEQ should be altered to reflect the data stored in the tree. Each Node consists of left, right, and parent pointers designating each child and the parent. Data is stored in the data field. The tree is based at root, and is initially NULL. Function insertNode allocates a new node and inserts it in the tree. Function deleteNode deletes and frees a node from the tree. Function findNode searches the tree for a particular value.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_bin.txt
/* binary search tree */
#include <stdio.h>
#include <stdlib.h>

typedef int T;                  /* type of item to be stored */
#define compLT(a,b) (a < b)
#define compEQ(a,b) (a == b)

typedef struct Node_ {
    struct Node_ *left;         /* left child */
    struct Node_ *right;        /* right child */
    struct Node_ *parent;       /* parent */
    T data;                     /* data stored in node */
} Node;

Node *root = NULL;              /* root of binary tree */

Node *insertNode(T data) {
    Node *x, *current, *parent;

   /***********************************************
    *  allocate node for data and insert in tree  *
    ***********************************************/

    /* find x's parent */
    current = root;
    parent = 0;
    while (current) {
        if (compEQ(data, current->data)) return (current);
        parent = current;
        current = compLT(data, current->data) ?
            current->left : current->right;
    }

    /* setup new node */
    if ((x = malloc (sizeof(*x))) == 0) {
        fprintf (stderr, "insufficient memory (insertNode)\n");
        exit(1);
    }
    x->data = data;
    x->parent = parent;
    x->left = NULL;
    x->right = NULL;

    /* insert x in tree */
    if (parent)
        if (compLT(x->data, parent->data))
            parent->left = x;
        else
            parent->right = x;
    else
        root = x;

    return(x);
}

void deleteNode(Node *z) {
    Node *x, *y;

   /*****************************
    *  delete node z from tree  *
    *****************************/

    if (!z || z == NULL) return;

    /* find tree successor; y will be removed from the parent chain */
    if (z->left == NULL || z->right == NULL)
        y = z;
    else {
        y = z->right;
        while (y->left != NULL) y = y->left;
    }

    /* x is y's only child */
    if (y->left != NULL)
        x = y->left;
    else
        x = y->right;

    /* remove y from the parent chain */
    if (x) x->parent = y->parent;
    if (y->parent)
        if (y == y->parent->left)
            y->parent->left = x;
        else
            y->parent->right = x;
    else
        root = x;

    /* y is the node we're removing, z is the data we're removing;
     * if z and y are not the same, replace z with y. */
    if (y != z) {
        y->left = z->left;
        if (y->left) y->left->parent = y;
        y->right = z->right;
        if (y->right) y->right->parent = y;
        y->parent = z->parent;
        if (z->parent)
            if (z == z->parent->left)
                z->parent->left = y;
            else
                z->parent->right = y;
        else
            root = y;
        free (z);
    } else {
        free (y);
    }
}

Node *findNode(T data) {

   /*******************************
    *  find node containing data  *
    *******************************/

    Node *current = root;
    while (current != NULL)
        if (compEQ(data, current->data))
            return (current);
        else
            current = compLT(data, current->data) ?
                current->left : current->right;
    return(0);
}

int main(int argc, char **argv) {
    int i, *a, maxnum, random;

    /* command-line:
     *
     *   bin maxnum random
     *
     *   bin 5000       // 5000 sequential
     *   bin 2000 r     // 2000 random
     */

    maxnum = atoi(argv[1]);
    random = argc > 2;

    if ((a = malloc(maxnum * sizeof(*a))) == 0) {
        fprintf (stderr, "insufficient memory (a)\n");
        exit(1);
    }

    if (random) {
        /* fill "a" with unique random numbers */
        for (i = 0; i < maxnum; i++) a[i] = rand();
        printf ("ran bt, %d items\n", maxnum);
    } else {
        for (i = 0; i < maxnum; i++) a[i] = i;
        printf ("seq bt, %d items\n", maxnum);
    }

    for (i = 0; i < maxnum; i++)
        insertNode(a[i]);

    for (i = maxnum-1; i >= 0; i--)
        findNode(a[i]);

    for (i = maxnum-1; i >= 0; i--)
        deleteNode(findNode(a[i]));

    return 0;
}
Red-Black Trees
Binary search trees work best when they are balanced, that is, when the path length from the root to any leaf is within some bound. The red-black tree algorithm is a method for balancing trees. The name derives from the fact that each node is colored red or black, and the color of the node is instrumental in determining the balance of the tree. During insert and delete operations, nodes may be rotated to maintain tree balance. Both average and worst-case search time is O(lg n). For details, consult Cormen [1990].
Theory
A red-black tree is a balanced binary search tree with the following properties:

1. Every node is colored red or black.
2. Every leaf is a NIL node, and is colored black.
3. If a node is red, then both its children are black.
4. Every simple path from a node to a descendant leaf contains the same number of black nodes.
The number of black nodes on a path from root to leaf is known as the black-height of a tree. These properties guarantee that any path from the root to a leaf is no more than twice as long as any other. To see why this is true, consider a tree with a black-height of two. The shortest distance from root to leaf is two, where both nodes are black. The longest distance from root to leaf is four, where the nodes are colored (root to leaf): red, black, red, black. It is not possible to insert more black nodes, as this would violate property 4, the black-height requirement. Since red nodes must have black children (property 3), having two red nodes in a row is not allowed. The longest path we can construct thus alternates red and black nodes, and is twice the length of a path containing only black nodes.

All operations on the tree must maintain the properties listed above. In particular, operations which insert or delete items from the tree must abide by these rules.
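The path-length argument can be checked mechanically. The sketch below is a hypothetical helper, not part of the course listing: it computes the black-height of a subtree while verifying properties 3 and 4, returning -1 if a violation is found.

```c
#include <stddef.h>

typedef enum { BLACK, RED } Color;
typedef struct RBNode_ {
    struct RBNode_ *left, *right;
    Color color;
} RBNode;

/* returns the black-height of the subtree rooted at n
 * (counting the NIL leaves as one black node), or -1 if
 * a red node has a red child (property 3) or the
 * black-heights of the two subtrees disagree (property 4) */
int blackHeight(RBNode *n) {
    int lh, rh;
    if (n == NULL) return 1;          /* NIL leaf: black */
    lh = blackHeight(n->left);
    rh = blackHeight(n->right);
    if (lh < 0 || rh < 0 || lh != rh) return -1;
    if (n->color == RED &&
        ((n->left  && n->left->color  == RED) ||
         (n->right && n->right->color == RED)))
        return -1;                    /* red node with red child */
    return lh + (n->color == BLACK ? 1 : 0);
}
```

A black root with two red children passes with black-height 2; hanging a red child under a red node makes the check fail, matching the argument above.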
Insertion
To insert a node, we search the tree for an insertion point and add the node to the tree. The new node replaces an existing NIL node at the bottom of the tree, and has two NIL nodes as children. In the implementation, a NIL node is simply a pointer to a common sentinel node that is colored black. After insertion, the new node is colored red. Then the parent of the node is examined to determine if the red-black tree properties have been violated. If necessary, we recolor the node and do rotations to balance the tree.

By inserting a red node with two NIL children, we have preserved the black-height property (property 4). However, property 3 may be violated. This property states that both children of a red node must be black. Although both children of the new node are black (they are NIL), consider the case where the parent of the new node is red. Inserting a red node under a red parent would violate this property. There are two cases to consider:

• Red parent, red uncle: Figure 36 illustrates a red-red violation. Node X is the newly inserted node, with both parent and uncle colored red. A simple recoloring removes the red-red violation. After recoloring, the grandparent (node B) must be checked for validity, as its parent may be red. Note that this has the effect of propagating a red node up the tree. On completion, the root of the tree is marked black. If it was originally red, then this has the effect of increasing the black-height of the tree.

• Red parent, black uncle: Figure 37 illustrates a red-red violation where the uncle is colored black. Here the nodes may be rotated, with the subtrees adjusted as shown. At this point the algorithm may terminate as there are no red-red conflicts and the top of the subtree (node A) is colored black. Note that if node X was originally a right child, a left rotation would be done first, making the node a left child.

Each adjustment made while inserting a node causes us to travel up the tree one step. At most one rotation (two if the node is a right child) will be done, as the algorithm terminates in this case. The technique for deletion is similar.

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_rbt.htm (1 of 5) [3/23/2004 3:08:14 PM]
Figure 36: Insertion - Red Parent, Red Uncle
Figure 37: Insertion - Red Parent, Black Uncle
Implementation
An ANSI-C implementation for red-black trees is included. Typedef T and comparison operators compLT and compEQ should be altered to reflect the data stored in the tree. Each Node consists of left, right, and parent pointers designating each child and the parent. The node color is stored in color, and is either RED or BLACK. The data is stored in the data field. All leaf nodes of the tree are sentinel nodes, to simplify coding. The tree is based at root, which initially is a sentinel node.

Function insertNode allocates a new node and inserts it in the tree. Subsequently, it calls insertFixup to ensure that the red-black tree properties are maintained. Function deleteNode deletes a node from the tree. To maintain red-black tree properties, deleteFixup is called. Function findNode searches the tree for a particular value.

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_rbt.htm (4 of 5) [3/23/2004 3:08:14 PM]
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_rbt.txt
/* red-black tree */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>

typedef int T;                  /* type of item to be stored */
#define compLT(a,b) (a < b)
#define compEQ(a,b) (a == b)

/* Red-Black tree description */
typedef enum { BLACK, RED } nodeColor;

typedef struct Node_ {
    struct Node_ *left;         /* left child */
    struct Node_ *right;        /* right child */
    struct Node_ *parent;       /* parent */
    nodeColor color;            /* node color (BLACK, RED) */
    T data;                     /* data stored in node */
} Node;

#define NIL &sentinel           /* all leafs are sentinels */
Node sentinel = { NIL, NIL, 0, BLACK, 0};

Node *root = NIL;               /* root of Red-Black tree */

void rotateLeft(Node *x) {

   /**************************
    *  rotate node x to left *
    **************************/

    Node *y = x->right;

    /* establish x->right link */
    x->right = y->left;
    if (y->left != NIL) y->left->parent = x;

    /* establish y->parent link */
    if (y != NIL) y->parent = x->parent;
    if (x->parent) {
        if (x == x->parent->left)
            x->parent->left = y;
        else
            x->parent->right = y;
    } else {
        root = y;
    }

    /* link x and y */
    y->left = x;
    if (x != NIL) x->parent = y;
}

void rotateRight(Node *x) {

   /****************************
    *  rotate node x to right  *
    ****************************/

    Node *y = x->left;

    /* establish x->left link */
    x->left = y->right;
    if (y->right != NIL) y->right->parent = x;

    /* establish y->parent link */
    if (y != NIL) y->parent = x->parent;
    if (x->parent) {
        if (x == x->parent->right)
            x->parent->right = y;
        else
            x->parent->left = y;
    } else {
        root = y;
    }

    /* link x and y */
    y->right = x;
    if (x != NIL) x->parent = y;
}

void insertFixup(Node *x) {

   /*************************************
    *  maintain Red-Black tree balance  *
    *  after inserting node x           *
    *************************************/

    /* check Red-Black properties */
    while (x != root && x->parent->color == RED) {
        /* we have a violation */
        if (x->parent == x->parent->parent->left) {
            Node *y = x->parent->parent->right;
            if (y->color == RED) {

                /* uncle is RED */
                x->parent->color = BLACK;
                y->color = BLACK;
                x->parent->parent->color = RED;
                x = x->parent->parent;
            } else {

                /* uncle is BLACK */
                if (x == x->parent->right) {
                    /* make x a left child */
                    x = x->parent;
                    rotateLeft(x);
                }

                /* recolor and rotate */
                x->parent->color = BLACK;
                x->parent->parent->color = RED;
                rotateRight(x->parent->parent);
            }
        } else {

            /* mirror image of above code */
            Node *y = x->parent->parent->left;
            if (y->color == RED) {

                /* uncle is RED */
                x->parent->color = BLACK;
                y->color = BLACK;
                x->parent->parent->color = RED;
                x = x->parent->parent;
            } else {

                /* uncle is BLACK */
                if (x == x->parent->left) {
                    x = x->parent;
                    rotateRight(x);
                }
                x->parent->color = BLACK;
                x->parent->parent->color = RED;
                rotateLeft(x->parent->parent);
            }
        }
    }
    root->color = BLACK;
}

Node *insertNode(T data) {
    Node *current, *parent, *x;

   /***********************************************
    *  allocate node for data and insert in tree  *
    ***********************************************/

    /* find where node belongs */
    current = root;
    parent = 0;
    while (current != NIL) {
        if (compEQ(data, current->data)) return (current);
        parent = current;
        current = compLT(data, current->data) ?
            current->left : current->right;
    }

    /* setup new node */
    if ((x = malloc (sizeof(*x))) == 0) {
        printf ("insufficient memory (insertNode)\n");
        exit(1);
    }
    x->data = data;
    x->parent = parent;
    x->left = NIL;
    x->right = NIL;
    x->color = RED;

    /* insert node in tree */
    if (parent) {
        if (compLT(data, parent->data))
            parent->left = x;
        else
            parent->right = x;
    } else {
        root = x;
    }

    insertFixup(x);
    return(x);
}

void deleteFixup(Node *x) {

   /*************************************
    *  maintain Red-Black tree balance  *
    *  after deleting node x            *
    *************************************/

    while (x != root && x->color == BLACK) {
        if (x == x->parent->left) {
            Node *w = x->parent->right;
            if (w->color == RED) {
                w->color = BLACK;
                x->parent->color = RED;
                rotateLeft (x->parent);
                w = x->parent->right;
            }
            if (w->left->color == BLACK && w->right->color == BLACK) {
                w->color = RED;
                x = x->parent;
            } else {
                if (w->right->color == BLACK) {
                    w->left->color = BLACK;
                    w->color = RED;
                    rotateRight (w);
                    w = x->parent->right;
                }
                w->color = x->parent->color;
                x->parent->color = BLACK;
                w->right->color = BLACK;
                rotateLeft (x->parent);
                x = root;
            }
        } else {
            Node *w = x->parent->left;
            if (w->color == RED) {
                w->color = BLACK;
                x->parent->color = RED;
                rotateRight (x->parent);
                w = x->parent->left;
            }
            if (w->right->color == BLACK && w->left->color == BLACK) {
                w->color = RED;
                x = x->parent;
            } else {
                if (w->left->color == BLACK) {
                    w->right->color = BLACK;
                    w->color = RED;
                    rotateLeft (w);
                    w = x->parent->left;
                }
                w->color = x->parent->color;
                x->parent->color = BLACK;
                w->left->color = BLACK;
                rotateRight (x->parent);
                x = root;
            }
        }
    }
    x->color = BLACK;
}

void deleteNode(Node *z) {
    Node *x, *y;

   /*****************************
    *  delete node z from tree  *
    *****************************/

    if (!z || z == NIL) return;

    if (z->left == NIL || z->right == NIL) {
        /* y has a NIL node as a child */
        y = z;
    } else {
        /* find tree successor with a NIL node as a child */
        y = z->right;
        while (y->left != NIL) y = y->left;
    }

    /* x is y's only child */
    if (y->left != NIL)
        x = y->left;
    else
        x = y->right;

    /* remove y from the parent chain */
    x->parent = y->parent;
    if (y->parent)
        if (y == y->parent->left)
            y->parent->left = x;
        else
            y->parent->right = x;
    else
        root = x;

    if (y != z) z->data = y->data;
    if (y->color == BLACK)
        deleteFixup (x);
    free (y);
}

Node *findNode(T data) {

   /*******************************
    *  find node containing data  *
    *******************************/

    Node *current = root;
    while (current != NIL)
        if (compEQ(data, current->data))
            return (current);
        else
            current = compLT (data, current->data) ?
                current->left : current->right;
    return(0);
}

int main(int argc, char **argv) {
    int a, maxnum, ct;
    Node *t;

    /* command-line:
     *
     *   rbt maxnum
     *
     *   rbt 2000
     *       process 2000 records
     */

    maxnum = atoi(argv[1]);

    for (ct = maxnum; ct; ct--) {
        a = rand() % 9 + 1;
        if ((t = findNode(a)) != NULL) {
            deleteNode(t);
        } else {
            insertNode(a);
        }
    }
    return 0;
}
Skip Lists
Skip lists are linked lists that allow you to skip to the correct node. The performance bottleneck inherent in a sequential scan is avoided, while insertion and deletion remain relatively efficient. Average search time is O(lg n). Worst-case search time is O(n), but this is extremely unlikely. An excellent reference for skip lists is Pugh [1990].
Theory
The indexing scheme employed in skip lists is similar in nature to the method used to look up names in an address book. To look up a name, you index to the tab representing the first character of the desired entry. In Figure 38, for example, the topmost list represents a simple linked list with no tabs. Adding tabs (middle figure) facilitates the search. In this case, level-1 pointers are traversed. Once the correct segment of the list is found, level-0 pointers are traversed to find the specific entry.
Figure 38: Skip List Construction
The indexing scheme may be extended as shown in the bottom figure, where we now have an index to the index. To locate an item, level-2 pointers are traversed until the correct segment of the list is identified. Subsequently, level-1 and level-0 pointers are traversed.

During insertion the number of pointers required for a new node must be determined. This is easily resolved using a probabilistic technique. A random number generator is used to toss a computer coin. When inserting a new node, the coin is tossed to determine if it should be level-1. If you win, the coin is tossed again to determine if the node should be level-2. Another win, and the coin is tossed to determine if the node should be level-3. This process repeats until you lose. If only one level (level-0) is implemented, the data structure is a simple linked list with O(n) search time. However, if sufficient levels are implemented, the skip list may be viewed as a tree with the root at the highest level, and search time is O(lg n).

The skip list algorithm has a probabilistic component, and thus probabilistic bounds on the time required to execute. However, these bounds are quite tight in normal circumstances. For example, to search a list containing 1000 items, the probability that the search time will be 5 times the average is about 1 in 1,000,000,000,000,000,000.

http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_skl.htm (1 of 2) [3/23/2004 3:08:27 PM]
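The coin-tossing procedure above might be coded as follows. This is a sketch: the name randomLevel and the constant MAXLEVEL are assumptions chosen to match the implementation described below, and levels are numbered from 0 as in the text.

```c
#include <stdlib.h>

#define MAXLEVEL 15   /* highest level a node may have */

/* toss a fair computer coin repeatedly: each win promotes
 * the new node one level; stop on the first loss */
int randomLevel(void) {
    int level = 0;
    while ((rand() & 1) && level < MAXLEVEL)
        level++;
    return level;
}
```

With a fair coin, about half the nodes end up at level 0, a quarter at level 1, an eighth at level 2, and so on.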
Implementation
An ANSI-C implementation for skip lists is included. Typedef T and comparison operators compLT and compEQ should be altered to reflect the data stored in the list. In addition, MAXLEVEL should be set based on the maximum size of the dataset.

To initialize, initList is called. The list header is allocated and initialized. To indicate an empty list, all levels are set to point to the header.

Function insertNode allocates a new node, searches for the correct insertion point, and inserts it in the list. While searching, the update array maintains pointers to the upper-level nodes encountered. This information is subsequently used to establish correct links for the newly inserted node. The newLevel is determined using a random number generator, and the node allocated. The forward links are then established using information from the update array. Function deleteNode deletes and frees a node, and is implemented in a similar manner. Function findNode searches the list for a particular value.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_skl.htm (2 of 2) [3/23/2004 3:08:27 PM]
Skip Lists: A Probabilistic Alternative to Balanced Trees
Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing, and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.

William Pugh
Binary trees can be used for representing abstract data types such as dictionaries and ordered lists. They work well when the elements are inserted in a random order. Some sequences of operations, such as inserting the elements in order, produce degenerate data structures that give very poor performance. If it were possible to randomly permute the list of items to be inserted, trees would work well with high probability for any input sequence. In most cases queries must be answered on-line, so randomly permuting the input is impractical. Balanced tree algorithms re-arrange the tree as operations are performed to maintain certain balance conditions and assure good performance.

Skip lists are a probabilistic alternative to balanced trees. Skip lists are balanced by consulting a random number generator. Although skip lists have bad worst-case performance, no input sequence consistently produces the worst-case performance (much like quicksort when the pivot element is chosen randomly). It is very unlikely a skip list data structure will be significantly unbalanced (e.g., for a dictionary of more than 250 elements, the chance that a search will take more than 3 times the expected time is less than one in a million). Skip lists have balance properties similar to those of search trees built by random insertions, yet do not require insertions to be random.

Balancing a data structure probabilistically is easier than explicitly maintaining the balance. For many applications, skip lists are a more natural representation than trees, also leading to simpler algorithms. The simplicity of skip list algorithms makes them easier to implement and provides significant constant factor speed improvements over balanced tree and self-adjusting tree algorithms. Skip lists are also very space efficient. They can easily be configured to require an average of 1 1/3 pointers per element (or even less) and do not require balance or priority information to be stored with each node.
SKIP LISTS

We might need to examine every node of the list when searching a linked list (Figure 1a). If the list is stored in sorted order and every other node of the list also has a pointer to the node two ahead of it in the list (Figure 1b), we have to examine no more than n/2 + 1 nodes (where n is the length of the list).

Also giving every fourth node a pointer four ahead (Figure 1c) requires that no more than n/4 + 2 nodes be examined. If every (2^i)th node has a pointer 2^i nodes ahead (Figure 1d), the number of nodes that must be examined can be reduced to log2 n while only doubling the number of pointers. This data structure could be used for fast searching, but insertion and deletion would be impractical.

A node that has k forward pointers is called a level k node. If every (2^i)th node has a pointer 2^i nodes ahead, then levels of nodes are distributed in a simple pattern: 50% are level 1, 25% are level 2, 12.5% are level 3 and so on. What would happen if the levels of nodes were chosen randomly, but in the same proportions (e.g., as in Figure 1e)? A node's ith forward pointer, instead of pointing 2^(i-1) nodes ahead, points to the next node of level i or higher. Insertions or deletions would require only local modifications; the level of a node, chosen randomly when the node is inserted, need never change. Some arrangements of levels would give poor execution times, but we will see that such arrangements are rare. Because these data structures are linked lists with extra pointers that skip over intermediate nodes, I named them skip lists.

SKIP LIST ALGORITHMS

This section gives algorithms to search for, insert and delete elements in a dictionary or symbol table. The Search operation returns the contents of the value associated with the desired key or failure if the key is not present. The Insert operation associates a specified key with a new value (inserting the key if it had not already been present). The Delete operation deletes the specified key. It is easy to support additional operations such as "find the minimum key" or "find the next key".

Each element is represented by a node, the level of which is chosen randomly when the node is inserted without regard for the number of elements in the data structure. A level i node has i forward pointers, indexed 1 through i. We do not need to store the level of a node in the node. Levels are capped at some appropriate constant MaxLevel. The level of a list is the maximum level currently in the list (or 1 if the list is empty). The header of a list has forward pointers at levels one through MaxLevel. The forward pointers of the header at levels higher than the current maximum level of the list point to NIL.
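A node with a variable number of forward pointers might be declared along these lines. This is a sketch using a C99 flexible array member; makeNode and the field names are chosen to mirror the pseudocode, not taken from the paper:

```c
#include <stdlib.h>

/* a level-k node carries k forward pointers */
typedef struct SkipNode_ {
    int key;
    struct SkipNode_ *forward[];  /* C99 flexible array member */
} SkipNode;

/* allocate a node with room for lvl forward pointers */
SkipNode *makeNode(int lvl, int key) {
    SkipNode *x = malloc(sizeof(SkipNode) +
                         lvl * sizeof(SkipNode *));
    if (x) x->key = key;
    return x;
}
```

Allocating exactly as many pointers as the node's level is what makes skip lists space efficient: with p = 1/2 the average is two pointers per node.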
[Figure 1 - Linked lists with additional pointers. Panels a-e show the same sorted list of keys 3, 6, 7, 9, 12, 17, 19, 21, 25, 26, terminated by NIL: (a) a plain linked list; (b) every second node has a pointer two ahead; (c) every fourth node has a pointer four ahead; (d) every (2^i)th node has a pointer 2^i nodes ahead; (e) a skip list, with node levels chosen randomly in the same proportions.]
Initialization
An element NIL is allocated and given a key greater than any legal key. All levels of all skip lists are terminated with NIL. A new list is initialized so that the level of the list is equal to 1 and all forward pointers of the list's header point to NIL.
Search Algorithm

We search for an element by traversing forward pointers that do not overshoot the node containing the element being searched for (Figure 2). When no more progress can be made at the current level of forward pointers, the search moves down to the next level. When we can make no more progress at level 1, we must be immediately in front of the node that contains the desired element (if it is in the list).

Insertion and Deletion Algorithms

To insert or delete a node, we simply search and splice, as shown in Figure 3. Figure 4 gives algorithms for insertion and deletion. A vector update is maintained so that when the search is complete (and we are ready to perform the splice), update[i] contains a pointer to the rightmost node of level i or higher that is to the left of the location of the insertion/deletion.

If an insertion generates a node with a level greater than the previous maximum level of the list, we update the maximum level of the list and initialize the appropriate portions of the update vector. After each deletion, we check if we have deleted the maximum element of the list and if so, decrease the maximum level of the list.

Choosing a Random Level

Initially, we discussed a probability distribution where half of the nodes that have level i pointers also have level i+1 pointers. To get away from magic constants, we say that a fraction p of the nodes with level i pointers also have level i+1 pointers (for our original discussion, p = 1/2). Levels are generated randomly by an algorithm equivalent to the one in Figure 5. Levels are generated without reference to the number of elements in the list.
At what level do we start a search? Defining L(n)
In a skip list of 16 elements generated with p = 1/2, we might happen to have 9 elements of level 1, 3 elements of level 2, 3 elements of level 3 and 1 element of level 14 (this would be very unlikely, but it could happen). How should we handle this? If we use the standard algorithm and start our search at level 14, we will do a lot of useless work.

Where should we start the search? Our analysis suggests that ideally we would start a search at the level L where we expect 1/p nodes. This happens when L = log_{1/p} n. Since we will be referring frequently to this formula, we will use L(n) to denote log_{1/p} n.

There are a number of solutions to the problem of deciding how to handle the case where there is an element with an unusually large level in the list.

• Don't worry, be happy. Simply start a search at the highest level present in the list. As we will see in our analysis, the probability that the maximum level in a list of n elements is significantly larger than L(n) is very small. Starting a search at the maximum level in the list does not add more than a small constant to the expected search time. This is the approach used in the algorithms described in this paper.
Search(list, searchKey)
    x := list→header
    -- loop invariant: x→key < searchKey
    for i := list→level downto 1 do
        while x→forward[i]→key < searchKey do
            x := x→forward[i]
    -- x→key < searchKey ≤ x→forward[1]→key
    x := x→forward[1]
    if x→key = searchKey then return x→value
    else return failure

FIGURE 2 - Skip list search algorithm
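Figure 2's pseudocode translates to C roughly as follows. This is a sketch: unlike the paper, which terminates lists with a NIL sentinel holding a key greater than any legal key, this version uses NULL-terminated forward pointers, and levels are indexed from 0; all names here are illustrative.

```c
#include <stddef.h>

typedef struct SLNode_ {
    int key;
    int value;
    struct SLNode_ **forward;  /* forward[0..level-1] */
} SLNode;

typedef struct {
    SLNode *header;
    int level;                 /* number of levels in use */
} SkipList;

/* returns 1 and stores the value in *out if searchKey is
 * present, 0 otherwise */
int slSearch(SkipList *list, int searchKey, int *out) {
    SLNode *x = list->header;
    int i;
    /* descend: never overshoot the target at any level */
    for (i = list->level - 1; i >= 0; i--)
        while (x->forward[i] && x->forward[i]->key < searchKey)
            x = x->forward[i];
    x = x->forward[0];
    if (x && x->key == searchKey) { *out = x->value; return 1; }
    return 0;
}
```

The loop invariant from Figure 2 carries over: after the descent, x is the rightmost node whose key is less than searchKey, so only one final comparison is needed.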
[Figure 3 shows the list 3, 6, 7, 9, 12, 19, 21, 25, 26 with the search path and the pointers update[i]→forward[i] marked ("original list, 17 to be inserted"), and the same list after the insertion of 17, with the updated pointers shown in grey.]
FIGURE 3 - Pictorial description of steps involved in performing an insertion

• Use less than you are given. Although an element may contain room for 14 pointers, we don't need to use all 14. We can choose to utilize only L(n) levels. There are a number of ways to implement this, but they all complicate the algorithms and do not noticeably improve performance, so this approach is not recommended.

• Fix the dice. If we generate a random level that is more than one greater than the current maximum level in the list, we simply use one plus the current maximum level in the list as the level of the new node. In practice and intuitively, this change seems to work well. However, it totally destroys our ability to analyze the resulting algorithms, since the level of a node is no longer completely random. Programmers should probably feel free to implement this, purists should avoid it.
Determining MaxLevel
Since we can safely cap levels at L(n), we should choose MaxLevel = L(N) (where N is an upper bound on the number of elements in a skip list). If p = 1/2, using MaxLevel = 16 is appropriate for data structures containing up to 2^16 elements.
ANALYSIS OF SKIP LIST ALGORITHMS
The time required to execute the Search, Delete and Insert operations is dominated by the time required to search for the appropriate element. For the Insert and Delete operations, there is an additional cost proportional to the level of the node being inserted or deleted. The time required to find an element is proportional to the length of the search path, which is determined by the pattern in which elements with different levels appear as we traverse the list.
Insert(list, searchKey, newValue)
    local update[1..MaxLevel]
    x := list→header
    for i := list→level downto 1 do
        while x→forward[i]→key < searchKey do
            x := x→forward[i]
        -- x→key < searchKey ≤ x→forward[i]→key
        update[i] := x
    x := x→forward[1]
    if x→key = searchKey then x→value := newValue
    else
        lvl := randomLevel()
        if lvl > list→level then
            for i := list→level + 1 to lvl do
                update[i] := list→header
            list→level := lvl
        x := makeNode(lvl, searchKey, value)
        for i := 1 to lvl do
            x→forward[i] := update[i]→forward[i]
            update[i]→forward[i] := x

Delete(list, searchKey)
    local update[1..MaxLevel]
    x := list→header
    for i := list→level downto 1 do
        while x→forward[i]→key < searchKey do
            x := x→forward[i]
        update[i] := x
    x := x→forward[1]
    if x→key = searchKey then
        for i := 1 to list→level do
            if update[i]→forward[i] ≠ x then break
            update[i]→forward[i] := x→forward[i]
        free(x)
        while list→level > 1 and
              list→header→forward[list→level] = NIL do
            list→level := list→level - 1

FIGURE 4 - Skip List insertion and deletion algorithms
Probabilistic Philosophy
The structure of a skip list is determined only by the number of elements in the skip list and the results of consulting the random number generator. The sequence of operations that produced the current skip list does not matter. We assume an adversarial user does not have access to the levels of nodes; otherwise, he could create situations with worst-case running times by deleting all nodes that were not level 1.

The probabilities of poor running times for successive operations on the same data structure are NOT independent; two successive searches for the same element will both take exactly the same time. More will be said about this later.

randomLevel()
    lvl := 1
    -- random() returns a random value in [0...1)
    while random() < p and lvl < MaxLevel do
        lvl := lvl + 1
    return lvl

FIGURE 5 - Algorithm to calculate a random level
Analysis of expected search cost
We analyze the search path backwards, travelling up and to the left. Although the levels of nodes in the list are known and fixed when the search is performed, we act as if the level of a node is being determined only when it is observed while backtracking the search path. At any particular point in the climb, we are at a situation similar to situation a in Figure 6: we are at the ith forward pointer of a node x and we have no knowledge about the levels of nodes to the left of x or about the level of x, other than that the level of x must be at least i. Assume that x is not the header (this is equivalent to assuming the list extends infinitely to the left). If the level of x is equal to i, then we are in situation b. If the level of x is greater than i, then we are in situation c. The probability that we are in situation c is p. Each time we are in situation c, we climb up a level.

Let C(k) = the expected cost (i.e., length) of a search path that climbs up k levels in an infinite list:

    C(0) = 0
    C(k) = (1-p) (cost in situation b) + p (cost in situation c)

By substituting and simplifying, we get:

    C(k) = (1-p) (1 + C(k)) + p (1 + C(k-1))
    C(k) = 1/p + C(k-1)
    C(k) = k/p
Our assumption that the list is infinite is a pessimistic assumption. When we bump into the header in our backwards climb, we simply climb up it, without performing any leftward movements. This gives us an upper bound of (L(n)–1)/p on the expected length of the path that climbs from level 1 to level L(n) in a list of n elements. We use this analysis to go up to level L(n) and use a different analysis technique for the rest of the journey. The number of leftward movements remaining is bounded by the number of elements of level L(n) or higher in the entire list, which has an expected value of 1/p. We also move upwards from level L(n) to the maximum level in the list. The probability that the maximum level of the list is greater than k is equal to 1 – (1–p^k)^n, which is at most np^k. We can calculate that the expected maximum level is at most L(n) + 1/(1–p). Putting our results together, we find

    Total expected cost to climb out of a list of n elements ≤ L(n)/p + 1/(1–p)

which is O(log n).
Number of comparisons
Our result is an analysis of the “length” of the search path. The number of comparisons required is one plus the length of the search path (a comparison is performed for each position in the search path; the “length” of the search path is the number of hops between positions in the search path).
Probabilistic Analysis
It is also possible to analyze the probability distribution of search costs. The probabilistic analysis is somewhat more complicated (see box). From the probabilistic analysis, we can calculate an upper bound on the probability that the actual cost of a search exceeds the expected cost by more than a specified ratio. Some results of this analysis are shown in Figure 8.
Choosing p
Table 1 gives the relative times and space requirements for different values of p. Decreasing p also increases the variability of running times. If 1/p is a power of 2, it will be easy to generate a random level from a stream of random bits (it requires an average of (log2 1/p)/(1–p) random bits to generate a random level). Since some of the constant overheads are related to L(n) (rather than L(n)/p), choosing p = 1/4 (rather than 1/2) slightly improves the constant factors of the speed of the algorithms as well. I suggest that a value of 1/4 be used for p unless the variability of running times is a primary concern, in which case p should be 1/2.

FIGURE 6 – Possible situations in backwards traversal of the search path. Situation a: we are at the ith forward pointer of node x and need to climb k levels; the level of x is unknown but at least i. Situation b (probability 1–p): the level of x equals i; we still need to climb k levels from here. Situation c (probability p): the level of x is greater than i; we need to climb only k–1 levels from here.
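When 1/p is a power of 2, as noted above, levels can be generated from random bits alone. A sketch for p = 1/4 (my own illustration; each trial consumes two low-order bits):

```c
#include <stdlib.h>

#define MAX_LEVEL 16

/* For p = 1/4, a trial "succeeds" when two random bits are both zero,
 * so no floating-point arithmetic is needed. (Illustrative sketch;
 * rand()'s low-order bits may be of poor quality on some older C
 * libraries, in which case a better bit source should be used.) */
int random_level_quarter(void)
{
    int lvl = 1;
    while (lvl < MAX_LEVEL && (rand() & 3) == 0)  /* probability 1/4 */
        lvl++;
    return lvl;
}
```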
    p      Normalized search time        Avg. # of pointers per node
           (i.e., normalized L(n)/p)     (i.e., 1/(1–p))
    1/2    1                             2
    1/e    0.94...                       1.58...
    1/4    1                             1.33...
    1/8    1.33...                       1.14...
    1/16   2                             1.07...
Sequences of operations
The expected total time for a sequence of operations is equal to the sum of the expected times of each of the operations in the sequence. Thus, the expected time for any sequence of m searches in a data structure that contains n elements is O(m log n). However, the pattern of searches affects the probability distribution of the actual time to perform the entire sequence of operations. If we search for the same item twice in the same data structure, both searches will take exactly the same amount of time. Thus the variance of the total time will be four times the variance of a single search. If the search times for two elements are independent, the variance of the total time is equal to the sum of the variances of the individual searches. Searching for the same element over and over again maximizes the variance.
TABLE 1 – Relative search speed and space requirements, depending on the value of p.
Constant factors
Constant factors can make a significant difference in the practical application of an algorithm. This is particularly true for sublinear algorithms. For example, assume that algorithms A and B both require O(log n) time to process a query, but that B is twice as fast as A: in the time algorithm A takes to process a query on a data set of size n, algorithm B can process a query on a data set of size n². There are two important but qualitatively different contributions to the constant factors of an algorithm. First, the inherent complexity of the algorithm places a lower bound on any implementation. Self-adjusting trees are continuously rearranged as searches are performed; this imposes a significant overhead on any implementation of self-adjusting trees. Skip list algorithms seem to have very low inherent constant-factor overheads: the inner loop of the deletion algorithm for skip lists compiles to just six instructions on the 68020. Second, if the algorithm is complex, programmers are deterred from implementing optimizations. For example, balanced tree algorithms are normally described using recursive insert and delete procedures, since that is the most simple and intuitive method of describing the algorithms. A recursive insert or delete procedure incurs a procedure call overhead. By using nonrecursive insert and delete procedures, some of this overhead can be eliminated. However, the complexity of nonrecursive algorithms for insertion and deletion in a balanced tree is intimidating and this complexity deters most programmers from eliminating recursion in these routines.
ALTERNATIVE DATA STRUCTURES
Balanced trees (e.g., AVL trees [Knu73] [Wir76]) and self-adjusting trees [ST85] can be used for the same problems as skip lists. All three techniques have performance bounds of the same order. A choice among these schemes involves several factors: the difficulty of implementing the algorithms, constant factors, type of bound (amortized, probabilistic or worst-case) and performance on a non-uniform distribution of queries.
Implementation difficulty
For most applications, implementers generally agree that skip lists are significantly easier to implement than either balanced tree algorithms or self-adjusting tree algorithms.
FIGURE 8 – This graph shows a plot of an upper bound on the probability of a search taking substantially longer than expected. The vertical axis shows the probability that the length of the search path for a search exceeds the average length by more than the ratio on the horizontal axis; curves are plotted for p = 1/4 and p = 1/2 with n = 256, 4,096 and 65,536. For example, for p = 1/2 and n = 4096, the probability that the search path will be more than three times the expected length is less than one in 200 million. This graph was calculated using our probabilistic upper bound.
    Implementation                Search Time         Insertion Time      Deletion Time
    Skip lists                    0.051 msec (1.0)    0.065 msec (1.0)    0.059 msec (1.0)
    non-recursive AVL trees       0.046 msec (0.91)   0.10 msec  (1.55)   0.085 msec (1.46)
    recursive 2–3 trees           0.054 msec (1.05)   0.21 msec  (3.2)    0.21 msec  (3.65)
    Self-adjusting trees:
      top-down splaying           0.15 msec  (3.0)    0.16 msec  (2.5)    0.18 msec  (3.1)
      bottom-up splaying          0.49 msec  (9.6)    0.51 msec  (7.8)    0.53 msec  (9.0)
Table 2 – Timings of implementations of different algorithms

Skip list algorithms are already nonrecursive and they are simple enough that programmers are not deterred from performing optimizations. Table 2 compares the performance of implementations of skip lists and four other techniques. All implementations were optimized for efficiency. The AVL tree algorithms were written by James Macropol of Contel and based on those in [Wir76]. The 2–3 tree algorithms are based on those presented in [AHU83]. Several other existing balanced tree packages were timed and found to be much slower than the results presented here. The self-adjusting tree algorithms are based on those presented in [ST85]. The times in this table reflect the CPU time on a Sun-3/60 to perform an operation in a data structure containing 2^16 elements with integer keys. The values in parentheses show the results relative to the skip list time. The times for insertion and deletion do not include the time for memory management (e.g., in C programs, calls to malloc and free). Note that skip lists perform more comparisons than other methods (the skip list algorithms presented here require an average of L(n)/p + 1/(1–p) + 1 comparisons). For tests using real numbers as keys, skip lists were slightly slower than the non-recursive AVL tree algorithms and search in a skip list was slightly slower than search in a 2–3 tree (insertion and deletion using the skip list algorithms was still faster than using the recursive 2–3 tree algorithms). If comparisons are very expensive, it is possible to change the algorithms so that we never compare the search key against the key of a node more than once during a search. For p = 1/2, this produces an upper bound on the expected number of comparisons of 7/2 + 3/2 log2 n. This modification is discussed in [Pug89b].
Non-uniform query distribution

Self-adjusting trees have the property that they adjust to non-uniform query distributions. Since skip lists are faster than self-adjusting trees by a significant constant factor when a uniform query distribution is encountered, self-adjusting trees are faster than skip lists only for highly skewed distributions. We could attempt to devise self-adjusting skip lists. However, there seems little practical motivation to tamper with the simplicity and fast performance of skip lists; in an application where highly skewed distributions are expected, either self-adjusting trees or a skip list augmented by a cache may be preferable [Pug90].
ADDITIONAL WORK ON SKIP LISTS
I have described a set of algorithms that allow multiple processors to concurrently update a skip list in shared memory [Pug89a]. These algorithms are much simpler than concurrent balanced tree algorithms. They allow an unlimited number of readers and n busy writers in a skip list of n elements with very little lock contention. Using skip lists, it is easy to do most (all?) the sorts of operations you might wish to do with a balanced tree, such as use search fingers, merge skip lists and allow ranking operations (e.g., determine the kth element of a skip list) [Pug89b]. Tom Papadakis, Ian Munro and Patricio Poblette [PMP90] have done an exact analysis of the expected search time in a skip list. The upper bound described in this paper is close to their exact bound; the techniques they needed to use to derive an exact analysis are very complicated and sophisticated. Their exact analysis shows that for p = 1/2 and p = 1/4, the upper bound given in this paper on the expected cost of a search is not more than 2 comparisons more than the exact expected cost. I have adapted the idea of probabilistic balancing to some other problems arising both in data structures and in incremental computation [PT89]. We can generate the level of a node based on the result of applying a hash function to the element (as opposed to using a random number generator). This results in a scheme where for any set S, there is a unique data structure that represents S and with high probability the data structure is approximately balanced. If we combine this idea with an applicative (i.e., persistent) probabilistically balanced data structure and a scheme such as hashed-consing [All78] which allows constant-time structural equality tests of applicative data structures, we get a number of interesting properties, such as constant-time equality tests for the representations of sequences. This scheme also has a number of applications for incremental computation.
Type of performance bound
These three classes of algorithms have different kinds of performance bounds. Balanced trees have worst-case time bounds, self-adjusting trees have amortized time bounds and skip lists have probabilistic time bounds. With self-adjusting trees, an individual operation can take O(n) time, but the time bound always holds over a long sequence of operations. For skip lists, any operation or sequence of operations can take longer than expected, although the probability of any operation taking significantly longer than expected is negligible. In certain real-time applications, we must be assured that an operation will complete within a certain time bound. For such applications, self-adjusting trees may be undesirable, since they can take significantly longer on an individual operation than expected (e.g., an individual search can take O(n) time instead of O(log n) time). For real-time systems, skip lists may be usable if an adequate safety margin is provided: the chance that a search in a skip list containing 1000 elements takes more than 5 times the expected time is about 1 in 10^18.
Since skip lists are somewhat awkward to make applicative, a probabilistically balanced tree scheme is used for these applications.
RELATED WORK
James Driscoll pointed out that R. Sprugnoli suggested a method of randomly balancing search trees in 1981 [Spr81]. With Sprugnoli's approach, the state of the data structure is not independent of the sequence of operations that built it. This makes it much harder or impossible to formally analyze his algorithms. Sprugnoli gives empirical evidence that his algorithm has good expected performance, but no theoretical results. A randomized data structure for ordered sets is described in [BLLSS86]. However, a search using that data structure requires O(n^1/2) expected time. Cecilia Aragon and Raimund Seidel describe a probabilistically balanced search tree scheme [AC89]. They discuss how to adapt their data structure to non-uniform query distributions.
SOURCE CODE AVAILABILITY
Skip list source code libraries for both C and Pascal are available for anonymous ftp from mimsy.umd.edu.
CONCLUSIONS
From a theoretical point of view, there is no need for skip lists. Balanced trees can do everything that can be done with skip lists and have good worst-case time bounds (unlike skip lists). However, implementing balanced trees is an exacting task and, as a result, balanced tree algorithms are rarely implemented except as part of a programming assignment in a data structures class. Skip lists are a simple data structure that can be used in place of balanced trees for most applications. Skip list algorithms are very easy to implement, extend and modify. Skip lists are about as fast as highly optimized balanced tree algorithms and are substantially faster than casually implemented balanced tree algorithms.
[BLLSS86] Bentley, J., F. T. Leighton, M. F. Lepley, D. Stanat and J. M. Steele, A Randomized Data Structure For Ordered Sets, MIT/LCS Technical Memo 297, May 1986.
[Knu73] Knuth, D., “Sorting and Searching,” The Art of Computer Programming, Vol. 3, Addison-Wesley Publishing Company, 1973.
[PMP90] Papadakis, Thomas, Ian Munro and Patricio Poblette, Exact Analysis of Expected Search Cost in Skip Lists, Tech Report # ????, Dept. of Computer Science, Univ. of Waterloo, January 1990.
[PT89] Pugh, W. and T. Teitelbaum, “Incremental Computation via Function Caching,” Proc. of the Sixteenth Conference on the Principles of Programming Languages, 1989.
[Pug89a] Pugh, W., Concurrent Maintenance of Skip Lists, Tech Report CS-TR-2222, Dept. of Computer Science, University of Maryland, College Park, 1989.
[Pug89b] Pugh, W., Whatever you might want to do using Balanced Trees, you can do it faster and more simply using Skip Lists, Tech Report CS-TR-2286, Dept. of Computer Science, University of Maryland, College Park, July 1989.
[Pug90] Pugh, W., Slow Optimally Balanced Search Strategies vs. Cached Fast Uniformly Balanced Search Strategies, to appear in Information Processing Letters.
[Spr81] Sprugnoli, R., Randomly Balanced Binary Trees, Calcolo, V17 (1981), pp. 99–117.
[ST85] Sleator, D. and R. Tarjan, “Self-Adjusting Binary Search Trees,” Journal of the ACM, Vol. 32, No. 3, July 1985, pp. 652–666.
[Wir76] Wirth, N., Algorithms + Data Structures = Programs, Prentice-Hall, 1976.
ACKNOWLEDGEMENTS
Thanks to the referees for their helpful comments. Special thanks to all those people who supplied enthusiasm and encouragement during the years in which I struggled to get this work published, especially Alan Demers, Tim Teitelbaum and Doug McIlroy. This work was partially supported by an AT&T Bell Labs Fellowship and by NSF grant CCR-8908900.
REFERENCES
[AC89] Aragon, Cecilia and Raimund Seidel, Randomized Search Trees, Proceedings of the 30th Ann. IEEE Symp on Foundations of Computer Science, pp 540–545, October 1989. Aho, A., Hopcroft, J. and Ullman, J. Data Structures and Algorithms, AddisonWesley Publishing Company, 1983.
[AHU83]
[All78]
John Allen. Anatomy of LISP, McGraw Hill Book Company, NY, 1978.
PROBABILISTIC ANALYSIS
In addition to analyzing the expected performance of skip lists, we can also analyze the probabilistic performance of skip lists. This will allow us to calculate the probability that an operation takes longer than a specified time. This analysis is based on the same ideas as our analysis of the expected cost, so that analysis should be understood first. A random variable has a fixed but unpredictable value and a predictable probability distribution and average. If X is a random variable, Prob{ X = t } denotes the probability that X equals t and Prob{ X > t } denotes the probability that X is greater than t. For example, if X is the number obtained by throwing an unbiased die, Prob{ X > 3 } = 1/2. It is often preferable to find simple upper bounds on values whose exact value is difficult to calculate. To discuss upper bounds on random variables, we need to define a partial ordering and equality on the probability distributions of non-negative random variables.

Definitions (=prob and ≤prob). Let X and Y be non-negative independent random variables (typically, X and Y would denote the time to execute algorithms AX and AY). We define X ≤prob Y to be true if and only if for any value t, the probability that X exceeds t is less than the probability that Y exceeds t. More formally:

    X =prob Y iff ∀t, Prob{ X > t } = Prob{ Y > t } and
    X ≤prob Y iff ∀t, Prob{ X > t } ≤ Prob{ Y > t }. ∎

For example, the graph in Figure 7 shows the probability distribution of three random variables X, Y and Z. Since the probability distribution curve for X is completely under the curves for Y and Z, X ≤prob Y and X ≤prob Z. Since the probability curves for Y and Z intersect, neither Y ≤prob Z nor Z ≤prob Y. Since the expected value of a random variable X is simply the area under the curve Prob{ X > t }, if X ≤prob Y then the average of X is less than or equal to the average of Y. We make use of two probability distributions:

Definition (binomial distributions — B(t, p)).
Let t be a non-negative integer and p be a probability. The term B(t, p) denotes a random variable equal to the number of successes seen in a series of t independent random trials where the probability of a success in a trial is p. The average and variance of B(t, p) are tp and tp(1 – p) respectively. ∎

Definition (negative binomial distributions — NB(s, p)). Let s be a non-negative integer and p be a probability. The term NB(s, p) denotes a random variable equal to the number of failures seen before the sth success in a series of random independent trials where the probability of a success in a trial is p. The average and variance of NB(s, p) are s(1–p)/p and s(1–p)/p² respectively. ∎
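The negative binomial distribution can be checked empirically. A quick Monte Carlo sketch (my own illustration, not part of the paper) that draws one sample of NB(s, p) by direct simulation of the definition:

```c
#include <stdlib.h>

/* Draw one sample of NB(s, p): the number of failures seen before the
 * s-th success in independent trials with success probability p. */
int simulate_nb(int s, double p)
{
    int successes = 0, failures = 0;
    while (successes < s) {
        if ((double)rand() / ((double)RAND_MAX + 1.0) < p)
            successes++;     /* trial succeeded */
        else
            failures++;      /* trial failed: one more leftward move */
    }
    return failures;
}
```

Averaging many samples of simulate_nb(3, 0.5) should approach the stated mean s(1–p)/p = 3, which is one way to sanity-check the search-cost bounds below.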
Probabilistic analysis of search cost
The number of leftward movements we need to make before we move up a level (in an infinite list) has a negative binomial distribution: it is the number of failures (situation b's) we see before we see the first success (situation c) in a series of independent random trials, where the probability of success is p. Using the probabilistic notation introduced above:

    Cost to climb one level in an infinite list =prob 1 + NB(1, p).

We can sum the costs of climbing each level to get the total cost to climb up to level L(n):

    Cost to climb to level L(n) in an infinite list =prob (L(n) – 1) + NB(L(n) – 1, p).

Our assumption that the list is infinite is a pessimistic assumption:

    Cost to climb to level L(n) in a list of n elements ≤prob (L(n) – 1) + NB(L(n) – 1, p).

Once we have climbed to level L(n), the number of leftward movements is bounded by the number of elements of level L(n) or greater in a list of n elements. The number of elements of level L(n) or greater in a list of n elements is a random variable of the form B(n, 1/np). Let M be a random variable corresponding to the maximum level in a list of n elements. The probability that the level of a node is greater than k is p^k, so Prob{ M > k } = 1 – (1–p^k)^n < np^k. Since np^k = p^(k–L(n)) and Prob{ NB(1, 1–p) + 1 > i } = p^i, we get a probabilistic upper bound of M ≤prob L(n) + NB(1, 1 – p) + 1. Note that the average of L(n) + NB(1, 1 – p) + 1 is L(n) + 1/(1–p). This gives a probabilistic upper bound on the cost once we have reached level L(n) of B(n, 1/np) + (L(n) + NB(1, 1 – p) + 1) – L(n).
Combining our results to get a probabilistic upper bound on the total length of the search path (i.e., cost of the entire search):

    Total cost to climb out of a list of n elements
        ≤prob (L(n) – 1) + NB(L(n) – 1, p) + B(n, 1/np) + NB(1, 1 – p) + 1

The expected value of our upper bound is equal to

    (L(n) – 1) + (L(n) – 1)(1 – p)/p + 1/p + p/(1–p) + 1 = L(n)/p + 1/(1–p),

which is the same as our previously calculated upper bound on the expected cost of a search. The variance of our upper bound is

    (L(n) – 1)(1–p)/p² + (1 – 1/np)/p + p/(1–p)² < (1–p)L(n)/p² + p/(1–p)² + (2p–1)/p².

Figure 8 shows a plot of an upper bound on the probability of an actual search taking substantially longer than average, based on our probabilistic upper bound.
FIGURE 7 – Plots of three probability distributions. The vertical axis is probability, from 0 to 1; the horizontal axis is t. The three curves plot Prob{ X > t }, Prob{ Y > t } and Prob{ Z > t }.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_skl.txt
/* skip list */
#include <stdio.h>
#include <stdlib.h>

/* define datatype and compare operators here */
typedef int T;                        /* type of item to be stored */
#define compLT(a,b) (a < b)
#define compEQ(a,b) (a == b)

/* levels range from (0 .. MAXLEVEL) */
#define MAXLEVEL 15

typedef struct Node_ {
    T data;                           /* user's data */
    struct Node_ *forward[1];         /* skip list forward pointers */
} Node;

typedef struct {                      /* skip list information */
    Node *hdr;                        /* list header */
    int listLevel;                    /* current level of list */
} SkipList;

SkipList list;

#define NIL list.hdr

Node *insertNode(T data) {
    int i, newLevel;
    Node *update[MAXLEVEL+1];
    Node *x;

    /***********************************************
     *  allocate node for data and insert in list  *
     ***********************************************/

    /* find where data belongs */
    x = list.hdr;
    for (i = list.listLevel; i >= 0; i--) {
        while (x->forward[i] != NIL
          && compLT(x->forward[i]->data, data))
            x = x->forward[i];
        update[i] = x;
    }
    x = x->forward[0];
    if (x != NIL && compEQ(x->data, data))
        return(x);

    /* determine level */
    for (newLevel = 0;
      rand() < RAND_MAX/2 && newLevel < MAXLEVEL;
      newLevel++);

    if (newLevel > list.listLevel) {
        for (i = list.listLevel + 1; i <= newLevel; i++)
            update[i] = NIL;
        list.listLevel = newLevel;
    }

    /* make new node */
    if ((x = malloc(sizeof(Node) + newLevel*sizeof(Node *))) == 0) {
        printf ("insufficient memory (insertNode)\n");
        exit(1);
    }
    x->data = data;

    /* update forward links */
    for (i = 0; i <= newLevel; i++) {
        x->forward[i] = update[i]->forward[i];
        update[i]->forward[i] = x;
    }
    return(x);
}

void deleteNode(T data) {
    int i;
    Node *update[MAXLEVEL+1], *x;

    /*******************************************
     *  delete node containing data from list  *
     *******************************************/

    /* find where data belongs */
    x = list.hdr;
    for (i = list.listLevel; i >= 0; i--) {
        while (x->forward[i] != NIL
          && compLT(x->forward[i]->data, data))
            x = x->forward[i];
        update[i] = x;
    }
    x = x->forward[0];
    if (x == NIL || !compEQ(x->data, data)) return;

    /* adjust forward pointers */
    for (i = 0; i <= list.listLevel; i++) {
        if (update[i]->forward[i] != x) break;
        update[i]->forward[i] = x->forward[i];
    }

    free (x);

    /* adjust header level */
    while ((list.listLevel > 0)
      && (list.hdr->forward[list.listLevel] == NIL))
        list.listLevel--;
}

Node *findNode(T data) {
    int i;
    Node *x = list.hdr;

    /*******************************
     *  find node containing data  *
     *******************************/
    for (i = list.listLevel; i >= 0; i--) {
        while (x->forward[i] != NIL
          && compLT(x->forward[i]->data, data))
            x = x->forward[i];
    }
    x = x->forward[0];
    if (x != NIL && compEQ(x->data, data)) return (x);
    return(0);
}

void initList() {
    int i;

    /**************************
     *  initialize skip list  *
     **************************/
    if ((list.hdr = malloc(sizeof(Node) + MAXLEVEL*sizeof(Node *))) == 0) {
        printf ("insufficient memory (initList)\n");
        exit(1);
    }
    for (i = 0; i <= MAXLEVEL; i++)
        list.hdr->forward[i] = NIL;
    list.listLevel = 0;
}

int main(int argc, char **argv) {
    int i, *a, maxnum, random;

    /* command-line:
     *
     *   skl maxnum [random]
     *
     *   skl 2000
     *       process 2000 sequential records
     *
     *   skl 4000 r
     *       process 4000 random records
     */
    if (argc < 2) {
        fprintf (stderr, "usage: skl maxnum [random]\n");
        exit(1);
    }
    maxnum = atoi(argv[1]);
    random = argc > 2;

    initList();

    if ((a = malloc(maxnum * sizeof(*a))) == 0) {
        fprintf (stderr, "insufficient memory (a)\n");
        exit(1);
    }

    if (random) {
        /* fill "a" with unique random numbers */
        for (i = 0; i < maxnum; i++) a[i] = rand();
        printf ("ran, %d items\n", maxnum);
    } else {
        for (i = 0; i < maxnum; i++) a[i] = i;
        printf ("seq, %d items\n", maxnum);
    }

    for (i = 0; i < maxnum; i++) {
        insertNode(a[i]);
    }

    for (i = maxnum-1; i >= 0; i--) {
        findNode(a[i]);
    }

    for (i = maxnum-1; i >= 0; i--) {
        deleteNode(a[i]);
    }

    return 0;
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_cm2.htm
Comparison
We have seen several ways to construct dictionaries: hash tables, unbalanced binary search trees, red-black trees, and skip lists. There are several factors that influence the choice of an algorithm:
* Sorted output. If sorted output is required, then hash tables are not a viable alternative. Entries are stored in the table based on their hashed value, with no other ordering. For binary trees, the story is different. An in-order tree walk will produce a sorted list. For example:

      void WalkTree(Node *P) {
          if (P == NIL) return;
          WalkTree(P->Left);
          /* examine P->Data here */
          WalkTree(P->Right);
      }
      WalkTree(Root);

  To examine skip list nodes in order, simply chain through the level-0 pointers. For example:

      Node *P = List.Hdr->Forward[0];
      while (P != NIL) {
          /* examine P->Data here */
          P = P->Forward[0];
      }
* Space. The amount of memory required to store a value should be minimized. This is especially true if many small nodes are to be allocated.

  - For hash tables, only one forward pointer per node is required. In addition, the hash table itself must be allocated.

  - For red-black trees, each node has a left, right, and parent pointer. In addition, the color of each node must be recorded. Although this requires only one bit, more space may be allocated to ensure that the size of the structure is properly aligned. Therefore each node in a red-black tree requires enough space for 3-4 pointers.

  - For skip lists, each node has a level-0 forward pointer. The probability of having a level-1 pointer is 1/2. The probability of having a level-2 pointer is 1/4. In general, the
number of forward pointers per node is, on average, 1 + 1/2 + 1/4 + ... = 2.
* Time. The algorithm should be efficient. This is especially true if a large dataset is expected. Table 3-2 compares the search time for each algorithm. Note that worst-case behavior for hash tables and skip lists is extremely unlikely. Actual timing tests are described below.

* Simplicity. If the algorithm is short and easy to understand, fewer mistakes may be made. This not only makes your life easy, but the maintenance programmer entrusted with the task of making repairs will appreciate any efforts you make in this area. The number of statements required for each algorithm is listed in Table 3-2.

    method            statements   average time   worst-case time
    hash table        26           O(1)           O(n)
    unbalanced tree   41           O(lg n)        O(n)
    red-black tree    120          O(lg n)        O(lg n)
    skip list         55           O(lg n)        O(n)

    Table 3-2: Comparison of Dictionaries
Average time for insert, search, and delete operations on a database of 65,536 (2^16) randomly input items may be found in Table 3-3. For this test the hash table size was 10,009 and 16 index levels were allowed for the skip list. Although there is some variation in the timings for the four methods, they are close enough so that other considerations should come into play when selecting an algorithm.

    method            insert   search   delete
    hash table        18       8        10
    unbalanced tree   37       17       26
    red-black tree    40       16       37
    skip list         48       31       35

    Table 3-3: Average Time (µs), 65536 Items, Random Input
Table 3-4 shows the average search time for two sets of data: a random set, where all values are unique, and an ordered set, where values are in ascending order. Ordered input creates a worst-case
scenario for unbalanced tree algorithms, as the tree ends up being a simple linked list. The times shown are for a single search operation. If we were to search for all items in a database of 65,536 values, a red-black tree algorithm would take .6 seconds, while an unbalanced tree algorithm would take 1 hour.

                      random input                 ordered input
    count             16   256   4,096   65,536    16   256   4,096   65,536
    hash table        4    3     3       8         3    3     3       7
    unbalanced tree   3    4     7       17        4    47    1,033   55,019
    red-black tree    2    4     6       16        2    4     6       9
    skip list         5    9     12      31        4    7     11      15

    Table 3-4: Average Search Time (µs)
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_vlf.htm
Very Large Files
The previous algorithms have assumed that all data reside in memory. However, there may be times when the dataset is too large, and alternative methods are required. In this section, we will examine techniques for sorting (external sorts) and implementing dictionaries (B-trees) for very large files.
External Sorting
One method for sorting a file is to load the file into memory, sort the data in memory, then write the results. When the file cannot be loaded into memory due to resource limitations, an external sort is applicable. We will implement an external sort using replacement selection to establish initial runs, followed by a polyphase merge sort to merge the runs into one sorted file. I highly recommend you consult Knuth [1998], as many details have been omitted.
Theory
For clarity, I'll assume that data is on one or more reels of magnetic tape. Figure 4-1 illustrates a 3-way polyphase merge. Initially, in phase A, all data is on tapes T1 and T2. Assume that the beginning of each tape is at the bottom of the frame. There are two sequential runs of data on T1: 4-8, and 6-7. Tape T2 has one run: 5-9. At phase B, we've merged the first run from tapes T1 (4-8) and T2 (5-9) into a longer run on tape T3 (4-5-8-9). Phase C simply renames the tapes, so we may repeat the merge again. In phase D we repeat the merge, with the final output on tape T3.

    Phase   T1         T2         T3
    A       4-8, 6-7   5-9        -
    B       6-7        -          4-5-8-9
    C       6-7        4-5-8-9    -
    D       -          -          4-5-6-7-8-9
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_ext.htm (1 of 3) [3/23/2004 3:09:38 PM]
Figure 4-1: Merge Sort
Several interesting details have been omitted from the previous illustration. For example, how were the initial runs created? And, did you notice that they merged perfectly, with no extra runs on any tapes? Before I explain the method used for constructing initial runs, let me digress for a bit. In 1202, Leonardo Fibonacci presented the following exercise in his Liber Abbaci (Book of the Abacus): "How many pairs of rabbits can be produced from a single pair in a year's time?" We may assume that each pair produces a new pair of offspring every month, each pair becomes fertile at the age of one month, and that rabbits never die. After one month, there will be 2 pairs of rabbits; after two months there will be 3; the following month the original pair and the pair born during the first month will both usher in a new pair, and there will be 5 in all; and so on. This series, where each number is the sum of the two preceding numbers, is known as the Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... . Curiously, the Fibonacci series has found widespread application to everything from the arrangement of flowers on plants to studying the efficiency of Euclid's algorithm. There's even a Fibonacci Quarterly journal. And, as you might suspect, the Fibonacci series has something to do with establishing initial runs for external sorts. Recall that we initially had one run on tape T2, and 2 runs on tape T1. Note that the numbers {1,2} are two sequential numbers in the Fibonacci series. After our first merge, we had one run on T1 and one run on T2. Note that the numbers {1,1} are two sequential numbers in the Fibonacci series, only one notch down. We could predict, in fact, that if we had 13 runs on T2, and 21 runs on T1 {13,21}, we would be left with 8 runs on T1 and 13 runs on T3 {8,13} after one pass. Successive passes would result in run counts of {5,8}, {3,5}, {2,3}, {1,1}, and {0,1}, for a total of 7 passes.
This arrangement is ideal, and will result in the minimum number of passes. Should data actually be on tape, this is a big savings, as tapes must be mounted and rewound for each pass. For more than 2 tapes, higher-order Fibonacci numbers are used.

Initially, all the data is on one tape. The tape is read, and runs are distributed to the other tapes in the system. After the initial runs are created, they are merged as described above. One method we could use to create initial runs is to read a batch of records into memory, sort the records, and write them out. This process would continue until we had exhausted the input tape. An alternative algorithm, replacement selection, allows for longer runs. A buffer is allocated in memory to act as a holding place for several records. Initially, the buffer is filled. Then, the following steps are repeated until the input is exhausted:
- Select the record with the smallest key that is >= the key of the last record written.
- If all keys are smaller than the key of the last record written, then we have reached the end of a run. Select the record with the smallest key for the first record of the next run.
- Write the selected record.
- Replace the selected record with a new record from input.
Figure 4-2 illustrates replacement selection for a small file. The beginning of the file is to the right of each frame. To keep things simple, I've allocated a 2-record buffer. Typically, such a buffer would hold thousands of records. We load the buffer in step B, and write the record with the smallest key (6) in step C. This is replaced with the next record (key 8). We select the smallest key >= 6 in step D. This is key 7. After writing key 7, we replace it with key 4. This process repeats until step F, where our last key written was 8, and all keys are less than 8. At this point, we terminate the run, and start another.

Step  Input         Buffer  Output
A     5-3-4-8-6-7
B     5-3-4-8       6-7
C     5-3-4         8-7     6
D     5-3           8-4     7-6
E     5             3-4     8-7-6
F                   5-4     3 | 8-7-6
G                   5       4-3 | 8-7-6
H                           5-4-3 | 8-7-6
Figure 4-2: Replacement Selection
This strategy simply utilizes an intermediate buffer to hold values until the appropriate time for output. Using random numbers as input, the average length of a run is twice the length of the buffer. However, if the data is somewhat ordered, runs can be extremely long. Thus, this method is more effective than doing partial sorts. When selecting the next output record, we need to find the smallest key >= the last key written. One way to do this is to scan the entire list, searching for the appropriate key. However, when the buffer holds thousands of records, execution time becomes prohibitive. An alternative method is to use a binary tree structure, so that we only compare lg n items.
Implementation
An ANSI-C implementation of an external sort is included. Function makeRuns calls readRec to read the next record. Function readRec employs the replacement selection algorithm (utilizing a binary tree) to fetch the next record, and makeRuns distributes the records in a Fibonacci distribution. If the number of runs is not a perfect Fibonacci number, dummy runs are simulated at the beginning of each file. Function mergeSort is then called to do a polyphase merge sort on the runs.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_ext.txt
/* external sort */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/****************************
 * implementation dependent *
 ****************************/

/* template for workfiles (8.3 format) */
#define FNAME "_sort%03d.dat"
#define LNAME 13

/* comparison operators */
#define compLT(x,y) (x < y)
#define compGT(x,y) (x > y)

/* define the record to be sorted here */
#define LRECL 100
typedef int keyType;
typedef struct recTypeTag {
    keyType key;                        /* sort key for record */
   #if LRECL
    char data[LRECL-sizeof(keyType)];   /* other fields */
   #endif
} recType;

/******************************
 * implementation independent *
 ******************************/

typedef enum {false, true} bool;

typedef struct tmpFileTag {
    FILE *fp;                   /* file pointer */
    char name[LNAME];           /* filename */
    recType rec;                /* last record read */
    int dummy;                  /* number of dummy runs */
    bool eof;                   /* end-of-file flag */
    bool eor;                   /* end-of-run flag */
    bool valid;                 /* true if rec is valid */
    int fib;                    /* ideal fibonacci number */
} tmpFileType;

static tmpFileType **file;      /* array of file info for tmp files */
static int nTmpFiles;           /* number of tmp files */
static char *ifName;            /* input filename */
static char *ofName;            /* output filename */

static int level;               /* level of runs */
static int nNodes;              /* number of nodes for selection tree */

void deleteTmpFiles(void) {
    int i;

    /* delete merge files and free resources */
    if (file) {
        for (i = 0; i < nTmpFiles; i++) {
            if (file[i]) {
                if (file[i]->fp) fclose(file[i]->fp);
                if (*file[i]->name) remove(file[i]->name);
                free (file[i]);
            }
        }
        free (file);
    }
}

void termTmpFiles(int rc) {

    /* cleanup files */
    remove(ofName);
    if (rc == 0) {
        int fileT;

        /* file[T] contains results */
        fileT = nTmpFiles - 1;
        fclose(file[fileT]->fp);
        file[fileT]->fp = NULL;
        if (rename(file[fileT]->name, ofName)) {
            perror("io1");
            deleteTmpFiles();
            exit(1);
        }
        *file[fileT]->name = 0;
    }
    deleteTmpFiles();
}

void cleanExit(int rc) {

    /* cleanup tmp files and exit */
    termTmpFiles(rc);
    exit(rc);
}

void *safeMalloc(size_t size) {
    void *p;

    /* safely allocate memory and initialize to zero */
    if ((p = calloc(1, size)) == NULL) {
        printf("error: malloc failed, size = %d\n", size);
        cleanExit(1);
    }
    return p;
}

void initTmpFiles(void) {
    int i;
    tmpFileType *fileInfo;

    /* initialize merge files */
    if (nTmpFiles < 3) nTmpFiles = 3;
    file = safeMalloc(nTmpFiles * sizeof(tmpFileType*));
    fileInfo = safeMalloc(nTmpFiles * sizeof(tmpFileType));
    for (i = 0; i < nTmpFiles; i++) {
        file[i] = fileInfo + i;
        sprintf(file[i]->name, FNAME, i);
        if ((file[i]->fp = fopen(file[i]->name, "w+b")) == NULL) {
            perror("io2");
            cleanExit(1);
        }
    }
}

recType *readRec(void) {

    typedef struct iNodeTag {       /* internal node */
        struct iNodeTag *parent;    /* parent of internal node */
        struct eNodeTag *loser;     /* external loser */
    } iNodeType;

    typedef struct eNodeTag {       /* external node */
        struct iNodeTag *parent;    /* parent of external node */
        recType rec;                /* input record */
        int run;                    /* run number */
        bool valid;                 /* input record is valid */
    } eNodeType;

    typedef struct nodeTag {
        iNodeType i;                /* internal node */
        eNodeType e;                /* external node */
    } nodeType;

    static nodeType *node;          /* array of selection tree nodes */
    static eNodeType *win;          /* new winner */
    static FILE *ifp;               /* input file */
    static bool eof;                /* true if end-of-file, input */
    static int maxRun;              /* maximum run number */
    static int curRun;              /* current run number */
    iNodeType *p;                   /* pointer to internal nodes */
    static bool lastKeyValid;       /* true if lastKey is valid */
    static keyType lastKey;         /* last key written */

    /* read next record using replacement selection */

    /* check for first call */
    if (node == NULL) {
        int i;

        if (nNodes < 2) nNodes = 2;
        node = safeMalloc(nNodes * sizeof(nodeType));
        for (i = 0; i < nNodes; i++) {
            node[i].i.loser = &node[i].e;
            node[i].i.parent = &node[i/2].i;
            node[i].e.parent = &node[(nNodes + i)/2].i;
            node[i].e.run = 0;
            node[i].e.valid = false;
        }
        win = &node[0].e;
        lastKeyValid = false;

        if ((ifp = fopen(ifName, "rb")) == NULL) {
            printf("error: file %s, unable to open\n", ifName);
            cleanExit(1);
        }
    }

    while (1) {

        /* replace previous winner with new record */
        if (!eof) {
            if (fread(&win->rec, sizeof(recType), 1, ifp) == 1) {
                if ((!lastKeyValid || compLT(win->rec.key, lastKey))
                && (++win->run > maxRun))
                    maxRun = win->run;
                win->valid = true;
            } else if (feof(ifp)) {
                fclose(ifp);
                eof = true;
                win->valid = false;
                win->run = maxRun + 1;
            } else {
                perror("io4");
                cleanExit(1);
            }
        } else {
            win->valid = false;
            win->run = maxRun + 1;
        }
        /* adjust loser and winner pointers */
        p = win->parent;
        do {
            bool swap;
            swap = false;
            if (p->loser->run < win->run) {
                swap = true;
            } else if (p->loser->run == win->run) {
                if (p->loser->valid && win->valid) {
                    if (compLT(p->loser->rec.key, win->rec.key))
                        swap = true;
                } else {
                    swap = true;
                }
            }
            if (swap) {
                /* p should be winner */
                eNodeType *t;

                t = p->loser;
                p->loser = win;
                win = t;
            }
            p = p->parent;
        } while (p != &node[0].i);

        /* end of run? */
        if (win->run != curRun) {
            /* win->run = curRun + 1 */
            if (win->run > maxRun) {
                /* end of output */
                free(node);
                return NULL;
            }
            curRun = win->run;
        }

        /* output top of tree */
        if (win->run) {
            lastKey = win->rec.key;
            lastKeyValid = true;
            return &win->rec;
        }
    }
}

void makeRuns(void) {
    recType *win;       /* winner */
    int fileT;          /* last file */
    int fileP;          /* next to last file */
    int j;              /* selects file[j] */

    /* Make initial runs using replacement selection.
     * Runs are written using a Fibonacci distribution.
     */

    /* initialize file structures */
    fileT = nTmpFiles - 1;
    fileP = fileT - 1;
    for (j = 0; j < fileT; j++) {
        file[j]->fib = 1;
        file[j]->dummy = 1;
    }
    file[fileT]->fib = 0;
    file[fileT]->dummy = 0;

    level = 1;
    j = 0;
    win = readRec();
    while (win) {
        bool anyrun;

        anyrun = false;
        for (j = 0; win && j <= fileP; j++) {
            bool run;

            run = false;
            if (file[j]->valid) {
                if (!compLT(win->key, file[j]->rec.key)) {
                    /* append to an existing run */
                    run = true;
                } else if (file[j]->dummy) {
                    /* start a new run */
                    file[j]->dummy--;
                    run = true;
                }
            } else {
                /* first run in file */
                file[j]->dummy--;
                run = true;
            }

            if (run) {
                anyrun = true;

                /* flush run */
                while (1) {
                    if (fwrite(win, sizeof(recType), 1, file[j]->fp) != 1) {
                        perror("io3");
                        cleanExit(1);
                    }
                    file[j]->rec.key = win->key;
                    file[j]->valid = true;
                    if ((win = readRec()) == NULL) break;
                    if (compLT(win->key, file[j]->rec.key)) break;
                }
            }
        }

        /* if no room for runs, up a level */
        if (!anyrun) {
            int t;
            level++;
            t = file[0]->fib;
            for (j = 0; j <= fileP; j++) {
                file[j]->dummy = t + file[j+1]->fib - file[j]->fib;
                file[j]->fib = t + file[j+1]->fib;
            }
        }
    }
}

void rewindFile(int j) {
    /* rewind file[j] and read in first record */
    file[j]->eor = false;
    file[j]->eof = false;
    rewind(file[j]->fp);
    if (fread(&file[j]->rec, sizeof(recType), 1, file[j]->fp) != 1) {
        if (feof(file[j]->fp)) {
            file[j]->eor = true;
            file[j]->eof = true;
        } else {
            perror("io5");
            cleanExit(1);
        }
    }
}

void mergeSort(void) {
    int fileT;
    int fileP;
    int j;
    tmpFileType *tfile;

    /* polyphase merge sort */

    fileT = nTmpFiles - 1;
    fileP = fileT - 1;

    /* prime the files */
    for (j = 0; j < fileT; j++) {
        rewindFile(j);
    }

    /* each pass through loop merges one run */
    while (level) {
        while (1) {
            bool allDummies;
            bool anyRuns;

            /* scan for runs */
            allDummies = true;
            anyRuns = false;
            for (j = 0; j <= fileP; j++) {
                if (!file[j]->dummy) {
                    allDummies = false;
                    if (!file[j]->eof) anyRuns = true;
                }
            }

            if (anyRuns) {
                int k;
                keyType lastKey;

                /* merge 1 run file[0]..file[P] -> file[T] */
                while (1) {
                    /* each pass thru loop writes 1 record to file[fileT] */

                    /* find smallest key */
                    k = -1;
                    for (j = 0; j <= fileP; j++) {
                        if (file[j]->eor) continue;
                        if (file[j]->dummy) continue;
                        if (k < 0 ||
                           (k != j && compGT(file[k]->rec.key, file[j]->rec.key)))
                            k = j;
                    }
                    if (k < 0) break;

                    /* write record[k] to file[fileT] */
                    if (fwrite(&file[k]->rec, sizeof(recType), 1,
                            file[fileT]->fp) != 1) {
                        perror("io6");
                        cleanExit(1);
                    }

                    /* replace record[k] */
                    lastKey = file[k]->rec.key;
                    if (fread(&file[k]->rec, sizeof(recType), 1,
                            file[k]->fp) == 1) {
                        /* check for end of run on file[s] */
                        if (compLT(file[k]->rec.key, lastKey))
                            file[k]->eor = true;
                    } else if (feof(file[k]->fp)) {
                        file[k]->eof = true;
                        file[k]->eor = true;
                    } else {
                        perror("io7");
                        cleanExit(1);
                    }
                }

                /* fixup dummies */
                for (j = 0; j <= fileP; j++) {
                    if (file[j]->dummy) file[j]->dummy--;
                    if (!file[j]->eof) file[j]->eor = false;
                }

            } else if (allDummies) {
                for (j = 0; j <= fileP; j++)
                    file[j]->dummy--;
                file[fileT]->dummy++;
            }

            /* end of run */
            if (file[fileP]->eof && !file[fileP]->dummy) {
                /* completed a fibonacci-level */
                level--;
                if (!level) {
                    /* we're done, file[fileT] contains data */
                    return;
                }

                /* fileP is exhausted, reopen as new */
                fclose(file[fileP]->fp);
                if ((file[fileP]->fp = fopen(file[fileP]->name, "w+b"))
                        == NULL) {
                    perror("io8");
                    cleanExit(1);
                }
                file[fileP]->eof = false;
                file[fileP]->eor = false;

                rewindFile(fileT);

                /* f[0],f[1]...,f[fileT] <- f[fileT],f[0]...,f[T-1] */
                tfile = file[fileT];
                memmove(file + 1, file, fileT * sizeof(tmpFileType*));
                file[0] = tfile;

                /* start new runs */
                for (j = 0; j <= fileP; j++)
                    if (!file[j]->eof) file[j]->eor = false;
            }
        }
    }
}
void extSort(void) {
    initTmpFiles();
    makeRuns();
    mergeSort();
    termTmpFiles(0);
}

int main(int argc, char *argv[]) {

    /* command-line:
     *
     *   ext ifName ofName nTmpFiles nNodes
     *
     *   ext in.dat out.dat 5 2000
     *   reads in.dat, sorts using 5 files and 2000 nodes, output to out.dat
     */
    if (argc != 5) {
        printf("%s ifName ofName nTmpFiles nNodes\n", argv[0]);
        cleanExit(1);
    }
    ifName = argv[1];
    ofName = argv[2];
    nTmpFiles = atoi(argv[3]);
    nNodes = atoi(argv[4]);
    printf("extSort: nFiles=%d, nNodes=%d, lrecl=%d\n",
        nTmpFiles, nNodes, sizeof(recType));
    extSort();
    return 0;
}
B-Trees
Dictionaries for very large files typically reside on secondary storage, such as a disk. The dictionary is implemented as an index to the actual file and contains the key and record address of data. To implement a dictionary we could use red-black trees, replacing pointers with offsets from the beginning of the index file, and use random access to reference nodes of the tree. However, every transition on a link would imply a disk access, and would be prohibitively expensive. Recall that low-level disk I/O accesses disk by sectors (typically 256 bytes). We could equate node size to sector size, and group several keys together in each node to minimize the number of I/O operations. This is the principle behind B-trees. Good references for B-trees include Knuth [1998] and Cormen [1990]. For B+-trees, consult Aho [1983].
Theory
Figure 4-3 illustrates a B-tree with 3 keys/node. Keys in internal nodes are surrounded by pointers, or record offsets, to keys that are less than or greater than the key value. For example, all keys less than 22 are to the left and all keys greater than 22 are to the right. For simplicity, I have not shown the record address associated with each key.
Figure 4-3: B-Tree
We can locate any key in this 2-level tree with three disk accesses. If we were to group 100 keys/node, we could search over 1,000,000 keys in only three reads. To ensure this property holds, we must maintain a balanced tree during insertion and deletion. During insertion, we examine the child node to verify that it is able to hold an additional node. If not, then a new sibling node is added to the tree, and the child's keys are redistributed to make room for the new node. When descending for insertion and the root is full, then the root is spilled to new children, and the level of the tree increases. A similar action is taken on deletion, where child nodes may be absorbed by the root. This technique for altering the height of the tree maintains a balanced tree.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_btr.htm (1 of 3) [3/23/2004 3:10:03 PM]
                  B-Tree            B*-Tree           B+-Tree           B++-Tree
data stored in    any node          any node          leaf only         leaf only
on insert, split  1 x 1 -> 2 x 1/2  2 x 1 -> 3 x 2/3  1 x 1 -> 2 x 1/2  3 x 1 -> 4 x 3/4
on delete, join   2 x 1/2 -> 1 x 1  3 x 2/3 -> 2 x 1  2 x 1/2 -> 1 x 1  3 x 1/2 -> 2 x 3/4

Table 4-1: B-Tree Implementations
Several variants on the B-tree are listed in Table 4-1. The standard B-tree stores keys and data in both internal and leaf nodes. When descending the tree during insertion, a full child node is first redistributed to adjacent nodes. If the adjacent nodes are also full, then a new node is created, and half the keys in the child are moved to the newly created node. During deletion, children that are 1/2 full first attempt to obtain keys from adjacent nodes. If the adjacent nodes are also 1/2 full, then two nodes are joined to form one full node. B*-trees are similar, only the nodes are kept 2/3 full. This results in better utilization of space in the tree, and slightly better performance.
Figure 4-4: B+-Tree
Figure 4-4 illustrates a B+-tree. All keys are stored at the leaf level, with their associated data values. Duplicates of the keys appear in internal parent nodes to guide the search. Pointers have a slightly different meaning than in conventional B-trees. The left pointer designates all keys less than the value, while the right pointer designates all keys greater than or equal to (GE) the value. For example, all keys less than 22 are on the left pointer, and all keys greater than or equal to 22 are on the right. Notice that key 22 is duplicated in the leaf, where the associated data may be found. During insertion and deletion, care must be taken to properly update parent nodes. When modifying the first key in a leaf, the tree is walked from leaf to root. The last GE pointer found while descending the tree will require modification to reflect the new key value. Since all keys are in the leaf nodes, we may link them for sequential access.

The last method, B++-trees, is something of my own invention. The organization is similar to B+-trees, except for the split/join strategy. Assume each node can hold k keys, and the root node holds 3k keys. Before we descend to a child node during insertion, we check to see if it is full. If it is, the keys in the
child node and two nodes adjacent to the child are all merged and redistributed. If the two adjacent nodes are also full, then another node is added, resulting in four nodes, each 3/4 full. Before we descend to a child node during deletion, we check to see if it is 1/2 full. If it is, the keys in the child node and two nodes adjacent to the child are all merged and redistributed. If the two adjacent nodes are also 1/2 full, then they are merged into two nodes, each 3/4 full. This is halfway between 1/2 full and completely full, allowing for an equal number of insertions or deletions in the future.

Recall that the root node holds 3k keys. If the root is full during insertion, we distribute the keys to four new nodes, each 3/4 full. This increases the height of the tree. During deletion, we inspect the child nodes. If there are only three child nodes, and they are all 1/2 full, they are gathered into the root, and the height of the tree decreases.

Another way of expressing the operation is to say we are gathering three nodes, and then scattering them. In the case of insertion, where we need an extra node, we scatter to four nodes. For deletion, where a node must be deleted, we scatter to two nodes. The symmetry of the operation allows the gather/scatter routines to be shared by insertion and deletion in the implementation.
Implementation
An ANSI-C implementation of a B++-tree is included. In the implementation-dependent section, you'll need to define bAdrType and eAdrType, the types associated with B-tree file offsets and data file offsets, respectively. You'll also need to provide a callback function which is used by the B++-tree algorithm to compare keys. Functions are provided to insert/delete keys, find keys, and access keys sequentially. Function main, at the bottom of the file, provides a simple illustration for insertion.

The code provided allows for multiple indices to the same data. This was implemented by returning a handle when the index is opened. Subsequent accesses are done using the supplied handle. Duplicate keys are allowed. Within one index, all keys must be the same length. A binary search was implemented to search each node. A flexible buffering scheme allows nodes to be retained in memory until the space is needed. If you expect access to be somewhat ordered, increasing the bufCt will reduce paging.
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_btr.txt
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>

/*
 * this file is divided into sections:
 *   stuff you'll probably want to place in a .h file...
 *     implementation dependent
 *       - you'll probably have to change something here
 *     implementation independent
 *       - types and function prototypes that typically go in a .h file
 *     function prototypes
 *       - prototypes for user functions
 *   internals
 *     - local functions
 *     - user functions
 *   main()
 */

/****************************
 * implementation dependent *
 ****************************/

typedef long eAdrType;          /* record address for external record */
typedef long bAdrType;          /* record address for btree node */

#define CC_EQ  0
#define CC_GT  1
#define CC_LT -1

/* compare two keys and return:
 *    CC_LT    key1 < key2
 *    CC_GT    key1 > key2
 *    CC_EQ    key1 = key2
 */
typedef int (*bCompType)(const void *key1, const void *key2);

/******************************
 * implementation independent *
 ******************************/

/* statistics */
int maxHeight;          /* maximum height attained */
int nNodesIns;          /* number of nodes inserted */
int nNodesDel;          /* number of nodes deleted */
int nKeysIns;           /* number of keys inserted */
int nKeysDel;           /* number of keys deleted */
int nDiskReads;         /* number of disk reads */
int nDiskWrites;        /* number of disk writes */

/* line number for last IO or memory error */
int bErrLineNo;

typedef enum {false, true} bool;

typedef enum {
    bErrOk,
    bErrKeyNotFound,
    bErrDupKeys,
    bErrSectorSize,
    bErrFileNotOpen,
    bErrFileExists,
    bErrIO,
    bErrMemory
} bErrType;

typedef void *bHandleType;
typedef struct {                /* info for bOpen() */
    char *iName;                /* name of index file */
    int keySize;                /* length, in bytes, of key */
    bool dupKeys;               /* true if duplicate keys allowed */
    int sectorSize;             /* size of sector on disk */
    bCompType comp;             /* pointer to compare function */
} bOpenType;

/***********************
 * function prototypes *
 ***********************/

bErrType bOpen(bOpenType info, bHandleType *handle);
/*
 * input:
 *   info               info for open
 * output:
 *   handle             handle to btree, used in subsequent calls
 * returns:
 *   bErrOk             open was successful
 *   bErrMemory         insufficient memory
 *   bErrSectorSize     sector size too small or not 0 mod 4
 *   bErrFileNotOpen    unable to open index file
 */

bErrType bClose(bHandleType handle);
/*
 * input:
 *   handle             handle returned by bOpen
 * returns:
 *   bErrOk             file closed, resources deleted
 */

bErrType bInsertKey(bHandleType handle, void *key, eAdrType rec);
/*
 * input:
 *   handle             handle returned by bOpen
 *   key                key to insert
 *   rec                record address
 * returns:
 *   bErrOk             operation successful
 *   bErrDupKeys        duplicate keys (and info.dupKeys = false)
 * notes:
 *   If dupKeys is false, then all records inserted must have a
 *   unique key.  If dupkeys is true, then duplicate keys are
 *   allowed, but they must all have unique record addresses.
 *   In this case, record addresses are included in internal
 *   nodes to generate a "unique" key.
 */

bErrType bDeleteKey(bHandleType handle, void *key, eAdrType *rec);
/*
 * input:
 *   handle             handle returned by bOpen
 *   key                key to delete
 *   rec                record address of key to delete
 * output:
 *   rec                record address deleted
 * returns:
 *   bErrOk             operation successful
 *   bErrKeyNotFound    key not found
 * notes:
 *   If dupKeys is false, all keys are unique, and rec is not used
 *   to determine which key to delete.  If dupKeys is true, then
 *   rec is used to determine which key to delete.
 */
bErrType bFindKey(bHandleType handle, void *key, eAdrType *rec);
/*
 * input:
 *   handle             handle returned by bOpen
 *   key                key to find
 * output:
 *   rec                record address
 * returns:
 *   bErrOk             operation successful
 *   bErrKeyNotFound    key not found
 */

bErrType bFindFirstKey(bHandleType handle, void *key, eAdrType *rec);
/*
 * input:
 *   handle             handle returned by bOpen
 * output:
 *   key                first key in sequential set
 *   rec                record address
 * returns:
 *   bErrOk             operation successful
 *   bErrKeyNotFound    key not found
 */

bErrType bFindLastKey(bHandleType handle, void *key, eAdrType *rec);
/*
 * input:
 *   handle             handle returned by bOpen
 * output:
 *   key                last key in sequential set
 *   rec                record address
 * returns:
 *   bErrOk             operation successful
 *   bErrKeyNotFound    key not found
 */

bErrType bFindNextKey(bHandleType handle, void *key, eAdrType *rec);
/*
 * input:
 *   handle             handle returned by bOpen
 * output:
 *   key                key found
 *   rec                record address
 * returns:
 *   bErrOk             operation successful
 *   bErrKeyNotFound    key not found
 */

bErrType bFindPrevKey(bHandleType handle, void *key, eAdrType *rec);
/*
 * input:
 *   handle             handle returned by bOpen
 * output:
 *   key                key found
 *   rec                record address
 * returns:
 *   bErrOk             operation successful
 *   bErrKeyNotFound    key not found
 */

/*************
 * internals *
 *************/

/*
 * algorithm:
 *   A B+tree implementation, with keys stored in internal nodes,
 *   and keys/record addresses stored in leaf nodes.  Each node is
 *   one sector in length, except the root node whose length is
 *   3 sectors.  When traversing the tree to insert a key, full
 *   children are adjusted to make room for possible new entries.
 *   Similarly, on deletion, half-full nodes are adjusted to allow for
 *   possible deleted entries.  Adjustments are first done by
 *   examining 2 nearest neighbors at the same level, and redistributing
 *   the keys if possible.  If redistribution won't solve the problem,
 *   nodes are split/joined as needed.  Typically, a node is 3/4 full.
 *   On insertion, if 3 nodes are full, they are split into 4 nodes,
 *   each 3/4 full.  On deletion, if 3 nodes are 1/2 full, they are
 *   joined to create 2 nodes 3/4 full.
 *
 *   A LRR (least-recently-read) buffering scheme for nodes is used to
 *   simplify storage management, and, assuming some locality of reference,
 *   improve performance.
 *
 *   To simplify matters, both internal nodes and leafs contain the
 *   same fields.
 *
 */

/* macros for addressing fields */

/* primitives */
#define bAdr(p) *(bAdrType *)(p)
#define eAdr(p) *(eAdrType *)(p)

/* based on k = &[key,rec,childGE] */
#define childLT(k) bAdr((char *)k - sizeof(bAdrType))
#define key(k) (k)
#define rec(k) eAdr((char *)(k) + h->keySize)
#define childGE(k) bAdr((char *)(k) + h->keySize + sizeof(eAdrType))

/* based on b = &bufType */
#define leaf(b) b->p->leaf
#define ct(b) b->p->ct
#define next(b) b->p->next
#define prev(b) b->p->prev
#define fkey(b) &b->p->fkey
#define lkey(b) (fkey(b) + ks((ct(b) - 1)))
#define p(b) (char *)(b->p)

/* shortcuts */
#define ks(ct) ((ct) * h->ks)

typedef char keyType;           /* keys entries are treated as char arrays */

typedef struct {
    unsigned int leaf:1;        /* first bit = 1 if leaf */
    unsigned int ct:15;         /* count of keys present */
    bAdrType prev;              /* prev node in sequence (leaf) */
    bAdrType next;              /* next node in sequence (leaf) */
    bAdrType childLT;           /* child LT first key */
    /* ct occurrences of [key,rec,childGE] */
    keyType fkey;               /* first occurrence */
} nodeType;

typedef struct bufTypeTag {     /* location of node */
    struct bufTypeTag *next;    /* next */
    struct bufTypeTag *prev;    /* previous */
    bAdrType adr;               /* on disk */
    nodeType *p;                /* in memory */
    bool valid;                 /* true if buffer contents valid */
    bool modified;              /* true if buffer modified */
} bufType;

/* one node for each open handle */
typedef struct hNodeTag {
    struct hNodeTag *prev;      /* previous node */
    struct hNodeTag *next;      /* next node */
    FILE *fp;                   /* idx file */
    int keySize;                /* key length */
    bool dupKeys;               /* true if duplicate keys */
    int sectorSize;             /* block size for idx records */
    bCompType comp;             /* pointer to compare routine */
    bufType root;               /* root of btree, room for 3 sets */
    bufType bufList;            /* head of buf list */
    void *malloc1;              /* malloc'd resources */
    void *malloc2;              /* malloc'd resources */
    bufType gbuf;               /* gather buffer, room for 3 sets */
    bufType *curBuf;            /* current location */
    keyType *curKey;            /* current key in current node */
    unsigned int maxCt;         /* minimum # keys in node */
    int ks;                     /* sizeof key entry */
    bAdrType nextFreeAdr;       /* next free btree record address */
} hNode;

static hNode hList;             /* list of hNodes */
static hNode *h;                /* current hNode */

#define error(rc) lineError(__LINE__, rc)

static bErrType lineError(int lineno, bErrType rc) {
    if (rc == bErrIO || rc == bErrMemory)
        if (!bErrLineNo)
            bErrLineNo = lineno;
    return rc;
}

static bAdrType allocAdr(void) {
    bAdrType adr;
    adr = h->nextFreeAdr;
    h->nextFreeAdr += h->sectorSize;
    return adr;
}

static bErrType flush(bufType *buf) {
    int len;            /* number of bytes to write */

    /* flush buffer to disk */
    len = h->sectorSize;
    if (buf->adr == 0) len *= 3;        /* root */
    if (fseek(h->fp, buf->adr, SEEK_SET)) return error(bErrIO);
    if (fwrite(buf->p, len, 1, h->fp) != 1) return error(bErrIO);
    buf->modified = false;
    nDiskWrites++;
    return bErrOk;
}

static bErrType flushAll(void) {
    bErrType rc;        /* return code */
    bufType *buf;       /* buffer */

    if (h->root.modified)
        if ((rc = flush(&h->root)) != 0) return rc;

    buf = h->bufList.next;
    while (buf != &h->bufList) {
        if (buf->modified)
            if ((rc = flush(buf)) != 0) return rc;
        buf = buf->next;
    }
    return bErrOk;
}

static bErrType assignBuf(bAdrType adr, bufType **b) {
    /* assign buf to adr */
    bufType *buf;       /* buffer */
    bErrType rc;        /* return code */

    if (adr == 0) {
        *b = &h->root;
        return bErrOk;
    }

    /* search for buf with matching adr */
    buf = h->bufList.next;
    while (buf->next != &h->bufList) {
        if (buf->valid && buf->adr == adr) break;
        buf = buf->next;
    }

    /* either buf points to a match, or it's last one in list (LRR) */
    if (buf->valid) {
        if (buf->adr != adr) {
            if (buf->modified) {
                if ((rc = flush(buf)) != 0) return rc;
            }
            buf->adr = adr;
            buf->valid = false;
        }
    } else {
        buf->adr = adr;
    }

    /* remove from current position and place at front of list */
    buf->next->prev = buf->prev;
    buf->prev->next = buf->next;
    buf->next = h->bufList.next;
    buf->prev = &h->bufList;
    buf->next->prev = buf;
    buf->prev->next = buf;

    *b = buf;
    return bErrOk;
}

static bErrType writeDisk(bufType *buf) {
    /* write buf to disk */
    buf->valid = true;
    buf->modified = true;
    return bErrOk;
}

static bErrType readDisk(bAdrType adr, bufType **b) {
    /* read data into buf */
    int len;
    bufType *buf;       /* buffer */
    bErrType rc;        /* return code */

    if ((rc = assignBuf(adr, &buf)) != 0) return rc;
    if (!buf->valid) {
        len = h->sectorSize;
        if (adr == 0) len *= 3;         /* root */
        if (fseek(h->fp, adr, SEEK_SET)) return error(bErrIO);
        if (fread(buf->p, len, 1, h->fp) != 1) return error(bErrIO);
        buf->modified = false;
        buf->valid = true;
        nDiskReads++;
    }
    *b = buf;
    return bErrOk;
}

typedef enum { MODE_FIRST, MODE_MATCH } modeEnum;

static int search(
    bufType *buf,
    void *key,
    eAdrType rec,
    keyType **mkey,
    modeEnum mode) {
   /*
    * input:
    *   p                   pointer to node
    *   key                 key to find
    *   rec                 record address (dupkey only)
    * output:
    *   k                   pointer to keyType info
    * returns:
    *   CC_EQ               key = mkey
    *   CC_LT               key < mkey
    *   CC_GT               key > mkey
    */
    int cc;             /* condition code */
    int m;              /* midpoint of search */
    int lb;             /* lowerbound of binary search */
    int ub;             /* upperbound of binary search */
    bool foundDup;      /* true if found a duplicate key */

    /* scan current node for key using binary search */
    foundDup = false;
    lb = 0;
    ub = ct(buf) - 1;
    while (lb <= ub) {
        m = (lb + ub) / 2;
        *mkey = fkey(buf) + ks(m);
        cc = h->comp(key, key(*mkey));
        if (cc < 0)
            /* key less than key[m] */
            ub = m - 1;
        else if (cc > 0)
            /* key greater than key[m] */
            lb = m + 1;
        else {
            /* keys match */
            if (h->dupKeys) {
                switch (mode) {
                case MODE_FIRST:
                    /* backtrack to first key */
                    ub = m - 1;
                    foundDup = true;
                    break;
                case MODE_MATCH:
                    /* rec's must also match */
                    if (rec < rec(*mkey)) {
                        ub = m - 1;
                        cc = CC_LT;
                    } else if (rec > rec(*mkey)) {
                        lb = m + 1;
                        cc = CC_GT;
                    } else {
                        return CC_EQ;
                    }
                    break;
                }
            } else {
                return cc;
            }
        }
    }

    if (ct(buf) == 0) {
        /* empty list */
        *mkey = fkey(buf);
        return CC_LT;
    }

    if (h->dupKeys && (mode == MODE_FIRST) && foundDup) {
        /* next key is first key in set of duplicates */
        *mkey += ks(1);
        return CC_EQ;
    }

    /* didn't find key */
    return cc;
}

static bErrType scatterRoot(void) {
    bufType *gbuf;
    bufType *root;

    /* scatter gbuf to root */
    root = &h->root;
    gbuf = &h->gbuf;
    memcpy(fkey(root), fkey(gbuf), ks(ct(gbuf)));
    childLT(fkey(root)) = childLT(fkey(gbuf));
    ct(root) = ct(gbuf);
    leaf(root) = leaf(gbuf);
    return bErrOk;
}

static bErrType scatter(bufType *pbuf, keyType *pkey, int is, bufType **tmp) {
    bufType *gbuf;      /* gather buf */
    keyType *gkey;      /* gather buf key */
    bErrType rc;        /* return code */
    int iu;             /* number of tmp's used */
    int k0Min;          /* min #keys that can be mapped to tmp[0] */
    int knMin;          /* min #keys that can be mapped to tmp[1..3] */
    int k0Max;          /* max #keys that can be mapped to tmp[0] */
    int knMax;          /* max #keys that can be mapped to tmp[1..3] */
    int sw;             /* shift width */
    int len;            /* length of remainder of buf */
    int base;           /* base count distributed to tmps */
    int extra;          /* extra counts */
    int ct;
    int i;
   /*
    * input:
    *   pbuf                parent buffer of gathered keys
    *   pkey                where we insert a key if needed in parent
    *   is                  number of supplied tmps
    *   tmp                 array of tmp's to be used for scattering
    * output:
    *   tmp                 array of tmp's used for scattering
    */

    /* scatter gbuf to tmps, placing 3/4 max in each tmp */
    gbuf = &h->gbuf;
    gkey = fkey(gbuf);
    ct = ct(gbuf);

   /****************************************
    * determine number of tmps to use (iu) *
    ****************************************/
    iu = is;

    /* determine limits */
    if (leaf(gbuf)) {
        /* minus 1 to allow for insertion */
        k0Max = h->maxCt - 1;
        knMax = h->maxCt - 1;
        /* plus 1 to allow for deletion */
        k0Min = (h->maxCt / 2) + 1;
        knMin = (h->maxCt / 2) + 1;
    } else {
        /* can hold an extra gbuf key as it's translated to a LT pointer */
        k0Max = h->maxCt - 1;
        knMax = h->maxCt;
        k0Min = (h->maxCt / 2) + 1;
        knMin = ((h->maxCt + 1) / 2) + 1;
    }

    /* calculate iu, number of tmps to use */
    while (1) {
        if (iu == 0 || ct > (k0Max + (iu - 1) * knMax)) {
            /* add a buffer */
            if ((rc = assignBuf(allocAdr(), &tmp[iu])) != 0)
                return rc;
            /* update sequential links */
            if (leaf(gbuf)) {
                /* adjust sequential links */
                if (iu == 0) {
                    /* no tmps supplied when splitting root for first time */
                    prev(tmp[0]) = 0;
                    next(tmp[0]) = 0;
                } else {
                    prev(tmp[iu]) = tmp[iu - 1]->adr;
                    next(tmp[iu]) = next(tmp[iu - 1]);
                    next(tmp[iu - 1]) = tmp[iu]->adr;
                }
            }
            iu++;
            nNodesIns++;
        } else if (iu > 1 && ct < (k0Min + (iu - 1) * knMin)) {
            /* del a buffer */
            iu--;
            /* adjust sequential links */
            if (leaf(gbuf) && tmp[iu - 1]->adr) {
                next(tmp[iu - 1]) = next(tmp[iu]);
            }
            next(tmp[iu - 1]) = next(tmp[iu]);
            nNodesDel++;
        } else {
            break;
        }
    }

    /* establish count for each tmp used */
    base = ct / iu;
    extra = ct % iu;
    for (i = 0; i < iu; i++) {
        int n;
        n = base;
        /* distribute extras, one at a time */
        /* don't do to 1st node, as it may be internal and can't hold it */
        if (i && extra) {
            n++;
            extra--;
        }
        ct(tmp[i]) = n;
    }

   /**************************************
    * update sequential links and parent *
    **************************************/
    if (iu != is) {
        /* link last node to next */
        if (leaf(gbuf) && next(tmp[iu - 1])) {
            bufType *buf;
            if ((rc = readDisk(next(tmp[iu - 1]), &buf)) != 0) return rc;
            prev(buf) = tmp[iu - 1]->adr;
            if ((rc = writeDisk(buf)) != 0) return rc;
        }

        /* shift keys in parent */
        sw = ks(iu - is);
        if (sw < 0) {
            len = ks(ct(pbuf)) - (pkey - fkey(pbuf)) + sw;
            memmove(pkey, pkey - sw, len);
        } else {
            len = ks(ct(pbuf)) - (pkey - fkey(pbuf));
            memmove(pkey + sw, pkey, len);
        }

        /* don't count LT buffer for empty parent */
        if (ct(pbuf))
            ct(pbuf) += iu - is;
        else
            ct(pbuf) += iu - is - 1;
    }

   /*******************************
    * distribute keys to children *
    *******************************/
    for (i = 0; i < iu; i++) {
        /* update LT pointer and parent nodes */
        if (leaf(gbuf)) {
            /* update LT, tmp[i] */
            childLT(fkey(tmp[i])) = 0;
            /* update parent */
            if (i == 0) {
                childLT(pkey) = tmp[i]->adr;
            } else {
                memcpy(pkey, gkey, ks(1));
                childGE(pkey) = tmp[i]->adr;
                pkey += ks(1);
            }
        } else {
            if (i == 0) {
                /* update LT, tmp[0] */
                childLT(fkey(tmp[i])) = childLT(gkey);
                /* update LT, parent */
                childLT(pkey) = tmp[i]->adr;
            } else {
                /* update LT, tmp[i] */
                childLT(fkey(tmp[i])) = childGE(gkey);
                /* update parent key */
                memcpy(pkey, gkey, ks(1));
                childGE(pkey) = tmp[i]->adr;
                gkey += ks(1);
                pkey += ks(1);
                ct(tmp[i])--;
            }
        }

        /* install keys, tmp[i] */
        memcpy(fkey(tmp[i]), gkey, ks(ct(tmp[i])));
        leaf(tmp[i]) = leaf(gbuf);
        gkey += ks(ct(tmp[i]));
    }
    leaf(pbuf) = false;

   /************************
    * write modified nodes *
    ************************/
    if ((rc = writeDisk(pbuf)) != 0) return rc;
    for (i = 0; i < iu; i++)
        if ((rc = writeDisk(tmp[i])) != 0) return rc;
    return bErrOk;
}

static bErrType gatherRoot(void) {
    bufType *gbuf;
    bufType *root;

    /* gather root to gbuf */
    root = &h->root;
    gbuf = &h->gbuf;
    memcpy(p(gbuf), root->p, 3 * h->sectorSize);
    leaf(gbuf) = leaf(root);
    ct(root) = 0;
    return bErrOk;
}

static bErrType gather(bufType *pbuf, keyType **pkey, bufType **tmp) {
    bErrType rc;        /* return code */
    bufType *gbuf;
    keyType *gkey;
   /*
    * input:
    *   pbuf                parent buffer
    *   pkey                pointer to match key in parent
    * output:
    *   tmp                 buffers to use for scatter
    *   pkey                pointer to match key in parent
    * returns:
    *   bErrOk              operation successful
    * notes:
    *   Gather 3 buffers to gbuf.  Setup for subsequent scatter by
    *   doing the following:
    *    - setup tmp buffer array for scattered buffers
    *    - adjust pkey to point to first key of 3 buffers
    */

    /* find 3 adjacent buffers */
    if (*pkey == lkey(pbuf)) *pkey -= ks(1);
    if ((rc = readDisk(childLT(*pkey), &tmp[0])) != 0) return rc;
    if ((rc = readDisk(childGE(*pkey), &tmp[1])) != 0) return rc;
    if ((rc = readDisk(childGE(*pkey + ks(1)), &tmp[2])) != 0) return rc;

    /* gather nodes to gbuf */
    gbuf = &h->gbuf;
    gkey = fkey(gbuf);

    /* tmp[0] */
    childLT(gkey) = childLT(fkey(tmp[0]));
    memcpy(gkey, fkey(tmp[0]), ks(ct(tmp[0])));
    gkey += ks(ct(tmp[0]));
    ct(gbuf) = ct(tmp[0]);

    /* tmp[1] */
    if (!leaf(tmp[1])) {
        memcpy(gkey, *pkey, ks(1));
        childGE(gkey) = childLT(fkey(tmp[1]));
        ct(gbuf)++;
        gkey += ks(1);
    }
    memcpy(gkey, fkey(tmp[1]), ks(ct(tmp[1])));
    gkey += ks(ct(tmp[1]));
    ct(gbuf) += ct(tmp[1]);

    /* tmp[2] */
    if (!leaf(tmp[2])) {
        memcpy(gkey, *pkey + ks(1), ks(1));
        childGE(gkey) = childLT(fkey(tmp[2]));
        ct(gbuf)++;
        gkey += ks(1);
    }
    memcpy(gkey, fkey(tmp[2]), ks(ct(tmp[2])));
    ct(gbuf) += ct(tmp[2]);

    leaf(gbuf) = leaf(tmp[0]);
    return bErrOk;
}

bErrType bOpen(bOpenType info, bHandleType *handle) {
    bErrType rc;        /* return code */
    int bufCt;          /* number of tmp buffers */
    bufType *buf;       /* buffer */
    int maxCt;          /* maximum number of keys in a node */
    bufType *root;
    int i;
    nodeType *p;

    if ((info.sectorSize < sizeof(hNode)) || (info.sectorSize % 4))
        return bErrSectorSize;

    /* determine sizes and offsets */
    /* leaf/n, prev, next, [childLT,key,rec]... childGE */
    /* ensure that there are at least 3 children/parent for gather/scatter */
    maxCt = info.sectorSize - (sizeof(nodeType) - sizeof(keyType));
    maxCt /= sizeof(bAdrType) + info.keySize + sizeof(eAdrType);
    if (maxCt < 6) return bErrSectorSize;

    /* copy parms to hNode */
    if ((h = malloc(sizeof(hNode))) == NULL) return error(bErrMemory);
    memset(h, 0, sizeof(hNode));
    h->keySize = info.keySize;
    h->dupKeys = info.dupKeys;
    h->sectorSize = info.sectorSize;
    h->comp = info.comp;

    /* childLT, key, rec */
    h->ks = sizeof(bAdrType) + h->keySize + sizeof(eAdrType);
    h->maxCt = maxCt;

    /* Allocate buflist.
     * During insert/delete, need simultaneous access to 7 buffers:
     *  - 4 adjacent child bufs
     *  - 1 parent buf
     *  - 1 next sequential link
     *  - 1 lastGE
     */
    bufCt = 7;
    if ((h->malloc1 = malloc(bufCt * sizeof(bufType))) == NULL)
        return error(bErrMemory);
    buf = h->malloc1;

    /*
     * Allocate bufs.
     * We need space for the following:
     *  - bufCt buffers, of size sectorSize
     *  - 1 buffer for root, of size 3*sectorSize
     *  - 1 buffer for gbuf, size 3*sectorsize + 2 extra keys
     *    to allow for LT pointers in last 2 nodes when gathering 3 full nodes
     */
    if ((h->malloc2 = malloc((bufCt + 6) * h->sectorSize + 2 * h->ks)) == NULL)
        return error(bErrMemory);
    p = h->malloc2;

    /* initialize buflist */
    h->bufList.next = buf;
    h->bufList.prev = buf + (bufCt - 1);
    for (i = 0; i < bufCt; i++) {
        buf->next = buf + 1;
        buf->prev = buf - 1;
        buf->modified = false;
        buf->valid = false;
        buf->p = p;
        p = (nodeType *)((char *)p + h->sectorSize);
        buf++;
    }
    h->bufList.next->prev = &h->bufList;
    h->bufList.prev->next = &h->bufList;

    /* initialize root */
    root = &h->root;
    root->p = p;
    p = (nodeType *)((char *)p + 3 * h->sectorSize);
    h->gbuf.p = p;      /* done last to include extra 2 keys */
    h->curBuf = NULL;
    h->curKey = NULL;

    /* initialize root */
    if ((h->fp = fopen(info.iName, "r+b")) != NULL) {
        /* open an existing database */
        if ((rc = readDisk(0, &root)) != 0) return rc;
        if (fseek(h->fp, 0, SEEK_END)) return error(bErrIO);
        if ((h->nextFreeAdr = ftell(h->fp)) == -1) return error(bErrIO);
    } else if ((h->fp = fopen(info.iName, "w+b")) != NULL) {
        /* initialize root */
        memset(root->p, 0, 3 * h->sectorSize);
        leaf(root) = 1;
        h->nextFreeAdr = 3 * h->sectorSize;
    } else {
        /* something's wrong */
        free(h);
        return bErrFileNotOpen;
    }
    /* append node to hList */
    if (hList.next) {
        h->prev = hList.next;
        h->next = &hList;
        h->prev->next = h;
        h->next->prev = h;
    } else {
        /* first item in hList */
        h->prev = h->next = &hList;
        hList.next = hList.prev = h;
    }

    *handle = h;
    return bErrOk;
}

bErrType bClose(bHandleType handle) {
    h = handle;
    if (h == NULL) return bErrOk;

    /* remove from list */
    if (h->next) {
        h->next->prev = h->prev;
        h->prev->next = h->next;
    }

    /* flush idx */
    if (h->fp) {
        flushAll();
        fclose(h->fp);
    }

    if (h->malloc2) free(h->malloc2);
    if (h->malloc1) free(h->malloc1);
    free(h);
    return bErrOk;
}

bErrType bFindKey(bHandleType handle, void *key, eAdrType *rec) {
    keyType *mkey;      /* matched key */
    bufType *buf;       /* buffer */
    bErrType rc;        /* return code */

    h = handle;
    buf = &h->root;

    /* find key, and return address */
    while (1) {
        if (leaf(buf)) {
            if (search(buf, key, 0, &mkey, MODE_FIRST) == 0) {
                *rec = rec(mkey);
                h->curBuf = buf;
                h->curKey = mkey;
                return bErrOk;
            } else {
                return bErrKeyNotFound;
            }
        } else {
            if (search(buf, key, 0, &mkey, MODE_FIRST) < 0) {
                if ((rc = readDisk(childLT(mkey), &buf)) != 0) return rc;
            } else {
                if ((rc = readDisk(childGE(mkey), &buf)) != 0) return rc;
            }
        }
    }
}

bErrType bInsertKey(bHandleType handle, void *key, eAdrType rec) {
    int rc;                     /* return code */
    keyType *mkey;              /* match key */
    int len;                    /* length to shift */
    int cc;                     /* condition code */
    bufType *buf, *root;
    bufType *tmp[4];
    unsigned int keyOff;
    bool lastGEvalid;           /* true if GE branch taken */
    bool lastLTvalid;           /* true if LT branch taken after GE branch */
    bAdrType lastGE;            /* last childGE traversed */
    unsigned int lastGEkey;     /* last childGE key traversed */
    int height;                 /* height of tree */

    h = handle;
    root = &h->root;
    lastGEvalid = false;
    lastLTvalid = false;

    /* check for full root */
    if (ct(root) == 3 * h->maxCt) {
        /* gather root and scatter to 4 bufs */
        /* this increases btree height by 1 */
        if ((rc = gatherRoot()) != 0) return rc;
        if ((rc = scatter(root, fkey(root), 0, tmp)) != 0) return rc;
    }
    buf = root;
    height = 0;
    while (1) {
        if (leaf(buf)) {
            /* in leaf, and there's room guaranteed */

            if (height > maxHeight) maxHeight = height;

            /* set mkey to point to insertion point */
            switch (search(buf, key, rec, &mkey, MODE_MATCH)) {
            case CC_LT:         /* key < mkey */
                if (!h->dupKeys && h->comp(key, mkey) == CC_EQ)
                    return bErrDupKeys;
                break;
            case CC_EQ:         /* key = mkey */
                return bErrDupKeys;
            case CC_GT:         /* key > mkey */
                if (!h->dupKeys && h->comp(key, mkey) == CC_EQ)
                    return bErrDupKeys;
                mkey += ks(1);
                break;
            }

            /* shift items GE key to right */
            keyOff = mkey - fkey(buf);
            len = ks(ct(buf)) - keyOff;
            if (len) memmove(mkey + ks(1), mkey, len);

            /* insert new key */
            memcpy(key(mkey), key, h->keySize);
            rec(mkey) = rec;
            childGE(mkey) = 0;
            ct(buf)++;
            if ((rc = writeDisk(buf)) != 0) return rc;

            /* if new key is first key, then fixup lastGE key */
            if (!keyOff && lastLTvalid) {
                bufType *tbuf;
                keyType *tkey;
                if ((rc = readDisk(lastGE, &tbuf)) != 0) return rc;
                tkey = fkey(tbuf) + lastGEkey;
                memcpy(key(tkey), key, h->keySize);
                rec(tkey) = rec;
                if ((rc = writeDisk(tbuf)) != 0) return rc;
            }
            nKeysIns++;
            break;
        } else {
            /* internal node, descend to child */
            bufType *cbuf;      /* child buf */

            height++;

            /* read child */
            if ((cc = search(buf, key, rec, &mkey, MODE_MATCH)) < 0) {
                if ((rc = readDisk(childLT(mkey), &cbuf)) != 0) return rc;
            } else {
                if ((rc = readDisk(childGE(mkey), &cbuf)) != 0) return rc;
            }

            /* check for room in child */
            if (ct(cbuf) == h->maxCt) {
                /* gather 3 bufs and scatter */
                if ((rc = gather(buf, &mkey, tmp)) != 0) return rc;
                if ((rc = scatter(buf, mkey, 3, tmp)) != 0) return rc;
                /* read child */
                if ((cc = search(buf, key, rec, &mkey, MODE_MATCH)) < 0) {
                    if ((rc = readDisk(childLT(mkey), &cbuf)) != 0) return rc;
                } else {
                    if ((rc = readDisk(childGE(mkey), &cbuf)) != 0) return rc;
                }
            }
            if (cc >= 0 || mkey != fkey(buf)) {
                lastGEvalid = true;
                lastLTvalid = false;
                lastGE = buf->adr;
                lastGEkey = mkey - fkey(buf);
                if (cc < 0) lastGEkey -= ks(1);
            } else {
                if (lastGEvalid) lastLTvalid = true;
            }
            buf = cbuf;
        }
    }
    return bErrOk;
}

bErrType bDeleteKey(bHandleType handle, void *key, eAdrType *rec) {
    int rc;                     /* return code */
    keyType *mkey;              /* match key */
    int len;                    /* length to shift */
    int cc;                     /* condition code */
    bufType *buf;               /* buffer */
    bufType *tmp[4];
    unsigned int keyOff;
    bool lastGEvalid;           /* true if GE branch taken */
    bool lastLTvalid;           /* true if LT branch taken after GE branch */
    bAdrType lastGE;            /* last childGE traversed */
    unsigned int lastGEkey;     /* last childGE key traversed */
    bufType *root;
    bufType *gbuf;

    h = handle;
    root = &h->root;
    gbuf = &h->gbuf;
    lastGEvalid = false;
    lastLTvalid = false;

    buf = root;
    while (1) {
        if (leaf(buf)) {
            /* set mkey to point to deletion point */
            if (search(buf, key, *rec, &mkey, MODE_MATCH) == 0)
                *rec = rec(mkey);
            else
                return bErrKeyNotFound;

            /* shift items GT key to left */
            keyOff = mkey - fkey(buf);
            len = ks(ct(buf) - 1) - keyOff;
            if (len) memmove(mkey, mkey + ks(1), len);
            ct(buf)--;
            if ((rc = writeDisk(buf)) != 0) return rc;

            /* if deleted key is first key, then fixup lastGE key */
            if (!keyOff && lastLTvalid) {
                bufType *tbuf;
                keyType *tkey;
                if ((rc = readDisk(lastGE, &tbuf)) != 0) return rc;
                tkey = fkey(tbuf) + lastGEkey;
                memcpy(key(tkey), mkey, h->keySize);
                rec(tkey) = rec(mkey);
                if ((rc = writeDisk(tbuf)) != 0) return rc;
            }
            nKeysDel++;
            break;
        } else {
            /* internal node, descend to child */
            bufType *cbuf;      /* child buf */

            /* read child */
            if ((cc = search(buf, key, *rec, &mkey, MODE_MATCH)) < 0) {
                if ((rc = readDisk(childLT(mkey), &cbuf)) != 0) return rc;
            } else {
                if ((rc = readDisk(childGE(mkey), &cbuf)) != 0) return rc;
            }

            /* check for room to delete */
            if (ct(cbuf) == h->maxCt / 2) {
                /* gather 3 bufs and scatter */
                if ((rc = gather(buf, &mkey, tmp)) != 0) return rc;

                /* if last 3 bufs in root, and count is low enough... */
                if (buf == root
                    && ct(root) == 2
                    && ct(gbuf) < (3 * (3 * h->maxCt)) / 4) {
                    /* collapse tree by one level */
                    scatterRoot();
                    nNodesDel += 3;
                    continue;
                }

                if ((rc = scatter(buf, mkey, 3, tmp)) != 0) return rc;

                /* read child */
                if ((cc = search(buf, key, *rec, &mkey, MODE_MATCH)) < 0) {
                    if ((rc = readDisk(childLT(mkey), &cbuf)) != 0) return rc;
                } else {
                    if ((rc = readDisk(childGE(mkey), &cbuf)) != 0) return rc;
                }
            }
            if (cc >= 0 || mkey != fkey(buf)) {
                lastGEvalid = true;
                lastLTvalid = false;
                lastGE = buf->adr;
                lastGEkey = mkey - fkey(buf);
                if (cc < 0) lastGEkey -= ks(1);
            } else {
                if (lastGEvalid) lastLTvalid = true;
            }
            buf = cbuf;
        }
    }
    return bErrOk;
}

bErrType bFindFirstKey(bHandleType handle, void *key, eAdrType *rec) {
    bErrType rc;        /* return code */
    bufType *buf;       /* buffer */

    h = handle;
    buf = &h->root;
    while (!leaf(buf)) {
        if ((rc = readDisk(childLT(fkey(buf)), &buf)) != 0) return rc;
    }
    if (ct(buf) == 0) return bErrKeyNotFound;
    memcpy(key, key(fkey(buf)), h->keySize);
    *rec = rec(fkey(buf));
    h->curBuf = buf;
    h->curKey = fkey(buf);
    return bErrOk;
}

bErrType bFindLastKey(bHandleType handle, void *key, eAdrType *rec) {
    bErrType rc;        /* return code */
    bufType *buf;       /* buffer */

    h = handle;
    buf = &h->root;
    while (!leaf(buf)) {
        if ((rc = readDisk(childGE(lkey(buf)), &buf)) != 0) return rc;
    }
    if (ct(buf) == 0) return bErrKeyNotFound;
    memcpy(key, key(lkey(buf)), h->keySize);
    *rec = rec(lkey(buf));
    h->curBuf = buf;
    h->curKey = lkey(buf);
    return bErrOk;
}

bErrType bFindNextKey(bHandleType handle, void *key, eAdrType *rec) {
    bErrType rc;        /* return code */
    keyType *nkey;      /* next key */
    bufType *buf;       /* buffer */

    h = handle;
    if ((buf = h->curBuf) == NULL) return bErrKeyNotFound;
    if (h->curKey == lkey(buf)) {
        /* current key is last key in leaf node */
        if (next(buf)) {
            /* fetch next set */
            if ((rc = readDisk(next(buf), &buf)) != 0) return rc;
            nkey = fkey(buf);
        } else {
            /* no more sets */
            return bErrKeyNotFound;
        }
    } else {
        /* bump to next key */
        nkey = h->curKey + ks(1);
    }
    memcpy(key, key(nkey), h->keySize);
    *rec = rec(nkey);
    h->curBuf = buf;
    h->curKey = nkey;
    return bErrOk;
}

bErrType bFindPrevKey(bHandleType handle, void *key, eAdrType *rec) {
    bErrType rc;        /* return code */
    keyType *pkey;      /* previous key */
    keyType *fkey;      /* first key */
    bufType *buf;       /* buffer */

    h = handle;
    if ((buf = h->curBuf) == NULL) return bErrKeyNotFound;
    fkey = fkey(buf);
    if (h->curKey == fkey) {
        /* current key is first key in leaf node */
        if (prev(buf)) {
            /* fetch previous set */
            if ((rc = readDisk(prev(buf), &buf)) != 0) return rc;
            pkey = fkey(buf) + ks((ct(buf) - 1));
        } else {
            /* no more sets */
            return bErrKeyNotFound;
        }
    } else {
        /* bump to previous key */
        pkey = h->curKey - ks(1);
    }
    memcpy(key, key(pkey), h->keySize);
    *rec = rec(pkey);
    h->curBuf = buf;
    h->curKey = pkey;
    return bErrOk;
}

int comp(const void *key1, const void *key2) {
    unsigned int const *p1;
    unsigned int const *p2;
    p1 = key1;
    p2 = key2;
    return (*p1 == *p2) ? CC_EQ : (*p1 > *p2) ? CC_GT : CC_LT;
}

int main(void) {
    bOpenType info;
    bHandleType handle;
    bErrType rc;
    unsigned int key;

    remove("t1.dat");
    info.iName = "t1.dat";
    info.keySize = sizeof(int);
    info.dupKeys = false;
    info.sectorSize = 256;
    info.comp = comp;
    if ((rc = bOpen(info, &handle)) != bErrOk) {
        printf("line %d: rc = %d\n", __LINE__, rc);
        exit(0);
    }
    key = 0x11;
    if ((rc = bInsertKey(handle, &key, 0x300)) != bErrOk) {
        printf("line %d: rc = %d\n", __LINE__, rc);
        exit(0);
    }
    bClose(handle);

    printf("statistics:\n");
    printf("    maximum height: %8d\n", maxHeight);
    printf("    nodes inserted: %8d\n", nNodesIns);
    printf("    nodes deleted:  %8d\n", nNodesDel);
    printf("    keys inserted:  %8d\n", nKeysIns);
    printf("    keys deleted:   %8d\n", nKeysDel);
    printf("    disk reads:     %8d\n", nDiskReads);
    printf("    disk writes:    %8d\n", nDiskWrites);

    return 0;
}
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/niemann/s_bib.htm
Bibliography
Aho, Alfred V. and Jeffrey D. Ullman [1983]. Data Structures and Algorithms. Addison-Wesley, Reading, Massachusetts.

Cormen, Thomas H., Charles E. Leiserson and Ronald L. Rivest [1990]. Introduction to Algorithms. McGraw-Hill, New York.

Knuth, Donald E. [1998]. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, Reading, Massachusetts.

Pearson, Peter K. [1990]. Fast Hashing of Variable-Length Text Strings. Communications of the ACM, 33(6):677-680, June 1990.

Pugh, William [1990]. Skip Lists: A Probabilistic Alternative to Balanced Trees. Communications of the ACM, 33(6):668-676, June 1990.
Algorithm Animations
Data Structures and Algorithms
Animated Algorithms
The following pages contain animations of some of the algorithms covered in this text. Please note that
a. Some of the Java classes take a very long time to load!
b. These animations are part of an ongoing effort to enhance the data structures and algorithms course, and are thus subject to continuous improvement. Comments are most welcome!
UWA animations
Please note that these are under active development!

Sorting algorithms
a. Woi Ang's Insertion Sort Animation
b. Woi Ang's QuickSort Animation
c. Chien Wei Tan's QuickSort Animation
d. Woi Ang's Bin Sort Animation
e. Woi Ang's Radix Sort Animation
f. Woi Ang's Priority Queue Animation
Searching algorithms
a. Mervyn Ng's Red Black Tree Animation
b. Woi Ang's Hash Table Construction Animation
c. Woi Ang's Optimal Binary Search Tree Animation

Greedy algorithms
a. Woi Ang's Huffman Encoding & Decoding Animation

Dynamic algorithms
a. Woi Ang's Matrix Chain Multiplication Animation

Graph algorithms
1. Mervyn Ng's Minimum Spanning Tree Animation
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/alg_anim.html (1 of 2) [3/23/2004 3:28:00 PM]
2. Mervyn Ng's Animation of Dijkstra's Algorithm

If you find the animations useful, but want them a little closer to home, you can download a file of them all: anim.tar.gz. They are also available by ftp. If you do download them, please don't forget to acknowledge Woi Ang as the author wherever you use them, and I'd appreciate it if you'd let me know. And, of course, if you have any suggestions or comments, they're most welcome: morris@ee.uwa.edu.au.

Back to the Table of Contents
© John Morris, 1998
Quicksort Animation
QSort Test Demonstration
If you have any comments or suggestions for improvements, please feel free to email me.
© Chien Wei Tan, 1998
http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/Java/q_sort/tqs_new.html [3/23/2004 3:28:06 PM]