

Linked lists
A linked list is a simple way to store a list of items. Its main advantage over an array is that items can be inserted and deleted very quickly, and that the list can change size dynamically. The disadvantages are that you cannot directly reference the nth element, and that each element carries a memory overhead.

How does it work?

A linked list consists of a number of nodes, each of which consists of a piece of data and a pointer to the next node. In the picture, each big rectangle is a node: the left block is the data (in this case, a character) and the right block is the pointer. The last pointer is a null pointer. The nodes can reside anywhere in memory, independent of each other, and are allocated as needed. The variable used to access the list is usually a pointer to the first node (called the head). Sometimes it is also useful to have a pointer to the last node (called the tail). I will call these the head and tail pointers respectively.

To insert an element X after node A, do the following:
1. Allocate memory for a new node, B.
2. Make the data element of B equal to X.
3. Make the pointer of B equal the pointer of A.
4. Make the pointer of A equal the address of B.

To insert an element X at the front of a list, do the following:
1. Allocate memory for a new node, B.
2. Make the data element of B equal to X.
3. Make the pointer of B equal the head pointer.
4. Make the head pointer equal the address of B.

To delete a node, do the following:
1. Save a copy of the pointer to the node.
2. Set the previous element's pointer (or the head pointer, if deleting the first element) to the current element's pointer.
3. Dispose of the memory allocated to the deleted node, using the saved pointer.

1.2. Trees
A tree is a hierarchical data structure. An example is a directory tree (excluding any files in the tree): each directory consists of a name and subdirectories. Trees are typically drawn as in the diagram below. The circles are the data elements (the names of directories) and the lines below each circle join it to its subtrees.


The data elements in a tree are called nodes. The node at the "top" of a tree is called the root (shown in red). The children of a node are the nodes below it and connected to it (like subdirectories in a directory tree). A leaf is a node with no children (sometimes trees will only store actual data in the leaves). A node's parent is the node above it and connected to it (every node except the root has a parent). Descendants and ancestors are defined in the same way as family relationships e.g. a parent's parent's parent is an ancestor and a child's child's child is a descendant. A subtree consists of a node and all its descendants (which is itself a tree, with the root being the chosen node).

Binary trees
The rest of this page will deal almost entirely with binary trees. A binary tree is a tree in which every node has at most two children, and the order of the children is usually important. The children are usually referred to as the left child and the right child. Also, if there is only one child, significance is usually still given to whether it is a left or a right child. The left subtree of a node is the subtree consisting of the left child and all its descendants, and likewise for the right subtree.

Binary search trees

A binary search tree is a binary tree in which the data elements are of a type with some kind of ordering (e.g. integers or strings). The left subtree of any given node contains only elements smaller than that node, and the right subtree contains only elements larger than that node (elements that are equal can go either way, as long as you are consistent).

Binary search trees are usually constructed one element at a time. To add a new element, start at the root and go left if the new element is smaller than the root and right otherwise. Repeat this at each node until you find an unoccupied space and add the new node there. To search for a given element, follow the same process as with adding a new element until you either find the element you want or find an empty spot, in which case the element isn't there. You can also convert a binary search tree into a sorted list of the elements by doing an in-order walk of the tree. This is the basis of the binary tree sort.

Walking a tree
A "walk" of a tree is basically a way of ordering its elements. There is usually a particular action that has to be performed on each element in the given order. There are three standard types of walk. All of them are recursive, and all of them involve outputting the root, walking the left subtree and walking the right subtree. The difference is in the ordering. They are as follows: Pre-order walk Visit the root first, then the left and right subtrees In-order walk Visit the left subtree, then the root, then the right subtree Post-order walk Visit the left and right subtrees first, then the root The in-order walk is particularly useful on a binary search tree, because the elements are visited in sorted order. Walks are generally useful on trees where each node has some logical relation to its subtrees. For example, a tree could have arithmetic operators at nodes, which represent an operation to be applied to the subtrees, and numbers at the leaves. Then an in-order walk would produce normal notation while a post-order walk would produce the Reverse Polish or postfix notation used by old HP calculators.

1.3. Hash tables

A hash table is a data structure useful for storing a set of items in which the order is not important. It is useful if you want to build up a set one element at a time, while at every stage being able to tell whether a particular element is in the table (e.g. for avoiding duplicates).

How does it work?

The basis of a hash table is a hash function. A hash function is a function that takes an element of whatever data type you are storing (integer, string, etc) and outputs an integer in a certain range. A good hash function is one that gets a fairly even distribution of the numbers in the output range, even if the input values are very poorly distributed (for example, English words are poorly distributed, since they only use 26 different letters and some letters and combinations of letters occur far more frequently than others). Many hash functions work by computing a large value modulo the table size; in this case it is generally a good idea to make the table size a prime number.

The hash table itself consists of an array whose indexes are the range of the hash function. You should always try to make the table at least 50% bigger than the number of items that will ever need to be stored in it, and the larger the better. The elements of the array are just elements of the set you are using (you can also store associated values with each key). You also need a way to specify whether any given array entry is "in use" or not. Elements can occur anywhere in the hash table, with the condition that there must be no unused entries between an element's hash value and the element's actual address.

To add an element to the hash table, first apply the hash function to it to get a hash value. Then go to that index in the array, and search from there until you find a table entry that isn't occupied (possibly wrapping around at the end of the array). Then place the element in the empty table entry.

Searching for an element is described on the searching page.

1.4. Circular queues

A circular queue is a particular implementation of a queue. It is very efficient. It is also quite useful in low level code, because insertion and deletion are totally independent, which means that you don't have to worry about an interrupt handler trying to do an insertion at the same time as your main code is doing a deletion.

How does it work?

A circular queue consists of an array that contains the items in the queue, two array indexes and an optional length. The indexes are called the head and tail pointers and are labelled H and T on the diagram.

The head pointer points to the first element in the queue, and the tail pointer points just beyond the last element in the queue. If the tail pointer is before the head pointer, the queue wraps around the end of the array.

Is the queue empty or full?

There is a problem with this: both an empty queue and a full queue would be indicated by having the head and tail point to the same element. There are two ways around this: either maintain a variable with the number of items in the queue, or create the array with one more element than you will actually need, so that the queue is never full.

Insertion and deletion are very simple. To insert, write the element to the tail index and increment the tail, wrapping if necessary. To delete, save the head element and increment the head, wrapping if necessary. Don't use a modulus operator for wrapping (mod in Pascal, % in C) as this is very slow. Either use an if statement or (even better) make the size of the array a power of two and simulate the mod with a binary AND (& in C).

1.5. Bulk move queues

A bulk move queue is a particular implementation of a queue. It is not quite as efficient as a circular queue and uses much more memory, but this is made up for by its simplicity.

How does it work?

It works very much like a circular queue, except that it doesn't "wrap around". Instead, you make the array much bigger than the maximum number of items in the queue. When you run out of room at the end of the array, you move the entire contents of the queue back to the front of the array (this is the "bulk move" from which the structure gets its name).

Insertion and deletion are very simple. To insert, write the element to the tail index and increment the tail. To delete, save the head element and increment the head. If after an insertion the tail points beyond the end of the array, then do the following:
1. Copy the contents of the queue to the front of the array (I recommend the move procedure in Pascal or the memcpy function in C, rather than doing it manually).
2. Set the head pointer to the new head of the queue and the tail pointer to the new tail of the queue.
Note that if the queue occupies more than half the array, you must be careful not to overwrite the frontmost queue elements before they are copied, because the old and new queue positions will overlap (and memcpy doesn't allow overlap - use memmove in this case).

1.6. Heaps
A heap is a type of tree. It is mostly useful as an implementation of a priority queue, and it is also the data structure used in heapsort. A heap is a binary tree with the properties that it is perfectly balanced and that any node is larger than both its children (or smaller than both its children - you can choose which definition to use based on the problem at hand). The operations that can be performed on a heap are insertion of an item into the heap and removal of the root of the heap. Although a heap is a tree, it is well suited to an array representation: the root is element 1, and the children of node i are 2i and 2i+1. For example, the children of the root are elements 2 and 3, and the children of element 3 are elements 6 and 7.


Adding an element

1. Add the new element to the next available slot (the next array index).
2. While the element is greater than its parent, swap it with the parent.

Deleting the root

1. Replace the root with the last element.
2. While this element is smaller than either of its children, swap it with the larger child (if there are two children).

1.7. Stacks
What is a stack?
A stack is a data structure that works like a stack of items, such as plates. Items can be placed onto the stack, and the most recently added item can be removed at any point. This principle is sometimes referred to as LIFO (which stands for Last In First Out) because the last item onto the stack is the first one off it.

What's it for?
Stacks have a number of uses. One of them is recursion: although the compiler usually hides the details for you, the arguments to procedures as well as the return addresses are placed on a stack and removed when the procedure exits. You might need a stack if you are simulating recursion. Another use is to evaluate expressions in postfix or Reverse Polish notation (used by old HP calculators). Basically place numbers on the stack as they are encountered, and whenever an operator is found, pop the top two numbers off the stack, apply the operator to them and push the result back on the stack.

Stacks are easier to implement than queues because you only ever work at one end. The simplest method is to keep an array that contains the items (with the bottom stack element at the start of the array) and a count of how many items there are. To push an item onto the stack, increase the count and write the item into the array. To pop an item off the stack, copy the data from the array and decrease the count. The only disadvantages of this approach are the limits imposed by arrays in some environments, such as the 64K limit on data structures in real mode environments (e.g. Turbo Pascal), and the inability to easily resize an array (although you can get around this in Pascal with dynamic allocation). It is possible to implement a stack using a linked list, but it is a lot more complicated.

1.8. Queues
What is a queue?
A queue is a data structure that takes its name from a physical queue, such as at a supermarket. Items are inserted into a queue at one end and removed at the other. This principle is often referred to as FIFO (which stands for First In First Out) because the first item into the queue is the first one out.

What's it for?
A queue is essential to any breadth first search. It can also be used to simulate things like a printer queue that operates on a "first come, first served" basis.

There are a number of ways of implementing queues. One method that is fairly efficient is a circular queue. A slightly easier method is a bulk move queue. I sometimes use my own variety, in which I split the queue into two parts with an array for each: I remove items from the one and add to the other, and when the one I'm removing from is empty I swap the two. A queue can also be implemented as a linked list, but this isn't usually a good idea.

1.9. Priority queues

A priority queue is similar to a queue. The difference is that every item has an associated priority, and instead of removing the item that has been in the queue the longest, you remove the item with the highest priority. This is like a print queue, where higher priority jobs are printed first. Another use of a priority queue is simulations, where the priority of an event is the time that it will take place. Thus events can be scheduled in any order, and then simulated in chronological order.

For priority queues that could get very large, a heap is by far the best option. However, for small priority queues (say less than 50 elements) a simpler solution like a sorted array or a sorted linked list will probably run faster.

2. Algorithms
Algorithms are one of the two building blocks of a program, the other being data structures. An algorithm is a recipe for making a program or a component of a program. Having not only a selection of standard algorithms but also knowing their usefulness and how to evaluate the efficiency of your own algorithms is essential to writing good programs. Many data structures have associated algorithms for working on that particular data structure (e.g. deletion from a heap). For descriptions of these, see the Data Structures page.

1. Efficiency of algorithms
2. Recursion vs. iteration
3. Dynamic programming
4. Sorting
5. Searching
6. Graphs
7. Shortest paths
8. Matching
9. Network flow
10. General tricks

2.1. Efficiency of algorithms

Big O
"Big O" refers to a way of rating the efficiency of an algorithm. It is only a rough estimate of the actual running time of the algorithm, but it will give you an idea of the performance relative to the size of the input data. An algorithm is said to be of order O(expression), or simply of order expression (where expression is some function of n, like n2 and n is the size of the data) if there exist numbers p, q and r so that the running time always lies below between p.expression+q for n > r. Generally expression is made as simple and as small as possible. For example, the following piece of code executes the innermost code (n2 - n) / 2, and the whole expression for the running time will be a quadratic. Since this lies below n2 for all n > 100, the algorithm is said to have order O(n2).
for i := n downto 2 do
begin
  h := 1;
  for j := 2 to i do
    if list[j] > list[h] then
      h := j;
  temp := list[h];
  list[h] := list[i];
  list[i] := temp;
end;

(this is the algorithm for Selection Sort)

How fast is fast enough?

This depends on several factors, but mainly the size of the input data. In general, O(n^x) is considered reasonable (where x is a small constant) while O(x^n) is considered unreasonable unless n is going to be really small. O(n^x) is generally referred to as polynomial time. However, if n is reasonably large (e.g. 1000) then O(n^2) is reasonable while O(n^4) is unreasonable. A good rule of thumb is to substitute the value of n into the expression, then divide by a number between 10 million and 100 million to get a rough time in seconds (use 100 million for very short, simple inner loops and 10 million for complex inner loops). Another expression that sometimes insinuates itself into Big O expressions is log n. This often arises in recursive algorithms which split the data in half at every stage, where log n is proportional to the number of levels of recursion. log n grows extremely slowly: while it looks similar to n, it is actually much closer to 1. The base of the logarithm is not given, since changing the base only scales the result, but it is usually a logarithm base 2. However, recursive algorithms usually have a high overhead, so this should be accounted for when applying the rule of thumb above.

What if the running time isn't fixed?

In many cases, the running time depends heavily on the nature of the input data. One example is a linear search, in which you might find the item you're looking for immediately, but you might have to search through the entire list. In this situation, the worst case running time is used, in this case O(n). Sometimes average running times are given (e.g. Quick Sort has a worst case running time of O(n^2), but an average time of O(n.log n)). However, this is quite hard to calculate.

Choosing an algorithm
Choosing an algorithm isn't a case of simply picking the fastest one and implementing it. Especially in a competition, you will also have to balance this against the time it will take you to code the algorithm. You should use the simplest algorithm that will run in the time provided, since you generally get no more points for a faster but more complex algorithm. It is sometimes even worth implementing an algorithm that you know will not be fast enough to get 100%, since 90% for a slow, correct algorithm is better than 0% for a fast but broken one.

When choosing an algorithm you should also plan what type of data structures you intend to use. The speed of some algorithms depends on what types of data structures they are combined with.

2.2. Recursion
Recursion is when a procedure or function calls itself. It is extremely difficult to grasp at first, but a very powerful concept. Here's a trivial example:
function sum_to_n(n : integer) : integer;
begin
  if n = 1 then
    sum_to_n := 1
  else
    sum_to_n := n + sum_to_n(n - 1);
end;

This calculates the sum of the numbers up to n. It works by first finding out the sum of the numbers up to n - 1 and then adding n. This sounds a bit like circular reasoning, and if it wasn't for the first line of the function it would be. The first line prevents the function from endlessly calling itself by explicitly returning the sum of the numbers up to 1, namely 1. This is the simplest form of recursion, and it would be really easy to replace the whole thing with a loop that goes from 1 to n and increments a counter. Here's a more complex example that would be slightly harder to replace with iteration (although any recursion can be replaced by iteration):
function power(a, b : longint) : longint;
begin
  if b = 0 then
    power := 1
  else if odd(b) then
    power := a * power(a, b - 1)
  else
    power := sqr(power(a, b div 2));
end;

This calculates a^b in O(log b). It relies on the facts that a^(b+1) = a.a^b and a^(2b) = (a^b)^2.

Pros of recursion

Recursive procedures are usually much shorter, simpler and easier to debug than iterative versions.
Many algorithms are defined recursively, so it is easy to implement them recursively.
Many data structures are naturally recursive (trees, for example) and so it is natural to operate on them recursively.

Cons of recursion

Recursive procedures are slightly slower than iterative ones, because of the overhead of procedure calls.
Recursive procedures can use a lot of stack space.
Badly thought out recursive procedures can sometimes be very slow. For example, you might be tempted to use the following to calculate any given term of the Fibonacci sequence (1, 1, 2, 3, 5, 8, 13 etc, where each term is the sum of the previous two):
function fibonacci(n : integer) : integer;
begin
  if n < 3 then
    fibonacci := 1
  else
    fibonacci := fibonacci(n - 2) + fibonacci(n - 1);
end;

The problem with this is that fibonacci(n - 1) also calls fibonacci(n - 2), so the latter is calculated twice. For smaller arguments, the calculation is done even more times. The net result is that the running time is exponential. A far better way to do this would be as follows:
function fibonacci(n : integer) : integer;
var
  prev, cur, temp : integer;
  i : integer;
begin
  prev := 0;
  cur := 1;
  for i := 2 to n do
  begin
    temp := prev + cur;
    prev := cur;
    cur := temp;
  end;
  fibonacci := cur;
end;

It's longer than the recursive version, but much, much faster.

Sometimes it isn't so trivial to convert a recursive algorithm into an iterative one, but you still know that the same work is being done repeatedly and want to eliminate this. A quick way (although not necessarily the best; dynamic programming is usually better) to do this is to save the result the first time it is calculated, and then use the saved version later. This relies on there being enough memory to save all possible results that one might want to store. Here is an example, applied to the Fibonacci problem.
var
  cache : array[1..MAXN] of longint;

procedure initialise;
begin
  { fill cache with -1's, indicating that the value isn't known }
  fillchar(cache, sizeof(cache), 255);
  { initialising the first 2 here simplifies the real function }
  cache[1] := 1;
  cache[2] := 1;
end;

function fibonacci(n : integer) : longint;
begin
  if cache[n] = -1 then
    cache[n] := fibonacci(n - 1) + fibonacci(n - 2);
  fibonacci := cache[n];
end;

2.3. Dynamic programming

What is dynamic programming?
Dynamic programming is not a specific algorithm, but rather a programming technique (like recursion). It is best described as creating solutions to large problems by combining solutions to smaller problems. On some problems, applying dynamic programming can change the efficiency from exponential to polynomial.

An example
Consider the problem of computing the Nth Fibonacci number F(N). The Fibonacci sequence starts 1, 1, 2, 3, 5, 8, ..., with each number being the sum of the previous two. So a simple recursive implementation might look like this:
function fib(n : integer) : integer;
begin
  if n <= 2 then
    fib := 1
  else
    fib := fib(n - 2) + fib(n - 1);
end;

That is all well and good, but how efficient is it? It turns out to take time proportional to the answer, which is in fact exponential time (O(s^n), where s is the golden ratio). This example is usually used to show the pitfalls of recursion, but it also provides a simple example of converting a problem to dynamic programming. The key observation is that the recursive function is a true function in the mathematical sense: it has no side effects, and the output depends only on the input n. This means that we don't have to start by asking for the Nth Fibonacci number and work downwards, but can start with the first one and work upwards. Here is the code (arr is an array that is assumed to be big enough):
function fib(n : integer) : integer;
var
  i : integer;
begin
  arr[1] := 1;
  arr[2] := 1;
  for i := 3 to n do
    arr[i] := arr[i - 1] + arr[i - 2];
  fib := arr[n];
end;

This computes the smaller problems of finding F(1), F(2), F(3) and so on, and combines these to produce the answer to the bigger problem of finding F(n).

The general approach

Identifying problems that can be solved with dynamic programming is something that comes with practice. To begin with, the following procedure will help you see the dynamic programming solutions.
1. Write a recursive solution to the problem.
2. Modify the recursive function until it becomes a mathematical function. That is, it must have no side effects and the answer must depend only on the parameters. It may also depend on global variables, but these must be static input data, not variables that change over the course of the program. If you cannot modify the recursion to do this, then it is possible that the problem cannot be solved with dynamic programming.
3. Try to reduce the number of parameters as much as possible. If there are parameters that are the same for the entire program, make them global variables. If there are parameters that can be calculated from other parameters, remove them.
4. Create an array (possibly multi-dimensional) whose indices correspond to the parameters of the recursive function. The array should be able to hold the answer for every possible call to the function.
5. Work out what dependencies are used in the recursive function. In the Fibonacci example, F(n) depends on F(n - 1) and F(n - 2).
6. Write nested loops to loop over all possible inputs to the function, up to the inputs you actually want. Organise the loops so that dependencies are satisfied (e.g. so that F(n - 1) and F(n - 2) are computed before they are needed to compute F(n)). For each set of inputs, compute the output and store it into the array. Do not call the recursive function, but rather get the answer from the array.

Saving memory
Sometimes the parameter space (the set of all legal input parameters) is too large to store in an array. This will usually happen if there are several parameters. It is sometimes possible to get around this by only storing a subset of the parameter space (this depends on the problem). To return to the Fibonacci problem, we see that we only need to keep the last two Fibonacci numbers, as the previous ones are never used again. The modified code is:

function fib(n : integer) : integer;
var
  f1, f2, cur, i : integer;
begin
  f1 := 1;
  f2 := 1;
  cur := 1;
  for i := 3 to n do
  begin
    f1 := f2;
    f2 := cur;
    cur := f1 + f2;
  end;
  fib := cur;
end;

More generally, the array will have multiple dimensions, and sometimes only one or two "rows" of the array are needed at one time. These cases can be identified from the dependencies: in the Fibonacci example, F(n) depends only on the previous two elements, so those are all that need to be kept.

Sometimes the dependencies in problems are rather complicated, and this can make the ordering of the loops rather tricky. This often arises in problems involving graphs, where the dependencies are edges in the graph. Another problem with writing loops is that it is sometimes more efficient to avoid solving sub-problems that are never actually used to solve the main problem. Both of these problems are addressed by a technique known as memoisation or caching. It is equivalent to dynamic programming, but I find it to be more work and generally only implement it when these problems arise.

The basic approach is to stop after step 4 in the process above, leaving you with a recursive function and an array for holding the outputs. Initially tag all the array elements to indicate that they have not been computed (e.g. by storing -1). Then modify the recursive function to return the array value if it has already been computed, and to do the work otherwise. The array thus acts as a cache of results that have already been computed. Reusing the Fibonacci example:
var cache : array[1..MAXN] of longint;

procedure initialise;
begin
  { fill cache with -1's, indicating that the value isn't known }
  fillchar(cache, sizeof(cache), 255);
  { initialising the first 2 here simplifies the real function }
  cache[1] := 1;
  cache[2] := 1;
end;

function fib(n : integer) : integer;
begin
  if cache[n] = -1 then
    cache[n] := fib(n - 1) + fib(n - 2);
  fib := cache[n];
end;

2.4. Sorting
Sorting a list of items is a major topic that I can't even begin to deal with here (thick books have been written on the subject). I'm just going to mention a few sorting algorithms and their characteristics. In terms of performance, sorts mostly come in three flavours:
1. O(n^2)
2. O(n.log n) on average, but O(n^2) at worst
3. O(n.log n)
The first class shouldn't be used for sorting really big arrays, say bigger than 2000 elements (there is no real limit; it depends on your time constraints). However, these sorts are usually very quick to code and debug, so always consider using them unless you are sure that they won't cut it. The algorithms in the second class are very seldom O(n^2), so it is fine to use them in most applications, particularly in computer olympiads where you only have 5 or 10 test cases and are highly unlikely to get a really slow case, and even if you do, you only lose some of your points. These algorithms can actually be faster than those in the third class because they are simpler. Each of the sorts below has a description of the algorithm, the efficiency, pros and cons, and other notes.

Bubble sort
Algorithm
Scan through the array from left to right, comparing adjacent elements. Whenever two adjacent elements are out of order, swap them. Repeat this until the array is sorted.
Efficiency
O(n^2)
Pros
Simple to implement and understand, but not much more so than several more efficient sorts
Will be very quick on an already sorted list
Cons
Probably the slowest sort ever invented
Notes
Don't bother learning this sort. There are other sorts which are just as easy to implement and much faster. Although there are some time saving modifications that can be made (e.g. keeping track of the last swap in the previous pass, and only going up to there in the current pass), they aren't worth it.

Selection sort
Algorithm
Search the array for the largest element, and swap it with the element at the end. Then repeat for just the first n - 1 elements, then for the first n - 2, etc. This extracts the elements individually in reverse order.
Efficiency
O(n^2)
Pros
Easy to implement
Cons
Is no faster on a partially sorted array
Notes
This is a reasonable sort to use for sorting a fairly small number of items, and my personal favourite of the O(n^2) sorts.

Insertion sort
Algorithm
The first part of the list is sorted and the remainder is unsorted. To begin with, all elements are in the unsorted part. Go through the elements in the unsorted part sequentially and insert each into the sorted part by searching for its correct position with a linear search or a binary search.
Efficiency
O(n^2)
Pros
Fairly easy to implement with a linear search
If a backwards linear search is used, it is much faster on a partially sorted array
Cons
On an array, insertion is slow because all the elements after the insertion point have to be moved up
Notes
This sort isn't especially useful by itself (although it is a respectable O(n^2) sort), but forms the basis of Shell sort.

Shell sort

Algorithm: Consider the sequence 1, 4, 13, 40, 121, ..., where each term is triple the previous one plus one. Go through the sequence backwards, starting with the largest term less than the number of items being sorted. For each member g of the sequence, do the following:
- Sort the elements with indices 1, 1 + g, 1 + 2g, 1 + 3g, ...
- Sort the elements with indices 2, 2 + g, 2 + 2g, 2 + 3g, ...
- Sort the elements with indices 3, 3 + g, 3 + 2g, 3 + 3g, ...
- And so on, up to g, 2g, 3g, 4g, ...
Each of these sorts should be done with a sort that works quickly on a partially sorted list; a good choice is insertion sort.

Efficiency: At worst O(n^(3/2)). It is also possible to use sequences other than 1, 4, 13, ..., but they do not necessarily give the same efficiency.

Pros:
- Usually much faster than regular insertion sort for very little extra work
- The fastest of the non-recursive sorts shown here
- Easy to get working

Cons:
- Running time is slightly sensitive to the data (but far less so than binary tree sort or Quicksort)
- Not quite as fast as ultra-fast sorts like Quicksort
- If you mess up the sequence, you get a sort that will still work but is slow.

Notes: This is an excellent medium/heavy sort. After trying it out, I decided to use it whenever I need a large number of items sorted in a hurry and I don't mind it being a couple of times slower than a heavy-duty sort.
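A sketch of the above in Python. Note that it indexes from 0 where the description indexes from 1, and each pass is simply a "gapped" insertion sort:

```python
def shell_sort(a):
    """Shell sort using the gap sequence 1, 4, 13, 40, ...
    Each pass is a gapped insertion sort, which is quick because
    the array is already partially sorted by the earlier passes."""
    # Find the largest term of the sequence below len(a).
    gap = 1
    while gap * 3 + 1 < len(a):
        gap = gap * 3 + 1
    while gap >= 1:
        for i in range(gap, len(a)):
            item = a[i]
            j = i - gap
            while j >= 0 and a[j] > item:
                a[j + gap] = a[j]
                j -= gap
            a[j + gap] = item
        gap //= 3        # 40 -> 13 -> 4 -> 1
    return a
```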

Quicksort
Algorithm: Save the value of the middle element of the array. Have a "left" pointer that starts at the first element and moves right, and a "right" pointer that starts at the last element and moves left. Repeat the following until the pointers cross: scan with the left pointer until an element larger than or equal to the saved value is found; scan with the right pointer until an element smaller than or equal to the saved value is found; if the pointers haven't crossed yet, swap the values they point to and advance both pointers one position. Then recursively sort the elements in the ranges [start, right] and [left, end].

Efficiency: O(n.log n) on average

Pros:
- Extremely fast on average

Cons:
- Fairly tricky to implement and debug
- Very slow in the worst case (but see notes)
- In the worst case, could cause a stack overflow

Notes: If the middle element of the array is very large or very small, the sort becomes slower. To reduce the chances of this, you can take the median of the first, middle and last elements and put it in the middle. In a degenerate case, Quicksort could also recurse to a very deep level and cause a stack overflow. One way to avoid this is to keep track of the level of recursion (e.g. by passing it as a parameter) and switch to a simpler sort (like selection sort) if the recursion is too deep.
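The partition scheme described above can be sketched as follows (Python, my own names; without the median-of-three or recursion-depth safeguards from the notes):

```python
def quicksort(a, start=0, end=None):
    """Recursive Quicksort: partition around the value of the
    middle element by scanning inwards from both ends, swapping
    out-of-place pairs, then sort the two halves recursively."""
    if end is None:
        end = len(a) - 1
    if start >= end:
        return a
    pivot = a[(start + end) // 2]   # saved value of the middle element
    left, right = start, end
    while left <= right:
        while a[left] < pivot:      # scan right for an element >= pivot
            left += 1
        while a[right] > pivot:     # scan left for an element <= pivot
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort(a, start, right)
    quicksort(a, left, end)
    return a
```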

Heap sort
Algorithm: Add the elements to a heap one at a time. Then remove the elements from the top of the heap one at a time, and they will come out in order. If the heap is done upside down (i.e. largest element at the root), then the incomplete sorted list and the heap can share the same array without requiring extra memory.

Efficiency: O(n.log n)

Pros:
- Guaranteed to be O(n.log n)
- One of the few non-recursive O(n.log n) sorts
- Fairly easy to debug, because it is non-recursive and because it consists of two smaller and unrelated parts

Cons:
- Not as fast on average as Quicksort
- Quite a lot of code to implement the heap operations

Notes: This isn't an especially useful sort, but the running time is guaranteed O(n.log n) and it is very light on memory consumption. Also, if you have already implemented a heap for some other reason, then this sort comes almost for free. It is possible to get some extra speed by constructing the heap in one pass: put everything into the heap ignoring the heap condition, then work through the heap from bottom to top, bubbling elements down the heap where necessary. This makes the heap creation take linear time.
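A sketch of the in-place, upside-down (max-heap) version, including the one-pass bottom-up heap construction from the notes (Python, my own names):

```python
def heap_sort(a):
    """In-place heap sort with a max-heap at the front of the array.
    The heap is built bottom-up in linear time, then the largest
    element is repeatedly swapped to the end of the shrinking heap."""
    def sift_down(root, end):
        # Bubble a[root] down until the heap condition holds.
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child + 1] > a[child]:
                child += 1                      # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):     # bottom-up construction
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]             # max goes to its final place
        sift_down(0, end - 1)
    return a
```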

Merge sort
Algorithm: Split the array in half and recursively sort each half. Then merge the two halves by keeping a pointer to the front of each list and repeatedly taking the smaller of the two elements.

Efficiency: O(n.log n)

Pros:
- Guaranteed to be O(n.log n)
- Simple to understand

Cons:
- Not as fast as Quicksort on average
- Requires as much memory as the original array

Notes: It is convenient to sort the right half backwards and have the pointer to the second half move backwards. Then you never have to check whether either list is empty: the pointer simply moves to the biggest element in the other half and is never used again. I used to like this sort because it is guaranteed O(n.log n), but it is a memory hog and not as fast as Quicksort in practice.
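A straightforward version of the algorithm (in Python, without the backwards right-half trick from the notes; it allocates new lists rather than merging in place):

```python
def merge_sort(a):
    """Top-down merge sort; returns a new sorted list.
    Needs O(n) extra memory for the merge step."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge: repeatedly take the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])      # one of these two is empty
    merged.extend(right[j:])
    return merged
```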

Binary tree sort

Algorithm: Insert each element sequentially into a binary search tree. Then do an in-order walk of the tree (if you don't know what this is, read the information on binary search trees).

Efficiency: O(n.log n) on average

Pros:
- If you've already implemented binary search trees, then this sort is quite easy.

Cons:
- Hogs memory for the pointers
- Not very pleasant to implement or debug
- Not very fast
- Is O(n²) on a sorted list

Notes: Think of this as the bubble sort of the medium-speed sorts: slow and not very useful. Binary search trees are more useful if you are mixing operations like insertions, searches, and extraction of sorted data.
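A minimal sketch in Python, representing each tree node as a [data, left, right] list rather than with pointers (the representation is my own choice, not part of the algorithm):

```python
def tree_sort(items):
    """Binary tree sort: insert everything into an unbalanced BST,
    then read it back with an in-order walk. Degenerates to O(n^2)
    on sorted input, as noted above."""
    def insert(node, value):
        if node is None:
            return [value, None, None]      # [data, left, right]
        if value < node[0]:
            node[1] = insert(node[1], value)
        else:
            node[2] = insert(node[2], value)
        return node

    def walk(node, out):
        if node is not None:                # in-order: left, data, right
            walk(node[1], out)
            out.append(node[0])
            walk(node[2], out)
        return out

    root = None
    for x in items:
        root = insert(root, x)
    return walk(root, [])
```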

2.5. Searching
If you know you are going to need to search for an item in a set, you will need to think carefully about what type of data structure you will use for that set. At school level, the only searches that get mentioned are for sorted and unsorted arrays. However, these are not the only data structures that are useful for searching.

Linear search
Start at the beginning of the list and check every element of the list. Duh. Very slow (O(n)) but works on an unsorted list.

Binary search
This is used for searching in a sorted array. Test the middle element of the array. If it is bigger than the target, repeat the process in the left half of the array; if it is smaller, repeat in the right half. In this way, the amount of space that needs to be searched is halved every time, so the running time is O(log n).
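The halving step looks like this in Python (a sketch with my own names):

```python
def binary_search(a, target):
    """Binary search in a sorted list; returns an index of target,
    or -1 if it is not present. Runs in O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] > target:
            hi = mid - 1      # target can only be in the left half
        else:
            lo = mid + 1      # target can only be in the right half
    return -1
```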

Hash search
Searching a hash table is easy and extremely fast: Just find the hash value for the item you're looking for, then go to that index and start searching the array until you find what you're looking for or you hit a blank spot. The order is pretty close to O(1), depending on how full your hash table is.
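As a sketch, here is the probe loop for a simple open-addressing table, assuming (my choice, not specified above) that the table is a list with None marking blank spots and that collisions are resolved by linear probing. The table must never be completely full, or the search loop for a missing key would never hit a blank spot:

```python
def hash_insert(table, key):
    """Insert key at its hash value, probing forward past
    occupied slots (linear probing)."""
    i = hash(key) % len(table)
    while table[i] is not None:
        i = (i + 1) % len(table)
    table[i] = key

def hash_search(table, key):
    """Probe from the key's hash value until the key or a blank
    spot is found; close to O(1) if the table isn't too full."""
    i = hash(key) % len(table)
    while table[i] is not None:
        if table[i] == key:
            return i
        i = (i + 1) % len(table)
    return -1          # hit a blank spot: the key isn't there
```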

Binary tree search

Searching a binary tree is just as easy as searching a hash table, but it is usually slower (especially if the tree is badly unbalanced). Just start at the root, then go down the left subtree if the root is too big and the right subtree if it is too small. Repeat until you find what you want or the subtree you want isn't there. The running time is O(log n) on average and O(n) in the worst case.

2.6. Graphs

In this context, the word "graph" does not refer to a plot of one variable against another, but to a more abstract concept: a set of nodes, some of which are joined together by edges. An example is a set of cities (the nodes) connected by various flights (the edges). There are many problems that can be translated into graphs. These often involve a journey between the nodes using the edges.


- The things that are connected are called vertices or nodes. Mathematicians call them vertices, but computer scientists sometimes call them nodes because they can model nodes in a network. The things connecting them are called edges.
- A path is a list of edges, with each adjacent pair sharing a vertex. A cycle is similar to a path, but the first and last edges also share a vertex.
- A graph is connected if there exists a path between any two vertices.
- A subgraph is a subset of the vertices of the graph and all the edges between any two of them. A subgraph is considered to be a component of the original graph if it is connected and there are no edges between a vertex in the subgraph and a vertex not in the subgraph.
- A tree is a connected graph with no cycles. A forest is any graph with no cycles.
- An Eulerian path is a path that uses every edge exactly once. A Hamiltonian path is a path that uses every vertex exactly once. Eulerian and Hamiltonian cycles are defined similarly.
- A complete graph is one in which every vertex is connected to every other vertex once.
- A vertex is a neighbour of another if there is an edge between them.
- A dense graph is one where most of the potential edges are actual edges. The opposite of a dense graph is a sparse graph. These are relative terms, like "big" and "small".

There are numerous oddities that occur in some graph problems but not in others, and it is worth remembering these when designing an algorithm.

- A vertex might be connected to itself.
- Two vertices might have more than one edge between them.
- Edges are sometimes directed. This means that you can only travel along the edge in one direction.
- Edges are sometimes weighted. Each edge has a certain real number associated with it, and the weight of a path is the sum of the weights on the edges. The weight is also called the length or cost.

There are a number of ways to represent graphs in memory. You can have a matrix where you store a boolean value for every pair of vertices specifying whether an edge exists between those two points. You can store a list of neighbours for every vertex, or you can just store a list of edges (there are probably other ways, but I can't think of any really useful ones offhand). Graphs are a good example of places where redundant data structures can save time. For example, if you want to execute an operation for every edge, then a list of edges will be useful. If you want to construct paths, then a list of neighbours for every vertex will be useful, because you know which vertices you can go to next. For determining whether there is an edge between two points, the matrix representation is the most useful.
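The three representations can be built side by side. A small Python sketch for an undirected graph (the variable names are my own):

```python
# Three redundant representations of the same small undirected graph:
# vertices 0-3 with edges (0,1), (0,2) and (2,3).
edges = [(0, 1), (0, 2), (2, 3)]          # edge list
n = 4

# Adjacency matrix: O(1) test for "is there an edge between x and y?"
matrix = [[False] * n for _ in range(n)]
# Neighbour lists: ideal for walking outwards from a vertex.
neighbours = [[] for _ in range(n)]

for (a, b) in edges:
    matrix[a][b] = matrix[b][a] = True
    neighbours[a].append(b)
    neighbours[b].append(a)
```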

Minimum spanning tree

A spanning tree of a graph is a subgraph that is a tree and which has edges to every vertex (so no vertices are "isolated"). The minimum spanning tree (MST) of a weighted graph is a spanning tree for which the sum of the weights of the edges in the tree is minimal (this isn't necessarily unique). There are three standard algorithms for finding the MST:

1. Start with a subgraph with no edges. Repeatedly add the shortest edge that doesn't create a cycle, until the subgraph is a spanning tree (this is Kruskal's algorithm).
2. Start with a subgraph that is the same as the original graph. Repeatedly delete the longest edge that still keeps the subgraph connected, until the subgraph is a spanning tree.
3. Start by making the tree consist of only one of the vertices (an arbitrary one). Repeatedly add the shortest edge between a vertex in the tree and a vertex not in the tree. The tree is a spanning tree when the number of edges is one less than the number of vertices (this is Prim's algorithm).

The last method can be implemented as a slight variation on Dijkstra's algorithm (described under shortest paths): use VW in place of P(V) + VW, and the MST will be the set of stored pointers.
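The third method can be sketched with a heap as the priority queue. This is a Python sketch with my own conventions: the graph is given as a dict mapping (a, b) vertex pairs to symmetric edge weights, and only the total weight is returned:

```python
import heapq

def prim_mst(n, weight):
    """Prim's algorithm (the third method above): grow the tree from
    vertex 0, always taking the cheapest edge that reaches a vertex
    not yet in the tree. Returns the total weight of the MST."""
    adj = {v: [] for v in range(n)}
    for (a, b), w in weight.items():
        adj[a].append((w, b))
        adj[b].append((w, a))
    in_tree = [False] * n
    total = 0
    heap = [(0, 0)]                      # (edge weight, vertex)
    while heap:
        w, v = heapq.heappop(heap)
        if in_tree[v]:
            continue                     # stale entry: v already reached
        in_tree[v] = True
        total += w
        for wv, u in adj[v]:
            if not in_tree[u]:
                heapq.heappush(heap, (wv, u))
    return total
```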

The travelling salesman

"The travelling salesman" is an old problem about the following hypothetical situation: a travelling salesman wants to visit a number of cities. There are flights between every pair of cities, but they have different costs. He wants to visit all the cities exactly once and finish where he started, with the minimum cost. This could be paraphrased in graph theory terms as follows: find the minimum Hamiltonian cycle of a complete graph.

This problem is one of a large class of problems that have frustrated computer scientists for many years, known as the NP Complete problems. There might be a way to solve any NP Complete problem in polynomial time, but nobody has been able to find one. On the other hand, it might be theoretically impossible to find an algorithm that runs in polynomial time, but nobody has been able to prove that either. I'm going to outline a few techniques for getting good approximations for this type of problem (technically NP Complete problems are actually Yes/No problems; the optimisation variants are known as NP Hard).

In an olympiad, you shouldn't be satisfied with any approximation you get, unless you know it is optimal. You should try to use all the time your program is allowed to find a better answer. You can just start with an exhaustive search that you run for as long as possible and then give your best result yet. However, it is usually possible to structure your program to avoid checking many options that you know are going to be worse than the best solution you already have. For example, in the travelling salesman problem, if the distance you have travelled so far plus the distance to go directly to the finish is greater than the best result you have so far, there is no point in continuing. To get the most out of this, you should try to get a reasonably good estimate before starting the exhaustive search. You can do this in several ways:

- Use a "smart" algorithm that makes invalid but reasonably accurate assumptions about the solution. For example, in the travelling salesman problem you could make the salesman always choose the cheapest flight at each step.
- Make a few (say 1000) guesses using random numbers.
- Any other way that you can think of.

If you have a travelling salesman problem with the added information that it is always cheaper to fly direct than to go via another city, then there is a smart algorithm that will always give you at most double the optimal cost: do a pre-order walk of the minimum spanning tree.
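The pruning idea described above (abandon any partial tour already at least as expensive as the best complete tour found) can be sketched as an exhaustive search in Python. This is a simple version of my own; a real contest program would also seed `best` with a greedy estimate first:

```python
def tsp(cost):
    """Exhaustive travelling salesman search with pruning.
    cost[a][b] is the flight cost between cities a and b; returns
    the cost of the cheapest tour starting and ending at city 0."""
    n = len(cost)
    best = [float('inf')]

    def extend(city, visited, so_far):
        if so_far >= best[0]:
            return                    # prune: can't beat the best tour
        if len(visited) == n:
            # All cities visited: fly home and record if better.
            best[0] = min(best[0], so_far + cost[city][0])
            return
        for nxt in range(n):
            if nxt not in visited:
                extend(nxt, visited | {nxt}, so_far + cost[city][nxt])

    extend(0, {0}, 0)
    return best[0]
```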

2.7. Shortest paths
One of the most basic and common problems in graph theory is to establish the shortest route between two nodes. In many cases the graph itself is not obvious. For example, the problem may ask for the quickest way of solving a puzzle like Rubik's Cube; in this case the nodes are states and the edges are valid transitions.

Floyd's Algorithm
Floyd's Algorithm solves a slightly different problem: it computes the minimum distance between every pair of nodes, without computing the routes themselves. What is particularly nice about Floyd's algorithm is that it is very simple and very efficient, so even if you only need the distance between a few pairs of nodes, it may still be useful.

Unfortunately while Floyd's algorithm is simple to remember and implement, it is not at all clear why it works. I'm going to state the algorithm, then attempt to explain it. However it's sometimes best just to go away and think about it, and maybe try a few examples yourself. Floyd's algorithm is best stated in code. The array adj is initialised to the adjacency matrix of distances (INF being a large constant meaning no edge), and it is modified in place to become the matrix of shortest distances.
for y := 1 to N do
  for x := 1 to N do
    if adj[x][y] <> INF then
      for z := 1 to N do
        if (adj[y][z] <> INF) and (adj[x][y] + adj[y][z] < adj[x][z]) then
          adj[x][z] := adj[x][y] + adj[y][z];

As one can see, the inner loop is extremely tight. It is also extremely important that the order of the loops is correct. To explain the algorithm, one needs to introduce notation. Let [X, Y, Q] represent the shortest distance from X to Y, using intermediate nodes 1, 2, ... Q only, taking the value INF if no such path exists. Floyd's algorithm is based on the observations that
[X, Z, 0] = adj[X][Z]
[X, Z, Y] = min([X, Z, Y - 1], [X, Y, Y - 1] + [Y, Z, Y - 1])
[X, Z, N] = length of the shortest path from X to Z

At the start of each iteration of the outer loop, adj[x][z] represents [X, Z, Y - 1] for the particular value of y used in the loop; at the end, it represents [X, Z, Y]. Looking closely at the loop you may notice some ordering conflicts: the values adj[x][y] and adj[y][z] may in fact represent [X, Y, Y] and [Y, Z, Y]. However, it is easy to see that [X, Y, Y] = [X, Y, Y - 1] and [Y, Z, Y] = [Y, Z, Y - 1], so this does not make a difference.

There are two other things worth noticing about Floyd's algorithm. The first is that it makes no assumptions about symmetry; it works perfectly well on directed graphs. The second is that apart from giving you shortest paths, it also tells you which paths exist at all. A very slight modification of the algorithm gives you Warshall's algorithm for determining precisely this (known as transitive closure). In fact the algorithms are so similar that they are often referred to as "the Floyd-Warshall algorithm".

Finally, one should consider the efficiency of Floyd's algorithm. This is very easy, since there are three nested loops from 1 to N: the efficiency is clearly O(N³). It is tempting to think that it is more efficient for sparse graphs, because of the first if test. However, as the algorithm progresses, the adj array rapidly becomes populated, so the if test has little effect on big-O time. What big-O analysis doesn't reveal is that the constant factor is very small. In practical situations Floyd's algorithm may still be faster on sparse graphs than algorithms that have better theoretical performance.

Dijkstra's algorithm
Dijkstra's algorithm finds the shortest path between a specific node (call it A) and all other nodes. This seems unnecessarily wasteful if only one target (call it B) is required, but in fact the extra information comes for free. While the algorithm is running, the nodes are divided into three sets: the nodes we are busy processing, the nodes we have finished processing and the nodes we haven't considered yet. All the nodes we are busy processing sit in a priority queue, where the priority of a node is the length of the best path from A to that node found so far. Initially A is placed in the priority queue with priority 0. Then the following process is repeated until we finish processing B (or until the priority queue is empty, if we want to find the shortest paths to all other nodes). P(X) is the priority of node X and XY is the length of edge XY.

1. Pop the node with the lowest priority off the priority queue. Call this node V. P(V) is the length of the shortest path from A to V.
2. Mark V as finished processing.
3. Loop through all of V's neighbours that we haven't finished processing. Say W is the current neighbour being considered.
4. If W isn't in the priority queue yet, or it is but has a priority higher than P(V) + VW, then:
   - set W's priority to P(V) + VW, adding it to the priority queue if necessary;
   - store a pointer from W to V (e.g. have an array called prev, and set prev[W] to V).

The length of the shortest path to B is P(B). To construct the path itself, follow the stored pointers from B: they form a shortest path going back to A. It is sometimes convenient to apply the algorithm from B to A instead of from A to B, since the path then comes out in the order A to B rather than B to A. The algorithm doesn't specify how to implement the priority queue. Using a heap results in an O((E+V)log V) algorithm which is good for sparse graphs, while using an unsorted list results in an O(V²) algorithm which is better for dense graphs.
Note that if the graph is unweighted, then the priority queue can be replaced by a regular queue and this will be more efficient.
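The heap-based variant can be sketched as follows in Python (my own names; stale queue entries are skipped rather than re-prioritised, which has the same effect):

```python
import heapq

def dijkstra(neighbours, a):
    """Dijkstra's algorithm with a heap as the priority queue.
    neighbours maps each node to a list of (next_node, edge_length)
    pairs; returns (dist, prev), where prev holds the stored
    pointers used to reconstruct shortest paths back to a."""
    dist = {a: 0}
    prev = {}
    done = set()                        # nodes we have finished processing
    heap = [(0, a)]
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue                    # stale queue entry; skip it
        done.add(v)
        for w, length in neighbours.get(v, []):
            if w not in done and d + length < dist.get(w, float('inf')):
                dist[w] = d + length
                prev[w] = v             # pointer from W back to V
                heapq.heappush(heap, (dist[w], w))
    return dist, prev
```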

2.8. Matching
What is matching?
Matching is a particular type of problem in graph theory. The idea is to find pairs of vertices joined by edges, so that no vertex is in more than one pair and so that something is minimised or maximised.

Cardinal Bipartite Matching

The simplest type of matching is cardinal bipartite matching. In this problem the given graph is bipartite and the number of pairs must be maximised. Usually there are the same number of vertices in each part of the graph, but not always. This type of problem usually takes the form of assigning the members of one group (e.g. cows) to another (e.g. farmers).

The algorithm is based on starting with an arbitrary set of pairs and then repeatedly increasing the number of pairs. The initial set can be built by looping through the vertices in one part of the graph and attempting to find matches for them in the other part. The process for increasing the number of pairs is chosen in such a way that it will only fail when an optimal matching has been found.

The process is very similar to a shortest path algorithm. Label the two parts of the graph part A and part B. Start with any unmatched vertex in A, say X. Then perform a breadth first search, as if trying to construct the shortest path from X to every other vertex in the graph. The exception is that from vertices in A you may only use edges not in the matching, and from vertices in B you may only use edges in the matching (so from a vertex in B you may only go to its partner in A). The search terminates once an unmatched vertex in B is reached, or once no new vertices can be reached (the queue is empty).

If an unmatched vertex in B (say Y) has been reached, then there is a path from X to Y using only edges of the specified type (called an augmenting path). Suppose the path is X -> B1 -> A1 -> B2 -> A2 -> ... -> Bk -> Ak -> Y, where (Ai, Bi) is a pair for every i. Remove all these pairs and make the new pairs (X, B1), (A1, B2), (A2, B3), ..., (Ak, Y). There is one more pair than there was before, so the number of pairs has increased. If there is any way to increase the number of pairs, then this algorithm will find it for some X (so cycle through the X's). To see this, consider overlaying a non-optimal and an optimal matching using XORs (i.e. an edge only appears in the overlay if it was a match in exactly one of the original matchings). It shouldn't be too hard to convince yourself that this new graph will contain a path of exactly the type for which the algorithm searches.

It is actually more efficient to start the search from all unmatched vertices in A by inserting them all into the queue at the beginning. This algorithm is O(VE), where V is the number of vertices and E the number of edges, if the correct data structures are used. This is because each iteration takes O(max(V, E)) steps (O(V) for preprocessing and postprocessing and O(E) for the actual breadth first search), and the maximum number of iterations is O(min(V, E)), because the number of pairs cannot exceed the number of edges or vertices.
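A compact sketch of the augmenting-path idea in Python. Note one deliberate simplification: this version finds augmenting paths with a depth first search (Kuhn's algorithm) rather than the breadth first search described above; the correctness argument is the same. The names are my own:

```python
def max_matching(adj_a, num_b):
    """Maximum cardinality bipartite matching by repeated augmenting
    paths. adj_a[x] lists the vertices of part B adjacent to vertex x
    of part A; returns the number of pairs in a maximum matching."""
    match_b = [None] * num_b          # match_b[y] = partner of y in A

    def augment(x, seen):
        for y in adj_a[x]:
            if y not in seen:
                seen.add(y)
                # y is free, or y's partner can be re-matched elsewhere:
                # this flips the pairs along the augmenting path.
                if match_b[y] is None or augment(match_b[y], seen):
                    match_b[y] = x
                    return True
        return False                  # no augmenting path from x

    pairs = 0
    for x in range(len(adj_a)):       # cycle through the X's
        if augment(x, set()):
            pairs += 1
    return pairs
```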

2.9. Network Flow

What is network flow?

The network flow problem can be phrased in several ways, the canonical one being: given a computer network with point-to-point links of certain capacities, what is the greatest rate at which one given machine can send data to another given machine? The problem is more complex than it first sounds, because data flows can take several different paths and merge and split.

The problem can be represented in graph theory, with computers corresponding to nodes and the network links to edges. Every edge has a capacity (from the question) and a directed flow (which is part of the answer). The flows may not exceed the corresponding capacities, and the inflow and outflow must be equal for every node other than the two nodes in question. The nodes in question are known as the source and the sink. An edge whose flow equals its capacity is said to be saturated. An important point about network flow that is initially not easy to grasp is that because flow is directed, an edge may be saturated when considered in one direction but not the other.

I normally store both capacity and flow for each direction. Suppose there is an edge from A to B. For an undirected graph, capacity[A][B] = capacity[B][A], and for a directed graph capacity[B][A] = 0. I always have flow[A][B] equal the directed flow from A to B, so flow[B][A] = -flow[A][B]. The advantage of this approach is that in either direction one is constrained by flow[X][Y] <= capacity[X][Y], and saturation is equivalent to equality (so in a directed graph, a zero flow indicates saturation in the reverse direction).

Solution to network flow problem

The standard method for solving the network flow problem is known as the Ford-Fulkerson method, which is basically the following:

1. Start with no flow along any edge.
2. Find a path from source to sink that doesn't contain any saturated edges in the direction from source to sink.
3. Increase the flow of every edge in this path by the same amount: the largest amount that will not cause one of the capacities to be exceeded.
4. Go back to step 2 and repeat.

When no such path exists, the flow will be maximised. The proof of this is beyond the scope of this page.
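A Python sketch of the method, using the flow/capacity bookkeeping described earlier (flow[b][a] kept equal to -flow[a][b]) and the shortest-path rule for choosing the augmenting path. The names and the adjacency-matrix representation are my own:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with shortest (fewest-edge) augmenting paths.
    capacity is an n x n matrix; an edge is saturated in a direction
    when flow == capacity in that direction, so negative flow lets a
    path 'undo' flow along a reverse edge."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # Breadth first search for a path with no saturated edges.
        prev = [None] * n
        prev[source] = source
        queue = deque([source])
        while queue and prev[sink] is None:
            v = queue.popleft()
            for w in range(n):
                if prev[w] is None and flow[v][w] < capacity[v][w]:
                    prev[w] = v
                    queue.append(w)
        if prev[sink] is None:
            return total          # no augmenting path: flow is maximal
        # Largest increase the path allows without exceeding a capacity.
        amount = float('inf')
        w = sink
        while w != source:
            v = prev[w]
            amount = min(amount, capacity[v][w] - flow[v][w])
            w = v
        # Apply it along the path, keeping flow antisymmetric.
        w = sink
        while w != source:
            v = prev[w]
            flow[v][w] += amount
            flow[w][v] -= amount
            w = v
        total += amount
```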

Which path?
You may have noticed that the Ford-Fulkerson method is not really an algorithm, because it does not specify how to find the path. Research has found two reasonably simple methods that run quickly in most networks and reasonably fast even in extreme cases. The first is to take the shortest path, and the second is to take the path along which the flow could be increased the most. It might seem tempting to use the longest path (since this would seem to "fill up" the network quickly), but in fact there are some networks where this can take arbitrarily long, apart from being difficult to find. Note that the choice of path-finding algorithm only affects the speed of the algorithm, not the amount of the final flow.

Directed or undirected?
You might be wondering whether the edges are themselves directed. The answer is: they can be either directed or undirected. It only affects the algorithm in terms of the idea of a saturated edge: an edge is saturated in a particular direction if the flow cannot be increased in that direction, so a directed edge with no flow is actually saturated in the reverse direction.

Node capacities
So far we have only considered capacities in the edges; what about limiting the amount of data that can flow through a particular node? The Ford-Fulkerson method can't be applied directly to this problem, but fortunately it is possible to think of the problem in a different way that allows it to be applied. Firstly, all undirected edges need to be split into a pair of directed edges, one in each direction. At the end you may finish with flow in both directions, but this can be safely cancelled out. Using the computer analogy, imagine that all the input data comes in at one point, the output data leaves at another point, and that it must travel across an internal bus between them. The capacity of the node is the capacity of this bus, so just introduce it into the network as another link. The node representing the computer is split into two nodes, one representing the input port and the other the output port, and the bus is the edge between them. Once the other edges have been fixed up to point to the correct nodes based on their direction, we have an equivalent problem but with node capacities eliminated, which means that we can apply Ford-Fulkerson as usual.

Minimum Cut
The network flow problem gives you another problem for free, known as the minimum cut problem. This problem asks for the minimum total weight of edges that must be removed to disconnect the source from the sink (so that there is no longer a path from the source to the sink). Surprisingly, the answer is that the total weight of the minimum cut (or mincut) is the same as the maximum network flow.

To find the actual edges, first construct the flow graph using Ford-Fulkerson. Once this is done, there will be a set of nodes S that can be reached from the source by travelling only along edges that are not saturated in the direction of travel. Choosing all edges that connect nodes in S to nodes outside S forms a minimum cut. The network flow from S to the rest of the graph (call it R) will in fact be the total network flow, since the sink lies in R. However, the flow from S to R is equal to the flow in the removed edges, each of which is saturated, so the weight of the mincut will equal the total network flow. To see that this is in fact minimal, consider applying the above argument to some other cut. The difference is that the edges that are removed are not necessarily saturated, so the weight of the cut may be greater than the total flow.

There are many problems that can be solved with network flow that do not look like network flow. One example comes from a USA Computing Olympiad of several years ago; summarised: what is the smallest number of nodes that must fail in a network to possibly prevent two given computers from communicating (the two given nodes may not fail)? The solution is based on the node capacity transformation above. Remember that this turns every original node into an edge. Give each such edge a weight of 1, and the other edges an infinite weight (in practice, I think a weight of 3 will suffice). Find the minimum cut, which will consist only of these nodal edges, and then choose the corresponding nodes as the ones that must fail.

Network flow also arises in certain matching problems. One such case is that of finding the maximal way to match up two sets. For example, given some boys and some girls at a dance and a list of pairs that are willing to dance with each other, find how many pairs can be matched up (where a person can only be matched up with one person, of the opposite sex). This doesn't look like network flow because there are no distinguished nodes to use as the source and sink. However, one can create source and sink nodes that do not correspond to any attribute of the problem. Represent each person by a node, and if a boy is willing to dance with a girl, then place a directed edge from the boy to the girl with weight 1. Create a new node as the source and connect it to each boy with weight 1, and connect each girl to a new sink node with weight 1. Now the number of matchings will equal the network flow, and the matchings themselves correspond to the boy-girl edges that are saturated. Notice that although the network flow problem would allow non-integer flows (such as a boy dancing half with one girl and half with another), an examination of the Ford-Fulkerson method shows that as long as all weights are integers, the flows will also be integers.
Unfortunately this algorithm cannot be easily extended to either the problem of weighted matching (such as a degree of willingness) or the problem where the objects to be matched do not fall into two distinct sets. Both can be solved in polynomial time, but the algorithms are more sophisticated.

2.10. General tips

Memory use
Remember that a competition is different from real world programming. In the real world you should try to conserve memory. In competitions you usually have a set memory limit, and you get the same number of points irrespective of how much you use. For example, if you are using a hash table then you can make the hash table much bigger than you really need. You should also not be afraid to use larger types (32-bit vs. 16-bit integers, for example) than necessary, as overflows caused by using undersized data types aren't always easy to detect.

Multiple passes
In spite of what was said above, sometimes there is a shortage of memory. In some cases it is possible to work around this by making several passes through the input file. To illustrate this, consider the following problem: A number of coloured rectangles are placed on top of each other, with vertices at integer coordinates and aligned with the axes. You must find the area of each colour that is visible. Although there are better ways to solve this problem, a simple approach is to create a 2D array in memory to simulate the area in which the rectangles are being placed, and for each rectangle, draw it in on the array. The problem is that the constraints of the problem may make the canvas too large to fit into memory. If it is only slightly too large then the problem can be solved by splitting the canvas into a few smaller pieces, and making a separate pass through the input file for each piece of the canvas. For each piece one can store the area of each colour and then total this across all the pieces to get an answer.

3. Pascal
This section is devoted to a number of clever tricks that can be done with Pascal. This section has been updated for the post-2000 IOI environment, namely Free Pascal on a 32-bit Intel processor and a 32-bit operating system. Some of this information is more generally useful, but some is not.

1. Miscellaneous
2. Copying data
3. Dynamic allocation
4. Exit procedures
5. Timer
6. Text files

7. Typecasting

3.1. Miscellaneous
Use longints
By default, Free Pascal follows Turbo Pascal in making integer a 16-bit type. However, on a 32-bit machine, arithmetic with 16-bit integers is actually slower than with 32-bit integers. Using longint also reduces the chance of an arithmetic overflow. Since it is such a habit to use integer, it may be worth doing a global search and replace after you have finished coding to make sure that you are using longint everywhere. One place where you might not want to use longint is in large arrays, where longints use more memory. This also slows things down, because it wastes memory bandwidth and cache space. However, it is not a huge slowdown, and you should consciously think about whether you need a smaller data type before using one.

Large integers
Free Pascal supports a data type that Turbo Pascal does not: int64, a signed 64-bit integer. This supports very large integers, up to 2^63 - 1, or 9223372036854775807.

Compiler flags
Both Free Pascal and Turbo Pascal support compiler directives for checking various types of errors. The two most important ones are

$R+  Range checking. This checks that you are not accessing array elements out of bounds, and catches some cases of a number being too big for its data type.
$Q+  Overflow checking. This catches arithmetic operations that generate a number too large for the result variable.

These will slow your program down a lot, but are incredibly useful in catching bugs early. This is one of the few advantages Pascal programmers have at an olympiad, and you should put it to use. Turn these flags on while writing and testing your program; turn them off only to test the speed of your program and when handing in. Do not forget to take them out when handing in, otherwise your program will run very slowly during evaluation.

Reading integers

While all Pascal programmers will be familiar with the readln function, many do not know that the read function can be very handy for reading text files. If you have a file full of numbers, then read can be used to read in the next one - whether it is separated by a space or a newline from the previous one.
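As an illustration, here is a sketch that sums all the numbers in a file, whatever mixture of spaces and newlines separates them (the file name is hypothetical):

```pascal
var
  f : text;
  x, total : longint;
begin
  assign(f, 'numbers.in');   {hypothetical input file}
  reset(f);
  total := 0;
  {seekeof skips trailing whitespace, unlike plain eof}
  while not seekeof(f) do
  begin
    read(f, x);   {reads the next number, across spaces and newlines alike}
    total := total + x;
  end;
  close(f);
  writeln(total);
end.
```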

3.2. Copying data

fillchar is a Pascal procedure that allows you to fill any data structure with a particular byte. Although it would appear to be useful only for arrays of bytes, a little knowledge of the storage format of other structures makes it a lot more useful.

Many data types can be zeroed by using fillchar with a fill value of 0. This includes all integer and floating point types as well as the set type (it becomes the empty set), and it will set boolean variables to false. A value of 255 (0xff) will set signed integer types to -1, and unsigned integer types to their largest possible value. A value of 127 (0x7f) will set signed integer types to a value close to their largest value. In many cases, these operations could be done with a for loop. The reasons for using fillchar are that it is quicker to write and highly optimised.
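A sketch of these fills (the array sizes are arbitrary):

```pascal
var
  counts : array[1..1000] of longint;
  seen   : set of 0..15;
  dist   : array[1..100] of longint;
begin
  {zero every element of counts}
  fillchar(counts, sizeof(counts), 0);
  {make seen the empty set}
  fillchar(seen, sizeof(seen), 0);
  {fill every byte of each longint with $7f, giving $7f7f7f7f,
   a value close to maxlongint - handy before a shortest-path search}
  fillchar(dist, sizeof(dist), $7f);
end.
```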

A similar function to fillchar is move. This allows a chunk of data to be copied from one memory location to another. It is more flexible than the assignment operator because you don't have to copy complete data structures; you can select any region of memory you like. This makes it quite handy when using dynamic arrays.
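For example, a sketch that copies part of one array into another (note that the third argument of move is a byte count, so remember the sizeof):

```pascal
var
  a, b : array[1..1000] of longint;
  n : longint;
begin
  n := 500;
  {copy the first n elements of a into b}
  move(a, b, n * sizeof(longint));
end.
```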

3.3. Dynamic allocation

The problem
The new command in Pascal does not allow you to choose how much memory to allocate at run time; thus when creating an array you must choose an upper bound in advance. This presents problems when for example you need 200 lists with a maximum total size of 60000 bytes. Since each array could be up to 60000 bytes by itself, you would need to allocate 60000 bytes to each array. This is clearly not practical. The usual solution in this case is to use linked lists. However, linked lists are not well suited to all applications, have a high memory overhead and are usually slow to work with. They are also more difficult to write and debug.

The solution
The solution is dynamic allocation, which means to allocate to each array only as much memory as it needs at run time. This will be old news to C programmers who are basically forced to do this using the malloc command. In Pascal you need to use the getmem and freemem commands.

type
  integer_array = array[1..30000] of integer;
  integer_pointer = ^integer_array;
var
  the_pointer : integer_pointer;   {pointer to access the array}
  number : integer;                {actual number of items}
begin
  {get the actual number of items}
  getmem(the_pointer, number * sizeof(integer));
  {do stuff with the the_pointer^ array}
  freemem(the_pointer, number * sizeof(integer));
end.

This code illustrates how to dynamically allocate memory for the array. Note that the upper limit of 30000 is arbitrary: the actual limit is number, and it is up to you to make sure that you never use a higher index (otherwise some really weird stuff can happen). You should make the arbitrary limit as high as possible to prevent range check errors, which will occur if the range checker thinks you are trying to access something out of bounds (the range checker doesn't know about dynamic allocation). You must also make sure that the freemem command specifies the same amount of memory as the getmem command.

It is not possible to directly resize a dynamically allocated array. You must allocate a new array, copy the data into the new array and free the old array. Because of this, it may sometimes be necessary to make an initial pass of the input file to determine how much memory to allocate to each array before doing the main pass to fill the arrays with data.
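A sketch of such a reallocation, built from the getmem, move and freemem routines described above (the procedure name resize is my own, not a standard routine):

```pascal
type
  integer_array = array[1..30000] of longint;
  integer_pointer = ^integer_array;

{grow a dynamically allocated array from oldsize to newsize elements}
procedure resize(var p : integer_pointer; oldsize, newsize : longint);
var
  q : integer_pointer;
begin
  getmem(q, newsize * sizeof(longint));        {allocate the new array}
  move(p^, q^, oldsize * sizeof(longint));     {copy the old data across}
  freemem(p, oldsize * sizeof(longint));       {free the old array}
  p := q;
end;
```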

3.4. Exit Procedures

What are exit procedures?
When a program crashes because of a run time error (such as dividing by zero), you do not normally get a chance to respond to the error in any way. An exit procedure allows you to "clean up" in the event of an error, although you cannot easily resume the program. One possible use for an exit procedure is to close an output file: if a file is not closed, then data can be lost. A graphics program might want to change back to text mode if it crashes, so that the user isn't left with a messed-up screen. The most common use that I've found in olympiad problems is in optimisation problems where partial marks can be obtained for a non-optimal solution: keep track of the best solution found so far, and if the program crashes then quickly output this solution. This could also be done by simply writing solutions as they are found, overwriting previous solutions, but that has two disadvantages:

1. Writing many solutions to file is much slower than writing the best solution to file once.
2. Some olympiads (including the International Olympiad in Informatics) will not evaluate a test run that crashed (returned a non-zero exit code). An exit procedure allows you to pretend that nothing went wrong.

The variable exitproc contains a pointer to the default exit procedure. You must set it to point to your own exit procedure. However, you must save the original pointer and restore it inside your exit procedure to allow the normal shutdown procedure to take place.
var
  oldexitproc : pointer;   {the default exit procedure}

procedure myexitproc;
begin
  exitproc := oldexitproc;
  writeln('myexitproc reached with exitcode ', exitcode);
  exitcode := 0;
end;

begin
  oldexitproc := exitproc;
  exitproc := @myexitproc;
  { do stuff }
end.


The error is suppressed by setting exitcode to 0. If you test this in a program that crashes, you may notice that Free Pascal still produces the usual error log that indicates an error (although the operating system still thinks the program worked). If you want to suppress this as well, then set the variable erroraddr to nil. The exit procedure is run whether your program crashes or terminates normally, so it is a great place to put the code for writing your output file to ensure that it definitely gets written.

3.5. Timer
Why use a timer?
In practically all olympiads that use automatic evaluation, each test run has a time limit placed on it. While some problems are "all or nothing", there are other problems (particularly optimisation problems) for which it is possible to get some marks for a nearly correct answer. In these cases, you might have already found such an answer inside the time limit, in which case you want to stop and output this answer rather than exceeding the time limit to find the best answer.

How do I measure time?

There are essentially three ways to do it.

1. Use a function provided by the contest organisers. This is normally a function that will tell you how long your program has been running. This is obviously the best way to do it.
2. Use a library function to measure time. This may not match the way the contest environment measures time (e.g. CPU time versus wall clock time), and may not be legal in some contests.
3. Use a counter which you increment inside a loop. You will need to calibrate this counter.
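As a sketch of the second approach, using the Now function and SecsPerDay constant from the sysutils unit (the 4.5-second budget and loop bound are made up, and wall clock time may not match how the contest measures time):

```pascal
uses
  sysutils;

var
  start : TDateTime;
  i : longint;
begin
  start := Now;
  for i := 1 to 1000000000 do
  begin
    {inner work here}
    {check only every 100000 iterations so the clock calls stay cheap}
    if (i mod 100000 = 0) and ((Now - start) * SecsPerDay > 4.5) then
      halt(0);   {triggers the exit procedure, which writes the best answer}
  end;
end.
```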

Other issues
The primary issue is to decide how often to check the time. If you check too seldom, you might end up going over time anyway, but checking too often will slow down your program. Generally a good bet is to place the check just outside your innermost loop, but ultimately you need to decide what is best for each solution. You also need to exit once you've determined that you've used enough time. It is often a good idea to use a timer in conjunction with an exit procedure. In this case, executing halt(0) will set the exit procedure in motion, and all your cleaning up can be done in there.

3.6. Text files


If you are debugging a program that writes to a file, you may want to redirect the output to the screen. To do this, simply change the filename in the assign statement to '' (the empty string). You can also get input from the keyboard instead of a file by doing the same thing for an input file. Here's a trick for lazy people: if you assign and reset the file variable input, then readln statements that don't specify a file will read from the file you assigned instead of from the keyboard. If you do the same thing for the output variable, then you have a default output file instead of writing to the screen. If the problem you're solving takes input and output from standard in and standard out (think keyboard and screen), then these tricks can be reversed to allow you to work with files while testing.
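A sketch of this redirection (the file names are hypothetical):

```pascal
begin
  {read from the input file, but show output on screen while debugging;
   change '' back to 'problem.out' before handing in}
  assign(input, 'problem.in');
  reset(input);
  assign(output, '');
  rewrite(output);
  {readln and writeln without a file variable now use these assignments}
end.
```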

3.7. Typecasting
What is typecasting?
Typecasting is pretending that a variable of one type is actually of another type (this isn't a formal definition). For example, a boolean variable could be typecast to a byte, which would allow the data in the variable to be treated as a number. You might want to do this if you wanted to add 1 to a number when a boolean variable is true, in which case you can just add the byte version of the variable. To typecast a variable in Pascal, use the type you want to typecast to as if it were a function, e.g. byte(a_boolean_var) returns the data in the variable a_boolean_var as if it were a byte. Turbo Pascal won't let you just typecast anything to anything; it has some rules, one of which is that the target type and the variable must have the same size. It will also sometimes do some automatic conversion; for example, if you typecast an integer to a longint, it will actually convert the integer into a longint.
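For example, the boolean trick mentioned above might look like this:

```pascal
var
  flag : boolean;
  count : longint;
begin
  count := 10;
  flag := true;
  {adds 1 if flag is true, 0 if false, without an if statement;
   this relies on true being stored as the byte 1}
  count := count + byte(flag);
end.
```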

Why use typecasting?

Typecasting is only useful if you know something about the way different types are stored internally. For example, a boolean is a byte that contains 1 for true or 0 for false, and a char is a byte containing the ASCII code of the character. Usually you use typecasting to access the data in a variable in a different way. A good example is typecasting an integer to a set. A set with the range 0 to 15 (note that it must start at 0 because of the way Turbo Pascal deals with sets) is a 2-byte integer with each bit representing one element's presence or absence. Thus if a word variable loops from 0 to 65535 (use a word, because 65535 is too big for a 16-bit integer) and is typecast to a set of this type inside the loop, the set will loop through all subsets of the set 0..15!
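A sketch of this subset loop follows. It relies on the 2-byte set layout described above, which holds in Turbo Pascal; Free Pascal may store small sets in 4 bytes, so check the size in your own environment before relying on it:

```pascal
type
  smallset = set of 0..15;
var
  w : word;
  s : smallset;
  i : longint;
begin
  for w := 0 to 65535 do
  begin
    s := smallset(w);   {bit i of w is set exactly when i is in s}
    for i := 0 to 15 do
      if i in s then
        {process element i of this subset};
  end;
end.
```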

Another way to typecast

Sometimes Turbo Pascal doesn't allow you to do a typecast that it thinks doesn't make sense. If you know what you are doing, you can work around it with the absolute keyword. When declaring a variable followed by this keyword and the name of another variable, the compiler will make the two variables point to the same block of memory if it can. For example:
var
  aset  : set of 0..15;
  aword : word absolute aset;

Here the two variables use the same two bytes of memory.