
COUNT THE NUMBER OF OPERATIONS: 4 x membership, 2 x insert, 2 x remove

EXPANDABLE ARRAY (initial size; doubles when the array is full)
Insertion: add to the end -> O(1); when it requires doubling it can cost O(n), but that is O(1) amortized.
Take out the lowest-rank node (the rank of each node is its f value): O(n), as the array is not sorted.
Removal by ID: O(n).
Membership test: O(n), we have to go through all occupied slots in the array.
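The costs above can be made concrete with a minimal Python sketch. The class name and the representation of nodes as (id, f) pairs are illustrative assumptions, not part of the original notes:

```python
class ExpandableArray:
    """Unsorted expandable array; capacity doubles when full."""

    def __init__(self, capacity=4):
        self.slots = [None] * capacity
        self.size = 0

    def insert(self, node):
        # O(1) amortized: doubling copies everything, but only on every n-th insert.
        if self.size == len(self.slots):
            self.slots = self.slots + [None] * len(self.slots)  # double the capacity
        self.slots[self.size] = node
        self.size += 1

    def pop_lowest_f(self):
        # O(n): linear scan, since the array is not sorted.
        best = min(range(self.size), key=lambda i: self.slots[i][1])
        node = self.slots[best]
        self.slots[best] = self.slots[self.size - 1]  # fill the hole with the last element
        self.size -= 1
        return node

    def contains(self, node_id):
        # O(n): the membership test scans every occupied slot.
        return any(self.slots[i][0] == node_id for i in range(self.size))
```

Filling the removed slot with the last element keeps removal O(1) once the minimum is found, at the price of not preserving insertion order.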

EXPANDABLE ARRAY SORTED BY F-VALUE
Insertion: binary search for the position (O(log n)) + O(n) in the worst case to shift the elements, so O(n) overall.
Take out the lowest-rank node: O(1) to locate, since the minimum f is at the front of the sorted array; if the array is kept in descending order the minimum sits at the end and can be popped in O(1), otherwise removing from the front shifts O(n) elements.
Removal by ID: O(n), because the array is sorted by f value, not by ID.
Membership test: O(n); we have to go through all slots, since the array is sorted by f value rather than by ID.
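A minimal sketch of the f-sorted variant, assuming nodes are stored as (f, id) tuples so that a plain Python list sorts by f; the function names are illustrative:

```python
import bisect

# Sorted-by-f open list as a Python list of (f, id) pairs.
# bisect finds the position in O(log n), but list.insert still shifts
# up to n elements, so insertion is O(n) overall.

def insert_sorted_by_f(arr, f, node_id):
    bisect.insort(arr, (f, node_id))  # O(log n) search + O(n) shift

def pop_lowest_f(arr):
    # The minimum f is always the first element. Note that list.pop(0)
    # itself shifts O(n) elements; keeping the list in descending order
    # and popping from the end would make this O(1).
    return arr.pop(0)
```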

EXPANDABLE ARRAY SORTED BY ID
Insertion: binary search for the position (O(log n)) + O(n) in the worst case to shift the elements, so O(n) overall.
Take out the lowest-rank node: O(n), as the array is sorted by ID, not by f.
Removal by ID: O(log n) to find the element, because the array is sorted by ID; but to take advantage of that we must not shift the array on every removal. Instead, mark the slot as deleted and compact the array during insertion: compacting costs O(n), but insertion is O(n) anyway, so its big-O is unaffected. Note that binary search then runs over C slots, where C is the capacity of the array, so it costs O(log C); in the worst case C = 2n.
Membership test: O(log n), because the array is sorted by ID.
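The lazy-deletion idea can be sketched as follows. This is an illustrative Python version (class name, tombstone encoding via f = None, and the [id, f] pair layout are all assumptions):

```python
import bisect

class SortedByIdArray:
    """[node_id, f] pairs kept sorted by id. remove() only marks a tombstone
    after an O(log C) binary search; insert() compacts the tombstones away,
    which does not change insertion's O(n) bound (shifting dominates)."""

    def __init__(self):
        self.items = []  # [node_id, f] pairs; f is None marks a tombstone

    def _index(self, node_id):
        # In-place binary search over all C slots, tombstones included:
        # tombstoned ids stay in position, so sorted order is preserved.
        lo, hi = 0, len(self.items)
        while lo < hi:
            mid = (lo + hi) // 2
            if self.items[mid][0] < node_id:
                lo = mid + 1
            else:
                hi = mid
        if lo < len(self.items) and self.items[lo][0] == node_id:
            return lo
        return None

    def insert(self, node_id, f):
        # Compact tombstones here: insertion is O(n) anyway.
        self.items = [it for it in self.items if it[1] is not None]
        ids = [it[0] for it in self.items]
        self.items.insert(bisect.bisect_left(ids, node_id), [node_id, f])

    def remove(self, node_id):
        i = self._index(node_id)
        if i is not None:
            self.items[i][1] = None  # tombstone: no shifting

    def contains(self, node_id):
        i = self._index(node_id)
        return i is not None and self.items[i][1] is not None
```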

SORTED SINGLY LINKED LIST, based on the ID
Insertion: O(n), because we have to walk the list until the insertion point is found, then create a node and adjust a couple of links.
Take out the lowest-rank node: O(n), as the list is sorted by ID, so every element must be examined.
Removal by ID: O(n), because we cannot perform binary search on a linked list and have to walk it.
Membership test: O(n); we walk the list until we find the element or reach one with a greater ID.
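A minimal Python sketch of the ID-sorted list; the Node class and function names are illustrative. Note the early stop in the membership test, which the sorted order makes possible:

```python
class Node:
    def __init__(self, node_id, f, nxt=None):
        self.id, self.f, self.next = node_id, f, nxt

def insert_by_id(head, node_id, f):
    """O(n): walk to the insertion point, then relink. Returns the new head."""
    if head is None or node_id < head.id:
        return Node(node_id, f, head)
    cur = head
    while cur.next is not None and cur.next.id < node_id:
        cur = cur.next
    cur.next = Node(node_id, f, cur.next)
    return head

def contains(head, node_id):
    """O(n), but stops early once an id greater than the target is seen."""
    cur = head
    while cur is not None and cur.id <= node_id:
        if cur.id == node_id:
            return True
        cur = cur.next
    return False
```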

SORTED SINGLY LINKED LIST, based on the f value
Insertion: O(n), because we have to walk the list until the insertion point is found, then create a node and adjust a couple of links.
Take out the lowest-rank node: O(1), as it is the first element.
Removal by ID: O(n), because the list is sorted by f value, not by ID.
Membership test: O(n), as we have to walk the whole list (no early stop, since the IDs are in no particular order).
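The f-sorted variant differs only in the comparison key, but it makes the lowest-rank removal trivial. A minimal sketch (names illustrative):

```python
class FNode:
    def __init__(self, node_id, f, nxt=None):
        self.id, self.f, self.next = node_id, f, nxt

def insert_by_f(head, node_id, f):
    """O(n): walk to keep ascending f order. Returns the new head."""
    if head is None or f < head.f:
        return FNode(node_id, f, head)
    cur = head
    while cur.next is not None and cur.next.f <= f:
        cur = cur.next
    cur.next = FNode(node_id, f, cur.next)
    return head

def pop_lowest_f(head):
    """O(1): the minimum f is always the head. Returns (node, new_head)."""
    return head, head.next
```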

HASH TABLE (an array of M linked lists, the buckets) holding N nodes (waypoints in A*)
The hash value gives the index of the bucket a node is put in; for integer IDs, modulus M (%) works well, provided the hashed values are homogeneously distributed across the domain. The hash function should distribute the elements of the domain as evenly as possible over the co-domain. Once the bucket is found, the node goes into that bucket's sorted linked list. Assuming a uniform distribution, the average bucket length is N/M. Computing the hash is O(1); once the bucket is found, locating an element in its list costs O(N/M) on average.
M is a design choice. At the extremes, M = N or M = 1 (a degenerate table: a single (sorted) linked list). The trade-off of a hash table is in space: the larger M, the worse lowest-rank removal performs (it becomes O(N) for M = N, a giant hashed array), while the other operations approach O(1).
Insertion: find the bucket (O(1)), then insert into its sorted list by walking it: O(N/M). (Appending and re-sorting would cost O((N/M) log(N/M)), so it is simpler and cheaper to just walk the list.)
Take out the lowest-rank node: if we hash on the ID and keep each bucket's list sorted by f, only the head of each bucket can hold the minimum, so the cost is O(M). If the lists are sorted by ID instead, the cost is still O(N), because every element of every bucket must be examined. Hashing on f is not helpful, because the hash does not tell us where the lowest f value is. So the best solution is to hash on the ID and sort the buckets by f.
Removal by ID: hash (O(1)) + O(N/M) to find the node and adjust the links in its list.
Membership test: hash (O(1)) + walking the list (O(N/M)), so O(N/M).
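The recommended design (hash on ID, buckets sorted by f) can be sketched in a few lines of Python. Assumptions: integer node IDs, modulus hashing, and buckets as Python lists of (f, id) pairs rather than true linked lists; the class name is illustrative:

```python
import bisect

class HashOpenList:
    """M buckets; each bucket holds (f, id) pairs sorted by f. Hash on the id."""

    def __init__(self, m=8):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def _bucket(self, node_id):
        return self.buckets[node_id % self.m]  # O(1) hash

    def insert(self, node_id, f):
        bisect.insort(self._bucket(node_id), (f, node_id))  # O(N/M) on average

    def contains(self, node_id):
        return any(nid == node_id for _, nid in self._bucket(node_id))  # O(N/M)

    def remove(self, node_id):
        b = self._bucket(node_id)
        b[:] = [(f, nid) for f, nid in b if nid != node_id]  # O(N/M)

    def pop_lowest_f(self):
        # O(M): only the head of each non-empty bucket can hold the global minimum.
        best = min((b for b in self.buckets if b), key=lambda b: b[0][0])
        return best.pop(0)
```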

EXTREME OPTIMIZATION, BUT DOUBLE THE SPACE: a hashed array plus an array of node IDs sorted by f value; but then keeping that second structure up to date on every operation could be expensive (?). Dijkstra (BFS) - slow, best-first.