/*
 * Function: Append
 * Usage: Append(first, second);
 * Appends the second linked list onto the end of the first.
 */
void Append(Node *& first, Node *second)
{
    if (first == NULL) first = second;    /* empty list: attach second here */
    else Append(first->next, second);     /* recurse toward the end of the list */
}
To show why the first version works, here is a diagram showing what happens when an empty list is passed to Append. This first picture shows the state when Append has been called but has not yet executed. Note how first is a reference to the list in main.
Node *list = NULL;
Node *toAppend = new Node;
toAppend->next = NULL;
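The diagrams do not reproduce in this text, but the same empty-list scenario can be traced in code. The sketch below is self-contained, so it repeats the recursive Append discussed above; the Node definition (with an int value field) and the Length helper are illustrative additions, not from the handout.

```cpp
#include <cassert>
#include <cstddef>

struct Node {
    int value;
    Node *next;
};

/* Recursive Append: walks the next pointers by reference until it
 * reaches the NULL at the end of the list, then assigns second there.
 * On an empty list, first itself is that NULL pointer, so the caller's
 * list variable is updated directly through the reference. */
void Append(Node *&first, Node *second) {
    if (first == NULL) first = second;
    else Append(first->next, second);
}

/* Helper (not from the handout): counts the cells in a list. */
int Length(Node *head) {
    int count = 0;
    for (Node *cur = head; cur != NULL; cur = cur->next) count++;
    return count;
}
```

Calling Append(list, toAppend) with the setup above leaves list pointing at the single appended cell, exactly because first is a reference to main's pointer.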
The function PopRocks takes the cell in the list that the parameter points to, removes that cell
from its current position in the list, and tacks it on the end of the list. Therefore, if the parameter
is a pointer to the first element in the list, the lists would end up looking like this:
30 -> 45 -> 60 -> 15
't' -> 'a' -> 'r' -> 's'
"hang" -> "a" -> "salami," -> "I'm" -> "a" -> "lasagna" -> "hog!" -> "Go"
starts at N and is halved each time, so it requires ~lg N iterations to reach 1 and one more to
reach 0. b starts at N/2 and is decremented by one each time, so it takes ~N/2 steps to become 1
and one more to reach 0. a will reach 0 first, taking ~lg N steps, so Binky runs in O(lg N) time.
two recursive calls that are each one smaller. This is the same pattern as Towers of Hanoi or the knapsack problem. You might recognize the classic 2^N pattern, or get the result by solving the recurrence relation T(N) = 2*T(N-1) + 1, or by drawing the recursion tree and counting the number of calls.
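The closed form of that recurrence can be spot-checked numerically. The function below (an illustrative addition) mirrors T(N) = 2*T(N-1) + 1 with T(0) = 0, which is the Towers of Hanoi move count, and evaluates to 2^N - 1.

```cpp
#include <cassert>

/* Mirrors T(N) = 2*T(N-1) + 1, T(0) = 0: each call does one unit of
 * work plus two recursive calls on a problem one smaller, so the
 * total work is 2^N - 1. */
long CountWork(int n) {
    if (n == 0) return 0;
    return CountWork(n - 1) + 1 + CountWork(n - 1);
}
```

CountWork(10) returns 1023 = 2^10 - 1, consistent with the O(2^N) growth.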
To find all the words one letter away from a target by changing one letter and then doing a
lexicon lookup takes time O(k log N). If the length of the target word is k, then it takes time
O(25 * k) = O(k) to generate all of the candidate words. This is because we change one letter at a
time and have 25 possible choices of what we could change it to per letter. Then, since we
assume word lookup in the lexicon is done with a binary search, it takes time O(log N) to see if a
word is in the lexicon, where N is the size of the lexicon. Thus, our total time is O(k log N) to
use this first algorithm.
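The first algorithm can be sketched as follows, assuming the lexicon is a sorted vector so that std::binary_search gives the O(log N) lookup; the function and parameter names are illustrative, not from the handout.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

/* First algorithm: for each of the k positions in the target word, try
 * the 25 other letters (O(k) candidates total) and test each candidate
 * against a sorted lexicon with binary search (O(log N) per lookup). */
std::vector<std::string> NeighborsInLexicon(const std::string &word,
        const std::vector<std::string> &sortedLexicon) {
    std::vector<std::string> found;
    for (size_t i = 0; i < word.size(); i++) {
        std::string candidate = word;
        for (char ch = 'a'; ch <= 'z'; ch++) {
            if (ch == word[i]) continue;      /* 25 real changes per letter */
            candidate[i] = ch;
            if (std::binary_search(sortedLexicon.begin(),
                                   sortedLexicon.end(), candidate)) {
                found.push_back(candidate);
            }
        }
    }
    return found;
}
```

Note the lexicon must be sorted for binary_search to be valid; the total work is the 25*k candidate generations times the log N lookup each.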
If instead we iterate through the entire lexicon, then our total runtime is O(k * N). This is
because if there are N words in the dictionary then it takes time O(N) to iterate through it. Then,
if our target word has length k, then to see if a given word is one character away takes time O(k),
since we possibly have to iterate over the entire target string. Thus the total runtime is O(k * N).
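The second algorithm, sketched under the same illustrative naming assumptions, walks the whole lexicon and does the O(k) character-by-character comparison for each word:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

/* O(k) check: do two equal-length words differ in exactly one position?
 * May scan the entire string before deciding. */
bool OneLetterAway(const std::string &a, const std::string &b) {
    if (a.size() != b.size()) return false;
    int diffs = 0;
    for (size_t i = 0; i < a.size(); i++) {
        if (a[i] != b[i] && ++diffs > 1) return false;
    }
    return diffs == 1;
}

/* Second algorithm: O(N) pass over the lexicon, O(k) test per word,
 * for O(k * N) total. No sorting requirement on the lexicon. */
std::vector<std::string> NeighborsByScan(const std::string &word,
        const std::vector<std::string> &lexicon) {
    std::vector<std::string> found;
    for (size_t i = 0; i < lexicon.size(); i++) {
        if (OneLetterAway(word, lexicon[i])) found.push_back(lexicon[i]);
    }
    return found;
}
```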
From the Big-O analysis, it would appear the first method is always better. However, the constant factor hidden in the first algorithm's O(k) is larger than in the second, since we have to generate 25 * k candidate strings. For a smaller lexicon, it may therefore be more beneficial to use the second algorithm.