
Chapter 1

Computational thinking
Introduction
The expression ‘computational thinking’ is talked about a lot in computing and educational circles these days. It
is not a new concept; the term was first coined in 1996 by Seymour Papert.

Computing people
Seymour Papert
Seymour Papert is a computer scientist who, among other things, helped to develop Logo – a programming
language that aimed to help students think ‘computationally’.

Extra info
Logo is a programming language designed by Seymour Papert (among others), and is intended to help children
learn programming as well as mathematical concepts. It is very easy to learn and its most well-known feature
is turtle graphics, where a screen object called a ‘turtle’ is driven around the screen under programmatic or
direct command. Logo is still useful today in demonstrating computational thinking topics such as abstraction
and decomposition.
An implementation of turtle graphics is available for Python and its commands are accessible once the ‘turtle’
library is loaded into the Python programming environment.
Here is a demonstration program that draws a succession of concentric circles.
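A minimal sketch of such a program, using Python's turtle module (the number of circles, their spacing and the function names here are illustrative choices, not the book's original listing):

```python
def circle_radii(count, step):
    """Radii for the concentric circles, smallest first."""
    return [step * i for i in range(1, count + 1)]

def draw_concentric_circles(count=6, step=20):
    import turtle  # imported here so circle_radii stays usable without a display
    t = turtle.Turtle()
    t.speed(0)
    for radius in circle_radii(count, step):
        t.penup()
        t.goto(0, -radius)   # start each circle lower so they share a centre
        t.pendown()
        t.circle(radius)
    turtle.done()

# draw_concentric_circles()  # uncomment to open the graphics window
```

Separating the 'what to draw' calculation (circle_radii) from the drawing itself is a small example of decomposition in its own right.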
Computer systems are notoriously difficult to produce. Non-trivial systems soon become complex and, because
of this, various methodologies and strategies have been developed to make development easier and to keep large
projects under control (see Chapter 9).
The discipline of software engineering is concerned with this aspect of systems
development and certain practices have become standard and have stood the test of time.
New ideas continue to evolve to refine the process yet further.
As well as the development of systems, the greater use of computers has helped to change
the way we think about things and understand the world and the universe. One good example
is the realisation that we ourselves are the product of digital information in the form of our
own DNA.
The realisation that the complexities of life and the world around us are explainable in
terms of information systems and often very simple processes, has allowed us to look at the
world and ourselves in a new and powerful way.
Understanding how things work in terms of natural information systems also allows us to
produce new inventions based on the changed perspectives that computers bring us; for
example neural networks borrow understanding from animal nervous systems in order to
process large numbers of inputs and predict outcomes that are otherwise uncertain. Some
success continues to result from research into artificial intelligence, again, using systems that
mimic human behaviour.
In recent years, the nature of computational thinking has been developed and given much
publicity and impetus by Jeannette Wing in the US.
Key term
Computational thinking A problem-solving approach that borrows techniques from
computer science, notably abstraction, problem decomposition and the development of
algorithms. Computational thinking is applied to a wide variety of problem domains and
not just to the development of computer systems.
Computing people
Jeannette Wing
At the time of writing, Jeannette Wing is Corporate Vice President of Microsoft Research. In this role she
oversees Microsoft’s various research laboratories around the world.
Jeannette Wing has had a distinguished career. Prior to joining Microsoft, she worked at the University of
Southern California and then Carnegie Mellon University in Pittsburgh, where she was President’s Professor of
Computer Science.
While at Carnegie Mellon, she devoted much energy to the promotion of computational thinking and how it is a
powerful approach to solving a wide variety of problems, not necessarily involving computers. She sees it as a
vital skill that should be taught to all children, as important as the 3Rs.
Carnegie Mellon still has a ‘Center for Computational Thinking’, where computational thinking applications are
explored and new ways to apply it are devised.

How computers help us think


The widespread use of computers has changed the way in which we solve problems. Here are some examples.
1. We can get the technology to do all the hard work for us. Problems that in the past involved
just too much work or time can now be tackled extremely quickly; this means we have to
formulate them in such a way as to harness the raw speed and power of a computer. We have
to approach problems differently in order to get the best out of the computer’s power.
2. Formulating a problem for computer solution in itself clarifies our understanding of the
problem. We might not otherwise realise that a problem can be broken down into simpler
parts.
3. Understanding how computers store and process data provides us with powerful analogies for
understanding how the world works.

Some examples of computational thinking


We can start with something simple. Not that many years ago, anyone who was engaged in some form of
creative writing would have to use a pen or maybe a typewriter. The writer of a factual book would need to
make copious notes and keep them organised so that a coherent product could be made. Such writing necessarily
involved numerous revisions, lots of crossing out and the throwing away of much paper, together with the
endless labour of rewriting.
Of course we don’t have to do that any more – we have word processors. But the thing is,
not only have word processors liberated us from much drudgery, they have also liberated our
minds. We are no longer afraid to commit ideas to paper or screen because we can amend
what we say so easily. The creative process has been transformed by the technology. We
think differently.
On a larger scale, the Human Genome Project used computer technology to process vast
amounts of data and also lend insights into how our own biological information processing
systems function.
Here are some more examples of how computational thinking can help us, as suggested by
Jeannette Wing.
We can:
• look at a problem and assess how difficult it is
• use recursion to apply a simple solution repetitiously
• reformulate a problem into something familiar that we know how to solve
• model a problem so that we can create a program that can be run on a computer
• look at a proposed solution and assess it for elegance and efficiency
• build in processes to our solutions that limit damage and recover from errors
• scale our solutions to cope with bigger similar problems.
Example
Shotgun sequencing
Humans, and all life on Earth, are products of information contained in our nucleic acids – in most cases, this is
DNA (deoxyribonucleic acid). This is a long molecule made from repeating units called nucleotides, of which
there are only four different types. The sequence of these largely determines our characteristics – our similarities
and differences, at least at a physical level.
Shotgun sequencing is a method of breaking up long sequences of DNA into small pieces. These segments can
be analysed rapidly to determine the sequences of nucleotides. Computer processing is used to recognise where
the short segments overlap and so can be used to determine the overall sequence of the whole molecule. This is
much faster and easier to carry out than trying to read a single intact piece of DNA.
In this example, computer programs processed the vast amount of data involved, but the project was also made possible by our understanding that even complicated organs and whole bodies are basically constructed by recursively following a plan.
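The idea of reconstructing a whole sequence from overlapping fragments can be illustrated with a toy sketch (a naive greedy merge on strings; real sequence assemblers are far more sophisticated and the data here is invented):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments):
    """Greedily merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)                     # (overlap length, i, j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break                            # no overlaps left to exploit
        merged = frags[i] + frags[j][n:]     # join, keeping the overlap once
        frags = [f for k, f in enumerate(frags) if k not in (i, j)]
        frags.append(merged)
    return frags[0] if len(frags) == 1 else "".join(frags)
```

For instance, assemble(["GATTAC", "TTACA", "ACAGAT"]) rebuilds "ACAGATTACA" by repeatedly joining the pair of fragments with the longest overlap.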
When looking at solving a problem with the help of computational thinking, we have to decide first what parts
of the problem (if any) are best suited to a computer solution. This links back to the age-old question that
predates modern computers: whether a particular problem is computable; that is, is there an algorithm that will
always give the correct output for a valid input? It has been demonstrated that in some cases, this question is
undecidable, so we often have to use our practical experience to make a judgement. We have to decide which
parts of a problem are best suited to a human resolution. Computational thinking encourages us to decide what
computers are best at and what humans are best at. Good solutions to messy real-world problems need to find
good answers to this question.
Question
Think of some other techniques from computer science that translate into real-world problem solving.

Breaking down problems


One of the most powerful benefits of thinking in computational terms is that it encourages us not to be
frightened of large and complex problems. Over the years, computer scientists and analysts have developed
approaches that attempt to break down a large problem into its component parts. This is called decomposition.
The aim is that the smaller parts are then easier to understand and solve.
Key term
Decomposition The breaking down of a problem into smaller parts that are easier to solve.
The smaller parts can sometimes be solved recursively; that is, by applying the same
solution again and again to ever-smaller parts until that part of the problem is solved.
This approach, popularised in the 1970s, was called ‘top-down design’. Top-down design
led to the widespread use of modular programming, where the additional benefits are that
different parts of the overall finished program can be assigned to different programmers,
lightening the load on each and also making the best use of their individual skills. It is also
much easier to debug a program if it is constructed from smaller component parts rather than
a big sprawling single entity.
Once an overall design has been decided upon, effectively a menu is produced that leads to
the writing of the separate modules. This can be hierarchical, where each sub-problem leads
to smaller components in a tree-like structure.
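This hierarchy maps naturally onto code: a top-level routine calls sub-routines, which may themselves call further routines. A sketch, with entirely hypothetical module names:

```python
def produce_report():            # top level: reads like a menu of sub-tasks
    data = gather_data()
    summary = analyse(data)
    return format_output(summary)

def gather_data():               # each sub-problem becomes its own module
    return [4, 8, 15]            # stand-in for real data collection

def analyse(data):
    return sum(data) / len(data)

def format_output(summary):
    return f"Average: {summary}"
```

Each function can be written, tested and debugged separately, which is exactly the benefit claimed for modular programming above.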

A drawback of the top-down approach is that it assumes that the whole solution to the problem is knowable in
advance. Increasingly, this is not the case, and plans and ideas change as a project develops. Nonetheless, as a
starting point, it is useful if a problem can be split up even to some extent.
Also, this hierarchical approach is less useful with many more modern applications. With
the widespread adoption of event-driven programming, such a neat top-down structure is not
always appropriate. However, it is a computational tool that still has its uses.
Decomposition does not have to be hierarchical. It can take into account parallel processes,
where alternative paths are possible. It is still a help in breaking down the problem.
Decomposition can be applied at various levels in computing scenarios. As we have seen,
we can break down a problem into different functional components that lead to modules or
program procedures. We can also break a problem down into processes, data stores and data
flows. This approach, again developed in the 1970s, leads to a data-flow diagram. The
advantage of this approach is that the major components and activities in a system are laid out
before any effort is expended on the finer details of algorithm development.
Structured programming
Early programs were commonly developed on an ad hoc basis, with no particular rules as to how to lay them
out. In particular, programmers often used the now infamous GOTO statement that transferred control
unconditionally to some other point in a program.
IF condition THEN goto label
or worse
IF condition THEN goto 230
This made programs very hard to read and maintain and was vigorously opposed by the computer scientist
Edsger Dijkstra, notably in a letter entitled Go To Statement Considered Harmful, where Dijkstra argued for
banning the construct from all languages, and over time it did indeed drop from favour, being replaced
by structured programming. In structured programming, functions (or procedures) were packaged off and
designed to perform just one or a limited set of jobs. This improved readability and you should still make sure
that your programs are packaged up into fairly simple modules.
Structured programming gained favour also because it was shown by Böhm and Jacopini
in 1966 that any computable function can be carried out by using no more than three different
types of programming construct, thereby eliminating the need for GOTO.
These constructs are:
1. sequence: executing one statement or subprogram after another
2. selection: branching to a different place in a program according to the value of a Boolean
expression
3. iteration: repeating a section of code until a Boolean expression is true.
So, structured programming is another common method of decomposition routinely used by computer
professionals and we can learn from it when tackling many everyday problems – solve one problem at a time!
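All three constructs can be seen in a few lines of Python (an illustrative sketch):

```python
def sum_of_evens(numbers):
    total = 0                  # sequence: statements executed one after another
    for n in numbers:          # iteration: repeat the indented section
        if n % 2 == 0:         # selection: branch on a Boolean expression
            total += n
    return total
```

sum_of_evens([1, 2, 3, 4, 5, 6]) returns 12 without a single GOTO.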
Key point
Structured programming remains one of the key factors that affects the way in which your practical
programming course work is marked and judged.

Objects
Object-oriented programming is a common way of breaking down problems and functionality at the same
time. An object, which is based on a class, is a container of attributes (data) and methods (code). This is popular
because each object can be isolated from others, which minimises errors due to interference, and it also
facilitates the reuse of objects for similar problems. For a full coverage of objects, see pages 91–96.
Key term
Object-oriented programming A program made up of objects (custom-made data structures
to represent often-used real-world entities) that interact. Object-oriented languages include
Java and C++. Object-oriented programming is covered in more detail at the end of Chapter 6.
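As a sketch of the idea (the class and its members are hypothetical, written here in Python rather than Java or C++):

```python
class LibraryMember:
    """An object packaging attributes (data) with methods (code)."""

    def __init__(self, name, member_id):
        self.name = name            # attribute
        self.member_id = member_id  # attribute

    def card_text(self):            # method acting on the object's own data
        return f"{self.member_id}: {self.name}"
```

Each LibraryMember instance carries its own data, isolated from other objects, and the class can be reused wherever a similar entity is needed.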

Decomposition in real life


We decompose problems routinely in real life. What computational thinking gives to us is the realisation that
this is what we are doing and the encouragement to break problems down consciously rather than intuitively.
Question
Consider the advantages of each level of detail given in this example. When would you use each?

Example
A friend is travelling to visit you at your home. You need to explain how to get there. Consider the following
approaches:
1. Get the train to Central Station, then get a taxi to 24 Acacia Avenue.
2. Get the train to Central Station, then get the number 23 bus. Get off the bus after six stops,
walk down Back Street, take the second right into Acacia Avenue. Number 24 is 100 metres
along on the right.
Clearly, the level of decomposition can be tailored to the need of the moment.

Questions
A decimal number such as 21 can be decomposed into its separate digits; that is, 2 × 10¹ + 1 × 10⁰.
1. Decompose the binary number 1000001.
2. Decompose the decimal equivalent of binary 1000001.
3. Decompose the hexadecimal equivalent of binary 1000001.
Key points
– Algorithms are an integral part of the decomposition process.
– Small problems extracted from big problems should be solvable by an algorithm.
– Recursive algorithms are particularly useful in solving simple repetitive problems that may
have any number of iterations.

The power of algorithms


An algorithm is, put simply, a procedure – in the widest sense of the word. A chef gets the ingredients
for a meal as input, carries out various processes (procedures) on them as an algorithm or method and outputs a
delicious meal. Organisations have algorithms or procedures for appointing new members of staff, banks have
procedures for deciding whether to grant someone a mortgage and schools and colleges have procedures for
determining entry to some courses or for disciplining recalcitrant students.
Devising algorithms is another crucial and long-standing part of computational thinking.
Although humans have been creating and following algorithms for millennia, it is the
development of computers that has highlighted the crucial importance and centrality of
algorithms in all problem solving. As with decomposition, becoming adept at formulating
algorithms, learning from computer science, has many useful spin-off benefits in the wider
world.
Formulating algorithms is notoriously hard to do. For most non-trivial problems, there can
be a whole range of possible ways to go about it. Even after a system has been implemented,
it is usually the case that better algorithms can be devised that would make the system more
robust, easier to use and, crucially, run faster or use fewer resources. It can also often be the
case that an algorithm does not always return the correct result.
The power of algorithms often comes from the short cuts that have been designed into
them. This in turn often comes from a proper decomposition of the problem in the first place.
Some of the most effective algorithms are based on recursively applying a simple process.
Example
Consider searching an ordered list of numbers in a file for a particular number. This has many practical
applications in everyday situations, such as looking up someone’s bank account by account number.
You could write an algorithm to start at the beginning and continue until the account is found, or the end of the
file is reached.
This would work, but the list might be enormous and the search could take up significant processor time.
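That straightforward start-at-the-beginning approach, a linear search, might be sketched as follows (names are illustrative):

```python
def linear_search(items, target):
    """Check each item in turn until target is found or the list ends."""
    for index, item in enumerate(items):
        if item == target:
            return index   # position of the match
    return -1              # target is not present
```

In the worst case every item must be examined, which is why a sorted list invites something better.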
If the list of numbers is in order, better methods exist that could find an item much faster. A well-known
example is the binary search algorithm. The whole point of this algorithm is to examine the middle item. If this
is the item required then the search is complete. If the number required is less than the middle item, the middle
of the left half of the list is examined next, otherwise the middle of the right half. Each time an item is checked,
the number of remaining items that need to be looked at is halved.
Here is an implementation of the binary search algorithm:
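One way to write this recursively in Python, as a function called 'search' (a sketch; the parameter names are illustrative):

```python
def search(items, target, low, high):
    """Recursive binary search of a sorted list.
    Returns the index of target, or -1 if it is not present."""
    if low > high:                                   # end condition: nothing left
        return -1
    mid = (low + high) // 2
    if items[mid] == target:                         # end condition: item found
        return mid
    if target < items[mid]:
        return search(items, target, low, mid - 1)   # search the left half
    return search(items, target, mid + 1, high)      # search the right half
```

For example, search([2, 5, 8, 12, 16], 12, 0, 4) returns 3.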

This is a recursive algorithm called 'search'. Recursive means that, when written as a function, it calls itself from
within itself. Notice the lines that start with 'return search'. The beauty of writing this recursively is that very
little code is needed to produce a search that repeats as often as needed.
Algorithms cannot work on their own. They are designed to ‘do something’ ‘to’ or ‘with’ something else. The
something else is data. If the data has been structured, as in an ordered list, this makes devising an efficient
algorithm that much easier. This is another lesson to learn when applying computational thinking to real-world
problems.
Designers of algorithms need to bear certain things in mind when producing an algorithmic
solution.
Algorithms must exactly describe what they are supposed to do. Any ambiguity will make
them unreliable when implemented. Computers don’t understand vagueness (unless
programmed to do so in an unambiguous way!).
Algorithms must end. No end means no result. This is something to watch for in recursive
algorithms. It is easy to miss out an end condition.
Algorithms must be correct. There is no point in running an algorithm if the end result is
incorrect.
Algorithms must work with any instance of the same problem. The whole point of
presenting algorithms to a computer is that they can be applied to different sets of similar
data.
Question
What is the end condition in the binary search implementation given on this page?
Once an algorithm has been designed, next comes the easy bit – coding it into a
programming language. A programmer who knows the syntax of a given language should be
able to translate a well-designed algorithm into code.
Key points
– Computer scientists have long learned the power of decomposition to solve problems.
– Many ways of decomposition have been developed.
– We can use this key concept to solve many everyday problems.
Practice questions
1. Define the term ‘recursion’.
2. GNU is an operating system. Explain why the name GNU is recursive.
3. A library accepts new members and stores data about them. It issues them with a card. It also
updates membership details when necessary. When the member leaves, the record for that
member is deleted.
Express this library system as a data-flow diagram.
