
Copyright (c) Hal Daume III, 2002-2006. The preprint version of this tutorial is intended to be free to the entire Haskell community. It may be distributed under the terms of the GNU Free Documentation License, as permission has been granted to incorporate it into the Wikibooks projects.

About This Report

The goal of Yet Another Haskell Tutorial is to provide a complete introduction to the Haskell programming language. It assumes no knowledge of the Haskell language or familiarity with functional programming in general. However, general familiarity with programming concepts (such as algorithms) will be helpful. This is not intended to be an introduction to programming in general; rather, to programming in Haskell.

Sufficient familiarity with your operating system and a text editor is also necessary (this report only discusses installation and configuration on Windows and *nix systems; other operating systems may be supported – consult the documentation of your chosen compiler for more information on installing on other platforms).

What is Haskell?

Haskell is called a lazy, pure functional programming language. It is called lazy because expressions which are not needed to determine the answer to a problem are not evaluated. The opposite of lazy is strict, which is the evaluation strategy of most common programming languages (C, C++, Java, even ML). A strict language is one in which every expression is evaluated, whether the result of its computation is important or not. (This is not entirely true, as optimizing compilers for strict languages often perform “dead code elimination”, which removes unused expressions from the program.) It is called pure because it does not allow side effects. (A side effect is something that affects the “state” of the world. For instance, a function that prints something to the screen is said to be side-effecting, as is a function which affects the value of a global variable.) Of course, a programming language without side effects would be horribly useless; Haskell uses a system of monads to isolate all impure computations from the rest of the program and perform them in a safe way (see Chapter 9 for a discussion of monads proper or Chapter 5 for how to do input/output in a pure language).

Haskell is called a functional language because the evaluation of a program is equivalent to evaluating a function in the pure mathematical sense. This also differs from standard languages (like C and Java), which evaluate a sequence of statements one after the other (such languages are termed imperative).
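
Both properties can be seen in a few lines of Haskell. The following sketch is ours, not part of the original tutorial; the names lazyDemo and greet are made up for illustration:

```haskell
module Main where

-- Laziness: the second component of the pair is never needed,
-- so the error expression inside it is never evaluated.
lazyDemo :: Int
lazyDemo = fst (42, error "this is never evaluated")

-- Purity: anything with side effects carries the IO type,
-- keeping it separate from ordinary pure functions.
greet :: IO ()
greet = putStrLn "hello"

main :: IO ()
main = do
  print lazyDemo   -- the error above is never forced
  greet
```

In a strict language, constructing the pair would already have evaluated the error.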


The history of Haskell is best described using the words of the authors. The following text is quoted from the published version of the Haskell 98 Report:

In September of 1987 a meeting was held at the conference on Functional Programming Languages and Computer Architecture (FPCA ’87) in Portland, Oregon, to discuss an unfortunate situation in the functional programming community: there had come into being more than a dozen non-strict, purely functional programming languages, all similar in expressive power and semantic underpinnings. There was a strong consensus at this meeting that more widespread use of this class of functional languages was being hampered by the lack of a common language. It was decided that a committee should be formed to design such a language, providing faster communication of new ideas, a stable foundation for real applications development, and a vehicle through which others would be encouraged to use functional languages. This document describes the result of that committee’s efforts: a purely functional programming language called Haskell, named after the logician Haskell B. Curry whose work provides the logical basis for much of ours.

The committee’s primary goal was to design a language that satisfied these constraints:

1. It should be suitable for teaching, research, and applications, including building large systems.

2. It should be completely described via the publication of a formal syntax and semantics.

3. It should be freely available. Anyone should be permitted to implement the language and distribute it to whomever they please.

4. It should be based on ideas that enjoy a wide consensus.

5. It should reduce unnecessary diversity in functional programming languages.

The committee intended that Haskell would serve as a basis for future research in language design, and hoped that extensions or variants of the language would appear, incorporating experimental features.

Haskell has indeed evolved continuously since its original publication. By the middle of 1997, there had been four iterations of the language design (the latest at that point being Haskell 1.4). At the 1997 Haskell Workshop in Amsterdam, it was decided that a stable variant of Haskell was needed; this stable language is the subject of this Report, and is called “Haskell 98”.

Haskell 98 was conceived as a relatively minor tidy-up of Haskell 1.4, making some simplifications, and removing some pitfalls for the unwary. It is intended to be a “stable” language in the sense that implementors are committed to supporting Haskell 98 exactly as specified, for the foreseeable future.

The original Haskell Report covered only the language, together with a standard library called the Prelude. By the time Haskell 98 was stabilised, it had become clear that many programs need access to a larger set of library functions (notably concerning input/output and simple interaction with the operating system). If these programs were to be portable, a set of libraries would have to be standardised too. A separate effort was therefore begun by a distinct (but overlapping) committee to fix the Haskell 98 Libraries.

Why Use Haskell?

Clearly you’re interested in Haskell since you’re reading this tutorial. There are many motivations for using Haskell. My personal reason for using Haskell is that I have found that I write more bug-free code in less time using Haskell than in any other language. I also find it very readable and extensible.

Perhaps most importantly, however, I have consistently found the Haskell community to be incredibly helpful. The language is constantly evolving (that’s not to say it’s unstable; rather, there are numerous extensions that have been added to some compilers which I find very useful), and user suggestions are often heeded when new extensions are to be implemented.

Why Not Use Haskell?

My two biggest complaints, and the complaints of most Haskellers I know, are: (1) the generated code tends to be slower than equivalent programs written in a language like C; and (2) it tends to be difficult to debug.

The second problem tends not to be a very big issue: most of the code I’ve written is not buggy, as most of the common sources of bugs in other languages simply don’t exist in Haskell. The first issue certainly has come up a few times in my experience; however, CPU time is almost always cheaper than programmer time, and I am happy to wait a little longer for my results if it saves me a few days of programming and debugging.

Of course, this isn’t the case for all applications. Some people may find that the speed hit taken for using Haskell is unbearable. However, Haskell has a standardized foreign-function interface which allows you to link in code written in other languages, for when you need to get the most speed out of your code. If you don’t find this sufficient, I would suggest taking a look at the language O’Caml, which often outperforms even C++, yet also has many of the benefits of Haskell.


Target Audience

There have been many books and tutorials written about Haskell; for a (nearly) complete list, visit the http://haskell.org/bookshelf (Haskell Bookshelf) at the Haskell homepage. A brief survey of the tutorials available yields:

• A Gentle Introduction to Haskell is an introduction to Haskell, assuming that the reader is familiar with functional programming en large.

• Online Haskell Course is a short course (in German) for beginning with Haskell.

• Two Dozen Short Lessons in Haskell is the draft of an excellent textbook that emphasizes user involvement.

• Haskell Tutorial is based on a course given at the 3rd International Summer School on Advanced Functional Programming.

Though all of these tutorials are excellent, they are on their own incomplete: the “Gentle Introduction” is far too advanced for beginning Haskellers and the others tend to end too early, or not cover everything. Haskell is full of pitfalls for new programmers and experienced non-functional programmers alike, as can be witnessed by reading through the archives of the Haskell mailing list.

It became clear that there is a strong need for a tutorial which is introductory in the sense that it does not assume knowledge of functional programming, but which is advanced in the sense that it does assume some background in programming. Moreover, none of the known tutorials introduce input/output and interactivity soon enough (Paul Hudak’s book is an exception in that it does introduce IO by page 35, though the focus and aim of that book and this tutorial are very different). This tutorial is not for beginning programmers; some experience and knowledge of programming and computers is assumed (though the appendix does contain some background information).

The Haskell language underwent a standardization process and the result is called Haskell 98. The majority of this book will cover the Haskell 98 standard. Any deviations from the standard will be noted (for instance, many compilers offer certain extensions to the standard which are useful; some of these may be discussed).

The goals of this tutorial are:

• to provide a good sense of how Haskell can be used in the real world


Additional Online Sources of Information

A Short Introduction to Haskell:
http://haskell.org/aboutHaskell.html

Haskell Wiki:
http://haskell.org/hawiki/

Haskell-Tutorial:
ftp://ftp.geoinfo.tuwien.ac.at/navratil/HaskellTutorial.pdf

Tour of the Haskell Prelude:
http://www.cs.uu.nl/~afie/haskell/tourofprelude.html

Courses in Haskell:
http://haskell.org/classes/

Acknowledgements

It would be inappropriate not to give credit also to the original designers of Haskell. Those are: Arvind, Lennart Augustsson, Dave Barton, Brian Boutel, Warren Burton, Jon Fairbairn, Joseph Fasel, Andy Gordon, Maria Guzman, Kevin Hammond, Ralf Hinze, Paul Hudak, John Hughes, Thomas Johnsson, Mark Jones, Dick Kieburtz, John Launchbury, Erik Meijer, Rishiyur Nikhil, John Peterson, Simon Peyton Jones, Mike Reeve, Alastair Reid, Colin Runciman, Philip Wadler, David Wise, Jonathan Young.

Finally, I would like to specifically thank Simon Peyton Jones, Simon Marlow, John Hughes, Alastair Reid, Koen Claessen, Manuel Chakravarty, Sigbjorn Finne and Sven Panne, all of whom have made my life learning Haskell all the more enjoyable by always being supportive. There were doubtless others who helped and are not listed, but these are those who come to mind.

Also thanks to the many people who have reported “bugs” in the first edition.

- Hal Daumé III


Contents

1 Introduction 3

2 Getting Started 5

2.1 Hugs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.1 Where to get it . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.2 Installation procedures . . . . . . . . . . . . . . . . . . . . . 6

2.1.3 How to run it . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.4 Program options . . . . . . . . . . . . . . . . . . . . . . . . 7

2.1.5 How to get help . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.2 Glasgow Haskell Compiler . . . . . . . . . . . . . . . . . . . . . . . 7

2.2.1 Where to get it . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.2.2 Installation procedures . . . . . . . . . . . . . . . . . . . . . 8

2.2.3 How to run the compiler . . . . . . . . . . . . . . . . . . . . 8

2.2.4 How to run the interpreter . . . . . . . . . . . . . . . . . . . 8

2.2.5 Program options . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2.6 How to get help . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.3 NHC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.3.1 Where to get it . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.3.2 Installation procedures . . . . . . . . . . . . . . . . . . . . . 9

2.3.3 How to run it . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.3.4 Program options . . . . . . . . . . . . . . . . . . . . . . . . 9

2.3.5 How to get help . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.4 Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3 Language Basics 11

3.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

3.2 Pairs, Triples and More . . . . . . . . . . . . . . . . . . . . . . . . . 14

3.3 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3.3.1 Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3.3.2 Simple List Functions . . . . . . . . . . . . . . . . . . . . . 18

3.4 Source Code Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.5 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.5.1 Let Bindings . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.5.2 Infix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3.6 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.7 Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.8 Interactivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4 Type Basics 37

4.1 Simple Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4.2 Polymorphic Types . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4.3 Type Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.3.2 Equality Testing . . . . . . . . . . . . . . . . . . . . . . . . 41

4.3.3 The Num Class . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.3.4 The Show Class . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.4 Function Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

4.4.1 Lambda Calculus . . . . . . . . . . . . . . . . . . . . . . . . 42

4.4.2 Higher-Order Types . . . . . . . . . . . . . . . . . . . . . . 42

4.4.3 That Pesky IO Type . . . . . . . . . . . . . . . . . . . . . . . 44

4.4.4 Explicit Type Declarations . . . . . . . . . . . . . . . . . . . 45

4.4.5 Functional Arguments . . . . . . . . . . . . . . . . . . . . . 46

4.5 Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

4.5.1 Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

4.5.2 Multiple Constructors . . . . . . . . . . . . . . . . . . . . . 49

4.5.3 Recursive Datatypes . . . . . . . . . . . . . . . . . . . . . . 51

4.5.4 Binary Trees . . . . . . . . . . . . . . . . . . . . . . . . . . 51

4.5.5 Enumerated Sets . . . . . . . . . . . . . . . . . . . . . . . . 52

4.5.6 The Unit type . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4.6 Continuation Passing Style . . . . . . . . . . . . . . . . . . . . . . . 53

5 Basic Input/Output 57

5.1 The RealWorld Solution . . . . . . . . . . . . . . . . . . . . . . . . 57

5.2 Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

5.3 The IO Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

5.4 A File Reading Program . . . . . . . . . . . . . . . . . . . . . . . . 64

6 Modules 67

6.1 Exports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

6.2 Imports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

6.3 Hierarchical Imports . . . . . . . . . . . . . . . . . . . . . . . . . . 70

6.4 Literate Versus Non-Literate . . . . . . . . . . . . . . . . . . . . . . 71

6.4.1 Bird-scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

6.4.2 LaTeX-scripts . . . . . . . . . . . . . . . . . . . . . . . . . . 72

7 Advanced Features 73

7.1 Sections and Infix Operators . . . . . . . . . . . . . . . . . . . . . . 73

7.2 Local Declarations . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

7.3 Partial Application . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

7.4 Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80


7.5 Guards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

7.6 Instance Declarations . . . . . . . . . . . . . . . . . . . . . . . . . . 84

7.6.1 The Eq Class . . . . . . . . . . . . . . . . . . . . . . . . . . 84

7.6.2 The Show Class . . . . . . . . . . . . . . . . . . . . . . . . 86

7.6.3 Other Important Classes . . . . . . . . . . . . . . . . . . . . 86

7.6.4 Class Contexts . . . . . . . . . . . . . . . . . . . . . . . . . 89

7.6.5 Deriving Classes . . . . . . . . . . . . . . . . . . . . . . . . 89

7.7 Datatypes Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

7.7.1 Named Fields . . . . . . . . . . . . . . . . . . . . . . . . . . 90

7.8 More Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

7.8.1 Standard List Functions . . . . . . . . . . . . . . . . . . . . 92

7.8.2 List Comprehensions . . . . . . . . . . . . . . . . . . . . . . 94

7.9 Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

7.10 Finite Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

7.11 Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

7.12 The Final Word on Lists . . . . . . . . . . . . . . . . . . . . . . . . 99

8 Advanced Types 103

8.1 Type Synonyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

8.2 Newtypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

8.3 Datatypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.3.1 Strict Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.4 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

8.4.1 Pong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

8.4.2 Computations . . . . . . . . . . . . . . . . . . . . . . . . . . 109

8.5 Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

8.6 Kinds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

8.7 Class Hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

8.8 Default . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

9 Monads 119

9.1 Do Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

9.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

9.3 A Simple State Monad . . . . . . . . . . . . . . . . . . . . . . . . . 124

9.4 Common Monads . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

9.5 Monadic Combinators . . . . . . . . . . . . . . . . . . . . . . . . . 134

9.6 MonadPlus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

9.7 Monad Transformers . . . . . . . . . . . . . . . . . . . . . . . . . . 139

9.8 Parsing Monads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

9.8.1 A Simple Parsing Monad . . . . . . . . . . . . . . . . . . . . 144

9.8.2 Parsec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

10 Advanced Techniques 157

10.1 Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

10.2 Mutable Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

10.3 Mutable References . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

10.4 The ST Monad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

10.5 Concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

10.6 Regular Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . 157

10.7 Dynamic Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157


Chapter 1

Introduction

This tutorial contains a whole host of example code, all of which should have been included in its distribution. If not, please refer to the links off of the Haskell web site (haskell.org) to get it. This book is formatted to make example code stand out from the rest of the text.

Occasionally, we will refer to interaction between you and the operating system and/or the interactive shell (more on this in Section 2).

Strewn throughout the tutorial, we will often make additional notes to something written. These are often for making comparisons to other programming languages or adding helpful information.

If we’re covering a difficult or confusing topic and there is something you should watch out for, we will place a warning.

functions). This will look something like this:

Within the body text, Haskell keywords will appear like this: where, identifiers as map, types as String and classes as Eq.


Chapter 2

Getting Started

There are three well-known Haskell systems: Hugs, GHC and NHC. Hugs is exclusively an interpreter, meaning that you cannot compile stand-alone programs with it, but you can test and debug programs in an interactive environment. GHC is both an interpreter (like Hugs) and a compiler which will produce stand-alone programs. NHC is exclusively a compiler. Which you use is entirely up to you. I’ve tried to make a list of some of the differences in the following list, but of course this is far from exhaustive:

Hugs - very fast to load files; slow to run them; implements almost all of Haskell 98 (the standard) and most extensions; built-in support for module browsing; cannot create stand-alones; written in C; works on almost every platform; built-in graphics library.

GHC - interactive environment is slower than Hugs to load, but allows function definitions in the environment (in Hugs you have to put them in a file); implements all of Haskell 98 and extensions; good support for interfacing with other languages; in a sense the “de facto” standard.

NHC - less used and no interactive environment, but produces smaller and often faster executables than does GHC; supports Haskell 98 and some extensions.

I, personally, have all of them installed and use them for different purposes. I tend to use GHC to compile (primarily because I’m most familiar with it) and the Hugs interactive environment, since it is much faster. As such, this is what I would suggest. However, that is a fair amount to download and install, so if you had to go with just one, I’d get GHC, since it contains both a compiler and an interactive environment.

Following is a description of how to download and install each of these as of the time this tutorial was written. It may have changed – see http://haskell.org (the Haskell website) for up-to-date information.


2.1 Hugs

Hugs supports almost all of the Haskell 98 standard (it lacks some of the libraries), as well as a number of advanced/experimental extensions, including: multi-parameter type classes, extensible records, rank-2 polymorphism, existentials, scoped type variables, and restricted type synonyms.

2.1.1 Where to get it

The official Hugs web page is at:

http://haskell.org/hugs

If you go there, there is a link titled “downloading” which will send you to the download page. From that page, you can download the appropriate version of Hugs for your computer.

2.1.2 Installation procedures

Once you’ve downloaded Hugs, installation differs depending on your platform; however, installation for Hugs is more or less identical to installation for any program on your platform.

For Windows when you click on the “msi” file to download, simply choose “Run This Program” and the installation will begin automatically. From there, just follow the on-screen instructions.

For RPMs use whatever RPM installation program you know best.

For source first gunzip the file, then untar it. Presumably if you’re using a system which isn’t otherwise supported, you know enough about your system to be able to run configure scripts and make things by hand.

2.1.3 How to run it

On Unix machines, the Hugs interpreter is usually started with a command line of the form:

hugs [option | file] ...

On Windows, Hugs may be started by selecting it from the start menu or by double clicking on a file with the .hs or .lhs extension. (This manual assumes that Hugs has already been successfully installed on your system.)

2.1.4 Program options

Hugs uses options to set system parameters. These options are distinguished by a leading + or - and are used to customize the behaviour of the interpreter. When Hugs starts, the interpreter performs the following tasks:

• The HUGSFLAGS environment variable is read, and the interpreter processes these options. On Windows 95/NT, the registry is also queried for Hugs option settings.


• Internal data structures are initialized. In particular, the heap is initialized, and its size is fixed at this point; if you want to run the interpreter with a heap size other than the default, then this must be specified using options on the command line, in the environment or in the registry.

• The prelude file is loaded. The interpreter will look for the prelude file on the path specified by the -P option. If the prelude, located in the file Prelude.hs, cannot be found in one of the path directories or in the current directory, then Hugs will terminate; Hugs will not run without the prelude file.

• Program files specified on the command line are loaded. The effect of a command hugs f1 ... fn is the same as starting up Hugs with the hugs command and then typing :load f1 ... fn. In particular, the interpreter will not terminate if a problem occurs while it is trying to load one of the specified files, but it will abort the attempted load command.

The environment variables and command line options used by Hugs are described in the following sections.

To list all of the options would take too much space. The most important option at this point is “+98” or “-98”. When you start Hugs with “+98” it is in Haskell 98 mode, which turns off all extensions. When you start in “-98”, you are in Hugs mode and all extensions are turned on. If you’ve downloaded someone else’s code and you’re having trouble loading it, first make sure you have the “98” flag set properly.

Further information on the Hugs options is in the manual:

http://cvs.haskell.org/Hugs/pages/hugsman/started.html

2.1.5 How to get help

To get Hugs-specific help, go to the Hugs web page. To get general Haskell help, go to the Haskell web page.

2.2 Glasgow Haskell Compiler

The Glasgow Haskell Compiler (GHC) is a robust, fully-featured, optimising compiler and interactive environment for Haskell 98; GHC compiles Haskell to either native code or C. It implements numerous experimental language extensions to Haskell 98; for example: concurrency, a foreign language interface, multi-parameter type classes, scoped type variables, existential and universal quantification, unboxed types, exceptions, weak pointers, and so on. GHC comes with a generational garbage collector, and a space and time profiler.


2.2.1 Where to get it

Go to the official GHC web page http://haskell.org/ghc (GHC) to download the latest release. The current version as of the writing of this tutorial is 5.04.2 and can be downloaded off of the GHC download page (follow the “Download” link). From that page, you can download the appropriate version of GHC for your computer.

2.2.2 Installation procedures

Once you’ve downloaded GHC, installation differs depending on your platform; however, installation for GHC is more or less identical to installation for any program on your platform.

For Windows when you click on the “msi” file to download, simply choose “Run This Program” and the installation will begin automatically. From there, just follow the on-screen instructions.

For RPMs use whatever RPM installation program you know best.

For source first gunzip the file, then untar it. Presumably if you’re using a system which isn’t otherwise supported, you know enough about your system to be able to run configure scripts and make things by hand.

For a more detailed description of the installation procedure, look at the GHC users manual under “Installing GHC”.

2.2.3 How to run the compiler

Running the compiler is fairly easy. Assuming that you have a program written with a main function in a file called Main.hs, you can compile it simply by writing:

% ghc --make Main.hs -o main

The “--make” option tells GHC that this is a program and not just a library and that you want to build it and all the modules it depends on. “Main.hs” stipulates the name of the file to compile; and the “-o main” means that you want to put the output in a file called “main”.

You can then run the program by simply typing “main” at the prompt.
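
For instance, a minimal Main.hs to try this compile-and-run cycle might look like the following (the greeting text is just a placeholder of ours):

```haskell
-- Main.hs: the smallest program the ghc --make command above can build.
module Main where

greeting :: String
greeting = "Hello, Haskell!"

-- main is the entry point GHC looks for when building an executable.
main :: IO ()
main = putStrLn greeting
```

Compiling it as shown and running the resulting “main” should print the greeting.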

2.2.4 How to run the interpreter

GHCi is invoked with the command “ghci” or “ghc --interactive”. One or more modules or filenames can also be specified on the command line; this instructs GHCi to load the specified modules or filenames (and all the modules they depend on), just as if you had said :load modules at the GHCi prompt.


2.2.5 Program options

To list all of the options would take too much space. The most important option at this point is “-fglasgow-exts”. When you start GHCi without “-fglasgow-exts” it is in Haskell 98 mode, which turns off all extensions. When you start with “-fglasgow-exts”, all extensions are turned on. If you’ve downloaded someone else’s code and you’re having trouble loading it, first make sure you have this flag set properly.

Further information on the GHC and GHCi options is in the manual off of the GHC web page.

2.2.6 How to get help

To get GHC(i)-specific help, go to the GHC web page. To get general Haskell help, go to the Haskell web page.

2.3 NHC

About NHC. . .

2.3.1 Where to get it

2.3.2 Installation procedures

2.3.3 How to run it

2.3.4 Program options

2.3.5 How to get help

2.4 Editors

With a good text editor, programming is fun. Of course, you can get along with a simplistic editor capable of just cut-n-paste, but a good editor is capable of doing most of the chores for you, letting you concentrate on what you are writing. With respect to programming in Haskell, a good text editor should have as much as possible of the following features:

• Code completion


Haskell via haskell-mode and accompanying Elisp code (available from http://www.haskell.org/haskel), and . . . .

What else is available? . . .

(X)Emacs seems to do the best job, having all the features listed above. Indentation is aware of Haskell’s 2-dimensional layout rules (see Section 7.11); it is very smart and has to be seen in action to be believed. You can quickly jump to the definition of a chosen function with the help of the “Definitions” menu, and the name of the currently edited function is always displayed in the modeline.

Chapter 3

Language Basics

In this chapter we present the basic concepts of Haskell. In addition to familiarizing you with the interactive environments and showing you how to compile a basic program, we introduce the basic syntax of Haskell, which will probably be quite alien if you are used to languages like C and Java.

However, before we talk about specifics of the language, we need to establish some general properties of Haskell. Most importantly, Haskell is a lazy language, which means that no computation takes place unless it is forced to take place when the result of that computation is used.

This means, for instance, that you can define infinitely large data structures, pro-

vided that you never use the entire structure. For instance, using imperative-esque

pseudo-code, we could create an infinite list containing the number 1 in each position

by doing something like:

List makeList()

{

List current = new List();

current.value = 1;

current.next = makeList();

return current;

}

By looking at this code, we can see what it’s trying to do: it creates a new list, sets

its value to 1 and then recursively calls itself to make the rest of the list. Of course, if

you actually wrote this code and called it, the program would never terminate, because

makeList would keep calling itself ad infinitum.

This is because we assume this imperative-esque language is strict, the opposite of lazy. Strict languages are often referred to as “call by value,” while lazy languages are

referred to as “call by name.” In the above pseudo-code, when we “run” makeList

on the fifth line, we attempt to get a value out of it. This leads to an infinite loop.

The equivalent code in Haskell is:



makeList = 1 : makeList

This program reads: we’re defining something called makeList (this is what goes

on the left-hand side of the equals sign). On the right-hand side, we give the definition

of makeList. In Haskell, the colon operator is used to create lists (we’ll talk more

about this soon). This right-hand side says that the value of makeList is the element

1 stuck on to the beginning of the value of makeList.

However, since Haskell is lazy (or “call by name”), we do not actually attempt to

evaluate what makeList is at this point: we simply remember that if ever in the future

we need the second element of makeList, we need to just look at makeList.

Now, if you attempt to write makeList to a file, print it to the screen, or calculate

the sum of its elements, the operation won’t terminate because it would have to evaluate

an infinitely long list. However, if you simply use a finite portion of the list (say the

first 10 elements), the fact that the list is infinitely long doesn’t matter. If you only use

the first 10 elements, only the first 10 elements are ever calculated. This is laziness.
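To see laziness at work, we can load the Haskell definition of makeList (given above) and demand only a finite prefix of it. This is a sketch you can run in GHCi or as a compiled program; take is a Prelude function, introduced properly later, that extracts the first n elements of a list:

```haskell
-- makeList is the infinite list of ones from above; because Haskell
-- is lazy, only as much of it as we demand is ever constructed.
makeList :: [Integer]
makeList = 1 : makeList

main :: IO ()
main = print (take 10 makeList)  -- only the first 10 elements are computed
```

Asking for the whole of makeList (for instance, print makeList) would still run forever, because printing demands every element.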

Second, Haskell is case-sensitive. Many languages are, but Haskell actually uses case to give meaning. Haskell distinguishes between values (for instance, numbers: 1, 2, 3, . . . ; strings: “abc”, “hello”, . . . ; characters: ‘a’, ‘b’, ‘ ’, . . . ; even functions: for instance, the function that squares a value, or the square-root function) and types (the categories to which values belong).

By itself, this is not unusual. Most languages have some system of types. What is

unusual is that Haskell requires that the names given to functions and values begin with

a lower-case letter and that the names given to types begin with an upper-case letter.

The moral is: if your otherwise correct program won’t compile, be sure you haven’t

named your function Foo, or something else beginning with a capital letter.

Being a functional language, Haskell eschews side effects. A side effect is essentially something that happens in the course of executing a function that is not related to the output produced by that function.

For instance, in a language like C or Java, you are able to modify “global” variables

from within a function. This is a side effect because the modification of this global

variable is not related to the output produced by the function. Furthermore, modifying

the state of the real world is considered a side effect: printing something to the screen,

reading a file, etc., are all side effecting operations.

Functions that do not have side effects are called pure. An easy test for whether or

not a function is pure is to ask yourself a simple question: “Given the same arguments,

will this function always produce the same result?”.

All of this means that if you’re used to writing code in an imperative language (like

C or Java), you’re going to have to start thinking differently. Most importantly, if you

have a value x, you must not think of x as a register, a memory location or anything

else of that nature. x is simply a name, just as “Hal” is my name. You cannot arbitrarily

decide to store a different person in my name any more than you can arbitrarily decide

to store a different value in x. This means that code that might look like the following

C code is invalid (and has no counterpart) in Haskell:

int x = 5;


x = x + 1;

An assignment like x = x + 1 is called a destructive update, because we are destroying whatever was in x before and replacing it with a new value. Destructive update does not exist in Haskell.

By not allowing destructive updates (or any other such side effecting operations),

Haskell code is very easy to comprehend. That is, when we define a function f, and

call that function with a particular argument a in the beginning of a program, and then,

at the end of the program, again call f with the same argument a, we know we will

get out the same result. This is because we know that a cannot have changed and

because we know that f only depends on a (for instance, it didn’t increment a global

counter). This property is called referential transparency and basically states that if

two functions f and g produce the same values for the same arguments, then we may

replace f with g (and vice-versa).

NOTE There are many different definitions of referential transparency. The definition given above is the one I like best. They all carry the same interpretation; the differences lie in how they are formalized.
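As a small illustration (the names double1 and double2 are our own, not from the Prelude): the two functions below agree on every argument, so referential transparency lets us replace one with the other anywhere without changing the program’s behavior.

```haskell
-- Two functions that produce the same value for every argument
-- are interchangeable in a referentially transparent language.
double1 :: Int -> Int
double1 x = x + x

double2 :: Int -> Int
double2 x = 2 * x

main :: IO ()
main = print (map double1 [1,2,3] == map double2 [1,2,3])  -- prints True
```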

3.1 Arithmetic

Let’s begin our foray into Haskell with simple arithmetic. Start up your favorite inter-

active shell (Hugs or GHCi; see Chapter 2 for installation instructions). The shell will

output to the screen a few lines talking about itself and what it’s doing and then should

finish with the cursor on a line reading:

Prelude>

From here, you can begin to evaluate expressions. An expression is basically something that has a value. For instance, the number 5 is an expression (its value is 5). Values can be built up from other values; for instance, 5 + 6 is an expression (its value is 11). In fact, most simple arithmetic operations are supported by Haskell, including plus

(+), minus (-), times (*), divided-by (/), exponentiation (ˆ) and square-root (sqrt).

You can experiment with these by asking the interactive shell to evaluate expressions

and to give you their value. In this way, a Haskell shell can be used as a powerful

calculator. Try some of the following:

Prelude> 5*4+3

23

Prelude> 5ˆ5-2

3123

Prelude> sqrt 2

1.4142135623730951


Prelude> 5*(4+3)

35

We can see that, in addition to the standard arithmetic operations, Haskell also

allows grouping by parentheses, hence the difference between the values of 5*4+3 and

5*(4+3). The reason for this is that the “understood” grouping of the first expression is

(5*4)+3, due to operator precedence.

Also note that parentheses aren’t required around function arguments. For instance,

we simply wrote sqrt 2, not sqrt(2), as would be required in most other lan-

guages. You could write it with the parentheses, but in Haskell, since function applica-

tion is so common, parentheses aren’t required.

NOTE Even though parentheses are not always needed, sometimes it is better to leave them in anyway; other people will probably have to read your code, and if extra parentheses make the intent of the code clearer, use them.

NOTE You may find it odd that sqrt 2 comes back with a decimal point (i.e., is a floating-point number) even though the argument to the function seems to be an integer. This interchangeability of numeric types is due to Haskell’s system of type classes and will be discussed in detail in Section 4.3.

Exercises

Exercise 3.1 We’ve seen that multiplication binds more tightly than addition. Can you

think of a way to determine whether function application binds more or less tightly

than multiplication?

3.2 Pairs, Triples and More

In addition to single values, we should also address multiple values. For instance, we

may want to refer to a position by its x/y coordinate, which would be a pair of integers.

To make a pair of integers is simple: you enclose the pair in parentheses and separate

them with a comma. Try the following:

Prelude> (5,3)

(5,3)


Here, we have a pair of integers, 5 and 3. In Haskell, the first element of a pair

need not have the same type as the second element: that is, pairs are allowed to be

heterogeneous. For instance, you can have a pair of an integer with a string. This

contrasts with lists, which must be made up of elements of all the same type (we will

discuss lists further in Section 3.3).

There are two predefined functions that allow you to extract the first and second

elements of a pair. They are, respectively, fst and snd. You can see how they work

below:

Prelude> fst (5, "hello")
5

Prelude> snd (5, "hello")

"hello"

In addition to pairs, you can define triples, quadruples etc. To define a triple and a

quadruple, respectively, we write:

Prelude> (1,2,3)

(1,2,3)

Prelude> (1,2,3,4)

(1,2,3,4)

And so on. In general, pairs, triples, and so on are called tuples and can store fixed

amounts of heterogeneous data.

NOTE The functions fst and snd won’t work on anything longer

than a pair; if you try to use them on a larger tuple, you will get a message

stating that there was a type error. The meaning of this error message will

be explained in Chapter 4.

Exercises

Exercise 3.2 Use a combination of fst and snd to extract the character out of the tuple

((1,’a’),"foo").

3.3 Lists

The primary limitation of tuples is that they hold only a fixed number of elements:

pairs hold two, triples hold three, and so on. A data structure that can hold an arbitrary

number of elements is a list. Lists are assembled in a very similar fashion to tuples,

except that they use square brackets instead of parentheses. We can define a list like:


Prelude> [1,2]

[1,2]

Prelude> [1,2,3]

[1,2,3]

Lists don’t need to have any elements. The empty list is simply [].

Unlike tuples, we can very easily add an element on to the beginning of the list

using the colon operator. The colon is called the “cons” operator; the process of adding

an element is called “consing.” The etymology of this is that we are constructing a

new list from an element and an old list. We can see the cons operator in action in the

following examples:

Prelude> 0:[1,2]

[0,1,2]

Prelude> 5:[1,2,3,4]

[5,1,2,3,4]

We can actually build any list by using the cons operator (the colon) and the empty

list:

Prelude> 5:1:2:3:4:[]

[5,1,2,3,4]

In fact, the [5,1,2,3,4] syntax is “syntactic sugar” for the expression using the

explicit cons operators and empty list. If we write something using the [5,1,2,3,4]

notation, the compiler simply translates it to the expression using (:) and [].

NOTE “Syntactic sugar” is a strictly unnecessary language feature, which is added to make the syntax nicer.

One further difference between lists and tuples is that, while tuples are heterogeneous, lists must be homogeneous. This means that you cannot have a list that holds

both integers and strings. If you try to, a type error will be reported.

Of course, lists don’t have to just contain integers or strings; they can also contain

tuples or even other lists. Tuples, similarly, can contain lists and other tuples. Try some

of the following:

Prelude> [(1,1),(2,4),(3,9),(4,16)]

[(1,1),(2,4),(3,9),(4,16)]

Prelude> ([1,2,3,4],[5,6,7])

([1,2,3,4],[5,6,7])


There are two basic list functions: head and tail. The head function returns

the first element of a (non-empty) list, and the tail function returns all but the first

element of a (non-empty) list.

To get the length of a list, you use the length function:

Prelude> length [1,2,3,4,10]
5

Prelude> head [1,2,3,4,10]

1

Prelude> length (tail [1,2,3,4,10])

4

3.3.1 Strings

In Haskell, a String is simply a list of Chars. So, we can create the string “Hello” as:

Prelude> ’H’:’e’:’l’:’l’:’o’:[]

"Hello"

Prelude> "Hello" ++ " World"
"Hello World"

Additionally, non-string values can be converted to strings using the show func-

tion, and strings can be converted to non-string values using the read function. Of

course, if you try to read a value that’s malformed, an error will be reported (note that

this is a run-time error, not a compile-time error):

Prelude> "Five squared is " ++ show (5*5)
"Five squared is 25"

Prelude> read "5" + 3

8

Prelude> read "Hello" + 3

Program error: Prelude.read: no parse

Above, the exact error message is implementation dependent. However, the inter-

preter has inferred that you’re trying to add three to something. This means that when

we execute read "Hello", we expect to be returned a number. However, "Hello"

cannot be parsed as a number, so an error is reported.
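One way to avoid such surprises is to state the intended result type explicitly with a type annotation (type annotations are covered later, in Chapter 4; this is just a sketch):

```haskell
-- show turns values into Strings; read goes the other way.
-- The :: Int annotation tells read what type it should produce.
main :: IO ()
main = do
  putStrLn (show (5 * 5))        -- prints 25
  print (read "5" + 3 :: Int)    -- prints 8
```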


3.3.2 Simple List Functions

Much of the computation in Haskell programs is done by processing lists. There are

three primary list-processing functions: map, filter and foldr (also foldl).

The map function takes as arguments a list of values and a function that should be

applied to each of the values. For instance, there is a built-in function Char.toUpper

that takes as input a Char and produces a Char that is the upper-case version of the

original argument. So, to convert an entire string (which is simply a list of characters)

to upper case, we can map the toUpper function across the entire list:

Prelude> map Char.toUpper "Hello World"
"HELLO WORLD"

NOTE In GHCi, refer to this function as Char.toUpper. In Hugs, simply use toUpper.

When you map across a list, the length of the list never changes – only the individ-

ual values in the list change.

To remove elements from the list, you can use the filter function. This function

allows you to remove certain elements from a list depending on their value, but not on

their context. For instance, the function Char.isLower tells you whether a given

character is lower case. We can filter out all non-lowercase characters using this:

Prelude> filter Char.isLower "Hello World"
"elloorld"
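map and filter work on any list, not just strings. A quick numeric sketch, using the Prelude predicate even and the operator section (*2) (sections are covered later):

```haskell
-- map transforms every element; filter keeps only the elements
-- that satisfy the given predicate.
main :: IO ()
main = do
  print (map (*2) [1,2,3,4])       -- prints [2,4,6,8]
  print (filter even [1,2,3,4,5])  -- prints [2,4]
```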

The function foldr takes a little more getting used to. foldr takes three argu-

ments: a function, an initial value and a list. The best way to think about foldr is

that it replaces occurences of the list cons operator (:) with the function parameter and

replaces the empty list constructor ([]) with the initial value. Thus, if we have a list:

3 : 8 : 12 : 5 : []

and apply foldr (+) 0 to it, the result is computed as if we had written:

3 + 8 + 12 + 5 + 0

Prelude> foldr (+) 0 [3,8,12,5]

28

We can perform the same sort of operation to calculate the product of all the ele-

ments on a list:

Prelude> foldr (*) 1 [4,8,5]

160


We said earlier that folding is like replacing (:) with a particular function and ([])

with an initial element. This raises a question as to what happens when the function

isn’t associative (a function (·) is associative if a · (b · c) = (a · b) · c). When we write

4 · 8 · 5 · 1, we need to specify where to put the parentheses. Namely, do we mean

((4 · 8) · 5) · 1 or 4 · (8 · (5 · 1))? foldr assumes the function is right-associative (i.e.,

the correct bracketing is the latter). Thus, when we use it on a non-associative function

(like minus), we can see the effect:

Prelude> foldr (-) 1 [4,8,5]
0

This is computed as:

    foldr (-) 1 [4,8,5]
==> 4 - (foldr (-) 1 [8,5])

==> 4 - (8 - foldr (-) 1 [5])

==> 4 - (8 - (5 - foldr (-) 1 []))

==> 4 - (8 - (5 - 1))

==> 4 - (8 - 4)

==> 4 - 4

==> 0

The foldl function goes the other way and effectively produces the opposite

bracketing. foldl looks the same when applied, so we could have done summing

just as well with foldl:

Prelude> foldl (+) 0 [3,8,12,5]
28

However, we get different results when using the non-associative function minus:

Prelude> foldl (-) 1 [4,8,5]
-16

This is because foldl uses the opposite bracketing. The way it accomplishes this

is essentially by going all the way down the list, taking the last element and combining

it with the initial value via the provided function. It then takes the second-to-last ele-

ment in the list and combines it to this new value. It does so until there is no more list

left.

The derivation here proceeds in the opposite fashion:

    foldl (-) 1 [4,8,5]
==> foldl (-) (1 - 4) [8,5]

==> foldl (-) ((1 - 4) - 8) [5]

==> foldl (-) (((1 - 4) - 8) - 5) []


==> ((1 - 4) - 8) - 5

==> ((-3) - 8) - 5

==> (-11) - 5

==> -16

Note that once the foldl goes away, the parenthesization is exactly the opposite

of the foldr.

NOTE foldl is often more efficient than foldr for reasons that

we will discuss in Section 7.8. However, foldr can work on infinite

lists, while foldl cannot. This is because before foldl does any-

thing, it has to go to the end of the list. On the other hand, foldr

starts producing output immediately. For instance, foldr (:) []

[1,2,3,4,5] simply returns the same list. Even if the list were in-

finite, it would produce output. A similar function using foldl would

fail to produce any output.
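We can check this claim directly. In the sketch below, [1..] denotes the infinite list 1, 2, 3, . . . (a notation introduced later), and take extracts a finite prefix:

```haskell
-- foldr (:) [] rebuilds a list unchanged; because foldr produces
-- output lazily, this works even when the input list is infinite.
main :: IO ()
main = do
  print (foldr (:) [] [1,2,3,4,5])     -- prints [1,2,3,4,5]
  print (take 5 (foldr (:) [] [1..]))  -- prints [1,2,3,4,5]
```

Replacing foldr with foldl in the second line would loop forever, since foldl must reach the end of the list before producing anything.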

If this discussion of the folding functions is still somewhat unclear, that’s okay.

We’ll discuss them further in Section 7.8.

Exercises

Exercise 3.3 Use map to convert a string into a list of booleans, each element in the

new list representing whether or not the original element was a lower-case character.

That is, it should take the string “aBCde” and return [True,False,False,True,True].

Exercise 3.4 Use the functions mentioned in this section (you will need two of them)

to compute the number of lower-case letters in a string. For instance, on “aBCde” it

should return 3.

Exercise 3.5 We’ve seen how to calculate sums and products using folding functions.

Given that the function max returns the maximum of two numbers, write a function

using a fold that will return the maximum value in a list (and zero if the list is empty).

So, when applied to [5,10,2,8,1] it will return 10. Assume that the values in the list are

always ≥ 0. Explain to yourself why it works.

Exercise 3.6 Write a function that takes a list of pairs of length at least 2 and re-

turns the first component of the second element in the list. So, when provided with

[(5,’b’),(1,’c’),(6,’a’)], it will return 1.

3.4 Source Code Files

As programmers, we don’t want to simply evaluate small expressions like these – we

want to sit down, write code in our editor of choice, save it and then use it.

We already saw in Sections 2.2 and 2.3 how to write a Hello World program and

how to compile it. Here, we show how to use functions defined in a source-code file

in the interactive environment. To do this, create a file called Test.hs and enter the

following code:


module Test

where

x = 5

y = (6, "Hello")

z = x * fst y

This file defines a module called “Test” (in general module names should match file names; see Section 6 for more on

this). In this module, there are three definitions: x, y and z. Once you’ve written

and saved this file, in the directory in which you saved it, load this in your favorite

interpreter, by executing either of the following:

% hugs Test.hs

% ghci Test.hs

This will start Hugs or GHCi, respectively, and load the file. Alternatively, if you

already have one of them loaded, you can use the “:load” command (or just “:l”) to

load a module, as:

Prelude> :l Test.hs

...

Test>

Between the first and last line, the interpreter will print various data to explain what

it is doing. If any errors appear, you probably mistyped something in the file; double

check and then try again.

You’ll notice that where it used to say “Prelude” it now says “Test.” That means

that Test is the current module. You’re probably thinking that “Prelude” must also be

a module. Exactly correct. The Prelude module (usually simply referred to as “the

Prelude”) is always loaded and contains the standard definitions (for instance, the (:)

operator for lists, or (+) or (*), fst, snd and so on).

Now that we’ve loaded Test, we can use things that were defined in it. For example:

Test> x

5

Test> y

(6,"Hello")

Test> z

30

One final issue regarding how to compile programs to stand-alone executables re-

mains. In order for a program to be an executable, it must have the module name


“Main” and must contain a function called main. So, if you go in to Test.hs and rename it to “Main” (change the line that reads module Test to module Main), we simply need to add a main function. Try this:

main = putStrLn "Hello World"

Now, save the file, and compile it (refer back to Section 2 for information on how to do this for your compiler). For example, in GHC, you would say:

% ghc --make Test.hs -o test

This will create a file called “test” (or on Windows, “test.exe”) that you can then

run.

% ./test

Hello World

C:\> test.exe

Hello World

3.5 Functions

Now that we’ve seen how to write code in a file, we can start writing functions. As you

might have expected, functions are central to Haskell, as it is a functional language.

This means that the evaluation of a program is simply the evaluation of a function.

We can write a simple function to square a number and enter it into our Test.hs

file. We might define this as follows:

square x = x * x

In this function definition, we say that we’re defining a function square that takes

one argument (aka parameter), which we call x. We then say that the value of square

x is equal to x * x.

Haskell also supports standard conditional expressions. For instance, we could

define a function that returns −1 if its argument is less than 0; 0 if its argument is 0;

and 1 if its argument is greater than 0 (this is called the signum function):


signum x =

if x < 0

then -1

else if x > 0

then 1

else 0

Test> signum 5

1

Test> signum 0

0

Test> signum (5-10)

-1

Test> signum (-1)

-1

Note that the parentheses around “-1” in the last example are required; if they are missing, the system will think you are trying to subtract the value “1” from the value “signum,” which is ill-typed.

The if/then/else construct in Haskell is very similar to that of most other program-

ming languages; however, you must have both a then and an else clause. It evaluates the condition (in this case x < 0) and, if this evaluates to True, it evaluates the then branch; if the condition evaluates to False, it evaluates the else branch.

You can test this program by editing the file and loading it back into your interpreter.

If Test is already the current module, instead of typing :l Test.hs again, you can

simply type :reload or just :r to reload the current file. This is usually much faster.

Haskell, like many other languages, also supports case constructions. These are used when there are multiple values that you want to check against (case expressions are actually quite a bit more powerful than this – see Section 7.4 for all of the details).

Suppose we wanted to define a function that had a value of 1 if its argument were

0; a value of 5 if its argument were 1; a value of 2 if its argument were 2; and a value of

−1 in all other instances. Writing this function using if statements would be long and

very unreadable; so we write it using a case statement as follows (we call this function

f):

f x =

case x of

0 -> 1

1 -> 5

2 -> 2

_ -> -1


In this program, we’re defining f to take an argument x and then inspect the value

of x. If it matches 0, the value of f is 1. If it matches 1, the value of f is 5. If it matches 2, then the value of f is 2; and if it hasn’t matched anything by that point, the value of f is −1 (the underscore can be thought of as a “wildcard” – it will match anything).

The indentation here is important. Haskell uses a system called “layout” to struc-

ture its code (the programming language Python uses a similar system). The layout

system allows you to write code without the explicit semicolons and braces that other

languages like C and Java require.

NOTE Because layout is significant, you need to be careful about whether you are using tabs or spaces. If you can configure your editor to never use tabs, that’s probably better. If not, make sure your tabs are always 8 spaces long, or you’re likely to run into problems.

The general rule for layout is that an open-brace is inserted after the keywords

where, let, do and of, and the column position at which the next command appears

is remembered. From then on, a semicolon is inserted before every new line that is

indented the same amount. If a following line is indented less, a close-brace is inserted.

This may sound complicated, but if you follow the general rule of indenting after each

of those keywords, you’ll never have to remember it (see Section 7.11 for a more

complete discussion of layout).

Some people prefer not to use layout and write the braces and semicolons explicitly.

This is perfectly acceptable. In this style, the above function might look like:

f x = case x of

{ 0 -> 1 ; 1 -> 5 ; 2 -> 2 ; _ -> -1 }

Of course, if you write the braces and semicolons explicitly, you’re free to structure

the code as you wish. The following is also equally valid:

f x =

case x of { 0 -> 1 ;

1 -> 5 ; 2 -> 2

; _ -> -1 }

However, structuring your code like this only serves to make it unreadable (in this

case).

Functions can also be defined piece-wise, meaning that you can write one version

of your function for certain parameters and then another version for other parameters.

For instance, the above function f could also be written as:

f 0 = 1

f 1 = 5

f 2 = 2

f _ = -1


Here, the order is important. If we had put the last line first, it would have matched

every argument, and f would return -1, regardless of its argument (most compilers

will warn you about this, though, saying something about overlapping patterns). If we

had not included this last line, f would produce an error if anything other than 0, 1 or

2 were applied to it (most compilers will warn you about this, too, saying something

about incomplete patterns). This style of piece-wise definition is very popular and will

be used quite frequently throughout this tutorial. These two definitions of f are actually

equivalent – this piece-wise version is translated into the case expression.

More complicated functions can be built from simpler functions using function

composition. Function composition is simply taking the result of the application of one

function and using that as an argument for another. We’ve already seen this way back

in arithmetic (Section 3.1), when we wrote 5*4+3. In this, we were evaluating 5 ∗ 4

and then applying +3 to the result. We can do the same thing with our square and f

functions:

Test> square (f 1)

25

Test> square (f 2)

4

Test> f (square 1)

5

Test> f (square 2)

-1

NOTE The parentheses around the inner function are necessary; otherwise, in the first line, the

interpreter would think that you were trying to get the value of “square f,” which

has no meaning. Function application like this is fairly standard in most programming

languages. There is another, more mathematically oriented, way to express function

composition, using the (.) (just a single period) function. This (.) function is supposed

to look like the (◦) operator in mathematics.

NOTE In mathematics we write f ◦ g to mean “f following g.” In Haskell we write f . g also to mean “f following g.”

The meaning of f ◦ g is simply that (f ◦ g)(x) = f (g(x)). That is,

applying the value x to the function f ◦ g is the same as applying it to g,

taking the result, and then applying that to f .

The (.) function (called the function composition function) takes two functions and makes them into one. For instance, if we write (square . f), this means that

it creates a new function that takes an argument, applies f to that argument and then

applies square to the result. Conversely, (f . square) means that it creates

a new function that takes an argument, applies square to that argument and then

applies f to the result. We can see this by testing it as before:

Test> (square . f) 1

25


Test> (square . f) 2

4

Test> (f . square) 1

5

Test> (f . square) 2

-1

NOTE The parentheses around (square . f) are necessary; without them, the Haskell compiler will think we’re trying to compose square with the value f 1 in the first line, which makes no sense since f 1 isn’t even a function.

It would probably be wise to take a little time-out to look at some of the functions

that are defined in the Prelude. Undoubtedly, at some point, you will accidentally

rewrite some already-existing function (I’ve done it more times than I can count), but

if we can keep this to a minimum, that would save a lot of time. Here are some simple

functions, some of which we’ve already seen:

sqrt the square root function

id the identity function: id x = x

fst extracts the first element from a pair

snd extracts the second element from a pair

null tells you whether or not a list is empty

head returns the first element on a non-empty list

tail returns everything but the first element of a non-empty list

++ concatenates two lists

== checks to see if two elements are equal

/= checks to see if two elements are unequal

Here, we show example usages of each of these functions:

Prelude> sqrt 2

1.41421

Prelude> id "hello"

"hello"

Prelude> id 5

5

Prelude> fst (5,2)

5

Prelude> snd (5,2)

2

Prelude> null []

True

Prelude> null [1,2,3,4]

False

Prelude> head [1,2,3,4]

1

Prelude> tail [1,2,3,4]

[2,3,4]

Prelude> [1,2,3] ++ [4,5,6]


[1,2,3,4,5,6]

Prelude> [1,2,3] == [1,2,3]

True

Prelude> ’a’ /= ’b’

True

Prelude> head []

We can see that applying head to an empty list gives an error (the exact error

message depends on whether you’re using GHCi or Hugs – the shown error message is

from Hugs).

3.5.1 Let Bindings

Often we wish to provide local declarations for use in our functions. For instance, if

you remember back to your grade school mathematics courses, the following equation is used to find the roots (zeros) of a polynomial of the form ax² + bx + c = 0:

x = (−b ± √(b² − 4ac)) / (2a)

We could write the following function to compute the two values of x:

roots a b c =

((-b + sqrt(b*b - 4*a*c)) / (2*a),

(-b - sqrt(b*b - 4*a*c)) / (2*a))

Notice that the expression sqrt(b*b - 4*a*c) appears twice. To remedy this redundancy, Haskell allows for local bindings. That is, we can create

values inside of a function that only that function can see. For instance, we could

create a local binding for sqrt(b*b-4*a*c) and call it, say, det and then use that

in both places where sqrt(b*b - 4*a*c) occurred. We can do this using a let/in

declaration:

roots a b c =

let det = sqrt (b*b - 4*a*c)

in ((-b + det) / (2*a),

(-b - det) / (2*a))

In fact, you can provide multiple declarations inside a let. Just make sure they’re

indented the same amount, or you will have layout problems:

roots a b c =

let det = sqrt (b*b - 4*a*c)

twice_a = 2*a

in ((-b + det) / twice_a,

(-b - det) / twice_a)
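Loading this final version of roots and trying it on x² − 3x + 2 = 0 (whose roots are 2 and 1) gives a quick sanity check:

```haskell
-- roots as defined above, with local bindings for the discriminant
-- and the denominator so each is computed only once.
roots :: Double -> Double -> Double -> (Double, Double)
roots a b c =
  let det     = sqrt (b*b - 4*a*c)
      twice_a = 2*a
  in ((-b + det) / twice_a,
      (-b - det) / twice_a)

main :: IO ()
main = print (roots 1 (-3) 2)  -- prints (2.0,1.0)
```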


3.5.2 Infix

Infix functions are ones that are composed of symbols, rather than letters. For instance,

(+), (*), (++) are all infix functions. You can use them in non-infix mode by

enclosing them in parentheses. Hence, the two following expressions are the same:

Prelude> 5 + 10

15

Prelude> (+) 5 10

15

Similarly, non-infix functions (like map) can be made infix by enclosing them in

backquotes (the ticks on the tilde key on American keyboards):

Prelude> map Char.toUpper "Hello World"
"HELLO WORLD"

Prelude> Char.toUpper ‘map‘ "Hello World"
"HELLO WORLD"

NOTE In GHCi, refer to this function as Char.toUpper. In Hugs, simply use toUpper.

3.6 Comments

There are two types of comments in Haskell: line comments and block comments.

Line comments begin with the token -- and extend until the end of the line. Block

comments begin with {- and extend to a corresponding -}. Block comments can be

nested.

NOTE If you are familiar with C or Java, -- corresponds to //, and {- and -} correspond to /* and */.

Comments are used to explain your program in English and are completely ignored

by compilers and interpreters. For example:

module Test2

where

main =

putStrLn "Hello World" -- write a string

-- to the screen

{- f is a function which takes an integer and
produces an integer. {- this is an embedded
comment -} the original comment extends to the
matching end-comment token: -}

f x =

case x of

0 -> 1 -- 0 maps to 1

1 -> 5 -- 1 maps to 5

2 -> 2 -- 2 maps to 2

_ -> -1 -- everything else maps to -1

This example program shows the use of both line comments and (embedded) block

comments.

3.7 Recursion

In imperative languages like C and Java, the most basic control structure is a loop (like

a for loop). However, for loops don’t make much sense in Haskell because they require

destructive update (the index variable is constantly being updated). Instead, Haskell

uses recursion.

A function is recursive if it calls itself (see Appendix B for more). Recursive func-

tions exist also in C and Java but are used less than they are in functional languages.

The prototypical recursive function is the factorial function. In an imperative language,

you might write this as something like:

int factorial(int n) {

int fact = 1;

for (int i=2; i <= n; i++)

fact = fact * i;

return fact;

}

While this code fragment will successfully compute factorials for positive integers,

it somehow ignores the basic definition of factorial, usually given as:

n! = 1                  if n = 1

n! = n ∗ (n − 1)!       otherwise

This definition itself is exactly a recursive definition: namely the value of n! de-

pends on the value of (n − 1)!. If you think of ! as a function, then it is calling itself.

We can translate this definition almost verbatim into Haskell code:

factorial 1 = 1

factorial n = n * factorial (n-1)

This is likely the simplest recursive function you’ll ever see, but it is correct.


Of course, factorial can also be written recursively in C:

int factorial(int n) {

if (n == 1)

return 1;

else

return n * factorial(n-1);

}

Recursive function definitions are closely related to the concept of induction in mathematics (see Appendix B for a more formal treatment of

this). However, usually a problem can be thought of as having one or more base cases

and one or more recursive cases. In the case of factorial, there is one base case

(when n = 1) and one recursive case (when n > 1). For designing your own recursive

algorithms, it is often useful to try to differentiate these two cases.

Turning now to the task of exponentiation, suppose that we have two positive in-

tegers a and b, and that we want to calculate a^b. This problem has a single base case:

namely when b is 1. The recursive case is when b > 1. We can write a general form as:

a^b = a                  if b = 1

a^b = a ∗ a^(b−1)        otherwise

Again, this translates directly into Haskell code:

exponent a 1 = a

exponent a b = a * exponent a (b-1)
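One caveat when trying this in a file: the Prelude already exports a (different) function named exponent, so unqualified uses of this definition can be reported as ambiguous. A sketch that sidesteps the clash by renaming (my_exponent is our name, not the tutorial's):

```haskell
-- Renamed from exponent to avoid clashing with the Prelude's
-- exponent function (a method of the RealFloat class).
my_exponent :: Int -> Int -> Int
my_exponent a 1 = a
my_exponent a b = a * my_exponent a (b-1)

main :: IO ()
main = print (my_exponent 2 10)   -- 2 to the power 10: 1024
```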

We can define recursive functions on lists as well. In this case, usually the base case is the empty list [], and the

recursive case is a cons list (i.e., a value consed on to another list).

Consider the task of calculating the length of a list. We can again break this down

into two cases: either we have an empty list or we have a non-empty list. Clearly the

length of an empty list is zero. Furthermore, if we have a cons list, then the length of

this list is just the length of its tail plus one. Thus, we can define a length function as:

my_length [] = 0

my_length (x:xs) = 1 + my_length xs

NOTE: Since length and filter already exist as standard Haskell functions, we prefix ours with my_ so the compiler doesn't become

confused.


Similarly, we can consider the filter function. Again, the base case is the empty

list, and the recursive case is a cons list. However, this time, we’re choosing whether

to keep an element, depending on whether or not a particular predicate holds. We can

define the filter function as:

my_filter p [] = []

my_filter p (x:xs) =

if p x

then x : my_filter p xs

else my_filter p xs

In this code, when presented with an empty list, we simply return an empty list.

This is because filter cannot add elements; it can only remove them.

When presented with a list of the form (x:xs), we need to decide whether or not

to keep the value x. To do this, we use an if statement and the predicate p. If p x is

true, then we return a list that begins with x followed by the result of filtering the tail

of the list. If p x is false, then we exclude x and return the result of filtering the tail of

the list.
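A quick run, using the Prelude's even function as the predicate (the definition is repeated here only so the sketch is self-contained):

```haskell
my_filter :: (a -> Bool) -> [a] -> [a]
my_filter p [] = []
my_filter p (x:xs) =
  if p x
    then x : my_filter p xs    -- keep x
    else my_filter p xs        -- drop x

main :: IO ()
main = print (my_filter even [1..10])   -- keeps only the even numbers
```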

We can also define map and both fold functions using explicit recursion. See the

exercises for the definition of map and Chapter 7 for the folds.

Exercises

Exercise 3.7 The Fibonacci sequence is defined by:

F(n) = 1                     if n = 1 or n = 2

F(n) = F(n−2) + F(n−1)       otherwise

Write a recursive function fib that takes a positive integer n as a parameter and

calculates F(n).

Exercise 3.8 Define a recursive function mult that takes two positive integers a and

b and returns a*b, but only uses addition (i.e., no fair just using multiplication). Begin

by making a mathematical definition in the style of the previous exercise and the rest of

this section.

Exercise 3.9 Define a recursive function my map that behaves identically to the stan-

dard function map.

3.8 Interactivity

If you are familiar with books on other (imperative) languages, you might be wonder-

ing why you haven’t seen many of the standard programs written in tutorials of other

languages (like ones that ask the user for his name and then says “Hi” to him by name).

The reason for this is simple: Being a pure functional language, it is not entirely clear

how one should handle operations like user input.


After all, suppose you have a function that reads a string from the keyboard. If

you call this function twice, and the user types something the first time and something

else the second time, then you no longer have a function, since it would return two

different values. The solution to this was found in the depths of category theory, a

branch of formal mathematics: monads. We’re not yet ready to talk about monads

formally, but for now, think of them simply as a convenient way to express operations

like input/output. We’ll discuss them in this context much more in Chapter 5 and then

discuss monads for monads’ sake in Chapter 9.

Suppose we want to write a function that’s interactive. The way to do this is to

use the do keyword. This allows us to specify the order of operations (remember that

normally, since Haskell is a lazy language, the order in which operations are evaluated

in it is unspecified). So, to write a simple program that asks a user for his name and

then address him directly, enter the following code into “Name.hs”:

module Main

where

import IO

main = do

hSetBuffering stdin LineBuffering

putStrLn "Please enter your name: "

name <- getLine

putStrLn ("Hello, " ++ name ++ ", how are you?")

NOTE: The parentheses are required on the second instance of putStrLn but not the first. This is because function application binds

more tightly than ++, so without the parentheses, the second would be

interpreted as (putStrLn "Hello, ") ++ name ++ ....

You can then either load this code in your interpreter and execute main by simply

typing “main,” or you can compile it and run it from the command line. I’ll show the

results of the interactive approach:

Main> main

Please enter your name:

Hal

Hello, Hal, how are you?

Main>

And there’s interactivity. Let’s go back and look at the code a little, though. We

name the module “Main,” so that we can compile it. We name the primary function

“main,” so that the compiler knows that this is the function to run when the program

is run. On the fourth line, we import the IO library, so that we can access the IO


functions. On the seventh line, we start with do, telling Haskell that we’re executing a

sequence of commands.

The first command is hSetBuffering stdin LineBuffering, which you

should probably ignore for now (incidentally, this is only required by GHC – in Hugs

you can get by without it). This is necessary because, when GHC reads input, it

expects to read it in rather large blocks. A typical person’s name is nowhere near large

enough to fill this block. Thus, when we try to read from stdin, it waits until it’s

gotten a whole block. We want to get rid of this, so we tell it to use LineBuffering

instead of block buffering.

The next command is putStrLn, which prints a string to the screen. On the

ninth line, we say “name <- getLine.” This would normally be written “name =

getLine,” but using the arrow instead of the equal sign shows that getLine isn’t

a real function and can return different values. This command means “run the action

getLine, and store the results in name.”

The last line constructs a string using what we read in the previous line and then

prints it to the screen.

Another example of a function that isn’t really a function would be one that returns

a random value. In this context, a function that does this is called randomRIO. Us-

ing this, we can write a “guess the number” program. Enter the following code into

“Guess.hs”:

module Main

where

import IO

import Random

main = do

hSetBuffering stdin LineBuffering

num <- randomRIO (1::Int, 100)

putStrLn "I’m thinking of a number between 1 and 100"

doGuessing num

doGuessing num = do

putStrLn "Enter your guess:"

guess <- getLine

let guessNum = read guess

if guessNum < num

then do putStrLn "Too low!"

doGuessing num

else if guessNum > num

then do putStrLn "Too high!"

doGuessing num

else do putStrLn "You Win!"

Let’s examine this code. On the fifth line we write “import Random” to tell the


compiler that we’re going to be using some random functions (these aren’t built into

the Prelude). In the first line of main, we ask for a random number in the range

(1, 100). We need to write ::Int to tell the compiler that we’re using integers here –

not floating point numbers or other numbers. We’ll talk more about this in Chapter 4.

On the next line, we tell the user what’s going on, and then, on the last line of main,

we tell the compiler to execute the command doGuessing.

The doGuessing function takes the number the user is trying to guess as an

argument. First, it asks the user to guess and then accepts their guess (which is a

String) from the keyboard. The if statement checks first to see if their guess is too

low. However, since guess is a string, and num is an integer, we first need to convert

guess to an integer by reading it. Since “read guess” is a plain, pure function

(and not an IO action), we don’t need to use the <- notation (in fact, we cannot); we

simply bind the value to guessNum. Note that while we’re in do notation, we don’t

need ins for lets.
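A minimal illustration of the difference (our own example, not from the guessing game): inside a do block, let has no matching in, while in an ordinary expression it does:

```haskell
-- In an ordinary expression, let requires a matching in.
tripled :: Int -> Int
tripled n =
  let doubled = n * 2
  in n + doubled

-- Inside do notation, the in is omitted.
main :: IO ()
main = do
  let x = 5
      y = x * 2
  print (x + y)   -- the same computation as tripled 5
```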

If they guessed too low, we inform them and then start doGuessing over again.

If they didn’t guess too low, we check to see if they guessed too high. If they did,

we tell them and start doGuessing again. Otherwise, they didn’t guess too low and

they didn’t guess too high, so they must have gotten it correct. We tell them that they

won and exit. The fact that we exit is implicit in the fact that there are no commands

following this. We don’t need an explicit return () statement.

You can either compile this code or load it into your interpreter, and you will get

something like:

Main> main

I’m thinking of a number between 1 and 100

Enter your guess:

50

Too low!

Enter your guess:

75

Too low!

Enter your guess:

85

Too high!

Enter your guess:

80

Too high!

Enter your guess:

78

Too low!

Enter your guess:

79

You Win!

The recursive action that we just saw doesn’t actually return a value that we use in

any way. In the case when it does, the “obvious” way to write the command is actually


incorrect. Here, we will give the incorrect version, explain why it is wrong, then give

the correct version.

Let’s say we’re writing a simple program that repeatedly asks the user to type in a

few words. If at any point the user enters the empty word (i.e., he just hits enter without

typing anything), the program prints out everything he’s typed up until that point and

then exits. The primary function (actually, an action) in this program is one that asks

the user for a word, checks to see if it’s empty, and then either continues or ends. The

incorrect formulation of this might look something like:

askForWords = do

putStrLn "Please enter a word:"

word <- getLine

if word == ""

then return []

else return (word : askForWords)

Before reading ahead, see if you can figure out what is wrong with the above code.

The error is on the last line, specifically with the term word : askForWords.

Remember that when using (:), we are making a list out of an element (in this case

word) and another list (in this case, askForWords). However, askForWords is

not a list; it is an action that, when run, will produce a list. That means that before we

can attach anything to the front, we need to run the action and take the result. In this

case, we want to do something like:

askForWords = do

putStrLn "Please enter a word:"

word <- getLine

if word == ""

then return []

else do

rest <- askForWords

return (word : rest)

Here, we first run askForWords, take the result and store it in the variable rest.

Then, we return the list created from word and rest.

By now, you should have a good understanding of how to write simple functions,

compile them, test functions and programs in the interactive environment, and manip-

ulate lists.

Exercises

Exercise 3.10 Write a program that will repeatedly ask the user for numbers until she

types in zero, at which point it will tell her the sum of all the numbers, the product of

all the numbers, and, for each number, its factorial. For instance, a session might look

like:


Give me a number (or 0 to stop):

5

Give me a number (or 0 to stop):

8

Give me a number (or 0 to stop):

2

Give me a number (or 0 to stop):

0

The sum is 15

The product is 80

5 factorial is 120

8 factorial is 40320

2 factorial is 2

Hint: write an IO action that reads a number and, if it’s zero, returns the empty list. If

it’s not zero, it calls itself recursively and then makes a list out of the number it just read and

the result of the recursive call.

Chapter 4

Type Basics

Haskell uses a system of static type checking. This means that every expression in

Haskell is assigned a type. For instance ’a’ would have type Char, for “character.”

Then, if you have a function which expects an argument of a certain type and you give

it the wrong type, a compile-time error will be generated (that is, you will not be able

to compile the program). This vastly reduces the number of bugs that can creep into

your program.

Furthermore, Haskell uses a system of type inference. This means that you don’t

even need to specify the type of expressions. For comparison, in C, when you define

a variable, you need to specify its type (for instance, int, char, etc.). In Haskell, you

needn’t do this – the type will be inferred from context.

NOTE: You can, however, explicitly specify the type of an expression; this often helps when debugging. In fact, it is sometimes

considered good style to explicitly specify the types of outermost

functions.

Both Hugs and GHCi allow you to apply type inference to an expression to find its

type. This is done by using the :t command. For instance, start up your favorite shell

and try the following:

Prelude> :t ’c’

’c’ :: Char

This tells us that the expression ’c’ has type Char (the double colon :: is used

throughout Haskell to specify types).

There are a slew of built-in types, including Int (for integers, both positive and neg-

ative), Double (for floating point numbers), Char (for single characters), String (for


strings), and others. We have already seen an expression of type Char; let’s examine

one of type String:

Prelude> :t "Hello"

"Hello" :: String

You can also enter more complicated expressions, for instance, a test of equality:

Prelude> :t ’a’ == ’b’

’a’ == ’b’ :: Bool

You should note that even though this expression is false, it still has a type, namely

the type Bool.

NOTE: The type Bool is short for Boolean (usually pronounced “boo-lee-uhn,” though I’ve heard “boo-leen” once or twice) and has two possible values:

True and False.

You can observe the process of type checking and type inference by trying to get

the shell to give you the type of an ill-typed expression. For instance, the equality

operator requires that the type of both of its arguments are of the same type. We can

see that Char and String are of different types by trying to compare a character to a

string:

Prelude> :t ’a’ == "a"

ERROR - Type error in application

*** Expression : ’a’ == "a"

*** Term : ’a’

*** Type : Char

*** Does not match : [Char]

The first line of the error (the line containing “Expression”) tells us the expression

in which the type error occured. The second line tells us which part of this expression

is ill-typed. The third line tells us the inferred type of this term and the fourth line

tells us what it needs to have matched. In this case, it says that the type Char doesn’t

match the type [Char] (a list of characters – a string in Haskell is represented as a list of

characters).

As mentioned before, you can explicitly specify the type of an expression using

the :: operator. For instance, instead of ”a” in the previous example, we could have

written (”a”::String). In this case, this has no effect since there’s only one possible

interpretation of ”a”. However, consider the case of numbers. You can try:

Prelude> :t 5 :: Int

5 :: Int

Prelude> :t 5 :: Double

5 :: Double


Here, we can see that the number 5 can be instantiated as either an Int or a Double.

What if we don’t specify the type?

Prelude> :t 5

5 :: Num a => a

Not quite what you expected? What this means, briefly, is that if some type a is

an instance of the Num class, then the type of the expression 5 can be of type a.

If that made no sense, that’s okay for now. In Section 4.3 we talk extensively about

type classes (which is what this is). The way to read this, though, is to say “a being an

instance of Num implies a.”

Exercises

Exercise 4.1 Figure out for yourself, and then verify the types of the following expres-

sions, if they have a type. Also note if the expression is a type error:

1. ’h’:’e’:’l’:’l’:’o’:[]

2. [5,’a’]

3. (5,’a’)

4. (5::Int) + 10

5. (5::Int) + (10::Double)

4.2 Polymorphic Types

Haskell employs a polymorphic type system. This essentially means that you can have

type variables, which we have alluded to before. For instance, note that a function like

tail doesn’t care what the elements in the list are:

Prelude> tail [5,6,7,8,9]

[6,7,8,9]

Prelude> tail "hello"

"ello"

Prelude> tail ["the","man","is","happy"]

["man","is","happy"]

This is possible because tail has a polymorphic type: [α] → [α]. That means it

can take as an argument any list and return a value which is a list of the same type.

The same analysis can explain the type of fst:

Prelude> :t fst

fst :: forall a b . (a,b) -> a


Here, GHCi has made explicit the universal quantification of the type values. That

is, it is saying that for all types a and b, fst is a function from (a, b) to a.
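For instance (our own examples), the very same fst works on pairs of any two types:

```haskell
main :: IO ()
main = do
  print (fst (5 :: Int, "hello"))   -- here a = Int, b = String
  print (fst ('a', True))           -- here a = Char, b = Bool
  print (snd ('a', True))           -- snd picks out the b instead
```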

Exercises

Exercise 4.2 Figure out for yourself, and then verify the types of the following expres-

sions, if they have a type. Also note if the expression is a type error:

1. snd

2. head

3. null

4. head . tail

5. head . head

4.3 Type Classes

We saw last section some strange typing having to do with the number five. Before we

delve too deeply into the subject of type classes, let’s take a step back and see some of

the motivation.

4.3.1 Motivation

In many languages (C++, Java, etc.), there exists a system of overloading. That is,

a function can be written that takes parameters of differing types. For instance, the

canonical example is the equality function. If we want to compare two integers, we

should use an integer comparison; if we want to compare two floating point numbers,

we should use a floating point comparison; if we want to compare two characters,

we should use a character comparison. In general, if we want to compare two things

which have type α, we want to use an α-compare. We call α a type variable since it is

a variable whose value is a type.

NOTE In general, type variables will be written using the first part of

the Greek alphabet: α, β, γ, δ, . . . .

Unfortunately, this presents some problems for static type checking, since the type

checker doesn’t know which types a certain operation (for instance, equality testing)

will be defined for. There are as many solutions to this problem as there are statically

typed languages (perhaps a slight exaggeration, but not so much so). The one chosen

in Haskell is the system of type classes. Whether this is the “correct” solution or the

“best” solution of course depends on your application domain. It is, however, the one

we have, so you should learn to love it.


4.3.2 Equality Testing

Returning to the issue of equality testing, what we want to be able to do is define a

function == (the equality operator) which takes two parameters, each of the same type

(call it α), and returns a boolean. But this function may not be defined for every type;

just for some. Thus, we associate this function == with a type class, which we call

Eq. If a specific type α belongs to a certain type class (that is, all functions associated

with that class are implemented for α), we say that α is an instance of that class. For

instance, Int is an instance of Eq since equality is defined over integers.
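Concretely (our own examples), == can be used at any type that is an instance of Eq, and each type supplies its own comparison:

```haskell
main :: IO ()
main = do
  print (5 == (5 :: Int))            -- integer comparison
  print ('a' == 'b')                 -- character comparison
  print ([1,2,3] == [1,2,3 :: Int])  -- lists compare element by element
```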

4.3.3 The Num Class

In addition to overloading operators like ==, Haskell has overloaded numeric constants

(i.e., 1, 2, 3, etc.). This was done so that when you type in a number like 5, the

compiler is free to say 5 is an integer or floating point number as it sees fit. It defines

the Num class to contain all of these numbers and certain minimal operations over

them (addition, for instance). The basic numeric types (Int, Double) are defined to be

instances of Num.
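To see this overloading at work (our own sketch), the same literal 5 can be used at type Int or at type Double, depending only on the annotation:

```haskell
asInt :: Int
asInt = 5          -- the literal 5 at type Int

asDouble :: Double
asDouble = 5       -- the very same literal at type Double

main :: IO ()
main = do
  print (asInt + 10)     -- integer arithmetic: 15
  print (asDouble / 2)   -- floating-point arithmetic: 2.5
```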

We have only skimmed the surface of the power (and complexity) of type classes

here. There will be much more discussion of them in Section 8.4, but we need some

more background before we can get there. Before we do that, we need to talk a little

more about functions.

4.3.4 The Show Class

Another of the standard classes in Haskell is the Show class. Types which are mem-

bers of the Show class have functions which convert values of that type to a string.

This function is called show. For instance show applied to the integer 5 is the string

“5”; show applied to the character ’a’ is the three-character string “’a’” (the first and

last characters are apostrophes). show applied to a string simply puts quotes around it.

You can test this in the interpreter:

Prelude> show 5

"5"

Prelude> show ’a’

"’a’"

Prelude> show "Hello World"

"\"Hello World\""

NOTE The reason the backslashes appear in the last line is because

the interior quotes are “escaped”, meaning that they are part of the string,

not part of the interpreter printing the value. The actual string doesn’t

contain the backslashes.
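These three behaviors of show can be checked directly (our own test values):

```haskell
main :: IO ()
main = do
  putStrLn (show 5)              -- the string "5"
  putStrLn (show 'a')            -- the three-character string 'a'
  putStrLn (show "Hello World")  -- the quotes become part of the string
```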

Some types are not instances of Show; functions for example. If you try to show a

function (like sqrt), the compiler or interpreter will give you some cryptic error message.

4.4 Function Types

In Haskell, functions are first class values, meaning that just as 1 or ’c’ are values

which have a type, so are functions like square or ++. Before we talk too much about

functions, we need to make a short diversion into very theoretical computer science

(don’t worry, it won’t be too painful) and talk about the lambda calculus.

4.4.1 Lambda Calculus

The name “Lambda Calculus”, while perhaps daunting, describes a fairly simple sys-

tem for representing functions. The way we would write a squaring function in lambda

calculus is: λx.x∗x, which means that we take a value, which we will call x (that’s what

“λx.” means) and then multiply it by itself. The λ is called “lambda abstraction.” In

general, lambdas can only have one parameter. If we want to write a function that takes

two numbers, doubles the first and adds it to the second, we would write: λxλy.2∗x+y.

When we apply a value to a lambda expression, we remove the outermost λ and replace

every occurrence of the lambda variable with the value. For instance, if we evaluate

(λx.x ∗ x)5, we remove the lambda and replace every occurrence of x with 5, yielding

(5 ∗ 5) which is 25.

In fact, Haskell is largely based on an extension of the lambda calculus, and these

two expressions can be written directly in Haskell (we simply replace the λ with a

backslash and the . with an arrow; also we don’t need to repeat the lambdas; and, of

course, in Haskell we have to give them names if we’re defining functions):

square = \x -> x*x

f = \x y -> 2*x + y

We can also evaluate lambda expressions directly in the interactive shell:

Prelude> (\x -> x*x) 5

25

Prelude> (\x y -> 2*x + y) 5 4

14

We can see in the second example that we need to give the lambda abstraction two

arguments, one corresponding to x and the other corresponding to y.

4.4.2 Higher-Order Types

“Higher-Order Types” is the name given to the types of functions. The type given to functions mim-

icks the lambda calculus representation of the functions. For instance, the definition of

square gives λx.x ∗ x. To get the type of this, we first ask ourselves what the type of x


is. Say we decide x is an Int. Then, we notice that the function square takes an Int

and produces a value x*x. We know that when we multiply two Ints together, we get

another Int, so the type of the results of square is also an Int. Thus, we say the type

of square is Int → Int.

We can apply a similar analysis to the function f above. The value of this function

(remember, functions are values) is something which takes a value x and given that

value, produces a new value, which takes a value y and produces 2*x+y. For instance,

if we take f and apply only one number to it, we get (λxλy.2x + y)5 which becomes

our new value λy.2(5) + y, where all occurances of x have been replaced with the

applied value, 5.

So we know that f takes an Int and produces a value of some type, of which we’re

not sure. But we know the type of this value is the type of λy.2(5) + y. We ap-

ply the above analysis and find out that this expression has type Int → Int. Thus, f

takes an Int and produces something which has type Int → Int. So the type of f is

Int → (Int → Int).
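Partial application makes this concrete (a sketch; the name g is ours): applying f to one argument yields a function of type Int → Int:

```haskell
f :: Int -> Int -> Int      -- really Int -> (Int -> Int)
f = \x y -> 2*x + y

g :: Int -> Int             -- f applied to 5: the function \y -> 2*5 + y
g = f 5

main :: IO ()
main = print (g 4)          -- 2*5 + 4 = 14
```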

NOTE: The function arrow is right-associative: if we have α → β → γ, it is assumed that β → γ is grouped. If you want the

other way, with α → β grouped, you need to put parentheses around

them.

Strictly speaking, this isn’t entirely accurate. As we saw before, numbers like 5 aren’t really of type

Int; they are of type Num a ⇒ a.

We can easily find the type of Prelude functions using “:t” as before:

Prelude> :t head

head :: [a] -> a

Prelude> :t tail

tail :: [a] -> [a]

Prelude> :t null

null :: [a] -> Bool

Prelude> :t fst

fst :: (a,b) -> a

Prelude> :t snd

snd :: (a,b) -> b

We read this as: “head” is a function that takes a list containing values of type “a”

and gives back a value of type “a”; “tail” takes a list of “a”s and gives back another list

of “a”s; “null” takes a list of “a”s and gives back a boolean; “fst” takes a pair of type

“(a,b)” and gives back something of type “a”, and so on.

NOTE Saying that the type of fst is (a, b) → a does not necessarily

mean that it simply gives back the first element; it only means that it

gives back something with the same type as the first element.


We can also get the type of operators like + and * and ++ and :; however, in order

to do this we need to put them in parentheses. In general, any function which is used

infix (meaning in the middle of two arguments rather than before them) must be put in

parentheses when getting its type.

Prelude> :t (+)

(+) :: Num a => a -> a -> a

Prelude> :t (*)

(*) :: Num a => a -> a -> a

Prelude> :t (++)

(++) :: [a] -> [a] -> [a]

Prelude> :t (:)

(:) :: a -> [a] -> [a]

The types of + and * are the same, and mean that + is a function which, for some

type a which is an instance of Num, takes a value of type a and produces another

function which takes a value of type a and produces a value of type a. In shorthand,

we might say that + takes two values of type a and produces a value of type a, but this

is less precise.

The type of ++ means, in shorthand, that, for a given type a, ++ takes two lists of

as and produces a new list of as. Similarly, : takes a value of type a and another value

of type [a] (list of as) and produces another value of type [a].
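Both readings can be checked in a couple of lines (our own examples):

```haskell
main :: IO ()
main = do
  print ((:) 1 [2,3 :: Int])       -- same as 1 : [2,3]
  print ((++) [1,2] [3,4 :: Int])  -- same as [1,2] ++ [3,4]
  print ((+) 5 (10 :: Int))        -- same as 5 + 10
```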

4.4.3 That Pesky IO Type

You might be tempted to try getting the type of a function like putStrLn:

Prelude> :t putStrLn

putStrLn :: String -> IO ()

Prelude> :t readFile

readFile :: FilePath -> IO String

What in the world is that IO thing? It’s basically Haskell’s way of representing

that these functions aren’t really functions. They’re called “IO Actions” (hence the

IO). The immediate question which arises is: okay, so how do I get rid of the IO.

In brief, you can’t directly remove it. That is, you cannot write a function with type

IO String → String. The only way to use things with an IO type is to combine them

with other functions using (for example), the do notation.

For example, if you’re reading a file using readFile, presumably you want to do

something with the string it returns (otherwise, why would you read the file in the first

place). Suppose you have a function f which takes a String and produces an Int. You

can’t directly apply f to the result of readFile since the input to f is String and the

output of readFile is IO String and these don’t match. However, you can combine

these as:


main = do

s <- readFile "somefile"

let i = f s

putStrLn (show i)

Here, we use the arrow convention to “get the string out of the IO action” and then

apply f to the string (called s). We then, for example, print i to the screen. Note that

the let here doesn’t have a corresponding in. This is because we are in a do block.

Also note that we don’t write i <- f s because f is just a normal function, not an

IO action.

4.4.4 Explicit Type Declarations

It is sometimes desirable to explicitly specify the types of some elements or functions,

for one (or more) of the following reasons:

• Clarity

• Speed

• Debugging

Some people consider it good software engineering to specify the types of all top-

level functions. If nothing else, if you’re trying to compile a program and you get type

errors that you cannot understand, if you declare the types of some of your functions

explicitly, it may be easier to figure out where the error is.

Type declarations are written separately from the function definition. For instance,

we could explicitly type the function square as in the following code (an explicitly

declared type is called a type signature):

square :: Num a => a -> a
square x = x*x

These two lines do not even have to be next to each other. However, the type that you

specify must match the inferred type of the function definition (or be more specific).

In this definition, you could apply square to anything which is an instance of Num:

Int, Double, etc. However, if you knew a priori that square were only going to be

applied to values of type Int, you could refine its type as:

square :: Int -> Int
square x = x*x

Now, you could only apply square to values of type Int. Moreover, with this def-

inition, the compiler doesn’t have to generate the general code specified in the original


function definition since it knows you will only apply square to Ints, so it may be

able to generate faster code.

If you have extensions turned on (“-98” in Hugs or “-fglasgow-exts” in GHC(i)),

you can also add a type signature to expressions and not just functions. For instance,

you could write:

square (x :: Int) = x*x

which tells the compiler that x is an Int; however, it leaves the compiler alone

to infer the type of the rest of the expression. What is the type of square in this

example? Make your guess then you can check it either by entering this code into a file

and loading it into your interpreter or by asking for the type of the expression.

4.4.5 Functional Arguments

In Section 3.3 we saw examples of functions taking other functions as arguments. For

instance, map took a function to apply to each element in a list, filter took a func-

tion that told it which elements of a list to keep, and foldl took a function which told

it how to combine list elements together. As with every other function in Haskell, these

are well-typed.

Let’s first think about the map function. Its job is to take a list of elements and

produce another list of elements. These two lists don’t necessarily have to have the

same types of elements. So map will take a value of type [a] and produce a value of

type [b]. How does it do this? It uses the user-supplied function to convert. In order

to convert an a to a b, this function must have type a → b. Thus, the type of map is

(a → b) → [a] → [b], which you can verify in your interpreter with “:t”.
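To make this concrete, here is a small example (added for illustration, not from the original text): mapping show over a list of Ints instantiates a to Int and b to String.

```haskell
-- map used at type (Int -> String) -> [Int] -> [String]
main :: IO ()
main = print (map show [1, 2, 3])
```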

We can apply the same sort of analysis to filter and discern that it has type

(a → Bool) → [a] → [a]. As we presented the foldl function, you might be tempted

to give it type (a → a → a) → a → [a] → a, meaning that you take a function which

combines two as into another one, an initial value of type a, a list of as to produce a final

value of type a. In fact, foldl has a more general type: (a → b → a) → a → [b] → a.

So it takes a function which turns an a and a b into an a, an initial value of type a and a

list of bs. It produces an a.
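To illustrate this more general type (an example added here, not from the original text), the accumulator can be a String while the list elements are Ints:

```haskell
-- foldl used at type (String -> Int -> String) -> String -> [Int] -> String:
-- the accumulator type a and the element type b really can differ.
main :: IO ()
main = putStrLn (foldl (\acc x -> acc ++ show x) "" [1, 2, 3])
```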

To see this, we can write a function count which counts how many members of a

list satisfy a given constraint. You can of course use filter and length to do this,

but we will also do it using foldr:

module Count

where

import Char

count1 p l = length (filter p l)
count2 p l = foldr (\x c -> if p x then c+1 else c) 0 l

The functioning of count1 is simple. It filters the list l according to the predicate

p, then takes the length of the resulting list. On the other hand, count2 uses the initial

value (which is an integer) to hold the current count. For each element in the list l,

it applies the lambda expression shown. This takes two arguments: x, the current element

in the list that we’re looking at, and c, which holds the current count. It

checks to see if p holds about x. If it does, it returns the new value c+1, increasing the

count of elements for which the predicate holds. If it doesn’t, it just returns c, the old

count.
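To see count2 at work, here is a self-contained sketch (the test in main is ours, not from the text): counting the even numbers between 1 and 10.

```haskell
-- count2 as defined above, exercised on a concrete predicate.
count2 :: (a -> Bool) -> [a] -> Int
count2 p l = foldr (\x c -> if p x then c + 1 else c) 0 l

main :: IO ()
main = print (count2 even [1 .. 10])
```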

Exercises

Exercise 4.3 Figure out for yourself, and then verify the types of the following expres-

sions, if they have a type. Also note if the expression is a type error:

1. \x -> [x]

2. \x y z -> (x,y:z:[])

3. \x -> x + 5

4. \x -> "hello, world"

5. \x -> x ’a’

6. \x -> x x

7. \x -> x + x

4.5 Data Types

Tuples and lists are nice, common ways to define structured values. However, it is

often desirable to be able to define our own data structures and functions over them.

So-called “datatypes” are defined using the data keyword.

4.5.1 Pairs

For instance, a definition of a pair of elements (much like the standard, built-in pair

type) could be:

data Pair a b = Pair a b

Let’s walk through this code one word at a time. First we say “data” meaning that

we’re defining a datatype. We then give the name of the datatype, in this case, “Pair.”

The “a” and “b” that follow “Pair” are type parameters, just like the “a” in the type of

the function map. So up until this point, we’ve said that we’re going to define a data

structure called “Pair” which is parameterized over two types, a and b.

After the equals sign, we specify the constructors of this data type. In this case,

there is a single constructor, “Pair” (this doesn’t necessarily have to have the same

name as the type, but in this case it seems to make more sense). After this, we

again write “a b”, which means that in order to construct a Pair we need two values,

one of type a and one of type b.

This definition introduces a function, Pair :: a -> b -> Pair a b that

you can use to construct Pairs. If you enter this code into a file and load it, you can

see how these are constructed:

Datatypes> :t Pair
Pair :: a -> b -> Pair a b
Datatypes> :t Pair 'a'
Pair 'a' :: a -> Pair Char a
Datatypes> :t Pair 'a' "Hello"
Pair 'a' "Hello" :: Pair Char [Char]

So, by giving Pair two values, we have completely constructed a value of type

Pair. We can write functions involving pairs as:

pairFst (Pair x y) = x

pairSnd (Pair x y) = y

In this, we’ve used the pattern matching capabilities of Haskell to look at a pair

and extract values from it. In the definition of pairFst we take an entire Pair and

extract the first element; similarly for pairSnd. We’ll discuss pattern matching in

much more detail in Section 7.4.
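Putting the pieces together, here is a runnable sketch of the Pair example (the main function is ours, added for illustration):

```haskell
-- The Pair datatype and accessors from the text.
data Pair a b = Pair a b

pairFst :: Pair a b -> a
pairFst (Pair x _) = x

pairSnd :: Pair a b -> b
pairSnd (Pair _ y) = y

main :: IO ()
main = do
  print (pairFst (Pair 'a' "Hello"))
  putStrLn (pairSnd (Pair 'a' "Hello"))
```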

Exercises

Exercise 4.4 Write a data type declaration for Triple, a type which contains three

elements, all of different types. Write functions tripleFst, tripleSnd and tripleThr

to extract respectively the first, second and third elements.

Exercise 4.5 Write a datatype Quadruple which holds four elements. However, the

first two elements must be the same type and the last two elements must be the same

type. Write a function firstTwo which returns a list containing the first two elements

and a function lastTwo which returns a list containing the last two elements. Write

type signatures for these functions.


4.5.2 Multiple Constructors

We have seen an example of a data type with one constructor: Pair. It is also

possible (and extremely useful) to have multiple constructors.

Let us consider a simple function which searches through a list for an element

satisfying a given predicate and then returns the first element satisfying that predicate.

What should we do if none of the elements in the list satisfy the predicate? A few

options are listed below:

• Raise an error

• Loop indefinitely

• ...

Raising an error is certainly an option (see Section 10.1 to see how to do this).

The problem is that it is difficult/impossible to recover from such errors. Looping

indefinitely is possible, but not terribly useful. We could write a sister function which

checks to see if the list contains an element satisfying a predicate and leave it up to the

user to always use this function first. We could return the first element, but this is very

ad-hoc and difficult to remember.

The fact that there is no basic option to solve this problem simply means we have to

think about it a little more. What are we trying to do? We’re trying to write a function

which might succeed and might not. Furthermore, if it does succeed, it returns some

sort of value. Let’s write a datatype:

data Maybe a = Nothing
             | Just a

This is one of the most common datatypes in Haskell and is defined in the Prelude.

Here, we’re saying that there are two possible ways to create something of type

Maybe a. The first is to use the nullary constructor Nothing, which takes no ar-

guments (this is what “nullary” means). The second is to use the constructor Just,

together with a value of type a.

The Maybe type is useful in all sorts of circumstances. For instance, suppose

we want to write a function (like head) which returns the first element of a given

list. However, we don’t want the program to die if the given list is empty. We can

accomplish this with a function like:

firstElement :: [a] -> Maybe a
firstElement [] = Nothing

firstElement (x:xs) = Just x


The type signature here says that firstElement takes a list of as and produces

something with type Maybe a. In the first line of code, we match against the empty

list []. If this match succeeds (i.e., the list is, in fact, empty), we return Nothing. If

the first match fails, then we try to match against x:xs which must succeed. In this

case, we return Just x.
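Here is the whole example as a runnable sketch (the calls in main are ours):

```haskell
-- firstElement from the text: a head that cannot crash on [].
firstElement :: [a] -> Maybe a
firstElement []    = Nothing
firstElement (x:_) = Just x

main :: IO ()
main = do
  print (firstElement [1, 2, 3 :: Int])
  print (firstElement ([] :: [Int]))
```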

For our findElement function, we represent failure by the value Nothing and

success with value a by Just a. Our function might look something like this:

findElement :: (a -> Bool) -> [a] -> Maybe a
findElement p [] = Nothing

findElement p (x:xs) =

if p x then Just x

else findElement p xs

The first line here gives the type of the function. In this case, our first argument

is the predicate (and takes an element of type a and returns True if and only if the

element satisfies the predicate); the second argument is a list of as. Our return value

is maybe an a. That is, if the function succeeds, we will return Just a and if not,

Nothing.
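A quick check of both outcomes (the examples in main are ours, not from the text):

```haskell
-- findElement from the text: returns the first element satisfying p, if any.
findElement :: (a -> Bool) -> [a] -> Maybe a
findElement _ [] = Nothing
findElement p (x:xs) =
  if p x then Just x
         else findElement p xs

main :: IO ()
main = do
  print (findElement even [1, 3, 4, 5 :: Int])
  print (findElement (> 10) [1, 3, 4, 5 :: Int])
```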

Another useful datatype is the Either type, defined as:

data Either a b = Left a
                | Right b

A value of type Either a b is either a value of type a (using the Left constructor) or a value of type b (using the

Right constructor).

Exercises

Exercise 4.6 Write a datatype Tuple which can hold one, two, three or four elements,

depending on the constructor (that is, there should be four constructors, one for each

number of arguments). Also provide functions tuple1 through tuple4 which take a

tuple and return Just the value in that position, or Nothing if the number is invalid

(i.e., you ask for the tuple4 on a tuple holding only two elements).

Exercise 4.7 Based on our definition of Tuple from the previous exercise, write a

function which takes a Tuple and returns either the value (if it’s a one-tuple), a

Haskell-pair (i.e., (’a’,5)) if it’s a two-tuple, a Haskell-triple if it’s a three-tuple

or a Haskell-quadruple if it’s a four-tuple. You will need to use the Either type to

represent this.


4.5.3 Recursive Datatypes

We can also define recursive datatypes. These are datatypes whose definitions are

based on themselves. For instance, we could define a list datatype as:

data List a = Nil
            | Cons a (List a)

In this definition, we have defined what it means to be of type List a. We say that a

list is either empty (Nil) or it’s the Cons of a value of type a and another value of type

List a. This is almost identical to the actual definition of the list datatype in Haskell,

except that it uses special syntax where [] corresponds to Nil and : corresponds to

Cons. We can write our own length function for our lists as:

listLength Nil = 0

listLength (Cons x xs) = 1 + listLength xs

This function is slightly more complicated and uses recursion to calculate the

length of a List. The first line says that the length of an empty list (a Nil) is 0.

This much is obvious. The second line tells us how to calculate the length of a non-

empty list. A non-empty list must be of the form Cons x xs for some values of x

and xs. We know that xs is another list and we know that whatever the length of the

current list is, it’s the length of its tail (the value of xs) plus one (to account for x).

Thus, we apply the listLength function to xs and add one to the result. This gives

us the length of the entire list.
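The definitions above assemble into a runnable sketch (the three-element list in main is ours):

```haskell
-- The recursive List datatype and listLength from the text.
data List a = Nil
            | Cons a (List a)

listLength :: List a -> Int
listLength Nil         = 0
listLength (Cons _ xs) = 1 + listLength xs

main :: IO ()
main = print (listLength (Cons 'a' (Cons 'b' (Cons 'c' Nil))))
```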

Exercises

Exercise 4.8 Write functions listHead, listTail, listFoldl and listFoldr

which are equivalent to their Prelude twins, but function on our List datatype. Don’t

worry about exceptional conditions on the first two.

4.5.4 Binary Trees

We can define datatypes that are more complicated than lists. Suppose we want to

define a structure that looks like a binary tree. A binary tree is a structure that has a

single root node; each node in the tree is either a “leaf” or a “branch.” If it’s a leaf, it

holds a value; if it’s a branch, it holds a value and a left child and a right child. Each of

these children is another node. We can define such a data type as:

data BinaryTree a

= Leaf a

| Branch (BinaryTree a) a (BinaryTree a)

This says that a BinaryTree is either a Leaf, which holds an a, or a Branch with a left child (which is a

BinaryTree of as), a node value (which is an a), and a right child (which is also a BinaryTree of as). It is

simple to modify the listLength function so that instead of calculating the length

of lists, it calculates the number of nodes in a BinaryTree. Can you figure out how?

We can call this function treeSize. The solution is given below:

treeSize (Leaf x) = 1

treeSize (Branch left x right) =

1 + treeSize left + treeSize right

Here, we say that the size of a leaf is 1 and the size of a branch is the size of its left

child, plus the size of its right child, plus one.
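For instance, on a small tree (the tree built in main is our own example):

```haskell
-- BinaryTree and treeSize from the text, applied to a five-node tree.
data BinaryTree a
  = Leaf a
  | Branch (BinaryTree a) a (BinaryTree a)

treeSize :: BinaryTree a -> Int
treeSize (Leaf _) = 1
treeSize (Branch left _ right) =
  1 + treeSize left + treeSize right

main :: IO ()
main = print (treeSize (Branch (Leaf 1) 2 (Branch (Leaf 3) 4 (Leaf (5 :: Int)))))
```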

Exercises

Exercise 4.9 Write a function elements which returns the elements in a BinaryTree

in a bottom-up, left-to-right manner (i.e., the first element returned in the left-most leaf,

followed by its parent’s value, followed by the other child’s value, and so on). The re-

sult type should be a normal Haskell list.

Exercise 4.10 Write a fold function for BinaryTrees and rewrite elements in

terms of it (call the new one elements2).

4.5.5 Enumerated Sets

You can also use datatypes to define things like enumerated sets, for instance, a type

which can only have a constrained number of values. We could define a color type:

data Color

= Red

| Orange

| Yellow

| Green

| Blue

| Purple

| White

| Black

This would be sufficient to deal with simple colors. Suppose we were using this to

write a drawing program; we could then write a function to convert between a Color

and an RGB triple. We can write a colorToRGB function, as:

colorToRGB Red = (255,0,0)
colorToRGB Orange = (255,128,0)

colorToRGB Yellow = (255,255,0)

colorToRGB Green = (0,255,0)

colorToRGB Blue = (0,0,255)

colorToRGB Purple = (255,0,255)
colorToRGB White = (255,255,255)

colorToRGB Black = (0,0,0)
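As a quick check, here is an abridged, runnable sketch (only three of the colors; the main function is ours):

```haskell
-- Abridged version of the Color example from the text.
data Color = Red | Green | Blue

colorToRGB :: Color -> (Int, Int, Int)
colorToRGB Red   = (255, 0, 0)
colorToRGB Green = (0, 255, 0)
colorToRGB Blue  = (0, 0, 255)

main :: IO ()
main = print (colorToRGB Green)
```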

If we wanted also to allow the user to define his own custom colors, we could

change the Color datatype to something like:

data Color

= Red

| Orange

| Yellow

| Green

| Blue

| Purple

| White

| Black

| Custom Int Int Int -- R G B components

A final useful datatype defined in Haskell (from the Prelude) is the unit type. Its

definition is:

data () = ()

The only true value of this type is (). This is essentially the same as a void type in

a language like C or Java and will be useful when we talk about IO in Chapter 5.

We’ll dwell much more on data types in Sections 7.4 and 8.3.

4.6 Continuation Passing Style

There is a style of functional programming called “Continuation Passing Style” (also

simply “CPS”). The idea behind CPS is to pass around as a function argument what to

do next. I will handwave through an example which is too complex to write out at this

point and then give a real example, though one with less motivation.

Consider the problem of parsing. The idea here is that we have a sequence of

tokens (words, letters, whatever) and we want to ascribe structure to them. The task

of converting a string of Java tokens to a Java abstract syntax tree is an example of a


parsing problem. So is the task of taking an English sentence and creating a parse tree

(though the latter is quite a bit harder).

Suppose we’re parsing something like C or Java where functions take arguments

in parentheses. But for simplicity, assume they are not separated by commas. That

is, a function call looks like myFunction(x y z). We want to convert this into

something like a pair containing first the string “myFunction” and then a list with three

string elements: “x”, “y” and “z”.

The general approach to solving this would be to write a function which parses

function calls like this one. First it would look for an identifier (“myFunction”), then

for an open parenthesis, then for zero or more identifiers, then for a close parenthesis.

One way to do this would be to have two functions:

parseFunction ::

[Token] -> Maybe ((String, [String]), [Token])

parseIdentifier ::

[Token] -> Maybe (String, [Token])

If parseFunction fails, it returns Nothing; if it succeeds, it returns the pair described earlier, together with whatever is left after parsing the

function. Similarly, parseIdentifier will parse one of the arguments. If it returns

Nothing, then it’s not an argument; if it returns Just something, then that something

is the argument paired with the rest of the tokens.

What the parseFunction function would do is to parse an identifier. If this

fails, it fails itself. Otherwise, it continues and tries to parse an open parenthesis. If that

succeeds, it repeatedly calls parseIdentifier until that fails. It then tries to parse

a close parenthesis. If that succeeds, then it’s done. Otherwise, it fails.

There is, however, another way to think about this problem. The advantage to this

solution is that functions no longer need to return the remaining tokens (which tends to

get ugly). Instead of the above, we write functions:

parseFunction ::

[Token] -> ((String, [String]) -> [Token] -> a) ->

([Token] -> a) -> a

parseIdentifier ::

[Token] -> (String -> [Token] -> a) ->

([Token] -> a) -> a

Each of these functions now takes the list of tokens and two continuations. The first continuation is what to do when you succeed. The

second continuation is what to do if you fail. What parseIdentifier does, then,

is try to read an identifier. If this succeeds, it calls the first continuation with that

identifier and the remaining tokens as arguments. If reading the identifier fails, it calls

the second continuation with all the tokens.

The parseFunction function, in turn, needs to parse an identifier followed by an

open parenthesis, zero or more identifiers and a close parenthesis. Thus, the first thing

it does is call parseIdentifier. The first argument it gives is the list of tokens.

The first continuation (which is what parseIdentifier should do if it succeeds)

is in turn a function which will look for an open parenthesis, zero or more arguments

and a close parenthesis. The second argument (the failure argument) is just going to be

the failure function given to parseFunction.

Now, we simply need to define this function which looks for an open parenthesis,

zero or more arguments and a close parenthesis. This is easy. We write a function which

looks for the open parenthesis and then calls parseIdentifier with a success

continuation that looks for more identifiers, and a “failure” continuation which looks

for the close parenthesis (note that this failure doesn’t really mean failure – it just means

there are no more arguments left).

I realize this discussion has been quite abstract. I would willingly give code for all

this parsing, but it is perhaps too complex at the moment. Instead, consider the problem

of folding across a list. We can write a CPS fold as:

cfold' f z [] = z
cfold' f z (x:xs) = f x z (\y -> cfold' f y xs)

In this code, cfold’ takes a function f which takes three arguments, slightly dif-

ferent from the standard folds. The first is the current list element, x, the second is the

accumulated element, z, and the third is the continuation: basically, what to do next.

We can write a wrapper function for cfold’ that will make it behave more like a

normal fold:

cfold f z l = cfold' (\x t g -> f x (g t)) z l

We can test that this function behaves as we expect:

CPS> cfold (+) 0 [1,2,3,4]
10
CPS> cfold (:) [] [1,2,3]
[1,2,3]

One thing that’s nice about formulating cfold in terms of the helper function

cfold’ is that we can use the helper function directly. This enables us to change, for

instance, the evaluation order of the fold very easily:

CPS> cfold' (\x t g -> x : g t) [] [1..10]
[1,2,3,4,5,6,7,8,9,10]

CPS> cfold’ (\x t g -> g (x : t)) [] [1..10]

[10,9,8,7,6,5,4,3,2,1]


The only difference between these calls to cfold’ is whether we call the continu-

ation before or after constructing the list. As it turns out, this slight difference changes

the behavior for being like foldr to being like foldl. We can evaluate both of these

calls as follows (let f be the folding function):

==> cfold’ f [] [1,2,3]

==> f 1 [] (\y -> cfold’ f y [2,3])

==> 1 : ((\y -> cfold’ f y [2,3]) [])

==> 1 : (cfold’ f [] [2,3])

==> 1 : (f 2 [] (\y -> cfold’ f y [3]))

==> 1 : (2 : ((\y -> cfold’ f y [3]) []))

==> 1 : (2 : (cfold’ f [] [3]))

==> 1 : (2 : (f 3 [] (\y -> cfold’ f y [])))

==> 1 : (2 : (3 : (cfold’ f [] [])))

==> 1 : (2 : (3 : []))

==> [1,2,3]

==> cfold’ f [] [1,2,3]

==> (\x t g -> g (x:t)) 1 [] (\y -> cfold’ f y [2,3])

==> (\g -> g [1]) (\y -> cfold’ f y [2,3])

==> (\y -> cfold’ f y [2,3]) [1]

==> cfold’ f [1] [2,3]

==> (\x t g -> g (x:t)) 2 [1] (\y -> cfold’ f y [3])

==> cfold’ f (2:[1]) [3]

==> cfold’ f [2,1] [3]

==> (\x t g -> g (x:t)) 3 [2,1] (\y -> cfold’ f y [])

==> cfold’ f (3:[2,1]) []

==> [3,2,1]

Continuation passing style is a powerful abstraction, though it can be difficult to master. We will revisit the topic more thoroughly later in the book.

Exercises

Exercise 4.11 Test whether the CPS-style fold mimics either of foldr and foldl.

If not, where is the difference?

Exercise 4.12 Write map and filter using continuation passing style.

Chapter 5

Basic Input/Output

It is difficult to integrate operations like input/output into a pure functional language. Before we give the solution,

let’s take a step back and think about the difficulties inherent in such a task.

5.1 The RealWorld

Any IO library should provide a host of functions, containing (at a minimum) operations like:

• print a string to the screen
• read a string from the keyboard
• write data to a file
• read data from a file

There are two issues here. Let’s first consider the initial two examples and think

about what their types should be. Certainly the first operation (I hesitate to call it a

“function”) should take a String argument and produce something, but what should it

produce? It could produce a unit (), since there is essentially no return value from

printing a string. The second operation, similarly, should return a String, but it doesn’t

seem to require an argument.

We want both of these operations to be functions, but they are by definition not

functions. The item that reads a string from the keyboard cannot be a function, as it

will not return the same String every time. And if the first function simply returns ()

every time, there should be no problem with replacing it with a function f = (),

due to referential transparency. But clearly this does not have the desired effect.

In a sense, the reason that these items are not functions is that they interact with the

“real world.” Their values depend directly on the real world. Supposing we had a type

RealWorld, we might write these functions as having type:

printAString :: RealWorld -> String -> RealWorld
readAString :: RealWorld -> (RealWorld, String)

That is, printAString takes a current state of the world and a string to print;

it then modifies the state of the world in such a way that the string is now printed and

returns this new value. Similarly, readAString takes a current state of the world

and returns a new state of the world, paired with the String that was typed.

This would be a possible way to do IO, though it is more than somewhat unwieldy.

In this style (assuming an initial RealWorld state were an argument to main), our

“Name.hs” program from Section 3.8 would look something like:

main rW =

let rW’ = printAString rW "Please enter your name: "

(rW’’,name) = readAString rW’

in printAString rW’’

("Hello, " ++ name ++ ", how are you?")

This is not only hard to read, but prone to error, if you accidentally use the wrong

version of the RealWorld. It also doesn’t model the fact that the program below makes

no sense:

main rW =

let rW’ = printAString rW "Please enter your name: "

(rW’’,name) = readAString rW’

in printAString rW’ -- OOPS!

("Hello, " ++ name ++ ", how are you?")

In this program, the reference to rW’’ on the last line has been changed to a ref-

erence to rW’. It is completely unclear what this program should do. Clearly, it must

read a string in order to have a value for name to be printed. But that means that the

RealWorld has been updated. However, then we try to ignore this update by using an

“old version” of the RealWorld. There is clearly something wrong happening here.

Suffice it to say that doing IO operations in a pure lazy functional language is not

trivial.

5.2 Actions

The breakthrough for solving this problem came when Phil Wadler realized that mon-

ads would be a good way to think about IO computations. In fact, monads are able to

express much more than just the simple operations described above; we can use them

to express a variety of constructions like concurrency, exceptions, IO, non-determinism

and much more. Moreover, there is nothing special about them; they can be defined

within Haskell with no special handling from the compiler (though compilers often

choose to optimize monadic operations).


As pointed out before, we cannot think of things like “print a string to the screen” or

“read data from a file” as functions, since they are not (in the pure mathematical sense).

Therefore, we give them another name: actions. Not only do we give them a special

name, we give them a special type. One particularly useful action is putStrLn, which

prints a string to the screen. This action has type:

putStrLn :: String -> IO ()

This means that putStrLn takes a String and produces an action (that is what the IO means).

Furthermore, when this action is evaluated (or “run”), the result will have type ().

(Strictly speaking, actions are computations in the IO monad, but we will gloss over this for now.) Another primitive action reads a line from standard input:

getLine :: IO String

This means that getLine is an IO action that, when run, will have type String.

The question immediately arises: “how do you ‘run’ an action?”. This is something

that is left up to the compiler. You cannot actually run an action yourself; instead, a

program is, itself, a single action that is run when the compiled program is executed.

Thus, the compiler requires that the main function have type IO (), which means that

it is an IO action that returns nothing. The compiled code then executes this action.

However, while you are not allowed to run actions yourself, you are allowed to

combine actions. In fact, we have already seen one way to do this using the do

notation (how to really do this will be revealed in Chapter 9). Let’s consider the original

name program:

main = do

hSetBuffering stdin LineBuffering

putStrLn "Please enter your name: "

name <- getLine

putStrLn ("Hello, " ++ name ++ ", how are you?")

The do notation sequences these actions; moreover, the <- notation is a way to get the value out of an action. So, in this program,

we’re sequencing four actions: setting buffering, a putStrLn, a getLine and an-

other putStrLn. The putStrLn action has type String → IO (), so we provide it a

String, so the fully applied action has type IO (). This is something that we are allowed

to execute.

The getLine action has type IO String, so it is okay to execute it directly. How-

ever, in order to get the value out of the action, we write name <- getLine, which

basically means “run getLine, and put the results in the variable called name.”


Normal Haskell constructions like if/then/else and case/of can be used within the

do notation, but you need to be somewhat careful. For instance, in our “guess the

number” program, we have:

do ...

if (read guess) < num

then do putStrLn "Too low!"

doGuessing num

else if read guess > num

then do putStrLn "Too high!"

doGuessing num

else do putStrLn "You Win!"

If we think about how the if/then/else construction works, it essentially takes three

arguments: the condition, the “then” branch, and the “else” branch. The condition

needs to have type Bool, and the two branches can have any type, provided that they

have the same type. The type of the entire if/then/else construction is then the type of

the two branches.

In the outermost comparison, we have (read guess) < num as the condition.

This clearly has the correct type. Let’s just consider the “then” branch. The code here

is:

do putStrLn "Too low!"
   doGuessing num

Here, we are sequencing two actions: putStrLn and doGuessing. The first

has type IO (), which is fine. The second also has type IO (), which is fine. The type

result of the entire computation is precisely the type of the final computation. Thus, the

type of the “then” branch is also IO (). A similar argument shows that the type of the

“else” branch is also IO (). This means the type of the entire if/then/else construction

is IO (), which is just what we want.

Win!"”. This is somewhat overly verbose. In fact, “else putStrLn

"You Win!"” would have been sufficient, since do is only necessary to

sequence actions. Since we have only one action here, it is superfluous.

You might be tempted to think of do as “do this action, then another one,” and hence write something like:

do if (read guess) < num

then putStrLn "Too low!"

doGuessing num

else ...


Here, since we didn’t repeat the do, the compiler doesn’t know that the putStrLn

and doGuessing calls are supposed to be sequenced, and the compiler will think

you’re trying to call putStrLn with three arguments: the string, the function doGuessing

and the integer num. It will certainly complain (though the error may be somewhat dif-

ficult to comprehend at this point).

We can write the same doGuessing function using a case statement. To do this,

we first introduce the Prelude function compare, which takes two values of the same

type (in the Ord class) and returns one of GT, LT, EQ, depending on whether the first

is greater than, less than or equal to the second.
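Before using it inside doGuessing, we can check compare on its own (a small example added here, not part of the tutorial’s program):

```haskell
-- compare returns one of LT, EQ, GT for any two values in the Ord class.
main :: IO ()
main = do
  print (compare (3 :: Int) 5)
  print (compare (5 :: Int) 5)
  print (compare (7 :: Int) 5)
```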

doGuessing num = do

putStrLn "Enter your guess:"

guess <- getLine

case compare (read guess) num of

LT -> do putStrLn "Too low!"

doGuessing num

GT -> do putStrLn "Too high!"

doGuessing num

EQ -> putStrLn "You Win!"

Here, again, the dos after the ->s are necessary on the first two options, because

we are sequencing actions.

If you’re used to programming in an imperative language like C or Java, you might

think that return will exit you from the current function. This is not so in Haskell.

In Haskell, return simply takes a normal value (for instance, one of type Int) and

makes it into an action that returns the given value (the action then has type IO Int).
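A tiny example (ours, not from the text) makes this concrete: return wraps a value into an action, and <- gets it back out.

```haskell
main :: IO ()
main = do
  x <- return (5 :: Int)  -- return makes 5 into an IO action; <- extracts it
  print (x + 1)
```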

In particular, in an imperative language, you might write this function as:

print "Enter your guess:";

int guess = atoi(readLine());

if (guess == num) {

print "You win!";

return ();

}

if (guess < num) {

print "Too low!";

doGuessing(num);

} else {

print "Too high!";

doGuessing(num);

}

}


Here, because we have the return () in the first if match, we expect the code

to exit there (and in most imperative languages, it does). However, the equivalent code

in Haskell, which might look something like:

doGuessing num = do

putStrLn "Enter your guess:"

guess <- getLine

case compare (read guess) num of

EQ -> do putStrLn "You win!"

return ()

if (read guess < num)

then do print "Too low!";

doGuessing num

else do print "Too high!";

doGuessing num

will not behave as you expect. First of all, if you guess correctly, it will first print “You

win!,” but it won’t exit, and it will check whether guess is less than num. Of course

it is not, so the else branch is taken, and it will print “Too high!” and then ask you to

guess again.

On the other hand, if you guess incorrectly, it will try to evaluate the case statement

and get either LT or GT as the result of the compare. In either case, it won’t have a

pattern that matches, and the program will fail immediately with an exception.

Exercises

Exercise 5.1 Write a program that asks the user for his or her name. If the name is

one of Simon, John or Phil, tell the user that you think Haskell is a great programming

language. If the name is Koen, tell them that you think debugging Haskell is fun (Koen

Claessen is one of the people who works on Haskell debugging); otherwise, tell the user

that you don’t know who he or she is.

Write two different versions of this program, one using if statements, the other using a

case statement.

5.3 The IO Library

The IO Library (available by importing the IO module) contains many definitions, the

most common of which are listed below:

data IOMode = ReadMode | WriteMode
            | AppendMode | ReadWriteMode

openFile :: FilePath -> IOMode -> IO Handle
hClose :: Handle -> IO ()
hIsEOF :: Handle -> IO Bool
hGetChar :: Handle -> IO Char


hGetLine :: Handle -> IO String

hGetContents :: Handle -> IO String

getChar :: IO Char

getLine :: IO String

getContents :: IO String

hPutChar :: Handle -> Char -> IO ()
hPutStr :: Handle -> String -> IO ()

hPutStrLn :: Handle -> String -> IO ()

putStr :: String -> IO ()

putStrLn :: String -> IO ()

readFile :: FilePath -> IO String
writeFile :: FilePath -> String -> IO ()

bracket ::

IO a -> (a -> IO b) -> (a -> IO c) -> IO c

NOTE The type FilePath is a type synonym for String. That is,

there is no difference between FilePath and String. So, for instance,

the readFile function takes a String (the file to read) and returns an

action that, when run, produces the contents of that file. See Section 8.1

for more about type synonyms.

Most of these functions are self-explanatory. The openFile and hClose func-

tions open and close a file, respectively, using the IOMode argument as the mode for

opening the file. hIsEOF tests for end-of-file. hGetChar and hGetLine read

a character or line (respectively) from a file. hGetContents reads the entire file.

The getChar, getLine and getContents variants read from standard input.

hPutChar prints a character to a file; hPutStr prints a string; and hPutStrLn

prints a string with a newline character at the end. The variants without the h prefix

work on standard output. The readFile and writeFile functions read an entire

file without having to open it first.
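As a small illustration of that last point, the following sketch (the file name test.txt is my own example) writes a file and reads it back with no explicit handle management:

```haskell
main :: IO ()
main = do
  -- writeFile creates (or truncates) the file and writes the whole string
  writeFile "test.txt" "hello\nworld\n"
  -- readFile returns the file's contents as a String
  contents <- readFile "test.txt"
  putStr contents
```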

The bracket function is used to perform actions safely. Consider a function that

opens a file, writes a character to it, and then closes the file. When writing such a

function, one needs to be careful to ensure that, if there were an error at some point,

the file is still successfully closed. The bracket function makes this easy. It takes


three arguments: The first is the action to perform at the beginning. The second is the

action to perform at the end, regardless of whether there’s an error or not. The third is

the action to perform in the middle, which might result in an error. For instance, our

character-writing function might look like:

writeChar fp c =

bracket

(openFile fp WriteMode)

hClose

(\h -> hPutChar h c)

This will open the file, write the character and then close the file. However, if

writing the character fails, hClose will still be executed, and the exception will be

reraised afterwards. That way, you don’t need to worry too much about catching the

exceptions and about closing all of your handles.

5.4 A File Reading Program

We can write a simple program that allows a user to read and write files. The interface

is admittedly poor, and it does not catch all errors (try reading a non-existent file).

Nevertheless, it should give a fairly complete example of how to use IO. Enter the

following code into “FileRead.hs,” and compile/run:

module Main

where

import IO

main = do

hSetBuffering stdin LineBuffering

doLoop

doLoop = do

putStrLn "Enter a command rFN wFN or q to quit:"

command <- getLine

case command of
  'q':_ -> return ()
  'r':filename -> do putStrLn ("Reading " ++ filename)
                     doRead filename
                     doLoop
  'w':filename -> do putStrLn ("Writing " ++ filename)
                     doWrite filename
                     doLoop
  _ -> doLoop


doRead filename =

bracket (openFile filename ReadMode) hClose

(\h -> do contents <- hGetContents h

putStrLn "The first 100 chars:"

putStrLn (take 100 contents))

doWrite filename = do

putStrLn "Enter text to go into the file:"

contents <- getLine

bracket (openFile filename WriteMode) hClose

(\h -> hPutStrLn h contents)

What does this program do? First, it issues a short string of instructions and reads

a command. It then performs a case switch on the command and checks first to see if

the first character is a ‘q.’ If it is, it returns a value of unit type.

NOTE The return function takes a value of type

a and returns an action of type IO a. Thus, the type of return () is

IO ().

If the first character of the command wasn’t a ‘q,’ the program checks to see if it

was an ’r’ followed by some string that is bound to the variable filename. It then

tells you that it’s reading the file, does the read and runs doLoop again. The check

for ‘w’ is nearly identical. Otherwise, it matches _, the wildcard character, and loops

to doLoop.

The doRead function uses the bracket function to make sure there are no prob-

lems reading the file. It opens a file in ReadMode, reads its contents and prints the first

100 characters (the take function takes an integer n and a list and returns the first n

elements of the list).

The doWrite function asks for some text, reads it from the keyboard, and then

writes it to the file specified.

NOTE Both doRead and doWrite could have been made simpler

by using readFile and writeFile, but they were written in the ex-

tended fashion to show how the more complex functions are used.

The only major problem with this program is that it will die if you try to read a file

that doesn’t already exist or if you specify some bad filename like *\^#_@. You may

think that the calls to bracket in doRead and doWrite should take care of this,

but they don’t. They only catch exceptions within the main body, not within the startup

or shutdown functions (openFile and hClose, in these cases). We would need to

catch exceptions raised by openFile, in order to make this complete. We will do this

when we talk about exceptions in more detail in Section 10.1.


Exercises

Exercise 5.2 Write a program that first asks whether the user wants to read from a file,

write to a file or quit. If the user responds quit, the program should exit. If he responds

read, the program should ask him for a file name and print that file to the screen (if the

file doesn’t exist, the program may crash). If he responds write, it should ask him for a

file name and then ask him for text to write to the file, with “.” signaling completion.

All but the “.” should be written to the file.

For example, running this program might produce:

read

Enter a file name to read:

foo

...contents of foo...

Do you want to [read] a file, [write] a file or [quit]?

write

Enter a file name to write:

foo

Enter text (dot on a line by itself to end):

this is some

text for

foo

.

Do you want to [read] a file, [write] a file or [quit]?

read

Enter a file name to read:

foo

this is some

text for

foo

Do you want to [read] a file, [write] a file or [quit]?

read

Enter a file name to read:

foof

Sorry, that file does not exist.

Do you want to [read] a file, [write] a file or [quit]?

blech

I don’t understand the command blech.

Do you want to [read] a file, [write] a file or [quit]?

quit

Goodbye!

Chapter 6

Modules

In Haskell, program subcomponents are divided into modules. Each module sits in its

own file and the name of the module should match the name of the file (without the

“.hs” extension, of course), if you wish to ever use that module in a larger program.

For instance, suppose I am writing a game of poker. I may wish to have a separate

module called “Cards” to handle the generation of cards, the shuffling and the dealing

functions, and then use this “Cards” module in my “Poker” modules. That way, if I

ever go back and want to write a blackjack program, I don’t have to rewrite all the code

for the cards; I can simply import the old “Cards” module.

6.1 Exports

Suppose as suggested we are writing a cards module. I have left out the implementation

details, but suppose the skeleton of our module looks something like this:

module Cards

where

data Card = ...
data Deck = ...

newDeck = ...

shuffle = ...

deal :: Deck -> Int -> [Card]

deal deck n = dealHelper deck n []


dealHelper = ...

In this code, the function deal calls a helper function dealHelper. The im-

plementation of this helper function is very dependent on the exact data structures you

used for Card and Deck so we don’t want other people to be able to call this function.

In order to do this, we create an export list, which we insert just after the module name

declaration:

module Cards ( Card(),
               Deck(),
               newDeck,
               shuffle,
               deal
             )

where

...

Here, we have specified exactly what functions the module exports, so people who

use this module won’t be able to access our dealHelper function. The () after

Card and Deck specify that we are exporting the type but none of the constructors.

For instance, if our definition of Card were:

data Card = Card Suit Face

data Suit = Hearts

| Spades

| Diamonds

| Clubs

data Face = Jack

| Queen

| King

| Ace

| Number Int

Then users of our module would be able to use things of type Card, but wouldn’t

be able to construct their own Cards and wouldn’t be able to extract any of the suit/face

information stored in them.

If we wanted users of our module to be able to access all of this information, we

would have to specify it in the export list:

module Cards ( Card(Card),
               Suit(Hearts,Spades,Diamonds,Clubs),
               Face(Jack,Queen,King,Ace,Number),
               ...
             )


where

...

This can get frustrating if you’re exporting datatypes with many constructors, so if

you want to export them all, you can simply write (..), as in:

module Cards ( Card(..),
               Suit(..),
               Face(..),
               ...
             )

where

...

6.2 Imports

There are a few idiosyncrasies in the module import system, but as long as you stay

away from the corner cases, you should be fine. Suppose, as before, you wrote a

module called “Cards” which you saved in the file “Cards.hs”. You are now writing

your poker module and you want to import all the definitions from the “Cards” module.

To do this, all you need to do is write:

module Poker

where

import Cards

This will enable to you use any of the functions, types and constructors exported

by the module “Cards”. You may refer to them simply by their name in the “Cards”

module (as, for instance, newDeck), or you may refer to them explicitly as imported

from “Cards” (as, for instance, Cards.newDeck). It may be the case that two modules

export functions or types of the same name. In these cases, you can import one of

the modules qualified which means that you would no longer be able to simply use

the newDeck format but must use the longer Cards.newDeck format, to remove

ambiguity. If you wanted to import “Cards” in this qualified form, you would write:

import qualified Cards

Another way to avoid problems with overlapping definitions is to import

that we wanted was newDeck; we could import only this function by writing:

import Cards (newDeck)


On the other hand, suppose we knew that the deal function overlapped with

another module, but that we didn’t need the “Cards” version of that function. We could

hide the definition of deal and import everything else by writing:

import Cards hiding (deal)

Finally, suppose we want to import “Cards” as a qualified module, but don’t want

to have to type Cards. out all the time and would rather just type, for instance, C. –

we could do this using the as keyword:

import qualified Cards as C

These options can be mixed and matched – you can give explicit import lists on

qualified/as imports, for instance.

6.3 Hierarchical Imports

Though technically not part of the Haskell 98 standard, most Haskell compilers support

hierarchical imports. This was designed to get rid of clutter in the directories in which

modules are stored. Hierarchical imports allow you to specify (to a certain degree)

where in the directory structure a module exists. For instance, if you have a “haskell”

directory on your computer and this directory is in your compiler’s path (see your

compiler notes for how to set this; in GHC it’s “-i”, in Hugs it’s “-P”), then you can

specify module locations in subdirectories to that directory.

Suppose instead of saving the “Cards” module in your general haskell directory,

you created a directory specifically for it called “Cards”. The full path of the Cards.hs

file is then haskell/Cards/Cards.hs (or, for Windows haskell\Cards\Cards.hs).

If you then change the name of the Cards module to “Cards.Cards”, as in:

module Cards.Cards(...)

where

...

You could then import it in any module, regardless of this module’s directory, as:

import Cards.Cards

If you start importing these modules qualified, I highly recommend using the as

keyword to shorten the names, so you can write:

import qualified Cards.Cards as Cards

instead of:

import qualified Cards.Cards

6.4 Literate Versus Non-Literate

The idea of literate programming is a relatively simple one, but took quite a while to

become popularized. When we think about programming, we think about the code

being the default mode of entry and comments being secondary. That is, we write code

without any special annotation, but comments are annotated with either -- or {- ...

-}. Literate programming swaps these preconceptions.

There are two types of literate programs in Haskell; the first uses so-called Bird-

scripts and the second uses LATEX-style markup. Each will be discussed individually.

No matter which you use, literate scripts must have the extension lhs instead of hs to

tell the compiler that the program is written in a literate style.

6.4.1 Bird-scripts

In a Bird-style literate program, comments are default and code is introduced with a

leading greater-than sign (“>”). Everything else remains the same. For example, our

Hello World program would be written in Bird-style as:

> module Main
>     where

> main = putStrLn "Hello World"

Note that the spaces between the lines of code and the “comments” are necessary

(your compiler will probably complain if you are missing them). When compiled or

loaded in an interpreter, this program will have the exact same properties as the non-

literate version from Section 3.4.


6.4.2 LaTeX-scripts

LATEX is a text-markup language very popular in the academic community for publish-

ing. If you are unfamiliar with LATEX, you may not find this section terribly useful.

Again, a literate Hello World program written in LATEX-style would look like:

\begin{code}

module Main

where

\end{code}

\begin{code}

main = putStrLn "Hello World"

\end{code}

Chapter 7

Advanced Features

7.1 Sections and Infix Operators

We’ve already seen how to double the values of elements in a list using map:

Prelude> map (2*) [1,2,3,4]
[2,4,6,8]
Prelude> map (*2) [1,2,3,4]
[2,4,6,8]
Prelude> map (+5) [1,2,3,4]
[6,7,8,9]

Prelude> map (/2) [1,2,3,4]

[0.5,1.0,1.5,2.0]

Prelude> map (2/) [1,2,3,4]

[2.0,1.0,0.666667,0.5]

You might be tempted to try to subtract values from elements in a list by mapping

-2 across a list. This won’t work, though, because while the + in +2 is parsed as

the standard plus operator (as there is no ambiguity), the - in -2 is interpreted as the

unary minus, not the binary minus. Thus -2 here is the number −2, not the function

λx.x − 2.
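The standard workaround (the Prelude function subtract exists for exactly this reason) is:

```haskell
-- subtract 2 is the function \x -> x - 2, written without the
-- ambiguity of the "-" sign
example :: [Integer]
example = map (subtract 2) [5,6,7]  -- [3,4,5]
```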

In general, these are called sections. For binary infix operators (like +), we can

cause the function to become prefix by enclosing it in parentheses. For example:


Prelude> (+) 5 3

8

Prelude> (-) 5 3

2

Additionally, we can provide either of its arguments to make a section. For example:

Prelude> (+5) 3

8

Prelude> (/3) 6

2.0

Prelude> (3/) 6

0.5

Non-infix functions can be made infix by enclosing them in backquotes (“`”). For

example:

Prelude> (+2) `map` [1..10]
[3,4,5,6,7,8,9,10,11,12]
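Another common case (my example, not the tutorial's) is using div infix, which reads more naturally than prefix application:

```
Prelude> 10 `div` 3
3
Prelude> div 10 3
3
```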

7.2 Local Declarations

Recall from Section 3.5 that there are many computations which require using the

result of the same computation in multiple places in a function. There, we considered

the function for computing the roots of a quadratic polynomial:

roots a b c =

((-b + sqrt(b*b - 4*a*c)) / (2*a),

(-b - sqrt(b*b - 4*a*c)) / (2*a))

In addition to the let bindings introduced there, we can do this using a where clause.

where clauses come immediately after function definitions and introduce a new level

of layout (see Section 7.11). We write this as:

roots a b c =

((-b + det) / (2*a), (-b - det) / (2*a))

where det = sqrt(b*b-4*a*c)
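Trying this out on a concrete polynomial (my numbers): x² − 3x + 2 factors as (x − 1)(x − 2), so its roots are 2 and 1:

```
Prelude> roots 1 (-3) 2
(2.0,1.0)
```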

Any values defined in a where clause shadow any other values with the same name.

For instance, if we had the following code block:


roots a b c =

((-b + det) / (2*a), (-b - det) / (2*a))

where det = sqrt(b*b-4*a*c)

f _ = det

det = "Hello World"

The value of roots doesn’t notice the top-level declaration of det, since it is

shadowed by the local definition (the fact that the types don’t match doesn’t matter

either). Furthermore, since f cannot “see inside” of roots, the only thing it knows

about det is what is available at the top level, which is the string “Hello World.” Thus,

f is a function which takes any argument to that string.

Where clauses can contain any number of subexpressions, but they must be aligned

for layout. For instance, we could also pull out the 2*a computation and get the

following code:

roots a b c =

((-b + det) / (a2), (-b - det) / (a2))

where det = sqrt(b*b-4*a*c)

a2 = 2*a

Sometimes it is more convenient to put the local definitions before the actual expression

of the function. This can be done by using let/in clauses. We have already seen let

clauses; where clauses are virtually identical to their let clause cousins except for their

placement. The same roots function can be written using let as:

roots a b c =

let det = sqrt (b*b - 4*a*c)

a2 = 2*a

in ((-b + det) / a2, (-b - det) / a2)


These two types of clauses can be mixed (i.e., you can write a function which has

both a let clause and a where clause). This is strongly advised against, as it tends to

make code difficult to read. However, if you choose to do it, values in the let clause

shadow those in the where clause. So if you define the function:


f x =

let y = x+1

in y

where y = x+2

The value of f 5 is 6, not 7. Of course, I plead with you to never ever write

code that looks like this. No one should have to remember this rule, and shadowing

where-defined values in a let clause only makes your code difficult to understand.

In general, whether you should use let clauses or where clauses is largely a matter

of personal preference. Usually, the names you give to the subexpressions should be

sufficiently expressive that without reading their definitions any reader of your code

should be able to figure out what they do. In this case, where clauses are probably

more desirable because they allow the reader to see immediately what a function does.

However, in real life, values are often given cryptic names. In which case let clauses

may be better. Either is probably okay, though I think where clauses are more common.

7.3 Partial Application

Partial application is when you take a function which takes n arguments and you supply

it with < n of them. When discussing sections in Section 7.1, we saw a form of

“partial application” in which functions like + were partially applied. For instance, in

the expression map (+1) [1,2,3], the section (+1) is a partial application of +.

This is because + really takes two arguments, but we’ve only given it one.

Partial application is very common in function definitions and sometimes goes by

the name “eta reduction.” For instance, suppose we are writing a function lcaseString

which converts a whole string into lower case. We could write this as:

lcaseString s = map toLower s

Here, there is no partial application (though you could argue that applying no argu-

ments to toLower could be considered partial application). However, we notice that

the application of s occurs at the end of both lcaseString and of map toLower.

In fact, we can remove it by performing eta reduction, to get:

lcaseString = map toLower

Now, we have a partial application of map: it expects a function and a list, but

we’ve only given it the function.

This is all related to the type of map, which is (a → b) → ([a] → [b]), when

parentheses are all included. In our case, toLower is of type Char → Char. Thus, if

we supply this function to map, we get a function of type [Char] → [Char], as desired.

Now, consider the task of converting a string to lowercase and removing all non-letter

characters. We might write this as:

lcaseLetters s = map toLower (filter isAlpha s)


But note that we can actually write this in terms of function composition:

lcaseLetters = map toLower . filter isAlpha

Writing functions in this style is very common among advanced Haskell users. In

fact it has a name: point-free programming (not to be confused with pointless program-

ming). It is called point free because in the original definition of lcaseLetters, we

can think of the value s as a point on which the function is operating. By removing the

point from the function definition, we have a point-free function.

A function similar to (.) is ($). Whereas (.) is function composition, ($) is

function application. The definition of ($) from the Prelude is very simple:

f $ x = f x

However, this function is given very low fixity, which means that it can be used to

replace parentheses. For instance, we might write a function:

foo = putStrLn (show (1+1))

However, using the function application function, we can rewrite this as:

foo = putStrLn $ show $ 1+1

This moderately resembles the function composition syntax. The ($) function is

also useful when combined with other infix functions. For instance, we cannot write:

putStrLn "5+3=" ++ show (5+3)

because this parses as (putStrLn "5+3=") ++ (show (5+3)), which

makes no sense. However, we can fix this by writing instead:

putStrLn $ "5+3=" ++ show (5+3)

Consider now the task of extracting from a list of tuples all the ones whose first

component is greater than zero. One way to write this would be:

fstGt0 l = filter (\ (a,b) -> a>0) l


Now, we can rewrite the lambda function to use the fst function instead of the

pattern matching:

fstGt0 l = filter (\x -> fst x > 0) l

Now, we can use function composition between fst and > to get:

fstGt0 = filter ((>0) . fst)

This definition is simultaneously shorter and easier to understand than the original.

We can clearly see exactly what it is doing: we’re filtering a list by checking whether

something is greater than zero. What are we checking? The fst element.

While converting to point free style often results in clearer code, this is of course

not always the case. For instance, converting the following map to point free style

yields something nearly uninterpretable:

foo = map (sqrt . (3+) . (4*) . (^2))

There are a handful of combinators defined in the Prelude which are useful for point

free programming:

• uncurry takes a function of type a → b → c and converts it into a function of

type (a, b) → c. This is useful, for example, when mapping across a list of pairs:

Prelude> map (uncurry (*)) [(1,2),(3,4),(5,6)]
[2,12,30]

• curry is the opposite of uncurry and takes a function of type (a, b) → c and

produces a function of type a → b → c.

• flip reverses the order of arguments to a function. That is, it takes a function of

type a → b → c and produces a function of type b → a → c. For instance, we

can sort a list in reverse order by using flip compare:


Prelude> List.sortBy compare [5,1,8,3]
[1,3,5,8]
Prelude> List.sortBy (flip compare) [5,1,8,3]
[8,5,3,1]

This is the same as writing:

Prelude> List.sortBy (\a b -> compare b a) [5,1,8,3]
[8,5,3,1]

only shorter.
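A few more quick checks of these combinators (my examples):

```
Prelude> curry fst 1 2
1
Prelude> uncurry (+) (3,4)
7
Prelude> flip (-) 2 10
8
```

Note in the last case that flip (-) 2 10 evaluates (-) 10 2, not (-) 2 10.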

Of course, not all functions can be written in point free style. For instance:

square x = x*x

Cannot be written in point free style, without some other combinators. For instance,

if we can define other functions, we can write:

pair x = (x,x)

square = uncurry (*) . pair

Exercises

Exercise 7.1 Convert the following functions into point-free style, if possible.

func1 x l = map (\y -> y*x) l

func2 f g l = filter f (map g l)

func3 f l = l ++ map f l

func4 l = map (\y -> y+2)
              (filter (\z -> z `elem` [1..10])
                      (5:l))

func5 f l = foldr (\x y -> f (y,x)) 0 l


7.4 Pattern Matching

Pattern matching is one of the most powerful features of Haskell (and most functional

programming languages). It is most commonly used in conjunction with case expres-

sions, which we have already seen in Section 3.5. Let’s return to our Color example

from Section 4.5. I’ll repeat the definition we already had for the datatype:

data Color

= Red

| Orange

| Yellow

| Green

| Blue

| Purple

| White

| Black

| Custom Int Int Int -- R G B components

deriving (Show,Eq)

We then want to write a function that will convert between something of type Color

and a triple of Ints, which correspond to the RGB values, respectively. Specifically, if

we see a Color which is Red, we want to return (255,0,0), since this is the RGB

value for red. So we write that (remember that piecewise function definitions are just

case statements):

colorToRGB Red = (255,0,0)

If we see Orange, we want to return (255,128,0); if we

see Yellow, we want to return (255,255,0), and so on. Finally, if we see a custom

color, which is comprised of three components, we want to make a triple out of these,

so we write:

colorToRGB Orange = (255,128,0)
colorToRGB Yellow = (255,255,0)

colorToRGB Green = (0,255,0)

colorToRGB Blue = (0,0,255)

colorToRGB Purple = (255,0,255)

colorToRGB White = (255,255,255)

colorToRGB Black = (0,0,0)

colorToRGB (Custom r g b) = (r,g,b)

Color> colorToRGB Yellow
(255,255,0)


What is happening is this: we create a value, call it x, which has value Yellow. We

then apply this to colorToRGB. We check to see if we can “match” x against Red.

This match fails because according to the definition of Eq Color, Red is not equal

to Yellow. We continue down the definitions of colorToRGB and try to match

Yellow against Orange. This fails, too. We then try to match Yellow against

Yellow, which succeeds, so we use this function definition, which simply returns

the value (255,255,0), as expected.

Suppose instead, we used a custom color:

Color> colorToRGB (Custom 50 200 100)
(50,200,100)

We apply the same matching process, failing on all values from Red to Black.

We then get to try to match Custom 50 200 100 against Custom r g b. We

can see that the Custom part matches, so then we go see if the subelements match. In

the matching, the variables r, g and b are essentially wild cards, so there is no trouble

matching r with 50, g with 200 and b with 100. As a “side-effect” of this matching, r

gets the value 50, g gets the value 200 and b gets the value 100. So the entire match

succeeded and we look at the definition of this part of the function and bundle up the

triple using the matched values of r, g and b.

We can also write a function to check to see if a Color is a custom color or not:

isCustomColor (Custom _ _ _) = True
isCustomColor _ = False

When we apply a value to isCustomColor, it tries to match that value against

Custom _ _ _. This match will succeed if the value is Custom x y z for any x,

y and z. The _ (underscore) character is a “wildcard” and will match anything, but will

not do the binding that would happen if you put a variable name there. If this match

succeeds, the function returns True; however, if this match fails, it goes on to the next

line, which will match anything and then return False.

For some reason we might want to define a function which tells us whether a given

color is “bright” or not, where my definition of “bright” is that one of its RGB compo-

nents is equal to 255 (admittedly an arbitrary definition, but it’s simply an example).

We could define this function as:

isBright = isBright' . colorToRGB
    where isBright' (255,_,_) = True
          isBright' (_,255,_) = True
          isBright' (_,_,255) = True
          isBright' _ = False

Let’s dwell on this definition for a second. The isBright function is the compo-

sition of our previously defined function colorToRGB and a helper function isBright’,

which tells us if a given RGB value is bright or not. We could replace the first line here

with isBright c = isBright' (colorToRGB c), but there is no need to explicitly

write the parameter here, so we don’t. Again, this function composition style

of programming takes some getting used to, so I will try to use it frequently in this

tutorial.

The isBright’ helper function takes the RGB triple produced by colorToRGB.

It first tries to match it against (255,_,_), which succeeds if the value has 255 in

its first position. If this match succeeds, isBright’ returns True and so does

isBright. The second and third line of definition check for 255 in the second and

third position in the triple, respectively. The fourth line, the fallthrough, matches ev-

erything else and reports it as not bright.

We might want to also write a function to convert between RGB triples and Colors.

We could simply stick everything in a Custom constructor, but this would defeat the

purpose; we want to use the Custom slot only for values which don’t match the pre-

defined colors. However, we don’t want to allow the user to construct custom colors

like (600,-40,99) since these are invalid RGB values. We could throw an error if such

a value is given, but this can be difficult to deal with. Instead, we use the Maybe

datatype. This is defined (in the Prelude) as:

data Maybe a = Nothing
             | Just a

The way we use this is as follows: our rgbToColor function returns a value of

type Maybe Color. If the RGB value passed to our function is invalid, we return

Nothing, which corresponds to a failure. If, on the other hand, the RGB value is

valid, we create the appropriate Color value and return Just that. The code to do this

is:

rgbToColor 255 0 0 = Just Red
rgbToColor 255 128 0 = Just Orange

rgbToColor 255 255 0 = Just Yellow

rgbToColor 0 255 0 = Just Green

rgbToColor 0 0 255 = Just Blue

rgbToColor 255 0 255 = Just Purple

rgbToColor 255 255 255 = Just White

rgbToColor 0 0 0 = Just Black

rgbToColor r g b =

if 0 <= r && r <= 255 &&

0 <= g && g <= 255 &&

0 <= b && b <= 255

then Just (Custom r g b)

else Nothing -- invalid RGB value

The first eight lines match the RGB arguments against the predefined values and,

if they match, rgbToColor returns Just the appropriate color. If none of these

matches, the last definition of rgbToColor matches the first argument against r, the


second against g and the third against b (which causes the side-effect of binding these

values). It then checks to see if these values are valid (each is greater than or equal to

zero and less than or equal to 255). If so, it returns Just (Custom r g b); if not,

it returns Nothing corresponding to an invalid color.

Using this, we can write a function that checks to see if a given RGB value is valid:

rgbIsValid r g b = rgbIsValid' (rgbToColor r g b)
    where rgbIsValid' (Just _) = True
          rgbIsValid' _ = False

Here, we compose the helper function rgbIsValid’ with our function rgbToColor.

The helper function checks to see if the value returned by rgbToColor is Just any-

thing (the wildcard). If so, it returns True. If not, it matches anything and returns

False.

Pattern matching isn’t magic, though. You can only match against datatypes; you

cannot match against functions. For instance, the following is invalid:

f x = x + 1

g (f x) = x

Even though the intended meaning of g is clear (i.e., g x = x - 1), the com-

piler doesn’t know in general that f has an inverse function, so it can’t perform matches

like this.

7.5 Guards

Guards can be thought of as an extension to the pattern matching facility. They enable

you to allow piecewise function definitions to be taken according to arbitrary boolean

expressions. Guards appear after all arguments to a function but before the equals sign,

and are begun with a vertical bar. We could use guards to write a simple function which

returns a string telling you the result of comparing two elements:

comparison x y | x < y = "The first is less"
               | x > y = "The second is less"
               | otherwise = "They are equal"

You can read the vertical bar as “such that.” So we say that the value of comparison

x y “such that” x is less than y is “The first is less.” The value such that x is greater

than y is “The second is less” and the value otherwise is “They are equal”. The key-

word otherwise is simply defined to be equal to True and thus matches anything

that falls through that far. So, we can see that this works:


Guards> comparison 5 10

"The first is less"

Guards> comparison 10 5

"The second is less"

Guards> comparison 7 7

"They are equal"

Guards are applied in conjunction with pattern matching. When a pattern matches,

all of its guards are tried, consecutively, until one matches. If none match, then pattern

matching continues with the next pattern.
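A small sketch of that rule (my example, not from the tutorial): if every guard on a matching pattern fails, matching simply falls through to the next equation:

```haskell
describe :: Int -> String
describe n | n > 100 = "big"  -- pattern n matches everything, but if this
                              -- guard fails, the later equations are tried
describe 0 = "zero"
describe _ = "small"
```

Here describe 200 is "big", describe 0 is "zero" (the first equation's pattern matched but its guard failed), and describe 5 is "small".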

One nicety about guards is that where clauses are common to all guards. So another

possible definition for our isBright function from the previous section would be:

isBright2 c
    | r == 255 = True
    | g == 255 = True
    | b == 255 = True
    | otherwise = False
    where (r,g,b) = colorToRGB c

The function is equivalent to the previous version, but performs its calculation

slightly differently. It takes a color, c, and applies colorToRGB to it, yielding an

RGB triple which is matched (using pattern matching!) against (r,g,b). This match

succeeds and the values r, g and b are bound to their respective values. The first guard

checks to see if r is 255 and, if so, returns true. The second and third guard check g

and b against 255, respectively and return true if they match. The last guard fires as a

last resort and returns False.

7.6 Instance Declarations

In order to declare a type to be an instance of a class, you need to provide an instance

declaration for it. Most classes provide what’s called a “minimal complete definition.”

This means the functions which must be implemented for this class in order for its

definition to be satisfied. Once you’ve written these functions for your type, you can

declare it an instance of the class.

The Eq class has two members (i.e., two functions):

(==) :: Eq a => a -> a -> Bool
(/=) :: Eq a => a -> a -> Bool


The first of these type signatures reads that the function == is a function which takes

two as which are members of Eq and produces a Bool. The type signature of /= (not

equal) is identical. A minimal complete definition for the Eq class requires that either

one of these functions be defined (if you define ==, then /= is defined automatically by

negating the result of ==, and vice versa). These declarations must be provided inside

the instance declaration.

This is best demonstrated by example. Suppose we have our color example, repeated

here for convenience:

data Color

= Red

| Orange

| Yellow

| Green

| Blue

| Purple

| White

| Black

| Custom Int Int Int -- R G B components

Red == Red = True

Orange == Orange = True

Yellow == Yellow = True

Green == Green = True

Blue == Blue = True

Purple == Purple = True

White == White = True

Black == Black = True

(Custom r g b) == (Custom r' g' b') =

r == r' && g == g' && b == b'

_ == _ = False

The first line here begins with the keyword instance telling the compiler that we’re

making an instance declaration. It then specifies the class, Eq, and the type, Color

which is going to be an instance of this class. Following that, there’s the where key-

word. Finally there’s the method declaration.

The first eight lines of the method declaration are basically identical. The first one,

for instance, says that the value of the expression Red == Red is equal to True.

Lines two through eight are identical. The declaration for custom colors is a bit differ-

ent. We pattern match Custom on both sides of ==. On the left hand side, we bind r,

g and b to the components, respectively. On the right hand side, we bind r’, g’ and

b’ to the components. We then say that these two custom colors are equal precisely

when r == r’, g == g’ and b == b’ are all equal. The fallthrough says that any

pair we haven’t previously declared as equal are unequal.

The Show class is used to display arbitrary values as strings. This class has three

methods:

show :: Show a => a -> String

showsPrec :: Show a => Int -> a -> String -> String

showList :: Show a => [a] -> String -> String

A minimal complete definition is either show or showsPrec (we will talk about

showsPrec later – it’s in there for efficiency reasons). We can define our Color

datatype to be an instance of Show with the following instance declaration:

instance Show Color where

show Red = "Red"

show Orange = "Orange"

show Yellow = "Yellow"

show Green = "Green"

show Blue = "Blue"

show Purple = "Purple"

show White = "White"

show Black = "Black"

show (Custom r g b) =

"Custom " ++ show r ++ " " ++

show g ++ " " ++ show b

This declaration specifies exactly how to convert values of type Color to Strings.

Again, the first eight lines are identical and simply take a Color and produce a string.

The last line for handling custom colors matches out the RGB components and creates

a string by concatenating the result of showing the components individually (with

spaces in between and “Custom” at the beginning).

There are a few other important classes which I will mention briefly because either they

are commonly used or because we will be using them shortly. I won’t provide example

instance declarations; how you can do this should be clear by now.

The Ord class is used for orderings. Its functions are:

compare :: Ord a => a -> a -> Ordering

(<=) :: Ord a => a -> a -> Bool

(>) :: Ord a => a -> a -> Bool

(>=) :: Ord a => a -> a -> Bool

(<) :: Ord a => a -> a -> Bool

min :: Ord a => a -> a -> a

max :: Ord a => a -> a -> a

Almost any one of these functions alone is a minimal complete definition; it is recommended that you implement compare if you implement only one, though. This

function returns a value of type Ordering which is defined as:

data Ordering = LT | EQ | GT

For example:

Prelude> compare 5 7

LT

Prelude> compare 6 6

EQ

Prelude> compare 7 5

GT

In order to declare a type to be an instance of Ord you must already have declared

it an instance of Eq (in other words, Ord is a subclass of Eq – more about this in

Section 8.4).
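To make the compare-only route concrete, here is a minimal sketch on an invented two-valued type; defining only compare gives the comparison operators, min and max for free via the class defaults.

```haskell
-- A hypothetical type for illustration; only compare is written by hand.
data Switch = Off | On deriving (Eq, Show)

instance Ord Switch where
  compare Off On = LT
  compare On Off = GT
  compare _   _  = EQ
```

With just this declaration, Off < On evaluates to True and max Off On to On.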

The Enum class is for enumerated types; that is, for types where each element has a

successor and a predecessor. Its methods are:

pred :: Enum a => a -> a

succ :: Enum a => a -> a

toEnum :: Enum a => Int -> a

fromEnum :: Enum a => a -> Int

enumFrom :: Enum a => a -> [a]

enumFromThen :: Enum a => a -> a -> [a]

enumFromTo :: Enum a => a -> a -> [a]

enumFromThenTo :: Enum a => a -> a -> a -> [a]

The minimal complete definition contains both toEnum and fromEnum, which

converts from and to Ints. The pred and succ functions give the predecessor and

successor, respectively. The enum functions enumerate lists of elements. For instance,

enumFrom x lists all elements after x; enumFromThen x step lists all elements

starting at x in steps of size step. The To functions end the enumeration at the given

element.
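As a sketch of these methods on a user-defined type (the Day type is invented for this example), a derived Enum instance numbers the constructors from 0 in declaration order:

```haskell
-- Deriving Enum provides succ, pred, toEnum, fromEnum and the
-- enumFrom* family automatically.
data Day = Mon | Tue | Wed | Thu | Fri
  deriving (Show, Eq, Enum)

workWeek :: [Day]
workWeek = enumFromTo Mon Fri   -- the same list as [Mon .. Fri]
```

Here fromEnum Wed is 2, and succ Mon is Tue.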

The Num class is for numeric types. Its methods are:

(*) :: Num a => a -> a -> a

(+) :: Num a => a -> a -> a

negate :: Num a => a -> a

signum :: Num a => a -> a

abs :: Num a => a -> a

fromInteger :: Num a => Integer -> a

All of these are obvious except for perhaps negate which is the unary minus.

That is, negate x means −x.
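As an illustration of providing these methods yourself, here is a sketch of componentwise arithmetic on a hypothetical pair type (V2 is invented for this example); negate suffices because subtraction is defaulted in terms of it.

```haskell
-- Componentwise arithmetic on a pair of Ints.
newtype V2 = V2 (Int, Int) deriving (Eq, Show)

instance Num V2 where
  V2 (a,b) + V2 (c,d) = V2 (a + c, b + d)
  V2 (a,b) * V2 (c,d) = V2 (a * c, b * d)
  negate (V2 (a,b))   = V2 (negate a, negate b)
  abs (V2 (a,b))      = V2 (abs a, abs b)
  signum (V2 (a,b))   = V2 (signum a, signum b)
  fromInteger n       = V2 (fromInteger n, fromInteger n)
```

Because fromInteger is defined, a numeric literal can itself be used as a V2: 3 + V2 (1,1) is V2 (4,4).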

The Read class is the opposite of the Show class. It is a way to take a string and read

in from it a value of arbitrary type. The methods for Read are:

readsPrec :: Read a => Int -> String -> [(a, String)]

readList :: String -> [([a], String)]

Related to this is the function read, which uses readsPrec as:

read v = fst (head (readsPrec 0 v))

This will fail if parsing the string fails. You could define a maybeRead function

as:

maybeRead s =

case readsPrec 0 s of

[(a,_)] -> Just a

_ -> Nothing

How to write and use readsPrec directly will be discussed further in the exam-

ples.
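Repeating maybeRead so the snippet stands alone, we can check both the success and failure cases; note that the result type must be pinned down with a type annotation at the use site.

```haskell
-- maybeRead as defined above: succeed only when readsPrec returns a
-- single parse, otherwise give Nothing.
maybeRead :: Read a => String -> Maybe a
maybeRead s =
  case readsPrec 0 s of
    [(a, _)] -> Just a
    _        -> Nothing
```

Here maybeRead "5" :: Maybe Int gives Just 5, while maybeRead "five" :: Maybe Int gives Nothing.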

Suppose we are defining the Maybe datatype from scratch. The definition would be

something like:

data Maybe a = Nothing

| Just a

Now, when we go to write the instance declarations, for, say, Eq, we need to know

that a is an instance of Eq otherwise we can’t write a declaration. We express this as:

instance Eq a => Eq (Maybe a) where

Nothing == Nothing = True

(Just x) == (Just x’) = x == x’

This first line can be read “That a is an instance of Eq implies (=>) that Maybe a

is an instance of Eq.”

Writing obvious Eq, Ord, Read and Show classes like these is tedious and should be

automated. Luckily for us, it is. If you write a datatype that’s “simple enough” (almost

any datatype you’ll write unless you start writing fixed point types), the compiler can

automatically derive some of the most basic classes. To do this, you simply add a

deriving clause to after the datatype declaration, as in:

data Color

= Red

| ...

| Custom Int Int Int -- R G B components

deriving (Eq, Ord, Show, Read)

This will automatically create instances of the Color datatype of the named classes.

Similarly, the declaration:

data Maybe a = Nothing

| Just a

deriving (Eq, Ord, Show, Read)

creates derived instances for Maybe a, provided that a is itself an instance of the named classes.

All in all, you are allowed to derive instances of Eq, Ord, Enum, Bounded,

Show and Read. There is considerable work in the area of “polytypic programming”

or “generic programming” which, among other things, would allow for instance dec-

larations for any class to be derived. This is much beyond the scope of this tutorial;

instead, I refer you to the literature.
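As a quick check that the derived instances behave like the hand-written ones earlier, consider the full Color declaration with a deriving clause; the derived Ord orders constructors in declaration order, and the derived Read inverts the derived Show.

```haskell
-- Color with derived instances instead of hand-written ones.
data Color = Red | Orange | Yellow | Green | Blue | Purple | White | Black
           | Custom Int Int Int
  deriving (Eq, Ord, Show, Read)
```

The derived Show produces "Red", and read "Red" recovers Red.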

7.7 Datatypes Revisited

I know by this point you’re probably terribly tired of hearing about datatypes. They

are, however, incredibly important, otherwise I wouldn’t devote so much time to them.

Haskell offers a notational convenience for datatypes that hold many values: named fields.

Consider a datatype whose purpose is to hold configuration settings. Usually when

you extract members from this type, you really only care about one or possibly two of

the many settings. Moreover, if many of the settings have the same type, you might

often find yourself wondering “wait, was this the fourth or fifth element?” One thing

you could do would be to write accessor functions. Consider the following made-up

configuration type for a terminal program:

data Configuration =

Configuration String -- user name

String -- local host

String -- remote host

Bool -- is guest?

Bool -- is super user?

String -- current directory

String -- home directory

Integer -- time connected

deriving (Eq, Show)

You could then write accessor functions, like (I’ve only listed a few):

getUserName (Configuration un _ _ _ _ _ _ _) = un

getLocalHost (Configuration _ lh _ _ _ _ _ _) = lh

getRemoteHost (Configuration _ _ rh _ _ _ _ _) = rh

getIsGuest (Configuration _ _ _ ig _ _ _ _) = ig

...

You could also write update functions to update a single element. Of course, now

if you add an element to the configuration, or remove one, all of these functions now

have to take a different number of arguments. This is highly annoying and is an easy

place for bugs to slip in. However, there’s a solution. We simply give names to the

fields in the datatype declaration, as follows:

data Configuration =

Configuration { username :: String,

localhost :: String,

remotehost :: String,

isguest :: Bool,

issuperuser :: Bool,

currentdir :: String,

homedir :: String,

timeconnected :: Integer

}

This will automatically generate the following accessor functions for us:

username :: Configuration -> String

localhost :: Configuration -> String

...

We can also write “post working directory” and “change directory” like functions that work on

Configurations:

changeDir cfg newDir =

-- make sure the directory exists

if directoryExists newDir

then -- change our current directory

cfg{currentdir = newDir}

else error "directory does not exist"

-- retrieve our current directory

postWorkingDir cfg = currentdir cfg

So, in general, to update the field x in a datatype y to z, you write y{x=z}. You

can change more than one; each should be separated by commas, for instance, y{x=z,

a=b, c=d}.
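A compact sketch of the y{x=z} update syntax on a small record type (Point is invented for this example):

```haskell
-- A two-field record; moveUp copies p, changing only the py field.
data Point = Point { px :: Int, py :: Int } deriving (Eq, Show)

moveUp :: Point -> Point
moveUp p = p { py = py p + 1 }
```

So moveUp (Point 3 4) yields Point {px = 3, py = 5}; the generated accessor px still reads the untouched field.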

You can of course continue to pattern match against Configurations as you did

before. The named fields are simply syntactic sugar; you can still write something like:

getUserName (Configuration un _ _ _ _ _ _ _) = un

But there is little reason to. Finally, you can pattern match against named fields as

in:

getHostData (Configuration { localhost = lh, remotehost = rh })

= (lh,rh)

This matches the variable lh against the localhost field on the Configuration

and the variable rh against the remotehost field on the Configuration. These

matches of course succeed. You could also constrain the matches by putting values

instead of variable names in these positions, as you would for standard datatypes.

You can create values of Configuration in the old way as shown in the first definition

below, or with named-field notation, as shown in the second definition below:

initCFG =

Configuration "nobody" "nowhere" "nowhere"

False False "/" "/" 0

initCFG' =

Configuration

{ username="nobody",

localhost="nowhere",

remotehost="nowhere",

isguest=False,

issuperuser=False,

currentdir="/",

homedir="/",

timeconnected=0 }

The second is probably much more understandable, unless you litter your code

with comments.

todo: put something here

7.8 More Lists

Recall that the definition of the built-in Haskell list datatype is equivalent to:

data List a = Nil

| Cons a (List a)

With the exception that Nil is called [] and Cons x xs is called x:xs. This is

simply to make pattern matching easier and code smaller. Let’s investigate how some

of the standard list functions may be written. Consider map. A definition is given

below:

map _ [] = []

map f (x:xs) = f x : map f xs

Here, the first line says that when you map across an empty list, no matter what the

function is, you get an empty list back. The second line says that when you map across

a list with x as the head and xs as the tail, the result is f applied to x consed onto the

result of mapping f on xs.

The function filter can be defined similarly:

filter _ [] = []

filter p (x:xs) | p x = x : filter p xs

| otherwise = filter p xs

How this works should be clear. For an empty list, we return an empty list. For

a non empty list, we return the filter of the tail, perhaps with the head on the front,

depending on whether it satisfies the predicate p or not.

We can define foldr as:

foldr _ z [] = z

foldr f z (x:xs) = f x (foldr f z xs)

Here, the best interpretation is that we are replacing the empty list ([]) with a

particular value and the list constructor (:) with some function. On the first line, we

can see the replacement of [] for z. Using backquotes to make f infix, we can write

the second line as:

foldr f z (x:xs) = x `f` (foldr f z xs)

Finally, foldl:

foldl _ z [] = z

foldl f z (x:xs) = foldl f (f z x) xs

Here, z can be thought of as the current state. So if we’re folding across a list which is empty, we simply return the current

state. On the other hand, if the list is not empty, it’s of the form x:xs. In this case, we

get a new state by appling f to the current state z and the current list element x and

then recursively call foldl on xs with this new state.
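The direction of association is the observable difference between the two folds. Renaming them to avoid clashing with the Prelude, a non-associative operator like (-) makes it visible:

```haskell
-- foldr nests to the right: 1 - (2 - (3 - 0))
myFoldr :: (a -> b -> b) -> b -> [a] -> b
myFoldr _ z []     = z
myFoldr f z (x:xs) = f x (myFoldr f z xs)

-- foldl nests to the left: ((0 - 1) - 2) - 3
myFoldl :: (b -> a -> b) -> b -> [a] -> b
myFoldl _ z []     = z
myFoldl f z (x:xs) = myFoldl f (f z x) xs
```

Here myFoldr (-) 0 [1,2,3] is 2, while myFoldl (-) 0 [1,2,3] is -6.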

There is another class of functions: the zip and unzip functions, which respec-

tively take multiple lists and make one, or take one list and split it apart. For

instance, zip does the following:

Prelude> zip "hello" [1,2,3,4,5]

[('h',1),('e',2),('l',3),('l',4),('o',5)]

Basically, it pairs the first elements of both lists and makes that the first element of

the new list. It then pairs the second elements of both lists and makes that the second

element, etc. What if the lists have unequal length? It simply stops when the shorter

one stops. A reasonable definition for zip is:

zip [] _ = []

zip _ [] = []

zip (x:xs) (y:ys) = (x,y) : zip xs ys

The unzip function does the opposite. It takes a zipped list and returns the two

“original” lists:

Prelude> unzip [('f',1),('o',2),('o',3)]

("foo",[1,2,3])

There are a whole slew of zip and unzip functions, named zip3, unzip3,

zip4, unzip4 and so on; the ...3 functions use triples instead of pairs; the ...4

functions use 4-tuples, etc.

Finally, the function take takes an integer n and a list and returns the first n

elements off the list. Correspondingly, drop takes an integer n and a list and returns

the result of throwing away the first n elements off the list. Neither of these functions

produces an error; if n is too large, they both will just return shorter lists.
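A few of the behaviors just described, in one self-contained sketch (the names are invented for illustration):

```haskell
-- zip stops at the shorter list, so pairing with [1..] is safe.
numbered :: [(Char, Int)]
numbered = zip "abc" [1..]

-- unzip recovers the two halves.
halves :: (String, [Int])
halves = unzip numbered

-- take and drop never fail on lists that are too short.
firstTen, afterTen :: [Int]
firstTen = take 10 [1,2,3]   -- just [1,2,3]
afterTen = drop 10 [1,2,3]   -- just []
```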

There is some syntactic sugar for dealing with lists whose elements are members of the

Enum class (see Section 7.6), such as Int or Char. If we want to create a list of all the

elements from 1 to 10, we can simply write:

Prelude> [1..10]

[1,2,3,4,5,6,7,8,9,10]

Prelude> [1,3..10]

[1,3,5,7,9]

Prelude> [1,4..10]

[1,4,7,10]

These expressions are short hand for enumFromTo and enumFromThenTo, re-

spectively. Of course, you don’t need to specify an upper bound. Try the following

(but be ready to hit Control+C to stop the computation!):

Prelude> [1..]

[1,2,3,4,5,6,7,8,9,10,11,12{Interrupted!}

Probably yours printed a few thousand more elements than this. As we said before,

Haskell is lazy. That means that a list of all numbers from 1 on is perfectly well formed

and that’s exactly what this list is. Of course, if you attempt to print the list (which

we’re implicitly doing by typing it in the interpreter), it won’t halt. But if we only

evaluate an initial segment of this list, we’re fine:

Prelude> take 3 [1..]

[1,2,3]

Prelude> take 3 (drop 5 [1..])

[6,7,8]

This comes in useful if, say, we want to assign an ID to each element in a list.

Without laziness we’d have to write something like this:

assignID l = zip l [1..length l]

Which means that the list will be traversed twice. However, because of laziness,

we can simply write:

assignID l = zip l [1..]

And we’ll get exactly what we want. We can see that this works:

Prelude> assignID "hello"

[('h',1),('e',2),('l',3),('l',4),('o',5)]

Finally, there is some useful syntactic sugar for map and filter, based on stan-

dard set-notation in mathematics. In math, we would write something like {f (x)|x ∈

s ∧ p(x)} to mean the set of all values of f when applied to elements of s which satisfy

p. This is equivalent to the Haskell statement map f (filter p s). However, we

can also use more math-like notation and write [f x | x <- s, p x]. While in

math the ordering of the statements on the side after the pipe is free, it is not so in

Haskell. We could not have put p x before x <- s otherwise the compiler wouldn’t

know yet what x was. We can use this to do simple string processing. Suppose we

want to take a string, remove all the lower-case letters and convert the rest of the letters

to lower case. We could do this in either of the following two equivalent ways:

Prelude> map toLower (filter isUpper "Hello World")

"hw"

Prelude> [toLower x | x <- "Hello World", isUpper x]

"hw"

These two are equivalent, and, depending on the exact functions you’re using, one

might be more readable than the other. There’s more you can do here, though. Suppose

you want to create a list of pairs, one for each point between (0,0) and (5,7) below the

diagonal. Doing this manually with lists and maps would be cumbersome and possibly

difficult to read. It couldn’t be easier than with list comprehensions:

Prelude> [(x,y) | x <- [1..5], y <- [x..7]]

[(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(1,7),(2,2),(2,3),

(2,4),(2,5),(2,6),(2,7),(3,3),(3,4),(3,5),(3,6),(3,7),

(4,4),(4,5),(4,6),(4,7),(5,5),(5,6),(5,7)]

If you reverse the order of the x <- and y <- clauses, the order in which the

space is traversed will be reversed (of course, in that case, y could no longer depend

on x and you would need to make x depend on y but this is trivial).
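The order-of-generators point can be sketched concretely; assuming the pairs above came from the first comprehension, the second produces the same pairs grouped the other way:

```haskell
import Data.List (sort)

-- x outer, y inner: pairs grouped by x.
pairsXY :: [(Int, Int)]
pairsXY = [(x, y) | x <- [1..5], y <- [x..7]]

-- y outer, x inner: the same 25 pairs, traversed in the other order.
pairsYX :: [(Int, Int)]
pairsYX = [(x, y) | y <- [1..7], x <- [1 .. min 5 y]]
```

The two lists contain the same 25 pairs; sorting pairsYX recovers pairsXY exactly.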

7.9 Arrays

Lists are nice for many things. It is easy to add elements to the beginning of them and

to manipulate them in various ways that change the length of the list. However, they are

bad for random access, having average complexity O(n) to access an arbitrary element

(if you don’t know what O(. . . ) means, you can either ignore it or take a quick detour

and read Appendix A, a two-page introduction to complexity theory). So, if you’re

willing to give up fast insertion and deletion because you need random access, you

should use arrays instead of lists.

In order to use arrays you must import the Array module. There are a few

methods for creating arrays, the array function, the listArray function, and the

accumArray function. The array function takes a pair which is the bounds of

the array, and an association list which specifies the initial values of the array. The

listArray function takes bounds and then simply a list of values. Finally, the

accumArray function takes an accumulation function, an initial value and an associ-

ation list and accumulates pairs from the list into the array. Here are some examples of

arrays being created:

Arrays> array (1,5) [(i,2*i) | i <- [1..5]]

array (1,5) [(1,2),(2,4),(3,6),(4,8),(5,10)]

Arrays> listArray (1,5) [3,7,5,1,10]

array (1,5) [(1,3),(2,7),(3,5),(4,1),(5,10)]

Arrays> accumArray (+) 2 (1,5) [(i,i) | i <- [1..5]]

array (1,5) [(1,3),(2,4),(3,5),(4,6),(5,7)]

When arrays are printed out (via the show function), they are printed with an asso-

ciation list. For instance, in the first example, the association list says that the value of

the array at 1 is 2, the value of the array at 2 is 4, and so on.

You can extract an element of an array using the ! function, which takes an array

and an index, as in:

Arrays> listArray (1,5) [3,7,5,1,10] ! 3

5

Moreover, you can update elements in the array using the // function. This takes

an array and an association list and updates the positions specified in the list:

Arrays> listArray (1,5) [3,7,5,1,10] //

[(2,99),(3,-99)]

array (1,5) [(1,3),(2,99),(3,-99),(4,1),(5,10)]

There are a few other functions worth mentioning:

bounds returns the bounds of an array

indices returns a list of all indices of the array

elems returns a list of all the values in the array in order

assocs returns an association list for the array

If we define arr to be listArray (1,5) [3,7,5,1,10], the result of

these functions applied to arr are:

Arrays> bounds arr

(1,5)

Arrays> indices arr

[1,2,3,4,5]

Arrays> elems arr

[3,7,5,1,10]

Arrays> assocs arr

[(1,3),(2,7),(3,5),(4,1),(5,10)]

Note that while arrays are O(1) access, they are not O(1) update. They are in

fact O(n) update, since in order to maintain purity, the array must be copied in order to

make an update. Thus, functional arrays are pretty much only useful when you’re filling

them up once and then only reading. If you need fast access and update, you should

probably use FiniteMaps, which are discussed in Section 7.10 and have O(log n)

access and update.
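In current hierarchical libraries the Array module lives at Data.Array; a minimal sketch of the functions above under that name, including the purely functional update:

```haskell
import Data.Array

-- Five squares, indexed 1..5.
squares :: Array Int Int
squares = listArray (1, 5) [ i * i | i <- [1..5] ]

-- A single update returns a fresh array; squares itself is unchanged.
tweaked :: Array Int Int
tweaked = squares // [(3, 0)]
```

So squares ! 3 is 9, tweaked ! 3 is 0, and elems squares is [1,4,9,16,25].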

7.10 Finite Maps

The FiniteMap datatype (which is available in the FiniteMap module, or Data.FiniteMap

module in the hierarchical libraries) is a purely functional implementation of balanced

trees. Finite maps can be compared to lists and arrays in terms of the time it takes to

perform various operations on those datatypes of a fixed size, n. A brief comparison

is:

List Array FiniteMap

insert O(1) O(n) O(log n)

update O(n) O(n) O(log n)

delete O(n) O(n) O(log n)

find O(n) O(1) O(log n)

map O(n) O(n) O(n log n)

As we can see, lists provide fast insertion (but slow everything else), arrays pro-

vide fast lookup (but slow everything else) and finite maps provide moderately fast

everything (except mapping, which is a bit slower than lists or arrays).

The type of a finite map is of the form FiniteMap key elt, where key is the type of

the keys and elt is the type of the elements. That is, finite maps are lookup tables from

type key to type elt.

The basic finite map functions are:

addToFM :: FiniteMap key elt -> key -> elt ->

FiniteMap key elt

delFromFM :: FiniteMap key elt -> key ->

FiniteMap key elt

elemFM :: key -> FiniteMap key elt -> Bool

lookupFM :: FiniteMap key elt -> key -> Maybe elt

In all these cases, the type key must be an instance of Ord (and hence also an

instance of Eq).

There are also the functions listToFM and fmToList to convert lists to and from

finite maps. Try the following:

Prelude> :m FiniteMap

FiniteMap> let fm = listToFM

[('a',5),('b',10),('c',1),('d',2)]

FiniteMap> let myFM = addToFM fm 'e' 6

FiniteMap> fmToList fm

[('a',5),('b',10),('c',1),('d',2)]

FiniteMap> fmToList myFM

[('a',5),('b',10),('c',1),('d',2),('e',6)]

FiniteMap> lookupFM myFM 'e'

Just 6

FiniteMap> lookupFM fm 'e'

Nothing

You can also experiment with the other commands. Note that you cannot show a

finite map, as they are not instances of Show:

FiniteMap> show myFM

<interactive>:1:

No instance for (Show (FiniteMap Char Integer))

arising from use of `show' at <interactive>:1

In the definition of `it': show myFM
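FiniteMap has since been superseded by Data.Map in the containers library; the session above translates roughly as follows, with the same O(log n) balanced-tree behavior:

```haskell
import qualified Data.Map as Map

fm :: Map.Map Char Int
fm = Map.fromList [('a',5),('b',10),('c',1),('d',2)]

-- insert returns a new map; fm itself is unchanged.
myFM :: Map.Map Char Int
myFM = Map.insert 'e' 6 fm
```

Here Map.lookup 'e' myFM is Just 6 while Map.lookup 'e' fm is Nothing; unlike FiniteMap, Data.Map does have a Show instance.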

7.11 Layout

7.12 The Final Word on Lists

You are likely tired of hearing about lists at this point, but they are so fundamental to

Haskell (and really all of functional programming) that it would be terrible not to talk

about them some more.

It turns out that foldr is actually quite a powerful function: it can compute any

primitive recursive function. A primitive recursive function is essentially one which

can be calculated using only “for” loops, but not “while” loops.

In fact, we can fairly easily define map in terms of foldr:

map f = foldr (\a b -> f a : b) []

Here, b is the accumulator (i.e., the result list) and a is the element being currently

considered. In fact, we can simplify this definition through a sequence of steps:

==> foldr (\a b -> (:) (f a) b) []

==> foldr (\a -> (:) (f a)) []

==> foldr (\a -> ((:) . f) a) []

==> foldr ((:) . f) []

This is directly related to the fact that foldr (:) [] is the identity function on

lists. This is because, as mentioned before, foldr f z can be thought of as replacing

the [] in lists by z and the : by f. In this case, we’re keeping both the same, so it is

the identity function.

In fact, you can convert any function of the following style into a foldr:

myfunc [] = z

myfunc (x:xs) = f x (myfunc xs)

By writing the last line with f in infix form, this should be obvious:

myfunc [] = z

myfunc (x:xs) = x `f` (myfunc xs)

Clearly, we are just replacing [] with z and : with f. Consider the filter

function:

filter p [] = []

filter p (x:xs) =

if p x

then x : filter p xs

else filter p xs

This function also follows the form above. Based on the first line, we can figure

out that z is supposed to be [], just like in the map case. Now, suppose that we call

the result of calling filter p xs simply b, then we can rewrite this as:

filter p [] = []

filter p (x:xs) =

if p x then x : b else b
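The same recipe applies to a hypothetical mySum, where z is 0 and f is (+):

```haskell
-- Direct recursion in the myfunc shape...
mySum :: [Int] -> Int
mySum []     = 0              -- z is 0
mySum (x:xs) = x + mySum xs   -- f is (+)

-- ...and the same function written as a fold.
mySum' :: [Int] -> Int
mySum' = foldr (+) 0
```

Both give 55 on [1..10].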

Let’s consider a slightly more complicated function: ++. The definition for ++ is:

(++) [] ys = ys

(++) (x:xs) ys = x : (xs ++ ys)

Now, the question is whether we can write this in fold notation. First, we can apply

eta reduction to the first line to give:

(++) [] = id

Similarly, we can eta-reduce the second line through a sequence of steps:

==> (++) (x:xs) ys = (x:) ((++) xs ys)

==> (++) (x:xs) ys = ((x:) . (++) xs) ys

==> (++) (x:xs) = (x:) . (++) xs

This gives us the definition:

(++) [] = id

(++) (x:xs) = (x:) . (++) xs

Now, we can try to put this into fold notation. First, we notice that the base case

converts [] into id. Now, if we assume (++) xs is called b and x is called a, we

can get the following definition in terms of foldr:

(++) = foldr (\a b -> (a:) . b) id

This actually makes sense intuitively. If we only think about applying ++ to one

argument, we can think of it as a function which takes a list and creates a function

which, when applied, will prepend this list to another list. In the lambda function, we

assume we have a function b which will do this for the rest of the list and we need to

create a function which will do this for b as well as the single element a. In order to

do this, we first apply b and then further add a to the front.

We can further reduce this expression to a point-free style through the following

sequence:

==> (++) = foldr (\a b -> (.) (a:) b) id

==> (++) = foldr (\a -> (.) (a:)) id

==> (++) = foldr (\a -> (.) ((:) a)) id

==> (++) = foldr (\a -> ((.) . (:)) a) id

==> (++) = foldr ((.) . (:)) id

This final version is point free, though not necessarily understandable. Presumably the original version is clearer.

As a final example, consider concat. We can write this as:

concat [] = []

concat (x:xs) = x ++ concat xs

It should be immediately clear that the z element for the fold is [] and that the

recursive function is ++, yielding:

concat = foldr (++) []

Exercises

Exercise 7.2 The function and takes a list of booleans and returns True if and only

if all of them are True. It also returns True on the empty list. Write this function in

terms of foldr.

Exercise 7.3 The function concatMap behaves such that concatMap f is the same

as concat . map f. Write this function in terms of foldr.

Chapter 8

Advanced Types

As you’ve probably ascertained by this point, the type system is integral to Haskell.

While this chapter is called “Advanced Types”, you will probably find it to be more

general than that and it must not be skipped simply because you’re not interested in the

type system.

8.1 Type Synonyms

Type synonyms exist in Haskell simply for convenience: their removal would not make

Haskell any less powerful.

Consider the case when you are constantly dealing with lists of three-dimensional

points. For instance, you might have a function with type [(Double, Double, Double)] → Double → [(Double, Double, Double)].

Since you are a good software engineer, you want to place type signatures on all your

top-level functions. However, typing [(Double, Double, Double)] all the time gets very

tedious. To get around this, you can define a type synonym:

type List3D = [(Double,Double,Double)]

Now, the type signature for your functions may be written List3D → Double → List3D.

We should note that type synonyms cannot be self-referential. That is, you cannot

have:

type BadType = Int -> BadType

This is because this is an “infinite type.” Since Haskell removes type synonyms

very early on, any instance of BadType will be replaced by Int → BadType, which

will result in an infinite loop.

Type synonyms can also be parameterized. For instance, you might want to be able

to change the types of the points in the list of 3D points. For this, you could define:

type List3D a = [(a,a,a)]

Then your references to [(Double, Double, Double)] would become List3D Double.
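A short sketch of the parameterized synonym in use (the shift function is invented for illustration); the synonym is fully interchangeable with the tuple-list type it names:

```haskell
type List3D a = [(a, a, a)]

-- Translate every point by d in all three coordinates.
shift :: Double -> List3D Double -> List3D Double
shift d = map (\(x, y, z) -> (x + d, y + d, z + d))
```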

8.2 Newtypes

Consider the problem in which you need to have a type which is very much like Int, but

its ordering is defined differently. Perhaps you wish to order Ints first by even numbers

then by odd numbers (that is, all odd numbers are greater than any even number and

within the odd/even subsets, ordering is standard).

Unfortunately, you cannot define a new instance of Ord for Int because then

Haskell won’t know which one to use. What you want is to define a type which is

isomorphic to Int.

NOTE “Isomorphic” is a common mathematical term; it basically means “structurally identical.” For instance, in graph theory, if you

have two graphs which are identical except they have different labels on

the nodes, they are isomorphic. In our context, two types are isomorphic

if they have the same underlying structure.

The easiest way to do this would be to define a new datatype:

data MyInt = MyInt Int

We could then write appropriate code for this datatype. The problem (and this is

very subtle) is that this type is not truly isomorphic to Int: it has one more value. When

we think of the type Int, we usually think that it takes all values of integers, but it really

has one more value: ⊥ (pronounced “bottom”), which is used to represent erroneous or

undefined computations. Thus, MyInt has not only values MyInt 0, MyInt 1 and

so on, but also MyInt ⊥. However, since datatypes can themselves be undefined, it

has an additional value: ⊥, which differs from MyInt ⊥, and this makes the types

non-isomorphic. (See Section ?? for more information on bottom.)

Disregarding that subtlety, there may be efficiency issues with this representation:

now, instead of simply storing an integer, we have to store a pointer to an integer and

have to follow that pointer whenever we need the value of a MyInt.

To get around these problems, Haskell has a newtype construction. A newtype is a

cross between a datatype and a type synonym: it has a constructor like a datatype, but

it can have only one constructor and this constructor can have only one argument. For

instance, we can define:

newtype MyInt = MyInt Int

But we could not define, for instance:

newtype Bad2 = Bad2 Int Double

Of course, the fact that we cannot define Bad2 as above is not a big issue: we can

simply define the following by pairing the types:

newtype Good2 = Good2 (Int,Double)

Now, suppose we’ve defined MyInt as a newtype. This enables us to write our

desired instance of Ord as:

instance Ord MyInt where

MyInt i < MyInt j

| odd i && odd j = i < j

| even i && even j = i < j

| even i = True

| otherwise = False

where odd x = (x `mod` 2) == 1

even = not . odd

As with datatypes, we can still derive classes like Show and Eq over newtypes (in fact,

I’m implicitly assuming we have derived Eq over MyInt – where is my assumption in

the above code?).

Moreover, in recent versions of GHC (see Section 2.2), on newtypes, you are al-

lowed to derive any class of which the base type (in this case, Int) is an instance. For

example, we could derive Num on MyInt to provide arithmetic functions over it.

Pattern matching over newtypes is exactly as in datatypes. We can write constructor

and destructor functions for MyInt as follows:

mkMyInt i = MyInt i

unMyInt (MyInt i) = i
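Pulling the pieces of this section together, here is a self-contained sketch of the evens-before-odds ordering, written via compare as Section 7.6 recommends (the derived Eq supplies the required superclass instance):

```haskell
newtype MyInt = MyInt Int deriving (Eq, Show)

instance Ord MyInt where
  compare (MyInt i) (MyInt j)
    | odd i && odd j   = compare i j
    | even i && even j = compare i j
    | even i           = LT      -- any even is below any odd
    | otherwise        = GT
```

So MyInt 100 < MyInt 1 holds, while MyInt 2 < MyInt 4 and MyInt 1 < MyInt 3 order as usual.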

8.3 Datatypes

We’ve already seen datatypes used in a variety of contexts. This section concludes

some of the discussion and introduces some of the common datatypes in Haskell. It

also provides a more theoretical underpinning to what datatypes actually are.

One of the great things about Haskell is that computation is performed lazily. However,

sometimes this leads to inefficiencies. One way around this problem is to use datatypes

with strict fields. Before we talk about the solution, let’s spend some time to get a

bit more comfortable with how bottom works in to the picture (for more theory, see

Section ??).

Suppose we’ve defined the unit datatype (this is one of the simplest datatypes you can

define):

data Unit = Unit

This datatype has exactly one constructor, Unit, which takes no arguments. In a

strict language like ML, there would be exactly one value of type Unit: namely, Unit.

This is not quite so in Haskell. In fact, there are two values of type Unit. One of them

is Unit. The other is bottom (written ⊥).

You can think of bottom as representing a computation which won’t halt. For in-

stance, suppose we define the value:

foo = foo

This is perfectly valid Haskell code and simply says that when you want to evaluate

foo, all you need to do is evaluate foo. Clearly this is an “infinite loop.”

What is the type of foo? Simply a. We cannot say anything more about it than

that. The fact that foo has type a in fact tells us that it must be an infinite loop (or

some other such strange value). However, since foo has type a and thus can have any

type, it can also have type Unit. We could write, for instance:

foo :: Unit

foo = foo

Thus, we have found a second value with type Unit. In fact, we have found all

values of type Unit. Any other non-terminating function or error-producing function

will have exactly the same effect as foo (though Haskell provides some more utility

with the function error).

This means, for instance, that there are actually four values with type Maybe Unit.

They are: ⊥, Nothing, Just ⊥ and Just Unit. However, it may be the case

that you, as a programmer, know that you will never come across the third of these.

Namely, you want the argument to Just to be strict. This means that if the argument

to Just is bottom, then the entire structure becomes bottom. You use an exclamation

point to specify a constructor as strict. We can define a strict version of Maybe as:
data SMaybe a = SNothing | SJust !a

There are now only three values of SMaybe. We can see the difference by writing

the following program:

import System


main = do

[cmd] <- getArgs

case cmd of

"a" -> printJust undefined

"b" -> printJust Nothing

"c" -> printJust (Just undefined)

"d" -> printJust (Just ())

"e" -> printSJust undefined

"f" -> printSJust SNothing

"g" -> printSJust (SJust undefined)

"h" -> printSJust (SJust ())

printJust Nothing = putStrLn "Nothing"

printJust (Just x) = do putStr "Just "; print x

printSJust SNothing = putStrLn "Nothing"

printSJust (SJust x) = do putStr "Just "; print x

The two printing functions are identical except for the datatype they match on; their behavior, however, is different. The outputs for the various options are:

% ./strict a

Fail: Prelude.undefined

% ./strict b

Nothing

% ./strict c

Just

Fail: Prelude.undefined

% ./strict d

Just ()

% ./strict e

Fail: Prelude.undefined

% ./strict f

Nothing

% ./strict g

Fail: Prelude.undefined

% ./strict h

Just ()

The thing worth noting here is the difference between cases “c” and “g”. In the

“c” case, the Just is printed, because this is printed before the undefined value is

evaluated. However, in the “g” case, since the constructor is strict, as soon as you

match the SJust, you also match the value. In this case, the value is undefined, so the

whole thing fails before it gets a chance to do anything.
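The difference can also be checked mechanically in GHC. The following sketch is not from the tutorial: the isBottom helper (built on Control.Exception) and the example values are mine.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- SMaybe mirrors Maybe, but the exclamation point makes the field
-- of SJust strict: forcing an SJust also forces its argument.
data SMaybe a = SNothing | SJust !a

-- Returns True if forcing the argument to weak head normal form
-- raises an exception (i.e. the value is bottom).
isBottom :: a -> IO Bool
isBottom x = either (\e -> const True (e :: SomeException)) (const False)
               `fmap` try (evaluate x)

main :: IO ()
main = do
  lazyHit   <- isBottom (case Just  undefined of Just  _ -> ())
  strictHit <- isBottom (case SJust undefined of SJust _ -> ())
  -- matching the lazy Just never touches the field;
  -- matching the strict SJust forces it and hits bottom
  print (lazyHit, strictHit)
```

Matching on the lazy constructor succeeds without touching the field, while matching on the strict constructor forces it, just as in the "c" versus "g" cases above.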

8.4 Classes

We have already encountered type classes a few times, but only in the context of pre-

viously existing type classes. This section is about how to define your own. We will

begin the discussion by talking about Pong and then move on to a useful generalization

of computations.

8.4.1 Pong

The discussion here will be motivated by the construction of the game Pong (see Ap-

pendix ?? for the full code). In Pong, there are three things drawn on the screen: the

two paddles and the ball. While the paddles and the ball are different in a few respects,

they share many commonalities, such as position, velocity, acceleration, color, shape,

and so on. We can express these commonalities by defining a class for Pong entities,

which we call Entity. We make such a definition as follows:
class Entity a where

getPosition :: a -> (Int,Int)

getVelocity :: a -> (Int,Int)

getAcceleration :: a -> (Int,Int)

getColor :: a -> Color

getShape :: a -> Shape

This code defines a typeclass Entity. This class has five methods: getPosition,

getVelocity, getAcceleration, getColor and getShape with the corre-

sponding types.

The first line here uses the keyword class to introduce a new typeclass. We can

read this typeclass definition as “There is a typeclass ’Entity’; a type ’a’ is an instance

of Entity if it provides the following five functions: . . . ”. To see how we can write an

instance of this class, let us define a player (paddle) datatype:

data Paddle =

Paddle { paddlePosX, paddlePosY,

paddleVelX, paddleVelY,

paddleAccX, paddleAccY :: Int,

paddleColor :: Color,


paddleHeight :: Int,

playerNumber :: Int }
instance Entity Paddle where

getPosition p = (paddlePosX p, paddlePosY p)

getVelocity p = (paddleVelX p, paddleVelY p)

getAcceleration p = (paddleAccX p, paddleAccY p)

getColor = paddleColor

getShape = Rectangle 5 . paddleHeight

The actual Haskell types of the class functions all include the context Entity

a =>. For example, getPosition has type Entity a ⇒ a → (Int, Int). However,

it will turn out that many of our routines will need entities to also be instances of Eq.

We can therefore choose to make Entity a subclass of Eq: namely, you can only be

an instance of Entity if you are already an instance of Eq. To do this, we change the

first line of the class declaration to:
class Eq a => Entity a where

Now, in order to define Paddles to be instances of Entity we will first need them

to be instances of Eq – we can do this by deriving the class.
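As a sketch of what this arrangement looks like end to end, here is a trimmed-down Entity with Eq as a superclass. The two-method class, the String stand-in for Color, and the reduced Paddle are simplifications of mine, not the tutorial's full code.

```haskell
-- A stand-in for the graphics library's Color type (assumption).
type Color = String

-- Eq is a superclass: a type can only be an Entity if it is an Eq.
class Eq a => Entity a where
  getPosition :: a -> (Int, Int)
  getColor    :: a -> Color

data Paddle = Paddle { paddlePosX, paddlePosY :: Int
                     , paddleColor :: Color }
  deriving Eq  -- deriving Eq satisfies the superclass requirement

instance Entity Paddle where
  getPosition p = (paddlePosX p, paddlePosY p)
  getColor = paddleColor
```

Without the deriving Eq clause, the instance declaration would be rejected, since the superclass constraint could not be satisfied.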

8.4.2 Computations

Let’s think back to our original motivation for defining the Maybe datatype from Sec-

tion ??. We wanted to be able to express that functions (i.e., computations) can fail.

Let us consider the case of performing search on a graph. Allow us to take a small

aside to set up a small graph library:
data Graph v e = Graph [(Int, v)] [(Int, Int, e)]

The Graph datatype takes two type arguments which correspond to vertex and edge

labels. The first argument to the Graph constructor is a list (set) of vertices; the second

is the list (set) of edges. We will assume these lists are always sorted and that each

vertex has a unique id and that there is at most one edge between any two vertices.

Suppose we want to search for a path between two vertices. Perhaps there is no

path between those vertices. To represent this, we will use the Maybe datatype. If

it succeeds, it will return the list of vertices traversed. Our search function could be

written (naively) as follows:

search g@(Graph vl el) src dst

| src == dst = Just [src]


| otherwise = search’ el

where search’ [] = Nothing

search’ ((u,v,_):es)

| src == u =

case search g v dst of

Just p -> Just (u:p)

Nothing -> search’ es

| otherwise = search’ es

This algorithm works as follows (try to read along): to search in a graph g from

src to dst, first we check to see if these are equal. If they are, we have found our

way and just return the trivial solution. Otherwise, we want to traverse the edge-list.

If we’re traversing the edge-list and it is empty, we’ve failed, so we return Nothing.

Otherwise, we’re looking at an edge from u to v. If u is our source, then we consider

this step and recursively search the graph from v to dst. If this fails, we try the rest of

the edges; if this succeeds, we put our current position before the path found and return.

If u is not our source, this edge is useless and we continue traversing the edge-list.

This algorithm is terrible: namely, if the graph contains cycles, it can loop indefinitely.
Nevertheless, it is sufficient for now. Be sure you understand it well: things only

get more complicated.
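The search function can be exercised on a small acyclic graph; this self-contained sketch repeats the code above, and the example graph g1 is mine.

```haskell
-- Vertices are (id, label) pairs; edges are (from, to, label) triples.
data Graph v e = Graph [(Int, v)] [(Int, Int, e)]

-- Naive depth-first search for a path from src to dst.
search :: Graph v e -> Int -> Int -> Maybe [Int]
search g@(Graph vl el) src dst
  | src == dst = Just [src]
  | otherwise  = search' el
  where search' [] = Nothing
        search' ((u, v, _):es)
          | src == u  = case search g v dst of
                          Just p  -> Just (u:p)
                          Nothing -> search' es
          | otherwise = search' es

-- An acyclic example graph (mine): 1 -> 2 -> 3, plus a dead end at 4.
g1 :: Graph () ()
g1 = Graph [(n, ()) | n <- [1 .. 4]]
           [(1, 2, ()), (1, 4, ()), (2, 3, ())]
```

Here search g1 1 3 succeeds with the path [1,2,3], while search g1 4 3 fails with Nothing, since vertex 4 has no outgoing edges.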

Now, there are cases where the Maybe datatype is not sufficient: perhaps we wish

to include an error message together with the failure. We could define a datatype to

express this as:
data Failable a = Success a | Fail String

Now, failures come with a failure string to express what went wrong. We can

rewrite our search function to use this datatype:

search2 g@(Graph vl el) src dst

| src == dst = Success [src]

| otherwise = search’ el

where search’ [] = Fail "No path"

search’ ((u,v,_):es)

| src == u =

case search2 g v dst of

Success p -> Success (u:p)

_ -> search’ es

| otherwise = search’ es

There is another option for this computation: perhaps we want not just one path,

but all possible paths. We can express this as a function which returns a list of lists of

vertices. The basic idea is the same:


search3 g@(Graph vl el) src dst

| src == dst = [[src]]

| otherwise = search’ el

where search’ [] = []

search’ ((u,v,_):es)

| src == u =

map (u:) (search3 g v dst) ++

search’ es

| otherwise = search’ es

The code here has gotten a little shorter, thanks to the standard prelude map func-

tion, though it is essentially the same.

We may ask ourselves what all of these have in common and try to gobble up

those commonalities in a class. In essence, we need some way of representing success

and some way of representing failure. Furthermore, we need a way to combine two

successes (in the first two cases, the first success is chosen; in the third, they are strung

together). Finally, we need to be able to augment a previous success (if there was one)

with some new value. We can fit this all into a class as follows:
class Computation c where

success :: a -> c a

failure :: String -> c a

augment :: c a -> (a -> c b) -> c b

combine :: c a -> c a -> c a

In this class declaration, we’re saying that c is an instance of the class Computation

if it provides four functions: success, failure, augment and combine. The

success function takes a value of type a and returns it wrapped up in c, representing

a successful computation. The failure function takes a String and returns a computation representing a failure. The combine function takes two previous computations

and produces a new one which is the combination of both. The augment function is

a bit more complex.

The augment function takes some previously given computation (namely, c a)

and a function which takes the value of that computation (the a) and returns a b and

produces a b inside of that computation. Note that in our current situation, giving

augment the type c a → (a → a) → c a would have been sufficient, since a is always

[Int], but we make it more general for the sake of generality.

How augment works is probably best shown by example. We can define Maybe,

Failable and [] to be instances of Computation as:
instance Computation Maybe where

success = Just

failure = const Nothing


augment (Just x) f = f x

augment Nothing _ = Nothing

combine Nothing y = y

combine x _ = x

Here, success is represented with Just and failure ignores its argument and

returns Nothing. The combine function takes the first success we found and ignores

the rest. The function augment checks to see if we succeeded before (and thus had

a Just something) and, if we did, applies f to it. If we failed before (and thus had a

Nothing), we ignore the function and return Nothing.
instance Computation Failable where

success = Success

failure = Fail

augment (Success x) f = f x

augment (Fail s) _ = Fail s

combine (Fail _) y = y

combine x _ = x
instance Computation [] where

success a = [a]

failure = const []

augment l f = concat (map f l)

combine = (++)

Here, the value of a successful computation is a singleton list containing that value.

Failure is represented with the empty list and to combine previous successes we simply

catenate them. Finally, augmenting a computation amounts to mapping the function across the list of previous results: we apply the function to each element in the list and then concatenate the results.

Using these computations, we can express all of the above versions of search as:
searchAll g@(Graph vl el) src dst

| src == dst = success [src]

| otherwise = search’ el

where search’ [] = failure "no path"

search’ ((u,v,_):es)

| src == u = (searchAll g v dst ‘augment‘

(success . (u:)))

‘combine‘ search’ es

| otherwise = search’ es


In this, we see the uses of all the functions from the class Computation.

If you’ve understood this discussion of computations, you are in a very good posi-

tion as you have understood the concept of monads, probably the most difficult concept

in Haskell. In fact, the Computation class is almost exactly the Monad class, ex-

cept that success is called return, failure is called fail and augment is

called >>= (read “bind”). The combine function isn’t actually required by monads,

but is found in the MonadPlus class for reasons which will become obvious later.

If you didn’t understand everything here, read through it again and then wait for

the proper discussion of monads in Chapter 9.
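Putting the pieces of this section together, here is a self-contained sketch of the Computation class, its three instances, and searchAll; the example graph g1, with two routes from 1 to 3, is mine.

```haskell
class Computation c where
  success :: a -> c a
  failure :: String -> c a
  augment :: c a -> (a -> c b) -> c b
  combine :: c a -> c a -> c a

data Failable a = Success a | Fail String deriving (Eq, Show)

instance Computation Maybe where
  success = Just
  failure = const Nothing
  augment (Just x) f = f x
  augment Nothing  _ = Nothing
  combine Nothing y = y
  combine x       _ = x

instance Computation Failable where
  success = Success
  failure = Fail
  augment (Success x) f = f x
  augment (Fail s)    _ = Fail s
  combine (Fail _) y = y
  combine x        _ = x

instance Computation [] where
  success a = [a]
  failure = const []
  augment l f = concat (map f l)
  combine = (++)

data Graph v e = Graph [(Int, v)] [(Int, Int, e)]

-- One search, three behaviors, selected by the result type.
searchAll :: Computation c => Graph v e -> Int -> Int -> c [Int]
searchAll g@(Graph vl el) src dst
  | src == dst = success [src]
  | otherwise  = search' el
  where search' [] = failure "no path"
        search' ((u, v, _):es)
          | src == u  = (searchAll g v dst `augment` (success . (u:)))
                          `combine` search' es
          | otherwise = search' es

-- Example graph (mine): two routes from 1 to 3.
g1 :: Graph () ()
g1 = Graph [(n, ()) | n <- [1 .. 3]]
           [(1, 2, ()), (2, 3, ()), (1, 3, ())]
```

At the Maybe type, searchAll returns only the first path found; at the list type, it returns both.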

8.5 Instances

We have already seen how to declare instances of some simple classes; allow us to

consider some more advanced classes here. There is a Functor class defined in the

Functor module.

NOTE The name “functor”, like “monad”, comes from category theory. There, a functor is like a function, but instead of mapping elements

to elements, it maps structures to structures.
class Functor f where

fmap :: (a -> b) -> f a -> f b

The type definition for fmap (not to mention its name) is very similar to the func-

tion map over lists. In fact, fmap is essentially a generalization of map to arbitrary

structures (and, of course, lists are already instances of Functor). However, we can

also define other structures to be instances of functors. Consider the following datatype

for binary trees:

data BinTree a

= Leaf a

| Branch (BinTree a) (BinTree a)

We can immediately identify that the BinTree type essentially “raises” a type a into

trees of that type. There is a naturally associated functor which goes along with this

raising. We can write the instance:
instance Functor BinTree where

fmap f (Leaf a) = Leaf (f a)

fmap f (Branch left right) =

Branch (fmap f left) (fmap f right)
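A quick check that this instance behaves like map on the leaves; the deriving clauses are added by me so results can be compared and printed.

```haskell
data BinTree a = Leaf a | Branch (BinTree a) (BinTree a)
  deriving (Eq, Show)

-- fmap rebuilds the tree shape, applying f at every leaf.
instance Functor BinTree where
  fmap f (Leaf a)            = Leaf (f a)
  fmap f (Branch left right) = Branch (fmap f left) (fmap f right)
```

For example, fmap (+1) over Branch (Leaf 1) (Leaf 2) gives Branch (Leaf 2) (Leaf 3).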


Now, we’ve seen how to make something like BinTree an instance of Eq by using

the deriving keyword, but here we will do it by hand. We want to make BinTree a an instance of Eq, but obviously we cannot do this unless a is itself an instance of Eq.

We can specify this dependence in the instance declaration:
instance Eq a => Eq (BinTree a) where

Leaf a == Leaf b = a == b

Branch l r == Branch l’ r’ = l == l’ && r == r’

_ == _ = False

The first line of this can be read “if a is an instance of Eq, then BinTree a is also

an instance of Eq”. We then provide the definitions. If we did not include the “Eq a

=>” part, the compiler would complain because we’re trying to use the == function on

a’s in the second line.

The “Eq a =>” part of the definition is called the “context.” We should note that

there are some restrictions on what can appear in the context and what can appear in

the declaration. For instance, we’re not allowed to have instance declarations that don’t

contain type constructors on the right hand side. To see why, consider the following

declarations:

class MyEq a where

myeq :: a -> a -> Bool

instance Eq a => MyEq a where

myeq = (==)

As it stands, there doesn’t seem to be anything wrong with this definition. However,

if elsewhere in a program we had the definition:
instance MyEq a => Eq a where

(==) = myeq

In this case, if we’re trying to establish if some type is an instance of Eq, we could

reduce it to trying to find out if that type is an instance of MyEq, which we could

in turn reduce to trying to find out if that type is an instance of Eq, and so on. The

compiler protects itself against this by refusing the first instance declaration.

This is commonly known as the closed-world assumption. That is, we’re assuming,

when we write a definition like the first one, that there won’t be any declarations like

the second. However, this assumption is invalid because there’s nothing to prevent the

second declaration (or some equally evil declaration). The closed world assumption

can also bite you in cases like:

class OnlyInts a where

foo :: a -> a -> Bool

instance OnlyInts Int where

foo = (==)

bar = foo 5

We’ve again made the closed-world assumption: we’ve assumed that the only in-

stance of OnlyInts is Int, but there’s no reason another instance couldn’t be defined

elsewhere, ruining our definition of bar.

8.6 Kinds

Let us take a moment and think about what types are available in Haskell. We have

simple types, like Int, Char, Double and so on. We then have type constructors like

Maybe which take a type (like Char) and produce a new type, Maybe Char. Similarly,

the type constructor [] (lists) takes a type (like Int) and produces [Int]. We have more

complex things like → (function arrow) which takes two types (say Int and Bool) and

produces a new type Int → Bool.

In a sense, these types themselves have types. Types like Int have some sort of basic

type. Types like Maybe have a type which takes something of basic type and returns

something of basic type. And so forth.

Talking about the types of types becomes unwieldy and highly ambiguous, so we

call the types of types “kinds.” What we have been calling “basic types” have kind

“*”. Something of kind * is something which can have an actual value. There is also a

single kind constructor, → with which we can build more complex kinds.

Consider Maybe. This takes something of kind * and produces something of kind

*. Thus, the kind of Maybe is * -> *. Recall the definition of Pair from Sec-

tion 4.5.1:
data Pair a b = Pair a b

Here, Pair is a type constructor which takes two arguments, each of kind * and

produces a type of kind *. Thus, the kind of Pair is * -> (* -> *). However, we

again assume associativity so we just write * -> * -> *.

Let us make a slightly strange datatype definition:

data Strange c a b =

MkStrange (c a) (c b)

Before we analyze the kind of Strange, let’s think about what it does. It is essen-

tially a pairing constructor, though it doesn’t pair actual elements, but elements within

another constructor. For instance, think of c as Maybe. Then MkStrange pairs Maybes

of the two types a and b. However, c need not be Maybe but could instead be [], or

many other things.


What do we know about c, though? We know that it must have kind * -> *. This

is because we have c a on the right hand side. The type variables a and b each have

kind * as before. Thus, the kind of Strange is (* -> *) -> * -> * -> *. That

is, it takes a constructor (c) of kind * -> * together with two types of kind * and

produces something of kind *.

A question may arise regarding how we know a has kind * and not some other

kind k. In fact, the inferred kind for Strange is (k -> *) -> k -> k -> *.

However, this requires polymorphism on the kind level, which is too complex, so we

make a default assumption that k = *.
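The kind bookkeeping can be checked by instantiating c at two different constructors of kind * -> *; the unStrange extractor and the example values are mine.

```haskell
-- Strange has kind (* -> *) -> * -> * -> *: c must itself be a
-- one-argument type constructor, while a and b are plain types.
data Strange c a b = MkStrange (c a) (c b)

unStrange :: Strange c a b -> (c a, c b)
unStrange (MkStrange x y) = (x, y)

-- Both Maybe and [] have kind * -> *, so both type-check as c.
viaMaybe :: Strange Maybe Int Bool
viaMaybe = MkStrange (Just 1) (Just True)

viaList :: Strange [] Int Bool
viaList = MkStrange [1, 2] [True]
```

Trying to instantiate c at a plain type such as Int, by contrast, would be rejected as a kind error.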

NOTE There are extensions to GHC which allow you to specify the

kind of constructors directly. For instance, if you wanted a different kind,

you could write this explicitly:
data Strange (c :: (* -> *)) a b =

MkStrange (c a) (c b)

The notation of kinds suggests that we can perform partial application, as we can

for functions. And, in fact, we can. For instance, we could have:
type MaybePair = Strange Maybe

We should note here that all of the following definitions are acceptable:
type MaybePair1 = Strange Maybe

type MaybePair2 a = Strange Maybe a

type MaybePair3 a b = Strange Maybe a b

These all appear to be the same, but they are in fact not identical as far as Haskell’s

type system is concerned. The following are all valid type definitions using the above:

type MaybePair1a = MaybePair1

type MaybePair1b = MaybePair1 Int

type MaybePair1c = MaybePair1 Int Double

type MaybePair2b = MaybePair2 Int

type MaybePair2c = MaybePair2 Int Double

type MaybePair3c = MaybePair3 Int Double

But the following are not valid:

type MaybePair2a = MaybePair2

type MaybePair3a = MaybePair3

type MaybePair3b = MaybePair3 Int

This is because while it is possible to partially apply a type constructor introduced by a data declaration, it is not possible on type synonyms. For instance, the reason MaybePair2a is invalid is because MaybePair2 is defined as a type synonym with one argument and we have given it none. The same applies for the invalid MaybePair3 definitions.

8.8 Default

what is it?


Chapter 9

Monads

The most difficult concept to master, while learning Haskell, is that of understanding

and using monads. We can distinguish two subcomponents here: (1) learning how

to use existing monads and (2) learning how to write new ones. If you want to use

Haskell, you must learn to use existing monads. On the other hand, you will only need

to learn to write your own monads if you want to become a “super Haskell guru.” Still,

if you can grasp writing your own monads, programming in Haskell will be much more

pleasant.

So far we’ve seen two uses of monads. The first use was IO actions: We’ve seen

that, by using monads, we can abstract away from the problems plaguing the RealWorld solution to IO presented in Chapter 5. The second use was representing different

types of computations in Section 8.4.2. In both cases, we needed a way to sequence

operations and saw that a sufficient definition (at least for computations) was:

success :: a -> c a

failure :: String -> c a

augment :: c a -> (a -> c b) -> c b

combine :: c a -> c a -> c a

Let’s see if this definition will enable us to also perform IO. Essentially, we need

a way to represent taking a value out of an action and performing some new operation

on it (as in the example from Section 4.4.3, rephrased slightly):

main = do

s <- readFile "somefile"

putStrLn (show (f s))

But this is exactly what augment does. Using augment, we can write the above

code as:



readFile "somefile" ‘augment‘ \s ->

putStrLn (show (f s))

This certainly seems to be sufficient. And, in fact, it turns out to be more than

sufficient.

The definition of a monad is a slightly trimmed-down version of our Computation

class. The Monad class has four methods (but the fourth method can be defined in

terms of the third):
class Monad m where

return :: a -> m a

fail :: String -> m a

(>>=) :: m a -> (a -> m b) -> m b

(>>) :: m a -> m b -> m b

The return function is equivalent to our success; fail is equivalent to our failure; and >>= (read: “bind”) is equivalent to our augment. The >> (read: “then”) method is simply a version of >>= that ignores the a. This will turn out to be useful; although, as mentioned before, it can be defined in terms of >>=:

x >> y = x >>= \_ -> y

9.1 Do Notation

We have hinted that there is a connection between monads and the do notation. Here,

we make that relationship concrete. There is actually nothing magic about the do

notation – it is simply “syntactic sugar” for monadic operations.

As we mentioned earlier, using our Computation class, we could define our

above program as:

main =

readFile "somefile" ‘augment‘ \s ->

putStrLn (show (f s))

But we now know that augment is called >>= in the monadic world. Thus, this

program really reads:

main =

readFile "somefile" >>= \s ->

putStrLn (show (f s))


And this is completely valid Haskell at this point: if you defined a function f ::

Show a => String -> a, you could compile and run this program.

This suggests that we can translate:

x <- f

g x

into f >>= \x -> g x. This is exactly what the compiler does. Talking about

do becomes easier if we do not use implicit layout (see Section ?? for how to do this).

There are four translation rules:

1. do {e} → e

2. do {e; es} → e >> do {es}

3. do {let decls; es} → let decls in do {es}

4. do {p <- e; es} → let ok p = do {es} ; ok _ = fail "..." in e >>= ok

Translation Rule 1

The first translation rule, do {e} → e, states (as we have stated before) that when

performing a single action, having a do or not is irrelevant. This is essentially the base

case for an inductive definition of do. The base case has one action (namely e here);

the other three translation rules handle the cases where there is more than one action.

Translation Rule 2

This states that do {e; es} → e >> do {es}. This tells us what to do if we have

an action (e) followed by a list of actions (es). Here, we make use of the >> function,

defined earlier. This rule simply states that to do {e; es}, we first perform the action

e, throw away the result, and then do es.

For instance, if e is putStrLn s for some string s, then the translation of do

{e; es} is to perform e (i.e., print the string) and then do es. This is clearly what

we want.

Translation Rule 3

This states that do {let decls; es} → let decls in do {es}. This rule

tells us how to deal with lets inside of a do statement. We lift the declarations within the let out and do whatever comes after the declarations.


Translation Rule 4

This states that do {p <- e; es} → let ok p = do {es} ; ok _ = fail "..." in e >>= ok. Again, it is not exactly obvious what is going on here. However, an alternate formulation of this rule, which is roughly equivalent, is: do {p <-

e; es} → e >>= \p -> es. Here, it is clear what is happening. We run the

action e, and then send the results into es, but first give the result the name p.

The reason for the complex definition is that p doesn’t need to simply be a variable;

it could be some complex pattern. For instance, the following is valid code:
main = do

('a':'b':'c':x:xs) <- getLine

putStrLn (x:xs)

In this, we’re assuming that the results of the action getLine will begin with

the string “abc” and will have at least one more character. The question becomes

what should happen if this pattern match fails. The compiler could simply throw an

error, like usual, for failed pattern matches. However, since we’re within a monad, we

have access to a special fail function, and we’d prefer to fail using that function,

rather than the “catch all” error function. Thus, the translation, as defined, allows

the compiler to fill in the ... with an appropriate error message about the pattern

matching having failed. Apart from this, the two definitions are equivalent.
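This fall-back to fail can be observed directly in the Maybe monad, where fail (a method of the MonadFail class in modern GHC) is const Nothing; the firstOf example is mine.

```haskell
-- A failed pattern match in do notation calls fail rather than error.
-- For Maybe, fail _ = Nothing, so the whole computation becomes Nothing.
firstOf :: Maybe [Int] -> Maybe Int
firstOf m = do
  (x:_) <- m   -- desugars to m >>= ok, with ok falling back to fail
  return x
```

When the list is empty the pattern (x:_) cannot match, so the computation yields Nothing instead of crashing.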

9.2 Definition

There are three rules, called the “Monad Laws,” that all monads must obey (and it is up to you to ensure that your monads obey these rules):

1. return a >>= f ≡ f a

2. f >>= return ≡ f

3. f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h

Law 1

This states that return a >>= f ≡ f a. Suppose we think about monads as com-

putations. This means that if we create a trivial computation that simply returns the

value a regardless of anything else (this is the return a part); and then bind it to-

gether with some other computation f, then this is equivalent to simply performing the

computation f on a directly.

For example, suppose f is the function putStrLn and a is the string “Hello

World.” This rule states that binding a computation whose result is “Hello World”

to putStrLn is the same as simply printing it to the screen. This seems to make

sense.

In do notation, this law states that the following two programs are equivalent:


law1a = do

x <- return a

f x

law1b = do

f a

Law 2

The second monad law states that f >>= return ≡ f for some computation f. In

other words, the law states that if we perform the computation f and then pass the result

on to the trivial return function, then all we have done is to perform the computation.

That this law must hold should be obvious. To see this, think of f as getLine

(reads a string from the keyboard). This law states that reading a string and then re-

turning the value read is exactly the same as just reading the string.

In do notation, the law states that the following two programs are equivalent:

law2a = do

x <- f

return x

law2b = do

f

Law 3

This states that f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h. At first

glance, this law is not as easy to grasp as the other two. It is essentially an associativity

law for monads.

NOTE An operation · is associative if (f · g) · h = f · (g · h). For instance, + and * are associative, since bracketing on these functions doesn’t make a difference. On the other hand, - and / are not associative since, for example, 5 − (3 − 1) ≠ (5 − 3) − 1.

If we throw away the messiness with the lambdas, we see that this law states: f

>>= (g >>= h) ≡ (f >>= g) >>= h. The intuition behind this law is that

when we string together actions, it doesn’t matter how we group them.

For a concrete example, take f to be getLine. Take g to be an action which takes

a value as input, prints it to the screen, reads another string via getLine, and then

returns that newly read string. Take h to be putStrLn.

Let’s consider what (\x -> g x >>= h) does. It takes a value called x, and

runs g on it, feeding the results into h. In this instance, this means that it’s going to


take a value, print it, read another value and then print that. Thus, the entire left hand

side of the law first reads a string and then does what we’ve just described.

On the other hand, consider (f >>= g). This action reads a string from the

keyboard, prints it, and then reads another string, returning that newly read string as a

result. When we bind this with h as on the right hand side of the law, we get an action

that does the action described by (f >>= g), and then prints the results.

Clearly, these two actions are the same.

While this explanation is quite complicated, and the text of the law is also quite

complicated, the actual meaning is simple: if we have three actions, and we compose

them in the same order, it doesn’t matter where we put the parentheses. The rest is just

notation.

In do notation, the law says that the following two programs are equivalent:

law3a = do

x <- f

do y <- g x

h y

law3b = do

y <- do x <- f

g x

h y
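The three laws can also be spot-checked with concrete values in the Maybe monad; the functions f' and g' below are examples of mine, not from the tutorial.

```haskell
f' :: Int -> Maybe Int
f' x = Just (x + 1)

g' :: Int -> Maybe Int
g' x = if x > 0 then Just (x * 2) else Nothing

-- Law 1: return a >>= f  ==  f a
law1 :: Bool
law1 = (return 3 >>= f') == f' 3

-- Law 2: f >>= return  ==  f
law2 :: Bool
law2 = (Just 3 >>= return) == Just 3

-- Law 3: f >>= (\x -> g x >>= h)  ==  (f >>= g) >>= h
law3 :: Bool
law3 = (Just 3 >>= (\x -> f' x >>= g')) == ((Just 3 >>= f') >>= g')
```

Of course, checking a few values is not a proof; it is merely a sanity check that the instance behaves as the laws demand.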

9.3 A Simple State Monad

One of the simplest monads that we can craft is a state-passing monad. In Haskell, all

state information usually must be passed to functions explicitly as arguments. Using

monads, we can effectively hide some state information.

Suppose we have a function f of type a → b, and we need to add state to this

function. In general, if state is of type state, we can encode it by changing the type of

f to a → state → (state, b). That is, the new version of f takes the original parameter

of type a and a new state parameter. And, in addition to returning the value of type b,

it also returns an updated state, encoded in a tuple.

For instance, suppose we have a binary tree defined as:

data Tree a

= Leaf a

| Branch (Tree a) (Tree a)

Now, we can write a simple map function to apply some function to each value in

the leaves:

mapTree f (Leaf a) = Leaf (f a)

mapTree f (Branch lhs rhs) =

Branch (mapTree f lhs) (mapTree f rhs)

This works fine until we need to write a function that numbers the leaves left to

right. In a sense, we need to add state, which keeps track of how many leaves we’ve

numbered so far, to the mapTree function. We can augment the function to something

like:

mapTreeState :: (a -> state -> (state, b)) ->

Tree a -> state -> (state, Tree b)

mapTreeState f (Leaf a) state =

let (state’, b) = f a state

in (state’, Leaf b)

mapTreeState f (Branch lhs rhs) state =

let (state’ , lhs’) = mapTreeState f lhs state

(state’’, rhs’) = mapTreeState f rhs state’

in (state’’, Branch lhs’ rhs’)

This is beginning to get a bit unwieldy, and the type signature is getting harder and

harder to understand. What we want to do is abstract away the state passing part. That

is, the differences between mapTree and mapTreeState are: (1) the augmented f

type, (2) we replaced the type -> Tree b with -> state -> (state, Tree

b). Notice that both types changed in exactly the same way. We can abstract this away

with a type synonym declaration:

type State st a = st -> (st, a)

returnState :: a -> State st a

returnState a = \st -> (st, a)

bindState :: State st a -> (a -> State st b) ->

State st b

bindState m k = \st ->

let (st’, a) = m st

m’ = k a

in m’ st’

Let’s examine each of these in turn. The first function, returnState, takes a

value of type a and creates something of type State st a. If we think of the st

as the state, and the value of type a as the value, then this is a function that doesn’t

change the state and returns the value a.

The bindState function looks distinctly like the interior let declarations in mapTreeState.

It takes two arguments. The first argument is an action that returns something of type


a with state st. The second is a function that takes this a and produces something of

type b also with the same state. The result of bindState is essentially the result of

transforming the a into a b.

The definition of bindState takes an initial state, st. It first applies this to the

State st a argument called m. This gives back a new state st’ and a value a. It

then lets the function k act on a, producing something of type State st b, called

m’. We finally run m’ with the new state st’.

We write a new function, mapTreeStateM, and give it the type:

mapTreeStateM :: (a -> State st b) -> Tree a -> State st (Tree b)

Using returnState and bindState, we can write this function without ever having to explicitly talk about the state:

mapTreeStateM f (Leaf a) =

f a ‘bindState‘ \b ->

returnState (Leaf b)

mapTreeStateM f (Branch lhs rhs) =

mapTreeStateM f lhs ‘bindState‘ \lhs’ ->

mapTreeStateM f rhs ‘bindState‘ \rhs’ ->

returnState (Branch lhs’ rhs’)

In the Leaf case, we apply f to a and then bind the result to a function that takes

the result and returns a Leaf with the new value.

In the Branch case, we recurse on the left-hand-side, binding the result to a func-

tion that recurses on the right-hand-side, binding that to a simple function that returns

the newly created Branch.
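With these plumbing functions in hand, the leaf-numbering task from the start of the section can be solved without any visible state threading; this sketch bundles the pieces together, and the numberLeaves wrapper is mine.

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving (Eq, Show)

type State st a = st -> (st, a)

returnState :: a -> State st a
returnState a = \st -> (st, a)

bindState :: State st a -> (a -> State st b) -> State st b
bindState m k = \st -> let (st', a) = m st in k a st'

-- The state threading is hidden entirely inside bindState.
mapTreeStateM :: (a -> State st b) -> Tree a -> State st (Tree b)
mapTreeStateM f (Leaf a) =
  f a `bindState` \b ->
  returnState (Leaf b)
mapTreeStateM f (Branch lhs rhs) =
  mapTreeStateM f lhs `bindState` \lhs' ->
  mapTreeStateM f rhs `bindState` \rhs' ->
  returnState (Branch lhs' rhs')

-- Number the leaves left to right, starting from 0 (example of mine).
numberLeaves :: Tree a -> Tree Int
numberLeaves t = snd (mapTreeStateM (\_ n -> (n + 1, n)) t 0)
```

Each leaf consumes the current counter and passes the incremented counter on to the next leaf.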

As you have probably guessed by this point, State st is a monad, returnState

is analogous to the overloaded return method, and bindState is analogous to the

overloaded >>= method. In fact, we can verify that State st a obeys the monad

laws:

Law 1 states: return a >>= f ≡ f a. Let’s calculate on the left hand side,

substituting our names:

returnState a ‘bindState‘ f

==>

\st -> let (st’, a) = (returnState a) st

m’ = f a

in m’ st’

==>

\st -> let (st’, a) = (\st -> (st, a)) st

in (f a) st’

==>

\st -> let (st’, a) = (st, a)

in (f a) st’


==>

\st -> (f a) st

==>

f a

In the first step, we simply substitute the definition of bindState. In the second

step, we simplify the last two lines and substitute the definition of returnState. In

the third step, we apply st to the lambda function. In the fourth step, we rename st’

to st and remove the let. In the last step, we eta reduce.

Moving on to Law 2, we need to show that f >>= return ≡ f. This is shown

as follows:

f ‘bindState‘ returnState

==>

\st -> let (st’, a) = f st

in (returnState a) st’

==>

\st -> let (st’, a) = f st

in (\st -> (st, a)) st’

==>

\st -> let (st’, a) = f st

in (st’, a)

==>

\st -> f st

==>

f

Finally, we need to show that State obeys the third law: f >>= (\x -> g x

>>= h) ≡ (f >>= g) >>= h. This is much more involved to show, so we will

only sketch the proof here. Notice that we can write the left-hand-side as:

\st -> let (st’, a) = f st
in (\x -> g x ‘bindState‘ h) a st’

==>

\st -> let (st’, a) = f st

in (g a ‘bindState‘ h) st’

==>

\st -> let (st’, a) = f st

in (\st’ -> let (st’’, b) = g a st’
in h b st’’) st’

==>

\st -> let (st’ , a) = f st

(st’’, b) = g a st’

(st’’’,c) = h b st’’

in (st’’’,c)


The interesting thing to note here is that we have both action applications on the

same let level. Since let is associative, this means that we can put whichever bracketing

we prefer and the results will not change. Of course, this is an informal, “hand waving”

argument and it would take us a few more derivations to actually prove, but this gives

the general idea.

Now that we know that State st is actually a monad, we’d like to make it an

instance of the Monad class. Unfortunately, the straightforward way of doing this

doesn’t work. We can’t write:

instance Monad (State state) where
...

This is because you cannot make instances out of non-fully-applied type synonyms. Instead, what we need to do is convert the type synonym into a newtype:

newtype State state a = State (state -> (state, a))

Unfortunately, this means that we need to do some packing and unpacking of the

State constructor in the Monad instance declaration, but it’s not terribly difficult:

instance Monad (State state) where
return a = State (\state -> (state, a))

State run >>= action = State run’

where run’ st =

let (st’, a) = run st

State run’’ = action a

in run’’ st’
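On a modern GHC, every Monad must also be a Functor and an Applicative, so a full, self-contained version of this declaration looks something like the following sketch (the field name runState is our addition):

```haskell
newtype State st a = State { runState :: st -> (st, a) }

instance Functor (State st) where
  fmap f (State run) = State $ \st ->
    let (st', a) = run st in (st', f a)

instance Applicative (State st) where
  pure a = State (\st -> (st, a))
  State rf <*> State ra = State $ \st ->
    let (st',  f) = rf st
        (st'', a) = ra st'
    in (st'', f a)

instance Monad (State st) where
  return = pure
  -- The packing and unpacking of the State constructor described above.
  State run >>= action = State run'
    where run' st = let (st', a)     = run st
                        State run'' = action a
                    in run'' st'
```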

We can now write our mapTreeM function with the type:

mapTreeM :: (a -> State state b) -> Tree a -> State state (Tree b)

mapTreeM f (Leaf a) = do

b <- f a

return (Leaf b)

mapTreeM f (Branch lhs rhs) = do

lhs’ <- mapTreeM f lhs

rhs’ <- mapTreeM f rhs

return (Branch lhs’ rhs’)

which is significantly cleaner than before. In fact, if we remove the type signature, we

get the more general type:

mapTreeM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)


That is, mapTreeM can be run in any monad, not just our State monad.

Now, the nice thing about encapsulating the stateful aspect of the computation like

this is that we can provide functions to get and change the current state. These look like:

getState :: State state state

getState = State (\state -> (state, state))

putState :: state -> State state ()
putState new = State (\_ -> (new, ()))

Here, getState is a monadic operation that takes the current state, passes it

through unchanged, and then returns it as the value. The putState function takes a

new state and produces an action that ignores the current state and inserts the new one.

Now, we can write our numberTree function as:

numberTree :: Tree a -> State Int (Tree (a, Int))

numberTree tree = mapTreeM number tree

where number v = do

cur <- getState

putState (cur+1)

return (v,cur)

We also write a simple function, runStateM, to run a State computation and extract the final value:

runStateM (State f) st = snd (f st)

testTree =

Branch

(Branch

(Leaf ’a’)

(Branch

(Leaf ’b’)

(Leaf ’c’)))

(Branch

(Leaf ’d’)

(Leaf ’e’))

State> runStateM (numberTree testTree) 1

Branch (Branch (Leaf (’a’,1)) (Branch (Leaf (’b’,2))

(Leaf (’c’,3)))) (Branch (Leaf (’d’,4))

(Leaf (’e’,5)))


This may seem like a large amount of work to do something simple. However,

note the new power of mapTreeM. We can also print out the leaves of the tree in a

left-to-right fashion as:

’a’

’b’

’c’

’d’

’e’
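The call producing this output is elided above; since mapTreeM works in any monad, printing the leaves is just a matter of running it in IO. A self-contained sketch (Tree and mapTreeM are repeated here so the snippet stands alone):

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Show, Eq)

-- The general mapTreeM derived in this section: any monad will do.
mapTreeM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)
mapTreeM f (Leaf a) = do
  b <- f a
  return (Leaf b)
mapTreeM f (Branch lhs rhs) = do
  lhs' <- mapTreeM f lhs
  rhs' <- mapTreeM f rhs
  return (Branch lhs' rhs')

-- Print each leaf left-to-right by running mapTreeM in IO.
printLeaves :: Show a => Tree a -> IO ()
printLeaves t = do
  _ <- mapTreeM (\v -> print v >> return v) t
  return ()
```

Running printLeaves on the testTree above prints 'a' through 'e', one per line.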

This crucially relies on the fact that mapTreeM has the more general type involving

arbitrary monads – not just the state monad. Furthermore, we can write an action that

will make each leaf value equal to its old value as well as all the values preceding:

where fluff v = do

cur <- getState

putState (v:cur)

return (v:cur)

Branch (Branch (Leaf "a") (Branch (Leaf "ba")

(Leaf "cba"))) (Branch (Leaf "dcba")

(Leaf "edcba"))

In fact, you don’t even need to write your own monad instance and datatype. All

this is built in to the Control.Monad.State module. There, our runStateM

is called evalState; our getState is called get; and our putState is called

put.
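For comparison, here is numberTree written directly against Control.Monad.State (from the mtl package, which typically ships with GHC); the names mirror the chapter's own definitions:

```haskell
import Control.Monad.State (State, get, put, evalState)

data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Show, Eq)

mapTreeM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)
mapTreeM f (Leaf a) = do
  b <- f a
  return (Leaf b)
mapTreeM f (Branch lhs rhs) = do
  lhs' <- mapTreeM f lhs
  rhs' <- mapTreeM f rhs
  return (Branch lhs' rhs')

-- get/put/evalState play the roles of getState/putState/runStateM.
numberTree :: Tree a -> State Int (Tree (a, Int))
numberTree = mapTreeM number
  where number v = do
          cur <- get
          put (cur + 1)
          return (v, cur)
```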

This module also contains a state transformer monad, which we will discuss in

Section 9.7.

9.4 Common Monads

It turns out that many of our favorite datatypes are actually monads themselves. Consider, for instance, lists. They have a monad definition that looks something like:

instance Monad [] where

return x = [x]

l >>= f = concatMap f l

fail _ = []


This enables us to use lists in do notation. For instance, given the definition:

cross l1 l2 = do

x <- l1

y <- l2

return (x,y)

Monads> cross "ab" "def"
[(’a’,’d’),(’a’,’e’),(’a’,’f’),(’b’,’d’),(’b’,’e’),

(’b’,’f’)]

It is not a coincidence that this looks very much like the list comprehension form:

Monads> [(x,y) | x <- "ab", y <- "def"]

[(’a’,’d’),(’a’,’e’),(’a’,’f’),(’b’,’d’),(’b’,’e’),

(’b’,’f’)]

In fact, in older versions of Haskell, the list comprehension form could be used for any monad – not just lists. However, in the current version of Haskell, this is no longer allowed.

The Maybe type is also a monad, with failure being represented as Nothing and success as Just. We get the following instance declaration:

instance Monad Maybe where

return a = Just a

Nothing >>= f = Nothing

Just x >>= f = f x

fail _ = Nothing

We can use the same cross product function that we did for lists on Maybes. This

is because the do notation works for any monad, and there’s nothing specific to lists

about the cross function.

Monads> cross (Just ’a’) (Just ’b’)
Just (’a’,’b’)

Monads> cross (Nothing :: Maybe Char) (Just ’b’)

Nothing

Monads> cross (Just ’a’) (Nothing :: Maybe Char)

Nothing

Monads> cross (Nothing :: Maybe Char)

(Nothing :: Maybe Char)

Nothing
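These transcripts can be reproduced with a small self-contained version of cross; the definition is exactly the one above, and the list and Maybe behaviours follow from the instances alone:

```haskell
-- cross works in any monad, exactly as defined in the text.
cross :: Monad m => m a -> m b -> m (a, b)
cross l1 l2 = do
  x <- l1
  y <- l2
  return (x, y)
```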


What this means is that if we write a function (like searchAll from Section 8.4)

only in terms of monadic operators, we can use it with any monad, depending on what

we mean. Using real monadic functions (not do notation), the searchAll function

looks something like:

searchAll g@(Graph vl el) src dst
| src == dst = return [src]

| otherwise = search’ el

where search’ [] = fail "no path"

search’ ((u,v,_):es)

| src == u =

searchAll g v dst >>= \path ->

return (u:path)

| otherwise = search’ es

The type of this function is Monad m => Graph v e -> Int -> Int ->

m [Int]. This means that no matter what monad we’re using at the moment, this

function will perform the calculation. Suppose we have the following graph:

gr = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)]
[(0,1,’l’), (0,2,’m’), (1,3,’n’), (2,3,’m’)]

This represents a graph with four nodes, labelled a,b,c and d. There is an edge

from a to both b and c. There is also an edge from both b and c to d. Using the Maybe

monad, we can compute the path from a to d:

Monads> searchAll gr 0 3 :: Maybe [Int]
Just [0,1,3]

We provide the type signature, so that the interpreter knows what monad we’re

using. If we try to search in the opposite direction, there is no path. The inability to

find a path is represented as Nothing in the Maybe monad:

Monads> searchAll gr 3 0 :: Maybe [Int]
Nothing

Note that the string “no path” has disappeared since there’s no way for the Maybe

monad to record this.

If we perform the same impossible search in the list monad, we get the empty list,

indicating no path:

Monads> searchAll gr 3 0 :: [[Int]]
[]


If we perform the possible search, we get back a list containing the first path:

Monads> searchAll gr 0 3 :: [[Int]]
[[0,1,3]]

You may have expected this function call to return all paths, but, as coded, it does not. See Section 9.6 for more about using lists to represent nondeterminism.

If we use the IO monad, we can actually get at the error message, since IO knows

how to keep track of error messages:

Monads> searchAll gr 0 3 :: IO [Int]
Monads> it

[0,1,3]

Monads> searchAll gr 3 0 :: IO [Int]

*** Exception: user error

Reason: no path

In the first case, we needed to type it to get GHCi to actually evaluate the search.

There is one problem with this implementation of searchAll: if it finds an edge

that does not lead to a solution, it won’t be able to backtrack. This has to do with

the recursive call to searchAll inside of search’. Consider, for instance, what

happens if searchAll g v dst doesn’t find a path. There’s no way for this im-

plementation to recover. For instance, if we remove the edge from node b to node d,

we should still be able to find a path from a to d, but this algorithm can’t find it. We

define:

gr2 = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)]

[(0,1,’l’), (0,2,’m’), (2,3,’m’)]

Monads> searchAll gr2 0 3 :: IO [Int]
*** Exception: user error

Reason: no path

To fix this, we need a function like combine from our Computation class. We

will see how to do this in Section 9.6.

Exercises

Exercise 9.1 Verify that Maybe obeys the three monad laws.

Exercise 9.2 The type Either String is a monad that can keep track of errors. Write an

instance for it, and then try doing the search from this chapter using this monad.

Hint: Your instance declaration should begin: instance Monad (Either String)

where.

9.5 Monadic Combinators

The Monad/Control.Monad library contains a few very useful monadic combina-

tors, which haven’t yet been thoroughly discussed. The ones we will discuss in this

section, together with their types, are:

• (=<<) :: (a -> m b) -> m a -> m b

• mapM :: (a -> m b) -> [a] -> m [b]

• mapM_ :: (a -> m b) -> [a] -> m ()

• filterM :: (a -> m Bool) -> [a] -> m [a]

• foldM :: (a -> b -> m a) -> a -> [b] -> m a

• sequence :: [m a] -> m [a]

• sequence_ :: [m a] -> m ()

• liftM :: (a -> b) -> m a -> m b

• when :: Bool -> m () -> m ()

• join :: m (m a) -> m a

In the above, m is always assumed to be an instance of Monad.

In general, functions with an underscore at the end are equivalent to the ones with-

out, except that they do not return any value.

The =<< function is exactly the same as >>=, except it takes its arguments in the

opposite order. For instance, in the IO monad, we can write either of the following:

Monads> writeFile "foo" "hello world!" >>
(readFile "foo" >>= putStrLn)

hello world!

Monads> writeFile "foo" "hello world!" >>

(putStrLn =<< readFile "foo")

hello world!

The mapM, filterM and foldM functions are our old friends map, filter and foldr wrapped up inside of monads. These functions are incredibly useful (particularly foldM) when working with monads. We can use mapM_, for instance, to print a list of things to the screen:

Monads> mapM_ print [1..5]

1

2

3

4

5


We can use foldM to sum a list and print the intermediate sum at each step:

Monads> foldM (\a b ->
putStrLn (show a ++ "+" ++ show b ++

"=" ++ show (a+b)) >>

return (a+b)) 0 [1..5]

0+1=1

1+2=3

3+3=6

6+4=10

10+5=15

Monads> it

15

The sequence and sequence_ functions simply “execute” a list of actions. For instance:

*Monads> sequence [print 1, print 2, print ’a’]

1

2

’a’

*Monads> it

[(),(),()]

*Monads> sequence_ [print 1, print 2, print ’a’]

1

2

’a’

*Monads> it

()

We can see that the underscored version doesn’t return each value, while the non-

underscored version returns the list of the return values.

The liftM function “lifts” a non-monadic function to a monadic function. (Do

not confuse this with the lift function used for monad transformers in Section 9.7.)

This is useful for shortening code (among other things). For instance, we might want

to write a function that prepends each line in a file with its line number. We can do this

with:

numberFile fp = do

text <- readFile fp

let l = lines text

let n = zipWith (\n t -> show n ++ ’ ’ : t) [1..] l

mapM_ putStrLn n


Using liftM, we can write the same function more compactly:

numberFile fp = do

l <- lines ‘liftM‘ readFile fp

let n = zipWith (\n t -> show n ++ ’ ’ : t) [1..] l

mapM_ putStrLn n

In fact, you can apply any sort of (pure) processing to a file using liftM. For

instance, perhaps we also want to split lines into words; we can do this with:

...

w <- (map words . lines) ‘liftM‘ readFile fp

...

Note that the parentheses are required, since the (.) function has the same fixity as ‘liftM‘.

Lifting pure functions into monads is also useful in other monads. For instance, liftM can be used to apply a function inside of a Just:

*Monads> liftM (+1) (Just 5)
Just 6

*Monads> liftM (+1) Nothing

Nothing

The when function executes a monadic action only if a condition is met. So, if we only want to print non-empty lines:

Monads> mapM_ (\l -> when (not $ null l) (putStrLn l))
["","abc","def","","","ghi"]

abc

def

ghi

Of course, the same could be accomplished with filter, but sometimes when is

more convenient.

Finally, the join function is the monadic equivalent of concat on lists. In fact,

when m is the list monad, join is exactly concat. In other monads, it accomplishes

a similar task:

Monads> join (Just (Just ’a’))
Just ’a’

Monads> join (Just (Nothing :: Maybe Char))

Nothing

Monads> join (Nothing :: Maybe (Maybe Char))

Nothing


Monads> join (return (putStrLn "hello"))
hello
Monads> return (putStrLn "hello")

Monads> join [[1,2,3],[4,5]]

[1,2,3,4,5]
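join itself is definable from >>= alone – a standard identity, shown here under the hypothetical name joinM to avoid clashing with the library version:

```haskell
-- Collapsing one layer of monadic structure is just binding with id.
joinM :: Monad m => m (m a) -> m a
joinM x = x >>= id
```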

These functions will turn out to be even more useful as we move on to more ad-

vanced topics in Chapter 10.

9.6 MonadPlus

Given only the >>= and return functions, it is impossible to write a function like combine with type c a → c a → c a. However, such a function is so generally useful that it exists in another class called MonadPlus. In addition to having a combine function, instances of MonadPlus also have a “zero” element that is the identity under the “plus” (i.e., combine) action. The definition is:

class Monad m => MonadPlus m where

mzero :: m a

mplus :: m a -> m a -> m a

In order to gain access to MonadPlus, you need to import the Monad module

(or Control.Monad in the hierarchical libraries).

In Section 9.4, we showed that Maybe and list are both monads. In fact, they are also both instances of MonadPlus. In the case of Maybe, the zero element is Nothing; in the case of lists, it is the empty list. The mplus operation on Maybe is Nothing, if both elements are Nothing; otherwise, it is the first Just value. For lists, mplus is the same as ++.

That is, the instance declarations look like:

instance MonadPlus Maybe where
mzero = Nothing

mplus Nothing y = y

mplus x _ = x

instance MonadPlus [] where
mzero = []

mplus x y = x ++ y
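A few quick checks of these instances, using the mplus and mzero that Control.Monad exports:

```haskell
import Control.Monad (mplus, mzero)

maybeFirst :: Maybe Int
maybeFirst = Just 1 `mplus` Just 2     -- the first Just wins

maybeFallback :: Maybe Int
maybeFallback = mzero `mplus` Just 2   -- Nothing is the identity

listAppend :: [Int]
listAppend = [1, 2] `mplus` [3]        -- for lists, mplus is ++
```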

We can use this class to reimplement the search function we’ve been exploring,

such that it will explore all possible paths. The new function looks like:

searchAll2 g@(Graph vl el) src dst
| src == dst = return [src]


| otherwise = search’ el

where search’ [] = fail "no path"

search’ ((u,v,_):es)

| src == u =

(searchAll2 g v dst >>= \path ->

return (u:path)) ‘mplus‘

search’ es

| otherwise = search’ es

Now, when we’re going through the edge list in search’, and we come across a

matching edge, not only do we explore this path, but we also continue to explore the

out-edges of the current node in the recursive call to search’.

The IO monad is not an instance of MonadPlus, so we’re not able to execute

the search with this monad. We can see that when using lists as the monad, we (a) get

all possible paths in gr and (b) get a path in gr2.

MPlus> searchAll2 gr 0 3 :: [[Int]]
[[0,1,3],[0,2,3]]

MPlus> searchAll2 gr2 0 3 :: [[Int]]

[[0,2,3]]

Suppose instead we had written:

searchAll2 g@(Graph vl el) src dst
| src == dst = return [src]

| otherwise = search’ el

where search’ [] = fail "no path"

search’ ((u,v,_):es)

| src == u = do

path <- searchAll2 g v dst

rest <- search’ es

return ((u:path) ‘mplus‘ rest)

| otherwise = search’ es

But note that this doesn’t do what we want. Here, if the recursive call to searchAll2

fails, we don’t try to continue and execute search’ es. The call to mplus must be

at the top level in order for it to work.

Exercises

Exercise 9.3 Suppose that we changed the order of arguments to mplus. I.e., the

matching case of search’ looked like:

search’ es ‘mplus‘

(searchAll2 g v dst >>= \path ->

return (u:path))


How would you expect this to change the results when using the list monad on gr?

Why?
9.7 Monad Transformers

Often we want to “piggyback” monads on top of each other. For instance, there might

be a case where you need access to both IO operations through the IO monad and

state functions through some state monad. In order to accomplish this, we introduce

a MonadTrans class, which essentially “lifts” the operations of one monad into another. You can think of this as stacking monads on top of each other. This class has a simple method: lift. The class declaration for MonadTrans is:

class MonadTrans t where
lift :: Monad m => m a -> t m a

The idea here is that t is the outer monad and that m lives inside of it. In order to

execute a command of type Monad m => m a, we first lift it into the transformer.

The simplest example of a transformer (and arguably the most useful) is the state transformer monad, which is a state monad wrapped around an arbitrary monad. Before, we defined a state monad as:

newtype State state a = State (state -> (state, a))

Now, instead of using a function of type state -> (state, a) as the monad, we assume there’s some other monad m and make the internal action into something of type state -> m (state, a). This gives rise to the following definition for a state transformer:

newtype StateT state m a =
StateT (state -> m (state, a))

For instance, we can think of m as IO. In this case, our state transformer monad is

able to execute actions in the IO monad. First, we make this an instance of MonadTrans:

instance MonadTrans (StateT state) where
lift m = StateT (\s -> do a <- m

return (s,a))

Here, lifting a function from the realm of m to the realm of StateT state simply

involves keeping the state (the s value) constant and executing the action.

Of course, we also need to make StateT a monad, itself. This is relatively

straightforward, provided that m is already a monad:

instance Monad m => Monad (StateT state m) where
return a = StateT (\s -> return (s,a))

StateT m >>= k = StateT (\s -> do

(s’, a) <- m s

let StateT m’ = k a

m’ s’)

fail s = StateT (\_ -> fail s)

The idea behind the definition of return is that we keep the state constant and

simply return the state/a pair in the enclosed monad. Note that the use of return in

the definition of return refers to the enclosed monad, not the state transformer.

In the definition of bind, we create a new StateT that takes a state s as an ar-

gument. First, it applies this state to the first action (StateT m) and gets the new

state and answer as a result. It then runs the k action on this new state and gets a new

transformer. It finally applies the new state to this transformer. This definition is nearly

identical to the definition of bind for the standard (non-transformer) State monad

described in Section 9.3.

The fail function passes on the call to fail in the enclosed monad, since state

transformers don’t natively know how to deal with failure.

Of course, in order to actually use this monad, we need to provide functions getT, putT and evalStateT. These are analogous to getState, putState and runStateM from Section 9.3:

getT :: Monad m => StateT s m s

getT = StateT (\s -> return (s, s))

putT :: Monad m => s -> StateT s m ()
putT s = StateT (\_ -> return (s, ()))

evalStateT :: Monad m => StateT s m a -> s -> m a
evalStateT (StateT m) state = do

(s’, a) <- m state

return a

These functions should be straightforward. Note, however, that the result of evalStateT

is actually a monadic action in the enclosed monad. This is typical of monad trans-

formers: they don’t know how to actually run things in their enclosed monad (they

only know how to lift actions). Thus, what you get out is a monadic action in the

inside monad (in our case, IO), which you then need to run yourself.
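Putting the pieces of this section together for a modern GHC (where the Functor and Applicative instances are mandatory) gives the following self-contained sketch; the field name runStateT is our addition:

```haskell
import Control.Monad (ap, liftM)

newtype StateT st m a = StateT { runStateT :: st -> m (st, a) }

instance Monad m => Functor (StateT st m) where
  fmap = liftM

instance Monad m => Applicative (StateT st m) where
  pure a = StateT (\s -> return (s, a))
  (<*>)  = ap

instance Monad m => Monad (StateT st m) where
  return = pure
  StateT m >>= k = StateT $ \s -> do
    (s', a) <- m s
    let StateT m' = k a
    m' s'

-- Lift an action of the inner monad, keeping the state constant.
lift :: Monad m => m a -> StateT st m a
lift m = StateT (\s -> do a <- m; return (s, a))

getT :: Monad m => StateT s m s
getT = StateT (\s -> return (s, s))

putT :: Monad m => s -> StateT s m ()
putT s = StateT (\_ -> return (s, ()))

evalStateT :: Monad m => StateT s m a -> s -> m a
evalStateT (StateT m) st = do
  (_, a) <- m st
  return a
```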

We can use state transformers to reimplement a version of our mapTreeM function

from Section 9.3. The only change here is that when we get to a leaf, we print out the

value of the leaf; when we get to a branch, we just print out “Branch.”

mapTreeM action (Leaf a) = do
lift (putStrLn ("Leaf " ++ show a))


b <- action a

return (Leaf b)

mapTreeM action (Branch lhs rhs) = do

lift (putStrLn "Branch")

lhs’ <- mapTreeM action lhs

rhs’ <- mapTreeM action rhs

return (Branch lhs’ rhs’)

The only difference between this function and the one from Section 9.3 is the calls

to lift (putStrLn ...) as the first line. The lift tells us that we’re going to

be executing a command in an enclosed monad. In this case, the enclosed monad is

IO, since the command lifted is putStrLn.

The type of this function is relatively complex:

mapTreeM :: (MonadTrans t, Monad (t IO), Show a) =>
(a -> t IO a1) -> Tree a -> t IO (Tree a1)

Ignoring, for a second, the class constraints, this says that mapTreeM takes an action and a tree and returns a tree, just as before. In the constraints, we require that t is a monad transformer (since we apply lift in it); we require that t IO is a monad, and since we use putStrLn we know that the enclosed monad is IO; finally, we require that a is an instance of Show – this is simply because we use show to show the value of leaves.

Now, we simply change numberTree to use this version of mapTreeM, and the

new versions of get and put, and we end up with:

numberTree tree = mapTreeM number tree
where number v = do

cur <- getT

putT (cur+1)

return (v,cur)

Branch

Branch

Leaf ’a’

Branch

Leaf ’b’

Leaf ’c’

Branch

Leaf ’d’

Leaf ’e’

*MTrans> it


Branch (Branch (Leaf (’a’,0))
(Branch (Leaf (’b’,1)) (Leaf (’c’,2))))

(Branch (Leaf (’d’,3)) (Leaf (’e’,4)))

One problem not specified in our discussion of MonadPlus is that our search algorithm will fail to terminate on graphs with cycles. Consider:

gr3 = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)]

[(0,1,’l’), (1,0,’m’), (0,2,’n’),

(1,3,’o’), (2,3,’p’)]

In this graph, there is a back edge from node b back to node a. If we attempt to run

searchAll2, regardless of what monad we use, it will fail to terminate. Moreover,

if we move this erroneous edge to the end of the list (and call this gr4), the result of

searchAll2 gr4 0 3 will contain an infinite number of paths: presumably we

only want paths that don’t contain cycles.

In order to get around this problem, we need to introduce state. Namely, we need

to keep track of which nodes we have visited, so that we don’t visit them again.

We can do this as follows:

searchAll5 g@(Graph vl el) src dst

| src == dst = do

visited <- getT

putT (src:visited)

return [src]

| otherwise = do

visited <- getT

putT (src:visited)

if src ‘elem‘ visited

then mzero

else search’ el

where

search’ [] = mzero

search’ ((u,v,_):es)

| src == u =

(do path <- searchAll5 g v dst

return (u:path)) ‘mplus‘

search’ es

| otherwise = search’ es

Here, we implicitly use a state transformer (see the calls to getT and putT) to

keep track of visited states. We only continue to recurse when we encounter a state we haven’t yet visited. Furthermore, when we recurse, we add the current state to our set

of visited states.

Now, we can run the state transformer and get out only the correct paths, even on

the cyclic graphs:


MTrans> evalStateT (searchAll5 gr3 0 3) [] :: [[Int]]
[[0,1,3],[0,2,3]]

MTrans> evalStateT (searchAll5 gr4 0 3) [] :: [[Int]]

[[0,1,3],[0,2,3]]

Here, the empty list provided as an argument to evalStateT is the initial state

(i.e., the initial visited list). In our case, it is empty.

We can also provide an execStateT method that, instead of returning a result,

returns the final state. This function looks like:

execStateT (StateT m) state = do

(s’, a) <- m state

return s’

This is not so useful in our case, as it will return exactly the reverse of evalStateT

(try it and find out!), but can be useful in general (if, for instance, we need to know how

many numbers are used in numberTree).

Exercises

Exercise 9.4 Write a function searchAll6, based on the code for searchAll2,

that, at every entry to the main function (not the recursion over the edge list), prints the

search being conducted. For instance, the output generated for searchAll6 gr 0

3 should look like:

Exploring 0 -> 3

Exploring 1 -> 3

Exploring 3 -> 3

Exploring 2 -> 3

Exploring 3 -> 3

MTrans> it

[[0,1,3],[0,2,3]]

In order to do this, you will have to define your own list monad transformer and make

appropriate instances of it.

Exercise 9.5 Combine the searchAll5 function (from this section) with the searchAll6

function (from the previous exercise) into a single function called searchAll7. This

function should perform IO as in searchAll6 but should also keep track of state

using a state transformer.

9.8 Parsing Monads

It turns out that a certain class of parsers are all monads. This makes the construction

of parsing libraries in Haskell very clean. In this chapter, we begin by building our own

(small) parsing library in Section 9.8.1 and then introduce the Parsec parsing library in

Section 9.8.2.
9.8.1 A Simple Parsing Monad

Consider the task of parsing. A simple parsing monad is much like a state monad,

where the state is the unparsed string. We can represent this exactly as:

newtype Parser a =
Parser { runParser :: String -> Either String (String, a) }

We again use Left err to be an error condition. This yields standard instances

of Monad and MonadPlus:

instance Monad Parser where
return a = Parser (\xl -> Right (xl,a))

fail s = Parser (\xl -> Left s)

Parser m >>= k = Parser $ \xl ->

case m xl of

Left s -> Left s

Right (xl’, a) ->

let Parser n = k a

in n xl’

instance MonadPlus Parser where
mzero = Parser (\xl -> Left "mzero")

Parser p ‘mplus‘ Parser q = Parser $ \xl ->

case p xl of

Right a -> Right a

Left err -> case q xl of

Right a -> Right a

Left _ -> Left err
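As with State earlier, a current GHC also demands Functor, Applicative and (for MonadPlus) Alternative instances; a self-contained skeleton consistent with the code above might look like:

```haskell
import Control.Applicative (Alternative (..))
import Control.Monad (MonadPlus (..), ap, liftM)

newtype Parser a =
  Parser { runParser :: String -> Either String (String, a) }

instance Functor Parser where
  fmap = liftM

instance Applicative Parser where
  pure a = Parser (\xl -> Right (xl, a))
  (<*>)  = ap

instance Monad Parser where
  Parser m >>= k = Parser $ \xl ->
    case m xl of
      Left s         -> Left s
      Right (xl', a) -> let Parser n = k a in n xl'

instance Alternative Parser where
  empty = Parser (\_ -> Left "empty")
  -- Try p; on failure try q, keeping p's error if both fail.
  Parser p <|> Parser q = Parser $ \xl ->
    case p xl of
      Right a  -> Right a
      Left err -> case q xl of
                    Right a -> Right a
                    Left _  -> Left err

-- mzero and mplus default to empty and (<|>).
instance MonadPlus Parser
```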

Now, we want to build up a library of parsing “primitives.” The most basic primitive is a parser that will read a specific character. This function looks like:

char c = Parser char’
where char’ [] = Left ("expecting " ++ show c ++
" got EOF")
char’ (x:xs)
| x == c = Right (xs, c)
| otherwise = Left ("expecting " ++
show c ++ " got " ++
show x)

Here, the parser succeeds only if the first character of the input is the expected

character.

We can use this parser to build up a parser for the string “Hello”:

helloParser = do

char ’H’

char ’e’

char ’l’

char ’l’

char ’o’

return "Hello"

This shows how easy it is to combine these parsers. We don’t need to worry about

the underlying string – the monad takes care of that for us. All we need to do is

combine these parser primitives. We can test this parser by using runParser and by supplying input:

Parsing> runParser helloParser "Hello"
Right ("","Hello")

Parsing> runParser helloParser "Hello World!"

Right (" World!","Hello")

Parsing> runParser helloParser "hello World!"

Left "expecting ’H’ got ’h’"

We can have a slightly more general function, which will match any character fitting

a description:

matchChar c = Parser matchChar’

where matchChar’ [] =

Left ("expecting char, got EOF")

matchChar’ (x:xs)

| c x = Right (xs, x)

| otherwise =

Left ("expecting char, got " ++

show x)


Using matchChar, we can write a case-insensitive “Hello” parser:

ciHelloParser = do

c1 <- matchChar (‘elem‘ "Hh")

c2 <- matchChar (‘elem‘ "Ee")

c3 <- matchChar (‘elem‘ "Ll")

c4 <- matchChar (‘elem‘ "Ll")

c5 <- matchChar (‘elem‘ "Oo")

return [c1,c2,c3,c4,c5]

but the above implementation works just as well. We can test this function:

Right (" world!","hELlO")

A more general primitive, anyChar, accepts any single character:

anyChar = Parser anyChar’

where anyChar’ [] =

Left ("expecting character, got EOF")

anyChar’ (x:xs) = Right (xs, x)

On top of these primitives, we usually build some combinators. The many combinator, for instance, will take a parser that parses entities of type a and will make it into

a parser that parses entities of type [a] (this is a Kleene-star operator):

many (Parser p) = Parser many’

where many’ xl =

case p xl of

Left err -> Right (xl, [])

Right (xl’,a) ->

let Right (xl’’, rest) = many’ xl’

in Right (xl’’, a:rest)

The idea here is that first we try to apply the given parser, p. If this fails, we succeed

but return the empty list. If p succeeds, we recurse and keep trying to apply p until it

fails. We then return the list of successes we’ve accumulated.

In general, there would be many more functions of this sort, and they would be hid-

den away in a library, so that users couldn’t actually look inside the Parser type.

However, using them, you could build up, for instance, a parser that parses (non-

negative) integers:


int :: Parser Int
int = do

t1 <- matchChar isDigit

tr <- many (matchChar isDigit)

return (read (t1:tr))

In this function, we first match a digit (the isDigit function comes from the

module Char/Data.Char) and then match as many more digits as we can. We then

read the result and return it. We can test this parser as before:

Right ("",54)

*Parsing> runParser int "54abc"

Right ("abc",54)

*Parsing> runParser int "a54abc"

Left "expecting char, got ’a’"

Now, suppose we want to parse a Haskell-style list of Ints. This becomes somewhat

difficult because, at some point, we’re either going to parse a comma or a close brace,

but we don’t know when this will happen. This is where the fact that Parser is an

instance of MonadPlus comes in handy: first we try one, then we try the other.

Consider the following code:

intList = do

char ’[’

intList’ ‘mplus‘ (char ’]’ >> return [])

where intList’ = do

i <- int

r <- (char ’,’ >> intList’) ‘mplus‘

(char ’]’ >> return [])

return (i:r)

The first thing this code does is parse an open brace. Then, using mplus, it tries

one of two things: parsing using intList’, or parsing a close brace and returning an

empty list.

The intList’ function assumes that we’re not yet at the end of the list, and so it

first parses an int. It then parses the rest of the list. However, it doesn’t know whether

we’re at the end yet, so it again uses mplus. On the one hand, it tries to parse a comma

and then recurse; on the other, it parses a close brace and returns the empty list. Either

way, it simply prepends the int it parsed itself to the beginning.

One thing that you should be careful of is the order in which you supply arguments

to mplus. Consider the following parser:


tricky =

mplus (string "Hal") (string "Hall")

You might expect this parser to parse both the words “Hal” and “Hall;” however, it

only parses the former. You can see this with:

Right ("","Hal")

Parsing> runParser tricky "Hall"

Right ("l","Hal")

This is because it tries to parse “Hal,” which succeeds, and then it doesn’t bother

trying to parse “Hall.”

You can attempt to fix this by providing a parser primitive, which detects end-of-file

(really, end-of-string) as:

eof :: Parser ()

eof = Parser eof’

where eof’ [] = Right ([], ())

eof’ xl = Left ("Expecting EOF, got " ++

show (take 10 xl))

tricky2 = do

s <- mplus (string "Hal") (string "Hall")

eof

return s

Right ("",())

Parsing> runParser tricky2 "Hall"

Left "Expecting EOF, got \"l\""

This is because, again, the mplus doesn’t know that it needs to parse the whole

input. So, when you provide it with “Hall,” it parses just “Hal” and leaves the last “l”

lying around to be parsed later. This causes eof to produce an error message.

The correct way to implement this is:

tricky3 =

mplus (do s <- string "Hal"


eof

return s)

(do s <- string "Hall"

eof

return s)

Right ("","Hal")

Parsing> runParser tricky3 "Hall"

Right ("","Hall")

This works precisely because each side of the mplus knows that it must read the

end.

In this case, fixing the parser to accept both “Hal” and “Hall” was fairly simple,

due to the fact that we assumed we would be reading an end-of-file immediately af-

terwards. Unfortunately, if we cannot disambiguate immediately, life becomes signifi-

cantly more complicated. This is a general problem in parsing, and has little to do with

monadic parsing. The solution most parser libraries (e.g., Parsec, see Section 9.8.2)

have adopted is to only recognize “LL(1)” grammars: that means that you must be able

to disambiguate the input with a one token look-ahead.

Exercises

Exercise 9.6 Write a parser intListSpace that will parse int lists but will allow

arbitrary white space (spaces, tabs or newlines) between the commas and brackets.

Given this monadic parser, it is fairly easy to add information regarding source position. For instance, if we're parsing a large file, it might be helpful to report the line number on which an error occurred. We could do this simply by extending the Parser type and by modifying the instances and the primitives:

newtype Parser a = Parser
  { runParser :: Int -> String ->
                   Either String (Int, String, a) }

instance Monad Parser where
  return a = Parser (\n xl -> Right (n,xl,a))
  fail s = Parser (\n xl -> Left (show n ++
                                  ": " ++ s))
  Parser m >>= k = Parser $ \n xl ->
    case m n xl of
      Left s -> Left s
      Right (n', xl', a) ->
        let Parser m2 = k a
        in m2 n' xl'

instance MonadPlus Parser where
  mzero = Parser (\n xl -> Left "mzero")
  Parser p `mplus` Parser q = Parser $ \n xl ->
    case p n xl of
      Right a -> Right a
      Left err -> case q n xl of
        Right a -> Right a
        Left _  -> Left err

matchChar c = Parser matchChar'
  where matchChar' n [] =
          Left ("expecting char, got EOF")
        matchChar' n (x:xs)
          | c x =
              Right (n + if x == '\n' then 1 else 0,
                     xs, x)
          | otherwise =
              Left ("expecting char, got " ++
                    show x)

The definitions for char and anyChar are not given, since they can be written in

terms of matchChar. The many function needs to be modified only to include the

new state.

Now, when we run a parser and there is an error, it will tell us which line number contains the error:

Parsing2> runParser int 1 "a54"
Left "1: expecting char, got 'a'"
Parsing2> runParser intList 1 "[1,2,3,a]"
Left "1: expecting ']' got '1'"

We can use the intListSpace parser from the prior exercise to see that this does in fact work:

Parsing2> runParser intListSpace 1
  "[1 ,2 , 4 \n\n ,a\n]"
Left "3: expecting char, got 'a'"
Parsing2> runParser intListSpace 1
  "[1 ,2 , 4 \n\n\n ,a\n]"
Left "4: expecting char, got 'a'"
Parsing2> runParser intListSpace 1
  "[1 ,2 , 4 \n\n\n\n ,a\n]"
Left "5: expecting char, got 'a'"

We can see that the line number on which the error occurs increases as we add additional newlines before the erroneous "a".

9.8.2 Parsec

As you continue developing your parser, you might want to add more and more features. Luckily, Daan Leijen has already done this for us in the Parsec library. This section is intended to be an introduction to the Parsec library; it by no means covers the whole library, but it should be enough to get you started.

Like our library, Parsec provides a few basic functions to build parsers from characters. These are: char, which is the same as our char; anyChar, which is the same as our anyChar; satisfy, which is the same as our matchChar; oneOf, which takes a list of Chars and matches any of them; and noneOf, which is the opposite of oneOf.

The primary function Parsec uses to run a parser is parse. However, in addition to a parser, this function takes a string that represents the name of the file you're parsing. This is so it can give better error messages. We can try parsing with the above functions:

ParsecI> parse (char 'a') "stdin" "a"
Right 'a'
ParsecI> parse (char 'a') "stdin" "ab"
Right 'a'
ParsecI> parse (char 'a') "stdin" "b"
Left "stdin" (line 1, column 1):
unexpected "b"
expecting "a"
ParsecI> parse (char 'H' >> char 'a' >> char 'l')
  "stdin" "Hal"
Right 'l'
ParsecI> parse (char 'H' >> char 'a' >> char 'l')
  "stdin" "Hap"
Left "stdin" (line 1, column 3):
unexpected "p"
expecting "l"

Here, we can see a few differences between our parser and Parsec: first, the rest

of the string isn’t returned when we run parse. Second, the error messages produced

are much better.

In addition to the basic character parsing functions, Parsec provides primitives for: spaces, which is the same as ours; space, which parses a single space; letter, which parses a letter; digit, which parses a digit; string, which is the same as ours; and a few others.

We can write our int and intList functions in Parsec as:


int :: CharParser st Int
int = do
    i1 <- digit
    ir <- many digit
    return (read (i1:ir))

intList :: CharParser st [Int]
intList = do
    char '['
    intList' `mplus` (char ']' >> return [])
  where intList' = do
          i <- int
          r <- (char ',' >> intList') `mplus`
               (char ']' >> return [])
          return (i:r)

First, note the type signatures. The st type variable is simply a state variable that

we are not using. In the int function, we use the many function (built in to Parsec)

together with the digit function (also built in to Parsec). The intList function is

actually identical to the one we wrote before.

Note, however, that using mplus explicitly is not the preferred method of com-

bining parsers: Parsec provides a <|> function that is a synonym of mplus, but that

looks nicer:

intList = do
    char '['
    intList' <|> (char ']' >> return [])
  where intList' = do
          i <- int
          r <- (char ',' >> intList') <|>
               (char ']' >> return [])
          return (i:r)

ParsecI> parse intList "stdin" "[3,5,2,10]"
Right [3,5,2,10]
ParsecI> parse intList "stdin" "[3,5,a,10]"
Left "stdin" (line 1, column 6):
unexpected "a"
expecting digit

In addition to these basic combinators, Parsec provides a few other useful ones:


• choice takes a list of parsers and performs an or operation (<|>) between all

of them.

• option takes a default value of type a and a parser that returns something of type a. It tries the parser, and if the parsing fails, it returns the default value instead.

• optional takes a parser that returns () and optionally runs it.

• between takes three parsers: an open parser, a close parser and a between

parser. It runs them in order and returns the value of the between parser. This

can be used, for instance, to take care of the brackets on our intList parser.

• notFollowedBy takes a parser and returns one that succeeds only if the given

parser would have failed.
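As an illustration, between together with sepBy (another real Parsec combinator, which parses zero or more items separated by a separator) lets us rewrite intList without threading the closing bracket through the recursion. The name intList2 is made up for this sketch:

```haskell
import Text.ParserCombinators.Parsec

-- Brackets are handled by between, commas by sepBy.
intList2 :: Parser [Int]
intList2 = between (char '[') (char ']')
                   (int `sepBy` char ',')
  where int = do i1 <- digit
                 ir <- many digit
                 return (read (i1:ir))
```

Note that sepBy also accepts an empty sequence, so the (char ']' >> return []) special case disappears.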

Suppose we want to parse a simple calculator language that includes only plus and times. Furthermore, for simplicity, assume each embedded expression must be enclosed in parentheses. We can give a datatype for this language as:

data Expr = Value Int
          | Expr :+: Expr
          | Expr :*: Expr
  deriving (Eq, Ord, Show)

parseExpr = choice
  [ do i <- int; return (Value i)
  , between (char '(') (char ')') $ do
      e1 <- parseExpr
      op <- oneOf "+*"
      e2 <- parseExpr
      case op of
        '+' -> return (e1 :+: e2)
        '*' -> return (e1 :*: e2)
  ]

Here, the parser alternates between two options (we could have used <|>, but I

wanted to show the choice combinator in action). The first simply parses an int and

then wraps it up in the Value constructor. The second option uses between to parse

text between parentheses. What it parses is first an expression, then one of plus or

times, then another expression. Depending on what the operator is, it returns either e1

:+: e2 or e1 :*: e2.
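Once parsing succeeds, evaluating the resulting Expr is a simple recursive walk over the tree. The evaluator below is not from the text, just a small illustration of the datatype (repeated here so the fragment stands alone):

```haskell
data Expr = Value Int
          | Expr :+: Expr
          | Expr :*: Expr
  deriving (Eq, Ord, Show)

-- Evaluate an expression tree bottom-up.
evalExpr :: Expr -> Int
evalExpr (Value i)   = i
evalExpr (e1 :+: e2) = evalExpr e1 + evalExpr e2
evalExpr (e1 :*: e2) = evalExpr e1 * evalExpr e2
```

For instance, evalExpr (Value 3 :*: (Value 4 :+: Value 3)) evaluates to 21.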

We can modify this parser, so that instead of computing an Expr, it simply com-

putes the value:


parseValue = choice
  [ int
  , between (char '(') (char ')') $ do
      e1 <- parseValue
      op <- oneOf "+*"
      e2 <- parseValue
      case op of
        '+' -> return (e1 + e2)
        '*' -> return (e1 * e2)
  ]

Now, suppose we want to introduce bindings into our language. That is, we want to also be able to say "let x = 5 in" inside of our expressions and then use the variables we've defined. In order to do this, we need to use the getState and setState (or updateState) functions built in to Parsec.

parseValueLet :: CharParser (FiniteMap Char Int) Int
parseValueLet = choice
  [ int
  , do string "let "
       c <- letter
       char '='
       e <- parseValueLet
       string " in "
       updateState (\fm -> addToFM fm c e)
       parseValueLet
  , do c <- letter
       fm <- getState
       case lookupFM fm c of
         Nothing -> unexpected ("variable " ++ show c ++
                                " unbound")
         Just i  -> return i
  , between (char '(') (char ')') $ do
      e1 <- parseValueLet
      op <- oneOf "+*"
      e2 <- parseValueLet
      case op of
        '+' -> return (e1 + e2)
        '*' -> return (e1 * e2)
  ]

The int and recursive cases remain the same. We add two more cases, one to deal

with let-bindings, the other to deal with usages.

In the let-bindings case, we first parse a "let" string, followed by the character we're binding (the letter function is a Parsec primitive that parses alphabetic characters), followed by its value (a parseValueLet). Then, we parse the " in " and update the state to include this binding. Finally, we continue and parse the rest.

In the usage case, we simply parse the character and then look it up in the state.

However, if it doesn’t exist, we use the Parsec primitive unexpected to report an

error.

We can see this parser in action using the runParser command, which enables us to provide an initial state:

*ParsecI> runParser parseValueLet emptyFM "stdin"
  "let c=5 in ((5+4)*c)"
Right 45
*ParsecI> runParser parseValueLet emptyFM "stdin"
  "let c=5 in ((5+4)*let x=2 in (c+x))"
Right 63
*ParsecI> runParser parseValueLet emptyFM "stdin"
  "((let x=2 in 3+4)*x)"
Right 14

Note that the bracketing does not affect the definitions of the variables. For in-

stance, in the last example, the use of “x” is, in some sense, outside the scope of the

definition. However, our parser doesn’t notice this, since it operates in a strictly left-

to-right fashion. In order to fix this omission, bindings would have to be removed (see

the exercises).

Exercises

Exercise 9.7 Modify the parseValueLet parser, so that it obeys bracketing. In

order to do this, you will need to change the state to something like FiniteMap

Char [Int], where the [Int] is a stack of definitions.


Chapter 10

Advanced Techniques

10.1 Exceptions

10.2 Mutable Arrays

10.3 Mutable References

10.4 The ST Monad

10.5 Concurrency

10.6 Regular Expressions

10.7 Dynamic Types


Appendix A

Brief Complexity Theory

Complexity Theory is the study of how long a program will take to run, depending on

the size of its input. There are many good introductory books to complexity theory and

the basics are explained in any good algorithms book. I’ll keep the discussion here to

a minimum.

The idea is to say how well a program scales with more data. If you have a program

that runs quickly on very small amounts of data but chokes on huge amounts of data,

it’s not very useful (unless you know you’ll only be working with small amounts of

data, of course). Consider the following Haskell function to return the sum of the

elements in a list:

sum [] = 0

sum (x:xs) = x + sum xs

How long does it take this function to complete? That’s a very difficult question; it

would depend on all sorts of things: your processor speed, your amount of memory, the

exact way in which the addition is carried out, the length of the list, how many other

programs are running on your computer, and so on. This is far too much to deal with, so

we need to invent a simpler model. The model we use is sort of an arbitrary “machine

step.” So the question is “how many machine steps will it take for this program to

complete?” In this case, it only depends on the length of the input list.

If the input list is of length 0, the function will take either 0 or 1 or 2 or some very small number of machine steps, depending exactly on how you count them (perhaps 1 step to do the pattern matching and 1 more to return the value 0). What if the list is of length 1? Well, it would take however much time the list of length 0 would take, plus a few more steps for doing the first (and only) element.

If the input list is of length n, it will take however many steps an empty list would

take (call this value y) and then, for each element it would take a certain number of

steps to do the addition and the recursive call (call this number x). Then, the total time

this function will take is nx + y since it needs to do those additions n many times.

These x and y values are called constant values, since they are independent of n, and

actually dependent only on exactly how we define a machine step, so we really don’t


want to consider them all that important. Therefore, we say that the complexity of this

sum function is O(n) (read “order n”). Basically saying something is O(n) means that

for some constant factors x and y, the function takes nx+ y machine steps to complete.

Consider the following sorting algorithm for lists (commonly called “insertion

sort”):

sort [] = []
sort [x] = [x]
sort (x:xs) = insert (sort xs)
  where insert [] = [x]
        insert (y:ys) | x <= y    = x : y : ys
                      | otherwise = y : insert ys

The way this algorithm works is as follows: if we want to sort an empty list or a list

of just one element, we return them as they are, as they are already sorted. Otherwise,

we have a list of the form x:xs. In this case, we sort xs and then want to insert x

in the appropriate location. That’s what the insert function does. It traverses the

now-sorted tail and inserts x wherever it naturally fits.

Let's analyze how long this function takes to complete. Suppose it takes f(n) steps to sort a list of length n. Then, in order to sort a list of n-many elements, we first have to sort the tail of the list, which takes f(n − 1) time. Then, we have to insert x into

this new list. If x has to go at the end, this will take O(n − 1) = O(n) steps. Putting

all of this together, we see that we have to do O(n) amount of work O(n) many times,

which means that the entire complexity of this sorting algorithm is O(n²). Here, the squared factor is not a constant value, so we cannot throw it out.

What does this mean? Simply that for really long lists, the sum function won't take very long, but the sort function will take quite some time. Of course there are algorithms that run much more slowly than simply O(n²) and there are ones that run more quickly than O(n).

Consider the random access functions for lists and arrays. In the worst case, accessing an arbitrary element in a list of length n will take O(n) time (think about accessing the last element). However, with arrays, you can access any element immediately, which is said to be in constant time, or O(1), which is basically as fast as any algorithm can go.

There’s much more in complexity theory than this, but this should be enough to

allow you to understand all the discussions in this tutorial. Just keep in mind that O(1) is faster than O(n), which is faster than O(n²), and so on.

Appendix B

Recursion and Induction

A classic example of a recursive function is factorial, whose definition is:

    fact(n) = 1                  if n = 0
    fact(n) = n * fact(n - 1)    if n > 0

Here, we can see that in order to calculate fact(5), we need to calculate fact(4), but in order to calculate fact(4), we need to calculate fact(3), and so on.

Recursive function definitions always contain a number of non-recursive base cases

and a number of recursive cases. In the case of factorial, we have one of each. The

base case is when n = 0 and the recursive case is when n > 0.

One can actually think of the natural numbers themselves as recursive (in fact, if

you ask set theorists about this, they’ll say this is how it is). That is, there is a zero

element and then for every element, it has a successor. That is 1 = succ(0), 2 =

succ(1), . . . , 573 = succ(572), . . . and so on forever. We can actually implement this system of natural numbers in Haskell:

data Nat = Zero
         | Succ Nat

This is a recursive type definition. Here, we represent one as Succ Zero and three as Succ (Succ (Succ Zero)). One thing we might want to do is be able to convert back and forth between Nats and Ints. Clearly, we can write a base case as:

natToInt Zero = 0

In order to write the recursive case, we realize that we’re going to have something

of the form Succ n. We can make the assumption that we’ll be able to take n and

produce an Int. Assuming we can do this, all we need to do is add one to this result.

This gives rise to our recursive case:

natToInt (Succ n) = natToInt n + 1
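Putting the pieces together, the whole Nat development looks like this; intToNat is not in the text, just the obvious inverse conversion added so the round trip can be checked:

```haskell
data Nat = Zero
         | Succ Nat
  deriving Show

-- The base case and the recursive case derived above.
natToInt :: Nat -> Int
natToInt Zero     = 0
natToInt (Succ n) = natToInt n + 1

-- The inverse conversion, following the same recursive pattern.
intToNat :: Int -> Nat
intToNat 0 = Zero
intToNat n = Succ (intToNat (n - 1))
```

For example, natToInt (Succ (Succ (Succ Zero))) is 3, and natToInt (intToNat 573) is 573 again.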

Induction is a proof technique which typically breaks problems down into base cases and "inductive" cases, very analogous to our analysis of recursion.

Let’s say we want to prove the statement n! ≥ n for all n ≥ 0. First we formulate

a base case: namely, we wish to prove the statement when n = 0. When n = 0, n! = 1

by definition. Since n! = 1 > 0 = n, we get that 0! ≥ 0 as desired.

Now, suppose that n > 0. Then n = k + 1 for some value k. We now invoke the

inductive hypothesis and claim that the statement holds for n = k. That is, we assume

that k! ≥ k. Now, we use k to formulate the statement for our value of n. That is, n! ≥ n if and only if (k + 1)! ≥ (k + 1). We now apply the definition of factorial and get

(k + 1)! = (k + 1) ∗ k!. Now, we know k! ≥ k, so (k + 1) ∗ k! ≥ k + 1 if and only if

k + 1 ≥ 1. But we know that k ≥ 0, which means k + 1 ≥ 1. Thus it is proven.

It may seem a bit counter-intuitive that we are assuming that the claim is true for k

in our proof that it is true for n. You can think of it like this: we’ve proved the statement

for the case when n = 0. Now, we know it’s true for n = 0 so using this we use our

inductive argument to show that it’s true for n = 1. Now, we know that it is true for

n = 1 so we reuse our inductive argument to show that it’s true for n = 2. We can

continue this argument as long as we want and then see that it’s true for all n.

It’s much like pushing down dominoes. You know that when you push down the

first domino, it’s going to knock over the second one. This, in turn will knock over the

third, and so on. The base case is like pushing down the first domino, and the inductive

case is like showing that pushing down domino k will cause the k + 1st domino to fall.

In fact, we can use induction to prove that our natToInt function does the right

thing. First we prove the base case: does natToInt Zero evaluate to 0? Yes, obvi-

ously it does. Now, we can assume that natToInt n evaluates to the correct value

(this is the inductive hypothesis) and ask whether natToInt (Succ n) produces

the correct value. Again, it is obvious that it does, by simply looking at the definition.

Let’s consider a more complex example: addition of Nats. We can write this con-

cisely as:

addNat Zero m = m

addNat (Succ n) m = addNat n (Succ m)

Now, let’s prove that this does the correct thing. First, as the base case, suppose the

first argument is Zero. We know that 0 + m = m regardless of what m is; thus in the

base case the algorithm does the correct thing. Now, suppose that addNat n m does

the correct thing for all m and we want to show that addNat (Succ n) m does the

correct thing. We know that (n + 1) + m = n + (m + 1) and thus since addNat

n (Succ m) does the correct thing (by the inductive hypothesis), our program is

correct.

Appendix C

Solutions To Exercises

Solution 3.1

It binds more tightly; actually, function application binds more tightly than anything

else. To see this, we can do something like:

Prelude> sqrt 3 * 3

5.19615

Solution 3.2

Solution: snd (fst ((1,'a'),"foo")). This is because first we want to take the first half of the tuple: (1,'a') and then out of this we want to take the second half, yielding just 'a'.

If you tried fst (snd ((1,'a'),"foo")) you will have gotten a type error. This is because the application of snd will leave you with fst "foo". However, the string "foo" isn't a tuple, so you cannot apply fst to it.

Solution 3.3

Solution: map Char.isLower "aBCde"

Solution 3.4

Solution: length (filter Char.isLower "aBCde")

Solution 3.5

foldr max 0 [5,10,2,8,1]. You could also use foldl. The foldr case is easier to explain: we replace each cons with an application of max and the empty list with 0. Thus, the inner-most application will take the maximum of 0 and the last element of the list (if it exists). Then, the next-most inner application will return the maximum of whatever was the maximum before and the second-to-last element. This will continue on, carrying the current maximum all the way back to the beginning of the list.


In the foldl case, we can think of this as looking at each element in the list in order. We start off our "state" with 0. We pull off the first element and check to see if it's bigger than our current state. If it is, we replace our current state with that number and then continue. This happens for each element and thus eventually returns the maximal element.
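Both folds can be checked side by side; this snippet is just an illustration of the two readings, not part of the original solution:

```haskell
-- foldr reading: replace each cons with max and [] with 0.
maxViaFoldr :: [Int] -> Int
maxViaFoldr = foldr max 0

-- foldl reading: scan left to right, carrying the current maximum.
maxViaFoldl :: [Int] -> Int
maxViaFoldl = foldl max 0
```

Both give 10 on [5,10,2,8,1]. (As in the solution, this assumes a list of non-negative numbers, since 0 is the starting value.)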

Solution 3.6

fst (head (tail [(5,’b’),(1,’c’),(6,’a’)]))

Solution 3.7

We can define a Fibonacci function as:

fib 1 = 1
fib 2 = 1
fib n = fib (n-1) + fib (n-2)

Alternatively, using an if expression:

fib n =
  if n == 1 || n == 2
    then 1
    else fib (n-1) + fib (n-2)

Solution 3.8

We can define multiplication recursively as:

    a * b = a                  if b = 1
    a * b = a + a * (b - 1)    otherwise

And then convert this into code:

mult a 1 = a
mult a b = a + mult a (b-1)

Note that it doesn't matter which of a and b we do the recursion on. We could just as well have defined it as:

mult 1 b = b

mult a b = b + mult (a-1) b

Solution 3.9

We can define my_map as:

my_map f [] = []

my_map f (x:xs) = f x : my_map f xs

Recall that the my_map function is supposed to apply a function f to every element

in the list. In the case that the list is empty, there are no elements to apply the function

to, so we just return the empty list.

In the case that the list is non-empty, it is an element x followed by a list xs.

Assuming we’ve already properly applied my map to xs, then all we’re left to do is

apply f to x and then stick the results together. This is exactly what the second line

does.

Solution 3.10

The code below appears in Numbers.hs. The only tricky parts are the recursive calls

in getNums and showFactorials.

module Main
  where

import IO

main = do
  nums <- getNums
  putStrLn ("The sum is " ++ show (sum nums))
  putStrLn ("The product is " ++ show (product nums))
  showFactorials nums

getNums = do
  putStrLn "Give me a number (or 0 to stop):"
  num <- getLine
  if read num == 0
    then return []
    else do rest <- getNums
            return ((read num :: Int):rest)

showFactorials [] = return ()
showFactorials (x:xs) = do
  putStrLn (show x ++ " factorial is " ++
            show (factorial x))
  showFactorials xs

factorial 1 = 1
factorial n = n * factorial (n-1)

The idea for getNums is just as spelled out in the hint. For showFactorials,

we consider first the recursive call. Suppose we have a list of numbers, the first of


which is x. First we print out the string showing the factorial. Then we print out the

rest, hence the recursive call. But what should we do in the case of the empty list?

Clearly we are done, so we don’t need to do anything at all, so we simply return

().

Note that this must be return () instead of just () because if we simply wrote

showFactorials [] = () then this wouldn’t be an IO action, as it needs to be.

For more clarification on this, you should probably just keep reading the tutorial.

Solution 4.1

1. String or [Char]

4. Int

Solution 4.2

The types:

2. [a] -> a

4. [a] -> a

5. [[a]] -> a

Solution 4.3

The types:

1. a -> [a]. This function takes an element and returns the list containing only that element.

2. a -> b -> b -> (a, [b]). The second and third argument must be of the same type, since they go into the same list. The first element can be of any type.

4. a -> String. This ignores the first argument, so it can be any type.

5. (Char -> a) -> a. In this expression, x must be a function which takes a Char as an argument. We don't know anything about what it produces, though, so we call it a.

6. Type error. Here, we assume x has type a. But x is applied to itself, so it must have type b -> c. But then it must have type (b -> c) -> c, but then it must have type ((b -> c) -> c) -> c, and so on, leading to an infinite type.

7. Num a => a -> a. Again, since we apply (+), this must be an instance of Num.

Solution 4.4

The definitions will be something like:

tripleFst (Triple x y z) = x

tripleSnd (Triple x y z) = y

tripleThr (Triple x y z) = z

Solution 4.5

The code, with type signatures, is:

firstTwo (Quadruple x y z t) = [x,y]

lastTwo (Quadruple x y z t) = [z,t]

We note here that there are only two type variables, a and b, associated with Quadruple.

Solution 4.6

The code:

data Tuple a b c d = One a
                   | Two a b
                   | Three a b c
                   | Four a b c d

tuple1 (One   a      ) = Just a
tuple1 (Two   a b    ) = Just a
tuple1 (Three a b c  ) = Just a
tuple1 (Four  a b c d) = Just a

tuple2 (One   a      ) = Nothing
tuple2 (Two   a b    ) = Just b
tuple2 (Three a b c  ) = Just b
tuple2 (Four  a b c d) = Just b

tuple3 (One   a      ) = Nothing
tuple3 (Two   a b    ) = Nothing
tuple3 (Three a b c  ) = Just c
tuple3 (Four  a b c d) = Just c

tuple4 (One   a      ) = Nothing
tuple4 (Two   a b    ) = Nothing
tuple4 (Three a b c  ) = Nothing
tuple4 (Four  a b c d) = Just d

Solution 4.7

The code:

fromTuple :: Tuple a b c d -> Either (Either a (a,b)) (Either (a,b,c) (a,b,c,d))

fromTuple (One a ) = Left (Left a )

fromTuple (Two a b ) = Left (Right (a,b) )

fromTuple (Three a b c ) = Right (Left (a,b,c) )

fromTuple (Four a b c d) = Right (Right (a,b,c,d))

Here, we use embedded Eithers to represent the fact that there are four (instead

of two) options.

Solution 4.8

The code:

listTail (Cons x xs) = xs

listFoldl f y Nil = y

listFoldl f y (Cons x xs) = listFoldl f (f y x) xs

listFoldr f y Nil = y

listFoldr f y (Cons x xs) = f x (listFoldr f y xs)

Solution 4.9

The code:

elements (Leaf x) = [x]
elements (Branch lhs x rhs) =
  elements lhs ++ [x] ++ elements rhs

Solution 4.10

The code:


foldTree f z (Leaf x) = f x z
foldTree f z (Branch lhs x rhs) =
  foldTree f (f x (foldTree f z rhs)) lhs


Solution 4.11

It mimics neither exactly. Its behavior most closely resembles foldr, but differs slightly in its treatment of the initial value. We can observe the difference in an interpreter:

CPS> foldr (-) 0 [1,2,3]
2
CPS> foldl (-) 0 [1,2,3]
-6
CPS> fold (-) 0 [1,2,3]
-2

Clearly it behaves differently. By writing down the derivations of fold and foldr we can see exactly where they diverge:

foldr (-) 0 [1,2,3]
==> 1 - foldr (-) 0 [2,3]
==> ...
==> 1 - (2 - (3 - foldr (-) 0 []))
==> 1 - (2 - (3 - 0))
==> 2

fold (-) 0 [1,2,3]
==> fold' (-) (\y -> 0 - y) [1,2,3]
==> 0 - fold' (-) (\y -> 1 - y) [2,3]
==> 0 - (1 - fold' (-) (\y -> 2 - y) [3])
==> 0 - (1 - (2 - 3))
==> -2

Essentially, the primary difference is that in the foldr case, the “initial value” is

used at the end (replacing []), whereas in the CPS case, the initial value is used at the

beginning.


Solution 4.12

Solution 5.1

Using if, we get something like:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  if name == "Simon" || name == "John" || name == "Phil"
    then putStrLn "Haskell is great!"
    else if name == "Koen"
      then putStrLn "Debugging Haskell is fun!"
      else putStrLn "I don't know who you are."

Note that we don’t need to repeat the dos inside the ifs, since these are only one

action commands.

We could also be a bit smarter and use the elem command which is built in to the

Prelude:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  if name `elem` ["Simon", "John", "Phil"]
    then putStrLn "Haskell is great!"
    else if name == "Koen"
      then putStrLn "Debugging Haskell is fun!"
      else putStrLn "I don't know who you are."

Of course, we needn’t put all the putStrLns inside the if statements. We could

instead write:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  putStrLn
    (if name `elem` ["Simon", "John", "Phil"]
       then "Haskell is great!"
       else if name == "Koen"
         then "Debugging Haskell is fun!"
         else "I don't know who you are.")

Or we could use a case statement:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  case name of
    "Simon" -> putStrLn "Haskell is great!"
    "John"  -> putStrLn "Haskell is great!"
    "Phil"  -> putStrLn "Haskell is great!"
    "Koen"  -> putStrLn "Debugging Haskell is fun!"
    _       -> putStrLn "I don't know who you are."

Solution 5.2

The code might look something like:

import IO

main = do
  putStrLn "Do you want to [read] a file, ...?"
  cmd <- getLine
  case cmd of
    "quit"  -> return ()
    "read"  -> do doRead; main
    "write" -> do doWrite; main
    _       -> do putStrLn
                    ("I don't understand the command "
                     ++ cmd ++ ".")
                  main

doRead = do
  putStrLn "Enter a file name to read:"
  fn <- getLine
  bracket (openFile fn ReadMode) hClose
          (\h -> do txt <- hGetContents h
                    putStrLn txt)

doWrite = do
  putStrLn "Enter a file name to write:"
  fn <- getLine
  bracket (openFile fn WriteMode) hClose
          (\h -> do putStrLn
                      "Enter text (...):"
                    writeLoop h)

writeLoop h = do
  l <- getLine
  if l == "."
    then return ()
    else do hPutStrLn h l
            writeLoop h

The only interesting things here are the calls to bracket, which ensure that the program lives on regardless of whether there's a failure or not, and the writeLoop function. Note that we need to pass the handle returned by openFile (through bracket) to this function, so it knows where to write the input to.

Solution 7.1

Function func3 cannot be converted into point-free style. The others look something

like:

You might have been tempted to try to write func2 as filter f . map,

trying to eta-reduce off the g. In this case, this isn’t possible. This is because the

function composition operator (.) has type (b → c) → (a → b) → (a → c). In this

case, we’re trying to use map as the second argument. But map takes two arguments,

while (.) expects a function which takes only one.

Solution 7.2

We can start out with a recursive definition:

and [] = True

and (x:xs) = x && and xs

Solution 7.3

We can write this recursively as:


concatMap f [] = []

concatMap f (x:xs) = f x ++ concatMap f xs

==> foldr (\a b -> (++) (f a) b) []

==> foldr (\a -> (++) (f a)) []

==> foldr (\a -> ((++) . f) a) []

==> foldr ((++) . f) []
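As a quick sanity check (not part of the original solution), the end point of the derivation can be compared with the recursive definition; the names below are made up for the comparison:

```haskell
-- The recursive definition from above.
concatMapRec :: (a -> [b]) -> [a] -> [b]
concatMapRec f []     = []
concatMapRec f (x:xs) = f x ++ concatMapRec f xs

-- The point-free result of the derivation.
concatMapFold :: (a -> [b]) -> [a] -> [b]
concatMapFold f = foldr ((++) . f) []
```

On any test input the two agree; for instance both map (\x -> [x,x]) over [1,2,3] to give [1,1,2,2,3,3].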

Solution 9.1

The first law is: return a >>= f ≡ f a. In the case of Maybe, we get:

return a >>= f

==> Just a >>= \x -> f x

==> (\x -> f x) a

==> f a

The second law is: f >>= return ≡ f. We have:

f >>= return

==> f >>= \x -> return x

==> f >>= \x -> Just x

At this point, there are two cases depending on whether f is Nothing or not. In the first case, we get:

==> Nothing
==> f

In the second case, f is Just a, and we get:

==> (\x -> Just x) a
==> Just a
==> f

And the second law is shown. The third law states: f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h.

If f is Nothing, then the left-hand-side clearly reduces to Nothing. The right-

hand-side reduces to Nothing >>= h which in turn reduces to Nothing, so they

are the same.

Suppose f is Just a. Then the LHS reduces to g a >>= h and the RHS re-

duces to (Just a >>= \x -> g x) >>= h which in turn reduces to g a >>=

h, so these two are the same.

Solution 9.2

The idea is that we wish to use the Left constructor to represent errors and the Right constructor to represent successes. This leads to an instance declaration like:

instance Monad (Either String) where
  return x = Right x
  Left s  >>= _ = Left s
  Right x >>= f = f x
  fail s = Left s

Monads> searchAll gr 0 3 :: Either String [Int]
Right [0,1,3]
Monads> searchAll gr 3 0 :: Either String [Int]
Left "no path"

Solution 9.3

The order to mplus essentially determines the search order. When the recursive call to searchAll2 comes first, we are doing depth-first search. When the recursive call to search' comes first, we are doing breadth-first search. Thus, using the list monad, we expect the solutions to come in the other order:

[[0,2,3],[0,1,3]]

Just as we expected.

Solution 9.4

This is a very difficult problem; if you found that you were stuck immediately, please

just read as much of this solution as you need to try it yourself.

First, we need to define a list transformer monad. This looks like:

newtype ListT m e = ListT { unListT :: m [e] }

The ListT constructor simply wraps a monadic action (in monad m) which returns

a list.

We now need to make this a monad:

instance Monad m => Monad (ListT m) where
  return x = ListT (return [x])
  fail s   = ListT (return [])
  ListT m >>= k = ListT $ do
    l <- m
    l' <- mapM (unListT . k) l
    return (concat l')

Failure (like in the standard list monad) is represented by an empty list: of course, it’s

actually an empty list returned from the enclosed monad. Binding happens essentially

by running the action, which will result in a list l. This has type [e]. We now need to apply k to each of these elements (which will result in something of type ListT m [e2]). We need to get rid of the ListTs around this (by using unListT) and then concatenate them to make a single list.

Now, we need to make it an instance of MonadPlus:

instance Monad m => MonadPlus (ListT m) where
    mzero = ListT (return [])
    ListT m1 `mplus` ListT m2 = ListT $ do
        l1 <- m1
        l2 <- m2
        return (l1 ++ l2)

Here, the zero element is a monadic action which returns an empty list. Addition is

done by executing both actions and then concatenating the results.

Finally, we need to make it an instance of MonadTrans:

instance MonadTrans ListT where
    lift x = ListT (do a <- x; return [a])

Lifting an action into ListT simply involves running it and getting the value (in

this case, a) out and then returning the singleton list.
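For readers following along in a current GHC, the pieces above need two extra boilerplate instances (Functor and Applicative are now superclasses of Monad, and fail has moved to MonadFail, so it is omitted here). A self-contained sketch, with liftListT standing in for lift so we need not depend on a MonadTrans class:

```haskell
import Control.Monad (liftM, ap)

newtype ListT m e = ListT { unListT :: m [e] }

-- Standard boilerplate: derive Functor/Applicative from the Monad instance.
instance Monad m => Functor (ListT m) where
  fmap = liftM

instance Monad m => Applicative (ListT m) where
  pure x = ListT (return [x])
  (<*>)  = ap

instance Monad m => Monad (ListT m) where
  return = pure
  ListT m >>= k = ListT $ do
    l  <- m
    l' <- mapM (unListT . k) l
    return (concat l')

-- lift, as in the text, just inlined under our own name.
liftListT :: Monad m => m a -> ListT m a
liftListT x = ListT (do a <- x; return [a])

main :: IO ()
main = do
  xs <- unListT $ do
    x <- ListT (return [1, 2, 3])
    liftListT (putStrLn ("visiting " ++ show x))
    return (x * 10)
  print xs
-- prints "visiting 1" .. "visiting 3", then [10,20,30]
```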

Once we have all this together, writing searchAll6 is fairly straightforward:

searchAll6 g@(Graph vl el) src dst
    | src == dst = do
        lift $ putStrLn $
          "Exploring " ++ show src ++ " -> " ++ show dst
        return [src]

    | otherwise = do
        lift $ putStrLn $
          "Exploring " ++ show src ++ " -> " ++ show dst
        search' el
  where
    search' [] = mzero
    search' ((u,v,_):es)
        | src == u =
            (do path <- searchAll6 g v dst
                return (u:path)) `mplus`
            search' es
        | otherwise = search' es

The only change (besides changing the recursive call to call searchAll6 instead

of searchAll2) here is that we call putStrLn with appropriate arguments, lifted

into the monad.

If we look at the type of searchAll6, we see that the result (i.e., after applying a graph and two Ints) has type (MonadTrans t, MonadPlus (t IO)) => t IO [Int]. In theory, we could use this with any appropriate monad transformer; in our case, we want to use ListT. Thus, we can run this by:

Exploring 0 -> 3

Exploring 1 -> 3

Exploring 3 -> 3

Exploring 2 -> 3

Exploring 3 -> 3

MTrans> it

[[0,1,3],[0,2,3]]

Solution 9.5

This exercise is actually simpler than the previous one. All we need to do is incorporate

the calls to putT and getT into searchAll6 and add an extra lift to the IO calls.

This extra lift is required because now we’re stacking two transformers on top of IO

instead of just one.

searchAll7 g@(Graph vl el) src dst
    | src == dst = do
        lift $ lift $ putStrLn $
          "Exploring " ++ show src ++ " -> " ++ show dst
        visited <- getT
        putT (src:visited)
        return [src]
    | otherwise = do
        lift $ lift $ putStrLn $
          "Exploring " ++ show src ++ " -> " ++ show dst
        visited <- getT
        putT (src:visited)
        if src `elem` visited
          then mzero
          else search' el
  where
    search' [] = mzero
    search' ((u,v,_):es)
        | src == u =
            (do path <- searchAll7 g v dst
                return (u:path)) `mplus`
            search' es
        | otherwise = search' es

The type of this has grown significantly. After applying the graph and two Ints, this has type (Monad (t IO), MonadTrans t, MonadPlus (StateT [Int] (t IO))) => StateT [Int] (t IO) [Int].

Essentially this means that we’ve got something that’s a state transformer wrapped

on top of some other arbitrary transformer (t) which itself sits on top of IO. In our

case, t is going to be ListT. Thus, we run this beast by saying:

Exploring 0 -> 3

Exploring 1 -> 3

Exploring 3 -> 3

Exploring 0 -> 3

Exploring 2 -> 3

Exploring 3 -> 3

MTrans> it

[[0,1,3],[0,2,3]]

Solution 9.6

First we write a function spaces which will parse out whitespace:

spaces :: Parser ()

spaces = many (matchChar isSpace) >> return ()

Now, using this, we simply sprinkle calls to spaces through intList to get intListSpace:

intListSpace = do
    char '['
    spaces
    intList' `mplus` (char ']' >> return [])
  where intList' = do
          i <- int
          spaces
          r <- (char ',' >> spaces >> intList')
               `mplus`
               (char ']' >> return [])
          return (i:r)

Right ("",[1,2,4,5])

Parsing> runParser intListSpace "[1 ,2 , 4 \n\n ,a\n]"

Left "expecting char, got 'a'"

Solution 9.7

We do this by replacing the state functions with push and pop functions as follows:

parseValueLet2 = choice
    [ int
    , do string "let "
         c <- letter
         char '='
         e <- parseValueLet2
         string " in "
         pushBinding c e
         v <- parseValueLet2
         popBinding c
         return v
    , do c <- letter
         fm <- getState
         case lookupFM fm c of
           Nothing -> unexpected ("variable " ++ show c ++
                                  " unbound")
           Just (i:_) -> return i
    , between (char '(') (char ')') $ do
         e1 <- parseValueLet2
         op <- oneOf "+*"
         e2 <- parseValueLet2
         case op of
           '+' -> return (e1 + e2)
           '*' -> return (e1 * e2)
    ]
  where
    pushBinding c v = do
      fm <- getState
      case lookupFM fm c of
        Nothing -> setState (addToFM fm c [v])
        Just l  -> setState (addToFM fm c (v:l))
    popBinding c = do
      fm <- getState
      case lookupFM fm c of
        Just [_]   -> setState (delFromFM fm c)
        Just (_:l) -> setState (addToFM fm c l)

The primary difference here is that instead of calling updateState, we use two

local functions, pushBinding and popBinding. The pushBinding function

takes a variable name and a value and adds the value onto the head of the list pointed to

in the state FiniteMap. The popBinding function looks at the value and if there is

only one element on the stack, it completely removes the stack from the FiniteMap;

otherwise it just removes the first element. This means that if something is in the

FiniteMap, the stack is never empty.
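The same push/pop bookkeeping can be sketched on its own with Data.Map, the modern replacement for the long-deprecated FiniteMap; pushB and popB are our names, and the state-monad plumbing is stripped away so only the stack logic remains:

```haskell
import qualified Data.Map as M

type Env = M.Map Char [Int]

-- Push a new binding onto the head of the variable's stack.
pushB :: Char -> Int -> Env -> Env
pushB c v = M.insertWith (++) c [v]

-- Pop one binding; if it was the last, remove the stack entirely,
-- so any stack present in the map is never empty.
popB :: Char -> Env -> Env
popB c fm = case M.lookup c fm of
  Just [_]   -> M.delete c fm
  Just (_:l) -> M.insert c l fm
  _          -> fm

main :: IO ()
main = do
  let e1 = pushB 'x' 2 M.empty
      e2 = pushB 'x' 5 e1
  print (M.lookup 'x' e2)             -- Just [5,2]
  print (M.lookup 'x' (popB 'x' e2))  -- Just [2]
```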

This enables us to modify the usage case only slightly; this time, we simply take

the top element off the stack when we need to inspect the value of a variable.

We can test that this works:

"((let x=2 in 3+4)*x)"

Left "stdin" (line 1, column 20):

unexpected variable 'x' unbound

Index

(), 53
*, 13
+, 13
++, 17
-, 13
--, 28
., 25
.., 94
/, 13
//, 97
:, 16
::, 37
==, 38
[], 16
$, 77
^, 13
→, 42
λ, 42
{- -}, 28
_, 81

as, 70
derive, 89
do, 32
hiding, 70
import, 69
let, 27
qualified, 69

accumArray, 96
actions, 58–62
arithmetic, 13–14
array, 96
arrays, 96–97
  mutable, 157
assocs, 97

Bird scripts, 71
boolean, 38
bounds, 97
bracket, 62
brackets, 15
buffering, 33

comments, 28–29
common sub-expression, 74
Compilers, see GHC, NHC, Interpreters
concatenate, 17
concurrent, 157
cons, 16
constructors, 48
continuation passing style, 53–56
CPS, see continuation passing style

destructive update, 13
do notation, 32, 120–122
drop, 94
dynamic, 157

Editors, 9–10
elems, 97
Emacs, 10
Enum, 87–88
enumerated types, 52
enumFromThenTo, 94
enumFromTo, 94
Eq, 84–86
equality, 38
equals, 41
eta reduction, 76
evaluation order, 13
exceptions, 157
exports, 67–69
expressions, 13
extensions
  enabling
    in GHC, 9
    in Hugs, 7

fallthrough, 82
false, see boolean
files, 20–22
filter, 18, 93
FiniteMap, 97–99
foldl, 18, 93
foldr, 18, 93
folds, 99–101
fst, 15
functional, i
functions, 22–28
  anonymous, see lambda
  as arguments, 46–47
  associative, 19
  composition, 25
  type, 42–47

getChar, 62
getContents, 62
getLine, 32, 62
GHC, 5, 7–9
guards, 83–84

Haskell Bookshelf, iv
hClose, 62
head, 17, 43
hGetChar, 62
hGetContents, 62
hGetLine, 62
hIsEOF, 62
hPutChar, 62
hPutStr, 62
hPutStrLn, 62
Hugs, 5–7

immutable, 12
imports, 69–70
indices, 97
induction, 30
infix, 44, 73–74
instance, 39
interactive programs, 31–36
Interpreters, see GHC, Hugs
IO, 57–66
  library, 62–64
isLower, 18

lambda, 42
lambda calculus, 42
LaTeX scripts, 72
layout, 99
lazy, i, 11
length, 17, 46
let
  and do, 121
list
  comprehensions, 96
listArray, 96
lists, 15–20, 92–96
  comprehensions, 94
  cons, 16
  empty, 16
literate style, 71–72
local bindings, 27, 74
local declarations, 74
loop, 29

maps, 97–99
modules, 67–72
  hierarchical, 70–71
monads, 58
  and do, 120–122
  combinators, 133
  definition of, 122–124
  laws, 122
  plus, 137–139
  st, 157
  state, 124–130
  transformer, 139–143
monads-combinators, 137
mutable, see immutable

named fields, 90–92
NHC, 5, 9
Num, 88
numeric types, 41

openFile, 62
operator precedence, 14
Ord, 86–87
output, see IO

pairs, see tuples
parentheses, 14
parsing, 143–155
partial application, 76–79
pattern matching, 48, 79–83
point-free programming, 77
primitive recursive, 99
pure, i, 12
putChar, 62
putStr, 62
putStrLn, 22, 62

random numbers, 33
randomRIO, 33
Read, 88
read, 17
readFile, 62
recursion, 29–31
references, 157
referential transparency, 13
regular expressions, 157

sections, 73–74
shadowing, 74
Show, 86
show, 17, 41
snd, 15
sqrt, 13
standard, iv
state, i
strict, i, 11, 105–108
strings, 17
  converting from/to, 17

tail, 17, 43
take, 94
toUpper, 18
true, see boolean
type, 37, 117
  checking, 37
  classes, 40–42, 108–113
    instances, 84–89, 113–115
  datatypes, 47–53, 89–92, 105–108
    constructors, 48–50
    recursive, 50–51
    strict, 105–108
  default, 117
  errors, 38
  explicit declarations, 45–46
  hierarchy, 117
  higher-order, 42–44
  inference, 37
  IO, 44–45
  kinds, 115–117
  newtype, 104–105
  polymorphic, 39–40
  signatures, 45
  synonyms, 63, 103–104

Unit, 53
unzip, 93
user input, see interactive programs

wildcard, 81
writeFile, 62

zip, 93

Haskell/Print version - Wikibooks, collection of open-content textbooks Page 1

Haskell/Print version

From Wikibooks, the open-content textbooks collection

Table Of Contents

Haskell Basics

Getting set up

Variables and functions

Lists and tuples

Next steps

Type basics

Simple input and output

Type declarations

Elementary Haskell

Recursion

List processing

Pattern matching

More about lists

Control structures

More on functions

Higher order functions

Intermediate Haskell

Modules

Indentation

More on datatypes

Class declarations

Classes and types

Keeping track of State

Monads

Understanding monads

Advanced monads

Additive monads (MonadPlus)

Monad transformers

Practical monads

Advanced Haskell

Arrows

Understanding arrows

Continuation passing style (CPS)


Mutable objects

Zippers

Applicative Functors

Concurrency

Existentially quantified types

Polymorphism

Advanced type classes

Phantom types

Generalised algebraic data-types (GADT)

Datatype algebra

Wider Theory

Denotational semantics

Equational reasoning

Program derivation

Category theory

The Curry-Howard isomorphism

Haskell Performance

Graph reduction

Laziness

Strictness

Algorithm complexity

Parallelism

Choosing data structures

Libraries Reference

The Hierarchical Libraries

Lists
Arrays
Maybe
Maps
IO
Random Numbers

General Practices

Building a standalone application

Debugging

Testing

Packaging your software (Cabal)

Using the Foreign Function Interface (FFI)

Specialised Tasks

Graphical user interfaces (GUI)

Databases

Web programming

Working with XML

Using Regular Expressions


Haskell Basics

Getting set up

This chapter will explore how to install the programs you'll need to start coding in Haskell.

Installing Haskell

First of all, you need a Haskell compiler. A compiler is a program that takes your code and spits out an

executable which you can run on your machine.

There are several Haskell compilers available freely, the most popular and fully featured of them all being the

Glasgow Haskell Compiler or GHC for short. The GHC was originally written at the University of Glasgow.

GHC is available for most platforms:

For MS Windows, see the GHC download page (http://haskell.org/ghc/download.html) for details

For MacOS X, Linux or other platforms, you are most likely better off using one of the pre-packaged

versions (http://haskell.org/ghc/distribution_packages.html) for your distribution or operating system.

Note

A quick note to those people who prefer to compile from source: This might be a bad

idea with GHC, especially if it's the first time you install it. GHC is itself mostly

written in Haskell, so trying to bootstrap it by hand from source is very tricky.

Besides, the build takes a very long time and consumes a lot of disk space. If you are

sure that you want to build GHC from the source, see Building and Porting GHC at

the GHC homepage (http://hackage.haskell.org/trac/ghc/wiki/Building) .

Getting interactive

If you've just installed GHC, then you'll have also installed a sideline program called GHCi. The 'i' stands for

'interactive', and you can see this if you start it up. Open a shell (or click Start, then Run, then type 'cmd' and

hit Enter if you're on Windows) and type ghci, then press Enter.

You should get output that looks something like the following:

___ ___ _

/ _ \ /\ /\/ __(_)

/ /_\// /_/ / / | | GHC Interactive, version 6.6, for Haskell 98.

/ /_\\/ __ / /___| | http://www.haskell.org/ghc/

\____/\/ /_/\____/|_| Type :? for help.

Prelude>


The first bit is GHCi's logo. It then informs you it's loading the base package, so you'll have access to most of

the built-in functions and modules that come with GHC. Finally, the Prelude> bit is known as the prompt.

This is where you enter commands, and GHCi will respond with what they evaluate to.

Prelude> 2 + 2

4

Prelude> 5 * 4 + 3

23

Prelude> 2 ^ 5

32

The operators are similar to what they are in other languages: + is addition, * is multiplication, and ^ is

exponentiation (raising to the power of).

GHCi is a very powerful development environment. As we progress through the course, we'll learn how we

can load source files into GHCi, and evaluate different bits of them.

The next chapter will introduce some of the basic concepts of Haskell. Let's dive into that and have a look at

our first Haskell functions.

(All the examples in this chapter can be typed into a Haskell source file and evaluated by loading that file

into GHC or Hugs.)

Variables

Previously, we saw how to do simple arithmetic operations like addition and subtraction. Pop quiz: what is

the area of a circle whose radius is 5 cm? No, don't worry, you haven't stumbled through the Geometry

wikibook by mistake. The area of our circle is πr², where r is our radius (5 cm) and π, for the sake of simplicity, is rounded to 3.14.


So let's see, we want to multiply pi (3.14) by our radius squared, so that would be:

Prelude> 3.14 * (5 ^ 2)
78.5


Great! Well, now since we have these wonderful, powerful computers to help us calculate things, there really

isn't any need to round pi down to 2 decimal places. Let's do the same thing again, but with a slightly longer

value for pi

Prelude> 3.14159265358979323846264338327950 * (5 ^ 2)

78.53981633974483

Much better, so now how about giving me the circumference of that circle (hint: 2π r)?

Prelude> 2 * 3.14159265358979323846264338327950 * 5

31.41592653589793

And the area of a circle with radius 25?

Prelude> 3.14159265358979323846264338327950 * (25 ^ 2)
1963.4954084936207

What we're hoping here is that sooner or later, you are starting to get sick of typing (or copy-and-pasting) all

this text into your interpreter (some of you might even have noticed the up-arrow and Emacs-style key

bindings to zip around the command line). Well, the whole point of programming, we would argue, is to

avoid doing stupid, boring, repetitious work like typing the first 20 digits of pi in a million times. What we

really need is a means of remembering the value of pi:

Prelude> let pi = 3.14159265358979323846264338327950

Note

If this command does not work, you are probably using hugs instead of GHCi, which

expects a slightly different syntax.

Here you are literally telling Haskell to: "let pi be equal to 3.14159...". This introduces the new variable pi,

which is now defined as being the number 3.14159265358979323846264338327950. This will be very handy

because it means that we can call that value back up by just typing pi again:

Prelude> pi

3.141592653589793

Don't worry about all those missing digits; they're just skipped when displaying the value. All the digits will

be used in any future calculations.

Having variables takes some of the tedium out of things. What is the area of a circle having a radius of 5 cm?

How about a radius of 25cm?

Prelude> pi * 5^2

78.53981633974483


Prelude> pi * 25^2

1963.4954084936207

Note

What we call "variables" in this book are often referred to as "symbols" in other

introductions to functional programming. This is because other languages, namely the more popular imperative languages, have a very different use for variables: keeping

track of state. Variables in Haskell do no such thing; they store a value and an

immutable one at that.

Types

Following the previous example, you might be tempted to try storing a value for that radius. Let's see what

happens:

Prelude> let r = 25

Prelude> 2 * pi * r

<interactive>:1:9:

Couldn't match `Double' against `Integer'

Expected type: Double

Inferred type: Integer

In the second argument of `(*)', namely `r'

In the definition of `it': it = (2 * pi) * r

Whoops! You've just run into a programming concept known as types. Types are a feature of many

programming languages which are designed to catch some of your programming errors early on so that you

find out about them before it's too late. We'll discuss types in more detail later on in the Type basics chapter,

but for now it's useful to think in terms of plugs and connectors. For example, many of the plugs on the back

of your computer are designed to have different shapes and sizes for a purpose. This is partly so that you

don't inadvertently plug the wrong bits of your computer in together and blow something up. Types serve a

similar purpose, but in this particular example, well, types aren't so helpful.

The main problem is that Haskell doesn't let you multiply Integers with real numbers. We'll explain why

later, but for now, you can get around the issue by using a Double for r so that the pieces fit together:

Prelude> let r = 25.0

Prelude> 2 * pi * r

157.07963267948966

Variables can contain much more than just simple values such as 3.14. Indeed, they can contain any Haskell

expression whatsoever. So, if we wanted to keep around, say the area of a circle with radius of 5, we could

write something like this:

Prelude> let area = pi * 5 ^ 2

What's interesting about this is that we've stored a complicated chunk of Haskell (an arithmetic expression

containing a variable) into yet another variable.

We can use variables to store any arbitrary Haskell code, so let's use this to get our acts together.

Prelude> let area2 = pi * r ^ 2

Prelude> area2

1963.4954084936207

So far so good. But let's change r and try again:

Prelude> let r = 2
Prelude> area2

1963.4954084936207

Wait a second, why didn't this work? That is, why is it that we get the same value for area2 as we did back when r was 25? The reason is that variables in Haskell do not vary. What actually happens when you define r the second time is that you are talking about a different r. This is something that happens in real life as well. How many people do you know that have the name John? What's interesting about people named John is that most of the time, you can talk about "John" to your friends, and depending on the context, your friends will know which John you are referring to. Programming has something similar to context, called scope. We won't explain scope (at least not now), but Haskell's lexical scope is the magic that lets us define two different rs and always get the right one back. Scope, however, does not solve the current problem. What we want to do is define a generic area that always gives you the area of a circle. What we could do is just define it a second time:

Prelude> let area3 = pi * 2 ^ 2
Prelude> area3

12.566370614359172

But we are programmers, and programmers loathe repetition. Is there a better way?

Functions

What we are really trying to accomplish with our generic area is to define a function. Defining functions in

Haskell is dead-simple. It is exactly like defining a variable, except with a little extra stuff on the left hand

side. For instance, below is our definition of pi, followed by our definition of area:

Prelude> let pi = 3.14159265358979323846264338327950
Prelude> let area r = pi * r ^ 2

To calculate the area of our two circles, we simply pass it a different value:

Prelude> area 5

78.53981633974483


Prelude> area 25

1963.4954084936207

Functions allow us to make a great leap forward in the reusability of our code. But let's slow down for a

moment, or rather, back up to dissect things. See the r in our definition area r = ...? This is what we

call a parameter. A parameter is what we use to provide input to the function. When Haskell is interpreting

the function, the value of its parameter must come from the outside. In the case of area, the value of r is 5

when you say area 5, but it is 25 if you say area 25.

Exercises

1. Predict what the following sequence of commands will produce:

Prelude> let r = 0
Prelude> let area r = pi * r ^ 2
Prelude> area 5

2. What actually happens? Why? (Hint: remember what was said before about "scope")

We hope you have completed the very short exercise (I would say thought experiment) above. Fortunately,

the following fragment of code does not contain any unpleasant surprises:

Prelude> let r = 0

Prelude> let area r = pi * r ^ 2

Prelude> area 5

78.53981633974483

An unpleasant surprise here would have been getting the value 0. This is just a consequence of what we

wrote above, namely the value of a parameter is strictly what you pass in when you call the function. And

that is directly a consequence of our old friend scope. Informally, the r in let r = 0 is in scope when you are at the top level of the interpreter, but it is not the same r as the one inside our defined function area: the r inside area shadows the outer r; you can think of it as Haskell picking the most specific version of r there

is. If you have many friends all named John, you go with the one which just makes more sense and is specific

to the context; similarly, what value of r we get depends on the scope.

Multiple parameters

Another thing you might want to know about functions is that they can accept more than one parameter. Say

for instance, you want to calculate the area of a rectangle. This is quite simple to express:

Prelude> let areaRect l w = l * w

Prelude> areaRect 5 10

50

Prelude> let areaTriangle b h = (b * h) / 2
Prelude> areaTriangle 3 9

13.5

Passing parameters in is pretty straightforward: you just give them in the same order that they are defined.

So, whereas areaTriangle 3 9 gives us the area of a triangle with base 3 and height 9,

areaTriangle 9 3 gives us the area with the base 9 and height 3.

Exercises

Write a function to calculate the volume of a box. A box has width, height and

depth. You have to multiply them all to get the volume.

To further cut down the amount of repetition it is possible to call functions from within other functions. A

simple example showing how this can be used is to create a function to compute the area of a Square. We

can think of a square as a special case of a rectangle (the area is still the width multiplied by the length);

however, we also know that the width and length are the same, so why should we need to type it in twice?

Prelude> let areaSquare s = areaRect s s

Prelude> areaSquare 5

25

Exercises

Write a function to calculate the volume of a cylinder. The volume of a cylinder is

the area of the base, which is a circle (you already programmed this function in

this chapter, so reuse it) multiplied by the height.

Summary

1. Variables store values. In fact, they store any arbitrary Haskell expression.

2. Variables do not change.

3. Functions help you write reusable code.

4. Functions can accept more than one parameter.
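The four points above can be collected into a small source file that you could load into GHCi with :load; the file layout and names below are our own choices, not the book's:

```haskell
-- Variables, functions, and reuse, in one small module.
piApprox :: Double
piApprox = 3.141592653589793      -- a variable: stores a value, never changes

area :: Double -> Double
area r = piApprox * r * r         -- a one-parameter function

areaRect :: Double -> Double -> Double
areaRect l w = l * w              -- a two-parameter function

areaSquare :: Double -> Double
areaSquare s = areaRect s s       -- reuse: a square is a special rectangle

main :: IO ()
main = do
  print (area 5)           -- 78.53981633974483
  print (areaRect 5 10)    -- 50.0
  print (areaSquare 5)     -- 25.0
```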

Notes

1. ^ For readers with prior programming experience: Variables don't change? I only get constants?

Shock! Horror! No... trust us, as we hope to show you in the rest of this book, you can go a very long


way without changing a single variable! In fact, this non-changing of variables makes life easier

because it makes programs so much more predictable.

Lists and tuples are two ways of crushing several values down into a single value.

Lists

The functional programmer's next best friend

In the last section we introduced the concept of variables and functions in Haskell. Functions are one of the

two major building blocks of any Haskell program. The other is the versatile list. So, without further ado,

let's switch over to the interpreter and build some lists:

Prelude> let truths = [True, False, False]

Prelude> let strings = ["here", "are", "some", "strings"]

The square brackets denote the beginning and the end of the list. List elements are separated by the comma

"," operator. Further, list elements must be all of the same type. Therefore, [42, "life, universe

and everything else"] is not a legal list because it contains two elements of different types, namely,

integer and string respectively. However, [12, 80] or ["beer", "sandwiches"] are valid lists

because they are both type-homogeneous.

Here is what happens if you try to define a list with mixed-type elements:

Prelude> let mixed = [True, "bonjour"]

<interactive>:1:19:

Couldn't match `Bool' against `[Char]'

Expected type: Bool

Inferred type: [Char]

In the list element: "bonjour"

In the definition of `mixed': mixed = [True, "bonjour"]

If you're confused about this business of lists and types, don't worry about it. We haven't talked very much

about types yet and we are confident that this will clear up as the book progresses.

Building lists

Square brackets and commas aren't the only way to build up a list. Another thing you can do with them is to

build them up piece by piece, by consing things on to them, via the (:) operator.

Prelude> let numbers = [1,2,3,4]
Prelude> numbers

[1,2,3,4]

Prelude> 0:numbers

[0,1,2,3,4]

When you cons something on to a list (something:someList), what you get back is another list. So,

unsurprisingly, you could keep on consing your way up.

Prelude> 1:0:numbers

[1,0,1,2,3,4]

Prelude> 2:1:0:numbers

[2,1,0,1,2,3,4]

Prelude> 5:4:3:2:1:0:numbers

[5,4,3,2,1,0,1,2,3,4]

In fact, this is just about how all lists are built, by consing them up from the empty list ([]). The commas and

brackets notation is actually a pleasant form of syntactic sugar. In other words, a list like [1,2,3,4,5] is

exactly equivalent to 1:2:3:4:5:[].
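You can ask the interpreter to confirm this equivalence directly, since (==) compares two lists element by element:

```haskell
main :: IO ()
main = print ([1,2,3,4,5] == 1:2:3:4:5:[])   -- True
```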

You will, however, want to watch out for a potential pitfall in list construction. Whereas 1:2:[] is perfectly

good Haskell, 1:2 is not. In fact, if you try it out in the interpreter, you get a nasty error message.

Example: Whoops!

Prelude> 1:2

<interactive>:1:2:

No instance for (Num [a])

arising from the literal `2' at <interactive>:1:2

Probable fix: add an instance declaration for (Num [a])

In the second argument of `(:)', namely `2'

In the definition of `it': it = 1 : 2

Well, to be fair, the error message is nastier than usual because numbers are slightly funny beasts in Haskell.

Let's try this again with something simpler, but still wrong: True:False

Prelude> True:False

<interactive>:1:5:

Couldn't match `[Bool]' against `Bool'

Expected type: [Bool]
Inferred type: Bool


In the second argument of `(:)', namely `False'

In the definition of `it': it = True : False

The basic intuition for this is that the cons operator (:) works with the pattern something:someList; however, what we gave it is more like something:somethingElse. Cons only knows how to stick things

onto lists. We're starting to run into a bit of reasoning about types. Let's summarize so far:

You can only cons (:) something onto a list.

Well, sheesh, aren't types annoying? They are indeed, but as we will see in Type basics, they can also be a

life saver. In either case, when you are programming in Haskell and something blows up, you'll probably

want to get used to thinking "probably a type error".

Exercises

1. Would the following piece of Haskell work: 3:[True,False]? Why or why not?

2. Write a function cons8 that takes a list and conses 8 on to it. Test it out on

the following lists by doing:

1. cons8 []

2. cons8 [1,2,3]

3. cons8 [True,False]

4. let foo = cons8 [1,2,3]

5. cons8 foo

3. Write a function that takes two arguments, a list and a thing, and conses the thing onto the list. You should start out with:

let myCons list thing =

Lists can contain anything, just as long as they are all of the same type. Well, then, chew on this: lists are

things too, therefore, lists can contain... yes indeed, other lists! Try the following in the interpreter:

Prelude> let listOfLists = [[1,2],[3,4],[5,6]]
Prelude> listOfLists

[[1,2],[3,4],[5,6]]

Lists of lists can be pretty tricky sometimes, because a list of things does not have the same type as a thing

all by itself. Let's sort through these implications with a few exercises:

Exercises


1. Which of these are valid Haskell and which are not? Rewrite in cons

notation.

1. [1,2,3,[]]

2. [1,[2,3],4]

3. [[1,2,3],[]]

2. Which of these are valid Haskell, and which are not? Rewrite in comma

and bracket notation.

1. []:[[1,2,3],[4,5,6]]

2. []:[]

3. []:[]:[]

4. [1]:[]:[]

3. Can Haskell have lists of lists of lists? Why or why not?

4. Why is the following list invalid in Haskell? Don't worry too much if you

don't get this one yet.

1. [[1,2],3,[4,5]]

Lists of lists are extremely useful, because they allow you to express some very complicated, structured data

(two-dimensional matrices, for example). They are also one of the places where the Haskell type system truly

shines. Human programmers, or at least this wikibook author, get confused all the time when working with

lists of lists, and having restrictions of types often helps in wading through the potential mess.

Tuples

A different notion of many

Tuples are another way of storing multiple values in a single value, but they are subtly different in a number

of ways. They are useful when you know, in advance, how many values you want to store, and they lift the

restriction that all the values have to be of the same type. For example, we might want a type for storing pairs

of co-ordinates. We know how many elements there are going to be (two: an x and y co-ordinate), so tuples

are applicable. Or, if we were writing a phonebook application, we might want to crunch three values into

one: the name, phone number and address of someone. Again, we know how many elements there are going

to be. Also, those three values aren't likely to have the same type, but that doesn't matter here, because we're

using tuples.

(True, 1)

("Hello world", False)

(4, 5, "Six", True, 'b')

The first example is a tuple containing two elements. The first one is True and the second is 1. The next

example again has two elements, the first is "Hello world" and the second, False. The third example is a bit

more complex. It's a tuple consisting of five elements, the first is the number 4, the second the number 5, the

third "Six", the fourth True, and the last one the character 'b'. So the syntax for tuples is: separate the different

elements with a comma, and surround the whole thing in parentheses.


A quick note on nomenclature: in general you write n-tuple for a tuple of size n. 2-tuples (that is, tuples with

2 elements) are normally called 'pairs' and 3-tuples 'triples'. Tuples of greater sizes aren't actually all that

common, although, if you were to logically extend the naming system, you'd have 'quadruples', 'quintuples'

and so on, hence the general term 'tuple'.

So tuples are a bit like lists, in that they can store multiple values. However, there is a very key difference:

pairs don't have the same type as triples, and triples don't have the same type as quadruples, and in general,

two tuples of different sizes have different types. You might be getting a little disconcerted because we keep

mentioning this word 'type', but for now, it's just important to grasp how lists and tuples differ in their

approach to sizes. You can have, say, a list of numbers, and add a new number on the front, and it remains a

list of numbers. If you have a pair and wish to add a new element, it becomes a triple, and this is a

fundamentally different object[1].

Exercises

1. Write down the 3-tuple whose first element is 4, second element is "hello"

and third element is True.

2. Which of the following are valid tuples?

1. (4, 4)

2. (4, "hello")

3. (True, "Blah", "foo")

3. Lists can be built by consing new elements on to them: you cons a number

onto a list of numbers, and get back a list of numbers. It turns out that there

is no such way to build up tuples.

1. Why do you think that is?

2. Say for the sake of argument, that there was such a function. What

would you get if you "consed" something on a tuple?

Tuples are handy when you want to return more than one value from a function. In most languages trying to

return two or more things at once means wrapping them up in a special data structure, maybe one that only

gets used in that function. In Haskell, just return them as a tuple.
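For instance, here is a small sketch of a function returning two results at once as a pair (the name quotRem' is our own; the Prelude already provides quotRem and divMod for this exact job):

```haskell
-- Return both the quotient and the remainder of integer division at once,
-- packed into a pair. (A sketch: the Prelude's own quotRem already does this.)
quotRem' :: Int -> Int -> (Int, Int)
quotRem' x y = (x `div` y, x `mod` y)

main :: IO ()
main = print (quotRem' 7 2)  -- prints (3,1)
```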

You can also use tuples as a primitive kind of data structure. But that needs an understanding of types, which

we haven't covered yet.

In this section, we concentrate solely on pairs. This is mostly for simplicity's sake, but pairs are by far and

away the most commonly used size of tuple.

Okay, so we've seen how we can put values in to tuples, simply by using the (x, y, z) syntax. How can

we get them out again? For example, a typical use of tuples is to store the (x, y) co-ordinate pair of a point:

imagine you have a chess board, and want to specify a specific square. You could do this by labeling all the

rows from 1 to 8, and similarly with the columns, then letting, say, (2, 5) represent the square in row 2 and

column 5. Say we want to define a function for finding all the pieces in a given row. One way of doing this

would be to find the co-ordinates of all the pieces, then look at the row part and see if it's equal to whatever

row we're being asked to examine. This function would need, once it had the co-ordinate pair (x, y) of a

piece, to extract the x (the row part). To do this there are two functions, fst and snd, which project the first


and second elements out of a pair, respectively (in math-speak a function that gets some data out of a

structure is called a "Projection"). Let's see some examples:


Prelude> fst (True, "boo")

True

Prelude> snd (5, "Hello")

"Hello"

It should be fairly obvious what these functions do. Note that you can only use these functions on pairs.

Why? It all harks back to the fact that tuples of different sizes are different beasts entirely. fst and snd are

specialized to pairs, and so you can't use them on anything else[2].

Exercises

1. Use a combination of fst and snd to extract the 4 from the tuple (("Hello", 4), True).

2. Normal chess notation is somewhat different to ours: it numbers the rows

from 1-8 but then labels the columns A-H. Could we label a specific point

with a number and a character, like (4, 'a')? What important

difference with lists does this illustrate?

We can apply the same reasoning to tuples that we applied to storing lists within lists. Tuples are things too, so you can

store tuples within tuples (within tuples, up to any arbitrary level of complexity). Likewise, you could also have

lists of tuples, tuples of lists, all sorts of combinations along the same lines.

((2,3), True)

((2,3), [2,3])

[(1,2), (3,4), (5,6)]

Some discussion about this - what you get out of this, maybe, what's the big idea behind grouping

things together

There is one bit of trickiness to watch out for, however. The type of a tuple is defined not only by its size,

but by the types of the objects it contains. For example, tuples like ("Hello",32) and (47,"World")

are fundamentally different. One is of type (String,Int), whereas the other is of type (Int,String).

This has implications for building up lists of tuples. We could very well have lists like [("a",1),


("b",9),("c",9)], but having a list like [("a",1),(2,"b"),(9,"c")] is right out. Can you spot

the difference?

Exercises

1. Which of the following expressions are valid, and which are not?

fst [1,2]
1:(2,3)
(2,4):(2,3)
(2,4):[]
[(2,4),(5,5),('a','b')]
([2,4],[2,2])

2. FIXME: to be added

Summary

We have introduced two new notions in this chapter, lists and tuples. To sum up:

1. Lists are defined by square brackets and commas: [1,2,3]
They can contain anything as long as all the elements of the list are of the same type
They can also be built by the cons operator, (:), but you can only cons things onto lists
2. Tuples are defined by parentheses and commas: ("Bob",32)
They can contain anything, even things of different types
They have a fixed length, or at least their length is encoded in their type. That is, two tuples with different lengths will have different types.
3. Lists and tuples can be combined in any number of ways: lists within lists, tuples with lists, etc.

We hope that at this point you're comfortable enough manipulating them as part of the fundamental Haskell

building blocks (variables, functions and lists), because we're now going to move on to some potentially heady

topics: types and recursion. We have alluded to types three times in this chapter without really saying what they

are, so they shall be the next major topic that we cover. But before we get to that, we're going to make a short

detour to help you make better use of the GHC interpreter.

Notes

1. At least as far as types are concerned, but we're trying to avoid that word :)

2. More technically, fst and snd have types which limit them to pairs. It would be impossible to

define projection functions on tuples in general, because they'd have to be able to accept tuples of

different sizes, so the type of the function would vary.

Next steps

Haskell files


Up to now, we've made heavy use of the GHC interpreter. The interpreter is indeed a useful tool for trying

things out quickly and for debugging your code. But we're getting to the point where typing everything

directly into the interpreter isn't very practical. So now, we'll be writing our first Haskell source files.

Open up a file varfun.hs in your favourite text editor (the hs stands for Haskell) and paste the following

definition in. Remember, Haskell uses indentations and spaces to decide where functions (and other things)

begin and end, so make sure there are no leading spaces and that indentations are correct, otherwise GHC

will report parse errors.

area r = pi * r^2

(In case you're wondering, pi is actually predefined in Haskell, no need to include it here). Now change into

the directory where you saved your file, open up ghci, and use :load (or :l for short):

Prelude> :l varfun.hs
Compiling Main             ( varfun.hs, interpreted )

Ok, modules loaded: Main.

*Main>

*Main> area 5

78.53981633974483

If you make changes to the file, just use :reload (:r for short) to reload the file.

Note

GHC can also be used as a compiler. That is, you could use GHC to convert your

Haskell files into a program that can then be run without running the interpreter. See

the documentation for details.

You'll note that there are a couple of differences between how we do things when we type them directly into

ghci, and how we do them when we load them from files. The differences may seem awfully arbitrary for

now, but they're actually quite sensible consequences of scoping rules which, rest assured, we will explain later.

No let

When defining things in a source file, you don't use let. In GHCi you would type:

let x = 3

let y = 2

let area r = pi * r ^ 2

whereas in a source file you simply write:

x = 3

y = 2

area r = pi * r ^ 2

The keyword let is actually something you use a lot in Haskell, but not exactly in this context. We'll see

further on in this chapter when we discuss the use of let bindings.

Prelude> let r = 5

Prelude> r

5

Prelude> let r = 2

Prelude> r

2

On the other hand, writing something like this in a source file does not work:

r = 5

r = 2

As we mentioned above, variables do not change, and this is even more the case when you're working in a

source file. This has one very nice implication. It means that:

The order in which you declare things does not matter. For example, the following fragments of code do

exactly the same thing:

y = x * 2
x = 3

x = 3
y = x * 2

This is a unique feature of Haskell and other functional programming languages. The fact that variables never

change means that we can opt to write things in any order that we want (but this is also why you can't

declare something more than once... it would be ambiguous otherwise).

Exercises

Save the functions you had written in the previous module's exercises into a

Haskell file. Load the file in GHCi and test the functions on a few parameters


Working with actual source code files instead of typing things into the interpreter makes it convenient to

define much more substantial functions than those we've seen up to now. Let's flex some Haskell muscle

here and examine the kinds of things we can do with our functions.

Conditional expressions

if / then / else

Haskell supports standard conditional expressions. For instance, we could define a function that returns -1 if

its argument is less than 0; 0 if its argument is 0; and 1 if its argument is greater than 0 (this is called the

signum function). Actually, such a function already exists, but let's define one of our own, which we'll call

mySignum.

mySignum x =
    if x < 0
        then -1
        else if x > 0
            then 1
            else 0

Example:

Test> mySignum 5

1

Test> mySignum 0

0

Test> mySignum (5-10)

-1

Test> mySignum (-1)

-1

Note that the parentheses around "-1" in the last example are required; if missing, the system will think you

are trying to subtract the value "1" from the value "mySignum", which is ill-typed.

The if/then/else construct in Haskell is very similar to that of most other programming languages; however,

you must have both a then and an else clause. It evaluates the condition (in this case x < 0) and, if this

evaluates to True, it evaluates the then branch; if the condition evaluates to False, it evaluates the

else branch.

You can test this program by editing the file and loading it back into your interpreter. If Test is already the

current module, instead of typing :l Test.hs again, you can simply type :reload or just :r to reload

the current file. This is usually much faster.

case

Haskell, like many other languages, also supports case constructions. These are used when there are

multiple values that you want to check against (case expressions are actually quite a bit more powerful than

this -- see the Pattern matching chapter for all of the details).


Suppose we wanted to define a function that had a value of 1 if its argument were 0; a value of 5 if its

argument were 1; a value of 2 if its argument were 2; and a value of -1 in all other instances. Writing this

function using if statements would be long and very unreadable; so we write it using a case statement as

follows (we call this function f):

f x =
    case x of
        0 -> 1
        1 -> 5
        2 -> 2
        _ -> -1

In this program, we're defining f to take an argument x and then inspect the value of x. If it matches 0, the

value of f is 1. If it matches 1, the value of f is 5. If it matches 2, then the value of f is 2; and if it hasn't

matched anything by that point, the value of f is -1 (the underscore can be thought of as a "wildcard" -- it

will match anything).

The indentation here is important. Haskell uses a system called "layout" to structure its code (the

programming language Python uses a similar system). The layout system allows you to write code without the

explicit semicolons and braces that other languages like C and Java require.

Note

Because whitespace matters in Haskell, you need to be careful about whether

you are using tabs or spaces. If you can configure your editor to never use

tabs, that's probably better. If not, make sure your tabs are always 8 spaces

long, or you're likely to run into problems.

Indentation

The general rule for layout is that an open-brace is inserted after the keywords where, let, do and of, and

the column position at which the next command appears is remembered. From then on, a semicolon is

inserted before every new line that is indented the same amount. If a following line is indented less, a close-

brace is inserted. This may sound complicated, but if you follow the general rule of indenting after each of

those keywords, you'll never have to remember it (see the Indentation chapter for a more complete discussion

of layout).

Some people prefer not to use layout and write the braces and semicolons explicitly. This is perfectly

acceptable. In this style, the above function might look like:

f x = case x of
    { 0 -> 1 ; 1 -> 5 ; 2 -> 2 ; _ -> -1 }

Of course, if you write the braces and semicolons explicitly, you're free to structure the code as you wish.

The following is also equally valid:

f x =
    case x of { 0 -> 1 ;
                1 -> 5 ; 2 -> 2
                ; _ -> -1 }

However, structuring your code like this only serves to make it unreadable (in this case).


Functions can also be defined piece-wise, meaning that you can write one version of your function for certain

parameters and then another version for other parameters. For instance, the above function f could also be

written as:

f 0 = 1

f 1 = 5

f 2 = 2

f _ = -1

Here, the order is important. If we had put the last line first, it would have matched every argument, and f

would return -1, regardless of its argument (most compilers will warn you about this, though, saying

something about overlapping patterns). If we had not included this last line, f would produce an error if

anything other than 0, 1 or 2 were applied to it (most compilers will warn you about this, too, saying

something about incomplete patterns). This style of piece-wise definition is very popular and will be used

quite frequently throughout this tutorial. These two definitions of f are actually equivalent -- this piece-wise

version is translated into the case expression.
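To convince yourself of the equivalence, you can put both versions in one file (here the case version is renamed fCase so the two can coexist) and check that they agree:

```haskell
-- Piece-wise definition.
f :: Int -> Int
f 0 = 1
f 1 = 5
f 2 = 2
f _ = -1

-- The same function written with an explicit case expression.
fCase :: Int -> Int
fCase x =
    case x of
        0 -> 1
        1 -> 5
        2 -> 2
        _ -> -1

main :: IO ()
main = print (f 0 == fCase 0 && f 3 == fCase 3)  -- prints True
```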

Function composition

More complicated functions can be built from simpler functions using function composition. Function

composition is simply taking the result of the application of one function and using that as an argument for

another. We've already seen this back in the Getting set up chapter, when we wrote 5*4+3. In this, we were

evaluating 5 * 4 and then applying + 3 to the result. We can do the same thing with our square and f

functions:

square x = x^2

Example:

Test> square (f 1)

25

Test> square (f 2)

4

Test> f (square 1)

5

Test> f (square 2)

-1

The result of each of these function applications is fairly straightforward. The parentheses around the inner

function are necessary; otherwise, in the first line, the interpreter would think that you were trying to get the

value of square f, which has no meaning. Function application like this is fairly standard in most

programming languages. There is another, more mathematically oriented, way to express function

composition, using the (.) (just a single period) function. This (.) function is supposed to look like the (∘)

operator in mathematics.

Note

In mathematics, we write f ∘ g to mean "f following g". In Haskell, we write f .

g also to mean "f following g." The meaning of the composition is that

applying the value x to the function is the same as applying it to g, taking the

result, and then applying that to f.

The (.) function (called the function composition function), takes two functions and makes them in to one.

For instance, if we write (square . f), this means that it creates a new function that takes an argument,

applies f to that argument and then applies square to the result. Conversely, (f . square) means that

it creates a new function that takes an argument, applies square to that argument and then applies f to the

result. We can see this by testing it as before:

Example:

Test> (square . f) 1

25

Test> (square . f) 2

4

Test> (f . square) 1

5

Test> (f . square) 2

-1

Here, we must enclose the function composition in parentheses; otherwise, the Haskell compiler will think

we're trying to compose square with the value f 1 in the first line, which makes no sense since f 1 isn't

even a function.
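There is nothing magical about (.); we could write an equivalent composition function of our own. Here is a sketch (compose is our own name, chosen to avoid clashing with the Prelude's (.)):

```haskell
square :: Int -> Int
square x = x ^ 2

f :: Int -> Int
f 0 = 1
f 1 = 5
f 2 = 2
f _ = -1

-- Our own composition: apply the inner function first, then the outer one.
compose :: (b -> c) -> (a -> b) -> a -> c
compose outer inner x = outer (inner x)

main :: IO ()
main = print (compose square f 1)  -- prints 25, just like (square . f) 1
```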

It would probably be wise to take a little time-out to look at some of the functions that are defined in the

Prelude. Undoubtedly, at some point, you will accidentally rewrite some already-existing function (I've done

it more times than I can count), but if we can keep this to a minimum, that would save a lot of time.

Let Bindings

Often we wish to provide local declarations for use in our functions. For instance, if you remember back to

your grade school mathematics courses, the following equation is used to find the roots (zeros) of a

polynomial of the form ax^2 + bx + c = 0:

x = (-b ± sqrt(b^2 - 4ac)) / (2a)

We could write the following function to compute the two values of x:

roots a b c =
    ((-b + sqrt(b*b - 4*a*c)) / (2*a),
     (-b - sqrt(b*b - 4*a*c)) / (2*a))

Notice that our definition here has a bit of redundancy. It is not quite as nice as the mathematical definition

because we have needlessly repeated the code for sqrt(b*b - 4*a*c). To remedy this problem, Haskell

allows for local bindings. That is, we can create values inside of a function that only that function can see.


For instance, we could create a local binding for sqrt(b*b-4*a*c) and call it, say, disc and then use

that in both places where sqrt(b*b - 4*a*c) occurred. We can do this using a let/in declaration:

roots a b c =
    let disc = sqrt (b*b - 4*a*c)
    in ((-b + disc) / (2*a),
        (-b - disc) / (2*a))

In fact, you can provide multiple declarations inside a let. Just make sure they're indented the same amount,

or you will have layout problems:

roots a b c =
    let disc = sqrt (b*b - 4*a*c)
        twice_a = 2*a
    in ((-b + disc) / twice_a,
        (-b - disc) / twice_a)
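As a quick sanity check of the definition: the polynomial x^2 - 3x + 2 factors as (x - 1)(x - 2), so its roots are 2 and 1, and the function agrees:

```haskell
roots :: Double -> Double -> Double -> (Double, Double)
roots a b c =
    let disc = sqrt (b*b - 4*a*c)
        twice_a = 2*a
    in ((-b + disc) / twice_a,
        (-b - disc) / twice_a)

-- Roots of x^2 - 3x + 2 = 0, i.e. a = 1, b = -3, c = 2.
main :: IO ()
main = print (roots 1 (-3) 2)  -- prints (2.0,1.0)
```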

Type basics

Types in programming are a way of grouping similar values. In Haskell, the type system is a powerful way

of ensuring there are fewer mistakes in your code.

Introduction

Programming deals with different sorts of entities. For example, consider adding two numbers together:

2+3

What are 2 and 3? They are numbers, clearly. But how about the plus sign in the middle? That's certainly not

a number. So what is it?

Similarly, consider a program that asks you for your name, then says "Hello". Neither your name nor the

word Hello is a number. What are they then? We might refer to all words and sentences and so forth as Text.

In fact, it's more normal in programming to use a slightly more esoteric word, that is, String.

If you've ever set up a database before, you'll likely have come across types. For example, say we had a table

in a database to store details about a person's contacts; a kind of personal telephone book. The contents might

look like this:

First Name    Last Name    Telephone number    Address
Sherlock      Holmes       743756              221B Baker Street London
Bob           Jones        655523              99 Long Road Street Villestown

Note

In Haskell, the rule is that all type names have to begin with a capital letter.

We shall adhere to this convention henceforth.


The fields contain values. Sherlock is a value as is 99 Long Road Street Villestown as well as

655523. As we've said, types are a way of grouping different sorts of data. What do we have in the above

table? Two of the columns, First name and Last name contain text, so we say that the values are of type

String. The type of the third column is a dead giveaway by its name, Telephone number. Values in that

column have the type of Number!

At first glance one may be tempted to class address as a string. However, the semantics behind an innocent

address are quite complex. There's a whole lot of human conventions that dictate how we interpret it. For example, if the first

line contains a number, then that's the number of the house, if not, then it's probably the name of the house,

except if the line begins with PO Box then it's just a postal box address and doesn't indicate where the person

lives at all... Clearly, there's more going on here than just Text. We could say that addresses are Text; there'd

be nothing wrong with that. However, claiming they're of some different type, say, Address, is more

powerful. If we know some piece of data has the type of Text, that's not very helpful. However, if we know it

has the type of Address, we instantly know much more about the piece of data.

We might also want to apply this line of reasoning to our telephone number column. Indeed, it would be a

good idea to come up with a TelephoneNumber type. Then if we were to come across some arbitrary

sequence of digits, knowing that sequence of digits was of type TelephoneNumber, we would have access to

a lot more information than if it were just a Number.

So far, what we've done just seems like categorizing things -- hardly a feature which would cause every

modern programming language designer to incorporate into their language! In the next section we explore

how Haskell uses types to the programmer's benefit.

Characters and strings

The best way to explore how types work in Haskell is to fire up GHCi. Let's do it! Once we're up and

running, let us get to know the :type command.

Prelude> :type 'H'
'H' :: Char

(:type can also be shortened to :t, which we shall use from now on.)

And there we have it. You give GHCi an expression and it returns its type. In this case we gave it the literal

value 'H' - the letter H enclosed in single quotation marks (a.k.a. apostrophe, ANSI 39) and GHCi printed it

followed by the "::" symbol which reads "is of type" followed by Char. The whole thing reads: 'H' is of type

Char.


Prelude> :t "Hello World"
"Hello World" :: [Char]

In this case we gave it some text enclosed in double quotation marks and GHCi printed "Hello

World" :: [Char]. [Char] means a list of characters. Notice the difference between Char and [Char] -

the square brackets are used to construct literal lists, and they are also used to describe the list type.

Exercises

1. Try using the :type command on the literal value "H" (notice the double

quotes). What happens? Why?

2. Try using the :type command on the literal value 'Hello World'

(notice the single quotes). What happens? Why?

This is essentially what strings are in Haskell - lists of characters. A string in Haskell can be initialized in

several ways: It may be entered as a sequence of characters enclosed in double quotation marks (ANSI 34); it

may be constructed, similar to any other list, as individual elements of type Char joined together with the ":"

function and terminated by an empty list; or it may be built with individual Char values enclosed in brackets and

separated by commas.
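For example, these three expressions all denote exactly the same String (the names s1, s2, s3 are just for illustration):

```haskell
-- Three ways to write the same String.
s1, s2, s3 :: String
s1 = "Hey"                     -- double-quoted literal
s2 = 'H' : 'e' : 'y' : []      -- individual Chars consed onto the empty list
s3 = ['H', 'e', 'y']           -- list syntax with brackets and commas

main :: IO ()
main = print (s1 == s2 && s2 == s3)  -- prints True
```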

So, for the final time, what precisely is this concept of text that we're throwing around? One way of

interpreting it is to say it's basically a sequence of characters. Think about it: the word "Hey" is just the

character 'H' followed by the character 'e' followed by the character 'y'. Haskell uses a list to hold this

sequence of characters. Square brackets indicate a list of things, for example here [Char] means 'a list of

Chars'.

Haskell has a concept of type synonyms. Just as in the English language, two words that mean the same

thing, for example 'fast' and 'quick', are called synonyms, in Haskell two types which are exactly the same

are called 'type synonyms'. Everywhere you can use [Char], you can use String. So to say:

"Hello World" :: String

is also perfectly valid. From here on we'll mostly refer to text as String, rather than [Char].
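Since the two names denote exactly the same type, signatures written with String and with [Char] are interchangeable. A quick sketch (greet is a made-up function; (++) is the Prelude's list-joining operator):

```haskell
-- The same function, typed both ways; GHC accepts either spelling.
greet :: String -> String
greet name = "Hello, " ++ name

greet' :: [Char] -> [Char]
greet' name = "Hello, " ++ name

main :: IO ()
main = putStrLn (greet "world")  -- prints: Hello, world
```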

Boolean values

One of the other useful types in most languages is called a Boolean or Bool for short. This has two values:

true or false. This turns out to be very useful. For example consider a program that would ask the user for a

name then look that name up in a spreadsheet. It might be useful to have a function, nameExists, which

indicates whether or not the name of the user exists in the spreadsheet. If it does exist, you could say that it is

true that the name exists, and if not, you could say that it is false that the name exists. So we've come across

Bools. The two values of bools are, as we've mentioned, true and false. In Haskell boolean values are

capitalized (for reasons that will later become clear):


Prelude> :t True

True :: Bool

Prelude> :t False

False :: Bool

This shouldn't need too much explaining at this point. The values True and False are categorized as Booleans,

that is to say, they have type Bool.

Numeric types

If you've been playing around with typing :t on all the familiar values you've come across, perhaps you've run

into the following complication:

Prelude> :t 5

5 :: Num a => a

We'll defer the explanation of this until later. The short version of the story is that there are many different

types of numbers (fractions, whole numbers, etc) and 5 can be any one of them. This weird-looking type

relates to a Haskell feature called type classes, which we will be playing with later in this book.

Functional types

So far, we've covered what we call values, and explained how types help to categorize them, but also

describe them. The next thing we'll look at is what makes the type system truly powerful: we can assign

types not only to values, but to functions as well[3]. Let's look at some examples.

Example: not

not False = True

not is a standard Prelude function that simply negates Bools, in the sense that truth turns into falsity and

vice versa. For example, given the above example we gave using Bools, nameExists, we could define a

similar function that would test whether a name doesn't exist in the spreadsheet. It would likely look

something like this:

nameDoesntExist name = not (nameExists name)

To assign a type to not we look at two things: the type of values it takes as its input, and the type of values

it returns. In our example, things are easy. not takes a Bool (the Bool to be negated), and returns a Bool (the

negated Bool). Therefore, we write that:

not :: Bool -> Bool

You can read this as 'not is a function from things of type Bool to things of type Bool'.
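Putting the pieces together, here is how a definition of our own version of not would look in a source file, signature included (named myNot so it doesn't clash with the Prelude's):

```haskell
-- Our own Bool negation, with its type written above the definition.
myNot :: Bool -> Bool
myNot True  = False
myNot False = True

main :: IO ()
main = print (myNot False)  -- prints True
```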

A common programming task is to take a list of Strings, then join them all up into a single string, but insert a

newline character between each one, so they all end up on different lines. For example, say you had the list

["Bacon", "Sausages", "Egg"], and wanted to convert it to something resembling a shopping list,

the natural thing to do would be to join the list together into a single string, placing each item from the list

onto a new line. This is precisely what unlines does. unwords is similar, but it uses a space instead of a

newline as a separator. (mnemonic: un = unite)

Prelude> unlines ["Bacon", "Sausages", "Egg"]
"Bacon\nSausages\nEgg\n"

Prelude> unwords ["Bacon", "Sausages", "Egg"]

"Bacon Sausages Egg"

Notice the weird output from unlines. This isn't particularly related to types, but it's worth noting anyway,

so we're going to digress a little and explore why this is. Basically, any output from GHCi is first run through

the show function, which converts it into a String. This makes sense, because GHCi shows you the result of

your commands as text, so it has to be a String. However, what does show do if you give it something which

is already a String? Although the obvious answer would be 'do nothing', the behaviour is actually slightly

different: any 'special characters', like tabs, newlines and so on in the String are converted to their 'escaped

forms', which means that rather than a newline actually making the stuff following it appear on the next line,

it is shown as "\n". To avoid this, we can use the putStrLn function, which prints its String argument

directly rather than running the output through show.

Prelude> putStrLn (unlines ["Bacon", "Sausages", "Egg"])
Bacon
Sausages
Egg

Prelude> putStrLn (unwords ["Bacon", "Sausages", "Egg"])
Bacon Sausages Egg

The second result may look identical, but notice the lack of quotes. putStrLn outputs exactly what you

give it. Also, note that you can only pass it a String. Calls like putStrLn 5 will fail. You'd need to convert

the number to a String first, that is, use show: putStrLn (show 5) (or use the equivalent function

print: print 5).

Getting back to the types. What would the types of unlines and unwords be? Well, again, let's look at

both what they take as an argument, and what they return. As we've just seen, we've been feeding these

functions a list, and each of the items in the list has been a String. Therefore, the type of the argument is

[String]. They join all these Strings together into one long String, so the return type has to be String.

Therefore, both of the functions have type [String] -> String. Note that we didn't mention the fact

that the two functions use different separators. This is totally inconsequential when it comes to types — all

that matters is that they return a String. The type of a String with some newlines is precisely the same as the

type of a String with some spaces.
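To make the point concrete, here is a sketch of a simplified unlines-like function (myUnlines is our own name, and it uses recursion, which we'll cover properly later). Swapping the "\n" for a " " would give an unwords-like function with exactly the same type, [String] -> String:

```haskell
-- Join a list of Strings, appending a newline after each one,
-- just as the real unlines does.
myUnlines :: [String] -> String
myUnlines []       = ""
myUnlines (s : ss) = s ++ "\n" ++ myUnlines ss

main :: IO ()
main = print (myUnlines ["Bacon", "Sausages", "Egg"])  -- "Bacon\nSausages\nEgg\n"
```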

Text presents a problem to computers. Once everything is reduced down to its lowest level, all a computer

knows how to deal with is 1's and 0's: computers speak in binary. As talking in binary isn't very convenient,

humans have come up with ways of making computers store text. Every character is first converted to a

number, then that number is converted to binary and stored. Hence, a piece of text, which is just a sequence

of characters, can be encoded into binary. Normally, we're only interested in how to encode characters into

their numerical representations, because the number to binary bit is very easy.

The easiest way of converting characters to numbers is simply to write all the possible characters down, then

number them. For example, we might decide that 'a' corresponds to 1, then 'b' to 2, and so on. This is exactly

what a thing called the ASCII standard is: 128 of the most commonly-used characters, numbered. Of course,

it would be a bore to sit down and look up a character in a big lookup table every time we wanted to encode

[4]

it, so we've got two functions that can do it for us, chr (pronounced 'char') and ord :

ord :: Char -> Int

Remember earlier when we stated Haskell has many numeric types? The simplest is Int, which represents

whole numbers, or integers[5], to give them their proper name. So what do the above type signatures say?

Recall how the process worked for not above. We look at the type of the function's argument, then at the

type of the function's result. In the case of chr (find the character corresponding to a specific numeric

encoding), the type signature tells us that it takes arguments of type Int and has a result of type Char. The

converse is the case with ord (find the specific numeric encoding for a given character): it takes things of

type Char and returns things of type Int.


To make things more concrete, here are a few examples of function calls to chr and ord, so you can see

how the types work out. Notice that the two functions aren't in the standard prelude, but instead in the

Data.Char module, so you have to load that module with the :m (or :module) command.

Prelude> :m Data.Char

Prelude Data.Char> chr 97

'a'

Prelude Data.Char> chr 98

'b'

Prelude Data.Char> ord 'c'

99
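To see ord and chr composed in a real program rather than at the prompt, here is a tiny sketch. The shift helper is our own invention for illustration, not a function from the text, and it does no wrap-around or range checking:

```haskell
import Data.Char (chr, ord)

-- Shift a character forward by n places in its numeric encoding.
-- A toy helper: chr (ord c + n) can fall outside the valid range for large n.
shift :: Int -> Char -> Char
shift n c = chr (ord c + n)

main :: IO ()
main = do
  print (shift 1 'a')  -- 'b'
  print (shift 2 'x')  -- 'z'
```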

So far, all we've seen is functions that take a single argument. This isn't very interesting! For example, the

following is a perfectly valid Haskell function, but what would its type be?

f x y = x + 5 + 2 * y

As we've said a few times, there's more than one type for numbers, but we're going to cheat here and pretend

that x and y have to be Ints.

The general technique for forming the type of a function of more than one argument, then, is to just write down all the types of the arguments in a row, in order (so in this case x first then y), then write -> in between all of them. Finally, add the type of the result to the end of the row and stick a final -> in just before it. (There are very deep reasons for this, which we'll cover in the chapter on Currying.) So in this case, we have:

1. Write down the types of the arguments. We've already said that x and y have to be Ints, so it becomes:

Int Int
^^ x is an Int ^^ y is an Int as well

2. Fill in the gaps with ->:

Int -> Int

3. Add in the result type and a final ->. In our case, we're just doing some basic arithmetic so the result remains an Int.

Int -> Int -> Int
^^ We're returning an Int
^^ There's the extra -> that got added in

As you'll learn in the Practical Haskell section of the course, one popular group of Haskell libraries are the GUI ones. (A library is a collection of common code used by many programs.) These provide functions for dealing with all the parts of Windows or Linux you're familiar with: opening and closing application windows, moving the mouse around, etc. One of the functions from one of these libraries is called openWindow, and you can use it to open a new window in your application. For example, say you're writing a word processor like Microsoft Word, and the user has clicked on the 'Options' button. You need to open a new window which contains all the options that they can change. Let's look at the type signature for this function:[6]

Example: openWindow

openWindow :: WindowTitle -> WindowSize -> Window

Don't panic! Here are a few more types you haven't come across yet. But don't worry, they're quite simple.

All three of the types there, WindowTitle, WindowSize and Window are defined by the GUI library that

provides openWindow. As we saw when constructing the types above, because there are two arrows, the

first two types are the types of the parameters, and the last is the type of the result. WindowTitle holds the

title of the window (what appears in the blue bar - you didn't change the color, did you? - at the top), and WindowSize says how big the window should be. The function then returns a value of type Window which you

can use to get information on and manipulate the window.


Exercises

Finding types for functions is a basic Haskell skill that you should become very

familiar with. What are the types of the following functions?

1. The negate function, which takes an Int and returns that Int with its sign

swapped. For example, negate 4 = -4, and negate (-2) = 2

2. The && function, pronounced 'and', that takes two Bools and returns a third

Bool which is True if both the arguments were, and False otherwise.

3. The || function, pronounced 'or', that takes two Bools and returns a third

Bool which is True if either of the arguments were, and False otherwise.

For any functions hereafter involving numbers, you can just assume the numbers

are Ints.

1. f x y = not x && y

2. g x = (2*x - 1)^2

3. h x y z = chr (x - 2)

Polymorphic types

So far all we've looked at are functions and values with a single type. However, if you start playing around

with :t in GHCi you'll quickly run into things that don't have types beginning with the familiar capital letter.

For example, there's a function that finds the length of a list, called (rather predictably) length. Remember

that [Foo] is a list of things of type Foo. However, we'd like length to work on lists of any type. I.e. we'd

rather not have a lengthInts :: [Int] -> Int, as well as a lengthBools :: [Bool] ->

Int, as well as a lengthStrings :: [String] -> Int, as well as a...

That's too complicated. We want one single function that will find the length of any type of list. The way

Haskell does this is using type variables. For example, the actual type of length is as follows:
length :: [a] -> Int

Type variables begin with a lowercase letter. Indeed, this is why types have to begin with an uppercase letter — so they can be distinguished from type variables. When Haskell sees a type variable, it allows any type to take its place. This is exactly what we want. In type theory (a branch of mathematics), this is called polymorphism: functions or values with only a single type (like all the ones we've looked at so far except length) are called monomorphic, and things that use type variables to admit more than one type are therefore polymorphic. (We'll look at the theory behind polymorphism in much more detail later in the course.)
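One polymorphic length serving several differently-typed lists, as a quick runnable sketch:

```haskell
-- length :: [a] -> Int: the type variable a is instantiated differently
-- at each call site, but it is the same single function every time.
main :: IO ()
main = do
  print (length [1, 2, 3 :: Int])                 -- a list of Ints
  print (length [True, False])                    -- a list of Bools
  print (length ["one", "two", "three", "four"])  -- a list of Strings
```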


As we saw, you can use the fst and snd functions to extract parts of pairs. By this time you should be in

the habit of thinking "What type is that function?" about every function you come across. Let's examine fst

and snd. First, a few sample calls to the functions:

Prelude> fst (1, 2)
1

Prelude> fst ("Hello", False)

"Hello"

Prelude> snd (("Hello", False), 4)

4

To begin with, let's point out the obvious: these two functions take a pair as their parameter and return one

part of this pair. The important thing about pairs, and indeed tuples in general, is that they don't have to be

homogeneous with respect to types; their different parts can be different types. Indeed, that is the case in the

second and third examples above. If we were to say:

fst :: (a, a) -> a

That would force the first and second parts of the input pair to be the same type. That illustrates an important aspect of type variables: although they can be replaced with any type, they have to be replaced with the same type everywhere. So what's the correct type? Simply:

fst :: (a, b) -> a
snd :: (a, b) -> b

Note that if you were just given the type signatures, you might guess that they return the first and second

parts of a pair, respectively. In fact this is not necessarily true: they just have to return something with the same type as the first and second parts of the pair.
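A few instantiations of the polymorphic pair functions, sketched as a runnable program: a and b take on different concrete types at each call.

```haskell
main :: IO ()
main = do
  print (fst (1 :: Int, "one"))  -- here a is Int, b is String
  print (snd (1 :: Int, "one"))  -- the result type is b, i.e. String
  print (fst (True, 'x'))        -- here a is Bool, b is Char
```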

Now we've explored the basic theory behind types and types in Haskell, let's look at how they appear in

code. Most Haskell programmers will annotate every function they write with its associated type. That is,

you might be writing a module that looks something like this:

import Data.Char

uppercase = map toUpper
lowercase = map toLower
capitalise x =
  let capWord [] = []
      capWord (x:xs) = toUpper x : xs
  in unwords (map capWord (words x))

This is a small library that provides some frequently used string manipulation functions. uppercase converts a string to uppercase, lowercase to lowercase, and capitalise capitalises the first letter of every word. Providing a type for these functions makes it more obvious what they do. For example, most Haskellers would write the above module something like the following:

Haskellers would write the above module something like the following:

import Data.Char

uppercase, lowercase :: String -> String
uppercase = map toUpper
lowercase = map toLower

capitalise :: String -> String
capitalise x =
  let capWord [] = []
      capWord (x:xs) = toUpper x : xs
  in unwords (map capWord (words x))

Note that you can group type signatures together into a single type signature (like ours for uppercase and

lowercase above) if the two functions share the same type.
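As a quick sanity check, here is the little library inlined into a runnable program (the same definitions as in the text) together with a couple of calls:

```haskell
import Data.Char (toLower, toUpper)

uppercase, lowercase :: String -> String
uppercase = map toUpper
lowercase = map toLower

capitalise :: String -> String
capitalise x =
  let capWord []     = []
      capWord (c:cs) = toUpper c : cs
  in unwords (map capWord (words x))

main :: IO ()
main = do
  putStrLn (uppercase "hello world")   -- HELLO WORLD
  putStrLn (capitalise "hello world")  -- Hello World
```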

Type inference

So far, we've explored types by using the :t command in GHCi. However, before you came across this

chapter, you were still managing to write perfectly good Haskell code, and it has been accepted by the

compiler. In other words, it's not necessary to add type signatures. However, if you don't add type signatures,

that doesn't mean Haskell simply forgets about typing altogether! Indeed, when you didn't tell Haskell the

types of your functions and variables, it worked them out. This is a process called type inference, whereby the

compiler starts with the types of things it knows, then works out the types of the rest of the things. Type

inference for Haskell is decidable, which means that the compiler can always work out the types, even if you never write them in.[7] Let's look at some examples to see how the compiler works out types.


isL c = c == 'l'

This function takes a character and sees if it is an 'l' character. The compiler derives the type for isL something like the following:

(==) :: a -> a -> Bool
'l' :: Char

Replacing the second a in the signature for (==) with the type of 'l':

(==) :: Char -> Char -> Bool

isL :: Char -> Bool

The first line indicates that the type of the function (==), which tests for equality, is a -> a -> Bool.[8] (We include the function name in parentheses because it's an operator: its name consists of all non-alphanumeric characters. More on this later.) The compiler also knows that something in 'single quotes' has

type Char, so clearly the literal 'l' has type Char. Next, the compiler starts replacing the type variables in the

signature for (==) with the types it knows. Note that in one step, we went from a -> a -> Bool to

Char -> Char -> Bool, because the type variable a was used in both the first and second argument, so

they need to be the same. And so we arrive at a function that takes a single argument (whose type we don't

know yet, but hold on!) and applies it as the first argument to (==). We have a particular instance of the

polymorphic type of (==), that is, here, we're talking about (==) :: Char -> Char -> Bool

because we know that we're comparing Chars. Therefore, as (==) :: Char -> Char -> Bool and

we're feeding the parameter into the first argument to (==), we know that the parameter has the type of

Char. Phew!

But wait, we're not even finished yet! What's the return type of the function? Thankfully, this bit is a bit

easier. We've fed two Chars into a function which (in this case) has type Char -> Char -> Bool, so

we must have a Bool. Note that the return value from the call to (==) becomes the return value of our isL

function.

So, let's put it all together. isL is a function which takes a single argument. We discovered that this

argument must be of type Char. Finally, we derived that we return a Bool. So, we can confidently say that

isL has the type:

isL :: Char -> Bool
isL c = c == 'l'


And, indeed, if you miss out the type signature, the Haskell compiler will discover this on its own, using

exactly the same method we've just run through.
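To see the inferred type in action, here is isL again with the signature GHC would infer written out, applied across a whole string:

```haskell
-- If you omit the signature, the compiler infers Char -> Bool on its own,
-- by exactly the derivation walked through above.
isL :: Char -> Bool
isL c = c == 'l'

main :: IO ()
main = print (map isL "hello")  -- [False,False,True,True,False]
```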

So if type signatures are optional, why bother with them at all? Here are a few reasons:

Documentation: the most prominent reason is that it makes your code easier to read. With most

functions, the name of the function along with the type of the function are sufficient to guess at what

the function does. (Of course, you should always comment your code anyway.)

Debugging: if you annotate a function with a type, then make a typo in the body of the function, the

compiler will tell you at compile-time that your function is wrong. Missing off the type signature could

have the effect of allowing your function to compile, and the compiler would assign it an erroneous

type. You wouldn't know until you ran your program that it was wrong. In fact, this is so important,

let's explore it some more.

fiveOrSix :: Bool -> Int
fiveOrSix True = 5
fiveOrSix False = 6

pairToInt :: (Bool, String) -> Int
pairToInt x = fiveOrSix (fst x)

Our function fiveOrSix takes a Bool. When pairToInt receives its arguments, it knows, because of the

type signature we've annotated it with, that the first element of the pair is a Bool. So, we could extract this

using fst and pass that into fiveOrSix, and this would work, because the type of the first element of the

pair and the type of the argument to fiveOrSix are the same.
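The two functions can be exercised like this; we assume the pair's second component is a String, as in the annotated signature above (any type would do, since it is never inspected):

```haskell
fiveOrSix :: Bool -> Int
fiveOrSix True  = 5
fiveOrSix False = 6

-- fst extracts the Bool, which is exactly what fiveOrSix expects.
pairToInt :: (Bool, String) -> Int
pairToInt x = fiveOrSix (fst x)

main :: IO ()
main = do
  print (pairToInt (True, "ignored"))   -- 5
  print (pairToInt (False, "ignored"))  -- 6
```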

This is really central to typed languages. When passing expressions around you have to make sure the types

match up like they did here. If they don't, you'll get type errors when you try to compile; your program won't

typecheck. This is really how types help you to keep your programs bug-free. To take a very trivial example:

"hello" + " world"

Having that line as part of your program will make it fail to compile, because you can't add two strings together! More likely, you wanted to use the string concatenation operator, which joins two strings together into a single one:

"hello" ++ " world"

An easy typo to make, but because you use Haskell, it was caught when you tried to compile. You didn't

have to wait until you ran the program for the bug to become apparent.

This was only a simple example. However, the idea of types being a system to catch mistakes works on a

much larger scale too. In general, when you make a change to your program, you'll change the type of one of

the elements. If this change isn't something that you intended, then it will show up immediately. A lot of

Haskell programmers remark that once they have fixed all the type errors in their programs and their programs compile, they tend to 'just work': they function flawlessly the first time, with only minor problems.

Run-time errors, where your program goes wrong when you run it rather than when you compile it, are much

rarer in Haskell than in other languages. This is a huge advantage of a strong type system like Haskell's.

Exercises

To come.

Notes

1. At least as far as types are concerned, but we're trying to avoid that word :)
2. More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
3. In fact, these are one and the same concept in Haskell.
4. This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
5. To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all, types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
Simple input and output

So far this tutorial has discussed functions that return values, which is well and good. But how do we write

"Hello world"? To give you a rough taste of it, here is a small variant of the "Hello world" program:


import System.IO

main = do
  putStrLn "Please enter your name: "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ ", how are you?")

At the very least, what should be clear is that dealing with input and output (IO) in Haskell is not a lost cause! Functional languages have always had a problem with input and output because input and output require side effects. Functions always have to return the same results for the same arguments. But how can a function

"getLine" return the same value every time it is called? Before we give the solution, let's take a step back and

think about the difficulties inherent in such a task.

Any IO library should provide a host of functions, containing (at a minimum) operations like:

print a string to the screen
read a string from a keyboard
write data to a file
read data from a file

There are two issues here. Let's first consider the initial two examples and think about what their types

should be. Certainly the first operation (I hesitate to call it a "function") should take a String argument and

produce something, but what should it produce? It could produce a unit (), since there is essentially no

return value from printing a string. The second operation, similarly, should return a String, but it doesn't

seem to require an argument.

We want both of these operations to be functions, but they are by definition not functions. The item that

reads a string from the keyboard cannot be a function, as it will not return the same String every time. And

if the first function simply returns () every time, then referential transparency tells us we should have no

problem with replacing it with a function f _ = (). But clearly this does not have the desired effect.

Actions

The breakthrough for solving this problem came when Phil Wadler realized that monads would be a good

way to think about IO computations. In fact, monads are able to express much more than just the simple

operations described above; we can use them to express a variety of constructions like concurrence,

exceptions, IO, non-determinism and much more. Moreover, there is nothing special about them; they can be

defined within Haskell with no special handling from the compiler (though compilers often choose to

optimize monadic operations). Monads also have a somewhat undeserved reputation of being difficult to

understand. So we're going to leave things at that -- knowing simply that IO somehow makes use of monads

without neccesarily understanding the gory details behind them (they really aren't so gory). So for now, we

can forget that monads even exist.

As pointed out before, we cannot think of things like "print a string to the screen" or "read data from a file"

as functions, since they are not (in the pure mathematical sense). Therefore, we give them another name:


actions. Not only do we give them a special name, we give them a special type. One particularly useful

action is putStrLn, which prints a string to the screen. This action has type:

putStrLn :: String -> IO ()

As expected, putStrLn takes a string argument. What it returns is of type IO (). This means that this

function is actually an action (that is what the IO means). Furthermore, when this action is evaluated (or "run"), the result will have type ().

Note

Actually, this type means that putStrLn is an action "within the IO monad", but

we will gloss over this for now.

getLine :: IO String

This means that getLine is an IO action that, when run, will have type String.

The question immediately arises: "how do you 'run' an action?". This is something that is left up to the

compiler. You cannot actually run an action yourself; instead, a program is, itself, a single action that is run

when the compiled program is executed. Thus, the compiler requires that the main function have type IO

(), which means that it is an IO action that returns nothing. The compiled code then executes this action.

However, while you are not allowed to run actions yourself, you are allowed to combine actions. There are

two ways to go about this. The one we will focus on in this chapter is the do notation, which provides a

convenient means of putting actions together, and allows us to get useful things done in Haskell without

having to understand what really happens. Lurking behind the do notation is the more explicit approach using

the (>>=) operator, but we will not be ready to cover this until the chapter Understanding monads.

Note

Do notation is just syntactic sugar for (>>=). If you have experience with higher

order functions, it might be worth starting with the latter approach and coming back

here to see how do notation gets used.

main = do
  putStrLn "Please enter your name: "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ ", how are you?")

We can consider the do notation as a way to combine a sequence of actions. Moreover, the <- notation is a

way to get the value out of an action. So, in this program, we're sequencing three actions: a putStrLn, a

getLine and another putStrLn. The putStrLn action has type String -> IO (), so we provide it

a String, so the fully applied action has type IO (). This is something that we are allowed to run as a

program.

Exercises

Write a program which asks the user for the base and height of a triangle,

calculates its area and prints it to the screen. The interaction should look

something like:

The base?

3.3

The height?

5.4

The area of that triangle is 8.91

Hint: you can use the function read to convert user strings like "3.3" into numbers like 3.3, and the function show to convert a number into a string.
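The hinted functions can be tried in isolation before tackling the exercise; this sketch shows read parsing a string into a number and show turning one back into a string (it does not solve the exercise itself):

```haskell
main :: IO ()
main = do
  let x = read "3.3" :: Double          -- read parses a string into a number
  putStrLn ("Twice that is " ++ show (2 * x))  -- show turns a number into a string
```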

While we are allowed to get a value out of certain actions like getLine, we certainly are not obliged to do

so. For example, we could very well have written something like this:

main = do
  putStrLn "Please enter your name: "
  getLine
  putStrLn ("Hello, how are you?")

Clearly, that isn't very useful: the whole point of prompting the user for his or her name was so that we could

do something with the result. That being said, it is conceivable that one might wish to read a line and

completely ignore the result. Omitting the <- will allow for that; the action will happen, but the data won't be

stored anywhere.

In order to get the value out of the action, we write name <- getLine, which basically means "run

getLine, and put the results in the variable called name."

The <- can be used with any action (except the last)


On the flip side, there are also very few restrictions on which actions can have values extracted from them.

Consider the following example, where we put the results of each action into a variable (except the last...

more on that later):

main = do
  x <- putStrLn "Please enter your name: "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ ", how are you?")

The variable x gets the value out of its action, but that isn't very interesting because the action returns the unit

value (). So while we could technically get the value out of any action, it isn't always worth it. But wait,

what about that last action? Why can't we get a value out of that? Let's see what happens when we try:

main = do
  x <- putStrLn "Please enter your name: "
  name <- getLine
  y <- putStrLn ("Hello, " ++ name ++ ", how are you?")

Whoops!

YourName.hs:5:2:

The last statement in a 'do' construct must be an expression

This is a much more interesting example, but it requires a somewhat deeper understanding of Haskell than we

currently have. Suffice it to say, whenever you use <- to get the value of an action, Haskell is always

expecting another action to follow it. So the very last action better not have any <-s.

Controlling actions

Normal Haskell constructions like if/then/else and case/of can be used within the do notation, but

you need to be somewhat careful. For instance, in a simple "guess the number" program, we have:

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  if (read guess) < num
    then do putStrLn "Too low!"
            doGuessing num
    else if (read guess) > num
           then do putStrLn "Too high!"
                   doGuessing num
           else do putStrLn "You Win!"


If we think about how the if/then/else construction works, it essentially takes three arguments: the

condition, the "then" branch, and the "else" branch. The condition needs to have type Bool, and the two

branches can have any type, provided that they have the same type. The type of the entire if/then/else

construction is then the type of the two branches.

In the outermost comparison, we have (read guess) < num as the condition. This clearly has the

correct type. Let's just consider the "then" branch. The code here is:

doGuessing num

Here, we are sequencing two actions: putStrLn and doGuessing. The first has type IO (), which is

fine. The second also has type IO (), which is fine. The type result of the entire computation is precisely

the type of the final computation. Thus, the type of the "then" branch is also IO (). A similar argument

shows that the type of the "else" branch is also IO (). This means the type of the entire if/then/else

construction is IO (), which is just what we want.

Note

In this code, the last line is else do putStrLn "You Win!". This is

somewhat overly verbose. In fact, else putStrLn "You Win!" would have

been sufficient, since do is only necessary to sequence actions. Since we have only

one action here, it is superfluous.

It is incorrect to think to yourself "Well, I already started a do block; I don't need another one," and hence

write something like:

    then putStrLn "Too low!"
         doGuessing num
    else ...

Here, since we didn't repeat the do, the compiler doesn't know that the putStrLn and doGuessing calls

are supposed to be sequenced, and the compiler will think you're trying to call putStrLn with three

arguments: the string, the function doGuessing and the integer num. It will certainly complain (though the

error may be somewhat difficult to comprehend at this point).

We can write the same doGuessing function using a case statement. To do this, we first introduce the

Prelude function compare, which takes two values of the same type (in the Ord class) and returns one of

GT, LT, EQ, depending on whether the first is greater than, less than or equal to the second.
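A quick check of compare at each of its three possible outcomes:

```haskell
-- compare :: Ord a => a -> a -> Ordering, where Ordering is LT, EQ or GT.
main :: IO ()
main = do
  print (compare 3 5)  -- LT
  print (compare 5 5)  -- EQ
  print (compare 7 5)  -- GT
```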

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  case compare (read guess) num of
    LT -> do putStrLn "Too low!"
             doGuessing num
    GT -> do putStrLn "Too high!"
             doGuessing num
    EQ -> putStrLn "You Win!"

Here, again, the dos after the ->s are necessary on the first two options, because we are sequencing actions.

If you're used to programming in an imperative language like C or Java, you might think that return will

exit you from the current function. This is not so in Haskell. In Haskell, return simply takes a normal

value (for instance, one of type Int) and makes it into an action that returns the given value (for the same

example, the action would be of type IO Int). In an imperative language, by contrast, you might write this function as:

void doGuessing(int num) {
  print "Enter your guess:";
  int guess = atoi(readLine());
  if (guess == num) {
    print "You win!";
    return ();
  }
  if (guess < num) {
    print "Too low!";
    doGuessing(num);
  } else {
    print "Too high!";
    doGuessing(num);
  }
}

Here, because we have the return () in the first if match, we expect the code to exit there (and in most

imperative languages, it does). However, the equivalent code in Haskell, which might look something like:

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  case compare (read guess) num of
    EQ -> do putStrLn "You win!"
             return ()
  if (read guess < num)
    then do print "Too low!"
            doGuessing num
    else do print "Too high!"
            doGuessing num

First of all, if you guess correctly, it will first print "You win!," but it won't exit, and it will check whether

guess is less than num. Of course it is not, so the else branch is taken, and it will print "Too high!" and then

ask you to guess again.

On the other hand, if you guess incorrectly, it will try to evaluate the case statement and get either LT or GT

as the result of the compare. In either case, it won't have a pattern that matches, and the program will fail

immediately with an exception.
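The point that return merely wraps a value into an action, rather than exiting, can be seen in a minimal self-contained sketch (fortyTwo is a made-up name for illustration):

```haskell
-- return does not jump anywhere: it builds an action whose result is 42.
fortyTwo :: IO Int
fortyTwo = return 42

main :: IO ()
main = do
  n <- fortyTwo  -- running the action yields the plain Int 42
  print (n + 1)  -- execution continues normally after the return
```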


Exercises

What does the following program print out, and why?

main =
  do x <- getX
     putStrLn x

getX =
  do return "hello"
     return "aren't"
     return "these"
     return "returns"
     return "rather"
     return "pointless?"

Exercises

Write a program that asks the user for his or her name. If the name is one of

Simon, John or Phil, tell the user that you think Haskell is a great programming

language. If the name is Koen, tell them that you think debugging Haskell is fun

(Koen Claessen is one of the people who works on Haskell debugging); otherwise,

tell the user that you don't know who he or she is.

Actions may look easy up to now, but they are actually a common stumbling block for new Haskellers. If

you have run into trouble working with actions, you might consider looking to see if one of your problems or

questions matches the cases below. It might be worth skimming this section now, and coming back to it when

you actually experience trouble.

One temptation might be to simplify our program for getting a name and printing it back out. Here is one

unsuccessful attempt:

main =
  do putStrLn "What is your name? "
     putStrLn ("Hello " ++ getLine)

Ouch!


YourName.hs:3:26:

Couldn't match expected type `[Char]'

against inferred type `IO String'

Let us boil the example above down to its simplest form. Would you expect this program to compile?

main =
  do putStrLn getLine

For the most part, this is the same (attempted) program, except that we've stripped off the superfluous "What is

your name" prompt as well as the polite "Hello". One trick to understanding this is to reason about it in terms

of types. Let us compare:

putStrLn :: String -> IO ()
getLine :: IO String

We can use the same mental machinery we learned in Type basics to figure how everything went wrong.

Simply put, putStrLn is expecting a String as input. We do not have a String, but something

tantalisingly close, an IO String. This represents an action that will give us a String when it's run. To

obtain the String that putStrLn wants, we need to run the action, and we do that with the ever-handy

left arrow, <-.

main =
  do name <- getLine
     putStrLn name

main =
  do putStrLn "What is your name? "
     name <- getLine
     putStrLn ("Hello " ++ name)

Now the name is the String we are looking for and everything is rolling again.


Fine, so we've made a big deal out of the idea that you can't use actions in situations that don't call for them.

The converse of this is that you can't use non-actions in situations that DO expect them. Say we want to greet

the user, but this time we're so excited to meet them, we just have to SHOUT their name out:

main =
  do name <- getLine
     loudName <- makeLoud name
     putStrLn ("Hello " ++ loudName ++ "!")
     putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

makeLoud :: String -> String
makeLoud s = map toUpper s

This goes wrong:

Expected type: IO t

Inferred type: String

In a 'do' expression: loudName <- makeLoud name

This is quite similar to the problem we ran into above: we've got a mismatch between something that is

expecting an IO type, and something which is not. This time, the cause is our use of the left arrow <-; we're

trying to left arrow a value of makeLoud name, which really isn't left arrow material. It's basically the

same mismatch we saw in the previous section, except now we're trying to use regular old String (the loud

name) as an IO String, which clearly are not the same thing. The latter is an action, something to be run,

whereas the former is just an expression minding its own business. So how do we extricate ourselves from

this mess? We have a number of options:

We could find a way to turn makeLoud into an action, to make it return IO String. But this is not

desirable, because the whole point of functional programming is to cleanly separate our side-effecting

stuff (actions) from the pure and simple stuff. For example, what if we wanted to use makeLoud from

some other, non-IO, function? An IO makeLoud is certainly possible (how?), but missing the point

entirely.

We could use return to promote the loud name into an action, writing something like loudName

<- return (makeLoud name). This is slightly better, in that we are at least leaving the

makeLoud itself function nice and IO-free, whilst using it in an IO-compatible fashion. But it's still

moderately clunky, because by virtue of the left arrow, we're implying that there's action to be had -- how exciting! -- only to let our reader down with a somewhat anticlimactic return.

Or we could use a let binding...

It turns out that Haskell has a special extra-convenient syntax for let bindings in actions. It looks a little like

this:

main =
    do name <- getLine
       let loudName = makeLoud name
       putStrLn ("Hello " ++ loudName ++ "!")
       putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

If you're paying attention, you might notice that the let binding above is missing an in. This is because let

bindings in do blocks do not require the in keyword. You could very well use it, but then you'd have to

make a mess of your do blocks. For what it's worth, the following two blocks of code are equivalent.

sweet:

    let loudName = makeLoud name
    putStrLn ("Hello " ++ loudName ++ "!")
    putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

unsweet:

    let loudName = makeLoud name
     in do putStrLn ("Hello " ++ loudName ++ "!")
           putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

Exercises

1. Why does the unsweet version of the let binding require an extra do

keyword?

2. Do you always need the extra do?

3. (extra credit) Curiously, let without in is exactly how we wrote things

when we were playing with the interpreter in the beginning of this book.

Why can you omit the in keyword in the interpreter, when you'd have to

put it in when typing up a source file?

We've been insisting rather vehemently on the distinction between actions and expressions and hope that you

have a more or less solid grasp on the difference (if not, it will become clearer as you write more real-life

code). But it turns out that there is a deeper, more beautiful and perhaps frightening truth behind this. Ready?

To be precise, the world of expressions can be handily divided into those that are actions, and those that are

not. We've been making a distinction between actions and expressions the whole time, but what we should

really have been making the distinction between is expressions-that-are-actions and expressions-that-aren't.

This isn't just a hair-splitting terminological difference, either. It may not seem useful for now, but for

me at least it is vaguely reassuring to know that underneath this heavy do-block-left-arrow machinery is a

very elegant monadic core, that actions are first class citizens in the same way that functions and other

expressions are. Sure, there is some extra sugar that makes actions more agreeable to use, but like a graphical

user interface, it is never really strictly obligatory. Moreover, it is sometimes very powerful to manipulate

actions in the same way we do the non-actions, for example to map or foldr over them. Unimpressed? It's

ok, it's not strictly necessary to know this, but it might make Haskell just a little tastier for you to know it.

Just think of this as foreshadowing to Understanding monads.
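To give a small taste of that first-class flavour, here's a sketch of our own (not from the original text): because actions are ordinary values, we can build a list of them with plain map and only later run them, in order, with the Prelude's sequence_.

```haskell
-- Nothing is printed here: we merely build a list of actions as values.
greetings :: [IO ()]
greetings = map putStrLn ["Hello", "Bonjour", "Hallo"]

-- sequence_ runs each action in order, discarding the results.
main :: IO ()
main = sequence_ greetings
```

The point is that map never performs any I/O; only main, by running the actions, does.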

Learn more

At this point, you should have the skills you need to do some fancier input/output. Here are some IO-related

options to consider.

You could continue the sequential track, by learning more about types and eventually monads.

Alternately: you could start learning about building graphical user interfaces in the GUI chapter

For more IO-related functionality, you could also consider learning more about the System.IO library

Type declarations

Haskell offers three ways of declaring a new type:

The data declaration, for structures and enumerations.
The type declaration, for type synonyms.
The newtype declaration, which is a cross between the other two.

In this chapter, we will focus on the most essential way, data, and to make life easier, type. You'll find out

about newtype later on, but don't worry too much about it; it's there mainly for optimisation.

Here is a data structure for a simple list of anniversaries:

data Anniversary =
    Birthday String Int Int Int          -- Name, month, day, year
    | Wedding String String Int Int Int  -- First partner's name, second partner's name, month, day, year

This declares a new data type Anniversary with two constructor functions called Birthday and

Wedding. As usual with Haskell the case of the first letter is important: type names and constructor

functions must always start with capital letters. Note also the vertical bar: this marks the point where one

alternative ends and the next begins; you can think of it almost as an or - which you'll remember was || -

except used in types.

The declaration says that an Anniversary can be one of two things; a Birthday or a Wedding. A Birthday

contains one string and three integers, and a Wedding contains two strings and three integers. The comments

(after the "--") explain what the fields actually mean.

Now we can create new anniversaries by calling the constructor functions. For example, suppose we have

John Smith born on 3rd July 1968:

johnSmith :: Anniversary

johnSmith = Birthday "John Smith" 7 3 1968

smithWedding :: Anniversary

smithWedding = Wedding "John Smith" "Jane Smith" 3 4 1997

anniversaries :: [Anniversary]

anniversaries = [johnSmith, smithWedding]

(Obviously a real application would not hard-code its entries: this is just to show how constructor functions

work).

Constructor functions can do all of the things ordinary functions can do. Anywhere you could use an ordinary

function you can use a constructor function.
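For instance (a hypothetical snippet, not part of the original example), Birthday is just a function of type String -> Int -> Int -> Int -> Anniversary, so it can be partially applied or mapped like any other function:

```haskell
-- Partially apply Birthday to a name; the result still awaits the date.
johnsBirthdayOn :: Int -> Int -> Int -> Anniversary
johnsBirthdayOn = Birthday "John Smith"

-- Use the constructor with map, just as we would an ordinary function.
birthdays :: [Anniversary]
birthdays = map (\name -> Birthday name 1 1 1970) ["Alice", "Bob"]
```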

Anniversaries will need to be converted into strings for printing. This needs another function:

showAnniversary :: Anniversary -> String
showAnniversary (Birthday name month day year) =
    name ++ " born " ++ showDate month day year
showAnniversary (Wedding name1 name2 month day year) =
    name1 ++ " married " ++ name2 ++ " " ++ showDate month day year

This shows the one way that constructor functions are special: they can also be used to deconstruct objects.

showAnniversary takes an argument of type Anniversary. If the argument is a Birthday then the

first version gets used, and the variables name, month, day and year are bound to its contents. If the

argument is a Wedding then the second version is used and the arguments are bound in the same way. The

brackets indicate that the whole thing is one argument split into five or six parts, rather than five or six

separate arguments.

Notice the relationship between the type and the constructors. All versions of showAnniversary convert

an anniversary to a string. One of them handles the Birthday case and the other handles the Wedding

case.

Of course, it's a bit clumsy having the date passed around as three separate integers. What we really need is a new datatype:

data Date = Date Int Int Int   -- Month, day, year
                               -- Non-US readers may wish to rearrange this

Constructor functions are allowed to have the same name as the type, and if there is only one constructor then it is good practice to make it so.

It would also be nice to make it clear that the strings in the Anniversary type are names, but still be able to manipulate them like ordinary strings. The type declaration does this:

type Name = String

This says that a Name is a synonym for a String. Any function that takes a String will now take a Name as well, and vice versa. The right-hand side of a type declaration can be a more complex type as well. For example, String itself is defined in the standard libraries as

type String = [Char]

With these definitions, the Anniversary declaration becomes

data Anniversary =
    Birthday Name Date
    | Wedding Name Name Date

which is a lot easier to read. We can also have a type for the list:

type AnniversaryBook = [Anniversary]

johnSmith :: Anniversary

johnSmith = Birthday "John Smith" (Date 7 3 1968)

smithWedding :: Anniversary

smithWedding = Wedding "John Smith" "Jane Smith" (Date 3 4 1997)

anniversaries :: AnniversaryBook

anniversaries = [johnSmith, smithWedding]

showAnniversary :: Anniversary -> String
showAnniversary (Birthday name date) =
    name ++ " born " ++ showDate date
showAnniversary (Wedding name1 name2 date) =
    name1 ++ " married " ++ name2 ++ " " ++ showDate date

showDate :: Date -> String
showDate (Date m d y) = show m ++ "/" ++ show d ++ "/" ++ show y

Elementary Haskell

Recursion

Recursion is a clever idea that says that a given function can use itself as part of its definition.

Numeric recursion

The factorial function

In mathematics, especially combinatorics, there is a function used fairly frequently called the factorial function[9]. This takes a single argument, a number, finds all the numbers between one and this number, and

multiplies them all together. For example, the factorial of 6 is 1 × 2 × 3 × 4 × 5 × 6 = 720. This is an

interesting function for us, because it is a candidate to be written in the recursive style.

Factorial of 6 = 6 × 5 × 4 × 3 × 2 × 1

Factorial of 5 = 5 × 4 × 3 × 2 × 1

Notice how we've lined things up. What you can see here is that the factorial of 6 involves the factorial of 5.

In fact, the factorial of 6 is just 6 × (factorial of 5). Let's look at some more examples:

Factorial of 3 = 3 × 2 × 1

Factorial of 2 = 2 × 1

Factorial of 8 = 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1

Factorial of 7 = 7 × 6 × 5 × 4 × 3 × 2 × 1

Indeed, we can see that the factorial of any number is just that number multiplied by the factorial of the

number one less than it. There's one exception to this: if we ask for the factorial of 0, we don't want to

multiply 0 by the factorial of -1! In fact, we just say the factorial of 0 is 1 (we define it to be so. It just is,

okay?). So, we can sum up the definition of the factorial function:

The factorial of 0 is 1

The factorial of any other number is that number multiplied by the factorial of the number one less

than it.

factorial 0 = 1

factorial n = n * factorial (n-1)

This defines a new function called factorial. The first line says that the factorial of 0 is 1, and the

second one says that the factorial of any other number n is equal to n times the factorial of n-1. Note the

parentheses around the n-1: without them this would have been parsed as (factorial n) - 1; function

application (applying a value to a function) will happen before anything else does (we say that function

application binds more tightly than anything else).

This all seems a little voodoo so far, though. How does it work? Well, let's look at what happens when you

execute factorial 3:

3 isn't 0, so we recur.
2 isn't 0, so we recur.

1 isn't 0, so we recur.

0 is 0, so we return 1.

We multiply the current number, 1, by the result of the recursion, 1, obtaining 1 (1 × 1).

We multiply the current number, 2, by the result of the recursion, 1, obtaining 2 (2 × 1 × 1).

We multiply the current number, 3, by the result of the recursion, obtaining 6 (3 × 2 × 1 × 1).

(Note that we end up with the one appearing twice, but that's okay, because the 'base case' is 0 rather than 1.

This is just mathematical convention (it's useful to have the factorial of 0 defined); we could have stopped at

1 if we had wanted to.)

We can see how the multiplication 'builds up' through the recursion.

Exercises

Type the factorial function into a Haskell source file and load it into your favourite

Haskell environment.

What is factorial 5?

What about factorial 1000? If you have a scientific calculator (that isn't

your computer), try it there first. Does Haskell give you what you expected?

What about factorial (-1)?

A quick aside

This section is aimed at people used to more imperative-style languages like C and Java.

This example shows how you do loops in Haskell. The idiomatic way of doing this in an imperative language

would be to use a for loop, like the following (in C):

int factorial(int n) {
    int res = 1;
    for (int i = 1; i <= n; i++)
        res *= i;
    return res;
}

This isn't possible in Haskell because you're changing the value of the variable res (a destructive update),

but you can use recursion. To do it through recursion, you take your current result and modify it before the

recursive call. An example: sometimes you'll want to read input from the user that includes linebreaks/

newlines. A looping solution would be to read a line of input, append it to a string variable containing all

previous lines, check it for whatever marks the end of the input (ending the loop if true, or looping again if

false). Here's a recursive solution which will accumulate input until the '.' is input:

getLinesUntilDot :: IO [String]
getLinesUntilDot =
    do x <- getLine
       if x == "."
           then return []
           else do xs <- getLinesUntilDot
                   return (x : xs)
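A minimal way to try it out (the main wrapper here is our own, not part of the original):

```haskell
-- Read lines until a lone "." is entered, then echo them all back.
main :: IO ()
main = do ls <- getLinesUntilDot
          putStr (unlines ls)
```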

The other way would be to use lists: you could throw all the numbers between 1 and n in a list, then use the

product function to multiply them all together:

product [1..10]

Another thing to note is that you shouldn't be worried about poor performance through recursion with

Haskell. In general, functional programming compilers include a lot of optimization for recursion, including

one important one called tail-call optimisation; remember too that Haskell is lazy - if a calculation isn't

needed, it won't be done. We'll learn about these in later chapters.

Unlike the other example, the order of the two recursive declarations is important. Haskell matches function

calls starting at the top and picking the first one that matches. In this case, if we had the equation starting

factorial n before the 'special case' starting factorial 0, then the general n would match anything

passed into it, including, importantly, 0. So a call factorial 0 would match on the general, n case, the

compiler would conclude that factorial 0 equals 0 * factorial -1, and so on to negative infinity.

Not what we want.
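If we did want to guard against that runaway recursion on negative inputs, one option (a variation of ours, not the tutorial's definition) is to restrict the recursive equation with a guard:

```haskell
factorial :: Integer -> Integer
factorial 0 = 1
factorial n
    | n > 0     = n * factorial (n - 1)              -- only recurse on positive n
    | otherwise = error "factorial: negative argument"
```

With the guard, factorial (-1) fails immediately with a clear message instead of recursing forever.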

It turns out a lot of functions are recursive! For example, let's think about multiplication. When you were first

introduced to multiplication (remember that moment? :)), it may have been through a process of 'repeated

addition'. That is, 5 × 4 is just 5 added to itself 4 times. So, it turns out we can define multiplication

recursively:

n * 1 = n

n * m = n + n * (m - 1)

Recursion, then, generally looks at two cases: what to do if the argument is the base case (normally either 1

or 0), and what to do otherwise. The actual recursion normally happens in the latter case, passing the number

minus one back into the function, so that we proceed down the number line. When the number hits 1 or 0,

our base case is invoked, and we stop.
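Written out as real Haskell (we use the name mult so as not to clash with the built-in *), the recursive multiplication above becomes:

```haskell
mult :: Integer -> Integer -> Integer
mult n 1 = n                     -- base case
mult n m = n + mult n (m - 1)    -- recursive case: m steps down towards 1
```

So mult 5 4 unfolds to 5 + 5 + 5 + 5. (Like the equations above, this sketch assumes m is at least 1.)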

Exercises

1. Expand out the multiplication 5 × 4 similarly to the expansion we used above for factorial 3.

2. Define a recursive function power such that power x y raises x to the y

power.

3. You are given a function plusOne x = x + 1. Without using any other

(+)s, define a recursive function addition such that addition x y

adds x and y together.

List-based recursion

In fact, a lot of functions in Haskell will turn out to be recursive, especially those concerning lists. Consider

the length function that finds the lengths of lists.

length [] = 0

length (x:xs) = 1 + length xs

Note the syntax. The (x:xs) represents a list where x is the first element and xs is the rest of the list. See

the section on Pattern matching

How about the concatenation function, (++), which joins two lists together? (Some examples of usage are also given, as we haven't come across this function so far.)

Prelude> [1,2,3] ++ [4,5,6]
[1,2,3,4,5,6]

Prelude> "Hello " ++ "world" -- Strings are lists of Chars

"Hello world"

[] ++ ys = ys

(x:xs) ++ ys = x : xs ++ ys

We seem to have a recurring pattern. With list-based functions, at least, we tend to think in two cases: what

to do if the list is empty (the base case), and what to do otherwise. The actual recursion normally happens in

the second step, where we pass the tail of the list to our function again, so that the list becomes progressively

smaller. When it hits the empty list, our base case is invoked.

Exercises

Give recursive definitions for the following list-based functions. In each case,

think what the base case would be, then think what the general case would look

like, in terms of everything smaller than it.

1. replicate :: Int -> a -> [a], which takes an element and a
count and returns the list which is that element repeated that many times.

E.g. replicate 3 'a' = "aaa". (Hint: think about what replicate of

anything with a count of 0 should be; a count of 0 is your 'base case'.)

2. (!!) :: [a] -> Int -> a, which returns the element at the given

'index'. The first element is at index 0, the second at index 1, and so on.

Note that with this function, you're recurring both numerically and down a

list.

3. (A bit harder.) zip :: [a] -> [b] -> [(a, b)], which takes two

lists and 'zips' them together, so that the first pair in the resulting list is the

first two elements of the two lists, and so on. E.g. zip [1,2,3] "abc"

= [(1, 'a'), (2, 'b'), (3, 'c')]. If either of the lists is

shorter than the other, you can stop once either list runs out. E.g. zip

[1,2] "abc" = [(1, 'a'), (2, 'b')].

Recursion is used to define nearly all functions to do with lists and numbers. The next time you need a list-

based algorithm, start with a case for the empty list and a case for the non-empty list and see if your

algorithm is recursive.

Summary

Recursion is the practice of using a function you're defining in the body of the function itself. It nearly always

comes in two parts: a base case and a recursive case. Recursion is especially useful for dealing with list- and

number-based functions.

Notes

1. ↑ At least as far as types are concerned, but we're trying to avoid that word :)
2. ↑ More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
3. ↑ In fact, these are one and the same concept in Haskell.
4. ↑ This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
5. ↑ To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. ↑ This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. ↑ Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. ↑ This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
9. ↑ In mathematics, n! normally means the factorial of n, but that syntax is impossible in Haskell, so we don't use it here.

Pattern matching

Pattern matching is a convenient way to bind variables to different parts of a given value.

You've actually met pattern matching before, in the lists chapter. Recall functions like map:

map _ [] = []

map f (x:xs) = f x : map f xs

Here there are four different patterns going on: two per equation. Let's explore each one in turn (although not

in the order they appeared in that example):

[] is a pattern that matches the empty list. It doesn't bind any variables.

(x:xs) is a pattern that matches something (which gets bound to x), which is cons'd, using the

function (:), onto something else (which gets bound to the variable xs).

f is a pattern which matches anything at all, and binds f to that something.

_ is the pattern which matches anything at all, but doesn't do any binding.

So pattern matching is a way of assigning names to things (or binding those names to those things), and

possibly breaking down expressions into subexpressions at the same time (as we did with the list in the

definition of map).

However, you can't pattern match with anything. For example, you might want to define a function like the

following to chop off the first three elements of a list:

dropThree ([x, y, z] ++ xs) = xs

However, that won't work, and will give you an error. The problem is that the function (++) isn't allowed

in patterns. So what is allowed?

The one-word answer is constructors. Recall algebraic datatypes, which look something like:

data Foo = Bar | Baz Int

Here Bar and Baz are constructors for the type Foo. And so you can pattern match with them:

f Bar = 1

f (Baz x) = x - 1

Remember that lists are defined thusly (note that the following isn't actually valid syntax: lists are in reality deeply grained into Haskell):

data [a] = [] | a : [a]

So the empty list, [], and the (:) function, are in reality constructors of the list datatype, so you can pattern

match with them.

Note, however, that as [x, y, z] is just syntactic sugar for x:y:z:[], you can still pattern match using

the latter form:

dropThree (_:_:_:xs) = xs

If the only relevant information is the type of the constructor (regardless of the number of its elements) the

{} pattern can be used:

g Bar {} = True

g Baz {} = False

The function g does not have to be changed when the number of elements of the constructors Bar or Baz

changes. Note: Foo does not have to be a record for this to work.

data Foo2 = Bar2 | Baz2 {barName :: String}

h :: Foo2 -> Int
h Baz2 {barName=name} = length name

h Bar2 {} = 0

There is one exception to the rule that you can only pattern match with constructors. It's known as n+k

patterns. It is indeed valid Haskell 98 to write something like:

pred (n+1) = n

However, this is generally accepted as bad form and not many Haskell programmers like this exception, and

so try to avoid it.

The short answer is that wherever you can bind variables, you can pattern match. Let's have a look at that

more precisely.

Equations

The first place is in the left-hand side of function equations. For example, our above code for map:

map _ [] = []

map f (x:xs) = f x : map f xs

Here we're binding, and doing pattern-matching, on the left hand side of both of these equations.

You can obviously bind variables with a let expression or where clause. As such, you can also do pattern

matching here. A trivial example:

let Just x = lookup "bar" [("foo", 1), ("bar", 2), ("baz", 3)]

Case expressions

One of the most obvious places you can use pattern binding is on the left hand side of case branches:

case someRandomList of
    []     -> "The list was empty"
    (x:xs) -> "The list wasn't empty: the first element was " ++ x ++ ", and " ++
              "there were " ++ show (length xs) ++ " more elements in the list."

Lambdas

As lambdas can be easily converted into functions, you can pattern match on the left-hand side of lambda

expressions too:

(\(x:xs) -> x)

Note that here, along with on the left-hand side of equations as described above, you have to use parentheses

around your patterns (unless they're just _ or are just a binding, not a pattern, like x).

List comprehensions

After the | in list comprehensions, you can pattern match. This is actually extremely useful. For example, the

function catMaybes from Data.Maybe takes a list of Maybes, filters all the Just xs, and gets rid of all

the Just wrappers. It's easy to write it using list comprehensions:

catMaybes ms = [ x | Just x <- ms ]

If the pattern match fails, it just moves on to the next element in ms. (More formally, as list comprehensions

are just the list monad, a failed pattern match invokes fail, which is the empty list in this case, and so gets

ignored.)
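For example (a quick check of our own), assuming Data.Maybe is imported:

```haskell
import Data.Maybe (catMaybes)

-- Failed pattern matches (the Nothings) are simply skipped.
kept :: [Int]
kept = catMaybes [Just 1, Nothing, Just 3]   -- [1,3]
```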

That's mostly it, but there are one or two other places you'll find as you progress through the book. Here's a

list in case you're very eager already:

In p <- x assignments in do-blocks, p can be a pattern.

Similarly, with let bindings in do-blocks, you can pattern match analogously to 'real' let bindings.

By now we have seen the basic tools for working with lists. We can build lists up from the cons operator (:)

and the empty list [] (see Lists and tuples if you are unsure about this); and we can take them apart by using

a combination of Recursion and Pattern matching. In this chapter, we will delve a little deeper into the inner-

workings and the use of Haskell lists. We'll discover a little bit of new notation and some characteristically

Haskell-ish features like infinite lists and list comprehensions. But before going into this, let us step back for

a moment and combine the things we have already learned about lists.

Constructing Lists

We'll start by making a function to double every element of a list of integers. First, we must specify the type

declaration for our function. For our purposes here, the function maps a list of integers to another list of

integers:

doubleList :: [Integer] -> [Integer]

Then, we must specify the function definition itself. We'll be using a recursive definition, which consists of

1. the general case which iteratively generates a successive and simpler general case and

2. the base case, where iteration stops.

doubleList [] = []
doubleList (n:ns) = (n * 2) : doubleList ns

Since by definition, there are no more elements beyond the end of a list, intuition tells us iteration must stop

at the end of the list. The easiest way to accomplish this is to return the null list: As a constant, it halts our

iteration. As the empty list, it doesn't change the value of any list we append it to.

The general case requires some explanation. Remember that ":" is one of a special class of functions known

as "constructors". The important thing about constructors is that they can be used to break things down as

part of "pattern matching" on the left hand side of function definitions. In this case the argument passed to

doubleList is broken down into the first element of the list (known as the "head") and the rest of the list

(known as the "tail").

On the right hand side doubleList builds up a new list by using ":". It says that the first element of the result is

twice the head of the argument, and the rest of the result is obtained by applying "doubleList" to the tail. Note

the naming convention implicit in (n:ns). By appending an "s" to the element "n" we are forming its plural.

The idea is that the head contains one item while the tail contains many, and so should be pluralised.

doubleList [1,2,3,4]

We can work this out longhand by substituting the argument into the function definition, just like schoolbook

algebra:

= (1*2) : doubleList (2 : [3,4])
= (1*2) : (2*2) : doubleList (3 : [4])

= (1*2) : (2*2) : (3*2) : doubleList (4 : [])

= (1*2) : (2*2) : (3*2) : (4*2) : doubleList []

= (1*2) : (2*2) : (3*2) : (4*2) : []

= 2 : 4 : 6 : 8 : []

= [2, 4, 6, 8]

Notice how the definition for empty lists terminates the recursion. Without it, the Haskell compiler would

have had no way to know what to do when it reached the end of the list.

Also notice that it would make no difference when we did the multiplications (unless one of them is an error

or nontermination: we'll get to that later). If I had done them immediately it would have made absolutely no

difference. This is an important property of Haskell: it is a "pure" functional programming language. Because

evaluation order can never change the result, it is mostly left to the compiler to decide when to actually

evaluate things. Haskell is a "lazy" evaluation language, so evaluation is usually deferred until the value is

really needed, but the compiler is free to evaluate things sooner if this will improve efficiency. From the

programmer's point of view evaluation order rarely matters (except in the case of infinite lists, of which more

will be said shortly).

Of course a function to double a list has limited generality. An obvious generalization would be to allow

multiplication by any number. That is, we could write a function "multiplyList" that takes a multiplicand as

well as a list of integers. It would be declared like this:

multiplyList :: Integer -> [Integer] -> [Integer]

multiplyList _ [] = []

multiplyList m (n:ns) = (m*n) : multiplyList m ns

This example introduces the "_", which is used for a "don't care" argument; it will match anything, like *

does in shells or .* in regular expressions. The multiplicand is not used for the null case, so instead of being

bound to an unused argument name it is explicitly thrown away, by "setting" _ to it. ("_" can be thought of

as a write-only "variable".)

The type declaration needs some explanation. Hiding behind the rather odd syntax is a deep and clever idea.

The "->" arrow is actually an operator for types, and is right associative. So if you add in the implied brackets

the type definition is actually

multiplyList :: Integer -> ([Integer] -> [Integer])

Think about what this is saying. It means that "multiplyList" doesn't take two arguments. Instead it takes one

(an Integer), and then returns a new function. This new function itself takes one argument (a list of Integers)

and returns a new list of Integers. This process of functions taking one argument is called "currying", and is

very important.

Supplying a function with fewer arguments than it expects, which in any other language would be an error, simply yields a new function in Haskell; this is partial function application, and because we're using Haskell, we can write the following neat & elegant bits of code:

doubleList = multiplyList 2

evens = doubleList [1,2,3,4]

It may help you to understand if you put the implied brackets in the first definition of "evens":

evens = (multiplyList 2) [1,2,3,4]

In other words "multiplyList 2" returns a new function that is then applied to [1,2,3,4].

Haskell has a convenient shorthand for specifying a list containing a sequence of integers. Some examples

are enough to give the flavor:

Code Result

---- ------

[1..10] [1,2,3,4,5,6,7,8,9,10]

[2,4..10] [2,4,6,8,10]

[5,4..1] [5,4,3,2,1]

[1,3..10] [1,3,5,7,9]

The same notation can be used for floating point numbers and characters as well. However, be careful with

floating point numbers: rounding errors can cause unexpected things to happen. Try this:

[0,0.1 .. 1]

Similarly, there are limits to what kind of sequence can be written through dot-dot notation. You can't put in

[0,1,1,2,3,5,8..100]

and expect to get back the rest of the Fibonacci series, or put in the beginning of a geometric sequence like

[1,3,9,27..100]

Infinite Lists

One of the most mind-bending things about Haskell lists is that they are allowed to be infinite. For example,

the following generates the infinite list of integers starting with 1:

[1..]

(If you try this in GHCi, remember you can stop an evaluation with C-c).

Or you could define the same list in a more primitive way by using a recursive function:

intsFrom n = n : intsFrom (n + 1)   -- note: no base case!
positiveInts = intsFrom 1

This works because Haskell uses lazy evaluation: it never actually evaluates more than it needs at any given

moment. In most cases an infinite list can be treated just like an ordinary one. The program will only go into

an infinite loop when evaluation would actually require all the values in the list. Examples of this include

sorting or printing the entire list. However:

evens = doubleList [1..]

will define "evens" to be the infinite list [2,4,6,8,...]. And you can pass "evens" into other functions, and it

will all just work. See the exercise 4 below for an example of how to process an infinite list and then take the

first few elements of the result.

Infinite lists are quite useful in Haskell. Often it's more convenient to define an infinite list and then take the

first few items than to create a finite list. Functions that process two lists in parallel generally stop with the

shortest, so making the second one infinite avoids having to find the length of the first. An infinite list is

often a handy alternative to the traditional endless loop at the top level of an interactive program.

Exercises

Write the following functions and test them out. Don't forget the type declarations.

1. takeInt returns the first n items in a list. So takeInt 4 [11,21,31,41,51]

returns [11,21,31,41].

2. dropInt drops the first n items in a list and returns the rest. so dropInt 3

[11,21,31,41,51] returns [41,51].

3. sumInt returns the sum of the items in a list.

4. scanSum adds the items in a list and returns a list of the running totals. So

scanSum [2,3,4,5] returns [2,5,9,14]. Is there any difference between

"scanSum (takeInt 10 [1..])" and "takeInt 10 (scanSum [1..])"?

5. diffs returns a list of the differences between adjacent items. So diffs

[3,5,6,8] returns [2,1,2]. (Hint: write a second function that takes two lists

and finds the difference between corresponding items).

Deconstructing lists

So now we know how to generate lists by appending to the empty list, or using infinite lists and their

notation. Very useful.

But what happens if our function is not generating a list and handing it off to some other function, but is

rather receiving a list? It needs to be analyzed and broken down in some way.

For this purpose, Haskell includes the same basic functionality as other programming languages, except with

better names than "cdr" or "car": the "head" and "tail" functions.

head :: [a] -> a

tail :: [a] -> [a]

From these two functions we can build pretty much all the functionality we want. If we want the first item in

the list, a simple head will do:

Code Result

---- ------

head [1,2,3] 1

head [5..100] 5

If we want the second item in a list, we have to be a bit clever: head gives the first item in a list, and tail

effectively removes the first item in a list. They can be combined, though:

Code Result

---- ------

head(tail [1,2,3,4,5]) 2

head(tail (tail [1,2,3,4,5])) 3

Enough applications of tail can reach any element; usually this is generalized into a function which is passed a list

and a number, which gives the position in the list to return.

Exercises

Write a function which takes a list and a number and returns the given element;

use head or tail, and not !!.

List comprehensions

This is one further way to work with lists: the list comprehension. List comprehensions are useful

and concise expressions, although they are fairly rare.

List comprehensions are basically syntactic sugar for a common pattern dealing with lists: when one wants to

take a list and generate a new list composed only of elements of the first list that meet a certain condition.

One could write this out manually. For example, suppose one wants to take a list [1..10], and only retain the

even numbers? One could handcraft a recursive function called retainEven, based on a test for evenness

which we've already written called isEven:

isEven n

| n < 0 = error "isEven needs a positive integer"

| ((mod n 2) == 0) = True -- Even numbers have no remainder when divided by 2

| otherwise = False -- If it has a remainder of anything but 0, it is not even

retainEven [] = []

retainEven (e:es)

| isEven e = e:retainEven es --If something is even, let's hang onto it

| otherwise = retainEven es --If something isn't even, discard it and move on

Exercises

Write a function which will take a list and return only odd numbers greater than 1.

Hint: isOdd can be defined as the negation of isEven.

This is fairly verbose, though, and we had to go through a fair bit of effort and define an entirely new

function just to accomplish the relatively simple task of filtering a list. Couldn't it be generalized? What we

want to do is construct a new list with only the elements of an old list for which some boolean condition is

true. Well, we could generalize our function writing above like this, involving the higher-order functions map

and filter. For example, the above can also be written as
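The filter version itself does not survive on this page; it is presumably something like (using a simplified isEven, as an assumption):

```haskell
isEven :: Integer -> Bool
isEven n = mod n 2 == 0

retainEven :: [Integer] -> [Integer]
retainEven es = filter isEven es
```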

We can do this through the list comprehension form, which looks like this:
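A form matching the explanation that follows (n drawn from es, kept when even) would be:

```haskell
isEven :: Integer -> Bool
isEven n = mod n 2 == 0

retainEven :: [Integer] -> [Integer]
retainEven es = [n | n <- es, isEven n]
```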

We can read the first half as an arbitrary expression modifying n, which will then be prepended to a new list.

In this case, n isn't being modified, so we can think of this as repeatedly prepending the variable, like

n:n:n:n:[] - but where n is different each time. n is drawn (the "<-") from the list es (a subtle point is that es

can be the name of a list, or it can itself be a list).

Thus if es is equal to [1,2,3,4], then we would get back the list [2,4].

We can do more than that, and list comprehensions can be easily modifiable. Perhaps we wish to generalize

factoring a list, instead of just factoring it by evenness (that is, by 2). Well, given that ((mod n x) == 0)

returns true for numbers n which are factorizable by x, it's obvious how to use it, no? Write a function using

a list comprehension which will take an integer, and a list of integers, and return a list of integers which are

divisible by the first argument. In other words, the type signature is thus:
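A sketch matching that description (the choice of Int over Integer here is an assumption):

```haskell
returnFact :: Int -> [Int] -> [Int]
returnFact x ys = [n | n <- ys, mod n x == 0]
```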

returnFact 10 [10..1000]

[10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,....etc.]

Which is as it should be. But what if we want to write the opposite? What if we want to write a function

which returns those integers which are not divisible? The modification is very simple, and the type signature

the same. What decides whether an integer will be added to the list or not is the mod function, which currently

returns true for those to be added. A simple 'not' suffices to reverse when it returns true, and so reverses the

operation of the list:

rmFact x ys = [n | n<-ys , (not ((mod n x) == 0))]

[11,12,13,14,15,16,17,18,19,21,22,23,24,25,26,27,28,29,......etc.]

Of course this function is not perfect. We can still do silly things like rmFact 0 [1..1000], which gives

*** Exception: divide by zero

We can stack on more tests besides the one: maybe all our even numbers should be larger than 2:
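Such a stacked-test comprehension would look something like this (the function name is an assumption):

```haskell
isEven :: Integer -> Bool
isEven n = mod n 2 == 0

retainLargeEvens :: [Integer] -> [Integer]
retainLargeEvens es = [n | n <- es, isEven n, n > 2]
```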

Fortunately, our Boolean tests are independent, so it doesn't matter whether (n > 2) or (isEven n) is evaluated

first.

It's useful to note that the left arrow in list comprehensions can be used with pattern matching. For example,

suppose we had a list of tuples [(Integer, Integer)]. What we would like to do is return the first

element of every tuple whose second element is even. We could write it with a filter and a map, or we could

write it as follows:
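One comprehension consistent with that description (the name is an assumption):

```haskell
isEven :: Integer -> Bool
isEven n = mod n 2 == 0

-- keep the first element of each pair whose second element is even
firstOfEvens :: [(Integer, Integer)] -> [Integer]
firstOfEvens xys = [x | (x, y) <- xys, isEven y]
```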

Control structures

Haskell offers several ways of expressing a choice between different values. This section will describe them

all and explain what they are for:

if Expressions

An if expression contains a condition and two values: if the condition holds, the <true-value> is returned,

otherwise the <false-value> is returned. Note that in Haskell if is an

expression (returning a value) rather than a statement (to be executed). The else is required!

Because of this the usual indentation is different from imperative languages. If

you need to break an if expression across multiple lines then you should

indent it like one of these:

if <condition>

then <true-value>

else <false-value>

if <condition>

then

<true-value>

else

<false-value>

message42 n =

if n == 42

then "The Answer is forty two."

else "The Answer is not forty two."

Unlike many other languages, in Haskell the else is required. Since if is an expression, it must return a

result, and the else ensures this.

case Expressions

case expressions are a generalization of if expressions. As an example, let's clone if as a case:

case <condition> of

True -> <true-value>

False -> <false-value>

_ -> error "Neither True nor False? How can that be?"

First, this checks <condition> for a pattern match against True. If they match, the whole expression will

evaluate to <true-value>, otherwise it will continue down the list. You can use _ as the pattern wildcard.

In fact, the left hand side of any case branch is just a pattern, so it can also be used for binding:

case str of

(x:xs) -> "The first character is " ++ [x] ++ "; the rest of the string is " ++ xs

"" -> "This is the empty string."

This expression tells you whether str is the empty string or something else. Of course, you could just do this

with an if-statement (with a condition of null str), but using a case binds variables to the head and tail of

our list, which is convenient in this instance.

You can use multiple equations as an alternative to case expressions. The case expression above could be

named describeString and written like this:

describeString (x:xs) = "The first character is " ++ [x] ++ "; the rest of the string is " ++ xs

describeString "" = "This is the empty string."

Named functions and case expressions at the top level are completely interchangeable. In fact the function

definition form shown here is just syntactic sugar for a case expression.

The handy thing about case expressions is that they can go inside other expressions, or be used in an

anonymous function. For example, this case expression returns a

string which is then concatenated with two other strings to create the result:

describeColour c =

"This colour is "

++ (case c of

Black -> "black"

White -> "white"

RGB _ _ _ -> "freaky, man, sort of in between")

++ ", yeah?"

You can also put where clauses in a case expression, just as you can in functions:

describeColour c =

"This colour is "

++ (case c of

Black -> "black"

White -> "white"

RGB red green blue -> "freaky, man, sort of " ++ show av

where av = (red + green + blue) `div` 3

)

++ ", yeah?"

Guards

As shown, if we have a top-level case expression, we can just give multiple equations for the function

instead, which is normally neater. Is there an analogue for if expressions? It turns out there is.

We use some additional syntax known as "guards". A guard is a boolean condition, like this:

describeLetter c

| c >= 'a' && c <= 'z' = "Lower case"

| c >= 'A' && c <= 'Z' = "Upper case"

| otherwise = "Not a letter"

Note the lack of an = before the first |. Guards are evaluated in the order they appear. That is, if you have a

set up similar to the following:

f (pattern1) | predicate1 = w

| predicate2 = x

f (pattern2) | predicate3 = y

| predicate4 = z

Then the input to f will be pattern-matched against pattern1. If it succeeds, then predicate1 will be evaluated.

If this is true, then w is returned. If not, then predicate2 is evaluated. If this is true, then x is returned. Again,

if not, then we jump out of this 'branch' of f and try to pattern match against pattern2, repeating the guards

procedure with predicate3 and predicate4. If no guards match, an error will be produced at runtime, so it's

always a good idea to leave an 'otherwise' guard in there to handle the "But this can't happen!" case.

The otherwise you saw above is actually just a normal value defined in the Standard Prelude as:

otherwise :: Bool

otherwise = True

This works because of the sequential evaluation described a couple of paragraphs back: if none of the guards

previous to your 'otherwise' one are true, then your otherwise will definitely be true and so whatever is on the

right-hand side gets returned. It's just nice for readability's sake.

One nicety about guards is that where clauses are common to all guards.

doStuff x

| x < 3 = report "less than three"

| otherwise = report "normal"

where

report y = "the input is " ++ y

It's worth noting that there is a fundamental difference between if-expressions and case-expressions. if-

expressions, and guards, only check to see if a boolean expression evaluated to True. case-expressions, and

multiple equations for the same function, pattern match against the input. Make sure you understand this

important distinction.

List processing

Because lists are such a fundamental data type in Haskell it has a wide collection of functions for processing

them. These are mostly to be found in a library module called the "Standard Prelude" which is automatically

imported to all Haskell programs.

Map

This module will explain one particularly important function, called "map", and then describe some of the

other list processing functions that work in similar ways.

multiplyList _ [] = []

multiplyList m (n:ns) = (m*n) : multiplyList m ns

This works on a list of integers, multiplying each item by a constant. But Haskell allows us to pass functions

around just as easily as we can pass integers. So instead of passing a multiplier "m" we could pass a function

"f", like this:

mapList1 _ [] = []

mapList1 f (n:ns) = (f n) : mapList1 f ns

Take a minute to compare the two functions. The difference is in the first parameter. Instead of being just an

integer it is now a function. This function parameter has the type "(Integer -> Integer)", meaning that it is a

function from one integer to another. The second line says that if this is applied to an empty list then the

result is itself an empty list, and the third line says that for a non-empty list the result is "f" applied to the first

item in the list, followed by a recursive call to "mapList1" for the rest of the list.

Remember that "*" has type "Integer -> Integer -> Integer". So if I write "(2*)" then this returns a new

function that doubles its argument and has type "Integer -> Integer". But that is exactly what I can pass to

"mapList1". So now I can write "doubleList" like this:
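The two versions being compared are presumably along these lines (the second name is hypothetical, added only to show both forms side by side):

```haskell
mapList1 :: (Integer -> Integer) -> [Integer] -> [Integer]
mapList1 _ [] = []
mapList1 f (n:ns) = (f n) : mapList1 f ns

doubleList :: [Integer] -> [Integer]
doubleList = mapList1 (2*)           -- point-free version

doubleList2 :: [Integer] -> [Integer]
doubleList2 ns = mapList1 (2*) ns    -- explicit version
```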

The two are equivalent because if I just pass one argument to mapList1 I get back a new function. The

second version is more natural for newcomers to Haskell, but experts often favour the first, known as "point

free" style.

Obviously this idea is not limited to just integers. I could just as easily write

mapListString _ [] = []

mapListString f (n:ns) = (f n) : mapListString f ns

and have a function that does this for strings. But this is horribly wasteful: the code is exactly the same for

both strings and integers. What is needed is a way to say that "mapList" works for both Integers, Strings, and

any other type I might want to put in a list. In fact there is no reason why the input list should be the same

type as the output list: I might very well want to convert a list of integers into a list of their string

representations, or vice versa. And indeed Haskell provides a way to do this. The Standard Prelude contains

the following definition of "map":

map :: (a -> b) -> [a] -> [b]

map _ [] = []

map f (x:xs) = (f x) : map f xs

Instead of constant types like String or Integer this definition uses type variables. These start with lower case

letters (as opposed to type constants that start with upper case) and otherwise follow the same lexical rules as

normal variables. However the convention is to start with "a" and go up the alphabet. Even the most

complicated functions rarely get beyond "d".

"map" takes two arguments: a function from things of type "a" to things of type "b", and

a list of things of type "a". It returns a new list containing things of type "b", constructed by applying the

function to all of the things of type "a".

Exercises

Use map to build a function that, given a list l of Ints, returns a list of lists of Ints ll that, for each

element of l, contains the factors of that element. It will help to know that

factors p = [f | f <- [1 .. p], p `mod` f == 0]

Folds

A fold applies a function to a list in a similar way to map, but it accumulates a single result instead of a list.

Take for example, a function like sum, which might be implemented as follows:

Example: sum

sum [] = 0

sum (x:xs) = x + sum xs

or product:

Example: product

product [] = 1

product (x:xs) = x * product xs

or, concat, which takes a list of lists and joins (concatenates) them into one:

Example: concat

concat [] = []

concat (x:xs) = x ++ concat xs

There is a certain pattern of recursion common to all of these. It is known as a fold, possibly from the idea

that a list is being "folded up" into a single value, or that a function is being "folded between" the elements of

the list.

The Standard Prelude has four fold functions: "foldr", "foldl", "foldr1" and "foldl1".

The most natural and commonly used of these in a lazy language like Haskell is the right-associative foldr:

foldr :: (a -> b -> b) -> b -> [a] -> b

foldr f z [] = z

foldr f z (x:xs) = f x (foldr f z xs)

The first argument is a function with two arguments, the second is a "zero" value for the accumulator, and the

third is the list to be folded.

For example, in sum, f is (+), and z is 0, and in concat, f is (++) and z is []. In many cases, like all of our

examples so far, the function passed to a fold will have both its arguments be of the same type, but this is not

necessarily the case in general.

What foldr f z xs does is to replace each cons (:) in the list xs with the function f, and the empty list at

the end with z.

a : b : c : []

becomes

f a (f b (f c z))

This is perhaps most elegantly seen by picturing the list data structure as a tree:

: f

/ \ / \

a : foldr f z a f

/ \ -------------> / \

b : b f

/ \ / \

c [] c z

It is fairly easy to see with this picture that foldr (:) [] is just the identity function on lists.

The left-associative foldl processes the list in the opposite direction, starting from the left:

foldl :: (b -> a -> b) -> b -> [a] -> b

foldl f z [] = z

foldl f z (x:xs) = foldl f (f z x) xs

So brackets in the resulting expression accumulate on the left. Our list above, after being transformed by

foldl f z becomes:

f (f (f z a) b) c

: f

/ \ / \

a : foldl f z f c

/ \ -------------> / \

b : f b

/ \ / \

c [] z a

Technical Note: The left associative fold is tail-recursive, that is, it recurses immediately, calling itself. For

this reason the compiler will optimise it to a simple loop, and it will then be much more efficient than

foldr. However, Haskell is a lazy language, and so the calls to f will by default be left unevaluated,

building up an expression in memory whose size is linear in the length of the list, exactly what we hoped to

avoid in the first place. To get back this efficiency, there is a version of foldl which is strict, that is, it forces

the evaluation of f immediately, called foldl'. Note the single quote character: this is pronounced "fold-ell-tick".

A tick is a valid character in Haskell identifiers. foldl' can be found in the library Data.List. As a

rule you should use foldr on lists that might be infinite or where the fold is building up a data structure,

and foldl' if the list is known to be finite and comes down to a single value.

As previously noted, the type declaration for foldr makes it quite possible for the list elements and result to

be of different types. For example, "read" is a function that takes a string and converts it into some type (the

type system is smart enough to figure out which one). In this case we convert it into a float.

Example: The list elements and results can have different types

addStr str x = read str + x

sumStr = foldr addStr 0.0

If you substitute the types Float and String for the type variables "a" and "b" in the type of foldr you will

see that this is type correct.

There is also a variant called foldr1 (that is "fold - arr - one") which dispenses with an explicit zero by

taking the last element of the list instead:

foldr1 f [x] = x

foldr1 f (x:xs) = f x (foldr1 f xs)

foldr1 _ [] = error "Prelude.foldr1: empty list"

foldl1 f (x:xs) = foldl f x xs

foldl1 _ [] = error "Prelude.foldl1: empty list"

Note: There is additionally a strict version of foldl1 called foldl1' in the Data.List library.

Notice that in this case all the types have to be the same, and that an empty list is an error. These variants are

occasionally useful, especially when there is no obvious candidate for z, but you need to be sure that the list

is not going to be empty. If in doubt, use foldr or foldl'.

One good reason that right-associative folds are more natural to use in Haskell than left-associative ones is

that right folds can operate on infinite lists, which are not so uncommon in Haskell programming. If the input

function f only needs its first parameter to produce the first part of the output, then everything works just fine.

However, a left fold will continue recursing, never producing anything in terms of output until it reaches the

end of the input list. Needless to say, this never happens if the input list is infinite, and the program will spin

endlessly in an infinite loop.

As a toy example of how this can work, consider a function "echoes" taking a list of integers, and producing

a list where if the number n occurs in the input list, then n replicated n times will occur in the output list. We

will make use of the prelude function "replicate": replicate n x is a list of length n with x the value of

every element.

echoes = foldr (\x xs -> (replicate x x) ++ xs) []

or as a foldl:

echoes = foldl (\xs x -> xs ++ (replicate x x)) []

but only the first definition works on an infinite list like [1..]. Try it!

Note the syntax in the above example: the \xs x -> means that xs is set to the first argument outside the

parentheses (in this case, []), and x is set to the second (will end up being the argument of echoes when it

is called).

As a final example, another thing that you might notice is that "map" itself is patterned as a fold:
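That is, map can be written as a foldr along these lines (named map' here only to avoid clashing with the Prelude's map):

```haskell
-- map as a fold: rebuild the list, applying f to each element as we go
map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x xs -> f x : xs) []
```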

Folding takes a little time to get used to, but it is a fundamental pattern in functional programming, and

eventually becomes very natural. Any time you want to traverse a list and build up a result from its members

you want a fold.

Exercises

Define the following functions recursively (like the definitions for sum,

product and concat above), then turn them into a fold:

and :: [Bool] -> Bool, which returns True if a list of Bools are all

True, and False otherwise.

or :: [Bool] -> Bool, which returns True if any of a list of Bools

are True, and False otherwise.

maximum :: Ord a => [a] -> a, which returns the maximum

element of a list (hint: max :: Ord a => a -> a -> a returns the

maximum of two values).

minimum :: Ord a => [a] -> a, which returns the minimum

element of a list (hint: min :: Ord a => a -> a -> a returns the

minimum of two values).

Scans

A "scan" is much like a cross between a map and a fold. Folding a list accumulates a single return value,

whereas mapping puts each item through a function with no accumulation. A scan does both: it accumulates

a value like a fold, but instead of returning a final value it returns a list of all the intermediate values.

scanl :: (b -> a -> b) -> b -> [a] -> [b]

This accumulates the list from the left, and the second argument becomes the first item in the resulting list.

So scanl (+) 0 [1,2,3] = [0,1,3,6]

scanl1 :: (a -> a -> a) -> [a] -> [a]

This is the same as scanl, but uses the first item of the list as a zero parameter. It is what you would

typically use if the input and output items are the same type. Notice the difference in the type signatures.

scanl1 (+) [1,2,3] = [1,3,6].

scanr :: (a -> b -> b) -> b -> [a] -> [b]

scanr1 :: (a -> a -> a) -> [a] -> [a]

These two functions are the exact counterparts of scanl and scanl1. They accumulate the totals from the

right. So:

scanr (+) 0 [1,2,3] = [6,5,3,0]

scanr1 (+) [1,2,3] = [6,5,3]

Exercises

Write a function which returns a list of the factorials of the numbers from 1 up to its argument (hint: use a scan).

More to be added

More on functions

As functions are absolutely essential to functional programming, there are some nice features you can use to

make using functions easier.

Private Functions

Remember the sumStr function from the chapter on list processing. It used another function called

addStr:

addStr x str = x + read str

sumStr = foldl addStr 0.0

gives 11.5.
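The list being summed is not shown here; one example that does total 11.5 (an assumed list, not necessarily the original's):

```haskell
addStr :: Double -> String -> Double
addStr x str = x + read str

sumStr :: [String] -> Double
sumStr = foldl addStr 0.0

-- sumStr ["1.5", "4.0", "6.0"] evaluates to 11.5
```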

But maybe you don't want addStr cluttering up the top level of your program. Haskell lets you nest

declarations in two subtly different ways:

sumStr xs = foldl addStr 0.0 xs

where addStr x str = x + read str

sumStr =

let addStr x str = x + read str

in foldl addStr 0.0

The difference between let and where lies in the fact that let foo = 5 in foo + foo is an

expression, but foo + foo where foo = 5 is not. (Try it: an interpreter will reject the latter

expression.) Where clauses are part of the function declaration as a whole, which makes a difference when

using guards.

Anonymous Functions

An alternative to creating a named function like addStr is to create an anonymous function, also known as

a lambda function. For example, sumStr could have been defined like this:

sumStr = foldl (\x str -> x + read str) 0.0

The bit in the parentheses is a lambda function. The backslash is used as the nearest ASCII equivalent to the

Greek letter lambda (λ). This example is a lambda function with two arguments, x and str, and the result is

"x + read str". So, the sumStr presented just above is precisely the same as the one that used addStr in a

let binding.

Lambda functions are handy for one-off function parameters, especially where the function in question is

simple. The example above is about as complicated as you want to get.

As we noted in the previous chapter, you can take an operator and turn it into a function by surrounding it in

brackets:

2 + 4

(+) 2 4

This is called making the operator prefix: you're using it before its arguments, so it's known as a prefix

function. We can now formalise the term 'operator': it's a function which is entirely non-alphanumeric

characters, and is used infix (normally). You can define your own operators just the same as functions, just

don't use any alphanumeric characters. For example, here's the set-difference definition from Data.List:

xs \\ ys = foldl (\x y -> delete y x) xs ys

Note that aside from just using operators infix, you can define them infix as well. This is a point that most

newcomers to Haskell miss. I.e., although one could have written:

(\\) xs ys = foldl (\x y -> delete y x) xs ys

It's more common to define operators infix. However, do note that in type declarations, you have to surround

the operators by parentheses.

Supplying an operator with only one of its arguments produces a section, a function of the missing argument:

(2+) 4

(+4) 2

These sections are functions in their own right. (2+) has the type Int -> Int, for example, and you can

pass sections to other functions, e.g. map (+2) [1..4].

If you have a (prefix) function, and want to use it as an operator, simply surround it by backticks:

1 `elem` [1..4]

This is called making the function infix: you're using it in between its arguments. It's normally done for

readability purposes: 1 `elem` [1..4] reads better than elem 1 [1..4]. You can also define

functions infix:

x `elem` xs = any (==x) xs

But once again notice that in the type signature you have to use the prefix style.

Sections can be formed from backticked functions as well:

(1 `elem`) [1..4]

(`elem` [1..4]) 1

You can only make binary functions (those that take two arguments) infix. Think about the functions you

use, and see which ones would read better if you used them infix.

Exercises

Convert the following let- or where-bindings to lambdas:

map f xs where f x = x * 2 + 3

let f x y = read x + y in foldr f 1 xs

Sections are just syntactic sugar for lambda

operations. I.e. (+2) is equivalent to \x -> x + 2.

What would the following sections 'desugar' to?

What would be their types?

(4+)

(1 `elem`)

(`notElem` "abc")

Higher-order functions

Higher-order functions are functions that take other functions as arguments. We have already met some of

them, such as map, so there isn't anything really frightening or unfamiliar about them. They offer a form of

abstraction that is unique to the functional programming style. In functional programming languages like

Haskell, functions are just like any other value, so it doesn't get any harder to deal with higher-order

functions.

Higher order functions have a separate chapter in this book, not because they are particularly difficult --

we've already worked with them, after all -- but because they are powerful enough to draw special attention

to them. We will see in this chapter how much we can do if we can pass around functions as values.

Generally speaking, it is a good idea to abstract over a functionality whenever we can. Besides, Haskell

without higher order functions wouldn't be quite as much fun.

Don't get too excited, but quickSort is certainly one of the quickest. Have you heard of it? If you have, you

can skip the following subsection and go straight to the next one:

The idea is quite simple. For a big list, we pick an element, and divide the whole list into three parts.

The first part has all elements that should go before that element, the second part consists of all of the

elements that are equal to the picked element, the third has the elements that ought to go after that element.

And then, of course, we are supposed to concatenate these. What we get is somewhat better, right?

The trick is to note that only the first and the third are yet to be sorted, and for the second, sorting doesn't

really make sense (they are all equal!). How to go about sorting the yet-to-be-sorted sub-lists? Why... apply

the same algorithm on them again! By the time the whole process is finished, you get a completely sorted list.

-- note that this is the base case for the recursion

quickSort [] = []

-- actually, the third case takes care of this one pretty well

-- I just wanted you to take it step by step

quickSort [x] = [x]

-- we pick the first element as our "pivot", the rest is to be sorted

-- don't forget to include the pivot in the middle part!

quickSort (x : xs) = (quickSort less) ++ (x : equal) ++ (quickSort more)

where less = filter (< x) xs

equal = filter (== x) xs

more = filter (> x) xs

And we are done! I suppose if you have met quickSort before, you thought recursion was a neat trick but

hard to implement, as so many things need to be kept track of.

With quickSort at our disposal, sorting any list is a piece of cake. Suppose we have a list of Strings,

maybe from a dictionary, and we want to sort them: we just apply quickSort to the list. For the rest of this

chapter, we will use a pseudo-dictionary of words (but a 25,000 word dictionary should do the trick as well):
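The pseudo-dictionary itself is not shown. A list consistent with the later discussion (which mentions "a", "I", "have", "thing" and "Linux") might be, purely as an assumed stand-in:

```haskell
-- quickSort, condensed from the definition above
quickSort :: Ord a => [a] -> [a]
quickSort [] = []
quickSort (x:xs) =
  quickSort (filter (< x) xs) ++ (x : filter (== x) xs) ++ quickSort (filter (> x) xs)

dictionary :: [String]
dictionary = ["I", "have", "a", "thing", "for", "Linux"]

-- quickSort dictionary evaluates to ["I","Linux","a","for","have","thing"]
```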

But, what if we wanted to sort them in the descending order? Easy, just reverse the list, reverse

sortedDictionary gives us what we want.

But wait! We didn't really sort in the descending order, we sorted (in the ascending order) and reversed it.

They may have the same effect, but they are not the same thing!

Besides, you might object that the list you got isn't what you wanted. "a" should certainly be placed before

"I". "Linux" should be placed between "have" and "thing". What's the problem here?

The problem is that, in typical programming settings, Strings are represented as lists of ASCII

characters. ASCII (and almost all other encodings of characters) specifies that the character codes for capital

letters are less than those for lower case letters. Bummer. So "Z" is less than "a". We should do something about it.

Looks like we need a case insensitive quickSort as well. It might come in handy some day.

But, there's no way you can blend that into quickSort as it stands. We have work to do.

What we need to do is to factor out the comparisons quickSort makes. We need to provide quickSort

with a function that compares two elements, and gives an Ordering, and as you can imagine, an

Ordering is any of LT, EQ, GT.

To sort in the descending order, we supply quickSort with a function that returns the opposite of the

usual Ordering. For the case-insensitive sort, we may need to define the function ourselves. By all means,

we want to make quickSort applicable to all such functions so that we don't end up writing it over and

over again, each time with only minor changes.

So, forget the version of quickSort we have now, and let's think again.

Our quickSort will take two things this time: first, the comparison function, and second, the list to sort.

A comparison function will be a function that takes two things, say, x and y, and compares them. If x is less

than y (according to the criteria we want to implement by this function), then the value will be LT. If they are

equal (well, equal with respect to the comparison, we want "Linux" and "linux" to be equal when we are

dealing with the insensitive case), we will have EQ. The remaining case gives us GT (pronounced: greater

than, for obvious reasons).

-- the first two equations should not change
-- they need to accept the comparison function though
quickSort comparison [] = []
quickSort comparison [x] = [x]

-- but the changes are worth it!
quickSort comparison (x : xs) =
    (quickSort comparison less) ++ (x : equal) ++ (quickSort comparison more)
    where less  = filter (\y -> comparison y x == LT) xs
          equal = filter (\y -> comparison y x == EQ) xs
          more  = filter (\y -> comparison y x == GT) xs

Cool!

Note

Almost all the basic data types in Haskell are members of the Ord class. This class

defines an ordering, the "natural" one. The functions (or, operators, in this case) (<),

(<=) or (>) provide shortcuts to the compare function each type defines. When

we want to use the natural ordering as defined by the types themselves, the above

code can be written using those operators, as we did last time. In fact, that makes for

much clearer style; however, we wrote it the long way just to make the relationship

between sorting and comparing more evident.
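For instance, assuming a type in Ord, the following two definitions select exactly the same elements; the second just spells out what the first abbreviates:

```haskell
lessThan :: Ord a => a -> [a] -> [a]
lessThan x xs = filter (< x) xs

lessThan' :: Ord a => a -> [a] -> [a]
lessThan' x xs = filter (\y -> compare y x == LT) xs  -- the long way
```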

Reuse. We can reuse quickSort to serve different purposes.


-- uses the compare function from the Ord class

usual = compare

-- the descending ordering, note we flip the order of the arguments to compare

descending x y = compare y x

insensitive = ...

-- can you think of anything without making a very big list of all possible cases?

The comparison usual is just compare from the Ord class, so quickSort usual behaves like our quickSort before the tweaking:

quickSort usual dictionary

now gives

["I", "Linux", "a", "for", "have", "thing"]

And finally,

quickSort descending dictionary

gives

["thing", "have", "for", "a", "Linux", "I"]

Exercises

Write insensitive, such that quickSort insensitive dictionary

gives ["a", "for", "have", "I", "Linux", "thing"]


Our quickSort has type (a -> a -> Ordering) -> [a] -> [a].

Most of the time, the type of a higher-order function provides a good guideline about how to use it. A

straightforward way of reading the type signature would be, "quickSort takes a function that gives an

ordering of as, and a list of as, to give a list of as". It is then natural to guess that the function sorts the list

respecting the given ordering function.

Note that the parentheses surrounding a -> a -> Ordering are mandatory. They say that a -> a -> Ordering altogether forms a single argument, an argument that happens to be a function. What happens if we omit the parentheses? We would get a function of type a -> a -> Ordering -> [a] -> [a], which accepts four arguments instead of the desired two (a -> a -> Ordering and [a]). Furthermore, none of the four argument types a, Ordering and [a] is a function type, so omitting the parentheses would give us something that isn't a higher-order function.

It's also worth noting that the -> operator is right-associative, which means that a -> a -> Ordering -> [a] -> [a] means the same thing as a -> (a -> (Ordering -> ([a] -> [a]))). We really must insist that the a -> a -> Ordering be clumped together by writing those parentheses... but wait... if -> is right-associative, wouldn't that mean that the correct signature (a -> a -> Ordering) -> [a] -> [a] actually means... (a -> a -> Ordering) -> ([a] -> [a])?

If you think about it, we're trying to build a function that takes two arguments, a function and a list, returning

a list. Instead, what this type signature is telling us is that our function takes ONE argument (a function) and

returns another function. That is profoundly odd... but if you're lucky, it might also strike you as being

profoundly beautiful. Functions of multiple arguments are fundamentally the same thing as functions that

take one argument and give another function back. It's OK if you're not entirely convinced. We'll go into a

little bit more detail below and then show how something like this can be turned to our advantage.
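To see this at work with quickSort: supplying only the comparison function already gives back a useful function (the name descendingSort is an illustrative assumption):

```haskell
descendingSort :: Ord a => [a] -> [a]
descendingSort = quickSort descending  -- quickSort applied to just one of its two arguments
```

descendingSort dictionary then behaves exactly like quickSort descending dictionary.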

Currying

Intermediate Haskell

Modules

Modules

Haskell modules are a useful way to group a set of related functionalities into a single package and manage a

set of different functions that have the same name. The module definition is the first thing that goes in your

Haskell file.


Note that

2. The name of the module begins with a capital letter
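For illustration, a minimal module declaration might look like this (the module name here is an assumption):

```haskell
module YourModule where
```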

Importing

One thing your module can do is import functions from other modules. That is, in between the module

declaration and the rest of your code, you may include some import declarations such as

import Data.Char (toLower, toUpper)

import Data.List

import MyModule

Imported datatypes are specified by their name, followed by a list of imported constructors in parentheses.

For example:

-- import only the Tree data type, and its Node constructor from Data.Tree

import Data.Tree (Tree(Node))

Now what to do if you import some modules, but some of them have overlapping definitions? Or if you

import a module, but want to overwrite a function yourself? There are three ways to handle these cases:

Qualified imports, hiding definitions and renaming imports.

Qualified imports

Say MyModule and MyOtherModule both have a definition for remove_e, which removes all instances of

e from a string. However, MyModule only removes lower-case e's, and MyOtherModule removes both upper

and lower case. In this case the following code is ambiguous:

import MyModule

import MyOtherModule

-- someFunction puts a c in front of the text, and removes all e's from the rest

someFunction :: String -> String

someFunction text = 'c' : remove_e text

In this case, it isn't clear which remove_e is meant. To avoid this, use the qualified keyword:


import qualified MyModule
import qualified MyOtherModule

someFunction text = 'c' : MyModule.remove_e text -- Will work, removes lower case e's

someOtherFunction text = 'c' : MyOtherModule.remove_e text -- Will work, removes all e's

someIllegalFunction text = 'c' : remove_e text -- Won't work, remove_e isn't defined.

See the difference. In this case the function remove_e isn't even defined. We call the functions from the

imported modules by adding the module's name. Note that MyModule.remove_e also works if the

qualified flag isn't included. The difference lies in the fact that remove_e is ambiguously defined in the first

case, and undefined in the second case. If we have a remove_e defined in the current module, then using

remove_e without any prefix will call this function.

Note

There is an ambiguity between qualified names like MyModule.remove_e and function composition (.). Writing reverse.MyModule.remove_e is bound to confuse your Haskell compiler. One solution is stylistic: always use spaces for function composition, for example, reverse . remove_e or Just . remove_e or even Just . MyModule.remove_e

Hiding definitions

Now suppose we want to import both MyModule and MyOtherModule, but we know for sure we want to

remove all e's, not just the lower cased ones. It will become really tedious (and disorderly) to add

MyOtherModule before every call to remove_e. Can't we just not import remove_e from MyModule?

The answer is: yes we can.

import MyModule hiding (remove_e)

import MyOtherModule

This works. Why? Because of the word hiding on the import line. Following it is a list of functions that shouldn't be imported. Hiding more than one function works like this:
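A sketch of such an import, where remove_f stands for some second hypothetical function:

```haskell
import MyModule hiding (remove_e, remove_f)
```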

Note that algebraic datatypes and type synonyms cannot be hidden. These are always imported. If you have a datatype defined in multiple modules, you must use qualified imports.

Renaming imports

This is not really a technique to allow for overwriting, but it is often used along with the qualified flag. Imagine a module with a very long name, say MyModuleWithAVeryLongModuleName (a hypothetical name for illustration):

import qualified MyModuleWithAVeryLongModuleName

someFunction text = 'c' : MyModuleWithAVeryLongModuleName.remove_e text

Especially when using qualified, this gets irritating. What we can do about it is use the as keyword:

import qualified MyModuleWithAVeryLongModuleName as Shorty

someFunction text = 'c' : Shorty.remove_e text

This lets us use Shorty instead of the full module name as the prefix for the imported functions. As long as there are no ambiguous definitions, the following is also possible:

import MyModule as My

import MyCompletelyDifferentModule as My

In this case, both the functions in MyModule and the functions in MyCompletelyDifferentModule

can be prefixed with My.

Exporting

In the examples at the start of this article, the words "import everything exported from MyModule" were

used. This raises a question. How can we decide which functions are exported and which stay "internal"?

Here's how:
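A sketch of such a module; the export list is the part in parentheses, and the function bodies are illustrative assumptions:

```haskell
module MyModule (remove_e, add_two) where

add_one blah = blah + 1
remove_e text = filter (/= 'e') text
add_two blah = add_one . add_one $ blah
```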

In this case, only remove_e and add_two are exported. While add_two is allowed to make use of add_one, functions in modules that import MyModule aren't allowed to try to use add_one, as it isn't exported.

Datatype export specifications are written quite similarly to imports. You name the type, and follow with the list of constructors in parentheses:

module MyModule2 (Tree(Branch, Leaf)) where

data Tree a = Branch a (Tree a) (Tree a)
            | Leaf a

In this case, the module declaration could be rewritten "MyModule2 (Tree(..))", declaring that all constructors are exported.


Note: maintaining an export list is good practice not only because it reduces namespace pollution, but also

because it enables certain compile-time optimizations (http://www.haskell.org/haskellwiki/Performance/

GHC#Inlining) which are unavailable otherwise.

Notes

In Haskell98, the last standardised version of Haskell, the module system is fairly conservative. But recent

common practice consists of using a hierarchical module system, with periods to section off namespaces.

See the Haskell report for more details on the module system:

http://www.haskell.org/onlinereport/modules.html

Indentation

Haskell relies on indentation to reduce the verbosity of your code, but working with the indentation rules can

be a bit confusing. The rules may seem many and arbitrary, but the reality of things is that there are only one

or two layout rules, and all the seeming complexity and arbitrariness comes from how these rules interact

with your code. So to take the frustration out of indentation and layout, the simplest solution is to get a grip

on these rules.

Whilst the rest of this chapter will discuss in detail Haskell's indentation system, you will do fairly well if you

just remember a single rule:

Code which is part of some expression should be indented further in than the

line containing the beginning of that expression

What does that mean? The easiest example is a let binding group. The equations binding the variables are

part of the let expression, and so should be indented further in than the beginning of the binding group: the let

keyword. So,

let
 x = a
 y = b

Although you actually only need to indent by one extra space, it's more normal to place the first line

alongside the 'let' and indent the rest to line up:

let x = a
    y = b


do foo
   bar
   baz

where x = a
      y = b

case x of
   p  -> foo
   p' -> baz

Note that with 'case' it's less common to place the next expression on the same line as the beginning of the

expression, as with 'do' and 'where'. Also note we lined up the arrows here: this is purely aesthetic and isn't

counted as different layout; only indentation, whitespace beginning on the far-left edge, makes a difference

to layout. Things get more complicated when the beginning of the expression isn't right at the left-hand edge.

In this case, it's safe to just indent further than the beginning of the line containing the beginning of the

expression. So,

myFunction firstArgument secondArgument = do -- the 'do' isn't right at the left-hand edge
    foo                                      -- so indent these commands more than the
    bar                                      -- beginning of the line containing the 'do'.
    baz

Here are some alternative layouts to the above which would have also worked:

myFunction firstArgument secondArgument =
    do foo
       bar
       baz

myFunction firstArgument secondArgument = do foo
                                             bar
                                             baz

A mechanical translation

Did you know that layout (whitespace) is optional? It is entirely possible to treat Haskell as a one-dimensional language like C, using semicolons to separate things, and curly braces to group them back. It is sometimes useful to avoid layout or to mix it with semicolons and braces.

To understand layout, you need to understand two things: where we need semicolons/braces, and how to get there from layout. The entire layout process can be summed up in three translation rules:

1. If you see one of the layout keywords, (let, where, of, do), insert an

open curly brace (right before the stuff that follows it)

2. If you see something indented to the SAME level, insert a semicolon

3. If you see something indented LESS, insert a closing curly brace
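For example, applying the three rules to a small let expression (a sketch, assuming a and b are in scope):

```haskell
-- layout version
example = let x = a
              y = b
          in  x + y

-- after the three rules: { inserted before x, ; at the same indentation, } when indented less
example' = let { x = a
               ; y = b
               } in x + y
```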


Exercises

In one word, what happens if you see something indented MORE?

Exercises

Translate the following layout into curly braces and semicolons. Note: to

underscore the mechanical nature of this process, we deliberately chose something

which is probably not valid Haskell:

of a

b

c

d

where

a

b

c

do

you

like

the

way

i let myself

abuse

these

layout rules

Layout in action

Wrong:

do first thing
second thing
third thing

Right:

do first thing
   second thing
   third thing

do within if

What happens if we put a do expression with an if? Well, as we stated above, the keywords if then

else, and everything besides the 4 layout keywords do not affect layout. So things remain exactly the same:

Wrong:

if foo
   then do first thing
   second thing
   third thing
   else do something else

Right:

if foo
   then do first thing
           second thing
           third thing
   else do something else


Remember from the First Rule of Layout Translation (above) that although the keyword do tells Haskell to

insert a curly brace, where the curly brace goes depends not on the do, but on the thing that immediately

follows it. For example, this weird block of code is totally acceptable:

do
  first thing
  second thing
  third thing

As a result, you could also write combined if/do combination like this:

Wrong:

if foo
   then do
   first thing
   second thing
   third thing
   else do something else

Right:

if foo
   then do
      first thing
      second thing
      third thing
   else do something else

This is also the reason why you can write things like this

main = do
    first thing
    second thing

instead of

main =
    do first thing
       second thing

if within do

This is a combination which trips up many Haskell programmers. Why does the following block of code not

work?

do first thing
   if condition
   then foo
   else bar
   third thing

Just to reiterate, the if then else block is not at fault for this problem. Instead, the issue is that the do

block notices that the then part is indented to the same column as the if part, so it is not very happy,


because from its point of view, it just found a new statement of the block. It is as if you had written the

unsugared version on the right:

do first thing          do { first thing
   if condition         ; if condition
   then foo             ; then foo
   else bar             ; else bar
   third thing          ; third thing }

Naturally enough, your Haskell compiler is unimpressed, because it thinks that you never finished writing

your if expression, before charging off to write some other new statement, oh ye of little attention span.

Your compiler sees that you have written something like if condition;, which is clearly bad, because it

is unfinished. So, in order to fix this, we need to indent the bottom parts of this if block a little bit inwards

do first thing          do { first thing
   if condition         ; if condition
     then foo           then foo
     else bar           else bar
   third thing          ; third thing }

This little bit of indentation prevents the do block from misinterpreting your then as a brand new

expression.

Exercises

The if-within-do problem has tripped up so many Haskellers, that one programmer

has posted a proposal (http://hackage.haskell.org/trac/haskell-prime/ticket/23) to

the Haskell prime initiative to add optional semicolons between if then else.

How would that fix the problem?

References

The Haskell Report (lexemes) (http://www.haskell.org/onlinereport/lexemes.html#sect2.7) - see 2.7 on

layout

More on datatypes

Enumerations

One special case of the data declaration is the enumeration. This is simply a data type where none of the

constructor functions have any arguments:


data Month = January | February | March | April | May | June | July
           | August | September | October | November | December

You can mix constructors that do and do not have arguments, but it's only an enumeration if none of the

constructors have arguments. The section below on "Deriving" explains why the distinction is important. For

instance,

data Colour = Black | Red | Green | Blue | Cyan
            | Yellow | Magenta | White | RGB Int Int Int

deriving (Eq, Ord, Enum, Read, Show, Bounded)

Consider a datatype whose purpose is to hold configuration settings. Usually when you extract members

from this type, you really only care about one or possibly two of the many settings. Moreover, if many of the

settings have the same type, you might often find yourself wondering "wait, was this the fourth or fifth

element?" One thing you could do would be to write accessor functions. Consider the following made-up

configuration type for a terminal program:

data Configuration =
    Configuration String  -- user name
                  String  -- local host
                  String  -- remote host
                  Bool    -- is guest?
                  Bool    -- is super user?
                  String  -- current directory
                  String  -- home directory
                  Integer -- time connected
    deriving (Eq, Show)

You could then write accessor functions, like (I've only listed a few):

getUserName (Configuration un _ _ _ _ _ _ _) = un

getLocalHost (Configuration _ lh _ _ _ _ _ _) = lh

getRemoteHost (Configuration _ _ rh _ _ _ _ _) = rh

getIsGuest (Configuration _ _ _ ig _ _ _ _) = ig

...

You could also write update functions to update a single element. Of course, now if you add an element to

the configuration, or remove one, all of these functions now have to take a different number of arguments.

This is highly annoying and is an easy place for bugs to slip in. However, there's a solution. We simply give

names to the fields in the datatype declaration, as follows:


data Configuration =
    Configuration { username      :: String,
                    localhost     :: String,
                    remotehost    :: String,
                    isguest       :: Bool,
                    issuperuser   :: Bool,
                    currentdir    :: String,
                    homedir       :: String,
                    timeconnected :: Integer
                  }

This will automatically generate the following accessor functions for us:

localhost :: Configuration -> String

...

Moreover, it gives us very convenient update methods. Here is a short example of "post working directory" and "change directory"-style functions that work on Configurations:

changeDir cfg newDir =
    -- make sure the directory exists
    if directoryExists newDir
       then -- change our current directory
            cfg{currentdir = newDir}
       else error "directory does not exist"

-- retrieve our current directory
postWorkingDir cfg = currentdir cfg

So, in general, to update the field x in a datatype y to z, you write y{x=z}. You can change more than one;

each should be separated by commas, for instance, y{x=z, a=b, c=d}.
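For instance, with the Configuration type above (makeSuperuser is an illustrative name):

```haskell
makeSuperuser :: Configuration -> Configuration
makeSuperuser cfg = cfg{issuperuser = True}
```

This yields a copy of cfg in which only the issuperuser field has changed.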

You can of course continue to pattern match against Configurations as you did before. The named fields

are simply syntactic sugar; you can still write something like:

getUserName (Configuration un _ _ _ _ _ _ _) = un

But there is little reason to. Finally, you can pattern match against named fields as in:

getHostData (Configuration { localhost = lh, remotehost = rh })
    = (lh, rh)

This matches the variable lh against the localhost field on the Configuration and the variable rh

against the remotehost field on the Configuration. These matches of course succeed. You could also

constrain the matches by putting values instead of variable names in these positions, as you would for

standard datatypes.


You can create values of Configuration in the old way as shown in the first definition below, or in the

named-field syntax, as shown in the second definition below:

initCFG =
    Configuration "nobody" "nowhere" "nowhere"
                  False False "/" "/" 0

initCFG' =
    Configuration
        { username = "nobody",
          localhost = "nowhere",
          remotehost = "nowhere",
          isguest = False,
          issuperuser = False,
          currentdir = "/",
          homedir = "/",
          timeconnected = 0 }

Though the second is probably much more understandable unless you litter your code with comments.

Parameterised Types

Parameterised types are similar to "generic" or "template" types in other languages. A parameterised type

takes one or more type parameters. For example, the Standard Prelude type Maybe is defined as follows:

data Maybe a = Nothing | Just a

This says that the type Maybe takes a type parameter a. You can use this to declare, for example:

lookupBirthday :: [Anniversary] -> String -> Maybe Anniversary

The lookupBirthday function takes a list of birthday records and a string and returns a Maybe

Anniversary. Typically, our interpretation is that if it finds the name then it will return Just the

corresponding record, and otherwise, it will return Nothing.

You can parameterise type and newtype declarations in exactly the same way. Furthermore you can

combine parameterised types in arbitrary ways to construct new types.

We can also have more than one type parameter. An example of this is the Either type:

data Either a b = Left a | Right b

For example:

eitherExample :: Int -> Either Int String
eitherExample a | even a         = Left (a `div` 2)
                | a `mod` 3 == 0 = Right "three"
                | otherwise      = Right "neither two nor three"

otherFunction :: Int -> String
otherFunction a = case eitherExample a of
    Left c  -> show a ++ " is even, half of it is " ++ show c
    Right s -> show a ++ " is divisible by " ++ s ++ "."

In this example, when you call otherFunction, it'll return a String. If you give it an even number as

argument, it'll say so, and give half of it. If you give it anything else, eitherExample will determine if it's

divisible by three and pass it through to otherFunction.

Kind Errors

The flexibility of Haskell parameterised types can lead to errors in type declarations that are somewhat like

type errors, except that they occur in the type declarations rather than in the program proper. Errors in these

"types of types" are known as "kind" errors. You don't program with kinds: the compiler infers them for

itself. But if you get parameterised types wrong then the compiler will report a kind error.
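A minimal sketch of a declaration that triggers a kind error (Wrap is a made-up type): Maybe has kind * -> *, but a constructor field must have kind *, so the compiler rejects it:

```haskell
data Wrap = Wrap Maybe  -- kind error: Maybe needs a type argument here
```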

Trees

Now let's look at one of the most important datastructures: Trees. A tree is an example of a recursive

datatype. Typically, its definition will look like this:

data Tree a = Leaf a | Branch a (Tree a) (Tree a)

As you can see, it's parameterised, so we can have trees of Ints, trees of Strings, trees of Maybe Ints,

even trees of (Int, String) tuples, if you really want. What makes it special is that Tree appears in the

definition of itself. We will see how this works by using an already known example: the list.

Lists as Trees

Think about it. As we have seen in the List Processing chapter, we break lists down into two cases: An empty

list (denoted by []), and an element of the specified type, with another list (denoted by (x:xs)). This gives

us valuable insight about the definition of lists:

data [a] = [] | (:) a [a]

As you can see this is also recursive, like the tree we had. Here, the constructor functions are [] and (:).

They represent what we have called Leaf and Branch. We can use these in pattern matching, just as we

did with the empty list and the (x:xs):

We already know about maps and folds for lists. With our realisation that a list is some sort of tree, we can

try to write map and fold functions for our own type Tree. To recap:


data [a] = [] | (:) a [a]

-- (:) a [a] would be the same as (a:[a]) with prefix instead of infix notation.

Map

map _ [] = []

map f (x:xs) = f x : map f xs

First, if we were to write treeMap, what would its type be? Defining the function is easier if you have an

idea of what its type should be.

We want it to work on a Tree of some type, and it should return another Tree of some type. What

treeMap does is applying a function on each element of the tree, so we also need a function. In short:

treeMap :: (a -> b) -> Tree a -> Tree b

Next, we should start with the easiest case. When talking about a Tree, this is obviously the case of a Leaf.

A Leaf only contains a single value, so all we have to do is apply the function to that value and then return a

Leaf with the altered value:

treeMap f (Leaf x) = Leaf (f x)

Also, this looks a lot like the empty list case with map. Now what happens if we have a Branch. This will

include one value of type a, and two other trees. The function we take as argument can transform this value

of type a into a value of type b, but what about the two subtrees? When looking at the list-map, you can see it

uses a call to itself on the tail of the list. We also shall do that with the two subtrees. The complete definition

of treeMap is as follows:

treeMap f (Leaf x) = Leaf (f x)

treeMap f (Branch x firstSub secondSub) = Branch (f x) (treeMap f firstSub) (treeMap f secondSub)

If you don't understand it just now, re-read it. Especially the use of pattern matching may seem weird at first,

but it is essential to the use of datatypes. The most important thing to remember is that pattern matching

happens on constructor functions.

Fold


Now we've had the treeMap, let's try to write a treeFold. Again let's take a look at the definition of

foldr for lists, as those are easier to understand.

foldr f z [] = z

foldr f z (x:xs) = f x (foldr f z xs)

I'll use the same strategy to find a definition for treeFold as I did for treeMap. First, the type. What do

we want it to do? We need a tree of some type to transform into a value of some other type. This Tree a

fits nicely into the place of [a]. In case of a Leaf, we will want some replacement, and in case of a

Branch we'll need a function that combines a value of type a and two already folded trees into a value of

type b. This gives us the following idea for a type definition:

treeFold :: (a -> b -> b -> b) -> b -> Tree a -> b

The (a -> b -> b -> b) might look frightening, but remember: the 'a' is the single value in a Branch, the first

and second 'b' are the two subtrees, the third 'b' is the return type. Now, let's figure out what to do in case of

a Leaf. We had a separate 'b' in our type definition especially for that purpose, so let's use it here:

treeFold f z (Leaf x) = f x z z

This looks similar to foldr on lists except that we are applying f to the Leaf value x, and using z as fillers

for the two remaining parameters to f (remember that f takes 3 parameters altogether). Now for the Branch.

First look at foldr. What does it do? It applies the function to the two 'parts' of the list: the front element

and the folded version of the rest of the list. We have a function that works on three parameters. These are

our single value, and the folded versions of the two subtrees. Our full definition becomes:

treeFold f z (Leaf x) = f x z z

treeFold f z (Branch x firstSub secondSub) = f x (treeFold f z firstSub) (treeFold f z secondSub)

For examples of how these work, copy the Tree data definition and the treeMap and treeFold

functions to a Haskell file, along with the following:

--helper functions for treeFold. Here firstSub and secondSub are the already folded subtrees.

--a and b as in the treeFold definition

addTree :: Int -> Int -> Int -> Int -- a = Int and b = Int

addTree x firstSub secondSub = x + firstSub + secondSub

treeConcat x firstSub secondSub = x : (firstSub ++ secondSub)

tree1 =
    Branch 1
        (Branch 3
            (Branch 5
                (Leaf 7)
                (Leaf 4))  -- this subtree was lost in printing; any leaf restores a well-formed tree
            (Branch 2
                (Leaf 3)
                (Branch 7 (Leaf 9) (Leaf 2))))
        (Branch 1
            (Branch 8 (Leaf 2) (Leaf 1))
            (Leaf 5))

add1Tree = treeMap (+1)

addTreeElements = treeFold addTree 0

treeToList = treeFold treeConcat []

add1Tree tree1

addTreeElements tree1

treeToList tree1

Other datatypes

Now, despite what the chapter about trees might suggest, folds and maps aren't tree-only. They are very useful for

any kind of data type. Let's look at the following, somewhat weird, type:

data Weird a b =

First a |

Second b |

Third [(a,b)] |

Fourth (Weird a b)

There's no way you would use this in a program you write yourself, but it demonstrates how folds and maps

are really constructed.

General Map

Again, we start with weirdMap. Now, unlike before, this Weird type has two parameters. This means that

we can't just use one function (as was the case for lists and Tree), but we need more. For every parameter,

we need one function. The type of weirdMap will be:

weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d

Read it again, and it makes sense. Maps don't throw away the structure of a datatype, so if we start with a

Weird thing, the output is also a Weird thing. Now we have to split it up into patterns. Remember that

these patterns are the constructor functions. To avoid having to type the names of the functions again and

again, I use a where clause:

weirdMap fa fb = weirdMap'
    where
        weirdMap' (First a)  = --More to follow
        weirdMap' (Second b) = --More to follow
        weirdMap' (Third t)  = --More to follow
        weirdMap' (Fourth w) = --More to follow

It isn't very hard to find the definition for the First and Second constructors. The list of (a,b) tuples is

harder. The Fourth is even recursive!

Remember that a map preserves structure. This is important. That means, a list of tuples stays a list of tuples.

Only the types are changed in some way or another. You might have already guessed what we should do with

the list of tuples. We need to make another list, of which the elements are tuples. This might sound silly to

repeat, but it becomes clear that we first have to change individual elements into other tuples, and then add

them to a list. Together with the First and Second constructors, we get:

weirdMap fa fb = weirdMap'
    where
        weirdMap' (First a)  = First (fa a)
        weirdMap' (Second b) = Second (fb b)
        weirdMap' (Third ((a,b):xs)) = Third ( (fa a, fb b) : weirdMap' (Third xs) )
        weirdMap' (Fourth w) = --More to follow

First we change (a,b) into (fa a, fb b). Next we need the mapped version of the rest of the list to add to it.

Since we don't know a function for a list of (a,b), we must change it back to a Weird value, by adding

Third. This isn't really stylish, though, as we first "unwrap" the Weird package, and then pack it back in.

This can be changed into a more elegant solution, in which we don't even have to break list elements into

tuples!

Remember we already had a function to change a list of some type into another list, of a different type? Yup,

it's our good old map function for lists. Now what if the first type was, say (a,b), and the second type

(c,d)? That seems usable. Now we must think about the function we're mapping over the list. We have

already found it in the above definition: It's the function that sends (a,b) to (fa a, fb b). To write it

in the Lambda Notation: \(a, b) -> (fa a, fb b).

weirdMap fa fb = weirdMap'

where

weirdMap' (First a) = First (fa a)

weirdMap' (Second b) = Second (fb b)

weirdMap' (Third list) = Third ( map (\(a, b) -> (fa a, fb b) ) list)

weirdMap' (Fourth w) = --More to follow

That's it! We only have to match the list once, and call the list-map function on it. Now for the Fourth

constructor. This is actually really easy: just weirdMap it again!

weirdMap fa fb = weirdMap'

where

weirdMap' (First a) = First (fa a)

weirdMap' (Second b) = Second (fb b)

weirdMap' (Third list) = Third ( map (\(a, b) -> (fa a, fb b) ) list)

weirdMap' (Fourth w) = Fourth (weirdMap' w)
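To check that all the pieces really fit together, here is the whole thing as a self-contained, runnable sketch. The Weird datatype was introduced earlier in the chapter; it is restated here (with derived Show and Eq, an addition of ours) so the example stands alone:

```haskell
-- The Weird datatype from earlier in the chapter, restated so this
-- example compiles on its own:
data Weird a b = First a
               | Second b
               | Third [(a, b)]
               | Fourth (Weird a b)
               deriving (Show, Eq)

weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
weirdMap fa fb = weirdMap'
  where
    weirdMap' (First  a)    = First  (fa a)
    weirdMap' (Second b)    = Second (fb b)
    weirdMap' (Third  list) = Third (map (\(a, b) -> (fa a, fb b)) list)
    weirdMap' (Fourth w)    = Fourth (weirdMap' w)

main :: IO ()
main = print (weirdMap (+1) not (Fourth (Third [(1, True), (2, False)])))
-- prints: Fourth (Third [(2,False),(3,True)])
```

Note how the structure survives: a Fourth stays a Fourth, the list of tuples stays a list of tuples; only the values inside change.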

General Fold

Where we were able to define a map, by giving it a function for every separate type, this isn't enough for a

fold. For a fold, we'll need a function for every constructor function. This is also the case with lists!

Remember the constructors of a list are [] and (:). The 'z'-argument in the foldr function corresponds to

the []-constructor. The 'f'-argument in the foldr function corresponds to the (:) constructor. The Weird

datatype has four constructors, so we need four functions. Next, we have a parameter of the Weird a b

type, and we want to end up with some other type of value. Even more specific: the return type of each

individual function we pass to weirdFold will be the return type of weirdFold itself.

weirdFold :: (something1 -> c) -> (something2 -> c) -> (something3 -> c) -> (something4 -> c) ->

Weird a b -> c

This in itself won't work. We still need the types of something1, something2, something3 and

something4. But since we know the constructors, this won't be much of a problem. Let's first write down

a sketch for our definition. Again, I use a where clause, so I don't have to write the four functions all the time.

weirdFold :: (something1 -> c) -> (something2 -> c) -> (something3 -> c) -> (something4 -> c) ->

Weird a b -> c

weirdFold f1 f2 f3 f4 = weirdFold'

where

weirdFold' (First a) = --Something of type c here

weirdFold' (Second b) = --Something of type c here

weirdFold' (Third list) = --Something of type c here

weirdFold' (Fourth w) = --Something of type c here

Again, the types and definitions of the first two functions are easy to find. The third one isn't very difficult

either, as it's just some other combination with 'a' and 'b'. The fourth one, however, is recursive, and we have

to watch out. As in the case of weirdMap, we also need to recursively use the weirdFold function here.

This brings us to the following, final, definition:

weirdFold :: (a -> c) -> (b -> c) -> ([(a,b)] -> c) -> (c -> c) -> Weird a b -> c

weirdFold f1 f2 f3 f4 = weirdFold'

where

weirdFold' (First a) = f1 a

weirdFold' (Second b) = f2 b

weirdFold' (Third list) = f3 list

weirdFold' (Fourth w) = f4 (weirdFold f1 f2 f3 f4 w)

The hardest part, choosing suitable functions f1, f2, f3 and f4 when actually calling weirdFold, is left out here.
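Here is one possible choice of f1 through f4, as a self-contained sketch. The function name depth is ours, invented for this illustration: it uses the fold to count how many Fourth wrappers surround a value.

```haskell
data Weird a b = First a
               | Second b
               | Third [(a, b)]
               | Fourth (Weird a b)

weirdFold :: (a -> c) -> (b -> c) -> ([(a, b)] -> c) -> (c -> c) -> Weird a b -> c
weirdFold f1 f2 f3 f4 = weirdFold'
  where
    weirdFold' (First a)    = f1 a
    weirdFold' (Second b)   = f2 b
    weirdFold' (Third list) = f3 list
    weirdFold' (Fourth w)   = f4 (weirdFold f1 f2 f3 f4 w)

-- depth is a made-up example: the three non-recursive constructors
-- contribute 0, and every Fourth wrapper adds 1.
depth :: Weird a b -> Int
depth = weirdFold (const 0) (const 0) (const 0) (+1)

main :: IO ()
main = print (depth (Fourth (Fourth (First 'x'))))
-- prints: 2
```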

Since the Weird a b datatype doesn't contain much recursion, here is some help for even

weirder things.

Weird was a fairly nice datatype. Just one recursive constructor, which isn't even nested inside other

structures. What would happen if we added a fifth constructor?

Fifth [Weird a b] a (Weird a a, Maybe (Weird a b))

A function to be supplied to a fold has the same number of arguments as the corresponding

constructor.

The type of such a function is the same as the type of the constructor, except that every occurrence of

the type the constructor belongs to is replaced by the result type of the fold.

If a constructor is recursive, the complete fold function should be applied to the recursive part.

If a recursive instance appears inside another structure, the appropriate map function should be used.

weirdFold' (Fifth list a (waa, maybe)) = f5 (map (weirdFold f1 f2 f3 f4 f5) list) a (waa,

maybeMap (weirdFold f1 f2 f3 f4 f5) maybe)

where

maybeMap f Nothing = Nothing

maybeMap f (Just w) = Just (f w)

Now note that nothing strange happens with the Weird a a part. No weirdFold gets called. What's up?

This is a recursion, right? Well... not really. Weird a a is a different type from Weird a b, so it isn't a

real recursion. It isn't guaranteed that, for example, f2 will work with something of type 'a', where it expects

a type 'b'. It can be true for some cases, but not for everything.

Also look at the definition of maybeMap. Verify that it is indeed a map function:

It preserves structure.

Only types are changed.

Class declarations

Type classes are a way of ensuring that you have certain operations defined on your inputs. For example, if you

know that a certain type is an instance of the class Fractional, then you can find its reciprocal.

Note

For programmers coming from C++, Java and other object-oriented languages: the

concept of "class" in Haskell is not the same as in OO languages. There are just

enough similarities to cause confusion, but not enough to let you reason by analogy

with what you already know. When you work through this section try to forget

everything you already know about classes and subtyping. It might help to mentally

substitute the word "group" (or "interface") for "class" when reading this section.

Java programmers in particular may find it useful to think of Haskell classes as being

akin to Java interfaces.

Introduction

Haskell has several numeric types, including Int, Integer and Float. You can add any two numbers of

the same type together, but not numbers of different types. You can also compare two numbers of the same

type for equality. You can also compare two values of type Bool for equality, but you cannot add them

together.

The Haskell type system expresses these rules using classes. A class is a template for types: it specifies the

operations that the types must support. A type is said to be an "instance" of a class if it supports these

operations.

For instance, here is the definition of the "Eq" class from the Standard Prelude. It defines the == and /=

functions.

class Eq a where

(==), (/=) :: a -> a -> Bool

-- Minimal complete definition: (==) or (/=)

x /= y = not (x == y)

x == y = not (x /= y)

This says that a type a is an instance of Eq if it supports these two functions. It also gives default definitions

of the functions in terms of each other. This means that if an instance of Eq defines one of these functions

then the other one will be defined automatically.

data Foo = Foo {x :: Integer, str :: String}

instance Eq Foo where

   (Foo x1 str1) == (Foo x2 str2) =

      (x1 == x2) && (str1 == str2)

The class Eq is defined in the standard prelude. This code sample defines the type Foo and then

declares it to be an instance of Eq. The three definitions (class, data type and instance) are completely

separate and there is no rule about how they are grouped. You could just as easily create a new class

Bar and then declare the type Integer to be an instance of it.

Types and classes are not the same thing. A class is a "template" for types. Again this is unlike most

OO languages, where a class is also itself a type.

The definition of == depends on the fact that Integer and String are also members of Eq. In fact

almost all types in Haskell (the most notable exception being functions) are members of Eq.

You can only declare types to be instances of a class if they were defined with data or newtype.

Type synonyms are not allowed.

Deriving

Obviously most of the data types you create in any real program should be members of Eq, and for that

matter a lot of them will also be members of other Standard Prelude classes such as Ord and Show. This

would require large amounts of boilerplate for every new type, so Haskell has a convenient way to declare

the "obvious" instance definitions using the keyword deriving. Using it, Foo would be written as:

data Foo = Foo {x :: Integer, str :: String}

   deriving (Eq, Ord, Show)

This makes Foo an instance of Eq with exactly the same definition of == as before, and also makes it an

instance of Ord and Show for good measure. If you are only deriving from one class then you can omit the

parentheses around its name, e.g.:

deriving Eq

You can only use deriving with a limited set of built-in classes. They are:

Eq

Equality operators == and /=

Ord

Comparison operators < <= > >=. Also min and max.

Enum

For enumerations only. Allows the use of list syntax such as [Blue .. Green].

Bounded

Also for enumerations, but can also be used on types that have only one constructor. Provides

minBound and maxBound, the lowest and highest values that the type can take.

Show

Defines the function show (note the letter case of the class and function names) which converts the

type to a string. Also defines some other functions that will be described later.

Read

Defines the function read which parses a string into a value of the type. As with Show it also defines

some other functions as well.
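A small runnable sketch tying these derivable classes together (Colour is a made-up example type):

```haskell
-- A simple enumeration deriving all six classes listed above.
data Colour = Red | Green | Blue
    deriving (Eq, Ord, Enum, Bounded, Show, Read)

main :: IO ()
main = do
    print [Red ..]                   -- list syntax, thanks to Enum
    print (minBound :: Colour)       -- thanks to Bounded
    print (Red < Blue)               -- thanks to Ord
    print (read "Green" :: Colour)   -- thanks to Read
```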

The precise rules for deriving the relevant functions are given in the language report. However they can

generally be relied upon to be the "right thing" for most cases. The types of elements inside the data type

must also be instances of the class you are deriving.

This provision of special magic for a limited set of predefined classes goes against the general Haskell

philosophy that "built in things are not special". However it does save a lot of typing. Experimental work

with Template Haskell is looking at how this magic, or something like it, can be extended to all classes.

Class Inheritance

Classes can inherit from other classes. For example, here is the definition of the class Ord from the Standard

Prelude, for types that have comparison operators:

class (Eq a) => Ord a where

   compare :: a -> a -> Ordering

   (<), (<=), (>=), (>) :: a -> a -> Bool

   max, min :: a -> a -> a

The actual definition is rather longer and includes default implementations for most of the functions. The

point here is that Ord inherits from Eq. This is indicated by the => symbol in the first line. It says that any

type that is an instance of Ord is also an instance of Eq, and hence must also implement the == and /=

operations.

A class can inherit from several other classes: just put all the ancestor classes in the parentheses before the

=>. Strictly speaking those parentheses can be omitted for a single ancestor, but including them acts as a

visual prompt that this is not the class being defined and hence makes for easier reading.
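As a sketch of inheriting from several ancestors at once: the class, its method, and the default definition below are all invented for this illustration (none of them are in the Standard Prelude).

```haskell
-- Any instance of Describable must also be an instance of both Eq
-- and Show, because both appear before the =>.
class (Eq a, Show a) => Describable a where
    describe :: a -> String
    describe x = "value: " ++ show x   -- default definition

data Size = Small | Large deriving (Eq, Show)

-- An empty instance declaration: the default describe is good enough.
instance Describable Size

main :: IO ()
main = putStrLn (describe Small)
-- prints: value: Small
```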

Standard Classes

This diagram, copied from the Haskell Report, shows the relationships between the classes and types in the

Standard Prelude. The names in bold are the classes. The non-bold text are the types that are instances of

each class. The (->) refers to functions and the [] refers to lists.

Simple Type Constraints

So far we have seen how to declare classes, how to declare types, and how to declare that types are instances

of classes. But there is something missing. How do we declare the type of a simple arithmetic function?

plus x y = x + y

Obviously x and y must be of the same type because you can't add different numbers together. So how about:

plus :: a -> a -> a

which says that plus takes two values and returns a new value, and all three values are of the same type.

But there is a problem: the arguments to plus need to be of a type that supports addition. Instances of the

class Num support addition, so we need to limit the type signature to just that class. The syntax for this is:

plus :: (Num a) => a -> a -> a

This says that the type of the arguments to plus must be an instance of Num, which is what we want.

You can put several limits into a type signature like this:

foo :: (Num a, Show a, Show b) => a -> a -> b -> String

foo x y t =

show x ++ " plus " ++ show y ++ " is " ++ show (x+y) ++ ". " ++ show t

This says that the arguments x and y must be of the same type, and that type must be an instance of both

Num and Show. Furthermore the final argument t must be of some (possibly different) type that is also an

instance of Show.

You can omit the parentheses for a single constraint, but they are required for multiple constraints. Actually it

is common practice to put even single constraints in parentheses because it makes things easier to read.

You can put a type constraint in almost any type declaration. The only exception is a type synonym

declaration. The following is not legal:

type (Num a) => Foo a = a -> a -> a

But you can say:

data (Num a) => Foo a = F1 a | F2 Integer

This declares a type Foo with two constructors. F1 takes any numeric type, while F2 takes an integer.

You can also use type constraints in newtype and instance declarations. Class inheritance (see the

previous section) also uses the same syntax.

Monads

Understanding monads

Introduction

Haskell is a pure functional language. That means that nothing is allowed to have a side effect. However this

is a bit of a problem if we want to do something involving side effects.

Most languages are "imperative": they have no problem with side effects. Part of their programming model is a

"flow of control". A statement which has a side effect can change the result of something that happens

further along in the flow of control. But because Haskell has no side effects it has no concept of flow of

control either.

What we need is some way to capture the pattern "do X and then do Y, where Y may be affected by X".

Monads are the way we do this. It may seem odd to have to do all this work just to do what imperative

languages do automatically, but there is an important difference. An imperative language can provide only

one method of flow control and side-effect propagation. In fact almost all imperative languages do this

exactly the same way. Haskell provides this model of side effect propagation as a special case, called the IO

monad. But others are possible:

The Prolog language provides a different approach to side effect propagation. Prolog tries to find a

combination of values under which a predicate evaluates to True. When it meets an expression that

evaluates to False it backs up and tries a different value. This backtracking makes Prolog great for

logic problems but lousy for anything else.

Parser generators such as YACC execute code when a grammar clause is recognised. The outputs from

sub-clauses are passed to outer clauses automatically.

The great thing about Haskell is that you can create your own monads. That means you can create your own

rules for how side effects propagate from one statement to the next, and then mix and match those rules to

suit the particular bit of the problem you are working on. If you are writing a parser then use the Parser

monad. If you are solving a logic problem then use the List monad. And if you are talking to something

outside the program then use the IO monad. In a sense, each monad is its own little minilanguage specially

suited for its particular task.

dollar

i (h (g (f x)))

Pretty ugly, isn't it? Fortunately, the Haskell library has a very handy operator called dollar, or actually ($),

which allows us to rewrite the same code in a more readable manner:

i $ h $ g $ f x

One could almost think of this as piping x through the functions f, g, h and i. Implementing ($) (http://

www.haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v%3A%24) is just a simple matter of

function application. Here's the implementation in one line of Haskell (two if you count the type signature):

f $ x = f x

Note

This definition is not quite complete. We also need (infixr 0 $) to specify that

it is right-associative with very low precedence. Also, if you're not convinced that

dollar is a useful thing to have, compare:

i ((h z) ((g y) (f x))) vs.

i $ h z $ g y $ f x
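A quick runnable check of the claim above; shout and shout' are throwaway names, and both definitions compute the same thing, the second just trading nested parentheses for ($):

```haskell
import Data.Char (toUpper)

shout, shout' :: String -> String
shout  s = map toUpper (reverse s)   -- with parentheses
shout' s = map toUpper $ reverse s   -- with ($)

main :: IO ()
main = putStrLn (shout' "hello")
-- prints: OLLEH
```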

euro

The dollar operator allows us to remove a certain number of parentheses from our code, often adding clarity.

One thing which might make it even more intuitive is if it worked backwards. Say we wrote an operator

called euro that does exactly the same thing as dollar, but with the arguments flipped around.

x € f = f x

N.B.: the euro symbol isn't valid Haskell... if you want to try this, use (|>) as an operator instead

i $ h $ g $ f x

This is what the same example would look like using euro:

f x € g € h € i

This example should look vaguely familiar to programmers with experience in imperative languages like

Java. To drive the point home, we could even write the example above over multiple lines:

f x €

g €

h €

i

One could almost think of these euros as being the semicolons from C or Java. It's not entirely the same,

because here we have the concept of doing f and then doing g, but we don't have any way for f to affect

what g does apart from the data it explicitly passes. It's actually closer to Unix pipes than anything else.

Nevertheless, this notion of "sequencing" is basically 1/3 of the story behind monads.
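Since the euro symbol isn't legal in an operator name, here is the same idea as a runnable sketch using (|>), the spelling suggested earlier:

```haskell
-- Our stand-in for the hypothetical euro operator: flipped application.
infixl 0 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = print ("hello" |> reverse |> length)
-- prints: 5
```

Reading left to right, the value is piped through each function in turn, much like a Unix pipeline.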

We've got sequencing down and would like to go further. In this section, we'll see a bit more what it means

by "going further", that is, what we're really trying to accomplish and how it relates to the euro operator.

This is also where things take a turn for the different, as the rest of this chapter will be infused with a rather

heavy dose of metaphor.

Imagine that we are working in a giant factory which is in the business of treating large amounts of nuclear

waste. Scattered throughout our factory are a bunch of waste processors, machines for treating the waste at

various stages of "production". A waste processor is just a metaphor for a function: it takes nuclear waste in,

and spits nuclear waste out.

Keep in mind that there is a huge variety of waste to deal with in our factory. There is also a huge variety of

waste processing machines to treat it, but each machine is highly specialised, or typed. Each machine is

custom-built to accept one type of waste input and produce exactly one type of output.

Up to here, we have not done anything unusual. We have simply provided a new metaphor for functions in

Haskell. But let's take a short breath. Here are the things we are manipulating so far:

1. nuclear waste (values)

2. waste processors (functions)

One thing we would really like to do is somehow connect our machines together to form a single assembly

line: you insert some waste into one machine, and whatever comes out, you feed directly into the next. The

problem is that this is nuclear waste that we're dealing with, and just running around with large quantities of

waste would cause our workers to get radiation poisoning and die.

So we need a solution that isolates the workers from the materials they are working with.

Use a container

The first thing we're going to do is simplify matters by only connecting together machines from our deluxe

ultra-modern line. What makes these ultra-modern machines so special is that they pack the outgoing, treated

nuclear waste into a special container, thus making the waste much safer to handle.

Of course, ideally, we'd be able to make use of all the machines in our shop, but let us concentrate on the

newest machines first and eventually expand to the rest of the factory. You'll note that the new machines all

have a very similar type signature, something like a -> m b. What this signature means is that they take

something of type a and return something of type b, but since we're dealing with nasty radioactive waste, we

pack the stuff into a container m. As we shall discover later in this tutorial, the containers have many

interesting uses in the real world -- far beyond our artificial concerns of workers and radiation poisoning. It is

going to take us a while to get there though, so let us continue slowly working our way up.

bind (>>=)

As we saw above, our job is to connect processing machines together, that is, to send nuclear waste from one

processing machine to another. We've accomplished half of this job by -- for now -- concentrating only on

processing machines which output the waste in a container.

The only problem is that processing machines do not accept containers as inputs, they accept nuclear waste!

We could decide to restrict ourselves to machines which accept containers instead of raw nuclear waste, but

as it turns out, there is a more elegant solution.

What we're going to do is create a kind of robot that takes a container and waste processor, removes the

waste from its container and feeds the waste into the processor. This robot shall be called bind informally

but will be written in Haskell as >>=. This is roughly what the bind robot would do:

container >>= fn =

let a = extractWaste container

in fn a

1. It takes a container (m a) and extracts the waste from it.

2. It takes a processing machine (a -> m b).

3. After unpacking the container and feeding the waste in, it sends out whatever the waste processor

produces. Because the type is (m b) this must also be in a waste container: bind must not be used

with machines that output the waste without a container. We can get around this by having another

machine that straps on to any processing machine and puts the output in a container. Call this

putInContainer.

To be precise, bind sometimes does do something to the container that the processor sends out, but this detail does not matter right now.

Remember the euro operator from early on in this tutorial? Well, bind pretty much serves the same

purpose; with the exception being that it handles all this business of removing nuclear waste from containers.

But the idea remains the same. Using the bind robot, we can chain together various processing machines

much in the same way that you would use euro in a non-monadic context.

To illustrate this idea, here is an example of three waste processors connected together by bind. They all

take nuclear waste and return containers, and the bind operator simply feeds the output of one processing

machine into another.

wasteInAContainer >>=

(\a1 -> putInContainer (decompose a1)) >>=

(\a2 -> putInContainer (decay a2)) >>=

(\a3 -> putInContainer (melt a3))

So now we have the idea that the bind robot is used to connect the output of one processing machine (waste

in a container) into the input of another processing machine. Notice that because of the type of bind the

resulting chain of machines must always output waste in a container, so we can wrap up the chain of

machines in a single big box and treat it as a single machine. This is a very important property because it

means that we can construct arbitrarily complex machines.
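Here is a runnable rendition of the chain above, taking Maybe as the container type. The machines decompose, decay and melt are invented stand-ins; here they just transform an Int:

```haskell
-- Three hypothetical "old-school" processing machines:
decompose, decay, melt :: Int -> Int
decompose = (+ 1)
decay     = (* 2)
melt      = subtract 3

-- Straps onto a machine's output and boxes it up (here: with Just).
putInContainer :: a -> Maybe a
putInContainer = Just

main :: IO ()
main = print (Just 10 >>=
              (\a1 -> putInContainer (decompose a1)) >>=
              (\a2 -> putInContainer (decay a2)) >>=
              (\a3 -> putInContainer (melt a3)))
-- prints: Just 19
```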

So we have a notion of composing machines together via the bind robot, but let's take a small step back and

look at the bigger picture. Did you ever consider that there can be different kinds of factories for treating the

same waste? What's interesting about this is that the way the bind robot works -- the way it is implemented --

depends on each factory.

In the upcoming section, I will discuss two simple factories, Maybe and List, and show what the

corresponding bind robot looks like.

But first, let us quickly review the list of metaphors we are manipulating so far, just to make sure we are all

on the same page:

1. waste processors (functions)

2. nuclear waste (inputs)

3. containers (monadic values)

4. the bind robot >>=

5. factories (monads)

Note: to understand this section, it really helps to have used the Maybe datatype in Haskell.

The Maybe monad is one of the simplest monads you can show that does something interesting. In a Maybe

factory, the bind robot looks something like this:

container >>= fn =

case container of

Nothing -> Nothing

Just a -> fn a

So what's the story here? There are two kinds of containers to be used in Maybe-land: those that contain

nothing and those that contain a piece of waste. If bind receives a container with nothing in it, well there isn't

much to do, we just return an empty container as well. If, however, there was a piece of waste, well then we

use pattern matching on Just (i.e. Just a) to extract the waste from the container, and then we feed it to the

processing machine fn. Since fn must be of type a -> Maybe b, its result comes back safely boxed in a container.

Now remember, the processing machines we're interested in all output nuclear waste in containers, so as far

as types are concerned, everything fits together: either we return Nothing (which is a container) or we

return whatever fn a returns, which also is a container.
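To see this bind in action, here is a small sketch; halve is an invented processing machine that only "succeeds" on even numbers, producing an empty container otherwise:

```haskell
-- A processing machine in the Maybe factory: fails on odd input.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
    print (Just 8 >>= halve >>= halve)   -- succeeds twice
    print (Just 6 >>= halve >>= halve)   -- the second halve gets 3 and fails
```

Once a Nothing appears anywhere in the chain, it propagates to the end without any explicit checking on our part.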

Another simple monad to work with is [] (List). Here's what the bind robot looks like in a [] (List)

factory:

container >>= fn =

case container of

[] -> []

xs -> concat (map fn xs)

This largely resembles the Maybe monad: except this time, we can either have an empty container or some

kind of multi-component container that holds several pieces of nuclear waste (all of the same type, of

course). If we get an empty container, we just return an empty container.

If we have several pieces of nuclear waste in the container, then we have to individually feed each one of

these pieces into the processing machine. This gives us a bunch of containers, which we then have to merge

(concat) into one single container so that all the types fit together and everything continues working

smoothly.
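A tiny sketch of this multi-component behaviour; duplicate is a made-up machine that produces a two-element container for each piece of waste it receives:

```haskell
-- Each element of the container is fed to the machine individually;
-- the resulting containers are then concatenated into one.
duplicate :: Int -> [Int]
duplicate x = [x, x * 10]

main :: IO ()
main = do
    print ([1, 2, 3] >>= duplicate)
    print (([] :: [Int]) >>= duplicate)
```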

If everything up to here has been easy, despite the tortured metaphors, we are now in an excellent position

because we have understood the essence of how monads work. The next thing we will need to

concentrate on is figuring out how to do something truly useful with them, something more substantial than

manipulating Maybe and List.

Return

But before going further, it's time to revisit some of our simplifying assumptions. Way back in the beginning, we

decided to focus only on the fancy next-generation machines which output their waste in a container. But

what about all the old machines, perfectly good machines that we can't afford to retrofit for container-output

capability?

These processing machines can be helped with a little robot called return, whose only job is to take raw

nuclear waste and put it into containers. Having the return robot would let us bring all these old-school

processing machines in line with the rest of our factory. We just have to call return on them.

return :: a -> m a

return a = Just a -- in the Maybe factory

return a = [a] -- in the [] (List) factory

In our earlier examples, we used an imaginary function called putInContainer. In fact, this is exactly the

return robot we've just shown you. When you are working with monads, some processing machines require

you to use the return function to wrap the output waste in a container.
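As a sketch of retrofitting an old machine: double below is an invented container-less machine, and composing it with return brings it in line with the container-outputting ones, so bind can chain it:

```haskell
-- An "old-school" machine: raw waste in, raw waste out, no container.
double :: Int -> Int
double = (* 2)

main :: IO ()
main = do
    -- (return . double) is the retrofitted, container-outputting version;
    -- the same retrofit works in any factory:
    print (Just 5 >>= (return . double))   -- Maybe factory
    print ([1, 2] >>= (return . double))   -- List factory
```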

Why bother?

return puts nuclear waste in containers, and bind takes them back out. It might seem somewhat ludicrous

that we're going through all of this machinery only to cancel ourselves out. Why bother? One reason is that

some processing machines have their monad compatibility built-in. They don't need some special function

like putInContainer or some robot like return because returning containers is part of their raison

d'être. You can recognise these processing machines by their type, because they always return a monadic

value. The putStr function, for example, returns IO (), which is simply an IO container with a waste of

type () inside. So one justification for all this monadic stuff is that it lets us handle these fancy new

processing machines in an elegant manner. If connecting the older container-less processing machines

together was the only issue, we could have just used something simpler, like dollar or euro.

There are also many other reasons, for example, keeping our factories nice and tidy.

Being able to construct a daisy chain of waste processing machines is all very well and good. But how do we

deal with side effects? Let's have a look at the State monad and see. The State monad is where things start

to get really useful, but it is also where they start to get a little crazy. No matter what happens here, it is

useful to keep in mind that we're still always doing the same thing, building a bind robot which takes a

container, takes a processing machine, extracts the waste from the container, feeds it into the processing

machine, and sends out whatever the processing machine produces.

A State monad is useful for passing information around at the same time we run our functions. The tricky

thing here is that in a State factory, the container is itself a function:

return a = \st -> (a, st)

This looks a little exotic, but we can reassure ourselves that it's really more of the same thing by comparing

the implementations of the other return functions:

return a = Just a -- Maybe

return a = [a] -- List

See, nothing special. With Maybe, we return a maybe, with List, we return a list, and with State, well,

we return a function.

To continue abusing the nuclear waste metaphor, we can say containers in the State factory are very

sophisticated: they all have a ticket reader, and when you feed a ticket (st) into the container, it opens up to

reveal a piece of waste and a new ticket (a, st).

This new ticket can be seen as a receipt. Now in the case of return, the container trivially reveals the same

ticket that it was fed, but other containers might not do the same. In fact, that is the whole point! Tickets are

used to represent some kind of state (say some kind of routing information), and this mechanism of taking

tickets in and spitting receipt tickets out is how we pass state information from one processing machine into

another.

Under the State monad, the bind robot implements what one might call a bureaucratic mess. But remember,

it's doing exactly the same thing as all the other bind robots, just under the conditions of the State factory.

container >>= fn =

\st -> let (a, st2) = container st

container2 = fn a

in container2 st2

Experienced Haskellers and other observant readers might notice that we're slightly fudging it with the types! Please bear with us, we'll

fess up with the details later!

To start things off, don't worry about the \st. Just imagine that somehow magically, we have a ticket st.

This is very fortunate, because the only way we're going to get our sophisticated State containers to open is

by feeding them a ticket.

In the line (a, st2) = container st, we do exactly that; we feed our ticket st into the container.

And it opens up to reveal both the waste and a receipt (a, st2).

Next, in line container2 = fn a, we feed the waste into the processing machine fn, which by the way,

outputs a container, as is the practice in our factories.

Here is the hard part: what does the line in container2 st2 mean? Well, here it's useful to ignore the

whole let..in construct and think of the whole expression. Ultimately, the implementation of bind is \st

-> container2 st2. And all this does is encapsulate an interesting chain reaction into a

container. The idea is that when you feed a ticket (st) into the container:

1. st gets fed to the first container. This results in waste and a new receipt (a, st2).

2. a gets fed into processing machine fn. This results in a new container (container2)

3. the outer container now feeds the new ticket (st2) into the new container (container2) and what

comes out is yet another piece of waste and a new receipt which represents the result of the whole

Rube Goldberg contraption.

Haskell/Print version - Wikibooks, collection of open-content textbooks Page 115

This is the hardest part to understand. Once we have gotten a hang of this part, we essentially have the

monads story cinched. It's all easy from here on out.
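To make the chain reaction concrete, here is a minimal, runnable sketch of this informal State factory written as plain Haskell functions. The names Container, ret, bind and tick are invented for this sketch only (real State code, with its constructor, comes later in the chapter):

```haskell
-- A "container" in this informal model is just a function st -> (a, st).
type Container st a = st -> (a, st)

-- Wrap a value without touching the ticket (state).
ret :: a -> Container st a
ret a = \st -> (a, st)

-- The bind robot from above: feed the ticket in, get waste and a new
-- ticket out, then pass both along to the next machine.
bind :: Container st a -> (a -> Container st b) -> Container st b
bind container fn = \st ->
    let (a, st2)   = container st
        container2 = fn a
    in  container2 st2

-- An example machine: return the old ticket value, increment the state.
tick :: Container Int Int
tick = \st -> (st, st + 1)

main :: IO ()
main = print ((tick `bind` \a -> tick `bind` \b -> ret (a + b)) 0)
-- starting from state 0: a = 0, b = 1, so the final pair is (1,2)
```

Feeding the initial ticket 0 through the chain shows the receipt being threaded exactly as described in steps 1–3 above.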

Useful functions

Here are a couple of functions that increase the usefulness of the State monad. Note that these are processing

machines, like all the others; they accept nuclear waste and produce containers. One thing that makes them

special, though, is that they are the kind of function that has monad-compatibility built right in! But they

only work in a State factory.

The functions get and put are incredibly simple, and also incredibly useful. get simply returns the current

state, and put sets it to something else:

get   = \st -> (st, st)
put x = \_  -> ((), x)

The idea behind these functions is that they can be inserted into your chain of processing machines with a

simple bind operator.

One thing which is odd is that the waste that is sent out by get is a ticket! It is a state! Why? Well,

remember how bind works with State. It pulls the value out of the (value, state) pair and then feeds that into

the function to the right of the >>=. That means if we had a function f which did something with the current

state, we could do this:

get >>= f

get copies the current state as the value, so when we bind the result of get, we access the current state.

The put function is similarly exotic. Whereas the nuclear waste returned by get is a ticket, the waste

returned by put is simply (), which is akin to unit or void from other languages and isn't very


interesting in itself. But that's ok, because putting things in states isn't about the nuclear waste, it's about the

tickets. The thing you have to be careful of is that the tickets always have to be the same type. If you are

using a State thumbprint monad, then you can only put thumbprints. If you are using a State Int monad, then

you can only put Ints.

We have seen that the functions get and put are weird because they return tickets and () as nuclear waste,

and we have also seen that they are useful because they allow you to manipulate tickets as if they were waste.

If you are paying close attention, however, you should notice that something is terribly amiss. Suppose that

we want to observe the state of a container foo. That would be written like this:

foo >>= get

What happens to the nuclear waste from foo? The bind operator is supposed to unpack that waste and feed it

into get; however, get isn't expecting any nuclear waste at all. We've just broken our entire chain of waste

processing machines! Recall the type of the bind operator:

(>>=) :: m a -> (a -> m b) -> m b

The problem here is that we're trying to plug get into the a -> m b side of things, when in fact the type

of the get operator is merely m b. But no worries, because fixing this little technical detail is very easy. We

have to introduce an anonymous bind, >>. Yes, it's yet another operator to learn about, but please relax,

because its job is astoundingly simple:

container >> f =

container >>= (\_ -> f)

The anonymous bind operator's only job is to provide a wrapper around the traditional >>=. It takes an
input-free waste-outputting machine (i.e. one which does not process nuclear waste) and transforms it
into an input-accepting machine that completely ignores the incoming waste and continues about its
business of outputting nuclear waste in a container. Thus, you can now make calls like

f >> get

which is really just a more succinct way of saying f >>= (\_ -> get).

Likewise, you probably don't want to have any sequences like this:

put x >>= f


You could, but what would be the point? The nuclear waste returned by put is merely (), remember? That's

not very useful, so you most likely just want to ignore it altogether:

put x >> f

The code above for the State monad is not proper Haskell! Just as with the fictitious € operator, we have

taken a few minor liberties with the syntax in the interest of clarity. Now that things are (hopefully) clear, let

us make them correct. The problem lies in the idea of using a function as a container:

return a = \st -> (a, st)

This doesn't entirely make sense. It means that the type of our container would be something like

return :: a -> (st -> (a,st)), when what we really need to be returning is something of type

State st a. But that's just a minor detail. All we have to do is wrap up the function with a constructor:

newtype State st a = State (st -> (a, st))

return a = State (\st -> (a, st))

The return function is not very much different from our initial white lie; it just packs everything up with a

constructor.

The bind operator would likewise have to be modified, but that's just extra bureaucracy. You have to take the

function out of State, call it (as before), and return a new function in State. We'll leave this as an exercise.

Exercises

Correct the definition for (>>=) to take the State constructor into consideration.

You'll need to use pattern matching to remove the State constructor.

runState

Note also that the real definition for State has a slightly different style:

newtype State s a = State { runState :: s -> (a, s) }

That's an odd-looking beast, but a quick dissection reveals that there is nothing out of the ordinary. To begin

with, we can mentally substitute the newtype for the more familiar data, which takes some of the

exoticness out of things. Next, we observe that the { runState :: s -> (a, s) } is really just a

record with one element. The name of the element might be confusing, because it lends the false impression

that State monads are somehow objects that contain a function runState, the way an Address type might

contain a street name. But it turns out the choice of runState is just a sleight-of-Haskell. Recall from the

presentation of named fields that the record syntax automatically gives us a projection function to access

parts of the record. If I have an address type like data Address = Address { street ::


String, number :: Int }, street would be a function of type Address -> String. That

being said, what is the type of runState? Is it s -> (a, s) as we might be tempted to think from its

type signature? Certainly not! Just as street is of type Address -> String, runState is of type

State s a -> s -> (a,s). The runState function merely gives us a way to access this container

without pattern-matching on State. That's not all. Up to now, we've only considered the issue of putting

things into the State monad (return) and sequencing them together (>>=). What's been crucially missing

up to now is a way to get them back out. That's exactly what runState is for.
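As a concrete illustration, here is a small runState session, assuming the State monad from the monad template library (Control.Monad.State) is available; postIncrement is a made-up example computation:

```haskell
import Control.Monad.State

-- Return the current state as the result, then bump the state by one.
postIncrement :: State Int Int
postIncrement = do
    n <- get
    put (n + 1)
    return n

main :: IO ()
main = print (runState postIncrement 5)
-- runState unwraps the s -> (a, s) function inside the State value and
-- applies it to the initial state 5, yielding the (result, final state)
-- pair (5,6)
```

Note how runState takes both the State computation and the initial state, exactly as its type State s a -> s -> (a, s) demands.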

Exercises

1. Implement a function equivalent to runState, using pattern-matching on State.

2. TODO: an exercise which uses runState in a more realistic setting.

Understanding the State monad is essentially all there is to understanding the IO monad we make so much

use of. The first useful idea is to simplify matters by only concentrating on output. Let's call this the O

monad. The O monad is simply a state monad where the state (the ticket) is a list of things to be written to

the screen. Putting something on the screen simply consists of appending something to the list.

putStr

Perhaps a good way to illustrate the point is to show one way that the putStr function would work:

putStr str = \out -> ((), out ++ [str])

That's all there is to it. We append the string to the output. If this isn't completely clear, try noticing how

much this putStr in our hypothetical O monad looks like the put function in the State monad. Now, in real

life, it is very rare that people write things like

container >>= \x -> putStr "hello, world"

What usually happens is that programmers already know what String they want to put... but that's ok, because

they can just use the anonymous bind operator:

container >> putStr "hello, world"

So what about all the complicated stuff like stdin and stderr? Same old thing. The IO monad is still just a

State monad, but instead of the state being a list, it is now a tuple of lists, one for each file handle. Or to be

more realistic, the state in an IO monad is some horribly complicated data structure which represents the

state of the computer at some point in the computation. That's what you're passing around when you

manipulate IO: the entire environment as nothing more than a state.


do notation

Now that we know what the underlying mechanisms behind monads are, it's time to reconsider the "secret

sauce" behind Haskell monads: do-notation. We've gotten by so far with seemingly magical rules like, "<-

takes a value out of monadic actions". No such magic is happening. The do notation has a simple,

mechanical translation to the >>= and return machinery we've seen in this chapter. Consider this fragment

of monadic code:

wasteInAContainer >>=

\a1 -> foo a1 >>=

\a2 -> bar a2 >>=

\a3 -> baz a3

One might reasonably argue that code like this is cumbersome and impractical to write. This sounds like a

job for syntactic sugar. We begin by slightly adjusting the whitespace so that all of these lambdas move up,

leaving the newlines in an admittedly funky place:

wasteInAContainer >>= \a1 ->
foo a1 >>= \a2 ->
bar a2 >>= \a3 ->
baz a3

All the do notation does is move the binds and lambdas from the right to the left:

do a1 <- wasteInAContainer

a2 <- foo a1

a3 <- bar a2

baz a3

See? Same code, but sugarfied. There's a bit more to the do notation, especially the use of let, and of the

anonymous bind (>>) for lines without a left arrow (<-) [except for that pesky last line]. You can learn

more about this by looking at the Haskell report or in Yet Another Haskell Tutorial.
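To convince yourself that the sugared and desugared forms really are the same code, here is a small self-contained check in the Maybe monad (safeDiv is a made-up helper, invented for this example):

```haskell
-- A computation that can fail: division guarded against zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The do-notation version...
sugared :: Maybe Int
sugared = do
    a <- safeDiv 100 5
    b <- safeDiv a 4
    return (b + 1)

-- ...and its mechanical translation into >>= and lambdas.
desugared :: Maybe Int
desugared =
    safeDiv 100 5 >>= \a ->
    safeDiv a 4   >>= \b ->
    return (b + 1)

main :: IO ()
main = print (sugared == desugared)
-- True: both evaluate to Just 6
```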

Conclusion

There is still a good bit of ground to cover on monads. We'll see much more in the rest of this book. In the

meantime, it is also worth looking at other tutorials or even the Haskell API on Control.Monad (http://

www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.html) , to get a more complete picture.

Browsing the Control.Monad implementation (http://darcs.haskell.org/packages/base/Control/Monad.hs) , for

example, is definitely a worthwhile experience.

Exercises

Write a tutorial explaining how monads work. You might find inspiration in the

tutorials listed on the Haskell meta-tutorial (http://www.haskell.org/haskellwiki/

Meta-tutorial) . Try to find a new audience for your tutorial, or a new way of

explaining things.

Acknowledgments


Without a combination of Hal Daume's Yet Another Haskell Tutorial and Jeff Newbern's excellent All about

Monads (http://www.haskell.org/all_about_monads/html/) , I wouldn't have had the slightest clue what a

Monad was. Hopefully this tutorial will provide another useful angle from which to understand the whole

idea behind monads.

Brian Slesinsky pointed out a pretty big goof in version 0.7 of this tutorial: I had been incorrectly writing the

bind operator as <<=. Thanks much!

Advanced monads

This chapter follows on from Understanding monads, and explains a few more of the more advanced

concepts.

Monads as computations

The concept

A metaphor we explored in the last chapter was that of monads as containers. That is, we looked at what

monads are in terms of their structure. What was touched on but not fully explored is why we use monads.

After all, monads structurally can be very simple, so why bother at all?

The secret is in the view that each monad represents a different type of computation. Here, and in the rest of

this chapter, a 'computation' is simply a function call: we're computing the result of this function. In a

minute, we'll give some examples to explain what we mean by this, but first, let's re-interpret our basic

monadic operators:

>>=

The >>= operator is used to sequence two monadic computations. That means it runs the first computation,

then feeds the output of the first computation into the second and runs that too.

return

return x, in computation-speak, is simply the computation that has result x, and 'does nothing'. The

meaning of the latter phrase will become clear when we look at State below.

So how does the computations analogy work in practice? Let's look at some examples.

Computations in the Maybe monad (that is, function calls which result in a type wrapped up in a Maybe)

represent computations that might fail. The easiest example is with lookup tables. A lookup table is a table

which relates keys to values. You look up a value by knowing its key and using the lookup table. For

example, you might have a lookup table of contact names as keys to their phone numbers as the values in a

phonebook application. One way of implementing lookup tables in Haskell is to use a list of pairs: [(a, b)

]. Here a is the type of the keys, and b the type of the values. Here's how the phonebook lookup table might

look:

phonebook = [ ("Bob", "01788 665242"),

("Fred", "01624 556442"),


("Jane", "01732 187565") ]

The most common thing you might do with a lookup table is look up values! However, this computation

might fail. Everything's fine if we try to look up one of "Bob", "Fred", "Alice" or "Jane" in our phonebook,

but what if we were to look up "Zoe"? Zoe isn't in our phonebook, so the lookup has failed. Hence, the

Haskell function to look up a value from the table is a Maybe computation:

lookup :: Eq a => a -- the key to look up
       -> [(a, b)]  -- the lookup table to use
       -> Maybe b   -- the result of the lookup

Prelude> lookup "Bob" phonebook
Just "01788 665242"

Prelude> lookup "Jane" phonebook

Just "01732 187565"

Prelude> lookup "Zoe" phonebook

Nothing

Now let's expand this into using the full power of the monadic interface. Say, we're now working for the

government, and once we have a phone number from our contact, we want to look up this phone number in a

big, government-sized lookup table to find out the registration number of their car. This, of course, will be

another Maybe-computation. But if they're not in our phonebook, we certainly won't be able to look up their

registration number in the governmental database! So what we need is a function that will take the results

from the first computation, and put it into the second lookup, but only if we didn't get Nothing the first time

around. If we did indeed get Nothing from the first computation, or if we get Nothing from the second

computation, our final result should be Nothing.

comb :: Maybe a -> (a -> Maybe b) -> Maybe b
comb Nothing _ = Nothing

comb (Just x) f = f x

Observant readers may have guessed where we're going with this one. That's right, comb is just >>=, but

restricted to Maybe-computations. So we can chain our computations together:

getRegistrationNumber :: String       -- their name
                      -> Maybe String -- their registration number

getRegistrationNumber name = lookup name phonebook >>= (\number -> lookup number

governmentalDatabase)

If we then wanted to use the result from the governmental database lookup in a third lookup (say we want to

look up their registration number to see if they owe any car tax), then we could extend our

getRegistrationNumber function:

getTaxOwed :: String       -- their name
           -> Maybe Double -- the amount of tax they owe


getTaxOwed name = lookup name phonebook >>= (\number -> lookup number governmentalDatabase) >>=

(\registration -> lookup registration taxDatabase)

Or, equivalently, in do-notation:

getTaxOwed name = do

number <- lookup name phonebook

registration <- lookup number governmentalDatabase

lookup registration taxDatabase

Let's just pause here and think about what would happen if we got a Nothing anywhere. Trying to use >>=

to combine a Nothing from one computation with another function will result in the Nothing being

carried on and the second function ignored (refer to our definition of comb above if you're not sure). That is,

a Nothing at any stage in the large computation will result in a Nothing overall, regardless of the other

functions! Thus we say that the structure of the Maybe monad propagates failures.

An important thing to note is that we're not by any means restricted to lookups! There are many, many

functions whose results could fail and therefore use Maybe. You've probably written one or two yourself.

Any computations in Maybe can be combined in this way.
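Here is the whole chain as a self-contained program you can run. The governmentalDatabase and taxDatabase tables are made-up sample data, invented so the lookups actually have something to find:

```haskell
phonebook :: [(String, String)]
phonebook = [ ("Bob",  "01788 665242")
            , ("Fred", "01624 556442")
            , ("Jane", "01732 187565") ]

-- Invented sample data: phone number -> car registration number.
governmentalDatabase :: [(String, String)]
governmentalDatabase = [ ("01788 665242", "B0B 1")
                       , ("01732 187565", "JN 40") ]

-- Invented sample data: registration number -> tax owed.
taxDatabase :: [(String, Double)]
taxDatabase = [ ("B0B 1", 145.0) ]

getTaxOwed :: String -> Maybe Double
getTaxOwed name = do
    number       <- lookup name phonebook
    registration <- lookup number governmentalDatabase
    lookup registration taxDatabase

main :: IO ()
main = mapM_ (print . getTaxOwed) ["Bob", "Jane", "Zoe"]
-- Just 145.0   (every lookup succeeded)
-- Nothing      (Jane has no tax record)
-- Nothing      (Zoe isn't even in the phonebook)
```

Whichever stage fails, the Nothing propagates silently to the final result, with no explicit error plumbing anywhere in getTaxOwed.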

Summary

1. The Maybe monad represents computations which could fail.

2. It propagates failure.

Computations that are in the list monad (that is, they end in a type [a]) represent computations with zero or

more valid answers. For example, say we are modelling the game of noughts and crosses (known as tic-tac-

toe in some parts of the world). An interesting (if somewhat contrived) problem might be to find all the

possible ways the game could progress: find the possible states of the board 3 turns later, given a certain

board configuration (i.e. a game in progress).

instance Monad [] where
    return a = [a]
    xs >>= f = concat (map f xs)

As monads are only really useful when we're chaining computations together, let's go into more detail on our

example. The problem can be boiled down to the following steps:

1. Find the list of possible board configurations for the next turn.

2. Repeat the computation for each of these configurations: replace each configuration, call it C, with the

list of possible configurations of the turn after C.

3. We will now have a list of lists (each sublist representing the turns after a previous configuration), so

in order to be able to repeat this process, we need to collapse this list of lists into a single list.

This structure should look similar to the monadic instance declaration above. Here's how it might look,

without using the list monad:


getNextConfigs = undefined -- details not important

tick bds = concatMap getNextConfigs bds

find3rdConfig bd = tick $ tick $ tick [bd]

(concatMap is a handy function for when you need to concat the results of a map: concatMap f xs =

concat (map f xs).) Alternatively, we could define this with the list monad:

find3rdConfig bd0 = do

bd1 <- getNextConfigs bd0

bd2 <- getNextConfigs bd1

bd3 <- getNextConfigs bd2

return bd3

List comprehensions

An interesting thing to note is how similar list comprehensions and the list monad are. For example, the

classic function to find Pythagorean triples:

pythags = [ (x, y, z) | z <- [1..], x <- [1..z], y <- [x..z], x^2 + y^2 == z^2 ]

This translates into the list monad as:

pythags = do

z <- [1..]

x <- [1..z]

y <- [x..z]

guard (x^2 + y^2 == z^2)

return (x, y, z)

The only non-trivial element here is guard. This is explained in the next module, Additive monads.
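Since pythags is an infinite list, a runnable version just has to cut it off somewhere; for instance:

```haskell
import Control.Monad (guard)

-- All Pythagorean triples, generated lazily in order of increasing z.
pythags :: [(Int, Int, Int)]
pythags = do
    z <- [1..]
    x <- [1..z]
    y <- [x..z]
    guard (x^2 + y^2 == z^2)
    return (x, y, z)

main :: IO ()
main = print (take 3 pythags)
-- [(3,4,5),(6,8,10),(5,12,13)]
```

Laziness is what makes the infinite [1..] generator harmless here: take only forces as many triples as it needs.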

The State monad actually makes a lot more sense when viewed as a computation, rather than a container.

Computations in State represents computations that depend on and modify some internal state. For example,

say you were writing a program to model the three body problem (http://en.wikipedia.org/wiki/Three_body_

problem#Three_body_problem) . The internal state would be the positions, masses and velocities of all three

bodies. Then a function, to, say, get the acceleration of a specific body would need to reference this state as

part of its calculations.


The other important aspect of computations in State is that they can modify the internal state. Again, in the

three-body problem, you could write a function that, given an acceleration for a specific body, updates its

position.

The State monad is quite different from the Maybe and the list monads, in that it doesn't represent the result

of a computation, but rather a certain property of the computation itself.

What we do is model computations that depend on some internal state as functions which take a state

parameter. For example, if you had a function f :: String -> Int -> Bool, and we want to modify

it to make it depend on some internal state of type s, then the function becomes f :: String -> Int

-> s -> Bool. To allow the function to change the internal state, the function returns a pair of (new state,

return value). So our function becomes f :: String -> Int -> s -> (s, Bool).

It should be clear that this method is a bit cumbersome. However, the types aren't the worst of it: what would

happen if we wanted to run two stateful computations, call them f and g, one after another, passing the

result of f into g? The second would need to be passed the new state from running the first computation, so

we end up 'threading the state':

fThenG :: (s -> (s, a)) -> (a -> s -> (s, b)) -> s -> (s, b)

fThenG f g s =

let (s', v ) = f s -- run f with our initial state s.

(s'', v') = g v s' -- run g with the new state s' and the result of f, v.

in (s'', v') -- return the latest state and the result of g

All this 'plumbing' can be nicely hidden by using the State monad. The type constructor State takes two

type parameters: the type of its environment (internal state), and the type of its output. So State s a

indicates a stateful computation which depends on, and can modify, some internal state of type s, and has a

result of type a. How is it defined? Well, simply as a function that takes some state and returns a pair of

(new state, value):

newtype State s a = State (s -> (s, a))

The above example of fThenG is, in fact, the definition of >>= for the State monad, which you probably

remember from the first monads chapter.

We mentioned right at the start that return x was the computation that 'did nothing' and just returned x.

This idea only really starts to take on any meaning in monads with side-effects, like State. That is,

computations in State have the opportunity to change the outcome of later computations by modifying the

internal state. It's a similar situation with IO (because, of course, IO is just a special case of State).

return x doesn't do this. A computation produced by return generally won't have any side-effects. The

monad law return x >>= f == f x basically guarantees this, for most uses of the term 'side-effect'.

Further reading

A tour of the Haskell Monad functions (http://members.chello.nl/hjgtuyl/tourdemonad.html) by Henk-

Jan van Tuyl

All about monads (http://www.haskell.org/all_about_monads/html/index.html) by Jeff Newbern

explains well the concept of monads as computations, using good examples. It also has a section


outlining all the major monads, explains each one in terms of this computational view, and gives a full

example.

MonadPlus

MonadPlus is a typeclass whose instances are monads which represent a number of computations.

Introduction

You may have noticed, whilst studying monads, that the Maybe and list monads are quite similar, in that they

both represent the number of results a computation can have. That is, you use Maybe when you want to

indicate that a computation can fail somehow (i.e. it can have 0 or 1 result), and you use the list monad when

you want to indicate a computation could have many valid answers (i.e. it could have 0 results -- a failure --

or many results).

Given two computations in one of these monads, it might be interesting to amalgamate these: find all the

valid solutions. I.e. given two lists of valid solutions, to find all of the valid solutions, you simply concatenate

the lists together. It's also useful, especially when working with folds, to require a 'zero results' value (i.e.

failure). For lists, the empty list represents zero results.

class Monad m => MonadPlus m where
    mzero :: m a
    mplus :: m a -> m a -> m a

Here are the two instance declarations for Maybe and the list monad:

instance MonadPlus [] where
    mzero = []
    mplus = (++)

instance MonadPlus Maybe where
    mzero = Nothing
    Nothing `mplus` Nothing = Nothing -- 0 solutions + 0 solutions = 0 solutions
    Just x  `mplus` Nothing = Just x  -- 1 solution  + 0 solutions = 1 solution
    Nothing `mplus` Just x  = Just x  -- 0 solutions + 1 solution  = 1 solution
    Just x  `mplus` Just y  = Just x  -- 1 solution  + 1 solution  = 2 solutions,
                                      -- but as Maybe can only have up to one
                                      -- solution, we disregard the second one.

The Either e monad has a similar instance (it needs an Error constraint on e so that mzero has an error message to use):

instance (Error e) => MonadPlus (Either e) where
    mzero = Left noMsg
    Left _  `mplus` n = n
    Right x `mplus` _ = Right x


Remember that (Either e) is similar to Maybe in that it represents computations that can fail, but it allows the

failing computations to include an error message. Typically, Left s means a failed computation with error

message s, and Right x means a successful computation with result x.

Example

A traditional way of parsing an input is to write functions which consume it, one character at a time. That is,

they take an input string, then chop off ('consume') some characters from the front if they satisfy certain

criteria (for example, you could write a function which consumes one uppercase character). However, if the

characters on the front of the string don't satisfy these criteria, the parsers have failed, and therefore they

make a valid candidate for a Maybe.

Here we use mplus to run two parsers in parallel. That is, we use the result of the first one if it succeeds,

but if not, we use the result of the second. If that too fails, then our whole parser returns Nothing.

-- | Consume a digit in the input, and return the digit that was parsed. We use

-- a do-block so that if the pattern match fails at any point, fail of

-- the Maybe monad (i.e. Nothing) is returned.

digit :: Int -> String -> Maybe Int
digit i s | i > 9 || i < 0 = Nothing
          | otherwise = do
              let (c:_) = s
              if read [c] == i then Just i else Nothing

binChar :: String -> Maybe Int

binChar s = digit 0 s `mplus` digit 1 s
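Here is a self-contained variant of this parser you can run. We replace the failing let pattern with an explicit case so that an empty input yields Nothing instead of crashing; otherwise the behaviour matches the parser above:

```haskell
import Control.Monad (mplus)

-- Consume one digit i from the front of the input, if present.
digit :: Int -> String -> Maybe Int
digit i s | i > 9 || i < 0 = Nothing
          | otherwise      = case s of
              (c:_) | [c] == show i -> Just i
              _                     -> Nothing

-- Run the two digit parsers in parallel: take whichever succeeds first.
binChar :: String -> Maybe Int
binChar s = digit 0 s `mplus` digit 1 s

main :: IO ()
main = mapM_ (print . binChar) ["0101", "1010", "2222"]
-- Just 0, then Just 1, then Nothing
```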

The MonadPlus laws

Instances of MonadPlus are required to fulfill several rules, just as instances of Monad are required to fulfill

the three monad laws. Unfortunately, these laws aren't set in stone anywhere and aren't fully agreed on. For

example, the Haddock documentation (http://haskell.org/ghc/docs/latest/html/libraries/base/Control-

Monad.html#t%3AMonadPlus) for Control.Monad quotes them as:

mzero >>= f = mzero

v >> mzero = mzero

but adds:

mzero `mplus` m = m

m `mplus` mzero = m

There are even more sets of laws available, and therefore you'll sometimes see monads like IO being used as

a MonadPlus. The Haskell Wiki page (http://www.haskell.org/haskellwiki/MonadPlus) for MonadPlus has

more information on this. TODO: should that information be copied here?

Useful functions


Beyond the basic mplus and mzero themselves, there are a few functions you should know about:

msum

A very common task when working with instances of MonadPlus is to take a list of the monad, e.g. [Maybe

a] or [[a]], and fold down the list with mplus. msum fulfills this role:

msum = foldr mplus mzero

A nice way of thinking about this is that it generalises the list-specific concat operation. Indeed, for lists,

the two are equivalent. For Maybe it finds the first Just x in the list, or returns Nothing if there aren't

any.
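A quick session showing both behaviours (explicit type annotations are included so the snippet compiles on its own):

```haskell
import Control.Monad (msum)

main :: IO ()
main = do
    -- For Maybe, msum picks the first Just in the list.
    print (msum [Nothing, Just 3, Just 5, Nothing] :: Maybe Int)
    -- For lists, msum behaves exactly like concat.
    print (msum [[1, 2], [3]] :: [Int])
-- Just 3
-- [1,2,3]
```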

guard

This is a very nice function which you have almost certainly used before, without knowing about it. It's used

in list comprehensions, as we saw in the previous chapter. List comprehensions can be decomposed into the

list monad, as we saw:

pythags = [ (x, y, z) | x <- [1..], y <- [x..], z <- [y..], x^2 + y^2 == z^2 ]

pythags = do

x <- [1..]

y <- [x..]

z <- [y..]

guard (x^2 + y^2 == z^2)

return (x, y, z)

guard True = return ()

guard False = mzero

Concretely, guard will reduce a do-block to mzero if its predicate is False. By the very first law stated in

the 'MonadPlus laws' section above, an mzero on the left-hand side of an >>= operation will produce

mzero again. As do-blocks are decomposed to lots of expressions joined up by >>=, an mzero at any point

will cause the entire do-block to become mzero.

To further illustrate that, we will examine guard in the special case of the list monad, extending on the

pythags function above. First, here is guard defined for the list monad:

guard True = [()]

guard False = []


guard blocks off a route. For example, in pythags, we want to block off all the routes (or combinations of

x, y and z) where x^2 + y^2 == z^2 is False. Let's look at the expansion of the above do-block to

see how it works:

pythags =

[1..] >>= \x ->

[x..] >>= \y ->

[y..] >>= \z ->

guard (x^2 + y^2 == z^2) >>= \_ ->

return (x, y, z)

Replacing >>= and return with their definitions for the list monad (and using some let-bindings to make

things prettier), we obtain:

pythags =

let ret x y z = [(x, y, z)]

gd x y z = concatMap (\_ -> ret x y z) (guard $ x^2 + y^2 == z^2)

doZ x y = concatMap (gd x y) [y..]

doY x = concatMap (doZ x ) [x..]

doX = concatMap (doY ) [1..]

in doX

Remember that guard returns the empty list in the case of its argument being False. Mapping across the

empty list produces the empty list, no matter what function you pass in. So the empty list produced by the call

to guard in the binding of gd will cause gd to be the empty list, and therefore ret to be the empty list.

To understand why this matters, think about list-computations as a tree. With our Pythagorean triple

algorithm, we need a branch starting from the top for every choice of x, then a branch from each of these

branches for every value of y, then from each of these, a branch for every value of z. So the tree looks like

this:

start

|____________________________________________ ...

| | |

x 1 2 3

|_______________ ... |_______________ ... |_______________ ...

| | | | | | | | |

y 1 2 3 1 2 3 1 2 3

|___...|___...|___... |___...|___...|___...|___...|___...|___...

| | | | | | | | | | | | | | | | | | | | | | | | | | |

z 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3

Any combination of x, y and z represents a route through the tree. Once all the functions have been applied,

each branch is concatenated together, starting from the bottom. Any route where our predicate doesn't hold

evaluates to an empty list, and so has no impact on this concat operation.

Exercises

1. Prove the MonadPlus laws for Maybe and the list monad.

2. We could augment our above parser to involve a parser for any character:

-- | Consume a given character in the input, and return the character we

-- just consumed, paired with rest of the string. We use a do-block so that

-- if the pattern match fails at any point, fail of the Maybe monad (i.e.


-- Nothing) is returned.

char :: Char -> String -> Maybe (Char, String)

char c s = do
    let (c':s') = s
    if c == c' then Just (c, s') else Nothing

It would then be possible to write a hexChar function which parses any valid hexadecimal character

(0-9 or a-f). Try writing this function (hint: map digit [0..9] :: [Maybe Int]).

3. More to come...

TODO: is this at all useful? (If you don't know anything about the Monoid data structure, then don't worry

about this section. It's just a bit of a muse.)

Monoids are a data structure with two operations defined: an identity (or 'zero') and a binary operation (or

'plus'), which satisfy some axioms.

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

instance Monoid [a] where
    mempty  = []
    mappend = (++)

Note the usage of [a], not [], in the instance declaration. Monoids are not necessarily 'containers' of anything.

For example, the integers (or indeed even the naturals) form two possible monoids:

newtype AdditiveInt       = AI Int
newtype MultiplicativeInt = MI Int

instance Monoid AdditiveInt where
    mempty = AI 0
    AI x `mappend` AI y = AI (x + y)

instance Monoid MultiplicativeInt where
    mempty = MI 1
    MI x `mappend` MI y = MI (x * y)
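These two integer monoids are in fact provided by the standard Data.Monoid module as the Sum and Product newtype wrappers, so you don't have to define them yourself:

```haskell
import Data.Monoid

main :: IO ()
main = do
    -- mconcat folds a list with mappend, starting from mempty.
    print (getSum     (mconcat (map Sum     [1 .. 5 :: Int])))  -- 15
    print (getProduct (mconcat (map Product [1 .. 5 :: Int])))  -- 120
```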

Monoids, then, look very similar to MonadPlus instances. Both feature concepts of a zero and plus, and

indeed any MonadPlus instance gives rise to a Monoid:

instance MonadPlus m => Monoid (m a) where
    mempty  = mzero
    mappend = mplus


However, they work at different levels. As noted, there is no requirement for monoids to be any kind of

container. More formally, monoids have kind *, but instances of MonadPlus, as they're Monads, have kind * -> *.

Monad transformers

Introduction

Monad transformers are special variants of standard monads that facilitate the

combining of monads. For example, ReaderT Env IO a is a computation

which can read from some environment of type Env, can do some IO and

returns a type a. (Monad transformers are monads too!) Their type constructors are parameterized over a monad type

constructor, and they produce combined monadic types. In this tutorial, we

will assume that you understand the internal mechanics of the monad

abstraction, what makes monads "tick". If, for instance, you are not comfortable with the bind operator (>>=)

, we would recommend that you first read Understanding monads.

A useful way to look at transformers is as cousins of some base monad. For example, the monad ListT is a

cousin of its base monad List. Monad transformers are typically implemented almost exactly the same way

that their cousins are, only more complicated because they are trying to thread some inner monad through.

The standard monads of the monad template library all have transformer versions which are defined

consistently with their non-transformer versions. However, it is not the case that all monad transformers

apply the same transformation. We have seen that the ContT transformer turns continuations of the form

(a->r)->r into continuations of the form (a->m r)->m r. The StateT transformer is different. It

turns state transformer functions of the form s->(a,s) into state transformer functions of the form s->m

(a,s). In general, there is no magic formula to create a transformer version of a monad — the form of each

transformer depends on what makes sense in the context of its non-transformer type.

Standard monad   Transformer   Original type       Combined type
--------------   -----------   -----------------   ------------------
Error            ErrorT        Either e a          m (Either e a)
State            StateT        s -> (a,s)          s -> m (a,s)
Reader           ReaderT       r -> a              r -> m a
Writer           WriterT       (a,w)               m (a,w)
Cont             ContT         (a -> r) -> r       (a -> m r) -> m r

Implementing transformers

The key to understanding how monad transformers work is understanding how they implement the bind

(>>=) operator. You'll notice that this implementation very closely resembles that of their standard, non-

transformer cousins.


Type constructors play a fundamental role in Haskell's monad support. Recall that Reader r a is the type

of values of type a within a Reader monad with environment of type r. The type constructor Reader r is

an instance of the Monad class, and the runReader::Reader r a->r->a function performs a

computation in the Reader monad and returns the result of type a.

A transformer version of the Reader monad, called ReaderT, exists which adds a monad type constructor as

an additional parameter. ReaderT r m a is the type of values of the combined monad in which Reader is

the base monad and m is the inner monad. The corresponding runReaderT :: ReaderT r m a -> r -> m a

function performs a computation in the combined monad and returns a result of type m a.

We begin by defining the data type for the Maybe transformer. Our MaybeT constructor takes a single

argument. Since transformers have the same data as their non-transformer cousins, we will use the newtype

keyword. We could very well have chosen to use data, but that introduces needless overhead.

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

This might seem a little off-putting at first, but it's actually simpler than it

looks. The constructor for MaybeT takes a single argument, of type m

(Maybe a). That is all. We use some syntactic sugar (records are just

syntactic sugar) so that you can see MaybeT as a record, and access the value

of this single argument by calling runMaybeT. One trick to understanding this is to see monad transformers as

sandwiches: the bottom slice of the sandwich is the base monad (in this case,

Maybe). The filling is the inner monad, m. And the top slice is the monad transformer MaybeT. The purpose

of the runMaybeT function is simply to remove this top slice from the sandwich. What is the type of

runMaybeT? It is (MaybeT m a) -> m (Maybe a).

As we mentioned in the beginning of this tutorial, monad transformers are monads too. Here is a partial

implementation of the MaybeT monad. To understand this implementation, it really helps to know how its

simpler cousin Maybe works. For comparison's sake, we put the two monad implementations side by side.

Note: the prefixes 't', 'm' and 'b' in variable names mean 'top', 'middle' and 'bottom' respectively.

Maybe:

instance Monad Maybe where
    b_v >>= f = case b_v of
        Nothing -> Nothing
        Just v  -> f v

MaybeT:

instance (Monad m) => Monad (MaybeT m) where
    tmb_v >>= f = MaybeT $ runMaybeT tmb_v
        >>= \b_v -> case b_v of
            Nothing -> return Nothing
            Just v  -> runMaybeT $ f v

You'll notice that the MaybeT implementation looks a lot like the Maybe implementation of bind, with the

exception that MaybeT is doing a lot of extra work. This extra work consists of unpacking the two extra


layers of monadic sandwich (note the convention topMidBot to reflect the sandwich layers) and packing

them up. If you really want to cut into the meat of this, read on. If you think you've understood up to here,

why not try the following exercises:

Exercises

2. Rewrite the implementation of the bind operator >>= to be more concise.

So what's going on here? You can think of this as working in three phases: first we remove the sandwich

layer by layer, and then we apply a function to the data, and finally we pack the new value into a new

sandwich

Unpacking the sandwich: Let us ignore the MaybeT constructor for now, but note that everything that's

going on after the $ is happening within the m monad and not the MaybeT monad!

1. The first step is to remove the top slice of the sandwich by calling runMaybeT topMidBotV

2. We use the bind operator (>>=) to remove the second layer of the sandwich -- remember that we are

working in the confines of the m monad.

3. Finally, we use case and pattern matching to strip off the bottom layer of the sandwich, leaving

behind the actual data with which we are working

If the bottom layer was Nothing, we simply return Nothing (which gives us a 2-layer

sandwich). This value then goes to the MaybeT constructor at the very beginning of this function,

which adds the top layer and gives us back a full sandwich.

If the bottom layer was Just v (note how we have pattern-matched that bottom slice of monad off):

we apply the function f to it. But now we have a problem: applying f to v gives a full three-layer

sandwich, which would be absolutely perfect except for the fact that we're now going to apply the

MaybeT constructor to it and get a type clash! So how do we avoid this? By first running

runMaybeT to peel the top slice off so that the MaybeT constructor is happy when you try to add it

back on.
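Putting the pieces together, here is a complete, runnable sketch of MaybeT, including the Functor and Applicative instances that modern GHC requires but which the text above omits, run with the list monad as the filling (the example computation is hypothetical):

```haskell
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Monad m => Functor (MaybeT m) where
    -- map over the inner m, then over the Maybe inside it
    fmap f (MaybeT x) = MaybeT $ fmap (fmap f) x

instance Monad m => Applicative (MaybeT m) where
    pure = MaybeT . return . Just
    mf <*> mx = MaybeT $ do
        f <- runMaybeT mf
        x <- runMaybeT mx
        return (f <*> x)  -- this <*> is Maybe's

instance Monad m => Monad (MaybeT m) where
    return = pure
    tmb_v >>= f = MaybeT $ runMaybeT tmb_v
        >>= \b_v -> case b_v of
            Nothing -> return Nothing
            Just v  -> runMaybeT $ f v

-- example: MaybeT over the list monad; the Nothing is threaded through
example :: MaybeT [] Int
example = do
    x <- MaybeT [Just 1, Nothing, Just 3]
    return (x * 10)

main :: IO ()
main = print (runMaybeT example)  -- [Just 10,Nothing,Just 30]
```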

Just as with the Maybe transformer, we create a datatype with a constructor that takes one argument:

newtype ListT m a = ListT { runListT :: m [a] }

The implementation of the ListT monad is also strikingly similar to its cousin, the List monad. We do

exactly the same things for List, but with a little extra support to operate within the inner monad m, and to

pack and unpack the monadic sandwich ListT - m - List.

List:

instance Monad [] where
    b_v >>= f =
        let x = map f b_v
        in concat x

ListT:

instance (Monad m) => Monad (ListT m) where
    tmb_v >>= f =
        ListT $ runListT tmb_v
            >>= \b_v -> mapM (runListT . f) b_v
            >>= \x   -> return (concat x)

Exercises

1. Dissect the bind operator for the (ListT m) monad. For example, why do

we now have mapM and return?

2. Now that you have seen two simple monad transformers, write a monad

transformer IdentityT, which would be the transforming cousin of the

Identity monad.

3. Would IdentityT SomeMonad be equivalent to SomeMonadT

Identity for a given monad and its transformer cousin?

Lifting

FIXME: insert introduction

liftM

We begin with a notion which, strictly speaking, isn't about monad transformers. One small and surprisingly

useful function in the standard library is liftM, which, as the API states, is meant for lifting non-monadic

functions into monadic ones. Let's take a look at its type:

liftM :: Monad m => (a1 -> r) -> m a1 -> m r

So let's see here, it takes a function (a1 -> r), takes a monad with an a1 in it, applies that function to the

a1, and returns the result. In my opinion, the best way to understand this function is to see how it is used. The

following pieces of code all mean the same thing (monadicValue here stands for any monadic computation):

do foo <- monadicValue
   return (myFn foo)

liftM myFn monadicValue

myFn `liftM` monadicValue

What made the light bulb go off for me is this third example, where we use liftM as an operator. liftM is

just a monadic version of ($)!
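For instance, with the real liftM from Control.Monad and some hypothetical values:

```haskell
import Control.Monad (liftM)

main :: IO ()
main = do
    print (liftM (+ 1) (Just 41))   -- Just 42
    print ((+ 1) `liftM` [1, 2, 3]) -- [2,3,4]: liftM works in any monad
```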


Exercises

1. How would you write liftM? You can take inspiration from the first

example

lift

When using combined monads created by the monad transformers, we avoid having to explicitly manage the

inner monad types, resulting in clearer, simpler code. Instead of creating additional do-blocks within the

computation to manipulate values in the inner monad type, we can use lifting operations to bring functions

from the inner monad into the combined monad.

Recall the liftM family of functions which are used to lift non-monadic functions into a monad. Each

monad transformer provides a lift function that is used to lift a monadic computation into a combined

monad.

The MonadTrans class is defined in Control.Monad.Trans (.../base/Control.Monad.Trans.html) and provides the single function lift. The lift function lifts a monadic

computation in the inner monad into the combined monad.

lift :: (Monad m) => m a -> t m a

Monads which provide optimized support for lifting IO operations are defined as members of the MonadIO

class, which defines the liftIO function.

liftIO :: IO a -> m a

Using lift

Implementing lift

lift mon = MaybeT (mon >>= return . Just)

We begin with a monadic value (of the inner monad), the middle layer, if you prefer the monadic sandwich

analogy. Using the bind operator and a type constructor for the base monad, we slip the bottom slice (the

base monad) under the middle layer. Finally we place the top slice of our sandwich by using the constructor

MaybeT. So using the lift function, we have transformed a lowly piece of sandwich filling into a bona-fide

three-layer monadic sandwich.
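To see this concretely, here is a self-contained sketch using the list monad as the filling. liftMaybeT is a hypothetical standalone name for the lift above, so we can skip the MonadTrans class machinery:

```haskell
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

-- the lift from the text, written as a plain function
liftMaybeT :: Monad m => m a -> MaybeT m a
liftMaybeT mon = MaybeT (mon >>= return . Just)

main :: IO ()
main =
    -- each list element is wrapped in Just, then the whole thing in MaybeT
    print (runMaybeT (liftMaybeT [1, 2, 3]))  -- [Just 1,Just 2,Just 3]
```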

As with our implementation of the Monad class, the bind operator is working within the confines of the inner monad.

Exercises


1. Why is it that the lift function has to be defined separately for each

monad, whereas liftM can be defined in a universal way?

2. Implement the lift function for the ListT transformer.

3. How would you lift a regular function into a monad transformer? Hint: very

easily.

Previously, we have pored over the implementation of two very simple monad transformers, MaybeT and

ListT. We then took a short detour to talk about lifting a monad into its transformer variant. Here, we will

bring the two ideas together by taking a detailed look at the implementation of one of the more interesting

transformers in the standard library, StateT. Studying this transformer will build insight into the

transformer mechanism that you can call upon when using monad transformers in your code. You might want

to review the section on the State monad before continuing.

State s is an instance of both the Monad class and the MonadState s class, so

StateT s m should also be an instance of the Monad and MonadState s classes. Furthermore, if m is an

instance of MonadPlus, StateT s m should be too.

State:

newtype State s a = State { runState :: (s -> (a,s)) }

instance Monad (State s) where
    return a = State $ \s -> (a,s)
    (State x) >>= f = State $ \s ->
        let (v,s') = x s
        in runState (f v) s'

StateT:

newtype StateT s m a = StateT { runStateT :: (s -> m (a,s)) }

instance (Monad m) => Monad (StateT s m) where
    return a = StateT $ \s -> return (a,s)
    (StateT x) >>= f = StateT $ \s -> do
        -- get new value and state
        (v,s') <- x s
        -- apply bound function to get new state transformation fn
        (StateT x') <- return $ f v
        -- apply the state transformation fn to the new state
        x' s'


Our definition of return makes use of the return function of the inner monad, and the binding operator

uses a do-block to perform a computation in the inner monad.

We also want to declare all combined monads that use the StateT transformer to be instances of the

MonadState class, so we will have to give definitions for get and put:

instance (Monad m) => MonadState s (StateT s m) where
    get   = StateT $ \s -> return (s,s)
    put s = StateT $ \_ -> return ((),s)

Finally, we want to declare all combined monads in which StateT is used with an instance of MonadPlus

to be instances of MonadPlus:

instance (MonadPlus m) => MonadPlus (StateT s m) where
    mzero = StateT $ \s -> mzero
    (StateT x1) `mplus` (StateT x2) = StateT $ \s -> (x1 s) `mplus` (x2 s)

The final step to make our monad transformer fully integrated with Haskell's monad classes is to make

StateT s an instance of the MonadTrans class by providing a lift function:

instance MonadTrans (StateT s) where
    lift c = StateT $ \s -> c >>= (\x -> return (x,s))

The lift function creates a StateT state transformation function that binds the computation in the inner

monad to a function that packages the result with the input state. The result is that a function that returns a

list (i.e., a computation in the List monad) can be lifted into StateT s [], where it becomes a function

that returns a StateT (s -> [(a,s)]). That is, the lifted computation produces multiple (value,state)

pairs from its input state. The effect of this is to "fork" the computation in StateT, creating a different branch

of the computation for each value in the list returned by the lifted function. Of course, applying StateT to a

different monad will produce different semantics for the lift function.
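The forking behaviour can be seen in a small self-contained sketch. liftStateT is a hypothetical standalone name for the lift above, again avoiding the class machinery:

```haskell
newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }

-- the lift from the text, written as a plain function
liftStateT :: Monad m => m a -> StateT s m a
liftStateT c = StateT $ \s -> c >>= \x -> return (x, s)

main :: IO ()
main =
    -- lifting a three-element list forks into three (value, state) pairs
    print (runStateT (liftStateT "abc") (0 :: Int))  -- [('a',0),('b',0),('c',0)]
```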

Acknowledgements

This module uses a large amount of text from All About Monads with permission from its author Jeff

Newbern.

Practical monads

Parsing monads

In the beginner's track of this book, we saw how monads were used for IO. We've also started working more

extensively with some of the more rudimentary monads like Maybe, List or State. Now let's try using

monads for something quintessentially "practical". Let's try writing a very simple parser. We'll be using the

Parsec library, which comes with GHC but may need to be downloaded separately if you're using another compiler.

import System

import Text.ParserCombinators.Parsec hiding (spaces)

This makes the Parsec library functions and getArgs available to us, except the "spaces" function, whose

name conflicts with a function that we'll be defining later.

Now, we'll define a parser that recognizes one of the symbols allowed in Scheme identifiers:

symbol :: Parser Char
symbol = oneOf "!$%&|*+-/:<=>?@^_~"

This is another example of a monad: in this case, the "extra information" that is being hidden is all the info

about position in the input stream, backtracking record, first and follow sets, etc. Parsec takes care of all of

that for us. We need only use the Parsec library function oneOf (http://www.cs.uu.nl/~daan/download/parsec/

parsec.html#oneOf) , and it'll recognize a single one of any of the characters in the string passed to it. Parsec

provides a number of pre-built parsers: for example, letter (http://www.cs.uu.nl/~daan/download/parsec/

parsec.html#letter) and digit (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#digit) are library

functions. And as you're about to see, you can compose primitive parsers into more sophisticated

productions.

Let's define a function to call our parser and handle any possible errors:

readExpr :: String -> String
readExpr input = case parse symbol "lisp" input of
    Left err -> "No match: " ++ show err
    Right val -> "Found value"

As you can see from the type signature, readExpr is a function (->) from a String to a String. We name the

parameter input, and pass it, along with the symbol action we defined above and the name of the parser

("lisp"), to the Parsec function parse (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#parse) .

Parse can return either the parsed value or an error, so we need to handle the error case. Following typical

Haskell convention, Parsec returns an Either (http://www.haskell.org/onlinereport/standard-prelude.html#$

tEither) data type, using the Left constructor to indicate an error and the Right one for a normal value.

We use a case...of construction to match the result of parse against these alternatives. If we get a Left value

(error), then we bind the error itself to err and return "No match" with the string representation of the error. If

we get a Right value, we bind it to val, ignore it, and return the string "Found value".

The case...of construction is an example of pattern matching, which we will see in much greater detail

later on.


Finally, we need to change our main function to call readExpr and print out the result:

main :: IO ()

main = do args <- getArgs
          putStrLn (readExpr (args !! 0))

To compile and run this, you need to specify "-package parsec" on the command line, or else there will be

link errors. For example:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser listing3.1.hs

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser $

Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser a

No match: "lisp" (line 1, column 1):

unexpected "a"

Whitespace

Next, we'll add a series of improvements to our parser that'll let it recognize progressively more complicated

expressions. The current parser chokes if there's whitespace preceding our symbol:

No match: "lisp" (line 1, column 1):

unexpected " "

First, let's define a parser that recognizes any number of whitespace characters. Incidentally, this is why we

included the "hiding (spaces)" clause when we imported Parsec: there's already a function "spaces (http://

www.cs.uu.nl/~daan/download/parsec/parsec.html#spaces) " in that library, but it doesn't quite do what we

want it to. (For that matter, there's also a parser called lexeme (http://www.cs.uu.nl/~daan/download/parsec/

parsec.html#lexeme) that does exactly what we want, but we'll ignore that for pedagogical purposes.)

spaces :: Parser ()

spaces = skipMany1 space

Just as functions can be passed to functions, so can actions. Here we pass the Parser action space (http://

www.cs.uu.nl/~daan/download/parsec/parsec.html#space) to the Parser action skipMany1 (http://

www.cs.uu.nl/~daan/download/parsec/parsec.html#skipMany1) , to get a Parser that will recognize one or

more spaces.

Now, let's edit our parse function so that it uses this new parser. Changes are in red:


readExpr input = case parse (spaces >> symbol) "lisp" input of
    Left err -> "No match: " ++ show err
    Right val -> "Found value"

We touched briefly on the >> ("bind") operator in lesson 2, where we mentioned that it was used behind the

scenes to combine the lines of a do-block. Here, we use it explicitly to combine our whitespace and symbol

parsers. However, bind has completely different semantics in the Parser and IO monads. In the Parser

monad, bind means "Attempt to match the first parser, then attempt to match the second with the remaining

input, and fail if either fails." In general, bind will have wildly different effects in different monads; it's

intended as a general way to structure computations, and so needs to be general enough to accommodate all the

different types of computations. Read the documentation for the monad to figure out precisely what it does.

Compile and run this code. Note that since we defined spaces in terms of skipMany1, it will no longer

recognize a plain old single character. Instead you have to precede a symbol with some whitespace. We'll see

how this is useful shortly:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser listing3.2.hs

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser " %"
Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser %

No match: "lisp" (line 1, column 1):

unexpected "%"

expecting space

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser " abc"

No match: "lisp" (line 1, column 4):

unexpected "a"

expecting space

Return Values

Right now, the parser doesn't do much of anything - it just tells us whether a given string can be recognized or

not. Generally, we want something more out of our parsers: we want them to convert the input into a data

structure that we can traverse easily. In this section, we learn how to define a data type, and how to modify

our parser so that it returns this data type.

First, we need to define a data type that can hold any Lisp value:

data LispVal = Atom String

| List [LispVal]

| DottedList [LispVal] LispVal

| Number Integer

| String String

| Bool Bool


This is an example of an algebraic data type: it defines a set of possible values that a variable of type

LispVal can hold. Each alternative (called a constructor and separated by |) contains a tag for the constructor

along with the type of data that that constructor can hold. In this example, a LispVal can be:

1. An Atom, which stores a String naming the atom

2. A List, which stores a list of other LispVals (Haskell lists are denoted by brackets)

3. A DottedList, representing the Scheme form (a b . c). This stores a list of all elements but the last, and

then stores the last element as another field

4. A Number, containing a Haskell Integer

5. A String, containing a Haskell String

6. A Bool, containing a Haskell boolean value

Constructors and types have different namespaces, so you can have both a constructor named String and a

type named String. Both types and constructor tags always begin with capital letters.

Next, let's add a few more parsing functions to create values of these types. A string is a double quote mark,

followed by any number of non-quote characters, followed by a closing quote mark:

parseString :: Parser LispVal
parseString = do char '"'

x <- many (noneOf "\"")

char '"'

return $ String x

We're back to using the do-notation instead of the >> operator. This is because we'll be retrieving the value of

our parse (returned by many (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#many) (noneOf (http://

www.cs.uu.nl/~daan/download/parsec/parsec.html#noneOf) "\"")) and manipulating it, interleaving some

other parse operations in the meantime. In general, use >> if the actions don't return a value, >>= if you'll be

immediately passing that value into the next action, and do-notation otherwise.

Once we've finished the parse and have the Haskell String returned from many, we apply the String

constructor (from our LispVal data type) to turn it into a LispVal. Every constructor in an algebraic data type

also acts like a function that turns its arguments into a value of its type. It also serves as a pattern that can be

used in the left-hand side of a pattern-matching expression; we saw an example of this in Lesson

3.1 when we matched our parser result against the two constructors in the Either data type.

We then apply the built-in function return (http://www.haskell.org/onlinereport/standard-prelude.html#tMonad) to lift our LispVal into the Parser monad. Remember, each line of a do-block must have the same

type, but the result of our String constructor is just a plain old LispVal. Return lets us wrap that up in a Parser

action that consumes no input but returns it as the inner value. Thus, the whole parseString action will have

type Parser LispVal.

The $ operator is infix function application: it's the same as if we'd written return (String x), but $ is right-

associative, letting us eliminate some parentheses. Since $ is an operator, you can do anything with it that

you'd normally do to a function: pass it around, partially apply it, etc. In this respect, it functions like the Lisp

function apply (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.4) .
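A tiny illustration of $, outside the parser (hypothetical values):

```haskell
main :: IO ()
main = do
    print (negate (3 + 4))  -- -7, with explicit parentheses
    print (negate $ 3 + 4)  -- -7, $ makes the parentheses unnecessary
```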

An atom (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-5.html#%_sec_2.1) is a letter or symbol, followed by any number of letters, digits, or

symbols:


parseAtom :: Parser LispVal
parseAtom = do first <- letter <|> symbol

rest <- many (letter <|> digit <|> symbol)

let atom = [first] ++ rest

return $ case atom of

"#t" -> Bool True

"#f" -> Bool False

otherwise -> Atom atom

Here, we introduce another Parsec combinator, the choice operator <|> (http://www.cs.uu.nl/~daan/download/

parsec/parsec.html#or) . This tries the first parser, then if it fails, tries the second. If either succeeds, then it

returns the value returned by that parser. The first parser must fail before it consumes any input: we'll see

later how to implement backtracking.

Once we've read the first character and the rest of the atom, we need to put them together. The "let" statement

defines a new variable "atom". We use the list concatenation operator ++ for this. Recall that first is just a

single character, so we convert it into a singleton list by putting brackets around it. If we'd wanted to create a

list containing many elements, we need only separate them by commas.

Then we use a case statement to determine which LispVal to create and return, matching against the literal

strings for true and false. The otherwise alternative is a readability trick: it binds a variable named otherwise,

whose value we ignore, and then always returns the value of atom.

Finally, we create one more parser, for numbers. This shows one more way of dealing with monadic values:

parseNumber :: Parser LispVal
parseNumber = liftM (Number . read) $ many1 digit

It's easiest to read this backwards, since both function application ($) and function composition (.) associate

to the right. The parsec combinator many1 (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#many1)

matches one or more of its argument, so here we're matching one or more digits. We'd like to construct a

number LispVal from the resulting string, but we have a few type mismatches. First, we use the built-in

function read (http://www.haskell.org/onlinereport/standard-prelude.html#$vread) to convert that string into a

number. Then we pass the result to Number to get a LispVal. The function composition operator "." creates a

function that applies its right argument and then passes the result to the left argument, so we use that to

combine the two function applications.

Unfortunately, the result of many1 digit is actually a Parser String, so our combined Number . read still can't

operate on it. We need a way to tell it to just operate on the value inside the monad, giving us back a Parser

LispVal. The standard function liftM does exactly that, so we apply liftM to our Number . read function, and

then apply the result of that to our Parser.

We also have to import the Monad module up at the top of our program to get access to liftM:


import Monad

This style of programming - relying heavily on function composition, function application, and passing

functions to functions - is very common in Haskell code. It often lets you express very complicated

algorithms in a single line, breaking down intermediate steps into other functions that can be combined in

various ways. Unfortunately, it means that you often have to read Haskell code from right-to-left and keep

careful track of the types. We'll be seeing many more examples throughout the rest of the tutorial, so

hopefully you'll get pretty comfortable with it.

parseExpr :: Parser LispVal
parseExpr = parseAtom

<|> parseString

<|> parseNumber

readExpr input = case parse parseExpr "lisp" input of

Left err -> "No match: " ++ show err

Right _ -> "Found value"

Compile and run this code, and you'll notice that it accepts any number, string, or symbol, but rejects other

inputs:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser listing3.3.hs

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "\"this is a string\""

Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser 25
Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser symbol

Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser (symbol)

bash: syntax error near unexpected token `symbol'

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(symbol)"

No match: "lisp" (line 1, column 1):

unexpected "("

expecting letter, "\"" or digit

Exercises

1. Rewrite parseNumber using:

   1. do-notation

   2. explicit sequencing with the >>= (http://www.haskell.org/onlinereport/standard-prelude.html#tMonad) operator

2. Our strings aren't quite R5RS compliant (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.3.5),

because they don't support escaping of internal quotes within the string.

Change parseString so that \" gives a literal quote character instead of

terminating the string. You may want to replace noneOf "\"" with a new

parser action that accepts either a non-quote character or a backslash

followed by a quote mark.

3. Modify the previous exercise to support \n, \r, \t, \\, and any other desired

escape characters

4. Change parseNumber to support the Scheme standard for different bases

(http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-

9.html#%_sec_6.2.4) . You may find the readOct and readHex (http://

www.haskell.org/onlinereport/numeric.html#sect14) functions useful.

5. Add a Character constructor to LispVal, and create a parser for character

literals (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-

Z-H-9.html#%_sec_6.3.4) as described in R5RS.

6. Add a Float constructor to LispVal, and support R5RS syntax for decimals

(http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-

9.html#%_sec_6.2.4) . The Haskell function readFloat (http://

www.haskell.org/onlinereport/numeric.html#sect14) may be useful.

7. Add data types and parsers to support the full numeric tower (http://

www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#

%_sec_6.2.1) of Scheme numeric types. Haskell has built-in types to

represent many of these; check the Prelude (http://www.haskell.org/

onlinereport/standard-prelude.html#$tNum) . For the others, you can define

compound types that represent e.g. a Rational as a numerator and

denominator, or a Complex as a real and imaginary part (each itself a Real

number).

Next, we add a few more parser actions to our interpreter. Start with the parenthesized lists that make Lisp

famous:

parseList :: Parser LispVal
parseList = liftM List $ sepBy parseExpr spaces

This works analogously to parseNumber, first parsing a series of expressions separated by whitespace (sepBy

parseExpr spaces) and then applying the List constructor to it within the Parser monad. Note too that we can

pass parseExpr to sepBy (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#sepBy) , even though it's

an action we wrote ourselves.

The dotted-list parser is somewhat more complex, but still uses only concepts that we're already familiar

with:

parseDottedList :: Parser LispVal
parseDottedList = do
    head <- endBy parseExpr spaces
    tail <- char '.' >> spaces >> parseExpr
    return $ DottedList head tail

Note how we can sequence together a series of Parser actions with >> and then use the whole sequence on

the right hand side of a do-statement. The expression char '.' >> spaces returns a Parser (), then combining

that with parseExpr gives a Parser LispVal, exactly the type we need for the do-block.

Next, let's add support for the single-quote syntactic sugar of Scheme:

parseQuoted :: Parser LispVal
parseQuoted = do

char '\''

x <- parseExpr

return $ List [Atom "quote", x]

Most of this is fairly familiar stuff: it reads a single quote character, reads an expression and binds it to x,

and then returns (quote x), to use Scheme notation. The Atom constructor works like an ordinary function:

you pass it the String you're encapsulating, and it gives you back a LispVal. You can do anything with this

LispVal that you normally could, like put it in a list.

parseExpr = parseAtom
         <|> parseString
         <|> parseNumber
         <|> parseQuoted
         <|> do char '('
                x <- (try parseList) <|> parseDottedList
                char ')'
                return x

This illustrates one last feature of Parsec: backtracking. parseList and parseDottedList recognize identical

strings up to the dot; this breaks the requirement that a choice alternative may not consume any input before

failing. The try (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#try) combinator attempts to run the

specified parser, but if it fails, it backs up to the previous state. This lets you use it in a choice alternative

without interfering with the other alternative.
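Parsec's bare <|> commits to the first alternative as soon as it consumes any input; try is what restores full backtracking. The effect can be sketched with a tiny hand-rolled parser over Maybe, whose orElse always re-runs the second alternative on the original input, which is exactly the behaviour that try p <|> q recovers in Parsec. The names Mini, miniChar, andThen and orElse are invented for this sketch; this is not how Parsec is implemented.

```haskell
-- A toy parser: consume input, maybe produce a value and the rest.
newtype Mini a = Mini { runMini :: String -> Maybe (a, String) }

-- Match one exact character.
miniChar :: Char -> Mini Char
miniChar c = Mini $ \s -> case s of
  (x:xs) | x == c -> Just (x, xs)
  _               -> Nothing

-- Sequence two parsers, pairing their results.
andThen :: Mini a -> Mini b -> Mini (a, b)
andThen p q = Mini $ \s -> do
  (a, s')  <- runMini p s
  (b, s'') <- runMini q s'
  return ((a, b), s'')

-- Choice with full backtracking: if p fails, q is re-run on the
-- ORIGINAL input, even though p may already have consumed part of it.
-- This is what try buys you in Parsec, where plain <|> would refuse.
orElse :: Mini a -> Mini a -> Mini a
orElse p q = Mini $ \s -> case runMini p s of
  Just r  -> Just r
  Nothing -> runMini q s

-- "ab" or "ac": both start with 'a', like parseList vs parseDottedList.
ab, ac :: Mini (Char, Char)
ab = miniChar 'a' `andThen` miniChar 'b'
ac = miniChar 'a' `andThen` miniChar 'c'
```

Applied to the input "ac", the failed ab branch costs nothing: ac is simply re-run from the start, just as try parseList <|> parseDottedList re-parses from after the opening parenthesis.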

listing3.4.hs

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a test)"

Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a (nested) test)"
Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a (dotted . list) test)"

Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a '(quoted (dotted . list)) test)"

Found value

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a '(imbalanced parens)"
No match: "lisp" (line 1, column 24):

unexpected end of input

expecting space or ")"

Note that by referring to parseExpr within our parsers, we can nest them arbitrarily deep. Thus, we get a full

Lisp reader with only a few definitions. That's the power of recursion.

Exercises

1. Add support for the backquote (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-7.html#%_sec_4.2.6) syntactic sugar: the Scheme standard details what it should expand into (quasiquote/unquote).

2. Add support for vectors (http://www.schemers.org/Documents/Standards/

R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.3.6) . The Haskell representation

is up to you: GHC does have an Array (http://www.haskell.org/ghc/docs/

latest/html/libraries/base/Data-Array.html) data type, but it can be difficult

to use. Strictly speaking, a vector should have constant-time indexing and

updating, but destructive update in a purely functional language is difficult.

You may have a better idea how to do this after the section on set!, later in

this tutorial.

3. Instead of using the try combinator, left-factor the grammar so that the

common subsequence is its own parser. You should end up with a parser

that matches a string of expressions, and one that matches either nothing or

a dot and a single expression. Combining the return values of these into
either a List or a DottedList is left as a (somewhat tricky) exercise for the
reader: you may want to break it out into another helper function.

Generic monads

Write me: The idea is that this section can show some of the benefits of not tying yourself to one single

monad, but writing your code for any arbitrary monad m. Maybe run with the idea of having some

elementary monad, and then deciding it's not good enough, so replacing it with a fancier one... and

then deciding you need to go even further and just plug in a monad transformer

newtype Id a = Id a

instance Monad Id where
    (>>=) (Id x) f = f x
    return = Id

instance Show a => Show (Id a) where
    show (Id x) = show x

In another File

import Identity

type M = Id


my_fib = my_fib_acc 0 1

my_fib_acc _ fn1 1 = return fn1

my_fib_acc fn2 _ 0 = return fn2

my_fib_acc fn2 fn1 n_rem = do
    val <- my_fib_acc fn1 (fn2+fn1) (n_rem - 1)
    return val
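Collected into one self-contained file, and updated for the Functor/Applicative/Monad hierarchy that a modern GHC requires, the Id experiment might look like this sketch:

```haskell
newtype Id a = Id a

instance Functor Id where
  fmap f (Id x) = Id (f x)

instance Applicative Id where
  pure = Id
  Id f <*> Id x = Id (f x)

instance Monad Id where
  Id x >>= f = f x

-- show needs a Show constraint on the wrapped value
instance Show a => Show (Id a) where
  show (Id x) = show x

type M = Id

my_fib :: Integer -> M Integer
my_fib = my_fib_acc 0 1

-- accumulator-style Fibonacci, written against M rather than Id directly
my_fib_acc :: Integer -> Integer -> Integer -> M Integer
my_fib_acc _   fn1 1 = return fn1
my_fib_acc fn2 _   0 = return fn2
my_fib_acc fn2 fn1 n_rem = do
  val <- my_fib_acc fn1 (fn2 + fn1) (n_rem - 1)
  return val
```

Because the body only ever uses return and >>=, swapping the type alias M is the only change needed to run the same code in another monad.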

This doesn't seem to accomplish much, but it allows you to add debugging facilities to a part of your program on the fly. As long as you've used return instead of explicit Id constructors, you can drop in the following monad:

module PMD (Pmd(Pmd)) where --PMD = Poor Man's Debugging, Now available for haskell

import IO

newtype Pmd a = Pmd (a, IO ())

instance Monad Pmd where
    (>>=) (Pmd (x, prt)) f = let (Pmd (v, prt')) = f x
                             in Pmd (v, prt >> prt')
    return x = Pmd (x, return ())

instance Show a => Show (Pmd a) where
    show (Pmd (x, _)) = show x

import Identity

import PMD

import IO

type M = Pmd

...

my_fib_acc :: Integer -> Integer -> Integer -> M Integer

my_fib_acc _ fn1 1 = return fn1

my_fib_acc fn2 _ 0 = return fn2

my_fib_acc fn2 fn1 n_rem = do
    val <- my_fib_acc fn1 (fn2+fn1) (n_rem - 1)
    Pmd (val, putStrLn (show fn1))

All we had to change were the lines where we wanted to print something for debugging, plus some code wherever we extracted the value from the Id monad, to execute the resulting IO (). Something like

main :: IO ()
main = do
    let (Id f25) = my_fib 25
    putStrLn ("f25 is: " ++ show f25)

main :: IO ()
main = do
    let (Pmd (f25, prt)) = my_fib 25
    prt
    putStrLn ("f25 is: " ++ show f25)

for the Pmd monad. Notice that we didn't have to touch any of the functions that we weren't debugging.
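For reference, here is the whole Pmd experiment as one self-contained sketch, with the Functor and Applicative instances a modern GHC requires; apart from that boilerplate, everything follows the code above:

```haskell
-- Poor Man's Debugging: a value paired with the IO action
-- that prints whatever we asked to trace along the way.
newtype Pmd a = Pmd (a, IO ())

instance Functor Pmd where
  fmap f (Pmd (x, prt)) = Pmd (f x, prt)

instance Applicative Pmd where
  pure x = Pmd (x, return ())
  Pmd (f, prt) <*> Pmd (x, prt') = Pmd (f x, prt >> prt')

instance Monad Pmd where
  Pmd (x, prt) >>= f = let Pmd (v, prt') = f x
                       in Pmd (v, prt >> prt')   -- accumulate the output

instance Show a => Show (Pmd a) where
  show (Pmd (x, _)) = show x

my_fib :: Integer -> Pmd Integer
my_fib = my_fib_acc 0 1

my_fib_acc :: Integer -> Integer -> Integer -> Pmd Integer
my_fib_acc _   fn1 1 = return fn1
my_fib_acc fn2 _   0 = return fn2
my_fib_acc fn2 fn1 n_rem = do
  val <- my_fib_acc fn1 (fn2 + fn1) (n_rem - 1)
  Pmd (val, putStrLn (show fn1))   -- the one debugging line
```

Running the main shown above against this file prints each intermediate fn1 before the final answer.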

Advanced Haskell

Arrows

Introduction

Arrows are a generalization of monads. They can do everything monads can do, and more. They serve much

the same purpose as monads -- providing a common structure for libraries -- but are more general. In

particular they allow notions of computation that may be partially static (independent of the input) or may

take multiple inputs. If your application works fine with monads, you might as well stick with them. But if

you're using a structure that's very like a monad, but isn't one, maybe it's an arrow.

Let's begin by getting to grips with the arrows notation. We'll work with the simplest possible arrow there is (the function) and build some toy programs, strictly with the aim of getting acquainted with the syntax.

Fire up your text editor and create a Haskell file, say toyArrows.hs:

idA :: a -> a

idA = proc a -> returnA -< a

plusOne = proc a -> returnA -< (a+1)

These are our first two arrows. The first is the identity function in arrow form, and the second, slightly more

exciting, is an arrow that adds one to its input. Load this up in GHCi, using the -farrows extension and see

what happens.

___ ___ _

/ _ \ /\ /\/ __(_)

/ /_\// /_/ / / | | GHC Interactive, version 6.4.1, for Haskell 98.


\____/\/ /_/\____/|_| Type :? for help.

Compiling Main ( toyArrows.hs, interpreted )

Ok, modules loaded: Main.

*Main> idA 3

3

*Main> idA "foo"

"foo"

*Main> plusOne 3

4

*Main> plusOne 100

101

Thrilling indeed. Up to now, we have seen three new constructs in the arrow notation:

the keyword proc

the operator -<

the imported function returnA

Now that we know how to add one to a value, let's try something twice as difficult: adding TWO:

plusTwo = proc a -> plusOne -< (a+1)

One simple approach is to feed (a+1) as input into the plusOne arrow. Note the similarity between

plusOne and plusTwo. You should notice that there is a basic pattern here which goes a little something

like this: proc FOO -> SOME_ARROW -< (SOMETHING_WITH_FOO)
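These toy arrows really do compile and run; here is a minimal self-contained file, assuming a GHC where the old -farrows flag is spelled -XArrows (or the LANGUAGE pragma below):

```haskell
{-# LANGUAGE Arrows #-}

import Control.Arrow (returnA)

-- identity and successor, written in arrow notation over plain functions
idA :: a -> a
idA = proc a -> returnA -< a

plusOne :: Int -> Int
plusOne = proc a -> returnA -< (a + 1)

-- feed (a+1) into the plusOne arrow: the proc FOO -> ARROW -< EXPR pattern
plusTwo :: Int -> Int
plusTwo = proc a -> plusOne -< (a + 1)
```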

Exercises

1. plusOne is an arrow, so by the pattern above returnA must be an arrow too. What do you think returnA does?

do notation

Our current implementation of plusTwo is rather disappointing actually... shouldn't it just be plusOne

twice? We can do better, but to do so, we need to introduce the do notation:

plusTwoBis =
    proc a -> do b <- plusOne -< a
                 plusOne -< b

Prelude> :r

Compiling Main ( toyArrows.hs, interpreted )

Ok, modules loaded: Main.


*Main> plusTwoBis 5

7

You can use this do notation to build up sequences as long as you would like:

plusFive =
    proc a -> do b <- plusOne -< a
                 c <- plusOne -< b
                 d <- plusOne -< c
                 e <- plusOne -< d
                 plusOne -< e

FIXME: I'm no longer sure, but I believe the intention here was to show the difference between this proc notation and just a regular chain of dos

Understanding arrows

We have permission to import material from the Haskell arrows page (http://www.haskell.org/arrows) . See

the talk page for details.

In this tutorial, we shall present arrows from the perspective of stream processors, using the factory metaphor

from the monads module as a support. Let's get our hands dirty right away.

You are a factory owner, and as before you own a set of processing machines. Processing machines are just

a metaphor for functions; they accept some input and produce some output. Your goal is to combine these

processing machines so that they can perform richer, and more complicated tasks. Monads allow you to

combine these machines in a pipeline. Arrows allow you to combine them in more interesting ways. The

result of this is that you can perform certain tasks in a less complicated and more efficient manner.

In a monadic factory, we took the approach of wrapping the outputs of our machines in containers. The

arrow factory takes a completely different route: rather than wrapping the outputs in containers, we wrap the

machines themselves. More specifically, in an arrow factory, we attach a pair of conveyor belts to each

machine, one for the input and one for the output.

So given a function of type b -> c, we can construct an equivalent arrow by attaching a b and a c conveyor belt to the machine. The equivalent arrow is of type a b c, which we can pronounce as "an arrow a from b to c".


Plethora of robots

We mentioned earlier that arrows give you more ways to combine machines together than monads did.

Indeed, the arrow type class provides six distinct robots (compared to the two you get with monads).

arr

The simplest robot is arr with the type signature arr :: (b -> c) -> a b c. In other words, the arr

robot takes a processing machine of type b -> c, and adds conveyor belts to form an a arrow from b to c.

(>>>)

The next, and probably the most important, robot is (>>>). This is basically the arrow equivalent to the

monadic bind robot (>>=). The arrow version of bind (>>>) puts two arrows into a sequence. That is, it

connects the output conveyor belt of the first arrow to the input conveyor belt of the second one.

What we get out of this is a new arrow. One consideration to make, though, is what input and output types our arrows may take. Since we're connecting the output conveyor belt of the first arrow to the input conveyor belt of the second, the second arrow must accept the same kind of input as what the first arrow outputs. If the first arrow

is of type a b c, the second arrow must be of type a c d. Here is the same diagram as above, but with

things on the conveyor belts to help you see the issue with types.
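Since plain functions are themselves arrows, you can try (>>>) from Control.Arrow directly; for functions it is just composition with the arguments flipped. A small sketch (the name addThenDouble is ours):

```haskell
import Control.Arrow ((>>>))

-- Connect the output belt of (+ 1) to the input belt of (* 2).
-- The first arrow outputs Int, so the second must accept Int.
addThenDouble :: Int -> Int
addThenDouble = (+ 1) >>> (* 2)
```

addThenDouble 3 first adds one, then doubles, giving 8.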

Exercises

What is the type of the combined arrow?

first


Up to now, our arrows can only do the same things that monads can. Here is where things get interesting!

The arrows type class provides functions which allow arrows to work with pairs of input. As we will see

later on, this leads us to be able to express parallel computation in a very succinct manner. The first of these

functions, naturally enough, is first.

If you are skimming this tutorial, it is probably a good idea to slow down at least in this section, because the

first robot is one of the things that makes arrows truly useful.

Given an arrow f, the first robot attaches some conveyor belts and extra machinery to form a new, more

complicated arrow. The machines that bookend the input arrow split the input pairs into their component

parts, and put them back together. The idea behind this is that the first part of every pair is fed into f, whilst the second part is passed through on an empty conveyor belt. When everything is put back together, we have the same pairs that we fed in, except that the first part of every pair has been replaced by an equivalent

output from f.

Now the question we need to ask ourselves is that of types. Say that the input tuples are of type (b,d) and

the input arrow is of type a b c (that is, it is an arrow from b to c). What is the type of the output? Well,

the arrow converts all bs into cs, so when everything is put back together, the type of the output must be

(c,d).
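Taking a = (->) again, first from Control.Arrow behaves exactly as described: input tuples of type (b, d) come out as (c, d). A quick sketch (bumpFst is our name):

```haskell
import Control.Arrow (first)

-- (Int, String) pairs in, (Int, String) pairs out:
-- (+ 1) runs on the first component, the second rides through untouched.
bumpFst :: (Int, String) -> (Int, String)
bumpFst = first (+ 1)
```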

Exercises

What is the type of the first robot?

second

If you understand the first robot, the second robot is a piece of cake. It does the same exact thing,

except that it feeds the second part of every input pair into the given arrow f instead of the first part.


What makes the second robot interesting is that it can be derived from the previous robots! Strictly

speaking, the only robots you need for arrows are arr, (>>>) and first. The rest can be had "for

free".

Exercises

1. Write a function to swap the two components of a tuple.
2. Combine this helper function with the robots arr, (>>>) and first to implement the second robot.

***

One of the selling points of arrows is that you can use them to express parallel computation. The (***)

robot is just the right tool for the job. Given two arrows, f and g, the (***) robot combines them into a new arrow using the same bookend machines we saw in the previous two robots.

Conceptually, this isn't very much different from the robots first and second. As before, our new arrow

accepts pairs of inputs. It splits them up, sends them on to separate conveyor belts, and puts them back

together. The only difference here is that, rather than having one arrow and one empty conveyor belt, we

have two distinct arrows. But why not?
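With functions as the arrow, (***) from Control.Arrow shows the two-machines-side-by-side picture directly (the name both is ours):

```haskell
import Control.Arrow ((***))

-- two distinct machines, one per component of the pair
both :: (Int, Int) -> (Int, Int)
both = (+ 1) *** (* 2)
```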


Exercises

1. What is the type of the (***) robot?
2. Given the (>>>), first and second robots, implement the (***)

robot.

&&&

The final robot in the Arrow class is very similar to the (***) robot, except that the resulting arrow accepts

a single input and not a pair. Yet, the rest of the machine is exactly the same. How can we work with two

arrows, when we only have one input to give them?

The answer is simple: we clone the input and feed a copy into each machine!
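Again over plain functions, (&&&) from Control.Arrow makes the cloning visible: one input goes in, and each machine gets a copy (fanout is our name):

```haskell
import Control.Arrow ((&&&))

-- clone the single input and feed a copy into each machine
fanout :: Int -> (Int, Int)
fanout = (+ 1) &&& (* 2)
```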

Exercises

1. Write a simple function to clone a value into a pair.
2. Using your cloning function, as well as the robots arr, (>>>) and ***,

implement the &&& robot

3. Similarly, rewrite the following function without using &&&:


Now that we have presented the 6 arrow robots, we would like to make sure that you have a more solid grasp

of them by walking through a simple implementation of the Arrow class. As in the monadic world, there are

many different types of arrows. What is the simplest one you can think of? Functions.

Put concretely, the type constructor for functions (->) is an instance of Arrow:

arr f = f

f >>> g = g . f

first f = \(x,y) -> (f x, y)

arr - Converting a function into an arrow is trivial. In fact, the function already is an arrow.

(>>>) - we want to feed the output of the first function into the input of the second function. This is

nothing more than function composition.

first - this is a little more elaborate. Given a function f, we return a function which accepts a pair

of inputs (x,y), and runs f on x, leaving y untouched.

And that, strictly speaking, is all we need to have a complete arrow, but the arrow typeclass also allows you

to make up your own definition of the other three robots, so let's have a go at that:

second f = \(x,y) -> ( x, f y) -- like first

f *** g = \(x,y) -> (f x, g y) -- takes two arrows, and not just one

f &&& g = \x -> (f x, g x) -- feed the same input into both functions
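The definitions above can be exercised as ordinary functions; primed names are used here only so they cannot be confused with the real combinators from Control.Arrow:

```haskell
-- hand-rolled arrow combinators, specialised to plain functions
arr' :: (b -> c) -> (b -> c)
arr' f = f

comp' :: (a -> b) -> (b -> c) -> (a -> c)
comp' f g = g . f                      -- (>>>)

first' :: (b -> c) -> ((b, d) -> (c, d))
first' f = \(x, y) -> (f x, y)

second' :: (b -> c) -> ((d, b) -> (d, c))
second' f = \(x, y) -> (x, f y)

par' :: (b -> c) -> (d -> e) -> ((b, d) -> (c, e))
par' f g = \(x, y) -> (f x, g y)       -- (***)

fan' :: (b -> c) -> (b -> d) -> (b -> (c, d))
fan' f g = \x -> (f x, g x)            -- (&&&)
```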

Note that this is not the official instance of functions as arrows. You should take a look at the Haskell library (http://darcs.haskell.org/

packages/base/Control/Arrow.hs) if you want the real deal.

In the introductory Arrows chapter, we introduced the proc and -< notation. How does this tie in with all

the arrow robots we just presented? Sadly, it's a little bit less straightforward than do-notation, but let's have

a look.

Maybe functor

It turns out that any monad can be made into an arrow. We'll go into that later on, but for now, FIXME:

transition

Using arrows


At this point in the tutorial, you should have a strong enough grasp of the arrow machinery that we can start

to meaningfully tackle the question of what arrows are good for.

Stream processing

Avoiding leaks

Arrows were originally motivated by an efficient parser design found by Swierstra & Duponcheel.

To describe the benefits of their design, let's examine exactly how monadic parsers work.

If you want to parse a single word, you end up with several monadic parsers stacked end to end. Taking

Parsec as an example, the parser string "word" can also be viewed as

word = do char 'w' >> char 'o' >> char 'r' >> char 'd'
          return "word"

Each character is tried in order: if "worg" is the input, the first three parsers succeed and the last one fails, making the entire word parser fail.

If you want to parse one of two options, you create a new parser for each and they are tried in order. The first

one must fail and then the next will be tried with the same input.

To parse "c" successfully with the parser char 'a' <|> char 'b', both 'a' and 'b' must have been tried.

one = do char 'o' >> char 'n' >> char 'e'
         return "one"

two = do char 't' >> char 'w' >> char 'o'
         return "two"

three = do char 't' >> char 'h' >> char 'r' >> char 'e' >> char 'e'
           return "three"

nums = try one <|> try two <|> three

With these three parsers, you can't know that the string "four" will fail the parser nums until the last parser

has failed.

If one of the options can consume much of the input but will fail, you still must descend down the chain of

parsers until the final parser fails. All of the input that can possibly be consumed by later parsers must be

retained in memory in case one of them does consume it. That can lead to much more space usage than you

would naively expect; this is often called a space leak.

The general pattern of monadic parsers is that each option must fail or one option must succeed.

So what's better?

Swierstra & Duponcheel (1996) noticed that a smarter parser could immediately fail upon seeing the very

first character. For example, in the nums parser above, the choice of first letter parsers was limited to either

the letter 'o' for "one" or the letter 't' for both "two" and "three". This smarter parser would also be able to


garbage collect input sooner because it could look ahead to see if any other parsers might be able to consume

the input, and drop input that could not be consumed. This new parser is a lot like the monadic parsers with

the major difference that it exports static information. It's like a monad, but it also tells you what it can parse.

There's one major problem: this doesn't fit into the monadic interface. Monads are (a -> m b); they're based around functions only. There's no way to attach static information. You have only one choice: throw in some input and see if it passes or fails.

The monadic interface has been touted as a general purpose tool in the functional programming community,

so finding that there was some particularly useful code that just couldn't fit into that interface was something

of a setback. This is where Arrows come in. John Hughes's Generalising monads to arrows proposed the

arrows abstraction as a new, more flexible tool.

Let us examine Swierstra & Duponcheel's parser in greater detail, from the perspective of arrows. The parser

has two components: a fast, static parser which tells us if the input is worth trying to parse; and a slow,

dynamic parser which does the actual parsing work.

data StaticParser s = SP Bool [s]

newtype DynamicParser s a b = DP ((a,[s]) -> (b,[s]))

The static parser consists of a flag, which tells us if the parser can accept the empty input, and a list of

possible starting characters. For example, the static parser for a single character would be as follows:

spCharA c = SP False [c]

It does not accept the empty string (False) and the list of possible starting characters consists only of c.
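The point of the static half is that choice can be computed without parsing anything. A hedged sketch (orSP is our name, not from the paper; the data declaration is repeated with derived instances so the file stands alone):

```haskell
data StaticParser s = SP Bool [s] deriving (Eq, Show)

spCharA :: Char -> StaticParser Char
spCharA c = SP False [c]

-- Choice of two parsers, decided statically: the combination accepts
-- empty input if either branch does, and may start with any character
-- either branch may start with.
orSP :: StaticParser s -> StaticParser s -> StaticParser s
orSP (SP empty1 start1) (SP empty2 start2) =
  SP (empty1 || empty2) (start1 ++ start2)
```

A choice such as the nums parser could then reject "four" by a single lookup in the combined starting set, before any dynamic parser runs.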

The dynamic parser needs a little more dissecting: what we see is a function that goes from (a,[s]) to (b,[s]). It is useful to think in terms of sequencing two parsers: each parser consumes the result of the previous parser (a), along with the remaining bits of input stream ([s]); it does something with a to produce its own result b, consumes a bit of the string, and returns that. Ooof. So, as an example of this in action, consider

a dynamic parser (Int,String) -> (Int,String), where the Int represents a count of the

characters parsed
