
Table of Contents

Summary
Introduction 1.1
Chapters/En-Uk/What Is/What And Why Of The Book 1.2
Chapters/En-Uk/What Is/How Can I Be Pragmatic With My Testing 1.3
Chapters/En-Uk/Xctest/What Is Xctest How Does It Work 1.4
Chapters/En-Uk/Xctest/Types Of Testing 1.5
Chapters/En-Uk/Xctest/Unit Testing 1.6
Chapters/En-Uk/Xctest/Three Types Of Unit Tests 1.7
Chapters/En-Uk/Xctest/Behavior Testing 1.8
Chapters/En-Uk/Xctest/Test Driven Development 1.9
Chapters/En-Uk/Xctest/Integration Testing 1.10
Chapters/En-Uk/Foundations/Dependency Injection 1.11
Chapters/En-Uk/Foundations/Stubs Mocks And Fakes 1.12
Chapters/En-Uk/Oss Libs/Expanding On Bdd Frameworks 1.13
Chapters/En-Uk/Oss Libs/Mocking And Stubbing Ocmock And Ocmockito 1.14
Chapters/En-Uk/Oss Libs/Network Stubbing Ohttp And Vcrurlconnection 1.15
Chapters/En-Uk/Setup/How I Got Started 1.16
Chapters/En-Uk/Setup/Getting Setup 1.17
Chapters/En-Uk/Setup/Introducing Tests Into An Existing Application 1.18
Chapters/En-Uk/Setup/Starting A New Application And Using Tests 1.19
Chapters/En-Uk/Ops/Developer Operations Aka Automation 1.20
Chapters/En-Uk/Ops/Techniques For Keeping Testing Code Sane 1.21
Chapters/En-Uk/Ops/Creation Of App-Centric It Blocks 1.22
Chapters/En-Uk/Ops/Fixtures And Factories 1.23
Chapters/En-Uk/Async/Dispatch Asyncs Ar Dispatch Etc 1.24
Chapters/En-Uk/Async/Techniques For Getting Around Async Testing 1.25
Chapters/En-Uk/Async/Techniques For Getting Around Async Networking 1.26
Chapters/En-Uk/Async/Networking In View Controllers Network Models 1.27
Chapters/En-Uk/Async/Animations 1.28

Chapters/En-Uk/Async/Will And Xctest 6 1.29
Chapters/En-Uk/App Testing/Techniques For Testing Different Aspects Of The App 1.30
Chapters/En-Uk/App Testing/Views Snapshots 1.31
Chapters/En-Uk/App Testing/Scroll Views 1.32
Chapters/En-Uk/App Testing/User Interactions 1.33
Chapters/En-Uk/App Testing/Ipad And Iphone 1.34
Chapters/En-Uk/App Testing/Testing Delegates 1.35
Chapters/En-Uk/Core Data/Core Data 1.36
Chapters/En-Uk/Core Data/Core Data Migrations 1.37
Chapters/En-Uk/Prag Prog/Making Libraries To Get Annoying Tests Out Of Your App 1.38
Chapters/En-Uk/Prag Prog/Using Xcode Pragmatically 1.39
Chapters/En-Uk/Prag Prog/Improving Xcode 1.40
Chapters/En-Uk/Wrap Up/Books 1.41
Chapters/En-Uk/Wrap Up/Twitter Follows 1.42
Chapters/En-Uk/Wrap Up/Recommended Websites 1.43

Introduction

What is this book?


This is a book that aims to be a down-to-earth guide to testing iOS applications. It came out
of a long period of writing tests for multiple non-trivial apps whilst working at Artsy, all of
which are open source and available for inspection on the Artsy Open Source page.

I found very few consolidated resources for testing in iOS in general. A lot of the best
advice revolved around reading books about Java and C# and applying the techniques to
Objective-C projects. Like any creative output, this book does not live in a vacuum; the books
I used to get to this point are in the recommendations section.

Finally, this is not a generic iOS testing book. I will not be objective. This is a pragmatic book
from a pragmatic programmer known for making things, not architecting beautiful concepts.
There will be things you disagree with, and I'm of the "strong opinions, weakly held" camp, so
you're welcome to send me feedback as issues on orta/pragmatic-testing.

I treat this book very similarly to how I would a collection of smaller blog posts, so I aim to
have it well hyperlinked. There are a lot of great resources out there, and this book can send you
out into them. I'd rather not re-write someone else's work when I can quote it.

About the author, and contributors


I'm the head of mobile at Artsy, and I gave myself the title Design Dictator at the open source
project CocoaPods. My interest in testing was piqued when the entire Artsy mobile team became
just me, and I realized that I was going to get the blame for everything from that point forward.
Better to up my game.

There are a lot of times that I say "we", meaning the Artsy Mobile team. I don't think I would
be writing this book without these people contributing testing ideas to our codebase. Thanks,
Daniel Doubrovkine, Laura Brown, Ash Furrow, Eloy Durán, Sarah Scott, Maxim Cramer &
Dustin Barker. I owe you all.

Finally, I want to thank Danger. She gives me time and space to achieve great things. I
wouldn't be the person I am without her.

Who is it for?


Anyone interested in applying tests to iOS applications. Which hopefully should be a large
number of people. I'm trying to aim this book at myself back in 2012, new to the ideas of
testing and seeing a whole world of possibilities, but not knowing exactly where to start or
how to continue once I'd made one or two tests.

Swift or Objective-C?
It's easy to get caught up in what's new and shiny, but in reality there are a lot of existing
Objective-C code-bases out there. I will aim to cover both Swift and Objective-C. As
we have test-suites in both languages, some concepts work better in one language vs the
other. If you can only read one language, I'm not apologising for that. It's not pragmatic to
only focus on one language; a language is a language, and in time you should know as many as
possible.

Chapters/En-Uk/What Is/What And Why Of The Book

Chapters/En-Uk/What Is/How Can I Be Pragmatic With My Testing

What is Pragmatic Testing?


Testing is meant to help you sleep at night, and to feel nonchalant about shipping a build at
any time. At the same time, you want to be able to launch your test-suite and not feel like it's a major
chore. If you try to test every if-statement in your app, you're going to have a bad time.

My favourite WWDC video comes from 2011, now too old to be in the search index on the
website. It's called Writing Easy-To-Change Code: Your Second-Most Important Goal As A
Developer. It talks about all the different ways in which you can build your codebase to
evolve safely. Your tests shouldn't be getting in the way of your evolving product. Test
coverage is a constant battle between "perfect" and "enough"; Pragmatic Testing is
about siding with "enough" more often.

Sometimes you will need to say "I don't need to write tests for this." There will be times when
you'll be burned by that decision, and that's OK. It's much harder to simplify a time-
expensive test-suite, and it's extremely time-intensive to cover 100% of your codebase.
You're shipping apps, not codebases. At the end of the day, the tests are for developers,
even though they're very likely to increase the quality of the product.

An easy example is model objects: when you have objects with some simple functions
that will probably never change, it's not worth testing those functions. If you have functions that
do some complicated logic, you should make sure that's covered by multiple tests showing
different inputs and outputs.

Internally we talk about coverage using a painting analogy. There are some tools that
allow you to test a lot of logic at once, for example a snapshot test for a view controller. In
doing a snapshot test, you're testing: object initialization, viewDidLoad behavior,
viewDid/WillAppear behavior, subview layout and many more systems. On the other
hand you have unit tests, which are a much finer brush for covering the edge cases which
snapshots won't or can't cover.

By using multiple testing techniques, you can cover the shapes of your application, while still
having fewer places to change as you evolve your application.

Chapters/En-Uk/Xctest/What Is Xctest How Does It Work

What is the XCTest framework?


Now, by default, when you create a new Xcode project Apple creates a test target for you.
Testing has been a part of Xcode since OCUnit, the predecessor to XCTest, was included in
Xcode 2.1.

XCTest owes its architectural decisions to SUnit, the first testing framework, built for
Smalltalk apps. OCUnit is an Objective-C implementation of SUnit.

The xUnit format is quite simple. There are collections of test-suites, which contain test
cases, and test cases contain individual tests. A test runner is created which loops through all
suites and their test cases, and runs specific methods. If running a method raises an exception
then that test is considered a failure, and the runner moves on to the next method. The final
part is a logger to output the results.

In XCTest the convention is that you subclass an XCTestCase object, and the test runner will
call any method that begins with the word test, e.g. -(void)testImageSpecifiesAspectRatio.
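
If you've never thought about what such a runner looks like, here is a rough sketch of the
loop described above. This is not XCTest's real implementation; the RunSuite function and
the idea of handing it an array of test case classes are just for illustration.

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Loop through every test case class, run each method that begins with "test",
// and treat a raised exception as a failure for that test.
static void RunSuite(NSArray<Class> *testCaseClasses)
{
    for (Class testCase in testCaseClasses) {
        unsigned int methodCount = 0;
        Method *methods = class_copyMethodList(testCase, &methodCount);

        for (unsigned int i = 0; i < methodCount; i++) {
            SEL selector = method_getName(methods[i]);
            NSString *name = NSStringFromSelector(selector);
            if (![name hasPrefix:@"test"]) continue;

            id instance = [[testCase alloc] init];
            @try {
                #pragma clang diagnostic push
                #pragma clang diagnostic ignored "-Warc-performSelector-leaks"
                [instance performSelector:selector];
                #pragma clang diagnostic pop
                NSLog(@"PASS %@ %@", testCase, name);
            } @catch (NSException *exception) {
                NSLog(@"FAIL %@ %@ - %@", testCase, name, exception.reason);
            }
        }
        free(methods);
    }
}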

The actual implementation of XCTest works by creating a bundle, which can optionally be
injected into an application (Apple calls this hosting the tests). The bundle contains test
resources like dummy images or JSON, and your test code. There is a version of XCTest
open-sourced by Apple.

XCTest provides a series of macros or functions, based on OCUnit's, for raising an exception
when an expectation isn't met. For example XCTAssertEqual, XCTFail and
XCTAssertLessThanOrEqual. You can explore how the XCT* functions work in
the OSS XCTest.

Here's a real example of an XCTest case subclass taken from the Swift Package Manager:


import Utility
import XCTest

class ShellTests: XCTestCase {

    func testPopen() {
        XCTAssertEqual(try! popen(["echo", "foo"]), "foo\n")
    }

    func testPopenWithBufferLargerThanThatAllocated() {
        let path = Path.join(#file, "../../Get/DependencyGraphTests.swift").normpath
        XCTAssertGreaterThan(try! popen(["cat", path]).characters.count, 4096)
    }

    func testPopenWithBinaryOutput() {
        if (try? popen(["cat", "/bin/cat"])) != nil {
            XCTFail("popen succeeded but should have failed")
        }
    }
}

It has three tests, each testing its own expectations.

The first ensures that when popen(["echo", "foo"]) is called, it returns "foo\n"
The second ensures that when popen(["cat", path]) is called, it returns a number of
characters greater than 4096
Finally, the third one checks an expectation, and if it's wrong, it will fail the test.

What is the difference between hosted and unhosted test targets?

When talking pragmatically, we're really talking about writing tests against apps or libraries.
Depending on whether you have dependencies on Cocoa or UIKit, you end up having to
make a choice. Hosted, or not hosted.

The terminology has changed recently: a hosted test used to be known as an Application Test,
and an "unhosted" one as a Logic Test. The older terminology gives a better hint at how
the tests are run.

A hosted test is run inside your application after
application:didFinishLaunchingWithOptions: has finished. This means there is a fully
running application, and your tests run with that happening around them. This gives you access
to a graphics context, the application's bundle and other useful bits and pieces.

Un-hosted tests are useful if you're testing something very ephemeral/logical and relying
only on Foundation, but anything related to UIKit/Cocoa subclasses will eventually require
you to host the test bundle in an application. You'll see this come up every now and again
when setting up test-suites.


Further reading:

History of SenTestingKit on objc.io by Daniel Eggert
How XCTest works on objc.io by Daniel Eggert and Arne Schroppe

Chapters/En-Uk/Xctest/Types Of Testing

Types of Testing
A test is a way of verifying code, and making your assumptions explicit. Tests exist to show
connections between objects. When you make a change somewhere, tests reveal implicit
connections.

In the Ruby world, the tests are basically considered the application's documentation. They are
one of the first ports of call for understanding how a system works, after reading the
overview/README. It's quite similar with Node. Both languages are interpreted languages
where even variable name typos only show themselves as runtime errors. Without a
compiler, you can only catch this stuff with rigorous testing.

In Cocoa, this is less of a problem as we can rely on the tooling more. Given a compiler,
static type systems, and a well integrated static analyser -- you can know very quickly if your
code is going to break someone else's. However, this is only a lint; the code compiling
doesn't mean that it will still correctly show a view controller when you tap a button. In other
words, the behaviour of your app can still be wrong.

That is what testing is for. I will cover three major topics in this book. There are many, many
other types of testing patterns -- for example, last week I was introduced to Model Testing.
However, these three topics are the dominant patterns that exist within the Cocoa world, and
I have experience with them. Otherwise, I'd just be copying & pasting from Wikipedia.

Unit Testing
Unit testing is the idea of testing a specific unit of code. This can range from a function, to a
single Model, to a full UIViewController subclass. The idea is that you are testing and
verifying a unit of application code. Where you draw the boundary is really up to you, but for
pragmatic reasons we'll mostly stick to functions and objects because they easily match our
mental model of a "thing," that is, a Unit.

Integration Testing
Integration testing is a way of testing that your application code integrates well with external
systems. The most common example of integration testing in iOS apps is user interface
runners that emulate users tapping through your application.

Behavioural Testing


Behavioural testing is essentially a school of thought. The principles state that the way you
write and describe your tests should reflect the behaviours of your objects. There is a
controlled vocabulary using words like "it", "describe", "spec", "before" and "after", which
means that most BDD testing frameworks all read very similarly.

This school of thought has also brought about a different style of test-driving the design of
your app: instead of strictly focusing on a single unit and developing your app from the
inside-out, behaviour-driven development sometimes favours working outside-in. This style
could start with a failing integration test that describes a feature of the app and lead you to
discover new unit tests on the way to making the feature test "green." Of course it's up to
you how you utilize a BDD framework: the difference in typing test cases doesn't force you to
change your habits.

Chapters/En-Uk/Xctest/Unit Testing

Unit Testing
So, what is a unit of code? Well, that's subjective. I'd argue that it's anything you can easily,
reliably measure. A unit test can generally be considered something where you set things up
(arrange), then perform some action (act) and verify the end result (assert). It's like a science
experiment, kinda.

Arrange, Act, Assert.

We had some real-world examples from Swift Package Manager in the last chapter that
were too small for doing Arrange, Act, Assert, so let's use another example:

class WalkTests: XCTestCase {

    [...]

    func testRecursive() {
        let root = Path.join(#file, "../../../Sources").normpath
        var expected = [
            Path.join(root, "Build"),
            Path.join(root, "Utility")
        ]

        for x in walk(root) {
            if let i = expected.indexOf(x) {
                expected.removeAtIndex(i)
            }
        }

        XCTAssertEqual(expected.count, 0)
    }
    [...]
}

In the first section, the author arranges the data models as they expect them. Then in the
for loop they act on that data. This is the unit of code being tested. Finally, they assert that the
code has had the expected result -- in this case, that the expected array has become empty.

Using the Arrange, Act, Assert methodology, it becomes very easy to structure your test
cases. It makes tests obvious to other practitioners, and you can spot code smells more
quickly by seeing large amounts of code in your arrange or assert sections.

One Test, One Unit


In the perfect theoretical world, every test case would be a single logical test of a unit of
code. We're not talking theory here though, so from my perspective, this is a great unit test:


func testParentDirectory() {
    XCTAssertEqual("foo/bar/baz".parentDirectory, "foo/bar")
    XCTAssertEqual("foo/bar/baz".parentDirectory.parentDirectory, "foo")
    XCTAssertEqual("/bar".parentDirectory, "/")
    XCTAssertEqual("/".parentDirectory.parentDirectory, "/")
    XCTAssertEqual("/bar/../foo/..//".parentDirectory.parentDirectory, "/")
}

It shows a wide array of inputs, and their expected outputs for the same function. Someone
looking over these tests would have a better idea of what parentDirectory does, and it
covers a bunch of use cases. Great.

Note: the Quick documentation really shines on ArrangeActAssert - I would strongly
recommend giving it a quick browse.

Further reading:

Quick/Quick documentation: ArrangeActAssert.md

Chapters/En-Uk/Xctest/Three Types Of Unit Tests

The Three Types of Unit Tests


There are commonly three types of Unit Tests; we'll be taking the examples directly from the
source code of Eigen. Let's go:

Return Value
it(@"sets up its properties upon initialization", ^{
// Arrange + Act
ARShowNetworkModel *model = [[ARShowNetworkModel alloc] initWithFair:fair show:show]
;

// Assert
expect(model.show).to.equal(show);
});

ARShowNetworkModelTests.m

You can set up the subject of the test, make a change to it, and check that the return value of a
function is what you expect. This is what you think of when you start writing tests, and
inevitably Model objects are really easy to cover this way due to their ability to hold data and
convert that to information.

State
it(@"changes selected to deselected", ^{
// Arrange
ARAnimatedTickView *tickView = [[ARAnimatedTickView alloc] initWithSelection:YES];

// Act
[tickView setSelected:NO animated:NO];

/// Assert
expect(tickView).to.haveValidSnapshotNamed(@"deselected");
});

ARAnimatedTickViewTest.m

State tests work by querying the subject. In this case we're using snapshots to verify
that the visual end result is as we expect it to be.


These tests can be a little bit more tricky than straight return-value tests, as they may require
some kind of indirection depending on the public API for an object.

Interaction Tests
An interaction test is more tricky because it usually involves more than just one subject. The
idea is that you want to test how a cluster of objects interact.

it(@"adds Twitter handle for Twitter", ^{

// Arrange
provider = [[ARMessageItemProvider alloc] initWithMessage:placeHolderMessage path:pa
th];

// Act
providerMock = [OCMockObject partialMockForObject:provider];
[[[providerMock stub] andReturn:UIActivityTypePostToTwitter] activityType];

// Assert
expect([provider item]).to.equal(@"So And So on @Artsy");
});

ARMessageItemProviderTests.m

In this case, to test the interaction between the ARMessageItemProvider and the
activityType, we need to mock out a section of the code that does not belong to the
domain we are testing.

Full Details
There is a talk by Jon Reid of qualitycoding.org on this topic that is really the definitive guide
to understanding how you can test a unit of code.

TODO: Get Jon Reid's MCE talk video
TODO: Re-watch it and flesh this out a bit more

Chapters/En-Uk/Xctest/Behavior Testing

Behaviour Driven Development


Behaviour Driven Development (BDD) is something that grew out of Test Driven
Development (TDD). TDD is a practice and BDD expands on it, but BDD is really only about
trying to provide a consistent vocabulary for how tests are described.

This is easier to think about when you compare the same tests written in plain XCTest (which
has its own structure for writing tests) with a move towards a BDD approach. So let's take
some tests from the Swift Package Manager's ModuleTests.swift, which uses plain old XCTest.


class ModuleTests: XCTestCase {

    func test1() {
        let t1 = Module(name: "t1")
        let t2 = Module(name: "t2")
        let t3 = Module(name: "t3")

        t3.dependsOn(t2)
        t2.dependsOn(t1)

        XCTAssertEqual(t3.recursiveDeps, [t2, t1])
        XCTAssertEqual(t2.recursiveDeps, [t1])
    }

    func test2() {
        let t1 = Module(name: "t1")
        let t2 = Module(name: "t2")
        let t3 = Module(name: "t3")
        let t4 = Module(name: "t4")

        t4.dependsOn(t2)
        t4.dependsOn(t3)
        t4.dependsOn(t1)
        t3.dependsOn(t2)
        t3.dependsOn(t1)
        t2.dependsOn(t1)

        XCTAssertEqual(t4.recursiveDeps, [t3, t2, t1])
        XCTAssertEqual(t3.recursiveDeps, [t2, t1])
        XCTAssertEqual(t2.recursiveDeps, [t1])
    }

    func test3() {
        let t1 = Module(name: "t1")
        let t2 = Module(name: "t2")
        let t3 = Module(name: "t3")
        let t4 = Module(name: "t4")

        t4.dependsOn(t1)
        t4.dependsOn(t2)
        t4.dependsOn(t3)
        t3.dependsOn(t2)
        t3.dependsOn(t1)
        t2.dependsOn(t1)

        [...]
        // This pattern of adding an extra dependsOn,
        // and new tXs, continues until it gets to test6
    }
}


Note: they are split up into Arrange, Act, Assert, but they do an awful lot of repeating
themselves. As test bases get bigger, maybe it makes sense to start trying to split out some
of the logic in your tests.

So what if we moved some of the logic inside each test out, into a section before? This
simplifies our tests, and allows each test to have more focus.

class ModuleTests: XCTestCase {

    let t1 = Module(name: "t1")
    let t2 = Module(name: "t2")
    let t3 = Module(name: "t3")
    let t4 = Module(name: "t4")

    func test1() {
        t3.dependsOn(t2)
        t2.dependsOn(t1)

        XCTAssertEqual(t3.recursiveDeps, [t2, t1])
        XCTAssertEqual(t2.recursiveDeps, [t1])
    }

    func test2() {
        t4.dependsOn(t2)
        t4.dependsOn(t3)
        t4.dependsOn(t1)
        t3.dependsOn(t2)
        t3.dependsOn(t1)
        t2.dependsOn(t1)

        XCTAssertEqual(t4.recursiveDeps, [t3, t2, t1])
        XCTAssertEqual(t3.recursiveDeps, [t2, t1])
        XCTAssertEqual(t2.recursiveDeps, [t1])
    }

    func test3() {
        t4.dependsOn(t1)
        t4.dependsOn(t2)
        t4.dependsOn(t3)
        t3.dependsOn(t2)
        t3.dependsOn(t1)
        t2.dependsOn(t1)
        [...]
    }
}

This is great: we're not doing quite so much arranging, but we're definitely doing some
obvious Acting and Asserting. The tests are shorter, more concise, and nothing is lost in the
refactor. This is easy when you have a few immutable let variables, but it gets complicated
once you want to have your Arrange steps perform actions.


Behaviour Driven Development is about having a consistent vocabulary in your
test-suites. BDD defines the terminology, so it's the same between BDD libraries. This
means that if you use RSpec, Specta, Quick, Ginkgo or many others, you will be able to employ
similar testing structures.

So what are these words?

describe - used to collate a collection of tests under a descriptive name.
it - used to set up a unit to be tested.
before/after - callbacks to code that happens before, or after, each it or describe within the current describe context.
beforeAll/afterAll - callbacks to run logic at the start / end of a describe context.

They combine like this pseudocode version of these tests, from
ARArtworkViewControllerBuyButtonTests.m:


describe("buy button") {
beforeAll {
// sets up a mock for a singleton object
}

afterAll {
// stops mocking
}

before {
// ensure we are in a logged out state
}

after {
// clear all user credentials
}

it("posts order if artwork has no edition sets") {


// sets up a view controller
// taps a button
// verifies what routes have been called
}

it("posts order if artwork has 1 edition set") {


// [...]
}

it("displays inquiry form if artwork has multiple sets") {


// [...]
}

it("displays inquiry form if request fails") {


// [...]
}
}

By using BDD, we can effectively tell a story about what expectations there are within a
codebase, specifically around this buy button. It tells you:

In the context of Buy button, it posts order if artwork has no edition sets
In the context of Buy button, it posts order if artwork has 1 edition set
In the context of Buy button, it displays inquiry form if artwork has multiple sets
In the context of Buy button, it displays inquiry form if request fails

Yeah, the English gets a bit janky, but you can easily read these as though they were English
sentences. That's pretty cool. These describe blocks can be nested, which makes
contextualising different aspects of your testing suite easy. So you might end up with this in
the future:


In the context of Buy button, when logged in, it posts order if artwork has no edition sets
In the context of Buy button, when logged in, it posts order if artwork has 1 edition set
In the context of Buy button, when logged in, it displays inquiry form if artwork has
multiple sets
In the context of Buy button, when logged in, it displays inquiry form if request fails
In the context of Buy button, when logged out, it asks for a email if we don't have one
In the context of Buy button, when logged out, it posts order if no edition sets and we
have email

Where you can split out the it blocks into different describes called logged in and
logged out .

So what does this pattern give us?

Well, first up, tests are readable, and are obvious in their dependencies.

The structure of how you make your tests becomes a matter of nesting contexts or
describes. This makes it much harder to just name your tests test1, test2, test3, because
you should easily be able to say them out loud.

Being able to structure your tests as a hierarchy, making it easy to run code
before / after or beforeAll / afterAll at different points, can be much simpler than
having a collection of setup code in each test. This makes it easier for each it block to be
focused on just the act and assert.

I've never felt comfortable writing plain old XCTest formatted code, and so from this point on,
expect to not see any more examples in that format.

Matchers
BDD only provides a lexicon for structuring your code; in all of the examples further on you'll
see things like:

// Objective-C
expect([item.attributeSet title]).to.equal(artist.gridTitle);

// Swift
expect(range.min) == 500

These types of expectations are not provided as a part of XCTest. XCTest provides a
collection of ugly macros/functions like XCTFail, XCTAssertGreaterThan or XCTAssertEqual,
which do some simple logic and raise an error on the line they were called from.


As these are pretty limited in what they can do, and are not aesthetically pleasing, I don't use
them. Instead I use a matcher library, for example Expecta, Nimble or OCHamcrest. These
provide a variety of tools for creating test assertions.

It's common for these libraries to be separate from the libraries doing BDD; in the Cocoa
world, only Kiwi aims to do both BDD structures and matchers.

From my perspective, there's only one major advantage to bundling the two, and that is that
you can fail a test if no matchers were run (e.g. an async test never called back in
time). To my knowledge, only RSpec for Ruby provides that feature.

Chapters/En-Uk/Xctest/Test Driven Development

Test Driven Development


It's been said recently that Test Driven Development (TDD) is dead. I'm not too sold on the
idea that it's dead, personally. I think for the Cocoa community, TDD has barely even started.

Test Driven Development is the idea of writing your application and your tests
simultaneously. Just as you write out your app's code unit by unit, you cover each individual
step with a test. The common pattern for this is:

Red
Green
Refactor

This is the idea that you start with a test which will fail:

it("has ten items") {
    let subject = KeyNumbers()
    expect(subject.items.count) == 10
}

Then you do the minimum possible to get that test to pass:

class KeyNumbers: NSObject {
    let items = [0,1,2,3,4,5,6,7,8,9]
}

This gives you a green (passing) test; from there you would refactor your code, now that you
can verify the end result.

You would then move on to the next test, which may verify the type of value returned, or
whatever you're really meant to be working on. The idea is that you keep repeating this
pattern, and at the end you've got a list of all your expectations in the tests.

When to use TDD?


TDD works really well when you have to work on a bug: you produce the test that represents
the fixed bug first (red), then you fix the bug (green), and finally you clean up the
code you've changed (refactor).

I've found that TDD also works well when I know a lot of the states of my view controllers up
front. I would write something like this:


override func spec() {

    describe("cells") {
        var subject: LiveAuctionHistoryCell!

        pending("looks right for open")
        pending("looks right for closed")
        pending("looks right for bid")
        pending("looks right for final call")
        pending("looks right for fair warning")
    }
}

Which eventually turned into: LiveAuctionBidHistoryViewControllerTests.swift

// TODO link to LiveAuctionBidHistoryViewControllerTests

Which would give me an overview of the types of states I wanted to represent. Then I could
work through adding snapshots of each state to the tests, in order to make sure I don't miss
anything.

In part, doing TDD with compiled languages can be tough, because you cannot easily
compile against an API which doesn't exist. This makes the red step tricky. The best
work-around is to use stubbed data, like my items example above.

In Ruby, I would write my tests first; in Cocoa I find this harder. So I'm not a true convert,
though that could be a decade of not writing tests providing negative momentum. Established
habits die hard. There are definitely very productive programmers who do test first.

Chapters/En-Uk/Xctest/Integration Testing

Integration Testing
Integration Testing is a different concept to Unit Testing. It is the idea of testing changes in
aggregate, as opposed to individual units. A good testing goal is to have a lot of the finer-
grained (thinner brush) tests covered by unit testing; then Integration Testing will help you
deal with larger ideas (a paint roller).

Within the context of Cocoa, integration testing generally means writing tests against things
you have no control over. Which you could argue is all of UIKit, but hey, you've got to do that
to build an app. Seriously though, UIKit is the most common thing against which people have
done integration testing.

UI Testing
UI Testing involves running your app as though there were a human on the other side tapping
buttons, waiting for animations and filling in all the bits of data. The APIs make it easy to
write tests like "If I've not added an email, is the submit button disabled?" and "After hitting
submit with credentials, does it go to the home screen?" These let you write tests pretty quickly
(it's now built into Xcode) and it can be used to provide a lot of coverage fast.

The tooling in the OSS world is pretty mature now. The dominant player is Square's KIF.
KIF's tests generally look like this:

class ReaderViewUITests: KIFTestCase, UITextFieldDelegate {

    [...]
    func markAsReadFromReaderView() {
        tester().tapViewWithAccessibilityLabel("Mark as Read")
        tester().tapViewWithAccessibilityIdentifier("url")
        tester().tapViewWithAccessibilityLabel("Reading list")
        tester().swipeViewWithAccessibilityLabel("Reader View Test", inDirection: KIFSwipeDirection.Right)
        tester().waitForViewWithAccessibilityLabel("Mark as Unread")
        tester().tapViewWithAccessibilityLabel("Cancel")
    }
    [...]
}

Where KIF will look or wait for specific views in the view hierarchy, then perform some
actions. Apple's version of KIF, UITesting, is similar, but different.

It works by having a completely different test target just for UI Integration Tests, separate
from your Unit Tests. It can build out your test-suite much faster, as it can record the things
you click on in the simulator, and save the actions to your source files in Xcode.


These tests look like vanilla XCTest; here are some examples from Deck-Tracker:

class About: XCTestCase {

    let backButton = XCUIApplication().navigationBars["About"].buttons["Settings"]
    let aboutTitleScreen = XCUIApplication().navigationBars["About"].staticTexts["About"]
    let hearthstoneImage = XCUIApplication().images["Hearthstone About"]
    [...]

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        XCUIApplication().launch()
        let app = XCUIApplication()
        app.navigationBars["Games List"].buttons["More Info"].tap()
        app.tables.staticTexts["About"].tap()
    }

    func testElementsOnScreen() {
        XCTAssert(backButton.exists)
        XCTAssert(aboutTitleScreen.exists)
        XCTAssert(hearthstoneImage.exists)
        XCTAssert(versionNumberLabel.exists)
        XCTAssert(createdByLabel.exists)
        XCTAssert(emailButton.exists)
        XCTAssert(nounIconsLabel.exists)
    }
    [...]
}

There are some good up-sides to this approach: it's really fast to set up and to re-create
when something changes. It's a really wide-brushed approach to covering your app with tests.

The biggest down-side is that it's slow. It requires running all the usual animations, and
networking is performed as usual in the app. These can be worked around with some
networking stubbing libraries, mainly VCR or HTTP stubs, but that adds a lot more
complexity to what is generally a simple approach.

API Testing
If you have a staging environment for your API, it can be worth having your application run
through a series of real-world networking tasks to verify that the APIs which you rely on (but
don't necessarily maintain) continue to act in the way you expect.

This can normally be built with KIF/UITesting, and can be tested


Chapters/En-Uk/Foundations/Dependency Injection

Dependency Injection
Dependency Injection (DI, from here on in) is a way of dealing with how you keep your code's
concerns separated. On a more pragmatic level, it is expressed elegantly in James Shore's
blog post:

Dependency injection means giving an object its instance variables. Really. That's it.

This alone isn't really enough to show the problems that DI solves. So, let's look at some
code and investigate what DI really means in practice.

Dependency Injection in a function

Let's start with the smallest possible example, a single function:

- (void)saveUserDetails
{
    User *user = [User currentUser];
    [[NSUserDefaults standardUserDefaults] setObject:[user dictionaryRepresentation] forKey:@"user"];
    [[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"injected"];
}

Testing this code can be tricky, as it relies on functions inside the NSUserDefaults and User
classes. These are the dependencies inside this function. Ideally, when we test this code we
want to be able to replace the dependencies with something specific to the test. There are
many ways to start applying DI, but I think the easiest way here is to make it so that
the function takes in its dependencies as arguments. In this case we are giving the function
both the NSUserDefaults object and a User model.

- (void)saveUser:(User *)user inDefaults:(NSUserDefaults *)defaults
{
    [defaults setObject:[user dictionaryRepresentation] forKey:@"user"];
    [defaults setBool:YES forKey:@"injected"];
}

In Swift we can use default arguments to acknowledge that we'll most often be using
standardUserDefaults() as the defaults argument:


func saveUser(user: User, defaults: NSUserDefaults = .standardUserDefaults()) {
    defaults.setObject(user.dictionaryRepresentation, forKey: "user")
    defaults.setBool(true, forKey: "injected")
}

This little change in abstraction means that we can now insert our own custom objects into
this function. Thus, we can inject a new instance of both arguments and test the end results
on them. Something like:

it(@"saves user defaults", ^{


NSUserDefaults *defaults = [[NSUserDefaults alloc] init];
User *user = [User stubbedUser];
UserArchiver *archiver = [[UserArchiver alloc] init];

[archiver saveUser:user inDefaults:defaults];

expect([user dictionaryRepresentation]).to.equal([defaults objectForKey:@"user"]);


expect([defaults boolForKey:@"injected"]).to.equal(YES);
});

We can now easily test the changes via inspecting our custom dependencies.

Dependency Injection at Object Level

Let's expand our scope of using DI: a single function can use DI via its arguments, and an
object can expand on that via its instance variables, as the initial explanation said.

class UserNameTableVC: UITableViewController {

    var names: [String] = [] {
        didSet {
            tableView.reloadData()
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        MyNetworkingClient.sharedClient().getUserNames { newNames in
            self.names = newNames
        }
    }
}

This example grabs some names via an API call, then sets the instance variable names to
be the new value from the network. In this example the object that is outside of the scope of
the UserNameTableVC is the MyNetworkingClient .


This means that in order to easily test the view controller, we would need to stub or mock the
sharedClient() function to return a different version for each test.

The easiest way to simplify this would be to move the networking client into an instance
variable. We can use Swift's default property values to set it to the app's default, which means
less glue code (in Objective-C you would override the property's getter to return a default
unless the instance variable has been set).
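
As a rough sketch of that Objective-C pattern, using the hypothetical MyNetworkingClient
from this example, the overridden getter falls back to the shared client unless a test has
already injected something else:

- (MyNetworkingClient *)network
{
    // Use whatever was injected, otherwise fall back to the app-wide default.
    return _network ?: [MyNetworkingClient sharedClient];
}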

class UserNameTableVC: UITableViewController {

    var names: [String] = [] {
        [...]
    }

    var network: MyNetworkingClient = .sharedClient()

    override func viewDidLoad() {
        super.viewDidLoad()
        network.getUserNames { newNames in
            self.names = newNames
        }
    }
}

This can result in simpler app code, and significantly easier tests. Now you can init a
UserNameTableVC and set its .network to any version of the MyNetworkingClient
before viewDidLoad is called, and you're all good.

Dependency Injection at Global Scope

Ambient Context
When you have a group of objects that all need access to the same kind of dependencies, it
can make sense to bundle those dependencies into a single object. I generally call these
context objects. Here's an example, directly from Artsy Folio:


[...]

@interface ARSyncConfig : NSObject

- (instancetype)initWithManagedObjectContext:(NSManagedObjectContext *)context
                                     defaults:(NSUserDefaults *)defaults
                                      deleter:(ARSyncDeleter *)deleter;

@property (nonatomic, readonly, strong) NSManagedObjectContext *managedObjectContext;
@property (nonatomic, readonly, strong) NSUserDefaults *defaults;
@property (nonatomic, readonly, strong) ARSyncDeleter *deleter;

@end

This object wraps an NSManagedObjectContext, an NSUserDefaults and an ARSyncDeleter into a
single class. This means it can provide an ambient context for other objects. For example,
this is a class that performs the analytics on a sync.

@implementation ARSyncAnalytics

- (void)syncDidStart:(ARSync *)sync
{
    [sync.config.defaults setBool:YES forKey:ARSyncingIsInProgress];

    BOOL completedSyncBefore = [sync.config.defaults boolForKey:ARFinishedFirstSync];

    [ARAnalytics event:@"sync_started" withProperties:@{
        @"initial_sync" : @(completedSyncBefore)
    }];
}

[...]

The ARSyncAnalytics class doesn't have any instance variables at all; the sync object is DI'd in
as a function argument. From there the analytics are set according to the defaults provided
inside the ARSync's context object. I believe the official name for this pattern is ambient
context.

Read more:

http://www.bignerdranch.com/blog/dependency-injection-ios/
http://www.objc.io/issue-15/dependency-injection.html

Chapters/En-Uk/Foundations/Stubs Mocks And Fakes

Mocks
A mock object is an object created by a library to emulate an existing object's API. In general
there are two main types of mocks:

1. Strict Mocks - or probably just Mocks. These objects will only respond to what you
define upfront, and will assert if they receive anything else.

2. Nice (or Partial) Mocks, which wrap existing objects. These mock objects can define
the methods that they should respond to, but will pass any function / message they
haven't been told about on to the original.

In Objective-C you can define mocks that act as a specific instance of a class, conform to
specific protocols, or be a class itself. In Swift this is still all up in the air, given the language's
strict type system.
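
To make the two types concrete, here is a rough Objective-C sketch using OCMock (one of
the libraries covered later in the book); the LabelMaker class and its methods are made up
for illustration:

// A strict mock only responds to what was set up, and raises for anything else.
id strictMock = [OCMockObject mockForClass:[LabelMaker class]];
[[strictMock expect] makeLabelWithTitle:@"Hello"];
// ... exercise the code under test ...
[strictMock verify];

// A partial mock wraps a real instance; messages that aren't stubbed are
// passed through to the original object.
LabelMaker *realMaker = [[LabelMaker alloc] init];
id partialMock = [OCMockObject partialMockForObject:realMaker];
[[[partialMock stub] andReturn:@"Stubbed title"] title];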

Stubs
A stub is a method that is replaced at runtime with another implementation. It is common for
a stub to not call the original method. It's useful for setting up context when you want a
known return value from a method.

You can think of it as being method swizzling, really.
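
If the swizzling comparison is new to you, this is roughly the runtime trick being referred to;
the User class and its stubbed_displayName method are hypothetical, and real stubbing
libraries also take care of restoring the original implementation afterwards:

#import <objc/runtime.h>

// Swap the implementations of two methods at runtime, so -displayName now
// runs the stubbed version and returns a known value.
Method original = class_getInstanceMethod([User class], @selector(displayName));
Method stubbed = class_getInstanceMethod([User class], @selector(stubbed_displayName));
method_exchangeImplementations(original, stubbed);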

Mocks and Stubs

As a personal opinion, I avoid stubbing and mocking code which is under my control.

When you first get started, using Mocks and Stubs feels like the perfect tool for testing code,
but it becomes unwieldy as it can quickly get out of sync with reality. They can be a great
crutch, however, when you really can't figure out how to test something.

A great example of when to use stubbing is when dealing with an Apple class that you
cannot easily replace or use your own copy of. For example, I regularly use partial mocks of
UIScreen instances in order to emulate being on an iPad simulator when the tests are actually
running on an iPhone simulator. This saves us the time of running our test-suite twice,
sequentially, on multiple simulators.
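
As a minimal sketch of that UIScreen trick with OCMock (the exact setup in our apps may
differ), the bounds here are just an illustrative iPad size:

CGRect iPadBounds = CGRectMake(0, 0, 768, 1024);
id screenMock = [OCMockObject partialMockForObject:[UIScreen mainScreen]];

// Any code asking the main screen for its bounds now sees iPad dimensions,
// even though the test-suite is running on an iPhone simulator.
[[[screenMock stub] andReturnValue:OCMOCK_VALUE(iPadBounds)] bounds];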

When you own the code that you're working with, it can often be easier to use a fake.

Fakes


A Fake is an API-compatible version of an object. That is it. Fakes are extremely easy to
make in both Swift and Objective-C.

In Objective-C, fakes can be created easily using the loose typing at runtime. If you create an
object that responds to the same selectors as the one you want to fake, you can pass it in
instead by typecasting it to the original type.
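
As a small sketch of that, assuming a hypothetical network client like the one from the
Dependency Injection chapter:

// A fake that responds to the same selector as the real client, but returns canned data.
@interface FakeNetworkClient : NSObject
- (void)getUserNames:(void (^)(NSArray *names))completion;
@end

@implementation FakeNetworkClient
- (void)getUserNames:(void (^)(NSArray *names))completion
{
    completion(@[ @"Orta", @"Ash" ]);
}
@end

// In a test, typecast it to the original type and inject it:
// viewController.network = (MyNetworkingClient *)[FakeNetworkClient new];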

In Swift the use of AnyObject is discouraged by the compiler, so fudging types like this
doesn't work. Instead, you are better off using a protocol. This means that you can rely on a
different object conforming to the same protocol to make test code run differently. It provides
a different level of coupling.

For my favourite use case of Fakes, look at the chapter on Network Models, or Testing
Delegates.

Chapters/En-Uk/Oss Libs/Expanding On Bdd Frameworks

Custom Matchers

Chapters/En-Uk/Oss Libs/Mocking And Stubbing Ocmock And Ocmockito

Chapters/En-Uk/Oss Libs/Network Stubbing Ohttp And Vcrurlconnection

Chapters/En-Uk/Setup/How I Got Started

How I got started


Within the entire development team at Artsy, the iOS team was unique in that it didn't write
tests. This was in part due to the issues around tooling, horror stories about lost productivity
and a lack of non-trivial examples.

The other part was that no-one else was doing it when they were a small team. You would
hear about large companies with tens of employees creating a large testing structure, but
very few small startups with a couple of programmers were talking about testing.

We had experimented once or twice with adding testing to our applications, mainly doing
Integration Testing with KIF, but without team consensus the attempts fell flat.

At the end of 2013 the Bus Factor for all of the knowledge in the Artsy mobile apps hit an
all-time low.

I was the only one with any knowledge of how our systems worked and what additional
context anyone needed to know about making changes. I had been involved in all decisions,
and all that domain knowledge was just in me.

So, the mobile team expanded. Laura Brown and dB joined the mobile team from the web
side. They helped raise our testing standards from close to zero, to something.

These changes in our development culture helped turn a pretty insular team into one that's
world-renowned for its documentation and the accessibility of its code-bases. We actively use
blog-posts, videos and conference talks to document why and how things work in our team.

When new members join, they have a wide variety of sources to understand how the team
works, including this book - which helps to explain a lot of the decisions we made around
testing methodologies and internal best-practices.

Finally, I write tests because some day I will leave Artsy. I don't want those who have to
continue the apps to remember me as the person who left a massive pile of hard-to-maintain
code. Just little bits of that here and there, and the majority reasonably explained.

Chapters/En-Uk/Setup/Getting Setup

Getting setup
We're pragmatic, so we use CocoaPods. It is a dependency manager for Cocoa projects, and
we're going to use it to pull in the required dependencies. If you're new to CocoaPods then
read the extensive guides website to help you get started.

Adding a test Target


If you don't have a test target in your application then you need to add one. This can be
done by opening Xcode, clicking File in the menu-bar, then going to New > Target. You'll
find the test target under Other in iOS; it's called Cocoa Touch Unit Testing Bundle.
Adding this to your project will add the required bundle files, and you should choose to test
(be hosted by) your existing application.

Setting up your Podfile


I'm presuming you already have a Podfile; if you don't, consult the CocoaPods Getting
Started guide. We're going to make changes to add testing tools. This means adding a new
section in the Podfile. These typically look like the following:

target "App"
pod 'ORStackView'
[...]

target "AppTests" do
inherit! :search_paths

pod 'Specta'
pod 'Expecta'
pod 'FBSnapshotTestCase'
end

This links the testing Pods only to the test target, which inherits the app's CocoaPods (in this
case ORStackView). CocoaPods will generate a second library for the testing pods Specta,
Expecta and FBSnapshotTestCase that is only linked to your Tests target.

You can test that everything is working well together by either going to Product > Test in
the menu or by pressing ⌘ + U. This will compile your application, then run it and inject
your testing bundle into the application.


If that doesn't happen, it's likely that your Scheme is not set up correctly. Go to Product >
Scheme > Edit Scheme... or press ⌘ + ⇧ + ,. Then make sure that you have a valid test
target set up for that scheme.

Chapters/En-Uk/Setup/Introducing Tests Into An Existing Application

Introducing Tests to an Existing App


We introduced tests into the first Artsy iOS app around the time that it hit 100,000 lines of
code (including Pods/). The app was the product of a hard and unmovable deadline. We
pushed out two patch releases after that, then sat back to try and figure out how to make
the app friendly to new developers.

We introduced some ground rules:

All bug fixes get a test.
Nearly all new code gets tested.
Code you touch in the process should get cleaned, and tested.

At the same time we agreed on a style change. Braces at the end of methods would move
down to the next line. This meant we would know up-front whether code should be
considered legacy or not.

Needs tests

- (void)method {
...
}

Covered

- (void)method
{
...
}

This style change was agreed on throughout our apps as a reminder that there was work to
be done.

I would still get started this way in an existing project without tests: make it obvious what is
and what isn't under test. Then start with some small bugs; think of a way to prove the bug
wrong in code, then add the test.

Once we were confident with our testing flow from the smaller bugs, we discussed internally
which parts of the app we were scared to touch. This was easy for us: it was authentication
for registering and logging in.


It was a lot of hastily written code, as it had a large number of callbacks and a lot of
hundred-plus-line methods. That was the first thing to hit 100% test coverage. I'm not going
to say it was easy, but I would have no issues letting people make changes there presuming
the tests pass.

This code became well tested, and eventually made its way out of the application and into a
new CocoaPod on its own. A strategy for generating great Open Source.

Chapters/En-Uk/Setup/Starting A New Application And Using Tests

Introducing Tests to a New App

Oh wow, this is like a dream for me. Let's say I needed to write our biggest app, Eigen, from
scratch. That's a big app, with thousands of tests. With everything I know now up-front, what
would I do?

So, there's a healthy amount of restrictions in here. That's part of the point: stopping us from
repeating some of our biggest problems with the test-suite.

The other thing is that I'd have our CI servers perform additional metrics on the test logs. I
work on a tool called Danger which makes it easy to enforce team cultural rules on CI builds.
Some of those rules are in here.

NSUserDefaults, Keychain, NSFileManager are all faked up-front.

This is because all of these will eventually leak into the developer's setup, meaning
tests can behave differently between developers. We have a series of Fakes for these classes
in a library called Forgeries.

Use the TZ environment variable to ensure all runs of the testing suite have the
same timezone.

This one causes flaky tests pretty often, especially when you have a team of remote
developers. You can edit the Scheme for your app's tests, and add an Environment
Variable named "TZ" with the value of "Europe/London". Now everyone is running their
tests in Manchester.

Synchronous Networking in Tests.

Eh, I've written a whole chapter or two on this; if you're not convinced after those, then a
blurb won't change that.

Any un-stubbed networking would be an instant NSAssert in tests.

We're doing this in all our apps; it catches a bunch of problems at the moment of test
creation rather than later, when it's harder to debug (there's a rough sketch of one way to
do this after this list).

I still would have an ARPerformWorkAsynchronously bool.

It's a constant battle against asynchronicity, and this, mixed with our ar_dispatch and
UIView+Boolean, makes for great tools in the battle.

TODO: Verify URLs^

I would not include a mocking library


I think they have their place, but I'd like to see how long we can go without being forced
into using one of these tools.

I would try and ensure that our tests can run in a non-deterministic order.

Specta, the testing library, doesn't have a way of doing this, nor to my knowledge does
Quick. However, I know for sure that the test cases I have will only work in the same
order that Xcode runs them, as an implementation detail. Un-doing that would require
some effort, and an improvement in tooling.

I would have log parsers for Auto Layout errors, CGContext errors and tests
that take a long time.

I would use Danger to look for Auto Layout errors; we see them all the time, and ignore
them. We've done this for too long, and now we can't go back. Now we have the tech,
but it's not worth the time investment to fix.

I'd also debate banning will and asyncWait for adding extra complications to the test-
suite.
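
Here is that rough sketch of one way to make un-stubbed networking fail loudly; it's not
necessarily how our apps implement it, just an NSURLProtocol-based illustration:

// Registered once in the test bundle's setup; any request that reaches the URL
// loading system without having been stubbed trips the assert.
@interface NoUnstubbedNetworkingProtocol : NSURLProtocol
@end

@implementation NoUnstubbedNetworkingProtocol

+ (BOOL)canInitWithRequest:(NSURLRequest *)request
{
    NSAssert(NO, @"Un-stubbed network request in tests: %@", request.URL);
    return NO;
}

@end

// Somewhere in the test target's startup code:
// [NSURLProtocol registerClass:[NoUnstubbedNetworkingProtocol class]];

Because NSURLProtocol consults registered classes in reverse order of registration, registering
this before your stubbing library means matched stubs still get first refusal.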

Chapters/En-Uk/Ops/Developer Operations Aka Automation

Developer Operations
There are days when I get really excited about doing some dev-ops. There are some days I
hate it. Let's examine some of the key parts of a day-to-day workflow for a pragmatic
programmer:

Single line commands


We use a Makefile like it's 1983. Makefiles are a very simple, ancient mini-language built
on top of shell scripts. These are commonly used to run tasks. We use them for a wide
range of tasks:

Bootstrapping From a Fresh Repo Clone
Updating Mogenerator Objects from a .xcdatamodel
Updating App Storyboard Constants
Building the App
Cleaning the Apps build folder
Running Tests
Generating Version Info for App Store Deploys
Preparing for deploys
Deploying to HockeyApp
Making Pull Requests

Code Review
Code Review is an important concept because it enforces a strong deliverable. It is a
statement of action and an explanation behind the changes. I use GitHub for both closed and
open source code, so for me Code Review is always done in the form of Pull Requests.
Pull Requests can be linked to, can close other issues on merging, and have an extremely
high-functioning toolset around code discussions.

When you prepare for a code review it is a reminder to refactor, and a chance for you to give
your changes a second look. It's obviously useful when there are multiple people on a
project (in which case it is a good idea to have different people do code reviews each time),
but there's also value in using code reviews to keep someone else in the loop for features
that affect them.


Finally, Code Review is a really useful teaching tool. When new developers expressed
interest in working on the mobile team, I would assign them merge rights on smaller Pull
Requests and explain everything going on, giving the merger the chance to get exposure to
the language before having to write any themselves.

When you are working on your own it can be very difficult to maintain this. A fellow Artsy
programmer, Craig Spaeth, does this beautifully when working solo; here are some example
pull requests. Each Pull Request is an atomic set of changes, so that he can see what the
larger goal was each time.

TODO: Craig's adddress ^

Continuous Integration
Once you have some tests running you're going to want a computer to run that code for you.
Pulling someone's code, testing it locally, then merging it into master gets boring very quickly.
There are three major options in the Continuous Integration (CI) world, and they all have
different tradeoffs.

Self hosted

Jenkins
Jenkins is a popular language agnostic self-hosted CI server. There are many plugins for
Jenkins around getting set up for github authentication, running Xcode projects and
deploying to Hockey. It runs fine on a Mac, and you just need a Mac Mini set up somewhere
that receives calls from a github web-hook. This is well documented on the internet.

The general trade-off here is that it has a high cost in developer time. Jenkins is stable but
requires maintenance to keep up to date with Xcode.

Buildkite.io
Buildkite lets you run your own Travis CI-like CI system on your own hardware. This means
easily running tests for Xcode betas. It differs from Jenkins in its simplicity: it requires
significantly less setup, and less maintenance overall. It is a program that runs on
your hardware.

Xcode Bots


Xcode bots are still a bit of a mystery, though it looks like with its second release they are now at a
point where they are usable. I found them tricky to set up, and especially difficult to work with
when working with a remote team and using code review.

An Xcode bot is a service running on a Mac Mini, that periodically pings an external
repository of code. It will download changes, run optional before and after scripts and then
offer the results in a really beautiful user interface directly in Xcode.

Services
It's nice to have a Mac mini to hand, but it can be a lot of maintenance. Some of it is
maintenance you expect, like profiles, certificates and signing. A lot of the time though it's problems
with Apple's tooling. This could be Xcode shutting off halfway through a build, the Keychain
locking up, the iOS simulator not launching or the test-suite not loading. Working as an iOS
developer at a company, I don't enjoy, nor want to waste time on, issues like this. So I
have a bias towards letting services deal with this for me.

The flip-side is that you don't have much control: if you need bleeding-edge Xcode features
and you're not in control of your CI box, then you have to deal with having no CI until the provider
provides.

Travis CI
Travis CI is a CI server that is extremely popular in the Open Source World. We liked it so
much in CocoaPods that we decided to include setup for every CocoaPod built with our
toolchain. It is used by most programming communities due to its free-if-open-source
pricing.

Travis CI is configured entirely via a .travis.yml file in your repo which is a YAML file that
lets you override different parts of the install/build/test script. It has support for local
dependency caching. This means build times generally do not include pulling in external
CocoaPods and Gems, making it extremely fast most of the time.

I really like the system of configuring everything via a single file that is held in your
repository. It means all the necessary knowledge for how your application is tested is kept
with the application itself.

Circle CI

We've consolidated on Circle CI for our Apps. It has the same circle.yml config file
advantage as Travis CI, but our builds don't have to wait in an OSS queue. It also seems to
have the best options for supporting simultaneous builds.


Bitrise.io

Bitrise is a newcomer to the CI field and is focused exclusively on iOS. This is a great thing.
They have been known to have both stable and beta builds of Xcode on their virtual
machines. This makes it possible to keep your builds green while you add support for the
greatest new things. This has been, and continues to be, a major issue with Travis CI.

Bitrise differs from Travis CI in that its testing system is run as a series of steps that you
configure from their website. Because of this it has a much lower barrier to entry. When given
some of my simpler iOS Apps, their automatic setup did everything required with no
configuration.

Build

Internal Deployment
We eventually migrated from Hockey to TestFlight for betas, in part because it felt like TestFlight was
starting to mature, and also because of a bug/feature in iOS.

We deploy via Fastlane.

TODO: Link to Eigen "App Launch Slow"

iTunes deployment
2015 was a good year for deployment to the App Store, as Felix Krause released a series of
command line tools to do everything from generating snapshots to deploying to iTunes. This
suite of tools is called Fastlane and I can't recommend it enough.

Getting past the mental barrier of presuming an iTunes deploy takes a long time means that
you feel more comfortable releasing new builds often. More builds mean fewer major
breaking changes, reducing the problem surface area between versions.

Chapters/En-Uk/Ops/Techniques For Keeping Testing Code Sane

Chapters/En-Uk/Ops/Creation Of App-Centric It Blocks

Chapters/En-Uk/Ops/Fixtures And Factories

Chapters/En-Uk/Async/Dispatch Asyncs Ar Dispatch Etc

Techniques for avoiding Async Testing


Ideally an app should be running on multiple threads, with lots of work being done in the
background. This means you can avoid having the user wait for things to happen. A side
effect of this is that asynchronous code can be difficult to test, as the scope for all your
objects quickly collapses, and you never get callbacks in time.

For me there are three main ways to deal with this, in order of preference:

Make asynchronous code run synchronously
Make your main test thread wait while the work is done in the background
Use a testing framework's ability to have a run loop checking for a test pass

One of the big downsides of asynchronous testing is that it can be extremely time
consuming trying to figure out why a test is failing or flaky, especially when it may only
happen on CI, which could have a weaker processor, less memory or be on a different OS
version.

An unreliable test is really, really hard to figure out, especially when you can't get a simple
stack trace or log. Doubly so if it's not reproducible on a developer's computer. So I put in as
much effort as possible to ensure that a test starts and ends within the it block's initial scope.

If you're the one pitching that it's worth writing tests, you're the one that has to figure out
why some obvious-ish code isn't working. For me, after years, asynchronous tests are the
ones most to blame.

Get in line
A friend in Sweden passed on a technique he was using to cover complex series of
background jobs. In testing where he would typically use dispatch_async to run some code
he would instead use dispatch_sync or just run a block directly. I took this technique and
turned it into a simple library that allows you to toggle all uses of these functions to be
asychronous or not.
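
Here is a minimal sketch of that idea, not the real library's API: ARDispatchAsync is a
hypothetical helper, and the ARPerformWorkAsynchronously flag mirrors the one eigen exposes,
described later in this chapter. The test target flips the flag so blocks run inline.

#import <Foundation/Foundation.h>

/// Flipped to NO in the test target's setup so work runs inline.
static BOOL ARPerformWorkAsynchronously = YES;

static void ARDispatchAsync(dispatch_block_t block)
{
    if (ARPerformWorkAsynchronously) {
        dispatch_async(dispatch_get_main_queue(), block);
    } else {
        // Run the block immediately on the current thread, keeping the test's scope alive.
        block();
    }
}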

This is not the only example: we built this idea into a network abstraction layer library too. If
you are making stubbed requests then they happen synchronously, which reduced complexity
in our testing.

It will happen


Testing frameworks typically have two options for running asynchronous tests, within the
matchers or within the testing scaffolding. For example in Specta/Expecta you can use
Specta's waitUntil() or Expecta's will .

Wait Until
waitUntil is a simple function that blocks the main thread that your tests are running on.
After a certain amount of time, it will allow the block inside to run, and you can do your
check. This method of testing will likely result in slow tests, as it will take the full amount of
required time, unlike Expecta's will.

Will
A will looks like this: expect(x).will.beNil(); By default it will fail after 0.3 seconds, but
during that timeframe it constantly re-runs the expectation. In the above
example it will keep checking if x is nil. If it succeeds then the checks stop and it moves
on. This means it takes as little time as possible.
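
In context, a will-based spec reads something like this minimal sketch ( subject and
fetchName are illustrative, not from a real app):

it(@"eventually has a name", ^{
    [subject fetchName];

    // Re-evaluated repeatedly until it passes, or the expectation fails after the timeout.
    expect(subject.name).willNot.beNil();
});

If 0.3 seconds is too tight for a slow CI machine, Expecta lets you raise the default via
[Expecta setAsynchronousTestTimeout:].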

Downsides
Quite frankly though, async is something you should be avoiding. From a pragmatic
perspective, I'm happy to write extra code in my app to make sure that it's possible to
run something synchronously.

For example, we expose a Bool called ARPerformWorkAsynchronously in eigen, so that we
can add animated: flags to things that would be notoriously hard to test.

For example, here is some code that pushes a view controller when a button is tapped. This
can either be tested by stubbing out the navigationController (or perhaps providing
an easy-to-test subclass (fake)), or you can allow the real work to happen fast and verify the
real result. I'd be happy with the fake, or the real one.

- (void)tappedArtworkViewInRoom
{
    ARViewInRoomViewController *viewInRoomVC = [[ARViewInRoomViewController alloc] initWithArtwork:self.artwork];
    [self.navigationController pushViewController:viewInRoomVC animated:ARPerformWorkAsynchronously];
}
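
As a sketch of the "real result" route, assuming the flag is switched off in the test target so
the push happens synchronously ( subject is the view controller under test, with an artwork
already set up):

it(@"pushes a view in room view controller when tapped", ^{
    UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:subject];

    [subject tappedArtworkViewInRoom];

    // The navigation stack updates immediately when the push isn't animated.
    expect(navigationController.topViewController).to.beKindOf(ARViewInRoomViewController.class);
});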

Chapters/En-Uk/Async/Techniques For Getting Around Async Testing

Techniques for avoiding Async Networking


We've already covered Network Models. So I need to delve a little bit harder in order to
really drive this point home. Networking is one of the biggest reasons for needing to do
async testing, and we want to do everything possible to stop that happening.

The way I deal with this is a little hacky. However, it is the best technique I have come up
with so far. This is the PR for when I added this technique to Eigen.

Stubbed Networking Client


It's normal to create a centralised HTTP client. The HTTP client's responsibilities are
generally:

Convert a path to an NSURL
Create NSURLRequests for NSURLs
Create networking operations for NSURLRequests
Start the networking operations

We care about taking the last two responsibilities and making them act differently when in
testing.

Eigen
Let's go through a simplified version of how Eigen's stubbed networking HTTP client
works.

We want to have a networking client that can act differently in tests, so create a subclass of
your HTTP client; in my case, the client is called ArtsyAPI . I want to call the subclass
ArtsyOHHTTPAPI - as I want to use the library OHHTTPStubs to make my work easier.

@interface ArtsyOHHTTPAPI : ArtsyAPI


@end

You need a way to ensure that your tests are using this subclass.
This can be done via Dependency Injection, or as I did, by using different classes in a
singleton method when the new class is available.


+ (ArtsyAPI *)sharedAPI
{
    static ArtsyAPI *_sharedController = nil;
    static dispatch_once_t oncePredicate;
    dispatch_once(&oncePredicate, ^{
        Class klass = NSClassFromString(@"ArtsyOHHTTPAPI") ?: self;
        _sharedController = [[klass alloc] init];
    });
    return _sharedController;
}

Next up you need a point of inflection with the generation of networking operations in the
HTTP client. For ArtsyAPI that is this method:

- (AFHTTPRequestOperation *)requestOperation:(NSURLRequest *)request
      success:(void (^)(NSURLRequest *request, NSHTTPURLResponse *response, id JSON))success
      failure:(void (^)(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON))failureCallback

We want to override this function to work synchronously. So let's talk a little about how this
will work.

1. Request Lookup
We need an API to be able to declare a stubbed route, luckily for me OHHTTPStubs
has been working on this problem for years. So I want to be able to build on top of that
work, rather than write my own stubbed NSURLRequest resolver.

After some digging into OHHTTPStubs, I discovered that it has a private API that does
exactly what I need.

@interface OHHTTPStubs (PrivateButItWasInABookSoItMustBeFine)
+ (instancetype)sharedInstance;
- (OHHTTPStubsDescriptor *)firstStubPassingTestForRequest:(NSURLRequest *)request;
@end

This allows us to access all of the OHHTTPStubsDescriptor objects, and more
importantly, find out which ones are in memory at the moment. We can use this lookup
function to work with the request parameter in our inflection function. All in one simple
line of code.

OHHTTPStubsDescriptor *stub = [[OHHTTPStubs sharedInstance] firstStubPassingTestForRequest:request];

2. Operation Variables


So we have request look-up working, next up is creating an operation. It's very likely
that you will need to create an API compatible fake version of whatever you're working
with. In my case, that's AFNetworking NSOperation subclasses.

However, first, you'll need to pull out some details from the stub:

// Grab the response by putting in the request
OHHTTPStubsResponse *response = stub.responseBlock(request);

// Open the input stream for the JSON data
[response.inputStream open];

id json = @[];
NSError *error = nil;
if (response.inputStream.hasBytesAvailable) {
    json = [NSJSONSerialization JSONObjectWithStream:response.inputStream options:NSJSONReadingAllowFragments error:&error];
}

This gives us all the details we'll need; the response object will also contain things like
statusCode and httpHeaders that we'll need for determining operation behaviour.

3. Operation Execution
In my case, I wanted an operation that does barely anything. The best operation that
does barely anything is the trusty NSBlockOperation - which is an operation which
executes a block when something tells it to start. Easy.

@interface ARFakeAFJSONOperation : NSBlockOperation
@property (nonatomic, strong) NSURLRequest *request;
@property (nonatomic, strong) id responseObject;
@property (nonatomic, strong) NSError *error;
@end

@implementation ARFakeAFJSONOperation
@end

Depending on how you use the NSOperation s in your app, you'll need to add more
properties, or methods in order to effectively fake the operation.

For this function to be completed it needs to return an operation, so let's return an
ARFakeAFJSONOperation .


ARFakeAFJSONOperation *fakeOp = [ARFakeAFJSONOperation blockOperationWithBlock:^{
    NSHTTPURLResponse *URLresponse = [[NSHTTPURLResponse alloc] initWithURL:request.URL statusCode:response.statusCode HTTPVersion:@"1.0" headerFields:response.httpHeaders];

    if (response.statusCode >= 200 && response.statusCode < 205) {
        if (success) { success(request, URLresponse, json); }
    } else {
        if (failureCallback) { failureCallback(request, URLresponse, response.error, json); }
    }
}];

fakeOp.responseObject = json;
fakeOp.request = request;
return (id)fakeOp;

So we create an operation that calls either the success or the failure block in the
inflected method, depending on the data from inside the stub, effectively closing the loop
on our synchronous networking. From here, in the case of ArtsyAPI, another object will
tell the ARFakeAFJSONOperation to start, triggering the callback synchronously.

4. Request Failure
Having a synchronous networking lookup system means that you can also detect when
networking is happening when you don't have a stubbed request.

We have a lot of code here in order to provide some really useful advice to
programmers writing tests, ranging from a copy & paste-able version of the function
call needed to stub the networking, to a full stack trace showing what function triggered the
networking call.

TODO: Example of what one looks like

With this in place, all your networking can run synchronously in tests, assuming it all goes
through your point of inflection; it took us a while to iron out the last few requests that
weren't.
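
To give a feel for the end result, here is roughly what a spec reads like once requests resolve
synchronously; the route, subject and fetchArtwork are illustrative, and the stub uses
OHHTTPStubs' public API.

it(@"shows the artwork title", ^{
    [OHHTTPStubs stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
        return [request.URL.path isEqualToString:@"/api/v1/artwork/some-id"];
    } withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
        return [OHHTTPStubsResponse responseWithJSONObject:@{ @"title" : @"Flowers" } statusCode:200 headers:nil];
    }];

    // No waiting, no will - the stubbed request resolves before the next line runs.
    [subject fetchArtwork];

    expect(subject.titleLabel.text).to.equal(@"Flowers");
});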

AROHHTTPNoStubAssertionBot
I used a simplification of the above in a different project, to ensure that all HTTP requests
were stubbed. By using the same OHHTTPStubs private API, I could detect when a request
was being ignored by the OHHTTPStubs singleton. Then I could create a stack trace and give
a lot of useful information.


@interface ARHTTPStubs : OHHTTPStubs
@end

@implementation ARHTTPStubs

- (OHHTTPStubsDescriptor *)firstStubPassingTestForRequest:(NSURLRequest *)request
{
    id stub = [super firstStubPassingTestForRequest:request];
    if (stub) { return stub; }

    [... Logging out here]

    _XCTPrimitiveFail(spectaExample, @"Failed due to unstubbed networking.");
    return nil;
}

@end

Then I used "fancy" runtime trickery to change the class of the OHHTTPStubs singleton at
runtime, via the only part of its public API.

@interface AROHHTTPNoStubAssertionBot : NSObject
+ (BOOL)assertOnFailForGlobalOHHTTPStubs;
@end

@implementation AROHHTTPNoStubAssertionBot

+ (BOOL)assertOnFailForGlobalOHHTTPStubs
{
    id newClass = object_setClass([OHHTTPStubs sharedInstance], ARHTTPStubs.class);
    return newClass != nil;
}

@end

This technique is less reliable, as it relies on whatever the underlying networking operation
does. This is generally calling on a background thread, and so you lose a lot of the useful
stack tracing. However, you do get some useful information.

Moya
Given that we know asynchronous networking in tests is trouble, for a fresh project we opted
to imagine what it would look like to have networking stubs as a natural part of the API
description in a HTTP client. The end result of this is Moya.

In Moya you have to provide stubbed response data for every request that you want to map
using the API, and it provides a way to easily do synchronous networking instead.


Chapters/En-Uk/Async/Techniques For Getting Around Async Networking

Chapters/En-Uk/Async/Networking In View Controllers Network Models

Network Models
In another chapter, I talk about creating HTTP clients that convert async networking code
into synchronous APIs. This chapter covers another tactic for dealing with testing your networking.

There are lots of clever ways to test your networking; I want to talk about the simplest. From
a View Controller's perspective, all networking should go through a network model.

A network model is a protocol that represents getting and transforming data into something
the view controller can use. In your application, this will perform networking asynchronously -
in tests, it is primed with data and synchronously returns those values.

Let's look at some code before we add a network model:

class UserNamesTableVC: UITableViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        MyNetworkingClient.sharedClient().getUsers { users in
            self.names = users.map { $0.name }
        }
    }
}

OK, so it accesses a shared client, which returns a bunch of users - as we only need the
names, we map out the names from the users and do something with that.

Let's start by defining the relationship between UserNamesTableVC and its data:

protocol UserNamesTableNetworkModelType {
func getUserNames(completion: ([String]) -> ())
}

Then we need to have an object that conforms to this in our app:

class UserNamesTableNetworkModel: UserNamesTableNetworkModelType {

    func getUserNames(completion: ([String]) -> ()) {
        MyNetworkingClient.sharedClient().getUsers { users in
            completion(users.map { $0.name })
        }
    }
}


We can then bring this into our ViewController to handle pulling in data:

class UserNamesTableVC: UITableViewController {

var network: UserNamesTableNetworkModelType = UserNamesTableNetworkModel()

override func viewDidLoad() {


super.viewDidLoad()
network.getUserNames { userNames in
self.names = userNames
}
}
}

OK, so we've abstracted it out a little; this is very similar to what happened back in the
Dependency Injection chapter. To use network models to their fullest, we want to make
another object that conforms to the UserNamesTableNetworkModelType protocol.

class StubbedUserNamesTableNetworkModel: UserNamesTableNetworkModelType {

    var userNames: [String] = []
    func getUserNames(completion: ([String]) -> ()) {
        completion(userNames)
    }
}

Now in our tests we can use the StubbedUserNamesTableNetworkModel instead of the
UserNamesTableNetworkModel, and we've got synchronous networking and really simple tests.

it("shows the same amount names in the tableview") {


let stubbedNetwork = StubbedUserNamesTableNetworkModel()
stubbedNetwork.names = ["gemma", "dmitry"]

let subject = UserNamesTableVC()


subject.network = stubbedNetwork

// Triggers viewDidLoad (and the rest of the viewXXX methods)


subject.beginAppearanceTransition(true, animated: false)
subject.endAppearanceTransition()

let rows = subject.tableView.numberOfRowsInSection(0)


expect(rows) == stubbedNetwork.names.count
}

This pattern has saved us a lot of trouble over a long time. It's a nice pattern for one-off
networking issues, and can slowly be adopted over time.


Chapters/En-Uk/Async/Animations

Animations
Animations are notorious for being hard to test. The problem arises from the fact that
normally an animation is a fire and forget change that is handled by Apple.

UIView animations
One way that we deal with animations in tests is by having a strict policy of always including
an animates: bool on any function that could contain animation. We mix this with a
CocoaPod that makes it easy to do animations with a boolean flag,
UIView+BooleanAnimations. This provides the UIView class with an API like:

[UIView animateIf:animates duration:ARAnimationDuration :^{
    self.relatedTitle.alpha = 1;
}];

This hands control over whether to animate in a test to the animates BOOL. If this is
being called inside a viewWillAppear: method, for example, then you already have a bool to
work with, as the sketch below shows.
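
A sketch of threading that flag straight through; the relatedTitle property is illustrative:

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];

    // Whatever drove this appearance (a test, or a real push) decides whether we animate.
    [UIView animateIf:animated duration:ARAnimationDuration :^{
        self.relatedTitle.alpha = 1;
    }];
}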

Core Animation
It can be tough to test a core animation

Chapters/En-Uk/Async/Will And Xctest 6

Chapters/En-Uk/App Testing/Techniques For Testing Different Aspects Of The App

Chapters/En-Uk/App Testing/Views Snapshots

Snapshot Testing
The process of taking a snapshot of the view hierarchy of a UIView subclass, then storing
that as a reference for what your app should look like.

Why Bother
TLDR: Fast, easily reproducible tests of visual layouts. If you want a longer introduction,
you should read my objc.io article about Snapshot Testing. I'll be assuming some familiarity
from here on in.

Techniques
We aim for snapshot tests to cover two main areas:

1. Overall state for View Controllers
2. Individual States per View Component

As snapshots are the largest testing brush, and the apps I work on tend to be fancy
perspectives on remote data, snapshot testing provides easy coverage for larger, complex
objects like view controllers, where we can see how all the pieces come together. Snapshot
tests can then provide a finer-grained look at individual view components.

Let's use some real-world examples.

A simple UITableViewController subclass


I have a UITableViewController subclass that shows the bid history of a live auction,
LiveAuctionBidHistoryViewController . It receives a collection of events and presents each

as a different looking cell.

LiveAuctionBidHistoryViewController - Real Tests

TODO: Add links ^

Custom Cells
Ideally you should have every state covered at the view level. This means that for every possible
major style of the view, we want to have a snapshot covering it.


class LiveAuctionBidHistoryViewControllerTests: QuickSpec {


[...]
override func spec() {
describe("cells") {
var subject: LiveAuctionHistoryCell!

it("looks right for open") {


let event = LiveEvent(JSON: ["type" : "open", "id" : "OK"])

subject = self.setupCellWithEvent(event)
expect(subject).to( haveValidSnapshot() )
}

it("looks right for closed") { [...] }


it("looks right for bid") { [...] }
it("looks right for final call") { [...] }
it("looks right for fair warning") { [...] }
}
}
}

This means we have a good coverage of all the possible states for the data. This makes it
easy to do code-review as it shows the entire set of possible styles for your data.

The View Controller is where it all comes together. In this case, the View Controller isn't
really doing anything outside of showing a collection of cells. It is given a collection of items,
it does the usual UITableView datasource and delegate bits, and it shows the history.
Simple.

So for the View Controller, we only need a simple test:

class LiveAuctionBidHistoryViewControllerTests: QuickSpec {


override func spec() {
describe("view controller") {
it("looks right with example data") {
let subject = LiveAuctionBidHistoryViewController()

// Triggers viewDidLoad (and the rest of the viewXXX methods)


subject.beginAppearanceTransition(true, animated: false)
subject.endAppearanceTransition()

subject.lotViewModel = StubbedLotViewModel()
expect(subject).to( haveValidSnapshot() )
}
}
}
}


This may not show all the different types of events that it can show, but those are specifically
handled by the View-level tests, not at the View Controller.

A Non-Trivial View Controller


TODO: Energy's `AREditAlbumArtistViewController``

Common issues with Testing View Controllers


It turns out that to really grok why some problems happen, you have to have quite a solid
foundation in:

View Controller setup process - e.g. viewDidLoad , viewDid/WillAppear etc.
View Controller Containment - e.g. childViewControllers , definesPresentationContext etc.

This is not useless, esoteric, knowledge though. Having a firmer understanding of this
process means that you will probably write better code.

Chapters/En-Uk/App Testing/Scroll Views

Chapters/En-Uk/App Testing/User Interactions

UIButtons

UIGestures

Target Action

Chapters/En-Uk/App Testing/Ipad And Iphone

Multi-Device Support
There are two main ways to have a test-suite handle multiple device types and
orientations. The easy way: run your test-suite multiple times on multiple devices,
simulators, and orientations.

The hard way: mock and stub your way to multi-device support in one single test-suite.

Device Fakes
Like a lot of things, this used to be easier in simpler times, when you could just set a device
size and go from there. You can see this in Eigen's ARTestContext.m

TODO - Link ^

static OCMockObject *ARPartialScreenMock;

@interface UIScreen (Private)
- (CGRect)_applicationFrameForInterfaceOrientation:(long long)arg1 usingStatusbarHeight:(double)arg2 ignoreStatusBar:(BOOL)ignore;
@end

+ (void)runAsDevice:(enum ARDeviceType)device
{
    [... setup]

    ARPartialScreenMock = [OCMockObject partialMockForObject:UIScreen.mainScreen];
    NSValue *phoneSize = [NSValue valueWithCGRect:(CGRect)CGRectMake(0, 0, size.width, size.height)];

    [[[ARPartialScreenMock stub] andReturnValue:phoneSize] bounds];
    [[[[ARPartialScreenMock stub] andReturnValue:phoneSize] ignoringNonObjectArgs] _applicationFrameForInterfaceOrientation:0 usingStatusbarHeight:0 ignoreStatusBar:NO];
}

This ensures all ViewControllers are created at the expected size. Then you can use your
own logic to determine iPhone vs iPad. This works for simple cases, but it isn't optimal in the
current landscape of iOS apps.

Trait Fakes
Trait collections are now the recommended way to distinguish devices, as an iPad could
now be showing your app in a screen the size of an iPhone. You can't rely on having an
application the same size as the screen. This makes it more complex.


This is not a space I've devoted a lot of time to, so consider this section a beta. If anyone
wants to dig in, I'd be interested in knowing what the central point of knowledge for trait
collections is, and stubbing that in the way I did with
_applicationFrameForInterfaceOrientation:usingStatusbarHeight:ignoreStatusBar: .

Every View or View Controller (V/VC) has a read-only collection of traits; the V/VCs can
listen for trait changes and re-arrange themselves. For example, we have a view that sets
itself up on the collection change:

TODO: Link to AuctionBannerView.swift

class AuctionBannerView: UIView {

    override func traitCollectionDidChange(previousTraitCollection: UITraitCollection?) {
        super.traitCollectionDidChange(previousTraitCollection)

        // Remove all subviews and call setupViews() again to start from scratch.
        subviews.forEach { $0.removeFromSuperview() }
        setupViews()
    }
}

When we test this view we stub the traitCollection property and trigger
traitCollectionDidChange ; this is done in our Forgeries library. It looks pretty much like this,
with the environment being the V/VC.

void stubTraitCollectionInEnvironment(UITraitCollection *traitCollection, id<UITraitEnvironment> environment) {
    id partialMock = [OCMockObject partialMockForObject:environment];
    [[[partialMock stub] andReturn:traitCollection] traitCollection];
    [environment traitCollectionDidChange:nil];
}

This gives us the chance to make our V/VC think it's in any type of environment that we want to
write tests for.
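
As a hedged example of what a spec using it could look like (the banner's frame and the
assertion are illustrative, and assume AuctionBannerView rebuilds its subviews on the trait
change):

it(@"sets itself up for a regular horizontal size class", ^{
    AuctionBannerView *banner = [[AuctionBannerView alloc] initWithFrame:CGRectMake(0, 0, 768, 200)];
    UITraitCollection *regular = [UITraitCollection traitCollectionWithHorizontalSizeClass:UIUserInterfaceSizeClassRegular];

    stubTraitCollectionInEnvironment(regular, banner);

    // traitCollectionDidChange has run, so the regular-width subviews now exist.
    expect(banner.subviews.count).to.beGreaterThan(0);
});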

Chapters/En-Uk/App Testing/Testing Delegates

Patterns for Testing Using Protocols


One of the really nice things about using protocols instead of classes is that a protocol defines a
collection of methods something should respond to, but doesn't force the relationship into
specific implementations.

This works out really well, because it makes it super easy to switch out the object in test, as
it just has to conform to said protocol. Let's look at an example of a tricky to test
UITableViewDataSource .

This example has a class whose responsibility is to deal with getting data, and providing that
to a tableview.

class ORArtworkDataSource: NSObject, UITableViewDataSource {

    // Do some networking, pull in some data, make it possible to generate cells
    func getData() {
        [...]
    }
    [...]
}

class ORArtworkViewController: UITableViewController {


var dataSource: ORArtworkDataSource!

[...]
override func viewDidLoad() {
dataSource = ORArtworkDataSource()
tableView.dataSource = dataSource
dataSource.getData()
}
}

This implementation is great if you don't want to write any tests, but it can get tricky to find
ways to have your tests perform easy-to-assert on behavior with this tactic.

One of the simplest approaches to making this type of code easy to test is to use lazy
initialisation, and a protocol to define the expectations but not the implementation.

So, define a protocol that says what methods the ORArtworkDataSource should have then
only let the ORArtworkViewController know it's talking to something which conforms to this
protocol.


/// This protocol abstracts the implementation details of the networking


protocol ORArtworkDataSourcable {
func getData()
}

class ORArtworkDataSource: NSObject, ORArtworkDataSourcable, UITableViewDataSource {


// Do some networking, pull in some data, make it possible to generate cells
func getData() {
[...]
}

[...]
}

class ORArtworkViewController: UITableViewController {

// Allows another class to change the dataSource,


// but also will fall back to ORArtworkDataSource()
// when not set
lazy var dataSource: ORArtworkDataSourcable = {
return ORArtworkDataSource()
}()

[...]
override func viewDidLoad() {
tableView.dataSource = dataSource
dataSource.getData()
}
}

This allows you to create a new object that conforms to ORArtworkDataSourcable which can
have different behaviour in tests. For example:

it("shows a tableview cell") {


subject = ORArtworkViewController()
subject.dataSource = ORStubbedArtworkDataSource()
// [...]
expect(subject.tableView.cellForRowAtIndexPath(index)).to( beTruthy() )
}

There is a great video from Apple called Protocol-Oriented Programming in Swift that covers
this topic, and more. The video has a great example of showing how you can test a graphic
interface by comparing log files because of the abstraction covered here.

The same principles occur in Objective-C too; don't think this is a special Swift thing. The
only major new change for Swift is the ability for a protocol to offer methods, allowing for a
strange kind of multiple inheritance.
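
Here is a minimal sketch of the same shape in Objective-C; the names mirror the Swift
example above and are illustrative, with the real ORArtworkDataSource class assumed to exist
elsewhere.

@protocol ORArtworkDataSourcable <UITableViewDataSource>
- (void)getData;
@end

@interface ORArtworkViewController : UITableViewController
/// Tests can inject anything conforming to the protocol before the view loads.
@property (nonatomic, strong) id <ORArtworkDataSourcable> artworkDataSource;
@end

@implementation ORArtworkViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Fall back to the real data source when nothing has been injected.
    if (!self.artworkDataSource) {
        self.artworkDataSource = [[ORArtworkDataSource alloc] init];
    }
    self.tableView.dataSource = self.artworkDataSource;
    [self.artworkDataSource getData];
}

@end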


Examples in practice, mostly Network models:

Eigen - ARArtistNetworkModel is a protocol which ARArtistNetworkModel and
ARStubbedArtistNetworkModel conform to. Here are some tests using the technique.

Eidolon - BidderNetworkModelType is a protocol that BidderNetworkModel and
StubBidderNetworkModel conform to. Here are tests using the technique.

Chapters/En-Uk/Core Data/Core Data

Core Data
Core Data is just another dependency to be injected. It's definitely out of your control, so in
theory you could be fine using stubs and mocks to control it as you would like.

From my perspective though, I've been creating a blank in-memory NSManagedObjectContext
for every test, for years, and I've been happy with this.

Memory Contexts
An in-memory context is a managed object context that is identical to your app's main
managed object context, but instead of having a SQL NSPersistentStoreCoordinator based
on the file system it's done in-memory and once it's out of scope it disappears.

Here's the setup for our in-memory context in Folio:

+ (NSManagedObjectContext *)stubbedManagedObjectContext
{
    NSDictionary *options = @{
        NSMigratePersistentStoresAutomaticallyOption : @(YES),
        NSInferMappingModelAutomaticallyOption : @(YES)
    };

    NSManagedObjectModel *model = [CoreDataManager managedObjectModel];
    NSPersistentStoreCoordinator *persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
    [... Add a memory store to the coordinator]

    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    context.persistentStoreCoordinator = persistentStoreCoordinator;
    return context;
}

This context will act the same as your normal context, but it's cheap and easy to fill. It's also
going to run functions on the main thread for you if you use NSMainQueueConcurrencyType,
simplifying your work further.

Having one of these is probably the first step for making tests against any code touching
Core Data.
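
As a sketch of how it slots into a spec; this assumes the helper above lives on
CoreDataManager, that Artwork is one of the app's entities, and that countInContext:error:
comes from the Core Data wrapper in use:

__block NSManagedObjectContext *context;

beforeEach(^{
    // A fresh, empty store per test; nothing touches the disk.
    context = [CoreDataManager stubbedManagedObjectContext];
});

it(@"round-trips an artwork", ^{
    [NSEntityDescription insertNewObjectForEntityForName:@"Artwork" inManagedObjectContext:context];

    expect([context save:nil]).to.beTruthy();
    expect([Artwork countInContext:context error:nil]).to.beGreaterThan(0);
});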

Knowing when to DI functions


It's extremely common to wrap the Core Data APIs; they're similar to XCTest and Auto
Layout in that Apple provides a low-level standard library and then everyone builds their own
wrappers on top of it.

The wrappers for Core Data tend to not be built with DI in mind, offering their own singleton
access for a main managed object context. So you may need to send some PRs to allow
passing an in-memory NSManagedObjectContext instead of a singleton.

This means I ended up writing a lot of functions that looked like this:

@interface NSFetchRequest (ARModels)

/// Gets all artworks of an artwork container that can be found with current user settings with an additional scope predicate
+ (instancetype)ar_allArtworksOfArtworkContainerWithSelfPredicate:(NSPredicate *)selfScopePredicate inContext:(NSManagedObjectContext *)context defaults:(NSUserDefaults *)defaults;

[...]

Which, admittedly, would be much simpler in Swift thanks to default parameter values. However, you
get the point. Any time you need to do fetches you need to DI the in-memory version
somehow. This could be as a property on an object, or as an argument in a function. I've
used both, a lot.
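
The two shapes look roughly like this; both names are hypothetical and only illustrate where
the context gets injected:

// Injected as an argument: the test passes the in-memory context straight in.
+ (NSArray *)ar_allArtworksInContext:(NSManagedObjectContext *)context;

// Injected as a property: the test sets this to the stubbed context before exercising the object.
@interface ARArtworkListViewController : UIViewController
@property (nonatomic, strong) NSManagedObjectContext *context;
@end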

Asserting on the Main Managed Object Context


In my Core Data stack I use a CoreDataManager and the factory pattern. This means I can
add some logic to my manager to raise an exception when it is running in tests. You can do
this very easily by checking the NSProcessInfo class.


static BOOL ARRunningUnitTests = NO;

@implementation CoreDataManager

+ (void)initialize
{
    if (self == [CoreDataManager class]) {
        NSString *XCInjectBundle = [[[NSProcessInfo processInfo] environment] objectForKey:@"XCInjectBundle"];
        ARRunningUnitTests = [XCInjectBundle hasSuffix:@".xctest"];
    }
}

+ (NSManagedObjectContext *)mainManagedObjectContext
{
    if (ARRunningUnitTests) {
        @throw [NSException exceptionWithName:@"ARCoreDataError" reason:@"Nope - you should be using a stubbed context in tests." userInfo:nil];
    }
    [...]
}

This is something you want to do early on in writing your tests. The later you do it, the larger
the changes you will have to make to your existing code base.

This makes it much easier to move all objects to accepting an in-memory
NSManagedObjectContext via Dependency Injection. It took me two days to migrate all of the
code currently covered by tests to do this. Every now and again, years later, I start adding
tests to an older area of the code-base and find that mainManagedObjectContext was still
being called in a test. It's a great way to save yourself and others some frustrating
debugging time in the future.

Advantages of Testing with Core Data


I like working with Core Data. Remember that it is an object graph tool, so being able to
have a fully set up NSManagedObjectContext as a part of the arrangement in your test can
make testing different states extremely easy. You can also save example databases from
your app and move them into your tests as a version you could work against.

Straight after setting up an in-memory store, we wanted to be able to quickly throw example
data into our NSManagedObjectContext . The way we chose to do it was via a factory object.
Seeing as the factory pattern works pretty well here, here's the sort of interface we created:


@interface ARModelFactory : NSObject

+ (Artwork *)fullArtworkInContext:(NSManagedObjectContext *)context;
+ (Artwork *)partiallyFilledArtworkInContext:(NSManagedObjectContext *)context;
+ (Artwork *)fullArtworkWithEditionsInContext:(NSManagedObjectContext *)context;

[...]
@end

These would add an object into the context, and also return the newly inserted object
in case you had test-specific modifications to make.
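
In a spec, that reads something like this sketch; the title property and the count assertion are
illustrative:

it(@"tracks editioned artworks", ^{
    Artwork *artwork = [ARModelFactory fullArtworkWithEditionsInContext:context];

    // The returned object can be tweaked for this specific test before asserting.
    artwork.title = @"Untitled (Edition of 30)";

    expect([Artwork countInContext:context error:nil]).to.equal(1);
});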

Chapters/En-Uk/Core Data/Core Data Migrations

Core Data Migrations


The first time I released a patch release for the first Artsy App it crashed instantly, on every
install. It turned out I didn't understand Core Data Model Versioning. Now, a few years on, I
grok the migration patterns better, but I've still lived with the memories of that dark, dark day.
Since then I've had an informal rule of testing migrations with all the old builds of Folio, using a
tool I created called chairs, the day before submitting to the app store.

Chairs is a tool to back up your application's documents and settings. This meant I would
have backups from different builds and could have a simulator with data from past versions
without having to compile an older build.

The problem here is that the manual process takes a lot of time, is rarely done, and could be
pretty easily automated. So I extracted the old sqlite stores from the older builds, added
these files to my testing bundle as fixture data and started writing tests that would run the
migrations.

Running the migration is a matter of applying the current NSManagedObjectModel to the old
sqlite file if you are using lightweight migrations.

NSManagedObjectContext *ARContextWithVersionString(NSString *string);

SpecBegin(ARAppDataMigrations)

__block NSManagedObjectContext *context;

it(@"migrates from 1.3", ^{


expect(^{
context = ARContextWithVersionString(@"1.3");
}).toNot.raise(nil);
expect(context).to.beTruthy();
expect([Artwork countInContext:context error:nil]).to.beGreaterThan(0);
});

it(@"migrates from 1.3.5", ^{


expect(^{
context = ARContextWithVersionString(@"1.3.5");
}).toNot.raise(nil);
expect(context).to.beTruthy();
expect([Artwork countInContext:context error:nil]).to.beGreaterThan(0);
});

[...]

SpecEnd


NSManagedObjectContext *ARContextWithVersionString(NSString *string) {

    // Allow it to migrate
    NSDictionary *options = @{
        NSMigratePersistentStoresAutomaticallyOption: @YES,
        NSInferMappingModelAutomaticallyOption: @YES
    };

    // Open up the _current_ managed object model
    NSError *error = nil;
    NSManagedObjectModel *model = [CoreDataManager managedObjectModel];
    NSPersistentStoreCoordinator *persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

    // Get an older Core Data file from fixtures
    NSString *storeName = [NSString stringWithFormat:@"ArtsyPartner_%@", string];
    NSURL *storeURL = [[NSBundle bundleForClass:ARAppDataMigrationsSpec.class] URLForResource:storeName withExtension:@"sqlite"];

    // Set the persistent store to be the fixture data
    if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:options error:&error]) {
        NSLog(@"Error creating persistent store: %@", error.localizedDescription);
        @throw @"Bad store";
        return nil;
    }

    // Create a stubbed context, give it the old data, and it will update itself
    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
    context.persistentStoreCoordinator = persistentStoreCoordinator;
    return context;
}

Nothing too surprising, but I think it's important to note that these tests are the slowest tests
in the app that hosts them at a whopping 0.191 seconds. I'm very willing to trade a fraction
of a second on every test run to know that I'm not breaking app migrations.

These tests presume you still have people using older builds; every now and again,
when I'm looking at analytics, I check to see if any of these tests can be removed.

Finally, if you don't use Core Data you may still need to be aware of changes around model
migrations when storing using NSKeyedArchiver . It is a lot harder to have generic future-
proofed test cases like the ones described here. However, here is an example in eigen.

Chapters/En-Uk/Prag Prog/Making Libraries To Get Annoying Tests Out Of Your App

Chapters/En-Uk/Prag Prog/Using Xcode Pragmatically

Chapters/En-Uk/Prag Prog/Improving Xcode

Chapters/En-Uk/Wrap Up/Books

Books for Further Reading

The Pragmatic Programmer


Super useful for having solid foundations as a programmer. This book's name came from
it. You know what though, I've not read it in about a decade; I should probably re-read
it.

Working Effectively With Legacy Code


The basic pitch is that any untested code is legacy code, and to add tests you'll probably have to
do maintenance on those sections of code in order to make them testable.

It comes with some useful terminology (Seams, Inflection Points) that makes it easier to
visualise how code composes together, and how you can find ways to apply
tests to existing code-bases.

Growing Object-Oriented Software, Guided by Tests


The story of the creation of Test Driven Development. This will help put a lot of ideas into the
larger context of software development. Someone had to come up with these ideas, but how
and why are usually left out. I left them out, for example.

TODO: Add Graham Lee's book

Chapters/En-Uk/Wrap Up/Twitter Follows

Chapters/En-Uk/Wrap Up/Recommended Websites

Recommended Websites
Link - Concept
http://qualitycoding.org - Jon Reid's blog on testing strategies.
http://iosunittesting.com - Ron Lisle's blog on TDD, Unit Testing and creating bug-free code.
objc.io issue 15 - The objc.io issue on testing.

