
iOS 9 Day by Day

This version was published 2016-01-13

©2014 - 2016 Scott Logic Ltd

About this book
Welcome to iOS 9 Day by Day, the latest in our Day by Day series covering all that an
iOS developer needs to know about the new technologies and APIs available to
them in iOS 9.

Who is the expert behind all of these great tutorials?


Our very own shinobicontrols developer, Chris Grant, is the author. He’s always
had an interest in mobile development and has pursued this passion through
university and his professional career. Chris has numerous mobile
applications and native UI frameworks to his name.

Does it come with sample code?


Every single post will have an accompanying project or playground. This is one of
the fantastic features of iOS 9 Day by Day – not only do you read about the new
features, but you can actually see them in action. For each chapter, the related code
has been pushed to our GitHub repo at github.com/ShinobiControls/iOS9-day-by-day.

What you’ll get from this book

• Short articles highlighting the key features of iOS 9, and how to use them
• A full project with source code demonstrating how to use each feature. Every
chapter has an app or playground that features the new technology. This
allows you to see it “in the wild”, not just “on paper”, as it is in the
documentation

• A great understanding of how to update your existing apps to ensure compatibility with iOS 9

• Hours of fun. Well, maybe... 



Contents

Day 1 :: Search APIs

Day 2 :: User Interface Testing

Day 3 :: Storyboard References

Day 4 :: UIStackView

Day 6 :: iPad Multitasking

Day 7 :: The New Contacts Framework

Day 8 :: Apple Pay

Day 9 :: New UIKit Dynamics Features

Day 10 :: MapKit Transit ETA Requests

Day 11 :: GameplayKit - Pathfinding

Day 12 :: GameplayKit - Behaviors & Goals

Day 13 :: CloudKit Web Services

The Conclusion


Day 1 :: Search APIs
Prior to iOS 9, you could only use Spotlight to find apps by their name. With the
announcement of the new iOS 9 Search APIs, Apple now allow developers to choose
what content from their apps they want to index, as well as how the results appear
in Spotlight, and what happens when the user taps one of the results.

The 3 APIs
NSUserActivity
The NSUserActivity API was introduced in iOS 8 for Handoff, but iOS 9 allows
activities to be searchable. You can now provide metadata to these activities,
meaning that spotlight can index them. This acts as a history stack, similar to when
you are browsing the web. The user can quickly open their recent activities from
Spotlight.
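As a minimal sketch of how this fits together (the activity type, the helper name and the metadata here are illustrative, not taken from the sample project), making an activity searchable only requires setting a few of the new iOS 9 properties:

```swift
import CoreSpotlight
import MobileCoreServices

func startViewingFriendActivity(name: String) -> NSUserActivity {
    // The activity type would normally also be declared in Info.plist
    // under the NSUserActivityTypes key.
    let activity = NSUserActivity(activityType: "com.example.viewFriend")
    activity.title = name

    // New in iOS 9: opt this activity into Spotlight indexing.
    activity.eligibleForSearch = true

    // Optional richer metadata for how the result appears in Spotlight.
    let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeItem as String)
    attributes.contentDescription = "Viewing details for \(name)"
    activity.contentAttributeSet = attributes

    // Marking the activity as current is what triggers indexing.
    activity.becomeCurrent()
    return activity
}
```

You would typically call a helper like this from viewDidAppear of the screen whose content you want to appear in the user's history.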

Web Markup
Web Markup allows apps that mirror their content on a website to index their
content in Spotlight. Users don’t need to have the app installed on their device for
results to appear in Spotlight. Apple’s Indexer will now crawl the web looking for this
particular markup in your website. This is then provided to users in both Safari and
Spotlight.

The fact that results can appear even when your app is not installed on a user’s
device could lead to a lot more exposure to potential users. The deep links from
your applications that you expose as public to the Search APIs will be stored in
Apple’s cloud index. To learn more about Web Markup, take a look at Apple’s Use
Web Markup to Make App Content Searchable documentation.

CoreSpotlight
CoreSpotlight is a new iOS 9 framework which allows you to index any content
inside of your app. While NSUserActivity is useful for saving the user’s history, with
this API, you can index any data you like. It essentially provides you with low level
access to the CoreSpotlight index on the user’s device.

Using the Core Spotlight APIs
The NSUserActivity and Web Markup APIs are relatively simple to use, however
CoreSpotlight is a little more complex. To demonstrate how the new Core Spotlight
APIs work, let’s create a simple app that shows a list of our friends, and then a
picture of them when you tap on their name. You can find the code on GitHub and
follow along with what we are building there.

The app has a simple storyboard containing FriendTableViewController, which
displays a simple list of our friends’ names, and FriendViewController, which
displays details about each friend.

All of the information about our friends is stored in the Datasource class. This is
where we create the models that store information about our friends, and also
where we will include the logic to store the friends into the Core Spotlight index.
First of all, we override the init() method of the Datasource class, where we create
and store an array of Person objects. You’ll probably want to load yours from a
database or from a server somewhere, but for demonstration purposes, we will
simply create some dummy data.

override init() {
    let becky = Person()
    becky.name = "Becky"
    becky.id = "1"
    becky.image = UIImage(named: "becky")!

    ...

    people = [becky, ben, jane, pete, ray, tom]
}

Once the data is stored in the people array, the Datasource is ready to use!
Now that the data is ready, the FriendTableViewController can create an instance
of Datasource to use when its table view requests cells for display.

let datasource = Datasource()

In the cellForRowAtIndexPath function, displaying the contents in the cell is as simple as:

let person = datasource.people[indexPath.row]
cell?.textLabel?.text = person.name

Saving the person entries to Core Spotlight
Now that the mocked data exists, we can store it in Core Spotlight using the new APIs
available in iOS 9. Back in the Datasource class, we have defined a function,
savePeopleToIndex. The FriendTableViewController can call this function when
the view has loaded.

In the function, we iterate through each person in the people array, creating a
CSSearchableItem for each of them and storing them into a temporary array named
searchableItems.

let attributeSet = CSSearchableItemAttributeSet(itemContentType: "image")
attributeSet.title = person.name
attributeSet.contentDescription = "This is an entry all about the interesting person called \(person.name)"
attributeSet.thumbnailData = UIImagePNGRepresentation(person.image)

let item = CSSearchableItem(uniqueIdentifier: person.id,
    domainIdentifier: "com.ios9daybyday.SearchAPIs.people",
    attributeSet: attributeSet)
searchableItems.append(item)

The final step is to call indexSearchableItems on the default CSSearchableIndex.
This actually saves the items into CoreSpotlight so that users can search for them
and so they appear in search results.

CSSearchableIndex.defaultSearchableIndex().indexSearchableItems(searchableItems, completionHandler: { error -> Void in
    if error != nil {
        print(error?.localizedDescription)
    }
})

And that’s it! When you run your application, the data will be stored. When you
search in Spotlight, your friends should appear!

Responding to User Selection
Now users can see your results in Spotlight, hopefully they will tap on them! But
what happens when they do? Well, at the minute, tapping a result will just open the
main screen of your app. If you wish to display the friend that the user tapped on,
there’s a little more work involved. We can specify our app’s behaviour when it is
opened this way through the continueUserActivity UIApplicationDelegate
method in the app’s AppDelegate.

Here’s the entire implementation of this method:

func application(application: UIApplication, continueUserActivity userActivity: NSUserActivity, restorationHandler: ([AnyObject]?) -> Void) -> Bool {
    // Find the ID from the user info
    let friendID = userActivity.userInfo?["kCSSearchableItemActivityIdentifier"] as! String

    // Find the root table view controller and make it show the friend with this ID
    let navigationController = (window?.rootViewController as! UINavigationController)
    navigationController.popToRootViewControllerAnimated(false)
    let friendTableViewController = navigationController.viewControllers.first as! FriendTableViewController
    friendTableViewController.showFriend(friendID)

    return true
}

As you can see, the information we previously saved into the CoreSpotlight index
with the indexSearchableItems function is now available to us in the
userActivity.userInfo dictionary. The only thing we are interested in for this
sample is the friend ID, which was stored into the index as the item’s
kCSSearchableItemActivityIdentifier.

Once we have extracted that information from the userInfo dictionary, we can find
the application’s navigation controller, and pop to the root (without animation so it’s
not noticeable to the user) and then call the showFriend function on the
friendTableViewController. I won’t go into detail about how this works, but
essentially it finds the friend with the given ID in its datasource and then pushes a
new view controller onto the navigation controller stack. That’s all there is to it! Now
when the user taps on a friend in Spotlight, this is what they will see:

As you can see, now there is a “Back to Search” option in the top left hand corner of
your app. This takes the user directly back to the search screen where they first
tapped their friend’s name. They can still navigate through the app with the
standard back button too.

Demo Summary
In the demo above, we’ve seen how easy it is to integrate your application’s data
with the CoreSpotlight index, how powerful it can be when trying to get users to
open your app, and how helpful it can be to users looking for specific content.
We have not covered how to remove data from the index, however. This is important
and you should always try to keep the index that your application uses up to date.
For information on how to remove old entries from CoreSpotlight, take a look at the
deleteSearchableItemsWithIdentifiers,
deleteSearchableItemsWithDomainIdentifiers and
deleteAllSearchableItemsWithCompletionHandler functions.
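For instance, clearing out everything this demo indexed could look something like the following sketch, where the domain string matches the one used when indexing and the error handling is only illustrative:

```swift
import CoreSpotlight

// Remove every indexed item in the app's "people" domain.
CSSearchableIndex.defaultSearchableIndex().deleteSearchableItemsWithDomainIdentifiers(
    ["com.ios9daybyday.SearchAPIs.people"]) { error in
        if let error = error {
            print("Failed to delete items: \(error.localizedDescription)")
        }
}
```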

The Importance of Good Citizenship
Although it may seem like a good idea to get as much of your content into Spotlight
and Safari as possible, think twice before spamming the search indexes with your
content. Being a good citizen in the iOS ecosystem is not only important to keep
your customers happy, but Apple will also notice. They have clearly invested a lot
into protecting relevance. Engagement ratios are tracked and spammers will be
moved to the bottom of search results.

Further Information
For more information on the new Search APIs, I’d recommend watching WWDC
session 709, Introducing Search APIs. You may also be interested in reading the
NSUserActivity Class Reference as well as the documentation for CoreSpotlight.

Day 2 :: User Interface Testing
Automated User Interface Testing is a valuable tool when developing any software
application. It can detect problems with your app quickly, and a successful test
suite run can provide you with confidence before a release. On the iOS platform, this
is currently done using UIAutomation, with tests written in JavaScript. This involves
opening up a separate app, Instruments, and creating and running scripts. The
workflow is painfully slow and takes a long time to get used to.

UI Testing
With Xcode 7, Apple have introduced a new way to conduct user interface testing
on your apps. UI testing allows you to find and interact with UI elements, and
validate their properties and state. UI testing is fully integrated with Xcode 7 test
reports, and will run alongside your unit tests. XCTest has been Xcode’s integrated
testing framework since Xcode 5, but in Xcode 7, it has been updated to include
new UI testing abilities. This allows you to make assertions which will check the
state of your UI at a certain point.

Accessibility
In order for UI Testing to work, the framework needs to be able to access the
various elements inside the UI so it can carry out its actions. You could define
specific points where the tests should tap and swipe, but this would fall down on
different sized devices, or if you made tweaks to the positions of the elements in
your UI.

This is where accessibility can help. Accessibility is a long-established Apple
framework that provides disabled users with a way to interact with your
applications. It provides rich semantic data about your UI, which allows the various
Accessibility features to open your app up to disabled users. A lot of this
functionality comes out of the box and will just work with your app, but you can
(and should) improve the data that Accessibility has about your UI using the
Accessibility APIs. There are many cases where this is necessary, for example

custom controls where Accessibility won’t be able to figure out what your API does
automatically.

UI Testing has the ability to interface with your app through the Accessibility
features that your app provides, which gives you a great solution to the different
sized device problem, and saves you from having to rewrite your entire test suite
if you rearrange some elements in your UI. Not only does it help with UI Testing,
implementing Accessibility also gives you the added benefit of making your app
available to disabled users.

UI Recording
Once you have set up your accessible UI, you’ll want to create some UI tests.
Writing UI tests can be time consuming, boring, and if you have a complicated UI,
difficult! Thankfully in Xcode 7, Apple have introduced UI Recording, which allows
you to create new tests and expand existing ones. When turned on, code will
automatically be generated when you interact with your app on the device or the
simulator. Now that we have a good overview of how UI Testing fits together, it is
time to start using it!

Creating UI Tests
We’re going to build a UI Testing suite using the new UI Testing tools to
demonstrate how this works. The finished demo application is available at GitHub if
you wish to follow along and see the result.

Setup
In Xcode 7, when you create a new project, you can choose whether to include UI
Tests. This will set up a placeholder UI Test target for you with all of the
configuration you need.

The project set up in this example is very simple but should be enough for us to
demonstrate how the UI Testing works in Xcode 7.

There is a menu view controller that contains a switch and a button that links you to
a detail view controller. When the switch is in the “off” state, the button should be
disabled and navigation should not be possible. The detail view controller contains
a simple button which increments the value in a label.

Using UI Recording
Once the UI has been set up and is functional, we can write some UI tests to ensure
that any changes in the code do not affect functionality.

THE XCTEST UI TESTING API
Before we start recording actions we must decide what we want to assert. In order
to assert certain things about our UI, we can use the XCTest Framework, which has
been expanded to include three new pieces of API.

XCUIApplication. This is a proxy for the application you are testing. It allows you to
launch your application so that you can run tests on it. It’s worth noting that it
always launches a new process. This takes a little more time, but it means that the
slate is always clean when it comes to testing your app and there are fewer
variables that you have to deal with.

XCUIElement. This is a proxy for the UI elements for the application you are testing.
Elements all have types and identifiers, which you can combine to allow you to find
the elements in the application. The elements are all nested in a tree which
represents your application.

XCUIElementQuery. When you want to find elements, you use element queries.
Every XCUIElement is backed by a query. Such queries search the XCUIElement tree
and must resolve to exactly one match. Otherwise your test will fail when you try to
access the element. The exception to this is the exists property, where you can
check if an element is present in the tree. This is useful for assertions. You can use
XCUIElementQuery more generally, when you want to find elements that are visible
to accessibility. Queries return a set of results.

Now that we have a way to explore the API we are ready to start writing some tests.

TEST 1 - ENSURING NO NAVIGATION TAKES PLACE WHEN THE SWITCH IS OFF
First of all we must define a function that will contain our test.

func testTapViewDetailWhenSwitchIsOffDoesNothing() {

When the function has been defined, we move the mouse cursor inside of its
brackets and tap the record button at the bottom of the Xcode window.

The app will now launch. Tap the switch to turn it off, then tap the “View Detail”
button. The following should now appear inside
testTapViewDetailWhenSwitchIsOffDoesNothing.

let app = XCUIApplication()


app.switches["View Detail Enabled Switch"].tap()
app.buttons["View Detail"].tap()

Now click the record button again and the recording should stop. We can see that
the app didn’t actually display the detail view controller, but the test has no way of
knowing that currently. We must assert that nothing has changed. We can do this
by inspecting the title of the navigation bar. This might not be appropriate for all use
cases, but it works for us here.

XCTAssertEqual(app.navigationBars.element.identifier, "Menu")

Once you’ve added this line, run the test again and it should still pass. Try changing
the “Menu” string to “Detail” for example, and it should fail. Here’s the final result of
this test with some comments added to explain the behaviour:

func testTapViewDetailWhenSwitchIsOffDoesNothing() {
    let app = XCUIApplication()

    // Change the switch to off.
    app.switches["View Detail Enabled Switch"].tap()

    // Tap the view detail button.
    app.buttons["View Detail"].tap()

    // Verify that nothing has happened and we are still at the menu screen.
    XCTAssertEqual(app.navigationBars.element.identifier, "Menu")
}

TEST 2 - ENSURING NAVIGATION TAKES PLACE WHEN THE SWITCH IS ON
The second test is similar to the first, so we won’t go through it in detail. The only
difference here is that the switch is enabled, so the app should load the detail
screen, and the XCTAssertEqual verifies this.

func testTapViewDetailWhenSwitchIsOnNavigatesToDetailViewController() {
    let app = XCUIApplication()

    // Tap the view detail button.
    app.buttons["View Detail"].tap()

    // Verify that navigation occurred and we are at the detail screen.
    XCTAssertEqual(app.navigationBars.element.identifier, "Detail")
}

TEST 3 - ENSURING THE INCREMENT BUTTON INCREMENTS THE VALUE LABEL
In this test we verify that when a user clicks the increment button, the value label
increases by 1. The first two lines of this test are very similar, so we can copy and
paste them from the previous test.

let app = XCUIApplication()

// Tap the view detail button to open the detail page.
app.buttons["View Detail"].tap()

Next we need to gain access to the button. We will want to tap this a few times so
we will store it as a variable. Rather than manually typing the code to find the button
and having to debug it, launch the recorder again and click the button. This will give
you the following code.

app.buttons["Increment Value"].tap()

We can then stop the recorder and change this to

let incrementButton = app.buttons["Increment Value"]

This way we don’t need to manually type out the code to find the button. We do the
same for the value label.

let valueLabel = app.staticTexts["Number Value Label"]

Now that we have the UI elements we are interested in, we can interact with them. In
this test we will verify that after tapping on the button 10 times, the label is updated
accordingly. We could record this 10 times in the recorder, but because we stored
the elements above, we can simply put it in a for loop.

for index in 0..<10 {
    // Tap the increment value button.
    incrementButton.tap()

    // Ensure that the value has increased by 1.
    XCTAssertEqual(valueLabel.value as! String, "\(index + 1)")
}

These three tests are far from a comprehensive test suite, but they should give you
a good starting point and you should be able to expand upon them easily. Why not
try adding a test yourself that verifies that the button is enabled and you can
navigate if you turn the switch off and then back on again?

WHEN RECORDING GOES WRONG
Sometimes when you tap on an element while recording, you’ll notice that the code
produced doesn’t look quite right. This is usually because the element you are
interacting with is not visible to Accessibility. To find out if this is the case, you can
use Xcode’s Accessibility Inspector.

Once it is open, if you hit CMD+F7 and hover over an element with your mouse in
the simulator, then you’ll see comprehensive information about the element
underneath the cursor. This should give you a clue about why Accessibility can’t
find your element.

Once you’ve identified the issue, open up interface builder and in the identity
inspector you’ll find the Accessibility panel. This allows you to enable Accessibility,
and set hints, labels, identifiers, and traits. These are all powerful tools to enable
Accessibility to access your interface.
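If you prefer, the same properties can be set in code rather than in Interface Builder. A small sketch (the class and outlet names here are hypothetical, not from the sample project):

```swift
import UIKit

class MenuViewController: UIViewController {
    @IBOutlet weak var viewDetailButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // The identifier is what UI tests use to look the element up,
        // e.g. app.buttons["View Detail"].
        viewDetailButton.accessibilityIdentifier = "View Detail"
        // The label is what VoiceOver reads aloud to users.
        viewDetailButton.accessibilityLabel = "View Detail"
    }
}
```

Identifiers are invisible to end users, so they are a good choice for test lookups, while labels should always be written with VoiceOver users in mind.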

WHEN A TEST FAILS
If a test fails and you aren’t sure why, there are a couple of ways to help you fix it. First
of all, you can access the test reports in Xcode’s Report Navigator.

When you open this view and hover over certain steps in the tests, you will see a
small eye icon to the right of the test action. If you click this eye, you are presented
with a screenshot of the state of your app at that exact point. This will let you
visually check the state of your UI and find out exactly what is wrong.

Just like unit tests, you can add breakpoints to UI tests, which allows you to debug
the behaviour and find any problems. You can log the view hierarchy and inspect
accessibility properties using this technique to see why your test is failing.

Why you should use UI Testing


Automated UI Testing is a great way to improve quality assurance while giving you
the confidence to make changes to your app. We have seen how simple it is to get
UI Testing up and running in Xcode, and how adding Accessibility features to your
app can not only help you test your app, but also has the added benefit of helping
users with disabilities to use your app.

One of the best new features of UI Testing in Xcode is the ability to run your tests
from your continuous integration server. There’s support for doing so with Xcode
bots, and also from the command line, meaning that when a UI test fails you can be
informed immediately!

Further Reading
For more information on the new UI Testing features in Xcode, I’d recommend
watching WWDC session 406, UI Testing in Xcode. You may also be interested in
reading the Testing in Xcode documentation, and the Accessibility for Developers
documentation.

Day 3 :: Storyboard References
If you’ve used interface builder to build a complicated application with a lot of
screens before, you’ll know how large Storyboards can become. This quickly
becomes unmanageable and slows you down. Since the introduction of
Storyboards, it’s been possible to split different regions of your app into separate
Storyboards. In the past, this involved manually creating separate Storyboard files
and a considerable amount of code.

In order to resolve this, in iOS 9 Apple have introduced the concept of Storyboard
References. Storyboard References allow you to reference view controllers in other
storyboards from your segues. This means that you can keep each region of your
app modular, and your Storyboards become smaller and more manageable. Not
only is this easier to understand, but when working in a development team it will
make merges simpler.

Simplifying A Storyboard
In order to demonstrate how Storyboard References work, let’s take an existing
application and try to simplify its structure. The application in question is available
over at GitHub if you wish to follow along and then see the final result. The
OldMain.Storyboard file is what we start with, and is included in the project for
reference only. It isn’t actually used any more. If you want to follow along, delete all
of the storyboards in the project and rename OldMain.Storyboard to
Main.Storyboard.

The screenshot below is how the original Storyboard looks.

As you can see, we are using a Tab Bar Controller as the initial view controller. This
Tab Bar Controller has 3 navigation controllers, all with different root view
controllers. The first one is a table view controller with a list of contacts, the second
is another table view controller with a list of favourite contacts. Both of these link to
the same contact detail view controller. The third navigation controller contains
more information about the application including account details, a feedback
screen and an about screen.

Although this application is far from complicated, the Storyboard is already fairly
large. I’ve seen Storyboards in the past with well over 10 times the number of view
controllers in them, and as we all know, this quickly becomes unmanageable. Now
though, we can split them up. So where should we start? In this case, we have three
distinct regions. We can clearly identify these as they are the content for each
section of the tab bar controller.

We will start with the simplest case. On the right hand side of Main.storyboard you
should see the view controllers that provide more information about our
application. These view controllers are self-contained and don’t link to any views
that other view controllers in the application link to.

All we have to do is select these view controllers by dragging and highlighting them
all. Once we have done that, select “Editor”, then “Refactor to Storyboard” from
Xcode’s menu bar.

Give the storyboard a name of More.storyboard then click save. More.storyboard
will be added to your application and opened for you.

You can see that the storyboard has been created. If you now return to
Main.storyboard, you will see that one of the tab bar controller’s view controllers has
changed to a Storyboard Reference.

That’s great. We’ve managed to pull out a whole section of our UI into a separate
storyboard, which not only helps with separation of concerns, but also allows us to
reuse the storyboard in other areas of our app. Not particularly useful in this case,
but this will be a valuable addition for many use cases.

So now we want to pull the other regions of our application out into separate
storyboards. This is a little more complicated than the first step, due to the fact that
both of these sections reference a common view controller. Both table views have a
segue that presents the Contact Detail view controller. There are a couple of
options here.

• Keep the common view controller in the Main.storyboard.

• Refactor the common view controller into its own storyboard.

Both options will work, but my personal preference is to keep things separate. So
select the Contact Detail view controller and again go to Xcode’s Editor menu,
selecting “Refactor to Storyboard”. Give the storyboard a name and click save. That
will create yet another storyboard and open it. The links to the view controller from
the contacts and favourites table view controllers will be created for you.

Now go back into Main.storyboard and select the contacts navigation and table
view controllers. Refactor those to a storyboard, then do the same for the favourites
view controllers. This should be the result.

We have now split Main.storyboard out into 5 separate storyboard instances in the
project.

• Main.storyboard simply contains a tab bar controller and sets the view
controllers on it from the separate storyboards.

• Contacts.storyboard contains a navigation controller and a table view
controller which, when tapped, links to ContactDetail.storyboard.

• Favorites.storyboard contains a navigation controller and a table view
controller which, when tapped, links to ContactDetail.storyboard.

• ContactDetail.storyboard displays a single view controller that is accessed
from both the contacts and the favourites storyboards.

• More.storyboard contains the view controllers that show information
about the app.

This restructuring has made our storyboard structure a lot more modular, which
should help when developing the app further!

Opening a Specific View Controller from a Storyboard Reference


Until now, we have only demonstrated how to present a storyboard from a segue
when we want to present the storyboard’s initial view controller, and we also haven’t
looked at how to add a Storyboard Reference manually, without the refactoring tool.
Let’s assume that we want to add a UIBarButtonItem to the top right of the
Contacts table view controller which will show us more information about our
account quickly, without having to go through settings.

Open Contacts.Storyboard and drag a UIBarButtonItem onto the table view
controller’s navigation bar. Change its title to “Account”. Once that is done, find the
new “Storyboard Reference” object in the interface builder object reference panel.
Drag that into the Contacts storyboard and then open the attributes inspector.
Select the “More” storyboard, then in the “Referenced ID” field, enter
“accountViewController”. This allows us to reference the account details view
controller, rather than the more storyboard’s initial view controller.

Once the Storyboard Reference for the account view controller is present, select the
account button. Hold down Control + Click and then drag to the newly created
Storyboard Reference to create a segue.

The final step is to give the account view controller the identifier we just specified.
So open up More.storyboard and select the account view controller. Open the
identity inspector and set the Storyboard ID to “accountViewController”. Now when
you launch the app and tap account, you should see the account view controller
pushed onto the contacts navigation controller’s stack.
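Incidentally, the same Storyboard ID lets you load that view controller in code, which can be a handy way to sanity-check the reference setup. A sketch, assuming the storyboard and identifier names used above:

```swift
import UIKit

// Load the "More" storyboard and instantiate the account screen by its Storyboard ID.
let moreStoryboard = UIStoryboard(name: "More", bundle: nil)
let accountViewController = moreStoryboard
    .instantiateViewControllerWithIdentifier("accountViewController")

// You could then push it manually, e.g.:
// navigationController?.pushViewController(accountViewController, animated: true)
```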
As we have seen, adding Storyboard References (either with the refactoring tool or
manually in interface builder) is simple, straightforward and effective. It allows you
to create reusable components with storyboards, and helps to keep your UI
modular. As ever, the results of this tutorial are available on GitHub.

Further Reading
For more information on Storyboard References in Xcode 7, I’d recommend
watching WWDC session 215, What’s New in Storyboards. The first 20 minutes
covers the new Storyboard Reference functionality.

Day 4 :: UIStackView
In iOS 9, Apple have introduced UIStackView, which gives you a simple way to
horizontally or vertically stack views in your application. Under the hood, these
views use auto layout to manage the position and size of their child views, which
makes it easy to build adaptive UIs.

Previously, if you wished to create the kind of layout that stack views give you, you’d
need a lot of constraints. You’d need to manage the layout with a lot of padding,
height, and x/y position constraints depending on the orientation.

UIStackViews do all of this for you. There’s even out of the box support to smoothly
animate between states when adding, hiding and removing views as well as when
changing the layout properties on the UIStackView itself.
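As a quick illustration of that animation support (the view controller and its outlets here are hypothetical), toggling the hidden property of an arranged subview inside an animation block animates the remaining views into their new positions:

```swift
import UIKit

class StackDemoViewController: UIViewController {
    @IBOutlet weak var stackView: UIStackView!
    @IBOutlet weak var firstImageView: UIImageView!

    func toggleFirstImage() {
        UIView.animateWithDuration(0.3) {
            // UIStackView observes the hidden property of its arranged
            // subviews, so this change animates the layout smoothly.
            self.firstImageView.hidden = !self.firstImageView.hidden
        }
    }
}
```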

Using UIStackView
Now we are going to build an example of how to use a UIStackView. The finished
code is available over at GitHub, so you can follow along. We will be building a
simple demonstration of how UIStackView works, which has segmented controls at
the bottom to control the alignment and distribution properties of the UIStackView.

The image above is what we are going to build. As you can see, we have 4 of our
friends displayed, as well as two segmented controls along the bottom. This UI
uses auto layout, and adapts to any size it is given. It may therefore surprise you
that when creating this, we only have to add 4 layout positioning constraints
manually!

Everything else in this view is placed with UIStackViews! In total, we have 4
UIStackViews. The first is the only one that we need to add constraints to. This is to
position the stack view inside of our root view.

Once you’ve dragged a vertical stack view on to the view controller, open up the
constraint pinning tool from the bottom right of the Interface Builder window and
add the constraints specified in the screenshot above. This should keep the main
stack view in the center of the view and give it the correct size.

Now drag three more horizontal stack views inside of the original vertical stack
view that we just created. The stack view at the top will contain four image views;
one for each of our friends. All you need to do is drag four new image views into the
top stack view. Each image that we use has a slightly different size, and we don’t
want the images to become distorted, so on each image view, set its content mode
to Aspect Fit. This means that regardless of the size of the image view, the image
will always be displayed at the correct aspect ratio and will always fit inside of the
image view's dimensions.

You may also notice that there are small gaps between each image view in the final
implementation. This is set by the spacing property in interface builder’s attributes
inspector while the top stack view is selected. This is where you can also set the
alignment and distribution properties. Leave both of these set to “Fill” for now, as
we are going to modify these based on the selected segment of our segmented
controls.

The other two stack views in our root stack view are also horizontal stack views.
These are simple stack views, each with a label and a segmented control. Once
you've added the distribution and alignment labels and segmented controls,
configure the controls to have the following segments:

• Distribution
    • Fill
    • Fill Equally
    • Fill Proportionally
    • Equal Spacing
    • Equal Centering
• Alignment
    • Fill
    • Top
    • Center
    • Bottom

We will see a visual demonstration of what each of these properties does soon, but
they should be fairly self-explanatory. It's worth noting that some of these
properties depend heavily on the content sizes of the stack view's contents.
Thankfully, in our case this is simple, as each image view's content size is the size
of its image.
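To build intuition for the distribution options, the sketch below models two of them with simple arithmetic. This is purely an illustration of the geometry involved (my own simplified model, in current Swift syntax), not how UIStackView is actually implemented; the real control resolves everything through auto layout constraints.

```swift
// Simplified model of two distribution modes along a horizontal axis.

// .FillEqually: every arranged view gets the same width, with a fixed
// gap of `spacing` points between neighbours.
func fillEquallyWidths(totalWidth: Double, count: Int, spacing: Double) -> [Double] {
    let width = (totalWidth - spacing * Double(count - 1)) / Double(count)
    return Array(repeating: width, count: count)
}

// .EqualSpacing: views keep their own widths, and the leftover space is
// split into equal gaps between them.
func equalSpacingGap(totalWidth: Double, itemWidths: [Double]) -> Double {
    let used = itemWidths.reduce(0, +)
    return (totalWidth - used) / Double(itemWidths.count - 1)
}

// Four views in a 320pt-wide stack view with 10pt spacing:
print(fillEquallyWidths(totalWidth: 320, count: 4, spacing: 10)) // [72.5, 72.5, 72.5, 72.5]
// Four fixed-size views in the same 320pt stack view:
print(equalSpacingGap(totalWidth: 320, itemWidths: [60, 80, 50, 70])) // 20.0
```

The same reasoning explains why some modes need the arranged views' content sizes: .EqualSpacing keeps each view at its own size, while .FillEqually ignores it.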
Now that our UI is set up, we need to actually do something when the user selects a
different segment. First, drag an IBOutlet from the top stack view which contains
the image views into your view controller subclass and name it peopleStackView.
Then, drag an IBAction from each segmented control's Value Changed event
into your class. In each function, you'll want to set the alignment or distribution
property on peopleStackView based on which segment was selected by the user.

@IBAction func alignmentSegmentSelected(sender: UISegmentedControl) {
    UIView.animateWithDuration(1.0,
        delay: 0,
        usingSpringWithDamping: 0.5,
        initialSpringVelocity: 0.2,
        options: .CurveEaseInOut,
        animations: { () -> Void in
            if sender.selectedSegmentIndex == 0 {
                self.peopleStackView.alignment = .Fill
            }
            else if sender.selectedSegmentIndex == 1 {
                self.peopleStackView.alignment = .Top
            }
            else if sender.selectedSegmentIndex == 2 {
                self.peopleStackView.alignment = .Center
            }
            else if sender.selectedSegmentIndex == 3 {
                self.peopleStackView.alignment = .Bottom
            }
        },
        completion: nil)
}

@IBAction func distributionSegmentSelected(sender: UISegmentedControl) {
    UIView.animateWithDuration(1.0,
        delay: 0,
        usingSpringWithDamping: 0.5,
        initialSpringVelocity: 0.2,
        options: .CurveEaseInOut,
        animations: { () -> Void in
            if sender.selectedSegmentIndex == 0 {
                self.peopleStackView.distribution = .Fill
            }
            else if sender.selectedSegmentIndex == 1 {
                self.peopleStackView.distribution = .FillEqually
            }
            else if sender.selectedSegmentIndex == 2 {
                self.peopleStackView.distribution = .FillProportionally
            }
            else if sender.selectedSegmentIndex == 3 {
                self.peopleStackView.distribution = .EqualSpacing
            }
            else if sender.selectedSegmentIndex == 4 {
                self.peopleStackView.distribution = .EqualCentering
            }
        },
        completion: nil)
}

You can see that I’ve wrapped the code in each function in an animation block for a
bit of visual flair, but that is by no means necessary. It will instantly change if you
remove the animation code. Now all that’s left is to build and run!

Try playing around with the different combinations of distribution and alignment.
This should show you how powerful UIStackView can be when helping you to create
interfaces that work perfectly on a multitude of devices.

Adding existing views to UIStackView


If you have an existing UI that you wish to convert to use a UIStackView layout,
simply remove the constraints on your views, select them, then click the left-most
button at the bottom right of the Interface Builder window. This will take
your views and quickly arrange them into a new UIStackView.

This makes it easy to convert your existing constraint-based layouts into simple
stack views, which handle the majority of your layout constraints for you.

Further Reading
For more information on UIStackView, I'd recommend watching WWDC session
218, Mysteries of Auto Layout, Part 1. Jason Yao covers the fundamentals of
UIStackView in the first 15 minutes of the video, and creates a demo which shows
how quickly you can create interfaces with fewer constraints than you needed in
the past.


Day 5 :: Xcode Code Coverage Tools
Code coverage is a tool that helps you to measure the value of your unit tests. High
levels of code coverage give you confidence in your tests and indicate that your
application has been more thoroughly tested. You could have thousands of tests,
but if they only test one of your many functions, then your unit test suite isn’t that
valuable at all!

There’s no ideal code coverage percentage that you should aim for. This will vary
drastically depending your project. If your projects has a lot of visual components
that you can’t test, then the target figure will be a lot lower than if you’re putting
together a data processing framework, for example.

Code Coverage in Xcode


In the past, if you wanted to produce code coverage reports for your projects with
Xcode, there were several options. However, these were all fairly complicated and
required a lot of manual setup. Thankfully, in Xcode 7, Apple have integrated code
coverage tools directly into the IDE. The tools are tightly integrated with LLVM
and count each time an expression is called.

Using the Code Coverage Tools


Now we are going to put together a simple example of how to use the new code
coverage tools to improve your existing test suite. The finished code is available
over at GitHub, so you can follow along.

The first thing to do is create a new project. Make sure that you select the option to
use unit tests. This will create a default project with the required setup. Now we
need something to test. This can obviously be anything you want, but I've added an
empty Swift file and written a global function that checks whether two strings are
anagrams of each other. Having this as a global function probably isn't the best
design, but it will do for now!
func checkWord(word: String, isAnagramOfWord: String) -> Bool {

    // Strip the whitespace and make both of the strings lowercase
    let noWhitespaceOriginalString =
        word.stringByReplacingOccurrencesOfString(" ", withString: "").lowercaseString
    let noWhitespaceComparisonString =
        isAnagramOfWord.stringByReplacingOccurrencesOfString(" ", withString: "").lowercaseString

    // If they have different lengths, they are definitely not anagrams
    if noWhitespaceOriginalString.characters.count !=
        noWhitespaceComparisonString.characters.count {
        return false
    }

    // If the strings are the same, they must be anagrams of each other!
    if noWhitespaceOriginalString == noWhitespaceComparisonString {
        return true
    }

    // If they have no content, default to true.
    if noWhitespaceOriginalString.characters.count == 0 {
        return true
    }

    var dict = [Character: Int]()

    // Go through every character in the original string.
    for index in 1...noWhitespaceOriginalString.characters.count {

        // Find the index of the character at position i, then store the character.
        let originalWordIndex =
            noWhitespaceOriginalString.startIndex.advancedBy(index - 1)
        let originalWordCharacter =
            noWhitespaceOriginalString[originalWordIndex]

        // Do the same as above for the compared word.
        let comparedWordIndex =
            noWhitespaceComparisonString.startIndex.advancedBy(index - 1)
        let comparedWordCharacter =
            noWhitespaceComparisonString[comparedWordIndex]

        // Increment the value in the dictionary for the original word
        // character. If it doesn't exist, set it to 0 first.
        dict[originalWordCharacter] = (dict[originalWordCharacter] ?? 0) + 1
        // Do the same for the compared word character, but this time
        // decrement instead of increment.
        dict[comparedWordCharacter] = (dict[comparedWordCharacter] ?? 0) - 1
    }

    // Loop through the entire dictionary. If there's a value that
    // isn't 0, the strings aren't anagrams.
    for key in dict.keys {
        if (dict[key] != 0) {
            return false
        }
    }

    // Every count in the dictionary must have been 0, so the strings
    // are anagrams.
    return true
}

This is a relatively simple function, so we should be able to get 100% code coverage
of it without any problems.

Once you have added your algorithm, it’s time to test it! Open up the default
XCTestCase that was created for you when the project was created. Add a simple
test that asserts whether “1” is an anagram of “1”. Your test class should now look
like this.

class CodeCoverageTests: XCTestCase {

func testEqualOneCharacterString() {
XCTAssert(checkWord("1", isAnagramOfWord: "1"))
}
}

Before you run your test, we must make sure that code coverage is turned on! At the
time of writing, it is off by default, so you must edit your test scheme to turn it on.

Page 34 of 105
Make sure that the “Gather coverage data” box is checked, then click ‘Close’ and run
the test target! Hopefully the test we just added will pass.

The Coverage Tab

Once the test passes, you know that at least one route through the
checkWord:isAnagramOfWord: function is correct. What you don't know is how
many more routes there are that are not being tested. This is where the code
coverage tools come in handy. The code coverage tab allows you to see the
different levels of code coverage in your application, grouped by target, by file, and
then by function.

Open the Report Navigator in Xcode's left hand pane and select the test run that
just finished. Then, on the tab bar, select "Coverage".

This will display a list of your classes and functions and indicate the test coverage
levels of each. If you hover over the checkWord function, you'll see that our test only
covers 28% of it. Unacceptable! We need to find out which code paths are
being executed and which aren't, so that we can improve this. Double click the
function name, and Xcode will open up the class with the code coverage statistics
displayed alongside the code.

The white areas indicate code that is covered and has been executed. The grey
areas show code that has not been run. These are areas where we must add more
tests to check. The numbers on the right hand side show the number of times that
this block of code has been executed.

Improving Coverage
Clearly, we should be aiming for more than 28% coverage for this class. There's no
UI, and it seems like the perfect candidate for unit testing. So let's add some more
tests! Ideally, we want to reach each return statement in the function. This should
give us full coverage. Add the following tests to your test class.

func testDifferentLengthStrings() {
XCTAssertFalse(checkWord("a", isAnagramOfWord: "bb"))
}

func testEmptyStrings() {
XCTAssert(checkWord("", isAnagramOfWord: ""))
}

func testLongAnagram() {
    XCTAssert(checkWord("chris grant", isAnagramOfWord: "char string"))
}

func testLongInvalidAnagramWithEqualLengths() {
XCTAssertFalse(checkWord("apple", isAnagramOfWord: "tests"))
}

These tests should be enough to give us full code coverage. Run the unit tests
again and head back to the code coverage tab in the latest test report.

We’ve done it! 100% code coverage. You can now see how the whole file has turned
white and the numbers indicate that every code path has been executed at least
once.

Using code coverage is a great way to help you to build a unit testing suite that is
actually valuable, rather than one with lots of tests that don’t really test the full
functionality of your code. Xcode 7 makes this easy to do, and I’d thoroughly
recommend enabling Code Coverage in your project. Even if this is on an existing
test suite, it will help to give you an idea of how well tested your code is.

Further Reading
For more information on the Code Coverage Tools in Xcode 7, I’d recommend
watching WWDC session 410, Continuous Integration and Code Coverage in Xcode.

Day 6 :: iPad Multitasking
One of the biggest changes in iOS 9 was the introduction of multitasking, which
allows users to have more than one app on the screen at a time. This comes in two
forms: Slide Over and Split View.

Slide Over View

In Slide Over view, the user swipes from the right hand side to display a list of their
apps, then selects one to open it in the narrow window displayed. This appears
on top of the app that was previously open, and any interaction with the left hand
side of the screen is disabled.

Split View

To open split view, the user pulls the vertical divider that appears in Slide Over View
further to the left. The user directly controls the size of your app’s window by sliding
the vertical divider between the two app windows. When split view is active, there is
no concept of a foreground or background app. Both apps are in the foreground.
It’s worth noting that split view is currently only available on the iPad Air 2.

Enabling Multitasking in Your App


New Projects created in Xcode 7 have multitasking enabled by default. If you have
an existing application however, you’ll have to enable it manually. When you are
using the iOS 9 SDK, there are a couple of steps to do this.
• Enable all user interface orientations in your app
• Use Launch Storyboards

Page 40 of 105
Opting Out
If your app already does the above things, then multitasking will be enabled when it
is built with the iOS 9 SDK. If you want to opt out of this behaviour, set the
UIRequiresFullScreen key in your Info.plist file.
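For reference, opting out looks like this in the XML form of your Info.plist:

```xml
<!-- Info.plist: opt this app out of iPad multitasking -->
<key>UIRequiresFullScreen</key>
<true/>
```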

The Importance of Auto Layout


Auto Layout was first introduced in iOS 6, and gives you a way to lay out your UI by
specifying constraints rather than fixed positions. Adaptive Layout was introduced
in iOS 8, which takes Auto Layout to the next level by allowing you to specify
different constraints based on different size classes. Size classes identify a relative
amount of display space for the height and the width of your app's window.

Due to the nature of multitasking, there are a few issues that you’ll have to take into
consideration when compiling your app with the iOS 9 SDK.

Don’t Use UIInterfaceOrientation any more!


Conceptually, this doesn’t work any more if your app supports multitasking. If you
have a multitasking app and you are checking the current UIInterfaceOrientation,
you can’t be sure that your app is running in full screen. If your app is the front app
in SplitView and the iPad is landscape, then even though it is larger vertically than
horizontally, it will still return UIInterfaceOrientationPortrait.

Sometimes you will still need to modify your interface based on size of the app’s
window though. So how can we do that? The answer is to use
traitCollection.horizontalSizeClass. This gives you the Size Class information
about your interface, which you can use to conditionally position views in your app.

Size Change Transition Events


Previously, events such as willRotateToInterfaceOrientation and
didRotateToInterfaceOrientation were the recommended way to make changes
to your application when the screen rotated. In iOS 8, Apple introduced
willTransitionToTraitCollection and viewWillTransitionToSize. These
methods become even more important in iOS 9 with the introduction of
multitasking. To check whether your interface is portrait or landscape, which you
still may wish to do, you can manually compare the width to the height.
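As a concrete sketch of that comparison (plain Swift using Foundation's CGSize; the helper name is my own, not a UIKit API):

```swift
import Foundation

// Decide portrait vs landscape purely from the size your controller is
// given (e.g. the size passed to viewWillTransitionToSize), never from
// UIInterfaceOrientation.
func isPortrait(size: CGSize) -> Bool {
    return size.height >= size.width
}

print(isPortrait(size: CGSize(width: 320, height: 480)))  // true
print(isPortrait(size: CGSize(width: 1024, height: 768))) // false
```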

Responding to Keyboard Events
In the past, the only time your app would be affected by the keyboard was when it
was opened by your app itself. Now, it's possible to have a keyboard appear on top
of your app, even though the user did not open it from your app.

In some cases, you may be fine with the keyboard appearing on top of your app.
However, if it obstructs an important piece of your UI, then your users may be
blocked from what they are doing. In this situation, you should respond to one of
the UIKeyboard notifications that have been around for a long time now. The
WillShow, DidShow, WillHide, DidHide, WillChangeFrame and DidChangeFrame
notifications should give you the ability to do this. These events will fire in both
apps that are present on screen.
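The observation pattern itself is plain Foundation and can be sketched as below (current Swift syntax; the string-based notification name stands in for UIKit's UIKeyboardWillShowNotification constant, which isn't available outside UIKit):

```swift
import Foundation

// Sketch: react to a keyboard notification, even when another app's
// text field summoned the keyboard.
let keyboardWillShow = Notification.Name("UIKeyboardWillShowNotification")

var adjustedForKeyboard = false
let token = NotificationCenter.default.addObserver(
    forName: keyboardWillShow, object: nil, queue: nil) { _ in
    // In a real app: read the keyboard frame from the notification's
    // userInfo and adjust your layout so nothing important is obscured.
    adjustedForKeyboard = true
}

// Simulate the system posting the notification.
NotificationCenter.default.post(name: keyboardWillShow, object: nil)
print(adjustedForKeyboard)  // true

NotificationCenter.default.removeObserver(token)
```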

Other Considerations
The changes you will have to make aren’t just visual. Previously, apps could rely on
being the only app running in the foreground. You had sole access to the vast
majority of system resources such as the CPU, GPU and memory. However, this has
now changed. If a user has split view or slide over view active, and, at the same
time, is watching a video in the new iOS 9 picture in picture mode, then these
resources must be shared between three applications.

For best user experience, the system tightly manages resource usage and
terminates apps that are using more than their fair share - Apple iOS 9
Multitasking Documentation

You should therefore profile and heavily test your applications on different
variations of iPad so that you are confident that your application is as efficient as it
can be and is not using resources that it does not need.

Further Reading
For more information on the new multitasking functionality in iOS 9, take a look at
the Adopting Multitasking On iPad guide in the iOS developer library. I’d also
recommend watching WWDC session 205, Continuous Integration and Code
Coverage in Xcode.

Day 7 :: The New Contacts
Framework
In iOS 9, Apple have introduced the new Contacts framework. This gives developers
a way to interact with the device's contacts using an Objective-C API that works well
with Swift too! This is a big improvement over the previous method of accessing a
user's contacts with the AddressBook framework. The AddressBook framework
was difficult to use, as it was a C API rather than an Objective-C one. Using it in
Swift was a huge pain point for developers, and hopefully the new Contacts
framework can fix this.

I think the best indicator of how much developers disliked the AddressBook
framework was the volume of the cheers in the WWDC session when it was
announced that it had been deprecated in iOS 9! It was certainly one of the loudest
and longest cheers that I've heard so far.

Contacts returned from the framework are now unified. This means that if you have
duplicate copies of a contact from different sources, they are combined into one,
and that single contact is the one exposed to you, so we no longer have to do any
manual merging of contacts.

Using the New Contacts Framework


We are now going to build a simple application that shows you a list of your
contacts and allows you to see more details about them.

As you can see, this is a master detail view controller application which also works
well on the iPhone. There is a list of your device’s contacts on the left hand side,
and their display image, name, and phone numbers are displayed in the detail view
controller.

Fetching the User’s Contacts


To get started, just set up a default Xcode project with the master detail view
controller template. That should give us the views and infrastructure that we need.
Once that is set up, open the MasterViewController class. First we need to import
the new Contacts and ContactsUI frameworks at the top of the file.

import Contacts
import ContactsUI

We are now going to replace the existing datasource behavior with one which
fetches and displays the current device’s contacts. Let’s write a function that does
just that.

func findContacts() -> [CNContact] {

let store = CNContactStore()

CNContactStore is the new class to fetch and save contacts. In this article we will
only be fetching contacts, but you can also use it to fetch and save contact groups,
as well as contact containers.

let keysToFetch =
[CNContactFormatter.descriptorForRequiredKeysForStyle(.FullName),
CNContactImageDataKey,
CNContactPhoneNumbersKey]

let fetchRequest = CNContactFetchRequest(keysToFetch: keysToFetch)

Once we have a reference to the store, we need to create a fetch request to query
the store and fetch some results. When creating a CNContactFetchRequest, we also
pass the contact keys that we wish to fetch, so we create an array of keys first. One
interesting thing to note is the
CNContactFormatter.descriptorForRequiredKeysForStyle(.FullName) key that
we place in the array. This is a convenience method from CNContactFormatter,
which we will use later. CNContactFormatter requires many different keys, and
without this descriptorForRequiredKeysForStyle function, we would need to
specify these keys manually, like so:

[CNContactGivenNameKey,
CNContactNamePrefixKey,
CNContactNameSuffixKey,
CNContactMiddleNameKey,
CNContactFamilyNameKey,
CNContactTypeKey…]

As you can see, this is a lot of code, and if the CNContactFormatter key
requirements were to change in future, then you’d get an exception when trying to
generate a string from the CNContactFormatter.

var contacts = [CNContact]()

do {
    try store.enumerateContactsWithFetchRequest(fetchRequest,
        usingBlock: { (contact, stop) -> Void in
            contacts.append(contact)
    })
}
catch let error as NSError {
    print(error.localizedDescription)
}

return contacts

This code is fairly simple. All that we do is enumerate contacts from the
CNContactStore that match our fetch request. The fetch request did not specify any
queries, so it should return all contacts with all the keys that we requested. For each
contact, we simply save them into a contacts array and return it.

Now we must invoke the function and use the results in our table view. Again, in
MasterViewController, add a property to store the contacts that we want to
display.

var contacts = [CNContact]()

Then, update the viewDidLoad() function to include an asynchronous call to fetch
the contacts and store them.

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    self.contacts = self.findContacts()

    dispatch_async(dispatch_get_main_queue()) {
        self.tableView!.reloadData()
    }
}

Once the results have been saved, reload the table view.

You’ll have to make some amendments to the UITableViewDatasource methods in


order to display the results correctly.

override func tableView(tableView: UITableView, numberOfRowsInSection
    section: Int) -> Int {
    return self.contacts.count
}

override func tableView(tableView: UITableView, cellForRowAtIndexPath
    indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("Cell",
        forIndexPath: indexPath)

    let contact = contacts[indexPath.row] as CNContact
    cell.textLabel!.text = "\(contact.givenName) \(contact.familyName)"
    return cell
}

All that’s left now is to update DetailViewController to show the contact details. I
won’t go into how to do this in depth here, but you’ll need an image view, a name

Page 47 of 105
label and a phone number label created in interface builder and stored as an
IBOutlet in your DetailViewController class.

@IBOutlet weak var contactImageView: UIImageView!
@IBOutlet weak var contactNameLabel: UILabel!
@IBOutlet weak var contactPhoneNumberLabel: UILabel!

Once that’s done, we need to set the correct values. This is where we will learn how
to work with CNContact objects. In the configureView function, you will need the
following lines.

label.text = CNContactFormatter.stringFromContact(contact,
style: .FullName)

As we discussed above, CNContactFormatter takes care of creating strings from a
contact's name, in the correct format. All we have to do is pass in the contact and
the format we require. Everything else is handled by the formatter.

When it comes to setting the image, we have to check whether the imageData
actually exists first. imageData is optional, so the app could crash if the device had
a contact without an image set on it.

if contact.imageData != nil {
    contactImageView.image = UIImage(data: contact.imageData!)
}
else {
    contactImageView.image = nil
}

If it does exist, then we simply create a new UIImage with the data and set that on
the image view.

Finally, all that is left is to do is set the phone number label.

if let phoneNumberLabel = self.contactPhoneNumberLabel {
    var numberArray = [String]()
    for number in contact.phoneNumbers {
        let phoneNumber = number.value as! CNPhoneNumber
        numberArray.append(phoneNumber.stringValue)
    }
    phoneNumberLabel.text = numberArray.joinWithSeparator(", ")
}

This should be the final result. We now have an app that displays a list of the
device’s contacts and lets you extract data and find out more about each one.

Using ContactsUI to Select Contact Information


Suppose we want to have an app where users can select their contacts and pass
information about them to us. As we have just seen above, there is quite a lot of
work involved in writing code to manually fetch and display contact information in
our UI. It would be a lot simpler if something else did that for us.

This is where the ContactsUI framework comes in handy. It provides a set of view
controllers that we can use to display contact information in our applications.
In this section of the app, we want the user to be able to select one of their
contacts' phone numbers, and for us to be able to record this in the app. As this is
just a demo application, add a UIBarButtonItem in the top right of the
MasterViewController storyboard scene's navigation item. Then hook that up to a
function with an IBAction in the MasterViewController class.

@IBAction func showContactsPicker(sender: UIBarButtonItem) {
    let contactPicker = CNContactPickerViewController()
    contactPicker.delegate = self
    contactPicker.displayedPropertyKeys = [CNContactPhoneNumbersKey]

    self.presentViewController(contactPicker, animated: true, completion: nil)
}

We simply create a new CNContactPickerViewController, set its delegate to self so
we receive the responses, specify that we are only interested in phone numbers,
then present the view controller to the user. Everything else is handled for us!

func contactPicker(picker: CNContactPickerViewController,
    didSelectContactProperty contactProperty: CNContactProperty) {
    let contact = contactProperty.contact
    let phoneNumber = contactProperty.value as! CNPhoneNumber

    print(contact.givenName)
    print(phoneNumber.stringValue)
}

In the contactPicker delegate function, didSelectContactProperty, we are passed
a CNContactProperty object. This is a wrapper for a CNContact and the specific
property that the user selected. Let's see how this works.

When you tap the UIBarButtonItem in the top right of the MasterViewController,
you will be presented with the screen above. This is a simple list of all of your
contacts, as we haven’t specified a filter with a predicate on the
CNContactPickerViewController.

Once you tap a contact, you are presented with a list of phone numbers for that
contact. No other information is visible, as we earlier specified
CNContactPhoneNumbersKey as the only value in our contact picker’s
displayedPropertyKeys.

Finally, when tapping a property, such as the mobile number as we have done in the
screenshot above, your contactPicker:didSelectContactProperty function will be
called before the picker is dismissed.

In this case, the contact property contains the "Kate Bell" CNContact as its contact,
the string "phoneNumbers" as its key, the "5555648583" CNPhoneNumber as its
value, and finally the contact identifier string as its identifier property.

In summary, we have seen that using the ContactsUI framework to select pieces of
contact information is easy and quick to develop for and use. If you require more
fine grained control of how your contact information is displayed, then the Contacts
framework provides you with a great way to access and store contact information.

Further Reading
For more information on the new Contacts Framework, I’d also recommend
watching WWDC session 223, Introducing the Contacts Framework for iOS and OS
X.

Day 8 :: Apple Pay
Apple Pay was introduced in iOS 8, and is the easy, secure, and private way to pay
for physical goods and services within apps. It makes it simple for users to pay for
things by only requiring their fingerprint to authorise a transaction.

Apple Pay is only available on certain devices. Currently, these devices are iPhone 6,
iPhone 6+, iPad Air 2, and iPad mini 3. This is because Apple Pay must be
supported by a dedicated hardware chip called the Secure Element, which stores
and encrypts vital information.

You should not use Apple Pay to unlock features of your app. In App Purchase
should be used in this case. Apple Pay is solely for physical goods and services, for
example, club memberships, hotel reservations, and tickets for events.

Why Use Apple Pay


Apple Pay makes things a lot easier for developers. You don't need to handle and
process actual card numbers any more, and users no longer need to sign up for an
account, so you can remove your onboarding process. The shipping and billing
information is sent to your payment processor automatically with the Apple Pay
token. This means a much easier purchase process, which leads to much higher
conversion rates.

In WWDC session 702, Apple Pay Within Apps, Nick Shearer gave some statistics
about these higher conversion rates from different businesses in the USA.

• Stubhub found that Apple Pay customers transacted 20% more than regular
customers.

• OpenTable had a 50% transaction growth after integrating Apple Pay.


• Staples saw an increase in conversion of 109% with Apple Pay.

Creating a Simple Store App
We are going to set up a simple store inside of an app and demonstrate how to use
Apple Pay to process our transactions. The app will only have one product, but will
be fully integrated with Apple Pay, which allows us to demonstrate how to set it up
and get started.

This is what we are going to be building. As you can see, the user is presented with
an Apple Pay sheet when they tap a Buy Now button.

Enabling Apple Pay


Before we can write any code at all, we have to set up our app’s capability to work
with Apple Pay. Once you’ve created a blank new project, open the project settings,
then the capabilities tab.

You should see Apple Pay listed in the capabilities section. Change the switch to
the on state, and you will be asked to select a development team to use for
provisioning. Hopefully, Xcode will then do all of the setup for you and Apple Pay
will be enabled.

We must then add a Merchant ID, so that Apple knows how to encrypt the payment
correctly. Click the add button in the Merchant ID area that appears, and enter your
own, unique Merchant ID. In this example, we’ve chosen
merchant.com.shinobistore.appleplay.

And that’s it! You should now see that Apple Pay is enabled, and you can start using
it in your app.

Using Apple Pay


Now that we have the correct provisioning and entitlements set up, we’re ready to
start building the UI to allow the user to pay for our product. Open up the storyboard
and add some placeholder UI to show that the product is available for sale.

The UI we have created is just a simple image with title, price, and description text.
This isn’t important for the demo. We do need to add a button to the view though,
so let’s add one at the bottom of the view. The button we are going to add is a
PKPaymentButton. This was introduced by Apple in iOS 8.3. This Apple Pay button is
localized and provides users with a standard visual indicator of when they can use
Apple Pay. Using this button to launch Apple Pay screens is strongly recommended
by Apple for this reason.

The button has three available styles:


• White
• WhiteOutline
• Black

It also has two different button types:


• Plain
• Buy

These are just different ways to style the button. Unfortunately, adding this type of
button isn’t yet supported in Interface Builder, so open ViewController.swift and
override the viewDidLoad method.
override func viewDidLoad() {
    super.viewDidLoad()

    let paymentButton = PKPaymentButton(type: .Buy, style: .Black)
    paymentButton.translatesAutoresizingMaskIntoConstraints = false
    paymentButton.addTarget(self, action: "buyNowButtonTapped:",
        forControlEvents: .TouchUpInside)
    bottomToolbar.addSubview(paymentButton)

    bottomToolbar.addConstraint(NSLayoutConstraint(item: paymentButton,
        attribute: .CenterX, relatedBy: .Equal, toItem: bottomToolbar,
        attribute: .CenterX, multiplier: 1, constant: 0))
    bottomToolbar.addConstraint(NSLayoutConstraint(item: paymentButton,
        attribute: .CenterY, relatedBy: .Equal, toItem: bottomToolbar,
        attribute: .CenterY, multiplier: 1, constant: 0))
}

That’s all we need. It’s all pretty self-explanatory, so let’s move on. Essentially, the
only UI element that we really care about here is the button. We’ll launch the
purchase process in the buyNowButtonTapped: method when the button is tapped.

Once the UI has been set up, we must now process the purchase. First though, it
would be good to have a solid understanding of the various classes required to
make Apple Pay transactions.

PKPaymentSummaryItem
This object is simply an item you’d like to charge for on the Apple Pay charge sheet.
This could be any product, tax, or shipping, for example.

PKPaymentRequest
A PKPaymentRequest combines the items you’d like to charge with how you would
like users to make the payment. This includes things such as your merchant
identifier, country code, and currency code.

PKPaymentAuthorizationViewController
The PKPaymentAuthorizationViewController prompts the user to authorize a
PKPaymentRequest, and to select a delivery address and valid payment card.

PKPayment
A PKPayment contains information needed to process the payment, and the
information needed to display a confirmation message.

All of these classes live under PassKit (hence the PK prefix), so you will also need
to import this framework wherever you use Apple Pay.

Setting up a Payment
The first step in setting up a payment is creating a PKPaymentRequest.
There are a few steps involved in doing so, which are detailed below.

func buyNowButtonTapped(sender: UIButton) {

    // Networks that we want to accept.
    let paymentNetworks = [PKPaymentNetworkAmex,
                           PKPaymentNetworkMasterCard,
                           PKPaymentNetworkVisa,
                           PKPaymentNetworkDiscover]

The first thing we need to do is set up an array of acceptable payment networks.
This is how we restrict payments to the card types that we want to accept.

if PKPaymentAuthorizationViewController.canMakePaymentsUsingNetworks(paymentNetworks) {

Then, we have to check whether the device can process this kind of payment. The
static canMakePaymentsUsingNetworks method on
PKPaymentAuthorizationViewController checks whether the device is capable of
making payments, and whether it can do so using any of the networks we specified.

let request = PKPaymentRequest()

// This merchantIdentifier should have been created for you in Xcode
// when you set up the Apple Pay capabilities.
request.merchantIdentifier = "shinobistore.com.day-by-day."

// Standard ISO country code. The country in which you make the charge.
request.countryCode = "US"
request.currencyCode = "USD"
request.supportedNetworks = paymentNetworks

// 3DS or EMV. Check with your payment platform or processor.
request.merchantCapabilities = .Capability3DS

If we can process a payment on this device, then we can start to set up the payment
request itself using the code above. The comments on each line should explain
what effect each line has.

// Set the items that you are charging for. The last item is the total
// amount you want to charge.
let shinobiToySummaryItem = PKPaymentSummaryItem(label: "Shinobi Cuddly Toy",
    amount: NSDecimalNumber(double: 22.99), type: .Final)
let shinobiPostageSummaryItem = PKPaymentSummaryItem(label: "Postage",
    amount: NSDecimalNumber(double: 3.99), type: .Final)
let shinobiTaxSummaryItem = PKPaymentSummaryItem(label: "Tax",
    amount: NSDecimalNumber(double: 2.29), type: .Final)
let total = PKPaymentSummaryItem(label: "Total",
    amount: NSDecimalNumber(double: 29.27), type: .Final)

Then, as shown above, you now have to set up the products that you want to appear
on the Apple Pay sheet. These will all be used on the next line as the
paymentSummaryItems on the request.

request.paymentSummaryItems = [shinobiToySummaryItem,
shinobiPostageSummaryItem, shinobiTaxSummaryItem, total]

One interesting part of the API here is that the last item in the array is the amount
the user is actually charged. This wasn’t obvious at first, but the Apple Pay sheet
charges the user the amount specified in the last item; in this case, total.
Therefore, if you wish for more than one item to appear on the payment sheet, you
must calculate the total yourself and append it as an additional
PKPaymentSummaryItem at the end of the list, as demonstrated above.
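Rather than hard-coding 29.27, the trailing total can be derived from the other items. This helper is a sketch of our own (the totalSummaryItem function is not part of the original project):

```swift
import PassKit

// Hypothetical helper: sum the individual summary items to produce the
// required trailing "total" item, instead of hard-coding the amount.
func totalSummaryItem(forItems items: [PKPaymentSummaryItem],
    label: String) -> PKPaymentSummaryItem {
    let total = items.reduce(NSDecimalNumber.zero()) {
        $0.decimalNumberByAdding($1.amount)
    }
    return PKPaymentSummaryItem(label: label, amount: total, type: .Final)
}
```

Appending the result of a helper like this as the last element of paymentSummaryItems keeps the charged amount in sync with the line items.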

// Create a PKPaymentAuthorizationViewController from the request.
let authorizationViewController =
    PKPaymentAuthorizationViewController(paymentRequest: request)

// Set its delegate so we know the result of the payment authorization.
authorizationViewController.delegate = self

// Show the authorizationViewController to the user.
presentViewController(authorizationViewController, animated: true,
    completion: nil)

Finally, all that is left to do to present the Apple Pay sheet to the user is to create a
PKPaymentAuthorizationViewController from the request, set the delegate, and
present it to the user!

Now we need to implement the delegate methods of
PKPaymentAuthorizationViewControllerDelegate. We need these so that we know
whether a payment has been made, and so we receive callbacks when the payment
has been authorized and completes.

In paymentAuthorizationViewController:didAuthorizePayment we need to
process the payment data with our provider and then return the status to our app.
The PKPayment object we receive in this method has a PKPaymentToken token
property, which is what we should send to our payment provider. This is secure,
encrypted data.

func paymentAuthorizationViewController(controller:
    PKPaymentAuthorizationViewController, didAuthorizePayment payment:
    PKPayment, completion: (PKPaymentAuthorizationStatus) -> Void) {

    paymentToken = payment.token

    // You would typically send payment.token to a payment provider
    // such as Stripe here.
    completion(.Success)

    // Once the payment is successful, show the user that the purchase
    // has been successful.
    self.performSegueWithIdentifier("purchaseConfirmed", sender: self)
}

In paymentAuthorizationViewControllerDidFinish, we simply need to dismiss the
view controller.

func paymentAuthorizationViewControllerDidFinish(controller:
PKPaymentAuthorizationViewController) {
self.dismissViewControllerAnimated(true, completion: nil)
}

And that’s all there is to it! Obviously in the real world you will need to send the
payment token to a payment provider such as Stripe, but that’s beyond the scope of
this tutorial. We have also added a simple view controller to show a receipt, which
in this case just shows the payment token’s transactionIdentifier. This is a
string that describes a globally unique identifier for this transaction, which can be
used for receipt purposes.

Further Reading
For more information on Apple Pay, I’d recommend watching WWDC session
702, Apple Pay Within Apps. It’s quite a long session, but it’s definitely worth
watching if you are interested in integrating Apple Pay into your application. There’s
a great section in the middle about how to improve the user experience of the
payment process in your app, too.

There’s also a guide to Apple Pay on the Apple Developer website, which contains a
lot of valuable information that you should certainly be familiar with before
integrating Apple Pay.

Day 9 :: New UIKit Dynamics
Features
UIKit Dynamics was introduced in iOS 7, to give developers an easy way to add
some physical realism to their user interfaces. iOS 9 has brought a couple of big
improvements and we are going to take a look at some of these in this post.

Non-Rectangular Collision Bounds


Prior to iOS 9, collision bounds in UIKit Dynamics could only be rectangular. This led
to some strange visual effects when views that were not perfectly rectangular
collided. iOS 9 supports three types of collision bounds: rectangle, ellipse, and
path. The path can be anything, as long as it is convex (not concave),
counter-clockwise, and not self-intersecting.

In order to provide a custom collision bounds type, you can subclass UIView and
provide your own.

class Ellipse: UIView {
    override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
        return .Ellipse
    }
}

You can do the same if you have a custom view with a custom collision bounds
path, too.
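For instance, a triangular view might provide a path like the following sketch (the Triangle class is our own illustration, not from the original project; note the winding requirements mentioned above):

```swift
import UIKit

// Hypothetical example of a view with a custom, convex collision path.
class Triangle: UIView {
    override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
        return .Path
    }

    // The path is specified relative to the view's center, and must be
    // convex, counter-clockwise, and not self-intersecting. If collisions
    // behave oddly, try reversing the point order.
    override var collisionBoundingPath: UIBezierPath {
        let path = UIBezierPath()
        path.moveToPoint(CGPoint(x: 0, y: -bounds.height / 2))
        path.addLineToPoint(CGPoint(x: bounds.width / 2, y: bounds.height / 2))
        path.addLineToPoint(CGPoint(x: -bounds.width / 2, y: bounds.height / 2))
        path.closePath()
        return path
    }
}
```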

UIFieldBehavior
Before iOS 9, the only type of field behaviour available was the gravity behaviour.
This has been a UIFieldBehavior all along, but the API was not exposed for users of
the SDK.

Now, UIKit Dynamics contains a variety of field behaviours:

• Linear Gravity
• Radial Gravity
• Noise
• Custom
These behaviours all have a variety of properties to customise how they affect the
views in the UIDynamicAnimator, and are very simple to add and use.
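As a sketch of the API surface (the positions, strengths, and the drag force in the custom field are arbitrary values of our own), the different field types are created through factory methods on UIFieldBehavior:

```swift
import UIKit

// Linear gravity: a constant force along the given vector.
let linearField = UIFieldBehavior.linearGravityFieldWithVector(
    CGVector(dx: 0, dy: 1))

// Radial gravity: attracts items towards a point, fading with distance.
let radialField = UIFieldBehavior.radialGravityFieldWithPosition(
    CGPoint(x: 160, y: 240))
radialField.strength = 2.0
radialField.falloff = 0.5

// Custom: compute a force per item from its position, velocity, mass,
// charge, and the elapsed time.
let customField = UIFieldBehavior.fieldWithEvaluationBlock {
    field, position, velocity, mass, charge, deltaTime in
    // For example, a simple drag force opposing the item's velocity.
    return CGVector(dx: -0.1 * velocity.dx, dy: -0.1 * velocity.dy)
}
```

Each of these is added to the animator with addBehavior, in the same way as any other dynamic behavior.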

Building a UIFieldBehavior & Non-Rectangular Collision Bounds Example
Let’s combine these two new features into an example. We will have a couple of
views (one ellipse and one square), add some collision logic, and a noise
UIFieldBehavior.

To use UIKit Dynamics, the first thing you need to set up is a UIDynamicAnimator.
Once you have set up a variable in your class to keep a reference to it, set it up in
the viewDidLoad method.

// Set up a UIDynamicAnimator on the view.


animator = UIDynamicAnimator(referenceView: view)

Now we need to add some views that will actually animate.

// Add two views to the view.
let square = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
square.backgroundColor = .blueColor()
view.addSubview(square)

let ellipse = Ellipse(frame: CGRect(x: 0, y: 100, width: 100, height: 100))
ellipse.backgroundColor = .yellowColor()
ellipse.layer.cornerRadius = 50
view.addSubview(ellipse)

These are two basic views that we will add to the animator’s behaviors.

let items = [square, ellipse]

// Create some gravity so the items always fall towards the bottom.
let gravity = UIGravityBehavior(items: items)
animator.addBehavior(gravity)

The first behavior we create is a gravity behavior.

// Set up the noise field.
let noiseField = UIFieldBehavior.noiseFieldWithSmoothness(1.0,
    animationSpeed: 0.5)
noiseField.addItem(square)
noiseField.addItem(ellipse)
noiseField.strength = 0.5
animator.addBehavior(noiseField)

The next behavior we need to set up is a UIFieldBehavior, created with the
noiseFieldWithSmoothness factory method. We add the square and ellipse to this
behavior, and add the field behavior to the animator.

// Don't let objects overlap each other - set up a collision behaviour.
let collision = UICollisionBehavior(items: items)
collision.setTranslatesReferenceBoundsIntoBoundaryWithInsets(
    UIEdgeInsets(top: 20, left: 5, bottom: 5, right: 5))
animator.addBehavior(collision)

We then set up a UICollisionBehavior for the items. This prevents them from
overlapping, and adds collision physics to the animator. We also use
setTranslatesReferenceBoundsIntoBoundaryWithInsets. This creates a bounding
box around the view, and allows us to specify some insets so the bounds are visible.
If we didn’t have a bounding box, then gravity would take the ellipse and square off
the bottom of the screen and they would never return!

Speaking of gravity, it would be nice if the gravity in our example always pointed
toward the bottom of the device. In other words, the direction of real life gravity! In
order to do this, we have to use the CoreMotion framework. Import core motion,
and create a CMMotionManager variable.

let manager:CMMotionManager = CMMotionManager()

We need a property for this because we have to keep a reference to the manager in
order to continue receiving updates. Otherwise it would be released, and the
updates would never arrive. Once we start to receive device motion updates, we can
update the gravity behavior’s gravityDirection property to a vector that points
down, based on the motion manager’s gravity property.

// Used to alter the gravity so it always points down.
if manager.deviceMotionAvailable {
    manager.deviceMotionUpdateInterval = 0.1
    manager.startDeviceMotionUpdatesToQueue(NSOperationQueue.mainQueue(),
        withHandler: { deviceManager, error in
            gravity.gravityDirection = CGVector(dx: deviceManager!.gravity.x,
                dy: -deviceManager!.gravity.y)
    })
}

Note that this will only work in portrait orientation. You’ll have to add some
additional calculations if you want to support all device orientations in this app!
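A sketch of what those calculations might look like follows. The helper function and the exact sign conventions are our own assumption rather than part of the original project, so verify them on a device:

```swift
import CoreMotion
import UIKit

// Hypothetical mapping from device-frame gravity to an interface-frame
// vector. CoreMotion reports gravity in the device's portrait frame, so
// landscape orientations need the axes swapping. The signs below are a
// starting point; adjust them if the motion feels inverted on a device.
func interfaceGravity(gravity: CMAcceleration,
    orientation: UIInterfaceOrientation) -> CGVector {
    switch orientation {
    case .LandscapeLeft:
        return CGVector(dx: -gravity.y, dy: -gravity.x)
    case .LandscapeRight:
        return CGVector(dx: gravity.y, dy: gravity.x)
    case .PortraitUpsideDown:
        return CGVector(dx: -gravity.x, dy: gravity.y)
    default: // .Portrait
        return CGVector(dx: gravity.x, dy: -gravity.y)
    }
}
```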
If you launch the app now, you’ll see something like this.

The shapes will be moving around, but you can’t really see what is going on! In
WWDC session 229, Apple revealed a way to visually debug the effects being
applied by an animator. All you need to do is add a bridging header (if you are
writing your project in Swift), then add the following code.

@import UIKit;

#if DEBUG

@interface UIDynamicAnimator (AAPLDebugInterfaceOnly)

/// Use this property for debug purposes when testing.
@property (nonatomic, getter=isDebugEnabled) BOOL debugEnabled;

@end

#endif

This exposes some private API on UIDynamicAnimator which turns on debug mode.
This lets you see the forces that are being applied to your views. Back in your
ViewController class, you can now set the debugEnabled property to true on the
animator.

animator.debugEnabled = true // Private API. See the bridging header.

Now, when the app launches, you’ll be able to see the forces that are being applied
from the UIFieldBehavior.

You can also see bounding boxes around the view’s collision bounds, and around
the ellipse and square too! There are a couple of other properties you can add,
which are not in the API but are available in lldb. These are debugInterval and
debugAnimationSpeed. They should provide additional help when trying to debug
your UIKit Dynamics animations.

We can see the field is working and applying forces to our views. If we want to
tweak the properties of the field, we would usually have to set some numbers on
the object, then relaunch the app to see the effect the changes had. For this type of
work, it’s often a lot easier to add some controls so you can do this in real time!
Open up interface builder and add three UISlider controls. The first will control the
strength, the second the smoothness, and the last, the speed. The strength slider
should scale from 0–25, and the others, 0 to 1.

Once you’ve set them up in Interface Builder, drag their value changed actions into
the ViewController class, and update each property accordingly.

@IBAction func smoothnessValueChanged(sender: UISlider) {
    noiseField.smoothness = CGFloat(sender.value)
}

@IBAction func speedValueChanged(sender: UISlider) {
    noiseField.animationSpeed = CGFloat(sender.value)
}

@IBAction func strengthValueChanged(sender: UISlider) {
    noiseField.strength = CGFloat(sender.value)
}

Now, when the app runs, you should be able to control the three properties and see
what effects the different combinations have.
Hopefully this has given you a good overview of how to work with, and debug, the
new UIFieldBehavior and non-rectangular collision bounds APIs in UIKit Dynamics.
I’d recommend using the app we have built on a real device, otherwise you won’t get
the full effect of the motion sensors!

Further Reading
For more information on the new UIKit Dynamics features, take a look at the first
half of WWDC session 229, What’s New in UIKit Dynamics and Visual Effects. Don’t
forget, if you want to try out the project we created and described in this article, you
can find it over at GitHub.

Day 10 :: MapKit Transit ETA
Requests
Each iteration of MapKit brings more and more features for developers, and the iOS
9 update was no exception. In this post we will look at some of the new MapKit
APIs, and use them in an app in which we incorporate the new transit ETA
functionality.

Notable New API


MapKit View Improvements
You can now specify more advanced callout view layouts for the annotations that
appear on your maps. Annotation callouts now support the following customisable
properties:

• Title
• Subtitle
• Right Accessory View
• Left Accessory View
• Detail Callout Accessory View
Detail Callout Accessory View is new in iOS 9, and allows you to specify the detail
accessory view to be used in the standard callout. This view has full auto layout and
constraints support, and is a great way to customise your existing callouts.
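For example, here is a sketch of attaching a multi-line label as the detail accessory (the reuse identifier and label text are our own placeholder values):

```swift
import MapKit
import UIKit

// In an MKMapViewDelegate, attach an Auto Layout-sized detail view to
// the standard callout via detailCalloutAccessoryView.
func mapView(mapView: MKMapView,
    viewForAnnotation annotation: MKAnnotation) -> MKAnnotationView? {
    let pin = MKPinAnnotationView(annotation: annotation,
        reuseIdentifier: "pin")
    pin.canShowCallout = true

    let detailLabel = UILabel()
    detailLabel.numberOfLines = 0
    detailLabel.text = "Open 9am - 5pm"
    pin.detailCalloutAccessoryView = detailLabel
    return pin
}
```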

There are also a few new, self-explanatory properties that have been added to
MKMapView:

• showsTraffic
• showsScale
• showsCompass
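These are simple boolean switches, so enabling them is a one-liner each, e.g.:

```swift
import MapKit

let mapView = MKMapView()
mapView.showsTraffic = true
mapView.showsScale = true
mapView.showsCompass = true
```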

Transit Improvements
In iOS 9, Apple have introduced a new MKDirectionsTransportType: Transit.
Currently, this is only available for ETA requests, and can’t be used to get a full set
of directions. When you request an ETA with the
calculateETAWithCompletionHandler function on MKDirections, you are given an
MKETAResponse object in the completion handler, which contains information such
as the expected travel time, distance, expected arrival date, and expected departure
date.

Building a Sample App


In order to see how these new pieces of API fit together, and to try out the new
transit ETA requests, we are going to build the following app, which shows transit
information from a tapped location to various landmarks in London.

The first step is to set up your storyboard with an MKMapView, a UITableView, and
the various constraints necessary to position the map view in the top half of the
view, and the table view in the bottom half.

Once that is done, add a prototype table view cell and add the necessary elements.
We won’t go into depth about how the UI is set up because that isn’t the focus of
this piece, but ensure that the ViewController class acts as the
UITableViewDataSource of the table view, and the MKMapViewDelegate of the map
view. Once your UI is set up, you should have something that looks similar to this in
your storyboard.

You’ll also need a custom class for the table view cell. For now, it should just be a
simple class with outlets for the labels that exist inside of the table view cell in the
storyboard.

class DestinationTableViewCell: UITableViewCell {

    @IBOutlet weak var nameLabel: UILabel!
    @IBOutlet weak var etaLabel: UILabel!
    @IBOutlet weak var departureTimeLabel: UILabel!
    @IBOutlet weak var arrivalTimeLabel: UILabel!
    @IBOutlet weak var distanceLabel: UILabel!
}

Now that the storyboard setup is out of the way, it’s time to start adding pins to the
map. To do that, we will need some destinations. Create a ‘Destination’ class which
we can use to store information about these locations.

class Destination {

    let coordinate: CLLocationCoordinate2D
    private var addressDictionary: [String : AnyObject]
    let name: String

    init(withName placeName: String,
         latitude: CLLocationDegrees,
         longitude: CLLocationDegrees,
         address: [String : AnyObject]) {
        name = placeName
        coordinate = CLLocationCoordinate2D(latitude: latitude,
            longitude: longitude)
        addressDictionary = address
    }
}
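The ETA code later in this chapter reads a mapItem property from Destination, which isn't shown above. A sketch of what it might look like (this extension is our own assumption, placed in the same file so it can read the private addressDictionary):

```swift
import MapKit

extension Destination {
    // Builds an MKMapItem from the stored coordinate and address, for
    // use as the destination of an MKDirectionsRequest.
    var mapItem: MKMapItem {
        let placemark = MKPlacemark(coordinate: coordinate,
            addressDictionary: addressDictionary)
        let item = MKMapItem(placemark: placemark)
        item.name = name
        return item
    }
}
```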

We can then easily create the locations like so:

let stPauls = Destination(
    withName: "St Paul's Cathedral",
    latitude: 51.5138244,
    longitude: -0.0983483,
    address: [
        CNPostalAddressStreetKey: "St. Paul's Churchyard",
        CNPostalAddressCityKey: "London",
        CNPostalAddressPostalCodeKey: "EC4M 8AD",
        CNPostalAddressCountryKey: "England"])

We can then create several of these and store them in an array so that we can
display them on the map when the view loads.

In ViewController’s viewDidLoad() method, add the following code to add all of
these destinations to the map.

for destination in destinations {
    let annotation = MKPointAnnotation()
    annotation.coordinate = destination.coordinate
    mapView.addAnnotation(annotation)
}

This will display them on the map. You’ll also need to set the initial map region in
viewDidLoad() so that the map starts in the correct position.

mapView.region = MKCoordinateRegion(
center: CLLocationCoordinate2D(
latitude: CLLocationDegrees(51.5074157),
longitude: CLLocationDegrees(-0.1201011)),
span: MKCoordinateSpan(
latitudeDelta: CLLocationDegrees(0.025),
longitudeDelta: CLLocationDegrees(0.025)))

Next up, we have to display the destinations in the table view.

func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return destinations.count
}

func tableView(tableView: UITableView, cellForRowAtIndexPath
    indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("destinationCell")
        as! DestinationTableViewCell
    cell.destination = destinations[indexPath.row]
    return cell
}

Now run the app, and you should see that the specified locations appear on the
map, and that the table view shows the names of the locations.

That’s great, but we can’t calculate transit directions yet, because we don’t have
anywhere to start from! We could use the user’s location, but ideally we want to get
transit directions over a realistic distance. So instead, we can detect when the user
taps on the map view, and use that as the start point.

To do this we will need to add a tap gesture recogniser to the map view.

let tap = UITapGestureRecognizer(target: self, action: "handleTap:")
mapView.addGestureRecognizer(tap)

Then we can create a handleTap function which translates the tap into a map view
coordinate.

let point = gestureRecognizer.locationInView(mapView)
userCoordinate = mapView.convertPoint(point, toCoordinateFromView: mapView)

Once we have the coordinate we store it for later use, and then add an annotation
showing the user’s location, after removing any existing annotation if there is one.

if userAnnotation != nil {
mapView.removeAnnotation(userAnnotation!)
}

userAnnotation = MKPointAnnotation()
userAnnotation!.coordinate = userCoordinate!
mapView.addAnnotation(userAnnotation!)

Finally, we have to set the location on the table view cells, as they need to update
their ETA information based on the user’s new location. First, we do this for the
visible cells.

for cell in self.tableView.visibleCells as! [DestinationTableViewCell] {
    cell.userCoordinate = userCoordinate
}

But we also need to update the tableView:cellForRowAtIndexPath function to set
the user coordinate on the cell, in case any cells are reloaded. Add the following line
before the cell is returned.

cell.userCoordinate = userCoordinate

Whenever the user coordinate is set on the table view cell, we want to trigger an
update. We can do this using a didSet observer on the userCoordinate property. The
first thing we want to do is clear all of the text from the labels, as the previously
displayed ETA information is now irrelevant.

var userCoordinate: CLLocationCoordinate2D? {
    didSet {
        etaLabel.text = ""
        departureTimeLabel.text = "Departure Time:"
        arrivalTimeLabel.text = "Arrival Time:"
        distanceLabel.text = "Distance:"

        guard let coordinate = userCoordinate else { return }

Now that we know that there is a user coordinate and we have a start location, we
can create a MKDirectionsRequest object which we can use to calculate the ETA
information. We set the source to be a MKMapItem initialised with the coordinate,
and the destination to be the mapItem property on our Destination object. We
specify that we want to request transit directions with the transportType property.

Then finally, we call calculateETAWithCompletionHandler to request the ETA
information, and update the labels based on that.

let request = MKDirectionsRequest()
request.source = MKMapItem(placemark: MKPlacemark(coordinate: coordinate,
    addressDictionary: nil))
request.destination = destination!.mapItem
request.transportType = .Transit

let directions = MKDirections(request: request)
directions.calculateETAWithCompletionHandler { response, error in
    if let err = error {
        self.etaLabel.text = err.userInfo["NSLocalizedFailureReason"]
            as? String
        return
    }

    self.etaLabel.text = "\(response!.expectedTravelTime / 60) minutes travel time"
    self.departureTimeLabel.text = "Departure Time: \(response!.expectedDepartureDate)"
    self.arrivalTimeLabel.text = "Arrival Time: \(response!.expectedArrivalDate)"
    self.distanceLabel.text = "Distance: \(response!.distance) meters"
}

Now, if you run the app, you should see something like this.

Whenever you tap on the map, the ETA information in the cells will update with the
fetched information.

There’s one final thing left to do. Hook up the “View Route” button in each cell to an
IBAction in the custom cell class and add the following code.

guard let mapDestination = destination else { return }

let launchOptions =
    [MKLaunchOptionsDirectionsModeKey: MKLaunchOptionsDirectionsModeTransit]
mapDestination.mapItem.openInMapsWithLaunchOptions(launchOptions)

This will open the destination in the maps app, and display a transit route from the
user’s current location.

Customising the Pin Colors


The app is now fully functional, but it’s a bit difficult to tell which pin is the user’s
and which pins display our destinations. In order to customise the pin appearance,
we have to implement the MKMapViewDelegate protocol and set ViewController to
be the map view’s delegate. We can then add the following code:

func mapView(mapView: MKMapView, viewForAnnotation annotation: MKAnnotation)
    -> MKAnnotationView? {
    let pin = MKPinAnnotationView(annotation: annotation,
        reuseIdentifier: "pin")
    pin.pinTintColor = annotation === userAnnotation ?
        UIColor.redColor() : UIColor.blueColor()
    return pin
}

pinTintColor is a new property introduced in iOS 9 which allows you to specify the
tint color of the top of the pin in the annotation. As you can see above, if the
annotation passed to the mapView:viewForAnnotation is equal to the
userAnnotation, then we make the tint color red, and otherwise make it blue. This
allows us to distinguish between the user location and the destinations on the map.

Further Reading
For more information on the new MapKit features discussed in this post, take a
look at WWDC session 206, What’s New in MapKit.

Day 11 :: GameplayKit -
Pathfinding
In previous releases of iOS, Apple have put a lot of emphasis on making it easier for
developers to create games for their platforms. In iOS 7, they introduced SpriteKit,
which is a 2D graphics and animation library that you can use to create interactive
games for the iOS and OS X platforms. SceneKit has been available for the Mac
since 2012, but at WWDC 2014, they introduced it to iOS and added a lot of new
features, such as particle effects and physics simulation.

Having worked with both in the past, I can personally testify that these frameworks
are great. They are both really helpful when it comes to displaying the visual
elements of your game. Having very little experience of game development, the one
thing I always seemed to struggle with was how to architect games, and how to
model the entities and the relationships and interactions between them.

With the announcement of iOS 9, Apple have gone some way to try and help
developers with this. They introduced a new framework, GameplayKit, which is a
collection of tools and technologies for building games in iOS and OS X.

"Unlike high-level game engines such as SpriteKit and SceneKit, GameplayKit is not
involved in animating and rendering visual content. Instead, you use GameplayKit
to develop your gameplay mechanics and to design modular, scalable game
architecture with minimal effort." - Apple, "About GameplayKit" Prerelease Docs

The new framework includes several features:

• Randomization
• Entities and Components
• State Machines
• Pathfinding
• Agents, Goals & Behaviors
• Rule Systems
This post specifically looks at the new pathfinding functionality in the GameplayKit
APIs, but future posts will explore some of the various other areas too!

Building a Pathfinding Example


We are now going to build a simple SpriteKit example which demonstrates the new
pathfinding APIs available in GameplayKit.
First, set up a SpriteKit game project in Xcode.

This should give us a really basic template for the game we want to create. That’s
fine for now. The next step is to open up GameScene.sks and add some nodes. First,
add a node that will represent the player that we want to move through the maze.

Note how, in the property inspector on the right-hand side of Xcode, the name of
the node is set to “player”. We will use this later to access the node.

Now we need to add nodes that the player has to avoid when moving. Otherwise
this pathfinding example will be very straightforward!

Drag some nodes into the scene using the scene editor in Xcode, and you should
see something similar to the above image. You can make your maze simpler, or
more complicated. The important part is that there are at least a few nodes that the
player has to avoid when making its way to a particular point in the scene. You don’t
need to set any special properties on these nodes. It’s fine to leave them as basic
shape nodes.

The next step is to open GameScene.swift and override the touchesBegan method.
We will use the location that the touch occurred as the end point for our path.

// Whenever a tap is detected, move the player to that position.
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    for touch: AnyObject in touches {
        let location = (touch as! UITouch).locationInNode(self)
        self.movePlayerToLocation(location)
    }
}

Once we have detected the user’s tap, we can build a path from the player node’s
current location to the tapped point, avoiding any obstacles along the way. For this,
we will create a new function called movePlayerToLocation.

/// Moves the player sprite through the scene to the given point, avoiding obstacles on the way.
func movePlayerToLocation(location: CGPoint) {

    // Ensure the player doesn't move when they are already moving.
    guard !moving else { return }
    moving = true
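Note that moving is a property on the scene that isn’t shown elsewhere in this walkthrough; a minimal sketch of the assumed declaration:

```swift
// Assumed property on GameScene: tracks whether a move action is
// currently running, so taps are ignored mid-move.
var moving = false
```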

The first step is to get the player node. We can do this with the childNodeWithName
function, passing the name we gave the player node earlier in the scene editor.

// Find the player in the scene.
let player = self.childNodeWithName("player")

Once we have the player, we have to set up an array containing every other node in
the scene. This will give us the obstacles that the player node will ultimately avoid.

// Create an array of obstacles, which is every child node apart from the player node.
let obstacles = SKNode.obstaclesFromNodeBounds(self.children.filter({ (element) -> Bool in
    return element != player
}))

Once we have the obstacles, we can now use them to calculate the path from the
player’s current position to the position passed to this function.

// Assemble a graph based on the obstacles. Provide a buffer radius so there is a bit
// of space between the center of the player node and the edges of the obstacles.
let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 10)

// Create a node for the user's current position, and the user's destination.
let startNode = GKGraphNode2D(point: float2(Float(player!.position.x), Float(player!.position.y)))
let endNode = GKGraphNode2D(point: float2(Float(location.x), Float(location.y)))

// Connect the two nodes just created to the graph.
graph.connectNodeUsingObstacles(startNode)
graph.connectNodeUsingObstacles(endNode)

// Find a path from the start node to the end node using the graph.
let path: [GKGraphNode] = graph.findPathFromNode(startNode, toNode: endNode)

// If the path has 0 nodes, then a path could not be found, so return.
guard path.count > 0 else { moving = false; return }

Now that the path has been derived, avoiding any obstacles along the way, the
player node can be directed along it. It’s possible to create an action using
SKAction.followPath(_:speed:), but in this case I’ve chosen to show each leg
of the path as a distinct step, so the results of the pathfinding algorithm are
more obvious. In a real game though, you will probably want to use
SKAction.followPath.
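As a rough sketch of that smoother alternative (following on from the path and player values above; the speed of 200 points per second is an arbitrary choice):

```swift
// Build a CGPath through the calculated waypoints.
let bezierPath = UIBezierPath()
for (index, node) in path.enumerate() {
    guard let node2d = node as? GKGraphNode2D else { continue }
    let point = CGPoint(x: CGFloat(node2d.position.x), y: CGFloat(node2d.position.y))
    if index == 0 {
        bezierPath.moveToPoint(point)
    } else {
        bezierPath.addLineToPoint(point)
    }
}

// Follow the whole path in one smooth action instead of discrete steps.
let follow = SKAction.followPath(bezierPath.CGPath, asOffset: false,
                                 orientToPath: true, speed: 200)
player?.runAction(follow)
```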

The following code creates a moveTo SKAction for each point in the path, then
assembles them into a sequence and runs the action on the player node.

// Create an array of actions that the player node can use to follow the path.
var actions = [SKAction]()

for node: GKGraphNode in path {
    if let point2d = node as? GKGraphNode2D {
        let point = CGPoint(x: CGFloat(point2d.position.x), y: CGFloat(point2d.position.y))
        let action = SKAction.moveTo(point, duration: 1.0)
        actions.append(action)
    }
}

// Convert those actions into a sequence action, then run it on the player node.
let sequence = SKAction.sequence(actions)
player?.runAction(sequence, completion: { () -> Void in
    // When the action completes, allow the player to move again.
    self.moving = false
})
}

Now, when you tap anywhere in the scene, your player node should move to that
point while avoiding all of the other nodes in the scene! If you tap in the center of a
node, or somewhere that the player node can’t possibly reach, then the node won’t
move at all.

The Result
The video below shows the result. Notice how the player moves around the
obstacles and makes its way from its current position to the far side of the scene.

This was a very brief overview of the new pathfinding features. Learning how to
integrate them with the rest of the functionality in GameplayKit will be key when
developing a game, and that’s something that we will potentially discuss in a later
post!

Further Reading
For more information on the new GameplayKit features discussed in this post,
take a look at WWDC session 608, Introducing GameplayKit.

If you have any questions or comments then we would love to hear your feedback.
Send me a tweet @christhegrant or you can follow @shinobicontrols to get the
latest news and updates to the iOS9 Day-by-Day series!

Day 12 :: GameplayKit - Behaviors
& Goals
In the previous post, we looked at how you can use GameplayKit’s pathfinding
APIs to calculate a path between two locations in your scene, while avoiding
designated obstacles.

In this post, we will take a different approach to moving nodes through our scene.
GameplayKit introduces the concepts of behaviors and goals, which provide
you with a way to position the nodes in your scene based on constraints and
desired achievements. Let’s take a look at an example of how this works before
looking into it in more detail.

In this example (which we will build shortly), you will see a yellow box which
represents the user. This box is controlled by the user moving their finger around
the scene. Pretty basic stuff! The interesting part is the missile, which seeks the
player, and will always try to reach the center point of the player node.

This doesn’t use any physics or custom code, and is solely controlled by a simple
behavior composed of a single seek goal.

Now that we know a bit about how behaviors and goals work, let’s take a look at
how to create this demo app.

Creating a Behavior and Goal
Example
Let’s walk through how to put this example together.

Set up the default SpriteKit template and open up the GameScene.swift file.
The first thing we need to do is set up our entities.

let player: Player = Player()
var missile: Missile?

A GKEntity is a general-purpose object to which you can add components that
provide functionality. In our case, we have two entities: one which represents the
player, and another that represents the missile. We will look into how these are set
up in more detail soon.

In addition to our entities, we also need to set up an array of component systems. A
component system is a homogeneous collection of components that will be updated
at the same time. We can use a lazy var here because we only want to initialize it
once, when we first use it. We have a component for targeting (added to the missile
node so it can target the player node) and a component for rendering (so we can
render both entities in the scene). The order in which we return the components
defines the order in which they will run. So we return targeting then rendering,
because we want to update the node positions based on the targeting component,
and then render the results.

lazy var componentSystems: [GKComponentSystem] = {
    let targetingSystem = GKComponentSystem(componentClass: TargetingComponent.self)
    let renderSystem = GKComponentSystem(componentClass: RenderComponent.self)
    return [targetingSystem, renderSystem]
}()

But what is a GameplayKit component? We have discussed the effects that it has on
the entities in our scene, but not what it actually does. A GKComponent
encapsulates the data and logic for a particular aspect of an entity.
Components are associated with an entity, and entities may have several
components. They provide reusable pieces of behaviour that can be added to
entities. By using a composition pattern, this helps to prevent the large
inheritance trees that can become problematic in big games.

Both of the entities in this scene have render components, and the missile entity
has an additional targeting component.
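To make the pattern concrete, here is a stripped-down sketch of the idea in plain Swift (hypothetical stand-ins, not the real GKEntity and GKComponent API):

```swift
// A component holds one isolated piece of data and logic.
class Component {
    func updateWithDeltaTime(seconds: Double) {}
}

// An entity is little more than a bag of components.
class Entity {
    var components = [Component]()
}

// Behaviour arrives by composition rather than subclassing: the missile is an
// entity holding a render component plus a targeting component, while the
// player holds just a render component.
let entity = Entity()
entity.components.append(Component())
```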

Setting up the entities

THE PLAYER ENTITY


The following is the Player class. It’s a simple NodeEntity subclass which has only
one component. Notice how it also has a GKAgent2D agent property.
A GKAgent2D is a subclass of GKAgent, which in turn is a subclass of GKComponent.
A GKAgent is a point mass whose local coordinate system is aligned to its velocity;
GKAgent2D is a two-dimensional specialisation of this class:

class Player: NodeEntity, GKAgentDelegate {

    let agent: GKAgent2D = GKAgent2D()

In this case, the agent is dumb. It doesn’t actually do anything or affect the position
of the node unless it is changed manually by user interaction. We need an agent
though, because the targeting component has to have an agent to use as its target.

override init() {
super.init()

In the init function, we add the RenderComponent, and add a PlayerNode to the
render component’s node. We won’t go into detail on the PlayerNode. It’s boring
and just draws a yellow box!

let renderComponent = RenderComponent(entity: self)
renderComponent.node.addChild(PlayerNode())
addComponent(renderComponent)

We also have to set the delegate of the agent to self, and actually add the agent to
the entity.

agent.delegate = self
addComponent(agent)
}

We also need to implement the GKAgentDelegate functions, so that if the agent
updates, the node position is updated, and if the node position is updated manually,
the agent position is updated before calculations take place.

func agentDidUpdate(agent: GKAgent) {
    if let agent2d = agent as? GKAgent2D {
        node.position = CGPoint(x: CGFloat(agent2d.position.x), y: CGFloat(agent2d.position.y))
    }
}

func agentWillUpdate(agent: GKAgent) {
    if let agent2d = agent as? GKAgent2D {
        agent2d.position = float2(Float(node.position.x), Float(node.position.y))
    }
}
}

THE MISSILE ENTITY


The Missile entity is slightly different from the Player. In the constructor, we pass
a target agent which the missile will seek.

class Missile: NodeEntity, GKAgentDelegate {

    let missileNode = MissileNode()

    required init(withTargetAgent targetAgent: GKAgent2D) {

        super.init()

        let renderComponent = RenderComponent(entity: self)
        renderComponent.node.addChild(missileNode)
        addComponent(renderComponent)

        let targetingComponent = TargetingComponent(withTargetAgent: targetAgent)
        targetingComponent.delegate = self
        addComponent(targetingComponent)
    }

You may have noticed that there’s no dumb GKAgent2D in this class. That’s
because we use the TargetingComponent to move the entity around the scene. We
will discuss the TargetingComponent below. For now, all you need to know is that
we pass the targetAgent from the constructor to the targeting component, and that
will trigger the delegate methods.

In order for this to happen, we need to implement the agentDidUpdate and
agentWillUpdate delegate methods again. Note how these differ from those in
the Player: in this case we also have to take the zRotation into consideration in both
methods.

func agentDidUpdate(agent: GKAgent) {
    if let agent2d = agent as? GKAgent2D {
        node.position = CGPoint(x: CGFloat(agent2d.position.x), y: CGFloat(agent2d.position.y))
        node.zRotation = CGFloat(agent2d.rotation)
    }
}

func agentWillUpdate(agent: GKAgent) {
    if let agent2d = agent as? GKAgent2D {
        agent2d.position = float2(Float(node.position.x), Float(node.position.y))
        agent2d.rotation = Float(node.zRotation)
    }
}

The Targeting Component


All of the classes so far are relatively lightweight. You’d be forgiven for thinking that
the targeting component would be full of logic and code to make things happen in
our game. Fortunately, thanks to GameplayKit, this is not the case! The
entire class is only 20 lines long.

class TargetingComponent: GKAgent2D {

    let target: GKAgent2D

    required init(withTargetAgent targetAgent: GKAgent2D) {

        target = targetAgent

        super.init()

        let seek = GKGoal(toSeekAgent: targetAgent)
        self.behavior = GKBehavior(goals: [seek], andWeights: [1])

        self.maxSpeed = 4000
        self.maxAcceleration = 4000
        self.mass = 0.4
    }
}

The code is so simple that there’s not much to explain, but you can see that the
class is a subclass of GKAgent2D, and creates a GKGoal with the toSeekAgent
constructor. This goal is then used to create a GKBehavior object. If you had
multiple goals here, such as seeking one target while avoiding another, you
could pass multiple goals into the constructor. You can also specify weights for
each goal, so if avoiding one agent is more important than seeking another, you can
represent that here.
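For example, a behaviour that seeks the target while steering clear of a second agent (enemyAgent is a hypothetical name, not part of the sample project), with avoidance weighted twice as heavily as seeking, might look like this:

```swift
// Seek the target, but treat avoiding the enemy as twice as important.
let seek = GKGoal(toSeekAgent: targetAgent)
let avoid = GKGoal(toAvoidAgents: [enemyAgent], maxPredictionTime: 1.0)
self.behavior = GKBehavior(goals: [seek, avoid], andWeights: [1, 2])
```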

We also set a few values at the bottom: maxSpeed, maxAcceleration and mass.
These units are dimensionless but related, and will depend on your exact scenario.
It took me a while to get these right. At first I thought nothing was happening and
spent ages trying to find where I’d gone wrong. It turned out that these values were
all set to their defaults, and my missile node was moving, but really, really
slowly!

The Missile Node


Now that the Missile entity is set up, we need to create a node to represent it
visually in the scene. This node is just an SKNode subclass with a single function.

func setupEmitters(withTargetScene scene: SKScene) {
    let smoke = NSKeyedUnarchiver.unarchiveObjectWithFile(
        NSBundle.mainBundle().pathForResource("MissileSmoke", ofType: "sks")!) as! SKEmitterNode
    smoke.targetNode = scene
    self.addChild(smoke)

    let fire = NSKeyedUnarchiver.unarchiveObjectWithFile(
        NSBundle.mainBundle().pathForResource("MissileFire", ofType: "sks")!) as! SKEmitterNode
    fire.targetNode = scene
    self.addChild(fire)
}

As you can see, the setupEmitters function takes the scene object, creates two
SKEmitterNode instances, adds them to the node itself, and sets the target node of
the emitters to the scene. If we didn’t set the target node, the particles emitted
would just stay with the missile and appear not to move through the scene! These
two emitters are set up as .sks files in the project (MissileFire.sks and
MissileSmoke.sks, if you want to take a look). We won’t go into detail here.

Combining the Parts


Now that our nodes, entities and components are all set up, let’s go back to
GameScene.swift and put it all together! We will need to override didMoveToView.

override func didMoveToView(view: SKView) {

    super.didMoveToView(view)

We have already set up the player during initialisation, so we can simply add
player.node to the scene.

self.addChild(player.node)

For the missile, we have to set it up in this method, as we have to set the player’s
agent as its target.

missile = Missile(withTargetAgent: player.agent)

Then we also need to pass the scene to the setupEmitters function on the missile
so that the emitters leave a trail, rather than moving with the missile itself, as
discussed previously.

missile!.setupEmitters(withTargetScene: self)
self.addChild(missile!.node)

Finally, once both of our entities are set up, we can add their components to our
component systems.

for componentSystem in self.componentSystems {
    componentSystem.addComponentWithEntity(player)
    componentSystem.addComponentWithEntity(missile!)
}

Now, in the update function, all we have to do is update every component system
in the componentSystems array with the delta time. This causes the behaviors to
invalidate and recalculate, and then triggers a render!

override func update(currentTime: NSTimeInterval) {

    // Calculate the amount of time since `update` was last called.
    let deltaTime = currentTime - lastUpdateTimeInterval

    for componentSystem in componentSystems {
        componentSystem.updateWithDeltaTime(deltaTime)
    }

    lastUpdateTimeInterval = currentTime
}

And that’s all there is to it! Now, if you run the game, you should see a missile
streaking towards your player. Unfortunately we haven’t added collision detection
and explosions, but why not build an explosion component yourself as an additional
learning exercise?

Further Reading
For more information on the new GameplayKit features discussed in this post,
take a look at WWDC session 608, Introducing GameplayKit.

Day 13 :: CloudKit Web Services
CloudKit JS was introduced at WWDC 2015, and allows developers to build a web
interface for users to access the same containers as an existing CloudKit app. One
of the major limitations of CloudKit was that the data was only accessible on iOS
and OS X. Hopefully the removal of this limitation will mean that more and more
developers can start to use CloudKit in their applications.

In this post, we will take a look over the features of CloudKit JS and build a sample
notes application that allows users to store their important notes in the cloud!

CloudKit JS
CloudKit JS Supports the following browsers:

• Safari
• Firefox
• Chrome
• IE
• Edge
Interestingly, it also supports Node.js, meaning you can run requests from your own
middle-tier server and forward the results on to your API.

Creating a CloudKit JS Application


In order to demonstrate the capabilities of CloudKit JS, I have built a sample
application that allows you to store shared notes in CloudKit.

Let’s walk through how this application was created. The first step when creating
any CloudKit application, whether it is for iOS or for JS, is to open the iCloud
developer dashboard. This lets you configure the details of your application, set up
record types, establish security roles, enter data, and much more. You can find it at
https://icloud.developer.apple.com/dashboard.

Set up a new application called CloudNotes, and just leave the settings as default
for now.

Once the application is set up, we need to specify a new record type. Our
application is just going to store simple notes, with a title and some content. Select
the ‘Record Types’ option under ‘Schema’ on the left hand side. The ‘Users’ record
type should already exist. That is created by default.

Click add, and create a new record type named ‘CloudNote’. This is the record that
we will use to store our data.

You will now be given the option to add fields. Add title and content fields (both
Strings) to the CloudNote record. That’s the only structure we need for now.
Next, let’s create a record so that we have something to fetch and display on our
webpage. Select “Default Zone” from the left-hand menu, in the “Public Data”
section. All of the data we are using in this application will be public. In a real
application you would probably want to store data in the private zone on a per-user
basis, but to keep things simple we aren’t addressing security and permissions in
this tutorial.

Click add and you will be given the option to enter the title and content for a new
note. Then click save and your new note will be persisted into CloudKit!

Now that we have some data in our CloudKit instance, let’s try to display it with
some JavaScript.

JS APP STRUCTURE
Our app is only going to have one page (index.html), which will use an external
JavaScript file to request and store the data from CloudKit. To help display
the data, we are going to be using Knockout JS. Knockout will simplify things
for us a little by allowing us to declare bindings between the UI and the data set
that is pulled from CloudKit. It will ensure that the UI automatically refreshes when
the data model’s state changes. We will also import styles from Bootstrap so we
don’t have to write any of our own CSS.

Here’s the result of these imports, along with the CloudKit import too.

<html>
<head>
    <title>iOS9 Day by Day - CloudKit Web Services Example</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.3.0/knockout-min.js"></script>
    <script type="text/javascript" src="cloudNotes.js"></script>
    <script src="https://cdn.apple-cloudkit.com/ck/1/cloudkit.js" async></script>
</head>

Let’s take a look into cloudNotes.js and see how we can request the data from
CloudKit.

Before we can request any data, we must wait for the CloudKit API to load. We do
this by placing our code in a window event listener, which listens for the
‘cloudkitloaded’ event.

window.addEventListener('cloudkitloaded', function() {

Once CloudKit has loaded, you need to configure it with your identifier, the
environment, and an API token.

CloudKit.configure({
    containers: [{
        containerIdentifier: 'iCloud.com.shinobicontrols.CloudNotes',
        apiToken: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        environment: 'development'
    }]
});

You’ll have to go back into the CloudKit Dashboard to generate an API token. You
can do this by selecting ‘API Tokens’ from the ‘Admin’ area of CloudKit Dashboard
and clicking add. Give your API token a name, then copy and paste it into the
configuration code seen above.

The code in my version of cloudNotes.js uses Knockout JS to create and bind the
model to the HTML. I have created a CloudNotesViewModel which is responsible for
managing the page. It contains an array of all of the notes, as well as functions to
save a new note, fetch notes from the server, display an authenticated state and
display an unauthenticated state.

Before the view model can call any of these functions however, it must set up
CloudKit authentication.

container.setUpAuth().then(function(userInfo) {
    // Either a sign-in or a sign-out button will be added to the DOM here.
    if(userInfo) {
        self.gotoAuthenticatedState(userInfo);
    } else {
        self.gotoUnauthenticatedState();
    }
    // Records are public, so we can fetch them regardless.
    self.fetchRecords();
});

When the promise is resolved, a sign-in or sign-out button will be added to the
DOM, depending on the login state of the user. You therefore need a div with the
id “apple-sign-in-button” on the page. The container.setUpAuth() function call
automatically modifies this div to contain the appropriate sign-in button.

FETCHING RECORDS
The fetchRecords function queries CloudKit for all of the records with the type
‘CloudNote’.

self.fetchRecords = function() {
    var query = { recordType: 'CloudNote' };

    // Execute the query.
    return publicDB.performQuery(query).then(function (response) {
        if(response.hasErrors) {
            console.error(response.errors[0]);
            return;
        }
        var records = response.records;
        var numberOfRecords = records.length;
        if (numberOfRecords === 0) {
            console.error('No matching items');
            return;
        }

        self.notes(records);
    });
};

You can see above how to set up a basic query based on the recordType, and then
execute it on the public database. Queries do not have to be executed on the
public database; they can be executed on private databases too, but all of our work
in this sample application is public.

Once the notes have been fetched, we store them in self.notes, which is a
Knockout observable. This means that the HTML will be regenerated using the note
template, and the fetched notes should appear on the page.

<div data-bind="foreach: notes">
    <div class="panel panel-default">
        <div class="panel-body">
            <h4><span data-bind="text: fields.title.value"></span></h4>
            <p><span data-bind="text: fields.content.value"></span></p>
        </div>
    </div>
</div>

The template iterates through notes with a foreach binding, and prints each note’s
fields.title.value and fields.content.value values in a panel.

After a user has signed in, they will see the ‘Add New Note’ panel. The saveNewNote
function must therefore be able to store new records into CloudKit.

if (self.newNoteTitle().length > 0 && self.newNoteContent().length > 0) {
    self.saveButtonEnabled(false);

    var record = {
        recordType: "CloudNote",
        fields: {
            title: {
                value: self.newNoteTitle()
            },
            content: {
                value: self.newNoteContent()
            }
        }
    };

In the first half of the function, we do some basic validation, and then create a new
record based on the data that is currently in the form.

Once we have set up a new record, we are ready to save it into CloudKit.

publicDB.saveRecord(record).then(
    function(response) {
        if (response.hasErrors) {
            console.error(response.errors[0]);
            self.saveButtonEnabled(true);
            return;
        }
        var createdRecord = response.records[0];
        self.notes.push(createdRecord);

        self.newNoteTitle("");
        self.newNoteContent("");
        self.saveButtonEnabled(true);
    }
);



The line publicDB.saveRecord(record) saves the newly created record into the
public database and returns a promise that resolves with the result of the save
operation. The created record is pushed into the array of existing records, so we
don’t need to refetch everything, and then the form is cleared and the save button
is enabled again.

iOS App
To demonstrate how data can be shared between iOS and web applications using
CloudKit, there’s also an iOS app included in the source files for this blog post.

To set this app up, I simply created a new master-detail application in Xcode.



To enable it to work with iCloud, select your project in Xcode’s file explorer, select
your target, and then select Capabilities. You should see the option to turn iCloud
on. Click the switch, and Xcode will communicate with the developer centre and
add the required entitlements to your application.

The configuration panel should now look like this.

You’re now ready to start using CloudKit in the iOS app. I won’t go into the details of
how it was implemented, as there’s already a comprehensive explanation of how to
use CloudKit on iOS in iOS8-day-by-day, but this is what the app should look like
when you select a note. The title should be the view controller title, and the content
should be displayed in the middle of the screen.
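As a rough sketch, the kind of fetch the iOS app performs looks something like this (iOS 9 era CloudKit API, error handling trimmed for brevity):

```swift
import CloudKit

// Query the public database for every CloudNote record.
let publicDB = CKContainer.defaultContainer().publicCloudDatabase
let query = CKQuery(recordType: "CloudNote", predicate: NSPredicate(value: true))

publicDB.performQuery(query, inZoneWithID: nil) { records, error in
    guard let records = records where error == nil else { return }
    // The title and content fields match those defined in CloudKit Dashboard.
    for record in records {
        print(record["title"], record["content"])
    }
}
```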



Conclusion
Hopefully this post has shown how simple it is to use the CloudKit JS API. I’m glad
that Apple now offers a web-based API for CloudKit, but I still have some
reservations, and I don’t think I’d personally use it if I were developing an
application. There are plenty of third-party cloud service providers out there with
more features and better documentation, as well as native SDKs for the other
mobile platforms. I also couldn’t get any CloudKit services working on the
simulator, no matter what I tried. That’s certain to be another barrier to
development if it isn’t fixed as soon as possible.

There are definitely use cases for CloudKit, and I’d encourage everyone to give it a
try. Just think twice before developing an application where you will potentially
need access to your data on other platforms in future.

Further Reading
For more information on the new CloudKit JS and Web Services features
discussed in this post, take a look at WWDC session 710, CloudKit JS and Web
Services.



The Conclusion

In its original web format, this iOS 9 Day by Day series has been a big success, and
we hope you have found it useful. We’ve already had some great feedback, and
nearly 40,000 users viewed at least one of these chapters online.

By far the most popular post so far was on the topic of Search APIs. There’s already
a lot of noise on the wider web discussing whether this is the second coming of
search engine optimization. What is clear, however, is that there’s a lot of interest
in how to make the most of this new addition.

iOS 9 Day by Day joins our two previous ebooks, which are worth a read if you
haven’t already. We’re looking forward to the exciting new features that may arrive
in iOS 10, and we hope we’ll welcome you back for our new series, iOS 10 Day by Day.
