Unit Testing

Roberto Casadei

2013-05-30

Notes taken from:

Effective Unit Testing: A guide for Java developers

The Art of Unit Testing: With examples in C#, 2e

Testing
Expressing and validating assumptions and intended
behavior of the code
Checking what code does against what it should do
Tests help us
catch mistakes
shape our design to actual use
avoid gold-plating by being explicit about what the required behavior is

The biggest value of writing a test lies not in the resulting test
but in what we learn from writing it


The value of having tests


First step: (automated) unit tests as a quality tool
Helps to catch mistakes
Safety net against regression (= units of work that once worked and now
don't)
failing the build process when a regression is found

Second step: unit tests as a design tool


Informs and guides the design of the code towards its actual purpose and use
From design-code-test to test-code-refactor (i.e. TDD), a.k.a. red-green-refactor

The quality of test code itself affects productivity


Test-Driven Development (TDD)


Direct results:
Usable code
Lean code, as production code only implements what's required by the
scenario it's used for

Sketching a scenario into executable code is a design activity


A failing test gives you a clear goal
Test code becomes a client for production code, expressing your needs
in the form of a concrete example
By writing only enough code to make the test pass, you keep your
design simple and fit-for-purpose


Behaviour-Driven Development (BDD)


Born as a correction of TDD vocabulary
The word "test" as a source of misunderstandings

Now commonly integrated with business analysis and
specification activities at the requirements level
Acceptance tests as examples that anyone can read


Not just tests but good tests (1)


Readability
Maintainability
Test-code organization and structure
Not just structure but a useful structure
Good mapping with your domain and your abstractions

What matters is whether the structure of your code helps you locate
the implementation of higher-level concepts quickly and reliably
So, pay attention to:
Relevant test classes for task at hand
Appropriate test methods for those classes
Lifecycle of objects in those methods


Not just tests but good tests (2)


It should be clear what your tests are actually testing
Do not blindly trust the names of the tests

The goal is not 100% coverage but testing the right things
A test that has never failed is of little value: it's probably not
testing anything
A test should have only one reason to fail
because we want to know why it failed


Not just tests but good tests (3)


Test isolation is important
Be extra careful when your tests depend on things such as:
Time, randomness, concurrency, infrastructure, pre-existing data, persistence,
networking

Examples of measures:
Test doubles
Keep test code and the resources it uses together
Making tests set up the context they need
Use an in-memory database for integration tests that require persistence

In order to rely on your tests, they need to be repeatable


Test Doubles


Test doubles
Def.: Objects to be substituted for the real implementation for
testing purposes
Replacing the code around what you want to test to gain full control of its
context/environment

Essential for good test automation


Allowing isolation of code under test
From code it interacts with, its collaborators, and dependencies in general

Speeding up test execution


Making random behavior deterministic
Simulating particular conditions that would be difficult to create
Observing state & interaction otherwise invisible


Kinds of test doubles


Stubs: unusually short things
Fake objects: do it without side effects
Test spies: reveal information that otherwise would be
hidden
Mocks: test spies configured to behave in a certain way
under certain circumstances


Stubs
(noun) def. a truncated or unusually short thing
A stub is a simple implementation that stands in for a real implementation
e.g. An object with methods that do nothing or return a default
value

Best suited for cutting off irrelevant collaborators
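For example, a hand-written stub for a logging collaborator might look like this (a sketch; the Logger interface and names are illustrative, not from the book):

interface Logger {
    void log(String message);
}

// Stub: satisfies the dependency but intentionally does nothing
class LoggerStub implements Logger {
    public void log(String message) {
        // no-op: logging is irrelevant to the behavior under test
    }
}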


Fake objects
Replicating the behavior of the real thing without the side
effects and other consequences of using the real thing
Fast alternative for situations where the real thing is
difficult or cumbersome to use
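For example, a fake repository that keeps data in memory instead of hitting a database (a sketch; the types echo the UserRepository example used for mocks later in these notes):

import java.util.HashMap;
import java.util.Map;

class User {
    private final int id;
    User(int id) { this.id = id; }
    int getId() { return id; }
}

interface UserRepository {
    void save(User user);
    User findById(int id);
}

// Fake: genuinely works (stores and retrieves users) but avoids the real
// database and its side effects
class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, User> users = new HashMap<Integer, User>();
    public void save(User user) { users.put(user.getId(), user); }
    public User findById(int id) { return users.get(id); }
}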


Test spies
Built to record what happens when using them
E.g. they are useful when none of the objects passed in
as arguments to certain operations can reveal through
their API what you want to know
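A hand-rolled spy might look like this (a sketch; the Mailer interface and names are illustrative):

import java.util.ArrayList;
import java.util.List;

interface Mailer {
    void send(String to, String body);
}

// Spy: a real-looking collaborator that records what happens to it
class MailerSpy implements Mailer {
    final List<String> recipients = new ArrayList<String>();
    public void send(String to, String body) {
        recipients.add(to); // record the interaction for later inspection
    }
}

// In a test: exercise the SUT with the spy, then assert on spy.recipients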


Mock objects
Mocks are test spies that specify the expected interactions
together with the behavior that results from them
E.g. A mock for UserRepository interface might be told to
return null when findById() is invoked with param 123, and
return a given User instance when called with 124
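The same example expressed with a mocking library such as Mockito (a sketch; Mockito is not covered in these notes, and someUser stands for a hypothetical User instance):

import static org.mockito.Mockito.*;

UserRepository repo = mock(UserRepository.class);
when(repo.findById(123)).thenReturn(null);     // told to return null for 123
when(repo.findById(124)).thenReturn(someUser); // and a given User instance for 124

// being test spies too, mocks can later verify the interaction:
verify(repo).findById(123);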


Choosing the right test double


As usual, it depends
Rule of thumb: stub queries; mock actions


Structuring unit tests


Arrange-act-assert
Arrange your objects and collaborators
Make them work (trigger an action)
Make assertions on the outcome

BDD evolves it in given-when-then


Given a context
When something happens
Then we expect a certain outcome
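A sketch of the arrange-act-assert rhythm in JUnit (the Calculator class is illustrative, not from the notes):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

class Calculator {
    int add(int a, int b) { return a + b; }
}

public class CalculatorTest {
    @Test
    public void add_twoPositiveNumbers_returnsTheirSum() {
        Calculator calculator = new Calculator(); // arrange (given)
        int sum = calculator.add(2, 3);           // act (when)
        assertEquals(5, sum);                     // assert (then)
    }
}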


Check behavior, not implementation


A test should test...
just one thing, and...
test it well, while...
communicating its intent clearly

What's the desired behavior you want to verify?


What's just an implementation detail?


Test Smells


Readability
Why
Accidental complexity adds cognitive load

Goal
Reading test code shouldn't be hard work

How
The intent and purpose of test code should be explicit or easily deducible

Consider
Level of abstraction
Single Responsibility Principle (also applies to tests)


Readability smells (1)


Primitive assertions
Assertions that use a level of abstraction that is too low
E.g. Testing structural details of results

Twin of the primitive obsession code smell (which refers to the use of primitive types to represent higher-level concepts)
Also the abstraction level of the testing API matters
General advice: keep a single level of abstraction in test methods

Hyperassertions
Assertions that are too broad
make it difficult to identify the intent and essence of the test
may fail if small details change, thus making it difficult to find out why

Approach: remove irrelevant details + divide-et-impera


Readability smells (2)


Incidental details
The test intent is mixed up with nonessential information
Approach
Extract nonessential information into private helpers and setup methods
Give things appropriate, descriptive names
Strive for a single level of abstraction in a test method

Setup sermon
Similar to Incidental details but focuses on the setup of a test's fixture (= the context in which a
given test executes), i.e. on the @Before and @BeforeClass (setup) methods

Magic numbers
Generally, literal values do not communicate their purpose well
Approach: replace literals with constants whose informative names make their purpose explicit (see the sketch below)
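A quick sketch of the fix (the subscription object and values are illustrative):

// Smell: what do 86400 and 3 mean?
assertEquals(86400 * 3, subscription.remainingSeconds());

// Better: constants explain the intent
private static final int SECONDS_PER_DAY = 86400;
private static final int TRIAL_PERIOD_DAYS = 3;
assertEquals(SECONDS_PER_DAY * TRIAL_PERIOD_DAYS, subscription.remainingSeconds());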


Readability smells (3)


Split personality
When a test embodies multiple tests in itself
A test should only check one thing and check it well
so that what's wrong can be easily located

Approach: divide-et-impera

Split logic
Test code (logic or data) is scattered in multiple places
Approach:
Inline the data/logic into the test that uses it


Maintainability
Test code requires quality (just like production code)
Maintainability of tests
is related to test readability
is related to structure

Look for
test smells that add cognitive load
test smells that make for a maintenance nightmare
test smells that cause erratic failures


Maintainability smells (1)


Duplication
needless repetition of concepts or their representations
all copies need to be synchronized
Examples:
Literal duplication: extract variables
Structural duplication (same logic operating on different data instances): extract methods

Sometimes, it may be better to leave some duplication in favor of better readability

Conditional logic
can be hard to understand and error-prone
Control structures can be essential in test helpers but, in test methods, these structures tend to be a major
distraction

Thread.sleep()
It slows down your tests; use synchronization mechanisms such as count-down latches or barriers instead (see the sketch below)
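A sketch of the latch-based alternative (the async service and its callback hook are hypothetical):

import static org.junit.Assert.assertTrue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

final CountDownLatch done = new CountDownLatch(1);
service.onCompletion(new Runnable() {   // hypothetical callback hook
    public void run() { done.countDown(); }
});
service.startAsyncWork();

// waits only as long as needed, with a bounded timeout instead of a blind sleep
assertTrue(done.await(2, TimeUnit.SECONDS));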


Maintainability smells (2)


Flaky tests
Tests that fail intermittently
Does the behavior depend on time, concurrency, the network, ...?

When you have a source of trouble, you can:
1) Avoid it
2) Control it (e.g. time, as sketched below)
3) Isolate it
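For example, time can be brought under control by injecting a time source instead of calling System.currentTimeMillis() directly (a sketch; all names are illustrative):

interface TimeSource {
    long currentTimeMillis();
}

// Production implementation delegates to the real clock
class SystemTimeSource implements TimeSource {
    public long currentTimeMillis() { return System.currentTimeMillis(); }
}

// Test implementation is fully deterministic
class FixedTimeSource implements TimeSource {
    private final long millis;
    FixedTimeSource(long millis) { this.millis = millis; }
    public long currentTimeMillis() { return millis; }
}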

Unportable file paths


Possibly, use relative paths (e.g. evaluated against the project's root dir)
You could also put resources on Java's classpath and look them up via
getClass().getResource(filename).getFile()

Persistent temp files


Even though you should try to avoid using physical files altogether if not essential, remember to
delete temp files during teardown


Maintainability smells (3)


Pixel perfection
It refers to tests that assert against (hardcoded) low-level details even
though the test would semantically be at a higher-level
you may require a fuzzy match instead of a perfect match

From the Parameterized-Test pattern to a Parameterized Mess


Some frameworks might not allow you
to trace a test failure back to the specific data set causing it
to express data sets in a readable and concise way

Lack of cohesion in test methods


each test in a test case should use the same test fixture


Trustworthiness
We need to trust our tests so that we can feel confident
in evolving/modifying/refactoring code
Look for test code that delivers a false sense of security,
misleading you to think everything is fine when it's not


Trustworthiness smells (1)


Commented-out tests
Try to understand and validate their purpose, or delete them

Misleading comments
May deliver false assumptions
Do not comment what the test does, as the test code should show that clearly and promptly
Instead, comments explaining the rationale may be useful

Never-failing tests
Have no value
E.g. forgetting fail() in a try{}catch{} (see the sketch after this list)

Shallow promises
Tests that do much less than what they say they do
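A minimal sketch of the forgotten-fail() smell (the parser call and ParseException are hypothetical):

import static org.junit.Assert.fail;

@Test
public void parse_invalidInput_throwsException() {
    try {
        parser.parse("not valid");          // hypothetical call expected to throw
        fail("expected a ParseException");  // forgetting this line means the test can never fail
    } catch (ParseException expected) {
        // expected: the test passes
    }
}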


Trustworthiness smells (2)


Lowered expectations
Tests asserting loose conditions (vague assertions, ...) give a false sense of security
Raise the bar by making the assertions more specific/precise

Platform prejudice
A failure to treat all platforms equally
Measures: different tests for different platforms

Conditional test
It's a test that's hiding a secret conditional within a test method, making the test logic
different from what its name would suggest
Platform prejudice is an example (the specific test depends on the platform)

As a rule of thumb, all branches in a test method should have a chance to fail


Some advanced stuff


Testable design
Design decisions can foster or hinder testability
Principles supporting testable design
Modularity
SOLID
Single responsibility principle: a class should have only a single responsibility
Open/closed principle: software entities should be open for extension, but closed for modification
you can change what a class does without changing its source code
Liskov substitution principle: objects in a program should be replaceable with instances of their
subtypes without altering the correctness of that program
Interface segregation principle: many client-specific interfaces are better than one general-purpose
interface
Dependency inversion principle: a way of depending on abstractions rather than on concretions
great for testability!


Testability issues
restricted access
private/protected methods
inability to observe the outcome (e.g. side effects, void methods)

inability to substitute parts of an implementation


inability to substitute a collaborator
inability to replace some functionality


Guidelines for testable design


Avoid complex private methods
Avoid final methods
Avoid static methods (if you foresee a need to stub them)
Use new with care
it hardcodes the implementation class: use IoC if possible

Avoid logic in constructors


Avoid the Singleton pattern
Favor composition over inheritance
Wrap external libraries
Avoid service lookups (factory classes)
as the collaborator is obtained internally to the method (e.g. service = MyFactory.lookupService()),
it may be difficult to replace the service (see the sketch below)
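A sketch of the fix, moving from an internal lookup to constructor injection (Service and ReportGenerator are illustrative names):

interface Service {
    String fetchData();
}

// Testable: the collaborator is injected instead of looked up internally
class ReportGenerator {
    private final Service service;
    ReportGenerator(Service service) { this.service = service; }
    String generate() { return "Report: " + service.fetchData(); }
}

// In a test: new ReportGenerator(new StubService()), where StubService returns canned data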


Best practices


Why good unit testing is essential


A failing project
doing TDD (red-green-refactor), the first months were great
as time went by, requirements changed
we were forced to change code, and when we did, tests
broke


Good unit tests


Good unit tests should
be AUTOMATED and REPEATABLE
be EASY TO IMPLEMENT
be RELEVANT TOMORROW
be RUNNABLE BY ANYONE WITH EASE
RUN QUICKLY
be CONSISTENT in their results
have FULL CONTROL OF THE UNIT under test
be FULLY ISOLATED
be COMMUNICATIVE WHEN FAILING


Why we need best practices


Just because you write your tests
doesn't mean they're maintainable, readable, and trustworthy.
Even if they are,
that doesn't mean you get the same benefits as when writing them test-first.
And even if you do,
that doesn't mean you'll end up with a well-designed system.


Naming conventions
Test project: [ProjectUnderTest].UnitTests
For a class in ProjectUnderTest: [ClassName]Tests
For each unit of work, a test method named:
[UnitOfWorkName]_[ScenarioUnderTest]_[ExpectedBehavior]

UnitOfWorkName: name of method/methods/classes being tested


Scenario: conditions under which the unit is tested (e.g. user already
exists, or system out of memory)
ExpectedBehavior: what you expect the tested method to do
return a value / change state of SUT / call 3rd-party system
NOTE: SUT = System Under Test
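A couple of illustrative names following this convention (hypothetical units of work):

// unit of work: IsValidFileName; scenario: bad extension; expected: returns false
IsValidFileName_BadExtension_ReturnsFalse()

// unit of work: LoginUser; scenario: user already exists; expected: throws exception
LoginUser_UserAlreadyExists_ThrowsException()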


Possible naming conventions of scenarios


In cases of state-based testing (MUT = method under test):
MyMUT_WhenCalled_DoSomething
MyMUT_Always_DoSomething


Other naming conventions


Fake objects for interface IMyInterface:
StubMyInterface, MockMyInterface, or FakeMyInterface (the latter when the same
fake serves for both stubs and mocks)


Types of testing
Value-based testing
check for values returned by a function

State-based testing (state verification)


determines whether the exercised method worked correctly by examining
the changed behavior of the SUT and its collaborators

Interaction testing
tests how an object sends messages to other objects
you use interaction testing when a method call is the end result of a
specific unit of work


Using stubs to break dependencies


Case: your SUT relies on dependencies over which you have
no control (or that don't work yet)
Examples of these dependencies: filesystem, threads, memory, time ...

By using a stub (a controllable replacement for a dependency)
you can test your SUT without dealing with the dependency


Refactoring for testability (1)


You may need to refactor your design to make it more testable
e.g. by introducing seams, i.e. places in your code where you can plug
in different functionality

Refactorings to allow replacements with stubs


Abstracting concrete objects (dependencies) into interfaces or delegates
Allowing injection of fake impls of those interfaces or delegates
By making the OUT (Object Under Test) receive an interface at constructor level or
property level (C#) for later use
or receive the interface in a method call via:
a parameter of the method (parameter injection)
a factory class
a local factory method (extract and override)
variations of the preceding techniques

Refactoring for testability (2)


Which injection? Rules of thumb:
Constructor injection for non-optional dependencies
Property injection for optional dependencies

Indirection levels
Depth 1: faking a member in the class under test (constructor/property
injection or faking a method via subclassing)
Depth 2: faking a member of a factory class
add a setter to the factory and set it to a fake dependency

Depth 3: faking the factory class by implementing the factory interface

Extract and override can help to create fake results
You subclass your class under test and override a virtual method to make it
return your stub (see the sketch below)
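A minimal extract-and-override sketch in Java (names are illustrative; in C# the overridden method must be virtual):

class TaxService {
    static int currentRate() { return 22; }  // stands in for a real external lookup
}

class PriceCalculator {
    int totalWithTax(int amount) {
        return amount + amount * getTaxRate() / 100;
    }
    // extracted seam: the dependency on the real rate source lives here
    protected int getTaxRate() {
        return TaxService.currentRate();
    }
}

// In the test, subclass and override the seam to return a canned value
class TestablePriceCalculator extends PriceCalculator {
    @Override
    protected int getTaxRate() { return 10; }
}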

Testable code and encapsulation


Making a class testable may imply breaking
encapsulation
Solutions
Using internal instead of public for methods, and exposing them to your
test assembly via [InternalsVisibleTo]
Using conditional compilation via #if and #endif


Mocks help you to assert something in your test


Mocks vs. stubs
stubs replace objects so that you can test other objects without
problems; a stub can never fail a test; the emphasis remains on the
object under test
mocks can fail tests; the emphasis is on the interaction between the
object under test and another object

Mocking
the class under test communicates with the mock object
the mock object records all the communication
the test uses the mock object to verify that the test passes


Mocks
Rule of thumb: no more than one mock per test
and more stubs, if necessary
Avoid overspecification

Handwriting mocks
is cumbersome: it takes time, requires a lot of boilerplate code, and is hard to reuse...
hence isolation/mocking frameworks


Isolation frameworks
Two categories of mocking frameworks
constrained
they generate code and compile it at runtime (i.e. they're constrained by the compiler and
intermediate code / bytecode abilities)
e.g. they cannot fake static methods, nonvirtual methods, nonpublic methods...
for C#: RhinoMocks, Moq, NMock, EasyMock, NSubstitute, FakeItEasy
for Java: jMock, EasyMock

unconstrained
in .NET (Typemock Isolator, JustMock, Moles) they are profiler-based (i.e. they use the
profiling APIs, which are unmanaged and allow injecting IL-based code at runtime)
thus the process that runs the tests must be enabled via specific environment variables
for Java: PowerMock, JMockit; for C++: Isolator++, Hippo Mocks
PROS: you can fake 3rd-party systems; they allow testing previously untestable code
CONS: some tests may become unmaintainable because you're faking APIs that you don't
own

Good isolation frameworks have...


features promoting test robustness
recursive fakes: objects returned by calling methods on a fake object
will be fake as well
ignored argument by default
no need to always include Arg.IsAny<T>

faking multiple methods at once


e.g. ability to specify a return value of type T for every method which returns T

nonstrict mocks
strict mocks fail in two cases: a) when an unexpected method is called on them, or
b) when an expected method is NOT called on them


Good isolation frameworks have also...


a good design which promotes clarity and readability
For example
API names which distinguish between mocks and stubs
AAA (Arrange-Act-Assert) style of testing rather than record-and-replay style


Test execution
Two common scenarios
tests run during the automated build process
automated build as a collection of build scripts, automated triggers, a build integration
server, and a shared team agreement to work this way
CI servers manage, record and trigger build scripts based on specific events
typical scripts: CI build (should be quick!), nightly build, and deployment build scripts

tests run by developers in their own machine

Some tools
for build scripts: NAnt, MSBuild, FinalBuilder, Rake
for CI servers: CruiseControl.NET, Jenkins, Travis CI, TeamCity, Hudson, Bamboo


Test code organization


Separate integration tests from unit tests
in different projects, or
in different folders and namespaces

Define a mapping from test classes to code under test


MyProject, MyProject.UnitTests, MyProject.IntegrationTests
Mapping tests to classes; approaches (not mutually exclusive):
one-test-class-per-class-under-test pattern, e.g. MyClassTest
one-test-class-per-feature, e.g. MyLoginClassTestForPasswordChanges
Test method names: [MethodUnderTest]_[Scenario]_[ExpectedBehavior]


Building a test API...


for code testability & readability/maintenance of tests
use inheritance in test classes for code reuse
(abstract test infrastructure class pattern): base test classes with
common utility methods
factory or template methods
common setup/teardown code
test methods enforcing a structure for testing in subclasses
create test utility classes (e.g. named AssertUtility, FactoryUtility, ConfigurationUtility) and methods
factory methods for complex objects, object configuration methods, ...
system initialization methods
methods for handling (setup, connection, read, ...) external resources (e.g. DBs, ...)
special assert methods
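A sketch of the abstract-test-infrastructure-class idea (all names are illustrative):

import org.junit.Assert;
import org.junit.Before;

// Base class sharing setup and utilities across test classes
public abstract class BaseIntegrationTest {
    @Before
    public void initSystem() {
        // common setup: system initialization, test data, connections, ...
    }

    // a special assert method shared by subclasses
    protected void assertValid(Object result) {
        Assert.assertNotNull("result should be valid", result);
    }
}

// Concrete test classes extend it: class UserServiceTest extends BaseIntegrationTest { ... }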


JUnit 4


JUnit4 basics
Package: org.junit
Test classes are POJO classes
Annotations
@Test (org.junit.Test)
@Before: marked methods are exec before each test method run
@After: marked methods are exec after each test method run

Using assertions & matchers


import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
So that in your test methods you can write something such as
assertThat(true, is(not(false)));
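Putting the pieces together, a minimal complete test class might look like this (the list-based example is illustrative):

import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class ListTest {
    private List<String> list;

    @Before
    public void setUp() {            // runs before each test method
        list = new ArrayList<String>();
    }

    @Test
    public void add_singleElement_increasesSizeToOne() {
        list.add("a");
        assertThat(list.size(), is(1));
    }
}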


Parameterized-Test pattern in JUnit


Mark the test class with
@RunWith(org.junit.runners.Parameterized.class)

Define private fields and a constructor that accepts, in order, your parameters
public MyParamTestCase(int k, String name) { this.k = k; this.name = name; }

Define a static method, annotated as below, that returns all your parameter data
@org.junit.runners.Parameterized.Parameters
public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] { { 10, "roby" }, });
}

Define a @Test method that works against the private fields that are defined to
contain the parameters.
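Assembled into a complete, runnable sketch (the data sets are illustrative and chosen so that each one passes the test):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MyParamTestCase {
    private final int k;
    private final String name;

    public MyParamTestCase(int k, String name) {
        this.k = k;
        this.name = name;
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 4, "roby" }, { 3, "bob" }, });
    }

    @Test
    public void nameLength_matchesExpectedValue() {
        assertEquals(k, name.length());  // runs once per data set
    }
}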


NUnit


NUnit basics (1)


Library: NUnit.Framework.dll (add reference to project)
Namespace: using NUnit.Framework;
Test classes annotated with [TestFixture]
Annotations
[Test]
1+ [TestCase(params)] for multiple parameterizations of a test method
[SetUp]: marked methods are exec before each test method run
[TearDown]: marked methods are exec after each test method run
[ExpectedException( typeof(ArgumentException), ExpectedMessage=...)]
[Ignore] to skip the tests that need to be fixed


NUnit basics (2)


Using assertions & matchers
Assert.True(cond, msg); Assert.False(cond, msg);
Assert.AreEqual(obj1, obj2);
var ex = Assert.Catch<Exception>(() => /* exceptional code */);
StringAssert.Contains(..., ex.Message);

Fluent syntax: Assert.That(strObj, Is.StringContaining(...))


A mocking framework: NSubstitute (1)


It supports the arrange-act-assert model
arrange: create and configure your fake objects
act: run your SUT
assert: verify that your fake was called

ISomething fake = Substitute.For<ISomething>();


/* act */
fake.Received().SomethingMethod(...);
Received() returns the fake object itself so that calling a method of its interface is checked against
the expectation set by Received()

fake.aMethodCall(Arg.Any<string>()).Returns(myFakeReturnVal);
// the previous line forces calls to aMethodCall() to return myFakeReturnVal
Assert.IsTrue(fake.aMethodCall(...));
fake.When(x => x.m(Arg.Any<string>())).Do(context => { throw new Exception(); });
Assert.Throws<Exception>(() => fake.m("ahah"));

A mocking framework: NSubstitute (2)


Argument-matching constraints can be specified
mock.Received().m(Arg.Is<MyType>(obj => obj.Prop1==...));


NSubstitute: testing event-related activities


You can test events in two different directions
testing that someone is listening to an event
var stub = Substitute.For<MyEventProvider>();
stub.MyEvent += Raise.Event<Action<string>>(str); // raises MyEvent with argument str
mock.Received().MyEventMethod(str); // mock: the listener whose reaction we verify
// NOTE: public delegate void Action<in T>(T obj) // defined in mscorlib

testing that someone is triggering an event


bool evFired = false;
stubEventProvider.MyEvent += delegate { evFired = true; };
SUT.doSomethingThatEventuallyFiresTheEvent();
Assert.IsTrue(evFired);
