2013-05-30
Unit Testing
Roberto Casadei
Notes taken from:
Testing
Expressing and validating assumptions and intended
behavior of the code
Checking what code does against what it should do
Tests help us
catch mistakes
shape our design to actual use
avoid gold-plating by being explicit about what the required behavior is
What matters is whether the structure of your code helps you locate
the implementation of higher-level concepts quickly and reliably
So, pay attention to:
Relevant test classes for task at hand
Appropriate test methods for those classes
Lifecycle of objects in those methods
Examples of measures:
Test doubles
Keep test code and the resources it uses together
Make tests set up the context they need
Use an in-memory database for integration tests that require persistence
Test Doubles
Test doubles
Def.: objects to be substituted for the real implementation for testing purposes
Replacing the code around what you want to test to gain full control of its context/environment
Stubs
(noun) def. a truncated or unusually short thing
A stub is a simple implementation that stands in for a real implementation
e.g. an object whose methods do nothing or return a default value
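For instance, a minimal hand-written stub (the MailService interface is hypothetical, assumed just for illustration):

// Hypothetical collaborator interface the code under test depends on
interface MailService {
    void send(String to, String body);
}

// Stub: a do-nothing implementation that stands in for the real mail server
class MailServiceStub implements MailService {
    @Override
    public void send(String to, String body) {
        // intentionally empty: this test run does not care about mail being sent
    }
}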
Fake objects
Replicating the behavior of the real thing without the side
effects and other consequences of using the real thing
Fast alternative for situations where the real thing is
difficult or cumbersome to use
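E.g. a hand-rolled in-memory fake for a persistence interface; a sketch, assuming illustrative User and UserRepository types (UserRepository/findById also appear in the mock example further below):

import java.util.HashMap;
import java.util.Map;

// Illustrative domain class and repository interface
class User {
    private final int id;
    User(int id) { this.id = id; }
    int getId() { return id; }
}

interface UserRepository {
    void save(User user);
    User findById(int id);
}

// Fake: behaves like the real repository, but keeps everything in memory
// instead of touching a database
class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, User> users = new HashMap<>();
    @Override
    public void save(User user) { users.put(user.getId(), user); }
    @Override
    public User findById(int id) { return users.get(id); }
}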
Test spies
Built to record what happens when using them
E.g. they are useful when none of the objects passed in
as arguments to certain operations can reveal through
their API what you want to know
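E.g. a minimal hand-rolled spy (the Logger interface is hypothetical): it records the calls made to it so the test can inspect them afterwards.

import java.util.ArrayList;
import java.util.List;

// Hypothetical collaborator interface
interface Logger {
    void log(String message);
}

// Spy: records every message it receives
class LoggerSpy implements Logger {
    final List<String> messages = new ArrayList<>();
    @Override
    public void log(String message) { messages.add(message); }
}
// A test passes the spy to the code under test and then asserts on spy.messages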
Mock objects
Mocks are test spies that specify the expected interactions together with the behavior that results from them
E.g. A mock for UserRepository interface might be told to
return null when findById() is invoked with param 123, and
return a given User instance when called with 124
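A sketch of this example with EasyMock (one of the Java frameworks listed under isolation frameworks below); the test-class and variable names are assumed for illustration, and the User/UserRepository types are the ones sketched above:

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertSame;

import org.junit.Test;

public class UserRepositoryMockTest {
    @Test
    public void findById_returnsTheConfiguredUsers() {
        UserRepository repository = createMock(UserRepository.class);
        User knownUser = new User(124);
        expect(repository.findById(123)).andReturn(null);      // expected interaction + resulting behavior
        expect(repository.findById(124)).andReturn(knownUser);
        replay(repository);

        // here the mock is called directly just to show the configured behavior;
        // a real test would exercise the class under test, passing it the mock
        assertNull(repository.findById(123));
        assertSame(knownUser, repository.findById(124));

        verify(repository);  // fails the test if the expected calls did not happen
    }
}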
Test Smells
Readability
Why
Accidental complexity adds cognitive load
Goal
Reading test code shouldn't be hard work
How
The intent and purpose of test code should be explicit or easily deducible
Consider
Level of abstraction
Single Responsibility Principle (also applies to tests)
Primitive assertions
Twin of the primitive obsession code smell (which refers to the use of primitive types to represent higher-level concepts)
Also the abstraction level of the testing API matters
General advice: keep a single level of abstraction in test methods
Hyperassertions
Assertions that are too broad
make it difficult to identify the intent and essence of the test
may fail if small details change, thus making it difficult to find out why
Setup sermon
Similar to Incidental details but focuses on the setup of a test's fixture (= the context in which a
given test executes), i.e. on the @Before and @BeforeClass (setup) methods
Magic numbers
Generally, literal values do not communicate their purpose well
Approach: replace literals with constants whose informative names make their purpose explicit
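For example, a minimal sketch (the ExpiryConfig class and its method are hypothetical):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ExpiryConfigTest {
    // Named constant instead of the magic number 86400
    private static final int ONE_DAY_IN_SECONDS = 24 * 60 * 60;

    @Test
    public void defaultExpiry_isOneDay() {
        ExpiryConfig config = new ExpiryConfig();   // hypothetical class under test
        // assertEquals(86400, ...) would force the reader to decode the number;
        // the named constant states the intent directly
        assertEquals(ONE_DAY_IN_SECONDS, config.defaultExpirySeconds());
    }
}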
Approach: divide-et-impera
Split logic
Test code (logic or data) is scattered in multiple places
Approach:
Inline the data/logic into the test that uses it
Maintainability
Test code requires the same quality as production code
Maintainability of tests
is related to test readability
is related to structure
Look for
test smells that add cognitive load
test smells that make for a maintenance nightmare
test smells that cause erratic failures
Conditional logic
can be hard to understand and error-prone
Control structures can be essential in test helpers but, in test methods, these structures tend to be a major
distraction
Thread.sleep()
It slows down your tests; prefer synchronization mechanisms such as count-down latches or barriers
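A sketch of the latch-based alternative (the asynchronous worker is simulated here with a plain thread):

import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

public class AsyncCompletionTest {
    @Test
    public void worker_signalsCompletion() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        // Stand-in for the asynchronous code under test: it signals the latch when finished
        new Thread(() -> {
            // ... real work would happen here ...
            done.countDown();
        }).start();

        // Waits only as long as needed (instead of a fixed Thread.sleep()),
        // and fails with a clear timeout if completion never happens
        assertTrue("worker did not finish in time", done.await(2, TimeUnit.SECONDS));
    }
}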
2) Control it
3) Isolate it
Trustworthiness
We need to trust our tests so that we can feel confident
in evolving/modifying/refactoring code
Look for test code that delivers a false sense of security,
misleading you into thinking everything is fine when it's not
Misleading comments
May deliver false assumptions
Do not comment what the test does, as the test code should show that clearly and promptly
Instead, comments explaining the rationale may be useful
Never-failing tests
Have no value
E.g. forgetting fail() in a try{}catch{} block when expecting an exception (see the sketch after this list)
Shallow promises
Tests that do much less than what they say they do
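A method-level sketch of the never-failing pattern mentioned above (the parser and ParseException names are hypothetical):

@Test
public void parse_rejectsMalformedInput() {
    try {
        parser.parse("not valid");            // expected to throw ParseException
        // BUG: without fail("expected a ParseException") right here,
        // the test silently passes even when no exception is thrown
    } catch (ParseException expected) {
        // expected path: the malformed input was rejected
    }
}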
Platform prejudice
A failure to treat all platforms equally
Measures: different tests for different platforms
Conditional test
It's a test that's hiding a secret conditional within a test method, making the test logic
different from what its name would suggest
Platform prejudice is an example (the specific test depends on the platform)
As a rule of thumb, all branches in a test method should have a chance to fail
Testable design
Design decisions can foster or hinder testability
Principles supporting testable design
Modularity
SOLID
Single responsibility principle: a class should have only a single responsibility
Open/closed principle: software entities should be open for extension, but closed for modification
you can change what a class does without changing its source code
Liskov substitution principle: objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program
Interface segregation principle: many client-specific interfaces are better than one general-purpose interface
Dependency inversion principle: depend on abstractions rather than on concretions
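E.g. depending on the abstraction and injecting it through the constructor keeps a class easy to test; a sketch, where UserService is an illustrative name and UserRepository is the interface from the test-double examples above:

// Depends on the UserRepository abstraction, not on a concrete database-backed class,
// so tests can inject a stub, fake or mock through the constructor
public class UserService {
    private final UserRepository repository;

    public UserService(UserRepository repository) {
        this.repository = repository;
    }

    public boolean exists(int id) {
        return repository.findById(id) != null;
    }
}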
Testability issues
restricted access
private/protected methods
inability to observe the outcome (e.g. side effects), void methods
Best practices
Naming conventions
Test project: [ProjectUnderTest].UnitTests
For a class in ProjectUnderTest: [ClassName]Tests
For each unit of work, a test method named [UnitOfWorkName]_[ScenarioUnderTest]_[ExpectedBehavior]
Types of testing
Value-based testing
check for values returned by a function
Interaction testing
tests how an object sends messages to other objects
you use interaction testing when a method call is the end result of a specific unit of work
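A sketch of the two styles (the Greeter class is hypothetical; LoggerSpy is the spy sketched earlier):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TestingStylesTest {
    // Value-based: assert on the value returned by the unit of work
    @Test
    public void sum_twoPlusTwo_returnsFour() {
        assertEquals(4, Integer.sum(2, 2));
    }

    // Interaction testing: assert on the message the unit of work sent to its collaborator
    @Test
    public void greet_logsTheGreeting() {
        LoggerSpy log = new LoggerSpy();
        new Greeter(log).greet("Ada");          // hypothetical class under test
        assertEquals("Hello Ada", log.messages.get(0));
    }
}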
Indirection levels
Depth 1: faking a member in the class under test (constructor/property
injection or faking a method via subclassing)
Depth 2: faking a member of a factory class
add a setter to the factory and set it to a fake dependency
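A sketch of the depth-2 approach (ConnectionFactory, Connection and RealConnection are illustrative names): the factory gets a setter that tests use to plant a fake.

// Illustrative dependency normally created through the factory
interface Connection {
    void open();
}

class RealConnection implements Connection {
    public void open() { /* talks to the real external resource */ }
}

public class ConnectionFactory {
    private static Connection override = null;

    // Test seam: a test plants a fake here before exercising the code under test
    static void setConnectionForTesting(Connection fake) {
        override = fake;
    }

    public static Connection getConnection() {
        return override != null ? override : new RealConnection();
    }
}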
Mocking
the class under test communicates with the mock object
the mock object records all the communication
the test asks the mock object to verify that the expected interactions took place, and this determines whether the test passes
Mocks
Rule of thumb: no more than one mock per test
plus as many stubs as necessary
Avoid overspecification
Handwriting mocks
is cumbersome: it takes time, requires a lot of boilerplate code, and the result is hard to reuse...
Isolation frameworks
Two categories of mocking frameworks
constrained
they generate code and compile it at runtime (i.e. they're constrained by the compiler and
intermediate code / bytecode abilities)
e.g. they cannot fake static methods, nonvirtual methods, nonpublic methods...
for C#: RhinoMocks, Moq, NMock, EasyMock, NSubstitute, FakeItEasy
for Java: jMock, EasyMock
unconstrained
in .NET (Typemock Isolator, JustMock, Moles) they are profiler-based (i.e. they use the unmanaged profiling APIs, which allow injecting IL code at runtime)
thus the process that runs the tests must be launched with specific environment variables enabled
for Java: PowerMock, JMockit; for C++: Isolator++, Hippo Mocks
PROS: you can fake 3rd-party systems; they allow testing previously untestable code
CONS: some tests may become unmaintainable because you're faking APIs that you don't own
nonstrict mocks
do not fail when a method that was not explicitly expected is called on them
strict mocks fail in two cases: 1) when an unexpected method is called on them, or
2) when an expected method is NOT called on them
Test execution
Two common scenarios
tests run during the automated build process
automated build as a collection of build scripts, automated triggers, a build integration
server, and a shared team agreement to work this way
CI servers manage, record and trigger build scripts based on specific events
typical scripts: CI build (should be quick!), nightly build, and deployment build scripts
Some tools
for build scripts: NAnt, MSBuild, FinalBuilder, Rake
for CI servers: CruiseControl.NET, Jenkins, Travis CI, TeamCity, Hudson, Bamboo
JUnit 4
JUnit4 basics
Package: org.junit
Test classes are POJO classes
Annotations
@Test (org.junit.Test)
@Before: marked methods are executed before each test method runs
@After: marked methods are executed after each test method runs
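A minimal example of the lifecycle annotations (the list-based fixture is chosen just for illustration):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ListTests {
    private List<String> list;

    @Before
    public void setUp() {                // runs before each @Test method
        list = new ArrayList<>();
    }

    @Test
    public void add_singleElement_sizeIsOne() {
        list.add("a");
        assertEquals(1, list.size());
    }

    @After
    public void tearDown() {             // runs after each @Test method
        list = null;
    }
}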
Parameterized tests (org.junit.runners.Parameterized)
Define private fields and a constructor that accepts, in order, your parameters
public MyParamTestCase(int k, String name) { this.k = k; this.name = name; }
Define a @Test method that works against the private fields that hold the parameters.
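Putting the pieces together, a sketch of a complete parameterized test; the @RunWith and static @Parameters parts come from the standard JUnit 4 Parameterized runner, and the sample data and assertion are assumed for illustration:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MyParamTestCase {
    private final int k;
    private final String name;

    public MyParamTestCase(int k, String name) {
        this.k = k;
        this.name = name;
    }

    @Parameters
    public static Collection<Object[]> data() {
        // each Object[] becomes one invocation of the constructor, in order
        return Arrays.asList(new Object[][] {
            { 1, "a" },
            { 3, "abc" }
        });
    }

    @Test
    public void name_hasLengthK() {
        assertEquals(k, name.length());
    }
}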
NUnit
// NSubstitute syntax; fake is assumed to have been created with Substitute.For<ISomeInterface>()
fake.aMethodCall(Arg.Any<string>()).Returns(myFakeReturnVal);
// the previous line forces calls to aMethodCall() to return myFakeReturnVal
Assert.IsTrue(fake.aMethodCall(...));
fake.When(x => x.m(Arg.Any<string>())).Do(context => { throw new Exception(); });
Assert.Throws<Exception>(() => fake.m("ahah"));