
Contents Guide

~ Welcome & What You’ll Learn

Section 1: Introduction to Automated Testing


1. Understanding Automated Testing
2. Unveiling the Advantages of Automated Testing
3. Exploring Different Test Types
4. Building a Solid Foundation: The Test Pyramid Concept
5. Navigating the Testing Tool Landscape
6. Writing Your First Unit Test: Step-by-Step Guide
7. Deep Dive into Unit Test Composition: Advanced Techniques
8. Ensuring Comprehensive Test Coverage: Beyond Unit Tests
9. Empowering Refactoring Practices with Testing
10. Leveraging NUnit for Efficient Testing in Visual Studio
11. Embracing Test-Driven Development: Principles and Practices

Section 2: Mastering Unit Testing Fundamentals


12. Understanding the Core Principles of Unit Testing
13. Essential Traits of Effective Unit Tests
14. Defining Test Scope: Choosing What to Test Wisely
15. Structuring Your Tests: The Art of Naming and Organization
16. Leveraging Rider IDE: Optimizing Your Unit Testing Workflow
17. Hands-On: Crafting Your First Unit Test in Rider
18. Shedding Light on Black-box Testing Techniques
19. Managing Test Setup and Teardown: Ensuring Test Independence
20. Unleashing the Power of Parameterized Tests
21. Selective Testing: Strategies for Ignoring and Skipping Tests
22. Building Trustworthy Tests, Part 1: Handling Dependencies and Isolation
23. Building Trustworthy Tests, Part 2: Ensuring Consistency and Reliability

Section 3: Advanced Unit Testing Techniques


24. Unveiling Core Unit Testing Techniques
25. Mastering String Testing Strategies, Part 1
26. Mastering String Testing Strategies, Part 2
27. Comprehensive Testing Approaches for Arrays and Collections, Part 1
28. Comprehensive Testing Approaches for Arrays and Collections, Part 2
29. Verifying Method Return Types: Best Practices
30. Tackling Void Methods: Testing Strategies and Considerations
31. Handling Exceptions: Testing Method Behavior Under Error Conditions
32. Validating Event Invocation: Ensuring Correct Event Handling
33. Unraveling the Mystery of Testing Private Methods
34. Exploring Code Coverage: Metrics and Insights
35. Real-world Testing Challenges

Section 4: Decoupling Dependencies for Testability


36. Understanding the Importance of Decoupling External Dependencies
37. Embracing Loosely-coupled Design Principles for Testable Code
38. Step-by-Step Refactoring Towards a Loosely-coupled Architecture, Part 1
39. Step-by-Step Refactoring Towards a Loosely-coupled Architecture, Part 2
40. Dependency Injection via Method Parameters: Implementation and Benefits
41. Leveraging Property Injection for Enhanced Flexibility
42. Utilizing Constructor Injection for Seamless Dependency Management
43. Exploring Dependency Injection Frameworks: Options and Considerations
44. Unleashing the Power of Mocking Frameworks: Essential Tools for Unit Testing
45. Crafting Mock Objects Using Moq: Advanced Techniques, Part 1
46. Crafting Mock Objects Using Moq: Advanced Techniques, Part 2
47. Balancing State-based and Interaction Testing Approaches
48. Testing Object Interaction: Strategies and Examples
49. Identifying and Avoiding Mock Abuse: Best Practices
50. Defining Roles: Who Should Be Responsible for Writing Tests in Your Team?
~ Conclusion

Welcome & What You’ll Learn


Welcome to the World of Robust C# Code
Software testing often gets relegated to the sidelines, treated as a tedious
step done sometime after “real” coding is finished. But as your projects
grow, their complexity increases. Bugs become harder to find, and changes
might inadvertently introduce unexpected problems in seemingly unrelated
areas of your application. The truth is, creating stable, maintainable
software takes more than just writing code that works today.
This book aims to change your perspective. It’s your guide to building
confidence in your C# applications through the power of automated testing,
with a strong focus on unit testing. Unit testing serves as your safety net
during development, helping you isolate problems early and giving you the
freedom to make changes without the constant fear of breaking something.
What You’ll Learn
This comprehensive guide will take you on a journey, transforming the way
you think about testing:
● Solid Foundations: You’ll understand automated testing’s benefits,
different test types, and how they fit together. Mastering the testing
pyramid concept will help you structure your testing strategy.
● Unit Testing Mastery: Discover core unit testing principles, best
practices for writing effective tests, and techniques to tackle various
aspects of your C# code (strings, exceptions, events, etc.).
● Beyond the Basics: Explore advanced unit testing concepts like
parameterized testing, code coverage analysis, and testing private
methods to unlock new levels of efficiency.
● Dependency Management: Learn the importance of decoupling
code, the concepts of dependency injection, and how it leads to
more testable systems.
● Mocking Power: Get hands-on experience with Moq, a powerful
mocking framework, to isolate components and simulate
dependencies for focused testing.
● Test-Driven Development (TDD): Embrace test-driven
development as a philosophy to guide your development process,
leading to more robust and maintainable code from the start.
● Industry Insights: Benefit from expert tips and strategies for
tackling real-world testing challenges and integrating testing
seamlessly into your development workflow.
Who This Book is For
● C# Developers of All Levels: Whether you’re new to testing or
have some experience, this book will expand your knowledge and
solidify your testing skills.
● Teams Aiming for Quality: Learn how to improve code reliability
and reduce the risk of defects slipping into production, making your
development process more efficient.
● Software Architects: Discover how testable designs enable greater
agility in delivering high-quality software.
Prerequisites
You should have a basic understanding of core C# syntax and the ability to
create simple applications.
Additional Resources
This book complements your learning with additional resources to deepen
your understanding:
● Microsoft Documentation on Unit Testing:
https://docs.microsoft.com/en-us/dotnet/core/testing/
● The NUnit Project Website: https://nunit.org/
● The Moq Project Website: https://github.com/Moq/moq4
Get Ready to Embark on Your Testing Journey!
As you progress through this book, you’ll gain the mindset, skills, and tools
you need to create high-quality C# applications that stand the test of time.
Let’s get started!
Section 1:
Introduction to
Automated Testing

Understanding Automated Testing


You’ve crafted a C# application that seems to function perfectly. It takes the
right inputs, produces the right outputs, and your early users are happy. Yet,
as updates roll in – more features, bug fixes, optimizations – a nagging
worry grows: how can you be sure that these changes don’t unintentionally
break something else? This is where automated testing enters the picture.
What is Automated Testing?
In its essence, automated testing is the process of writing code to test your
code. Let’s break down this definition:
● Code to Test Code: Instead of manual checks, you create special
test cases – essentially small programs that verify various aspects of
your main application.
● Automated: Once written, these test cases can be executed
repeatedly and automatically without direct human intervention.
Why Automated Testing Matters
Let’s consider why automated testing deserves a place in your development
toolbox:
● Early Defect Detection: Automated tests run instantly. You can
catch bugs during development rather than discovering them after a
release, saving time and potential embarrassment.
● Reliability: Well-written automated tests are consistent. They
perform the same checks the same way every time, unlike manual
testing that might be prone to human error or oversight.
● Regression Safety Net: As projects change, automated tests ensure
that previously working features don’t unexpectedly break, giving
you confidence in making changes.
● Efficiency over Time: While there’s an initial investment in writing
tests, they save time in the long run. Find a bug early, fix it early,
and avoid costly debugging sessions later.
● Living Documentation: Tests can serve as another way to explain
how your code is supposed to function, aiding collaboration within
your team.
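The idea of "code to test code" can be made concrete with a minimal sketch. The class and method names here are invented for illustration; the book's own worked examples come later:

```csharp
using NUnit.Framework;

// A tiny piece of "real" application code.
public class Greeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

// And a tiny piece of code whose only job is to verify it.
[TestFixture]
public class GreeterTests
{
    [Test]
    public void Greet_WithName_ReturnsGreeting()
    {
        var greeter = new Greeter();
        Assert.AreEqual("Hello, Ada!", greeter.Greet("Ada"));
    }
}
```

Once written, a test runner can execute GreeterTests again and again, automatically, after every change.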
Manual Testing vs. Automated Testing
Automated testing does not completely replace manual testing. Here’s a
quick comparison:

Feature     | Manual Testing                                   | Automated Testing
Focus       | User experience, edge cases, exploratory testing | Code logic, repetitive scenarios
Speed       | Can be slow and tedious                          | Very fast execution
Consistency | May vary with tester                             | Always consistent
Scope       | Limited by time / resources                      | Can be very comprehensive

Best of Both Worlds: Combining automated and manual testing strategies provides the most robust safety net for your software development.
Additional Resources
● Benefits of Software Testing and Automation:
https://smartbear.com/learn/automated-testing/what-is-automated-testing/
Get Ready to Dive Deeper
Now that you understand the core concepts and the “why” behind
automated testing, the next chapter will unveil the many benefits it offers
for you and your projects.
Let’s continue!

Unveiling the Advantages of Automated Testing

In the previous chapter, you learned the fundamentals of what automated
testing is. Now, it’s time to explore the concrete benefits it brings to your
C# software projects. Let’s dive into the key advantages and why automated
testing deserves a prominent place in your development toolkit.
1. Catching Bugs Early
Imagine a scenario: You release a new feature, only to receive negative
feedback about unexpected behavior in a seemingly unrelated part of your
software. Such “silent breaks” are frustrating and erode user trust.
Automated tests act as your early warning system. Every time you change
code, you run the test suite, getting quick feedback about potential problems
before they reach end users.
2. Increased Confidence During Changes
Software rarely remains static. Features get added, performance
optimizations occur, dependencies get updated, and code is refactored for
better maintainability. With a solid set of automated tests, you gain
confidence. If your tests pass after a change, it provides a strong signal that
you haven’t broken core functionality.
3. Faster and Safer Refactoring
Refactoring means restructuring existing code without impacting its
external behavior. A comprehensive test suite is your safety net during the
refactoring process. As you make changes, run the tests – any unexpected
failures point to areas where you might have inadvertently introduced a
bug.
4. Time Savings in the Long Run
While writing automated tests requires an initial effort, they pay dividends
over time:
● Reduced Debugging: Automated tests help pinpoint the source of
issues faster, minimizing those lengthy debugging sessions.
● Reduced Regression: Fix a bug once, and write a test to ensure it
doesn’t reappear. Catching regressions early is far cheaper than
discovering them after a release.
5. Living Documentation
Well-written tests serve as another form of documentation for your code.
They clearly outline expected inputs, outputs, and the behavior of your
components. This aids in understanding the intent of your code, especially
helpful for newcomers joining a project.
6. Promoting Better Design
Testable code is often better-structured. When you think about testing
upfront, you tend to design your C# classes and methods in ways that are
easier to isolate and test. This can lead to more modular, reusable, and
loosely coupled code.
7. Encouraging a Quality Mindset
Embracing automated testing fosters a culture of quality within your team.
Writing tests becomes the norm, not an afterthought. This mindset shift
leads to more robust software, as everyone becomes responsible for
ensuring things not only work today, but continue to function correctly as
the project evolves.
Additional Resources
● The Top 5 Benefits of Automated Testing:
https://www.testim.io/blog/top-5-benefits-of-automated-testing/
From Theory to Practice
Understanding the benefits is a fantastic start! In the upcoming chapters,
you’ll transform this understanding into practical skills as you learn about
different test types, write your first test, and discover essential testing tools.
Let’s move forward!

Exploring Different Test Types


The realm of automated testing is vast, with different test types suited for
various scenarios. In this chapter, you’ll gain a high-level understanding of
the key test types you’ll encounter on your C# testing journey. Let’s
demystify the testing landscape and how these types complement each
other!
1. Unit Tests
● Focus: These are the foundation of most testing strategies. Unit
tests isolate and verify the smallest testable pieces of your code –
typically individual methods or functions.
● Benefits: Fast to write and execute, ideal for early bug detection
and providing quick feedback during development.
2. Integration Tests
● Focus: Examine how multiple units or components of your system
collaborate. These tests check if units are ‘playing nicely’ together.
● Benefits: Discover mismatches in how components interact and
identify communication issues.
3. System Tests
● Focus: Assess the behavior of your complete system as a whole.
Often simulate an end-user journey.
● Benefits: Verify if the whole system meets requirements from the
user’s perspective.
4. End-to-End (E2E) Tests
● Focus: Replicate real-world user scenarios, often involving
interactions with the user interface (UI), databases, or external
systems.
● Benefits: Ensure everything works together from the user’s point of
view. These are usually slower to run than the kinds of tests listed
above.
5. Regression Tests
● Focus: A catch-all term for tests designed to prevent the
reintroduction of previously fixed bugs. As you fix issues, critical
tests become part of your regression test suite.
● Benefits: Offer a vital safety net as your project grows, providing
confidence that changes haven’t broken previously working
functionality.
6. Acceptance Tests
● Focus: Driven by business requirements. These tests verify that
your software meets user-facing expectations and acceptance
criteria.
● Benefits: Help bridge the gap between what developers build and
what users actually need.
7. Performance Tests
● Focus: Ensure your system operates within acceptable response
times and handles expected load. Types include load testing, stress
testing, and scalability testing.
● Benefits: Help uncover performance bottlenecks and ensure your
system can meet user demand.
The Testing Pyramid (Coming Up!)
These different test types aren’t created equal. In the next chapter, we’ll
discuss the “Test Pyramid” concept – a valuable strategy for balancing your
testing efforts.
Additional Resources
● Software Testing Types: A Complete Guide:
https://www.softwaretestinghelp.com/types-of-software-testing/
● Test Types Cheatsheet: https://abstracta.us/blog/quality-assurance/test-types-cheatsheet
Understanding Your Testing Needs
The best test mix for your projects depends on their unique needs. As you
get deeper into unit testing, you’ll also gain insights into when other test
types become essential to ensure your C# applications truly deliver.
Next, let’s build a foundation by understanding the Test Pyramid concept!
Building a Solid Foundation: The Test
Pyramid Concept
Imagine building a house. You start with a strong foundation and then add
layers upon it. The testing pyramid is a similar concept for software testing.
It’s a powerful visual metaphor that guides you in prioritizing different test
types to create a robust and efficient testing strategy.
Understanding the Layers
The classic test pyramid generally consists of three main layers:
1. Unit Tests (Base): This forms the largest and most crucial
section of your pyramid. Here, you focus on testing individual
units of code (like functions or methods) in isolation.
2. Integration Tests (Middle): A smaller layer dedicated to
testing how multiple units or components work together.
3. End-to-End (E2E) Tests (Top): The smallest portion of your
pyramid. These tests simulate real-world user scenarios, often
spanning across the entire application, including the UI,
database, and other systems.

Why a Pyramid?
Here’s why this pyramid structure matters:
● Speed: Unit tests are lightning-fast, providing rapid feedback as
you develop. Integration and E2E tests tend to be slower to execute.
● Cost: Unit tests are comparatively inexpensive to write and run.
The higher you go on the pyramid, the more complex, time-
consuming, and potentially brittle your tests can become.
● Focus: The pyramid encourages you to focus on testing the core
logic of your application at the unit level, where most bugs are
likely to appear.
Ideal Proportions
While the ratio of tests will vary depending on your project’s nature, the
general guideline is: Have a substantial base of unit tests, a smaller section
of integration, and a select few targeted E2E tests.
Benefits of the Test Pyramid
● Resilient Projects: A solid foundation of unit tests promotes
greater code stability over time.
● Optimized Efforts: The pyramid guides you towards a balanced
testing strategy, maximizing return on investment.
● Faster Feedback Loops: With more unit tests and their rapid
execution, you get early signals of potential problems.
Additional Resources
● Introducing the Software Testing Pyramid:
https://martinfowler.com/articles/practical-test-pyramid.html
● The Test Pyramid: A Guide to Better Automated Software Testing: https://www.browserstack.com/guide/testing-pyramid-for-test-automation
Beyond the Classic Pyramid
It’s important to note that the classic test pyramid is a conceptual model and
shouldn’t be treated as rigid dogma. Some projects might benefit from
slightly modified shapes. It’s about finding the best balance for your
context.
Get Ready to Dive Deep
Now that you grasp the essence of the testing pyramid, the next chapter will
take you into the world of testing tools. You’ll learn about essential tools for
writing and executing your C# tests.
Let’s move forward!
Navigating the Testing Tool Landscape
The world of C# testing offers a rich ecosystem of tools that empower you
to write, run, and analyze your automated tests. Let’s take a tour of the key
categories and popular choices to help you select the best fit for your
projects.
1. Testing Frameworks
These frameworks provide the structure and vocabulary for composing your
tests. Key players in the C# world include:
● NUnit: A mature, widely-adopted, and flexible testing framework.
We’ll focus on NUnit later in the book.
● xUnit: A modern, lean, and extensible testing framework inspired
by the fundamentals of NUnit.
● MSTest: A testing framework built into Visual Studio, providing
seamless integration with the IDE.
2. Assertion Libraries
Assertion libraries offer a clear way to express the expected results of your
tests. Popular options include:
● Built-in Assertions: NUnit, xUnit, and MSTest all come with their
own assertion mechanisms.
● Fluent Assertions: Provides a readable, fluent syntax for writing
assertions (e.g., myResult.Should().BePositive()).
● Shouldly: Another assertion library focusing on expressive
assertions (e.g., myResult.ShouldBePositive()).
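To see how the styles differ, here is the same check written three ways in one sketch. It assumes the FluentAssertions and Shouldly NuGet packages are installed alongside NUnit; in practice a team would pick one style rather than mixing them:

```csharp
using NUnit.Framework;   // built-in constraint model
using FluentAssertions;  // Should().BePositive()
using Shouldly;          // ShouldBePositive()

[TestFixture]
public class AssertionStyleExamples
{
    [Test]
    public void SameCheck_ThreeStyles()
    {
        int myResult = 42;

        // NUnit's built-in constraint model:
        Assert.That(myResult, Is.Positive);

        // Fluent Assertions:
        myResult.Should().BePositive();

        // Shouldly:
        myResult.ShouldBePositive();
    }
}
```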
3. Mocking Frameworks
Mocking is essential for isolating your code during unit testing. A top
choice for C# is:
● Moq: Powerful, versatile, and easy to learn. Supports both state-
based and interaction-based testing styles. We’ll dive deep into Moq
in later sections.
4. Code Coverage Tools
Code coverage gives you insight into what percentage of your code is
executed by your tests. Popular tools include:
● dotCover: A powerful commercial code coverage tool from
JetBrains, with excellent Visual Studio and Rider integration.
● OpenCover: A free and open-source alternative for code coverage
analysis.
5. Test Runners
Test runners discover and execute your tests. They’re often integrated with
your IDE or build system.
● Visual Studio Test Explorer: Built-in test runner for Visual Studio.
● ReSharper Test Runner: JetBrains’ advanced test runner for
ReSharper.
● NUnit Console Runner: Command-line runner for executing
NUnit tests.
Choosing Your Toolkit
Here’s a simplified starting point that will serve you well:
● Testing Framework: NUnit
● Assertion Library: NUnit’s built-in assertions with potential
exploration of Fluent Assertions
● Mocking Framework: Moq
● IDE: Visual Studio or Rider
Don’t Get Overwhelmed
This landscape might seem vast. The key is to start with a core set of tools
and expand your knowledge as you go deeper into unit testing. We’ll focus
on using NUnit, Moq, and IDE-integrated test runners to give you a
powerful and practical testing setup.
Additional Resources
● C#/.NET Testing Tools: https://github.com/topics/testing-tools
Practical Experience Ahead
In the next chapter, you’ll take a leap forward by writing and understanding
your first C# unit test!
Let’s get those hands-on skills going!

Writing Your First Unit Test: Step-by-Step Guide

It’s time to move from theory to practice! In this chapter, you’ll create your
first C# unit test using NUnit. We’ll walk through the process step-by-step,
making it a clear and tangible experience.
Prerequisites
● Basic C# syntax understanding.
● An IDE like Visual Studio or Rider (Rider is used in this example).
Scenario: A Simple Calculator
Let’s imagine a basic Calculator class with a simple Add method:
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
Our goal is to write a unit test that verifies the Add method functions
correctly.
Step 1: Create a Test Project
1. In Rider (or your chosen IDE), create a new project.
2. Choose a ‘Unit Test Project’ template (the exact naming may
vary slightly between IDEs).
3. Name the project something like CalculatorTests .

Step 2: Install NUnit


1. Navigate to the NuGet Package Manager of your test project.
2. Search for NUnit and install the package.
3. Consider installing the NUnit3TestAdapter package as well
(this enables tests to be discovered by the test runner in your
IDE).

Step 3: Write Your First Test Class


1. Create a new C# class file in your test project. Name it
CalculatorTests (mirroring the name of your calculator class is
a common practice).
2. Add the necessary using statements:
using NUnit.Framework; // For NUnit assertions and attributes
using YourProjectNamespace; // Replace with the namespace of your Calculator class
Step 4: Write Your First Test Method
[TestFixture] // Marks this class as containing tests
public class CalculatorTests
{
    [Test] // Marks this method as a test
    public void Add_PositiveNumbers_ReturnsCorrectSum()
    {
        // Arrange
        var calculator = new Calculator();
        int a = 5;
        int b = 8;
        int expectedResult = 13;

        // Act
        int actualResult = calculator.Add(a, b);

        // Assert
        Assert.AreEqual(expectedResult, actualResult);
    }
}
Step 5: Run Your Test!
Use your IDE’s built-in test runner (e.g., Rider’s or Visual Studio’s Test
Explorer) to locate and run the test. You should see it pass!
Breaking it Down
● Test Structure: NUnit tests are defined by methods with the [Test]
attribute within a class marked with [TestFixture] .
● AAA Pattern (Arrange-Act-Assert): A common structure for tests:
you set up test data, execute the code to be tested, and assert the
expected outcome.
● Assertions: NUnit provides various Assert methods (like
AreEqual ) to verify results.
Additional Resources
● NUnit Documentation: https://docs.nunit.org/
Congratulations! You’ve written your first C# unit test.
Next Steps
This is just the beginning! In the next chapter, we’ll dive deeper into how
you can expand your tests and ensure they’re both effective and
maintainable.
Deep Dive into Unit Test Composition:
Advanced Techniques
In the previous chapter, you took your first steps writing a simple unit test.
Now, let’s go deeper and explore techniques to make your tests more
expressive, robust, and capable of handling complexity.
1. Testing Multiple Scenarios
A single test method often needs to verify different conditions. Let’s extend
our Calculator example:
[Test]
public void Add_HandlesNegativeNumbers_ReturnsCorrectSum()
{
    var calculator = new Calculator();

    // Scenario 1
    Assert.AreEqual(-5, calculator.Add(-3, -2));

    // Scenario 2
    Assert.AreEqual(2, calculator.Add(5, -3));
}
Instead of separate test methods, we use multiple Assert statements for
various scenarios.
2. Parameterized Tests for Data-Driven Testing
Expand the concept above with NUnit’s parameterized tests:
[TestCase(2, -4, -2)]
[TestCase(5, 0, 5)]
[TestCase(8, 3, 11)]
public void Add_HandlesVariousInputs_ReturnsCorrectSum(
    int a, int b, int expectedResult)
{
    var calculator = new Calculator();
    Assert.AreEqual(expectedResult, calculator.Add(a, b));
}
NUnit will run the test once per [TestCase] attribute, each time with the
supplied arguments. (NUnit also offers the [Values] attribute, but beware:
applied to multiple parameters it generates every combination of the
supplied values, which would pair inputs with the wrong expected results
in an example like this one.)
3. The Art of Test-Driven Development (TDD)
While not strictly a composition technique, let’s introduce TDD:
● Red: First, write a failing test for a feature that doesn’t exist yet.
● Green: Write the minimal implementation to only make the test
pass.
● Refactor: Improve your code’s design while ensuring tests still
pass.
TDD guides development with testing as its foundation. We’ll dedicate a
full chapter to it later!
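As a quick sketch of the red-green rhythm, imagine adding a Subtract method that the Calculator class doesn't have yet (the method is hypothetical here):

```csharp
// RED: write the test first. It fails (won't even compile)
// because Subtract doesn't exist yet.
[Test]
public void Subtract_TwoNumbers_ReturnsDifference()
{
    var calculator = new Calculator();
    Assert.AreEqual(3, calculator.Subtract(8, 5));
}

// GREEN: add the minimal implementation to Calculator
// that makes the test pass.
public int Subtract(int a, int b)
{
    return a - b;
}

// REFACTOR: with the test green, improve naming and
// structure as needed; the test keeps you honest.
```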
4. Testing Edge Cases and Error Conditions
Don’t just test the “happy path”. Good tests explore boundaries:
[Test]
public void Add_LargeNumbers_ThrowsOverflowException()
{
    // Note: C# integer addition is unchecked by default and silently
    // wraps around on overflow. This test only passes if Add uses
    // checked arithmetic, e.g. return checked(a + b);
    var calculator = new Calculator();
    Assert.Throws<OverflowException>(() => calculator.Add(int.MaxValue, 1));
}
5. Test Naming as Documentation
Descriptive names make tests self-explanatory:
[Test]
public void Divide_DenominatorZero_ThrowsDivideByZeroException()
{ ... }
6. Organizing with Setup/Teardown
For repeated setup and cleanup steps in multiple tests, use NUnit’s [SetUp]
and [TearDown] attributes on methods within your test class. Be mindful:
these can make tests less isolated if misused.
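For instance, a fresh Calculator instance can be created before every test so that no test depends on state left behind by another (a sketch of the attributes in use):

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    private Calculator _calculator;

    [SetUp] // Runs before every test in this fixture.
    public void SetUp()
    {
        _calculator = new Calculator();
    }

    [TearDown] // Runs after every test; useful for cleanup.
    public void TearDown()
    {
        // Release files, connections, or other shared resources here.
    }

    [Test]
    public void Add_PositiveNumbers_ReturnsCorrectSum()
    {
        Assert.AreEqual(13, _calculator.Add(5, 8));
    }
}
```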
Additional Resources
● Parameterized Tests in NUnit:
https://docs.nunit.org/articles/nunit/writing-tests/parameterized-tests.html
● NUnit Setup and Teardown:
https://docs.nunit.org/articles/nunit/writing-tests/setupteardown.html
Beyond the Basics
We’ve only scratched the surface here. As you get more comfortable with
unit testing, you’ll discover more patterns and tools for handling increasing
complexity.
Up Next
To make sure your tests actually aid in quality code, it’s vital to understand
what constitutes sufficient test coverage. Let’s move on to the next chapter!
Ensuring Comprehensive Test
Coverage: Beyond Unit Tests
Unit tests are your front line of defense, but achieving truly robust software
requires a broader testing strategy. Let’s explore why it’s important to go
beyond unit tests and get acquainted with some of the key testing types to
expand your safety net.
1. Why “More Tests” Isn’t Always the Answer
You might be tempted to simply aim for writing as many unit tests as
possible. However, untargeted testing is inefficient. To maximize the value
of your testing efforts, you need a multi-layered approach. Remember the
test pyramid!
2. Integration Tests: Playing Well Together
Where unit tests examine individual components in isolation, integration
tests check how multiple units collaborate:
● Example: An integration test might verify that a repository class in
your application correctly communicates with the database.
● Purpose: They uncover issues in how components communicate,
ensuring that the ‘wiring’ of your system works as intended.
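A minimal integration test along these lines might look as follows. The repository class, entity, and connection string are all hypothetical; the point is that the test exercises real collaboration with a (test) database rather than an isolated unit:

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryIntegrationTests
{
    [Test]
    public void Save_ThenGetById_RoundTripsTheCustomer()
    {
        // Talks to a real test database rather than a mock.
        var repository = new CustomerRepository(
            "Server=localhost;Database=TestDb;Trusted_Connection=True;");

        var customer = new Customer { Id = 1, Name = "Ada" };
        repository.Save(customer);

        // If the SQL, mapping, or schema is wrong, this fails here,
        // even though each class might pass its unit tests in isolation.
        var loaded = repository.GetById(1);
        Assert.AreEqual("Ada", loaded.Name);
    }
}
```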
3. System Tests: The Big Picture
System tests focus on the behavior of your entire application from the user’s
perspective:
● Example: Simulating a user journey through a web application,
including interactions with the user interface (UI).
● Purpose: Verifies that all the parts of your system integrate
correctly to deliver the expected user experience.
4. End-to-End (E2E) Tests: As Real as it Gets
End-to-end tests thoroughly exercise your system, often involving external
dependencies like databases, web services, or even hardware integrations:
● Example: Testing a full payment processing flow, inclusive of
interactions with an external payment gateway.
● Purpose: Provides the highest level of confidence that the system
works as a whole, replicating real-world user scenarios.
5. Other Important Test Types
Let’s briefly touch on other test types you’ll encounter:
● Performance Tests: Assess if your system meets responsiveness
and scalability targets.
● Acceptance Tests: Ensure that the software fulfills business
requirements as defined by users or stakeholders.
● Regression Tests: Designed to catch reintroduced errors, guarding
against the unintended breakages of previously working
functionality.
The Right Mix
Finding the right testing blend for your project is essential. Not every
application needs extensive E2E tests, but neglecting integration tests
altogether is rarely a good strategy.
Beyond Introduction
As you become a seasoned unit tester, you’ll gain the experience to
strategically use a combination of these test types to maximize the quality
of your software while minimizing the cost and time for testing.
Next Up
Testing and refactoring go hand-in-hand. In the next chapter, we’ll see how
tests empower you to make changes to your code with more confidence.
Empowering Refactoring Practices
with Testing
Refactoring is the art of restructuring your code to improve its readability,
design, and maintainability – all without changing its external behavior.
Automated tests, especially unit tests, are your superpower when it comes
to fearless refactoring.
Why Refactoring Matters
● Adaptable Code: As projects evolve, refactoring keeps your
codebase flexible and easier to adjust to new requirements.
● Preventing Rot: Untouched, code tends to degrade over time.
Refactoring helps fight complexity “rot” and technical debt.
● Developer Happiness: Clean, well-designed code is simply more
enjoyable to work with!
Tests: Your Refactoring Safety Net
Imagine refactoring without tests. Every change introduces a nagging fear
of accidentally breaking something. With a solid set of tests, you get this
amazing advantage:
1. Make Changes: Restructure your code as needed.
2. Run Tests: Your test suite automatically runs.
3. Instant Feedback: If tests pass, you have high confidence your
changes haven’t introduced unexpected problems. If a test fails,
it pinpoints exactly where you need to focus.

Illustrative Example
Let’s say you have a complex method CalculateOrderTotal that’s become
difficult to understand. With tests covering various CalculateOrderTotal
scenarios, you can:
● Break down the method into smaller, more focused functions.
● Introduce clearer variable names.
● Optimize the calculation logic.
…all while tests keep verifying it still gives the correct output!
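An Extract Method pass over such a method might look like this sketch (the Order model and the discount rule are invented for illustration; it assumes a using System.Linq; directive):

```csharp
// Before: one long method mixing concerns.
public decimal CalculateOrderTotal(Order order)
{
    decimal total = 0;
    foreach (var item in order.Items)
        total += item.Price * item.Quantity;
    if (total > 100)
        total *= 0.9m; // bulk discount
    return total;
}

// After: smaller, well-named functions. The existing tests for
// CalculateOrderTotal still pass, because its external behavior
// is unchanged.
public decimal CalculateOrderTotal(Order order)
{
    var subtotal = SumLineItems(order);
    return ApplyBulkDiscount(subtotal);
}

private decimal SumLineItems(Order order) =>
    order.Items.Sum(item => item.Price * item.Quantity);

private decimal ApplyBulkDiscount(decimal subtotal) =>
    subtotal > 100 ? subtotal * 0.9m : subtotal;
```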
Embracing Change with Confidence
Refactoring with a strong test suite in place transforms your mindset.
Instead of fearing change, you embrace it, confident that your tests will
quickly signal if you make a misstep.
Types of Refactoring
Here are common refactoring techniques where tests shine:
● Extract Method: Decouple a long method into smaller, well-named
functions.
● Rename: Improve readability with more descriptive class, method,
or variable names.
● Introduce Design Patterns: Apply established patterns (Observer,
Factory, etc.) to enhance structure and flexibility.
Additional Resources
● Refactoring.Guru (Catalog of Refactorings):
https://refactoring.guru/
● Martin Fowler: Refactoring:
https://martinfowler.com/books/refactoring.html (a classic book on refactoring)
Key Takeaway
Refactoring and testing form a powerful cycle. Tests give you the courage
to refactor, and refactoring often leads to simpler code that is easier to test.
This promotes long-term code health.
Up Next
Now that you understand the importance of testing, let’s dive into the
specifics of writing excellent unit tests using the powerful NUnit
framework.
Let’s get testing with NUnit!
Leveraging NUnit for Efficient Testing
in Visual Studio
NUnit is a battle-tested unit testing framework for C# and the .NET
ecosystem. When paired with Visual Studio, you get a seamless testing
experience. Let’s explore how to use them in tandem.
Prerequisites
● Basic Visual Studio experience (creating projects, etc.)
● Understanding of unit testing concepts from previous chapters
1. Installing NUnit
The easiest way is using Visual Studio’s NuGet Package Manager:
1. Right-click on your test project -> “Manage NuGet
Packages…”
2. Search for ‘NUnit’ and install the NUnit package.
3. Install the ‘NUnit3TestAdapter’ package as well. This enables
Visual Studio to discover and run your tests.
2. Visual Studio’s Test Explorer
Your primary interface for interacting with NUnit tests is the Test Explorer
window:
● Find it: View -> Test Explorer (or use the search bar in Visual
Studio)
● Run Tests: After installing NUnit, build your project. Tests will
appear in the Test Explorer. You can run them individually or all at
once.
● Results: Test Explorer displays clear results (pass/fail/skipped)
along with any error output.
3. Writing Your First NUnit Test (Revisited)
Let’s create a new test. Recall this structure from a previous chapter:
using NUnit.Framework;
namespace CalculatorTests
{
[TestFixture]
public class CalculatorTests
{
[Test]
public void Add_PositiveNumbers_ReturnsCorrectSum()
{
// ... (Arrange, Act, Assert as before)
}
}
}
Key Points
● NUnit.Framework: Include the necessary namespace.
● [TestFixture]: Marks a class as containing tests.
● [Test]: Marks individual test methods.
4. Running and Debugging Tests
● Run: Click the ‘Run’ icons in the Test Explorer.
● Debugging: Set breakpoints inside your test methods just like
regular code. Step through to investigate test failures in detail.
5. Test Organization
● Test Fixtures: Use multiple [TestFixture] classes to group tests
logically (e.g., AccountServiceTests , OrderRepositoryTests ).
● Namespaces: Use namespaces to further organize larger test
projects.
6. Additional NUnit Features (Just a Taste)
We’ll dive deeper later, but here’s a preview of further capabilities:
● Parameterized Tests: [TestCase] attribute for data-driven testing.
● Setup/Teardown: [SetUp] for common test initialization,
[TearDown] for cleanup.
● Assertions: NUnit provides a rich set of assertion methods (e.g.,
AreEqual , Throws , etc.)
Additional Resources
● NUnit Documentation: https://docs.nunit.org/
● Visual Studio Test Tools: https://docs.microsoft.com/en-
us/visualstudio/test/
Power at Your Fingertips
NUnit and Visual Studio combine to provide a smooth and powerful testing
workflow directly within your favorite IDE. As you write more complex
tests, you’ll increasingly value this integration.
Next: Test-Driven Development
Now that you’re set up with NUnit, let’s explore a philosophy that can
revolutionize how you write software: Test-Driven Development (TDD)!
Embracing Test-Driven Development:
Principles and Practices
Test-Driven Development (TDD) is more than just writing tests. It’s a
philosophy that flips the traditional development process, letting tests drive
the way you design and write your code. Let’s dive into its core principles
and benefits.
The TDD Cycle
TDD revolves around a short, iterative cycle:
1. Red: Begin by writing a small test that fails (because the
functionality doesn’t exist yet).
2. Green: Write the bare minimum production code to make the
test pass. Don’t worry about perfect design at this stage.
3. Refactor: With a passing test, now confidently restructure and
improve your code design, ensuring tests still pass.
Benefits of TDD
● Focus and Guidance: TDD keeps you focused on tiny units of
functionality at a time, leading to incremental progress.
● Early Feedback: Failing tests quickly highlight if your
implementation is deviating from the desired behavior.
● Design Enforcer: The refactor step in the cycle encourages
better design decisions, promoting testable, maintainable code.
● Built-in Regression Safety Net: As your codebase grows, your
TDD tests form an increasingly robust safety net against
regressions.
A Practical Example
Let’s revisit our Calculator class:
1. Red: Write a test like
Add_PositiveNumbers_ReturnsCorrectSum() . It expects the
result but initially fails since Add doesn’t exist.
2. Green: Create the simplest Add method with hardcoded logic
to make the test pass (e.g., just returning a fixed sum for now).
3. Refactor: Now, replace the hardcoded logic with a proper
calculation, ensuring the test (and any others you’ve since
added) still pass.
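The three phases above might look like this in code (a sketch assuming the Calculator example from earlier chapters):

```csharp
// Red: the test exists, but Calculator.Add does not yet, so it fails.
[Test]
public void Add_PositiveNumbers_ReturnsCorrectSum()
{
    var calculator = new Calculator();
    Assert.AreEqual(8, calculator.Add(5, 3));
}

// Green: the simplest thing that makes the test pass -- even hardcoding.
public int Add(int a, int b)
{
    return 8; // deliberately naive; just enough to go green
}

// Refactor: replace the hardcoded value with the real calculation;
// the test still passes.
public int Add(int a, int b)
{
    return a + b;
}
```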
TDD Mindset Shift
Adopting TDD requires practice and a shift in mindset. Instead of jumping
straight into coding, learn to think “test first.” Here are some tips for this
transition:
● Baby Steps: Start with very small units of behavior to test.
● Don’t Overthink: In the ‘green’ phase, prioritize merely making
the test pass, not writing the most elegant solution.
● Trust the Process: Initially, it might feel slower, but as your test
suite grows, you’ll gain speed and confidence.
Additional Resources
● Introduction to Test-Driven Development (TDD):
https://www.agilealliance.org/glossary/tdd/
● The Three Rules of TDD:
https://www.agilealliance.org/glossary/tdd/
Beyond the Basics
TDD mastery is a continuous journey. As you gain experience, you’ll
discover patterns that aid in designing testable systems and how TDD
influences your approach to problem-solving.
Section 2:
Mastering Unit Testing Fundamentals

Understanding the Core Principles of
Unit Testing
While tools like NUnit provide the mechanics of writing tests, a solid grasp
of fundamental principles will guide you in creating tests that are truly
effective. Let’s explore these cornerstones.
1. Isolation: The Heart of Unit Testing
● A true unit test targets the smallest testable piece of code in
complete isolation from other parts of your system.
● Why It Matters: Isolation allows you to precisely pinpoint the
source of failures. If a test fails, you know exactly where the issue
lies.
2. Fast
● Unit tests must be blazingly fast to execute.
● Why It Matters: Slow tests discourage frequent testing. You want
to be able to run your full test suite in seconds, allowing rapid
feedback as you code.
3. Independent and Repeatable
● Each unit test should be self-contained, not relying on the execution
order or side effects of other tests.
● Tests should consistently produce the same results each time they
are run, regardless of external factors.
● Why It Matters: Unreliable tests erode trust in your test suite.
Order-dependent tests can hide errors and make debugging a
nightmare.
4. Self-Validating
● A unit test should have a clear, unambiguous pass or fail outcome.
Avoid subjective interpretations.
● Why It Matters: Automated testing is most valuable when it
provides definitive, automated verdicts on how your code is
functioning.
5. Timely
● Ideally, write unit tests alongside your production code or even
before (TDD!).
● Why It Matters: Retrofitting tests is harder. Testing early forces
you to think about your code’s testability and can shape better
design.
Common Traits of Good Unit Tests
Let’s extend these principles into more practical observations:
● Focused: A good unit test verifies one specific behavior of a code
unit.
● Readable: Tests serve as documentation. Their intent should be
clear.
● Maintainable: As your code evolves, tests should be easy to update
with minimal breakage.
Principled vs. Dogmatic
Treat these as guiding principles, not unbreakable laws. There will be
occasional scenarios where slight compromises are justifiable (e.g., a very
minor, well-controlled dependency might be unavoidable).
Additional Resources
● The Art of Unit Testing by Roy Osherove:
https://www.manning.com/books/the-art-of-unit-testing-second-
edition (A well-regarded book on Unit Testing)
● Principles of Unit Testing
https://martinfowler.com/bliki/UnitTest.html
Essential Traits of Effective Unit Tests
In the previous chapter, we discussed the core principles that guide unit
testing. Let’s translate those into actionable traits you’ll find in well-crafted
unit tests.
1. Focused and Specific
Each test should zero in on a single, well-defined behavior of the code
being tested. Here’s why this matters:
● Pinpointing Problems: Granular tests quickly reveal the root cause
of a failure.
● Avoiding Overlap: Tests with broad scopes can result in
redundancy or miss subtle edge cases altogether.
2. Readable and Self-Descriptive
Tests act as living documentation of your code’s intended behavior.
Prioritize clarity:
● Descriptive Naming: Test method names should communicate the
scenario being tested (e.g.,
CalculateDiscount_AppliesSeniorDiscount_WhenAgeOver65 )
● Clear Structure: Employ the AAA (Arrange-Act-Assert) pattern.
● Minimal Comments: Your test structure should explain itself for
the most part; use comments sparingly for complex setups.
3. Trustworthy and Reliable
A flaky test suite erodes confidence. Strive for consistency and avoid false
alarms caused by:
● External Dependencies: Minimize the use of real databases, file
systems, etc., as these can introduce uncontrolled factors.
● Test Order Dependency: Tests that inadvertently rely on previous
executions lead to unpredictable outcomes.
● Non-Determinism: If your logic uses randomness or depends on
elements like the current time, figure out ways to control those
factors within the test.
4. Maintainable
Software changes, and your tests need to change with it. Here’s how to
minimize test maintenance headaches:
● Avoid Over-Mocking: Overly complex mocking setups lead to
brittle tests that break easily when the underlying code changes.
● Test Design: Follow the same design principles in your tests as
your production code. Well-structured tests are easier to adjust.
● Parallel to Refactoring: When refactoring production code,
refactor your tests alongside, keeping them in sync.
Example: Revisiting Our Calculator
Let’s imagine a poorly written Calculator test:
[Test]
public void CalculatorCanDoMath()
{
var calculator = new Calculator();
Assert.AreEqual(5, calculator.Add(2, 3));
Assert.AreEqual(10, calculator.Multiply(5, 2));
Assert.AreEqual(2, calculator.Subtract(5, 3));
}
Weaknesses:
● Not Focused: Tests multiple operations at once.
● Poor Naming: Doesn’t convey test intent.
● Fragile: If the Add implementation breaks, all the assertions could
fail, obscuring the source of the error.
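A more focused rewrite splits that single test into one test per behavior, each with a descriptive name (a sketch following the naming convention used in this book):

```csharp
[Test]
public void Add_PositiveNumbers_ReturnsCorrectSum()
{
    var calculator = new Calculator();
    Assert.AreEqual(5, calculator.Add(2, 3));
}

[Test]
public void Multiply_PositiveNumbers_ReturnsCorrectProduct()
{
    var calculator = new Calculator();
    Assert.AreEqual(10, calculator.Multiply(5, 2));
}

[Test]
public void Subtract_PositiveNumbers_ReturnsCorrectDifference()
{
    var calculator = new Calculator();
    Assert.AreEqual(2, calculator.Subtract(5, 3));
}
```

Now a failure in Add leaves the Multiply and Subtract tests green, pointing you straight at the broken operation.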
Additional Resources
● Roy Osherove on maintainable tests:
https://www.manning.com/books/the-art-of-unit-testing-second-
edition
● XUnit Test Patterns: Refactoring Test Code:
https://www.amazon.com/xUnit-Test-Patterns-Refactoring-
Code/dp/0131495054
The Journey, Not the Destination
Writing truly excellent unit tests is a continuous learning process. Begin
with these traits in mind, and as you gain more experience, your ability to
balance these considerations will naturally grow.
Up Next
Understanding how much to test and what scenarios to focus on is a crucial
skill. In the next chapter, we’ll tackle the concept of defining your test
scope wisely.
Defining Test Scope: Choosing What
to Test Wisely
Exhaustively testing every possible input and code path is often impractical
and can lead to an overabundance of tests. Let’s explore how to strategically
define your test scope to maximize the value of your unit tests.
1. The Logic of Your Code
Focus primarily on testing the decision points and algorithmic logic within
your methods:
● Conditional Statements (if/else): Ensure that each branch of your
code is executed under different test cases.
● Loops: Verify that loops iterate correctly, handle edge cases like
zero iterations, and work with different input sizes.
● Calculations: Test your formulas and computational logic.
2. Edge Cases and Boundary Values
Pay special attention to input values that lie at the boundaries of valid and
invalid ranges.
● Numbers: Consider zero, negative numbers, very large numbers, or
any values specific to your calculation logic.
● Strings: Test empty strings, strings containing special characters, or
strings of maximum allowed length.
● Collections: Check for empty collections, those with one element,
with the maximum number of elements, etc.
3. Error Conditions
How does your code behave when things go wrong? Include tests that:
● Trigger Exceptions: Verify that your code throws the correct
exceptions under invalid conditions and those are handled
gracefully.
● Invalid Inputs: Ensure bad input is detected and appropriately
rejected.
4. Risk Assessment
Identify areas of code with these characteristics:
● Complexity: Complex logic is more prone to errors; therefore, it
warrants more thorough testing.
● Criticality: If the failure of a code unit would have severe
consequences for your application, prioritize rigorous testing.
● Rate of Change: Frequently modified code areas might warrant a
richer set of tests to catch regressions more easily.
5. Evolving Test Scope
Avoid a “write once and forget” mentality:
● New Features: As you expand your application, new tests will
likely be required to cover the added functionality.
● Bug Fixes: When fixing a bug, write a test that reproduces the bug
first to ensure the fix works and prevent re-occurrences.
● Refactoring: As your code structure changes, adjust your tests
accordingly.
Example: Back to the Calculator
Let’s suppose our Calculator now has a more complex Divide method:
public double Divide(double a, double b)
{
if (b == 0)
{
throw new DivideByZeroException();
}
double result = a / b;
// Guard against results outside double's representable range
if (double.IsInfinity(result))
{
throw new OverflowException("Result is outside the range of a double.");
}
return result;
}
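Applying the scoping guidelines above, a reasonable test set for Divide might look like this (a sketch; the exact cases depend on your requirements):

```csharp
[Test]
public void Divide_PositiveNumbers_ReturnsCorrectQuotient()
{
    var calculator = new Calculator();
    Assert.AreEqual(5.0, calculator.Divide(10, 2));
}

[Test]
public void Divide_ByZero_ThrowsDivideByZeroException()
{
    var calculator = new Calculator();
    Assert.Throws<DivideByZeroException>(() => calculator.Divide(10, 0));
}

[Test]
public void Divide_ZeroNumerator_ReturnsZero()
{
    var calculator = new Calculator();
    Assert.AreEqual(0.0, calculator.Divide(0, 5));
}
```

The first test covers the normal path, the second an error condition, and the third a boundary value, mirroring categories 1 through 3 above.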
Additional Resources
● Boundary Value Analysis:
https://en.wikipedia.org/wiki/Boundary-value_analysis
● Equivalence Partitioning:
https://en.wikipedia.org/wiki/Equivalence_partitioning
● Example-based thinking when determining test scenarios:
https://www.agilealliance.org/resources/experience-
reports/example-based-testing-an-introduction/
Striking the Right Balance
Finding the ideal test scope is an ongoing balance between thoroughness
and practicality. With practice, you’ll get better at pinpointing the tests that
provide the most return on investment for your time and effort.
Up Next
A clear test structure and descriptive naming dramatically improve the
readability of your tests. Let’s delve into effective test organization in the
next chapter.
Structuring Your Tests: The Art of
Naming and Organization
Well-structured and named tests are essential for readability and
maintainability. They enhance the value of your tests as living
documentation of your system. Let’s dive into the key aspects.
1. Test Class Organization
● One Test Class per Production Class: Mirroring the structure of
your production code improves navigability. (e.g., CalculatorTests
to test the Calculator class)
● Nested Classes for Fine-Grained Grouping (Optional): For
classes with many methods, nest test classes to organize – like
CalculatorTests.AdditionTests specifically for testing the addition
aspect.
2. Test Method Naming
Descriptive method names act as a story, telling you what a test is verifying.
A common and effective convention is:
MethodName_ScenarioUnderTest_ExpectedBehavior
● MethodName: The method of your production code being tested.
● ScenarioUnderTest: Specific input conditions or setup.
● ExpectedBehavior: The anticipated result or outcome.
Example: Divide_PositiveNumbers_ReturnsCorrectQuotient
3. The Arrange-Act-Assert (AAA) Pattern
Within each test method, the AAA pattern fosters clarity:
● Arrange: Set up objects, data, and any preconditions for the test.
● Act: Invoke the actual method or unit of code you are testing.
● Assert: Check the expected outcomes using assertions.
4. Folder Structure
Mirror the project structure of your production code within your test
project. This makes it easy to find tests related to specific components.
5. Consistency is King
Choose a style and stick with it throughout your test suite. Consistency
makes tests easier to understand for everyone on your team.
Example with Our Calculator
Let’s see everything in action:
[TestFixture]
public class CalculatorTests
{
[Test]
public void Add_PositiveNumbers_ReturnsCorrectSum()
{
// Arrange
var calculator = new Calculator();
int a = 5, b = 7;
int expectedResult = 12;
// Act
int actualResult = calculator.Add(a, b);
// Assert
Assert.AreEqual(expectedResult, actualResult);
}
// More tests follow for subtraction, multiplication, etc.
}
Additional Resources
● Roy Osherove: The Art of Unit Testing: Naming Tests
https://osherove.com/blog/2005/4/3/naming-standards-for-unit-
tests.html
● NCrunch: Good Unit Test Naming Conventions
https://www.ncrunch.net/documentation/reference_writing_good_te
st_names
Test Structure as a Success Factor
Well-organized tests are easier to understand, debug, and update when
requirements change. As your test suite grows, the investment in proper
structure pays huge dividends.
Next Up
Your IDE can significantly enhance your testing workflow. In the next
chapter, we’ll see how to leverage Rider’s features to optimize your unit
testing experience.
Let’s get those tests streamlined with your IDE!
Leveraging Rider IDE: Optimizing
Your Unit Testing Workflow
Rider, with its rich feature set and intelligent code insights, can significantly
enhance your unit testing experience. Let’s explore how it can streamline
your workflow.
1. Seamless Visual Studio Compatibility (If Relevant)
If you’re transitioning from Visual Studio, Rider offers extensive support
for NUnit projects and the familiar Test Explorer interface. This eases the
learning curve.
2. Test Discovery and Execution
● Test Explorer Integration: Rider automatically discovers NUnit
tests and integrates with its Test Explorer-like window.
● Run Tests at Many Levels: Execute individual tests, entire test
fixtures, groups of tests, or your whole test suite directly from the
IDE.
● Instant Feedback: Clear indicators for passed (green), failed (red),
or skipped (blue) tests.
3. Debugging Power at Your Fingertips
● Step Through Test Code: Use breakpoints and debug tests just like
your application code. Analyze variables, step through execution
paths, and quickly identify the root cause of failures.
● Live Stack Trace: When a test fails, Rider’s stack trace lets you
jump straight to the failing line in your code.
4. Navigation and Code Refactoring Within Tests
● Go to Declaration/Implementation: Effortlessly navigate between
test code and corresponding production code using keyboard
shortcuts (like F12).
● Refactor with Confidence: Rider’s refactoring tools work reliably
within test code. Rename methods, introduce parameters, etc., while
tests automatically update, keeping them in sync.
5. Code Generation and Snippets
● Generating Test Fixtures: Rider can create basic test fixture
structures ( [TestFixture] , empty test methods) for you, saving you
keystrokes.
● NUnit Snippets: Use built-in snippets (e.g., type ‘test’ and expand)
for common NUnit test structures for rapid test writing.
6. Continuous Testing (Advanced Feature)
● Background Test Execution: Rider can continuously run relevant
tests in the background as you code. Get immediate feedback on
whether changes accidentally break something.
● Inline Notifications: See pass/fail results subtly integrated within
the code editor, letting you address test failures without switching
context.
Example Workflow
1. Write a Test (or let Rider generate the basics)
2. Discover: Rider adds your test to the testing tree.
3. Run and Debug: Execute the test. If it fails, debug to pinpoint
the problem in your production code.
4. Refactor: Fix your production code or tests with Rider’s
refactoring tools.
5. Continuous Testing (Optional): Get real-time code change
impact as you work.
Additional Resources
● JetBrains Rider Unit Testing Documentation:
https://www.jetbrains.com/help/rider/Getting_Started_with_Unit_Te
sting.html
● JetBrains Blog Posts on Testing
https://blog.jetbrains.com/dotnet/tag/testing/
Efficiency and Insight
Rider doesn’t just run your tests; it transforms unit testing into a fluid
conversation between your code and its tests. This tight integration helps
you write better tests faster and fix issues with surgical precision.
Next Up
Let’s put Rider to work on a real-world class with hands-on
guidance. The next chapter will provide a practical walkthrough for creating
your first unit tests in Rider.
Hands-On: Crafting Your First Unit
Test in Rider
Let’s get practical and write your first C# unit test using Rider. We’ll stick
with our familiar Calculator example for consistency.
Prerequisites
● Rider installed
● Basic familiarity with C# syntax
Step 1: Create Your Projects
1. New Solution: In Rider, select “File” -> “New Solution”.
2. Project Types:
○ Choose a “Class Library (.NET Framework)” or similar for
your production code ( CalculatorProject – adjust naming
as you like).
○ Create a “Unit Test Project (.NET Framework)” or similar
for your tests ( CalculatorTests ).
3. Solution Structure: Ensure both projects are added to the same
solution.

Step 2: Add the Calculator Class
In your CalculatorProject , create a Calculator.cs file:
public class Calculator
{
public int Add(int a, int b)
{
return a + b;
}
}
Step 3: Install NUnit
1. Right-click your test project > “Manage NuGet
Packages…”
2. Browse: Search for “NUnit” and install the package.
3. Also Install: Install the “NUnit3TestAdapter” package (this
enables test discovery within Rider).
Step 4: Create Your First Test Class
In your CalculatorTests project, create a CalculatorTests.cs file:
using NUnit.Framework;
namespace CalculatorTests
{
[TestFixture]
public class CalculatorTests
{
[Test]
public void Add_PositiveNumbers_ReturnsCorrectSum()
{
// Arrange
var calculator = new Calculator();
// Act
int result = calculator.Add(5, 8);
// Assert
Assert.AreEqual(13, result);
}
}
}
Step 5: Discover and Run Your Test
1. Build: Build your solution (Rider usually prompts you to do
so).
2. Test Explorer: Rider should automatically discover your test.
You might find a dedicated “Unit Tests” window, similar to
Visual Studio’s Test Explorer.
3. Run: Right-click on the
Add_PositiveNumbers_ReturnsCorrectSum test (or the entire
fixture) in the Test Explorer, and select “Run”.
Step 6: Success (Hopefully!)
You should see your test pass with a satisfying green indicator!
Troubleshooting
● Tests Not Found: Make sure you’ve installed the
“NUnit3TestAdapter” and rebuilt your project.
● Incorrect Version: Ensure Rider uses a .NET framework supported
by your NUnit version.
Key Takeaways
● Solution Structure: Tests reside in a separate project.
● NUnit Attributes: [TestFixture] and [Test]
● AAA Pattern: Arrange, Act, Assert
Next Steps
● Add more tests for different Calculator scenarios (negative
numbers, subtraction, etc.).
● Experiment with Rider’s test running and debugging features.
Congratulations! You’ve taken your first substantial step into the realm of
unit testing. Keep building tests, and you’ll see how they transform your
development experience.
Up Next
Let’s delve into a testing approach known as ‘black-box’ testing where we
focus on inputs, outputs, and behaviors without worrying about the internal
workings of your code.
Shedding Light on Black-box Testing
Techniques
Black-box testing is a powerful approach where you focus squarely on the
inputs and expected outputs of your code units, treating their internal
implementation details as a “black box.”
Why “Black Box”?
● Abstraction: You don’t concern yourself with how a component
produces its results, only that it produces the correct results for a
given set of inputs.
● Flexibility: If the internal implementation changes (but the input-
output behavior remains the same), your black-box tests shouldn’t
need to be modified.
Key Principles of Black-Box Testing
1. Focus on the Interface, Not the Implementation: Design
tests around the publicly exposed methods and properties of a
class, and what they’re supposed to do.
2. Cover Input and Output Scenarios:
○ Valid inputs: Test cases that should lead to expected
results.
○ Invalid inputs: How does your code handle bad data or
edge cases (e.g., negative numbers in a square root
function)?
○ Boundary values: Inputs at the extremes of valid ranges
(zero, maximum values, etc.).
Example: Our Calculator
Instead of worrying about the specific calculation within the Add method,
we’d focus on test cases like this:
● Positive numbers: (5, 3) -> Should result in 8
● Negative numbers: (-2, -5) -> Should result in -7
● Zero: (0, 10) -> Should result in 10
Techniques within Black-Box Testing
● Equivalence Partitioning: Dividing input data into classes that you
expect to be processed similarly. For example, all positive numbers
versus all negative numbers, instead of testing hundreds of
individual numbers.
● Boundary Value Analysis: Rigorously testing inputs at the edges
of those equivalence classes (e.g., the smallest valid positive
number, the largest valid one).
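These two techniques pair naturally with parameterized tests, which a later chapter covers in depth. A brief sketch using the Add example (the chosen partitions are illustrative):

```csharp
// One representative per equivalence class, plus a boundary value:
[TestCase(5, 3, 8)]     // both inputs positive
[TestCase(-2, -5, -7)]  // both inputs negative
[TestCase(0, 10, 10)]   // zero as a boundary value
public void Add_RepresentativeInputs_ReturnsExpectedSum(
    int a, int b, int expected)
{
    var calculator = new Calculator();
    Assert.AreEqual(expected, calculator.Add(a, b));
}
```

Each case stands in for a whole class of inputs, keeping the suite small while still probing the interesting regions of the input space.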
Benefits of Black-Box Testing
● Resilient Tests: Since they’re less coupled to the implementation,
black-box tests are more likely to survive refactoring.
● Clear Specifications: Black-box tests help define the expected
behavior of your components.
● Non-Developer Friendly: People less familiar with the underlying
code can still contribute by designing input/output scenarios.
Black-Box vs White-Box
Black-box testing complements “white-box” testing, where you do have
knowledge of the implementation. A healthy testing strategy often employs
a combination of both.
Additional Resources
● Black Box Testing Techniques https://www.guru99.com/black-
box-testing.html
● ISTQB Glossary of Testing Terms
https://www.istqb.org/downloads/syllibus/glossary-of-terms/
A Strategy, Not an All-Encompassing Solution
Black-box testing alone doesn’t cover all scenarios (e.g., complex internal
logic paths). Use it strategically alongside other testing techniques.
Up Next
Sometimes, units of code need to work together to perform a task. The next
chapter will focus on managing the setup and teardown necessary to get
your tests into a known, reliable state.
Managing Test Setup and Teardown:
Ensuring Test Independence
Isolated tests are reliable tests. A core aspect of isolation is ensuring each
test starts with a clean slate and doesn’t leave any side effects that might
corrupt subsequent tests. This is where setup and teardown methods come
into play.
The Challenge of Shared State
Imagine these scenarios:
● Test Modifying a Global Variable: One test modifies a global
variable, and a subsequent test unknowingly relies on that modified
version.
● Test Manipulating a Database: A test adds data to a database. If
not cleaned up properly, it could interfere with other tests that
expect an empty database.
Setup: Getting Ready for Action
Setup methods are used for common pre-test arrangements:
● Instantiating Objects: Creating the objects under test and any
supporting objects.
● Initializing Variables: Setting variables to known values.
● Mocking Dependencies: Configuring mock objects (we’ll cover
mocking in detail later).
● Preparing External Resources: Setting up files, database records,
etc., if needed.
Teardown: Cleaning Up the Scene
Teardown methods reverse actions performed in setup, ensuring a clean
environment:
● Releasing Resources: Closing files, connections, etc.
● Deleting Temporary Data: Removing test records from a database.
● Resetting Mocks: Clearing mock object behavior.
NUnit: Setup and Teardown Attributes
NUnit provides attributes to designate your setup and teardown methods:
● [SetUp]: Runs before each test within a [TestFixture].
● [TearDown]: Runs after each test.
Example: A (Slightly Contrived) File Test
[TestFixture]
public class FileProcessorTests
{
private string _tempFilePath;
[SetUp]
public void Setup()
{
_tempFilePath = Path.GetTempFileName();
}
[TearDown]
public void TearDown()
{
File.Delete(_tempFilePath);
}
[Test]
public void WriteToFile_CreatesFileIfItDoesntExist() {
// Test logic here...
}
}
Additional Resources
● NUnit Documentation - SetUp and TearDown:
https://docs.nunit.org/articles/nunit/writing-tests/setup-and-
teardown.html
Beyond the Basics
● OneTimeSetUp/OneTimeTearDown: Use [OneTimeSetUp] for
actions that should happen once before all tests in a fixture.
● Hierarchical Setup: Setups at higher levels (the class) run before
those defined within individual test methods.
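A brief sketch of [OneTimeSetUp] and [OneTimeTearDown] working alongside the per-test attributes (the DatabaseConnection type and its methods are hypothetical, for illustration only):

```csharp
[TestFixture]
public class RepositoryTests
{
    private DatabaseConnection _connection; // hypothetical helper type

    [OneTimeSetUp]
    public void FixtureSetup()
    {
        // Runs once before any test in this fixture: expensive, shared setup
        _connection = DatabaseConnection.Open("test-db");
    }

    [SetUp]
    public void Setup()
    {
        // Runs before each test: reset to a known, clean state
        _connection.ClearAllTables();
    }

    [OneTimeTearDown]
    public void FixtureTeardown()
    {
        // Runs once after all tests in this fixture have finished
        _connection.Dispose();
    }
}
```

The rule of thumb: put expensive, read-only setup in [OneTimeSetUp], and anything a test could dirty in [SetUp] so every test still starts clean.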
Key Takeaway
Effective use of setup and teardown is essential for creating trustworthy,
independent unit tests. By ensuring a clean testing environment, you can
catch bugs earlier and with more confidence.
Up Next
Writing the same test variations with slightly altered inputs over and over
again is tedious. Let’s explore parameterized tests, a powerful way to
streamline this process.
Unleashing the Power of
Parameterized Tests
Parameterized tests let you streamline your testing process by running a
single test multiple times with different sets of inputs and expected outputs.
This reduces redundancy and helps you cover diverse scenarios efficiently.
Why Parameterized Tests Matter
1. Reduce Verbosity: Instead of multiple near-identical test
methods, you have a single parameterized test. This keeps your
test suite cleaner.
2. Increased Coverage: Easily test a broader range of inputs,
including edge cases and potential error scenarios, with a few
lines of code.
3. Data-Driven: Your test logic becomes a template, and
variations are driven by the data you provide.
NUnit: The [TestCase] Attribute
NUnit’s [TestCase] attribute is the key to creating parameterized tests:
[TestCase(5, 2, 7)]
[TestCase(10, -3, 7)]
public void Add_CalculatesCorrectSum(int a, int b, int expectedResult)
{
var calculator = new Calculator();
Assert.AreEqual(expectedResult, calculator.Add(a, b));
}
Let’s break it down:
● [TestCase] attributes: Each [TestCase] defines a set of input
parameters (5, 2, 7 in the first case) and the expected result.
● Method Parameters: Your test method’s parameters align with the
values you provide in the [TestCase] attributes.
Test Runner Magic
NUnit’s test runner will execute the Add_CalculatesCorrectSum test twice,
injecting the different TestCase data each time. It’s like having multiple
tests in one!
Sources for Test Data
● Inline: As in the example, provide values directly within the
[TestCase] attributes.
● [TestCaseSource] : Reference a variable, property, or method that
returns an IEnumerable of test cases for more complex or dynamic
sets of data.
● Custom Attributes: Create your own attributes to fetch test data
from external files (CSV, XML, etc.).
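Here is a brief sketch of [TestCaseSource] feeding a test from a static property (the property name is illustrative):

```csharp
public class CalculatorTests
{
    // A source of test cases; each TestCaseData carries one set of arguments
    private static IEnumerable<TestCaseData> AddCases
    {
        get
        {
            yield return new TestCaseData(5, 2, 7);
            yield return new TestCaseData(10, -3, 7);
            yield return new TestCaseData(0, 0, 0);
        }
    }

    [TestCaseSource(nameof(AddCases))]
    public void Add_CalculatesCorrectSum(int a, int b, int expectedResult)
    {
        var calculator = new Calculator();
        Assert.AreEqual(expectedResult, calculator.Add(a, b));
    }
}
```

Because the source is ordinary code, it can compute cases, read them from a file, or share them across several test methods.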
Additional Resources
● NUnit Documentation - Parameterized Tests
https://docs.nunit.org/articles/nunit/writing-
tests/attributes/testcase.html
Beyond the Basics
● Descriptive Arguments: Use optional named arguments within
[TestCase] , such as TestName for clearer test reports, or
ExpectedResult to assert on the method’s return value (e.g.,
[TestCase(5, 2, ExpectedResult = 7)] ).
● Combining with Setup/Teardown: Parameterized tests can use
[SetUp] and [TearDown] for any pre/post-test actions just like
regular tests.
When to Use Parameterized Tests
● A function’s logic applies to various inputs, and the output varies
based on that input.
● Testing with boundary values and edge cases.
Next Up
Sometimes, it’s necessary to temporarily disable or exclude tests. Let’s
explore strategies for selectively ignoring and skipping tests.
Selective Testing: Strategies for
Ignoring and Skipping Tests
Sometimes you need to temporarily control which tests execute within your
test suite. This might be due to tests that are under development, depend on
unavailable external resources, or those marked with specific conditions.
Let’s explore how to manage this.
1. Ignoring Tests
Ignored tests are completely excluded from your test run. NUnit offers a
few mechanisms:
● [Ignore] Attribute: The simplest way. Apply [Ignore] to a test
method or an entire test fixture.
[Ignore("Functionality not yet implemented")]
[Test]
public void NewFeature_PlaceOrder_CalculatesCorrectTotal()
{
// ... test logic
}
● Reason (Optional): Include a string reason for better reporting on
ignored tests.
● Explicit Attribute (Rare): When needed for advanced scenarios,
use the [Explicit] attribute. These tests only run when explicitly
selected.
2. Skipping Tests (Conditional)
Skipped tests are recognized by the test runner but are not executed unless
specific conditions are met. NUnit provides ways to achieve this:
● Assume.That : For making assumptions about the environment. If
the condition fails, the test is skipped.
[Test]
public void DatabaseWrite_WhenConnected_SavesData()
{
Assume.That(_database.IsConnected, "Database must be connected");
// ... test logic
}
● [TestCase] with non-matching inputs: Parameterized tests whose
input parameters fail to satisfy a condition within the test method
can be conditionally skipped.
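The conditional-skip idea can be sketched inside a parameterized test like this (the OS check is just an illustrative condition, not from the original text):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class PathTests
{
    [TestCase(@"C:\temp\file.txt")]
    [TestCase("/tmp/file.txt")]
    public void Path_IsRecognized(string path)
    {
        // Skip the Windows-style case when not running on Windows.
        if (path.Contains('\\'))
            Assume.That(OperatingSystem.IsWindows(), "Windows-only test case");

        Assert.IsFalse(string.IsNullOrWhiteSpace(path));
    }
}
```

Inputs that fail the Assume.That condition are reported as skipped rather than failed.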
When to Ignore vs. Skip
● Ignore:
○ Functionality under development.
○ Tests known to be broken and awaiting a fix.
● Skip:
○ Environment-dependent tests (when a database is down,
tests related to it may be skipped).
○ Platform-specific tests (e.g., skip tests not applicable to
your current OS).
Additional Resources
● NUnit Documentation - Ignoring Tests:
https://docs.nunit.org/articles/nunit/writing-tests/ignoring-tests.html
● NUnit Documentation - Conditional Tests
https://docs.nunit.org/articles/nunit/writing-tests/conditional-test-
attributes.html
Strategic Use
Ignoring and skipping have valid uses, but remember these guidelines:
● Don’t Overuse: Excessive ignoring/skipping hides problems.
Strive to keep the majority of your tests active.
● Review Regularly: Ignored/Skipped tests shouldn’t linger
indefinitely. Address their underlying reasons and re-enable them as
soon as possible.
● Reporting: Test reporting tools should clearly visualize ignored and
skipped tests alongside successes and failures for a holistic view.
Temporary Measures
Treat ignoring and skipping tests as tools to manage a test suite in
transition. The ultimate goal should be a suite where the vast majority of
tests actively participate in guarding the quality of your code.
Next Up
Units of code rarely exist in true isolation. The next chapter will delve into
strategies for creating trustworthy unit tests in the face of external
dependencies.
Building Trustworthy Tests, Part 1:
Handling Dependencies
The core principle of unit testing is isolation. However, in the real world,
code units often interact with components like databases, web services, file
systems, and other external dependencies. Let’s explore how to manage
these dependencies to ensure our unit tests remain focused on the code we
intend to test.
1. The Problem with External Dependencies
● Slowness: Relying on external systems can make tests slow,
impacting your development feedback loop.
● Unreliability: Externals can be down or behave unpredictably
(network glitches, etc.), leading to flaky, non-deterministic tests.
● Lack of Control: You cannot easily manipulate scenarios for error
conditions or edge cases when depending completely on an external
system.
2. Techniques for Isolating Your Code Under Test
● Abstractions (Interfaces): Define interfaces for the dependencies
of your code. During testing, you can replace real implementations
with test doubles.
public interface IEmailService {
void SendEmail(string to, string subject, string body);
}
● Test Doubles: A generic term for stand-ins used in place of real
dependencies. The common types include:
○ Fakes: Working implementations but simplified (in-
memory data store instead of a full database).
○ Stubs: Minimal implementations providing canned
responses for pre-defined scenarios.
○ Mocks: Sophisticated doubles that allow you to verify if
interactions and calls happened as expected. (We’ll discuss
these more in upcoming chapters)
3. Example: Isolating a File Writer
Let’s imagine a class that writes data to a file:
public class OrderProcessor
{
private readonly IFileWriter _fileWriter;
public OrderProcessor(IFileWriter fileWriter)
{
_fileWriter = fileWriter;
}
public void Process(Order order)
{
// ... process the order
_fileWriter.WriteToFile("order.txt", order.ToString());
}
}
Notice that OrderProcessor doesn’t directly create a file. It depends on an
interface IFileWriter , making it “testable.”
In a Unit Test: You could provide a fake IFileWriter that records data in
memory instead of hitting the file system.
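A minimal in-memory fake might look like this sketch (the IFileWriter signature is assumed from the OrderProcessor example above):

```csharp
using System.Collections.Generic;

public interface IFileWriter
{
    void WriteToFile(string filename, string contents);
}

// Fake that records writes in memory instead of touching the file system.
public class FakeFileWriter : IFileWriter
{
    public Dictionary<string, string> Files { get; } = new();

    public void WriteToFile(string filename, string contents)
        => Files[filename] = contents;
}
```

A test could construct new OrderProcessor(new FakeFileWriter()), call Process, and then assert on the fake's Files dictionary.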
Additional Resources
● Martin Fowler on Test Doubles:
https://martinfowler.com/articles/mocksArentStubs.html
Key Mindset Shift
Designing your code to depend on abstractions (like interfaces) improves
testability and often leads to better overall design due to increased
flexibility.
Next Up
Reliable tests should have predictable outcomes. In the next chapter, we’ll
discuss ways to ensure your tests remain consistent and dependable over
time.
Let’s make your tests robust!
Building Trustworthy Tests, Part 2:
Ensuring Consistency and Reliability
A test suite you can’t trust is worse than no tests at all. Let’s focus on how
to guarantee your tests produce consistent results, minimizing the chance of
false alarms or unexpected failures unrelated to actual changes in your
production code.
1. Sources of Inconsistency
● Timing Issues: Tests that rely on the current date, time, or race
conditions between different parts of the system can be non-
deterministic.
● Shared Mutable State: If tests modify global variables or leave
resources in an altered state, it can lead to order-dependent failures.
● External Dependencies: As discussed earlier, network issues or a
database in an unknown state can lead to instability.
● Overly-Complex Test Logic: Too many intricate steps in a test
make it harder to reason about potential failure points.
2. Strategies for Robustness
● Control Time: Libraries like NodaTime can let you “freeze” the
current date and time within tests, eliminating inconsistencies based
on when the test was executed.
● Emphasis on Isolation: Remind yourself of the core unit testing
principle. Double-check your setup ( [SetUp] ) methods to fully
reset any state before every test.
● Clean Up Thoroughly: Ensure your [TearDown] methods
meticulously reverse changes (delete files, restore database records,
etc.)
● Test Doubles (Again): By substituting external dependencies with
test doubles, you gain control over their behavior, leading to more
predictable tests.
● Seeding Data: When working with data sources (even in-memory
ones), start with a known, pre-populated dataset for each test.
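The "Control Time" strategy can also be done by hand with a small abstraction (IClock, SystemClock, and FixedClock are illustrative names, not library types):

```csharp
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Production implementation delegates to the real clock.
public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test implementation returns a fixed, predictable instant.
public class FixedClock : IClock
{
    private readonly DateTime _instant;
    public FixedClock(DateTime instant) => _instant = instant;
    public DateTime UtcNow => _instant;
}
```

Code that depends on IClock instead of DateTime.UtcNow directly can be tested against a FixedClock, making time-dependent behavior deterministic.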
3. Avoid Over-Dependence on Threading or Concurrent Operations
If possible, structure unit tests to avoid direct reliance on multithreading
scenarios. These can be notoriously tricky to test consistently due to timing
sensitivities. For those cases, specialized techniques for concurrency testing
might be required, which is often a broader topic than pure unit testing.
4. Simplicity as a Safeguard
Clear, easy-to-understand tests are less likely to have hidden assumptions or
side effects leading to flakiness. If a test is hard to follow, there’s a greater
chance of subtle issues.
5. When a Test Fails…
● First, Rule Out the Obvious: Double-check if it’s a genuine failure
in your production code and not an issue within the test itself.
● Reproduce Reliably: Can you make the failure happen
consistently? Intermittent failures are the most difficult to
troubleshoot.
● Isolate the Root Cause: Narrow down the problem to the most
specific code unit possible.
Continuous Vigilance
Maintaining a reliable test suite is an ongoing effort. As your project
evolves, be mindful of new dependencies or complex logic creeping into
your tests that might erode their consistency.
Section 3:
Advanced Unit Testing Techniques
Unveiling Core Unit Testing Techniques
While the foundational principles remain essential, as you become a
seasoned unit testing practitioner, having some go-to techniques in your
arsenal will prove invaluable. Let’s explore a few core concepts that will
frequently appear in well-crafted unit tests.
1. Equivalence Classes and Boundary Values
● Equivalence Classes: Instead of testing every possible input value,
group inputs that are expected to be processed similarly. For
instance, testing with a few positive numbers instead of every
conceivable positive number.
● Boundary Values : Specifically focus tests on the edges of valid
input ranges. For example, if a function accepts ages 18-65, test
with 18, 65, and values right outside the boundaries (17, 66).
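For the ages 18-65 example, the boundary tests might look like this sketch (AgeRules.IsEligibleAge is a hypothetical method illustrating the rule):

```csharp
using NUnit.Framework;

public static class AgeRules
{
    // Hypothetical rule: ages 18-65 inclusive are eligible.
    public static bool IsEligibleAge(int age) => age >= 18 && age <= 65;
}

[TestFixture]
public class AgeRulesTests
{
    [TestCase(18, true)]   // lower boundary
    [TestCase(65, true)]   // upper boundary
    [TestCase(17, false)]  // just below
    [TestCase(66, false)]  // just above
    public void IsEligibleAge_ChecksBoundaries(int age, bool expected)
    {
        Assert.AreEqual(expected, AgeRules.IsEligibleAge(age));
    }
}
```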
2. State Verification
● Directly Checking State: After a unit of code runs, assert that
internal properties or variables have the expected values.
● State-Based Behavior: Assert that an object’s behavior changes in
the correct way based on its modified state. For example, after a
withdraw operation, a bank account’s getBalance() method should
return a lower value.
3. Error Handling and Exceptions
● Expect Exceptions: Use Assert.Throws<ExceptionType> to ensure
your code throws the right type of exception for invalid input or
error conditions. (The older [ExpectedException] attribute was
removed in NUnit 3.)
● Inspect Exception Details: Go beyond just the type of exception.
Verify if any custom messages or data contained within the
exception are correct.
4. Interaction Testing with Collaborators
● Not in Complete Isolation: While unit tests should focus on one
unit, sometimes they need to involve how a unit collaborates with
its dependencies.
● Mocking Frameworks (Coming Soon!): These will become your
essential tool for controlling and verifying interactions between
components, which we’ll delve into in upcoming chapters.
Example: Thinking in ‘Techniques’
Let’s revisit our Calculator , with a new method:
public double SquareRoot(double num)
{
if (num < 0) throw new ArgumentException("Cannot calculate square
root of a negative number");
return Math.Sqrt(num);
}
Applying the techniques:
● Equivalence Classes / Boundary: Test with zero, a few positive
values, large positive values.
● Error Handling: Test that a negative input throws an
ArgumentException .
● State Verification (Less Applicable Here): Since the output is a
direct calculation, there’s not much internal state to check.
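Putting those techniques together for SquareRoot might look like this sketch (the Calculator body is repeated from the text above for completeness):

```csharp
using System;
using NUnit.Framework;

public class Calculator
{
    public double SquareRoot(double num)
    {
        if (num < 0) throw new ArgumentException(
            "Cannot calculate square root of a negative number");
        return Math.Sqrt(num);
    }
}

[TestFixture]
public class SquareRootTests
{
    private readonly Calculator _calculator = new();

    [TestCase(0.0, 0.0)]          // boundary
    [TestCase(4.0, 2.0)]
    [TestCase(1000000.0, 1000.0)] // large positive value
    public void SquareRoot_ReturnsExpectedValue(double input, double expected)
    {
        Assert.AreEqual(expected, _calculator.SquareRoot(input), 0.0001);
    }

    [Test]
    public void SquareRoot_NegativeInput_Throws()
    {
        Assert.Throws<ArgumentException>(() => _calculator.SquareRoot(-1));
    }
}
```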
Additional Resources
● Roy Osherove on naming standards for unit tests:
https://osherove.com/blog/2005/4/3/naming-
standards-for-unit-tests.html
Beyond the Basics
These techniques serve as your building blocks. Combining them with
black-box testing, parameterized tests, and a mocking framework will give
you a powerful toolset for crafting thorough unit tests.
Up Next
Strings are a common data type. Let’s get specific about strategies to create
robust unit tests that work with string inputs and outputs.
Mastering String Testing Strategies,
Part 1
Strings are ubiquitous in software. From user input to file contents, you’ll
encounter them frequently, and string manipulation can be tricky. Let’s
develop a robust approach to testing code that works heavily with strings.
1. Not Just About Correct Output
Sure, verifying that a function produces the correct output string is
essential, but string tests need to go deeper. Consider:
● Case Sensitivity: Should string comparisons be case-sensitive or
insensitive? This depends on your domain requirements.
● Whitespace: Are leading/trailing spaces significant? How should
tabs vs. spaces be treated?
● Encoding: Especially when dealing with internationalization,
ensure your tests handle various character sets (UTF-8, etc.)
correctly.
2. Common Scenarios To Test
● Concatenation: Verify results of combining multiple strings.
● Substrings: Test if correct substrings can be extracted (starting
position, length).
● Searching: Look for specific substrings or patterns within larger
strings.
● Trimming / Removal: Ensure whitespace or specific characters can
be removed correctly.
● Formatting: If your code formats strings (e.g., dates, currencies),
test various valid and invalid inputs.
3. Test Libraries Enhance String Assertions
While NUnit offers core assertions, consider specialized string assertion
libraries:
● Fluent Assertions: Provides a readable, fluent syntax.
string formattedDate = "2023-12-22";
formattedDate.Should().StartWith("2023").And.Contain("-12-");
● Shouldly: Another option with a similar assertion style.
4. Edge Cases
● Empty Strings: How does your code behave with empty strings as
input?
● Nulls: If applicable, does it handle null values gracefully?
● Extremely Long Strings: Performance might degrade with very
large inputs. Consider testing this if it’s a concern.
Example: A Text Transformer
Let’s imagine a class with methods to manipulate text:
public class TextTransformer
{
public string ToTitleCase(string str) {...}
public string RemoveSpecialCharacters(string str) {...}
}
Tests could include:
● ToTitleCase :
○ Mixed case input -> Correct capitalization
○ All-lowercase -> Correct handling
○ Null input -> Exception or empty string (depends on your
design)
● RemoveSpecialCharacters :
○ String with various symbols -> Symbols removed
○ Whitespace handling (is it preserved?)
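A sketch of the ToTitleCase scenarios, assuming one possible implementation (the book leaves the method bodies open, so the TextInfo-based version and the null-returns-empty choice are illustrative design decisions):

```csharp
using System.Globalization;
using NUnit.Framework;

public class TextTransformer
{
    // Illustrative implementation; null input returns an empty string here.
    public string ToTitleCase(string str) =>
        str is null
            ? string.Empty
            : CultureInfo.InvariantCulture.TextInfo.ToTitleCase(str.ToLowerInvariant());
}

[TestFixture]
public class TextTransformerTests
{
    [TestCase("hello world", "Hello World")]   // all-lowercase input
    [TestCase("hELLo WoRLD", "Hello World")]   // mixed-case input
    public void ToTitleCase_CapitalizesEachWord(string input, string expected)
    {
        Assert.AreEqual(expected, new TextTransformer().ToTitleCase(input));
    }

    [Test]
    public void ToTitleCase_NullInput_ReturnsEmpty()
    {
        Assert.AreEqual(string.Empty, new TextTransformer().ToTitleCase(null));
    }
}
```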
Additional Resources
● Fluent Assertions documentation: https://fluentassertions.com/
● Shouldly documentation: https://shouldly.github.io/
Precision Matters
Pay careful attention to how your domain treats strings. Are they purely
data, or do they represent something with stricter rules (e.g., usernames,
URLs)?
Up Next
In Part 2, we’ll delve into regular expressions and explore strategies for
testing code that utilizes them.
Let’s make your string handling code rock-solid!
Mastering String Testing Strategies,
Part 2
Regular expressions (RegEx) are a powerful tool for pattern matching and
string manipulation. When your code employs them, let’s arm your unit
tests to ensure their proper usage.
1. Testing with Regular Expressions
● Verification: Assert whether a string matches or doesn’t match a
given RegEx pattern. Libraries like Fluent Assertions often have
specialized methods for this.
● Capturing Groups: If your RegEx extracts parts of a string, test
that the correct substrings are captured.
● Edge Cases: Try invalid patterns or complex strings that could
reveal unexpected issues in your regular expressions.
2. Strategies to Control Complexity
● Helper Functions: Break down complex regular expressions into
smaller, easier-to-test units if possible.
● Data Providers: Consider using Parameterized Tests to feed a
variety of sample inputs and expected RegEx outcomes.
● Online RegEx Tools: While testing, use tools like
https://regex101.com/ to visualize and experiment with patterns
before embedding them in tests.
3. Example: Email Validation
Suppose you have a basic email validator using a regular expression:
public class UserAccountUtil
{
public bool IsValidEmail(string email)
{
Regex emailRegex = new Regex(@"^\w+@\w+\.\w+$");
return emailRegex.IsMatch(email);
}
}
Tests could include:
● Valid Formats: “xyz@gmail.com”, “1234@gmail.com”
(various valid patterns)
● Invalid Formats: “noAtSymbol”, “some@thing@weird.ending”
● Tricky Edge Cases: Addresses with special characters, very long
domain names (if they’re within your requirements!)
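These scenarios fit naturally into a parameterized test (the validator is the one from the text above):

```csharp
using System.Text.RegularExpressions;
using NUnit.Framework;

public class UserAccountUtil
{
    public bool IsValidEmail(string email) =>
        Regex.IsMatch(email, @"^\w+@\w+\.\w+$");
}

[TestFixture]
public class UserAccountUtilTests
{
    [TestCase("xyz@gmail.com", true)]
    [TestCase("1234@gmail.com", true)]
    [TestCase("noAtSymbol", false)]
    [TestCase("some@thing@weird.ending", false)]
    public void IsValidEmail_MatchesExpectedResult(string email, bool expected)
    {
        Assert.AreEqual(expected, new UserAccountUtil().IsValidEmail(email));
    }
}
```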
4. Caveats of Regular Expressions
● Readable but Tricky: RegEx syntax is powerful but terse.
Comments or good variable names in test code can clarify the intent
of the pattern.
● Don’t Overuse: Sometimes simpler string manipulation may be
more appropriate and easier to test than complex RegEx patterns.
Additional Resources
● C# Regex Documentation: https://docs.microsoft.com/en-
us/dotnet/standard/base-types/regular-expressions
Beyond the Basics
If a significant portion of your code logic depends on regular expressions,
consider writing unit tests for the RegEx patterns themselves, in isolation.
Up Next
Arrays and collections are fundamental data structures. Let’s explore
strategies to test code that works with them thoroughly.
Comprehensive Testing Approaches
for Arrays and Collections, Part 1
Arrays and collections are core building blocks in software. Testing code
that manipulates them effectively requires a tailored approach. Let’s lay the
foundation for writing robust tests for this type of code.
1. Why Special Treatment?
● Content AND Structure: You need to verify not only the
individual elements within an array or collection but also often their
order, size, and potential duplicates.
● Collection Types: Lists, sets, dictionaries … each data structure
exhibits distinct behaviors that your tests need to address.
● Diverse Operations: Common operations like sorting, filtering,
and searching demand specific test cases.
2. Key Areas of Focus
● Contents:
○ Are the expected elements present?
○ In the correct order (if relevant)?
○ Is a calculated result based on the collection’s content
correct?
● Size:
○ Does an operation produce a collection of the expected
size?
○ Boundary checks: empty collections, and potentially very
large ones
● Transformation: Test that operations correctly modify the original
collection (if they are meant to), or produce a new one, leaving the
original intact.
3. Strategies and Assertions
● NUnit’s Collection Assert: Offers methods like Contains ,
AreEqual , IsSubsetOf , etc.
int[] result = CalculatePrimes(10);
CollectionAssert.Contains(result, 7);
CollectionAssert.DoesNotContain(result, 8);
● LINQ for flexible queries: Check conditions on the contents.
List<string> names = GetCustomerNames();
Assert.IsTrue(names.Any(name => name.StartsWith("Alice")));
● Custom Assertions: Consider crafting your own assertions for very
specific checks relevant to your domain.
4. Example: A Statistics Calculator
Imagine a StatisticsCalculator class providing methods like:
● double GetAverage(int[] numbers)
● int[] GetPositiveValues(int[] numbers)
Tests Could Cover:
● GetAverage :
○ Diverse sets of numbers
○ Empty array -> error or a meaningful default
● GetPositiveValues
○ All positives, some negatives, all negatives
○ The order of the returned result (should it match the input?)
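A sketch of the GetPositiveValues checks using CollectionAssert (the method bodies are illustrative, since the text only names the methods):

```csharp
using System.Linq;
using NUnit.Framework;

public class StatisticsCalculator
{
    // Illustrative implementations of the methods named in the text.
    public double GetAverage(int[] numbers) => numbers.Average();
    public int[] GetPositiveValues(int[] numbers) =>
        numbers.Where(n => n > 0).ToArray();
}

[TestFixture]
public class StatisticsCalculatorTests
{
    [Test]
    public void GetPositiveValues_FiltersNegatives_PreservingOrder()
    {
        int[] result = new StatisticsCalculator()
            .GetPositiveValues(new[] { 3, -1, 7, -5 });

        CollectionAssert.AreEqual(new[] { 3, 7 }, result); // order matters
        CollectionAssert.DoesNotContain(result, -1);
    }
}
```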
Additional Resources
● NUnit documentation : CollectionAssert
https://docs.nunit.org/articles/nunit/writing-
tests/assertions/collection-assertions.html
Beyond the Basics
Not all collections are created equal. Dictionaries and sets require
specialized test logic to check for the existence and correct association of
keys and values.
Up Next
In Part 2, we’ll expand on testing different collection types and address
potential performance considerations when dealing with large datasets.
Comprehensive Testing Approaches
for Arrays and Collections, Part 2
Expanding Our Testing Toolkit
Let’s delve into testing more specialized collection types and consider
performance implications when dealing with collections.
1. Dictionaries / Maps
● Key Existence: Checks for the presence (and absence) of specific
keys.
● Value Retrieval: Ensure fetching by key returns the expected value
with correct data types.
● Key-Value Association: Verify that keys and values have the
correct relationship, especially after modifications.
2. Sets
● Membership: Tests if specific elements exist within the set.
● Uniqueness: Ensure your set implementation upholds the ‘no
duplicates’ rule.
● Set Operations: Test results of intersections, unions, differences,
etc., if your code performs those.
Example: A User Permission Manager
Imagine a class managing permissions using sets:
public class PermissionManager
{
private readonly HashSet<string> _userPermissions;
// ... methods like AddPermission, RemovePermission, HasPermission
etc.
}
Test Scenarios:
● AddPermission : Does a new permission get added only once, even
if added multiple times?
● RemovePermission : Can existing permissions be removed as
expected?
● HasPermission : Checks for correct true/false results with varying
inputs.
3. Performance Considerations
● Large Collections: Design tests with sizable collections to spot
performance bottlenecks in sorting, searching, or filtering
operations.
● Algorithm Choice: The underlying data structure and algorithms
used in your code will heavily influence performance. Understand
those trade-offs.
● Profiling Tools: Consider using profiling tools to pinpoint specific
slowdowns if they arise within your tests.
4. Immutability (If Applicable)
If you’re working with immutable collections, your tests need to reflect
that:
● Modification Verification: Assert that operations on an immutable
collection produce a new collection rather than modifying the
existing one.
Additional Resources
● Documentation of C# collection types: (List, Dictionary,
HashSet, etc.) https://docs.microsoft.com/en-
us/dotnet/csharp/programming-guide/concepts/collections
Beyond the Surface
Thorough collection testing means understanding the nuances of the
specific collection types you use and potential performance concerns within
your application’s context.
Verifying Method Return Types: Best Practices
While a cornerstone of basic unit testing is checking whether a method
returns the expected value, let’s dive deeper into strategies for ensuring your
return type assertions are accurate and robust.
1. Not Just About the ‘Happy Path’
● Data Types Matter: Don’t just check if a value is returned, ensure
it’s the correct data type. Differentiate between a method returning a
string “10” and the number 10.
● Complex Return Types: If a method returns a class or struct,
verify the contents of the returned object and any nested properties.
● Nulls: Explicitly plan for scenarios where a method might return
null . Does your code handle this gracefully?
2. Precision in Assertions
● Floating-point Comparisons: Be aware of potential rounding
errors. Use assertions that check for approximate equality within an
acceptable tolerance.
● Collections: (building upon previous chapters): Utilize collection-
specific assertions to check for matching size, contents, and order
where relevant.
● Custom Types: Implement Equals or GetHashCode overrides on
your custom classes, if necessary, to make comparisons in test
assertions meaningful.
Example: A Data Conversion Service
Imagine a method:
public double ConvertTemperature(int celsius) { ... }
Tests Could Cover:
● Type: Assert.IsInstanceOf<double>(result) .
● Accuracy: Ensure a few known Celsius to Fahrenheit conversions
are correct.
● Unexpected Input: Validate behavior if non-numeric input is
provided (exception, default value?)
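The type and accuracy checks might be sketched as follows (the conversion body uses the standard Celsius-to-Fahrenheit formula; the class name is illustrative):

```csharp
using NUnit.Framework;

public class TemperatureConverter
{
    public double ConvertTemperature(int celsius) => celsius * 9.0 / 5.0 + 32;
}

[TestFixture]
public class TemperatureConverterTests
{
    [TestCase(0, 32.0)]
    [TestCase(100, 212.0)]
    [TestCase(-40, -40.0)]
    public void ConvertTemperature_KnownConversions(int celsius, double expectedFahrenheit)
    {
        double result = new TemperatureConverter().ConvertTemperature(celsius);

        Assert.IsInstanceOf<double>(result);
        // The third argument is the tolerance for the floating-point comparison.
        Assert.AreEqual(expectedFahrenheit, result, 0.0001);
    }
}
```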
3. Test Libraries Enhance Type Checks
Consider libraries to augment your assertions:
● Fluent Assertions: Provides a readable way to assert object types:
object result = GetResult();
result.Should().BeOfType<Invoice>();
● Shouldly: Similar fluent-style type checking.
4. Edge Cases
● Enums: If a method returns an enum, check valid values and
potentially invalid ones to ensure graceful handling.
● Consider Design: Should your methods signal errors through return
values (e.g., special result codes), or should they throw exceptions?
Your tests need to align with this design choice.
Additional Resources
● NUnit’s IsInstanceOf Constraint:
https://docs.nunit.org/articles/nunit/writing-
tests/constraints/isinstanceof.html
Beyond the Basics
The complexity of your return types directly influences the complexity of
your tests. For intricate custom types, this might even involve writing mini-
test suites specifically for those types to guarantee correct behavior.
Tackling Void Methods: Testing Strategies and Considerations
Void methods, those lacking an explicit return value, present a unique
challenge. They often produce side effects: modifying a state, sending a
notification, or interacting with external systems. How do we prove they did
what they were supposed to?
1. The Indirect Approach
Since you cannot directly assert on a void method’s return value, you need
to observe its impact on the system:
● State Changes: If a method modifies an object’s properties or
fields, assert those changes after the method call.
● Collaborator Calls: Use mocking frameworks (which we’ll cover
in-depth later) to verify that a void method invokes other
components in your system with the expected parameters.
● External Interactions: If the method interacts with files, databases,
or network resources, test for the presence of expected changes (file
created, database record updated, etc.)
2. Think Like a Detective
Testing void methods is about following the “breadcrumbs”:
● What Traces Are Left?
○ Did the method update a property visible for inspection?
○ Does it raise events?
○ Are logs generated?
● Testable Seams: If direct inspection is difficult, consider
refactoring to introduce points in your code that expose the state
specifically for testing.
Example: A File Writer
Imagine a void method:
public void WriteMessageToFile(string message, string filename)
{
// File writing logic...
}
Tests Could:
● Check file existence: Assert that a file with the given name was
created.
● Verify Content: Read the file to confirm the expected message was
written.
● Mocking the File System (More Advanced): Use a mocking
framework to substitute the file system interaction and directly
inspect what the method tried to write.
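The first two checks can be sketched like this (MessageWriter and its direct File.WriteAllText body are illustrative, since the text elides the file-writing logic):

```csharp
using System.IO;
using NUnit.Framework;

public class MessageWriter
{
    public void WriteMessageToFile(string message, string filename)
        => File.WriteAllText(filename, message);
}

[TestFixture]
public class MessageWriterTests
{
    private string _path;

    [SetUp]
    public void SetUp()
        => _path = Path.Combine(Path.GetTempPath(), "test-message.txt");

    [TearDown]
    public void TearDown()
    {
        if (File.Exists(_path)) File.Delete(_path); // clean up after each test
    }

    [Test]
    public void WriteMessageToFile_CreatesFileWithContent()
    {
        new MessageWriter().WriteMessageToFile("hello", _path);

        Assert.IsTrue(File.Exists(_path));                // file existence
        Assert.AreEqual("hello", File.ReadAllText(_path)); // content
    }
}
```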
3. Exceptions in Void Methods
If your void method is designed to throw exceptions under error conditions,
your tests should assert these:
● Assert.Throws<ExceptionType>(...) Ensure the correct exception
type is thrown.
● Inspect Exception Details: If the exception carries additional data
(custom error messages), verify them within your test.
Design Considerations
● Are Void Methods Overused? Sometimes, refactoring a void
method to return a value, even a simple status code or result object,
can improve testability.
● Testability as a Design Driver: Thinking about how you will test
your code before you write it can lead to designs that are more
naturally testable.
Up Next
Happy-path checks only tell half the story. What happens when methods
receive invalid input or hit error conditions? Let's learn about techniques
for handling exceptions.
Handling Exceptions: Testing Method
Behavior Under Error Conditions
Exceptional situations (pun intended!) are a fact of software development.
Well-crafted unit tests should not only verify the ‘happy path’ but also
ensure your code gracefully handles error scenarios.
1. Exceptions as Part of the Contract
● Expected Exceptions: If a method is designed to throw a specific
exception type under certain conditions (invalid input, for example),
your tests must assert that.
● Document Exceptions: Treat exceptions thrown by a method as
part of its documented behavior, and subsequently, part of what you
need to test.
2. NUnit and Exceptions
NUnit provides several ways to assert on exceptions:
● Assert.Throws<ExceptionType>(...) : The most general form.
Ensure that a specific exception type is thrown somewhere within a
block of code.
Assert.Throws<ArgumentNullException>(() => myObject.Process(null));
● Assert.That(() => myObject.DoSomething(),
Throws.Exception.TypeOf<InvalidOperationException>()) :
Combines an assertion with the exception check.
● Examining Exception Details:
var ex = Assert.Throws<DivideByZeroException>(() =>
Calculator.Divide(10, 0));
Assert.That(ex.Message, Does.Contain("zero"));
3. Test Strategy
● Triggering Errors: Carefully design test inputs to force the
exceptional conditions you wish to test.
● Multiple Cases: A single method might throw different exception
types based on the type of error. Have tests for each.
● Don’t Over-Catch: In your test code, avoid using an overly broad
try-catch that hides the specific nature of the exception.
Example: A User Registration Method
Imagine a method with the following requirements:
public void RegisterUser(string username, string password)
{
if (username == null) throw new ArgumentNullException(...);
if (password.Length < 8) throw new ArgumentException(...);
// ... rest of the registration logic
}
Tests Would Cover:
● Passing a null username -> ArgumentNullException
● Too short password -> ArgumentException
● Valid input -> Does not throw exceptions
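These three cases can be sketched as follows (UserService, the exception messages, and the concrete argument checks are illustrative, since the text elides them):

```csharp
using System;
using NUnit.Framework;

public class UserService
{
    public void RegisterUser(string username, string password)
    {
        if (username == null) throw new ArgumentNullException(nameof(username));
        if (password.Length < 8)
            throw new ArgumentException("Password too short", nameof(password));
        // ... rest of the registration logic
    }
}

[TestFixture]
public class UserServiceTests
{
    [Test]
    public void RegisterUser_NullUsername_ThrowsArgumentNullException()
        => Assert.Throws<ArgumentNullException>(
               () => new UserService().RegisterUser(null, "longenough"));

    [Test]
    public void RegisterUser_ShortPassword_ThrowsArgumentException()
    {
        var ex = Assert.Throws<ArgumentException>(
            () => new UserService().RegisterUser("alice", "short"));
        Assert.That(ex.Message, Does.Contain("short")); // inspect the details
    }

    [Test]
    public void RegisterUser_ValidInput_DoesNotThrow()
        => Assert.DoesNotThrow(
               () => new UserService().RegisterUser("alice", "longenough"));
}
```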
Additional Resources
● NUnit documentation on exception Assertions:
https://docs.nunit.org/articles/nunit/writing-
tests/assertions/exception-assertions.html
Beyond the Basics
● Custom Exceptions: If you have custom exception types, you can
write tests to verify any additional information they carry (error
codes, custom messages).
● Error Handling Logic: If your code has complex try-catch logic,
you may need unit tests specifically focused on testing that error-
handling behavior.
Validating Event Invocation: Ensuring
Correct Event Handling
Events are a cornerstone of asynchronous and loosely coupled
programming in C#. Properly testing that your components raise events,
and that other parts of your system react to them, is crucial for guaranteeing
robust behavior.
1. Why Test Events?
● Implicit Contracts: Events form implicit connections between
parts of your code. Tests ensure those connections work as
designed.
● Timing Matters: Often, unit tests should verify that events are fired
in the correct sequence or in response to specific triggers.
● Event Data: If your events carry data, tests should check if the
payload within the event arguments is correct.
2. Strategies for Testing Events
● Manual Subscription: In your test, subscribe to the event of the
unit under test. Within the event handler, set flags, or store results
for later assertions.
● NUnit’s Built-in Support: NUnit itself offers only basic facilities
for asserting on events, so manual subscription is usually the
starting point.
● Mocking Frameworks (The Power Tool): Mocking frameworks
(which we’ll cover later) become invaluable here. They can let you
verify that a collaborator raised a specific event with the expected
arguments.
Example: A File Monitoring Component
Imagine a class that monitors file changes:
public class FileMonitor
{
public event EventHandler<FileChangedEventArgs> FileChanged;
private void OnFileChanged(string filename)
{
FileChanged?.Invoke(this, new FileChangedEventArgs(filename));
}
// ... file monitoring logic ...
}
Tests Could Verify:
● Event Raised on Change: Simulate a file change and assert that
the FileChanged event was raised.
● Event Data: Ensure the filename within the
FileChangedEventArgs is correct.
● No Event on No Change: Test that the event is not fired if there’s
no relevant file activity.
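The "manual subscription" strategy can be sketched like this (SimulateChange is a test hook invented for the example; in the book's version the event is raised from the monitoring logic):

```csharp
using System;
using NUnit.Framework;

public class FileChangedEventArgs : EventArgs
{
    public string Filename { get; }
    public FileChangedEventArgs(string filename) => Filename = filename;
}

public class FileMonitor
{
    public event EventHandler<FileChangedEventArgs> FileChanged;

    // Exposed here so the test can trigger a change directly.
    public void SimulateChange(string filename)
        => FileChanged?.Invoke(this, new FileChangedEventArgs(filename));
}

[TestFixture]
public class FileMonitorTests
{
    [Test]
    public void FileChanged_IsRaised_WithCorrectFilename()
    {
        var monitor = new FileMonitor();
        FileChangedEventArgs received = null;
        monitor.FileChanged += (_, args) => received = args; // manual subscription

        monitor.SimulateChange("report.txt");

        Assert.IsNotNull(received);                       // event was raised
        Assert.AreEqual("report.txt", received.Filename); // payload is correct
    }
}
```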
3. Additional Considerations
● Asynchronous Events: Ensure your test handles waiting or
synchronization if events are raised on a different thread.
● Testing the Subscribers: You might also need unit tests
specifically focusing on components that react to events to ensure
their behavior is as expected.
Mocking Frameworks Enable Sophistication
While basic event assertions can be done manually, mocking libraries
empower you with more fine-grained checks and significantly cleaner test
code when it comes to event handling.
Up Next
Sometimes your perfect unit test design relies on testing code that isn’t
directly exposed – those pesky private methods. Let’s discuss the trade-offs
and techniques for testing them.
Unraveling the Mystery of Testing
Private Methods
Private methods are intended as internal helpers. Should a unit test directly
reach into the private workings of your class? Let’s examine the arguments
for and against, and some practical strategies.
1. The Purist’s View: “Don’t Do It”
● Unit Tests As Public API: Purists argue that unit tests should only
exercise a class’s public interface. Private methods are
implementation details.
● Refactoring Safety Net: A key benefit of unit tests is enabling
refactoring without fear. If your tests rely on private details,
refactoring might inadvertently break them.
● Symptom of a Deeper Problem: The need to test private methods
often hints at a design issue – the class might be doing too much or
violating the Single Responsibility Principle.
2. The Pragmatist’s Perspective: “Sometimes Necessary”
● Legacy Code: With large, older code bases, refactoring into
‘perfectly’ testable units might be an unrealistic goal in the short
term.
● Complex Internal Logic: There may be intricate algorithms or
calculations hidden within those private methods that warrant direct
testing to ensure correctness.
● TDD: If you practice Test-Driven Development religiously, you
might write tests exercising the desired behavior before the private
method is even implemented.
3. Strategies (If You Must)
● Reflection: C# Reflection allows accessing private members.
Caution: This makes tests brittle, as they are now tied directly to
the internal structure of the class.
● The ‘InternalsVisibleTo’ Attribute: Change members from
private to internal and expose them to your test assembly
with this attribute. A less intrusive approach than reflection.
● Elevate to Protected (If Feasible): Consider whether the method
could be made protected . This allows for derived test classes to
access it, maintaining some level of encapsulation.
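The InternalsVisibleTo route, as a sketch (the test assembly name "MyApp.Tests" is an assumption):

```csharp
// In the production assembly (AssemblyInfo.cs or any source file):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Tests")] // "MyApp.Tests" is the assumed test assembly name

public class OrderService
{
    // 'internal' instead of 'private' — visible to MyApp.Tests, but not to other assemblies.
    internal decimal CalculateTax(decimal amount, string region)
    {
        // ... complex tax calculation (placeholder logic) ...
        return amount * 0.2m;
    }
}
```

Tests in MyApp.Tests can now call CalculateTax directly, while consumers of the production assembly still cannot.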
Example: Tricky Calculations
Imagine a private method in an OrderService that handles complex tax
calculations. Directly testing this private method could bring more
confidence than just testing edge cases of the public API.
Additional Resources
● Martin Fowler, “Mocks Aren’t Stubs” (discusses test doubles and how
design affects testability):
https://martinfowler.com/articles/mocksArentStubs.html
When in Doubt, Refactor First
Before reaching for techniques to directly test private methods, critically
ask yourself:
● Can I refactor the code to extract the complex logic into a
separate, more testable unit?
● Do my tests for the public methods provide enough indirect
coverage of the private behavior to give me confidence?
Up Next
Testing isn’t about getting a perfect score on some theoretical metric. Let’s
discuss how code coverage tools can provide valuable insights, but also
how to interpret those insights wisely.
Exploring Code Coverage: Metrics
and Insights
Code coverage tools give you a picture of which parts of your code are
executed by your tests. However, understanding how to use this information
wisely is key.
1. Beyond the Raw Percentage
● High Coverage != Bulletproof: High coverage doesn’t guarantee
you’ve caught all bugs or that your tests are thorough.
● Low Coverage = Red Flag: Low coverage indicates areas of your
code that are completely untested and thus more likely to harbor problems.
● Focus on Change: Use coverage tools to check whether the code
changed in a pull request or feature branch is actually covered by
tests.
2. Types of Code Coverage
● Line Coverage: The most basic - which lines of code were hit by
the tests.
● Branch Coverage: Did your tests exercise different branches of if-
statements and conditional logic? This is often more meaningful
than pure line coverage.
● Method Coverage: A finer-grained metric, ensuring individual
methods have been entered.
3. Visualizing Code Coverage
Tools often provide ways to visualize coverage directly in your IDE,
highlighting lines in green (covered) and red (not covered). This makes it
very intuitive to spot gaps.
Example: A Legacy Application
You’re adding features to an existing system with minimal tests. Code
coverage analysis might reveal:
● Critical business logic with poor coverage: Prioritize writing tests
here.
● Old, unused code: If it’s uncovered, perhaps it’s safe for
refactoring or removal.
4. Setting Targets
● Avoid Obsessing Over 100%: It’s often wasted effort. Focus on
the untested areas of high importance.
● Reasonable Baselines: Start with a modest target (e.g., 60-70%)
and gradually increase it as you add more tests.
● New Code: Consider a stricter bar for newly written code, aiming
for higher coverage to avoid the build-up of ‘testing debt.’
Additional Resources
1. OpenCover: A free, open-source code coverage tool for .NET,
widely used and supporting testing frameworks such as NUnit,
MSTest, and xUnit. More information and downloads:
https://github.com/OpenCover/opencover
2. NCover: A commercial code coverage tool for .NET, offering
branch coverage, code highlighting, and integration with
popular CI/CD systems: https://www.ncover.com/
3. JetBrains dotCover: A code coverage tool from JetBrains (the
company behind ReSharper and IntelliJ IDEA). It integrates
with Visual Studio and supports the major .NET testing
frameworks: https://www.jetbrains.com/dotcover/
4. Coverlet: A cross-platform, open-source code coverage library
for .NET, compatible with .NET Core and .NET Framework,
and well integrated with common test runners:
https://github.com/coverlet-coverage/coverlet
5. Visual Studio’s built-in code coverage tools: Visual Studio
Enterprise edition includes coverage analysis that integrates
seamlessly with the IDE, letting you run analysis and view
results without leaving Visual Studio.
Each of these tools has its own set of features, advantages, and limitations,
so you may want to evaluate them based on your specific requirements and
preferences.
The Right Mindset
Code coverage is a tool, not a master. Use it to:
● Uncover blind spots
● Drive the creation of new, meaningful tests
● Identify potentially unused code
Up Next
Testing is an evolving skill. Let’s discuss some real-world testing
challenges and the best practices to overcome them.
Real-world Testing Challenges and
Solutions
Unit testing in a pristine, controlled environment is one thing; real
production systems are another. Let’s confront some common hurdles and
the best practices to tackle them.
1. Challenge: Complex Setups
● Databases, External Services, etc.: Often, your code depends on
things difficult to replicate in unit tests.
● Solutions:
○ Mocking Frameworks (more on this later) to simulate
external dependencies
○ Test Databases: Use in-memory databases or dedicated,
temporary test instances.
○ Containerization (for complex cases): Package
dependencies for easy setup/teardown in tests.
2. Challenge: Non-Deterministic Behavior
● System Time: Tests relying on the current date and time are brittle.
● External Data: If your test depends on data from an unpredictable
source (web service, etc.), results can be flaky.
● Solutions:
○ Control Time: Libraries to ‘freeze’ time for tests.
○ Isolate External Dependencies: Mocks to replace those with
predictable responses
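One common way to ‘control time’ without a third-party library is to hide the system clock behind an interface of your own. A minimal sketch (IClock, SystemClock, and FixedClock are names introduced here for illustration):

```csharp
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Production implementation: delegates to the real system clock.
public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test implementation: time is frozen at a chosen instant.
public class FixedClock : IClock
{
    private readonly DateTime _fixedTime;
    public FixedClock(DateTime fixedTime) { _fixedTime = fixedTime; }
    public DateTime UtcNow => _fixedTime;
}

// A unit depending on IClock instead of DateTime.UtcNow directly:
public class SubscriptionService
{
    private readonly IClock _clock;
    public SubscriptionService(IClock clock) { _clock = clock; }

    public bool IsExpired(DateTime expiryDate) => _clock.UtcNow > expiryDate;
}
```

In tests, `new SubscriptionService(new FixedClock(new DateTime(2030, 1, 1)))` makes time-dependent assertions fully deterministic.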
3. Challenge: Legacy Code Monoliths
● Tightly Coupled Spaghetti: Code that wasn’t designed for
testability is a nightmare to isolate.
● Solutions:
○ Refactoring: It’s an investment, but gradually breaking
down dependencies makes code testable in the long run.
○ Seams: Find points where you can introduce test doubles,
even minimally at first.
○ ‘Characterisation Tests’ (For the truly awful): Write tests
that capture the current behavior, preventing regressions,
and providing a safety net for refactoring.
4. Challenge: Asynchronous Code
● Concurrency and Race Conditions: Hard-to-predict interactions
between threads/tasks
● Solutions
○ Task-Focused Assertions: Features in testing libraries to
work with Tasks directly.
○ Synchronization Primitives: If needed, carefully use
semaphores or similar to control execution order in tests.
○ Deterministic Task Scheduling (Advanced): For really
intricate cases, there are libraries to orchestrate task
execution in your tests.
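For task-focused assertions, NUnit understands `async Task` test methods and awaits them. A sketch (OrderService and its ProcessAsync method are hypothetical here):

```csharp
using System;
using System.Threading.Tasks;
using NUnit.Framework;

public class OrderServiceAsyncTests
{
    [Test]
    public void ProcessAsync_Throws_WhenOrderIsNull()
    {
        var service = new OrderService(); // hypothetical async service

        // Assert.ThrowsAsync awaits the task and asserts on the thrown exception,
        // avoiding manual .Wait() calls that can deadlock or swallow exceptions.
        var ex = Assert.ThrowsAsync<ArgumentNullException>(
            async () => await service.ProcessAsync(null));

        Assert.IsNotNull(ex);
    }
}
```

The key point is to let the test framework manage awaiting, rather than blocking on tasks yourself.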
5. Challenge: UI Testing
● Unit Tests != UI Tests: User interface interactions are often best
tested with specialized tools (like Selenium).
● Know the Boundaries: Understand what unit tests do (and don’t)
cover regarding the UI. Model-View-Controller/ViewModel
patterns can help create testable UI logic.
6. Challenge: The ‘Right’ Amount of Testing
● Diminishing Returns: 100% code coverage can be an elusive and
sometimes unhelpful goal.
● Solution: Risk-Based Approach: Focus tests on the most critical,
error-prone sections of your codebase.
Additional Resources
● xUnit Test Patterns (Book by Gerard Meszaros):
https://www.amazon.com/xUnit-Test-Patterns-Refactoring-
Code/dp/0131495054 While not C# specific, it’s a classic on test
strategies.
● Stack Overflow - common unit testing challenges
https://stackoverflow.com/questions/tagged/unit-testing?
tab=Frequent
Evolving Your Testing Strategy
Real-world projects necessitate a pragmatic approach. Continuously adapt:
● Learn from Failed Tests: A broken test indicates a potential bug or
a badly written test. Analyze the root cause.
● Balance with Other Testing Types: Unit tests exist alongside
integration, end-to-end tests, etc.
Up Next
A key aspect of improving testability lies in “decoupling” your code’s
dependencies. Let’s dive into the principles of loose coupling and how they
empower your testing efforts.
Section 4: Decoupling Dependencies for
Testability
Understanding the Importance of
Decoupling External Dependencies
When units of code are tightly intertwined, they become notoriously
difficult to test in isolation. Decoupling your code from external
dependencies is the foundation upon which testable software rests.
1. Why Dependencies Hinder Unit Testing
● Unpredictable Behavior: Databases, file systems, network services
– these can change or be unavailable during tests, breaking them in
unforeseen ways.
● Slow Tests: Relying on real external systems often makes tests
sluggish.
● Setup Complexity: Tests might require intricate configuration or
mocking of the environment, making them brittle.
2. Enter: Loose Coupling
Loosely coupled code means components interact through well-defined
interfaces (think abstract classes or interfaces, in C# parlance). This has
profound implications for testing:
● Substitution: We can replace real dependencies with ‘test doubles’
during unit tests.
● Control: We dictate the behavior of the test doubles, leading to
predictable test scenarios.
● Focus: The unit under test becomes the star of the show, and we can
precisely examine its logic without distraction.
3. Techniques for Decoupling
● Dependency Injection: A core pattern where a unit’s dependencies
are provided externally rather than created internally. We’ll explore
specific forms of this in the next few chapters.
● Interfaces: Define contracts between components. A class can
depend on an IFileWriter interface, not a specific implementation
of file writing.
● Seams: Introduce points in your code where dependencies can be
swapped during testing (such as making a method configurable to
use a real database vs. a test one).
Example: Before and After
Tightly Coupled (Bad):
public class OrderProcessor
{
private readonly DatabaseContext _database;
public OrderProcessor() {
_database = new DatabaseContext("connectionString");
}
public void ProcessOrder(Order order)
{
// ... processing logic
_database.Insert(order);
}
}
Decoupled (Good):
public class OrderProcessor
{
private readonly IOrderRepository _repository;
public OrderProcessor(IOrderRepository repository) {
_repository = repository;
}
public void ProcessOrder(Order order)
{
// ... processing logic
_repository.Save(order);
}
}
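The payoff of the decoupled version is that a unit test can supply a hand-rolled fake. A sketch, assuming the IOrderRepository and Order types from the example above:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// A test double that records what was saved, instead of touching a database.
public class FakeOrderRepository : IOrderRepository
{
    public List<Order> SavedOrders { get; } = new List<Order>();
    public void Save(Order order) => SavedOrders.Add(order);
}

public class OrderProcessorTests
{
    [Test]
    public void ProcessOrder_SavesTheOrder()
    {
        var fakeRepository = new FakeOrderRepository();
        var processor = new OrderProcessor(fakeRepository);
        var order = new Order();

        processor.ProcessOrder(order);

        Assert.AreEqual(1, fakeRepository.SavedOrders.Count);
        Assert.AreSame(order, fakeRepository.SavedOrders[0]);
    }
}
```

No connection string, no database, and the test runs in milliseconds.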
4. Benefits Beyond Testing
● Flexibility: It’s easier to switch implementations. Need a different
database later? A new way of logging? Loosely coupled code allows
this more gracefully.
● Maintainability: Components with clear boundaries of
responsibility are easier to understand, modify, and refactor.
● Parallel Development: Teams can work on different components
simultaneously using mocks to stand in for unfinished
dependencies.
Additional Resources
● Robert C. Martin (“Uncle Bob”) on the Dependency Inversion
Principle: https://blog.cleancoder.com/uncle-
bob/2014/05/08/SingleReponsibilityPrinciple.html (Part of the
SOLID principles)
Let’s Get Practical
Up next, we’ll discuss design principles that foster loose coupling, making
your code not just more testable but more adaptable overall.
Embracing Loosely-coupled Design
Principles for Testable Code
Testability shouldn’t be an afterthought; it should influence your design
choices. Let’s explore key principles that naturally make your code easier to
test.
1. The Power of Interfaces
● Interfaces as Contracts: An interface (like ILogger ,
IPaymentGateway ) defines the interactions between components,
not their concrete implementation.
● Testability Boost: A class depending on ILogger can work with
any implementation during testing, real or a test double.
2. Favor Composition over Inheritance
● Inheritance: Tight Coupling: When you overuse inheritance, your
classes become rigidly linked, making it harder to isolate parts for
testing.
● Composition: Flexibility: Build objects by composing them from
smaller parts with well-defined interfaces. This allows you to swap
or inject those smaller components during testing.
Example: Logging
Inheritance (less flexible):
public class FileLogger : Logger { ... }
public class OrderProcessor
{
private readonly FileLogger _logger; // Tightly bound to FileLogger
...
}
Composition (more flexible):
public class OrderProcessor
{
private readonly ILogger _logger; // Works with any ILogger
public OrderProcessor(ILogger logger){
_logger = logger;
}
...
}
3. The Single Responsibility Principle (SRP)
● Do One Thing Well: Classes that try to do too much are testing
nightmares. Each class should have a clear, focused responsibility.
● Testability Benefit: Smaller, focused classes are easier to test in
isolation, and have fewer reasons to change (making your tests less
brittle).
4. Dependency Inversion Principle (DIP)
● “Depend on Abstractions, Not Concretions”: High-level modules
shouldn’t depend directly on low-level details; both should depend
on abstractions.
● Why It Matters: This inverts the control flow and makes it easier
to substitute components for testing. We’ll cover specific techniques
for achieving DIP soon.
Example: Persistence
public interface IOrderRepository {
void Save(Order order);
}
public class SqlOrderRepository : IOrderRepository { ... } // Concrete
implementation
public class OrderService
{
private readonly IOrderRepository _repository;
public OrderService(IOrderRepository repository) { ... }
}
Theory vs. Practice
Design is about trade-offs. Sometimes, a bit of ‘tight coupling’ might be the
pragmatic choice. However, understanding these principles empowers you
to make informed design decisions.
Up Next
Principles are great, but how do you actually change existing code to
become more testable? Let’s discuss step-by-step refactoring techniques.
Step-by-Step Refactoring Towards a
Loosely-coupled Architecture, Part 1
Refactoring is the art of changing code’s internal structure without altering
its external behavior. When done in tandem with unit tests, it’s a powerful
tool for improving testability.
Important Premise: You Need Tests First
Refactoring without a safety net of tests is like walking a tightrope without
a net. Before embarking, make sure you have some tests in place to catch if
you accidentally break functionality.
1. Common Scenarios
● Introducing Seams: Finding places where you can ‘break’ the
dependency chain to insert test doubles during testing.
● Extracting Interfaces: Creating interfaces from concrete classes to
enable dependency injection.
● Breaking up Monoliths: Gradually decomposing a large class into
smaller units with well-defined responsibilities.
2. Step-by-Step Approach (General)
1. Identify Target: Choose a class or section of code that is
difficult to test.
2. Write Characterisation Tests (If Needed): If tests are lacking,
write tests that capture the current behavior, even if dirty.
3. Small Refactorings: Make a single focused change (extract a
method, introduce a parameter to a constructor, etc.). Rerun
your tests; they should still pass.
4. Repeat: Iterate with tiny refactorings, always keeping tests
green.
3. Example: Breaking a Database Dependency
Let’s imagine a class tightly bound to a database:
public class CustomerService
{
private readonly string _connectionString;
public CustomerService(string connectionString) {
_connectionString = connectionString;
}
public Customer GetCustomerById(int id)
{
// Complex database query logic using _connectionString ...
}
}
Steps:
1. Interface: Introduce ICustomerRepository with methods like
GetById(int id) .
2. Make CustomerService use the Interface: Change its
constructor and modify database logic to use the repository.
3. Test Double Implementation: During tests, you can now
provide a fake ICustomerRepository .
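After those steps, the refactored class might look like this (the exact shape of ICustomerRepository, the fake, and the Customer properties are illustrative sketches):

```csharp
public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer GetCustomerById(int id)
    {
        // The connection string and query logic now live behind the interface.
        return _repository.GetById(id);
    }
}

// Test double: returns canned data, no database required.
public class FakeCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) => new Customer { Id = id, Name = "Test Customer" };
}
```

The production implementation (say, a SqlCustomerRepository) keeps the original database code, while tests use the fake.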
Additional Resources
● Martin Fowler’s Refactoring Catalog:
https://refactoring.com/catalog/ A classic.
● Book: Working Effectively with Legacy Code (Michael
Feathers): https://www.amazon.com/Working-Effectively-Legacy-
Michael-Feathers/dp/0131177052
Cautions
● Don’t Change Everything at Once: Large rewrites are risky.
Small steps are your friend.
● Tests as a Guide: Your test suite tells you when your refactoring is
functionally equivalent and when you’ve made a mistake.
Up Next
In Part 2, we’ll look at specific refactoring techniques and the core concept
of dependency injection, a powerful pattern for creating loosely coupled
code.
Step-by-Step Refactoring Towards a
Loosely-coupled Architecture, Part 2
In the previous chapter, we started introducing seams into our code. Now,
let’s learn how to fully leverage those seams to inject dependencies, making
our units gloriously testable.
1. Dependency Injection (DI) in a Nutshell
● Core Idea: A class doesn’t create its dependencies directly. Instead,
they are ‘injected’ from the outside.
● Benefits:
○ Decoupling at its finest
○ Easily swap in test doubles
○ Increased flexibility in our application’s overall structure
2. Types of Dependency Injection
● Constructor Injection: Dependencies are passed through the
constructor. (The most preferred for its clarity and testability)
● Property Injection: Dependencies are set through public
properties. (Used sometimes, but less ideal for testability)
● Method Injection: A dependency is passed directly as a parameter
to a specific method. (Useful at times, but not a whole-system DI
approach)
3. Refactoring Continued: Our Database Example
Let’s assume after Part 1 we have:
public interface ICustomerRepository { ... }
public class CustomerService
{
private readonly ICustomerRepository _repository;
public CustomerService(ICustomerRepository repository) {
_repository = repository;
}
// ... rest of the code using _repository
}
Steps
1. Real vs. Fake: Create a DatabaseCustomerRepository (real)
and maybe a FakeCustomerRepository (for tests). Both
implement ICustomerRepository .
2. Wiring it Up: Somewhere in your application’s startup or
composition logic, you’ll decide which to use:
// Production Mode
var customerService = new CustomerService(new
DatabaseCustomerRepository("..."));
// Test Mode
var customerService = new CustomerService(new
FakeCustomerRepository());
4. Taking It Further
● Manual DI vs. Frameworks: What we did was “manual”
dependency injection. For large projects, consider Dependency
Injection Frameworks which automate object ‘wiring’ based on
configuration. We’ll cover these later!
Additional Resources
● Mark Seemann on Dependency Injection: https://blog.ploeh.dk/
(Search his site for DI content, he’s a thought leader on the topic)
Trade-offs
● Increased Indirection: DI can potentially make the flow of your
application slightly harder to trace for newcomers. Good
documentation helps!
● The Right Amount: Be pragmatic. Not everything must be
injected; finding the balance is key.
Dependency Injection via Method
Parameters: Implementation and
Benefits
While constructor injection is king for testability, there are scenarios where
injecting dependencies directly into a method is the right tool for the job.
1. How Method Injection Works
Instead of the dependency being provided at object construction, it’s passed
as a parameter to the specific method that needs it:
public class ReportGenerator
{
public void GenerateMonthlyReport(ITextFormatter formatter, string
data) {
// ... Logic to generate report, using the 'formatter' ...
}
}
2. When It Makes Sense
● Optional Dependencies: The dependency isn’t always needed for
the object to function in general, only for specific operations.
● Multiple Collaborators: Perhaps the method could work with
different kinds of text formatters (XML, CSV, etc.). Method
injection lets you change this on a call-by-call basis.
● Legacy Code Constraints: If refactoring a constructor is difficult,
method injection can be a way to introduce testability with less
invasive changes.
3. Example: Flexibility in Formatting
// Test Scenario 1
var generator = new ReportGenerator();
generator.GenerateMonthlyReport(new XmlFormatter(), "...");
// Test Scenario 2
generator.GenerateMonthlyReport(new CsvFormatter(), "...");
4. Testability Considerations
● Possible with Method Injection: You can easily pass in test
doubles during your unit tests.
● Slightly Less Explicit: Constructor injection makes a class’s
dependencies very obvious at a glance. Method injection trades
some of that off for flexibility.
5. Trade-offs
● Increased Method Parameter Lists: Overuse can lead to methods
with many parameters. Strike a balance.
● Object State: Constructor Injection encourages immutable objects,
which is good practice overall. With method injection, there might
be a temptation to change the dependency through the method,
leading to less predictable state.
Method Injection’s Place in Your Toolkit
It’s not a replacement for constructor injection, but rather a complementary
technique:
● Constructor Injection: The Default (For core, always-required
dependencies)
● Method Injection: The Supplement (For special-case or
situational dependencies)
Up Next
Let’s look at property injection, a less common pattern, understanding
where it sits in our dependency injection arsenal.
Leveraging Property Injection for
Enhanced Flexibility
Property injection offers a middle ground between the rigidity of
constructor injection and the free-for-all of direct field assignments. Let’s
see when it’s useful and the trade-offs involved.
1. The Mechanics
Instead of a dependency being provided through the constructor, it is set via
a public property:
public class OrderProcessor
{
public ILogger Logger { get; set; }
// ... other code ...
}
2. Use Cases
● Optional Dependencies: Maybe a component can use a logger, but
it’s not strictly essential for its creation.
● Configuration-Driven: Perhaps the decision of which dependency
to inject is made based on a configuration file or external settings.
● Framework Constraints: Some frameworks or libraries might
encourage or even force the use of property injection.
3. The Testability Angle
● It Works: You can easily set up test doubles in your unit tests:
var orderProcessor = new OrderProcessor();
orderProcessor.Logger = new TestLogger();
● But a Word of Caution: Property injection makes the core
dependencies of a class less immediately obvious at a glance.
4. Flexibility vs. Explicitness
● Mutable State: Since the property can be changed after object
creation, it carries the risk of the object being used in an incomplete
or inconsistent state (e.g., if Logger was never set).
● Documentation is Key: Property injection relies more heavily on
good documentation to make the intended usage pattern clear.
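One way to reduce the ‘never set’ risk is to default the property to a harmless null-object implementation. A sketch (NullLogger here is a hand-written class, not a standard library type, and ILogger is assumed to expose a Log method):

```csharp
// A do-nothing implementation, so the property is never null.
public class NullLogger : ILogger
{
    public void Log(string message) { /* intentionally does nothing */ }
}

public class OrderProcessor
{
    // Defaults to the null object; callers may override via the property.
    public ILogger Logger { get; set; } = new NullLogger();

    public void Process()
    {
        Logger.Log("Processing order..."); // safe even if Logger was never set
    }
}
```

This keeps the dependency genuinely optional while removing the inconsistent-state hazard.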
Example: Highly Configurable Component
Imagine a report generator that can log to different targets. Property
injection would let you dynamically switch the logging target at runtime.
Additional Resources
● Dependency Injection in .NET (Book by Mark Seemann):
https://www.amazon.com/Dependency-Injection-NET-Mark-
Seemann/dp/1935182501 A deep treatment of DI concepts, with
examples in C#. Look for the sections on property injection.
When to Choose Property Injection
1. Optional over Mandatory: A good fit when a component truly
doesn’t always need the dependency.
2. Late Configuration: If the decision of what to inject is
determined well after object construction.
3. Being Pragmatic: Adapting to legacy code or framework
restrictions
Generally speaking, it’s often advisable to:
● Prefer Constructor Injection: For core, mandatory dependencies
to maintain clarity and immutability.
● Use Property Injection Sparingly: For those specific scenarios
where its flexibility is the paramount advantage.
Up Next
Constructor injection remains the king of testability. Let’s look at how it
provides the most seamless and controlled way to wire up the dependencies
of our system.
Utilizing Constructor Injection for
Seamless Dependency Management
Constructor injection is the gold standard for creating testable units. It
provides clarity, promotes immutability, and gracefully guides you towards
better design.
1. The Essence
● Dependencies in the Constructor: A class clearly declares its
dependencies by requiring them as parameters in its constructor.
● Inversion of Control: The class doesn’t reach out and create its
dependencies; it expects them to be provided.
Example
public class OrderProcessor
{
private readonly IPaymentGateway _paymentGateway;
private readonly ILogger _logger;
public OrderProcessor(IPaymentGateway paymentGateway, ILogger
logger)
{
_paymentGateway = paymentGateway;
_logger = logger;
}
// ... other methods ...
}
2. Why It Rules for Testability
● Explicit Contracts: You can’t even create an instance of
OrderProcessor without providing suitable implementations of its
dependencies. This prevents accidental misuse.
● Test Doubles Made Easy: During testing, you pass in your test
doubles:
var orderProcessor = new OrderProcessor(new FakePaymentGateway(),
new TestLogger());
● Immutability: By assigning dependencies in the constructor and
making them readonly , you encourage creating objects that don’t
change their internal dependencies mid-flight, leading to more
predictable behavior.
3. Design Benefits Beyond Testing
● Loose Coupling: The OrderProcessor works with any
IPaymentGateway , making it more reusable.
● Early Error Detection: If you attempt to create an
OrderProcessor at runtime with invalid dependencies, you’ll get
immediate feedback, not a mysterious failure later on.
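Guard clauses in the constructor make that early failure explicit. A sketch, reusing the OrderProcessor from above:

```csharp
using System;

public class OrderProcessor
{
    private readonly IPaymentGateway _paymentGateway;
    private readonly ILogger _logger;

    public OrderProcessor(IPaymentGateway paymentGateway, ILogger logger)
    {
        // Fail fast at construction time, rather than with a
        // NullReferenceException deep inside some method later.
        _paymentGateway = paymentGateway ?? throw new ArgumentNullException(nameof(paymentGateway));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }
}
```

Any attempt to build the object with a missing dependency now fails immediately, with a clear message naming the offending parameter.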
4. Potential “Drawbacks” (And how to address them)
● Longer Constructor Parameter Lists: A sign that a class might
have too many responsibilities (consider refactoring).
● Wiring it All Together: In large projects, you’ll likely want a
Dependency Injection framework to automate object creation –
we’ll cover those soon!
Additional Resources
● Blog post “Constructor Injection” by Martin Fowler:
https://martinfowler.com/articles/injection.html#ConstructorInjection
The Path of Least Resistance
Constructor injection, when possible, should be your default choice due to
its advantages in both testability and overall design.
Up Next
Having a solid grasp of DI techniques is great. But how do we scale this to
large applications? Let’s explore Dependency Injection Frameworks that
streamline this process for us.
Exploring Dependency Injection
Frameworks: Options and
Considerations
We’ve learned how to do DI ‘manually’. Now, let’s see how frameworks
automate the wiring-up of our components, making it cleaner, especially for
large projects.
1. Why Use a DI Framework?
● Reduced Boilerplate: Instead of manually passing dependencies
everywhere, you configure the framework, and it instantiates your
objects with the right collaborators.
● Object Lifecycles: Frameworks can manage whether your
dependencies are singletons (one instance reused), transient (new
instance each time), or something more elaborate.
● Centralized Configuration: You define how components are
connected in one place, improving visibility.
● Advanced Features: Some frameworks offer interception,
convention-based wiring, and more.
2. Popular Options in .NET
● Microsoft.Extensions.DependencyInjection: Built-in, lightweight,
good for getting started.
● Ninject: Mature and flexible, configured with code-based binding
modules (development has slowed in recent years).
● Autofac: Very powerful, feature-rich, popular choice for large
applications.
● Lamar: Focus on speed and clarity, gaining traction.
● Many Others: DryIoc, StructureMap, Castle Windsor, etc.
3. Key Considerations When Choosing
● Performance: For most applications, any major framework is fine.
If you have extreme performance needs, benchmark!
● Features vs. Simplicity: Do you need advanced features (like
interception), or is a leaner framework better?
● Configuration Style: Some use code-based setup, others XML, and
others conventions. Find one that suits your project’s style.
● Community & Documentation: A healthy community means
better support and learning resources.
4. Basic Example (Conceptual)
Let’s assume we’re using a fictional, simple DI framework
// Configuration
container.Register<IPaymentGateway, StripePaymentGateway>();
container.Register<ILogger, FileLogger>();
// Usage (somewhere in your app's startup)
var orderProcessor = container.Resolve<OrderProcessor>();
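With the built-in Microsoft.Extensions.DependencyInjection package, the equivalent looks like this (StripePaymentGateway and FileLogger are the same hypothetical implementations as above):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Configuration: register interfaces against concrete implementations.
var services = new ServiceCollection();
services.AddSingleton<IPaymentGateway, StripePaymentGateway>(); // one shared instance
services.AddTransient<ILogger, FileLogger>();                   // new instance per resolve
services.AddTransient<OrderProcessor>();                        // concrete type; its dependencies are auto-wired

// Usage (at application startup):
var provider = services.BuildServiceProvider();
var orderProcessor = provider.GetRequiredService<OrderProcessor>();
```

Note that OrderProcessor itself is registered too: the container inspects its constructor and supplies the registered IPaymentGateway and ILogger automatically.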
5. Considerations Beyond the Basics
● Integration with ASP.NET Core: Modern web frameworks often
have their own DI system that might interact with your chosen
framework.
● Design for DI: Using DI effectively influences how you structure
your classes long-term.
● Don’t Overcomplicate: DI is a tool, not a religion. Use it where it
makes sense.
Additional Resources
● Comparison of DI frameworks on StackOverflow:
https://stackoverflow.com/questions/130794/which-net-dependency-
injection-frameworks-are-worth-looking-into
● Autofac Documentation: https://autofac.org/ (Documentation for
other frameworks can be easily found)
Up Next
Saying goodbye to brittle tests and hello to the power of mocks! Let’s learn
how to craft mock objects to isolate units for focused testing.
Unleashing the Power of Mocking
Frameworks: Essential Tools for Unit
Testing
Mocks are the stand-ins, the stunt doubles of the software world. They
allow us to isolate our units and test them with laser-like precision.
1. What is a Mocking Framework?
● Libraries that Generate Test Doubles: Mocking frameworks (like
Moq for C#) let you create fake objects that implement interfaces on
the fly.
● Control!: You dictate how these fake objects should respond to
method calls, what values they return, and more.
● Focus: Mocking frameworks let you shift the spotlight entirely onto
the unit you’re testing, removing unpredictable dependencies.
2. Why Mocks are a Game-Changer
● Dealing with Unavailable Dependencies: A database might not be
ready, a web service down. Mocks don’t care.
● Test ‘Bad’ Paths: Force your mock to throw an exception and see
how your unit reacts.
● Performance: A mock object responding with canned data is
blazing fast compared to a real external system.
3. Basic Example Using Moq (Syntax may vary slightly)
// Arrange
var mockLogger = new Mock<ILogger>();
mockLogger.Setup(m => m.Log(It.IsAny<string>())); // We don't care
about the input for this test
var service = new CustomerService(mockLogger.Object);
// Act
service.PlaceOrder(someOrder);
// Assert
mockLogger.Verify(m => m.Log(It.Is<string>(s => s.StartsWith("Order
placed"))), Times.Once());
4. Key Concepts
● Mocks vs. Stubs vs. Fakes: Subtle distinctions exist, but ‘mock’ is
often used as the umbrella term.
● Setup: Preparing the mock’s behavior in advance (like in the
example).
● Verification: Checking if methods on the mock were called as
expected.
5. When to Reach for a Mock
● Slow dependencies
● Unavailable systems during development
● Error cases that are hard to trigger with the real dependency
● Isolating the true functionality you’re testing
Additional Resources
● Moq Documentation:
https://github.com/moq/moq4/wiki/Quickstart
Cautions: Don’t Overdo It
● Maintainability Matters: Overly complex mock setups lead to
brittle tests.
● Test the Real Thing (sometimes): Integration tests with real
dependencies also have their place in a healthy testing strategy.
Up Next
Let’s get practical! We’ll dive into specific scenarios and Moq techniques to
create sophisticated mocks for even tricky dependencies.
Crafting Mock Objects Using Moq:
Advanced Techniques, Part 1
We’ve grasped the basics of mocking. Now, let’s unlock more intricate
scenarios to make our mocks even more effective.
1. Matching Arguments
● Going Beyond “Any”: It.Is<string>(s => s.Contains("error")) –
Match only strings containing ‘error’.
● Matchers for Precision: It.IsRegex("[a-z]{10}") – complex
matching when needed.
● Custom Matchers: For really bespoke scenarios, you can even
create your own matching logic.
2. Mocking Multiple Return Values
● Setup Sequences:
mockRepo.SetupSequence(m => m.GetById(123))
.Returns(aCustomer)
.Returns((Customer)null);
● Behavior Change over Time: First call returns a customer,
subsequent calls return null (simulating something not being found).
3. Mocking Properties
● Getters & Setters:
mockService.Setup(m => m.Status).Returns("OK");
mockService.SetupSet(m => m.LastUpdated = DateTime.Now);
● Testing Interactions: Mocking properties is helpful for both setting
up test values and later verifying if your unit under test modified
them.
4. Mocking Void Methods
● Callbacks:
int callCount = 0;
mockMailer.Setup(m => m.SendMail(It.IsAny<string>(), It.IsAny<string>()))
.Callback(() => callCount++);
// ... your test ...
Assert.AreEqual(2, callCount); // Verify the void method was called twice
● Indirect Verification: Mocking frameworks often can detect calls
to void methods even without a setup, making them ideal for
interaction testing (which we’ll cover further soon!).
5. Partial Mocks
● Mocking Some Parts, Real Implementation for Others: Useful
when you have a class where only a specific method needs to be
replaced with a mock version.
● Use with Care: They can make tests a bit more ‘magical’ as real
code is silently executed. Consider if refactoring the class under test
is a better option.
Example: Mocking a File I/O Dependency
var mockFileSystem = new Mock<IFileSystem>();
mockFileSystem.Setup(fs => fs.FileExists(It.IsAny<string>())).Returns(true);
mockFileSystem.Setup(fs => fs.ReadAllText("data.txt")).Returns("test content");
var fileProcessor = new FileProcessor(mockFileSystem.Object);
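For context, a minimal FileProcessor that would consume this mocked IFileSystem might look like the following sketch. The interface shape mirrors the mocked calls above, but the Process method and its null-on-missing behavior are assumptions for illustration, not part of any real API:

```csharp
// Hypothetical unit under test that depends on IFileSystem.
public interface IFileSystem
{
    bool FileExists(string path);
    string ReadAllText(string path);
}

public class FileProcessor
{
    private readonly IFileSystem _fs;
    public FileProcessor(IFileSystem fs) => _fs = fs;

    // Returns the file's contents, or null when the file is missing.
    public string Process(string path)
        => _fs.FileExists(path) ? _fs.ReadAllText(path) : null;
}
```

Because the dependency is an interface, the test never touches the real disk; the mock decides what "exists" and what the file "contains".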
Additional Resources
● Moq “cheat sheet”:
https://github.com/Moq/moq4/wiki/Quickstart#cheat-sheet
Trade-offs
● Complexity vs. Readability: Complex mocks can become hard to
understand. Strive for a balance between flexibility and clarity.
Up Next
In Part 2, we’ll tackle raising events from mocks, testing how classes
interact, and strategies to avoid mocking pitfalls.
Crafting Mock Objects Using Moq:
Advanced Techniques, Part 2
1. Raising Events from Mocks
● Why It Matters: Testing how your unit reacts to events raised by
its dependencies.
● Raise to the Rescue:
var mockMailer = new Mock<IMailer>();
var service = new NotificationService(mockMailer.Object); // subscribe before raising
mockMailer.Raise(m => m.MessageSent += null, new MessageEventArgs("test"));
// ... Test how 'service' reacts to the raised event
2. Mocking Out-Parameters
● Scenario: When the method you’re mocking returns results via out
parameters. Moq hands back whatever value the out variable held at
setup time:
Customer customer = new Customer();
mockRepo.Setup(m => m.TryGetById(It.IsAny<int>(), out customer))
.Returns(true);
3. Verifying Calls in a Specific Order
● Sequence Matters: Especially when your unit interacts with the
same mock multiple times with a significant order.
● Moq and Sequences: Moq lets you chain setups and then enforce
sequences during verification (details on this can get a bit intricate,
so refer to Moq documentation).
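As a sketch of what ordered verification can look like, Moq's MockSequence combined with strict mocks enforces call order. This assumes the Moq NuGet package and hypothetical IInventory / IPaymentGateway interfaces and an order variable, none of which come from earlier examples:

```csharp
// Sketch only: requires the Moq package; interfaces are illustrative.
var sequence = new MockSequence();
var mockInventory = new Mock<IInventory>(MockBehavior.Strict);
var mockGateway = new Mock<IPaymentGateway>(MockBehavior.Strict);

// With strict mocks, calls made out of this order fail the test.
mockInventory.InSequence(sequence).Setup(i => i.IsInStock(order)).Returns(true);
mockGateway.InSequence(sequence).Setup(g => g.Charge(order)).Returns(true);
```

Note that MockSequence only enforces ordering on strict mocks; with the default loose behavior, out-of-order calls are silently allowed.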
4. State-based vs. Interaction-based Testing
● State-based: Setting up return values, verifying properties. (“Did it
return the right result?”)
● Interaction-based: Verifying calls on mocks. (“Did it call the right
methods on collaborators?”)
● A Healthy Mix: Both are vital tools in your toolbox. Choose the
style that best focuses the intent of your test.
Example: Testing a Complex Interaction
Imagine an OrderProcessor that talks to a payment gateway and an
inventory system:
// ... Test setup
// Example only; you might want to mock different outcomes
mockPaymentGateway.Setup(x => x.ProcessPayment(It.IsAny<Order>())).Returns(true);
// Unit under test
orderProcessor.ProcessOrder(order);
// Verify the expected workflow
mockPaymentGateway.Verify(x => x.ProcessPayment(order), Times.Once);
mockInventoryService.Verify(x => x.ReduceStock(order.Items),
Times.Once);
Additional Resources
● Official Moq Documentation (Quickstarts and examples):
https://github.com/moq/moq4/wiki/Quickstart
5. When Mocking Gets Tough
● Overly Complex Mocks: A sign your code might need refactoring
to be more testable.
● Tooling Limits: Extremely intricate mocking might be possible, but
the test itself becomes very brittle.
● Stepping Back: Consider if a slightly coarser-grained integration
test is needed in some cases, along with your unit tests.
Up Next
Mocking is powerful, but it’s easy to fall into the trap of overuse. Let’s
discuss how to balance mocking effectively and spot the warning signs of
mock abuse.
Balancing State-based and Interaction
Testing Approaches
Mocks, like any tool, are best used strategically. Let’s understand the
nuances between state-based and interaction-based testing:
1. State-based Testing
● Focus: Ensuring the output and side-effects of a method are as
expected.
● Techniques: Setting up return values for mock methods, or
verifying properties on mocks.
● Example:
mockRepository.Setup(x => x.GetById(123)).Returns(new Customer());
// ... Test if the returned customer object is used correctly
2. Interaction-based Testing
● Focus: Verifying whether the unit under test collaborates with its
dependencies correctly.
● Techniques: mock.Verify() calls to ensure methods on mocks were
invoked in the expected way, with the right arguments, etc.
● Example:
mockMailer.Verify(x => x.SendWelcomeEmail(It.IsAny<Customer>()),
Times.Once);
3. When to Favor Each
● State-based:
○ The core logic of the function produces a result
○ You care about the values flowing through the system
● Interaction-based:
○ Behavior with external collaborators is essential
○ Ensuring the unit under test “orchestrates” its dependencies
correctly.
4. The Dangers of Over-Emphasis
● Excess State-based: Brittle tests, as they get attached to the
specific implementation details of the method.
● Excess Interaction-based: Can lead to micro-managing your code,
making it resistant to refactoring even when the external behavior
remains consistent.
Example: Order Processing Revisited
● State-based: Does ProcessOrder correctly set the ‘status’ property
on the Order object?
● Interaction-based: Does ProcessOrder call the payment gateway
and then update inventory? Was the correct order data passed to
both?
Additional Resources
● Martin Fowler on “Mocks Aren’t Stubs”:
https://martinfowler.com/articles/mocksArentStubs.html (Delves
deeper into the philosophical distinctions)
5. The Wisdom of Hybrid Tests
Most real-world unit tests employ a mix of both techniques. The art is in
striking the right balance:
● Start with the Outcome: What’s the core thing your unit is
supposed to do? This often suggests if state-based is your starting
point.
● Collaborator Workflow: Are there crucial interactions that must
happen in a specific way? Layer on interaction testing.
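Putting those two steps together, a hybrid test might look like the following sketch. The service, order, and mock fields are hypothetical stand-ins, and the NUnit-style assertions assume the test class wiring shown elsewhere in the book:

```csharp
[Test]
public void PlaceOrder_ReturnsSuccess_AndSendsConfirmation()
{
    // State-based setup: a canned customer flows through the system.
    mockRepo.Setup(r => r.GetById(42)).Returns(new Customer());

    var result = service.PlaceOrder(order);

    // Start with the outcome...
    Assert.IsTrue(result.Succeeded);
    // ...then layer on the one interaction that must happen.
    mockMailer.Verify(m => m.SendOrderConfirmation(order), Times.Once);
}
```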
Up Next:
Let’s see how testing object interactions can get more complex, and
strategies for dealing with intricate scenarios.
Testing Object Interaction: Strategies
and Examples
When units don’t exist in isolation, we need to verify they communicate
correctly. Mocking frameworks shine in this domain.
1. The Essence of Interaction Testing
● Shift of Perspective: From what a unit returns to how it uses its
dependencies.
● Why It Matters: Incorrect interactions are a common source of
bugs, even when units individually seem to work.
2. Key Strategies
● Verifying Calls: Did your unit call the ‘right’ method on a
dependent object? With the correct arguments?
mockPaymentService.Verify(m => m.Charge(order, customer.CreditCard),
Times.Once);
● Sequences (When Order Matters):
// Example - Ensure inventory check happens BEFORE processing
payment
var sequence = new MockSequence();
mockInventory.InSequence(sequence).Setup(x => x.IsInStock(order));
mockPaymentGateway.InSequence(sequence).Setup(x => x.Charge(order));
● Indirect Observation: Sometimes you don’t mock a direct
collaborator. Instead, you test if some downstream result occurred:
○ Did a database update happen after a call chain?
○ Did a message get placed on a queue for asynchronous
processing?
3. Example Scenarios
● Web Controller: Ensure a controller handling a form submission
interacts correctly with a service or repository layer to save data.
● Service with Multiple Dependencies: Verify an OrderService
uses InventoryManager to validate stock, a PaymentProcessor to
charge, and then a NotificationService to send confirmation
emails.
● Workflows: Test a process comprised of multiple steps, ensuring
objects pass data along and trigger the next stage correctly.
4. Considerations
● Level of Detail: How deep do you go? Mocking every interaction
leads to brittle, overly-specific tests. Focus on the most critical
points of collaboration.
● Partial Mocks: Can be useful for hybrid scenarios where you want
a mix of real and mocked behavior on a single object.
● Test Intent: What’s the purpose of the interaction you’re verifying?
Keep this clear in your test names.
Example: Order Processing Workflow
[Test]
public void ProcessOrder_ValidatesStock_ChargesCustomer_SendsEmail()
{
// Setup mocks, potentially with state-based setups for return values
// Act
orderService.Process(order);
// Assert interactions
mockInventory.Verify(i => i.ReduceStock(order.Items), Times.Once);
mockPaymentGateway.Verify(p => p.Charge(order.Total), Times.Once);
mockMailer.Verify(m => m.SendOrderConfirmation(order),
Times.Once);
}
Up Next
Mocking can be a superpower but using it unwisely leads to test headaches.
Let’s learn how to recognize the pitfalls of mock abuse and best practices
for keeping your tests healthy.
Identifying and Avoiding Mock Abuse:
Best Practices
Mocking is a potent tool but must be wielded with care. Otherwise, you end
up with tests that are brittle, obscure the intent, and might even give you a
false sense of security.
1. Telltale Signs of Mock Abuse
● Tests Too Tightly Coupled to Implementation: If minor code
changes break many tests, you’re likely mocking excessively.
● Tests Specifying ‘How’ instead of ‘What’: Overly elaborate
sequences, verifying low-level interactions – your tests shouldn’t
mirror your production code line-by-line.
● Difficulty Setting Up Mocks: Complex chains of setup calls
indicate your real code might have testability problems.
● TDD Becomes Impossible: If you struggle to write the test before
the code due to mocks, it’s a red flag.
● Tests Lose Readability: Tests littered with obscure mock setups
and verifications are hard to understand and maintain.
2. Strategies for Prevention
● Mock Roles, Not Implementations: Focus on the contract
(interface) of the dependency, not its internal details.
● Test Behavior, Not Calls: Start with state-based testing. Isolate
interaction testing to the most critical interactions.
● Don’t Mock What You Don’t Own: Mocking third-party libraries
can be necessary, but do it sparingly. Prefer writing a thin wrapper
with an interface you control for testing purposes.
● Refactor for Testability: If mocking feels painful, it’s usually a
sign that your code is too tightly coupled.
● Consider Coarser-grained Tests: Sometimes a slightly higher-level
integration test, in conjunction with unit tests, provides better
value.
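The "thin wrapper" idea from the list above can be sketched like this. ThirdPartySmtpClient stands in for any vendor class you don't control; all names here are assumptions for illustration:

```csharp
// Stand-in for a vendor class you don't control (hypothetical).
public class ThirdPartySmtpClient
{
    public void SendMessage(string to, string subject, string body) { }
}

// The thin wrapper: an interface you own...
public interface IMailSender
{
    void Send(string to, string subject, string body);
}

// ...and a trivial adapter that delegates straight through.
public class SmtpMailSender : IMailSender
{
    private readonly ThirdPartySmtpClient _client = new ThirdPartySmtpClient();
    public void Send(string to, string subject, string body)
        => _client.SendMessage(to, subject, body);
}
```

Production code depends only on IMailSender, so tests mock the interface you control instead of the vendor's concrete class.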
3. The ‘Spurious Accuracy’ Trap
Overly mocked tests might pass even when the real system would fail due
to integration issues.
● Balance is Key: A mix of unit, integration, and even end-to-end
tests is a healthy strategy.
● Monitoring: Combine testing with real-world monitoring to catch
issues that slip through your test net.
Example: Over-Mocking an OrderService
// Bad example!
mockRepo.Setup(r => r.GetCustomer(order.CustomerId)).Returns(customer);
mockInventory.Setup(i => i.IsInStock(order.Items)).Returns(true);
mockRepo.Setup(r => r.Save(order));
// ... more setup for payment gateway, notifications, etc.
// Test becomes nearly impossible to understand and heavily tied to the implementation
Additional Resources
● “Mocking Isn’t Testing” by JB Rainsberger:
https://blog.thecodewhisperer.com/permalink/mocking-isnt-testing
(Thought-provoking article)
● StackOverflow discussions on recognizing mock abuse:
https://stackoverflow.com/search?q=recognizing+mock+abuse
4. Finding the Sweet Spot
Mocking is immensely valuable when used judiciously.
● Keep Tests Focused on Outcomes: Mocks help you achieve them.
● Be Willing to Refactor: Improve code design in tandem with your
test suite.
Up Next
We wrap up our deep dive into testing by addressing a practical question:
who, in a software team, should be responsible for writing our precious unit
tests?
Defining Roles: Who Should Be
Responsible for Writing Tests in Your
Team?
While testing is everyone’s responsibility (in the broad sense), questions
arise: Who creates the unit tests? Should developers do it all, or are there
other roles involved?
1. The Case for Developers Writing Unit Tests
● Deep Understanding: The people writing production code often
have the best insight into the nuances to target with tests.
● Accountability: It encourages taking ownership of code quality and
maintainability from the outset.
● TDD Rhythm: Test-Driven Development works best when writing
tests is a seamless part of the development flow.
2. When Other Roles Get Involved
● Testers/QA Specialists:
○ Bring a dedicated focus on testing edge cases and breaking
things creatively.
○ Can contribute broader integration tests.
● DevOps Collaboration: In environments with strong CI/CD,
there’s often a shared effort in maintaining testing infrastructure and
pipelines.
● Special Expertise: Highly complex domains might necessitate
bringing in developers with very specific testing skills for
particularly intricate parts of the system.
3. Evolving with Your Team
● Small Teams: Often, developers wear most of the hats, including
testing.
● Growing Teams: As teams scale, specialized testers start to play a
more significant role.
● Project Maturity: A new, experimental project might be
developer-driven, whereas a safety-critical system might warrant a
dedicated QA team.
● Continuous Adaptation: Regularly assess what works best for
your team’s dynamics and project needs.
4. Best Practices, Regardless of Role
● Code Review for Tests: Just like production code, tests benefit
from review by peers.
● Shared Standards: Coding styles, naming conventions, etc., for
tests keep the project cohesive.
● Focus on the ‘Why’ Not Just the ‘How’: Ensure everyone
understands the testing philosophy to avoid rote, low-value tests.
● Skill Development: Provide testing training across the team, even
if certain roles specialize more in it.
Additional Resources
● Martin Fowler on “Who Writes Tests”:
https://martinfowler.com/articles/whoTests.html
5. Anti-Patterns to Avoid
● Throwing Code ‘Over the Wall’ to QA: Creates silos, not
collaboration.
● Seeing Testing as ‘Less Important’: Testing is a core development
skill.
● No Room for Experimentation: Allow processes to evolve as your
team does.
Conclusion
There’s no single, perfect answer. The best approach fosters a culture where
testing is valued, well-understood, and a collaborative effort tailored to your
team’s specific structure and your project’s nature.
Conclusion
Throughout this book, we’ve embarked on an empowering journey. You’ve
transformed from someone tentatively approaching automated testing to a
developer armed with the knowledge and tools to write robust, maintainable
C# code.
Key Takeaways
● Testing as a Core Skill: It’s not an afterthought; it’s interwoven
into how you design and build software.
● Confidence in Change: Refactoring, extending, and updating code
without fear, knowing your tests form a safety net.
● Cleaner Designs: The need for testability naturally encourages
loose coupling, interfaces, and modularity.
● Beyond Just Unit Tests: Understanding the strategic role of
integration tests and the larger testing landscape.
● The Joy of the ‘Green Tick’: The satisfaction of well-crafted tests
passing, signaling that your system is behaving as intended.
The Adventure Continues
This book is not the destination, but a springboard. Here’s how to keep
growing:
● Practice, Practice, Practice: Build testing muscle memory. Start
with small projects, then apply it at work.
● Community: Engage with online forums and local meetups. Learn
from others, and share your own knowledge.
● Never Stop Questioning: Are there better ways to test? Can this
code be made more test-friendly?
● Expand Your Toolbox: Explore other testing techniques like
property-based testing or mutation testing as you become more
advanced.
A Call to Action
Embrace testing as your ally. Witness how it elevates your code quality,
your confidence, and ultimately the value you deliver as a software
developer. Be the champion of testing within your team, spreading the
knowledge and reaping the collective rewards.
You’ve Got This
The journey may have had its challenges, but you’ve leveled up. The world
of reliable, well-tested C# applications now lies open before you. Go forth
and code with the unwavering confidence that comes from a solid suite of
tests!