
Let's connect socially:

Facebook: facebook.com/dotnetcurry
Twitter: twitter.com/dotnetcurry
LinkedIn: linkedin.com/suprotimagarwal
GitHub: github.com/dotnetcurry

LETTER FROM THE EDITOR

May seems like Christmas for geeks...

I'm thick in the middle of developer conference season. After attending F8, Google IO and MSBuild 2019 virtually, I decided to physically attend the final leg of the Microsoft Ignite Tour in Mumbai this month.

Although I ended up having consumed too much information, these conferences do shape up the content for future editions of this magazine. Hence I make it a point to attend them nevertheless and get a grip on the roadmaps and advances for developers.

Here are my top "developer" picks and announcements from the Build 2019 conference that I am excited about.

> .NET 5.0 was announced and will be available in 2020.

> ML.NET v1.0 was released. ML.NET is an open source framework for machine learning that runs on Windows, Linux, MacOS.

> Visual Studio Intellicode for AI assisted coding and VS Remote Development.

> Azure Speech Service with an improved speech recognition algorithm.

> Windows Terminal with WSL2.

> Gaming, Mixed Reality and Azure IoT

> DevOps and Cloud Native

> Cognitive Services.

What excited the developer in you and what would you like us to focus on? Mail me your views at suprotimagarwal@dotnetcurry.com or tweet them to @dotnetcurry.

Suprotim Agarwal (@suprotimagarwal)
Chief Editor, DNC Magazine

Contributing Authors: Alex Basiuk, Damir Arh, Daniel Jimenez Garcia, Dobromir Nikolov, Filip Woj, Pavel Kutakov, Yacoub Massad

Technical Reviewers: Damir Arh, Daniel Jimenez Garcia, Tim Sommer, Yacoub Massad

Next Edition: July 2019

CREDITS

Editor In Chief: Suprotim Agarwal (suprotimagarwal@dotnetcurry.com)
Art Director: Minal Agarwal

Copyright @A2Z Knowledge Visuals Pvt. Ltd.

Disclaimer: Reproductions in whole or part prohibited except by written permission. Email requests to "suprotimagarwal@dotnetcurry.com". The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied. Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. 'DNC Magazine' is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft corporation in the United States and/or other countries.
CONTENTS

06  REACTIVE AZURE SERVICE BUS MESSAGING WITH AZURE EVENT GRID

16  ASP.NET CORE VUE CLI TEMPLATES

21  ASYNCHRONOUS PRODUCER CONSUMER PATTERN IN .NET C# APPLICATIONS

42  SHIPPING PSEUDOCODE TO PRODUCTION

58  CURRENT STATE OF WEB DEVELOPMENT FOR .NET DEVELOPERS

82  ZERO DOWNTIME DEPLOYMENT FOR ASP.NET

92  STATE IN MULTI-THREADED C# APPLICATIONS
AZURE

Filip Woj

REACTIVE AZURE SERVICE BUS MESSAGING WITH AZURE EVENT GRID
Azure Service Bus has long become a staple for distributed systems running on Microsoft Azure cloud. It is a high-throughput, reliable and easy to use messaging service that can be a backbone for your scalable solutions.

In this tutorial, we'll explore Service Bus integration with Azure Event Grid, and the advantages and scenarios it brings to the table.



Azure Event Grid integration
Integration with Event Grid is a fairly new functionality of the Service Bus, as this feature was launched in
the autumn of 2018.

For the time being, it is only available for the Premium tier subscription for Service Bus - however if you are
running any kind of reliable real-life production workflow, you probably already are on the premium tier,
just because it is so much more reliable than Basic/Standard levels.

Event Grid integration adds a whole new dimension to architecting solutions against Service Bus, as it
allows you to create message listeners or receivers that do not need to maintain a constant connection to
the Service Bus.

Currently, Service Bus can raise an Event Grid event in two scenarios:

• active message available on a queue / topic subscription, without a message receiver connected (Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners event type)

• dead letter messages available (Microsoft.ServiceBus.DeadletterMessagesAvailable event type)

More event types are expected to be added in the future.

How does it differ from traditional use cases?


In a typical Service Bus application integration pattern, for both queues (one-to-one messaging) and topic/
subscriptions (one-to-many messaging), receivers maintain an open TCP connection with the Service Bus
and listen to incoming messages using the AMQP protocol.

On rare occasions, if full bi-directional TCP communication cannot be established, the client may also
continuously poll (long polling) the Service Bus for active messages.

Despite the fact that it is possible to have the receiver polling for messages, the best mental model for Service Bus usage is to treat it as a de facto "push model" that guarantees fast message delivery over an existing TCP connection.
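For reference, here is roughly what such a traditionally connected listener looks like. This is a minimal sketch using the Microsoft.Azure.ServiceBus SDK; the connection string and entity names are placeholders and error handling is omitted.

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class TraditionalListener
{
    static async Task Main()
    {
        // Placeholder connection string, topic and subscription names
        var client = new SubscriptionClient(
            "<service-bus-connection-string>", "my-topic", "my-subscription");

        // The SDK keeps an AMQP connection open and pushes messages to this handler
        client.RegisterMessageHandler(
            async (message, token) =>
            {
                Console.WriteLine($"Received: {Encoding.UTF8.GetString(message.Body)}");
                await client.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                MaxConcurrentCalls = 1,
                AutoComplete = false
            });

        Console.ReadLine(); // keep the process (and the connection) alive while listening
        await client.CloseAsync();
    }
}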

The listener may connect early and be ready for any message that the producer would send, or it may
choose to connect later, and simply pick up the active (pending) messages then - unprocessed messages are
not lost, unless they expire.

This simple sequence flow is shown in Figure 1.

Figure 1: Typical Service Bus Architecture

Such an approach is suitable for most distributed system architectures, but the problem with it is that the listener, should it disconnect from the Service Bus, doesn't really know when new messages show up. So, in order to be able to consume messages quickly, it normally ends up having to reconnect to the Service Bus straight away.

Azure Event Grid integration opens up a new possible integration path into Service Bus. Instead of the
usual "push model", it flips the script and allows us to program against the Service Bus using a "pull model".

When using the Azure Event Grid integration, the receiver, on both queue and topic/subscription side of
things, doesn't need to maintain an active TCP connection to the Service Bus anymore. Instead, it exposes a
hook for the Azure Event Grid - this might be either HTTP webhook or an Event Hub hook.

For simplicity, we'll only focus on webhooks in this article.

The webhook is invoked by Azure Event Grid as soon as there is a message-related activity detected by the
Service Bus. This gives the receiver a chance to connect to the Service Bus and handle the message.

We already called this approach a "pull model" but perhaps a more fitting name would be a "reactive
system". The practical consequence is that we are able to handle a message on Service Bus, without having
to maintain a constant connection to the Service Bus.



Just like with any software feature or functionality, you probably will come up with some of your own use
cases where this behavior would come in handy - after all, it all depends on the context.

However, there are a few benefits of Azure Event Grid integration into Service Bus that immediately jump to
mind, so let's briefly look at those.

Benefits of Azure Event Grid Integration into Service Bus
#Benefit 1: On demand message receivers
As already explained, the fact that you do not have to maintain a permanent connection to Service Bus from your listener worker can be very valuable.

It actually can be helpful in two ways.

First of all, you could apply certain cost or infrastructure saving measures by shutting down the listener
processes when they are not in use, and only bringing them back online once Event Grid notifies you of new
pending messages.

This is shown in a sequence diagram in Figure 2.

Figure 2: Service Bus Architecture with Event Grid

You can immediately see the difference compared to our previous, more traditional diagram (Figure 1),
where a listener either had to be online all along, or it had to guess when a connection to Service Bus
might be necessary.

In this case however, as soon as a new message that has no active listener shows up on a Service Bus
queue or a topic subscription, Service Bus raises an Event Grid event about that.

Event Grid would then deliver that information via the webhook to something that we call a "Message Watchdog" (see Figure 2). This is a custom component that we explicitly introduce to listen to Azure Event Grid events and perform certain actions based on them.

“Message Watchdog” is of course a generic name and depending on your system architecture, the watchdog
process could take different shapes and forms – it is up to you to structure your system landscape in a way
that suits your business and technical needs.

For example, the watchdog functionality could be fulfilled by a monitoring system that you always have
online. It could also be fulfilled by another part of the service landscape, which contrary to the message
listener isn't getting shut down, such as an API server. It could also be completely serverless, i.e. an Azure
Function.

In our simple example, the watchdog is then responsible for bringing the message consumer (listener) into the picture, by notifying it that it is time to connect to Service Bus to pick up the message, provided the consumer is already alive. There's also the possibility that the watchdog has to orchestrate the provisioning of the message consumer if it's actually offline or deprovisioned.
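To make this a bit more concrete, here is a minimal sketch (using the Microsoft.Azure.ServiceBus SDK, with placeholder names) of what such an on-demand consumer could look like: it connects, drains whatever is pending and then disconnects again. How the watchdog actually invokes it - an HTTP call, a queue trigger, provisioning a new instance - depends entirely on your architecture.

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class OnDemandConsumer
{
    // Called when the watchdog learns (via Event Grid) that active messages are waiting
    public static async Task DrainPendingMessagesAsync(
        string connectionString, string topicName, string subscriptionName)
    {
        var entityPath = EntityNameHelper.FormatSubscriptionPath(topicName, subscriptionName);
        var receiver = new MessageReceiver(connectionString, entityPath, ReceiveMode.PeekLock);

        // Keep receiving until the subscription is empty, then disconnect again
        Message message;
        while ((message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(5))) != null)
        {
            Console.WriteLine($"Processing: {Encoding.UTF8.GetString(message.Body)}");
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }

        await receiver.CloseAsync();
    }
}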

The second way this type of Event Grid powered integration into Service Bus helps is that it allows you to bypass the connection limits of Service Bus namespaces.

At the moment, in AMQP mode, the limit is 5000 concurrent connections per Service Bus namespace. This
might seem like a lot, but that all depends on the system architecture.

It could be - and I have seen this in some enterprise-integration scenarios - that the number of connected listeners can be really high. This could be caused by large scale-out situations, or by having many specialized message types that can only be handled by a specific type of listener. It could also be - and this is something to definitely avoid - found in applications created for internal enterprise use, allowing client applications to connect to the Service Bus namespace directly.

#Benefit 2: Monitoring message receivers


Another useful use case for the Event Grid integration is to combine the traditional "push model" of Service Bus usage with an Event Grid webhook notification used as a reliable self-healing monitoring feature.

In such an architecture, we'd still use Service Bus the same way as before (as shown in Figure 1), but we'd leverage Event Grid to notify us that there are active messages without a listener.

This could happen when, for example, our message receiver crashes or loses network connectivity. In such
situations, Event Grid would notify our Message Watchdog, which could then spin up a new instance of the
receiver, thus equipping us with a nice self-healing system.

This process is shown in Figure 3.



Figure 3: Service Bus Architecture with Event Grid

#Benefit 3: Dead letter queue monitoring


Finally, the Event Grid integration provides an excellent way for keeping tabs on the dead letter queue - be it for a regular Service Bus queue, or for a topic subscription (since it will have its own dedicated dead letter queue).

Since the dead letter queue is, well, a queue, you could of course connect to it and monitor it for new messages all the time - and be able to notify your monitoring or support teams when a new dead letter message shows up.

This is somewhat wasteful though, since you need to be connected all the time in order to be able to react in a timely manner to the (hopefully) occasional dead letter message. The alternative is to check the dead letter queue at regular intervals, continuously connecting and disconnecting, but of course that means that your reaction time might be slowed down.

With Event Grid, things get much cleaner.

Just as shown in the earlier diagrams, Service Bus is able to raise an Event Grid event when there is a new message in the dead letter queue, which normally means something went really wrong in the message consumption process. At that point, you are free to react to that message however you wish - without having to continuously observe the dead letter queue.
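As a rough illustration (again a sketch with placeholder names, assuming the Microsoft.Azure.ServiceBus SDK), the watchdog could react to the DeadletterMessagesAvailable event by connecting to the dead letter path of the affected entity on demand:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class DeadLetterInspector
{
    public static async Task InspectAsync(
        string connectionString, string topicName, string subscriptionName)
    {
        // The dead letter queue of a subscription is addressable as its own entity path
        var subscriptionPath = EntityNameHelper.FormatSubscriptionPath(topicName, subscriptionName);
        var deadLetterPath = EntityNameHelper.FormatDeadLetterPath(subscriptionPath);

        var receiver = new MessageReceiver(connectionString, deadLetterPath);
        Message message;
        while ((message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(5))) != null)
        {
            // Notify support, log, re-submit - whatever fits your process
            Console.WriteLine($"Dead lettered message: {message.MessageId}");
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }
        await receiver.CloseAsync();
    }
}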

Trying it out
Now that we have a long and winding theoretical road behind us, it's time to try this feature out.

We'll assume here that you are already familiar with Azure Service Bus and can provision a new Service Bus
instance for your use.

In case you have trouble, or need a refresher, there is an excellent "Get started with Service Bus topics" tutorial. It specifically deals with topics, because that is what we are going to be using here, but it's very similar for queues.

Once your Azure Service Bus namespace is created, and your queue is ready (remember, you need a
Premium tier Service Bus!), let's set up the Azure Event Grid integration. This is done by going to the Events
blade of your Service Bus namespace as shown in Figure 4.

Figure 4: Event Grid Integration Setup

Note that the configuration of Event Grid happens against the namespace, even if you'd like to listen to events from a specific queue or a specific subscription only. In those cases, you'd apply special filtering which we'll be looking at soon.

For now, click +Event Subscription (see Figure 4) to create a new Event Grid integration. This should take you to the next screen as shown in Figure 5.

Figure 5: Create New Event Grid Integration



In this next screenshot, we choose the event type we are interested in - in our example only Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners - and we choose where the event should be directed.

In this particular example, it's a webhook with a test service that I have created to receive the Event Grid
events (so something corresponding to the "Messaging Watchdog" from our sequence diagrams).

At this point, the event configuration would apply to the entire Service Bus namespace – i.e. all queues and
all topics/subscriptions.

Should you wish to constrain the events to a specific set of Service Bus entities, you can switch to the Filter tab as shown in Figure 6.

Figure 6: Applying Filters to the Subject of Each Event

While not tremendously intuitive, this tab allows you to pick the entity you are interested in. This is done using the Subject Ends With configuration field, where you can put in the name of your queue or the name of your topic with subscription (that one in the format <topic name>/subscriptions/<subscription name>).

At this point, we can save the whole Event Grid configuration and try this out. Overall, my demo setup uses the following names: the namespace is strathweb-test, the topic is event-grid-demo and the subscription is strathweb-subscription (you can see them again in the event payload below).

Given such a naming style, in case there is an active message that is not being listened to by any receiver, Event Grid will raise the following ActiveMessagesAvailableWithNoListeners event, which we'll be able to consume in our watchdog process.

[
  {
    "data": {
      "entityType": "subscriber",
      "namespaceName": "strathweb-test",
      "queueName": null,
      "requestUri": "https://strathweb-test.servicebus.windows.net/event-grid-demo/subscriptions/strathweb-subscription/messages/head",
      "subscriptionName": "strathweb-subscription",
      "topicName": "event-grid-demo"
    },
    "dataVersion": "1",
    "eventTime": "2019-03-25T20:33:45.5378923Z",
    "eventType": "Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners",
    "id": "f8b78b1b-593a-41ca-95ee-80f59d5ee3f9",
    "metadataVersion": "1",
    "subject": "topics/event-grid-demo/subscriptions/strathweb-subscription",
    "topic": "/subscriptions/59027101-5bc0-4c11-aa7a-0f66f7054bd3/resourcegroups/strathweb-demos/providers/Microsoft.ServiceBus/namespaces/strathweb-test"
  }
]

This means that you can now create your watchdog process as an API endpoint. It doesn't really matter
what endpoint it is - an ASP.NET Core app on Azure App Service, a Logic App, an Azure Function and so on.
You can then handle this incoming JSON, and make logical, educated system decisions like the ones we
discussed before.

Please note - and this is out of scope for this article - in order to consume Event Grid events in an API application, you must first go through a simple validation process (basically consume another JSON payload once before). This is described in detail on docs.microsoft.com.
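To give you an idea of the shape of such an endpoint, here is a minimal sketch of a watchdog implemented as an ASP.NET Core controller. It handles both the one-time validation handshake and the Service Bus event; the controller name and the commented-out WakeUpConsumer call are hypothetical and only illustrate where your own logic would go.

using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

[Route("api/[controller]")]
[ApiController]
public class WatchdogController : ControllerBase
{
    [HttpPost]
    public IActionResult Post([FromBody] JArray events)
    {
        foreach (var eventGridEvent in events)
        {
            var eventType = (string)eventGridEvent["eventType"];

            // One-time endpoint validation handshake performed by Event Grid
            if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
            {
                var validationCode = (string)eventGridEvent["data"]["validationCode"];
                return Ok(new { validationResponse = validationCode });
            }

            if (eventType == "Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners")
            {
                var topicName = (string)eventGridEvent["data"]["topicName"];
                var subscriptionName = (string)eventGridEvent["data"]["subscriptionName"];

                // React however fits your system, e.g. wake up or provision the consumer:
                // WakeUpConsumer(topicName, subscriptionName);
            }
        }

        return Ok();
    }
}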

Summary

I very much encourage you to try out Service Bus with Event Grid integration.

It allows us to go beyond the traditional Service Bus system architectures and build truly reactive, event driven systems, where message receivers can come into existence and go away on demand, as new messages flow through your system landscape.

This model wonderfully supplements the established messaging patterns with Service Bus, enhancing the
toolkit of every system architect relying on Azure for messaging.

Filip Woj
Author
Filip is a founder and maintainer of several open source projects, a .NET blogger, author and a Microsoft MVP. He specializes in the Roslyn compiler, cross platform .NET development and ASP.NET Core, and is experienced in delivering robust web solutions across the globe. Filip is based out of Zurich where he works in the medical industry for Sonova AG.

You can follow him on Twitter @filip_woj.

Thanks to Tim Sommer for reviewing this article.



ASP.NET CORE

Daniel Jimenez Garcia

ASP.NET CORE VUE CLI TEMPLATES
Vue.js is becoming one of the most popular and loved web frameworks, and its CLI 3.0 makes creating and working with Vue.js applications easier than ever.

However, when it comes to the official SPA templates provided by ASP.NET Core, you might have noticed they only support Angular and React out-of-the-box.



Figure 1, SPA templates included in ASP.NET Core out of the box

What about Vue.js then?

Microsoft has so far declined (at least for the moment) to include support for the Vue CLI, so it is up to the community to fill the gap in their SPA templates.

In this article we will discuss several options for integrating ASP.NET Core and the Vue CLI, one of them
already available in NuGet, thanks to Software Ateliers.

If you have chosen Vue.js and ASP.NET Core as your web stack, I hope this article will help you understand
how both fit together, and at the same time, help you get started.

You can find the companion source code on GitHub.

Things move fast in the web development world. I wrote a similar article about a year and a half ago, describing
a template which can now be considered obsolete. This is mostly due to the rise of the Vue CLI 3.0 and the newer
webpack versions!

Combining ASP.NET Core projects and Vue CLI projects
Writing web applications using a modern framework like Vue.js isn't a simple undertaking.

The Vue.js application code is composed of a mixture of vue files, .js/.ts, .css/.sass/.less files and some static
files like images or fonts. For stitching everything together there is tooling like Babel and webpack, which
will combine all the different source files into a final set of JavaScript and CSS files that browsers can
interpret.

This means a Vue.js application (and applications in most other modern frameworks like Angular or React
for that matter) needs to be bundled before it can be run in a browser. You can consider the bundling
process the equivalent of compiling a .NET application.

Running the application during development


The need to bundle the application introduces friction during the development process, since bundles need
to be regenerated after making changes to the source files before you can execute the updated code in the
browser.

The way Vue.js deals with this (and again, so do other modern frameworks) is to provide a development
web server as part of its tooling. This web server generates the bundles on startup and keeps watching for
file changes, automatically regenerating the bundles and even pushing the changes to the browser.

If you and/or your team are working full-stack, as soon as you add a traditional web server framework like ASP.NET Core providing the backend API, the development cycle of your application suddenly increases in complexity. You now have two servers to start during development:

• The Vue.js development server that provides the HTML/JS/CSS to be run in the browser
• The ASP.NET Core server providing the backend API that your application will send requests to

The following diagram explains this situation:

Figure 2, developing with a Vue.js application and an ASP.NET Core application



This approach requires no special templates or support from either Vue.js or ASP.NET Core. You will be able
to use the best tool to write/debug each application, but you will have to manually coordinate starting/
stopping them.

This can work great with bigger teams that split frontend and backend responsibilities or with experienced
developers who can switch between each side and understand the different tooling involved in both.

An alternative approach is the one followed by the Angular and React SPA templates provided by ASP.NET
Core. These templates use ASP.NET Core server as the main web server, with all the browser traffic directed
to it. During startup, the Vue.js development server is launched as a child process, and traffic other than the
one for API controllers, is proxied to it:

Figure 3, Using ASP.NET Core as the main server, proxying requests to the Vue.js development server

This approach lets you keep frontend and backend together during the development cycle, reducing the
friction to run and debug the application on your local machine. With a single Start or F5 command, you
get both servers started, while the Build command builds everything into a single package ready to be
deployed.

There is a third option which inverts the roles of both servers. Since the Vue.js development server is
nothing but a webpack development server, we can use its proxying capabilities to invert the situation:

Figure 4, Using Vue.js development server as the main server, proxying requests to the ASP.NET Core server

While this option might seem redundant, there are certain advantages compared to the previous scenario:

• The Vue.js development server has been designed to bundle and push Vue.js applications to the
browser. It makes sense for it to be the one the browser directly talks to.

• The hot reload features rely on web sockets so the Vue.js development server can push the new
bundles to the browser as soon as there are any code changes. Proxying them introduces unneeded
complexity into the ASP.NET Core SPA proxy and extra latency.

• The ASP.NET Core server works mainly as an API server, which is better aligned to the role it plays in
your system.

Running the application during production


The previous discussion focused on how to run your application during development, reducing the friction
between making changes to your code and running them. When it comes to deploying to production, things
are different.

During build time, your Vue.js application is bundled by webpack into a set of static HTML/JS/CSS files, while your ASP.NET Core application is compiled into dll/exe files. The simplest approach is to host everything together in an ASP.NET Core web server that serves both the static files and the requests to the API controllers:

Figure 5, Hosting together the bundled Vue.js application and the ASP.NET Core server

More advanced options are available, hosting the generated bundles and the ASP.NET Core application
separately. While this is a more complicated model, it certainly has its advantages, letting you scale each
independently and use hosting options that suit each of them. (For example, taking advantage of CDNs for
your bundles.)

With the development and build cycle being the same, we will leave this outside of the scope of the article,
but I encourage you to do some research if interested.

The next sections discuss the three different options we saw to organize your project during development.
You can find the source code on GitHub.

Separate Projects
Although this is the simplest of the solutions, it is a powerful one and will be very frequently chosen by big
and/or experienced teams. It decouples frontend from backend, making it easier to adopt different tooling,
development process, etc.

You will typically create two folders or even two repos, usually following some naming convention like backend/frontend or client/server. In one of them, simply create a new ASP.NET Core web application using the Web API template. You can use either the Visual Studio new application wizard or run dotnet new webapi from the command line.

In the other folder, create a new Vue.js application using the Vue CLI by running vue create from the command line. Then select the options that you want to enable for your Vue application (Vuex, Router, TypeScript, Linting, Testing, etc.)

At the end of this process, you should have a project structure like the following one, with the ASP.NET Core
and Vue.js applications in separate folders:

Figure 6, folder structure with separate frontend and backend projects

There is one thing you need to do before you can start both applications.

Since each will run on its own web server (Vue.js typically on localhost:8080 and ASP.NET Core on
localhost:5000 with Kestrel or an address like localhost:49960 with IISExpress) you will need to enable
communication between the two processes. You can enable CORS by adding the following line to the
ConfigureServices method of the Startup class:

services.AddCors();

Next you need to specify which CORS policy should be used.

Update the Configure method of the Startup class to use a wide policy in case of development:



if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
    app.UseCors(policy => policy
        .AllowAnyHeader()
        .AllowAnyMethod()
        .WithOrigins("http://localhost:8080")
        .AllowCredentials());
}

This way when you send a request to the ASP.NET Core backend application from your Vue.js application
running on localhost:8080, the browser won’t block the request.

An alternative approach is to send requests from the Vue.js application as if the backend was hosted in the same location as the Vue.js application (i.e., send a request to localhost:8080/api/SampleData/WeatherForecast rather than localhost:5000/api/SampleData/WeatherForecast).

For this approach you will need to enable the proxy in the Vue.js development server, pointing it to the ASP.NET Core application. The Vue.js development server will then forward to the ASP.NET Core application those requests that it cannot resolve itself.

Simply add a vue.config.js file to the root of the Vue.js application with these contents:

module.exports = {
  // The URL where the .NET Core app will be listening.
  // Specific port depends on whether IISExpress/Kestrel and HTTP/HTTPS are used
  devServer: {
    proxy: 'http://localhost:5000'
  },
}

Let’s verify if it all works.

Open two separate terminals and run dotnet run in one and npm run serve in the other. They should
start the ASP.NET Core and Vue.js applications respectively, and they will print on which specific HTTP(S)
port each is listening.

Figure 7, Running separately the frontend and backend with “dotnet run” and “npm run serve”

The Vue development server will typically start on http://localhost:8080. If you open that URL in the
browser, you should see the default home page. Try changing the message in the Home.vue component and
you should immediately see it changing in the browser - thanks to the hot reload capabilities of the Vue.js
dev server.

Now let’s try to fetch the sample values returned by the ValuesController.

We need to send a request to the location where the ASP.NET Core application is running, in my case
https://localhost:5001 since I had HTTPS enabled (You should see this address in the command line where
you run dotnet run or in the Visual Studio output if running from Visual Studio).

Execute the following in the browser dev tools and you should see the values logged to the console:

window.fetch('https://localhost:5001/api/values')
.then(res => res.json())
.then(console.log)

Figure 8, Accessing the ASP.NET Core backend from the Vue.js application



You now have a project made of a separate Vue.js frontend application and an ASP.NET Core backend
application!

This is the simplest approach for integrating Vue.js and ASP.NET Core, using out-of-the-box tooling in each
case, but that doesn’t mean this isn’t a good approach! Quite the contrary, you can now fully embrace the
different tooling and development process for each.

For example:

• You might run the Vue.js application by running npm run serve from the command line. However,
you might run the ASP.NET Core application from Visual Studio or by running dotnet run from the
command line.

• You might use Visual Studio Code with plugins like Vetur and Chrome with the Vue Dev Tools for
developing and debugging the Vue.js application. However, you might use Visual Studio to develop and
debug the ASP.NET Core application.

• When it comes to publishing and hosting them, you might decide to keep them separate, with the ASP.NET Core application as a pure API server while a different server takes care of serving the built Vue.js application bundles. Or you can decide to include the built Vue.js application bundles as static files in the ASP.NET Core application, using the static files middleware to serve them (a minimal sketch of this option follows below).
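As a minimal sketch of that last option, assuming the bundles produced by npm run build have been copied into a ClientApp/dist folder next to the backend, the ASP.NET Core Configure method could serve them with the static files middleware (names and paths here are just an example):

using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.FileProviders;

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Serve index.html and the rest of the bundled assets from ClientApp/dist
    var distFileProvider = new PhysicalFileProvider(
        Path.Combine(env.ContentRootPath, "ClientApp", "dist"));

    app.UseDefaultFiles(new DefaultFilesOptions { FileProvider = distFileProvider });
    app.UseStaticFiles(new StaticFileOptions { FileProvider = distFileProvider });

    // API controllers keep working as before
    app.UseMvc();
}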

It certainly introduces an additional layer of complexity in your project, but using the right tool for the job
might be worth the steeper learning curve. If you have been following the DNC magazine, you might have
recognized this approach in articles like the one integrating SignalR with Vue.js.

The next two sections discuss approaches aiming to provide an integrated experience for the entire project.

Adapting the React SPA template


If you would rather keep a more integrated development experience for both projects, simplifying the
tooling involved and avoiding the need to run frontend and backend separately, you are probably looking at
the official SPA templates included with ASP.NET Core.

These templates essentially add some middleware to the ASP.NET Core application so:

• The frontend development server (manually run with npm run serve in our previous approach) is
automatically started whenever you start your ASP.NET Core application in development mode.

• Requests for routes that don’t match any ASP.NET Core controller are proxied to the frontend
development server, giving the browser the impression that everything is running from within the
ASP.NET Core application.

We saw a diagram of this approach in the initial section:

Figure 9, SPA template where ASP.NET Core proxies the Vue.js development server

Now the problem you will find is that ASP.NET Core only provides SPA templates for Angular and React,
with Vue.js sadly missing and officially unsupported by Microsoft (at least for the moment).

The good news is that it isn’t hard to adapt one of the existing templates, since it is mostly a matter of
executing a different npm script during startup.

The bad news is that the utilities needed to run an npm script during startup and to wait for the Vue.js
development server to be started are internal to Microsoft’s SPAServices.Extensions package, so you will
need to manually copy their source code into your project.

Adapting the React template is exactly what Software Ateliers has done, so you can now install their
template and generate a Vue.js SPA application using this approach.

In the rest of this section I will show you how you would manually create such a template, so you
understand how it works.

Manually adapting the React template


Step 1. Replacing React with Vue.js in the React SPA template

Start by creating a new ASP.NET Core application, selecting the React template. Either use the new project
wizard in Visual Studio or run dotnet new react from the command line.

Once generated, the first thing we need to do is to replace the contents of the /ClientApp folder with a Vue application. To do so, take the following steps:

• Remove the existing /ClientApp folder

• From the project root, run vue create client-app (the Vue CLI does not allow upper case letters in project names). Use this step as a chance to select any features you want in your Vue.js project, like TypeScript or Unit Testing, rather than relying on the ones selected by someone in a template!

• Once the Vue CLI finishes generating the project, rename the /client-app folder to /ClientApp

With these three sub-steps, we have fully replaced the React frontend app with a Vue.js frontend app.

Step 2. Using Middleware to launch the Vue.js development server during startup

Here comes the most complicated part! If you look at the Configure method of the Startup class, you will
notice the following piece of code:

app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";
    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});

This code is using the methods UseSpa and UseReactDevelopmentServer that are part of the
Microsoft.AspNetCore.SpaServices.Extensions NuGet package.

With UseReactDevelopmentServer, the development server gets started on a free port, and any request
waits for it to be started before continuing.

With UseSpa, a proxy between the ASP.NET Core server and the development server is established,
redirecting unknown requests to said development server.

The problem comes from the fact that although UseReactDevelopmentServer allows you to specify
which npm script should be run (since with Vue.js we need to run npm run serve rather than npm start),
it has internal hardcoded logic specific to React:

• It specifies which port the development server should be started on

• It determines when the development server has started based on the output of the npm command

Since these two points are different in the case of a Vue.js application, and UseReactDevelopmentServer
does not provide any options to change its behavior, we will need to create our own middleware that
knows how to start the Vue.js development server. The process will be a little more involved than expected
since it relies on utilities which are internal to the Microsoft.AspNetCore.SpaServices.Extensions package,
but it won’t be too hard.

1. Start by creating a new Middleware folder in your project, and manually copy the Util folder from Microsoft's source (do not remove the license attribution!).

2. Next, you will also need to add to the Util folder the NpmScriptRunner class.

3. Finally copy the React middleware and extension classes into your project’s
Middleware folder, and rename them as VueDevelopmentServerMiddleware and
VueDevelopmentServerMiddlewareExtensions respectively.

The end result should look like this:

Figure 10, Adapting the React development server middleware for Vue.js

You can ignore the classes in the Util folder since we will use them as-is; the only problem is that they are internal to the Microsoft.AspNetCore.SpaServices.Extensions package, so we can't access them directly. Simply rename their namespace, and you are free to concentrate on the middleware and extension classes.

Open the middleware class, which is the sole place where we actually need to make the changes!

Rename the StartCreateReactAppServerAsync method to StartVueDevServerAsync. Remove the environment variables and replace the creation of the NpmScriptRunner with:

var npmScriptRunner = new NpmScriptRunner(
    sourcePath, npmScriptName, $"--port {portNumber} --host localhost", null);

This is just updating the way the port is provided to the Vue.js development server. Now replace the WaitForMatch line so it waits for the Vue.js development server to print "DONE":

startDevelopmentServerLine = await npmScriptRunner.StdOut.WaitForMatch(
    new Regex("DONE", RegexOptions.None, RegexMatchTimeout));

Then rename the method UseReactDevelopmentServer in the middleware extension class to UseVueDevelopmentServer.



Once renamed, update the Configure method of the Startup class to call
app.UseVueDevelopmentServer with the npm run serve script:

app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";
    if (env.IsDevelopment())
    {
        spa.UseVueDevelopmentServer(npmScript: "serve");
    }
});

Once you are done, you should be able to start the project from Visual Studio or from the command line.
The middleware will start the Vue.js development server and will wait for it to be initialized before loading
the home page.

You should eventually see the Vue.js home page, but notice it is getting loaded from the port of the ASP.NET
Core server rather than the Vue.js development server.

Try modifying the message in the Home.vue file and notice how the browser immediately reloads the
changes.

That’s great, it means the hot reload functionality keeps working even after we introduced the SPA proxy.

Now you can also try and load the sample forecast data without the need to specify the host since
everything is running from the ASP.NET Core server:

Figure 11, Running the ASP.NET Core template

If you look at the output from your ASP.NET Core application, you will be able to see the port where the Vue.js development server has started (remember our middleware is finding a free port and explicitly telling the Vue.js development server to use it, so it can later proxy requests to that known port).

This means you can open the Vue.js home page in the browser both by using the ASP.NET Core address and
the Vue.js development server address! What the ASP.NET Core middleware is doing behind the scenes is
proxying the requests to the Vue.js development server.

Figure 12, ASP.NET Core (left) proxies requests to the Vue.js development server (right)

At this point, it is worth mentioning you will see some errors in the console.

This is caused by a known issue with ASP.NET Core trying to proxy all requests to the Vue.js development
server and the web sockets used for hot reload. While hot reload will still work, you will see those errors
during development.

Note: The alternative SPA template we will see later does not have this problem.

Another issue worth mentioning is that if you start the application from Visual Studio using the IISExpress
profile, the npm process is not closed upon stopping the application!

This means the Vue.js development server is left running (if you note the port the development server was started on, you will still be able to load it in the browser, like http://localhost:62495 in Figure 12) until you manually kill that process from the Task Manager.

This does not happen when starting with dotnet run, so you might want to keep using the alternate launch profile rather than IISExpress.

While everything will still work, since a new free port is assigned to the Vue.js development server every time you start the server, the number of running processes can become quite a burden after a few debugging sessions.

Step 3. Debugging from Visual Studio and Visual Studio Code

It is possible to debug C# and JavaScript code at the same time from both Visual Studio and Visual Studio
Code. But before we can do so, we need to update the generated source maps so their path is from the root
of the project folder and not just from the ClientApp folder.

Fortunately for us, this is something relatively easy to do with the webpack development server used by
Vue.js, with Vue.js giving us a hook to update the webpack configuration in the form of the vue.config.js file.
You just need to create such a file with the following contents:

module.exports = {
  configureWebpack: {
    // Using source-map allows VS Code to correctly debug inside vue files
    devtool: 'source-map',
    // Breakpoints in VS and VSCode won't work since the source maps
    // consider ClientApp the project root, rather than its parent folder
    output: {
      devtoolModuleFilenameTemplate: info => {
        const resourcePath = info.resourcePath.replace('./src', './ClientApp/src')
        return `webpack:///${resourcePath}?${info.loaders}`
      }
    }
  }
}

Once created, make sure you have enabled Script debugging in Visual Studio and start debugging. Set a breakpoint in the ClientApp/src/router.js file and reload the page; notice the breakpoint is hit in Visual Studio and that Chrome shows it is stopped by a debugger!

Figure 13, enabling script debugging in Visual Studio

Figure 14, Debugging Vue's JavaScript code from Visual Studio on F5

Debugging in Visual Studio Code requires you to install the C# extension as well as the Debugger for
Chrome extension. When opening the project for the first time after adding the extensions, accept the
suggestion to add common .NET Core tasks, which should add a tasks.json file.

Then add a launch configuration to start the ASP.NET Core application (without starting the browser) and another to launch the Chrome debugger (this one starts the browser). Finally add a compound configuration to launch both.

The resulting launch.json should look similar to this:

{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "compounds": [
    {
      "name": ".NET+Browser",
      "configurations": [ ".NET Core Launch (console)", "Launch Chrome" ]
    }
  ],
  "configurations": [
    {
      "type": "chrome",
      "request": "launch",
      "name": "Launch Chrome",
      "url": "http://localhost:5000",
      "webRoot": "${workspaceFolder}/ClientApp/src",
      "sourceMaps": true,
    },
    {
      "name": ".NET Core Launch (console)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/netcoreapp3.0/VueSPATemplate.dll",
      "args": [],
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false,
      "launchBrowser": {
        "enabled": false
      },
      "env": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
    },
    {
      "name": ".NET Core Launch (web)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/netcoreapp3.0/VueSPATemplate.dll",
      "args": [],
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false,
      "launchBrowser": {
        "enabled": true
      },
      "env": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "sourceFileMap": {
        "/Views": "${workspaceFolder}/Views"
      }
    },
    {
      "name": ".NET Core Attach",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:pickProcess}"
    }
  ]
}

You should now be able to launch the debugger selecting the ".NET+Browser" option, and set up breakpoints in both .NET and JavaScript code:

Figure 15, Debugging Vue's JavaScript code from Visual Studio Code

There is a big caveat worth raising when debugging JavaScript in Visual Studio (but not in Visual Studio Code): Visual Studio doesn't understand .vue files, forcing you to keep separate js/ts files if you really want to debug Vue components from Visual Studio.

As said, this limitation doesn't apply to Visual Studio Code as long as you install the Vetur extension. Although in my opinion you are better off debugging in Chrome with the Vue dev tools extension, using the right tool for the job!

Step 4. Including the built Vue.js bundles during publish

When the Vue.js application bundles are generated with npm run build, these are generated into the
ClientApp/dist folder. This is different from where the React template expects the generated bundles, so we
will need to adapt the template for the bundles to be included in the published output and for the ASP.NET
Core server to be able to serve them when deployed.

First update the RootPath property in the call to AddSpaStaticFiles found in the ConfigureServices
method of the Startup class:

// In production, the Vue files will be served from this directory
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/dist";
});

Next update the csproj files, so the section that copies the generated bundles into the output looks in the
ClientApp/dist folder:



<DistFiles Include="$(SpaRoot)dist\**" />

That’s all, you will now correctly generate the Vue.js bundles and include them into the published output
inside the ClientApp/dist folder, which the ASP.NET Core server is configured to serve as static files.

Using the community published template


As mentioned at the beginning of the section, Software Ateliers has already published a template following
this approach. You can just install their template and use it when generating a new project, rather than
manually going through the steps described above! (Although it is good for you to understand what’s going
on under the covers.)

The only thing you might want to add is the vue.config.js file with the source map rewrite that enables
debugging from Visual Studio and Visual Studio Code. Although in my opinion, and as already mentioned,
you are better off debugging in Chrome with the Vue.js devtools extension, using the right tool for the job!

The next section discusses a variant of this template that inverts the proxying role between the ASP.NET
Core and Vue.js development servers.

A different SPA template approach


All of the official SPA templates plus the Vue.js SPA community template have taken the same approach. The ASP.NET Core application acts as the main server and proxies requests to a React/Angular/Vue.js development server launched during application startup.

We can easily modify the middleware we created in the previous section so we invert the roles of the
servers.

We can keep launching everything when starting the ASP.NET Core application, launching the npm script
during its Startup. However, we can then redirect the browser to the Vue.js development server and have
this one proxying API requests to the ASP.NET Core application.

Figure 16, Inverting the server roles during development. Vue.js development server now proxies to ASP.NET Core

This will align each server more closely with the role it actually plays in the system. Keeping the Vue.js development server closer to the browser is a great idea for avoiding issues with its hot reload functionality.

Instead of ASP.NET Core setting up a SPA proxy to the Vue.js development server, we will set up the Vue.js development server proxy to the ASP.NET Core application.

We will also request the browser to initially load the root page "/" from the Vue.js development server. This way the browser will initially load the Vue.js home page from the Vue.js development server, while any requests for an /api endpoint will be sent to the ASP.NET Core server by the Vue.js development server proxy.

We will use the previous template as the starting point, since most of the changes we made to the
official React template are still needed. If you are starting from scratch, it will be useful to go through the
Manually adapting the template section of the earlier template, bearing in mind most of the changes will
be located in the VueDevelopmentServerMiddleware.

Feel free to download the code from GitHub if that helps.

Setting up the Vue development server as the main server
The first change we will make to the previous template is located in the vue.config.js file. We will use its dev server proxy option to send towards the ASP.NET Core server any requests that it cannot resolve itself. Rather than hardcoding a URL, we will read its value from an environment variable ASPNET_URL that will be set by the VueDevelopmentServerMiddleware before starting the Vue dev server:

module.exports = {
  configureWebpack: {
    // The URL where the .NET Core app will be listening.
    // Read the ASPNET_URL environment variable, injected by VueDevelopmentServerMiddleware
    devServer: {
      // When running in IISExpress, the env variable won't be provided.
      // Hardcode a fallback here based on your launchSettings.json
      proxy: process.env.ASPNET_URL || 'https://localhost:44345'
    },
    // … devtool and output are same as in the earlier template …
  }
}

It is important to note the hardcoded fallback in case you run the project with IISExpress. The startup code does
not know the public URL used by IISExpress and can’t inject the environment variable, so you need to hardcode
the value here based on the contents of your launchSettings.json!

Next, we will modify the StartVueDevServerAsync function of the VueDevelopmentServerMiddleware so it injects the ASPNET_URL environment variable. The value can be found by getting the IServerAddressesFeature service:

private static async Task<int> StartVueDevServerAsync(
    IApplicationBuilder appBuilder,
    string sourcePath,
    string npmScriptName,
    ILogger logger)
{
    var portNumber = TcpPortFinder.FindAvailablePort();
    logger.LogInformation($"Starting Vue dev server on port {portNumber}...");

    // Inject address of .NET app as the ASPNET_URL env variable
    // which will be read in vue.config.js from process.env
    // NOTE: When running with IISExpress this will be empty,
    // so you need to hardcode a fallback URL in vue.config.js
    var addresses = appBuilder.ServerFeatures.Get<IServerAddressesFeature>().Addresses;
    var envVars = new Dictionary<string, string>
    {
        { "ASPNET_URL", addresses.Count > 0 ? addresses.First() : "" },
    };

    var npmScriptRunner = new NpmScriptRunner(
        sourcePath, npmScriptName, $"--port {portNumber} --host localhost", envVars);
    npmScriptRunner.AttachToLogger(logger);

    // the rest of the method remains unchanged, waiting to see "DONE"
    // in the script output and returning the portNumber
}

The biggest change will be made in the Attach method. Instead of calling SpaProxyingExtensions.UseProxyToSpaDevelopmentServer we will add middleware to:

• Wait for the Vue.js development server to be started

• Redirect requests to the root page "/" to the root of the Vue.js development server

The Attach method will then look like:

public static void Attach(
    IApplicationBuilder appBuilder,
    string sourcePath,
    string npmScriptName)
{
    if (string.IsNullOrEmpty(sourcePath))
    {
        throw new ArgumentException("Cannot be null or empty", nameof(sourcePath));
    }

    if (string.IsNullOrEmpty(npmScriptName))
    {
        throw new ArgumentException("Cannot be null or empty", nameof(npmScriptName));
    }

    var logger = LoggerFinder.GetOrCreateLogger(appBuilder, LogCategoryName);

    // Start Vue development server
    var portTask = StartVueDevServerAsync(appBuilder, sourcePath, npmScriptName, logger);
    var targetUriTask = portTask.ContinueWith(
        task => new UriBuilder("http", "localhost", task.Result).Uri);

    // Add middleware that waits for the Vue development server to start
    // before calling the next middleware on the chain
    appBuilder.Use(async (context, next) =>
    {
        // Each request gets its own timeout. That way, even if
        // the first request times out, subsequent requests could still work.
        var timeout = TimeSpan.FromSeconds(30);
        await targetUriTask.WithTimeout(timeout,
            $"The vue development server did not start listening for requests " +
            $"within the timeout period of {timeout.Seconds} seconds. " +
            $"Check the log output for error information.");

        await next();
    });

    // Redirect all requests for root towards the Vue development server,
    // using the resolved targetUriTask
    appBuilder.Use(async (context, next) =>
    {
        if (context.Request.Path == "/")
        {
            var devServerUri = await targetUriTask;
            context.Response.Redirect(devServerUri.ToString());
        }
        else
        {
            await next();
        }
    });
}

Now we need to update the UseVueDevelopmentServer method of the extension class. We no longer need to pass an ISpaBuilder; we just pass both the IApplicationBuilder and sourcePath as parameters:

public static void UseVueDevelopmentServer(
    this IApplicationBuilder appBuilder,
    string sourcePath = "ClientApp",
    string npmScript = "serve")
{
    if (appBuilder == null)
    {
        throw new ArgumentNullException(nameof(appBuilder));
    }

    VueDevelopmentServerMiddleware.Attach(appBuilder, sourcePath, npmScript);
}

Finally we replace the call to app.UseSpa at the end of the Configure method of the Startup class with
this simple call:

if (env.IsDevelopment())
{
    app.UseVueDevelopmentServer();
}

Once you are done with these changes, you should now be able to start and debug the application the
same way as before, including JavaScript code. Make sure you completed steps 1, 3 and 4 of the Manually
adapting the template section since they are still relevant!



Figure 17, Debugging with the alternate template from Visual Studio Code

Figure 18, Debugging the alternate template with Visual Studio

Conclusion

Vue.js is one of the fastest growing and most loved web frameworks. Just to name a couple of recent events,
it was chosen both as the second most loved and second most wanted web framework in the 2019 Stack
Overflow survey, and surpassed React in the number of GitHub stars during 2018.

In my personal experience, most of the developers who have worked with Vue.js find it very attractive, intuitive and a pleasure to work with. Tooling like the Vue CLI, the Chrome Dev tools or the VS Code plugin Vetur make it even easier to work with it and develop web applications, while its plugin model and library ecosystem make it a very extensible and adaptable framework.

It is a shame that it isn’t one of the frameworks with an official SPA template out of the box in ASP.NET
Core. As of today, only React and Angular are officially supported.

While things might change in the future, for now it is up to the community to fill the gap.

In this tutorial, we have seen several approaches in which you can integrate Vue.js with ASP.NET Core, all of
them using Vue.js applications generated by the Vue CLI.

While the first approach is the obvious one of treating the Vue.js and ASP.NET Core applications as separate
projects, we have also seen two other approaches in which they can be more tightly integrated for those
who prefer that approach. One of these tightly integrated ways is a straight port of the React template and
is already published as a NuGet package by Software Ateliers.

The lack of an official template is unfortunate, but I hope this article gives you enough information to
overcome it!

Daniel Jimenez Garcia


Author

Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience. He started as a Microsoft developer and learned to love C# in general and ASP MVC in particular. In the latter half of his career he worked on a broader set of technologies and platforms, while these days he is particularly interested in .NET Core and Node.js. He is always looking for better practices and can be seen answering questions on Stack Overflow.

Thanks to Damir Arh for reviewing this article.



PATTERNS & PRACTICES

Dobromir Nikolov

Shipping Pseudocode to Production
We’ve all heard that line. It seems intuitive and obvious.
Everyone strives to write clean code, everyone strives towards
“CLEAN CODE IS SIMPLE AND DIRECT.
simplicity and maintainability, or at least everyone claims to
do so. CLEAN CODE READS LIKE WELL-WRITTEN
PROSE. CLEAN CODE NEVER OBSCURES
Sadly, if it were true, we wouldn’t have ended up with these THE DESIGNER’S INTENT BUT RATHER
huge unmaintainable messes that everyone is complaining
IS FULL OF CRISP ABSTRACTIONS
about. If it were true, we would’ve never ended up with 5000
line functions containing critical business logic that no one AND STRAIGHTFORWARD LINES OF
seems to be aware of. (if you haven’t seen that, good for you)! CONTROL.”

- Grady Booch -
author of Object Oriented Analysis and
What’s the cause? Tight deadlines? Obscure Design with Applications
requirements? Incompetent developers?

Of course, it may be any or all of these, but in my


opinion the most common cause that results
in unmaintainable code is a wrong mindset. A
“code-first” mindset that pushes the developers
to just start patching around the codebase
without thinking too much about what it is
exactly that they’re trying to accomplish.

What if we could establish a different mindset?


And even better, what if we could make our
codebase push us towards following that
Disclaimer: This article concerns enterprise development, where priority is
mindset, as if the code itself is working with us to
maintainability and robustness, some of the points made will be invalid for
achieve a common goal?
performance critical applications.
The problem
What does this function do?

RowData ParseRow(ExcelRow row);

Parse an excel row, right?

Looks like it, but Excel rows are unstructured user input. You know there are going to be columns, but there are no guarantees about the content inside them or how many there are going to be.

Now, I’m no expert, but I’ve done a fair amount of work with unstructured user input and one thing is for
certain - you can’t trust it!

If you can’t trust something, then you must think about handling the error case. What if this row is not in
the format you’re expecting it to be? Do you throw an exception? Return null?

Sadly, the function definition itself is not telling us any of these things. If the ParseRow function came
from an undocumented closed source library and we didn’t have access to the code, the only way of
understanding what it does in case of an error is - by trial and error.

Luckily, we’re the ones implementing ParseRow, so we have the luxury of following best practices.

In the case of errors, we’re not going to return null, as that is error-prone and we’ll also lose the context
that the error occurred in. What we’ll do instead is throw a custom ParsingException. This exception
is going to contain all of the relevant information that we could use while debugging/displaying error
messages.
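
As a rough sketch, such an exception could look like the following. The exact members (row number, raw cell values) are my own assumptions about what would be useful, not something prescribed by the design:

public class ParsingException : Exception
{
    // Hypothetical context members - whatever helps with debugging and error messages.
    public int RowNumber { get; }
    public IReadOnlyList<string> RawCells { get; }

    public ParsingException(string message, int rowNumber, IReadOnlyList<string> rawCells)
        : base(message)
    {
        RowNumber = rowNumber;
        RawCells = rawCells;
    }
}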

Parsing an Excel file now looks like this:

try
{
    ExcelRow[] rows = ReadExcelRows(excelFile);
    RowData[] parsedData = rows
        .Select(ParseRow)
        .ToArray();

    return parsedData;
}
catch (ParsingException e)
{
    // Handle
}

Nothing complicated. The code sits there and works quietly for a while now.

Then comes John. John is the manager for the service delivery team.

He says that he loves the functionality, but now that the company has grown and the Excel files his team is
inputting are larger, they are losing a lot of time by stopping the process on the first broken row (remember
that’s what the exception will do).

John asks if they could receive all errors at once as some sort of a report.

Good point, John! Definitely sounds like a reasonable request. Let’s get to work!

Refactoring

So we’ll have to aggregate all of the errors. The most straightforward solution is to add an Error property
to the RowData class like so:

class RowData
{
...
string Error { get; set; }
...
}

Fortunately, we're serious professionals who don't joke around with the single responsibility principle, so we create a ParsingResult class.

class ParsingResult
{
RowData Data { get; set; }
string Error { get; set; }
}

The ParseRow function now looks like this:

ParsingResult ParseRow(ExcelRow row);
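
Internally, the try-catch moves inside ParseRow, so callers no longer need one. A minimal sketch, assuming the properties on ParsingResult are public and using a hypothetical MapCellsToRowData helper for the actual cell validation:

ParsingResult ParseRow(ExcelRow row)
{
    try
    {
        RowData data = MapCellsToRowData(row); // hypothetical helper doing the actual validation
        return new ParsingResult { Data = data };
    }
    catch (ParsingException e)
    {
        return new ParsingResult { Error = e.Message };
    }
}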

Extracting the errors is now simple. No try-catch blocks:

ParsingResult[] parsedData = rows
    .Select(ParseRow)
    .ToArray();

string[] errors = parsedData
    .Select(r => r.Error)
    .ToArray();

Here’s the catch.

What if r.Error is null? Of course, we can always add a Where clause, but this is way too easy to forget.
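
For completeness, the filtered version would look something like this - easy to write, but just as easy to forget:

string[] errors = parsedData
    .Where(r => r.Error != null)
    .Select(r => r.Error)
    .ToArray();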

And what if we want to use the ParseRow function on one row only?

ExcelRow row = ReadExcelRow();
ParsingResult result = ParseRow(row);

if (string.IsNullOrEmpty(result.Error))
{
    // Do something with result.Data
}
else
{
    // Handle the error
}

Smell anything? What if someone forgot the if statement and went straight for result.Data? He/She’d
get a NullReferenceException.

We’ve “improved” the design by including the potential error inside the ParsingResult class, which
has allowed us to easily generate an error report, but the cost is that now every time someone calls the
ParseRow function, he’s required to not forget to check result.Error or result.Data for null.

This is a code smell, and a serious one. As the codebase grows there will be hundreds of situations where
someone will be required to not forget to check for null/some other condition. Just think about how many
bugs you’ve fixed by adding yet another if statement or a try-catch block.

If anything can be forgotten, it will be forgotten.

Why did this happen? What was wrong with our approach?

Expressiveness

The answer is lack of expressiveness.

If the ParseRow function was expressive enough, then it would’ve simply told the caller that they are
supposed to handle two cases - one for the error and one for the success. We wouldn’t have had to rely on
someone not forgetting to check for null or some other condition.

Functions telling the caller what to do definitely sounds great, but what does it mean exactly? How can a
function be expressive? Let’s take a look.

Our first expressive function

Let’s go through another example of an operation that is likely to fail - performing an HTTP request.

The HTTP protocol has 62 status codes defined, and only 10 of them mean success. 84% of the status codes
mean some sort of an error. This means something. If we were to model a function that performs an HTTP
request, it better communicate clearly with the caller about how likely it is to fail.

But how can a function communicate?

Well, let’s think about all of the possible ways:

• Through its name - We obviously can't name our function PerformAnHttpRequestAndRememberItsLikelyToFail (I know we can, but we can't), so the name won't help us much.

• Through its arguments (and their names) - There is the possibility of adding successHandler and errorHandler function arguments, but what is the function going to return then (void?)? It seems like a decent approach, but we can do better.

• Through its return type - Names and arguments won’t do the job, but what about the return type? What
if we could model a type that represents an operation with two possible outcomes. Then the compiler
could force us towards handling both the success and error case and we would’ve eliminated the easily
forgettable if statements.

We’ll take the return type approach as it seems to fit our needs nicely. Let’s define some requirements for
the type we’ll be modeling:

1. It should be able to contain 2 other types - one that is going to be returned if the operation went
successfully and another one if it failed

• In the HTTP request case, we would like one type of response if the server returned a success status
code (in the form of 2XX) and another one if it failed.

2. It should never be null

• It makes no sense for a type that signals a likely error to be null as that will lead to even more
errors.

Reading the requirements, it becomes obvious that we’ll need some sort of a generic struct.

public struct Either<T, TException>
{
    // Constructors omitted for brevity
    public bool IsSuccessful { get; }

    public static Either<T, TException> Success(T value) =>
        new Either<T, TException>(value);

    public static Either<T, TException> Error(TException exception) =>
        new Either<T, TException>(exception);
}

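For reference, here is one possible shape of the omitted fields and constructors. This is an assumption on my part, chosen to be consistent with how value and exception are used further below:

public struct Either<T, TException>
{
    private readonly T value;
    private readonly TException exception;

    private Either(T value)
    {
        this.value = value;
        this.exception = default(TException);
        IsSuccessful = true;
    }

    private Either(TException exception)
    {
        this.value = default(T);
        this.exception = exception;
        IsSuccessful = false;
    }

    public bool IsSuccessful { get; }
}
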
Using the newly defined Either type, we can model our HTTP request function like so.

Either<SuccessfulHttpResponse, HttpError> PerformHttpRequest(...);

It reads pretty nicely. We either get a SuccessfulHttpResponse (just some arbitrary type to illustrate a
point) or an HttpError. But how do we make use of it?

Either<SuccessfulHttpResponse, HttpError> responseResult =
    PerformHttpRequest("GET", "http://someurl.com/");

// ????

We obviously can’t access two values at once, but the good thing is we won’t have to. The Either type
signals that there are two possible outcomes. The actual outcome will be known in runtime.



As you probably saw, we didn’t expose direct value accessors when defining the Either type. What we’ll do
instead is “force” the consumer to always handle both the success and the error case during compile time
through a Match function.

public struct Either<T, TException>
{
    // value and exception are the private fields set by the constructors omitted earlier.
    public bool IsSuccessful { get; private set; }

    public void Match(Action<T> success, Action<TException> error)
    {
        if (IsSuccessful)
            success(value);
        else
            error(exception);
    }

    public TResult Match<TResult>(Func<T, TResult> success, Func<TException, TResult> error) =>
        IsSuccessful ? success(value) : error(exception);
}

Match abstracts away the success/error condition and forces us to handle both cases. Now if someone tried
to handle only one case (like going straight for result.Data in the Excel parsing example), they would
get a compiler error.

Either<SuccessfulHttpResponse, HttpError> responseResult =
    PerformHttpRequest("GET", "http://someurl.com/");

// Compiler error
responseResult.Match(
    success: response => ...
);

The only correct way to consume an Either type (for now) is by always providing both handlers.

Either<SuccessfulHttpResponse, HttpError> responseResult =
    PerformHttpRequest("GET", "http://someurl.com/");

responseResult.Match(
    success: response => /* Happy path */,
    error: error => /* Sad path */
);

This seems good enough for a simple case when you only need to perform one request, but what about
when you need to perform a series?

In an imperative approach, performing consecutive requests that depend on each other would look
something like:

// Note that HttpResponse is just an arbitrary type to illustrate a point
HttpResponse response = HttpRequest(...);

if (response.IsSuccessful)
{
    HttpResponse anotherResponse = HttpRequest(... response.Data);

    if (anotherResponse.IsSuccessful)
    {
        HttpResponse thirdResponse = HttpRequest(... anotherResponse.Data);

        if (thirdResponse.IsSuccessful)
        {
            ...
        }
    }
}

A crazy amount of nesting and the need of having to always remember to check whether the response is
successful. With our Either type, we’ve eliminated the need to remember to check the IsSuccessful
property, but the code looks like this:

var responseResult = HttpRequest(...);

responseResult.Match(
    success: response =>
    {
        var anotherResponseResult = HttpRequest(... response.Data);

        anotherResponseResult.Match(
            success: anotherResponse =>
            {
                var thirdResponseResult = HttpRequest(... anotherResponse.Data);

                thirdResponseResult.Match(
                    success: thirdResponse =>
                    {

                    },
                    error: /* Error handling */
                )
            },
            error: /* Error handling */
        )
    },
    error: /* Error handling */
);

Definitely not much better. So what can we do?

Let’s take a step back and look at what we’re doing.

1. Perform a request
2. If the request is successful, perform another request
3. If the consequent request is successful, perform another request
4. ...

Notice anything?

1. Perform a request
2. If the request is successful, perform another request
3. If the consequent request is successful, perform another request
4. If the consequent request is successful, perform another request

There’s a pattern.



And since our Either type doesn’t know about HTTP requests in particular, we can take another step back
and just look at it in terms of operations (where operations are nothing but functions).

1. Perform an operation
2. If the request operation is successful, perform another request operation
3. If the consequent request operation is successful, perform another request operation
4. If the consequent request operation is successful, perform another request operation

This pattern can be translated into a function that we can add to our Either type:

public struct Either<T, TException>
{
    public Either<TResult, TException> Map<TResult>(Func<T, TResult> mapping) =>
        Match(
            success: value => Either<TResult, TException>.Success(mapping(value)),
            error: exception => Either<TResult, TException>.Error(exception)
        );
}

Map (also called Select in LINQ) acts the same way as it does on lists. If the Either has a success value
inside it (compare this to a list that has elements), it applies a transformation function to it. Otherwise,
if it has an exception (compare this to an empty list), it will simply return the exception value in a new
“transformed” Either. (similarly, if you map an empty List<string> with int.Parse, the list type
would change to List<int>, regardless of whether the List is empty)

It may sound confusing, but the concept is very simple. For a type container C<T>, map acts like so:

(C<T>, (T => T2)) => C<T2>

(List<string>, int.Parse) => List<int> (even if it was empty)

(Either<T, TException>, (T => TResult)) => Either<TResult, TException> (even if there was no actual T value inside)

It takes the container and transforms the inner value using the provided (T => T2) function.

Types that implement a map function are called Functors in the functional programming world.

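To make that behaviour concrete, here is a tiny usage example (the int/string type parameters are arbitrary):

Either<int, string> parsed = Either<int, string>.Success(42);
Either<string, string> text = parsed.Map(n => n.ToString());      // Success("42")

Either<int, string> failed = Either<int, string>.Error("not a number");
Either<string, string> still = failed.Map(n => n.ToString());     // the error is carried over unchanged
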
Map allows us to simplify our code like so:

var responseResult = HttpRequest(...);

responseResult.Map(response =>
{
    var anotherResponseResult = HttpRequest(... response.Data);

    anotherResponseResult.Map(anotherResponse =>
    {
        var thirdResponseResult = HttpRequest(... anotherResponse.Data);

        thirdResponseResult.Map(thirdResponse =>
        {
            var fourthResponseResult = HttpRequest(... thirdResponse.Data);

            fourthResponseResult.Map(fourthResponse =>
            {
                ...
            });
        });
    });
});

The need to pass an error handler every time is now gone. It still looks messy, but we can rearrange it a bit:

HttpRequest(...).Map(response =>
    HttpRequest(... response.Data).Map(anotherResponse =>
        HttpRequest(... anotherResponse.Data).Map(thirdResponse =>
            HttpRequest(... thirdResponse.Data))));

Four consecutive requests - four lines of code. And the great part - if an error occurred in any of these
chained requests, it will simply stop there and return you the error put inside an Either instance.

Does that chain mean we can define a function similar to:

Either<SuccessfulHttpResponse, HttpError> ChainFour() =>
    HttpRequest(...).Map(response =>
        HttpRequest(... response.Data).Map(anotherResponse =>
            HttpRequest(... anotherResponse.Data).Map(thirdResponse =>
                HttpRequest(... thirdResponse.Data))));

Sadly, no. But we’ll get there.

The problem with defining functions like that lies within the chain’s type. The type of these 4 chained
requests is

Either<Either<Either<Either<SuccessfulHttpResponse, HttpError>, HttpError>, HttpError>, HttpError>

...which frankly, doesn’t look friendly enough to work with.

What’s interesting about this nested type is that although it looks quite intimidating, it will actually contain
either only one value or only one error.

1. HttpRequest(...).Map(response =>
2. HttpRequest(... response.Data).Map(anotherResponse =>
3. HttpRequest(... anotherResponse.Data).Map(thirdResponse =>
4. HttpRequest(... thirdResponse.Data))));

If 1 passed successfully, its result is going to be passed into 2 and then discarded. If 2 passed successfully,
its result is going to be passed into 3 and then discarded, and so on. There is only going to be one
SuccessfulHttpResponse, that is if the whole chain completed successfully, and there is potentially only
going to be one error, that is the first one that occurred.

Therefore, we can perform the following transformation without losing any data:

Either<Either<Either<Either<SuccessfulHttpResponse, HttpError>, HttpError>, HttpError>, HttpError>
=>
Either<SuccessfulHttpResponse, HttpError>


The nesting appears to be meaningless.

In order to get rid of it, we must first understand why it happened. If you remember, the map function takes
a container of type C<T> and applies a (T => T2) function to it.

(C<T>, (T => T2)) => C<T2>

If the transforming function happened to return another C<T>, then we would get a nested type C<C<T>>
as it happens in our case.

Or more concretely:

Take an Either<SuccessfulHttpResponse, HttpError>
=>
Map the SuccessfulHttpResponse into Either<SuccessfulHttpResponse, HttpError>
=>
Receive an Either<Either<SuccessfulHttpResponse, HttpError>, HttpError>

The solution to this is a FlatMap function.

public struct Either<T, TException>
{
    public Either<TResult, TException> FlatMap<TResult>(Func<T, Either<TResult, TException>> mapping) =>
        Match(
            success: mapping, // We skip wrapping the value into an Either
            error: exception => Either<TResult, TException>.Error(exception)
        );
}

FlatMap is very similar to Map except that instead of taking any transformation function, it only accepts
those that return another Either. This allows us to skip wrapping the result over and over again.

Map:

(C<T>, (T => T2)) => C<T2>

FlatMap:

(C<T>, (T => C<T2>)) => C<T2>

Types that implement a FlatMap function (along with a few other things) are called Monads in the
functional programming world.

Now we can simply substitute the Map calls for FlatMap in order for the ChainFour function definition to
become valid:

Either<SuccessfulHttpResponse, HttpError> ChainFour() =>
    HttpRequest(...).FlatMap(response =>
        HttpRequest(... response.Data).FlatMap(anotherResponse =>
            HttpRequest(... anotherResponse.Data).FlatMap(thirdResponse =>
                HttpRequest(... thirdResponse.Data))));

We went from the imperative:

HttpResponse response = HttpRequest(...);

if (response.IsSuccessful)
{
    HttpResponse anotherResponse = HttpRequest(... response.Data);

    if (anotherResponse.IsSuccessful)
    {
        HttpResponse thirdResponse = HttpRequest(... anotherResponse.Data);

        if (thirdResponse.IsSuccessful)
        {
            HttpResponse fourthResponse = HttpRequest(... thirdResponse.Data);

            if (fourthResponse.IsSuccessful)
            {

            }
        }
    }
}

To the functional:

HttpRequest(...).FlatMap(response =>
    HttpRequest(... response.Data).FlatMap(anotherResponse =>
        HttpRequest(... anotherResponse.Data).FlatMap(thirdResponse =>
            HttpRequest(... thirdResponse.Data))));

Going into production

Now this may seem cool and all, but I seriously doubt that most of you have run into the exact scenario where you perform consecutive HTTP requests that rely on one another. Real-life problems look more like this:



1. A user uploads a file to a server with a category.
2. The server checks if the user has rights for this category.
3. The server stores the file into some cloud storage (AWS S3 for example).
4. The server takes the cloud storage id and stores it into a database with a newly generated unique key.
5. The unique key is then sent to an external service for asynchronous processing.

Enter the pseudocode

This article is called “Shipping Pseudocode to Production” for a reason.

Let’s see how we can implement this scenario in a robust and readable way using our Either type and
some pseudocode.

If you convert the above diagram into pseudocode, it will probably look something like this:

// function ProcessDocument userId, category, file

// 1. check if the user is authorized
// 2. store the file into the cloud
// 3. store database log with unique key
// 4. send the unique key to an external service for post-processing

We can clearly see there are 4 distinct operations. For each operation we can model a corresponding
function definition that describes it using Either:

1. Either<User, Error> CheckIfUserIsAuthorized(string userId, string category);
2. Either<CloudRecord, Error> StoreTheFileIntoTheCloud(File file, string category);
3. Either<Guid, Error> StoreDatabaseLog(CloudRecord record);
4. Either<DocumentProcessedResult, Error> SendToExternalService(Guid key);

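As an illustration, one of these operations might be implemented like this. The _userRepository and _authService dependencies and the Error constructor are assumptions that mirror the imperative example shown later in the article:

Either<User, Error> CheckIfUserIsAuthorized(string userId, string category)
{
    var user = _userRepository.GetById(userId);

    if (user == null)
        return Either<User, Error>.Error(new Error("User not found."));

    return _authService.IsUserAuthorized(user, category)
        ? Either<User, Error>.Success(user)
        : Either<User, Error>.Error(new Error("Not authorized for this category."));
}
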
The implementation of the ProcessDocument function can now be looked upon as an arrangement of
these 4 operations. Conveniently, we’ve implemented the “arrangement” mechanism through FlatMap and
Map.

Either<DocumentProcessedResult, Error> ProcessDocument(
    string userId,
    string category,
    File file) =>
    CheckIfUserIsAuthorized(userId, category).FlatMap(user =>
        StoreTheFileIntoTheCloud(file, category).FlatMap(cloudRecord =>
            StoreDatabaseLog(cloudRecord).FlatMap(uniqueKey =>
                SendToExternalService(uniqueKey))));

It really is that simple. The workflow behind the implementation is readable even to someone who doesn’t
regularly read code.

Imagine if you scaled this to 15 operations. You will not lose any readability, as 15 operations are still 15
lines of code.

If you wanted to expose this function as an ASP.NET endpoint, for example, the usage is quite simple:

public IActionResult ProcessDocument(string category, File file) =>
_documentService.ProcessDocument(User.Id, category, file)
.Match<IActionResult>(Ok, BadRequest);

What status code will be returned will be decided by the Match function.

Going back to Grady Booch’s quote:

Clean code is simple and direct. Clean code reads like well-written prose. Clean
code never obscures the designer’s intent but rather is full of crisp abstractions and
straightforward lines of control.

1. Simple and direct - check
2. Reads like well-written(?) prose - check
3. Doesn't obscure the designer's intent - check
4. Full of crisp abstractions and straightforward lines of control - check

Compare this to an imperative approach:

DocumentProcessedResult ProcessDocument(
    string userId,
    string category,
    File file)
{
    var user = _userRepository.GetById(userId);

    if (user != null)
    {
        var isAuthorized = _authService.IsUserAuthorized(user, category);

        if (isAuthorized)
        {
            try
            {
                var cloudRecord = _cloudStorageService.Store(file, category);

                var databaseStorageResult = _recordsRepository.Save(new DatabaseRecord
                {
                    Key = Guid.NewGuid(),
                    CloudRecordId = cloudRecord.Id
                });

                // ...and so on
            }
            catch (CloudStorageException)
            {
                // Handle
            }
            catch (DatabaseException)
            {
                // Handle
            }
        }
    }
}

It’s really hard to see the actual workflow, and if we had to add more logic, the function would very soon
become bloated to the point of unmaintainability.

The mindset

How does this resonate with the mindset concept I started talking about early in the article?

Since one Either instance represents one operation, using it pushes you towards splitting what you’re
implementing into logical chunks. Each logical chunk is a separate operation (function) that doesn’t know
anything about where it’s going to stand once put in the big picture.

Compare extending the chained (functional) and imperative implementations of the ProcessDocument
function.

Say we want to add an additional operation to the workflow - emitting a DocumentSentForProcessing event.

Although the imperative version follows common best practices (it has abstracted the persistence logic into repositories and the communication logic into services), you are still inevitably thrown inside this huge nested pile of code that doesn't have clear "borders" around the logical sections of the workflow.

You are required to read attentively and be careful not to break some existing logic. You can’t really “extend”
the implementation without getting your hands dirty with what’s already there.

In the chained version, however, adding an additional operation is as simple as adding an additional line of
code.

CheckIfUserIsAuthorized(userId, category).FlatMap(user =>
    StoreTheFileIntoTheCloud(file, category).FlatMap(cloudRecord =>
        StoreDatabaseLog(cloudRecord).FlatMap(uniqueKey =>
            SendToExternalService(uniqueKey).FlatMap(documentProcessedResult =>
                PublishEvent(new DocumentSentForProcessing(documentProcessedResult))))));

Now what you’re concerned with is only implementing the PublishEvent function correctly. You didn’t
have to get familiar with what’s inside the other operations, nor could you break something inside them by
not being careful.

Splitting the logic into isolated sections and implementing them one by one helps maintainability,
extensibility and debugging. It is much easier to avoid bugs and reason about your code when
implementing only one function at a time.

You also get the added benefit of directly translating pseudocode that you have discussed with some non-
technical domain expert, for example, into actual code that you can ship to production.

Conclusion

Establishing the mindset of reasoning about your code as a composition of operations helps tremendously
with identifying bugs and design issues early on.

This modular mentality, paired with the Either monad, results in code that is readable,
maintainable, and scalable in terms of complexity. The readability will make sure that after getting back to
the code 6 months later you’ll remember what it’s about and the compile-time errors will keep you from
forgetting about the edge cases and handling them adequately.

If you're interested, you can go to https://github.com/dnikolovv/bar-event-sourcing for a "real" application made using this approach (note that the Either monad used in this project is called Option<T, TException> instead of Either<T, TException>, but its behavior is the same).

Dobromir Nikolov
Author

Dobromir Nikolov is a software developer working mainly with Microsoft technologies, with his
specialty being enterprise web applications and services. Very driven towards constantly improving
the development process, he is an avid supporter of functional programming and test-driven
development. In his spare time, you’ll find him tinkering with Haskell, building some project on
GitHub (https://github.com/dnikolovv), or occasionally talking in front of the local tech community.

Thanks to Yacoub Massad for reviewing this article.



.NET

Damir Arh

CURRENT STATE OF
WEB DEVELOPMENT
FOR .NET DEVELOPERS
The field of web development has been changing quickly for some time now and the rate of change doesn't seem to be slowing down.

As a result, there are many different approaches available to get the job done, no matter what technology stack you are using.

In this article, I will give an overview of frameworks that you can choose for your .NET projects.



Model-view-controller (MVC) application architecture
The default choice for a .NET developer today is probably ASP.NET Core or ASP.NET MVC.
Both frameworks implement the model-view-controller (MVC) architectural pattern. As the name implies,
applications following this pattern consist of three main types of building blocks:

• Models are a representation of the application state. They encapsulate the business logic necessary to
retrieve the state from and persist it back to the data store (often a relational database), as well as to
perform the operations modifying the state.

• Controllers respond to all user requests. Their main role is to map the request to the underlying models
which implement the business logic, select the correct view for presenting the requested data to the
user, and pass it the model with that data.

• Views are responsible for rendering the resulting web page to the user. They read all the necessary data
from the model passed to them. To avoid too much logic in a view, the controller can instead pass a
specialized view model which serves as a wrapper or data transfer object (DTO) for the model data.

The pattern prescribes how the different building blocks can interact. A controller interacts with both
models and views, a view only interacts with models, and a model interacts neither with controllers nor
with views.

Figure 1: Interaction between the model, the view and the controller

The MVC architectural pattern is often used in web development frameworks for a reason. It works well with
the stateless model of the HTTP protocol:

• The HTTP request received by the web server is routed to the controller with all the required
information to perform the requested action.

• The controller interacts with the model and finally invokes the view with the required data.

• The output generated by the view is sent back by the web server as the HTTP response.

Figure 2: HTTP request handling with the MVC pattern

The MVC application architecture is most suitable for development of web applications that benefit from
being fully rendered on the server, i.e. those that need to be easily indexable by search engines and don’t
require a lot of quick user interaction. Examples of such applications are publicly accessible web portals
and dynamic web sites.

ASP.NET Core

Of the two MVC frameworks for .NET, ASP.NET Core is the modern one. It was released as part of .NET Core
which means that it can run on Windows as well as on Linux and macOS. It is the recommended framework
for development of new web applications following the MVC architectural pattern.

In Visual Studio 2019, a new project can be created using the ASP.NET Core Web Application template. The
recommended choice in the second step of the project creation wizard is Web Application (Model-View-
Controller) which will generate a simple sample app. If you’re using Visual Studio Code, you can create an
equivalent new project from the command line with the following command:

dotnet new mvc

Each building block type has its own folder in the project structure.

- Controller classes are placed in the Controllers folder. They derive from the Controller base class.
Their methods are called action methods and serve as entry points handling the requests:

public class HomeController : Controller
{
    public IActionResult Error()
    {
        return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
    }
}



- The Views folder contains views. By convention, they are placed in a subfolder matching the name of the
controller which will be using them. Their name matches the action methods they will be used in. Hence,
the Views/Home/Index.cshtml file would contain the default view for the HomeController.Index
action method. The .cshtml extension indicates that the views are using Razor syntax which combines
regular HTML with additional Razor markup which starts with the @ symbol:

@model ErrorViewModel
@{
ViewData["Title"] = "Error";
}

<h1 class="text-danger">Error.</h1>
<h2 class="text-danger">An error occurred while processing your request.</h2>

@if (Model.ShowRequestId)
{
<p>
<strong>Request ID:</strong> <code>@Model.RequestId</code>
</p>
}

- The classes in the Models folder fit the definition of view models. They are typically instantiated in action
methods and are usually wrappers or data transfer objects with minimal business logic.

public class ErrorViewModel
{
    public string RequestId { get; set; }

    public bool ShowRequestId => !string.IsNullOrEmpty(RequestId);
}

Most of the business logic in an ASP.NET Core application is typically implemented in services which can be
registered in the ConfigureServices method of the Startup class. This way they can be injected in the
controller classes which need them using the built-in dependency injection functionality.

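A minimal sketch of this, using a hypothetical IWeatherService as the example service:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // Hypothetical application service registered with a scoped lifetime.
    services.AddScoped<IWeatherService, WeatherService>();
}

public class WeatherController : Controller
{
    private readonly IWeatherService _weatherService;

    // The framework injects the registered service through the constructor.
    public WeatherController(IWeatherService weatherService)
    {
        _weatherService = weatherService;
    }
}
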
By default, requests invoke a specific action method based on the routing convention specified in the
Configure method of the Startup class:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

The first URL component determines the controller (HomeController if not specified), the second URL
component determines the action method in that controller (Index if not specified), and the third URL
component is passed into the action method as a parameter named id. Of course, the routing configuration
can be customized as needed.

ASP.NET Core applications are highly extensible.

Internally, each incoming request is processed by a pipeline of components which can do some work before
passing on the request to the next component in the pipeline and after receiving the response back from it.

Figure 3: ASP.NET Core request pipeline

The components in the pipeline are named middleware. Many of them are built into the framework and
provide what one could consider standard web application functionalities like routing, serving static files,
authentication, exception handling, HTTPS redirection, CORS (Cross-Origin Resource Sharing) configuration,
etc. The framework makes it easy to develop custom middleware as well.

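A minimal sketch of such a custom middleware; the class name and the timing logic are purely illustrative, and it would be registered in the Configure method with app.UseMiddleware<RequestTimingMiddleware>():

public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestTimingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        await _next(context);   // code before this line runs on the way in, code after it on the way out
        stopwatch.Stop();
        Console.WriteLine($"{context.Request.Path} took {stopwatch.ElapsedMilliseconds} ms");
    }
}
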
The current version of ASP.NET Core (2.2) is not limited to running on top of .NET Core.

It can also target .NET framework versions that implement .NET Standard 2.0 (.NET framework 4.7.1 or
later is recommended). This can be useful if the application depends on Windows-specific .NET framework
libraries which don’t run in .NET Core.

It comes at a price, however.

Such an application will only run in IIS (Internet Information Services web server) on Windows and will not
be as performant as .NET Core hosted applications. Also, the upcoming ASP.NET Core 3.0 will run only on
.NET Core and won’t support the .NET framework anymore.

All of this makes targeting the .NET framework only a temporary solution until the old Windows dependencies can be replaced with .NET Core based alternatives. With the upcoming improvements to interop in .NET Core 3.0, some of these dependencies might even become usable directly from .NET Core if you decide to host your app on Windows.



ASP.NET MVC
ASP.NET MVC is the predecessor of ASP.NET Core.

It runs on top of the .NET framework (with no support for .NET Core) and is not actively developed anymore.
Applications written in ASP.NET MVC must be hosted in IIS on Windows. ASP.NET MVC should usually not be
used for creating new web applications anymore. ASP.NET Core should be used instead (on top of .NET Core
if possible, or .NET framework if necessary).

On the surface, ASP.NET MVC appears similar to ASP.NET Core. The project structure contains the same three
core folders:
• Controllers folder with controller classes derived from the Controller base class which implement
action methods.

• Views folder with views which use Razor syntax.

• Models folder with model (or view model) classes.

However, there is only limited built-in support for dependency injection with no implementation provided
out-of-the-box. To inject services with business logic into a controller, a third-party dependency injection
library implementing the provided interfaces must be added to the project and properly configured.

Although the default routing of requests to action methods in controllers follows the same convention as in
ASP.NET Core projects, it is configured elsewhere and with slightly different code which you can find in the
RegisterRoutes method of the RouteConfig class:

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new
    {
        controller = "Home",
        action = "Index",
        id = UrlParameter.Optional
    }
);

There is also no customizable request pipeline. To hook into a request, several different approaches can be
used, depending on what needs to be achieved:

• HTTP handlers are used for extension-based request processing. ASP.NET MVC itself is implemented as
an HTTP handler which executes action methods in controllers.

• HTTP modules are used to hook into the processing of every HTTP request (ASP.NET MVC or not) and are
typically used for authentication, logging etc.

• Action filters are a part of ASP.NET MVC and can be invoked before or after the action method call, but
always after the ASP.NET MVC HTTP handler has already taken over the processing.

From the three approaches above, only action filters are still available in ASP.NET Core. The other two must
be converted to middleware. This can make it difficult to port an existing ASP.NET MVC application to
ASP.NET Core unless it is very simple. Usually it makes sense to keep maintaining it in ASP.NET MVC,
especially if there isn’t a lot of new development being done and the application is not required to run on
other operating systems than Windows.
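
For reference, an action filter - the only one of the three mechanisms that carries over to ASP.NET Core - can be as small as this sketch (the class name and the logging are illustrative):

public class LogActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        System.Diagnostics.Trace.WriteLine("Before the action method runs.");
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        System.Diagnostics.Trace.WriteLine("After the action method has run.");
    }
}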

Single-page application (SPA) architecture


In a typical web site or a web application following the MVC architecture as described in the previous
section, each user interaction with the web page will result in an HTTP request to the server to which the
web server will respond with a new web page for the browser to render.

Figure 4: User interaction in an MVC web application

As the name implies, in a single-page application (SPA) there’s only one HTML page which is downloaded
from the web server when the user opens the page. Any further interaction with that page will not directly
result in a request to the web server. Instead, it will be handled by the JavaScript code which will update
the existing page accordingly.

A new request to the web server will only be made when JavaScript code will require data that is not yet
available in the browser.

In response to that request, the web server will not send back a full HTML page. Instead, it will only send
the data requested (usually in JSON format). JavaScript code will then process the data received and update
the existing page.



Figure 5: User interaction in a single-page application

Single-page application (SPA) pattern is most suitable for web applications with a high level of user
interactivity because it can provide a better experience for the user.

Since the final appearance of a page is generated inside the browser and not returned from the server,
single-page applications are at a disadvantage when the content must be indexed by search engines. There
are solutions for that (i.e. server-side rendering) but they also increase the complexity of the final solution.

Even for a single-page application, the web server part can still take advantage of the MVC pattern.
Although the initial web page usually only consists of static files (HTML, CSS and JavaScript) and doesn’t
require any server-side processing, the process of generating the JSON documents is not all that different
from generating the web pages in an MVC web application. A common term for such backend application is
REST service.

ASP.NET Core and ASP.NET Web API


The recommended framework for implementing REST services in .NET is ASP.NET Core (preferably hosted in
.NET Core).

In Visual Studio 2019, a new project can be created using the ASP.NET Core Web Application template, but
make sure that you choose API in the second step of the project creation wizard. Visual Studio Code users

can create a new project from the command line with the following command:

dotnet new webapi

In web API applications, the controller classes are still responsible for responding to incoming requests
and can use the business logic implemented in models (and services). However, there’s no need for a view.
Instead, action methods can return view models directly, and they will be serialized to JSON format.

[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    // GET api/values
    [HttpGet]
    public ActionResult<IEnumerable<string>> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

Although default routing can still be configured in the Configure method of the Startup class, it's more common to use attribute routing instead because it gives more flexibility, which is often required for web APIs.

In the example we just saw, the method returns an array of values. A common convention in REST services
is to include an ID in the URL to receive only the value corresponding to that ID:

// GET api/values/5
[HttpGet("{id}")]
public ActionResult<string> Get(int id)
{
    return "value";
}

When data needs to be modified, other HTTP methods should be used instead of GET, such as:

• POST for inserting data,
• PUT for modifying data, and
• DELETE for deleting data.

Using attributes, such requests can be routed to the appropriate action methods:

// POST api/values
[HttpPost]
public void Post([FromBody] string value)
{
}

// PUT api/values/5
[HttpPut("{id}")]
public void Put(int id, [FromBody] string value)
{
}

// DELETE api/values/5
[HttpDelete("{id}")]
public void Delete(int id)
{
}

The .NET framework predecessor of ASP.NET Core for building REST services is ASP.NET Web API. It’s
architecturally similar to ASP.NET MVC.

For the same reasons as ASP.NET MVC, it’s not a recommended choice for new projects, but it makes sense
for existing projects to keep using it.

JavaScript frameworks
There’s an abundance of client-side JavaScript frameworks for single-page applications to choose from.
Currently, the most popular ones are Angular, React, and Vue.js.

Although there are a lot of differences between them, they consist of the same basic building blocks:

• Templates for the HTML pages to be displayed with support for binding values from variables and for
handling events such as clicks.

• JavaScript (or TypeScript) code that reacts to those events by changing the bound values or switching
the HTML template which is currently displayed.

• Command line tooling for building the application, for running it locally during development and for
other common operations.

There’s no clear choice which framework is the best.

While choosing the one to learn first, you can’t go wrong – just go with any of the ones listed above.

No matter the choice, you will need to have a recent version of Node.js installed on your development
machine. I'm going to explain the basic concepts using Angular because this is the framework I'm most familiar with.

Editorial Note: For Vue.js, check these tutorials.

It’s recommended to have Angular CLI installed globally for any kind of Angular development:

npm install -g @angular/cli

Using it, you can create a new Angular project:

ng new

The application entry point is a static index.html file in the src folder:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>AngularApp</title>
  <base href="/">

  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
</head>
<body>
  <app-root></app-root>
</body>
</html>

After the application initializes, the app-root element will be replaced with the root AppComponent. An Angular application is composed of many components. Each one declares the name of the HTML element that can be used in a template to insert it.

By convention, its source code is in a file named *.component.ts:

import { Component, Inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-fetch-data',
  templateUrl: './fetch-data.component.html'
})
export class FetchDataComponent {
  public forecasts: WeatherForecast[];

  constructor(http: HttpClient, @Inject('BASE_URL') baseUrl: string) {
    http.get<WeatherForecast[]>(baseUrl + 'api/SampleData/WeatherForecasts').subscribe(result => {
      this.forecasts = result;
    }, error => console.error(error));
  }
}

This code will call a REST service to retrieve some data and store it in a local property so that it can be
used from its template:

<table class='table table-striped' *ngIf="forecasts">
  <thead>
    <tr>
      <th>Date</th>
      <th>Temp. (C)</th>
      <th>Temp. (F)</th>
      <th>Summary</th>
    </tr>
  </thead>
  <tbody>
    <tr *ngFor="let forecast of forecasts">
      <td>{{ forecast.dateFormatted }}</td>
      <td>{{ forecast.temperatureC }}</td>
      <td>{{ forecast.temperatureF }}</td>
      <td>{{ forecast.summary }}</td>
    </tr>
  </tbody>
</table>

In Angular templates, values are interpolated using double curly braces ({{ }}). The attributes with *ng
prefix are Angular directives which are used to further control how the HTML is rendered.



If we make a rough comparison to ASP.NET Core MVC applications, Angular templates correspond to MVC
views. They just use an alternative syntax to Razor in MVC views. The component source code approximately
corresponds to MVC controllers. The architecture is somewhat similar to MVC, although it doesn’t match it
completely.

ASP.NET Core SPA templates

Typically, a single-page application consists of two separate parts:

• the REST service and

• the generated static files for the client-side application.

You can decide for yourself how you want to build and host each one. To simplify this, there are ASP.NET
Core templates available for Angular and React which join both parts into a single project:

• In Visual Studio 2019, you can choose these templates in the second step of the ASP.NET Core Web
Application template. They are named: Angular, React.js, and React.js and Redux.

• When using the dotnet new command, the template names are: angular, react and reactredux.

Editorial Note: For Vue.js templates, check Page 20 of this magazine.

No matter which template you choose, the client-side JavaScript application will be placed in the
ClientApp folder. It will be a standard application for the chosen framework which can be fully controlled
using its corresponding command line tooling.

In addition to that, the JavaScript application will be fully integrated into the ASP.NET Core application. The
generated static files will be hosted as static files in the ASP.NET Core application. During development, the
application will also automatically refresh in the browser whenever you change any of its source files.

When starting a new Angular or React application with an ASP.NET Core backend, these templates are
probably your best starting point. They make development and deployment more convenient because you
don’t have to deal with two separate projects.

Blazor
With increasing support for WebAssembly in modern browsers, JavaScript isn’t the only supported language
for applications running in a browser anymore. WebAssembly is a binary format designed to be generated
by compilers on one hand, and directly executed in browsers on the other hand. This is opening the doors to
frameworks for single-page application development which aren’t JavaScript (or TypeScript) based.

The one most interesting to .NET developers is Blazor which allows you to write your client-side code in C#.
Since it's still in preview, it requires some effort to get installed. For the best experience, you will need:

• The latest preview SDK for .NET Core 3.0.


• The latest preview of Visual Studio 2019.

To create a new project, use the ASP.NET Core Web Application template in Visual Studio 2019. In the second
step of the wizard, select Blazor (ASP.NET Core hosted). You need to have .NET Core 3.0 selected in the dropdowns at the top to make it available.

To create a project from the command line instead, you first need to install the Blazor templates:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview4-19216-03

dotnet new blazorhosted

The generated solution is functionally similar to the projects generated by the ASP.NET Core SPA templates.
It consists of three projects:

• *.Server contains the ASP.NET Core application with the web API.
• *.Client contains the Blazor single-page application.
• *.Shared contains classes which can be shared between the two projects because they both use the
same language.

The ASP.NET Core application is almost identical to the one for the JavaScript SPA frameworks. The Blazor
application is functionally very similar to the Angular and React applications as well. It’s just developed
using a different technology which makes the code more similar to an ASP.NET Core MVC application than a
JavaScript SPA.

The source code for both pages and components is placed in *.razor files, and it uses Razor syntax:

@page "/fetchdata"
@using AspNetCoreBlazor.Shared
@inject HttpClient Http

<h1>Weather forecast</h1>

<p>This component demonstrates fetching data from the server.</p>

@if (forecasts == null)
{
<p><em>Loading...</em></p>
}
else
{
<table class="table">
<thead>
<tr>
<th>Date</th>
<th>Temp. (C)</th>
<th>Temp. (F)</th>
<th>Summary</th>
</tr>
</thead>
<tbody>
@foreach (var forecast in forecasts)
{
<tr>
<td>@forecast.Date.ToShortDateString()</td>
<td>@forecast.TemperatureC</td>
<td>@forecast.TemperatureF</td>
<td>@forecast.Summary</td>
</tr>
}
</tbody>
</table>
}



This markup could easily be mistaken for a view in an ASP.NET Core MVC application. The biggest difference
is that there’s no controller class. The rest of code is contained in a @functions block, usually placed at
the bottom of the same file:

@functions {
    WeatherForecast[] forecasts;

    protected override async Task OnInitAsync()
    {
        forecasts = await Http.GetJsonAsync<WeatherForecast[]>("api/SampleData/WeatherForecasts");
    }
}

Although this code is running in the browser, it uses classes from .NET Standard 2.0 instead of browser or
JavaScript APIs.

To a .NET developer with little or no JavaScript experience, Blazor can look very tempting. However, it’s not
suitable for production use (yet) for several reasons:

• Most importantly, the framework is still in preview and no official release has been announced yet. It
will almost certainly be released after .NET Core 3.0.

• The current implementation of Blazor is very inefficient. The code is not compiled directly to
WebAssembly. Instead, it is compiled into a .NET assembly which is interpreted in the browser with a
.NET runtime which was compiled to WebAssembly. This might still change until the final release.

• WebAssembly currently can’t access DOM (Document Object Model) APIs directly. It can only call
JavaScript functions which then interact with the DOM. Since DOM APIs are used for any client-side
modification of the HTML page, this negatively affects WebAssembly performance in this field.

At least until the first official release of Blazor, JavaScript frameworks are a superior choice for development
of single-page applications. Blazor is currently only an interesting experiment showing what might be
possible in the future. To learn more about it, you can read a dedicated article about it in the DNC Magazine
written by Daniel Jimenez Garcia: Blazor - .NET in the browser.

Client-side web applications running on the server (MVC vs SPA)


There are two main differences between web applications following the MVC pattern and those following
the SPA pattern:

• MVC applications run on the server, while SPAs run on the client. With the former, the browser acts as a
thin client sending user requests to the server and rendering the received responses for the user. With
the latter, the browser is a fat client which only needs the web server to get the application files at the
beginning and to retrieve additional data while the application is running.

• The programming model of MVC applications fully exposes the nature of the web: every interaction is
an HTTP request and every response is the new state of the web page. The SPA programming model
is much more similar to desktop applications. User interactions trigger event handlers which modify
the existing page currently displayed in the browser. HTTP requests are only used for data access
operations (and loading of static files).

There’s a third category of applications which are a hybrid between the two approaches. They are running
on the server, but their programming model is event-driven, “hiding” the request/response nature of the
web from the developer.

.NET developers can choose between two frameworks for development of such applications.

Razor components
Razor components were originally named Blazor server-side. That name is a pretty good description of
what they are about.

A Razor components application is an ASP.NET Core application configured to run Blazor code on the
server. This means that the .NET Standard assemblies built from the .razor files are not interpreted in
the browser. Instead, the browser is only running a small SignalR-based JavaScript library which uses
WebSockets for real-time communication with the server where those assemblies are running.

The requirements for using Razor components are currently the same as for Blazor: the latest previews of
.NET Core 3.0 SDK and Visual Studio 2019 are recommended. To create a new project in Visual Studio 2019,
the ASP.NET Core Web Application template must be used, and the Razor Components option must be selected
in the second step of the wizard.

From command line the same can be achieved using the following command:

dotnet new blazorserverside

In the generated solution, there’s only a single project because all the code is running on the server. The
setup code specific to running Razor components on the server is placed in the Startup class. The code in
the .razor files in the Pages folder is almost identical to the code from the Blazor sample project. The
only important difference can be found in the @functions block of the FetchData.razor file:

@functions {
    WeatherForecast[] forecasts;

    protected override async Task OnInitAsync()
    {
        forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }
}

Here, the sample data is retrieved by calling an in-process service.

If you look at the Blazor sample, you can see that an HTTP request was used there instead. Since data
resides on the server and Blazor is running on the client, the HTTP request was the only option there. The
HTTP request approach would work with Razor components just as well, but it’s probably not used in the
sample because it would introduce additional overhead of implementing the web API and serializing the
response to JSON format.

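The service injected as ForecastService is just an ordinary class registered with the built-in dependency injection. A minimal sketch of what it might look like (the exact implementation in the project template may differ, and the WeatherForecast properties are assumed):

public class WeatherForecastService
{
    public Task<WeatherForecast[]> GetForecastAsync(DateTime startDate) =>
        Task.FromResult(new[]
        {
            new WeatherForecast { Date = startDate, TemperatureC = 21, Summary = "Mild" }
        });
}

// In Startup.ConfigureServices:
// services.AddSingleton<WeatherForecastService>();
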
Razor components don’t have any of the previously listed disadvantages of Blazor:

• Although they are still in preview, they will be officially released as part of .NET Core 3.0.



• .NET assemblies are running more efficiently since they don’t need to be interpreted in the browser.
• The HTML is being modified by the JavaScript library in the browser which can directly use the DOM API.

However, since the code is running on the server, the browser must be constantly connected to it. As soon
as the connection is broken, the application stops working. There’s no way to support offline mode with this
approach, unlike SPAs which can continue working without interruption even with no internet connection,
at least until they require new data from the server.

When Razor components are officially released with .NET Core 3.0, they will become a viable alternative to
JavaScript frameworks for development of highly interactive web applications, at least for scenarios where
reliable connectivity to the server is not an issue. If you strictly use REST service calls for retrieving data
which would reside on the server if the application was a SPA, you might even be able to convert your
application to a Blazor SPA without too much effort when Blazor is officially released.

ASP.NET Web Forms


ASP.NET Web Forms take a different approach to providing the illusion of a client-side programming model
to the developer. Similar to MVC applications, all the communication between the client and the server is
using the stateless approach of HTTP requests and responses. The state necessary for the application to
function correctly is being sent across as part of these requests and responses in the background without
the developer having to be aware of it.

The client-side development model is further emphasized by server controls which can be used in the
markup in addition to standard HTML elements. The server processes these controls before sending the
page to the client and generates corresponding HTML markup to ensure correct rendering in the browser.
The following snippet from the Wingtip Toys sample application would generate a table with typical grid
view semantics:

<asp:GridView ID="CartList" runat="server" AutoGenerateColumns="False"
  ShowFooter="True" GridLines="Vertical" CellPadding="4"
ItemType="WingtipToys.Models.CartItem" SelectMethod="GetShoppingCartItems"
CssClass="table table-striped table-bordered" >
<Columns>
<asp:BoundField DataField="ProductID" HeaderText="ID"
SortExpression="ProductID" />
<asp:BoundField DataField="Product.ProductName" HeaderText="Name" />
<asp:BoundField DataField="Product.UnitPrice" HeaderText="Price (each)"
DataFormatString="{0:c}"/>
<asp:TemplateField HeaderText="Quantity">
<ItemTemplate>
<asp:TextBox ID="PurchaseQuantity" Width="40" runat="server" Text="<%#:
Item.Quantity %>"></asp:TextBox>
</ItemTemplate>
</asp:TemplateField>
<asp:TemplateField HeaderText="Item Total">
<ItemTemplate>
<%#: String.Format("{0:c}", ((Convert.ToDouble(Item.Quantity)) * Convert.
ToDouble(Item.Product.UnitPrice)))%>
</ItemTemplate>
</asp:TemplateField>
<asp:TemplateField HeaderText="Remove Item">
<ItemTemplate>
<asp:CheckBox id="Remove" runat="server"></asp:CheckBox>
</ItemTemplate>
</asp:TemplateField>
</Columns>

www.dotnetcurry.com/magazine | 73
</asp:GridView>

Instead of Razor syntax, the markup above is using the ASPX view engine. Razor was developed as an
alternative to it in the early versions of ASP.NET MVC.

Apart from data binding, the markup above also specifies that the GetShoppingCartItems method
will be invoked when the data for the table needs to be retrieved. This method is implemented in the
accompanying code file (commonly called code-behind file):

public List<CartItem> GetShoppingCartItems()
{
    ShoppingCartActions actions = new ShoppingCartActions();
    return actions.GetCartItems();
}

There’s nothing special about how this code is invoked because the data is usually retrieved before the
generated web page is sent to the browser. However, the same approach is also used to define a click event
handler for a button:

<asp:Button ID="UpdateBtn" runat="server" Text="Update" OnClick="UpdateBtn_Click"


/>

This event is triggered when the user clicks on the button in the browser.

To trigger the event on the server, the button click will generate a POST request to the server with the page
state and the information about the event by submitting the form that is automatically added to any Web
Forms page.

In response, the server will execute the code in the event handler (along with any pre- and post-processing
required) and send the updated web page in response.
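
On the server side, the wired-up event handler is plain C# in the code-behind. The following is a hypothetical sketch (not taken from the Wingtip Toys sample) of what such a handler could look like; the cart-update details are omitted:

protected void UpdateBtn_Click(object sender, EventArgs e)
{
    // Read the posted quantities from the grid and persist the changes
    // (the real sample iterates over the grid rows to do this), then
    // rebind the grid so the re-rendered page reflects the new totals.
    CartList.DataBind();
}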

As you can see, at the network level, the communication between the client and the server is similar to MVC
applications. On top of that, ASP.NET Web Forms provide a client-side programming model abstraction, but at
the same time take away full control over the generated HTML from the developer.

ASP.NET Web Forms are only available for the .NET framework. They weren’t reimplemented for ASP.NET
Core like the other ASP.NET web development frameworks.

To create a new ASP.NET Web Forms project in Visual Studio 2019, start with the ASP.NET Web Application
(.NET Framework) template and select Web Forms in the second step of the wizard. However, I wouldn’t
recommend using ASP.NET Web Forms for new projects anymore for several reasons:

• ASP.NET Web Forms have no successor available for .NET Core. This means that your application will
only run on Windows and there’s no path available for migrating the code to .NET Core in the future.

• There’s no active development on ASP.NET Web Forms anymore. Only a few minor new features were
added in recent versions of the .NET framework, and you can expect even less of that in the future.

• You have limited control over the generated HTML when using ASP.NET Web Forms. This can be an
issue when having to exactly implement a specific design or make the page responsive for mobile
devices and different screen sizes.

For existing ASP.NET Web Forms applications, you don’t have a lot of choice. You will have to keep
maintaining them in their current form unless you decide for a complete rewrite in one of the other
development frameworks described in previous sections.

Conclusion:

There are many approaches available for developing a web application in .NET. However, in most cases, you
will choose between two of them based on the type of the application you’re developing.

For content-focused public sites which depend on good indexing by search engines, the best fit is usually an
MVC web application in ASP.NET Core.

For web applications with a lot of user interaction, potentially protected by a login, a single-page
application (SPA) in your preferred JavaScript framework with an ASP.NET Core web API backend is often a
better choice.

For real-world applications which usually fall somewhere in between the two extremes, the advantages
and disadvantages of each approach should be carefully considered when making the choice for one or the
other.

With the release of .NET Core, Razor components will become a good alternative to JavaScript SPAs if you
prefer C# to JavaScript (or TypeScript) and the requirement for a constant connection to the web server is
not a serious limitation for your application.

The web frameworks in the .NET framework are not really recommended for starting development of new
applications. For existing applications developed using the .NET framework, you can keep maintaining
them using the same technology, unless you require features they don’t support, e.g. running your
application on operating systems other than Windows.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his knowledge
by speaking at local user groups and conferences, blogging, and answering questions on
Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.

Thanks to Daniel Jimenez Garcia for reviewing this article.

PATTERNS & PRACTICES

Alex Basiuk

Covers .NET Core

ASYNCHRONOUS PRODUCER CONSUMER PATTERN in .NET (C#)

"This article introduces a new .NET API for implementing an asynchronous version of the producer-consumer pattern. We assume that readers are already familiar with the producer-consumer pattern."

Editorial Note: If you haven't already, make sure to read the Producer-
Consumer pattern before reading ahead.
When it comes to multi-threading, one of the first patterns that emerges is the producer-consumer pattern.
It provides buffered asynchronous communication, which is a great way of separating the work that needs to
be done from the execution of that work.

Since .NET 4.0, the standard way to implement the pattern is to use the BlockingCollection class which
wraps the IProducerConsumerCollection interface. It’s been in the wild for a long time and was covered by
Yacoub Massad earlier in his article.

But it has a major issue: it doesn’t support the async/await paradigm.

It’s possible to create an async/await version by using DataFlow blocks, but it’s less elegant and
significantly less performant.

The new API was developed as a part of .NET Core which focuses a lot on performance improvements. It
was published as a separate NuGet package System.Threading.Channels. It has an intuitive interface and
was designed to be used in asynchronous code. However, unlike BlockingCollection, it has its own
storage mechanism which is a FIFO queue and it’s not possible to customise it.

The API entry point is the Channel class which exposes a few static factory methods that can create two
categories of channels:

Bounded channels
public static Channel<T> CreateBounded<T>(int capacity);
public static Channel<T> CreateBounded<T>(BoundedChannelOptions options);

Users can choose the channel capacity and tailor how the write operation behaves when the channel is full
(see the sketch after the list below). The following options are available:

1. Wait until space is available (default option).
2. Drop the oldest item in the channel.
3. Drop the newest item in the channel.
4. Drop the item being written.
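
To make the options concrete, here is a minimal sketch (not taken from the article’s sample; the capacity and item type are illustrative) of creating a bounded channel that drops the oldest item instead of making writers wait:

// Assumes: using System.Threading.Channels;
var options = new BoundedChannelOptions(capacity: 64)
{
    // What to do when the channel is full; the default is Wait.
    FullMode = BoundedChannelFullMode.DropOldest
};
var boundedChannel = Channel.CreateBounded<string>(options);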

Unbounded channels
public static Channel<T> CreateUnbounded<T>();
public static Channel<T> CreateUnbounded<T>(UnboundedChannelOptions options)

Unbounded channels should be used cautiously, as they can lead to an OutOfMemoryException if consumers
are slower than producers.

Each Channel exposes two properties: Writer and Reader for producer and consumer concerns
respectively.

The writer accepts new items until the channel is completed with the TryComplete method. New items
can be written synchronously with the TryWrite method or asynchronously with the WriteAsync method.
The WaitToWriteAsync method allows you to await until the channel is available for writing and returns a
Boolean which indicates whether the channel has been completed.

The reader mimics the writer’s methods and exposes the TryRead method for synchronous reading and
the ReadAsync method for asynchronous reading. There is also the WaitToReadAsync method which
allows you to await until new items are available and indicates whether the channel is completed. The reader
also exposes the Completion property. It returns a task that completes when no more data will ever be
available to be read from the channel (when the TryComplete method has been called on the respective writer
and the remaining messages have been processed).
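
Putting the two sides together, the following minimal sketch (not from the article’s sample; the item type and counts are illustrative) shows the basic handshake: the producer writes items and completes the channel, and the consumer drains it until completion:

var channel = Channel.CreateUnbounded<int>();

var consumer = Task.Run(async () =>
{
    // WaitToReadAsync returns false once the channel is completed and drained.
    while (await channel.Reader.WaitToReadAsync())
    {
        while (channel.Reader.TryRead(out var item))
        {
            Console.WriteLine($"Consumed {item}");
        }
    }
});

for (var i = 0; i < 10; i++)
{
    await channel.Writer.WriteAsync(i);
}
channel.Writer.TryComplete();

await channel.Reader.Completion; // completes once all remaining items are read
await consumer;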

The following code snippet demonstrates how to use the library.

In order to make it easier to compare it to the BlockingCollection approach, I deliberately created an
async version of the ProcessDocumentsUsingProducerConsumerPattern example described by Yacoub
Massad in his post.

public static async Task ProcessDocumentsUsingProducerConsumerPattern()
{
    const int boundedQueueSize = 500;
    var savingChannel = Channel.CreateBounded<string>(boundedQueueSize);
    var translationChannel = Channel.CreateUnbounded<string>();

    var saveDocumentTask = Task.Run(async () =>
    {
        while (await savingChannel.Reader.WaitToReadAsync())
        {
            while (savingChannel.Reader.TryRead(out var document))
            {
                Debug.WriteLine($"Saving document: {document}");
                await SaveDocumentToDestinationStore(document);
            }
        }
    });

    var translateDocumentTasks = Enumerable
        .Range(0, 7) // 7 translation tasks
        .Select(_ => Task.Run(async () =>
        {
            while (await translationChannel.Reader.WaitToReadAsync())
            {
                while (translationChannel.Reader.TryRead(out var documentId))
                {
                    Debug.WriteLine($"Reading and translating document {documentId}");
                    var document = await ReadAndTranslateDocument(documentId);
                    await savingChannel.Writer.WriteAsync(document);
                }
            }
        }))
        .ToArray();

    var allItemsAreWrittenIntoTranslationChannel = GetDocumentIdsToProcess()
        .Select(id => translationChannel.Writer.TryWrite(id))
        .All(_ => _);
    // All items should be written successfully into the unbounded channel.
    Debug.Assert(allItemsAreWrittenIntoTranslationChannel);
    translationChannel.Writer.Complete();

    await Task.WhenAll(translateDocumentTasks);
    savingChannel.Writer.Complete();
    await savingChannel.Reader.Completion;
}

As mentioned above, Channels were designed with high performance in mind. So we ran a series of tests to
compare the throughput of Channels vs BlockingCollection vs DataFlow.

The first test measures the “maximum” throughput. The consumers just dequeue messages and don’t do any
actual work. The intent is to compare the overhead added by each approach.

In this case, we had a channel with a capacity of 64 items and we measured the duration of processing for
100,000,000 items. To cover a wider range of scenarios, we ran tests with different combinations of
numbers of producers and consumers, each in the range of 1-32.
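
The exact benchmark code isn’t reproduced here, but a minimal sketch of how such a throughput measurement could be wired up (assumed method and variable names, simplified compared to the actual tests) looks like this:

// Assumes: using System; using System.Diagnostics; using System.Linq;
//          using System.Threading.Channels; using System.Threading.Tasks;
static async Task<TimeSpan> MeasureChannelThroughput(int producers, int consumers, long totalItems)
{
    var channel = Channel.CreateBounded<int>(64);
    var itemsPerProducer = totalItems / producers;
    var stopwatch = Stopwatch.StartNew();

    var producerTasks = Enumerable.Range(0, producers).Select(_ => Task.Run(async () =>
    {
        for (long i = 0; i < itemsPerProducer; i++)
            await channel.Writer.WriteAsync((int)i);
    })).ToArray();

    var consumerTasks = Enumerable.Range(0, consumers).Select(_ => Task.Run(async () =>
    {
        // Consumers just dequeue; no actual work, so only the overhead is measured.
        while (await channel.Reader.WaitToReadAsync())
            while (channel.Reader.TryRead(out _)) { }
    })).ToArray();

    await Task.WhenAll(producerTasks);
    channel.Writer.Complete();
    await Task.WhenAll(consumerTasks);

    stopwatch.Stop();
    return stopwatch.Elapsed;
}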

In 80% of the scenarios, Channels shows better throughput than both BlockingCollection and DataFlow. It
demonstrates the best results when the number of producers is greater than the number of consumers, and
the worst results when the number of consumers is greater than the number of producers.

The second set of tests covers a more realistic scenario. Each consumer dequeues a message and emulates
25ms of processing. We tested both IO bound and CPU bound cases. On this occasion, we had a channel
with a capacity of 64 items and we measured the duration of processing for 32,768 items.

This time the results are not so significantly different. In most cases, Channels and DataFlow are on par. This
can probably be explained by the fact that the emulated processing time is relatively high and the number
of processed items is significantly lower than in the first test.

Conclusion:

The new Channels should be used to implement the producer-consumer pattern in asynchronous code,
especially in performance-sensitive applications where it’s important to reduce the time spent in the GC.

It offers a clean API and great throughput. But in certain scenarios, the impact can be unexpected. In existing
applications, especially synchronous ones, replacing BlockingCollection with Channels may not bring
significant benefits. On the other hand, DataFlow is a great tool for building more complex processing
pipelines; however, it’s a bit “heavy” for the simple producer-consumer case.

* All measurements were taken on a Windows 10 x64 PC with an Intel Core i7-7700 CPU @ 3.60GHz (4 cores)
and 64 GB of RAM.

Download the entire source code from GitHub at


bit.ly/dncm42-producer-pattern

Alex Basiuk
Author

Alex Basiuk is a software engineer specializing in software & distributed systems development. He
has over 15 years of experience delivering various technology solutions for the financial sector. Alex
is passionate about software architecture, high-performance computing and functional programming.
Check out his GitHub projects at https://github.com/alex-basiuk.

Thanks to Yacoub Massad for reviewing this article.

ASP.NET

Pavel Kutakov

ZERO DOWNTIME DEPLOYMENT FOR ASP.NET APPLICATIONS

Problem of Continuous Deployment for database applications
Launching a new version into production is always a nervous event, especially if the process involves a lot
of manual operations.

“It would be so good to automate this process". This idea is as old as the world of software development.
And there is a term for this - Continuous Deployment.

But here's the problem: there is no "right" way to configure continuous deployment. The process is tightly
coupled to the technology stack of the project and its environment.

In this tutorial, I want to share my practical experience in setting up automatic updates of an application,
without interrupting its operation, for a specific technological environment:

The application was written in ASP.NET MVC + SQL Azure + Entity Framework utilizing Code First, deployed
to Azure App Service, and built and deployed using Azure DevOps (formerly Visual Studio Team Services).

At first glance, everything is very simple. Azure App Service has the concept of a deployment slot – you can
deploy a new version of the application to that slot and just swap it with the active version. That would be
enough if the application were based on a non-relational database without a strictly defined table structure:
the newer version could start receiving traffic and voila!

But with a relational database, it's a bit more complicated.

I believe you are familiar with the concept of migrations in Entity Framework and know
that it was designed exactly for the “fire and forget” mode – just run automatic migrations
and be always in sync.

Unfortunately, in the real world, the automatic migration mechanism has some problems when used as part
of a continuous deployment scenario:

1. The old version of the application may not be able to work with the new database structure.

2. Updating the database structure may take considerable time and is not always
possible by the application itself using these automatic migrations.

3. Your infrastructure may become inconsistent in case of migration failure.

Let me explain this further with an example.

Let us assume that you have deployed a new version to a parallel slot or a secondary datacenter and have
started applying migrations. Assume that we have three migrations and - horror of horrors - two were
applied and the third failed!

At this point, nothing will happen to the running servers. Entity Framework does not check the version on
each request, but it is likely that you will not be able to solve the problem quickly. Meanwhile, the load on
the application may increase, the platform will launch an additional instance of the application, and it...
of course it will not start.

Entity Framework will compare the version of the code with the version of the database during the first
database request, and since the structure of the database has changed, EF will throw an error. A significant
portion of users will start receiving errors.

As you can see, the risk of automatic migration is high.

Figure 1: Risk of automatic migration when the DB is updated - updating the application in the main datacenter
while users are switched to the secondary datacenter. A failed update on the primary DC may cause unpredictable
problems for your customers.

As for the second point, your migration may contain commands whose execution time exceeds 30 seconds,
and the standard procedure will fail with a timeout.

And in addition to these points, I personally do not like the fact that with automatic migrations you have
to update part of the infrastructure to the new version. It's not so bad if you are using a deployment slot in
Azure, but when you are deploying to a secondary datacenter, you have a part of the infrastructure with a
deliberately broken application.

What to do?
So, we want to implement continuous deployment for the ASP.NET application using Entity Framework.
Let's start with the most difficult part - the database.

It would be nice to automatically update the structure of the database while keeping the previous version
of the application working. In addition, we should take into account that some updates contain commands
whose execution can take a significant amount of time, which means we need to update the database
without using the built-in mechanisms, by executing a separate SQL script.

Needless to say, your migrations should be non-destructive. That is, changes in the database structure
should not break the previous version, and even better - the previous two versions. If you are unable to
satisfy this requirement, the described approach will be dangerous for your application.

The question is: how do we prepare the SQL script for the database migration?

You can make this process manual. If your team has a dedicated Release Manager role, you can coerce the
team member to run the following command in Visual Studio

update-database -script

...which will generate the script and this person will put this script into a specific project folder.

But I agree, this approach is not error free. It depends on human intervention, adds extra complexity if
you have more than one migration between releases, and also has to deal with the possibility of a release
being skipped on the target system.

Be ready to implement a rather involved migration tracking system to know which migrations have already
been applied and which ones you still need to run. This is difficult, and the same wheel has already been
invented in the built-in migration mechanism.

Of course, we’d be better off embedding the process of migration script generation and execution into
the release pipeline. Unfortunately, the “update-database –script" command cannot be used as part of a CI/CD
pipeline; it can only be executed in the “Package Manager” console of Visual Studio.

To achieve the same result, you can use separate “migrate.exe” utility which is included with the Entity
Framework. Please note that you need Entity Framework 6.2 or higher, as the script generation option
appeared in this utility only in April 2017. Calling the utility looks like this:

migrate.exe Context.dll /connectionString="Data Source=localhost;Initial Catalog=myDB;User Id=sa;Password=myPassword;"
  /connectionProviderName="System.Data.SqlClient" /scriptFile=1.SQL
  /startUpDirectory="c:\projects\MyProject\bin\Release" /verbose

Specify the name of the assembly where your Context class is located, the connection string to the target
database, the provider, and, most importantly, the start directory that contains both the context assembly
and the Entity Framework assembly. Do not experiment with the names of the working directory, keep it
simple.

Note: We came across a strange case when migrate.exe was unable to read the directory with a name that had
spaces and non-alphabetic characters.

There's an important digression to be made.

After the execution of the above command, the utility will generate a single SQL script containing all the
commands for all migrations that need to be applied to the target database. This is not good for SQL Server.

The fact is that the server executes commands without the GO separator as a single batch, and some
operations cannot be performed together in one batch.

For example, in some cases, adding a field to a table and immediately creating an index on that table with
a new field, does not work.

But there is more. Some commands require certain environment settings when running the script. Such
settings are enabled by default when you connect to SQL Server via SQL Server Management Studio, but
when the script is executed via SQLCMD console utility - it must be set manually.

To take all this into account, you will have to modify the process of generating the migration script. To do
so, create an additional class next to your DbContext descendant, which does everything you need:

public class MigrationScriptBuilder : SqlServerMigrationSqlGenerator
{
    public override IEnumerable<MigrationStatement> Generate(
        IEnumerable<MigrationOperation> migrationOperations,
        string providerManifestToken)
    {
        var statements = base.Generate(migrationOperations, providerManifestToken);
        var result = new List<MigrationStatement>();
        result.Add(new MigrationStatement { Sql = "SET QUOTED_IDENTIFIER ON;" });
        foreach (var item in statements)
        {
            item.BatchTerminator = "GO";
            result.Add(item);
        }
        return result;
    }
}

And to allow Entity Framework to use it, you should register it in the Configuration class, that usually is
found in the Migrations folder:

public Configuration()
{
SetSqlGenerator("System.Data.SqlClient", new MigrationScriptBuilder());
}

The resulting migration script will then contain a GO between each statement and a SET QUOTED_IDENTIFIER ON
at the beginning of the file.

Well done. Now we need to adjust the release process.

In general, as part of the release pipeline in Azure DevOps (VSTS/TFS), this is quite simple. We'll need
to create a PowerShell script to prepare and execute required database migrations. It will look like the
following:

param
(
    [string] [Parameter(Mandatory=$true)] $dbserver,
    [string] [Parameter(Mandatory=$true)] $dbname,
    [string] [Parameter(Mandatory=$true)] $dbserverlogin,
    [string] [Parameter(Mandatory=$true)] $dbserverpassword,
    [string] [Parameter(Mandatory=$true)] $rootPath,
    [string] [Parameter(Mandatory=$true)] $buildAliasName,
    [string] [Parameter(Mandatory=$true)] $contextFilesLocation
)

Write-Host "Generating migration script..."
$fullpath="$rootPath\$buildAliasName\$contextFilesLocation"
Write-Host $fullpath
& "$fullpath\migrate.exe" Context.dll /connectionProviderName="System.Data.SqlClient" /connectionString="Server=tcp:$dbserver.database.windows.net,1433;Initial Catalog=$dbname;Persist Security Info=False;User

…and add a PowerShell script execution task to the release pipeline. The task and its settings may look like
this:

Figure 2: Add PowerShell Execution Block

The PowerShell script settings look like this:

Figure 3: Powershell Task Settings

You need to supply the following parameters to the PowerShell script:

• dbserver – server name/address where your target database can be found.

• dbname – name of your target database.

• dbserverlogin – user name to connect to the database.

• dbserverpassword – user password. The server, database, user and password are usually defined as release
environment variables. The password may be stored as a secret variable.

• buildAliasName – it is a name for your release pipeline which is used by Azure DevOps as a name of the
target directory. Usually taken from $(Build.DefinitionName) variable.

• rootPath – the path to the local directory where all artifacts will be copied by Azure DevOps. Usually
taken from $(System.ArtifactsDirectory) variable.

It is important to add the “migrate.exe” file to your project from <your Project>/packages/
EntityFramework.6.2.0/tools/ and set it to “Copy Always” so that this utility is copied to the output directory
when you build the project and you can access it in the Azure DevOps release.

Note: If your project also uses a WebJob, then deploying to Azure App Service is a bit unsafe. We have faced a
situation where Azure launches the first available executable file in the folder where your WebJob is published.
If your WebJob name comes alphabetically after “migrate.exe” (as was the case for us), then Azure will try to run
“migrate.exe” instead of your WebJob executable!

So, now that we have learned how to update the version of the database by generating a script during the
release, other steps will be way easier. We should disable checking of the migration version, so that in case
of any failures in the execution of the script, the older version of our code continues to work.

As I mentioned earlier - your migrations should be non-destructive. To disable validation, you only need to
add the following section to Web.config file:

<entityFramework>
<contexts>
<context type="<full namespace for your DbContext class>, MyAssembly"
disableDatabaseInitialization="true"/>
</contexts>
</entityFramework>

…where <full namespace for your DbContext class> is the full path with namespace to your
DbContext descendant and MyAssembly is the name of the assembly with your Context class.

Finally, it is highly desirable to warm up the application before switching users to the new version. To do
so, add a special section to the web.config with links that your application automatically follows during
initialization:

<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/" hostName="" />
  </applicationInitialization>
</system.webServer>

You could add several links by just adding more lines with the “initializationPage” attribute:

<add initializationPage=”/myInit1” />

Azure documentation states that during the swapping of slots, the platform will wait for application
initialization and only then will switch traffic to the new version.

What about .NET CORE projects?


In .NET Core, things are much easier and at the same time, different.

Migration script generation is possible using the standard mechanism, but it is performed not on the basis
of the compiled assembly, but on the basis of the project file.

Therefore, the script must be generated as part of the build process and must be included as a build
artifact.

The script will contain all the SQL commands of all the migrations from the beginning. There is no problem
with this, because the script is idempotent, i.e. it can be applied to the target database again without any
consequences. This has another useful consequence - we do not need to modify the script generation
process to divide commands into batches; everything is already done for us.

Here is the step-by-step process setup.

Step 1: Call Entity Framework Core CLI utility for script generation. Just add the appropriate task to the
build pipeline:

Figure 4: Add .NET Core task to the build pipeline

Step 2: Set up this task to generate migrations file:

Figure 5: .NET Core Task Settings

(official documentation is here: https://docs.microsoft.com/ru-ru/ef/core/miscellaneous/cli/dotnet#dotnet-ef-migrations-script)

The parameters are pretty clear: you have to specify your project file location, your startup project file
(which is usually the same as the project file) and the path to the output SQL script file.
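
Outside of the task UI, roughly the same result can be achieved from the command line. The following invocation is a sketch only: the output location and project paths are illustrative, not the exact configuration from the article.

dotnet ef migrations script --idempotent --output $(Build.ArtifactStagingDirectory)/migrations.sql --project src/MyProject/MyProject.csproj --startup-project src/MyProject/MyProject.csproj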

As a result, after the build finishes, you may have the following build artifacts:

Figure 6: .NET Core Artifacts Explorer

Step 3: Your build artifacts should contain additional PowerShell script (“easycicd.ps1” as seen in Figure
6) which you will use to execute your SQL migrations script in the corresponding Release pipeline. Your
PowerShell script may be much smaller and include only database connection information as input
variables:

param
(
    [string] [Parameter(Mandatory=$true)] $dbserver,
    [string] [Parameter(Mandatory=$true)] $dbname,
    [string] [Parameter(Mandatory=$true)] $dbserverlogin,
    [string] [Parameter(Mandatory=$true)] $dbserverpassword,
    [string] [Parameter(Mandatory=$true)] $migrationScript
)

& "SQLCMD" -S "$dbserver.database.windows.net" -U $dbserverlogin@$dbserver -P $dbserverpassword -d $dbname -i $migrationScript

Please do not forget to add “PowerShell” task in your Release pipeline to run this script, as described above
for EF 6.2.

Conclusion

Using the technique mentioned in this article, you can roll out your ASP.NET applications without any
downtime. But notice that even within the same technology family, the process setup is totally different.

Each development environment requires its own recipe for continuous deployment.

Pavel Kutakov
Author

Pavel is an experienced software architect who has repeatedly demonstrated his ability
to complete finance-related software projects. He is a strong information technology
professional with focus on modern cloud technology platforms. His banking core system
software has worked all over the world from US to Papua New Guinea. He has also created
a specialized processing system for national lottery operator.

Thanks to Daniel Jimenez Garcia for reviewing this article.

PATTERNS & PRACTICES

Yacoub Massad

STATE IN MULTI-THREADED APPLICATIONS
In this article, I will talk about ways to handle state in
multi-threaded C# applications.

INTRODUCTION
In the previous edition, I talked about Global state in C# applications. I talked about why
people tend to use global variables and suggested some solutions.

In this part, I will talk about state in multithreaded C# applications.

C# Multithreaded application – The Example
I am going to continue using the same example application I used in Part 1. The source code can be found
here: https://github.com/ymassad/StateExamples. I added new projects to the solution for this part.

Modifying state without any synchronization

Take a look at the MultithreadingAndRefStateParameters project. This project is a modified version of the
PassingStateViaRefParametersWithIOC project that we ended up with in Part 1.

The difference between the two projects is that in the MultithreadingAndRefStateParameters project,
documents are processed in parallel.

Take a look at the TranslateDocumentsInFolderInParallel method in the FolderProcessingModule class.
This method uses the Parallel.ForEach method to process the documents in parallel. The Parallel.ForEach
method automatically decides on the degree of parallelism based on the number of CPU cores on the
machine and other factors.

Another difference is that I added a ref int numberOfTimesCommunicatedWithServersState parameter to
both the TranslateFromGerman and TranslateFromSpanish methods to allow these methods to increment
such state every time they communicate with a translation server. I also defined a
numberOfTimesCommunicatedWithServersState variable in the Main method to hold such state.

Note: In Part 1, I used the Server1State class to represent the state of server 1 (if it is down and since
when). All the projects I created for Part 2 don’t have this state class. In these new projects, the choice
between server 1 and server 2 is random.

The number of documents the application will try to process is set to 1000 in this application and each
document has two paragraphs. Therefore, the application is expected to communicate with the servers
(fake servers in this demo application) 2000 times.

If you run this sample application, the application will print the value of
numberOfTimesCommunicatedWithServersState after all documents have been processed. In one run
on my machine, I got the value 1974. In another run, it was 1946. I expect it to give you a number smaller
than 2000 when you run it (unless you have a single CPU core or Parallel.ForEach decided to use a
single thread for some reason).

We get a number smaller than 2000 because we have a race condition. The issue is related to the following
code:

numberOfTimesCommunicatedWithServersState++;

This code exists in four places in the project: two in TranslateFromGerman() and two in
TranslateFromSpanish().

This code increments the value in the numberOfTimesCommunicatedWithServersState parameter.


However, such an increment operation is not atomic. It consists of three operations:

1. Reading the value of the state parameter

2. Adding one (1) to the value (not the state parameter, think of this as a temporary variable)
3. Writing the result to the state parameter

Now, imagine the following scenario: the current value of the state is 100, and two threads are about to
execute the code above. They interleave the operations like this: both threads read the current value (100),
each adds 1 to its own temporary copy, and then both write 101 back to the state parameter.

Because each thread communicated with the service once, we expect the new state value to be 102.
Instead, it ended up being 101!

To solve the problem, we need to treat the increment operation as an atomic operation.

To increment an integer in C# as an atomic operation, we can use the Interlocked.Increment method. We
can replace the code line that increments the state parameter with the following line:

Interlocked.Increment(ref numberOfTimesCommunicatedWithServersState);

The Increment method increments the value stored in numberOfTimesCommunicatedWithServersState as
an atomic operation.

The lock statement

What if we want to multiply the state variable by some number? Or do some other type of complex update
to a more complex state object?

For example, let’s say we want to have a complex state object to keep track of the number of times the
application communicated with each server and the total time spent communicating with each server.

We can use the lock statement to synchronize any kind of state manipulation.

Take a look at the MultithreadingAndRefStateParametersAndLockStatement project. The
ServerCommunicationStatistics class is an immutable class that contains four properties that keep
track of the number of times we communicate with each server and the total time spent communicating
with each server.

The TranslateFromGerman and TranslateFromSpanish methods take a ServerCommunicationStatistics
state parameter (by reference, of course) instead of a simple int state parameter. They also take a parameter
of type object (statisticsStateLockingObject) that they use for synchronization. Here is the relevant code:

Stopwatch stopwatch = Stopwatch.StartNew();

var result = TranslateFromSpanishViaServer1(text, location);

var elapsed = stopwatch.Elapsed;

lock (statisticsStateLockingObject)
{
statisticsState = statisticsState
.WithTotalTimeSpentCommunicatingWithServer1(
statisticsState.TotalTimeSpentCommunicatingWithServer1 + elapsed)
.WithNumberOfTimesCommunicatedWithServer1(
statisticsState.NumberOfTimesCommunicatedWithServer1 + 1);
}

This code measures the time spent communicating with server 1. It then uses the C# lock statement to
synchronize the update to the statisticsState parameter (of type ServerCommunicationStatistics). All
access to the statisticsState object in the TranslateFromGerman and TranslateFromSpanish
methods is protected by locking on the statisticsStateLockingObject object. This guarantees that
only a single thread can be inside the lock statement block at any given time. This means that when a
thread reads the value of statisticsState, no other thread will have the chance of reading or updating
the value of statisticsState before the first thread is done updating it. This eliminates the race
condition.

Inside the lock block, we create a new ServerCommunicationStatistics instance that has the updated
property values. I use With methods here to do this. For more information about With methods, see the
Designing Data Objects with C# and F# article.
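
For illustration, a With method on such an immutable class might look like the following minimal sketch; the property names of the Server2 counterpart and the constructor shape are assumptions here, not the project's actual code:

public ServerCommunicationStatistics WithNumberOfTimesCommunicatedWithServer1(int value)
    // Returns a new instance with one property changed and all others copied over.
    => new ServerCommunicationStatistics(
        numberOfTimesCommunicatedWithServer1: value,
        numberOfTimesCommunicatedWithServer2: NumberOfTimesCommunicatedWithServer2,
        totalTimeSpentCommunicatingWithServer1: TotalTimeSpentCommunicatingWithServer1,
        totalTimeSpentCommunicatingWithServer2: TotalTimeSpentCommunicatingWithServer2);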

Note that in the Main method, we pass the same object instance
(serverCommunicationStatisticsStateLockingObject) to both TranslateFromGerman and
TranslateFromSpanish. If we pass two different instances, updates to statisticsState will not be
synchronized and we will have the race condition again.

But why have the statisticsStateLockingObject parameter? Why not use the statisticsState
parameter?

The statisticsState parameter (being a state ref parameter) does not always refer to the same
object instance. Therefore, when two threads read the value of this parameter, they might get references to
two different ServerCommunicationStatistics objects if they read it at different times.

We must lock using the same instance.

If we turn ServerCommunicationStatistics into a mutable object, then we can pass it normally
(not as a ref parameter), and then we can use it for locking. However, having mutable objects passed as
parameters makes methods harder to understand. For more details, see the Designing Data Objects with C#
and F# article.

Using Interlocked.CompareExchange

The lock statement is the recommended way of protecting shared resources in .NET. However, there is a
way to use the Interlocked class to update a complex state object which does not involve a lock.

Take a look at the MultithreadingAndRefStateParametersAndComplexCAS project. Because we need no
locks in this project, I removed the statisticsStateLockingObject parameters. Here is how I update
the state (in the TranslateFromGerman method for example):

Utilities.UpdateViaCAS(ref statisticsState, state =>
    state
        .WithTotalTimeSpentCommunicatingWithServer1(
            state.TotalTimeSpentCommunicatingWithServer1 + elapsed)
        .WithNumberOfTimesCommunicatedWithServer1(
            state.NumberOfTimesCommunicatedWithServer1 + 1));

I call a method called UpdateViaCAS passing a reference to the state and a lambda. This lambda will be
called by the UpdateViaCAS method to calculate a new state value from the current state value.

Here is what the UpdateViaCAS method looks like (CAS stands for Compare and Swap):

public static void UpdateViaCAS<TState>(
ref TState state, Func<TState, TState> update) where TState : class
{
var spinWait = new SpinWait();

while (true)
{
TState beforeUpdate = state;

TState updatedValue = update(beforeUpdate);

TState found = Interlocked.CompareExchange(
    location1: ref state,
    value: updatedValue,
    comparand: beforeUpdate);

if (beforeUpdate == found)
return;

spinWait.SpinOnce();
}
}

The trick in the UpdateViaCAS method is that it uses the Interlocked.CompareExchange method to update
the state object only if it has not been changed by another thread.

The idea is like this: We first read the value of the state into the beforeUpdate variable. We then
invoke the update function to calculate the updated state object. We then invoke Interlocked.
CompareExchange.

Interlocked.CompareExchange will only update the state if it finds out that no other thread has
already updated it since we read it. It does that by comparing the current state value with beforeUpdate.
The compare operation and the swap operation (setting the new state value) happen as a single atomic
operation.

The Interlocked.CompareExchange method returns the value of the state as it found it when
attempting to swap. If we find out that this returned value is the same as the value we read before
attempting to swap, then our mission is complete.

Otherwise, we repeat the whole operation.

This means that we might end up calling the update function multiple times, each with a different state
value.

This is an advanced technique and as I mentioned before, using the lock statement is the recommended
way to protect a shared resource in .NET.

Still, using the CAS approach might give you some performance benefits in certain cases.

When it comes to performance, though, always test to see which approach is better in your case.

Note: In the example above, I use SpinWait to wait for a very small time after each failed attempt to set
the new state before trying again. This is intended to reduce the chance of failing for the next attempt. The
reason I use this and not Thread.Sleep is for performance reasons which are out of the scope of this article.

Extracting state updating to a StateHolder object

Usually, you don’t want your functions to know exactly how state is stored and which synchronization
technique is used to protect access to it.

Take a look at the MultithreadingAndRefStateParametersAndStateHolder project. Look at the
TranslateFromGerman method for example. It does not take a ref ServerCommunicationStatistics
parameter, nor an object to use for locking. Instead, it takes a parameter of type
IStateUpdater<ServerCommunicationStatistics>. The IStateUpdater interface has the following
method:

void UpdateState(Func<TState, TState> updateFunction);

The TranslateFromGerman method uses the UpdateState method like this:

statisticsStateUpdater.UpdateState(statisticsState =>
statisticsState
.WithTotalTimeSpentCommunicatingWithServer1(
statisticsState.TotalTimeSpentCommunicatingWithServer1 + elapsed)
.WithNumberOfTimesCommunicatedWithServer1(
statisticsState.NumberOfTimesCommunicatedWithServer1 + 1));

The UpdateState method takes an updateFunction parameter. This function parameter allows the
UpdateState method to call back so that we can tell it how to produce a new state value given the
current state value (represented by the statisticsState lambda parameter above).

The IStateUpdater parameter of the TranslateFromGerman method makes it clear to readers that this
method might update state.

In the Main method, we create an instance of the ThreadSafeStateHolder class and pass it to the
TranslateFromGerman and TranslateFromSpanish methods. This class implements the
IStateUpdater interface (and other interfaces). Internally, it uses the lock statement to protect access
to the state. We could create another state holder class that uses Interlocked.CompareExchange if we
wanted to.
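
For illustration, here is a minimal sketch of what such a lock-based holder could look like. This is not the article’s actual class; in particular, the IStateGetter<TState> interface and its GetState method are assumptions here:

public sealed class ThreadSafeStateHolder<TState> : IStateUpdater<TState>, IStateGetter<TState>
{
    private readonly object lockObject = new object();
    private TState state;

    public ThreadSafeStateHolder(TState initialState) => state = initialState;

    public void UpdateState(Func<TState, TState> updateFunction)
    {
        lock (lockObject)
        {
            // The update function produces a new (immutable) state value
            // from the current one while the lock is held.
            state = updateFunction(state);
        }
    }

    public TState GetState()
    {
        lock (lockObject)
        {
            return state;
        }
    }
}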

Using the return value of a function to communicate the state back to the
caller

In multithreaded applications, functions running on different threads might require access to a shared copy
of some state.

For example, consider the Server1State class (the state discussed in part 1). If code running on thread
#1 discovers that server1 is down and sets the state accordingly, then thread #2 which runs in parallel with
thread #1 should be able to read that same state.

In these scenarios, using an IStateUpdater parameter (or a ref parameter) to update state is required
because a function must be able to have some outputs (the new state) while it executes, i.e., before it
returns.

The IStateUpdater interface (and ref parameters) allows functions to make such outputs before they
complete. They can simply invoke the UpdateState method. Other functions will be able to read the new
state (via IStateGetter for example) when they wish.

ServerCommunicationStatistics is different. We only really read these statistics at the end of the
application, after all the processing is done.

Each thread can have its own instance of ServerCommunicationStatistics, and translate a subset
of the documents. We can then combine the ServerCommunicationStatistics objects from different
threads to produce a single instance of ServerCommunicationStatistics that we then use to display
the results in the console.

See the MultithreadingAndUsingReturnValue project. In this project, there are no ref parameters or State
Holders. A method interested in reading or updating state takes the current value of the state as input, and
returns a new value of the state as part of the output.

For example, here is the signature of the TranslateFromGerman method:

public static (Text text, ServerCommunicationStatistics newState) TranslateFromGerman(
    Text text,
    Location location,
    ServerCommunicationStatistics statisticsState)

The method takes a normal (not ref) ServerCommunicationStatistics parameter and returns a new
ServerCommunicationStatistics object. Notice that I used a tuple here to represent two outputs: the
translated Text object, and the new ServerCommunicationStatistics object.
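
A caller can thread the statistics through by deconstructing the tuple. The following is a minimal sketch with assumed variable names, not code from the sample project:

var (translatedText, newStatistics) = TranslateFromGerman(text, location, statisticsState);
statisticsState = newStatistics;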
Take a look also at the TranslateDocumentsInFolderInParallel method:

public static ServerCommunicationStatistics TranslateDocumentsInFolderInParallel(
    string folderPath,
    string destinationFolderPath,
    Location location,
    ServerCommunicationStatistics statisticsState)
{
IEnumerable<Document> documentsEnumerable = GetDocumentsFromFolder(folderPath);

object lockingObject = new object();

var state = statisticsState;

Parallel.ForEach(
documentsEnumerable,
localInit: () => ServerCommunicationStatistics.Zero(),
body: (document, loopState, localState) =>
{
var result = DocumentTranslationModule.TranslateDocument(
document, location, localState);

WriteDocumentToDestinationFolder(result.document, destinationFolderPath);

return result.newState;
},
localFinally: (localSum) =>
{
lock (lockingObject)
{
state = state.Combine(localSum);
}
});

return state;
}

This method uses a different overload of the Parallel.ForEach method. This overload allows us to have a
thread-local state (or Task-local state since Parallel.ForEach uses the Task Parallel Library, and does not
use threads directly).

When each task runs, it starts with an initial state. In the method above, I use
ServerCommunicationStatistics.Zero to construct an empty ServerCommunicationStatistics
object. You can find this code above in the localInit function parameter.

Then, when processing each item, in the body function parameter, we get to access the task-local
state. In the method above, the localState lambda parameter represents such state. After I call
DocumentTranslationModule.TranslateDocument, I return the new state object as the return value
of the lambda. We don’t need to synchronize any access to this state because it is guaranteed that only a
single task (and thus a single thread) is going to access it.

However, when a task completes its subset of documents, the localFinally function parameter is invoked.
The localSum lambda parameter in the code above will be the state object produced by the task after
translating its final document. Here I pass a lambda that updates state, a local variable representing the
final result for all tasks.

Notice that here I must synchronize access to the state variable since there might be multiple tasks that
finish at the same time and thus attempt to update the state variable at the same time.

Without locking, we can have a race condition here.

Please notice that in this project, there is no Inversion of Control (IOC). Functions call each other directly.
This means that the call hierarchy is polluted with the state parameters/return values. Go to the program's
functions and see how each of them had to deal with the state parameter/return value.

Here are some questions:

1. What are the benefits of having the state returned via function return values in both multithreaded and
single-threaded applications?

2. Can we use the return values of functions to return state without polluting the whole call hierarchy?

I will address these questions in an upcoming article.

Note: In the case of the ServerCommunicationStatistics state, we really don’t even need to have a
parameter of this type, just a return value. This is true because the functions don’t really read the state, they
just update it. We can make methods return objects representing the changes that they made, and then
aggregate them higher in the call stack. For example, when translating a document, we translate multiple
paragraphs. We can get the changes done by each paragraph translation and aggregate that and return it to
the caller.

However, this is just a special case. In general, state can be both read and written to by a function.

Conclusion:

In this article, we discussed handling state in multithreaded applications. I talked about race conditions.
When two threads try to update a shared value, a race condition can happen if the updates are not atomic.

We explored using the Interlocked class to update state atomically, as well as the lock statement.

We also explored encapsulating state changes into a StateHolder object so as to separate concerns.

Finally, I explained a way to deal with updating state by actually returning the new state value as part of
the function’s return value, instead of having access to the state via a parameter.

Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.

Thanks to Damir Arh for reviewing this article.

THANK YOU
FOR THE 42nd EDITION

@dani_djg @damirarh @yacoubmassad

@filip_woj alex-basiuk dnikolovv Pavel Kutakov

@sommertim @suprotimagarwal @saffronstroke

WRITE FOR US
mailto: suprotimagarwal@dotnetcurry.com
