
Vue Plug-ins, Entity Framework, REST, Ajax

MAR/APR 2021
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

Understanding
Docker
shutterstock.com/anttoniart

Tapping into the EF Core Pipeline
More Vue Plug-Ins
Deploying ExpressJS Applications

SQL & Azure SQL Conference

June 8–10, 2021 | Orlando, FL | Walt Disney World Swan and Dolphin | Workshops June 6, 7, 11
Dec 7–9, 2021 | Las Vegas, NV | MGM Grand | Workshops December 5, 6, 10

Powered by

A SAMPLING OF KEYNOTES AND SPEAKERS FOR OUR SPRING AND FALL EVENTS

SCOTT GUTHRIE, Executive Vice President, Cloud + AI Platform, Microsoft
SCOTT HANSELMAN, Principal Program Manager, Web Platform, Microsoft
CHARLES LAMANNA, Corporate Vice President, Low Code Application Platform, Microsoft
ROHAN KUMAR, Corporate Vice President, Azure Data, Microsoft
BOB WARD, Principal Architect, Microsoft Azure Data / SQL Server Team, Microsoft
KATHLEEN DOLLARD, Principal Program Manager, Microsoft
SCOTT HUNTER, Director of Program Management, Microsoft
JEFF FRITZ, Senior Program Manager, Microsoft
BUCK WOODY, Applied Data Scientist, Microsoft
ANNA HOFFMAN, Data & Applied Scientist, Microsoft
JOHN PAPA, Principal Developer Advocate, Microsoft
DAN WAHLIN, Cloud Developer Advocate Manager, Microsoft
KENDRA HAVENS, Program Manager, Microsoft
LESLIE RICHARDSON, Program Manager, Microsoft
PAUL YUKNEWICZ, Principal Group PM Manager, Microsoft
...and many more!

DEVintersection.com | AzureAIConf.com
If your passion is technology, take it to the next level!

• Be among the first to get the insider scoop on what's coming in 2021
• A unique opportunity to meet with Microsoft engineers who build the products
• Interact with top industry experts sharing real-world solutions and techniques to sharpen your skills for instant ROI

SAMPLE TECHNOLOGIES
ASP.NET MICROSOFT AZURE AI SQL SERVER AZURE SQL .NET
VISUAL STUDIO TEAMS POWER PLATFORM IoT DEVOPS
POWERSHELL MACHINE LEARNING BLAZOR PROJECT DESIGN
DOCKER C# KUBERNETES REACT TYPESCRIPT SIGNALR
COSMOS DB NOSQL QUANTUM COMPUTING XAMARIN
AND MANY MORE, INCLUDING
UPCOMING TECHNOLOGIES AND PRODUCTS CURRENTLY UNDER WRAPS

June 8–10, 2021 Orlando, FL


This event returns to Orlando, FL for those who can travel safely by then as the vaccine
rollout increases.
We will be following CDC guidelines, so space will be limited.
For those who can't attend safely in June, we have our major event December 7–9, 2021 at the MGM Grand, Las Vegas, NV.

When you REGISTER EARLY for a workshop package, you'll receive a choice of hardware or hotel gift card! Go to DEVintersection.com or AzureAIConf.com for details.

Surface Earbuds
Surface Go 2
Xbox One X
Surface Headphones 2

Follow us on Twitter @DEVintersection @AzureAIConf

DEVintersection.com 203-527-4160 M-F, 12pm-4pm EST AzureAIConf.com


TABLE OF CONTENTS

Features
8 Deploy a Real-World ExpressJS TypeScript Application Using Containers
Because you spend so much time interacting with JavaScript (whether you know it or not), Sahil wants to make sure that you make the right decisions about how to wire up your own apps.
Sahil Malik

14 Using Ajax and REST APIs in .NET 5
Paul continues exploring how to move data in a hurry with Ajax and REST APIs. This is the first of a longer series that will teach you to create great applications using .NET 5.
Paul D. Sheriff

27 Eliminating Waste During Designer-to-Developer Handoff
If you've ever been part of a development team, you've experienced the disconnection that designers sometimes seem to have from how things get built. Jason takes a dive into the hand-off process and shows you how it can be done smoothly.
Jason Beres

31 Tapping into EF Core's Pipeline
When it comes to metadata, EF Core has some cool new tricks to learn. Julie shows you how.
Julie Lerman

42 Introduction to Containerization Using Docker
Wei-Meng explains how Docker replaces virtual machines to host the apps and libraries you need, completely independent of which OS you're using.
Wei-Meng Lee

53 The Complete Guide to Vue 3 Plug-ins: Part 2
Bilal shows you how to pack a lot of reusable code into Vue plug-ins in this second part of his exploration of Vue.
Bilal Haidar

62 Migrating Monolithic Apps to Multi-Platform Product Lines with .NET 5
You can just about feel the eyerolls of the dev team when you find out that you're going to be updating some ancient piece of software. Alexander shows you that using .NET 5, migrating doesn't have to be painful.
Alexander Pirker

Columns

74 CODA: Why Ritual Matters
John reacts to current events by taking the lessons from the news into coding and corporate structures.
John V. Petersen

Departments

6 Editorial
25 Advertisers Index
73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Bill Me option is available only for US subscriptions. Back issues are available. For subscription information,
send e-mail to subscriptions@codemag.com or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.



EDITORIAL

Keeping the Creative Fires Lit


The theme of last issue's editorial, "Reigniting Creativity Daily," was originally going to be "Keeping the Creative Fires Lit," but like most of my creative endeavors, the theme changed as I wrote. Last issue's editorial was more or less about awakening the creator that lives within us all. For this issue, I'll return to the original idea and talk about how creative endeavors are like creating and managing campfires.

Many of my editorials are inspired by real-life events. Be it work or home life, these editorials spring forth from my reality. The event that inspired this theme was the discovery of GraphQL and how it could really help with one of my entrepreneurial endeavors. For the astute reader, you'll realize that there have been several GraphQL articles in CODE Magazine's pages, so the discovery wasn't all that new. This time was different. I had an application for this technology and was yearning to learn more about it. It's this yearning that provided the seed for this editorial.

When I started to learn about GraphQL I realized that I had a method to my learning process. This method isn't a formal process, but rather an intuitive process built over many years of exploring technology. As I delved deeper and deeper into this technology, I felt my excitement grow: A creative fire had been lit. This is where my silly but, in my opinion, poignant theme comes from. Learning new technology is akin to building a campfire. So let's explore the steps and how they applied to my learning of GraphQL.

The Need for a Fire
Before you work on building a fire, you need to determine whether you need it. In this case, the need came from the fact that this new application would be a mobile application. I didn't like the idea of constructing a complex REST API. There had to be a better way. This better way was GraphQL.

Gathering Material
In order to build a fire, you need fuel. For this project, the material we gathered was basic research. We consulted the GraphQL spec online, we Googled, we looked for sample code and papers, we looked for tools to help us learn. This phase is sometimes known as the "ideating" phase for those playing buzzword bingo at home.

Adding Kindling
After gathering material, we started constructing our fire pit by adding kindling. A better explanation of this part is the concept of MVP: Minimal Viable Project. Put another way, do the simplest thing that could work. Our kindling would be an ASP/MVC application built with .NET Core 5.0 and using the HotChocolate (GraphQL tools) NuGet packages.

After building a project, we built out our first query service that would return a simple set of objects fabricated in memory. We then used the tools built into HotChocolate to test the API. Once we had our proof of concept built, we pushed it to a private GitHub repo and set off on the next step in building our fire.

Adding Wood
Once the fire has been lit, you can start adding wood to that fire so that it has something solid to burn. We added wood of various varieties including more data entities (types), more queries, more relations, and a pseudo data persistence layer. We focused on the query aspect of the project at first, as this seemed to be the most common use case.

Stoking the Fire
It's not enough to just get the fire burning. You need to keep the fire going. This is where the excitement keeps propelling you to continue. To stoke the fire, we added new features that an API needs. We started adding simple Mutations. Mutations are the data update portions of GraphQL. Like in the original example, we started simply. We also continued gathering more material—doing more research. Our goal was to not let the fire go out.

S'mores Time
After working on the API for a bit, I soon learned that my senior developer had taken it upon himself to start building an MVP version of the mobile application. When we saw that mobile application talking to the API, it was time to have some S'mores.

You need to take time to celebrate your wins. Take a pause to reflect where you are. After a quick celebration, it was back to work.

Maintaining the Fire
Keeping the fire going can be the challenging part. Learning new technology is not without its pitfalls. You might get stuck, you might get frustrated, you might be flummoxed, or you might just need a break. You need to keep that fire going by any means necessary. What I find is that when the fire seems to be waning, I return to the Gathering Material phase. Sometimes I need to step away, take a pause, and gather my thoughts to explore the new ideas that I wasn't ready for when I started the original fire.

If you've made it this far, I thank you. I know the example felt a bit marshmallowy (insert eyeroll here LOL). I know you, as a reader, probably rolled your eyes once or twice when reading it. Heck, I rolled my eyes a few times while writing it. In any case, there's a process for learning and this process is yet another reason why what we do is so fun. Now, go build your own fire!

Rod Paddock

NEVRON OPEN VISION (NOV)
The future of .NET UI development is here

GPU Accelerated Rendering

NOV lets you build applications


for Windows, Mac and soon Blazor
using a single codebase.

The suite features industry-leading UI components
including Chart, Diagram, Gauge, Grid, Scheduler,
Rich Text Editor, Ribbon, and others.

Nevron provides advanced solutions for:

Learn more at: www.nevron.com today


ONLINE QUICK ID 2103021

Deploy a Real-World ExpressJS TypeScript Application Using Containers
Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

In my first article (https://www.codemag.com/Article/2011021/A-Simple-ExpressJS-and-TypeScript-Project) in this series in CODE Magazine, I made a controversial statement. I said: JavaScript is awesome. Seriously, anytime I say this, people think I'm joking. Sometimes I laugh with them. But the reality is that I laugh with them only to avoid an argument. Secretly, I do think JavaScript is indeed awesome. Sure, it has its flaws; after all, the language was invented in a week in a basement. But tell me one other language that's equally flexible, runs everywhere, encapsulates both back- and front-ends, and runs with the performance of compiled languages.

Whether you like it or not, probably the most used application on both your phone and your desktop is your browser. As much as other approaches (Blazor <cough cough!>) are trying to encroach on its territory, which language do you end up interacting with the most as of today? It's JavaScript.

JavaScript is incredibly unopinionated. Although that's great because it fuels innovation and doesn't tie you to one way of thinking, it also creates an interesting challenge. Anytime you wish to do something simple, there's no file\new_project approach here. You must build a project template from scratch, which involves a lot of decisions. And as you go along through the various phases of the project, you must keep making these decisions. Should I use package A or package B? Which package is better? How do I define better? Its long-term supportability? Security issues? Popularity of use? Or features?

These are some tough decisions. And even after you make these decisions, did you make the right decision? Is another larger development team in another company making a different decision that may affect the outcome of your decisions in the future?

It's for this reason that I started this series of articles. In my first article, I showed you how to build a simple project template from scratch. I combined ExpressJS and TypeScript to create both a front-end and a back-end for the JavaScript-based application. I ended the first article with a very simple working application, really just a project template, that had no functionality in it. But even then, I had a template that could run in Visual Studio Code, and it supported live reload, debugging, full TypeScript support, client-side and server-side support, and much more. I encourage you to check the article out.

Of course, the plain vanilla JavaScript application template isn't something you can ship. So in the second article (https://www.codemag.com/Article/2101021/A-Real-World-ExpressJS-and-TypeScript-Application), I took it one step further and crafted up a real-world application that talks to a database. I wrote a simple ToDo application; for client-side scripting I used Vue JS, and for the server side I wrote an API in ExpressJS.

That's where I'll pick up in this article. I'll address a very important real-world concern: deployment. The code I have right now works great on a developer's computer. Shipping code to run in the cloud or on a server somewhere brings a whole set of interesting challenges.

A very popular and solid way of deploying code these days is using containers. In this article, I'll show you how I go about containerizing the application and deploying both the application and the database as two separate containers.

Git Repo Structure
Before I get into the weeds of this article, ensure that you have the starting point for the code of this article ready to go. All of the code for this article can be found at https://github.com/maliksahil/expressjs-typescript. I'm going to leave the master branch as the basic starter code. The starting point for this article is in a branch called todoapp.

Get the Starter Code Running
I know I've covered this in depth in my previous article, but if you don't get this part working, you're going to be very lost in the rest of the article. So let's get this part squared away. Use the following steps to get the starter code running on your computer. You'll need to do this on a Windows, Mac, or Linux computer that's set up to use NodeJS.

First, clone the repo:

git clone https://github.com/maliksahil/expressjs-typescript

Next, check out the "todoapp" branch:

git checkout todoapp

For the purposes of this article, create a new branch called "deploy":

git checkout -b deploy

At this point, you may want to repoint the origin of the Git repo to somewhere you can check in the code, or just not bother checking it in if you wish to follow along in this article.

Next, install the dependencies:

npm install


Then, create a new .env file. To get started, you can simply clone the .env.sample file:

cp .env.sample .env

Finally, run npm start and ensure that you can see the application running on localhost:8080.

If you need a walkthrough of this starter code, I highly recommend that you read my previous articles in CODE Magazine.

Add Docker Support
Docker is an amazing solution, and I've talked about it extensively in my articles in CODE Magazine (https://www.codemag.com/Magazine/ByCategory/Docker), and so have a few other people. Where it really shines is when I can deploy code to Linux containers running in the cloud. These containers are super lightweight, so they run fast, and because they're stripped down to the absolute bare minimum, they're secure, and they're cheap to run.

This is where NodeJS shines. You could have written the whole application on a Mac or Windows computer, but you can be reasonably confident that it will work in a Docker container. Now, it's true that some node packages could take OS-level dependencies. This is especially common in electron-based apps. But for this Web-based application, accessed through a browser, this isn't something I'm worried about.

When I dockerize my application, I want to ensure that my usual development lifecycle isn't broken. In other words, I still wish to be able to hit F5 in VS Code and run the application as usual. I don't want to run Docker if I don't have an absolute need to. I know Docker is fast, but I don't want to slow my dev work down by that additional step of creating a Docker image, starting the container, etc. Although VS Code is amazing, and so is remotely developing in a Docker container, I don't want to pay that overhead unless I absolutely need to.

There are situations where I want to pay the overhead for Docker even in local development. Now, to be clear, the overhead is just a few seconds. Frankly, it's faster than launching MS Word. For that little overhead, I find running the application containerized in Docker on my local dev environment useful for the following situations.

I want to take a dependency on numerous packages that I don't want to install on the main host operating system. I realize that there are things such as NVM or Conda that allow me to manage different node versions or different Python environments. To be honest, those—especially NVM—have been problematic for me. Maybe I'm just holding it wrong. But when I can just isolate everything in a separate Docker environment, why bother dealing with all that nonsense? Just use Docker.

Sometimes I may wish to isolate development environments from a security perspective. Maybe I'm working in a certain customer's Azure tenancy, perhaps I'm connected to their source control using a separate GitHub account, or perhaps I'm using tools that interfere with my other work. For all those scenarios and more, Docker gives me a very nice, isolated development environment to work in.

And finally, the elephant in the room: have you heard the phrase, "it works on my machine"? Docker gives me a way to ship my whole computer! There's no longer going to be this confusion that something in the production environment is different than in the test environment. When I ship my code, I have full confidence that what works locally is identical to what will run in production. Now, production could be on-premises or it can be in the cloud. In other words, what I run locally in my Docker environment, I can have confidence will work in the cloud as well.

Identify Dependencies
Speaking of "it works on my machine," let's see what problems we can uncover. For node-based development, I really prefer not to install stuff in the global namespace, because it makes my environment impure. It makes it harder to test how things will run in Docker. At this point, if you can manage to, uninstall all global packages except npm. If you can't uninstall global packages, you can find problems directly in Docker, but that will slow you down.

The first thing I'd suggest is that you take that dist folder you'd built and from which you were running your application, copy it to a folder outside of your main project folder, and simply issue the following command:

node index.js

You should see an error like this:

Error: Cannot find module 'dotenv'

If you don't see this error, you probably have dotenv installed in your global namespace. And this is why I really dislike frameworks that insist on installing things in your global namespace. I have only one. Keep your junk to yourself, please.

Anyway, I need to solve this problem. In fact, dotenv isn't the only problem here. There are a number of packages I took a dependency on, and when I run the project from within the folder, it simply picks them up from node_modules. Although I could just ship the entire node_modules folder, it would really bloat my project.

To solve this problem, I'll simply leverage the parcel bundler to create a production build in addition to the dev-time build. If you remember from my previous article, the dev-time build was already building client-side code. The client-side code isn't going to read directly from node_modules because it runs in the browser, so that was essential to get started. You can use a similar approach for server-side code.

To bundle server-side code, add this script in your package.json's scripts node:

"parcelnode":
  "parcel build main src/index.ts
   -o dist/index.js
   --target node
   --bundle-node-modules",

This line of code, when executed, will bundle all server-side dependencies except ejs, and output that in a file called dist/index.js. The reason it won't include ejs is that ejs is being referenced as a string and not as an import statement. This isn't great, and I'm sure there are workarounds for this, but I'll keep things simple and simply install ejs as a node module in the Docker image. It's a single package, so I'll live with a little bloat for lots of effort saved. Let's chalk it up to "technical debt" for now.



Add a second script to create a production build in your package.json's scripts node:

"buildprod":
  "npm-run-all clean
   lint parcel parcelnode
   copy-assets"

Now build a production version of your application by issuing the command below.

npm run buildprod

You are greeted with another error:

Cannot resolve dependency 'pg-native'

Now, if you glance through your code, you aren't taking a dependency on pg-native. If you read through the stack trace, you'll see that one of the packages you took a dependency on is taking a dependency on pg-native. Okay, this is frustrating. This isn't a package that will easily install either, because it takes a dependency on native code. If I used it, I'd also have to install it on my Docker image. That's not a big deal, so perhaps this is the route I need to take.

The frustrating part is that this is a rabbit hole I'm falling into, where identifying dependencies feels like an unpredictable, never-ending hole of time suck. Well, this is the reality of node-based development. This is why, when I write node-based code, I always keep dockerization and deployment in the forefront of my mind, and try not to solve a huge project at once. I make sure that whatever package I take a dependency on, it's something I can package, or I don't take a dependency on it.

I'll simply alias this package, pg-native. I don't wish to take a dependency on it, I have no use for it, so I'll simply write some code to tell parcel to effectively ignore it. Here is how.

In your package.json under the "alias" node, add the following alias.

"pg-native": "./aliases/pg-native.js"

In your project, create a folder called "aliases" and in it, create a file called pg-native.js, with the following code:

export default null;

Now run the buildprod command again. This time around, the production version of the application should build. Verify that you can see the built version, as shown in Figure 1.

Figure 1: The built version of my application

One obvious missing piece here is the .env file. The .env file, if you remember, is what you had various configuration information in, such as what port to run on, where the database is, etc. I could copy the .env file in, but if you look deeper into the .env file, it has a dependency on the database running on localhost. What is localhost on Docker? It's the Docker image itself.

I think you can imagine that this may be a problem down the road. But let's not try to boil the ocean in one check-in. For now, just copy the .env file into the dist folder manually, copy the dist folder to an alternate location on your disk, and run the project by running node index.js. You should see the following console.log:

server started at http://localhost:8080

Now open the browser and visit http://localhost:8080. You'll see yet another error:

Error: Cannot find module 'ejs'

To get around this error, run the below command in your dist folder, and restart the project.

npm i ejs

At this point, verify that you can see your todos app, as shown in Figure 2.

Figure 2: My todos app finally running, kind of

The reason you don't see any todos is because your database isn't running. You can go ahead and run the database as a Docker image by running ./db/setupdb.sh in your project folder. That shell command, if you remember from the previous article, runs the postgres database as a Docker image, exposed on Port 5432. Once you run the database, go ahead and refresh your browser and verify that you can see the todos appearing, as shown in Figure 3.

Figure 3: Finally running locally



Build the Docker Image
At this point, you've made some real progress. You've successfully identified and removed all dependencies that now allow the project to run completely isolated outside your dev environment. You've also identified two dependencies that you need to take care of outside of your bundling process. The first is the .env file, and the second is the ejs package that you need to manually install.

With this much in place, let's start building the Docker image. This is a matter of authoring a Dockerfile in the root of your project.

Create a new file called Dockerfile in the root of your project and add the following two lines in it.

FROM node:12
WORKDIR /app

These lines of code inform the Docker image that it will be built using a base image that's already set up to use node 12, and that you will put your stuff in a folder called /app.

Below this, add two more lines of code, as below.

COPY ./dist ./
COPY .env ./

As you can see here, you're instructing the Docker build daemon that it needs to copy stuff from the "dist" folder. The dist folder should at this time already have a production version of the application built. Additionally, you're copying the .env file as well. This was an external dependency, not part of the final built image.

Speaking of external dependencies that aren't part of the build package, also go ahead and install ejs.

RUN npm i ejs

This RUN command will instruct the Docker daemon to run the aforementioned command in the Docker image, and therefore have ejs installed locally. There are better ways to do this, but for now, let's go with this.

My code is pretty much ready to go, but I need to expose the right port. This is necessary because Docker, by default, is locked down, as it should be. No ports are exposed unless you ask them to be. Because my application is running on Port 8080, go ahead and add the following command to expose this port to the outside world.

EXPOSE 8080

Finally, run the application in the Docker image.

CMD ["node", "index.js"]

At this point, your Dockerfile should look like Listing 1.

Listing 1: The Dockerfile

FROM node:12
WORKDIR /app

# Copy distributable code
COPY ./dist ./
COPY .env ./

# Install ejs
RUN npm i ejs

# Expose the right port
EXPOSE 8080

# Run our app
CMD ["node", "index.js"]

To smooth out your development, also create a file called "scripts/runindocker.sh" with code as shown in Listing 2.

Listing 2: Shell script to build and run the Docker container

docker build --tag nodeapp:latest .
docker container rm $(docker container ls -af name=nodeapp -q)
docker run -it --name nodeapp -p 8080:8080 nodeapp

Now, make this file executable:

chmod +x ./scripts/runindocker.sh

Go ahead and run it. Running this command should create a Docker image, run the container, and then start your application at localhost:8080.

The application should run exactly like Figure 2. The todos aren't loading because the database isn't running, right? Sure, go ahead and run the database, as indicated above in the article, and refresh the page. What do you see? You'll see that your code still doesn't load the todos.

Figure 4: I can't seem to connect to my database



What's going on? If you examine the network trace in your browser tools, you'll see something similar to Figure 4.

I can't seem to connect to my database. Aha! This makes sense. My Docker container is practically its own computer. The .env file had pointed the database to localhost. There's no database running on my NodeJS container. Ideally, what I'd like to do is be able to pass in the location of the database to the image as a parameter.

At this point, I've made some progress, and I'd like to commit this code. So go ahead and commit and push.

Parameterize the Docker Container
Much like central banks, I seem to have solved one problem and created two new ones. Although I can now run my application as a container, it has no clue where the database is. I need to somehow tell the Docker container, at runtime, where the database is.

This is where the benefits of dotenv come in. dotenv is great because you can supply it parameters from a .env file, and you can have as many of these .env files as you please, to match your environments. I ended up copying the .env into the Docker image itself. I have two choices here.

Either I can copy a .env for my needs when I start the container, or I can simply set an environment variable. Any environment variable will override what the application finds in the .env file. All I need to do now is to override the "PGHOST" variable from the .env file and I should be ready to go. The good news is that the environment variables in the Docker container belong to the Docker container. They don't interfere with the host OS, so this is a pretty clean way of setting such dependencies.
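To see why this override works, here's a minimal sketch of the server-side pattern (the PGHOST name comes from the .env file used in this series; the fallback value and logging are purely illustrative). dotenv's config() only fills in keys that aren't already present in process.env, so a value injected with docker run -e takes precedence over whatever the .env file says:

import * as dotenv from 'dotenv';

// dotenv never overwrites a variable that's already set in the
// environment, so "docker run -e PGHOST=..." wins over the .env value.
dotenv.config();

const pgHost = process.env.PGHOST ?? 'localhost'; // fallback is illustrative
console.log(`Connecting to Postgres at ${pgHost}`);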
Now if only there were an easy way to inform Docker that at container start, I'd like a specific environment variable to have a certain value. It turns out that Docker has thought of this very issue. Just use the -e flag in the runindocker.sh file, as shown below.

docker run -it
  --name nodeapp
  -e PGHOST=$(hostname)
  -p 8080:8080 nodeapp

Go ahead and run the application container again. Verify that the application, now running in Docker, shows you todos, like in Figure 3.

At this point, I've made good progress. Go ahead and commit your code. You can find the commit history associated with this article at https://github.com/maliksahil/expressjs-typescript/commits/deploy. You can repro each step by pulling a specific commit down.

Something Still Doesn't Feel Right
In this article, you made some good progress. You took your NodeJS application that was effectively running nicely on a developer's computer, and you were able to dockerize it. The advantage here is that now it can run reliably on-premises, in the cloud, in any cloud. There will be fewer fights between the IT Pros and the developers. World peace. With fine print, of course.

You had to work through a kludgy way of allowing the Docker container running the Web server to connect to the Docker container that was running the database server. This means that there will be networking details to worry about. As the application grows bigger and more complex, and uses more containers, this problem becomes exponentially more difficult.

Then there's the risk of someone snooping on the network communication. Why aren't the Docker Web server and Docker database in an isolated network? You'll have to just trust that the IT ogre did it properly. And you know he'll screw up.

Then there's the whole issue of reliability. What happens if my NodeJS process crashes? I'll need to add monitoring logic. I'll need to have something react to that health check. I'll have to bring the container back up. Then there will be container name clashes, so I'll need to make sure to get rid of the older names. There will be outages. And then there will be new versions adding to more confusion. There are going to be more IT ogre and developer fairy fights.

World peace is fragile, isn't it?

What I ideally want is to somehow specify that a bunch of containers work together in a certain way. These are the networking settings that the rest of the world doesn't need to worry about. This is how they tell the world they're healthy. This is how the application should react when they're unhealthy. This is how the application scales, etc.

I wish I could just describe all this in a file, ask for all of it to be "applied," and forget about it. That's exactly what products such as Docker Swarm and Kubernetes allow you to do.

Summary
If you've been following along in this series of articles, it's incredibly fascinating how involved and rewarding it can be to write and deploy a NodeJS-based application. The application you just wrote is deployed as a Docker image that's a total of 925MB in size. You can verify that by running "docker image ls". Try to wrap your head around that: 925 megabytes that package an entire OS, an entire runtime, and your application, and it runs for cents a month, scales very easily, and orchestrates well with platforms like Kubernetes. This is some really powerful stuff.

No wonder everyone is gaga over it. But the challenge remains: The learning curve and bewildering array of options make it very difficult to go from Point A to Point Z.

It's exactly that problem I wish to solve in this series of articles: going from Point A to Point Z, arguing and debating every step, giving you the reasons why I make a certain decision over another, and sharing the history of why things are the way they are.

In my next article, I'll continue this further by adding more deployment concerns, where I'll extend my simple two-container application to Kubernetes. I'll deal with some interesting challenges, such as how to keep secrets, such as the database password, safe.

Until then, happy coding.

Sahil Malik



OLD
TECH HOLDING
YOU BACK?

Are you being held back by a legacy application that needs to be modernized? We can help.
We specialize in converting legacy applications to modern technologies. Whether your application
is currently written in Visual Basic, FoxPro, Access, ASP Classic, .NET 1.0, PHP, Delphi…
or something else, we can help.

codemag.com/legacy
832-717-4445 ext. 9 • info@codemag.com
ONLINE QUICK ID 2103031

Using Ajax and REST APIs in .NET 5


Paul D. Sheriff
http://www.pdsa.com

Paul has been in the IT industry over 34 years. In that time, he has successfully assisted hundreds of companies in architecting software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has 27 courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap.

Asynchronous JavaScript and XML (Ajax) is the cornerstone of communication between client-side and server-side code. Regardless of whether you use JavaScript, jQuery, Angular, React, or any other client-side language, they all use Ajax under the hood to send and receive data from a Web server. Using Ajax, you can read data from, or send data to, a Web server all without reloading the current Web page. In other words, you can manipulate the DOM and the data for the Web page without having to perform a post-back to the Web server that hosts the Web page. Ajax gives you a huge speed benefit because there is less data going back and forth across the internet. Once you learn how to interact with Ajax, you'll find the concepts apply to whatever front-end language you use.

Because more mobile applications are being demanded by consumers, you're probably going to have to provide a way for consumers to get at data within your organization. A consumer of your data may be a programmer of a mobile application, a desktop application, or even an HTML page being served from a Web server. You don't want to expose your entire database; instead, create an Application Programming Interface (API) in which you decide how and what to expose to these consumers. A Web API, also called a REST API, is a standard mechanism these days to expose your data to consumers outside your organization.

This is the first in a series of articles where you'll learn to use Ajax and REST APIs to create efficient front-end applications. In this article, you create a .NET 5 Web server to service Web API calls coming from any Ajax front-end. You also learn to create an MVC Web application and a Node server to serve up Web pages from which you make Ajax calls to the .NET 5 Web server. In future articles, I'll show you how to use the XMLHttpRequest object, the Fetch API, and jQuery to communicate efficiently with a .NET 5 Web API project.

Ajax Defined
Although Ajax stands for Asynchronous JavaScript and XML, the data transmitted can be JSON, XML, HTML, JavaScript, plain text, etc. Regardless of the type of data, Ajax can send and receive it. Ajax uses a built-in object of all modern browsers called XMLHttpRequest. This object is used to exchange data back and forth between your Web page and a Web server, as shown in Figure 1.

Figure 1: Ajax sends a request for data to a Web server and receives data back, all without having to post the entire Web page back and forth.

Looking at Figure 1, you can see that an HTML page sends a request for data (1) to a controller in the Web server (2). The controller gets that data from the data storage medium (3) and from the data model that serializes that data as JSON (4) and sends the response back to the HTML page (5). If you look at Figure 2, you can see both the JavaScript and C# code on the client and the server that corresponds to each of the numbers on Figure 1. Don't worry too much about the code in Figure 2; you're going to learn how to build it in this article.

Methods of Communication
Although the XMLHttpRequest object is the basis for all Ajax calls, there are actually a few different ways you can use this object. Table 1 provides a list of the common methods in use today.

If you're unfamiliar with the terms callbacks and promises, the two sections that follow provide you with a definition and some links to learn more about each.

Callbacks
A callback is the object reference (the name) of a function that's passed to another function. That function can then determine if and when to invoke (call back) that function. It may call that function after some variable changes state, or maybe after some task is performed. For a nice definition and an example of a callback, check out this article: https://www.freecodecamp.org/news/javascript-callback-functions-what-are-callbacks-in-js-and-how-to-use-them.
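To make that definition concrete, here's a minimal sketch of the callback pattern applied to an Ajax request with the XMLHttpRequest object; the function names and URL are illustrative, not code from this series:

function loadProducts(done: (response: string) => void): void {
  const xhr = new XMLHttpRequest();
  // The XMLHttpRequest object decides when to invoke (call back)
  // the function reference it was handed
  xhr.onload = () => done(xhr.responseText);
  xhr.open('GET', '/api/product');
  xhr.send();
}

loadProducts((response) => console.log(response));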
Promises
A promise is the result of the completion of an asynchronous operation. The operation may succeed, fail, or be cancelled. Whatever the result, the promise allows access to the result and any data returned by the operation. For more information, see this post: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise. Another great article compares callbacks and promises: https://itnext.io/javascript-promises-vs-rxjs-observables-de5309583ca2.
Tools You Need
I suggest you follow along step-by-step with this article to build a C# .NET Web API project to retrieve data from a SQL Server table. You also build an MVC application and a Node server to host Web pages for communicating to the Web API project. For this article, I'm going to use the technologies listed below.

• Visual Studio Code v1.52.1 or higher
• .NET v5.x or higher
• Entity Framework
• SQL Server 2019 Developer Edition or higher
• SQL Server AdventureWorksLT Sample Database
• JavaScript ECMAScript 2015 or higher
• jQuery v3.5 or higher



JavaScript XMLHttpRequest using Callbacks: This is the most primitive method of Ajax communication and is the basis for almost all other Ajax implementations today.

JavaScript XMLHttpRequest with Promises: You can build your own wrapper around the callbacks and return a Promise. You can find many samples of doing this with a quick Google search.

Fetch API: Most modern browsers support this Promise-based API for Ajax communication.

jQuery using Callbacks: Prior to jQuery 1.5, you used callbacks to respond to Ajax events returned by the underlying XMLHttpRequest object.

jQuery using Promises: From jQuery 1.5 onward, the jQuery team put a Promise-based wrapper around the XMLHttpRequest object so you can use a more streamlined approach to Ajax queries.

jQuery Ajax Shorthand Functions: There are several shorthand functions, such as $.get(), $.post(), that allow you to shorten the syntax for making GET and POST requests of the Web API server.

Table 1: Several methods you may use to communicate from a Web page to a Web API server

If you wish to follow along, download these tools and install them on your computer. At the time of the writing of this article, these tools can be retrieved from the following locations.

• VS Code: code.visualstudio.com
• .NET 5: dotnet.microsoft.com/download
• SQL Server: www.microsoft.com/en-us/sql-server/sql-server-downloads
• AdventureWorksLT Sample Database: https://github.com/PaulDSheriff/AdventureWorksLT

Create a Web (REST) API Server
To retrieve data from an Ajax call, you must have a Web server running that has a series of endpoints (methods) to call. This Web server can be created with a variety of tools, including Node and .NET 5. I'm going to use .NET 5 in this series of articles, but Node works just as well. If you want to learn to use Node to build a REST API, check out my Pluralsight video entitled JavaScript REST APIs: Getting Started at https://bit.ly/2XT1lgD.

Figure 2: The code on the left shows the JavaScript on the Web page and the code on the bottom right shows the C# code on the Web API server.

In this section of this article, you'll create the .NET 5 Web API project to retrieve the data from the SQL Server AdventureWorksLT database and send it back to a Web page. Open an instance of VS Code and open a terminal window. Navigate to your normal development folder. Create a .NET Web API app using the following dotnet command.

dotnet new webapi -n WebAPI

Once the dotnet command finishes, select File > Open Folder… from the menu and select the WebAPI folder you created. Your VS Code will look like that shown in Figure 3.

Figure 3: Your VS Code environment will look like this after creating your Web API project.

Add Required Assets
At the bottom right-hand corner of VS Code, a dialog appears (Figure 4) asking you to add some required assets. Click the Yes button to allow VS Code to load the various packages to support C# and Web API programming. If, for some reason, you don't see this dialog, exit VS Code and restart it.

Figure 4: VS Code will inform you that it needs to load some packages to support Web API programming.

Try It Out
Select Run > Start Debugging from the menu to build the .NET Web API project and launch a browser. If a dialog box appears asking if you should trust the IIS Express certificate, answer Yes. In the Security Warning dialog that appears next, also answer Yes. Once the browser appears, it comes up with a 404 error page. Type the following address into the browser address bar: https://localhost:5001/weatherforecast.



If you get an error related to privacy and/or HTTPS, open the \Properties\launchSettings.json file and modify the applicationUrl property to use http://localhost:5000. After hitting Enter, you should see a string that looks like Figure 5. This means that your Web API server is working, and you're ready to create your own controller to retrieve product data from a SQL Server database.

Figure 5: The weather forecast data is a quick way to check if your Web API server is working.
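For reference, the fragment of launchSettings.json being edited looks roughly like the following sketch. The profile name depends on how the project was generated, so yours may differ:

{
  "profiles": {
    "WebAPI": {
      "commandName": "Project",
      "applicationUrl": "http://localhost:5000"
    }
  }
}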
Build Database Access Classes
This article uses the AdventureWorksLT sample database that comes with SQL Server. You can download a .bak file for this SQL database at https://github.com/PaulDSheriff/AdventureWorksLT. In this GitHub repository, there's also a .sql file you can use to build the SQL database if the backup file doesn't work with your version of SQL Server.

Add Entity Framework
I'm going to use the Entity Framework to interact with the AdventureWorksLT database. To use the Entity Framework in the .NET application, you need to add a package to your project. From the VS Code menu, select Terminal > New Terminal. Ensure that the terminal prompt is in your WebAPI project folder and type in the following command.

dotnet add package Microsoft.EntityFrameworkCore.SqlServer

Build an Entity Class
Create a Product class in C# to match the fields in the SalesLT.Product table. Right mouse-click on the WebAPI folder and add a new folder named EntityClasses. Right mouse-click on the \EntityClasses folder and add a new file named Product.cs. Into this new file, add the code shown in Listing 1.

Listing 1: Build a Product class with properties that match the fields in the Product table.

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace WebAPI
{
  [Table("Product", Schema ="SalesLT")]
  public partial class Product
  {
    [Required]
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int ProductID { get; set; }
    [Required(ErrorMessage =
      "The Product Name is required")]
    public string Name { get; set; }
    [Required]
    public string ProductNumber { get; set; }
    public string Color { get; set; }
    [Required]
    public decimal StandardCost { get; set; }
    [Required]
    public decimal ListPrice { get; set; }
    public string Size { get; set; }
    public decimal? Weight { get; set; }
    public int? ProductCategoryID { get; set; }
    public int? ProductModelID { get; set; }
    [Required]
    public DateTime SellStartDate { get; set; }
    public DateTime? SellEndDate { get; set; }
    public DateTime? DiscontinuedDate { get; set; }
    [Required]
    public Guid rowguid { get; set; }
    [Required]
    public DateTime ModifiedDate { get; set; }
  }
}

Add an AdventureWorksLTDbContext.cs File
To retrieve and modify data in the SalesLT.Product table through the Product class, you need an instance of a DbContext class. Create a folder named \Models in your WebAPI project. Right mouse-click on this folder and create a file named AdventureWorksLTDbContext.cs. Add the code shown in Listing 2 to this file.

Modify appSettings.json
You need a connection string for the DbContext object to work. It's a best practice to place your connection strings in the appSettings.json file. Open the appSettings.json file and add a new property named ConnectionStrings. Set the value of this new JSON object to the code shown below. Note that I had to break the connection string across multiple lines to format it for the printed page. When you type in your connection string, ensure that it's all on a single line.

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=Localhost;
      Database=AdventureWorksLT;
      Trusted_Connection=True;
      MultipleActiveResultSets=true"
  },
  ...
}

Configure Web API
When using a .NET Web API, a lot of "magic" happens with Dependency Injection and setting configuration options. Add some packages and configuration code to help make this magic happen.

Add System.Text.Json
You're going to be sending and receiving JSON from your Ajax calls. The recommended package to use these days is the one contained in the System.Text.Json package. Add this package to your WebAPI project by executing the following command in the terminal window of VS Code.

dotnet add package System.Text.Json



Listing 2: A DbContext class is needed to retrieve and modify data in a database through the Entity Framework.

using Microsoft.EntityFrameworkCore;

namespace WebAPI {
  public partial class AdventureWorksLTDbContext
    : DbContext {
    public AdventureWorksLTDbContext(
      DbContextOptions<AdventureWorksLTDbContext>
        options) : base(options) {
    }

    public virtual DbSet<Product> Products
      { get; set; }

    protected override void OnModelCreating(
      ModelBuilder modelBuilder) {
      base.OnModelCreating(modelBuilder);
    }
  }
}

Listing 3: Configure services to work with CORS, serialize JSON and interact with your database.

public void ConfigureServices(
  IServiceCollection services) {
  // Tell this project to allow CORS
  services.AddCors();

  // Convert JSON from Camel Case to Pascal Case
  services.AddControllers().AddJsonOptions(
    options => { options.JsonSerializerOptions
      .PropertyNamingPolicy =
        JsonNamingPolicy.CamelCase;
    });

  // Setup the AdventureWorks DB Context
  // Read in connection string from
  // appSettings.json file
  services.AddDbContext<AdventureWorksLTDbContext>
    (options => options.UseSqlServer(
      Configuration.
        GetConnectionString("DefaultConnection")));

  services.AddControllers();
  services.AddSwaggerGen(c => {
    c.SwaggerDoc("v1", new OpenApiInfo {
      Title = "WebAPI",
      Version = "v1" });
  });
}

Add CORS
The JavaScript code you wrote in your Node or .NET Web server is running on localhost:3000; however, the Web API code is running on localhost:5000. These are two completely different domains. For your JavaScript or other client-side application to call this Web API, you must tell the Web API that you're allowing Cross-Origin Resource Sharing (CORS). To use CORS, you need to add this package into your project. Go back to the terminal window and type in the following command.

dotnet add package Microsoft.AspNetCore.Cors

Modify Startup.cs
Now that you've added two new packages, let's use them. Open the Startup.cs file and add two using statements at the top of the file.

using Microsoft.EntityFrameworkCore;
using System.Text.Json;

Locate the ConfigureServices() method and add the code shown in bold in Listing 3. This code informs this project that you want to allow CORS to be used. It also sets the JSON converter to automatically convert property names that are in camel case to Pascal case and vice versa. The next lines inject an instance of the AdventureWorksLTDbContext class you created earlier and pass in an options object that is loaded with the connection string you stored in the appSettings.json file.

Locate the Configure() method and add the lines of code shown in bold below. This code configures CORS with a list of URLs this Web API project allows to make calls. The other two chained methods, AllowAnyMethod() and AllowAnyHeader(), allow all verbs (GET, POST, PUT, etc.) and any HTTP headers.

public void Configure(IApplicationBuilder app,
  IWebHostEnvironment env) {
  ...

  app.UseCors(options =>
    options.WithOrigins("http://localhost:3000")
      .AllowAnyMethod().AllowAnyHeader()
  );

  app.UseEndpoints(endpoints => {
    endpoints.MapControllers();
  });
}

Create a Base Controller
You're most likely going to be building several controllers, so it's best to create a base class that all your controllers can inherit from. Microsoft supplies the ControllerBase class that all controllers should inherit from. Create your own class like the one shown in Listing 4 that inherits from ControllerBase. Then make all your classes inherit from this BaseApiController class.

Adding your own base controller class allows you to add code that all controllers can use. For example, you might want a single method to handle any exceptions. In the HandleException() method, you could publish any exceptions, then return a specified status code such as a 500 that can be communicated back to the calling application. Right mouse-click on the Controllers folder and add a new file named BaseApiController.cs. Add the code shown in Listing 4 to this new file.

Listing 4: Always create a base controller class for all your controllers to inherit from.

using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Http;

namespace WebAPI.Controllers {
  public class BaseApiController :
    ControllerBase {
    protected IActionResult HandleException(
      Exception ex, string msg) {
      IActionResult ret;

      // TODO: Publish exceptions here
      // Create new exception with generic message
      ret = StatusCode(StatusCodes
        .Status500InternalServerError,
        new Exception(msg));

      return ret;
    }
  }
}

Delete Weather Classes
You're not going to need the WeatherForecast classes anymore, so feel free to delete the \Controllers\WeatherForecastController.cs and the \WeatherForecast.cs files from the project.



Create Product Controller
Now that you have Product and DbContext classes to allow data access to the SQL Server database, it's time to build a controller class to use those classes to get the data and send it back to the calling application. Right mouse-click on the Controllers folder and add a new file named ProductController.cs. Dependency Injection (DI) is used to insert the AdventureWorksLTDbContext object into the constructor of this controller (Listing 5). Remember that an instance of this class was created in the Startup.ConfigureServices() method.

When a class is detected in the constructor of a controller class, .NET looks up the class name in its DI list to see if there is an object that's been registered. If so, it retrieves the instance and injects it into the constructor. This object is assigned to the field _DbContext for use within any method in this controller class.

Get All Products
Add a Get() method to the new controller (Listing 6) and use EF to detect whether there are any products in the Products collection on the DbContext object. If there aren't, an ObjectResult object is created by calling StatusCode(StatusCodes.Status404NotFound, "message"). This method sets the HTTP status code of 404 and returns a message for the calling application to read. The HTTP status code (404) is placed into the status property of the XMLHttpRequest object and the message is placed into the responseText property.

If there are products in the Products collection, a call to StatusCode(StatusCodes.Status200OK, list) is made to create an ObjectResult object with the status code of 200 and the list of products to return.

If an error occurs when attempting to retrieve products from the database, a call to the HandleException() method in the base class is made. In this method, a status code of 500 is created and the response is the exception information.

Try It Out
Run the Web API project by selecting Run > Start Debugging from the menu. Once the browser comes up, type in http://localhost:5000/api/product and you should see data appear like that shown in Figure 6.

Figure 6: Running the Product controller from the browser should produce a set of JSON.
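Consuming this endpoint from JavaScript is the subject of upcoming articles in this series, but as a quick sketch, a Fetch API call against the same URL shows the camel-cased JSON produced by the naming policy configured in Startup.cs (reading the name property is illustrative):

fetch('http://localhost:5000/api/product')
  .then((resp) => {
    if (!resp.ok) {
      // The 404/500 status codes set by the controller surface here
      throw new Error('HTTP status ' + resp.status);
    }
    return resp.json();
  })
  .then((products) => console.log(products[0].name))
  .catch((err) => console.error(err));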

Get a Single Product
Besides retrieving all products, you might also need to retrieve just a single product. This requires you to send a unique identifier to a Web API method. For the Product table, this is the value in the ProductID field.

Go back to the Web API project and open the ProductController.cs file. Add a new method that looks like Listing 7. This method is very similar to the Get() method you created earlier; however, just a single product is returned if found.

Try It Out
Save the changes to your ProductController and restart the Web API project so it can pick up the changes you made. When the browser comes up, type in the following: http://localhost:5000/api/product/706. This passes the value 706 to the Get(int id) method. You should see a screen similar to Figure 7.

Figure 7: Retrieve a single product by placing the product ID after a forward-slash on the URL.

Insert a Product
If you wish to send a new product to your Web server and have it inserted into the Product table, you need a POST Web API method to pass a JSON product object to.


method in your ProductController class called Post(), as shown in Listing 8.

The .NET Web API automatically takes a JSON object that has the same properties as the C# Product class you created earlier and maps it to the entity argument of this method. An example of a JSON object that could be passed might look like the following:

{
  "productID": 0,
  "name": "A New Product",
  "productNumber": "NEW-999",
  "color": "Red",
  "standardCost": 20,
  "listPrice": 40,
  "sellStartDate": "2021-01-15"
}

As you can see from the JSON object above, not all fields are present compared to the C# Product class. That's why, in the Post() method, you need to fill in a few other fields with some default values. I'm doing this just to keep things simple for passing in a JSON object. In a real application, you'd most likely be passing in all fields for your entity class.

Figure 7: Retrieve a single product by placing the product ID after a forward-slash on the URL.

Update Product Data
If you wish to update a product that already exists in the AdventureWorksLT Product table, you need a PUT method in your Web API project, as shown in Listing 9. Pass to this Put() method the unique ID of the product you wish to update and the product data to update. For the sample in this article, you're not asking the user for all product data. There are more fields in the Product table than you have on the Web page. Thus, you need to Find() the product in the Product table, retrieve all of the data, and then only update those fields that you pass in from the front-end. Most likely, you won't be doing this in your applications; I'm just trying to keep the sample as small as possible.
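Looking ahead: the next articles in this series wire the Web page up to these POST and PUT methods. As a preview, a minimal XMLHttpRequest sketch for sending the JSON object shown above to the Post() method could look like the following (the function name is hypothetical):

function insertProductSketch(product) {
  let req = new XMLHttpRequest();
  req.onreadystatechange = function () {
    if (this.readyState === XMLHttpRequest.DONE) {
      // Post() returns 201 Created on success
      console.log(this.status, this.responseText);
    }
  };
  req.open("POST", "http://localhost:5000/api/product");
  // Tell .NET the body is JSON so it binds
  // to the Product parameter
  req.setRequestHeader("Content-Type",
    "application/json");
  req.send(JSON.stringify(product));
}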

Listing 5: The ProductController class inherits from the BaseApiController and has the DbContext injected into it.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

namespace WebAPI.Controllers {
  [Route("api/[controller]")]
  [ApiController]
  public class ProductController :
    BaseApiController {

    public ProductController(
      AdventureWorksLTDbContext context)
      : base() {
      _DbContext = context;
    }

    private AdventureWorksLTDbContext _DbContext;
  }
}

Listing 6: The Get() method retrieves all products from the SalesLT.Product table.

private const string ENTITY_NAME = "product";

// GET api/values
[HttpGet]
public IActionResult Get() {
  IActionResult ret = null;
  List<Product> list = new List<Product>();

  try {
    if (_DbContext.Products.Count() > 0) {
      list = _DbContext.Products.
        OrderBy(p => p.Name).ToList();
      ret = StatusCode(StatusCodes.Status200OK,
        list);
    } else {
      ret = StatusCode(
        StatusCodes.Status404NotFound,
        "No " + ENTITY_NAME +
        "s exist in the system.");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to get all " +
      ENTITY_NAME + "s.");
  }

  return ret;
}

Listing 7: The Get(int id) method is used to retrieve a single record from the database.
[HttpGet("{id}")] StatusCodes.Status404NotFound,
public IActionResult Get(int id) { "Can't find " + ENTITY_NAME + ": " +
IActionResult ret = null; id.ToString() + ".");
Product entity = null; }
} catch (Exception ex) {
try { ret = HandleException(ex,
entity = _DbContext.Products.Find(id); "Exception trying to retrieve " +
ENTITY_NAME + " ID: " +
if (entity != null) { id.ToString() + ".");
ret = StatusCode(StatusCodes.Status200OK, }
entity);
} else { return ret;
ret = StatusCode( }



Listing 8: Pass in a JSON Product object and .NET Core automatically converts the object to a C# object.

[HttpPost()]
public IActionResult Post(Product entity) {
  IActionResult ret = null;

  try {
    if (entity != null) {
      // Fill in fields not sent by client
      entity.ProductCategoryID = 18;
      entity.ProductModelID = 6;
      entity.rowguid = Guid.NewGuid();
      entity.ModifiedDate = DateTime.Now;

      _DbContext.Products.Add(entity);
      _DbContext.SaveChanges();

      ret = StatusCode(
        StatusCodes.Status201Created, entity);
    } else {
      ret = StatusCode(
        StatusCodes.Status400BadRequest,
        "Invalid " + ENTITY_NAME +
        " object passed to POST method.");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to insert a new " +
      ENTITY_NAME + ".");
  }

  return ret;
}
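If you want to exercise Listing 8 before any Web page exists, you can post the sample JSON object directly with curl (assuming curl in a bash-style shell; Windows cmd needs different quote escaping):

curl -i -X POST http://localhost:5000/api/product -H "Content-Type: application/json" -d '{"productID":0,"name":"A New Product","productNumber":"NEW-999","color":"Red","standardCost":20,"listPrice":40,"sellStartDate":"2021-01-15"}'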

Listing 9: Add a Put() method to allow updating to a table in your database.

[HttpPut("{id}")]
public IActionResult Put(int id, Product entity) {
  IActionResult ret = null;

  try {
    if (entity != null) {
      // Since we don't send all the data,
      // read in existing entity,
      // and overwrite changed properties
      Product changed =
        _DbContext.Products.Find(id);

      if (changed != null) {
        changed.Name = entity.Name;
        changed.ProductNumber =
          entity.ProductNumber;
        changed.Color = entity.Color;
        changed.StandardCost =
          entity.StandardCost;
        changed.ListPrice = entity.ListPrice;
        changed.Size = entity.Size;
        changed.Weight = entity.Weight;
        changed.SellStartDate =
          entity.SellStartDate;
        changed.SellEndDate =
          entity.SellEndDate;
        changed.ModifiedDate = DateTime.Now;

        _DbContext.Update(changed);
        _DbContext.SaveChanges();
        ret = StatusCode(
          StatusCodes.Status200OK, changed);
      } else {
        ret = StatusCode(
          StatusCodes.Status404NotFound,
          "Can't find ProductID=" +
          id.ToString());
      }
    } else {
      ret = StatusCode(
        StatusCodes.Status400BadRequest,
        "Invalid " + ENTITY_NAME +
        " object passed to PUT method.");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to update " +
      ENTITY_NAME + " ID: " +
      entity.ProductID.ToString() + ".");
  }

  return ret;
}

Delete Product Data
Now that you have inserted and updated product data, let's learn to delete a product from the table. Create a DELETE Web API method, as shown in Listing 10. Pass in the unique ID for the product you wish to delete. Locate the product using the Find() method and, if the record is found, call the Remove() method on the Products collection in the DbContext. Call the SaveChanges() method on the DbContext object to submit the DELETE SQL statement to the SQL Server database.

Listing 10: Pass in the unique product ID of the product you wish to delete.

[HttpDelete("{id}")]
public IActionResult Delete(int id) {
  IActionResult ret = null;
  Product entity = null;

  try {
    entity = _DbContext.Products.Find(id);
    if (entity != null) {
      _DbContext.Products.Remove(entity);
      _DbContext.SaveChanges();
      ret = StatusCode(
        StatusCodes.Status200OK, true);
    } else {
      ret = StatusCode(
        StatusCodes.Status404NotFound,
        "Can't find " + ENTITY_NAME +
        " ID: " + id.ToString() +
        " to delete.");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to delete " +
      ENTITY_NAME + " ID: " +
      id.ToString() + ".");
  }

  return ret;
}

Create a .NET MVC Application
To make calls using Ajax from a Web page to the Web API server, you need to run the HTML pages from their own Web server. If you wish to use a .NET MVC application, follow along with the steps in this section of the article. If you wish to use a Node server, skip to the next section of this article entitled "Create a Node Web Server Project".



Create this project by opening VS Code and selecting Terminal > New Terminal… from the menu. Click into the terminal window and navigate to a folder where you normally create your development projects. On my computer, I'm going to use the \Samples folder on my D drive. Run the following commands one at a time in the terminal (substituting your project folder for the first command).

CD D:\Samples
dotnet new mvc -n AjaxSample

Load the AjaxSample folder in VS Code and wait a few seconds until you see a prompt at the bottom right of your screen asking to add some assets to the project, as shown in Figure 8. Answer Yes to the prompt to add the required assets.

Figure 8: Answer yes when prompted to add required assets.

Open the \Views\Home\Index page file and make the file look like the code shown in Listing 11. You're going to add more HTML to this file later, but this small set of HTML provides a good starting point.

Listing 11: Add code in the cshtml file to run your web project.

@{
  ViewData["Title"] = "Ajax Samples";
}

<h1>Ajax Samples</h1>
<p>Bring up console window</p>

<button type="button" onclick="get();">
  Get Products
</button>

@section Scripts {
  <script>
    'use strict';

    const URL = "/resources/products.json";
    //const URL = "http://localhost:5000/api/product";

    function get() {
    }
  </script>
}

Change the Port Number
Open the launchSettings.json file located under the \Properties folder and modify the applicationUrl property under the AjaxSample property, as shown in bold below. Because you're going to be using a Web API server that's running on a different port and thus a different domain, Cross-Origin Resource Sharing (CORS) needs to have a specific URL that you can whitelist. Change the URL to use port number 3000. Later, when you create your Web API server, you're going to use this domain address with CORS to allow calls to travel from one domain to another.

"AjaxSample": {
  "commandName": "Project",
  "dotnetRunMessages": "true",
  "launchBrowser": true,
  "applicationUrl": "http://localhost:3000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}

Try It Out
Run this MVC application and you should see a screen that looks like Figure 9.

Create a Node Web Server Project
To run the Ajax samples, you need to run the HTML pages from a Web server. There are several simple node Web servers that you can install from npm; I'm going to use one called lite-server. Create this project by opening VS Code and selecting Terminal > New Terminal… from the menu. Click into the terminal window and navigate to a folder where you normally create your development projects. On my computer, I'm going to use the \Samples folder on my D drive. Run the following commands one at a time in the terminal (substituting your project folder for the first command).

CD D:\Samples
MkDir AjaxSample
CD AjaxSample
npm init -y
npm install lite-server

Once the last command finishes, select File > Open Folder… from the menu and select the AjaxSample folder you created. You should now see a \node_modules folder and two .json files: package-lock and package. Open the package.json file and add a new property to the "scripts" property.

"scripts": {
  "dev": "lite-server",
  "test": "echo \"Error: no test specified\" && exit 1"
},

Create a Home Page
Like any website, you should have a home page that starts the Web application. Add a new file under the \AjaxSample folder named index.html. Add the HTML shown in Listing 12 into this new file. You're going to add more HTML to this file later, but this small set of HTML provides you with a good starting point.

Try It Out
To make sure your server is running correctly, save all your changes in all files, and execute the following command in VS Code's terminal window.

npm run dev

If everything runs correctly, you should see a screen that looks like Figure 9.
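As a reminder, the port 3000 change earlier only matters because of CORS: the pages on http://localhost:3000 call the Web API on http://localhost:5000, so the Web API must whitelist that origin. A minimal sketch of what such a whitelist looks like in an ASP.NET Core Startup class (the policy name here is hypothetical; the Web API setup itself is covered separately):

public void ConfigureServices(IServiceCollection services)
{
  services.AddCors(options =>
    options.AddPolicy("AllowAjaxSample", policy =>
      policy.WithOrigins("http://localhost:3000")
            .AllowAnyHeader()
            .AllowAnyMethod()));
  // ...other service registrations
}

public void Configure(IApplicationBuilder app)
{
  // UseCors() must come after UseRouting()
  // and before UseEndpoints()
  app.UseCors("AllowAjaxSample");
  // ...rest of the pipeline
}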



Figure 9: Your home page should display when you run the lite-server.

You don't need to close the browser; just keep it open and switch back to VS Code. Every time you save your changes in VS Code, the lite-server picks up the changes and refreshes the HTML in your browser. If you do accidentally close your browser, just reopen it and type in http://localhost:3000 and you'll be taken back to the index page.

Add a Resources Folder
To start illustrating how to use Ajax, you're going to read data from a .json file stored in the Web server. If you're using MVC, create a folder under the AjaxSample\wwwroot folder named \resources. If you're using lite-server, create a new folder under the \AjaxSample folder named \resources. Add a new file in the resources folder named products.json. Add the data shown in Listing 13 into this new file.

Listing 12: Add a home page for your application.

<!DOCTYPE html>
<html>
<head>
  <title>Ajax Samples</title>
</head>
<body>
  <h1>Ajax Samples</h1>
  <p>Bring up console window</p>

  <button type="button" onclick="get();">Get Products</button>

  <script>
    'use strict';

    const URL = "/resources/products.json";
    //const URL = "http://localhost:5000/api/product";

    function get() {
    }
  </script>
</body>
</html>

Using XMLHttpRequest
As mentioned earlier in this article, the most fundamental building block of Ajax is the XMLHttpRequest object. Let's now use this object to retrieve the products in the \resources\products.json file in your Web server. Open the index page file and modify the empty get() function to look like the following code.

function get() {
  let req = new XMLHttpRequest();

  req.onreadystatechange = function () {
    console.log(this);
  };

  req.open("GET", URL);

  req.send();
}

The code above creates an instance of the XMLHttpRequest object that's built into your browser. It then assigns a function to the onreadystatechange event, which fires whenever the XMLHttpRequest object performs various operations that you learn about in the next section of this article. The next line of code calls the req.open() method and passes the type of request it's performing, and the URL on which to send the request. In this case, you are performing a GET on the URL pointing to the /resources/products.json file. The final line of code, req.send(), sends an asynchronous request to the Web server to retrieve the products.json file.

Try It Out
Save the index page file and, if you're using MVC, restart the debugger; if you are using the lite-server, you don't need to do anything. Switch over to your browser and open the Developer Tools console window in your browser (typically the F12 key). Click on the Get Products button and you should see something that looks like Figure 10. Depending on the browser you use, the console window may look slightly different.

The XMLHttpRequest readyState Property
As you can see from the developer tools console window in Figure 10, you received four different states. As the XMLHttpRequest goes through retrieving the data, it sends some notifications that you're reporting in the function you assigned to the onreadystatechange event. An instance of the XMLHttpRequest object is assigned to this within the onreadystatechange event. The readyState property on the object can have one of five different values, as shown in Table 2.

Value   Description        Constant
0       Unsent             UNSENT
1       Opened             OPENED
2       Headers Received   HEADERS_RECEIVED
3       Loading            LOADING
4       Done               DONE

Table 2: The readyState property reports different values as the XMLHttpRequest performs various functions.

Once the readyState property is equal to the value 4 (DONE), the call is complete. If the call is successful, the data returned from the Ajax call is available in the this.response property. The response property returns an array of product objects that looks like the data shown in Listing 13. If you want to just retrieve the array of product objects and not each state, modify the get() function to look like the following:

22 Using Ajax and REST APIs in .NET 5 codemag.com


function get() {
  let req = new XMLHttpRequest();

  req.onreadystatechange = function () {
    if (this.readyState === XMLHttpRequest.DONE &&
        this.status === 200) {
      console.log(this.response);
    }
  };

  req.open("GET", URL);
  req.send();
}

In the above code, check the value of the readyState property to see if it's equal to the constant DONE. Also check the status property to see if the call was successful. A successful call should set the status property to a value between 200 and 399. A value of 400 and higher signifies an error occurred with the Ajax call.

Add a displayResponse Function
Instead of writing a bunch of console.log() statements within the onreadystatechange event function, create a function named displayResponse(). In this function, you can then write any calls you want, such as those shown below.

function displayResponse(resp) {
  console.log(resp);
  console.log("");
  console.log("responseText: " +
    resp.responseText);
  console.log("status: " +
    resp.status.toString());
  console.log("statusText: " + resp.statusText);
}

Modify the get() function to call the displayResponse() function when the status is equal to 200.

function get() {
  let req = new XMLHttpRequest();

  req.onreadystatechange = function () {
    if (this.readyState === XMLHttpRequest.DONE &&
        this.status === 200) {
      displayResponse(this);
    }
  };

  req.open("GET", URL);
  req.send();
}
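As an aside, XMLHttpRequest also raises a load event when the request completes, so you can avoid inspecting readyState yourself. This alternative isn't used in the article; it's shown only for comparison:

function getUsingLoadEvent() {
  let req = new XMLHttpRequest();
  // "load" fires once, when readyState reaches DONE
  req.addEventListener("load", function () {
    if (this.status === 200) {
      displayResponse(this);
    }
  });
  req.open("GET", URL);
  req.send();
}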

Try It Out
Save all the changes within VS Code. Restart the debugger if using MVC, then go back to your browser and click on the Get Products button again. In the console window, you should see the product data, plus the additional status property values.

Figure 10: Bring up the Developer Tools on your browser to see the results of your JavaScript.

Listing 13: A JSON array of product objects.

[
  {
    "productID": 680,
    "name": "HL Road Frame - Black, 58",
    "productNumber": "FR-R92B-58",
    "color": "Black",
    "standardCost": 1059.31,
    "listPrice": 1431.50
  },
  {
    "productID": 707,
    "name": "Sport-100 Helmet, Red",
    "productNumber": "HL-U509-R",
    "color": "Red",
    "standardCost": 13.08,
    "listPrice": 34.99
  },
  {
    "productID": 709,
    "name": "Mountain Bike Socks, M",
    "productNumber": "SO-B909-M",
    "color": "White",
    "standardCost": 3.3963,
    "listPrice": 9.50
  },
  {
    "productID": 712,
    "name": "AWC Logo Cap",
    "productNumber": "CA-1098",
    "color": "Multi",
    "standardCost": 6.9223,
    "listPrice": 8.99
  },
  {
    "productID": 821,
    "name": "Touring Front Wheel",
    "productNumber": "FW-T905",
    "color": "Black",
    "standardCost": 96.7964,
    "listPrice": 218.01
  }
]



Listing 14: Add a <form> tag and input fields to allow the user to input product data.

<form>
  <div class="row">
    <label for="productID">Product ID</label>
    <input id="productID" name="productID"
      type="text" value="0" />
  </div>
  <div class="row">
    <label for="name">Product Name</label>
    <input id="name" name="name" type="text"
      value="A New Product" />
  </div>
  <div class="row">
    <label for="productNumber">
      Product Number
    </label>
    <input id="productNumber"
      name="productNumber"
      type="text" value="NEW-999" />
  </div>
  <div class="row">
    <label for="color">Color</label>
    <input id="color" name="color" type="text"
      value="Red" />
  </div>
  <div class="row">
    <label for="standardCost">Cost</label>
    <input id="standardCost" name="standardCost"
      type="number" value="10.00" />
  </div>
  <div class="row">
    <label for="listPrice">Price</label>
    <input id="listPrice" name="listPrice"
      type="number" value="25.00" />
  </div>
  <div class="row">
    <label for="sellStartDate">
      Sell Start Date
    </label>
    <input id="sellStartDate"
      name="sellStartDate"
      type="text" value="1/15/2021" />
  </div>
  <div class="row">
    <label id="message" class="infoMessage">
    </label>
  </div>
  <div class="row">
    <label id="error" class="errorMessage">
    </label>
  </div>
  <div class="row">
    <button type="button" onclick="get();">
      Get Products
    </button>
    <button type="button"
      onclick="getProduct();">
      Get a Product
    </button>
    <button type="button"
      onclick="insertProduct();">
      Insert Product
    </button>
    <button type="button"
      onclick="updateProduct();">
      Update Product
    </button>
    <button type="button"
      onclick="deleteProduct();">
      Delete Product
    </button>
  </div>
</form>

Get Product Data from Web API Server
Leave the Web API project running, go back to the index page in your Web server, and modify the constant URL to look like the following.

const URL = "http://localhost:5000/api/product";

Save the changes to the index page, restart the debugger, and go to the browser. Click the Get Products button and, in the console window, you should now see product data retrieved from the Product table in the AdventureWorksLT database.

Add a Web Form to Submit Product Data
Let's now finish up the index page to allow you to enter Product data using a form. Look at Figure 11 to see what this page is going to look like.

Figure 11: Add input fields for each product property to insert or update.

Add Global Styles for Web Form
The MVC application already has the bootstrap CSS framework, so many styles are already there for you to use. If you're using MVC, go to the \wwwroot\css folder and open the site.css file. If you're using the lite-server, add a new folder named \styles under your \AjaxSample folder and add a new file named site.css in this new folder. Open the site.css file and add the code shown below.

form .row label {
  display: inline-block !important;
  min-width: 10em !important;
}

.infoMessage {
  font-weight: bold;
}

.errorMessage {
  font-weight: bold;
  background-color: red;
  color: white;
}



If you're using the Node server, add one additional style into the site.css file.

.row {
  margin-bottom: 1em;
}

Add a Product Form
In the index page, delete the button within the <body> but leave the <h1> and the <p> tags.

<button type="button" onclick="get();">
  Get Products
</button>

Add the code shown in Listing 14 just below the <h1> and the <p> tag to your index page.

Modify Script on a Page
You added more buttons in the index page, so you need to add functions to correspond to each button's click event. Modify the code in the <script> tag to look like Listing 15.

Add Helper Functions into Scripts Folder
As with most applications, you're going to have some generic functions that you can reuse on many Web pages. To make them easy to reuse, create an ajax-common.js file into which you may place these functions. You then put this file into a folder that you can reference from any page that needs them. Create a file named ajax-common.js in the \wwwroot\js folder if using MVC, or if you're using lite-server, create a \scripts folder and place the file into that folder. Into this file, you're going to create five helper functions: getValue(), setValue(), displayMessage(), displayError(), and handleAjaxError().

Add a getValue() Helper Function
Each time you want to retrieve data from an input field using JavaScript, you use document.getElementById("productID").value. This is a lot of typing for each field, so add a simple helper function within the ajax-common.js file to perform this service for you.

function getValue(id) {
  return document.getElementById(id).value;
}

Add a setValue() Helper Function
To place data into an input field using JavaScript, you use the code document.getElementById("productID").value = "data". This is a lot of typing for each field, so add a simple helper function within the ajax-common.js file to perform this service for you.

function setValue(id, value) {
  document.getElementById(id).value = value;
}

Add a displayMessage() Helper Function
In the <form> you added to the index page, there's a <label> with an ID of message where you can display informational messages to the user. Create a function named displayMessage() in the ajax-common.js file to which you can pass a message to display, as shown in the following code.

function displayMessage(msg) {
  document.getElementById("message").innerHTML =
    msg;
}

Add a displayError() Helper Function
In the <form> you added to the index page, there's a <label> with an ID of error where you can display error information to the user. Create a function named displayError() in the ajax-common.js file to which you can pass an error object to display, as shown in the following code.

function displayError(error) {
  document.getElementById("error").innerHTML =
    JSON.stringify(error);
}

Add a handleAjaxError() Helper Function
Ajax errors can generally be handled by a common piece of code, as shown below. Add this function into the ajax-common.js file. Don't worry about what it does for now; you'll learn more about this function in a later article.

function handleAjaxError(error) {
  displayError(error);
  switch (error.status) {
    case 404:
      console.log(error.responseText);
      break;
    case 500:
      console.log(error.responseText);
      break;
    default:
      console.log(error);
      break;
  }
}
codemag.com Using Ajax and REST APIs in .NET 5 25


Listing 15: Add CRUD functions to the product page.

<script>
  'use strict';

  const URL = "http://localhost:5000/api/product";

  function get() {
    let req = new XMLHttpRequest();
    req.onreadystatechange = function () {
      if (this.readyState === XMLHttpRequest.DONE &&
          this.status === 200) {
        console.log(this.response);
        displayMessage("Products Retrieved");
      }
    };
    req.open("GET", URL);
    req.send();
  }

  function getProduct() {
  }

  function insertProduct() {
  }

  function updateProduct() {
  }

  function deleteProduct() {
  }
</script>

Getting the Sample Code: You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Fairway/PDSA Articles" from the Category drop-down, then select "Using Ajax and REST API's in .NET 5" from the Item drop-down.

Add Product Helper Functions
Just like you have common functions for any page, you're going to have some functions that just deal with the product data. Add a file named product.js into the \wwwroot\js folder if you're using MVC, and place it into the \scripts folder if you're using lite-server. Add three functions into this file named getFromInput(), setInput(), and clearInput().

Add a getFromInput() Helper Function
Create a method called getFromInput() to create a product object. This product object should have the exact same property names as the C# entity class you created in the Web API project. The property names can be camel-case because you added the code in the Startup class to convert the names from camel-case to Pascal case.

function getFromInput() {
  return {
    "productID": getValue("productID"),
    "name": getValue("name"),
    "productNumber": getValue("productNumber"),
    "color": getValue("color"),
    "standardCost": getValue("standardCost"),
    "listPrice": getValue("listPrice"),
    "sellStartDate":
      new Date(getValue("sellStartDate"))
  };
}

Add a setInput() Helper Function
When a product is returned from the Web API, you need to place each property in the product class into each input field. Create a function called setInput() in the product.js file to perform this operation.

function setInput(product) {
  setValue("productID", product.productID);
  setValue("name", product.name);
  setValue("productNumber",
    product.productNumber);
  setValue("color", product.color);
  setValue("standardCost",
    product.standardCost);
  setValue("listPrice", product.listPrice);
  setValue("sellStartDate",
    product.sellStartDate);
}

Add a clearInput() Helper Function
After deleting a product, or if you wish to clear the data before the user begins to add a new product, create a function called clearInput() to set each input field to a default value.

function clearInput() {
  setValue("productID", "0");
  setValue("name", "");
  setValue("productNumber", "");
  setValue("color", "");
  setValue("standardCost", "0");
  setValue("listPrice", "0");
  setValue("sellStartDate",
    new Date().toLocaleDateString());
}

Reference Scripts from Index Page
To use these .js files on your pages, you need to reference them using a <script> tag. If you are using MVC, add the two <script> tags shown in the code below just before the <script> tag in the index page.

<script src="/js/ajax-common.js"></script>
<script src="/js/product.js"></script>

If you're using the lite-server, add the following two <script> tags just before the <script> tag in the index page.

<script src="/scripts/ajax-common.js"></script>
<script src="/scripts/product.js"></script>

In the next articles, you're going to use these functions to retrieve data and populate the product form you created.

Summary
In this article, you built a .NET 5 Web API project to retrieve data from a SQL Server database table, serialize that data as JSON, and return it to your Web page. The ProductController handles all CRUD logic for the Product table in the AdventureWorksLT database. This is a standard controller used to retrieve, add, edit, and delete records in a table. One additional method you might add is a method to search for records based on one or more fields in the table. In this article, you also built either a Node Web server or an MVC application to run a Web page to communicate with the Web API server. You learned the basics of using the XMLHttpRequest object for making a simple Ajax call. In the next set of articles, you will learn more ways to make Ajax calls from HTML to a Web server.

Paul D. Sheriff



ONLINE QUICK ID 2103041

Eliminating Waste During Designer-to-Developer Handoff
As companies digitize their products and become more technology-focused, developers and designers absorb increased pressure to accelerate transformation so their companies can expand market share and fend off disruption. This rising demand for developers—despite a widely documented shortage of technical talent—fundamentally changes the roles of developers and designers and the dynamic between them.

Jason Beres
jasonb@infragistics.com
www.infragistics.com
indigo.design
@jasonberes

Jason is the SVP of Developer Tools at Infragistics, where, for 17 years, he's held roles at the intersection of tech evangelism, product management, and product development. He and his team spearhead the customer-driven, innovative features and functionality throughout all Infragistics' testing, developer, and user experience products. Jason has written seven books and co-authored three, on topics including software development, covering topics like SQL Server, C#, Visual Basic, Rich Client, and Web Development. He's a former Microsoft MVP and has contributed to a number of publications including Developer, InformIT, CMSWire, and others. Jason speaks at both national and international conferences and is very active in the developer and UX community.

Designers were once solely responsible for dreaming up beautiful interfaces, and they're now often expected to define the entire front-end UI/UX experience for users. Because designers are the direct gatekeepers for overloaded developers, they're critical to eliminating anything that could result in extraneous coding or lack of clarity before handing off projects for development.

This designer-to-developer handoff has long been a source of frustration and animosity among design and development teams—and inefficiency and lost productivity for companies. It's also one of the greatest opportunities for transformation.

Four Ways to Accelerate Design to Code
There are several ways to accelerate the change from design to code. Here are my four favorites.

1. Be Clear in Specs and Functionality
Communicating specs and functionality has become the biggest obstacle plaguing the designer-to-developer handoff. It's nearly impossible, for instance, to know the pixel measurements of, say, a scroll bar, by looking at a static design file. The developer must guesstimate the size of each element and its relative proportion to other elements on the screen.

Developers often apply the same guesswork to determining how features are meant to function. For instance, the designer envisions a certain hover animation, but doesn't properly communicate it to the developer. And don't get developers started on where to find files for features or images that are shown in mockups but weren't delivered to them during handoff!

In the end, the developed design looks—and acts—like a caricature of the intended design. From the designer's perspective, the developer has destroyed their design. From the developer's perspective, the designer has wasted their time. Both are right, of course, but neither is to blame.

A design team can introduce a design system to overcome these common issues. The design system then serves as the single source of truth that gives designers and developers an agreed-upon language that they can use to build apps. Design systems, backed by user interface components, eliminate the guesswork and accelerate app creation.

According to Gartner, "A design system is one of the most important strategic assets for an organization that produces digital products. A robust design system drastically shortens design and development timelines, ensures the user interface design is consistent, predictable, and usable, and guarantees brand compliance."

Even better, there are third-party tools that can pull out digital assets like CSS, HTML, and even code from the designs, which ensures a mistake-free coded output and often leads to a significantly accelerated product delivery.

2. Make Usability Testing Non-Negotiable
We've all had the bad experience of engaging in major design iterations during development, which is the biggest source of waste plaguing the designer-to-developer handoff in the app development process. With an effective user testing strategy, teams can identify and address issues during the design phase.

Changes to layout, interactions, or other structural design elements during development can result in thousands of lines of unnecessary code. This equates to longer dev cycles (missed deadlines), higher costs, and slower product delivery—not to mention a lot of avoidable back and forth (and tension) between design and development teams.

Fixing problems during development can cost ten times as much as fixing it during design, and potentially 100 times as much if you're fixing a problem once the product has already shipped.

Even so, teams continue making this mistake by not conducting usability tests up front. Instead, they wait until the user interface has already been developed so they can interact with it the way they would the finished product.



Forrester estimates that for every $1 to fix a problem during design, it would cost $5 to fix the same problem during development, and $30 to fix the same problem after the product's release.

On a positive note, today there are design systems and tools that give users an entire app experience to test before development writes any code, as seen in Figure 1. Be it high- or low-fidelity prototypes, screenshots with hot-spot links for navigation, or an actual design from a tool like Adobe XD or Sketch, you can user-test an experience and find most of the costly UX bugs with a good user-testing strategy. And all of this can happen virtually, rather than in person, especially with COVID restrictions forcing most of us to work from home indefinitely or even permanently.

Figure 1: User testing to analytics on the success or failure of a UX interaction.

This resolves another common obstacle to usability testing: the time and cost involved in recruiting large groups of paid panelists, who have traditionally been required to be physically present to conduct testing. Testing often gets sacrificed because teams view this method of testing as an unnecessary expense and hurdle to meeting condensed timelines.

By letting users test applications in a centralized, digital tool that designers and developers both have access to—from wherever they are in the world—affordable testing can be conducted quickly and at scale. But even smaller testing groups can suffice. According to Nielsen Norman Group, the best results related to detecting UX issues in a design come from iteratively testing with as few as five people.

3. Get Rid of Coding Altogether
Not so fast. An entire market of no-code and low-code apps has emerged in response to the massive amount of time it takes to code applications, whether completely or partially from scratch.

Considering that UI development can account for upward of 60% of total development time and application costs, there's significant cost savings in a code-reuse (low-code) or code-elimination (no-code) approach to UI development.

Although these apps can enable citizen developers to create simpler apps, more complex enterprise apps will continue



to require the skills of developers. Organizations need to streamline the app development process—with standardized tooling—so that the volume of manual coding can be reduced.

At the core of design-to-code apps is a design system—or a repository of reusable assets, with clear visuals, user interface designs, and technical standards—that serves as the foundation for quick and consistent design and development of digital products.

The biggest opportunity for cost and time savings when going from design to code can be found in using a well-thought-out set of UI patterns, or UI controls, that can be turned into code and customized from there. You can see an example of this code generation in Figure 2.

Figure 2: Generate code to kick-start dev directly from a Sketch design.

As a start, consider introducing standard UI components that work regardless of which framework(s) developers are working in to reduce the need to recreate components for each framework, like Angular, React, Blazor, etc.

Developers can get all the necessary code, be it Web code for CSS, HTML, or even JavaScript/TypeScript and code for entire pages, with a good design-to-code solution.

4. Out with the Contention, in with the Collaboration
Efficiency comes from designers and developers collaborating on a prototype, seeing the data and feedback from testing, and iterating on designs and prototypes before coding. When developers can have workable code generated from those prototypes without having to hand code everything, it adds further efficiency and helps ensure that the finished app delivers with the same experience as the final design and prototype.

When designers work in one tool or platform and developers work in another, collaboration and communication is harder. Eliminating silos and enabling design and dev collaboration throughout the app creation process by providing a single common platform where they can work together eliminates costly (and messy) handoffs and gets rid of unnecessary siloes between designers and developers, as seen in Figure 3.

Design One (or Two) Sprint(s) before Development: Clean design handoffs from design to dev drive successful outcomes in the software development process. This doesn't mean you remove activities like an iterative design process. Design teams create a design in iterations, making changes based on feedback from the stakeholders, and from testing the design with users. Iterative design with user testing fits well in an agile process by doing the design activities one sprint ahead of the developers. During Sprint 1, the design team designs the first set of features, gets stakeholder feedback, quickly tests the design with users, and iterates the design. Usability testing can be conducted quickly with low-fidelity prototypes. The development team works on the final, user-tested design in the sprints that follow. This two-track sprint process—one for design, one for dev—coupled with standardized tooling and a solid design system, ensures successful outcomes.

Although high output and individual productivity is necessary for product delivery, it can't be at the expense of overall team productivity. Standardizing on tools across an organization drives long-term productivity. For example, ensure that your design team is settled on a design tool; for example, don't let one team use Sketch and another Adobe XD. Make sure that your entire organization has standardized on a development



platform; don't let one team use React and another use Angular. The potential short-term gain because of familiarity or experience is miniscule when compared to the long-term cost of rectifying tool and platform compatibility.

Figure 3: End-to-End Flow of a Design to Code system using Indigo.Design

A Few Testers Find Most Issues: According to Nielsen Norman Group, the best results in finding UX issues in a design come from testing no more than five users and running as many small tests as you can afford.

The High Cost of Ignoring the UX Process
Considering everything you've read in this article, you might assume that incorporating user experience design activities into the standard software development process would add time and cost to projects. However, these activities save time and money by designing the right solution from the beginning and by finding and correcting problems early in the project, when they're easy and inexpensive to change. User interfaces designed by someone who understands and applies principles of human factors and design best practices help to avoid many UX problems. Iterative user testing and redesign finds and fixes problems and validates the design direction. Before development begins, design validation occurs by both the business and users, eliminating costly change requests due to unmet requirements and usability problems late in the development process.

It's far less expensive to make changes during the requirements definition and the design phase than during or after development.

Two concrete examples where a UX focus can help (or hurt) your outcomes:

• By correcting usability problems during the design phase of their website, American Airlines reduced the cost of those fixes by 60-90%.
• Avon Products Inc. gave up on a four-year, $125 million software overhaul after a test of the system in Canada revealed that the system was so burdensome and difficult to use that many salespeople quit the company.

To realize gains like American Airlines did, and to avoid the massive time and expense loss shown in the Avon example, eliminating waste in the designer-to-developer handoff while accelerating and perfecting the design to code process is critical. If organizational leaders focus on tools that deliver the best, fastest business outcomes in their digital-first/digital-transformation objective rather than relying on individual, siloed teams to choose "tools of their choice," they can achieve these goals and they will accelerate delivery and drive down cost.

Jason Beres



ONLINE QUICK ID 2103051

Tapping into EF Core’s Pipeline


In EF Core 5, there are many ways to expose what’s going on along its workflow and interact with that information. These
access points come in the form of logging, intercepting, event handlers, and some super cool new debugging features.
The EF team even revived an old favorite from the very first version of Entity Framework. My recent CODE Magazine overview,

EF Core 5: Building on the Foundation (https://www.codemag.com/Article/2010042/EF-Core-5-Building-on-the-Foundation), introduced you to a few of these capabilities. In this article, I'll dive more deeply into a broader collection of intriguing ways to access some of EF Core 5's metadata.

Julie Lerman
@julielerman
thedatafarm.com/contact

Julie Lerman is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and many popular videos on Pluralsight.com.

ToTraceString Revived as ToQueryString
This one is a blast from the past. In the first iterations of Entity Framework, there was no built-in logging. But there was at least ObjectQuery.ToTraceString(), a runtime method that would work out the SQL for a LINQ or Entity SQL query on the fly. Although it wasn't a great way to log, as it still required you to provide your own logic to output that SQL, there are some helpful use cases for it even today. This feature hasn't been part of EF Core until this latest version, EF Core 5, and has been renamed ToQueryString().

If you want to see what SQL is generated for the simple query of a DbSet called People, you just append ToQueryString to the query. There's no LINQ execution method involved. In other words, you're separating the query itself from the execution method, which would trigger the query to run against the database.

var sqlFromQuery=context.People.ToQueryString();

One interesting use case for ToQueryString is to look at its result while debugging so that you don't have to wait until after you've run your method to inspect SQL in logs. In the case above, I could build the query, capture the string, and then execute the query.

private static void GetAllPeople()
{
  using var context = new PeopleContext();
  var query = context
    .People.Where(p=>p.FirstName=="Julie");
  var sqlFromQuery = query.ToQueryString();
  var people = query.ToList();
}

Then when debugging, I could see the expected SQL for the sqlFromQuery variable. But you don't need to embed this in your production code. In fact, I wouldn't recommend that because it can easily impact performance as EF Core goes through its process of working out the SQL. Instead, you can call ToQueryString in the debugger, as shown in Figure 1.

Figure 1: Visualizing the query.ToQueryString() output while debugging

The query variable in Figure 1 has already been evaluated as a DbQuery before I called ToQueryString in the debugger, and so that works. However, although you can debug the context and express a DbSet directly in the debugger, which means you could also run context.People.ToQueryString() in the debugger successfully, you can't evaluate LINQ expressions directly. In other words, if you were to debug the context variable and then tack on the Where method in the debugger, it will fail. That's nothing new and not a limitation of ToQueryString.

One last important point about ToQueryString in this scenario is that its evaluation is based on the simplest execution: ToList. Using a LINQ execution query such as FirstOrDefault affects how the SQL is rendered, and therefore ToQueryString renders different SQL than the SQL sent to the database when executing the query with FirstOrDefault. Gunnar Peipman has some good examples of this in his blog post: https://gunnarpeipman.com/ef-core-toquerystring.

Another use case where I find ToQueryString particularly helpful is in integration tests. If you need to write tests whose success depends on some part of the generated SQL expression, ToQueryString is a much simpler path than logging. With logging, you would have to capture the log into



a text writer and then read that text. Although it may be tempting to use the InMemory provider for tests like these, keep in mind that the InMemory provider does not generate SQL. You'll need to use a provider for a true database. However, the database does not need to exist in order to use ToQueryString. EF Core only works in memory to determine the SQL.

Here's an example of a silly test to prove that EF Core writes more intelligent SQL than I do. Note that I'm referencing the Microsoft.EntityFrameworkCore.Sqlite provider in my test project. As you may know, EF and EF Core always project the columns related to the properties of the entity. It does not write SELECT *.

[TestMethod]
public void SQLDoesNotContainSelectStar()
{
  var builder = new DbContextOptionsBuilder();
  builder.UseSqlite("Data Source=testdb.db");
  using var context =
    new PeopleContext(builder.Options);
  var sql=context.People.ToQueryString();
  Assert.IsFalse(sql.ToUpper()
    .Contains("SELECT *"));
}

A more interesting example would be if you're using an interceptor to perform soft deletes and a global query filter to always filter out those rows. Here, for example, is a query filter in my DbContext OnModelBuilding method telling EF Core to append a predicate to filter out Person rows whose IsDeleted property is true.

modelBuilder.Entity<Person>()
  .HasQueryFilter(p => !p.IsDeleted);

With this in place, I can write a test similar to the one above, but changing the assertion to the following to make sure I don't break the global query filter logic.

Assert.IsTrue(sql.ToUpper()
  .Contains("WHERE NOT (\"p\".\"IsDeleted\")"));

You can see the full soft delete implementation in the download associated with the article online.

Logging Details from the EF Core Pipeline
There are three ways to tap into EF Core's pipeline. One is with the simple logging that I introduced in my earlier article.

Simple logging works in conjunction with .NET's logging API but all the hard work is done under the covers for you. You can easily configure the DbContext using a LogTo method, directing the .NET logging API to output logs about a DbContext instance. And there are quite a lot of events that EF Core will output. These are grouped into the following classes that derive from DbLoggerCategory.

• ChangeTracking
• Database.Command
• Database.Connection
• Database.Transaction
• Database
• Infrastructure
• Migrations
• Model.Validation
• Model
• Query
• Scaffolding
• Update

You can use these categories to filter output to only the type of information you want to log.

One parameter of LogTo specifies the target—either a console window, a file, or the debug window. Then a second parameter allows you to filter by .NET LogLevel plus any DbLoggerCategory you're interested in. This example configures a DbContext to output logs to the console using a delegate for Console.WriteLine and it filters on all the DbLoggerCategory types that fall into the LogLevel.Information group.

optionsBuilder.UseSqlServer(myConnectionString)
  .LogTo(Console.WriteLine,LogLevel.Information);

This next LogTo method adds a third parameter—an array of DbLoggerCategory values (only one is included)—to further filter on only EF Core's Database commands. Along with the LogTo method, I've added the EnableSensitiveDataLogging method to show incoming parameters in the SQL. This will capture all SQL sent to the database: queries, updates, raw SQL, and even changes sent via migrations.

.LogTo(Console.WriteLine,
  LogLevel.Information,
  new[]{DbLoggerCategory.Database.Command.Name})
.EnableSensitiveDataLogging();

My Person type that includes the IsDeleted property from above also has a FirstName and LastName property. Here's the log when calling SaveChanges after adding a new Person object.

info: 1/4/2021 17:56:09.935
RelationalEventId.CommandExecuted[20101]
(Microsoft.EntityFrameworkCore.Database
.Command)

Executed DbCommand (22ms) [Parameters=[
@p0='Julie' (Size = 4000), @p1='False',
@p2='Lerman' (Size = 4000)],CommandType='Text',
CommandTimeout='30']

SET NOCOUNT ON;
INSERT INTO [People] ([FirstName],
[IsDeleted], [LastName])
VALUES (@p0, @p1, @p2);
SELECT [Id]
FROM [People]
WHERE @@ROWCOUNT = 1 AND [Id] =
scope_identity();

The logger displays the LogLevel type (info), the DbContext EventId (RelationalEventId.CommandExecuted), and the details of the logger category I requested. Next it states the log name, the execution time, and the parameter list. Because sensitive data logging is enabled, the parameters are displayed. Finally, it lists the SQL sent to the database.
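Because LogTo accepts any Action<string>, pointing the same output at a file instead of the console is a small change. A minimal sketch following the pattern in the EF Core docs (the file name is arbitrary; note that the context must dispose the writer):

// requires: using System.IO;
private readonly StreamWriter _logStream =
  new StreamWriter("ef-log.txt", append: true);

protected override void OnConfiguring(
  DbContextOptionsBuilder optionsBuilder)
  => optionsBuilder
    .UseSqlServer(myConnectionString)
    .LogTo(_logStream.WriteLine, LogLevel.Information);

public override void Dispose()
{
  base.Dispose();
  _logStream.Dispose();
}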



LogTo makes it easy to have EF Core output basic logging. You can configure it in the DbContext, in the Startup file of an ASP.NET Core app, or in an ASP.NET Core app's application configuration file.

Notice the EventId at the top. You can even define your logging to filter on specific events using those IDs. You can also filter out particular log categories and you can control the formatting. Check out the docs for more details on these various capabilities at https://docs.microsoft.com/en-us/ef/core/logging-events-diagnostics/simple-logging.

Simple logging is a high-level way to log EF Core, but you can also dive more deeply into the logger by working directly with Microsoft.Extensions.Logging to exert even more control over how EF Core's logs are emitted. Check the EF Core docs for more details on getting started with this more advanced usage: https://docs.microsoft.com/en-us/ef/core/logging-events-diagnostics/extensions-logging.

Responding to EF Core Events
EF Core 2.1 introduced .NET events in the EF Core pipeline. There were only two to begin with: ChangeTracker.Tracked, which is raised when the DbContext begins tracking an entity, and ChangeTracker.StateChanged, which is raised when the state of an already tracked entity changes.

With the base logic in place, the team was able to add three more events to EF Core 5 for SaveChanges and SaveChangesAsync.

• DbContext.SavingChanges is raised when the context is about to save changes.
• DbContext.SavedChanges is raised after either of the two save changes methods have completed successfully.
• DbContext.SaveChangesFailed is used to capture and inspect a failure.

It's nice to be able to separate this logic rather than stuffing it all into a single override of the SaveChanges method. You could even use these events to emit alternate information that's not tracked by the logger. The EF Core docs use an example where they output timestamps when data is added, updated, or deleted.

If you're using shadow properties to track audit data, you could use the SavingChanges event to update those properties before the SQL is constructed and sent to the database. For example, I set up my application to add a UserId shadow property to every entity (barring those that are property bags and owned entities). My application has a static variable called Globals.CurrentUserId set by my application when a user logs in. Also, in my DbContext class, I've created a private method called SetUserId that sets the value of my shadow property (where it exists) to that CurrentUserId.


private void SetUserId(object sender,
  SavingChangesEventArgs e)
{
  foreach (var entry in ChangeTracker.Entries()
    .Where(entry => entry.Metadata
      .GetProperty("UserId") != null))
  {
    entry.Property("UserId").CurrentValue =
      Globals.CurrentUserId;
  }
}

Finally, I can wire up the SetUserId method to the SavingChanges event in the constructor of the DbContext:

public PeopleContext()
{
  SavingChanges += SetUserId;
}

Now any time I call SaveChanges, the UserId gets persisted into the table along with the other data. Here's some log data that shows 101, the value of my CurrentUserId, as part of the data inserted into a new Person row:

Executed DbCommand (29ms) [Parameters=[
@p0='Julie' (Size = 4000), @p1='False',
@p2='Lerman' (Size = 4000), @p3='101'],
CommandType='Text', CommandTimeout='30']

SET NOCOUNT ON;
INSERT INTO [People] ([FirstName],
[IsDeleted], [LastName], [UserId])
VALUES (@p0, @p1, @p2, @p3);
SELECT [Id]
FROM [People]
WHERE @@ROWCOUNT = 1
AND [Id] = scope_identity();

You can see the full implementation in the download with this article. It's just one simple way to leverage these events.

Accessing Metrics with Event Counters
EF Core 5 takes advantage of a cool feature introduced to .NET in .NET Core 3.0—dotnet-counters (https://docs.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-counters). Dotnet-counters is a global command line tool. You can install this tool using the dotnet CLI.

dotnet tool install --global dotnet-counters

Once installed, you can then tell it to monitor different processes running in the dotnet environment. You'll need to provide the process ID of your running .NET application with:

System.Diagnostics.Process.GetCurrentProcess().Id

In Visual Studio, I wasn't able to simply ask for this value in the debugger. The debugger will only tell you that "This expression causes side effects and will not be evaluated." So I embedded it into my code and received the value 28436.

With the ID in hand and the app still running, you can then trigger the counter to begin monitoring events coming from the Microsoft.EntityFrameworkCore namespace. Note that I've wrapped the command for display purposes.

dotnet counters monitor
  Microsoft.EntityFrameworkCore -p 28436

Then, as you run through your application, the counter displays a specific list of EF Core stats, as shown in Figure 2, and then updates the counts as the application performs its functionality. I only used it with a small demo app so my counts aren't very interesting, but you can see that I have a single DbContext instance running (Active DbContexts), I've run three queries, leveraged the query cache (because I ran some of those queries more than once), and called SaveChanges twice.

Figure 2: Output of Event Counters Targeting EF Core Events

This looks like another interesting tool to have in your analysis toolbelt, but it will certainly be more useful when running against a more intensive solution. In the docs, the EF team recommends that you read up on the dotnet-counters feature to properly benefit from using it with EF Core.

Figure 3: The makeup of a DbDataReader passed into the ReaderExecuted command

Intercepting EF Core's Pipeline to the Database
Another area of EF Core's pipeline is related to its interaction with the database. On the way to the database, there are tasks such as preparation of SQL commands and connections. On the way back from the database, there are tasks such as reading the database results and materializing objects. EF Core's interceptors are a feature that began in EF6

34 Tapping into EF Core’s Pipeline codemag.com


EF Core's interceptors are a feature that began in EF6 and was re-introduced in EF Core 3.

Because this feature has been around for a long time (although it's fairly new to EF Core), much has already been written and shared about it. Even so, I always find it interesting to be reminded of all of the objects you can intercept and modify before they continue on the pipeline. Note that some of the parameters coming from the new version are simpler to work with.

There are three different interceptor classes to intercept commands, connections, and transactions, as well as the new interceptor class for SaveChanges. Each class exposes virtual methods (and relevant objects) related to its category. For example, the DbCommandInterceptor exposes ReaderExecuting and ReaderExecutingAsync, which are triggered as the command is about to be sent to the database.

public override InterceptionResult<DbDataReader>
    ReaderExecuting(
        DbCommand command,
        CommandEventData eventData,
        InterceptionResult<DbDataReader> result)
{
    //e.g., alter command.CommandText
    return result;
}

One of its parameters is a DbCommand, and its CommandText property holds the SQL. If you want to modify the SQL to add query hints, or for other tasks, you can change the command.CommandText, and the command with the new CommandText value will continue on its way.

The ReaderExecuted/Async methods are triggered as any resulting data is returned from the database.

public override DbDataReader ReaderExecuted(
    DbCommand command,
    CommandExecutedEventData eventData,
    DbDataReader result)
{
    return base.ReaderExecuted
        (command, eventData, result);
}

Here, for example, you could capture the DbDataReader stored in the result variable and do something with that data before it continues on to EF Core for object materialization. One example is logging something that the logger doesn't capture, such as the newly generated ID value of a row added to the database. The result variable from a query, shown in Figure 3 while debugging, shows that there are rows returned and each row has five fields.

Figure 3: The makeup of a DbDataReader passed into the ReaderExecuted command

You'll find an example of an interceptor in the download but might also appreciate some more detailed blog posts from community members. One, by Lizzy Gallagher, is a nice introduction to interceptors in EF Core (https://lizzy-gallagher.github.io/query-interception-entity-framework/), and another is a more in-depth article by Eddie Stanley: https://eddiewould.com/2019/02/22/entityframework-fun-with-dbcommandinterceptor/. Keep in mind that Eddie's article is based on EF6 and some of the method parameters are different, but he demonstrates some cool ideas.

There's a lot of guidance around working with the interceptors in the EF Core docs at https://docs.microsoft.com/en-us/ef/core/logging-events-diagnostics/interceptors.
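To plug an interceptor into the pipeline, you register it on the DbContextOptionsBuilder. Here's a minimal sketch (mine, not from the article's download); MyCommandInterceptor stands in for your own DbCommandInterceptor subclass and connectionString for your own connection string:

protected override void OnConfiguring(
    DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer(connectionString)
        // AddInterceptors accepts one or more interceptor instances
        .AddInterceptors(new MyCommandInterceptor());
}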

Listing 1: The Model.DebugView.ShortView of my data model

Model:
  EntityType: AddressPerson (Dictionary<string, object>) CLR Type: Dictionary<string, object>
    Properties:
      AddressesId (no field, int) Indexer Required PK FK AfterSave:Throw
      ResidentsId (no field, int) Indexer Required PK FK Index AfterSave:Throw
    Keys:
      AddressesId, ResidentsId PK
    Foreign keys:
      AddressPerson (Dictionary<string, object>) {'AddressesId'} -> Address {'Id'} Cascade
      AddressPerson (Dictionary<string, object>) {'ResidentsId'} -> Person {'Id'} Cascade
    Indexes:
      ResidentsId
  EntityType: Address
    Properties:
      Id (int) Required PK AfterSave:Throw ValueGenerated.OnAdd
      PostalCode (string)
      Street (string)
      UserId (no field, int) Shadow Required
    Navigations:
      WildlifeSightings (List<WildlifeSighting>) Collection ToDependent WildlifeSighting
    Skip navigations:
      Residents (List<Person>) CollectionPerson Inverse: Addresses
    Keys:
      Id PK
  EntityType: Person
    Properties:
      Id (int) Required PK AfterSave:Throw ValueGenerated.OnAdd
      FirstName (string)
      IsDeleted (bool) Required
      LastName (string)
      UserId (no field, int) Shadow Required
    Skip navigations:
      Addresses (List<Address>) CollectionAddress Inverse: Residents
    Keys:
      Id PK
  EntityType: WildlifeSighting
    Properties:
      Id (int) Required PK AfterSave:Throw ValueGenerated.OnAdd
      AddressId (int) Required FK Index
      DateTime (DateTime) Required
      Description (string)
      UserId (no field, int) Shadow Required
    Keys:
      Id PK
    Foreign keys:
      WildlifeSighting {'AddressId'} -> Address {'Id'} ToDependent: WildlifeSightings Cascade
    Indexes:
      AddressId

codemag.com Tapping into EF Core’s Pipeline 35


bugView, you’ll find a ShortView and a LongView property.
Here, for example, is the ShortView of my context when I’ve
just queried for a single person object and my context only
contains that one Person.

Person {Id: 1} Unchanged

That’s the most commonly needed information—relaying that


there’s only one unchanged Person whose ID is 1 in my context.

The LongView provides some more detail about the entity


that’s being tracked.

Person {Id: 1} Unchanged


Id: 1 PK
FirstName: 'Julie'
IsDeleted: 'False'
LastName: 'Lerman'
UserId: 101
Addresses: []

If I were to edit that Person while it’s still being tracked and
force the context to detect changes, the LongView, in addi-
tion to showing the state as Modified, also notes the change
I made to the LastName property.

Person {Id: 1} Modified


Figure 4: Viewing the data model with EF Core Power Tools Id: 1 PK
FirstName: 'Julie'
IsDeleted: 'False'
A Sleeper Feature in EF Core 5: LastName: 'Lermantov' Modified
Debug Views Originally 'Lerman'
You may or may not be familiar with the term “sleeper” used UserId: 101
to describe, for example, a great movie that not many people Addresses: []
are aware of. There are two new features added to EF Core 5
that I didn’t even know existed until Arthur Vickers showed You can see in this view that there’s an Addresses property.
them in the EF Community Standup just after EF Core 5 was In fact, Person and Address have a many-to-many relation-
released. (Entity Framework Community Standup—Special ship between them using skip navigations. EF Core infers
EF Core 5.0 Community Panel, https://www.youtube.com/ the PersonAddress entity in memory at run time in order to
watch?v=AkqRn2vr1lc, at about 41 minutes.) These are the persist the relationship data into a join table. When I create
ChangeTracker.DebugView and Model.DebugView. a graph of a person with one address in its Addresses col-
lection, you can see a Person, an Address, and an inferred
PersonAddress object in the ShortView. The long view shows
the properties of these objects.
Making sure you know about
AddressPerson (Dictionary<string, object>)
DebugViews was my inspiration
{AddressesId: 1, ResidentsId: 1} Unchanged FK
for writing this article. {AddressesId: 1} FK {ResidentsId: 1}
Address {Id: 1} Unchanged
Person {Id: 1} Modified

The DebugViews output nicely formatted strings filled with I love these debug views that help me at debug time to dis-
information about the state of a context’s ChangeTracker or cover the state and relationship of my tracked objects wheth-
metadata from the model. DebugView provides a beautiful er I’m problem solving or just learning how things work.
document that you can capture and print out and really get
a good look at what’s going on under the covers. I spend Let’s flip over to the Model.DebugViews to see what you can
a lot of time in the debugger drilling into explore various learn from them. First, I should clarify my model. It’s the
details about what the change tracker knows or how EF Core same model I used for the earlier article. In Figure 4, I’m
is interpreting the model I’ve described. The ability to read using the EF Core Power Tools extension in Visual Studio to
this information in this text format, even save it in a file so visualize how EF Core interprets my model. My classes are
you don’t have to debug repeatedly to glean details, is a Person, Address, Wildlife Sighting, and ScaryWildlifeSight-
fantastic feature of EF Core 5. In fact, making sure you know ing. As mentioned already, Person and Address have a ma-
about DebugViews was my inspiration for writing this article. ny-to-many relationship where EF Core infers a join entity.
WildlifeSighting has a one-to-many relationship with Ad-
The way to get to this information is when debugging an dress, and ScaryWildlifeSighting inherits from WildlifeSight-
active DbContext instance. In DbContext.ChangeTracker.De- ing using a Table-Per-Hierarchy mapping to the database.

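You don't have to be sitting in the debugger to grab these strings; the same properties are available in code. Here's a minimal sketch (my own, reusing the PeopleContext and Person entity from this article's examples) that saves both views to files:

using var context = new PeopleContext();
var person = context.People.Find(1);
person.LastName = "Lermantov";
context.ChangeTracker.DetectChanges();

// Capture the views so you don't have to re-debug to glean details
File.WriteAllText("tracker-short.txt",
    context.ChangeTracker.DebugView.ShortView);
File.WriteAllText("tracker-long.txt",
    context.ChangeTracker.DebugView.LongView);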
36 Tapping into EF Core’s Pipeline codemag.com


Listing 2: The Person entity described in the LongView of Model.DebugView

EntityType: Person
  Properties:
    Id (int) Required PK AfterSave:Throw ValueGenerated.OnAdd
      Annotations:
        Relational:DefaultColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMappingBase]
        Relational:TableColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMapping]
        SqlServer:ValueGenerationStrategy: IdentityColumn
    FirstName (string)
      Annotations:
        Relational:DefaultColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMappingBase]
        Relational:TableColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMapping]
    IsDeleted (bool) Required
      Annotations:
        Relational:DefaultColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMappingBase]
        Relational:TableColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMapping]
    LastName (string)
      Annotations:
        Relational:DefaultColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMappingBase]
        Relational:TableColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMapping]
    UserId (no field, int) Shadow Required
      Annotations:
        Relational:DefaultColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMappingBase]
        Relational:TableColumnMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnMapping]
  Skip navigations:
    Addresses (List<Address>) CollectionAddress Inverse: Residents
  Keys:
    Id PK
      Annotations:
        Relational:UniqueConstraintMappings: System.Collections.Generic.SortedSet`1[Microsoft.EntityFrameworkCore.Metadata.Internal.UniqueConstraint]
  Annotations:
    ConstructorBinding: Microsoft.EntityFrameworkCore.Metadata.ConstructorBinding
    QueryFilter: p => Not(p.IsDeleted)
    Relational:DefaultMappings: System.Collections.Generic.List`1[Microsoft.EntityFrameworkCore.Metadata.Internal.TableMappingBase]
    Relational:TableMappings: System.Collections.Generic.List`1[Microsoft.EntityFrameworkCore.Metadata.Internal.TableMapping]
    Relational:TableName: People
    ServiceOnlyConstructorBinding: Microsoft.EntityFrameworkCore.Metadata.ConstructorBinding

The DbContext.Model.DebugView also has a ShortView and a LongView. And they both contain a lot of information. Listing 1 shows the ShortView. There's so much information in here. You can see properties, primary and foreign keys, indexes, and cascade delete rules. The many-to-many relationship is described, even specifying that it's using a skip navigation. The inheritance is also described. There's so much you can learn from this document. I like that you can capture it and save it in your documentation for team members to refer to as needed. You could also use it to compare the model as you evolve your app to make sure you haven't broken any of EF Core's interpretation of your model.

The Model.DebugView.LongView has even more detail, describing annotations, database mappings, and more. Listing 2 shows only the output for the Person entity. There's even more you can learn from the LongView. This level of detail isn't something everyone will want to look at, but it's there if you need it. Looking at this type of information is how I learned so much about Entity Framework and EF Core in the early days and I continue to be fascinated by it. Note that it's a lot easier to read in the text viewer than in the listing because I had to wrap so many lines in order to fit into the listing format for the printed page. Figure 5 shows a screenshot of part of the LongView so you can better see how it's formatted.

Figure 5: Screenshot of a subset of Model.DebugView.LongView

Take Advantage of the Pipeline
I'm a big fan of understanding how the tools I use work. Luckily, I leave the power tools to my husband (a carpenter) and stick to my own tools, such as EF Core. The more you know about what's going on under the covers, the more power you have over that tool. For example, comprehending how EF Core interprets your classes and mappings gives you more agency over designing interesting models that persist data exactly the way you want it to. Knowing that you can modify the SQL or even the results based on your needs lets you build more intelligent applications. Learning how to leverage the various debugging, logging, interception, and event handling explained in this article can turn you into the EF Core expert on your team.

Julie Lerman

codemag.com Tapping into EF Core’s Pipeline 37





Introduction to Containerization Using Docker
Containerization has been a buzzword in the IT industry for the past several years. The term "containerization" has been increasingly used as an alternative or companion to virtualization. But what exactly is containerization, how does it work, and how does it compare with virtualization? In this article, I'll take you on a journey to discover just what exactly containerization

is through the use of the Docker platform. By the end of this article, you'll have a much clearer idea of how Docker works, and how to leverage it in your next development project.

Wei-Meng Lee
weimenglee@learn2develop.net
www.learn2develop.net
@weimenglee

Wei-Meng Lee is a technologist and founder of Developer Learning Solutions (www.learn2develop.net), a technology company specializing in hands-on training on the latest technologies. Wei-Meng has many years of training experience and his training courses place special emphasis on the learning-by-doing approach. His hands-on approach to learning programming makes understanding the subject much easier than reading books, tutorials, and documentation. His name regularly appears in online and print publications such as DevX.com, MobiForge.com, and CODE Magazine.

What Docker Is and How It Works
Whenever people talk about containerization, people start to think of something they're already familiar with—virtual machines (VM). Since this is the case, I'll explain Docker by first using VM as an example.

Virtual Machines vs. Docker
Figure 1 shows the various layers in a computer that uses VMs. The bottom layer is the hardware of your computer, with the OS sitting on top of it. On top of the OS is the hypervisor (also known as the Virtual Machine Monitor), which is software that creates and runs virtual machines. Using the hypervisor, you can host multiple guest operating systems (such as Windows, Linux, etc.). Each VM contains a separate set of libraries needed by your application, and each VM is allocated a specific fixed amount of memory. Therefore, the number of VMs you can host in your computer is limited by the amount of memory you have.

Figure 1: How a virtual machine works

Figure 2 shows how Docker fits in the picture. Instead of a hypervisor, you now have a Docker engine. The Docker engine manages a number of containers, which host your application and the libraries that you need. Unlike VMs, each container does not have a copy of the OS—instead, it shares the resources of the host's operating system.

Figure 2: How Docker works

As mentioned, a Docker container doesn't have any operating system installed and running on it. But it does have a virtual copy of the process table, network interface(s), and the file system mount point(s) (see Figure 3). These are inherited from the operating system of the host on which the container is hosted and running.

The kernel of the host's operating system is shared across all the containers that are running on it. Docker virtualizes the operating system of the host on which it's installed and running, rather than virtualizing the hardware components.

Docker virtualizes the OS of your computer and simplifies the building, running, managing, and distribution of applications.

Docker is written to run natively on the Linux platform, and the host and container OS must be the same.

Wait a minute. If the host and container OS must be the same (that is, Linux), how do you run Docker on operating systems such as Windows and macOS? Turns out that if you're using Windows or macOS, Docker creates a Linux virtual machine, which itself hosts the containers (see Figure 4). This is why, if you use Windows, you'll need to install WSL2 (Windows Subsystem for Linux 2, a full Linux kernel built by Microsoft). For Mac, Docker uses the macOS Hypervisor framework (HyperKit) to create a virtual machine to host the containers.

Each container is isolated from the other containers present on the same host. Thus, multiple containers with different application requirements and dependencies can run on the same host.

Uses of Docker
The first question that you might ask is: "Okay, now that I know how Docker works, give me a good reason to use Docker." Here's one that I usually use to answer this question.



Say you have three different Python applications that you need to test on your development computer. Each of these apps uses a specific version of Python, as well as libraries and dependencies. If you're a Python developer, you know that installing different versions of Python on a single computer is problematic. Of course, you can create virtual environments, but you might be reluctant to do that as you don't want to really mess up the configuration of your current computer.

To solve your dilemma, you can:

• Use three different physical machines
• Use three different virtual machines (VM), running on a single physical machine

Neither solution is cheap. For the first solution, you need to have three physical machines, and the second option requires you to have a computer that's powerful enough to run three VMs. A much better way would be to run three Docker containers, each with a different version of Python installed.
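For instance (my own illustration, not from the article; the version tags are just examples), each interpreter comes straight from Docker Hub, and the --rm flag throws the container away when you exit:

C:\>docker run -it --rm python:3.7 python
C:\>docker run -it --rm python:3.9 python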

Another good use case of Docker is when you want to experiment with different database servers in your development environment. Instead of installing multiple database servers on your computer, simply use Docker containers to run each database server. Docker Hub (https://hub.docker.com/search?q=database&type=image) has a list of database servers that you can try using with Docker.

At the time of writing, Docker doesn't officially support the latest M1 Mac from Apple (it's currently in preview). Docker is in the process of transitioning from HyperKit to the new high-level Virtualization Framework provided by Apple for creating virtual machines on Apple silicon- and Intel-based Macs. Docker also needs to recompile the binaries of Docker Desktop to the native ARM platform.

Figure 3: Docker virtualizes the operating system of the host on which it's installed and running, rather than virtualizing the hardware components.

Figure 4: On operating systems such as macOS and Windows, Docker creates a virtual machine to host all the containers.

Figure 5: Docker Desktop for Windows



Figure 6: A Docker image contains a number of read-only layers (containing tools and libraries).

Figure 7: A Docker container adds a container layer on top of an image to create a run-time environment.

Downloading and Installing Docker
With all the theory behind us, it's time to get your hands dirty and experience for yourself how Docker works.

Using a Web browser, go to https://docs.docker.com/docker-for-windows/install/ and download Docker for the OS you're using. Follow the installation instructions and when you're done, you should see the Docker Desktop icon (the whale logo) in the system tray (for Windows). Clicking on the icon launches Docker for Windows (see Figure 5).

A number of administrative tasks in Docker can be accomplished through the Docker Desktop app, but you can do more with the Docker CLI. For the rest of this article, I'll demonstrate the various operations through the CLI.

Docker Images and Containers
In Docker, there are two important terms:

• Docker Image: A Docker image is a read-only file containing libraries, tools, and dependencies that are required for an application to run. Examples of Docker images are the MySQL Docker image, Python Docker image, Ubuntu Docker image, and so on. Each Docker image can be customized by adding layers (with additional tools and libraries added on each layer, for example; see Figure 6). A Docker image can't run by itself. To do so, you need to create a Docker Container.
• Docker Container: A Docker Container uses a Docker Image and adds a container layer on top of it to create a run-time environment (see Figure 7). The container layer is read-write capable and can now be used to run your application.

Think of Docker Images as templates containing the various libraries and tools that are needed to run your application. In order to actually run the application, you need to create a container based on that image. If you're a developer, the closest analogy I can think of is that a Docker image is like a class, while a Docker container is an object.

Creating Your First Docker Container from a Docker Image
The best way to understand the difference between an image and a container is to try a very simple example. In the command prompt, type the following command: docker run hello-world. You should see the output as shown in Figure 8.

Figure 8: Trying Docker for the first time

The docker run hello-world command downloads (or in Docker-speak, pulls) the hello-world Docker Image from Docker Hub and then creates a Docker Container using this image; it then assigns a random name to the container and starts it. Immediately, the container exits.

The hello-world image, as useless as it is, allows you to understand a few important concepts of Docker. Rest assured that you'll do something useful after this.

Viewing the Docker Container and Image
If you go to the Docker Desktop app and click on the Images item on the left (see Figure 9), you'll see the hello-world image listed.

If you now click on the Containers / Apps item on the left (see Figure 10), you should now see a container named elated_bassi (you'll likely see a different name, as names are randomly assigned to a container) based on the hello-world image. If you click on it, you'll be able to see logs generated by the container, as well as inspect the environment variables associated with the container and the statistics of the running container.

You can also view the Docker container and image using the command prompt. To view the currently running containers, use the docker ps command. To view all containers (including those already exited), use the docker ps -a command (see Figure 11).

Figure 9: Locating the hello-world image in the Docker Desktop app

Figure 10: Viewing the container created as well as the logs generated by the container

To explicitly name the container when running it, use the --name option, like this:

C:\>docker run --name helloworld hello-world


To view all the Docker images on your computer, you can use the docker images command (see Figure 12).

Once a Docker image is on your local computer, you can simply create another container based on it using the same docker run command:

C:\>docker run hello-world

When you use the docker run command to create a container from an image, a new container is always created.

When you now use the docker ps -a command, you'll see a new container that ran and then exited most recently:



C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS   NAMES
138cd9c6bc5c   hello-world   "/hello"   24 seconds ago   Exited (0) 23 seconds ago           jovial_borg
0099984a5fc2   hello-world   "/hello"   29 minutes ago   Exited (0) 29 minutes ago           elated_bassi

Figure 11: Using the docker ps -a command to view all containers

Stopping a Container
If you need to stop a container that's running, you can use the docker stop command together with the container ID of the container you wish to stop. Although the hello-world container ran and exited immediately, you'll find this command useful later on when you need to manually stop a running container.

Removing a Container
When a container has finished running and is no longer needed, you can delete it using the docker rm command. To delete a container, you need to first get the container ID of the container that you want to delete using the docker ps -a command, and then specify the container ID with the docker rm command:

C:\>docker rm 138cd9c6bc5c

If you now use the docker ps -a command to view all the containers, you should find that the specified container no longer exists:

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS   NAMES
0099984a5fc2   hello-world   "/hello"   37 minutes ago   Exited (0) 37 minutes ago           elated_bassi

Removing a Docker Image
If you no longer need a particular Docker image (especially when you need to free up some space on your computer), use the docker rmi command, like this:

C:\>docker images
REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
hello-world   latest   bf756fb1ae65   12 months ago   13.3kB

C:\>docker rmi bf756fb1ae65
Error response from daemon: conflict: unable
to delete bf756fb1ae65 (must be forced) –
image is being used by stopped container
0099984a5fc2

Figure 12: Viewing the Docker images on your computer

In the above commands, you first try to get the Image ID of the Docker image that you want to delete. Then, you use the docker rmi command to try to delete the image using its Image ID. However, notice that in the above example, you're not able to delete it because the image is in use by another container. You can verify this by using the docker ps -a command:

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED       STATUS                   PORTS   NAMES
0099984a5fc2   hello-world   "/hello"   2 hours ago   Exited (0) 2 hours ago           elated_bassi

True enough, there's a container (0099984a5fc2) that's using the image. In this case, you need to remove the container first before you can remove the image:

C:\>docker rm 0099984a5fc2
C:\>docker rmi bf756fb1ae65
Untagged: hello-world:latest
Untagged: hello-world@sha256:1a523af650137b8accdaed439c17d684df61ee4d74feac151b5b337bd29e7eec
Deleted: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
Deleted: sha256:9c27e219663c25e0f28493790cc0b88bc973ba3b1686355f221c38a36978ac63

Sometimes you might have a lot of images on your computer and you just want to remove all the images that aren't used by any containers. In this case, you can use the docker image prune -a command:



C:\>docker image prune -a
WARNING! This will remove all images without
at least one container associated to them.
Are you sure you want to continue? [y/N] y

All unused Docker images will now be removed.

A Docker image can only be removed when there's no container associated with it.

Using the Ubuntu Docker Image
Now that you're comfortable with the basics of Docker images and containers, it's time to do something useful! The first example that I want to demonstrate is the Ubuntu image. Using the Ubuntu image, you can create a container that allows you to write and host applications in a Linux environment.

A great place to find Docker images is Docker Hub (see Figure 13).

Figure 13: Using the Docker Hub to search for images that you need

You can search for images that are of interest to you. Figure 14 shows the page for the Ubuntu Docker image.

Figure 14: The page for the Ubuntu Docker image

Let's now pull and run the Ubuntu image from Docker Hub:

C:\>docker run -it ubuntu bash

The -it option instructs Docker to allocate a pseudo-TTY connected to the container's stdin and then run an interactive bash shell in the container. Figure 15 shows the Ubuntu container running with the shell.

Figure 15: Running the Ubuntu Docker image as a container

You're now in the shell prompt of Ubuntu. You can now use it like you're using a computer running the Ubuntu OS (albeit without the GUI components). Because the Ubuntu image contains the bare-minimum tools and libraries, to do anything useful, you need to install the tools yourself. Let's now run the apt-get update command to update the package cache on the container:

root@80f6a4603277:/# apt-get update
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [109 kB]
Get:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
...
Get:17 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [4250 B]
Fetched 16.8 MB in 40s (416 kB/s)
Reading package lists... Done

Then, install the curl utility:



root@80f6a4603277:/# apt-get install -y curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  ca-certificates krb5-locales libasn1-8-heimdal libbrotli1 libcurl4
  libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
...
Running hooks in /etc/ca-certificates/update.d... done.

You can now test to see if curl is installed correctly:

root@80f6a4603277:/# curl
curl: try 'curl --help' or 'curl --manual' for more information

Finally, you can exit the container by using the exit command in Ubuntu:

root@80f6a4603277:/# exit

The container has now exited (no longer running). It's important to note that all the changes you've made to a container are only persisted for that particular container. If you use the docker run -it ubuntu bash command again, you'll create a new Ubuntu container without all the updates that you made earlier.

To run that previously running container, first use the docker ps -a command to get its Container ID:

C:\>docker ps -a
CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS                      PORTS   NAMES
80f6a4603277   ubuntu   "bash"    42 minutes ago   Exited (2) 35 minutes ago           eager_wozniak

Then use the docker start command to start the existing container:

C:\>docker start 80f6a4603277

Notice that after you've started the container, control is returned to you (the container is running but you're not interacting with it)—you're not in Ubuntu's interactive shell.

Instead of using the Container ID to start a container, you can also use its user-friendly name.

To connect to the running container interactively, use the following command:

C:\>docker exec -i -t 80f6a4603277 bash
root@80f6a4603277:/#

The docker exec command runs a command in a running container. You should now be able to use the curl application on the container:

root@80f6a4603277:/# curl
curl: try 'curl --help' or 'curl --manual' for more information

Using the NginX Docker Image
Although the idea of running an Ubuntu OS on your computer may not sound very exciting to you, how about running a Web server? Suppose you need to deploy a Web application on your computer for a project, but you don't want to install a Web server on your current work computer. The easiest way is, of course, to run the Web server using Docker! You can use the NginX (pronounced engine-ex) Web server, an open source Web server that's often used as a reverse proxy, HTTP cache, and load balancer.

To pull and run a Docker container using the nginx image, type the following command:

C:\>docker run -d -p 88:80 --name webserver nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
6ec7b7d162b2: Pull complete
cb420a90068e: Pull complete
2766c0bf2b07: Pull complete
e05167b6a99d: Pull complete
70ac9d795e79: Pull complete
Digest: sha256:4cf620a5c81390ee209398ecc18e5fb9dd0f5155cd82adcbae532fec94006fb9
Status: Downloaded newer image for nginx:latest
469be21874e7c80398b0499159a76ee14d9e420d5c9f89234f5747eac008fe2b
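If you'd rather verify from a command line than a browser (my own addition, not in the article), any machine with curl installed can request the mapped port and get the nginx welcome page back:

C:\>curl http://localhost:88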


You're ready to test the Web server to see if it works. Use your Web browser and enter the following URL: http://localhost:88 (see Figure 16). If the container is running properly, you should see the welcome message.

Figure 16: Testing the NginX server in a Docker container

Here's a list of the uses for the various options in the above command:

• The -d option (or --detach) runs the container in the background.
• The -p option maps the exposed port to the specified port; here 88:80 means all traffic to port 88 on the local computer is forwarded to port 80 in the container.
• The --name option assigns a name to the container.
• nginx is the name of the Docker image (from Docker Hub) to run.

Once the container has been created, you can verify the mapping of the ports using the docker port command:

C:\>docker port webserver
80/tcp -> 0.0.0.0:88

Modifying the Docker Container
Although being able to run the Web server through a Docker container is cool, you want to be able to customize the content of your Web pages. To do that, type in the following commands:

C:\>docker exec -it 469be21874e7 bash
root@469be21874e7:/# apt-get update
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [260 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [7860 B]
Fetched 8414 kB in 2s (3929 kB/s)
Reading package lists... Done
root@469be21874e7:/# apt-get -y install vim

The above commands:

• Connect to the nginx container interactively and execute the bash shell
• Update the package cache on the container
• Install the vim editor on the container

Once the vim editor is installed on the container, let's change to the Web publishing directory and edit the index.html file, with the content shown in Listing 1.

root@469be21874e7:/# cd usr/share/nginx/html/
root@469be21874e7:/usr/share/nginx/html# vim index.html

You're ready to test the Web server to see if it works. Use your Web browser and enter the following URL: http://localhost:88 (see Figure 17). You should now see the "Hello, Docker!" message.

What if you want to transfer an image into the container? Easy. Use the docker cp command:

C:\>docker cp docker.png 469be21874e7:/usr/share/nginx/html

In that snippet, 469be21874e7 is the container ID of the nginx container. The above command copies the file named docker.png into the container's /usr/share/nginx/html directory.

In the interactive shell of the container, edit the index.html file again:

root@469be21874e7:/usr/share/nginx/html# vim index.html

Add the additional line (the <img> element shown in Listing 2) to the index.html file.

Figure 17: Viewing the modified Web page on the nginx container

Listing 1: Modifying the content of index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Hello World</title>
    <style>
        h1 {
            font-weight: lighter;
            font-family: Helvetica, sans-serif;
        }
    </style>
</head>
<body>
    <h1>
        Hello, Docker!
    </h1>
</body>
</html>

Listing 2: The content of index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Hello World</title>
    <style>
        h1 {
            font-weight: lighter;
            font-family: Helvetica, sans-serif;
        }
    </style>
</head>
<body>
    <h1>
        Hello, Docker!
    </h1>
    <img width="100" src="docker.png"/>
</body>
</html>



Refreshing the Web browser, you should now see the image of the Docker logo (see Figure 18).

Figure 18: The Web page now shows the Docker logo

Modifying the Docker Image
In the previous section, you saw how to modify a container so that you can customize it to your own use. What if you want to run multiple containers, all requiring the same software already pre-installed? In this case, it's very time consuming to modify each container one by one. A better way would be to modify the Docker image so that all containers based on it will have the same prerequisites installed.

First, create a file named index.html and save it in, say, C:\MyDocker. Populate it with the content shown earlier in Listing 1.

Next, create a file named Dockerfile and save it in the same folder as index.html - C:\MyDocker. Populate it with the content shown in Listing 3.

The Dockerfile contains all the instructions that you specify in the command line in order to build a Docker image. The filename "Dockerfile" is case-sensitive.

Listing 3: Content of the Dockerfile
FROM nginx
COPY index.html /usr/share/nginx/html/index.html

The commands in the Dockerfile copy the local index.html file and save it into the nginx image's /usr/share/nginx/html/ directory as index.html.

You can now build a new Docker image and call it nginx (same as the original nginx image name):

C:\MyDocker>docker build -t nginx .

Remember to change into the directory where you previously saved your Dockerfile (which, in this example, is C:\MyDocker). The changes are made to the Docker image, and will affect all containers created from this image.

You can now start a new container based on the modified nginx image:

C:\>docker run -d -p 88:80 nginx

When you refresh the Web browser, you'll now see the same page as shown earlier in Figure 17.

Docker images use tags for version control. The latest tag is simply the default tag applied to an image that wasn't given an explicit one.
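Rather than overwriting the original nginx image, you could also give your customized build its own repository name and an explicit tag (my own suggestion, not from the article; mynginx is an arbitrary name):

C:\MyDocker>docker build -t mynginx:1.0 .
C:\>docker run -d -p 88:80 mynginx:1.0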

Using the MySQL Docker Image
As mentioned earlier in this article, if you want to experiment with various database servers, Docker is a good solution. For this section, you'll learn how to use the MySQL Docker image.

Type the following command to download and run a MySQL Docker container:

C:\>docker run --name My-mysql -p 3306 -e MYSQL_ROOT_PASSWORD=password -d mysql:latest
Unable to find image 'mysql:latest' locally
latest: Pulling from library/mysql
a076a628af6f: Pull complete
f6c208f3f991: Pull complete
88a9455a9165: Pull complete
406c9b8427c6: Pull complete
7c88599c0b25: Pull complete
25b5c6debdaf: Pull complete
43a5816f1617: Pull complete
69dd1fbf9190: Pull complete
5346a60dcee8: Pull complete
ef28da371fc9: Pull complete
fd04d935b852: Pull complete
050c49742ea2: Pull complete
Digest: sha256:0fd2898dc1c946b34dceaccc3b80d38b1049285c1dab70df7480de62265d6213
Status: Downloaded newer image for mysql:latest
8a0496aaca1864846c91915875bdf394bd8fb9e23c2986c5e8c372cee4fc1f69

The above command loads the latest version of the MySQL Docker image, creates a container, and then runs it. The uses of the various options are:

• -d: runs the container in the background; the final argument, mysql:latest, names the Docker image (mysql) as well as its tag (latest)
• --name: gives the container the name of My-mysql
• -e: sets the environment variable MYSQL_ROOT_PASSWORD to password, which is the password of the root account (in real life, please set the password to a more secure one)
• -p: exposes port 3306 so that you can interact with MySQL through this port
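Incidentally (my own tip, not from the article), you don't need any locally installed tools to get a quick MySQL prompt: the mysql image ships with the mysql command-line client, so you can open one inside the running container:

C:\>docker exec -it My-mysql mysql -uroot -p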



Once the MySQL Docker image is downloaded and run as a container, you can use the docker ps command to confirm that it is indeed up and running:

C:\>docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS                                NAMES
8a0496aaca18   mysql:latest   "docker-entrypoint.s…"   About a minute ago   Up About a minute   33060/tcp, 0.0.0.0:49153->3306/tcp   My-mysql

Observe that the port 49153 (randomly assigned) is mapped to 3306 ("0.0.0.0:49153->3306/tcp"). This means that if you want to interact with MySQL running in the container, you need to connect to this port – 49153. Note that you're likely to see a different number on your end.

If you want to explicitly map the running container to a specific port, specify the port mapping through the -p option when starting the container. For example:

C:\>docker run --name My-mysql -p 32769:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql:latest

The above -p option maps the port 32769 to 3306. You can use the docker port command to list the port mappings on the My-mysql container:

C:\>docker port My-mysql
3306/tcp -> 0.0.0.0:49153

Inspecting the Docker Container
Now that you've started the MySQL container running, you may be wondering where all the database's data is stored. This is a good opportunity to dive deeper into the container to learn how directories are mapped.

You can examine your container in more detail using the docker inspect command:

C:\>docker inspect My-mysql

The above command will yield something like the results shown in Listing 4 (the important part highlighted).

Listing 4: The partial result of the docker inspect command
[
  {
    "Id": "8a0496aaca1864846c91915875bdf394bd8fb9e23c2986c5e8c372cee4fc1f69",
    "Created": "2021-01-13T03:09:33.345744Z",
    "Path": "docker-entrypoint.sh",
    "Args": [
      "mysqld"
    ],
    ...
    "Mounts": [
      {
        "Type": "volume",
        "Name": "68b5c16eb9f33bdf7e7756e27873d9a4abb1d043e2c0be49a1ca2b8562b214d6",
        "Source": "/var/lib/docker/volumes/68b5c16eb9f33bdf7e7756e27873d9a4abb1d043e2c0be49a1ca2b8562b214d6/_data",
        "Destination": "/var/lib/mysql",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
      }
    ],
    ...
  }
]

The result is quite a lengthy bunch of information. For this discussion, let's just focus on three specific keys: "Type", "Source", and "Destination".

The value of the "Source" key refers to the physical directory used by MySQL to store its data. If you run Docker on a Linux computer, this directory refers to an actual directory on the local computer; on a Mac or Windows computer, this is a directory on the virtual machine created by Docker.

The value of the "Destination" key refers to the (logical) directory used by MySQL in the Docker container. This is to say, if you connect to the MySQL container interactively, you will be able to change into the /var/lib/mysql directory and examine its content from within:

C:\>docker exec -it My-mysql bash
root@8a0496aaca18:/# cd /var/lib/mysql
root@8a0496aaca18:/var/lib/mysql# ls
...

Finally, the value of the "Type" key is "volume". This means that all the changes you made to the /var/lib/mysql directory will not be persisted later on when you commit the container as a Docker image. When that happens, all the data that you have previously stored in that MySQL will be lost. To resolve this, it's always good to map the /var/lib/mysql directory to a directory on your local computer (so that it can be backed up independently). This topic is beyond the scope of this article, but if you want to know more details, check out the -v option in the docker run command.
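As a minimal sketch of that idea (my own example, not from the article; the host folder C:\mysql-data is arbitrary), the -v option maps a host directory onto /var/lib/mysql when you first create the container:

C:\>docker run --name My-mysql -e MYSQL_ROOT_PASSWORD=password -p 3306:3306 -v C:\mysql-data:/var/lib/mysql -d mysql:latest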
Connecting to the MySQL Container Using a Local MySQL Client
Now that your MySQL server in the container is up and running, it's time to create a database in it, add a table, and insert some records. For this, you can make use of the mysql client that ships with the MySQL installer. You can download the MySQL Community Edition from https://dev.mysql.com/downloads/mysql/ and install the mysql client onto your local computer (during the installation stage, you can choose to only install the client).

For Windows users, the mysql utility is, by default, located in C:\Program Files\MySQL\MySQL Server 8.0\bin, so you need to change into that directory before running the mysql client:

C:\Program Files\MySQL\MySQL Server 8.0\bin>mysql -P 49153 --protocol=tcp -u root -p
Enter password: ********
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.22 MySQL Community Server - GPL


Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

The above command connects to the MySQL server on the container using the port 49153 (replace this with the port number assigned to your own MySQL container) and logs in as root. Once you enter the password, you should be able to see the MySQL prompt:

mysql>

Creating a Database, Table, and Inserting a Row
In the MySQL prompt, enter the following commands:

mysql> CREATE database my_db;
Query OK, 1 row affected (0.02 sec)

mysql> USE my_db;
Database changed

mysql> CREATE TABLE Persons (ID varchar(5) NOT NULL PRIMARY KEY, FirstName VARCHAR(30), LastName VARCHAR(30), Age INT);
Query OK, 0 rows affected (0.07 sec)

mysql> INSERT INTO Persons (ID, FirstName, LastName, Age) VALUES ("0001", "Wei-Meng", "Lee", 25);
Query OK, 1 row affected (0.03 sec)

mysql> SELECT * FROM Persons;
+------+-----------+----------+------+
| ID   | FirstName | LastName | Age  |
+------+-----------+----------+------+
| 0001 | Wei-Meng  | Lee      |   25 |
+------+-----------+----------+------+
1 row in set (0.00 sec)

The above commands first create a database named my_db. They then add a table named Persons to this database and insert a record into the table. Finally, you retrieve the records from the table to verify that the record has been correctly inserted.

Writing a Python Program to Use the MySQL Database
You now want to write a program to connect to the MySQL server on the container. For this, I created a simple Python program named mysql.py, as shown in Listing 5.

Listing 5: Content of the mysql.py file
import MySQLdb

db = MySQLdb.connect(user='root',
                     passwd='password',
                     port=49153,
                     host='127.0.0.1',
                     db='my_db')
cur = db.cursor()

def get_allrecords():
    cur.execute("SELECT * FROM Persons")
    for row in cur.fetchall():
        print(row[0])
        print(row[1])
        print(row[2])
        print(row[3])

get_allrecords()

db.close()

To connect to a MySQL database in Python, I'm using the mysqlclient package. To install it, use the pip install command:

C:\>pip install mysqlclient

To run the program shown in Listing 5, type the following command:

C:\>python mysql.py
0001
Wei-Meng
Lee
25

If you see the above output, your Python has successfully connected to the MySQL server on the container!

Summary
I hope this article has provided a clearer picture of how containerization works. In particular, I used Docker as an example and provided a few examples of how to use Docker images to create containers. Let me know how you're using Docker in your own development environment.

Wei-Meng Lee




The Complete Guide to Vue 3 Plug-ins: Part 2
If you've been regularly building apps with Vue, you've noticed that in every app, you repeat some common functionalities over and over. For instance, you use a Log Component in every Vue app you work on. In addition, you might expose a few global functions or properties that you always find useful and usable in your apps. If this is you, then most probably you can start packing your common code into a Vue plug-in. A Vue plug-in is self-contained code that adds global-level functionality to your app. When you create it, you package it as an NPM package. Then, whenever you need this plug-in, you install it into your app to avoid repetition of components, directives, global functions, and the like.

Bilal Haidar
bhaidar@gmail.com
www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, Microsoft MVP of 10 years, ASP.NET Insider, and has been writing for CODE Magazine since 2007.

With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions. He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer. Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript and TypeScript.

Vuex (https://next.vuex.vuejs.org/) and Vue Router (https://next.router.vuejs.org/) are two examples of Vue plug-ins. You'll use these two plug-ins in almost every Vue app you develop.

In summary, here's a list of possible scenarios for which you might consider building a custom Vue plug-in:

• Adding some global methods or properties
• Adding directives
• Adding global mixins (https://v3.vuejs.org/guide/mixins.html)
• Adding global instance methods
• Adding a library that provides an API of its own while injecting some combination of the above

Generally, I watch for common functionality that I keep copying over and over from one app to another. Based on that, I decide whether to create a plug-in or not. Plug-ins make your life easier and your code more organized by promoting common functionality into a common package that all apps can use. When you change the plug-in source code and publish the package again, you only need to go over all the apps that are using the plug-in and upgrade it.

Vue 3 Plug-ins
Vue 3 introduced a long list of changes and additions to the Vue framework. The most obvious is the way you create and load a Vue app.

Previously, in Vue 2, you instantiated an object of the Vue global constructor that represented the entire app. You used this instance to install plug-ins and define directives, components, and other Vue intrinsic objects. This approach has its own pitfalls and limitations. For instance, if you create multiple Vue apps on the same page, they all share the same Vue global constructor. Hence, if you define a directive on the Vue global constructor, all the instances on the page have access to this directive. You use the Global API (https://vuejs.org/v2/api/#Global-API) to create an app in Vue 2. It has all the functions required to interact with the Vue global constructor.

In Vue 3, things have drastically changed when it comes to defining apps. You no longer use the Vue global constructor to instantiate apps. Instead, Vue 3 introduced the Application API (https://v3.vuejs.org/api/application-api.html), which I will introduce shortly, to standardize creating and instantiating apps. Vue 3 introduced the createApp() function that returns an app instance. It represents the app context and is used to define plug-ins, components, directives, and other objects. In a nutshell, this app instance replaces the Vue instance in Vue 2.

In Vue 3, the global and internal APIs have been restructured with tree-shaking support in mind.

This approach has tremendous advantages, especially when it comes to instantiating multiple Vue apps without cluttering the Vue global space. An app instance creates a boundary and isolates its components, directives, plug-ins, and other Vue intrinsic objects from other app instances.

Moreover, Vue 3 brings over the new Composition API (https://composition-api.vuejs.org/api.html). Vue 2 had the Options API (https://vuejs.org/v2/api/#Options-Data) sitting at its core. In Vue 2, you construct your component as an object with properties. For example, data, props, watch, and many other properties are part of the Options API that you can use to attach functionality onto a component.

In Vue 3, you can still make use of the Options API. It's still there and will continue to be there for a while. This makes migrating Vue 2 apps to Vue 3 much easier, as Vue 3 is backward compatible with the Options API. However, Vue 3 also introduces the Composition API. This API is optional. You use the Composition API inside your Vue component by using the setup() function. This function takes two inputs as parameters: the props and the context.

setup(props, context) {
  const attrs = context.attrs;
  const slots = context.slots;
  const emit = context.emit;
}

This code snippet represents the setup() function signature.

The props object represents the component's props. Whatever props you define on the component are available to the setup() function.


Note that the props object is reactive. This means that the object is updated inside the setup() function when new props are passed into the component.

A context input parameter is an object that contains three main properties:

• attrs
• slots
• emit

Both attrs (https://v3.vuejs.org/api/instance-properties.html#attrs) and slots (https://v3.vuejs.org/api/instance-properties.html#slots) are proxies to the corresponding values on the internal component instance. This ensures that they always expose the latest values even after updates, so that you can destructure them without worrying about accessing a stale reference:

setup(props, { attrs, slots, emit }) {}

Finally, the emit() is a reference to the $emit() (https://v3.vuejs.org/api/instance-methods.html#emit) function on the internal component instance. You can use it inside the setup() function to emit events to the parent component.

Just remember that you can always mix both the Options and Composition APIs inside a single component. This is again helpful when upgrading your existing apps to Vue 3.
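As a quick illustration (this component is my own, not part of the article's sample app), the two styles can live side by side in one component: setup() supplies the reactive count, and an Options API computed property reads it through this:

import { ref } from 'vue';

export default {
  name: 'MixedCounter',
  // Composition API: reactive state and a method returned to the template
  setup() {
    const count = ref(0);
    const increment = () => count.value++;
    return { count, increment };
  },
  // Options API: a computed property that reads the state exposed by setup()
  computed: {
    doubled() {
      return this.count * 2;
    },
  },
};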


Note that any plug-in you develop in Vue 3 must support both APIs. This, of course, makes your plug-in both compatible and flexible to cover all bases.

$attrs now contains all attributes passed to a component, including class and style.

As in Vue 2, you still need to define an install() function to represent your plug-in in Vue 3. This function will be used later by the Vue framework to install your plug-in. The only difference in Vue 3 is that this function receives an app instance. Back in Vue 2, the install() function received the Vue global constructor.

To support the Options API inside a plug-in, you make use of the Application Config (https://v3.vuejs.org/api/application-config.html) that's part of the Application API (https://v3.vuejs.org/api/application-api.html). Listing 1 shows a sample Vue 3 plug-in that uses the Application Config.

Listing 1: i18n Vue 3 plugin
export default {
  install: (app, options) => {
    app.config.globalProperties.$translate = key => {
      return key.split('.').reduce((o, i) => {
        if (o) return o[i]
      }, options)
    }
  }
}

The code in Listing 1 shows how to define a global property using the app.config.globalProperties object. In the coming sections, I'll explore both the Application API and the Application Config.

Listing 2 shows the code for the sample plug-in written in Vue 2. Notice the use of the Vue.prototype object to define instance methods that can be accessed inside a component instance.

Listing 2: i18n Vue 2 plugin
export default {
  install: (Vue, options) => {
    Vue.prototype.$translate = function(key) {
      return key.split('.').reduce((o, i) => {
        if (o) return o[i]
      }, options)
    }
  }
}

In part one of this series (CODE Magazine, January/February 2021), you learned about the provide/inject API. Now, in order to support the Composition API, you must make use of the provide/inject API to provide the plug-in functionality inside the setup() function. Listing 3 shows how to add support for the provide/inject API.

Listing 3: i18n Vue 3 plugin with support for Composition API
export default {
  install: (app, options) => {
    function translate(key) {
      return key.split(".").reduce((o, i) => {
        if (o) return o[i];
      }, options);
    }

    app.config.globalProperties.$translate = translate;

    app.provide("i18n", {
      translate
    });
  }
};

In addition to making the translate() function available on the component instance level, the plug-in provides the same function and makes it available inside the setup() function.

You can play around with the plug-in in Listing 3 by visiting this link: https://codesandbox.io/s/vue3-plugin-7rmc8

Now that you have a general overview of how to develop Vue 3 plug-ins, let's move forward and explore the Application API and the Application Config in detail.

Application API
Vue 3 provides the new Application API. You create an app using the createApp() method. It returns an app instance that exposes the entire Application API.

import { createApp } from 'vue'
const app = createApp({})
app.mount('#app')

The code snippet creates and mounts a new Vue 3 app using the createApp() function. The app instance exposes the following methods and objects. Let's look at some of the functions. You can read more about each function by following its corresponding link.

Any APIs that globally mutate Vue's behavior are now moved to the app instance in Vue 3.

The component() Function
Use the component() function to register or retrieve a global component (https://v3.vuejs.org/api/application-api.html#component):

// register a component
app.component('my-component', {
  /* ... */
})

// retrieve a registered component
const MyComponent =
  app.component('my-component')

The config Object
The config object is the app's global configuration object (https://v3.vuejs.org/api/application-api.html#config):

app.config = {...}

The directive() Function
Use the directive() function to register or retrieve a global directive (https://v3.vuejs.org/api/application-api.html#directive).

To register a directive using an object:

app.directive('my-directive', {
  // Directive has a set of lifecycle hooks:
  beforeMount() {},
  /* ... */
})

To register a directive using a function:

app.directive('my-directive', () => {
  /* ... */
})

To retrieve a directive:

const myDirective =
  app.directive('my-directive')

The mixin() Function
This function is used to register an app-global mixin that's available in all component instances (https://v3.vuejs.org/api/application-api.html#mixin):

app.mixin({
  created() {
    console.log('A global mixin!')
  }
})

The mount() Function
The mount() function mounts a root component of the application instance on the provided DOM element (https://v3.vuejs.org/api/application-api.html#mount).

The provide() Function
The provide() function sets a value that can be injected into all components within the application. Components should use inject to receive the provided values (https://v3.vuejs.org/api/application-api.html#provide).

You can read part one of this series (in CODE Magazine, January/February 2021), where I divulge the details of the provide/inject API in Vue 3.

The unmount() Function
The unmount() function unmounts a root component of the application instance from the provided DOM element (https://v3.vuejs.org/api/application-api.html#unmount).

The use() Function
The use() function installs a Vue plug-in. This function accepts, as a first input, an object representing the plug-in and having an install() function. It also accepts a function representing the plug-in itself. In addition, it accepts, as a second input, an options parameter. Vue automatically passes the options input parameter to either the install() function or the plug-in function, depending on what's being passed as the first input parameter (https://v3.vuejs.org/api/application-api.html#use).
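For example, here's a tiny, invented plug-in registered with an options object; Vue forwards the second argument of use() to the plug-in's install() function:

import { createApp } from 'vue';
import App from './App.vue';

// A minimal illustrative plug-in exposing one global method
const greeterPlugin = {
  install(app, options) {
    app.config.globalProperties.$greet =
      (name) => `${options.greeting}, ${name}!`;
  },
};

const app = createApp(App);
// { greeting: 'Hello' } arrives as install()'s second parameter
app.use(greeterPlugin, { greeting: 'Hello' });
app.mount('#app');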


This is just a brief summary of the available functions on the Application API. You can read more about each function by following its corresponding link.

Now let's focus on the app.config object.

Application Config
An app config is an object that you use to store Vue app global configurations. This object exposes several properties and functions. In this article, I'll focus on only a few of them.

const app = createApp(App);
app.config = { ... };
app.mount('#app');

The snippet creates a new Vue 3 app, sets the app.config object, and finally mounts the app into a DOM root element.

Let's explore the most important features of the Application Config as far as this article is concerned.

The globalProperties Object
The app.config.globalProperties object allows you to add a global property that can be accessed in any component instance inside the application. The component's property takes priority when there are conflicting keys.

You define a global property:

app.config.globalProperties.foo = 'bar'

You retrieve a global property inside a component instance:

this.foo

You can also define a global instance method:

app.config.globalProperties.$translate =
  (key) => /* ... */

To use this method inside a component instance:

this.$translate('key');

The errorHandler() Function
The errorHandler() function defines a handler for uncaught errors that occur during the execution of the app. The Vue runtime calls the error handler, providing information about the error thrown and some additional information.

app.config.errorHandler = (err, vm, info) => {
  console.log(err);
  console.log(vm);
  console.log(info);
};

The err input parameter represents the actual error object, the info input parameter is a Vue-specific error string, and the vm input parameter is the actual component instance.

The warnHandler() Function
The warnHandler() function defines a handler for runtime Vue warnings. This handler is ignored in production.

For the sake of building a plug-in in this article, I'll focus on the app.config.globalProperties object.

Vue Environment Variables Plug-in
Lights! Camera! Action!

In this section, I'll implement a new Vue 3 plug-in that loads environment variables and makes them available in your app.

Before diving into the plug-in details, let's explore how the Vue CLI (https://cli.vuejs.org/) allows you to use environment variables in your app that are defined inside a .env file.

Vue CLI Service
The Vue CLI comes with a binary named vue-cli-service. This binary is the brains behind the Vue CLI.

The command npx vue-cli-service serve starts a new dev server, based on the webpack-dev-server (https://github.com/webpack/webpack-dev-server), that comes with Hot-Module-Replacement (HMR) working out of the box.

The command npx vue-cli-service build produces a production-ready bundle in the dist/ directory, with minification for JS/CSS/HTML.

You can check the list of commands available from this binary at any time by running the following command:

npx vue-cli-service help

When you use the vue-cli-service to run the app, the environment variables are loaded from the .env file.

The Vue CLI defines a concept named mode. There are three typical modes in the Vue CLI:

• development
• test
• production

You can use the mode when defining environment variable files in your app. The available files are listed in Table 1.

.env file name       description
.env                 loaded in all cases
.env.local           loaded in all cases, ignored by git
.env.[mode]          only loaded in the specified mode
.env.[mode].local    only loaded in the specified mode, ignored by git

Table 1: Available files

When you run the command npx vue-cli-service serve, the NODE_ENV environment variable is set to development. Hence, any of the following files are loaded if present: .env, .env.local, and .env.development.

When you run the command npx vue-cli-service build, the NODE_ENV environment variable is set to production.

The Vue CLI automatically loads the environment variables defined in the app.

The one requirement to define Vue environment variables is to prefix their names with VUE_APP_. Let's say you define the following environment variable inside the .env file:

VUE_APP_APIKEY=...

Next, inside your app, you'll access this environment variable as follows:

process.env.VUE_APP_APIKEY

You can read about the Modes and Environment Variables in the Vue CLI by following this link: https://cli.vuejs.org/guide/mode-and-env.html#modes.

The important takeaway of this section is that the Vue CLI loads the environment variables and makes them available to your app to use.

Plug-in Introduction
The Vue 3 plug-in that you're going to implement in this section depends on the vue-cli-service loading the environment variables from all the different files and making those environment variables available to the app without the need for the VUE_APP_ prefix.


Let's say you have defined the following environment variable:

VUE_APP_APIKEY=...

Eventually, when using the Options API, you'll be able to access this environment variable as follows:

this.$env.APIKEY

Some CLI plug-ins inject additional commands into vue-cli-service. You can see all injected commands by running: npx vue-cli-service help.

When using the Composition API, you'll be able to access this environment variable as follows:

const { APIKEY } = inject('env');

Let's look at the plug-in implementation in detail.

Plug-in Implementation
To start with, let's make sure you have the latest bits of the Vue CLI installed locally on your computer. The latest version of the Vue CLI that I'll be using in this article is v4.5.9 (https://github.com/vuejs/vue-cli/releases/tag/v4.5.9).

Install or Update Vue CLI
If you already have an older version of the Vue CLI, you can upgrade it by running the command:

npm update -g @vue/cli
# OR
yarn global upgrade --latest @vue/cli

If you don't have the Vue CLI installed locally, install it by running the command:

npm install -g @vue/cli
# OR
yarn global add @vue/cli

To verify the Vue CLI installation, on your terminal window, run the following command and check the version:

> vue --version
@vue/cli 4.5.9

Create a New Vue 3 App
Now that you've installed the Vue CLI on your computer, open a terminal window and run the following command to create a new Vue 3 app:

❯ vue create vue-env-variables

Vue CLI v4.5.9
? Please pick a preset:
  Default ([Vue 2] babel, eslint)
❯ Default (Vue 3 Preview)
  ([Vue 3] babel, eslint)
  Manually select features

Make sure you select the second option, labeled as Default (Vue 3 Preview). This option guarantees that the Vue CLI creates a Vue 3 rather than a Vue 2 app.

Once the Vue CLI finishes scaffolding and creating the app for you, all you need to do is change the directory to the app root folder and run the app:

cd vue-env-variables
npm run serve

The command runs the app and makes it available on port 8080. You can open the app in a browser by visiting http://localhost:8080/.

What's left for now is to install the following two NPM packages to allow you to use SASS when writing your CSS:

npm i sass-loader
npm i node-sass

Note that the Vue CLI initializes a new GIT repository inside the app root folder and creates an initial commit to track all the files inside GIT.

Add the .env File
Let's add a .env file into the root folder of the project with the following content:

VUE_APP_CATSAPIKEY=e8d29058-baa0-4fbd-b5c2-3fa67b13b0d8
VUE_APP_CATSSEARCHAPI=https://api.thecatapi.com/v1/images/search

I've placed two Vue environment variables that you're going to use later to access an online images API (the same one I used in part one of this series). Save the file.

Now that you have the app created and the .env file populated with two environment variables, let's add the plug-in.

Add the Plug-in
Inside the /src folder, add a new JavaScript file and name it install.js. Inside this file, paste the code shown in Listing 4.

Let's focus on the install() function in Listing 4.

The file also imports the ./env-helper.js helper module. It exports the getVueEnvVariables() function that the plug-in uses.

The module file defines a constant of type Symbol (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol). The Symbol() function returns a unique identifier. It can be used as a key to provide the environment variables to the app, as you will see shortly.

In addition, it exports a default ES6 plug-in module (https://hacks.mozilla.org/2015/08/es6-in-depth-modules/) containing a single install() function.


Listing 4: Vue Env Variable plugin
import { inject } from 'vue';
import { getVueEnvVariables } from './env-helper';

export const envSymbol = Symbol();

export default {
  // eslint-disable-next-line no-unused-vars
  install(app, options) {
    // access process.env object
    const env = process.env;

    // get an object of all vue variables
    const vueVariables = getVueEnvVariables(env);

    // make $env property available
    // to all components using Options API
    app.config.globalProperties.$env = vueVariables || {};

    // provide the env variables to support all components
    // using Composition API or Options API
    app.provide(envSymbol, vueVariables);
  },
};

export function useEnv() {
  const env = inject(envSymbol);
  if (!env) throw new Error('No env provided!');

  return env;
}

Moreover, it exports the useEnv() function that you can use inside the Composition API as a hook to retrieve the environment variables' object.

The install() function expects two input parameters, the app instance and options, that your plug-in might use. Upon registering the plug-in later on, you can pass any additional settings/configurations via the options parameter.

First of all, the code loads all environment variables from the process.env object. The process object (https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process) is a global that provides information about, and control over, the current Node.js process. The process.env is an object property that contains the user environment. This also includes the environment variables defined on the local system.

Remember that the Vue CLI is nothing but a Node.js command-line interface. It makes all the environment variables defined on the local system available via the process.env object. The code stores the user environment in a local variable named env. Then it passes this variable to the getVueEnvVariables() function that the ./env-helper.js module defines.

Let's explore this function in more depth. Listing 5 shows the entire source code for this function.

Listing 5: getVueEnvVariables() function
function getVueEnvVariables(envVariables) {
  return Object.entries(envVariables).reduce(
    (accum, envVariable) => {
      const vueVariable = getVueEnvVariable(envVariable);
      // ignore non-Vue variables
      if (!vueVariable) return accum;

      // accumulate the env variable objects
      return { ...accum, ...vueVariable };
    }, {});
}

Given the .env file that you've defined previously, the envVariables object, when printed, should look as follows:

BASE_URL: "/"

NODE_ENV: "development"

VUE_APP_CATSAPIKEY:
"e8d29058-baa0-4fbd-b5c2-3fa67b13b0d8"

VUE_APP_CATSSEARCHAPI:
"https://api.thecatapi.com/v1/images/search"

Note that the Vue CLI adds two additional environment variables: BASE_URL and NODE_ENV.

The function tries to reduce the envVariables input parameter into an object containing only the environment variables whose names start with the VUE_APP_ prefix.

It starts by looping over the key/value pairs in the envVariables input parameter. For each key/value pair, it calls the getVueEnvVariable() function.

Listing 6 defines the getVueEnvVariable() function.

Listing 6: getVueEnvVariable() function
function getVueEnvVariable(envVariable) {
  // access one env variable at a time
  const [key, value] = envVariable;
  // return if the env variable name
  // doesn't start with VUE_APP_
  if (!isVueEnvVariable(key)) return null;

  // remove the prefix "VUE_APP_" from the
  // env variable name
  const cleanedKey = cleanVueVariable(key);

  // construct a new key:value object
  return {
    [cleanedKey]: value,
  };
}

The function in Listing 6 receives a single environment variable in the form of a key/value pair object. It then checks if this is a Vue environment variable; that is, whether the key starts with the VUE_APP_ prefix.

The isVueEnvVariable() function uses a regular expression to check if the environment variable key starts with the VUE_APP_ prefix:

function isVueEnvVariable(envVariableKey) {
  return envVariableKey.match(/^VUE_APP_{1}/g);
}

The function returns null if it's not a Vue environment variable. Otherwise, it calls the cleanVueVariable() function that replaces the VUE_APP_ prefix with an empty string:

function cleanVueVariable(envVariableKey) {
  return envVariableKey.replace(
    /^VUE_APP_{1}/g,
    '');
}


Finally, it returns a new object with the newly constructed environment variable key, keeping the same value.

For instance, given that the envVariable input parameter has the following value:

{
  VUE_APP_CATSAPIKEY:
    "e8d29058-baa0-4fbd-b5c2-3fa67b13b0d8"
}

this function returns a new object as follows:

{
  CATSAPIKEY:
    "e8d29058-baa0-4fbd-b5c2-3fa67b13b0d8"
}

Now back to the getVueEnvVariables() function where you left off. The getVueEnvVariable() function either returns null or an object representing the Vue environment variable. If the return value is null, do nothing and move to the next environment variable. Otherwise, merge this new Vue environment variable object with the rest of the similar objects and return.

The end result of this function, given the existing .env file you added previously, is the following object:

{
  CATSAPIKEY:
    "e8d29058-baa0-4fbd-b5c2-3fa67b13b0d8",
  CATSSEARCHAPI:
    "https://api.thecatapi.com/v1/images/search",
}

Back to Listing 4. The variable vueVariables stores the object returned from calling the getVueEnvVariables() function with the env input parameter.

// make $env property available
// to all components using Options API

app.config.globalProperties.$env =
  vueVariables || {};

The plug-in exposes the vueVariables object on a global property named $env that is defined on the app.config.globalProperties object. To revise this, go back to the section on Application Config.

Now the $env object is exposed to the Options API for all components in the app. Still, you need to expose the same object inside the Composition API.

app.provide(envSymbol, vueVariables);

To do so, expose the object via the app.provide() function. As you know, this function expects key and value input parameters. It uses the envSymbol variable as the key and, of course, the value is the environment variables' object.

Finally, let's explore the useEnv() function.

export function useEnv() {
  const env = inject(envSymbol);
  if (!env) throw new Error('No env provided!');

  return env;
}


Any component that wants to access the environment variables' object needs to use this function as a hook inside the setup() function of the Composition API.

It uses the built-in inject() function that Vue 3 defines to inject the environment variables, provided by the plug-in, into a local variable named env. It throws an exception if there are no environment variables provided. Otherwise, it returns the env variable containing all the environment variables.

That's it! The plug-in is ready.

Let's use the plug-in in the next section.

Using the Plug-in
Let's use the plug-in in a sample Vue app. For this purpose, I'll be using the same example code that I used in part one of this series. The goal of the app is to display two random cat images.

You can see the app live by following this link: https://stackblitz.com/edit/vue3-prop-drilling-composition-api-provide-inject.

Open the App.vue file and paste the source code shown in Listing 7.

Listing 7: App Component
<template>
  <div class="app">
    <h2>My Cats App!</h2>
    <cats-collection />
  </div>
</template>

<script>
import CatsCollection from './components/CatsCollection';

export default {
  name: 'App',
  components: {
    'cats-collection': CatsCollection,
  },
};
</script>

<style>
#app {
  font-family: Avenir, Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
</style>

The App component uses the CatsCollection component. Listing 8 defines this component.

Listing 8: CatsCollection Component
<template>
  <section class="cats-collection">
    <h4>My Best Collection!</h4>
    <section class="cats-collection__favorites">
      <favorite-cat :index="1" />
      <favorite-cat :index="2" />
    </section>
  </section>
</template>

<script>
import FavoriteCat from './FavoriteCat';

export default {
  name: 'CatsCollection',
  components: {
    'favorite-cat': FavoriteCat,
  },
};
</script>

<style lang="scss" scoped>
.cats-collection {
  width: 90%;
  margin: 0px auto;
  border-radius: 5px 5px 0 0;
  padding: 20px 0;
  background: lightblue;

  &__favorites {
    display: flex;
    width: 100%;
    justify-content: space-around;
  }
}
</style>

The CatsCollection component instantiates two instances of the FavoriteCat component. For each instance, it sets the index property to an integer representing the ID of the component. Listing 9 defines the FavoriteCat component. This component displays a random image using the Cat API (https://thecatapi.com/).

Listing 9: FavoriteCat component
<template>
  <section class="favorite-cat">
    <p>Favorite Cat {{ index }}</p>
    <img v-if="imageUrl" :src="imageUrl" alt="Cat" />
  </section>
</template>

<script>
import { ref, onMounted, computed } from 'vue';
import axios from 'axios';
import { useEnv } from '../install';

export default {
  name: 'FavoriteCat',
  props: {
    index: {
      type: Number,
      default: 1,
    },
  },
  setup(props) {
    const env = useEnv();
    let imageUrl = ref('');

    const loadNextImage = async () => {
      try {
        const { CATSAPIKEY, CATSSEARCHAPI } = env;
        axios.defaults.headers.common['x-api-key']
          = CATSAPIKEY;

        let response = await axios.get(CATSSEARCHAPI, {
          params: { limit: 1, size: 'full' },
        });

        const { url } = response.data[0];
        imageUrl.value = url;
      } catch (err) {
        console.log(err);
      }
    };

    onMounted(() => {
      loadNextImage();
    });

    return {
      imageIndex: computed(() => props.index),
      imageUrl,
    };
  },
};
</script>

<style lang="scss" scoped>
.favorite-cat {
  img {
    max-width: 200px;
    border-radius: 5px;
  }
}
</style>

Let's look at the setup() function in more depth. Listing 10 shows the setup() function.

Listing 10: setup() function
setup(props) {
  const env = useEnv();
  let imageUrl = ref('');

  onMounted(() => {
    loadNextImage();
  });

  return {
    imageIndex: computed(() => props.index),
    imageUrl,
  };
},

To start with, the setup() function receives as input the props that you define on the component. In this case, the props object contains a single property named index.

Then it uses the useEnv() hook to get access to the environment variables provided by the plug-in and assigns the returned object to a local constant named env.

It also defines the reactive imageUrl variable to be a ref(''). You can read more about ref() here: https://composition-api.vuejs.org/api.html#ref.

It calls the loadNextImage() function when the component mounts, using the onMounted() lifecycle hook (https://v3.vuejs.org/api/composition-api.html#lifecycle-hooks).

Finally, it returns both the imageIndex, as a computed read-only property, and the imageUrl.

Let's move on to the loadNextImage() function. Listing 11 shows the loadNextImage() function.

In the context of this article, you mostly care about this line of code:

const { CATSAPIKEY, CATSSEARCHAPI } = env;


Listing 11: loadNextImage() function
const loadNextImage = async () => {
  try {
    const { CATSAPIKEY, CATSSEARCHAPI } = env;
    axios.defaults.headers.common['x-api-key']
      = CATSAPIKEY;

    let response = await axios.get(CATSSEARCHAPI, {
      params: { limit: 1, size: 'full' },
    });

    const { url } = response.data[0];
    imageUrl.value = url;
  } catch (err) {
    console.log(err);
  }
};

You're destructuring the env variable and extracting the two environment variables defined inside the .env file: the Cats API key and the Cats Search API base URL.

Remember that the .env file already defines these environment variables using the VUE_APP_ prefix. However, your plug-in exposes the same environment variables without prefixes.

The rest of the code in the function communicates with the remote API to query for a new random image.

Back in the terminal window, run the following command to run the app:

npm run serve

This command runs the app in development mode and makes it available at port 8080. Open a browser of your choice, navigate to http://localhost:8080, and you shall see something similar to Figure 1.

Figure 1: App running

That's it! The app communicates with the remote API, supplying the API key and the Search API URL stored in the .env file. The new Vue Env Variables plug-in successfully loads those environment variables and provides them to the app without the VUE_APP_ prefix.

The source code for the plug-in is hosted on GitHub and can be accessed by visiting the link https://github.com/bhaidar/vue-env-variables.

Bonus: Prepare and Publish a Plug-in to NPM
Instead of going around in circles, I'll point you in the direction of a great guide on the Internet to help you prepare and publish your plug-in as an NPM package: Create and Publish Your First Vue.JS Plug-in on NPM (https://5balloons.info/create-publish-you-first-vue-plugin-on-npm-the-right-way/).

Learning more about how the Vue CLI builds your package via the vue-cli-service is recommended, and here's a good place to find out more: https://cli.vuejs.org/guide/build-targets.html#app.

I've already built and published this plug-in as an NPM package here: https://www.npmjs.com/package/vue-env-variables.

You can open any Vue 3 app and install the NPM package by running the following command:

npm i vue-env-variables

Inside the main.js file of the Vue 3 app, you start by importing the plug-in:

import VueEnvVariables from "vue-env-variables";

Then ask the app instance to make use of this plug-in:

// make use of the plugin
app.use(VueEnvVariables);
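Putting the pieces together, a minimal main.js wiring the plug-in into a Vue 3 app might look like this sketch (assuming the default Vue CLI project layout):

import { createApp } from 'vue';
import App from './App.vue';
import VueEnvVariables from 'vue-env-variables';

const app = createApp(App);

// Register the plug-in before mounting so that $env and the
// useEnv() hook are available to every component.
app.use(VueEnvVariables);

app.mount('#app');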

That's all! Enjoy it.

Conclusion
This is the end of this series on building Vue 3 plug-ins that support both the Options API and the Composition API. During this transitional period, it's important to keep building plug-ins that are backward compatible with the Options API.

Always look at the common functionalities in your apps and try to promote them to a plug-in. This is very useful and time effective. You may choose to benefit others in the community by sharing your plug-in with them.

 Bilal Haidar


ONLINE QUICK ID 2103081

Migrating Monolithic Apps to Multi-Platform Product Lines with .NET 5

Suppose you must maintain the code base of a software product that incorporates more than 15 years of legacy support. The capabilities of the programming language evolved over time, new design ideas came up, insights in the domain emerged, but product owners and management decided to skip necessary refactoring of the code base due to time pressure. This sounds familiar to many developers, and some may already start losing their hair from just thinking about it. The reason for the hair loss is obvious, but remains unvalued by management and product owners: Whenever a code base grows for a long period of time, the design of the code slowly turns into the famous big-ball-of-mud, a nightmare for all of us. Even worse, when developers suddenly need to deliver a new application fast, they inevitably fail, because most of the code can't be reused somewhere else.

Alexander Pirker, PhD
pirker_alexander@hotmail.com

Alexander is a Software Architect and Team Leader for Cloud Services at MED-EL, as well as a Senior Security Consultant at RootSys. He has experience in designing microservices and desktop or mobile applications but also in writing or migrating them. He received a PhD in Physics from the University of Innsbruck and holds a master's degree in Technical Mathematics and a master's degree in Biomedical Informatics. In his free time, he likes to go to the gym, but also enjoys hiking in the Alps.

In this article, I'll show you one way out of the misery. It doesn't matter whether you start on a green field, or you already live the nightmare: I cover both scenarios. With a green field, you can immediately start out in the right way. For the ones living the nightmare, I present a migration process to slowly turn the big-ball-of-mud into something clean and useful, which can serve as a framework for various software products (which I refer to as a software product line framework here).

The common ground for both scenarios builds around two key ingredients: the well-known architectural style "ports-and-adapters," sometimes also referred to as "hexagonal architecture," and design patterns and principles from domain-driven design (DDD). The former ingredient tells you how to structure an application in general, and importantly, it commands you to write business logic completely independent of the environment in which the software runs. The latter ingredient gives you a hint where to split the business logic into (almost) independent assemblies. Combining these ingredients provides a very appealing dish using .NET 5: a software product line framework of business logic assemblies for multiple platforms.

In the last part, I'll present a migration process that allows you to slowly migrate there. For that purpose, I outline how to deal and integrate with the legacy code base for new business logic, but also how to migrate existing business logic into a multi-platform software product line framework.

The Common Ground
Before you prepare the dish, you need to know the ingredients: ports-and-adapters as an architectural style, and domain-driven design for creating valuable software. Let's get to know them.

Ingredient 1: Ports-and-Adapters
There are many different architectural styles from which architects and developers may choose, each with advantages and disadvantages. Which one to choose depends on several factors, like target platform, performance, etc. But also, the business domain that the software needs to support influences the choice of style. For example, for software that controls a manufacturing pipeline, a pipes-and-filters design appears more appropriate than a data-centric approach. However, when the goal is to create software that needs to potentially run in multiple environments, the ports-and-adapters style fits perfectly. But why? What's so special about it?

Figure 1 depicts the ports-and-adapters architectural style. The business logic lies at the core of the application and the environment surrounds it. This sounds great for software that needs to run in many different environments, like for example on a mobile platform or on a desktop.

Figure 1: In the ports-and-adapters architectural style, the environment depends on the business logic.

A re-implementation of the environment enables you to reuse the same business logic to build a new application, maybe even on a mobile device, as shown in Figure 2.

Figure 2: From a single business logic component, multiple applications emerge by re-implementing the environment.

How does it work in detail? What do you really have to do to use the full strength of this ingredient?

Before exploring that, I need to explain the main idea of ports and adapters: The business logic component drives the application because it delivers value to the business, not the other way around. Hence, the business logic needs


to be independent of the environment in which it runs, for example, the application runtime, the platform, etc. Only this turns the business logic into a reusable component. The business logic needs to interact with the environment at some points, like, for example, to persist data or to communicate with an external service. To accomplish that, the business logic defines ports, which abstract such interactions with the environment. Specifically, every time the business logic needs to interact with the environment, the business logic component declares a C# interface for that interaction. You call an implementation of such an interface an adapter, and implementing all of them makes up the application (except the user interface interactions with the business logic). This principle, also known as the dependency-inversion principle (the "D" of the SOLID principles), enables you to write business logic free from concrete implementations of any environment.

Let's take a close look at how it works in detail. Suppose you must write an app for Android and iOS using Xamarin that needs to communicate with a fitness tracker peripheral device via Bluetooth. The business logic for the "FitnessTracker" app should get re-used on both platforms to minimize implementation efforts. Hence, the development team defines in the business logic assembly an interface IBluetoothService to get the latest fitness data from the fitness tracker device:

interface IBluetoothService
{
    IList<RawFitnessData> GetFitnessData();
}

In the respective platform component, they implement that interface in a platform-dependent way to cope with the platform needs regarding Bluetooth connections. See Figure 3 for details.

Figure 3: The ports-and-adapters architectural style enables both apps to use the same business logic.

The business logic assembly of the Fitness Tracker app defines an interface for reading the fitness data from the external Bluetooth device, referred to as the "peripheral" from now on. The respective platform-dependent assemblies implement this interface to deal with the specialties of each platform regarding Bluetooth.
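To sketch what such an adapter might look like (the class and member names here are illustrative, not from an actual code base), consider an Android-side implementation:

using System.Collections.Generic;

// Lives in the Android-specific assembly, not in the business logic.
public class AndroidBluetoothService : IBluetoothService
{
    public IList<RawFitnessData> GetFitnessData()
    {
        var data = new List<RawFitnessData>();

        // Platform-specific part: connect to the peripheral through
        // the Android Bluetooth stack, read the raw records, and map
        // each of them into a RawFitnessData instance.

        return data;
    }
}

The business logic never references this class directly; it only sees the IBluetoothService port, typically supplied through constructor injection.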


Ingredient 2: Domain-Driven Design
Ports-and-adapters architectures isolate the business logic of an application. How should you now implement this component? How do you structure it? Should you create one component covering the whole business logic? Many questions arise, and luckily, I have an answer for them: domain-driven design, also known as DDD.

For modeling and structuring business logic components, domain-driven design is a standard process. It pursues a very close collaboration between software developers and domain experts. One goal governs this close collaboration: aligning the software design, or software model, with the model of the business domain the software supports. This sounds somehow easy and clear; nevertheless, many of us (including myself) have failed at this task.

The reason: We are developers, not domain experts! Obvious, isn't it? However, if the goal is to align the software model with the business domain, then we must talk a lot to the domain expert to grasp his knowledge and reflect it in our design. For that purpose, DDD suggests that developers and domain experts first agree on a shared language, referred to as the ubiquitous language. Both use only the terms from this language when discussing the business domain, to prevent misunderstandings from the beginning. To transform such discussions now into a meaningful model, DDD provides a rich tool set of tactical and strategic patterns.

The tactical patterns help you to come up with a software design, also referred to as a domain model, which captures the relationships of objects within the business domain. Because not all objects within the business domain are of the same kind, the tactical patterns feature several different types of model elements, including (but not limited to):

• Entities characterize objects with an identity throughout their life cycle.
• Value-objects are objects without such an identity, where only the attributes of the object matter.
• Aggregates characterize object graphs that appear as a single object. In general, they build around invariants.
• Repositories correspond to objects that persist and query aggregates.

These types help you to design the domain model.

Too abstract? Let's try it out on a concrete example. Consider again the Fitness Tracker app from before. The domain expert, a running professional, gives us the following insights: "Each run depends on my condition on that day, so two runs are never the same for me. When I go for a run, I want to see how long I need for each individual lap I run. An individual lap is at most a quarter mile long. Sometimes I want to decide on my own when the lap ends. Further, I would like to visualize my heart rate after the run." Of course, I'm simplifying here for the sake of clarity. A real-world model would be much more complex.

Anyway, from the conversation with the running professional, I infer that there should be something like a Run object within the domain model. Further, I identify individual laps, so a Lap object wouldn't hurt. The length of such a lap is up to the runner, but at most, a quarter mile. Finally, the running professional wants to visualize his heart rate afterwards, therefore I add a HeartRate object to the domain model. Which type do you associate to each of these objects, and how do they relate?

The Lap and HeartRate objects correspond to value-objects because they have no meaningful identity to the runner. They only make sense in the context of a specific run. However, the Run object has an identity to the runner ("Each run highly depends on my condition on that day"). Importantly, Run implicitly also defines two invariants:

• There is at least one Lap for each run ("When I go for a run, I need to see how long I need for each individual lap I run").
• Each run comes with a HeartRate object ("Further, I would like to visualize my heart rate after the run").

Therefore, the Run object turns out to be an aggregate. For the full domain model, see Figure 4.

Figure 4: The domain model for the Fitness Tracker app for runners
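Expressed in code, a first cut of the aggregate might look like the following sketch; the types and guard clauses are my illustration of the two invariants, not code taken from the article:

using System;
using System.Collections.Generic;

public record Lap(double DistanceInMiles, TimeSpan Duration);    // value-object
public record HeartRate(DateTime Timestamp, int BeatsPerMinute); // value-object

public class Run // aggregate root: it has an identity over its life cycle
{
    private readonly List<Lap> _laps = new();
    private readonly List<HeartRate> _heartRates = new();

    public Guid Id { get; } = Guid.NewGuid();
    public IReadOnlyList<Lap> Laps => _laps;
    public IReadOnlyList<HeartRate> HeartRates => _heartRates;

    // Invariant: there is at least one Lap for each run.
    public Run(Lap firstLap) => AddLap(firstLap);

    public void AddLap(Lap lap)
    {
        // Invariant: an individual lap is at most a quarter mile long.
        if (lap.DistanceInMiles > 0.25)
            throw new ArgumentException(
                "A lap is at most a quarter mile long.", nameof(lap));
        _laps.Add(lap);
    }

    // The heart-rate series that belongs to this run.
    public void RecordHeartRate(HeartRate sample) => _heartRates.Add(sample);
}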


The strategic patterns in domain-driven design tell you to which part of a domain a concrete domain model applies. In most cases, the business domain breaks down into smaller units, or sub-domains. Such individual sub-domains address a certain area of the whole business, and the objects within them form a highly cohesive part with very few relationships to other sub-domains. Very often, sub-domains emerge from organizational structures or individual areas of a business. DDD defines the notion of the so-called bounded context for such sub-domains, which ultimately contains the business logic to support them in software. Therefore, the whole business logic of an application designed using DDD comprises several loosely coupled bounded context components.

Let's work this concept out using the Fitness Tracker app. The product owner decides that the app in the next release should additionally support HIIT (high intensity interval training) workouts and biking. After several rounds of discussions with the domain experts, the software developers conclude that the business domain breaks down into two domains: HIIT and Endurance (running and biking have a lot in common in this simplified example). Each of them comes with a domain model, and the models remain independent. Figure 5 depicts the result.

Figure 5: Two independent bounded contexts support the next release of the Fitness Tracker app. Running and biking emerge into a single bounded context, the Endurance bounded context.

This concept now serves as the starting point for your software product line framework.

Let's Cook It Up and Prepare the Dish!
Ports-and-adapters and domain-driven design provide the main ingredients to the dish: a software product line framework for multiple platforms. How? Here is the recipe:

1. Identify your business domain and break it down into smaller sub-domains.
2. Come up with a domain model for each of the sub-domains, leading to a bounded context.
3. Every time the domain model needs to interact with the environment, create a port for it.
4. Implement the domain model of each bounded context in a platform-independent assembly.

The first step of this recipe defines the scope of the whole software product line. It declares which sub-domains you'll address with your framework. Step two establishes, on the one hand, a design for each individual sub-domain, and on the other hand, it settles the implementation boundaries in terms of assemblies. Hence, the second step sets the granularity of your software product line framework. The third step in the recipe turns your software product line framework into a real multi-platform dish because it eliminates all environment dependencies from your bounded contexts. Finally, the implementation of step four provides you with a set of reusable assemblies for multiple platforms.

Let's try the recipe together with the Fitness Tracker app from before. You start by adding repositories to the bounded contexts, because the app should store the fitness data somewhere. Recall that the app reads the fitness data from a peripheral Bluetooth device, as Figure 3 indicates with the IBluetoothService interface. You further add in the Endurance bounded context a service to read GPS data from the mobile phone to track the activity (unfortunately, there's no GPS tracker included in the peripheral device), and you add a so-called HIIT timer service to the HIIT bounded context. For the communication with the fitness tracker peripheral, you create a new bounded context, the Peripherals bounded context. This bounded context supports the proprietary protocols and data formats that the fitness tracker peripheral implements, independent of the communication interface like Bluetooth. Observe that the data services of the Endurance and HIIT bounded contexts use the PeripheralService from the Peripherals bounded context to read data from the peripheral. Figure 6 summarizes the new situation.

Figure 6: Some changes happened to the business domain of the Fitness tracker app.

Figure 6 is the output of step two of the recipe, providing you with domain models for the bounded contexts Endurance, HIIT, and Peripherals. It leads to an interesting insight: The Endurance and HIIT bounded contexts both depend on the Peripherals bounded context. That means, whenever you want to use either of them, you need to include the Peripherals bounded context. However, this design is a (not yet multi-platform) software product line framework! But why?

The design of individual bounded contexts with very few relationships among each other allows you to modularly assemble new applications from them. You build new applications by simply including only the bounded contexts (and their dependents) you want to deliver in the new application instead of all of them. For instance, if the business decides to deliver a small app just including endurance training, then the app comprises the Endurance and Peripherals bounded contexts ("Endurance tracker app"). Or, if the business decides to deliver another app just supporting HIIT workouts, then it comprises the HIIT and Peripherals bounded contexts ("HIIT workout tracker app"). Finally, the premium version could feature all bounded contexts ("Fitness tracker app"). Figure 7 shows the different configurations of the framework into concrete software products.

Figure 7: The power of the software product line framework for the fitness tracker business: Three different applications composed from one framework.

To turn the framework now into a multi-platform dish, you need to apply step three of the recipe. As Figure 6 shows, the bounded contexts interact at several points with the environment. Specifically:

• Endurance bounded context: TrainingRepository, GpsService
• HIIT bounded context: TrainingRepository, HiitTimerService
• Peripherals bounded context: BluetoothService


The TrainingRepository classes depend on the storage technology the app uses, whereas the classes GpsService, HiitTimer, and BluetoothService depend on the platform the app runs on. Therefore, if you want to turn the framework platform-independent, you must push the implementations of all the repositories and all the services to the environment. This means turning the afore-mentioned repository implementations and service implementations into interfaces and implementing them as adapters outside, which provides a multi-platform software product line framework; see Figure 8.

Figure 8: After step three of the recipe, the platform-dependent parts have been moved outside of the bounded context.

The final step in the recipe is rather trivial because the bounded contexts are prepared to be platform independent. You simply choose a platform-independent framework like .NET Standard as your target framework.

Why Is This Now So Useful? Select a Restaurant and Eat!
You may wonder, "What's the purpose of doing this? Why is it really useful for me?" In the following, I again pick up the food metaphor to detail the advantages for you.

I use the following starting point: You transformed the whole business logic into a multi-platform software product line framework thanks to the use of the .NET Standard TFM (target framework moniker) for the bounded context assemblies.


From them, you can build multiple applications depending on your needs. What's so powerful about it is that you're even free to choose the platform of the application, which corresponds to the environment in which the application runs. It's like choosing the restaurant in which you want to enjoy our dish.

Let's be more precise about it and consider the Fitness Tracker app again. Suppose you must deliver the HIIT feature to mobile phones, but also to Windows desktop users. The bounded contexts you need to include in the applications are HIIT and Peripherals, both available as .NET Standard assemblies. Apart from that, you only need to develop the user interface and the environment for these bounded contexts, and you ship the new applications to the customers. What's more, whenever you identify a bug in one of the bounded contexts, it requires only a single fix. Figure 9 summarizes the situation.

Figure 9: Both WPF and Xamarin iOS use the same assemblies from the framework you developed before, thereby delivering the same functionality to the end user.

But there's even more. You can apply the same idea to turn the Fitness tracker app into a cloud product. It's amazingly straightforward: You replace the "HIIT-UI" assemblies of Figure 9 with ASP.NET Core assemblies full of Controllers, which delegate incoming calls to the bounded context HIIT. Further, you implement the environment of your bounded contexts using ASP.NET Core, thereby providing the missing functionality. This transformation, i.e., pushing the business logic of an app into the cloud, leads to an interesting architectural insight: Front-end clients don't need much business logic, because most of it runs in the cloud. Clients solely implement user interfaces, plus some residual interaction logic with the cloud. Figure 10 shows the basic idea.

Figure 10: Most of the business logic runs as an ASP.NET Core WebAPI project using cloud-specific environments of the bounded contexts it hosts.

As you can see in Figure 10, you should introduce a callback mechanism, using for example gRPC, to remotely perform actions on the client device, especially at points where your bounded context implementation requires a client interaction with real hardware.

That's powerful, and it offers you several advantages:

• Clients get slim, carrying less business logic.
• Changes happen centralized in the cloud.
• Changes are more easily traceable.
• Composition of new client applications is easy.

How to Migrate Business Logic to Such a Framework
Now comes the tricky bit. Most of us don't have a green field. We must deal with an old code base, probably grown over years. Very often, the code base even involves different technologies, like WPF mixed with WinForms, WCF communication channels with REST APIs, etc. What should you do to get to a multi-platform software product line framework? In the following, I outline a strategy to slowly migrate to a clean solution, also considering the latest .NET 5 development. But before that, I give a short recap of important features of .NET 5.

First, .NET 5 is, in terms of target platform, somehow the successor of .NET Core. It unifies the view for developers onto the .NET Framework, .NET Standard, and .NET Core apps. However, when you need to write assemblies that require platform-dependent code, like WPF for Windows Desktop, you add support for these functionalities by specifying net5.0-{platform} as the TFM. Second, .NET 5 supports C# 9.0, which comes with a lot of interesting new language features like Records or Relational Pattern Matching. Third, C# source generators seem to be an exciting new compiler feature.


third-party system to populate a list of available products to the user. To protect the new domain model from any form of influence, I create two services with corresponding data-transfer objects to protect the new Sales domain model. See Figure 11.

Figure 11: The introduction of an anti-corruption layer helps you to keep your domain model clean.
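To give you a feel for such a layer, here's a minimal sketch of the outbound side for the Sales bounded context. The legacy client interface, the item shape, and all names are invented for illustration; only the facade-plus-DTO pattern comes from the article.

    using System.Collections.Generic;
    using System.Linq;

    // Data-transfer object owned by the Sales bounded context.
    public sealed class ProductDto
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // Assumed shape of the legacy item catalog's records.
    public sealed class LegacyCatalogItem
    {
        public int ItemNumber { get; set; }
        public string Description { get; set; }
        public double PriceInCents { get; set; }
    }

    // Assumed wrapper around the old catalog system.
    public interface ILegacyCatalogClient
    {
        IEnumerable<LegacyCatalogItem> FetchItems();
    }

    // Outbound facade: the only place that knows the legacy model.
    public sealed class ItemCatalogFacade
    {
        private readonly ILegacyCatalogClient _legacy;

        public ItemCatalogFacade(ILegacyCatalogClient legacy) => _legacy = legacy;

        public IReadOnlyList<ProductDto> GetAvailableProducts() =>
            _legacy.FetchItems()
                   .Select(item => new ProductDto
                   {
                       Id = item.ItemNumber.ToString(),
                       Name = item.Description,
                       Price = (decimal)item.PriceInCents / 100m
                   })
                   .ToList();
    }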
Later, I'll split the bounded-context implementation into two parts: one part containing the anti-corruption layer and one part containing the implementation of the domain model. This originates from the following idea: The anti-corruption layer acts as some form of front-end and back-end at the same time, because it protects the domain model from the outside world (incoming/outgoing). It enables you to evolve the domain model itself without breaking clients consuming it, but also to change it without worrying about the services that it relies on. Usually, such layers contain very little code, which makes the assemblies rather small.

Migrating Existing Business Logic
The situation is way more involved when dealing with existing business logic that lives somewhere within the big ball of mud. I can provide you a way out for such scenarios, too. This way involves the following steps:

1. Identify which classes implement business logic for the bounded context you want to migrate to.
2. Define a domain model for the bounded context that you want to migrate (including the abstractions for the environment).
3. Define an anti-corruption layer for your new bounded context.
4. Migrate to an anemic domain model.
5. Migrate to a rich domain model.

Let me explain the five steps to you in more depth, with an example from the Fitness Tracker app. Consider the design of Figure 12. I highly simplify the situation for the sake of clarity.

Figure 12: Class design of the Fitness Tracker application before the migration.

The design contains the following classes:

• RunningCoordinator: This class starts and stops the run. On starting the run, it creates a new instance of RunningEntity, and then polls the peripheral, in a background thread via Bluetooth, for information regarding laps and the current heart rate. It then writes these values into the RunningEntity.
• RunningEntity: This class stores the heart rate and the laps. It also provides the properties the user cares about to the user interface: distance and duration of the run. It also contains methods for updating the RunningEntity instance in the database.
• TrainingEffectCalculator: This class determines the training effect of the run afterwards. For that purpose, it gets the heart rate and the laps of the run to determine an overall training effect from them.
• BluetoothDataReader: This class reads the raw information about a run from the peripheral device.

The classes RunningEntity and TrainingEffectCalculator implement the business logic in this example. However, the business logic is in bad condition. Additionally, the RunningEntity class contains persistence code within the Update method that you need to extract.

After several discussions with the domain expert, you come up with a new domain model for the "Endurance" sub-domain, which Figure 13 depicts.

Figure 13: The new domain model for the Endurance bounded context after discussions with the domain expert.

The responsibilities of each model element are now clearly defined (a code sketch follows the list):

• Training: The application services interact with this aggregate. It offers only two methods: one to start the training and one to stop the training. On executing the Start method, this class creates a new background thread to read the raw values from the EnduranceDataService, which abstracts the peripheral from the "Endurance" sub-domain. Furthermore, the computation of the training effect depends on the children of the Training aggregate only, and, therefore, it also moves to this aggregate. Finally, the logic that the bounded context requires to store, delete, or query for Training moves to the TrainingRepository.
• Lap, HeartRate, Type: These value objects define some of the properties of a training. To interact with these children of Training, services always need to pass through the aggregate Training and the interface it offers.
• EnduranceDataService: Abstraction of the peripheral with respect to the "Endurance" sub-domain. It makes sure that the domain model of the "Endurance" sub-domain doesn't explicitly depend on concrete peripherals.
• TrainingRepository: Like EnduranceDataService, it abstracts the persistence technology with respect to the "Endurance" sub-domain.
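As a rough illustration of this model in .NET Standard C#, here's what the Training aggregate, one value object, and the environment abstraction might look like. The member signatures are my guesses based on the description above, not the article's actual code; the complete example lives in the GitHub repository linked at the end of the article.

    using System;
    using System.Collections.Generic;
    using System.Threading;

    // Value object: immutable and validated on construction.
    public sealed class HeartRate
    {
        public int BeatsPerMinute { get; }
        public DateTime MeasuredAt { get; }

        public HeartRate(int beatsPerMinute, DateTime measuredAt)
        {
            if (beatsPerMinute <= 0)
                throw new ArgumentOutOfRangeException(nameof(beatsPerMinute));
            BeatsPerMinute = beatsPerMinute;
            MeasuredAt = measuredAt;
        }
    }

    // Environment abstraction for the peripheral.
    public interface IEnduranceDataService
    {
        HeartRate ReadHeartRate();
    }

    // Aggregate root: the only entry point into the endurance model.
    public sealed class Training
    {
        private readonly List<HeartRate> _heartRates = new List<HeartRate>();
        private readonly IEnduranceDataService _peripheral;
        private Timer _pollTimer;

        public Training(IEnduranceDataService peripheral) => _peripheral = peripheral;

        public void Start()
        {
            // Poll the peripheral in the background, as described above.
            _pollTimer = new Timer(_ => _heartRates.Add(_peripheral.ReadHeartRate()),
                                   null, dueTime: 0, period: 1000);
        }

        public void Stop() => _pollTimer?.Dispose();
    }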

Next, you define an anti-corruption layer for the new domain model. This involves two things: First, you define data-transfer objects wherever necessary to protect the model, and second, you implement facades to hide from legacy systems. In the example, this means:

• You define data-transfer objects and services for accessing the domain model, to protect it from the applications that use it. For this example, you introduce the class TrainingService, offering both StartTraining and StopTraining methods. The StopTraining method returns a TrainingDto data-transfer object summarizing the training for the application, as sketched below.
• You define data-transfer objects for the communication with the peripheral via the class EnduranceDataService. The class EnduranceDataService is then in charge of converting to these data-transfer objects after gathering the values from the peripheral.
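Here's a minimal sketch of that inbound facade, reusing the Training aggregate and the IEnduranceDataService abstraction sketched earlier. The article names TrainingService, StartTraining, StopTraining, and TrainingDto; the DTO's properties and the mapping are my assumptions.

    using System;

    // Data-transfer object crossing the anti-corruption layer;
    // its exact properties are assumed here.
    public sealed class TrainingDto
    {
        public TimeSpan Duration { get; set; }
        public double DistanceInKm { get; set; }
        public int TrainingEffect { get; set; }
    }

    // Inbound facade: applications talk to this service,
    // never to the Training aggregate directly.
    public sealed class TrainingService
    {
        private readonly IEnduranceDataService _peripheral;
        private Training _current;

        public TrainingService(IEnduranceDataService peripheral) =>
            _peripheral = peripheral;

        public void StartTraining()
        {
            _current = new Training(_peripheral);
            _current.Start();
        }

        public TrainingDto StopTraining()
        {
            _current.Stop();
            // Convert the aggregate into a DTO for the application;
            // the mapped values are placeholders for the real properties.
            return new TrainingDto
            {
                Duration = TimeSpan.Zero,  // e.g., _current.Duration
                DistanceInKm = 0,          // e.g., _current.Distance
                TrainingEffect = 0         // e.g., _current.TrainingEffect
            };
        }
    }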

Figure 14 shows the result after this step.

Figure 14: The domain model is in the lower left corner. The anti-corruption layer corresponds to the services, their data-transfer objects, and the repository. The additional layer makes sure that the model stays clean and precise.

You now move the business logic in two steps: First, you migrate to an anemic domain model (excluding the outbound facade and its data-transfer objects), and then you migrate to a rich domain model. What does it mean to migrate first to an anemic domain model? It means that you migrate the structure and properties of the new domain model first, without migrating the methods and dynamics. You easily achieve this by introducing "callbacks" (essentially C# interfaces) to the legacy code base that abstract the business logic used to determine some properties. Implementations of these callbacks reside within the legacy code base.


For example, the property TrainingEffect of the new domain model in Figure 14 depends on the business logic that the class TrainingEffectCalculator of Figure 12 implements. To access this business logic within the new domain model, you introduce the interface ITrainingEffectCallback with the method GetTrainingEffect. Similarly, the Training aggregate uses the interface ITrainingCallback to start and stop the training, but also to get the data for the laps and the heart rate; see Figure 15.

Figure 15: Step four completes the migration to an anemic domain model by introducing callback interfaces to the old business logic. Adapter implementations in the legacy code base preserve the functionality.
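The following sketch shows how such callback interfaces and a legacy-side adapter might look. The article names ITrainingEffectCallback, GetTrainingEffect, and ITrainingCallback; the parameter lists, the Lap stub, and the legacy Calculate method are assumptions (Lap and HeartRate are the value objects of Figure 13).

    using System.Collections.Generic;

    // Minimal stub of the Lap value object, to keep the sketch self-contained.
    public sealed class Lap
    {
        public double DistanceInMeters { get; set; }
    }

    // Callback interfaces owned by the new bounded context.
    public interface ITrainingEffectCallback
    {
        int GetTrainingEffect(IReadOnlyList<Lap> laps,
                              IReadOnlyList<HeartRate> heartRates);
    }

    public interface ITrainingCallback
    {
        void StartTraining();
        void StopTraining();
        IReadOnlyList<Lap> GetLaps();
        IReadOnlyList<HeartRate> GetHeartRates();
    }

    // Adapter living in the legacy code base: it wraps the old
    // TrainingEffectCalculator so that its logic keeps working
    // until step five moves it into the domain model.
    public sealed class TrainingEffectAdapter : ITrainingEffectCallback
    {
        private readonly TrainingEffectCalculator _legacyCalculator;

        public TrainingEffectAdapter(TrainingEffectCalculator legacyCalculator) =>
            _legacyCalculator = legacyCalculator;

        public int GetTrainingEffect(IReadOnlyList<Lap> laps,
                                     IReadOnlyList<HeartRate> heartRates) =>
            _legacyCalculator.Calculate(laps, heartRates); // assumed legacy API
    }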





Before you migrate and start coding, you need to choose which platforms the framework should support. Because you want to support multiple platforms, the choice is between .NET Standard and .NET 5. If you further consider mobile phones as a target platform, you choose .NET Standard as the TFM for the bounded context assemblies, to stay compatible with Xamarin Forms.

Importantly, note that after this step, you have a running software product that implements a clean structural model. That's great because, even though you've only completed half of the migration (implementation-wise), you're still able to deliver to your customer. But the more valuable outcome, certainly, is that the structure of the "refactored" bounded context aligns with the structure of the domain it supports, and you've already brought the bounded context into the shape that the multi-platform software product line framework demands. Finally, you migrate the anemic domain model to the one of Figure 13 by removing all the callback interfaces from the bounded context and moving the business logic out of the legacy code base. This last step completes the migration. After that step, you again have a running software application that you deliver to your customer.
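As an illustration of that final step, the training-effect computation might move from the legacy calculator into the aggregate itself, so the callback interface disappears. The formula below is a placeholder, not the article's actual logic; HeartRate is the value object sketched earlier.

    using System.Collections.Generic;
    using System.Linq;

    // After step five, the rich Training aggregate owns the logic that
    // ITrainingEffectCallback previously delegated to the legacy code base.
    public sealed class Training
    {
        private readonly List<HeartRate> _heartRates = new List<HeartRate>();

        // Placeholder rule: derive the training effect from the average
        // heart rate; the real rule comes from the domain expert.
        public int TrainingEffect =>
            _heartRates.Count == 0
                ? 0
                : _heartRates.Average(h => h.BeatsPerMinute) > 150 ? 3 : 1;
    }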

Applying these steps continuously to all the business logic of an entire application smoothly migrates the whole software to a multi-platform software product line framework, thanks to the use of .NET Standard as the TFM. You can be a bit more precise and use .NET 5 at some points.
Anti-Corruption Layers in .NET 5
One of the requirements of the multi-platform software product line framework was that, in principle, applications should run on any platform. Simply using .NET 5 as the TFM for the bounded context assemblies will, unfortunately, not do the trick for you, because it lacks full support for mobile apps built with Xamarin Forms, for example. I also don't want to push the use of .NET 5 too much, because the goal is still the reuse of assemblies rather than jumping on new technologies.

However, at some point it makes sense to apply .NET 5, for example for the anti-corruption layers. Because these layers are small and usually contain very little code, it makes sense to split this part of the bounded context implementations apart and offer them as NuGet packages to your application developers. This has one big advantage: You can choose to implement the anti-corruption layer as both a .NET Standard assembly and a .NET 5 assembly. In the .NET 5 assembly, you can use C# 9 as the language, whereas in .NET Standard, you still use C# 8 (with .NET Standard 2.1).

Ready, Set, Eat!
After successfully migrating your business logic to a multi-platform software product line framework, you easily compose new applications out of it. Whether you need to develop a mobile application, a desktop application, or a cloud service, all of them rely on the same framework. Because your framework uses .NET Standard, it supports all of these platforms.

Using such a modular framework also results in a straightforward decomposition of the user interface of an application. The structure of the framework somehow naturally imposes a decomposition into components on the front-end as well. Which patterns, or which technologies, these front-end components use, you can freely choose. How should you deal with legacy front-ends that you want to migrate in the long term?

The framework offers a way out: After migrating the business logic that the front-end relies on, you also migrate the front-end. Ideally, such a migration affects only some views, or some controllers, the ones that the user essentially requires to interact with the business logic just migrated. To keep things clean, you place the new views into a dedicated assembly.


Suppose, for example, that the Fitness Tracker desktop application was written using WinForms a long time ago, as Figure 16 illustrates. Now, after the refactoring to a multi-platform software product line framework, the product owner recognizes the value of refactoring as a key enabler for new applications and grants the refactoring toward WPF.

Figure 16: This is the starting point after the refactoring to the software product line framework. All user interfaces reside within the same assembly, implementing one huge monolithic component.

Well, obviously, the Fitness Tracker application for the desktop is in bad shape, because all user interfaces reside within one huge monolithic WinForms component. The user interface assembly accesses the individual bounded contexts to deliver value to the user of the application. The user interface still uses WinForms rather than WPF, maybe even an outdated .NET version as well. It's awkward somehow, because you now have clean and nice business logic, but the user interface still resides in the Middle Ages. There is a way out.

What about doing it stepwise? You start with the first part your customers care most about: the endurance part of your application. In that case, you first migrate the views of the endurance part to WPF, into a WPF assembly. For that purpose, you use the TFM net5.0-windows, for example, because the business logic builds upon .NET Standard with an anti-corruption layer in .NET 5. Of course, where the new views must interact with the legacy user interface, you need to write some interaction logic, which boils down to additional effort. The effort pays off, as you'll see soon. After extracting the endurance part of the application, the application looks like Figure 17.

Figure 17: The situation after the first refactoring step: Parts of the user interface were migrated from WinForms to WPF, into a dedicated assembly supporting the Endurance bounded context.

Applying this strategy repeatedly leads to a modular application. But there's even more, a final thought I want to point you at. Suppose you want to host the endurance part of your application in the cloud but keep the HIIT bounded context running locally. This is an interesting thought, especially when people want to compare their endurance results online.

With the design of Figure 17, you're in the right shape for this: For all applications containing the Endurance bounded context, you substitute the Endurance bounded context with an assembly that looks and smells like the original, i.e., it offers the same public interface as the original bounded context. When consuming one of the services that the public interface of the bounded context specifies, this "service mock" bounded context invokes a network call to a cloud service offering the Endurance bounded context. Technically, you achieve this by splitting the bounded context into two parts: a contract part (containing the public interface to interact with the bounded context) and an implementation part. The implementation part references the contract assembly and implements the public interface that it specifies. All clients of the bounded context also reference the contract assembly, never the implementation. This keeps the client using the bounded context independent of its implementation. In other words, the client of the bounded context doesn't depend on a concrete implementation thereof but rather on the interface it offers.
Figure 18: Moving the Endurance bounded context to the cloud as a service.

Importantly, it implies for the user interface assembly "Endurance–UI" (WPF) that it doesn't matter at all whether you handle invocations to the Endurance bounded context locally or remotely, because it only depends on the contracts of the Endurance bounded context. You can even use .NET 5 for this "service mock" bounded context, as well as for the ASP.NET Core WebAPI (I've chosen to use this one as the implementation technology). This framework choice appears very convenient, because you can use Records to define the data-transfer objects between the "service mock" bounded context and an ASP.NET Core 5.0 WebAPI project. Figure 18 summarizes the architecture of the overall system.

By introducing a "contract" assembly that characterizes the public interface of the bounded context, you can exchange the implementation of a bounded context without breaking an existing client. Importantly, you may even run the business logic somewhere else entirely.

In the same fashion, you also let other applications from other platforms connect to your Endurance service. Just implement the corresponding, potentially platform-dependent, "service mock" bounded context, and communicate with your new .NET 5 Endurance service.
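To wrap up, here's a compact sketch of the contract/implementation split just described, using a C# 9 record for the data-transfer object, as suggested above. The interface members, the record's shape, and the route are illustrative assumptions; only the split itself and the use of Records come from the article.

    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    // --- Contract assembly: referenced by every client. ---

    // C# 9 positional record as the data-transfer object.
    public record TrainingResultDto(double DistanceInKm, int TrainingEffect);

    public interface IEnduranceService
    {
        Task<TrainingResultDto> GetLastTrainingAsync(int userId);
    }

    // --- Implementation assembly: the "service mock" that ---
    // --- forwards calls to the cloud-hosted Endurance service. ---
    public sealed class EnduranceServiceProxy : IEnduranceService
    {
        private readonly HttpClient _http;

        public EnduranceServiceProxy(HttpClient http) => _http = http;

        public Task<TrainingResultDto> GetLastTrainingAsync(int userId) =>
            // The route is a placeholder for whatever the WebAPI exposes.
            _http.GetFromJsonAsync<TrainingResultDto>(
                $"api/endurance/trainings/last?userId={userId}");
    }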



CODE COMPILERS

Code Example
In the GitHub repository https://github.com/apirker/FitnessTracker, I provide an example of the fitness tracker domain that I presented here, for you to explore the techniques and ideas of this article.

Summary
The techniques I presented in this article help you to slowly evolve your application to a multi-platform software product line framework. How did I achieve this? A short recap: You combine the ideas of the ports-and-adapters architectural style with domain-driven design to develop, or migrate to, pure business logic assemblies. Each of these assemblies supports one sub-domain, called a bounded context. Pushing all environmental implementations out of the bounded context gives you platform independence if your implementation targets .NET Standard. When you start on a green field, you can apply these techniques directly. However, when you must maintain a legacy code base, you migrate slowly to this architectural style by introducing anti-corruption layers, where necessary, to keep your new domain models clean and isolated. That, in turn, ensures that you don't pollute the new models that emerge from the migration. The process ultimately leads you to the same goal: a multi-platform software product line framework.

Alexander Pirker



CODA

CODA: Why Ritual Matters

Take time to look around for inspiration. That's how I concluded my last editorial. Given recent events, it's a good theme to continue.

Ritual: A solemn ceremony consisting of a series of actions performed according to a prescribed order.

Solemn: Formal and dignified, serious, deep sincerity.

Whether we reference SolarWinds (https://www.microsoft.com/security/blog/2020/12/18/analyzing-solorigate-the-compromised-dll-file-that-started-a-sophisticated-cyberattack-and-how-microsoft-defender-helps-protect/), Boeing (737 Max 8), or those brave congressional staffers who had the presence of mind to take custody of the three ballot boxes before the U.S. Capitol was breached, for purposes of this editorial, the three are equivalent. With respect to the ballot boxes, indeed, they're just physical things that contain physical things that can all be replaced. As to what those ballot boxes and what they contained represented, those things can't be replaced.

How do these all relate? Like all things, it's about context and the inexorable fact that, at an abstract level, everything relates to everything else. Everything in a given context is interconnected. In other words, actions have knock-on effects; some are intentional, others are unintentional. And depending on your system's complexity, a small change can have dramatic downstream effects (AKA the Butterfly Effect). How well the system and its underlying processes are understood is the best defense against unintended and undesirable consequences.

Separately, there's what each of us believes and there's what each of us does. There is that oft-quoted phrase: Don't pay attention to what they say, pay attention to what they do. The latter is objectively verifiable with our own senses. The reasoning is simple: Mindreading isn't a real thing.

Think of the last time you advocated for a particular outcome. In any competent and rational business and technology environment, you'd be required to provide evidence in support of your conclusions. In other words, there's what you believe and there's what you can objectively prove.

So then, how is the concept of ritual relevant?

In all cases, it gets to what we're defending against: the loss of trust and confidence. Everything boils down to integrity. For any system we build to have integrity, there must be rules, order, and sound processes. One of my favorite phrases is People, Process, and Tools. In other words, we must first have the right people. From there, we can build a process. And then finally, once we have the right people committed to the right process, then, and only then, can we employ the right tools in furtherance of efficiency. And what device, whether explicit or implied, do we invoke to carry on these ceremonies that involve people, processes, and tools? Ritual.

Things like SOC (System and Organization Controls), and the ritual and ceremony it requires, all have a purpose. Read any annual report (10-K), specifically Item 9A, Controls and Procedures. Documented rituals and ceremonies with documented performance are your positive objective evidence that you're doing things correctly. Alternatively, you can make claims about what you believe is or what was done. Making a claim doesn't make it so, nor is it evidence. To meet that burden of proof, there must be work, hard work, and coordinated work among many people, and the description of what work was performed, by whom, and when must be a by-product of the work.

Assuming that there's a codified process, how does that get performed in a systematic, repeatable, reliable, and consistent way? Mature processes yield positive tangible results via actions that are carried out in some prescribed way and are done in a serious manner, and not only in support of the work at hand, which is to build and deliver software. It's also about telling the story of what that work entailed. Agile principles coupled with the Scrum Framework are a good example of a ritual that can support what SOC (referenced above) requires.

But just going through the motions of Scrum ceremonies alone isn't enough. Each individual person on the team must believe and be committed. The team members, collectively, must have a shared sense of purpose. Fidelity to that is what makes any ritual or process worth anything. That fidelity to the task, the ability to make it apparent to those outside the process, doesn't just happen. Regardless of whatever tools are thrown at a process, the one thing that can't be faked or accelerated is the concept of time. Time in this context has two perspectives. The first is the people who used to occupy the chairs held by others now. This is an organization's history and the foundation of everything that follows. I suspect that the staffers guarding the ballot boxes had a sense of, benefited from, and were prepared because of that history. Watching them on TV, there was a sense of confidence and resolve in their custodian role. The second perspective is focused on the present and what must be confronted now. Every problem to solve requires its own time. Anybody who has read The Mythical Man-Month knows the lesson that "throwing bodies" at a problem doesn't work.

Building trust and confidence, rather than coding or design, is our ultimate job as software developers. We build that confidence through demonstrated performance that itself is verified through evidence. The important thing at the heart of it all is how we carry out our work: the principles, patterns, and practices, and above all, the ethos, duty, and fidelity we bring to our efforts. The burden is on us to instill that confidence in others, the ones outside looking in. If we don't, we end up with a trust gap.

We can learn from the Arts and Crafts movement of the 1880s-1910s. That was about the dignity of work that arises from being true to the craft, as well as the recognition and respect from the organization that realizes the benefits of such labor. The tools we employ to carry out our work are part of the ritual. To wield them artfully, we must understand the processes of our craft.

Tools are not the craft. CODE IS NOT THE CRAFT. Tools are the means by which we practice the craft, and code is a tool. Diverting to my law persona: My practice of law doesn't depend on tools like WestLaw or LexisNexis for legal research any more than I require any specific software tool or language. Certainly, tools help, and some tools, depending on the context, are better than others. Legal research (a process) is a necessary component of law practice. It's no different than the code we write. At a certain level of detail, there are many ways to write code and there are many ways to conduct legal research.

The aforementioned tools (WestLaw and LexisNexis) are very efficient. However, if I didn't have access to such tools, I could still rely on the books. You've seen them in any legal drama: the impressive array of published court reports and statutes. These books are available for free in the law library of your local municipality. In the abstraction hierarchy, this is about as close to the metal as it gets. You never know when a given tool may not be available. There's still a job to do. Of course, at some point, we'll hit a performance wall once basic elements, like the Internet, are no longer available. How well do you know your processes, independent of abstractions? An abstraction based on a process you don't understand is useless. And the decisions made therefrom can be quite costly.

Ritual is a means by which we can learn, apply, and eventually master our craft. A somewhat popular movement these days is the Software Craftsmanship movement, in its many variations. Andrew Hunt's "The Pragmatic Programmer" posits that a developer's professional development should be akin to the medieval guilds. Established rituals are a means by which knowledge is effectively transferred. Think of the apprenticeship programs in trades such as plumbing or carpentry. And no, the fly-by-night coding bootcamps that promise to make somebody an employable software developer without prior experience aren't that.

The dev, test, and build processes by themselves, they're just processes. The duty, fidelity, and seriousness we bring to the work, that's what makes it a ritual. And that it's a ritual means that what you and your team do matters. That's people and process governed by inviolate overarching principles and fidelity to the task at hand. If you have that, find the tools that match up well. Be true to that ritual. And if some aspect of it must change for some reason, have a ritual for how you manage that change. If you just change rules, tools, etc. on a whim, that's just random chaos devoid of discipline, rigor, duty, and fidelity. That's amateur hour, run by people who don't know what they're doing.

Your rituals are the means by which you can objectively demonstrate due care and performance. If your shop keeps running into the same issues, perhaps it's time to detach from the keyboard: stop, inspect, assess, and adapt accordingly. A good way to start may be to take a healthy look in the mirror. If ritual, duty, and fidelity to the task at hand are deemed to be a budgetary non-starter, you should ask yourself what exactly it is that your organization is doing, and why you think sticking with the status quo will lead to a different, better result. That's the very definition of insanity, in which we hope to obtain a different result based on the same actions.

John V. Petersen

