codemag.com — THE LEADING INDEPENDENT DEVELOPER MAGAZINE
MAR/APR 2021
Understanding Docker
Features

8   Deploy a Real-World ExpressJS TypeScript Application Using Containers
    Because you spend so much time interacting with JavaScript (whether you know it or not), Sahil wants to make sure that you make the right decisions about how to wire up your own apps.
    Sahil Malik

42  Introduction to Containerization Using Docker
    Wei-Meng explains how Docker replaces virtual machines to host the apps and libraries you need, completely independent of which OS you’re using.
    Wei-Meng Lee

62  Migrating Monolithic Apps to Multi-Platform Product Lines with .NET 5
    You can just about feel the eyerolls of the dev team when you find out that you’re going to be updating some ancient piece of software. Alexander shows you that using .NET 5, migrating doesn’t have to be painful.
    Alexander Pirker

    … APIs. This is the first of a longer series that will teach you to create great applications using .NET 5.
    Paul D. Sheriff

Columns

25  Advertisers Index
73  Code Compilers
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
I’ll return to the original idea and talk about how creative endeavors are like creating and managing campfires.

Many of my editorials are inspired by real-life events. Be it work or home life, these editorials spring forth from my reality. The event that inspired this theme was the discovery of GraphQL and how it could really help with one of my entrepreneurial endeavors. For the astute reader, you’ll realize that there have been several GraphQL articles in CODE Magazine’s pages, so the discovery wasn’t all that new. This time was different. I had an application for this technology and was yearning to learn more about it. It’s this yearning that provided the seed for this editorial.

When I started to learn about GraphQL, I realized that I had a method to my learning process. This method isn’t a formal process, but rather an intuitive process built over many years of exploring technology. As I delved deeper and deeper into this technology, I felt my excitement grow: A creative fire had been lit. This is where my silly but, in my opinion, poignant theme comes from. Learning new technology is akin to building a campfire. So let’s explore the steps and how they applied to my learning of GraphQL.

The Need for a Fire
Before you work on building a fire, you need to determine whether you need it. In this case, the need came from the fact that this new application would be a mobile application. I didn’t like the idea of constructing a complex REST API. There had to be a better way. This better way was GraphQL.

Gathering Material
In order to build a fire, you need fuel. For this project, the material we gathered was basic research. We consulted the GraphQL spec online, we Googled, we looked for sample code and papers, we looked for tools to help us learn. This phase is sometimes known as the “ideating” phase for those playing buzzword bingo at home.

Adding Kindling
After gathering material, we started constructing our fire pit by adding kindling. A better explanation of this part is the concept of MVP: Minimal Viable Project. Put another way, do the simplest thing that could work. Our kindling would be an ASP/MVC application built with .NET Core 5.0 and using the HotChocolate (GraphQL tools) NuGet packages.

After building a project, we built out our first new query service that would return a simple set of objects fabricated in memory. We then used the tools built into HotChocolate to test the API. Once we had our proof of concept built, we pushed it to a private GitHub repo and set off on the next step in building our fire.

Adding Wood
Once the fire has been lit, you can start adding wood to that fire so that it has something solid to burn. We added wood of various varieties, including more data entities (types), more queries, more relations, and a pseudo data persistence layer. We focused on the query aspect of the project at first, as this seemed to be the most common use case.

Stoking the Fire
It’s not enough to just get the fire burning. You need to keep the fire going. This is where the excitement keeps propelling you to continue. To stoke the fire, we added new features that an API needs. We started adding simple Mutations. Mutations are the data update portions of GraphQL. Like in the original example, we started simply. We also continued gathering more material—doing more research. Our goal was to not let the fire go out.

S’mores Time
After working on the API for a bit, I soon learned that my senior developer had taken it upon himself to start building an MVP version of the mobile application. When we saw that mobile application talking to the API, it was time to have some S’mores.

You need to take time to celebrate your wins. Take a pause to reflect on where you are. After a quick celebration, it was back to work.

Maintaining the Fire
Keeping the fire going can be the challenging part. Learning new technology is not without its pitfalls. You might get stuck, you might get frustrated, you might be flummoxed, or you might just need a break. You need to keep that fire going by any means necessary. What I find is that when the fire seems to be waning, I return to the Gathering Material phase. Sometimes I need to step away, take a pause, and gather my thoughts to explore the new ideas that I wasn’t ready for when I started the original fire.

If you’ve made it this far, I thank you. I know the example felt a bit marshmallowy (insert eyeroll here, LOL). I know you, as a reader, probably rolled your eyes once or twice when reading it. Heck, I rolled my eyes a few times while writing it. In any case, there’s a process for learning, and this process is yet another reason why what we do is so fun. Now, go build your own fire!

Rod Paddock
JavaScript is indeed awesome. Sure, it has its flaws; after all, the language was invented in a week in a basement. But tell me one other language that’s equally flexible and runs everywhere, encapsulates both back- and front-ends, and runs with the performance of compiled languages. Whether you like it or not, probably the most used application on both your phone and your desktop is your browser. As much as other approaches (Blazor <cough cough!>) are trying to encroach on its territory, which language do you end up interacting with the most as of today? It’s JavaScript.

JavaScript is incredibly unopinionated. Although that’s great because it fuels innovation, it doesn’t tie you to one way of thinking, and it also creates an interesting challenge. Anytime you wish to do something simple, there’s no file\new_project approach here. You must build a project template from scratch, which involves a lot of decisions. And as you go along through the various phases of the project, you must keep making these decisions. Should I use package A or package B? Which package is better? How do I define better? Its long-term supportability? Security issues? Popularity of use? Or features?

These are some tough decisions. And even after you make these decisions, did you make the right decision? Is another larger development team in another company making a different decision that may affect the outcome of your decisions in the future?

It’s for this reason that I started this series of articles. In my first article, I showed you how to build a simple project template from scratch. I combined ExpressJS and TypeScript to create both a front-end and a back-end for the JavaScript-based application. I ended the first article with a very simple working application, really just a project template, that had no functionality in it. But even then, I had a template that could run in Visual Studio Code; it supported live reload, debugging, full TypeScript support, client-side and server-side support, and much more. I encourage you to check the article out.

Of course, the plain vanilla JavaScript application template isn’t something you can ship. So in the second article (https://www.codemag.com/Article/2101021/A-Real-World-ExpressJS-and-TypeScript-Application), I took it one step further and crafted up a real-world application that talks to a database. I wrote a simple ToDo application; for client-side scripting I used Vue JS, and for the server side I wrote an API in ExpressJS.

That’s where I’ll pick up in this article. In this article, I’ll address a very important real-world concern: deployment. The code I have right now works great on a developer’s computer. Shipping code to run in the cloud or on a server somewhere brings a whole set of interesting challenges. A very popular and solid way of deploying code these days is using containers. In this article, I’ll show you how I go about containerizing the application and deploying both the application and the database as two separate containers.

Sahil Malik
www.winsmarts.com
@sahilmalik
Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

Git Repo Structure
Before I get into the weeds of this article, ensure that you have the starting point for the code of this article ready to go. All of the code for this article can be found at https://github.com/maliksahil/expressjs-Typescript. I’m going to leave the master branch as the basic starter code. The starting point for this article is in a branch called todoapp.

Get the Starter Code Running
I know I’ve covered this in depth in my previous article, but if you don’t get this part working, you’re going to be very lost in the rest of the article. So let’s get this part squared away. Use the following steps to get the starter code running on your computer. You’ll need to do this on a Windows, Mac, or Linux computer that’s set up to use NodeJS.

First, clone the repo:

    git clone https://github.com/maliksahil/expressjs-typescript

Next, check out the “todoapp” branch:

    git checkout todoapp

For the purposes of this article, create a new branch called “deploy”:

    git checkout -b deploy

At this point, you may want to repoint the origin of the Git repo to somewhere you can check in the code, or just not bother checking it in if you just wish to follow along with this article.

Next, install the dependencies:

    npm install
If you need a walkthrough of this starter code, I highly recommend that you read my previous articles in CODE Magazine.

Add Docker Support
Docker is an amazing solution, and I’ve talked about it extensively in my articles in CODE Magazine (https://www.codemag.com/Magazine/ByCategory/Docker), and so have a few other people. Where it really shines is when I can deploy code to Linux containers running in the cloud. These containers are super lightweight, so they run fast, and because they’re stripped down to the absolute bare minimum, they’re secure, and they’re cheap to run.

This is where NodeJS shines. You could have written the whole application on a Mac or Windows computer, but you can be reasonably confident that it will work in a Docker container. Now, it’s true that some node packages could take OS-level dependencies. This is especially common in electron-based apps. But for this Web-based application, accessed through a browser, this isn’t something I’m worried about.

When I dockerize my application, I want to ensure that my usual development lifecycle isn’t broken. In other words, I still wish to be able to hit F5 in VS Code and run the application as usual. I don’t want to run Docker if I don’t have an absolute need to. I know Docker is fast, but I don’t want to slow my dev work down by that additional step of creating a Docker image, starting the container, etc. Although VSCode is amazing, and so is remotely developing in a Docker container, I don’t want to pay that overhead unless I absolutely need to.

There are situations where I want to pay the overhead for Docker even in local development. Now, to be clear, the overhead is just a few seconds. Frankly, it’s faster than launching MS Word. For that little overhead, I find running the application containerized in Docker on my local dev environment useful in the following situations.

I want to take a dependency on numerous packages that I don’t want to install on the main host operating system. I realize that there are things such as NVM or Conda that allow me to manage different node versions or different Python environments. To be honest, those—especially NVM—have been problematic for me. Maybe I’m just holding it wrong. But when I can just isolate everything in a separate Docker environment, why bother dealing with all that nonsense? Just use Docker.

Sometimes I may wish to isolate development environments from a security perspective. Maybe I’m working in a certain customer’s Azure tenancy, perhaps I’m connected to their source control using a separate GitHub account, or perhaps I’m using tools that interfere with my other work. For all those scenarios and more, Docker gives me a very nice, isolated development environment to work in.

And finally, the elephant in the room: have you heard the phrase, “it works on my machine”? Docker gives me a way to ship …

Identify Dependencies
Speaking of “it works on my machine,” let’s see what problems we can uncover. For node-based development, I really prefer not to install stuff in the global namespace, because it makes my environment impure. It makes it harder to test how things will run in Docker. At this point, if you can manage to, uninstall all global packages except npm. If you can’t uninstall global packages, you can find problems directly in Docker, but that will slow you down.

The first thing I’d suggest is that you take that dist folder you’d built and from which you were running your application, copy it to a folder outside of your main project folder, and simply issue the following command:

    node index.js

You should see an error like this:

    Error: Cannot find module 'dotenv'

If you don’t see this error, you probably have dotenv installed in your global namespace. And this is why I really dislike frameworks that insist on installing things in your global namespace. I have only one rule: Keep your junk to yourself, please.

Anyway, I need to solve this problem. In fact, dotenv isn’t the only problem here. There are a number of packages I took a dependency on, and when I run the project from within the folder, it simply picks them up from node_modules. Although I could just ship the entire node_modules folder, it would really bloat my project.

To solve this problem, I’ll simply leverage the parcel bundler to create a production build in addition to the dev-time build. If you remember from my previous article, the dev-time build was already building client-side code. The client-side code isn’t going to read directly from node_modules because it runs in the browser, so that was essential to get started. You can use a similar approach for server-side code.

To bundle server-side code, add this script in your package.json’s scripts node:

    "parcelnode":
      "parcel build main src/index.ts
        -o dist/index.js
        --target node
        --bundle-node-modules",

This script, when executed, will bundle all server-side dependencies except ejs, and output the result in a file called dist/index.js. The reason it won’t include ejs is that ejs is being referenced as a string and not as an import statement. This isn’t great, and I’m sure there are workarounds for this, but I’ll keep things simple and simply install ejs as a node module in the Docker image. It’s a single package, so I’ll live with a little bloat for lots of effort saved. Let’s chalk it up to “technical debt” for now.
    Cannot resolve dependency 'pg-native'

Now, if you glance through your code, you aren’t taking a dependency on pg-native. If you read through the stack trace, you’ll see that one of the packages you took a dependency on is taking a dependency on pg-native. Okay, this is frustrating. This isn’t a package that will easily install either; it takes a dependency on native code. If I did that, I’d also have to install it on my Docker image. That’s not a big deal, so perhaps this is the route I need to take.

The frustrating part is that this is a rabbit hole I’m falling into, where identifying dependencies feels like an unpredictable, never-ending hole of time suck. Well, this is the reality of node-based development. This is why, when I write node-based code, I always keep dockerization and deployment in the forefront of my mind, and try not to solve a huge project at once. I make sure that whatever package I take a dependency on is something I can package, or I don’t take a dependency on it.

One obvious missing piece here is the .env file. The .env file, if you remember, is where you had various configuration information, such as what port to run on, where the database is, etc. I could copy the .env file in, but if you look deeper into the .env file, it has a dependency on the database running on localhost. What is localhost on Docker? It’s the Docker image itself.

I think you can imagine that this may be a problem down the road. But let’s not try to boil the ocean in one check-in. For now, just copy the .env file into the dist folder manually, copy the dist folder to an alternate location on your disk, and run the project by running node index.js. You should see the following console.log.

Figure 1: The built version of my application

The reason you don’t see any todos is because your database isn’t running. You can go ahead and run the database as a Docker image by running ./db/setupdb.sh in your project folder. That shell command, if you remember from the previous article, runs the postgres database as a Docker image, exposed on Port 5432. Once you run the database, go ahead and refresh your browser, and verify that you can see the todos appearing, as shown in Figure 3.

Figure 3: Finally running locally
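One common way around the localhost problem described above is to avoid hard-coding the host at all and resolve it from the environment. A minimal sketch — resolveDbHost and the DB_HOST variable name are illustrative assumptions, not names from the article's .env file:

```javascript
// Inside a container, "localhost" is the container itself, so the
// database host should come from the environment instead. You could
// pass it at startup, for example:
//   docker run -e DB_HOST=host.docker.internal ...
function resolveDbHost(env) {
  return env.DB_HOST || 'localhost'; // fall back for local, non-Docker runs
}

console.log(resolveDbHost({}));                // prints "localhost"
console.log(resolveDbHost({ DB_HOST: 'db' })); // prints "db"
```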
With this much in place, let’s start building the Docker image. This is a matter of authoring a Dockerfile in the root of your project. Create a new file called Dockerfile in the root of your project and add the following two lines in it:

    …

This RUN command will instruct the Docker daemon to run the aforementioned command in the Docker image, and therefore have ejs installed locally. There are better ways to do this, but for now, let’s go with this.

My code is pretty much ready to go, but I need to expose the right port. This is necessary because Docker, by default, is locked down, as it should be. No ports are exposed unless you ask them to be. Because my application is running … At this point, your Dockerfile should look like Listing 1.

Listing 1: The Dockerfile (fragment)

    …
    # Run our app
    CMD ["node", "index.js"]

To smooth out your development, also create a file called scripts/runindocker.sh with code as shown in Listing 2.

Listing 2: Shell script to build and run the Docker container

    docker build --tag nodeapp:latest .
    docker container rm $(docker container ls -af name=nodeapp -q)
    docker run -it --name nodeapp -p 8080:8080 nodeapp

Now, make this file executable:

    chmod +x scripts/runindocker.sh
Go ahead and run the application container again. Verify that the application, now running in Docker, shows you todos, like in Figure 3.

At this point, I’ve made good progress. Go ahead and commit your code. You can find the commit history associated with this article at https://github.com/maliksahil/expressjs-typescript/commits/deploy. You can repro each step by pulling a specific commit down.

Something Still Doesn’t Feel Right
In this article, you made some good progress. You took your NodeJS application that was effectively running nicely on a developer’s computer, and you were able to dockerize it. The advantage here is that now it can run reliably on-premises, in the cloud, in any cloud. There will be fewer fights between the IT Pros and the developers. World peace. With fine print, of course.

No wonder everyone is gaga over it. But the challenge remains: The learning curve and bewildering array of options make it very difficult to go from Point A to Point Z. It’s exactly that problem I wish to solve in this series of articles: going from Point A to Point Z, arguing and debating every step, giving you the reasons why I make a certain decision over another, and sharing the history of why things are the way they are.

In my next article, I’ll continue this further by adding more deployment concerns, where I’ll extend my simple two-container application to Kubernetes. I’ll deal with some interesting challenges, such as how to keep secrets, such as the database password, safe.

Until then, happy coding.

Sahil Malik
ONLINE QUICK ID 2103031
reloading the current Web page. In other words, you can manipulate the DOM and the data for the Web page without having to perform a post-back to the Web server that hosts the Web page. Ajax gives you a huge speed benefit because there is less data going back and forth across the internet. Once you learn how to interact with Ajax, you’ll find the concepts apply to whatever front-end language you use.

Because more mobile applications are being demanded by consumers, you’re probably going to have to provide a way for consumers to get at data within your organization. A consumer of your data may be a programmer of a mobile application, a desktop application, or even an HTML page being served from a Web server. You don’t want to expose your entire database; instead, create an Application Programming Interface (API) in which you decide how and what to expose to these consumers. A Web API, also called a REST API, is a standard mechanism these days to expose your data to consumers outside your organization.

This is the first in a series of articles in which you’ll learn to use Ajax and REST APIs to create efficient front-end applications. In this article, you create a .NET 5 Web server to service Web API calls coming from any Ajax front-end. You also learn to create an MVC Web application and a Node server to serve up Web pages from which you make Ajax calls to the .NET 5 Web server. In future articles, I’ll show you how to use the XMLHttpRequest object, the Fetch API, and jQuery to communicate efficiently with a .NET 5 Web API project.

Paul D. Sheriff
http://www.pdsa.com
Paul has been in the IT industry for over 34 years. In that time, he has successfully assisted hundreds of companies architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has 27 courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap.

Ajax Defined
Although Ajax stands for Asynchronous JavaScript and XML, the data transmitted can be JSON, XML, HTML, JavaScript, plain text, etc. Regardless of the type of data, Ajax can send and receive it. Ajax uses a built-in object of all modern browsers called XMLHttpRequest. This object is used to exchange data back and forth between your Web page and a Web server, as shown in Figure 1.

Looking at Figure 1, you can see that an HTML page sends a request for data (1) to a controller in the Web server (2). The controller gets that data from the data storage medium (3) and from the data model that serializes that data as JSON (4) and sends the response back to the HTML page (5). If you look at Figure 2, you can see both the JavaScript and C# code on the client and the server that correspond to each of the numbers in Figure 1. Don’t worry too much about the code in Figure 2; you’re going to learn how to build it in this article.

Methods of Communication
Although the XMLHttpRequest object is the basis for all Ajax calls, there are actually a few different ways you can use this object. Table 1 provides a list of the common methods in use today. If you’re unfamiliar with the terms callbacks and promises, the two sections that follow provide a definition and some links to learn more about each.

Callbacks
A callback is the object reference (the name) of a function that’s passed to another function. That function can then determine if and when to invoke (call back) that function. It may call that function after some variable changes state, or maybe after some task is performed. For a nice definition and an example of a callback, check out this article: https://www.freecodecamp.org/news/javascript-callback-functions-what-are-callbacks-in-js-and-how-to-use-them.

Promises
A promise is the result of the completion of an asynchronous operation. The operation may succeed, fail, or be cancelled. Whatever the result, the promise allows access to the result and any data returned by the operation. For more information, see this post: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise. Another great article compares callbacks and promises: https://itnext.io/javascript-promises-vs-rxjs-observables-de5309583ca2.
• VS Code: code.visualstudio.com
• .NET 5: dotnet.microsoft.com/download
• SQL Server: www.microsoft.com/en-us/sql-server/sql-server-downloads
• AdventureWorksLT Sample Database: https://github.com/PaulDSheriff/AdventureWorksLT
Try It Out
Select Run > Start Debugging from the menu to build the .NET Web API project and launch a browser. If a dialog box appears asking if you should trust the IIS Express certificate, answer Yes. In the Security Warning dialog that appears next, also answer Yes. Once the browser appears, it comes up with a 404 error page. Type the following address into the browser address bar: https://localhost:5001/weatherforecast.

Figure 4: VS Code will inform you that it needs to load some packages to support Web API programming.
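The same request the browser just made can also be issued from JavaScript. A sketch, assuming the Web API project from the Try It Out step is running on https://localhost:5001 (modern browsers and Node provide fetch globally; getForecast is an illustrative name):

```javascript
// Issue the weatherforecast request from code instead of the address bar.
async function getForecast() {
  const response = await fetch('https://localhost:5001/weatherforecast');
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json(); // the controller serializes its result as JSON
}

// Uncomment to run against a live server:
// getForecast().then(list => console.log(list));
```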
Listing 3: Configure services to work with CORS, serialize JSON, and interact with your database.

    public void ConfigureServices(
      IServiceCollection services) {
      // Tell this project to allow CORS
      services.AddCors();

      // Convert JSON from Camel Case to Pascal Case
      services.AddControllers().AddJsonOptions(
        options => { options.JsonSerializerOptions
          .PropertyNamingPolicy =
            JsonNamingPolicy.CamelCase;
        });

      // Setup the AdventureWorks DB Context
      // Read in connection string from
      // appSettings.json file
      services.AddDbContext<AdventureWorksLTDbContext>
        (options => options.UseSqlServer(
          Configuration.
            GetConnectionString("DefaultConnection")));

      services.AddControllers();
      services.AddSwaggerGen(c => {
        c.SwaggerDoc("v1", new OpenApiInfo {
          Title = "WebAPI",
          Version = "v1" });
      });
    }
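The DefaultConnection string that ConfigureServices reads would live in the project's appsettings.json. A minimal sketch of its shape; the server name and authentication settings here are placeholder assumptions, not values from the article:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=AdventureWorksLT;Trusted_Connection=True;"
  }
}
```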
Try It Out
Run the Web API project by selecting Run > Start Debugging from the menu. Once the browser comes up, type in http://localhost:5000/api/product and you should see data appear like that shown in Figure 6.

Figure 6: Running the Product controller from the browser should produce a set of JSON.

Get a Single Product
Besides retrieving all products, you might also need to retrieve just a single product. This requires you to send a unique identifier to a Web API method. For the Product table, this is the value in the ProductID field.

Go back to the Web API project and open the ProductController.cs file. Add a new method that looks like Listing 7. This method is very similar to the Get() method you created earlier; however, just a single product is returned, if found.

Try It Out
Save the changes to your ProductController and restart the Web API project so it can pick up the changes you made. When the browser comes up, type in the following: http://localhost:5000/api/product/706. This passes the value 706 to the Get(int id) method. You should see a screen similar to Figure 7.

Insert a Product
If you wish to send a new product to your Web server and have it inserted into the Product table, you need a POST Web API method to pass a JSON product object to. Create a new …

Listing 4: Always create a base controller class for all your controllers to inherit from.

    using System;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Http;

    namespace WebAPI.Controllers {
      public class BaseApiController :
        ControllerBase {
        protected IActionResult HandleException(
          Exception ex, string msg) {
          IActionResult ret;

          // TODO: Publish exceptions here
          // Create new exception with generic message
          ret = StatusCode(StatusCodes
            .Status500InternalServerError,
            new Exception(msg));

          return ret;
        }
      }
    }
As you can see from the JSON object above, not all fields are present compared to the C# Product class. That’s why, in the Post() method, you need to fill in a few other fields with some default values. I’m doing this just to keep things simple for passing in a JSON object. In a real application, you’d most likely be passing in all fields for your entity class.

Figure 7: Retrieve a single product by placing the product ID after a forward-slash on the URL.
Listing 5: The ProductController class inherits from the BaseApiController and has the DbContext injected into it.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

namespace WebAPI.Controllers {
  [Route("api/[controller]")]
  [ApiController]
  public class ProductController : BaseApiController {
    public ProductController(
      AdventureWorksLTDbContext context)
      : base() {
      _DbContext = context;
    }

    private AdventureWorksLTDbContext _DbContext;
  }
}
Listing 6: The Get() method retrieves all products from the SalesLT.Product table.

private const string ENTITY_NAME = "product";

// GET api/values
[HttpGet]
public IActionResult Get() {
  IActionResult ret = null;
  List<Product> list = new List<Product>();

  try {
    if (_DbContext.Products.Count() > 0) {
      list = _DbContext.Products
        .OrderBy(p => p.Name).ToList();
      ret = StatusCode(StatusCodes.Status200OK,
        list);
    } else {
      ret = StatusCode(
        StatusCodes.Status404NotFound,
        "No " + ENTITY_NAME +
        "s exist in the system.");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to get all " +
      ENTITY_NAME + "s.");
  }

  return ret;
}
Listing 7: The Get(int id) method is used to retrieve a single record from the database.

[HttpGet("{id}")]
public IActionResult Get(int id) {
  IActionResult ret = null;
  Product entity = null;

  try {
    entity = _DbContext.Products.Find(id);

    if (entity != null) {
      ret = StatusCode(StatusCodes.Status200OK,
        entity);
    } else {
      ret = StatusCode(
        StatusCodes.Status404NotFound,
        "Can't find " + ENTITY_NAME + ": " +
        id.ToString() + ".");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to retrieve " +
      ENTITY_NAME + " ID: " +
      id.ToString() + ".");
  }

  return ret;
}
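The two GET methods above establish the URL shapes the Ajax client later in this article will call. As a small illustration (a hypothetical helper, not part of the article's code; port 5000 as used in the Try It Out steps), the routing convention can be sketched in JavaScript:

```javascript
// Hypothetical client-side helper illustrating the route shapes
// exposed by ProductController via [Route("api/[controller]")].
const BASE = "http://localhost:5000/api/product";

function productUrl(id) {
  // No id:   GET /api/product      -> all products (Listing 6)
  // With id: GET /api/product/706  -> one product  (Listing 7)
  return id === undefined ? BASE : `${BASE}/${id}`;
}

console.log(productUrl());    // http://localhost:5000/api/product
console.log(productUrl(706)); // http://localhost:5000/api/product/706
```

The same shape with the HTTP verb switched to DELETE drives the Delete(int id) method shown later.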
…those fields that you pass in from the front-end. Most likely, you won't be doing this in your applications; I'm just trying to keep the sample as small as possible.

Delete Product Data
Now that you have inserted and updated product data, let's learn to delete a product from the table. Create a DELETE Web API method, as shown in Listing 10. Pass in the unique ID for the product you wish to delete. Locate the product using the Find() method and, if the record is found, call the Remove() method on the Products collection in the DbContext. Call the SaveChanges() method on the DbContext object to submit the DELETE SQL statement to the SQL Server database.

Listing 10: Pass in the unique product ID of the product you wish to delete.

[HttpDelete("{id}")]
public IActionResult Delete(int id) {
  IActionResult ret = null;
  Product entity = null;

  try {
    entity = _DbContext.Products.Find(id);
    if (entity != null) {
      _DbContext.Products.Remove(entity);
      _DbContext.SaveChanges();
      ret = StatusCode(
        StatusCodes.Status200OK, true);
    } else {
      ret = StatusCode(
        StatusCodes.Status404NotFound,
        "Can't find " + ENTITY_NAME +
        " ID: " + id.ToString() +
        " to delete.");
    }
  } catch (Exception ex) {
    ret = HandleException(ex,
      "Exception trying to delete " +
      ENTITY_NAME + " ID: " +
      id.ToString() + ".");
  }

  return ret;
}

Create a .NET MVC Application
To make calls using Ajax from a Web page to the Web API server, you need to run the HTML pages from their own Web server. If you wish to use a .NET MVC application, follow along with the steps in this section of the article. If you wish to use a Node server, skip to the next section of this article, entitled "Create a Node Web Server Project".
Load the AjaxSample folder in VS Code and wait a few seconds until you see a prompt at the bottom right of your screen asking to add some assets to the project, as shown in Figure 8. Answer Yes to the prompt to add the required assets.

Open the \Views\Home\Index page file and make the file look like the code shown in Listing 11. You're going to add more HTML to this file later, but this small set of HTML provides a good starting point.

Change the Port Number
Open the launchSettings.json file located under the \Properties folder and modify the applicationUrl property under the AjaxSample property, as shown in bold below. Because you're going to be using a Web API server that's running on a different port, and thus a different domain, Cross-Origin Resource Sharing (CORS) needs a specific URL that you can whitelist. Change the URL to use port number 3000. Later, when you create your Web API server, you're going to use this domain address with CORS to allow calls to travel from one domain to another.

Create a Node Web Server Project
Create the project folder and install lite-server:

CD D:\Samples
MkDir AjaxSample
CD AjaxSample
npm init -y
npm install lite-server

Once the last command finishes, select File > Open Folder… from the menu and select the AjaxSample folder you created. You should now see a \node_modules folder and two .json files: package-lock and package. Open the package.json file and add a new property to the "scripts" property.

"scripts": {
  "dev": "lite-server",
  "test": "echo \"Error: no test specified\" && exit 1"
},

Create a Home Page
Like any website, you should have a home page that starts the Web application. Add a new file under the \AjaxSample folder named index.html. Add the HTML shown in Listing 12 into this new file. You're going to add more HTML to this file later, but this small set of HTML provides you with a good starting point.
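lite-server (which wraps BrowserSync) serves on port 3000 by default, which matches the port whitelisted for CORS in the MVC section. If you ever need to change the port or the folder being served, lite-server reads an optional bs-config.json file placed next to package.json. This file is not part of the article's steps; it's a minimal sketch:

```json
{
  "port": 3000,
  "server": { "baseDir": "./" }
}
```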
Using XMLHttpRequest
As mentioned earlier in this article, the most fundamental building block of Ajax is the XMLHttpRequest object. Let's now use this object to retrieve the products in the \resources\products.json file on your Web server. Open the index page file and modify the empty get() function to look like the following code.

function get() {
  let req = new XMLHttpRequest();
  req.onreadystatechange = function () {
    console.log(this);
  };

  req.open("GET", URL);

  req.send();
}

Figure 9: Your home page should display when you run the lite-server.

Listing 12: Add a home page for your application.

<!DOCTYPE html>
<html>
<head>
  <title>Ajax Samples</title>
</head>
<body>
  <h1>Ajax Samples</h1>
  <p>Bring up console window</p>

  <button type="button" onclick="get();">Get Products</button>

  <script>
    'use strict';

    const URL = "/resources/products.json";
    //const URL = "http://localhost:5000/api/product";

    function get() {
    }
  </script>
</body>
</html>

The code above creates an instance of the XMLHttpRequest object that's built into your browser. It then assigns a function to the onreadystatechange event, which fires whenever the XMLHttpRequest object performs various operations that you learn about in the next section of this article. The next line of code calls the req.open() method and passes the type of request it's performing and the URL on which to send the request. In this case, you are performing a GET on the URL pointing to the /resources/products.json file. The final line of code, req.send(), sends an asynchronous request to the Web server to retrieve the products.json file.
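The onreadystatechange handler fires several times during a single request as the readyState property advances through the values defined by the XMLHttpRequest standard; the response is only safe to read at the end. As a standalone sketch (a plain function so it can run outside the browser; the function name is illustrative, not from the article):

```javascript
// readyState values defined on XMLHttpRequest (per the standard):
// 0 UNSENT, 1 OPENED, 2 HEADERS_RECEIVED, 3 LOADING, 4 DONE
const DONE = 4;

// Mirrors the condition used in the article's get() function:
// the response is ready only when the request is DONE and the
// server answered with HTTP 200 (OK).
function isResponseReady(readyState, status) {
  return readyState === DONE && status === 200;
}

console.log(isResponseReady(4, 200)); // true
console.log(isResponseReady(3, 200)); // false (still LOADING)
console.log(isResponseReady(4, 404)); // false (finished, but not OK)
```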
function get() {
let req = new XMLHttpRequest();
req.onreadystatechange = function () {
if (this.readyState===XMLHttpRequest.DONE &&
this.status === 200) {
displayResponse(this);
}
};
req.open("GET", URL);
req.send();
}
Try It Out
Save all the changes within VS Code. Restart the debugger if using MVC, then go back to your browser and click on the Get Products button again. In the console window, you should see the results of your request.

Figure 10: Bring up the Developer Tools on your browser to see the results of your JavaScript.

Add the code shown in Listing 14 just below the <h1> and the <p> tag to your index page.

Modify Script on a Page
You added more buttons in the index page, so you need to add functions to correspond to each button's click event. Modify the code in the <script> tag to look like Listing 15.

Add Helper Functions into Scripts Folder
As with most applications, you're going to have some generic functions that you can reuse on many Web pages. To make them easy to reuse, create an ajax-common.js file into which you may place these functions. You then put this file into a folder that you can reference from any page that needs them. Create a file named ajax-common.js in the \wwwroot\js folder if using MVC or, if you're using lite-server, create a \scripts folder and place the file into that folder. Into this file, create three functions: getValue(), setValue(), and handleAjaxError().

Add a handleAjaxError() Helper Function
Ajax errors can generally be handled by a common piece of code, as shown below. Add this function into the ajax-common.js file. Don't worry about what it does for now; you'll learn more about this function in a later article.

function handleAjaxError(error) {
  displayError(error);
  switch (error.status) {
    case 404:
      console.log(error.responseText);
      break;
    case 500:
      console.log(error.responseText);
      break;
    default:
      console.log(error);
      break;
  }
}
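The branching in handleAjaxError() can be reduced to a pure function, which makes the dispatch on error.status easy to see and test in isolation. This is only a sketch, not the article's code: displayError() is omitted, the function name is hypothetical, and the error object shape (status, responseText) follows the error object used above.

```javascript
// Standalone sketch of the status dispatch inside handleAjaxError().
// For 404 and 500 the server supplies a message body (responseText);
// anything else (network failure, timeout, etc.) is reported generically.
function describeAjaxError(error) {
  switch (error.status) {
    case 404:
    case 500:
      return error.responseText;
    default:
      return String(error.status) + ": unexpected Ajax error";
  }
}

console.log(describeAjaxError(
  { status: 404, responseText: "Can't find product: 999." }
)); // Can't find product: 999.
```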
…the roles of developers and designers and the dynamic between them.

Designers were once solely responsible for dreaming up beautiful interfaces, and they're now often expected to define the entire front-end UI/UX experience for users. Because designers are the direct gatekeepers for overloaded developers, they're critical for eliminating anything that could result in extraneous coding or lack of clarity before handing off projects for development.

This designer-to-developer handoff has long been a source of frustration and animosity among design and development teams, and of inefficiency and lost productivity for companies. It's also one of the greatest opportunities for transformation.

According to Gartner, "A design system is one of the most important strategic assets for an organization that produces digital products. A robust design system drastically shortens design and development timelines, ensures the user interface design is consistent, predictable, and usable, and guarantees brand compliance."

Even better, there are third-party tools that can pull out digital assets like CSS, HTML, and even code from the designs, which ensures a mistake-free coded output and often leads to a significantly accelerated product delivery.

Four Ways to Accelerate Design
…platform; don't let one team use React and another use Angular. The potential short-term gain because of familiarity or experience is minuscule when compared to the long-term cost of rectifying tool and platform compatibility.

A Few Testers Find Most Issues: According to Nielsen Norman Group, the best results in finding UX issues in a design come from testing no more than five users and running as many small tests as you can afford.

The High Cost of Ignoring the UX Process
Considering everything you've read in this article, you might assume that incorporating user experience design activities into the standard software development process would add time and cost to projects. However, these activities save time and money by designing the right solution from the beginning and by finding and correcting problems early in the project, when they're easy and inexpensive to change. User interfaces designed by someone who understands and applies principles of human factors and design best practices help to avoid many UX problems. Iterative user testing and redesign finds and fixes problems and validates the design direction. Before development begins, the design is validated by both the business and users, eliminating costly change requests due to unmet requirements and usability problems late in the development process.

• Avon Products Inc. gave up on a four-year, $125 million software overhaul after a test of the system in Canada revealed that the system was so burdensome and difficult to use that many salespeople quit the company.

To realize gains like American Airlines did, and to avoid the massive time and expense loss shown in the Avon example, eliminating waste in the designer-to-developer handoff while accelerating and perfecting the design-to-code process is critical. If organizational leaders focus on tools that deliver the best, fastest business outcomes in their digital-first/digital-transformation objective rather than relying on individual, siloed teams to choose "tools of their choice," they can achieve these goals and they will accelerate delivery and drive down cost.

Jason Beres
jasonb@infragistics.com
www.infragistics.com
indigo.design
@jasonberes

Jason is the SVP of Developer Tools at Infragistics, where, for 17 years, he's held roles at the intersection…
EF Core 5: Building on the Foundation (https://www.codemag.com/Article/2010042/EF-Core-5-Building-on-the-Foundation) introduced you to a few of these capabilities. In this article, I'll dive more deeply into a broader collection of intriguing ways to access some of EF Core 5's metadata.

ToTraceString Revived as ToQueryString
This one is a blast from the past. In the first iterations of Entity Framework, there was no built-in logging. But there was at least ObjectQuery.ToTraceString(), a runtime method that would work out the SQL for a LINQ or Entity SQL query on the fly. Although it wasn't a great way to log, as it still required you to provide your own logic to output that SQL, there are some helpful use cases for it even today. This feature hasn't been part of EF Core until this latest version, EF Core 5, and has been renamed ToQueryString().

If you want to see what SQL is generated for the simple query of a DbSet called People, you just append ToQueryString to the query. There's no LINQ execution method involved. In other words, you're separating the query itself from the execution method, which would trigger the query to run against the database.

var sqlFromQuery = context.People.ToQueryString();

One interesting use case for ToQueryString is to look at its result while debugging so that you don't have to wait until after you've run your method to inspect SQL in logs.

In the case above, I could build the query, capture the string, and then execute the query.

private static void GetAllPeople()
{
  using var context = new PeopleContext();
  var query = context
    .People.Where(p => p.FirstName == "Julie");
  var sqlFromQuery = query.ToQueryString();
  var people = query.ToList();
}

Then, when debugging, I could see the expected SQL for the sqlFromQuery variable. But you don't need to embed this in your production code. In fact, I wouldn't recommend that, because it can easily impact performance as EF Core goes through its process of working out the SQL. Instead, you can call ToQueryString in the debugger, as shown in Figure 1.

The query variable in Figure 1 has already been evaluated as a DbQuery before I called ToQueryString in the debugger, and so that works. However, although you can debug the context and express a DbSet directly in the debugger, which means you could also run context.People.ToQueryString() in the debugger successfully, you can't evaluate LINQ expressions directly. In other words, if you were to debug the context variable and then tack on the Where method in the debugger, it will fail. That's nothing new and not a limitation of ToQueryString.

One last important point about ToQueryString in this scenario is that its evaluation is based on the simplest execution: ToList. Using a LINQ execution method such as FirstOrDefault affects how the SQL is rendered, and therefore ToQueryString renders different SQL than the SQL sent to the database when executing the query with FirstOrDefault. Gunnar Peipman has some good examples of this in his blog post: https://gunnarpeipman.com/ef-core-toquerystring.

Another use case where I find ToQueryString particularly helpful is in integration tests. If you need to write tests whose success depends on some part of the generated SQL expression, ToQueryString is a much simpler path than logging. With logging, you would have to capture the log into…

Julie Lerman
@julielerman
thedatafarm.com/contact

Julie Lerman is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design, and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and many popular videos on Pluralsight.com.
Here’s an example of a silly test to prove that EF Core writes You can use these categories to filter output to only the type
more intelligent SQL than I do. Note that I’m referencing of information you want to log.
the Microsoft.EntityFrameworkCore.Sqlite provider in my
test project. As you may know, EF and EF Core always project One parameter of LogTo specifies the target—either a con-
the columns related to the properties of the entity. It does sole window, a file, or the debug window. Then a second pa-
not write SELECT *. rameter allows you to filter by .NET LogLevel plus any DLog-
gerCategoy you’re interested in. This example configures a
[TestMethod] DbContext to output logs to the console using a delegate for
public void SQLDoesNotContainSelectStar() Console.WriteLine and it filters on all the DbLoggerCategory
{ types that fall into the LogLevel.Information group.
var builder = new DbContextOptionsBuilder();
builder.UseSqlite("Data Source=testdb.db"); optionsBuilder.UseSqlServer(myConnectionString)
using var context = .LogTo(Console.WriteLine,LogLevel.Information);
new PeopleContext(builder.Options);
var sql=context.People.ToQueryString(); This next LogTo method adds a third parameter—an array
Assert.IsFalse(sql.ToUpper() of DbLoggerCatetory (only one is included) to further fil-
.Contains("SELECT *")); ter on only EF Core’s Database commands. Along with the
} LogTo method, I’ve added the EnableSensitiveDataLogging
method to show incoming parameters in the SQL. This will
A more interesting example would be if you’re using an in- capture all SQL sent to the database: queries, updates, raw
terceptor to perform soft deletes and a global query filter SQL and even changes sent via migrations.
to always filter out those rows. Here, for example, is a query
filter in my DbContext OnModelBuildling method telling EF .LogTo(Console.WriteLine,
Core to append a predicate to filter out Person rows whose LogLevel.Information,
IsDeleted property is true. new[]{DbLoggerCategory.Database.Command.Name},
)
modelBuilder.Entity<Person>() .EnableSensitiveDataLogging();
.HasQueryFilter(p => !p.IsDeleted);
My Person type that includes the IsDeleted property from
With this in place, I can write a test similar to the one above also has a FirstName and LastName property. Here’s
above, but changing the assertion to the following to make the log when calling SaveChanges after adding a new Person
sure I don’t break the global query filter logic. object.
Notice the EventId at the top. You can even define your logging to filter on specific events using those IDs. You can also filter out particular log categories, and you can control the formatting. Check out the docs for more details on these various capabilities at https://docs.microsoft.com/en-us/ef/core/logging-events-diagnostics/simple-logging.

Simple logging is a high-level way to log EF Core, but you can also dive more deeply into the logger by working directly with Microsoft.Extensions.Logging to exert even more control over how EF Core's logs are emitted. Check the EF Core docs for more details on getting started with this more advanced usage: https://docs.microsoft.com/en-us/ef/core/logging-events-diagnostics/extensions-logging.
Responding to EF Core Events
EF Core 2.1 introduced .NET events in the EF Core pipeline. There were only two to begin with: ChangeTracker.Tracked, which is raised when the DbContext begins tracking an entity, and ChangeTracker.StateChanged, which is raised when the state of an already tracked entity changes.

With the base logic in place, the team was able to add three more events to EF Core 5 for SaveChanges and SaveChangesAsync.

• DbContext.SavingChanges is raised when the context is about to save changes.
• DbContext.SavedChanges is raised after either of the two save changes methods has completed successfully.
• DbContext.SaveChangesFailed is used to capture and inspect a failure.

It's nice to be able to separate this logic rather than stuffing it all into a single override of the SaveChanges method.

You could even use these events to emit alternate information that's not tracked by the logger. The EF Core docs use an example where they output timestamps when data is added, updated, or deleted.
System.Diagnostics.Process.GetCurrentProcess().Id

With the ID in hand and the app still running, you can then trigger the counter to begin monitoring events coming from the Microsoft.EntityFrameworkCore namespace. Note that I've wrapped the command for display purposes.

dotnet counters monitor
  Microsoft.EntityFrameworkCore -p 28436

Then, as you run through your application, the counter displays a specific list of EF Core stats, as shown in Figure 2, and updates the counts as the application performs its functionality. I only used it with a small demo app, so my counts aren't very interesting, but you can see that I have a single DbContext instance running (Active DbContexts), I've run three queries, leveraged the query cache (because I ran some of those queries more than once), and called SaveChanges twice.

This looks like another interesting tool to have in your analysis toolbelt, but it will certainly be more useful when running against a more intensive solution. In the docs, the EF team recommends that you read up on the dotnet-counters feature to properly benefit from using it with EF Core.

Figure 3: The makeup of a DbDataReader passed into the ReaderExecuted command

private void SetUserId(object sender,
  SavingChangesEventArgs e)
{
  foreach (var entry in ChangeTracker.Entries()
    .Where(entry => entry.Metadata
      .GetProperty("UserId") != null))
  {
    entry.Property("UserId").CurrentValue =
      Globals.CurrentUserId;
  }
}

Finally, I can wire up the SetUserId method to the SavingChanges event in the constructor of the DbContext:

public PeopleContext()
{
  SavingChanges += SetUserId;
}
If I were to edit that Person while it's still being tracked and force the context to detect changes, the LongView, in addition to showing the state as Modified, also notes the change I made to the LastName property.

The DebugViews output nicely formatted strings filled with information about the state of a context's ChangeTracker or metadata from the model. DebugView provides a beautiful document that you can capture and print out to really get a good look at what's going on under the covers. I spend a lot of time in the debugger drilling in to explore various details about what the change tracker knows or how EF Core is interpreting the model I've described. The ability to read this information in this text format, and even save it in a file so you don't have to debug repeatedly to glean details, is a fantastic feature of EF Core 5. In fact, making sure you know about DebugViews was my inspiration for writing this article.

I love these debug views that help me at debug time to discover the state and relationships of my tracked objects, whether I'm problem solving or just learning how things work.

The way to get to this information is when debugging an active DbContext instance, in DbContext.ChangeTracker.DebugView.

Let's flip over to the Model.DebugViews to see what you can learn from them. First, I should clarify my model. It's the same model I used for the earlier article. In Figure 4, I'm using the EF Core Power Tools extension in Visual Studio to visualize how EF Core interprets my model. My classes are Person, Address, WildlifeSighting, and ScaryWildlifeSighting. As mentioned already, Person and Address have a many-to-many relationship where EF Core infers a join entity. WildlifeSighting has a one-to-many relationship with Address, and ScaryWildlifeSighting inherits from WildlifeSighting using a Table-Per-Hierarchy mapping to the database.
Introduction to Containerization
Using Docker
Containerization has been a buzzword in the IT industry for the past several years. The term "containerization" has been increasingly used as an alternative or companion to virtualization. But what exactly is containerization, how does it work, and how does it compare with virtualization? In this article, I'll take you on a journey to discover just what containerization is through the use of the Docker platform. By the end of this article, you'll have a much clearer idea of how Docker works, and how to leverage it in your next development project.

Wei-Meng Lee
weimenglee@learn2develop.net
www.learn2develop.net
@weimenglee

Wei-Meng Lee is a technologist and founder of Developer Learning Solutions (www.learn2develop.net), a technology company specializing in hands-on training on the latest technologies. Wei-Meng has many years of training experience, and his training courses place special emphasis on the learning-by-doing approach. His hands-on approach to learning programming makes understanding the subject much easier than reading books, tutorials, and documentation. His name regularly appears in online and print publications such as DevX.com, MobiForge.com, and CODE Magazine.

What Docker Is and How It Works
Whenever people talk about containerization, they start to think of something they're already familiar with: virtual machines (VMs). Since this is the case, I'll explain Docker by first using VMs as an example.

Virtual Machines vs. Docker
Figure 1 shows the various layers in a computer that uses VMs. The bottom layer is the hardware of your computer, with the OS sitting on top of it. On top of the OS is the hypervisor (also known as the Virtual Machine Monitor), which is software that creates and runs virtual machines. Using the hypervisor, you can host multiple guest operating systems (such as Windows, Linux, etc.). Each VM contains a separate set of libraries needed by your application, and each VM is allocated a specific fixed amount of memory. Therefore, the number of VMs you can host on your computer is limited by the amount of memory you have.

Figure 2 shows how Docker fits into the picture. Instead of a hypervisor, you now have a Docker engine. The Docker engine manages a number of containers, which host your application and the libraries that you need. Unlike VMs, each container doesn't have a copy of the OS; instead, it shares the resources of the host's operating system.

As mentioned, a Docker container doesn't have any operating system installed and running on it. But it does have a virtual copy of the process table, network interface(s), and the file system mount point(s) (see Figure 3). These are inherited from the operating system of the host on which the container is hosted and running. The kernel of the host's operating system is shared across all the containers that are running on it. Docker virtualizes the operating system of the host on which it's installed and running, rather than virtualizing the hardware components.

Docker virtualizes the OS of your computer and simplifies the building, running, managing, and distribution of applications.

Docker is written to run natively on the Linux platform, and the host and container OS must be the same.

Wait a minute. If the host and container OS must be the same (that is, Linux), how do you run Docker on operating systems such as Windows and macOS? It turns out that if you're using Windows or macOS, Docker creates a Linux virtual machine, which itself hosts the containers (see Figure 4). This is why, if you use Windows, you'll need to install WSL2 (Windows Subsystem for Linux 2, which ships a full Linux kernel built by Microsoft). For Mac, Docker uses macOS's Hypervisor framework (HyperKit) to create a virtual machine to host the containers.

Each container is isolated from the other containers present on the same host. Thus, multiple containers with different application requirements and dependencies can run on the same host.

Uses of Docker
The first question that you might ask is: "Okay, now that I know how Docker works, give me a good reason to use Docker." Here's one that I usually use to answer this question.
Figure 1: How a virtual machine works
Figure 2: How Docker works
Downloading and Installing Docker
With all the theory behind us, it's time to get your hands dirty and experience for yourself how Docker works.

Using a Web browser, go to https://docs.docker.com/docker-for-windows/install/ and download Docker for the OS you're using. Follow the installation instructions and, when you're done, you should see the Docker Desktop icon (the whale logo) in the system tray (for Windows). Clicking on the icon launches Docker for Windows (see Figure 5).

A number of administrative tasks in Docker can be accomplished through the Docker Desktop app, but you can do more with the Docker CLI. For the rest of this article, I'll demonstrate the various operations through the CLI.

Docker Images and Containers
In Docker, there are two terminologies that are important:

• Docker Image: A Docker image is a read-only file containing the libraries, tools, and dependencies that are required for an application to run. Examples of Docker images are the MySQL Docker image, the Python Docker image, the Ubuntu Docker image, and so on. Each Docker image can be customized by adding layers (with additional tools and libraries added on each layer, for example; see Figure 6) on it. A Docker image can't run by itself. To do so, you need to create a Docker Container.

Figure 8: Trying Docker for the first time

The docker run hello-world command downloads (or in Docker-speak, pulls) the hello-world Docker image from Docker Hub and then creates a Docker container using this image; it then assigns a random name to the container and starts it. Immediately, the container exits.

The hello-world image, as useless as it is, allows you to understand a few important concepts of Docker. Rest assured that you'll do something useful after this.

Viewing the Docker Container and Image
If you go to the Docker Desktop app and click on the Images item on the left (see Figure 9), you'll see the hello-world image listed.

If you now click on the Containers / Apps item on the left (see Figure 10), you should see a container named elated_bassi (you'll likely see a different name, as names are randomly assigned to a container) based on the hello-world image. If you click on it, you'll be able to see the logs generated by the container, as well as inspect the environment variables associated with the container and the statistics of the running container.

You can also view the Docker container and image using the command prompt. To view the currently running containers, use the docker ps command. To view all containers (including those that have already exited), use the docker ps -a command (see Figure 11).

To explicitly name the container when running it, use the --name option, like this:
Figure 10: Viewing the container created as well as the logs generated by the container
Stopping a Container
If you need to stop a container that's running, you can use the docker stop command together with the container ID of the container you wish to stop. Although the hello-world container ran and exited immediately, you'll find this command useful later on when you need to manually stop a running container.

Figure 11: Using the docker ps -a command to view all containers

Removing a Container
When a container has finished running and is no longer needed, you can delete it using the docker rm command. To delete a container, you need to first get the container ID of the container that you want to delete using the docker ps -a command, and then specify the container ID with the docker rm command:

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS   NAMES
138cd9c6bc5c   hello-world   "/hello"   24 seconds ago   Exited (0) 23 seconds ago           jovial_borg
0099984a5fc2   hello-world   "/hello"   29 minutes ago   Exited (0) 29 minutes ago           elated_bassi

C:\>docker rm 138cd9c6bc5c

If you now use the docker ps -a command to view all the containers, you should find that the specified container no longer exists.

C:\>docker images
REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
hello-world   latest   bf756fb1ae65   12 months ago   13.3kB

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED       STATUS                   PORTS   NAMES
0099984a5fc2   hello-world   "/hello"   2 hours ago   Exited (0) 2 hours ago           elated_bassi

True enough, there is a container named elated_bassi (0099984a5fc2) that's using the image. In this case, you need to remove the container first before you can remove the image:

C:\>docker rm 0099984a5fc2
C:\>docker rmi bf756fb1ae65
Untagged: hello-world:latest
Untagged: hello-world@sha256:1a523af650137b8accdaed439c17d684df61ee4d74feac151b5b337bd29e7eec
Deleted: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
Deleted: sha256:9c27e219663c25e0f28493790cc0b88bc973ba3b1686355f221c38a36978ac63

Sometimes you might have a lot of images on your computer and you just want to remove all the images that aren't used by any containers. In this case, you can use the docker image prune -a command.

Figure 13: Using the Docker Hub to search for images that you need

C:\>docker ps -a
CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS                      PORTS   NAMES
80f6a4603277   ubuntu   "bash"    42 minutes ago   Exited (2) 35 minutes ago           eager_wozniak

root@80f6a4603277:/# curl
curl: try 'curl --help' or 'curl --manual' for more information
Figure 17: Viewing the modified Web page on the nginx container
Here's a list of the uses for the various options in the above command:

• The -d option (or --detach) runs the container in the background.
• The -p option maps the exposed port to the specified port; here, 88:80 means all traffic to port 88 on the local computer is forwarded to port 80 in the container.
• The --name option assigns a name to the container.
• nginx is a Docker image from Docker Hub.

Once the container has been created, you can verify the mapping of the ports using the docker port command:

C:\>docker port webserver
80/tcp -> 0.0.0.0:88

You're ready to test the Web server to see if it works. Use your Web browser and enter the following URL: http://localhost:88 (see Figure 16). If the container is running properly, you should see the welcome message.

Modifying the Docker Container
Although being able to run the Web server through a Docker container is cool, you want to be able to customize the content of your Web pages. To do that, type in the following commands in bold:

C:\>docker exec -it 469be21874e7 bash
root@469be21874e7:/# apt-get update
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [260 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [7860 B]
Fetched 8414 kB in 2s (3929 kB/s)
Reading package lists... Done
root@469be21874e7:/# apt-get -y install vim

The above commands:

• Connect to the nginx container interactively and execute the bash shell
• Update the package cache on the container
• Install the vim editor on the container

Once the vim editor is installed on the container, let's change to the Web publishing directory and edit the index.html file with the content shown in Listing 1:

root@469be21874e7:/# cd usr/share/nginx/html/
root@469be21874e7:/usr/share/nginx/html# vim index.html

Listing 1: Modifying the content of index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Hello World</title>
    <style>
        h1 {
            font-weight: lighter;
            font-family: Helvetica, sans-serif;
        }
    </style>
</head>
<body>
    <h1>
        Hello, Docker!
    </h1>
</body>
</html>

When you refresh the Web browser, you'll now see the page shown in Figure 17.

What if you want to transfer an image into the container? Easy. Use the docker cp command:

C:\>docker cp docker.png 469be21874e7:/usr/share/nginx/html

In that snippet, 469be21874e7 is the container ID of the nginx container. The above command copies the file named docker.png into the container's /usr/share/nginx/html directory.

In the interactive shell of the container, edit the index.html file again:

root@469be21874e7:/usr/share/nginx/html# vim index.html

Add the additional line shown in bold in Listing 2 to the index.html file.

Listing 2: The content of index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Hello World</title>
    <style>
        h1 {
            font-weight: lighter;
            font-family: Helvetica, sans-serif;
        }
    </style>
</head>
<body>
    <h1>
        Hello, Docker!
    </h1>
    <img width="100" src="docker.png"/>
</body>
</html>
Figure 18: The Web page now shows the Docker logo
Docker images use tags for version control. The latest tag is simply the default tag applied to an image for which no tag is specified.

Refreshing the Web browser, you should now see the image of the Docker logo (see Figure 18).
The -p option shown above maps a port on the local computer to the container's port 3306. You can use the docker port command to list the port mappings of the My-mysql container:

C:\>docker port My-mysql
3306/tcp -> 0.0.0.0:49153

Inspecting the Docker Container
Now that you've started the MySQL container running, you may be wondering where all the database's data is stored. This is a good opportunity to dive deeper into the container to learn how directories are mapped.

You can examine your container in more detail using the docker inspect command:

C:\>docker inspect My-mysql

The above command will yield something like the results shown in Listing 4 (with the important part highlighted). The result is quite a lengthy bunch of information. For this discussion, let's focus on three specific keys: "Type", "Source", and "Destination".

The value of the "Source" key refers to the physical directory used by MySQL to store its data. If you run Docker on a Linux computer, this directory refers to an actual directory on the local computer; on a Mac or Windows computer, it's a directory on the virtual machine created by Docker.

The value of the "Destination" key refers to the (logical) directory used by MySQL in the Docker container. That is to say, if you connect to the MySQL container interactively, you'll be able to change into the /var/lib/mysql directory and examine its content from within:

C:\>docker exec -it My-mysql bash
root@8a0496aaca18:/# cd /var/lib/mysql
root@8a0496aaca18:/var/lib/mysql# ls
...

Finally, the value of the "Type" key is "volume". This means that all the changes you make to the /var/lib/mysql directory won't be persisted later on when you commit the container as a Docker image. When that happens, all the data that you previously stored in that MySQL database will be lost. To resolve this, it's always good to map the /var/lib/mysql directory to a directory on your local computer (so that it can be backed up independently). This topic is beyond the scope of this article, but if you want to know more details, check out the -v option of the docker run command.

Connecting to the MySQL Container Using a Local MySQL Client
Now that your MySQL server in the container is up and running, it's time to create a database in it, add a table, and insert some records. For this, you can make use of the mysql client that ships with the MySQL installer. You can download the MySQL Community Edition from https://dev.mysql.com/downloads/mysql/ and install the mysql client onto your local computer (during the installation stage, you can choose to install only the client).

For Windows users, the mysql utility is, by default, located in C:\Program Files\MySQL\MySQL Server 8.0\bin, so you need to change into that directory before running the mysql client:

C:\Program Files\MySQL\MySQL Server 8.0\bin>mysql -P 49153 --protocol=tcp -u root -p
Enter password: ********
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.22 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

The above command in bold connects to the MySQL server on the container using the port 49153 (replace this with the port number assigned to your own MySQL container) and logs in as root. Once you enter the password, you should be able to see the MySQL prompt:

mysql>

Creating a Database, Table, and Inserting a Row
In the MySQL prompt, enter the following commands in bold:

C:\>python mysql.py
0001

If you see the above output, your Python has successfully connected to the MySQL server on the container!

Summary
I hope this article has provided a clearer picture of how containerization works. In particular, I used Docker as an example and provided a few examples of how to use Docker images to create containers. Let me know how you're using Docker in your own development environment.

Wei-Meng Lee
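As a small footnote to the inspect discussion above: the Mounts section of docker inspect output is plain JSON, so the three keys can also be pulled out programmatically. The sample below is a trimmed, hypothetical snippet (not the article's actual Listing 4), just to show the shape of the data:

```javascript
// Hypothetical, trimmed docker inspect output: the real command returns a
// JSON array with one object per inspected container.
const sample = `[
  {
    "Mounts": [
      {
        "Type": "volume",
        "Source": "/var/lib/docker/volumes/abc123/_data",
        "Destination": "/var/lib/mysql"
      }
    ]
  }
]`;

// Parse the JSON and report each mount's Type, Source, and Destination.
const mounts = JSON.parse(sample)[0].Mounts;
for (const m of mounts) {
  console.log(`${m.Type}: ${m.Source} -> ${m.Destination}`);
}
// → volume: /var/lib/docker/volumes/abc123/_data -> /var/lib/mysql
```

In practice you'd feed this the output of docker inspect My-mysql instead of a hard-coded string.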
packing your common code into a Vue plug-in. A Vue plug-in is self-contained code that adds global-level functionality to your app. When you create it, you package it as an NPM package. Then, whenever you need this plug-in, you install it into your app to avoid repetition of components, directives, global functions, and the like.

Bilal Haidar
bhaidar@gmail.com
www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, a Microsoft MVP of 10 years, an ASP.NET Insider, and has been writing for CODE Magazine since 2007. With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions. He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer. Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript, and TypeScript.

Vuex (https://next.vuex.vuejs.org/) and Vue Router (https://next.router.vuejs.org/) are two examples of Vue plug-ins. You'll use these two plug-ins in almost every Vue app you develop.

In summary, here's a list of possible scenarios for which you might consider building a custom Vue plug-in:

• Adding some global methods or properties
• Adding directives
• Adding global mixins (https://v3.vuejs.org/guide/mixins.html)
• Adding global instance methods
• Adding a library that provides an API of its own while injecting some combination of the above

Generally, I watch for common functionality that I keep copying over and over from one app to another. Based on that, I decide whether to create a plug-in or not. Plug-ins make your life easier and your code more organized by promoting common functionality into a common package that all apps can use. When you change the plug-in source code and publish the package again, you only need to go over all the apps that are using the plug-in and upgrade it.

Vue 3 Plug-ins
Vue 3 introduced a long list of changes and additions to the Vue framework. The most obvious is the way you create and load a Vue app.

Previously, in Vue 2, you instantiated an object of the Vue global constructor that represented the entire app. You used this instance to install plug-ins and define directives, components, and other Vue intrinsic objects. This approach has its own pitfalls and limitations. For instance, if you create multiple Vue apps on the same page, they all share the same Vue global constructor. Hence, if you define a directive on the Vue global constructor, all the instances on the page have access to this directive. You use the Global API (https://vuejs.org/v2/api/#Global-API) to create an app in Vue 2. It has all the functions required to interact with the Vue global constructor.

In Vue 3, things have drastically changed when it comes to defining apps. You no longer use the Vue global constructor to instantiate apps. Instead, Vue 3 introduced the Application API (https://v3.vuejs.org/api/application-api.html), which I'll introduce shortly, to standardize creating and instantiating apps. Vue 3 introduced the createApp() function that returns an app instance. It represents the app context and is used to define plug-ins, components, directives, and other objects. In a nutshell, this app instance replaces the Vue instance in Vue 2.

In Vue 3, the global and internal APIs have been restructured with tree-shaking support in mind.

This approach has tremendous advantages, especially when it comes to instantiating multiple Vue apps without cluttering the Vue global space. An app instance creates a boundary and isolates its components, directives, plug-ins, and other Vue intrinsic objects from other app instances.

Moreover, Vue 3 brings over the new Composition API (https://composition-api.vuejs.org/api.html). Vue 2 had the Options API (https://vuejs.org/v2/api/#Options-Data) sitting at its core. In Vue 2, you construct your component as an object with properties. For example, data, props, watch, and many other properties are part of the Options API that you can use to attach functionality onto a component.

In Vue 3, you can still make use of the Options API. It's still there and will continue to be there for a while. This makes migrating Vue 2 apps to Vue 3 much easier, as Vue 3 is backward compatible with the Options API. However, Vue 3 also introduces the Composition API. This API is optional. You use the Composition API inside your Vue component by using the setup() function. This function takes two inputs as parameters: the props and the context.

setup(props, context) {
  const attrs = context.attrs;
  const slots = context.slots;
  const emit = context.emit;
}

This code snippet represents the setup() function signature.

The props object represents the component's props. Whatever props you define on the component are available to the setup() function. Note that the props object is reactive. This means that the object is updated inside the setup() function when new props are passed into the component.

The context input parameter is an object that contains three main properties: attrs, slots, and emit.

To support the Options API inside a plug-in, you make use of the Application Config (https://v3.vuejs.org/api/application-config.html) that's part of the Application API (https://v3.vuejs.org/api/application-api.html). Listing 1 shows a sample Vue 3 plug-in that uses the Application Config.

The code in Listing 1 shows how to define a global property using the app.config.globalProperties object. In the coming sections, I'll explore both the Application API and the Application Config.

Listing 2 shows the code for the sample plug-in written in Vue 2. Notice the use of the Vue.prototype object to define instance methods that can be accessed inside a component instance.

In part one of this series (CODE Magazine, January/February 2021), you learned about the provide/inject API. Now, in order to support the Composition API, you must make use of the provide/inject API to provide the plug-in functionality inside the setup() function. Listing 3 shows how to add support for the provide/inject API.

Listing 3: i18n Vue 3 plug-in with support for the Composition API
export default {
  install: (app, options) => {
    function translate(key) {
      return key.split(".").reduce((o, i) => {
        if (o) return o[i];
      }, options);
    }

    app.config.globalProperties.$translate = translate;

    app.provide("i18n", {
      translate
    });
  }
};
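The reduce() call in Listing 3 is the heart of the plug-in, so here it is isolated as plain JavaScript (no Vue required). The options object below is a hypothetical messages dictionary that a caller might pass when installing the plug-in:

```javascript
// Hypothetical messages object, as a user might pass to the plug-in.
const options = {
  greetings: { hello: "Bonjour!" }
};

// Same logic as translate() in Listing 3: walk the dot-separated key
// down through the nested options object.
function translate(key) {
  return key.split(".").reduce((o, i) => {
    if (o) return o[i];
  }, options);
}

console.log(translate("greetings.hello"));   // → "Bonjour!"
console.log(translate("greetings.missing")); // → undefined (no crash)
```

The guard (if (o) return o[i]) is what keeps a lookup on a missing branch from throwing; it simply yields undefined instead.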
Let's look at some of the functions. You can read more about each function by following its corresponding link.

The component() Function
Use the component() function to register or retrieve a global component (https://v3.vuejs.org/api/application-api.html#component):

// register a component
app.component('my-component', {
  /* ... */
})

// retrieve a registered component
const MyComponent = app.component('my-component')

The config Object
The config object is the app's global configuration object (https://v3.vuejs.org/api/application-api.html#config):

app.config = {...}

The directive() Function
Use the directive() function to register or retrieve a global directive (https://v3.vuejs.org/api/application-api.html#directive).

To register a directive using an object:

app.directive('my-directive', {
  // Directive has a set of lifecycle hooks:
  beforeMount() {},
  /* ... */
})

To register a directive using a function:

app.directive('my-directive', () => {
  /* ... */
})

To retrieve a directive:

const myDirective = app.directive('my-directive')

The mixin() Function
This function registers an app-global mixin that's available in all component instances (https://v3.vuejs.org/api/application-api.html#mixin):

app.mixin({
  created() { /* ... */ },
})

The mount() Function
The mount() function mounts a root component of the application instance on the provided DOM element (https://v3.vuejs.org/api/application-api.html#mount).

The unmount() Function
The unmount() function unmounts a root component of the application instance from the provided DOM element (https://v3.vuejs.org/api/application-api.html#unmount).

The use() Function
The use() function installs a Vue plug-in. This function accepts, as a first input, either an object representing the plug-in and having an install() function, or a function representing the plug-in itself. In addition, it accepts, as a second input, an options parameter. Vue automatically passes the options input parameter to either the install() function or the plug-in function, depending on what's passed as the first input parameter (https://v3.vuejs.org/api/application-api.html#use).

This is just a brief summary of the available functions on the Application API. You can read more about each function by following its corresponding link. You can also read part one of this series (in CODE Magazine, January/February 2021), where I divulge the details of the provide/inject API in Vue 3.

Now let's focus on the app.config object.

Application Config
An app config is an object that you use to store Vue app global configurations. This object exposes several properties and functions. In this article, I'll focus on only a few of them.

const app = createApp(App);
app.config = { … };
app.mount('#app');

The snippet creates a new Vue 3 app, sets the app.config object, and finally mounts the app into a DOM root element. Let's explore the most important features of the Application Config as far as this article is concerned.

The globalProperties Object
The app.config.globalProperties object allows you to add a global property that can be accessed in any component instance inside the application. The component's property takes priority when there are conflicting keys.

You define a global property:

app.config.globalProperties.foo = 'bar'
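Tying use() and globalProperties together, here's a minimal mock of the mechanism in plain JavaScript. This mirrors the behavior described above but is my own illustrative stand-in, not Vue's actual implementation; all names are made up:

```javascript
// Minimal stand-in for a Vue 3 app instance: just enough surface for use().
function createMockApp() {
  const app = {
    config: { globalProperties: {} },
    use(plugin, options) {
      // An object plug-in exposes install(); a function plug-in is called directly.
      if (typeof plugin === "function") plugin(app, options);
      else plugin.install(app, options);
      return app; // use() is chainable
    },
  };
  return app;
}

// An object-style plug-in: install() receives the app and the options.
const greeterPlugin = {
  install(app, options) {
    app.config.globalProperties.$greet = (name) => `${options.greeting}, ${name}!`;
  },
};

const app = createMockApp();
app.use(greeterPlugin, { greeting: "Hello" });
console.log(app.config.globalProperties.$greet("CODE")); // → "Hello, CODE!"
```

In a real Vue app, the $greet property defined this way would then be reachable as this.$greet inside any component instance.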
You retrieve a global property inside a component instance:

this.foo

You can also define a global instance method:

app.config.globalProperties.$translate = (key) => /* ... */

To use this method inside a component instance:

this.$translate('key');

The errorHandler() Function
The errorHandler() function defines a handler for uncaught errors that occur during the execution of the app. The Vue runtime calls the error handler, providing information about the error thrown and some additional information.

app.config.errorHandler = (err, vm, info) => {
  console.log(err);
  console.log(vm);
  console.log(info);
};

The err input parameter represents the actual error object, the info input parameter is a Vue-specific error string, and the vm input parameter is the actual component instance.

The warnHandler() Function
The warnHandler() function defines a handler for runtime Vue warnings. This handler is ignored in production.

For the sake of building a plug-in in this article, I'll focus on the app.config.globalProperties object.

When you use the vue-cli-service to run the app, the environment variables are loaded from the .env file.

The Vue CLI defines a concept named mode. There are three typical modes in the Vue CLI:

• development
• test
• production

You can use the mode when defining environment variable files in your app. The available files are listed in Table 1.

When you run the command npx vue-cli-service serve, the NODE_ENV environment variable is set to development. Hence, any of the following files are loaded if present: .env, .env.local, and .env.development.

When you run the command npx vue-cli-service build, the NODE_ENV environment variable is set to production.

The Vue CLI automatically loads the environment variables defined in the app.

The one requirement for defining Vue environment variables is to prefix their names with VUE_APP_. Let's say you define the following environment variable inside the .env file:

VUE_APP_APIKEY=...
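The VUE_APP_ convention can be sketched in a few lines of plain JavaScript. Note that getVueEnvVariables below is my own illustrative reimplementation of what a helper like the one in this article might do; it's not the article's actual Listing 5/6 code:

```javascript
// Keep only VUE_APP_-prefixed entries and strip the prefix from the keys.
function getVueEnvVariables(env) {
  return Object.entries(env)
    .filter(([key]) => key.startsWith("VUE_APP_"))
    .reduce((acc, [key, value]) => {
      const cleanedKey = key.replace("VUE_APP_", "");
      return { ...acc, [cleanedKey]: value };
    }, {});
}

// Hypothetical process.env-like input:
const env = {
  NODE_ENV: "development",
  VUE_APP_APIKEY: "abc123",
  VUE_APP_SEARCH_URL: "https://example.com/search",
};

console.log(getVueEnvVariables(env));
// → { APIKEY: 'abc123', SEARCH_URL: 'https://example.com/search' }
```

The non-prefixed NODE_ENV entry is filtered out, which is exactly why only VUE_APP_-prefixed variables ever reach client-side code.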
npm update -g @vue/cli
# OR
yarn global upgrade --latest @vue/cli

If you don't have the Vue CLI installed locally, install it first. You can verify the installed version with:

> vue --version
@vue/cli 4.5.9

Create a New Vue 3 App
Now that you've installed the Vue CLI on your computer, open a terminal window and run the following command to create a new Vue 3 app:

> vue create vue-env-variables

Once the Vue CLI finishes scaffolding and creating the app for you, all you need to do is change the directory to the app root folder and run the app.

I've placed two Vue environment variables in the .env file that you're going to use later to access an online images API (the same one I used in part one of this series). Save the file.

Now that you have the app created and the .env file populated with two environment variables, let's add the plug-in.

The plug-in file also imports the ./env-helper.js helper module. It exports the getVueEnvVariables() function that the plug-in uses:

// get an object of all vue variables
const vueVariables = getVueEnvVariables(env);

// make $env property available
// to all components using Options API
app.config.globalProperties.$env = vueVariables || {};

The plug-in exposes the vueVariables object on a global property named $env that's defined on the app.config.globalProperties object. To revise this, go back to the section on Application Config. Inside a component, you can then access a variable as:

this.$env.APIKEY

Now the $env object is exposed to the Options API for all components in the app. Still, you need to expose the same object inside the Composition API.

The module file defines a constant of type Symbol (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol). The Symbol() function returns a unique identifier. It can be used as a key to provide the environment variables to the app, as you'll see shortly.

Let's explore this function in more depth. Listing 5 shows the entire source code for this function. Given the .env file that you've defined previously, the envVariables object, when printed, should look as follows:

The function in Listing 6 receives a single environment variable in the form of a key/value pair object. It then checks whether this is a VUE environment variable; that is, whether the key starts with the VUE_APP_ prefix:

// construct a new key:value object
return {
  [cleanedKey]: value,
};

export default {
  name: 'CatsCollection',
  components: {
    'favorite-cat': FavoriteCat,
  },
};
</script>

<style lang="scss" scoped>
.cats-collection {
  width: 90%;
  margin: 0px auto;
  border-radius: 5px 5px 0 0;
  padding: 20px 0;
  background: lightblue;

That's it! The app communicates with the remote API, supplying the API Key and the Search API URL stored in the .env file. The new Vue Env Variables plug-in successfully loads those environment variables and provides them to the app without the VUE_APP_ prefix.

I've already built and published this plug-in as an NPM package here: https://www.npmjs.com/package/vue-env-variables. You can open any Vue 3 app and install the NPM package by running the following command:

npm i vue-env-variables

Inside the main.js file of the Vue 3 app, you start by importing the plug-in.

Always look at the common functionalities in your apps and try to promote them to a plug-in. This is very useful and time-effective. You may choose to benefit others in the community by sharing your plug-in with them.

Bilal Haidar
Alexander Pirker, PhD
pirker_alexander@hotmail.com

Alexander is a Software Architect and Team Leader for Cloud Services at MED-EL, as well as a Senior Security Consultant at RootSys. He has experience in designing microservices and desktop or mobile applications, but also in writing or migrating them. He received a PhD in Physics from the University of Innsbruck and holds a master's degree in Technical Mathematics and a master's degree in Biomedical Informatics. In his free time, he likes to go to the gym, but also enjoys hiking in the Alps.

familiar to many developers, and some may already start losing their hair from just thinking about it. The reason for the hair loss is obvious, but remains unvalued by management and product owners: Whenever a code base grows for a long period of time, the design of the code slowly turns into the famous big ball of mud, a nightmare for all of us. Even worse, when developers suddenly need to deliver a new application fast, they inevitably fail, because most of the code can't be reused somewhere else.

In this article, I'll show you one way out of the misery. It doesn't matter whether you start on a green field or you already live the nightmare: I cover both scenarios. With a green field, you can immediately start out in the right way. For the ones living the nightmare, I present a migration process to slowly turn the big ball of mud into something clean and useful, which can serve as a framework for various software products (which I refer to as a software product line framework here).

The common ground for both scenarios builds around two key ingredients: the well-known architectural style "ports-and-adapters," sometimes also referred to as "hexagonal architecture," and design patterns and principles from domain-driven design (DDD). The former ingredient tells you how to structure an application in general and, importantly, it commands you to write business logic completely independent of the environment in which the software runs. The latter ingredient gives you a hint where to split the business logic into (almost) independent assemblies. Combining these ingredients provides a very appealing dish using .NET 5: a software product line framework of business logic assemblies for multiple platforms.

In the last part, I'll present a migration process that allows you to slowly migrate there. For that purpose, I outline how to deal and integrate with the legacy code base for new business logic, but also how to migrate existing business logic into a multi-platform software product line framework.

The Common Ground
Before you prepare the dish, you need to know the ingredients: ports-and-adapters as an architectural style, and domain-driven design for creating valuable software. Let's get to know them.

Ingredient 1: Ports-and-Adapters
There are many different architectural styles from which architects and developers may choose, each with advantages and disadvantages. Which one to choose depends on several factors, like target platform, performance, etc. But the business domain that the software needs to support also influences the choice of style. For example, for software that controls a manufacturing pipeline, a pipes-and-filters design appears more appropriate than a data-centric approach. However, when the goal is to create software that needs to potentially run in multiple environments, the ports-and-adapters style fits perfectly. But why? What's so special about it?

Figure 1 depicts the ports-and-adapters architectural style. The business logic lies at the core of the application and the environment surrounds it. This sounds great for software that needs to run in many different environments, for example on a mobile platform or on a desktop.

A re-implementation of the environment enables you to reuse the same business logic to build a new application, maybe even on a mobile device, as shown in Figure 2.

How does it work in detail? What do you really have to do to use the full strength of this ingredient?

Before exploring that, I need to explain the main idea of ports and adapters: The business logic component drives the application because it delivers value to the business, not the other way around. Hence, the business logic needs

Figure 1: In the ports-and-adapters architectural style, the environment depends on the business logic.

Figure 2: From a single business logic component, multiple applications emerge by re-implementing the environment.
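The dependency direction that Figure 1 describes can be made concrete in a few lines. The TrainingRepository port name is borrowed from the article's later figures; everything else is an illustrative TypeScript stand-in for the article's C#, not its actual listing.

```typescript
// The port: an interface the business logic owns and depends on.
interface TrainingRepository {
  save(training: { distanceKm: number }): void;
  count(): number;
}

// The business logic at the core: it knows only the port, never an adapter.
class TrainingTracker {
  constructor(private readonly repo: TrainingRepository) {}
  record(distanceKm: number): void {
    if (distanceKm <= 0) throw new Error("distance must be positive");
    this.repo.save({ distanceKm });
  }
}

// An adapter supplied by the environment. Swapping the adapter (in-memory,
// SQLite, a cloud store) re-targets the same business logic to a new
// platform without touching the core.
class InMemoryTrainingRepository implements TrainingRepository {
  private readonly items: { distanceKm: number }[] = [];
  save(training: { distanceKm: number }): void {
    this.items.push(training);
  }
  count(): number {
    return this.items.length;
  }
}
```

Because the adapter implements the port rather than the other way around, the environment depends on the business logic, which is exactly the arrow Figure 1 draws.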
Let's try the recipe together with the Fitness Tracker App from before. You start by adding repositories to the bounded contexts, because the app should store the fitness data somewhere. Recall that the app reads the fitness data from a peripheral Bluetooth device, as Figure 3 indicates with the IBluetoothService interface. You further add in the Endurance bounded context a service to read GPS data from the mobile phone to track the activity (unfortunately, there's no GPS tracker included in the peripheral device), and you add a so-called HIIT timer service to the HIIT bounded context. For the communication with the fitness tracker peripheral, you create a new bounded context, the Peripherals bounded context. This bounded context supports the proprietary protocols and data formats that the fitness tracker peripheral implements, independent of the communication interface like Bluetooth. Observe that the data services of the Endurance and HIIT bounded contexts use the PeripheralService from the Peripherals bounded context to read data from the peripheral. Figure 6 summarizes the new situation.

Figure 4: The domain model for the Fitness Tracker app for runners

Figure 5: Two independent bounded contexts support the next release of the Fitness Tracker app. Running and biking merge into a single bounded context, the Endurance bounded context.

Figure 6 is the output of step two of the recipe, providing you with domain models for the bounded contexts Endurance, HIIT, and Peripherals. It leads to an interesting insight: The Endurance and HIIT bounded contexts both depend on the Peripherals bounded context. That means, whenever you want to use either of them, you need to include the Peripherals bounded context. However, this design is a (not yet multi-platform) software product line framework! But why?

The design of individual bounded contexts with very few relationships among each other allows you to modularly assemble new applications from them. You build new applications by simply including only the bounded contexts (and their dependents) you want to deliver in the new application instead of all of them. For instance, if the business decides to deliver a small app just including endurance training, then the app comprises the Endurance and Peripherals bounded contexts ("Endurance tracker app"). Or, if the business decides to deliver another app just supporting HIIT workouts, then it comprises the HIIT and Peripherals bounded contexts ("HIIT workout tracker app"). Finally, the premium version could feature all bounded contexts ("Fitness tracker app"). Figure 7 shows the different configurations of the framework into concrete software products.

Figure 7: The power of the software product line framework for the fitness tracker business: Three different applications composed from one framework.

To turn the framework now into a multi-platform dish, you need to apply step three of the recipe. As Figure 6 shows, the bounded contexts interact at several points with the environment. Specifically:

• Endurance bounded context: TrainingRepository, GpsService
• HIIT bounded context: TrainingRepository, HiitTimerService
• Peripherals bounded context: BluetoothService
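Composing products this way amounts to walking the bounded contexts' dependency graph. A small sketch, using the context names from the article and an illustrative resolver of my own:

```typescript
type BoundedContext = "Endurance" | "HIIT" | "Peripherals";

// Each context lists the contexts it depends on (per Figure 6: both
// Endurance and HIIT rely on Peripherals).
const dependencies: Record<BoundedContext, BoundedContext[]> = {
  Endurance: ["Peripherals"],
  HIIT: ["Peripherals"],
  Peripherals: [],
};

// Resolve the full set of assemblies a product configuration needs:
// the wanted contexts plus, transitively, their dependents.
function compose(wanted: BoundedContext[]): Set<BoundedContext> {
  const result = new Set<BoundedContext>();
  const visit = (ctx: BoundedContext): void => {
    if (result.has(ctx)) return;
    result.add(ctx);
    dependencies[ctx].forEach(visit);
  };
  wanted.forEach(visit);
  return result;
}
```

Asking for just Endurance yields the "Endurance tracker app" configuration (Endurance plus Peripherals), while asking for Endurance and HIIT yields all three contexts, matching the premium "Fitness tracker app."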
Figure 8: After step three of the recipe, the platform-dependent parts have been moved outside of the bounded context.

Figure 9: Both WPF and Xamarin iOS use the same assemblies from the framework you developed before, thereby delivering the same functionality to the end user.
But there's even more. You apply the same idea to turn the Fitness tracker app into a cloud product. It's amazingly straightforward: You replace the "HIIT–UI" assemblies of Figure 9 with ASP.NET Core assemblies full of Controllers, which delegate incoming calls to the bounded context HIIT. Further, you implement the environment of your bounded contexts using ASP.NET Core, thereby providing the missing functionality. This transformation, i.e., pushing the business logic of an app into the cloud, leads to an interesting architectural insight: Front-end clients don't need much business logic, because most of it runs in the cloud. Clients solely implement user interfaces, plus some residual interaction logic with the cloud. Figure 10 shows the basic idea.

Figure 10: Most of the business logic runs as an ASP.NET Core WebAPI project using cloud-specific environments of the bounded contexts it hosts.

As you can see in Figure 10, you should introduce a callback mechanism, using for example gRPC, to remotely perform actions on the client device, especially at points where your bounded context implementation requires a client interaction with real hardware.

That's powerful, and offers you several advantages:

• Clients get slim, carrying less business logic.
• Changes happen centrally in the cloud.
• Changes are more easily traceable.
• Composition of new client applications is easy.

How to Migrate Business Logic to Such a Framework
Now comes the tricky bit. Most of us don't have a green field. We must deal with an old code base, probably grown over years. Very often the code base involves different technologies, like WPF mixed with WinForms, WCF communication channels with REST APIs, etc. What should you do to get to a multi-platform software product line framework? In the following, I outline a strategy to slowly migrate to a clean solution, also considering the latest .NET 5 development. But before that, I give a short recap of important features of .NET 5.

First, .NET 5 is, in terms of target platform, somehow the successor of .NET Core. It unifies the view for developers onto .NET Framework, .NET Standard, and .NET Core apps. However, when you need to write assemblies that require platform-dependent code, like WPF for Windows Desktop, you add support for these functionalities by specifying net5.0-{platform} as TFM. Second, .NET 5 supports C# 9.0, which comes with a lot of interesting new language features like Records or Relational Pattern Matching. Third, C# source generators seem to be an exciting new compiler feature.

At its heart, your software supports a business. Therefore, you value your business logic the most. For your migration out of the nightmare, you must now tackle two situations:

• Writing new business logic
• Migrating existing business logic

Writing New Business Logic
When writing new bounded contexts, you start from scratch. That's great, because you apply the patterns and principles from domain-driven design to get to a clean model that you consequently implement in .NET Standard. This enables multi-platform bounded context implementations, clean and reusable, as shown before. To integrate with your existing code base, you need an additional technique from domain-driven design known as anti-corruption layers.

Anti-corruption layers shield bounded contexts from legacy systems. They equip them with a protection shell to decouple the domain model from the legacy model. The responsibility of such a layer is simple: Provide a well-defined view on the new bounded context (inbound facade) and offer a well-defined view onto the legacy system (outbound facade). This also includes defining data-transfer objects between the new bounded context and legacy systems to achieve model independence. The facades convert between the models in terms of data-transfer objects.

Consider, for example, that the product owner of the Fitness tracker app now also wants to sell products directly in the application. For that purpose, he explains how the subdomain "Sales" works and that he has an old item catalog system (which acts as an inventory) in place that he intends to use as inventory for all products. Hence, the new bounded context needs to communicate with this legacy system.
Figure 12: Class design of the Fitness tracker application before the migration
Figure 14: The domain model is in the lower left corner. The anti-corruption layer corresponds to the services, their data-transfer objects, and the repository. The additional layer makes sure that the model stays clean and precise.
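The facade-plus-DTO shape that Figure 14 describes can be sketched as follows. The legacy field names, the DTO, and the facade are hypothetical stand-ins for the article's C# listing, kept only to show the conversion between models.

```typescript
// Hypothetical legacy model: the old item catalog's record layout.
interface LegacyCatalogItem {
  ITM_NO: string;
  DESC_TXT: string;
  PRICE_CTS: number;
}

// Data-transfer object: the clean view the Sales bounded context sees.
interface ProductDto {
  id: string;
  name: string;
  priceEuro: number;
}

// Outbound facade of the anti-corruption layer: it is the only place that
// knows both models, so the legacy layout never leaks into the domain.
class LegacyCatalogFacade {
  constructor(private readonly fetchLegacy: () => LegacyCatalogItem[]) {}
  products(): ProductDto[] {
    return this.fetchLegacy().map((item) => ({
      id: item.ITM_NO,
      name: item.DESC_TXT,
      priceEuro: item.PRICE_CTS / 100,
    }));
  }
}
```

If the legacy catalog ever changes its layout, only the facade's mapping has to follow; the Sales domain model stays untouched.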
Figure 15: Step four completes the migration to an anemic domain model by introducing callback interfaces to the old
business logic. Adapter implementations in the legacy code base preserve the functionality.
Figure 16: This is the starting point after the refactoring to the software product line framework. All user interfaces
reside within the same assembly implementing one huge monolithic component.
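Shipping an anti-corruption layer as both a .NET Standard and a .NET 5 assembly, as the article suggests, maps naturally onto SDK-style multi-targeting. A minimal sketch of such a project file follows; the project name is hypothetical, and the properties shown are standard MSBuild settings.

```xml
<!-- Hypothetical Sales.AntiCorruption.csproj: one project, two TFMs. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.1;net5.0</TargetFrameworks>
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  </PropertyGroup>
  <!-- Opt into C# 9 only where the runtime supports it; the
       netstandard2.1 build stays on C# 8. -->
  <PropertyGroup Condition="'$(TargetFramework)' == 'net5.0'">
    <LangVersion>9.0</LangVersion>
  </PropertyGroup>
</Project>
```

One build then produces both assemblies, and the resulting NuGet package serves Xamarin consumers and .NET 5 consumers alike.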
Before you migrate and start coding, you need to choose which platforms the framework should support. Because you want to support multiple platforms, the choice is between .NET Standard and .NET 5. If you further consider mobile phones as a target platform, you choose .NET Standard as TFM for the bounded context assemblies to stay compatible with Xamarin Forms.

Importantly, note that after this step, you have a running software product that implements a clean structural model. That's great, because even though you only completed half of the migration (implementation-wise), you're still able to deliver to your customer. But the more valuable outcome, certainly, is that the structure of the "refactored" bounded context aligns with the structure of the domain it supports, and you have already brought the bounded context into the shape that the multi-platform software product framework demands. Finally, you migrate the anemic domain model to the one of Figure 13 by removing all the callback interfaces from the bounded context and moving the business logic over from the legacy code base. This last step completes the migration. After that step, you again have a running software application that you deliver to your customer.

Applying these steps continuously to all the business logic of an entire application smoothly migrates the whole software to a multi-platform software product line framework, thanks to the use of .NET Standard as TFM. You can be a bit more precise and use .NET 5 at some points.

Anti-Corruption Layers in .NET 5
One of the requirements of the multi-platform software product line framework was that, in principle, applications should run on any platform. Simply using .NET 5 as TFM for the bounded context assemblies will, unfortunately, not do the trick for you, because it lacks full support of mobile apps built with Xamarin Forms, for example. I also don't want to push the use of .NET 5 too much, because the goal is still the reuse of assemblies rather than jumping on new technologies.

However, at some point it makes sense to apply .NET 5, like for the anti-corruption layers, for example. Because these layers are small and usually contain very little code, it makes sense to split this part of the bounded context implementations apart and offer them as NuGet packages to your application developers. This has one big advantage: You can choose to implement the anti-corruption layer as both a .NET Standard assembly and a .NET 5 assembly. In the .NET 5 assembly, you can use C# 9 as the language, whereas in .NET Standard, you still use C# 8 (with .NET Standard 2.1).

Ready, Set, Eat!
After successfully migrating your business logic to a multi-platform software product line framework, you easily compose new applications out of it. Whether you need to develop a mobile application, a desktop application, or a cloud service, all of them rely on the same framework. Because your framework uses .NET Standard, it supports all of these platforms.

Using such a modular framework also results in a straightforward decomposition of the user interface of an application. The structure of the framework somehow naturally imposes a decomposition into components on the front-end. Which pattern, or which technologies, these front-end components use, you can freely choose. How should you deal with legacy front-ends that you want to migrate in the long term?

The framework offers a way out: After migrating the business logic that the front-end relies on, you also migrate the front-end. Ideally, such a migration affects only some views, or some controllers, the ones that essentially the
(Continued from page 74)

The aforementioned tools (WestLaw and LexisNexis) are very efficient. However, if I didn't have access to such tools, I could still rely on the books. You've seen them in any legal drama: the impressive array of published court reports and statutes. These books are available for free in the law library of your local municipality. In the abstraction hierarchy, this is about as close to the metal as it gets. You never know when a given tool may not be available. There's still a job to do. Of course, at some point, we'll hit a performance wall once basic elements, like the Internet, are no longer available. How well do you know your processes, independent of abstractions? An abstraction based on a process you don't understand is useless. And the decisions made therefrom can be quite costly.

Ritual is a means by which we can learn, apply, and eventually master our craft. A somewhat popular movement these days is the Software Craftsmanship movement in its many variations. Andrew Hunt and David Thomas's "The Pragmatic Programmer" posits that a developer's professional development should be akin to the medieval guilds. Established rituals are a means by which knowledge is effectively transferred. Think of the apprenticeship programs in trades such as plumbing or carpentry. And no, the fly-by-night coding bootcamps that promise to make somebody an employable software developer without prior experience aren't that.

The dev, test, and build processes by themselves, they're just processes. The duty, fidelity, and seriousness we bring to the work, that's what makes it a ritual. And that it's a ritual means that what you and your team do matters. That's people and process governed by inviolate overarching principles and fidelity to the task at hand. If you have that, find the tools that match up well. Be true to that ritual. And if some aspect of it must change for some reason, have a ritual for how you manage that change. If you just change rules, tools, etc. on a whim, that's just random chaos devoid of discipline, rigor, duty, and fidelity. That's amateur hour run by people who don't know what they're doing.

Your rituals are the means by which you can objectively demonstrate due care and performance. If your shop keeps running into the same issues, perhaps it's time to detach from the keyboard, stop, inspect, assess, and adapt accordingly. A good way to start may be to take a healthy look in the mirror. If ritual, duty, and fidelity to the task at hand are deemed to be a budgetary nonstarter, you should ask yourself what exactly it is that your organization is doing and why you think sticking with the status quo will lead to a different, better result. That's the very definition of insanity, in which we hope to obtain a different result based on the same actions.

John V. Petersen

Printing
Fry Communications, Inc.
800 West Church Rd.
Mechanicsburg, PA 17055

Advertising Sales
Tammy Ferguson
832-717-4445 ext 26
tammy@codemag.com

Circulation & Distribution
General Circulation: EPS Software Corp.
Newsstand: The NEWS Group (TNG) Media Solutions

Subscriptions
Subscription Manager
Colleen Cade
ccade@codemag.com

US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $50.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards accepted. Bill me option is available only for US subscriptions. Back issues are available. For subscription information, e-mail subscriptions@codemag.com.

Subscribe online at www.codemag.com

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445
Ritual: A solemn ceremony consisting of a series of actions performed according to a prescribed order.

Solemn: Formal and dignified; serious; deep sincerity.

Whether we reference SolarWinds (https://www.microsoft.com/security/blog/2020/12/18/analyzing-solorigate-the-compromised-dll-file-that-started-a-sophisticated-cyberattack-and-how-microsoft-defender-helps-protect/), Boeing (737 Max 8), or those brave congressional staffers who had the presence of mind to take custody of the three ballot boxes before the U.S. Capitol was breached, for purposes of this editorial, the three are equivalent. With respect to the ballot boxes, indeed they're just physical things that contain physical things that can all be replaced. As to what those ballot boxes and what they contained represented, those things can't be replaced.

How do these all relate? Like all things, it's about context and the inexorable fact that at an abstract level, everything relates to everything else. Everything in a given context is interconnected. In other words, actions have knock-on effects; some are intentional, others are unintentional. And depending on your system's complexity, a small change can have dramatic downstream effects (AKA the Butterfly Effect). The degree to which the system and its underlying processes are well understood is the best defense against unintended and undesirable consequences.

Separately, there's what each of us believes and there's what each of us does. There is that oft-quoted phrase: Don't pay attention to what they say, pay attention to what they do. The latter is objectively verifiable with our own senses. The reasoning is simple: Mindreading isn't a real thing.

Think of the last time you advocated for a particular outcome. In any competent and rational business and technology environment, you'd be required to provide evidence in support of your conclusions. In other words, there's what you believe and there's what you can objectively prove.

So then, how is the concept of ritual relevant?

In all cases, it gets to what we're defending against: the loss of trust and confidence. Everything boils down to integrity. For any system we build to have integrity, there must be rules, order, and sound processes. One of my favorite phrases is People, Process, and Tools. In other words, we must first have the right people. From there, we can build a process. And then finally, once we have the right people committed to the right process, then, and only then, can we employ the right tools in furtherance of efficiency. And what device, whether explicit or implied, do we invoke to carry on these ceremonies that involve people, processes, and tools? Ritual.

Things like SOC (System and Organization Controls), and the ritual and ceremony it requires, all have a purpose. Read any annual report (10-K), specifically Item 9A, Controls and Procedures. Documented rituals and ceremonies with documented performance are your positive objective evidence that you're doing things correctly. Alternatively, you can make claims about what you believe is or what was done. Making a claim doesn't make it so, nor is it evidence. To meet that burden of proof, there must be work, hard work, and coordinated work among many people, and a description of what work was performed, by whom, and when must be a by-product of the work.

Assuming that there's a codified process, how does that get performed in a systematic, repeatable, reliable, and consistent way? Mature processes yield positive tangible results via actions that are carried out in some prescribed way and are done in a serious manner, and not only in support of the work at hand, which is to build and deliver software. It's also about telling the story of what that work entailed. Agile principles coupled with the Scrum Framework are a good example of a ritual that can support what SOC (referenced above) requires.

But just going through the motions of Scrum ceremonies alone isn't enough. Each individual person on the team must believe and be committed. The team members, collectively, must have a shared sense of purpose. Fidelity to that is what makes any ritual or process worth anything. That fidelity to task, the ability to make it apparent to those outside the process, doesn't just happen. Regardless of whatever tools are thrown at a process, the one thing that can't be faked or accelerated is the concept of time. Time in this context has two perspectives. The first is the people who used to occupy the chairs held by others now. This is an organization's history and the foundation of everything that follows. I suspect that the staffers guarding the ballot boxes had a sense of, benefited from, and were prepared because of that history. Watching them on TV, there was a sense of confidence and resolve in their custodian role. The second perspective is focused on the present and what must be confronted now. Every problem to solve requires its own time. Anybody who has read The Mythical Man-Month knows the lesson that "throwing bodies" at a problem doesn't work.

Building trust and confidence, rather than coding or design, is our ultimate job as software developers. We build that confidence through demonstrated performance that itself is verified through evidence. The important thing at the heart of it all is how we carry out our work: the principles, patterns, practice, and above all, the ethos, duty, and fidelity we bring to our efforts in furtherance of carrying out our work. The burden is on us to instill that confidence in others, the ones outside looking in. If we don't, we end up with a trust gap.

We can learn from the Arts and Crafts movement of the 1880s-1910s. That was about the dignity of work that arises from being true to the craft as well as the recognition and respect from the organization that realizes the benefits of such labor. The tools we employ to carry out our work, those are part of the ritual. To wield them artfully, we must understand the processes of our craft.

Tools are not the craft. CODE IS NOT THE CRAFT. Tools are the means by which we practice the craft, and code is a tool. Diverting to my law persona, my practice of law doesn't depend on tools like WestLaw or LexisNexis for legal research any more than I require any specific software tool or language. Certainly, tools help, and some tools, depending on the context, are better than others. Legal research (a process) is a necessary component of law practice. It's no different than the code we write. At a certain level of detail, there are many ways to write code and there are many ways to conduct legal research.

(Continued on page 73)