
Week 1 Navigating the System

Introduction to Operating Systems and Becoming a Power User


You've already learned the basics of computing and you just finished learning about the bits and
bytes of computer networking. Now it's time to navigate the Windows and Linux Operating
Systems or OSs. But before we dive in, I'd like to introduce myself. We met way back in the first
course. But for those of you who might have forgotten or skipped those lessons, my name is
Cindy Quach, and I'm a site reliability engineer at Google. The team I work on is responsible for the management and support of Google's entire internal mobile fleet: Android OS, iOS, and Chrome OS. Before focusing on mobile, I was a systems administrator on the Linux team, and
before that, I was an operations engineer. But like a lot of the Googlers you've met and will meet,
I started my career as an IT support specialist. I've been working in IT for seven years now. The
first time I can remember interacting with computers was in middle school, when my teacher
brought them into our classrooms so we could create fun video and multimedia projects. It was
my brother who brought technology into our house. My parents were immigrants from Vietnam
and we didn't have a lot of money growing up, so we had to be creative if we wanted to play with
a computer at home. I remember spending hours with my brother as he assembled a computer, and I would just ask a million questions. Eventually, I wanted to try and build my own computer. So, I gathered up some old parts and saved money to buy new components. I finally put all the parts together from what I remembered my brother doing. But it didn't work out. It turns out I'd used some incompatible parts, but through a lot of trial and error, troubleshooting, and long search sessions on the internet, I finally got it to work. The feeling I got when I heard my computer boot up for the first time was amazing, and before I knew it, I was hooked on computers. I really enjoyed the intense concentration and problem solving required in IT. But I didn't think a career in tech was even possible back then. Once I got to college, I had to find a job
to help pay for tuition. And that job was an IT support specialist on campus. That's when I
realized that tech is actually something I could pursue as a career. I've been working with
computers for as long as I can remember and much of my IT knowledge was based on my own
troubleshooting experiences over the years. I was great at troubleshooting issues with operating systems, or so I thought. It wasn't until I became a systems administrator on Google's Linux team that I realized just how little I knew about operating systems. I was surrounded by brilliant teammates who maintained code for large open source operating system projects. Some even had Wikipedia pages written about them. So it was hard not to feel inadequate at times, like I was learning to walk again. As I dove more into Linux, I just wasn't used to working on the command line, and it felt overwhelming to use it to troubleshoot obscure issues that popped up. I had to
constantly look up commands and figure out where to find certain files, but I didn't let it get the
best of me. I took things day by day, and after a year of being on the team, I realized I had
progressed incredibly far. One year later, I was building and packaging my own tools, then deploying them for everyone to use, and contributing code directly to open source software. Using the command line had become second nature. There's so much to learn about operating
systems and it's one of the reasons why I'm passionate about teaching this course. Learning
Linux doesn't have to be scary. It's not impossible to use Windows commands and it's certainly
not difficult to get started. So let's just do that and get started. While this course will have some conceptual learning, we'll focus more on the practical aspects of the operating system. Not only will you learn how to use the Windows and Linux OSs, we'll also teach you how to interact with these operating systems through the command line. Remember that on the command line, you input text commands instead of relying on a graphical user interface, or GUI. If this is your first
time using a command line for any OS, you may find this a little intimidating at first. That's
totally normal, but you'll be well on your way to becoming a command line wizard by the end of
this course. As always, we'll help guide you every step of the way, and you can always re-watch
the lessons if you need a refresher. Take your time, you've got this. We're not only going to
teach you how to use the command line in Windows and Linux. You'll also learn how file
systems work, and you'll be able to assign different user permissions and roles, which is a super
important task in any support role. You'll be able to understand how to use package managers
and consider the trade-offs between different package managers for Windows and Linux. We'll
also teach you about process management, so you understand the nuances of running programs.
That could save you valuable time when troubleshooting in the workplace. We'll also take a
deeper dive into the remote connection tools you've already been using to help you access other
computers, when you're working at a distance. Finally, we'll teach you about OS deployment or
how to install OSs on a lot of machines at once. By the end of this course, you'll become a real
OS power user, in both the Windows and Linux operating systems. This is an invaluable skillset
for anyone pursuing a career as an IT support specialist. After all, we spend most of our time
within an operating system. But remember, you'll need to practice, practice, and practice some
more to get a firm grip on operating systems. Just like with any skill, you need to really apply
yourself to get good at it. Eventually, navigating the operating system will seem like second
nature to you. We strongly recommend that you follow along in this course with a computer
using one if not both of these operating systems. Navigating a real operating system while
following along this course, is a much more efficient way to learn these concepts. If you don't
have access to them that's totally okay. You'll be doing active learning exercises in an application
called Qwiklabs, to help simulate what it's like to use the Windows and Linux OSs. I'm super excited to teach you about the Windows and Linux OSs. So let's get started.

Basic Commands
We dipped our toes in the Windows and Linux OSs in the first course of this program. Now, let's
jump right in and learn how to perform all the common navigational tasks of both operating
systems. For Windows, we're going to learn how to navigate the operating system using the GUI
and using the command line interpreter or CLI. For Linux, we're only going to focus on learning
the command line. The command line interpreter in Linux is called a shell, and the language that
we'll use to interact with the shell is called Bash. It's worth calling out that these two operating
systems are very similar to one another. So, even if you don't know how to use the Linux GUI, as
long as you know how to navigate the Windows GUI, you'll be able to apply those tools to the
Linux GUI. It's possible that you'll only be using the Windows GUI in the workplace. Even so, if
you learn how to use the Windows command line, this will set you apart from other IT support
specialists. You'll soon discover that using the command line in any operating system can
actually help you complete your work faster and more efficiently. We strongly encourage you to follow along and actually perform the tasks we do in this course yourself. If you can, pause the video and do the exercises that we do, or type out any of the commands we introduce. It will be
much easier for you to understand them in this way. We also recommend that you document all
the commands that we show you. Either write them down with an old-fashioned pen and paper
notebook, or type them out in a doc or text editor. Just write them on a stone if you have to, we
just want you to write them down somewhere. You probably won't remember all the commands
immediately when we first introduce you to them, but with a little practice, typing the commands will become second nature to you. You can also use the official Windows CLI and Bash documentation that we've provided for you in the supplemental reading, right after this
video for reference, if you need to. In this lesson, the content is broken down into two themes.
The first is basic operating system navigation, like navigating from one directory to another,
getting file information, and removing files and directories. The second theme is file and text
manipulation, like searching through your directories to find a specific file, copying and pasting,
chaining commands and more. Okay. Enough chit-chat. Let's get started.
Supplemental Reading for Windows CLI & Unix Bash
For more detailed information on the modern Windows CLI, PowerShell, see the official
PowerShell documentation, and the PowerShell 101 guide. For more on the older Windows
"Command Prompt" CLI (cmd.exe) please see the link here.
If you want to check out more information on Bash, then click the link here.
In operating systems, files and folders or directories are organized in a hierarchical directory tree.
You have a main directory that branches off and holds other directories and files. We call the
location of these files and directories paths. Most paths in Windows look something like this: C:\Users\Cindy\Desktop. In Windows, file systems are assigned to drive letters, which look like C:, D:, or X:. Each drive letter is a file system. Remember that file systems are used to keep
track of files on our computer. Each file system has a root directory which is the parent for all
other directories in that file system. The root directory of C: would be written C:\, and the root
directory of X: would be written X:\. Subdirectories are separated by backslashes, unlike Linux,
which uses forward slashes. A path starts at the root directory of a drive and continues to the end
of the path. Let's open up This PC and navigate to our main directory. The main directory in a
Windows system is the drive that the file system is stored on. In this case, our file system is
stored on Local Disk C. From here, I'm going to go to Users, then my User folder cindy, and
finally to Desktop. If you look at the top here, you can see the path I'm in. Local disk, Users,
cindy, Desktop. That wasn't too hard, right? You can see here in our desktop directory that we
have a few folders and files. We have a Puppy's Pictures folder, a Hawaii folder, and a file called
My Super Cool File. There are also some files on here that you can't see. We call these hidden
files. They're hidden for a few reasons. One is that we don't want anyone to see or accidentally
modify these files. They could be critical system files or configs or even worse, embarrassing
pictures of you in grade school rocking a mullet. It's okay, you aren't the first person who liked their hair to be business in the front and party in the back. Just for fun, let's see what kind of
hidden files we have in here. We'll go to the top and click View, then check the hidden items
checkbox. Now we can see all the hidden files on our system. Oh, interesting. There is a file
named secret_file. As much as I'd like to take a peek at it, whoever created it probably doesn't
want us to see what's inside so we're going to leave it alone. Let's go ahead and revert this option
so we don't accidentally change something else.
Okay, so what if we wanted to view information about a file? Well, to do this, we can actually
just right click and choose Properties. Let's try this for My Super Cool File. This pop up dialog
has a lot of information displayed here. Let's break it down. In the general tab, we can see the file
name, the type of file, what application we use to open it, and the location path of the file, which is C:\Users\cindy\Desktop. Then we have the size of the file and the size on disk. This can be a
little confusing. The size of the file is actually the amount of data that it takes up, but size on disk
is a little different. It's not something you need to know right now but if you want to learn more
about it, you can check out the next supplemental reading. All right, let's move on. Next you
have timestamps of when the file was created, last modified, and last accessed. After that are file attributes we can enable for our file. We have Read-Only and Hidden. You might guess that if
you check hidden, our file will be hidden and only visible if we enable show hidden items. There
are some advanced options too but we won't touch those for now. You'll also notice a few other
tabs here at the top. Security, Details, and Previous Versions. We'll talk more about the security
tab in a later lesson. The Details tab, basically, tells us the information we just discussed about a
file. The Previous Versions tab lets us restore an earlier version of a file so if you made a change
to a file and wanted to revert that change, you could go back to that version. To sum up listing
the directories in the Windows GUI, we can see the list of files and folders by default here. You
can even change how you want to view them using icons or even a list. Then if you want to get
more information about a file, you can look at its properties. Next up, let's see how to view all
this information through the Windows CLI.
Supplemental Reading for 'Size' vs 'Size on Disk' in Windows
For more information on 'size on disk' vs 'folder size' in Windows, please check out the link
here.
It's important to know that there are a couple of command line interfaces or CLIs available in
Windows. The first one is called the Command Prompt, cmd.exe. The second one is PowerShell, or powershell.exe. The Command Prompt has been around for a very long time. It's very similar to the Command Prompt that was used in MS-DOS. Since PowerShell supports most
of the same commands as Command Prompt and many, many more, we're going to use
PowerShell for the exercises in this module. I want to call out that many PowerShell commands
that we use are actually aliases for common commands in other shells. An alias is sort of like a
nickname for a command. The first command that we'll use is for listing files and directories.
Let's start by listing the directories in the root of our C: drive. The C: drive is where the
Windows operating system is installed. For many of you, it might be the only hard drive that you
have in your computer. To get to the PowerShell CLI, just search in your application's list
PowerShell. From here, we can go ahead and launch the PowerShell program. We're going to use
the ls or list directory command and give it the path of where we want to look. The path is not
actually part of the command, but it is a command parameter. You can think of a parameter as a value that's associated with a command. Now you can see all the directories in the root of your
C: drive. You might just see a few or a whole bunch of directories. It all depends on what your
computer is used for. The C: drive root folder is what we call a parent directory and the contents
inside are considered child directories. As you continue to work with operating systems, you'll
encounter terms that may seem a bit out of place at first but they actually make a lot of sense.
Parents and children are common terms that stand for hierarchical relationships in OS's. If I have
a folder named dogs and a second folder nested within that folder called Corgi, dogs would be
the parent directory and Corgi would be the child directory. Let's look at a few of the common
child directories in this folder. Program Files and Program Files (x86): these directories contain most of the applications and other programs that are installed in Windows. Users: this contains the user profile directories, or home directories. Each user who logs into this Windows machine will get their own directory here. Windows: this is where the Windows operating system is installed. If
we open PowerShell and run Get-Help ls, we'll see the text describing the parameters of the ls
command. This will give us a brief summary of the command's parameters. But if you want to
see more detailed help, try Get-Help ls -Full. Now you can see a description of each of the
parameters and some examples of how to use the command. What if we wanted to see all the
hidden files in this directory? Well, we can use another useful parameter for the ls command: -Force. The -Force parameter will show hidden and system files that aren't normally listed with just ls.
Now you can see some important files and directories, like Recycle Bin. This is where the Recycle Bin lives. When you move files to the Recycle Bin, they're moved to this directory instead of being deleted immediately. ProgramData: this directory contains lots of different things. In general, it's used to hold data for programs that are installed in Program Files. All right, now that you've seen how to take a look around the file system in Windows, let's see what this process looks like in Linux.
In Linux, the main directory that all others stem from is called the root directory. The path to the
root directory is denoted by a slash or forward slash. An example of a path in Linux that starts
from the root directory is /home/cindy/Desktop. Just like c:\users\cindy\desktop in Windows.
Let's go ahead and see what's under the root directory. We're going to be using the ls or list
directory contents command. We also want to give this command the path of the directory that we want to see. If we don't provide a path, it will just default to the current directory we're in. So: ls /. All right, now we can see all the directories that are listed under the root directory.
There are a lot of directories here, and they're all used for different purposes. We won't go
through them all, but let's talk about a few of the important ones. Slash bin, this directory stores
our essential binaries or programs. The ls command that we just used is a program, and it's
located here in the slash bin folder. It's very similar to our Windows program files directory.
Slash etc, this folder stores some pretty important system configuration files. Slash home, this is the personal directory for users. It holds user documents, pictures, etc. It's also similar to our Windows Users directory. Slash proc, this directory contains information about currently running processes. We'll talk more about processes in an upcoming lesson. Slash usr, the usr directory doesn't actually contain our user files like our home directory does. It's meant for user-installed software. Slash var, we store our system logs and basically any file that constantly changes in
here. The ls command has a couple of very useful flags that we can use too. Similar to Windows
command parameters, a flag is a way to specify additional options for a command. We can
usually specify a flag by using a hyphen then the flag option. This varies depending on the
program, though. Every command has different flag options. You can actually view what options
are available for a command by adding the dash, dash help flag. Let's see this in action. There's
an incoming wall of text, but don't panic. You don't have to memorize these options. This is
mainly used for reference. For now, let's just quickly go through the help menu.
At the top here it tells you what format to put the command in. And here it gives you a
description of what the command does. This huge chunk of text lists the options that we can use.
It tells us what command flags are available and what they do. The dash, dash help flag is super
useful, and even experienced OS users refer to it every so often. Another method that you can
use to get information about commands is the man command, short for manual. It's used to show us
manual pages, in Linux we call them man pages. To use this command, just run man, then the
command you want to look up.
So let's look up man ls. And here we get the same information as dash, dash help, but with a little
more detail. Okay, back to using the ls command.
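You can try the help flag straight away. A minimal sketch, assuming GNU coreutils (the ls found on most Linux systems; the flag may differ elsewhere):

```shell
# --help prints the usage line, a description, and the full flag list.
# Piping to head trims the wall of text to just the first few lines.
ls --help | head -n 5

# The equivalent manual page would be opened with: man ls
# (press q to quit the pager when running it interactively).
```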
Right now, it's not quite friendly to read. So let's make our directory list more readable with the
dash l flag for long. This shows detailed information about files and folders in the format of a
long list. Now we can see additional information about our directory and the files and folders in
them. Similar to the Windows show properties, the ls command will show us the detailed file
information. Let's break down this output starting from the left. The first column here are file
permissions, side note, we're going to cover file permissions in an upcoming lesson. Okay, next
up is the number of links a file has. Again, we'll discuss this in more detail in a later lesson.
Next, we have the file owner, then the group the file belongs to. Groups are another way we can
specify access, we'll talk about this in another lesson too. So then we have the file size. The time
stamp of last modification, and finally, the file or directory name. The last flag that we'll discuss
for the ls command is the dash a or all option. This shows us all the files in the directory
including the hidden files.
You'll notice that I appended two different flags together. This is the same thing as ls -l -a /. Both work the exact same way. The order of the flags determines which order they're applied in. In our case, it doesn't matter if we do a long list first or show all files first. Check out how some new files are visible when we use these flags. The dash a or all flag shows all files, including hidden ones. You can hide a file or directory by prepending a dot to it, like the file shown here:
.I_am_hidden.
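Here's the whole ls tour as one runnable Bash sketch. The directory and file names are invented for the demo:

```shell
# Build a scratch directory containing one visible and one hidden file.
mkdir -p ls_demo
touch ls_demo/visible.txt ls_demo/.I_am_hidden

# Plain listing: hidden (dot) files are skipped.
ls ls_demo

# -a (all): dot files now appear.
ls -a ls_demo

# Combined flags: a long listing of everything. Order doesn't matter,
# so -la and -al behave the same.
ls -la ls_demo
```

Each column of the -la output maps to the breakdown above: permissions, link count, owner, group, size, modification time, and name.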
We've covered a lot in this video, we've learned how to view detailed information about files
with the ls command. We also started using computer paths and we learned how to get help with
commands using the dash dash help flag and man pages. We even took a sneak peek at our Linux
file system. If I went through any of this a little too quickly, just rewatch the video. We'll meet
back up in the next one, where we'll start changing directories in the GUI. See you there.
Okay. Now that we know how directories are laid out, let's start moving from one directory to
the next. You probably change directories in your GUI a lot without even realizing it. Even if
that's not the case, we're going to go ahead and show you how to do it. Knowledge is power.
There, that was pretty simple, right? We can move freely between any directory in any path on
our systems. One thing to call out is that there are two different types of paths, absolute and
relative. An absolute path is one that starts from the main directory. A relative path is the path
from your current directory. These two distinctions aren't as important when we're working in a
GUI, but they're important when you work in a shell. So let's see what this looks like in the
Windows CLI.

When you first open PowerShell, you'll usually be in your home directory. Your
prompt shows you which directory you're currently in, but there's also a command that will tell
you where you are. PWD or print-working directory tells you which directory you're currently in.
If we want to change the directory that we're in, we can use the CD or change directory
command. To use this command, we'll also need to specify the path that we want to change to.
Remember, this path can be absolute, which means it starts from this drive letter and spells out
the entire path. On the flip side, it can be relative, meaning, that we only use part of the path to
describe how to get to where we want to go relative to where we currently are. I'll show you
what I mean in a minute. So right now, we're in C:\Users\cindy. Let's say that instead, I want to
go to C:\Users\cindy\documents, what do you think the command would look like here? Here it
is, cd C:\Users\cindy\documents. And now we've changed to the documents directory. We use an
absolute path to get to this directory, but this can be a little cumbersome to type out. We know
that that documents directory is under the cindy folder, so can't we just go up one level to get to
that folder? We absolutely can. There's a shortcut to get to the level above your current directory,
CD dot dot. Let's run the PWD command one more time. Now, we can see that I'm in
C:\users\cindy, the parent directory of where I was before. The dot dot is considered a relative
path because it'll take you up one level relative to where you are. Let's go back to the documents
folder and try this again, except this time, let's go to the desktop folder using the new command
we learned. We know that the desktop and document directories are under the home directory, so
we could run CD dot dot then CD desktop, but there is actually an easier way to write this,
cd ..\Desktop. Let's check PWD one more time. PWD now shows that we're in the Desktop folder.
Sweet. Another cool shortcut for CD that you can use is cd ~. The tilde is a shortcut for the path of your home directory. Let's say I want to get to the desktop directory in my home folder. I can do something like this: cd ~\Desktop. We've done quite a bit of typing so far, you might actually
be wondering, what would happen if we messed up while typing these directory names? How are
we supposed to memorize where everything is, and if it's spelled correctly? Fortunately, we don't
have to do that. Our shell has a built-in feature called tab completion. Tab completion lets us use
the tab key to auto-complete file names and directories. Let's use the tab completion to get to our
desktop from our home directory, if I type D and then tab, the first file or directory starting with
D will now complete. Now, if this isn't the file or directory that I was looking for, I can continue
to press tab, and the path will rotate through all the options that complete the name that I started
to type. So I'll see Desktop, and then Documents, and then Downloads. Take note that the dot in front of the path .\Desktop just means the current directory. If I erase this and instead type DE, then the only directory that matches is Desktop. Tab completion is an awesome feature that you'll
be using more and more as you continue to work with commands.
Let's do the same thing in Bash. From our desktop we're going to navigate to the documents
folder. The commands we used earlier in PowerShell are exactly the same here in Bash. Print
working directory or PWD again shows us the current path we're in. Yep, looks good. We're
currently in our desktop directory, which you can see from /home/cindy/Desktop. To navigate
around, we use the CD command just like with Windows. We can give it an absolute path like
this: cd /home/cindy/Documents, or we can give it a relative path like this: cd ../Documents. In Bash, the tilde is used to reference our home directory. So, cd ~/Desktop will take us back to our
desktop, and guess what? We still have that useful tab completion feature in Bash. The
difference between Bash tab complete and Windows tab complete is that if we have multiple
options, it won't rotate through the options, but instead will show us all options at once like this.
We can already start connecting the bridge between Windows and Linux.
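All of the navigation commands above fit in one runnable Bash sketch. It uses a throwaway directory tree instead of the cindy home folder from the video, since that won't exist on your machine:

```shell
base="$(pwd)"                  # remember the starting directory
mkdir -p nav_demo/Desktop nav_demo/Documents

cd nav_demo/Desktop
pwd                            # path ends in nav_demo/Desktop

cd ../Documents                # relative path: up one level, into Documents
pwd                            # path ends in nav_demo/Documents

cd "$base/nav_demo/Desktop"    # absolute path: spelled out from the root

cd ~                           # tilde: shortcut for your home directory
pwd
```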
Now that we've covered listing and changing directories, let's learn how to add new directories.
We can do this in the GUI in a super simple way. Just right-click, new, then folder, and bam, we
have a new folder. Now, what if we wanted to do this in the CLI? In PowerShell, the command
to make a new directory is called mkdir or make directory. Let's make a new directory called
my_cool_folder and there it is. That was easy. What if we wanted to use spaces in our folder
name instead of underscores? What do you think would happen if I did this instead? Mkdir my
cool folder. That's an error. Mkdir is trying to interpret cool and folder as other parameters to the
mkdir command. It doesn't understand those words as valid parameters. Turns out that our shell
doesn't interpret spaces the way we do. So, we need to tell it explicitly that this folder name is
one single thing. We can do this in a variety of ways. We can surround the name with quotes
like mkdir 'my cool folder', or we can escape the spaces by using the backtick character: mkdir my` cool` folder. Escaping characters is a pretty common concept when dealing with code. It
means that the next character after the back tick should be treated literally. In our example,
escaping the space tells the shell that the space after the back tick is part of our filename. While
the back tick is the escape character in PowerShell, other shells and programming languages may
use another character as an escape character. You'll see this in the next video.
In Bash, the command to make a new directory is the same as in Windows. Let's make a new
directory called my cool folder with the mkdir or make directory command. And now, we can
verify my cool folder is in our desktop. Instead of using backticks like in Windows to escape a character, in Bash, you can use a backslash. Similar to Windows, you can also use quotes to encompass an entire file name. How do you think you would make a directory called my cool folder in Linux with spaces? mkdir my\ cool\ folder. There it is. Or, mkdir 'my cool folder'.
Works as well. If you guessed this, you're right. If you guessed wrong, that's okay. Just re-watch
this video so you can get a better grasp of how we came to this conclusion.
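Here's a sketch of all three cases in Bash, with example names only (run it in a scratch directory):

```shell
mkdir -p escape_demo && cd escape_demo

# Backslash-escape each space: one directory named "my cool folder".
mkdir my\ cool\ folder

# Quotes work the same way: one directory named "my cool folder 2".
mkdir 'my cool folder 2'

# With no escaping, the shell splits on spaces and each word
# becomes its own directory.
mkdir another cool folder

ls
```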
Picking right up from the last video, let's say we want to make a couple of directories,
my_cool_folder2 and my_cool_folder3. We could just type mkdir my_cool_folder2, and then
type again mkdir my_cool_folder3, but instead we're going to use another cool PowerShell
feature called history. Each and every time you enter in a command, it gets saved into memory
and added to a special file. You can go through the previous commands you used with the history
command.
I'm now showing a list of commands that I entered earlier. This information alone isn't very
useful. Instead, there's a better use of the history that lets us quickly scroll through these
commands and use them again. We can scroll through these commands with the up or down keys
on our keyboard. I'm going to go up to my previous command, and I should see that I have mkdir
my_cool_folder. Instead of typing the whole thing to make a new folder, I'm just going to
append the number 2 to my command.
And boom, a new folder is created without having to type everything over again. Cool, right? You
can even search through your previously used commands using the history shortcut Ctrl+R.
From here you can start typing bits and pieces of the command you want to look for, and it'll
show you matches. Let's search for the word folder.
I should see the mkdir commands I was using before. Pretty neat. If you're using an older version
of PowerShell, it may not have the Ctrl+R feature. If that's the case you can type the # symbol
followed by some part of your old command, and then use Tab completion to cycle through the
items in your history. The history feature, along with Tab completion and Get-Help, will be your
best friends while you work in PowerShell. Keep them close to you and get to know them super
well. Hmm, our shell is looking a little cluttered. It's kind of hard to see where I'm at, so let's
clean up our shell a little bit. We can do that with the clear command. This doesn't wipe your
history, it just clears the output on your screen. It looks a little better.
The exact same history command that's used in Windows is used in Linux. From here, we can
use our up and down keys and even search through our history with Ctrl+R. To clear your
terminal app, what do you think you'll do? That's right. The clear command.
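The same commands, sketched for Bash. One assumption to flag: inside a script, history recording is off, so this sketch turns it on with set -o history first (interactive shells already have it on):

```shell
# Enable history recording (only needed in non-interactive scripts).
set -o history

# Run a couple of commands so there's something to find.
mkdir -p history_demo
ls history_demo

# 'history' prints the numbered list of commands entered above.
history

# 'clear' would repaint the terminal at this point; it does not
# erase the history list itself.
```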
We've already created a few files and directories, but we need a couple more. We don't want to
create them from scratch. So let's make copies instead. In the Windows GUI, all you need to
do is right-click, copy, then paste. You can also use hotkeys if you want. A hotkey is a keyboard
shortcut that does some sort of task. In Windows, the hotkey for copy is Ctrl-C, and for paste, it's
Ctrl-V.
In PowerShell, the command used to copy something is cp. We also need to provide the file we
want to copy and the path where we want to copy it to. Let's copy mycoolfile.txt to the
desktop. There you can see mycoolfile.txt was added to our desktop. I have a few of these files I
want to move over, but I'm feeling a little lazy and don't want to run this command over and over
again. So, I'm going to use something called a wildcard to help me copy over multiple files at
once. A wildcard is a character that's used to help select files based on a certain pattern. Let's say
you want to get all the files that are JPGs and copy them somewhere. In my
documents directory, I have files called hotdog.jpg, cotton-candy.jpg, and pretzel.jpg. I need to
come up with a pattern to help me select all these files. What do they have in common besides
being named after delicious food? The .jpg extension. Literally, anything else can be in front of
the .jpg file extension, and it won't matter. That's what the wildcard asterisk does. It's a pattern for
anything. So I'm essentially saying, select all the files with the pattern anything.jpg. So, to copy
over all the JPGs in the folder, I can use cp, the asterisk symbol, .jpg, and the path I want to copy
them to. Let's just verify. There it is. Now, instead of copying files one by one, we can use a
single command to get all the files we want. For now, the only selector you'll be using is the
asterisk for all. Next up, let's say I want to copy over a directory. I'm going to try to copy a folder
called Bird Pictures to my desktop. Let's just go back into documents. That's Bird Pictures. Now
copy Bird Pictures to desktop. Now, this does exactly what we told it to do. It copies the
directory. However, this directory is empty. What it doesn't do, is copy over the contents of the
directory. To copy the contents of a directory, you need to use another command parameter,
-Recurse. The -Recurse parameter lists the contents of the directory. Then, if there are any
subdirectories in that listing, it'll recurse, or repeat, the directory listing process for each of those
subdirectories. We need to use the -Recurse parameter with copy to copy the contents of the
directory along with the directory itself. We're also going to use a new parameter, -Verbose. Copy
doesn't output anything to the CLI by default unless there are errors. When we use copy with
-Verbose, it will output one line for each file and directory being copied. Let's give it a try: copy
Bird Pictures with the -Recurse and -Verbose parameters. The verbose messages show that we copied
Bird Pictures again, this time along with the file inside it, which is now here. Excellent. Now
the directory and all the contents are copied to my desktop.
In Bash, the exact same cp command used in Windows works for copying files. Let's take a look at
this directory. Let's copy my_very_cool_file.txt to my desktop. And there it is. We can also use
the same asterisk wildcard to select patterns. Since this is similar to our Windows copy
command, what do you think we can use to copy over the .png files in this directory? I have files
called Pizza.png, Soda.png, and Cake.png. So I can use cp *.png, then the desktop directory. Now
if I look at my desktop again, there they are. The same copy rules apply in bash. If we want to
copy over a directory, we have to recursively copy over the directory to get all the contents. The
flag for recursive copy is dash r. If I want to copy over my cat pictures folder to the desktop, I
can do something like this. And there it is.
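Putting the Bash copy steps together in one sketch (all the file and folder names here are invented stand-ins for the ones in the video):

```shell
cd "$(mktemp -d)"                      # scratch area for the demo
mkdir documents desktop
touch documents/pizza.png documents/soda.png documents/cake.png
cp documents/*.png desktop/            # the * wildcard matches anything ending in .png
mkdir documents/cat_pictures
touch documents/cat_pictures/cat1.jpg
cp -r documents/cat_pictures desktop/  # -r recursively copies the folder and its contents
ls desktop                             # shows all four items were copied over
```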
We talked about making and copying files and directories so far. But what if we wanted to
rename something that we've created? Well, in the Windows GUI, if we want to rename a file, we
just right-click and rename.
In the command line, if we wanted to rename a file, we can use the move, or Move-Item,
command. It lets us rename a file by moving it without changing the directory that it's
stored in. On my desktop here, I have blue document and I'm going to move or rename it to
yellow document. Now, you can see that I have a yellow document. As you might guess, the
move command also lets us move files from one directory to another. Let's move the yellow
document into My Documents. I can verify that. There it is, cool. You can even move multiple
files by using wildcards. And now you can see, the rest of my colored documents went into My
Documents.
The exact same command can be used in Linux. mv, or move, can rename and move files and
directories. Same thing applies here. I'm going to move my red_document and rename it to
blue_document. Now we can see it's been renamed to blue_document. Then, I'm going to move
the blue_document into the documents folder. There it is. Using wildcards, we can move
multiple files at once, just like Windows. Let's move all of the underscored document files here
to our desktop. Now if we check the desktop, there they are.
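A quick Bash sketch of mv for renaming and moving, with file names invented for the demo:

```shell
cd "$(mktemp -d)"                       # scratch area
mkdir documents desktop
touch red_document.txt
mv red_document.txt blue_document.txt   # same directory: mv renames the file
mv blue_document.txt documents/         # different directory: mv moves it
touch a_document.txt b_document.txt
mv *_document.txt desktop/              # a wildcard moves several files at once
```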
All righty, now that we've learned how to list, create, and move around files in directories, let's
start removing them. In the Windows GUI, if you wanted to remove a file or folder, just right-
click and delete. The file ends up in the recycle bin, which you can find on your desktop. If you
wanted to restore a file here, you could just right-click and Restore.
If you empty your bin for any reason, you won't be able to retrieve those files. In PowerShell, the
command to remove files and directories is rm, or Remove-Item. Take caution when using remove
because it doesn't use the recycle bin. Once the files and directories are removed, they're gone for
good. Let's remove a file called text1.txt in my home directory. We can see it there. I'm just
going to remove it.
And now it's gone. The remove command might seem like a dangerous weapon in the wrong
hands. Fortunately, there are safety measures in place that only give this ability to users that are
actually authorized to use it. We'll talk more about file permissions in a different lesson. But let's
take a quick look at what I mean. Let's remove a file called important_system_file. I get an error
message saying that I don't have permission to delete this file. In some cases like this one, it's
because it's been marked as a system file. In other cases, it might be because I don't have enough
permissions in the file system to remove the file. I do have the right permissions this time, but
since it is an important file, PowerShell wants to make sure that I meant to do this. If I repeat the
command with the -Force parameter, remove will go ahead and remove the file. Let's take a look.
-Force, And you can see the file's gone. If the file belongs to someone else, or if I'm not an
administrator, then I might not have the right permissions to remove the file. In that case I'll need
to access an administrative account to remove the file. Okay, let's try removing a directory with
remove next. Here we go. Here's another place where PowerShell is going to ask us if we really
meant to do this. Since this is a directory that contains other files, and we did not use the
-Recurse parameter, we see a prompt asking us to confirm that we really want to remove the
directory and all its contents. We can say Yes or Yes to All to continue. We can also cancel this
command and run it again with the -Recurse parameter. That way, PowerShell knows that we
understand the consequences of what we're doing. So let's go ahead and cancel this and try again.
-Recurse. Yeah, now it's gone. And that's the remove command in a nutshell. Again, because of
the nature of this command, you'll want to be extra careful when removing files or directories.
To remove files from Linux, just like in Windows, we can use the rm or remove command. Let's
remove this text1 file. And just like that, it's gone. Similar to Windows, we get a message if we
try to remove something that we shouldn't be able to. Let's remove this self_destruct_button.
Awesome, everything is working as intended. Next let's try removing a directory. If you thought
to yourself that we need to also recursively remove this directory, you'd be right, excellent
deduction skills. So rm -r, let's remove the misc_folder directory. And if we check, the
misc_folder is now gone. Remember, when using the rm command, take extra precaution that you aren't
removing something important by accident.
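Here's how those rm steps might look in Bash, using throwaway names in a scratch directory (remember, there's no recycle bin on the command line):

```shell
cd "$(mktemp -d)"          # scratch area so nothing important is at risk
touch text1.txt
mkdir misc_folder
touch misc_folder/notes.txt
rm text1.txt               # gone for good -- no recycle bin
rm -r misc_folder          # -r is required to remove a directory and its contents
```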
I knew enough to be dangerous, and I think that's what got me into my systems administrator
role in Linux. When I got in that role I was working with people who were insanely brilliant.
They had Wikipedia pages written about them, about their contribution to Linux and all these
open source contributions.
They weren't using the operating system, they were engineering it, they were contributing code,
fixing hardware issues and fixing software issues. That type of environment really leveled up my
skills in terms of Linux, because I had to learn, I had to keep up somehow, so I would read their
bug reports and what they did. I guess I'd say, after about a year, I was really comfortable on the
command line. I was packaging my own tools and I was writing code and I was contributing to
open source projects. It was definitely an eye opener considering how much I thought I knew
about operating systems to what I know now. The feeling you get when you contribute code to
something that thousands, if not millions of people might use, you kind of don't believe that you
just did that. That's the feeling you get when you do something in the open source community.
I'm passionate about operating systems because there's a lot of stuff that you can do with them.
You can contribute code to an operating system like Ubuntu or Debian and you can make an
actual impact. I mean I can't go out and build a new CPU or something and have people use it
but I can contribute code, I can fix a bug. There's so much stuff you can do with the operating
system, it's unbelievable.
The most important thing about being a mother is leading by example. What do you do when you
have nothing? A year ago I found myself homeless with my daughters. The whole shelter
experience for the kids, I kept telling them that we were just on vacation and waiting for the
house to be ready. That's the worst thing I ever had to do. I grew up in the housing projects in
East Nashville, so nobody ever talked about career paths. I didn't know what to do or where to
go, but I kept saying, "They are watching how you handle this. You have a serious example to
set for these girls." Most people think that Goodwill is just a retail store, but it's so much more
than that. While I was living at the shelter, I decided to reach out to their career center and I
actually got a job as an office [inaudible] for Goodwill. A co-worker told me that Goodwill and
Google actually have a program to provide IT training, the program is called the IT Support
Professional Certificate. When I learned that I could get a scholarship through Goodwill, it was
life-changing. Chelsea is the person Goodwill was designed to support. That's why thanks to the
assistance of Google.org, we started the Goodwill Digital Career Accelerator. Using tools and
resources from Grow with Google, the Goodwill Digital Career Accelerator is focused on
connecting more than a million people with the skills they need to advance in digital careers. The
Google IT Support Professional Certificate was a great building block for this. I joined the 4:00
AM club, I would get up while the girls were asleep, do my schoolwork. While I was studying, I
learned that a Google representative was going to come and give a tech talk at Goodwill. I had to
go. Chelsea really stood out when I met her at the Goodwill event, she asked some really
interesting questions and the enthusiasm was tangible. So I asked her to send me her resume.
When I got my interview, I was so nervous, I wasn't sure if I was good enough. During the
interview process, Chelsea demonstrated not only the foundational technical knowledge that
she'd developed, but her initiative. That's exactly what we need for people who are working in
our data centers. So we brought her onboard. I absolutely love my job. When I first got the job,
my daughter, she was like, "Mom, you got this job, that means we'll have a house forever." In the
year that we've been working with Google.org, we've seen more than a quarter of a million
people build their digital skills. Almost 30,000 people have gone to work. A year ago, I wasn't
sure where my life was going. I thought everything was falling apart. I feel hopeful about the
future now. I want my daughters to know that they can achieve any goal that they can set for
themselves. My goal is to be a developer, and this is what I want to do. I have come this far, I
plan to get to the stars. My name is Chelsea Rucker, and I'm a Data Center Technician for
Google.
Now that we've learned the basics of file and directory navigation, let's learn how we can display
and edit files, search for text within files and more. In the Windows GUI, if we want to open a
file and view its contents, we can just double-click on the file. Depending on the file type, it will
open in a default application. In Windows, text files open by default in an application called
Notepad. But we can change this if we want to. To change the default application that opens files,
just right click and click Properties. Under 'Open with', we can change the application to another
text editor, like WordPad. Most of the files that we'll be dealing with throughout this course
will be text and configuration files. So, let's just focus on those files instead of images, music
files, etc. To view the contents of a file in PowerShell, we simply use the 'cat' command, which
stands for concatenate. Let's give it a try. This will dump the contents of the file into our shell.
This isn't the best solution for a large file, since it just keeps writing the content until the whole file is
displayed. If we want to view the contents of the file one page at a time, we can use the 'more'
command, like this.
The 'more' command will get the contents of the file but will pause once it fills the terminal
window. Now, we can advance the text at our own pace. When we run the 'more' command, we're
launched into a separate program from the shell. This means that we interact with the more
program with different keys. The Enter key advances the file by one line. You can use this if you
want to move slowly through the file. Space advances the file by one page. A page in this case
depends on the size of your terminal window. Basically, 'more' will output enough content to fill
the terminal window. The q key allows you to quit out of 'more' and go back to your shell. If we
want to leave the 'more' command and go back to our shell, we can just hit the q key. Here we
are. Now, what if we just wanted to view part of the file? Let's say we want to quickly see what
the first few lines of the text file are. We don't really want to open up the whole file. Instead, we
just want to get a glimpse of what the document is. This is called the head of the file. To do this,
we can go back to 'cat' and add the -Head parameter. This will show us the first 10 lines of the
file. Now, what if we wanted to view the last few lines or the tail of the file? I bet you can guess
what you are going to do. This will show us, by default, the last ten lines of the file. Again, these
two commands don't seem like they have any immediate use to you yet. We'll see their benefits
when we work with logs in an upcoming lesson. Now, let's take a look at how to do these same
tasks in Linux.
To read a simple file in Bash, we can also use the cat command to view a document. So let's look
at important document. The cat command is similar to the Windows cat command, since it
doesn't do a great job at viewing large files. Instead, we use another command, less.
Less does a similar thing to what more does in Windows, but it has more functionality. Fun fact,
there's a Bash command called more, but it's been slowly dying out in favor of less. It's literally a
case of less is more. Similar to more, when we use less we're launched into an interactive shell.
Some of the most common keys you'll use to navigate this tool are the up and down keys, page
up and page down. g, this moves to the beginning of a file. You can see now we're at the
beginning. Capital G, this moves to the end of a text file. Now we're at the end. Slash, followed by
a search word: this allows you to search for a word or phrase. If I type in slash then type the word
I want to search for, I can scan through the text file for words that match my search. Q, this
allows you to quit out of less and go back to your shell, similar to the q key in the Windows more
command. Do you see how less offers functionality like searching within a file?
Less is a great tool to use to view files of any size. You'll no doubt end up using this command
often as an IT support specialist. Similar to the Windows cat command with the -Head parameter, we can do the
same thing in Linux using a command called head. This will show you, by default, the first ten
lines of a file. Now what if you wanted to view the last few lines of a file? You can use a
command called tail. This will show you, by default, the last ten lines of a file.
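To see the difference between these viewers yourself, here's a small Bash sketch with a generated sample file (the file name is made up for the demo):

```shell
cd "$(mktemp -d)"
seq 1 25 > numbers.txt     # a 25-line sample file standing in for a real document
cat numbers.txt            # dumps the entire file at once
head numbers.txt           # the first 10 lines, by default
tail numbers.txt           # the last 10 lines, by default
head -n 3 numbers.txt      # -n changes how many lines head (or tail) shows
# 'less numbers.txt' would open the same file in the interactive pager instead
```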
So far, we've discussed how to read and modify files. But we haven't covered how to edit file
contents yet. Spoiler alert, you're about to learn. You can edit text based files in notepad, which
we used earlier to view a text file. Notepad is great for basic editing. But when making changes
to configuration files, scripts, or other complex text files, you might want something with more
features. There are lots of good editors out there for the Windows GUI. For this demonstration,
we'll use one called Notepad++. Notepad++, which you can access from the next supplemental
reading, is an excellent open source text editor, with support for lots of different file types.
Notepad++ can open multiple files and tabs. It also does syntax highlighting for known file
types, and has a whole bunch of advanced text editing features. Syntax highlighting is a feature
that a lot of text editors provide. It displays text in different colors and fonts to help you
categorize things differently. We've already installed Notepad++ on our machine. So, you can
check out their website and do the same. Now, you can edit any file using Notepad++ by right
clicking it and selecting edit with Notepad++.
What if you wanted to edit a file from the CLI? Unfortunately, there's no good default editor in
the Powershell terminal. But we can launch our Notepad++ text editor from the CLI and begin
modifying text that way. So start, Notepad++, and then just a filename. As you can see, it opened
up Notepad++, and asked if I wanted to create this file. If you'd like to read about text editors
that you can specifically use in the CLI, check out the supplemental reading on an advanced text
editor called Vim.
Supplemental Reading for Notepad++
For more information about Notepad++, check out the link here.
In Linux, there are many popular text editors that we can use to modify files. We won't have
enough time to cover them all. So let's just focus on one editor that can be found on virtually any
distribution, Nano. Nano is an extremely lightweight but useful text editor. We've included it in
the supplementary readings after this video, so go check it out. To edit a file in Nano, just type
Nano then the file name. Once we do that, we'll be launched into the Nano program. From here,
we can start editing content as we normally would with any other text editor. At the bottom of
the screen, you'll notice a few options like caret G and caret K. The caret means to use Ctrl-G or
Ctrl-K. We won't talk about all these options, but a few that might be useful are Ctrl-G, which
helps open up a help page, and Ctrl-X which is used when you want to save your work or exit
Nano. Let's go ahead and edit this file, then save our changes.
It's asking me if I want to save the file or exit and discard my changes. I'm just going to hit Y
because I want to save them. Once I do that, I'll be exited from Nano. Let's verify we actually
changed that file. There it is. Nano is a super useful tool if you need a quick text editor in Linux.
But if you want to be a true OS power user, I recommend that you read the supplemental
material I've included to learn more about the text editors that are used in the industry, like Vim
or Emacs.
Supplemental Reading for GNU Documentation
For more information on Nano, click here; for Vim, click here; and for Emacs, you can view here.
So far in this course we have been using command aliases in PowerShell. PowerShell is a
complex and powerful command language that's also super robust. We've been able to use
common aliases that are exactly the same as their Linux counterparts. But from here on out,
we'll need to deploy some advanced command line features, so we'll need to look at real
PowerShell commands. You've already seen an example of a real PowerShell command, Get-
Help, which is used to see more information about commands. There's another PowerShell
command that we can use to look at one of our aliases, like the ls command we've been using to list
directories. To see what the actual PowerShell command is that gets executed, we can use the PowerShell
command Get-Alias. Interesting: when we call ls, we are calling the PowerShell command Get-
ChildItem. It gets, or lists, the children, which are the files and subdirectories of the given item.
Let's actually run this Get-ChildItem command with the item C:\.
You'll see this is the same output as ls C:\.
Cool. PowerShell commands are very long and descriptive, which makes them easier to
understand. But it does mean a lot of extra typing, when you're working interactively at the CLI.
Aliases for common commands are a great way to work more quickly in PowerShell.
We've been using them up to this point to help us hit the ground running with the command line.
In Windows, you pretty much have three different ways you can execute commands. You can
use real PowerShell commands, or the relatable alias names. Another method that we've
mentioned, but haven't really talked about yet is cmd.exe commands. Cmd.exe commands are
commands from the old MS-DOS days of Windows. But they can still be run due to backwards
compatibility. Keep that in mind, that they aren't as powerful as PowerShell commands. An
example of a cmd.exe command is dir, which coincidentally points to the PowerShell command
Get-ChildItem, which is also where the ls alias gets pointed to. Remember the PowerShell
command Get-Help? Well, there's a command parameter that you can use to get help with
cmd.exe commands: /?. Keep the difference in mind: Get-Help is used for PowerShell
commands, like Get-Help ls, and /? is used for other commands, like dir /?. If I tried to use ls /?, it
will return nothing, because the PowerShell command that ls is an alias of doesn't know how to
handle the parameter /?, and vice versa. You're free to use whatever commands you feel
comfortable with. But in this course we're going to use common aliases, and PowerShell
commands.
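For comparison, Bash has a rough analog of Get-Alias: the type builtin tells you what a name actually resolves to, whether that's a builtin, an alias, or a file on disk. A minimal sketch:

```shell
type cd          # reports that cd is a shell builtin
type ls          # usually a file on disk (interactive shells often alias it)
command -v ls    # prints the path of the executable that would actually run
```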
You've probably had to search for words in a text document before. Whether it was to find and
replace words or for something else. Most text editors work the same way when it comes to
finding words in the document. All you need to do is Ctrl+F to search for the word. Pretty simple
right? But what if you wanted to see if a word existed in multiple files? There are a few ways we
can do this. Let's talk about the GUI options and then we'll turn to PowerShell and learn how to
search for words from the CLI. Windows has a service called the Windows Search Service. This
service indexes files on your computer by looking through them on a schedule. It then compiles a
list of names and properties of the file that it finds into a database. This is a time consuming and
resource-intensive process, so on many Windows servers, the search service isn't installed or
is disabled. On Windows 8 and Windows 10 desktop computers, it's often enabled for files in
your home directory, but not for the entire hard drive. By default, the Windows Search Service
will let you find files based on their name, path, the last time they were modified, their size, or
other details, but by default you can't search for words inside the files. The Windows Search
Service can be configured to search file contents and their properties. This increases the amount
of time that it takes for the indexer to do its work. It's sort of like the computer is doing all of the
searches that you might want to do ahead of time and then you just have to look up the result.
Let's configure the service to index file contents and see what it looks like. The settings we're
looking for are in the Control Panel, but we can use the Start menu to find the settings we need
faster. Open the Start menu and then type indexing. You'll see the Indexing Options in your
results of the search, click on that. Now you want to change the settings for the user folder which
is where all the home directories are stored. Select Users and then click Advanced.
Now select the File Types tab, and select Index Properties and File Contents.
Click Okay.
Now close out of the indexing options. When you do this, the Windows Search Service will start
to rebuild the index based on your new settings. This could be super fast or could take a while. It
all depends on how many files you have and how large they are. On this system, I've already let
the re-indexing complete. Now I can use Windows Explorer in my home directory to find files
that have a specific word in them.
Let's search for the word cow. The results turn up the farm animals and ranch animals text files.
Awesome, we can see the word cow in this text file. If you don't want to use the Windows
Search Service, we can also use Notepad++, the editor that we installed in an earlier lesson.
From Notepad++, press Ctrl+Shift+F to open the Find in Files dialog.
From here, we can specify what we want to find and what files we want to search. We can
limit the search to a specific directory or to a specific set of file extensions, and we can even
replace the word with another one from here. So let's search for the word cow again, and
this time I'll search on my home directory. Find all, there we go. Now it returns farm animals and
ranch animals. If we can't or don't want to use a GUI, we can search for words within files from
the command line.
In PowerShell, we're going to use the SLS or Select-String command to find words or other
strings of characters in files. You can think of strings as a way for the computer to represent text.
The Select-String command lets you search for text that matches a pattern you provide. This
could be a word, part of a word, a phrase or more complicated patterns that are described using a
pattern matching language called Regular Expressions. Keep in mind that this is a really
powerful capability that we're just scratching the surface of. So here we're going to search for a
word in a file in my home directory. Let's search for the word cow again.
You'll see that Select-String found cow and it tells you the file and the line number where it
found it. Excellent, if you wanted to search through several files in a directory, you can use
pattern matching to select them. Remember the wildcard character asterisk for selecting all, we
can use that here as well. Now we can see that it found farm animals and ranch animals. Select-
String can do lots of other things too. We'll get a chance to see that in later lessons. Being able to
find a string in a file or a set of files, it's going to be a critical skill for you on this course and in
your IT support work. It's also an important tool that we're going to learn to combine with other
tools to do really powerful things from the CLI.
What if we wanted to search for something within a directory, like looking for just the
executables in that directory? This is where the command parameter -Filter comes in. I'm
just going to ls my Program Files directory here with -Recurse and -Filter, and look for exes. Well, that's
lots of exes. The -Filter parameter will filter the results for file names that match a pattern. The
asterisk means match anything, and .exe is the file extension for executable files in
Windows. So the only results we're going to get are the files that end in .exe. Cool.
In Bash, we can search for words within files that match a certain pattern using the grep
command. What if you wanted to know if a certain file existed in a directory, or if a word was in
a file? Similar to the PowerShell select-string command, we can use the grep command in Bash.
Let's search for the word cow in farm animals. You'll see that grep found cow in the text file,
farm animals. You can also use grep to search through multiple files. Let's use the asterisk
wildcard here. And you can see that it found cow in farm animals and ranch animals.
You'll be using grep a lot throughout this course and in later courses, so it's an important
command to remember.
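Those grep steps can be sketched like this, with two small invented files standing in for the animal lists from the video:

```shell
cd "$(mktemp -d)"
printf 'pig\ncow\nhen\n' > farm_animals.txt
printf 'horse\ncow\n'    > ranch_animals.txt
grep cow farm_animals.txt    # prints the matching line
grep cow *.txt               # searches every .txt file; prefixes matches with file names
grep -n cow farm_animals.txt # -n adds line numbers, much like Select-String's output
```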
All right, we've learned a bunch of individual, very powerful tools. These are the most important
day-to-day commands that you'll need to work in PowerShell. Now, we're going to learn how to
combine these tools to make them even more powerful. Let's run the following command in our
desktop directory. Then we'll break it down piece by piece. So I'll cd into my desktop directory.
Okay: echo woof > dog.txt. We'll do an ls to check our desktop, and we'll now see a file called
dog.txt. Inside that file, we should see the word, woof. Oh, there it is. What's happening here?
Let's take a closer look, echo woof. In PowerShell, the echo is actually an alias for Write-Output.
That gives us a clue to what's happening. We know the echo command prints out our keyboard
input to the screen. But how does this work? Every Windows process and every PowerShell
command can take input and can produce output. To do this, we use something known as I/O
streams or input output streams. Each process in Windows has three different streams: standard
in, standard out, and standard error. It's helpful to think of these streams like actual water streams
in a river. You provide input to a process by adding things to the standard in stream, which flows
into the process. When the process creates output, it adds data to the standard out stream, which
flows out of the process. At the CLI, the input that you provide through the keyboard goes to the
standard in stream of the process that you're interacting with. This happens whether that's
PowerShell, a text editor, or anything else. The process then communicates back to you by
putting data into the standard out stream, which the CLI writes out on the screen that you're
looking at. Now, what if instead of seeing the output of the command on the screen, we wanted
to save it to a file? The greater than symbol is something we call a redirector operator that lets us
change where we want our standard output to go. Instead of sending standard out to the screen,
we can send a standard out to a file. If the file exists, it'll overwrite it for us. Otherwise, it'll make
a new file. If we don't want to overwrite an existing file, there's another redirector operator we
can use to append information, greater than, greater than. So let's see that in action, echo woof
>> dog.txt. Now, if I look at my dog.txt file again, we can see that woof was added again. But,
what if we wanted to send the output of one command to the input of another command? For
this, we're going to use the pipe operator. First, let's take a look at what's in this file. cat
words.txt. Look at that, it's a list of words. Now, what if we want to just list the words that
contain the string st? We can do what we've done before and just use select-string or SLS on the
file directly. This time, let's use the pipeline to pass the output of cat to the input of select-string.
So cat words.txt | select-string st. And now, we can see a list of words with the string st. To tie
things together, we can use output redirection to put our new list into a file. So now, greater than,
and then a new file called st_words.txt. Now, if I cat st_words.txt, yup, there it is. That's just a
very basic example of how you can take several simple tools and combine them together to do
complex tasks. Okay, now we're going to learn about the last I/O redirector, standard error.
Remember when we tried to remove a restricted system file earlier and we got an error that said
permission denied? Let's review that once more. This time, I'm going to remove another
protected file, rm secure_file. We see errors like we're supposed to. But what if we didn't want to
see these errors? Turns out, we can just redirect the output of error messages in a different output
stream called standard error. The redirection operator can be used to redirect any of the output
streams, but we have to tell it which stream to redirect. So, let's type rm secure_file 2> errors.txt.
If I look at errors.txt, I can see the error message that we just got. So, what does the two mean?
All of the output streams are numbered. One is for standard out, which is the output that you
normally see, and two is for standard error or the error messages. Heads up, PowerShell actually
has a few more streams that we aren't going to use in this lesson. But they can be redirected in
the same way. You can read more about them in the supplemental reading right after this video.
So when we use two greater than, we're telling PowerShell to redirect the standard error stream
to the file instead of standard out. What if we don't care about the error messages, but we don't
want to put them in a file? Using our newly learned redirector operators, we can actually filter
out these error messages. In PowerShell, we can do this by redirecting standard error to $null.
What's $null? Well, it's nothing. No, really. It's a special variable that contains the definition of
nothing. You can think of it as a black hole for the purposes of redirection. So let's redirect the
error messages this time to $null, rm secure_file 2> $null. Now, our output is filtered from error
messages. There's still much more to learn if you're interested. Try Get-Help about_redirection in
PowerShell to see more detail. It may take a little time to get the hang of using redirector
operators. Don't worry, that's totally normal. Once you do start to get used to them, you'll notice
your command line skills level up and your job becomes a little easier. Now, let's take a look at
output redirection in Linux.
Similar to Windows, we have three different I/O or input-output streams: standard out, standard
in and standard err. Remember the standard out example in the last lesson? Well, the same
concept applies in Linux.
We echo the text woof here, but instead of sending it to our screen by default, we're going to
redirect the output to a file using the standard out redirector operator. Let's verify and there it is.
This overwrites any file named dog.txt with the content woof. If we don't want to overwrite an existing file, we can use the append operator, or greater than greater than. So, echo woof >> dog.txt. We can verify that. There it is. One redirector operator that we talked about in the
Windows lesson, but didn't show an example of, was the standard in redirector operator. The
standard in redirector is denoted by a less than sign. Instead of getting input from the keyboard,
we can get input from files like this.
This command is exactly the same as cat file_input. The difference here is that we aren't using
our keyboard input anymore, we're using the file as standard in. Finally, similar to Windows, the last
redirector operator we'll talk about is standard err. Standard err displays error messages which
you can get by using the two greater than, redirector operator. Just like Windows, the two is used
to denote standard err. So, to redirect just the error messages of some output, you can use
something like this: ls /dir/fake_dir 2> error_output.txt. Now, if I view that file, we can see the error message in error_output.txt. Remember the $null variable
that we used in Windows to toss unwanted output into a metaphorical black hole? We have
something like that in Linux too. There's a special file in Linux called the /dev/null file. Let's say
we want to filter out the error messages in a file and just want to see standard out messages. We
could do something like this. Now, our output is filtered from error messages. Remember how
we talked about taking the output of one command and using it as the input of another command,
with the Windows pipeline? Well, the same thing exists in Linux. The pipe operator allows us to do this. Let's say we want to see which sub-directories in the /etc directory contain the
word Bluetooth. We can do something like this. We're using the pipe redirector to take the output
of ls -la /etc and pipe or send it to the grep command. Now, without even looking through the
directory, we're able to quickly see if the Bluetooth directory is in here. There it is. You've gotten
a glimpse of the power of redirectors and as you dive deeper into the world of Linux, you'll be
using them on a regular basis. They're super valuable tools to have and now, they're part of your
toolkit.
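To recap, here's every redirector operator from this lesson in one short sketch. The file names and word list are just examples, not files from the lesson's machine.

```shell
# A recap of this lesson's redirector operators; file names are illustrative.
cd "$(mktemp -d)"            # work in a throwaway directory

echo woof > dog.txt          # >  overwrite (or create) dog.txt with standard out
echo woof >> dog.txt         # >> append a second woof instead of overwriting
wc -l < dog.txt              # <  feed dog.txt to wc through standard in

ls fake_dir 2> errors.txt || true    # 2> capture only the error message
ls fake_dir 2> /dev/null  || true    # or toss errors into the /dev/null black hole

printf 'stone\ncat\nstar\n' > words.txt
grep st words.txt > st_words.txt     # combine a pipe-style filter with >
cat st_words.txt                     # stone and star, the words containing st
```

Note the `|| true` after the failing ls commands: it just keeps the sketch running past the intentional errors.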
You've learned a lot of commands and tools to help lay a strong foundation for IT support work.
There are many other commands that you haven't seen yet. Don't worry, we'll get to them as they
come up. As you advance in your career, you might even discover that the tools and commands
you're using aren't powerful or efficient enough anymore. Maybe you'll want to search through
files using more complex patterns. To do that, you'll need to know about tools like regular
expressions. Regular expressions are used to help you do advanced pattern-based selection.
There's also so much more to PowerShell. There are excellent videos and articles that can guide
you from the first steps you've learned here to being a Windows CLI master. If this sounds
interesting to you, we really encourage you to check out the supplementary reading right after
this video. And no, we won't grade you on your knowledge of this material in these courses, but
it could be really useful to you in the IT support field. You've done some seriously awesome
work. We've covered a lot of information in this lesson. Maybe this was the first time you've
been exposed to Linux or Windows. If so, you've already passed a huge milestone in your
learning journey. It's super important that you're able to use the commands you learned here by
memory. I hope you wrote them down in your notes while watching the videos in this course.
Next up, we'll be testing you on some of the new commands you learned in Bash and Windows
CLI. Make sure to re-watch the videos and practice the exercises, if you want a refresher before
you start. When you're ready, we'll see you in the next lesson.
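As a tiny taste of the regular expressions mentioned above, here's a sketch using grep's extended patterns. The file name and word list are made up for illustration.

```shell
# A first taste of regular expressions with grep -E.
printf 'cat\ncart\ncannot\n' > /tmp/regex_demo.txt

grep -E '^ca.t$' /tmp/regex_demo.txt   # . matches any single character -> cart
grep -E 'can+ot' /tmp/regex_demo.txt   # + means one or more n's -> cannot
```

Plain grep, like the select-string example earlier, only matched a fixed string; patterns like these are where the real power comes in.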
Supplemental Reading for Windows PowerShell
For more information on getting started with Microsoft PowerShell, check out the link here
and also here.
I have to say, I think, that looking back, I've been really lucky. I had started off even before I was
a teenager teaching myself without computers and without even access to them. And somehow,
kind of in the end of that era when I was an early teen, I managed to convince a bookstore owner to let me help him. He needed to buy a computer, and I would program it for him to automate his textbook business. So I had never actually done that before, right. I had never laid my hands on that kind of computer before. I had never written a program like that before. And the
guy believed me and he did it. And he hired me. That was a part-time job that stuck with me,
actually, until I was in my mid-20s. So for like 11 years, I had this part-time job automating the
textbook business of a neighborhood bookstore. And the thing that's so amazing, and that I'm so
grateful for, is that this guy had this faith in me and put this trust in me. Then I'm even more
amazed that it actually kind of worked out, right. And I spent all those years doing it, and it
helped their business and all that. But it was a great experience. And it was a great way to kick
off my actual, early career as a professional programmer.

Week 2 Users and Permissions


Users and Groups
Welcome back. Now that we've learned how to navigate around the Windows and Linux
operating systems, let's start setting up our computer for use by other people. As an IT support
specialist, you'll be responsible for other people's machines. People will depend on you to help
set up their machines, troubleshoot their issues, and so on. In this lesson, you'll learn how to
manage multiple accounts on one machine. You'll also learn about the different permissions and
access types, how to add and remove users, and the best practices to use when managing multiple
users. It's common for a computer to have multiple users. On your home computer, you might
have your parents, siblings, or children using the same computer. Your town library, school, or
other public places might also have computers with multiple users. Even though these machines
have multiple user accounts on them, all users on a computer are isolated from others. This
means that Kevin can't see Victor's files and folders, and vice versa. There are two different
types of users, standard users and administrators. A standard user is one who is given access to a
machine, but has restricted access to do things like install software or change certain settings. An
administrator or admin is a user that has complete control over a machine. They can view
anyone's account, change and remove anyone on the computer, and view every single file. You
can have multiple administrators on a machine as well. On your personal machine, you're the
default administrator because this gives you complete control over your system. After all, it is
your machine. But on a public computer, the administrator is someone who actually runs and
maintains the machine, like an IT support specialist. They can grant access for other users, install
software, change restricted system settings, and perform other actions they deem appropriate.
How terrible would it be if anyone who is using a public computer could just install software?
The computers would be bloated, things would be out of place, and worst of all, they could be
infected with malicious software. Users are put together in groups according to levels of access
and permissions to carry out certain tasks. These tasks depend on what the computer's
administrator considers appropriate. An administrator could give different access and settings
based on the type of group a user is in. Let's say you're an administrator for your home computer,
which everyone in the house uses. You put your parents in a group called Parents, and your kids
in a group called Children. You don't want either of them to be able to install software, but you
also want to add child safety restrictions on the children group. As the administrator, you're able
to specify different permissions for both of these groups. So how do you differentiate what type
of user you are, and what groups you're in on Windows and Linux? Well hopefully, you'd know
this if you are an administrator of a computer. But if you don't, computers do a pretty good job of
telling you. In the next few lessons, we'll see how we can view user and group information in
Windows and Linux.
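On the Linux side, for example, a few one-word commands answer "who am I and what groups am I in?" This is just a quick preview of what the upcoming lessons cover in more depth.

```shell
# Quick ways to see your own user and group information on Linux.
whoami    # the username you're logged in as
id -u     # your numeric user ID (0 means root)
groups    # every group your account belongs to
id        # uid, primary gid, and all groups in one line
```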
To view user and group information in Windows, we're going to use the computer management tool. If we search computer management in our application search and open it up, we'll see a
window that gives us a lot of information. We'll be using this application a lot throughout this
course, so let's take some time to go over it. At the top of the sidebar, you'll see it says computer
management local. This means we're managing a single machine locally. In an enterprise
environment, you can manage multiple machines in something called a domain. A Windows
domain is a network of computers, users, files, et cetera, that are added to a central database. If
you're an admin of that domain, you can view those accounts and computers from any machine
in the domain. We'll learn more about domains and how to manage them in our next course on
system administration and IT infrastructure services. Underneath this menu, we have system
tools. Let's do a rundown of each of these submenus. Task Scheduler: This lets you schedule
programs and tasks to run at certain times, like automatically shutting off the computer at 11:00
pm every night.
Event Viewer: This is where our system stores its system logs. We'll do a deep dive on this tool
in an upcoming lesson. Shared folders: This shows the folders that different users on the machine
share with each other. Remember how we said that other users can't view anyone else's files?
That's not exactly true. If users store files on a shared folder, anyone who has access to that
folder can view it. Local Users and Groups: This is where we'll be doing our user and group
management. Performance: This shows monitoring for the resources of our machine like CPU
and RAM. Device Manager: This is where we go to manage devices to our computer like our
network cards, sound cards, monitors, and more. Under the storage menu, we have a submenu for disk management. We'll use this when we talk about disks in a later lesson.
Finally, the services and applications menu shows us the programs and services that we have
available on the system. We can choose to enable or disable services like DNS here. All the
essential settings that we as administrators need to change are found in the computer
management tool. If you're a power user it's more efficient to use this than it is to go through the
default settings application. Okay, let's get back to the task at hand. Let's see what kind of user
account we have and what groups we're a part of. Let's go back to the local users and groups.
Under users, we can see a few built in Windows accounts like guest and administrator.
The local administrator account lets you log in using the administrator username and whatever
the administrator password is on the computer. This account is disabled by default. Since this
account has unfettered access on the computer, it can be dangerous to be logged into it at all
times. For now, let's look at the account I'm in. Cindy, let's double click on this to see more
information. Okay, let's do a rundown here. Under the tab general, we can see some basic
information about the user along with some options. User must change password at next logon.
Since I'm an admin, I can force other users to change their password. This is useful if I'm
managing someone's account and their password was compromised. We don't want to risk
someone else logging into their account, so we force them to change their password. User cannot
change password. Password never expires. Account is disabled. Enabling or disabling an account
means making it active or inactive. Account is locked out. This means a user account will not be
able to log in. Maybe a disgruntled employee is looking to do malicious things. We can make it
so that they won't be able to log into their computer. Under the member of tab, we can see which
groups we're part of. I can see that I'm in the administrators group. Heads up that instead of being
logged into a local administrator account all the time, you can be logged into your own account
and use administrative powers when you need to. This is thanks to the help of UAC, or User Account Control. This is a feature in Windows that prevents unauthorized changes to a system. These changes have to be approved by the administrator instead. Since I'm an administrator, all I would do is enter my password to confirm that I want to make a change. Finally, on the last
tab, profile, you can change settings about your user profile, like where you want your home
folder to be. This isn't terribly important on a local account, but it comes in handy when you're
managing many users on a domain. Now if we go to the groups menu on the sidebar, it should
look familiar. Just like the member tab, we can view which groups are available and who their
members are. And that's how you view user and group information using the Windows GUI.
Next, let's take a look at how to do the exact same thing using the Windows CLI.
We talked about using the CLI in-depth in the last module. We've seen how it can make our tasks
quicker when we modify files and folders. Now, we're going to start using commands to help us
with other tasks on our system. Imagine you're working as an IT support specialist at a
company and your boss asks you to check all the user information on 10 machines, to make sure
that the local administrator account is not enabled. Sure you could search computer management
in the search bar, click computer management local, look under system tools, click on local users
and groups, then double click on the username of the computer to ensure that their local
administrator account is not enabled. Now, you just have to do that nine more times. There's a much
faster way. You can just use the CLI to quickly see the list of users on the computer using the
command Get-LocalUser.
As you can see, it lists my user account, a few other users and a couple of other default accounts
that are part of Windows. Here you can see that my local administrator account isn't enabled.
That's way easier. What about groups? I bet you can guess, Get-LocalGroup will list the groups
on the local machine. There are a whole bunch of groups but don't worry, these are all built-in
groups. Each of them are important, but we aren't likely to make changes to most of them. One
that we will make changes to is the administrators group. Remember, this group controls who
has administrative access to the machine. It's important to know who's in this group since anyone
in this group can make any change they want to the machine. We just saw in the GUI that we're
in this group, but I wonder who else is? Let's see who's in this group with Get-
LocalGroupMember, and I want to check the administrators group. We can see that the
administrator user and my user are in the administrators group but no one else. Looks good to
me. One last note, these local user and group PowerShell commands require that you're running PowerShell 5.1 or newer. You may have noticed that I keep saying local accounts and local users. If your organization has a lot of Windows machines, it's very common to use Active Directory to manage user accounts in a central directory service. We'll learn more about Active Directory accounts in our next course on system administration and IT infrastructure services.
But let's focus on local accounts for now.
In Linux, user management access works just like it does in Windows. Different user types have
different privileges and they can be grouped together with various access levels. There are a few
differences in how Linux does labeling though. There are standard users and there are also
administrators in Linux. There's also a special user called the root user. Don't get this confused
with the root directory or slash. The root user is the first user that gets automatically created
when we install a Linux OS. This user has all the privileges on the OS. They are the super user.
There's technically only one superuser or root account. But anyone that's granted access to use
their powers can be called a superuser too. Now, let's try and view the contents of a root
restricted file. The file path is /etc/sudoers.
We're getting an error: cat: /etc/sudoers: Permission denied. The sudoers file is a protected file
that can only be read by root. We can log in as root and then run this command, no problem. But
it can be really dangerous to always be in root. Since root, like our local administrator account on
Windows has unrestricted access on the machine. If we make even one mistake, we could delete
or modify something important, and that's not good. So instead of logging in as root, we can tell the shell that we want to run this one command as root. Sound similar to the Windows UAC feature? That's because it is. On Linux, we can do this with the sudo command, or superuser do. So sudo cat /etc/sudoers. Now, we're able to see the contents of this file. If you don't want to run
sudo every time you need to run a command that requires root privileges, you can just use the su
command or substitute user. This will allow you to change to a different user. If you don't
specify user, it defaults to root. Now, you can see my prompt says root@cindy-nyc. Again, it's
generally not a good guideline to stay logged in as root all the time. There are lots of critical
services and files that can be mistakenly changed. If you need to log in as root, it's okay. But just
be careful. I'm just going to go ahead and exit out of root for now and go back to my normal user. You can
view who has access to run sudo by viewing the /etc/group file. This is also how you view
memberships for all groups.
This looks a bit different from the Windows GUI. But you can see there are some similarities to
the Windows CLI. It's actually pretty simple to read this file, even if you're not an expert yet.
Each line represents a different group. Let's look at the sudo line.
There are four fields here, separated by colons. The first field is the group name. In this case, it's
sudo. The second field is the group password. We don't really need to specify a group password so
it defaults to the root password. The X here means that the password has been encrypted and
stored in a separate file that we'll talk about in a later lesson. The third field is the ID of the group
or group ID. When our operating system runs a task that involves a group, it uses a group ID
instead of the group name. Finally, the last field is the list of users in the group. What if you wanted to
view the users on our machine? What do you think the file would be that stores that information?
Unfortunately, it's not /etc/user. The file that contains user information is /etc/passwd.
Wow, there's a lot more information on here and a lot more users. Most of these accounts aren't
actually humans using the computer. They are a bunch of processes that are constantly running
on a computer that we need to associate with a user. So our system has lots of users with
different permissions that are needed to run these processes. Let's look at this first line here,
which is an actual user we can log into: root. We won't talk about all the fields since they aren't
important. But the first three are relevant. The first field is the username and the second field is
the user password. The password isn't actually stored in this file. It's encrypted and stored in a
different file, just like our group password. The third field here is the user ID or UID. Similar to group IDs, user IDs are how our system identifies a user, not by the username. Root has a UID of
zero. That's basically how you view users and groups in Linux.
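To tie the two files together, here's a sketch that pulls apart the colon-separated fields with cut. The sample lines are typical in shape, but invented for the demo, not copied from a real machine.

```shell
# Sample lines in the style of /etc/group and /etc/passwd (invented for the demo).
group_line='sudo:x:27:cindy,victor'
echo "$group_line" | cut -d: -f1   # group name -> sudo
echo "$group_line" | cut -d: -f3   # group ID   -> 27
echo "$group_line" | cut -d: -f4   # members    -> cindy,victor

passwd_line='root:x:0:0:root:/root:/bin/bash'
echo "$passwd_line" | cut -d: -f1  # username -> root
echo "$passwd_line" | cut -d: -f3  # UID      -> 0

# On a real machine, the same idea works on the files themselves:
#   cut -d: -f1,3 /etc/passwd
```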
In this video, we're going to talk about an important part of having users on a machine, and that's
working with passwords. Passwords add security to our user accounts and machines, they make
it so that only Marty knows the magic secret to access her account and no one else does, not even the
admin of the computer. When setting up a password, you want to make sure that you and only
you know that password. Remember, if you're managing other people's accounts on a machine,
you shouldn't know what their password is. Instead, you want the user to enter the password
themselves. To reset a password in the GUI, let's go back to our computer management tool.
Under local users and groups, we're going to right click on a username like this account Sarah.
Let's click on properties. Then from here, we're just going to check this box that says "user must
change password at next log on", then apply and hit "OK." Then, when the user logs into the
account, they'll be forced to change their password. If they forgot their password, you have the
option to set a password for them manually, by right clicking and selecting set password.
This has some caveats though, like losing access to certain credentials. You can read more about
this option in the supplemental reading right after this video. To change a local password in
PowerShell, we're going to use the DOS style net command. There's a native PowerShell
command that can be used to set the password, but it's a little more complicated. It requires a bit
of simple scripting to use. For now, we'll stick to the simpler, but less powerful, net command. net
does lots of different things, changing local user passwords is just one of them. If you want to
learn more about what the net command can do, take a look at the documentation in the
supplementary reading for the command. Since this is an old DOS style command, you can also
use the /? parameter to get help on the command from the CLI. To change a
password for a user, the command is net user then the username and password.
The best way to use this command, is to use an asterisk instead of writing your password out on
the command line. If you use an asterisk, knit will pause and ask you to enter your password like
so. Why is this approach better? Imagine you're changing your password and right at that
moment someone walks behind you and glances over your shoulder. Your password isn't a secret
anymore. You should also know that in many environments, it's common that the commands that
folks run on the machines they use are recorded in a log file that's sent to a central logging
service. So it's best that passwords of any kind are not logged in this way. Do you notice a
problem with the asterisk approach though? That's right. If I change passwords for someone else
using this command, I would know their password, and that's not good. Instead, we're going to
do what we did in the GUI and force the user to change the default password on their next log on
using the /logonpasswordchg:yes parameter. So I'm just going to force Victor to change his password on the next log on. So, net user victor /logonpasswordchg:yes.
The /logonpasswordchg:yes parameter means that the next time that Victor logs into
this computer, he'll have to change his password. Sorry Victor.
Supplemental Reading for Windows Passwords
You can check out more information on Windows and passwords here.
To change your password in Linux, all you need to do is run the passwd command. Let's try changing my password. When you set a password, it's securely scrambled, then stored in a special privileged file called /etc/shadow. This file can only be read by root, to keep away prying eyes. Even if you did have access, you wouldn't be able to descramble
passwords found in here. If you're managing a computer, and you want to force a standard user
to change their password, like we did in Windows, you can use the -e or expire flag with passwd, like this. This will immediately expire a user's password and then make them set a new
password the next time they log in.
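As a quick illustration of the pieces just mentioned: the exact permission bits can vary by distribution, and the username sarah below is made up.

```shell
# The scrambled passwords live in /etc/shadow, which only root can read.
ls -l /etc/shadow      # permissions typically look like: -rw-r----- root shadow

# Expiring a password so the user must choose a new one at next login
# ("sarah" is an illustrative username; this line needs root to run):
#   sudo passwd -e sarah
```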
Okay. Now that you know how to view information about users and the hierarchy of user
permissions, let's learn how to add and remove users on a machine. To add a user, we're going to
go back to our computer management tool. Under local users and groups, we're going to right
click and select New User.
From here, it asks us to set a username, a full name, and a password. Remember, that in order to
use good password setting practices, we set a default password and then make the user change
that password when they log in. So, we're going to go ahead and set a default password, and
make sure the box for, "user must change password at next log on" is checked, and then click
Create. So I'm just going to make a new user account for Elizabeth and set the password, and
then just make sure that's checked, and hit Create. And that's how you create a new local user. To
remove a user, we simply right click and select Delete. This gives us a warning message that
says, user names are unique, and even if you delete the user and give them the exact same
username, they won't be able to access their old resources. Once you confirm that you want to do
that, just go ahead and click delete. And that's how you remove a local user account. Adding
and removing local user accounts from the CLI, is going to use the same net command that we
use to change passwords, just with different parameters. Like before, there's a native PowerShell command, New-LocalUser, that requires a little bit of scripting to use. If you want to use New-LocalUser, check out the supplemental reading. Now, back to net. To add a new local user,
we simply use the slash add parameter. If we add the slash add parameter to the same command
we used before, it instructs net to create the account. We can still use the asterisk for the
password to be prompted to enter a password. Let's test this out and create a new account for
Andrea. So, net user andrea * /add. Now, let's list the user accounts to be sure it worked. So,
Get-LocalUser. Sweet. There it is. Now, there's a small problem which you saw in the earlier
lesson on passwords. This account is for Andrea, but we know what the password is. We don't
want to know what the password is because that means we can log in as Andrea. We want to
make sure that Andrea changes her password to something that we don't know. So, we're going
to flag her account as requiring a password change using the /logonpasswordchg:yes parameter. So, net user andrea /logonpasswordchg:yes. Now, the next time she logs on, she'll have to set a new password that we won't know.
You can actually combine these two commands that we ran to create a new account that requires
a password change at first login. Let's create an account for Cesar. So, net user cesar pa5swOrd /add /logonpasswordchg:yes. Now, when we run Get-LocalUser, we should see both of our
new accounts. Sweet. And there it is. Cesar's new account has a password that you know and you
can give it to him, but he'll have to change the password the first time he logs in. All right. Now
let's remove these accounts that we just created. I'm going to show you how to do this using net,
and using Remove-LocalUser. Both of these commands will do the exact same thing. So let's
delete Andrea's account, net user andrea /del. This will delete Andrea's account. And using the
Remove-LocalUser cesar, we can remove Cesar's account. Sweet, now it's gone. See how each of
these options follows a pattern. The net user example looks just like it did to create a new user
account, except instead of adding an account, we deleted the account. In the second example,
instead of Get-LocalUser, Set-LocalUser, or New-LocalUser, we used Remove-LocalUser. As you
continue to learn new CLI commands, you'll start to notice these sorts of patterns. Being able to
identify these patterns will help you discover new things that you can do, and remember how to
do things you haven't done in a while.
To add a new user in Linux, you can use the user add command. Sudo useradd juan. This will set
up basic configurations for the user, and set up a home directory. We can verify that one is
created, there. You can combine this with the password command to make the user change their
password and log in. To remove the user, you can just use sudo userdelete juan. He's no longer
on the list. Nice work. Next we'll dive into the wonderful world of permissions. See you there.
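To make the useradd example a little more concrete, here's a minimal sketch of the record it creates. The entry below is hypothetical (the UID, GID, home directory, and shell are assumptions), but the seven-field format of /etc/passwd is standard:

```shell
# A hypothetical /etc/passwd entry like the one "sudo useradd juan" creates.
# The format is seven colon-separated fields:
#   name:password:UID:GID:comment:home:shell
entry="juan:x:1001:1001::/home/juan:/bin/bash"

# Pull individual fields out with cut.
name=$(echo "$entry" | cut -d: -f1)
home=$(echo "$entry" | cut -d: -f6)
shell=$(echo "$entry" | cut -d: -f7)

echo "$name logs into $home using $shell"
# → juan logs into /home/juan using /bin/bash
```

The x in the second field means the scrambled password doesn't live here at all; it's stored in /etc/shadow, which we'll run into again later in this course.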
Because most mobile devices are used by a single person, mobile operating systems handle user
accounts a little differently than the other OS's we've talked about. Take a GPS unit and a
vehicle, for example. You might never enter a username or sign in to the GPS unit at all. There
are still user accounts in the OS that run the GPS device but you'll never have to see them or
have to deal with them. On the other hand, think about a smartphone or tablet running iOS or
Android. These devices will have you enter the username and password once when you're setting
up the device, but then you probably won't have to reenter that password each time you want to
use the device. The initial account that you use during setup is called the primary account. This
account is used to create a user profile for you on the device. The user profile is like your user
account in a mobile device. It contains all of your accounts, preferences, and apps. In iOS and
Android, the primary account can be used to synchronize settings and data to the Cloud. When
you replace the device or set up a new mobile device with a primary account that you've used
before, you'll have the option to restore data and apps if any had been backed up to the Cloud.
But don't worry about this yet. We'll talk more about synchronization and backups in a future
video. Also, in iOS and Android, a user profile can be signed into additional accounts. These
could be additional email accounts, social media accounts or something else. If given permission,
apps on the mobile device can use these accounts for single sign-on or SSO. This means that
instead of those apps asking you for another username and password, they will allow you to
authenticate using an account that you're already signed into. Those apps don't have access to
your credentials but you can let them use those credentials. Check out the security course to learn
more about how SSO works. As an IT support specialist, you might help end users set up these
accounts on their mobile devices but don't ever ask someone for their password. Always have the
end user enter the password themselves and if anyone reveals their password to you, encourage
them to change their password. Most mobile devices only support one user profile and they're
designed to be used by a single person. Some Android devices do support multiple user profiles.
To see how that works, check out the supplemental reading. Think about when you use a larger
device like a desktop, laptop or a server that you have to enter a username and password to
access. By default, most mobile operating systems don't ask you to re-enter your primary account
password each time you want to use the device. This is convenient but it also means that anyone
who picks up the device will have access to all of your personal and work data. Even if there's no
private data on the device, the device may have access to confidential or privileged systems
which can be just as bad. Mobile operating systems usually have safer ways to protect your data.
You can set a device password, a pin or an unlock pattern on your device. Some smartphones use
fingerprint sensors, facial recognition or other kinds of biometric data to grant access to the
device. Biometric data is something about you that's unique to you, like a fingerprint or voice or
a face. We'll talk more about biometric data in the security course. To protect business data,
some organizations use mobile device management or MDM policies to require mobile devices
to be locked. Mobile device management systems are used to apply and enforce rules about how
the device has to be configured and used. We'll talk more about MDM in a future video.
Supplemental Readings for Mobile Users and Accounts
Check out the following readings for more info:
• Adding accounts to iOS
• Adding accounts to Android
• Multiple user accounts on Android
I'm Ben Fried, I'm Google's Chief Information Officer and I'm the Vice President of the
company. [MUSIC] I lead a team that's responsible for the technology that Googlers use to get
their work done. That could be things like the laptop and phone that you use, or the video
conferencing that you use in a conference room or the phone on your desk, maybe. But it's also
all the software that Googlers use every day in their work. I spend an awful lot of time actually in
meetings, mostly meeting with my teams. Trying to understand what they're doing and what I as
a leader can do to help them. Then I spend a lot of time also talking to our customers, people
inside the company. So I can understand what my teams need to do and how we can better do our
work. When I came and joined Google, the team that I led needed a lot of help. And it was a
huge challenge for me to figure out what I needed to do as a leader to help them. And I just
remember at one point realizing, if I keep on showing up and just keep on putting in the effort
every day, we'll get through this. In spite of how hard it was and in spite of the fact that I would
leave work thinking, I have no idea how I'm going to solve this problem or that problem or
whatever it was. Eventually, showing up worked. Being too stubborn and too stupid to know
when to quit was actually what led to me succeeding in an incredibly challenging situation.

Permissions
File permissions are an important concept in computer security. We only want to give access to
certain files and directories to those who need it. While we think about how we want users to
access files and folders, we should also think about how the concept of permissions carries over
to other areas of your life. Maybe you've locked down your social media posts to only people you
trust, or given a copy of your house key to a relative in case of an emergency. You'll learn more
about security principles in the last course of this program. For now, we're going to focus on one
small building block, file permissions. In Windows, files and directory permissions are assigned
using Access Control Lists or ACLs. Specifically, we're going to work with Discretionary
Access Control Lists or DACLs. Windows files and folders can also have System Access
Control Lists or SACLs assigned to them. SACLs are used to tell Windows that it should use an
event log to make a note of every time someone accesses a file or folder. This is a more
advanced topic which you can read up on in the next supplementary reading. You can think of a
DACL as a note about who can use a file and what they're allowed to do with it. Each file or
folder will have an owner and one or more DACLs. Let's take a look at an example. In Windows
Explorer, I have opened up my home directory. If we right click on Desktop and select Properties,
we can see the properties dialog for our Desktop directory. And if we go to the Security tab, we can
see the permissions window here. The top box contains a list of users and groups. And the
bottom box has a list of the permissions that each user group has been assigned. What do each of
these permissions do? It changes a bit depending on whether the permission is assigned to a file
or a directory. Don't worry, it'll all make sense soon. Let's do a rundown of these permissions.
Read, the Read permission lets you see that a file exists, and allows you to read its contents. It
also lets you read the files and directories in a directory. Read and Execute, the Read and
Execute permission lets you read files, and if the file is an executable, you can run the file. Read
and Execute includes Read, so if you select Read and Execute, Read will automatically be
selected. List folder contents, List folder contents is an alias for Read and Execute on a directory.
Checking one will check the other. It means that you can read and execute files in that directory.
Write, the Write permission lets you make changes to a file. It might be surprising to you, but
you can have write access to a file without having read permission to that file. The write
permission also lets you create subdirectories and write to files in the directory. Modify, the
Modify permission is an umbrella permission that includes read, execute and write. Full control, a
user or group with full control can do anything they want to the file. It includes all of the
permissions of Modify, and adds the ability to take ownership of a file and change its ACLs.
Now, when we click on my username, we can see the permissions for Cindy, which show that
I'm allowed all of these access permissions. If we want to see which ACLs are assigned to a file,
we can use a utility designed to view and change ACLs called ICACLs or Improved Change
ACLs.
Let's take a look at my desktop first. icacls Desktop. Well, that looks useful. But what does it
mean? I can see the user accounts that have access to my desktop, and I can see that my account
is one of them. But what about the rest of this stuff? These letters represent each of the
permissions that we talked about before. Let's take a look at the Help for ICACLs, I bet that'll
explain things. So, icacls /?. All right. There's a description of what each one of
these letters means. The F shows that I have full control of my desktop folder. icacls calls this
full access, and we saw it in the GUI earlier as full control. These are the same permission.
What do these other letters mean? NTFS permissions can be inherited, as we saw from the
ICACLs help. OI means Object Inherit, and CI means Container Inherit. If I create new files or
objects inside my Desktop folder, they'll inherit this DACL. If I create new directories or
containers in my desktop, they'll also inherit this DACL. If you'd like to understand more about
ACL inheritance and NTFS, check out the next supplemental reading.
Supplemental Reading for Windows ACL
For more information about access control lists (ACL) in Windows, check out the link here.
As we've now learned, there are files and folders that have different permissions set on them, so
that unwanted eyes can't view or modify them. There are 3 different permissions that you can
have in Linux; Read, this allows someone to read the contents of a file or folder. Write, this
allows someone to write information to a file or folder. And execute, this allows someone to
execute a program. Let's take a look at this with the ls command; we'll use the -l flag so we
can see the permissions on the file. Okay. The first thing we see in this column is -rwxrw-r--.
There are 10 bits here. The first one is the file type. In this example, dash means that the file we're
looking at is just a regular file. Sometimes you might see D which stands for a directory. The
next nine bits are our actual permissions, they're grouped in trios or sets of three. The first trio
refers to the permission of the owner of the file. The second trio refers to the permission of the
group that this file belongs to. The last trio refers to the permission of all other users. The R
stands for readable, W stands for writeable and X stands for executable. Like in binary, if a bit is
set then we say that it's enabled. So for our permissions, if a bit is a dash it's disabled. If it has
something other than a dash, it's enabled. Permissions in Linux are super flexible and powerful,
because they allow us to set specific permissions based on a role, such as the owner, the group,
or everyone else. Let's take a look at this in detail. The first set of permissions, rwx, refers to the
permission of the user who owns that file. In this case, it's Cindy, as we can see in the owner field
of ls -l. So it says here that the owner of the file can read, write, and execute this file. The next
set of permissions are group permissions. We can see the group this file belongs to is the cool
group. They have read and write permissions but not execute permissions. And lastly, the
permissions for all other users and groups only allow them to read this file. And that's Linux
permissions in a nutshell, it might take some time to get used to reading permissions. Don't
worry, you'll eventually get the hang of it. As always, feel free to review this lesson again if you
need a refresher.
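If you'd like to practice reading those permission trios on a file you control, here's a minimal sketch you can try on any Linux machine. The file name is made up, and it assumes GNU coreutils (for stat -c):

```shell
# Create a scratch file and give it the exact permissions from this lesson:
# owner: rwx, group: rw-, everyone else: r--
touch my_cool_file
chmod u=rwx,g=rw,o=r my_cool_file

# stat prints the same 10-character string that appears in ls -l:
# the file type bit first, then the three trios.
stat -c '%A' my_cool_file
# → -rwxrw-r--
```

Reading the output left to right: a regular file (dash), an owner who can read, write, and execute, a group that can read and write, and everyone else read-only, exactly like the example above.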
Now that we can read permissions, let's take it a step further and learn how to change
permissions in windows. Let's say we want to give access to another person in my family to view
a folder with family pictures on the computer. How do I do that? On my Local Disk C, I have a
folder called Vacation Pictures that I want to share with another user on my machine, Devan. To
do that, I'm going to right click on this folder and go to Properties, then the Security tab.
Now I can see an option to Edit file permissions. I'm going to click on that. From here, I can see
that I can add a group or usernames to this ACL. I'm going to go ahead and click Add. From
here, it asked me to enter the username of the person I want to add on this ACL. I'm going to
enter devan and then click Check Names to verify that I typed it in right.
After it's been verified, I'm going to click OK. Once devan's added to the ACL, I can click on his
username, then check the allow boxes for the permissions I want to give him. Let's give Devan
modify access, so he can add pictures to this folder too.
That's it. We've kind of been glossing over this other checkbox here, Deny. You might have
already guessed that Deny doesn't allow you to have a certain permission. But it's special because it
generally takes precedence over the allow permissions. Let's say Devan is in a group that has
access to this folder. If we explicitly check the deny box for Devan's username, even if the group
has access to the folder, Devan won't. Sorry, Devan. If you want to learn more about permission
precedence, you can check out the supplemental reading. To modify a permission in the CLI,
we're going to return to the icacls command. In the examples I'm going to show you, we'll be
running icacls from PowerShell. The icacls command was designed for the Command Prompt
before PowerShell, and its parameters use special characters that confuse PowerShell. By
surrounding icacls parameters with single quotes, I'm telling PowerShell not to try and interpret
the parameter as code. If you run these commands in cmd.exe, you'll need to remove the
single quotes for them to work. So let's look at this side by side with PowerShell.exe and
cmd.exe. In PowerShell, the command would be icacls 'C:\Vacation Pictures' /grant
'Everyone:(OI)(CI)(R)', with single quotes. In the Command Prompt, the command would be
icacls "C:\Vacation Pictures" /grant Everyone:(OI)(CI)(R), with double quotes. We're going to see what this
command does in just a moment. For now, let's take a look at the difference in the quotes. In the
PowerShell example, we add single quotes to make PowerShell ignore the parentheses and
because there's a space in the path. In the cmd.exe example, we have to use double quotes
for the path, and we don't need the single quotes anymore to hide the parentheses. Got it? Great.
Now, let's take a look at the permissions that we just gave to Devan with icacls. Cool. I see
there's a new DACL attached to the Vacation Pictures directory for Devan that gives him modify
access. We can see that any new files or folders that get created in Vacation Pictures will inherit
it. So let's say we want anyone with permission to use this computer to be able to see
these pictures. We don't want them to add or remove photos though. What permissions do we
want to give them? That's right. We want to give them read permission to the Vacation Pictures
folder. Let's use the special group Everyone to give read permissions to the directory. So icacls
'C:\Vacation Pictures' /grant 'Everyone:(OI)(CI)(R)'. Success. The Everyone group includes,
well, everyone. It includes local user accounts like Cindy and Devan, and guest users, a
special type of user that's allowed to use the computer without a password. Guest users are
disabled by default; you might enable them in very specific situations. Now anyone who can use
this computer can browse the photos that Devan and I have put together. Actually, maybe I didn't
really want everyone to look at my vacation photos. Maybe I just want the people that have
passwords on the computer to be able to see them. In that case, I want to use authenticated users
group. That group doesn't include guest users. So first, let's add a new DACL. icacls 'C:\Vacation
Pictures' /grant 'Authenticated Users:(OI)(CI)(R)'. Success. Now, let's remove the permissions for
the Everyone group. icacls 'C:\Vacation Pictures' /remove Everyone. Success. Now, let's use
icacls to verify that the permissions are set the way we intended. icacls 'C:\Vacation Pictures'.
Sweet. We can see the Authenticated Users were added and Everyone is removed. Next, let's
take a look at modifying permissions in Linux.
In Linux, we change permissions using the chmod, or change mode, command. First, pick which
permission set you want to change. The owner, which is denoted by u, the group the file belongs
to, which is denoted by a g, or other users, which is denoted by an o. To add or remove
permissions, just use a plus or minus symbol after the letter that indicates who the permission affects. Let's take
a look at some examples.
So that's chmod u+x my_cool_file. This command is saying that we want to change the
permission of my_cool_file by giving executable or x access to the owner or u. You can do the
same thing if you wanted to remove a permission. So, chmod u-x my_cool_file.
Instead of a plus, we just minus. Pretty simple, right? If you wanted to add multiple permissions
to a file, you could just do something like this. This is saying we want to add read and execute
permissions for the owner of my_cool_file. And you can do the same for multiple permission
sets. You do chmod ugo+r my_cool_file.
Now, this says we want to add read permissions for our owner, the group the file belongs to, and
all other users and groups. This format of using rwx and ugo to denote permissions and users in
chmod is known as symbolic format. We can also change permissions numerically, which is
much faster and simpler, and lets us change all permissions at once.
The numerical equivalent of rwx is 4 for read or r, 2 for write or w, and 1 for execute or x. To set
permissions, we add these numbers for every permission set we want to affect. Let's take a look
at an example. The first number 7, is our owner's permission. The second number, 5, is our group
permissions, and the third number, 4, is the permission for all other users.
Wait a minute, where are we getting 5 and 7? Remember, you have to add the permissions
together. If you add 4, 2, and 1 together, you get rwx, which equals 7. So our owner permission
is able to read, write and execute this file. Can you guess what 5 would stand for? That's right, 4
plus 1 is read and execute. So now, you can see how numeric format is quicker than symbolic
format. Instead of running something like this, we can run chmod 754 my_cool_file to update
them all. Either way, you can change permissions using the symbolic or numerical format. Just
pick whichever is easiest for you.
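Here's a quick sketch showing that the two formats really do land on identical permissions. The file name is made up, and stat -c assumes GNU coreutils:

```shell
touch my_cool_file

# Numeric format: 7 (4+2+1 = rwx) for the owner, 5 (4+1 = r-x) for the
# group, and 4 (r--) for everyone else.
chmod 754 my_cool_file
stat -c '%a %A' my_cool_file
# → 754 -rwxr-xr--

# Symbolic format: spell out the same permissions per permission set.
chmod u=rwx,g=rx,o=r my_cool_file
stat -c '%a %A' my_cool_file
# → 754 -rwxr-xr--
```

stat's %a prints the numeric (octal) mode and %A prints the symbolic string, so you can check your mental math either way.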
You can also change the owner and the group of a file. The chown or change owner command allows
you to change the owner of a file. Let's go ahead and change the owner to Devan. Awesome.
And Devan is the owner of this file. And to change the group a file belongs to, you can use the
chgrp or change group command. Awesome. Now, the best group ever is the group owner for
this file. It may take a while for you to get the hang of reading and changing permissions. You
can practice changing the permissions on a few files until you get it down. Permissions are an
essential building block to computer security, and you'll be using it throughout your work as an
IT Support Specialist.
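If you want to try this yourself, note that chown usually requires root privileges, so here's a hedged sketch using only chgrp. It changes a scratch file's group to a group we're already a member of (our own primary group, found with id -gn), then verifies the owner and group with stat:

```shell
touch my_cool_file

# chgrp only requires membership in the target group, so changing the
# group to our own primary group always works without sudo.
chgrp "$(id -gn)" my_cool_file

# %U is the owning user, %G is the owning group -- the same two columns
# you see in ls -l.
stat -c '%U %G' my_cool_file
```

To actually hand a file to a different user, as in the Devan example, you'd need something like sudo chown devan my_cool_file instead.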
You might have noticed that we were looking at permissions in the GUI before. There's a check
box in the permission list for special permissions. The permissions that we've been looking at
and setting so far are called simple permissions. Simple permissions are actually sets of special,
or specific permissions.
For example, when you set the Read permission on a file, you're actually setting multiple special
permissions. Let's take a look at the list of special permissions available. I'm going to click on the
advanced tab under my permissions setting.
When I click on a username, and then go to Advanced Permissions, I can see a list of all the
special permissions enabled on that file. When we select a basic permission like Read, we're
actually enabling the special permissions List folder / read data, Read attributes, Read extended
attributes, Read permissions, and Synchronize, which are just fine-tuned permissions. You can
modify these permissions like you would any other basic permission. Feel free to read more
about the different types of special permissions in the supplemental reading I included after this
video.
In most cases, the simple permissions are going to be all that you need. But sometimes, you need
to create a file or folder that doesn't quite follow a simple pattern. Let's take a look at an example
in this CLI. To view special permissions on a file in the CLI, we will simply use the icacls
command as before.
Let's take a look at a more interesting example than my Desktop folder, icacls
C:\Windows\Temp. This directory is used to hold temporary files for all users in the system. We
would like for everyone in the system to be able to create files and folders here. You might think
that we should use modify or full control for this, but we don't want users to be able to delete
each other's files.
Let's take a look at some of the DACLs assigned to this folder and figure out how to do this.
First, local administrators and the operating systems computer account have full permissions
over this folder, and all files and folders within it. We see a new descriptor, IO, which indicates
that this DACL is inherit only. That means that it will be inherited, but it is not applied to this
container C:\Windows\Temp. The users group includes all user accounts on the local machine.
We're going to give users WD (write data / create files), AD (append data / create folders), and S
(synchronize).
You can see in the next supplemental reading that these special permissions are included in the
Modify simple permission. Unlike the Modify simple permission, we are not granting users
the ability to delete files or folders. We do want users to be able to delete their own files and
folders, though, so how do we do that?
Do you see creator owner? Creator owner is a special user that represents the owner of
whichever file the DACL applies to. In this directory, and all subdirectories, whoever owns a file
or folder has full control of it. Nice, so I'm going to create a folder and file in C:\windows\temp
and see what DACLs are applied.
Let's use what we learned about output redirection to record the output of icacls in a file,
so icacls C:\Windows\Temp\example. Then we're going to use our redirect operator
to give us icacls.txt. Okay, now let's look at the file we created to view the output of icacls.
Cool, I created the files, so I have full control of them. And all of the other DACLs that we saw
in C:\Windows\Temp have been inherited. You can see that using the special permissions in
NTFS DACLs can be complicated, but it can also let you create really powerful sets of
permissions customized to your exact needs.
Supplemental Reading for Special Permissions in Windows
For more information about file and folder permissions in Windows, check out the link
here.
In Linux, we also have special permissions. What if I want a user to be able to do something that
requires root privileges, but I don't want to give them these privileges? What's the use case for
this? Glad you asked. There are certain commands that need to change files that are owned by
root. Normally, if you need to change a file owned by root, you'd have to use sudo. But we want
normal users to be able to change these files without giving them root access. Let's check
out an example. Let's say I want to change my password. I would use the passwd command like
we've learned. Pretty simple, right? Now I just enter in my new password and my password is
changed. We know that the passwd command secretly scrambles up our passwords, then adds
them to the /etc/shadow file. Let's dive a little deeper into this file.
Oh, it says this file's owned by root. How are we able to write scrambled passwords to this file if
it's owned by root? Well, thanks to a special permission bit known as setuid, we can enable files
to be run by the permission of the owner of the file. In this case, when you run the password
command, it's being run as root. Let's verify this.
We see the permissions on this file look a little odd. There's an S here where the x should be. The
s stands for setuid. When the s is substituted where a regular bit would be, it allows us to run the
file with the permissions of the owner of the file. To enable the setuid bit, you can do it
symbolically or numerically.
The symbolic format uses an s while the numerical format uses a 4, which you prepend to the
rest of the permissions like this. Similar to setuid, you can run a file using group permissions
with setgid or set group ID. This allows you to run a file as a member of the file group.
Under our group permissions, we can see that the setgid bit was enabled, meaning that when this
program is run, it's run as group tty. To enable the setgid bit, you can do something similar to
setuid. The only difference is the numerical format uses a two. So, I can do something like this or
something like this.
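On a scratch file you own, you can set and inspect these bits without root. This is just a sketch of how the bits appear; the file name is made up, it isn't a real setuid program, and stat -c assumes GNU coreutils:

```shell
touch my_program
chmod 755 my_program            # start from rwxr-xr-x

chmod u+s my_program            # symbolic setuid
stat -c '%A' my_program
# → -rwsr-xr-x   (the s replaces the owner's x)

chmod 4755 my_program           # numeric: prepend a 4 -- same result
stat -c '%A' my_program
# → -rwsr-xr-x

chmod 2755 my_program           # setgid instead: g+s, or prepend a 2
stat -c '%A' my_program
# → -rwxr-sr-x   (now the s sits in the group trio)
```

Note that a four-digit numeric mode on a regular file replaces all the bits at once, which is why 2755 clears the setuid bit we had just set.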
There's one last special permission bit we should cover and that's the sticky bit. This bit sticks a
file or folder down. It makes it so anyone can write to a file or folder, but they can't actually
delete anything. Only the owner or root can delete anything. Let's look at the permissions for the
/tmp directory, where a lot of programs write temporary files, and you'll see what I mean.
I added the d flag to show information just for the directory and not the contents. But as you can
see, there's a special permission but at the end here t, this means everyone can add and modify
files in the slash tmp directory, but only root or the owner can delete the slash tmp directory. You
can also enable the sticky bit using a numerical or symbolic format. The symbolic bit is a t and
the numerical bit is a one. So, sudo chmod plus t my folder or sudo chmod 175 my folder.
Works. So let's verify.
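You can stick down a scratch directory of your own without sudo. The directory name here is made up, and stat -c assumes GNU coreutils:

```shell
mkdir -p my_folder
chmod 755 my_folder

chmod +t my_folder              # symbolic: add the sticky bit
stat -c '%A' my_folder
# → drwxr-xr-t   (t replaces the final x)

chmod 1755 my_folder            # numeric: prepend a 1 -- same result
stat -c '%A' my_folder
# → drwxr-xr-t
```

The real /tmp is typically mode 1777 (drwxrwxrwt), so everyone can write there, but only a file's owner or root can delete it.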
That was a lot of information on special bits. You usually won't have to deal with these
permission bits in a practical day-to-day manner but it's important to know they exist in case you
ever want to allow users to either share folders or even run commands with escalated privileges.
User access, group access, passwords, and permissions are all core concepts in security. Right
now, you're only working with permissions and access on a single computer scale. Eventually,
you'll learn about access on multi-user levels across different networks and more in the next
course on system administration and IT infrastructure services. For now, congratulations, you've
just taken your first step toward building a foundation of computer security knowledge. In the
next module, we're going to switch gears and talk about our OS and how it manages software.
Next, we've got two assessments for you covering Windows and bash permissions. Once you've
finished, you're granted permission to take a break before we hit the ground running in the next
module.

Week 3 Package and Software Management


Software Distribution
Congrats. You've made it to the halfway mark in this course. You already learned how to
navigate around the Windows and Linux file systems, you also learned how to manage users and
groups, and the basics of permissions and access. Nice work. Next, we're going to learn about
packages and the major package managers for Windows and Linux. Installing and maintaining
packages is something you'll do almost every day in an IT support role. So you should be
familiar with how this works on the Windows and Linux OS's. Let's get to it.
Have you ever wondered how we get software, like the apps in the App Store, or packages on the
Internet to install on our devices? Wonder no more. Developers and organizations that make the
software we use, generally package them up nicely for us. In most cases, all we need to do is
click install and the package gets installed for us. Packaging comes in all sorts of shapes and
sizes. It's just like how you'd package a gift for someone. You could put it in a box or a bag, but
the contents are what really matter. Developers have different ways to package software using
software compiling tools, but the end result is a package. In the next few videos we'll discuss
some of the most common package types you'll see when you work in IT support. In Windows,
software is usually packaged as a dot exe or executable file. Executable files contain instructions
for a computer to execute when they're run, like copy this file from here to here, install this
program, or more generically, perform this operation. The concept of an executable file isn't
unique to Windows, but Windows has its own special implementation of them in the form of
exes. They're created according to Microsoft's portable executable or PE format. Although we
won't get into the details of the PE format, it's good to know that exe files don't just contain
instructions for the computer to perform. They also include things like text or computer code,
images that the program might use, and potentially something called an msi file. A Microsoft
install package or msi is used to guide a program called the Windows Installer in the installation,
maintenance, and removal of programs on the Windows operating system. Besides using the
GUI setup wizard to guide the user in installing the program, the Windows installer also uses the
msi file to create instructions on how to remove the program, if the user wants to uninstall it.
Windows executable files are usually used as starting points to bootstrap the Windows installer.
In this case, they might just contain an msi file and some instructions to start the Windows
installer and read it. Alternatively, executables can be used as stand-alone, custom installers,
with no msi file or usage of the Windows installer. If they're packaged this way, the exe file will
need to contain all the instructions that the operating system needs to install the program. So, when
would you use an msi file and the Windows installer? And when would you use an executable
with a custom installer packaged in something like setup.exe? Great questions. If you want
precise granular control over the actions Windows takes when installing your software, you
might go the custom installer route. This can be tricky though, especially when managing things
like code dependencies, which we'll talk about later. On the flip side, using the Windows
installer guided by an msi file takes care of a lot of the bookkeeping and set up for you, but it has
some pretty strict rules about how the software gets installed. As of Windows 8, Microsoft has
introduced a platform to distribute programs called the Windows Store. The Windows Store is an
application repository or warehouse, where you can download and install universal Windows
platform apps. Those are the applications that can run on any compatible Windows devices like
desktop PCs, or tablets. These programs use a format called appx to package their contents and
act like a unit of distribution. We won't go into detail about appx packages, but it's good to know
they're out there giving you another option for packaging software. Feel free to read more about
appx packages and how to make them in the supplemental readings I've included after this video.
We learned how to install exe packages in an earlier course. To install an exe from the GUI, all
we need to do is double click on the executable then go through the installation process provided,
either by the executable itself or the Windows installer. That's pretty straightforward, but what
about installing software from the command line? And why would you need to do this in the first
place? Hold onto your desktops because you're about to find out. Installing executables from the
command line can be handy in lots of IT support scenarios, including automatic installations.
You might want to write a script or use a configuration management tool to install some software
automatically without needing a human to click buttons in an installation wizard. So, how can
you install an executable from the command line? The answer, it depends. Pretty unsatisfying, I
know. Running exe files from the command line is pretty simple. You open up a Command
Prompt or PowerShell, change into the directory where the executable is, and type in its name.
You could also just type the absolute path of the exe from wherever you are in the file system,
like this, C:\users\cindy\desktop\hello.exe. Running an installer from the command line is
similar, but will potentially have more options for installation. Depending on the installer, you
might have flags for things like a silent installation, where nothing shows up on the screen and
the package is installed quietly, or you might get an argument to have the computer reboot
automatically after the package is installed. You can check out the options for packages created
by using the Microsoft Self-Extractor in the supplemental reading for a better idea of what we
are talking about. A given installer might have these kinds of options for installing from the
command line, but they vary from vendor to vendor. The options available for a Microsoft
package might differ from the options for a Mozilla package. Pro tip: try using the /?
parameter when running a package from the command line to see what kinds of
subcommands the package might support. If the package doesn't have any help-related options, your
best bet is to check out the vendor's documentation for what kinds of installations their software
packages support.
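To make that concrete, here's a sketch of what command-line installation can look like. The file and folder names are hypothetical, and while the msiexec flags shown are standard Windows Installer options, any flags a given setup.exe accepts vary by vendor:

```
:: Run a stand-alone executable by changing into its directory first:
cd C:\Users\cindy\Desktop
hello.exe

:: ...or by typing its absolute path from anywhere in the file system:
C:\Users\cindy\Desktop\hello.exe

:: Silently install an .msi package with the Windows Installer:
:: /i = install, /qn = no UI (silent), /norestart = don't reboot
msiexec /i C:\Users\cindy\Downloads\someapp.msi /qn /norestart

:: Ask a vendor's installer which switches it supports:
setup.exe /?
```

Because installer switches differ from vendor to vendor, check each vendor's documentation before relying on any particular flag.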
Supplemental Reading for Windows Software Packages
Windows Software Packages
Developers have different ways to package software using software compiling tools. In
Windows, software is usually packaged as a .exe (executable file). Windows software can be
sourced from the Microsoft Store or downloaded directly and installed in several ways. This
reading covers the most common methods software packages are installed on Windows OS.
Installation Package
Installation packages contain all the information the Windows Installer needs to install software
on a computer. The packages include a .msi file (Microsoft install file) which contains an
installation database, summary information, and data streams for each part of the installation.
The .msi file may also include internal source files and external source files needed for the
installation. Windows Installer uses the information contained in the .msi file to install, maintain,
and remove programs on Windows.
Portable Executable
These .msi files are contained within a portable executable (PE), which is a format specific to
Windows. The file type extension for a PE is .exe. Although these PEs commonly include
instructions for the computer to run, such as .msi files, they may also contain other resources,
like images or compiled computer code.
Self-extracting Executable
While it is common to install software using the Windows Installer, it is helpful for you to know
how to install software using the command line.
Self-extractor packages are executable files (.exe) that are run in the Windows interface by
clicking on them or running from the command line. Software installed by an IT professional
onto an end user’s computer will likely use this format. A software installation package, update
package, or hotfix package created with the Microsoft Self-Extractor can be executed using the
following command-line switches:
• /extract:[path]: Extracts the content of the package to the path folder. If a path isn’t
specified, then a Browse dialog box appears.
• /log:[path to log file]: Enables verbose logging (more detailed information recorded in
the log file) for the update installation.
• /lang:lcid: Sets the user interface to the specified locale when multiple locales are
available in the package.
• /quiet: Runs the package in silent mode.
• /passive: Runs the update without any interaction from the user.
• /norestart: Prevents prompting of the user when a restart of the computer is needed.
• /forcerestart: Forces a restart of the computer as soon as the update is finished.
You can always type /?, /h, or /help from the command line to view these options.
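Putting a few of those switches together, a silent, logged installation of a hypothetical self-extractor package might look like this (the package name is made up; the switches are the documented Microsoft Self-Extractor options listed above):

```
:: Install quietly, skip the reboot, and write a verbose log:
update-package.exe /quiet /norestart /log:C:\Logs\update.log

:: Unpack the package's contents to a folder without installing:
update-package.exe /extract:C:\Temp\unpacked
```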
App Packager
The app packager used in the Windows Software Development Kit (SDK) and Microsoft Visual
Studio includes a program called MakeAppx.exe. MakeAppx.exe is a tool that creates an app
package from files on disk or extracts the files from an app package to disk. For Windows 8.1
and higher, this program can also create and extract app package bundles. This tool is primarily
used by software developers.
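For illustration only (the directory and package names here are invented), creating and extracting an app package with MakeAppx.exe looks roughly like this:

```
:: Create an app package from the files in a directory:
MakeAppx pack /d C:\src\MyAppFiles /p C:\out\MyApp.appx

:: Extract the files from an existing app package back to disk:
MakeAppx unpack /p C:\out\MyApp.appx /d C:\out\Extracted
```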
Microsoft Store
The Microsoft Store, included in the Windows OS, is the primary source for apps, games, and
videos in Windows. The Microsoft Store only contains apps and programs certified for
compatibility and curated for content. Software installed through the Microsoft store is
automatically updated by default. Some organizations may disable the Microsoft store on user
computers to limit users’ ability to install new applications without authorization.
While the Microsoft Store is a convenient and popular way to get programs on Windows, some
software can also be downloaded directly from developers.
Key takeaways
Windows has many different ways to distribute, install, uninstall, and update programs and code
on a computer. Depending on the organization, IT might use any of these installation options
regularly.
• Installation packages contain all the information the Windows Installer needs to install
software on a computer.
• While it is common to install software using the Windows Installer, it is helpful for you
to know how to install software using the command line.
• The Windows Software Development Kit (SDK) and Microsoft Visual Studio include a
program called MakeAppx.exe. MakeAppx.exe is a tool that creates an app package from
files on disk or extracts the files from an app package to disk.
• Microsoft Store is a digital distribution storefront for apps, games, and other media.
Resources for more information
• Installation Package: https://docs.microsoft.com/en-us/windows/win32/msi/installation-
package
• App packager (MakeAppx.exe): https://docs.microsoft.com/en-
us/windows/win32/appxpkg/make-appx-package--makeappx-exe-
• Portable Executables: https://docs.microsoft.com/en-us/windows/win32/debug/pe-format
• Self-extractor: https://docs.microsoft.com/en-us/troubleshoot/windows-
client/deployment/command-switches-supported-by-self-extractor-package
In Linux, there are lots of different distributions and each might have different package types.
For example, in the Linux distribution, or distro, Red Hat, the packages that are used are .rpm, or
Red Hat Package Manager, packages. We won't cover how to work with RPM packages, but just
be aware that package types can change when you're working with different Linux distributions. If
you're interested in learning more about RPM packages, I've included a link in the supplementary
reading right after this video. In this course, we'll be working with Debian packages which
Ubuntu uses. A Debian package is packaged as a .deb file. You've already learned
how to install a Linux package using the help of a package manager in the first course on
technical support fundamentals. We'll dive deeper into this in a later video, but let's focus now on
how to install a single standalone Debian package. You'll have to work with standalone Debian
packages, especially when developers package and release their software on different websites.
To install a Debian package, you'll need to use the dpkg, or Debian package, command.
There is a standalone package here for the open source text editor, Atom. Let's go ahead and
install it using dpkg. We have to use the -i flag for the install, and that's it. Now it's installed
on this computer. How about if we wanted to remove a package? To do that, we use the -r, or
remove, flag.
And that's how you install and remove a standalone Debian package. Pretty simple, right? You
can also list the Debian packages that are installed on your machine with dpkg -l.
The -l is for list. There are lots of programs on here, and it looks kind of messy. Can you think of
another command we've used before that would help us check whether a certain package is
installed? That's right: the grep command. Let's say we want to search for the Atom package we
just installed. Keep in mind I just uninstalled it, so I'm going to reinstall it really quickly. Now
let's run dpkg -l | grep atom. Here, we have the dpkg -l command being
piped to grep. Remember, the pipe takes the standard output of one command, which in
this case is the output of dpkg -l. Then, it sends it to the standard input of the
command it pipes to. In this case, grep. If we run this command, it shows us that atom is
definitely in the list of packages here. Just remember that when using grep, it also lists other results
that have the search term in their names. Just like that, we've learned how to install any Debian
package in Linux. We're really cooking now. Great work.
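Written out, the whole sequence from this demo looks like the following (the exact .deb file name will depend on the version you download):

```shell
# Install a standalone Debian package with dpkg (needs root privileges):
sudo dpkg -i atom.deb

# Remove the installed package by name:
sudo dpkg -r atom

# List all installed packages, piping the output to grep to filter it:
dpkg -l | grep atom
```

The pipe works the same way with any text-producing command, so this pattern is worth remembering well beyond dpkg.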
Now, let's talk about software and mobile operating systems. We're going to mostly use
examples from iOS and Android. But other mobile operating systems work in a similar way. If
your mobile device is using a specialized OS, you'll find information on how that software works
in the device's documentation. Software for mobile OSes is distributed as mobile applications, or
apps. Apps have to come from a source that the mobile device has been configured to trust. On
most OSes, you can't just download an app from a random website and install it. Instead, mobile
operating systems use app stores. App stores are a centrally managed marketplace for app
developers to publish and sell mobile apps. The App Store app acts like a Package Manager, and
the App Store Service acts like a package repository. People use App Stores to access free and
paid applications from a central source through a single interface. Apps published through an
App Store have usually been through a security review and have been approved by the store
owner. Apps published through an App Store are signed by the developer of the app. The OS
is configured to only trust code that's been signed by publishers it recognizes. We'll talk
more about code signing in a future module. For now, just think of it like signing a letter. The
developer is saying I wrote this. There's one way that code signing is different than signing a
letter though. If anyone changes the code, the signature becomes invalid. This lets the operating
system know if the code's been tampered with. Centralized App Stores work great for apps that
are available to the public. But what if your organization needs to run some type of custom App?
You'll need to use enterprise app management, which allows an organization to distribute custom
mobile apps. These apps were developed by or for the organization, and aren't available to the
general public. Enterprise apps are signed with an enterprise certificate that has to be trusted by
the devices that are installing the applications. As an IT Support Specialist, you might help
manage enterprise app installation through the mobile device management or MDM service,
which we'll learn about in a future video. There's one other way to install an app into a mobile
OS, and that's called side-loading. Side-loading is where you install mobile apps directly without
using an App Store. Side-loading packages is riskier than installing through an App Store, and
you would generally only do this if you're an app developer. Mobile apps are standalone
software packages. So they contain all their dependencies. When you install an app, it will
already have everything it needs to run baked in. Mobile apps are assigned a specific storage
location for their data. As you use a mobile app, anything that's changed or created with that app
will end up in that app's assigned storage location, or cache. So resetting a mobile app to how it
was when it was first installed is as simple as deleting or clearing the cache. In your IT support
role, you might help people troubleshoot mobile apps. Clearing the cache will remove all
changes to the settings and sign out of any accounts that the app was signed into. It might not be
the first thing you should try when trying to wrangle an unruly app. But it is a great technique for
when things are really broken. Check out the supplemental reading for a guide on how to do this.
Mobile devices will usually be configured to check for app updates on a regular schedule. In IT
support, you might need to make sure an app is updated. You'll find details on how to check for
app updates in the supplemental reading.
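As a developer-oriented illustration of side-loading and cache clearing, Google's adb (Android Debug Bridge) tool can do both from the command line. This is only a sketch: the package names are hypothetical, and it assumes a device with USB debugging enabled.

```shell
# Side-load an app package directly, without going through an app store:
adb install myapp.apk

# Clear an installed app's data and cache, resetting it to a
# freshly installed state (this also signs it out of any accounts):
adb shell pm clear com.example.myapp
```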
Supplemental Reading for Mobile App Packages
Mobile App Distribution
You are likely familiar with using either the Apple App Store or Google Play store to download
and install apps on your smartphone. As an IT Support professional, you may need to deploy
mobile apps across large organizations. In this reading, you will learn more about how mobile
apps are distributed both publicly and privately for iOS and Android.
How apps are distributed
Apple mobile apps
Apple’s App Store provides apps to millions of mobile devices around the world, including the
iPhone, iPad, and Apple Watch. Apple’s App Store Connect allows developers and organizations
to distribute both public and private apps, provided that the app passes an intensive review
process to meet Apple’s quality standards. App Store Connect also allows developers and
organizations to set individualized prices for the apps, enter banking information to accept
payments for apps or in-app purchases, schedule beta testing, and more. Apple recommends that
developers use the Xcode integrated development environment (IDE) or Ad Hoc for developing
iOS, iPadOS, and watchOS apps.
Apple’s App Store
Apple’s public App Store is a marketplace that reaches millions of Apple mobile device users
across the world. The App Store offers developers unlimited bandwidth for hosting, handles
payment processing, verifies users, etc. Developers must first register through the Apple
Developer Program if they wish to distribute apps through the App Store. The Apple Developer
Program offers resources, tools, and support for app development, including testing tools, beta
software, analytics, etc. Apple has a long and detailed list of guidelines for all apps that
developers and organizations must follow. The guidelines include rules for safety, third-party
software development kits (SDKs), ad networks, trademarks and copyrights, and much more.
Additionally, submitted apps cannot be copies of other developers’ products, nor can they be
designed to steal users’ data. Though the App Store Connect review process is rigorous, the
platform also provides an appeals process for rejected apps.
Custom Apple apps
Organizations may opt to create private customized apps to meet specific and unique
organizational needs. These custom apps may be designed for the organization’s students,
employees, clients, partners, franchisees, etc. Organizations can choose to offer the apps for free,
for a price, or through special redemption codes. They also have the option to automatically
distribute and configure apps to large numbers of registered devices using Mobile Device
Management (MDM).
Apple offers a couple of options for private and secure customized app distribution:
• Apple School Manager - For educational institutions, provides the option to distribute
proprietary apps for internal use and to purchase other apps in large volumes, often with
educator discounts. Common apps in Apple School Manager might include those for
course registration or digital textbook access. Apple School Manager also offers
educational institutions the ability to create accounts for students and staff, as well as to
set up automatic device enrollment.
• Apple Business Manager - For businesses, offers similar features as the Apple School
Manager including the distribution and purchase of private apps, as well as the automatic
deployment of apps to the business’ mobile devices. As an IT Support professional, you
might want to volume purchase mobile virus protection and automatically deploy the app
across your business’ mobile devices. An organization can set private audience groups in
App Store Connect. The audience groups will be able to see and download the
organization’s custom apps through the Apps and Books or Content sections of the Apple
School and Apple Business Managers.
Outside official Apple distribution channels
Some developers and organizations might not want to use an Apple platform for app distribution.
As an alternative, they have the option to distribute Apple “trusted developer” apps from
websites or private file shares using their Apple Developer ID certificate and Apple’s
notarization process.
Android mobile apps
Google makes considerable investments into Android development, the Google Play platform,
services, tools, and marketing to support developers and organizations who choose Google Play
to deploy Android apps. Android Studio is the official Android integrated development
environment (IDE) for developing Android apps. Android Studio is used to compile Android
Package Kit (APK) files, and the Android App Bundle is used to publish apps to Google Play.
The Android App Bundle enables Google Play to automatically generate the APK files for a
variety of devices and provide app signing keys. This service is a significant time saver for
developers and it ensures Google Play apps will work on most Android devices.
Google Play Store
Google Play revenue makes it possible for Google to offer the open Android operating system
for free to device manufacturers in order to promote growth and innovation. This business model
has driven Android adoption across 24,000+ device models with billions of Android mobile
device users around the world. The Google Play store hosts 2 million apps and games with 140+
billion downloads per year, and growing. Google also keeps consumers safe with Google Play’s
built-in protections, which require developers to adhere to high safety standards.
To distribute an app publicly through the Google Play Store, a developer will:
1. Create a Google Play developer account.
2. Use the Google Play Console to Create App.
a. Provide preliminary information about the app.
b. Review and agree to the Developer Program Policies, Terms of Service, and
documentation about export laws (where applicable).
3. Use the app’s Dashboard for guidance through the app publishing process:
a. Google Play Store listing
b. Pre-release management
c. Prepare a release
d. Testing
e. Submit app and declarations for review by Google
f. Promotion/pre-registration
g. Publish app (upon review approval)
Custom Android Apps
Large organizations, or Enterprise customers, can use “managed Google Play” as a distribution
tool for deploying apps to employees. Enterprise customers operate their own Google Play store
to host their apps publicly and/or privately. They can grant access to select users or user groups
to view and download private apps.
Google Play Custom App Publishing API is an Application Programming Interface from Google
that enables developers and organizations to create and publish private custom apps. Apps that
are published through Google Play Custom App Publishing API cannot be converted to public
apps. The apps will remain private permanently. Google offers a streamlined verification process
for private custom apps. These apps can be available to an organization for deployment in as
little as 5 minutes after verification.
Google Play Custom App Publishing API can be used by:
• Enterprise mobility management providers (EMMs)
• Third-party app developers
• Organizations/developers that want their enterprise clients to be able to distribute
private/custom apps from an EMM console, IDE, or other interface.
Enterprise customers can publish apps by:
1. Enabling the Google Play Custom App Publishing API.
2. Creating a service account.
3. Granting publishing permission to the service account on the organization’s Play Console
developer account.
Using Google Play within an organization, IT Support administrators should:
1. Use their organization’s managed version of Google Play to select and approve apps.
2. Ensure all employee Android devices are set up to use the organization’s managed
Google Play account.
3. Use the organization’s Enterprise Mobility Manager (EMM) to manage employee
Android devices and deploy selected apps to employees’ Android devices.
For Android devices that are owned by employees (BYODs) and not registered with the
organization’s EMM:
1. Consider Google’s recommendation to create a work profile on each device.
2. Show employees how to use their work profile to access the organization’s managed
Google Play account.
3. Demonstrate that employees can then view and install any of the administrator selected
and approved apps.
Outside official Google distribution channels
Google’s open platform policies include allowing competitors to innovate in developing app
stores. Some alternative app stores that distribute Android apps include:
• APKMirror
• Aurora Store
• Aptoide
• Amazon Appstore
• F-Droid
• Uptodown
• SlideMe
• APKPure
• Galaxy Store
• Yalp Store
Please see Fossbytes “10 Best Google Play Store Alternatives: Websites And Apps” for more
information about each Android app store in the list above.
Resources for more information
• App Store Review Guidelines - Apple’s comprehensive list of guidelines developers must
follow for designing and submitting apps to the Apple App Store.
• Distributing custom apps for business - Apple’s guide to publishing custom apps.
• About Android App Bundles - Android developer’s guide to using Android App Bundles
to develop and publish apps on Google Play.
• Get started with custom app publishing - Google’s guide to publishing custom apps.
Supplemental Reading for Updating Mobile Apps
Mobile App Packages: App Updates
In this reading, you will learn about updating apps on mobile devices. IT Support professionals
use this skill for the maintenance and troubleshooting of mobile devices. It is a best practice to
keep apps updated for security purposes and to avoid any problems that affect outdated apps.
How to update apps
Android mobile apps
It is important to note that Android is an open operating system (OS). This means mobile device
manufacturers and cellular service providers can modify the Android OS to enhance, control, or
restrict elements of the OS. These modifications can include how system settings are accessed. If
an Android device’s Storage settings cannot be located easily, it is best to consult the device
manufacturer’s manual. Mobile device manuals can often be found online.
Instructions for most Android phones and tablets (note that instructions may vary by OS version;
Android 12 was used for these instructions):
Automatic updates
1. Open the Google Play Store app.
2. At the top right, tap the profile icon.
3. Select Settings.
4. Open the sub-menu for Network preferences.
5. Select an option:
a. App download preference: Over any network - to update apps using either Wi-Fi
or mobile data (data usage charges may apply, depending on cellular plan).
b. Auto-update apps: Over Wi-Fi only - to update apps only when connected to
Wi-Fi.
Troubleshooting note: If the user is not logged in to their Google account on the Android
device, apps may not update automatically.
Manual updates
1. If automatic updates are toggled on, repeat steps 1 to 5 for the “Automatic updates”
instructions listed above. However, for step 5, select Don’t auto-update apps.
2. Open the Google Play Store app.
3. At the top right, tap the profile icon.
4. Select Manage apps & device.
5. In the Update available section, select See details.
6. Select individual software to Update.
Apple mobile devices
Automatic updates
Apple’s iPhones and iPads are configured by default to automatically update apps stored on these
devices. However, as an IT Support specialist, you may encounter a variety of reasons why
automatic updates were disabled for a device, but need to be enabled again. The instructions to
turn on automatic updates for installed apps may vary by OS version. Please see Apple’s website
to view instructions for the specific OS version in use.
Manual updates
Some IT departments have policies to test all updates before allowing the updates to be applied
across the organization’s devices. In this case, you may need to configure the organization’s
Apple mobile devices to use manual updates for apps. Turning on manual updates will involve
turning off automatic updates. This step enables notifications to display each time an update
becomes available for an app installed on the device.
Instructions for app updates
The instructions for configuring automatic and manual updates for installed apps may vary by
OS version. Please see the “Resources for more information” section below for links to Apple’s
Support website to obtain detailed instructions.
Resources for more information
For more information about updating apps on mobile devices, please visit:
• How to manually update apps on your Apple device - Instructions for configuring both
manual updates and automatic updates for apps on Apple mobile devices.
• Manage software updates for Apple devices - Advanced administrative information for
managing software updates for Apple mobile devices. Centered on devices enrolled in
mobile device management (MDM) solutions.
• How to update the Play Store & apps on Android - Provides step-by-step instructions on
multiple options for updating Android apps.
Supplemental Reading for Mobile Device Storage
Mobile Device Storage Space
In this reading, you will learn how to check mobile devices for available storage space and how
to free up storage when space is low. Storage space on mobile devices is often limited. It is a best
practice to ensure that there is sufficient space on a mobile device before installing new apps or
saving new files. As an IT Support Specialist, checking storage space is an important
troubleshooting step. Like PCs, mobile devices can experience unusual errors when storage
space runs low. Imagine a user is trying to install an app or save a file to a mobile device and an
unexpected error occurs. If the error does not generate an informative error message, you will
have to investigate the problem. The first troubleshooting step for an installation or saving
problem should be to check if there is enough storage space for the new app or file.
Sometimes, limitations may be reached without the user intentionally adding programs or files to
their device. Automatically generated temporary cache files, for instance, can fill up the last bit
of storage space and cause unusual performance problems. Fortunately, unused or rarely used
apps and files can be uninstalled or deleted to make space for new items. Users should also be
encouraged to use cloud storage for photos, videos, and other important files, instead of storing
the files locally on the mobile device. This not only saves storage space, but it also helps in
protecting the files if the mobile device is lost, stolen, or broken.
Apple mobile devices
Both iOS and iPadOS automatically analyze how much space apps occupy in storage on iPhones
and iPads. You can see how much storage is available through the device’s Settings menu, on
iTunes, or through a computer with a connection to the mobile device. Apple mobile devices can
be configured to free up space automatically when they are low on storage space. The devices
will select files that can be downloaded again if needed for removal. These files can include
cache, local copies of files that are stored in the cloud, streamed videos and music, and
temporary files. Apple devices should also generate an alert when storage space is almost full to
give the user an opportunity to select specific apps and files for removal.
The following steps should be followed to check the storage space available on iPhones and
iPads (note that instructions may vary by OS version; iPadOS 15 was used for these
instructions):
1. Navigate to Settings > General > iPhone Storage or iPad Storage.
2. The first item on the Storage screen should be a visual indicator of how much storage
space has been used out of the total storage space available on the device. It might be
color coded to delineate which types of items are occupying the used storage space, such
as apps, messages, media, system data, etc.
3. Check the RECOMMENDATIONS section near the top of the screen (if available). This
section might suggest automatically deleting messages that are over a year old or
automatically uninstalling unused apps when space is running low. Be sure to investigate
the suggested items for deletion to ensure that the items will not be missed before
clicking Enable.
4. Review the next section, which lists the apps installed on the device. The file size and
date last used will be listed for each app. If you open the detailed view for an app, you
might see options like:
a. Offload the app - Removes the app only, but keeps app data and documents.
b. Delete the app - Removes the app, its data, and related documents.
5. Select the best option that suits the device user’s needs.
6. Move any photos, videos, and other user-created files to iCloud storage and remove the
copy stored on the device’s storage space.
Android mobile devices
Android is an open operating system (OS), which allows manufacturers to change the OS
configuration. These changes can include how system settings are accessed. For example, most
versions of Android should have Storage listed immediately under Settings. However, Samsung
Android phones have Storage settings listed under either Device Maintenance or Device Care.
If an Android device’s Storage settings cannot be located easily, it is best to consult the device
manufacturer’s manual. Mobile device manuals can often be found online.
Instructions for most Android phones and tablets (note that instructions may vary by OS version;
Android 12 was used for these instructions):
1. Navigate to Settings > Storage
2. The Storage screen may display a visual indicator illustrating how much storage space
has been used out of the total storage space available on the device. Like Apple devices,
the graphic might be color coded to indicate which types of USER DATA are occupying
the used storage space, such as images, videos, audio, documents, apps, etc.
3. Click the CLEAN UP button under the graphic (if available).
4. A new window should open to show a list of items that Android has analyzed and
RECOMMENDED FOR CLEANUP. Next to each item may be a button labeled
CLEAN UP. Scroll down to the bottom of the list to find the SPECIAL CLEANUPS
section.
a. For some items, like Junk Files, clicking the CLEAN UP button will
automatically remove the files.
b. For other items, like Images or Videos, clicking the CLEAN UP button will give
the user a checklist of specific items to select for removal. Be sure to investigate
the suggested items for deletion to ensure that the items will not be missed by the
user.
One type of package that we haven't discussed yet isn't really a package at all; it's an archive. An
archive is made up of one or more files compressed into a single file. Package archives
are basically the core or source software files that are compressed into one file. When we install
software from a source archive, it's referred to as installing from source. Popular archive types
you'll see are .tar, .zip, and .rar. To install software found in an archive, you first have to extract
the contents of the archive so you can see the files inside. Then, depending on what type of code
it was written in, you have to use a specific method to install it. We won't discuss how to install
from source, since it changes depending on what language the software was written in. But we'll
discuss how to extract the contents of an archive, which you'll have to do a lot as an IT support
specialist.
It's not just software that's stored in an archive, anything can be archived, like pictures or music
files. You'll see these a lot in IT support. To make things more complicated, archive types can be
extracted in lots of different ways. Luckily, there's a very popular tool in Windows for
archiving and unarchiving different file types, like .rar, .zip, and .tar. This is the open source
tool 7-Zip. It's already installed on my computer. If you want to download it yourself, I've
included a link in the supplemental reading. There's an archive on my desktop called colors.zip.
Let's go ahead and extract this archive so that we can see the files inside. I'm just going to
right-click, then choose 7-Zip, Extract Here. It looks like there are a bunch of files inside of this archive.
Besides unarchiving files, you can also archive files. I'm going to make a new folder called new
colors. Then, I'm going to add this new blue dot text file and the old colors in this folder. Then,
I'm just going to archive it with 7-Zip and Add to archive. Click OK.
Pretty cool, right? Now, if you wanted to send someone a bunch of files in an email, you don't
have to send them one by one. Instead, you can combine them all in one archive and send one
single file. If you're using PowerShell version 5.0 of greater, you can actually extract and
compress archives right from the command line. Let's say you've got a bunch of files in a folder
called cool files on your desktop, that you'd like to add to the new zip file.
After you've opened up the PowerShell command line interface, you can issue this command:
Compress-Archive -Path ~\Desktop\CoolFiles -DestinationPath ~\Desktop\CoolArchive.zip.
Now, if we check our desktop, there you should see it, CoolArchive.zip. This will take
everything from the desktop CoolFiles directory and compress it into CoolArchive.zip.
Supplemental Reading for 7-Zip and PowerShell Zips
If you're interested in downloading 7-Zip, check out the link here.
For more information about creating an archive or zipped file using Windows PowerShell,
check out the link here.
Supplemental Reading for the Linux Tar Command
For more information about the Linux tar command, please check out the following link
here.
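If you find yourself extracting and compressing archives often, the same steps can be scripted. Here's a minimal sketch using Python's standard-library zipfile and tarfile modules (the file and folder names are made up for illustration):

```python
import pathlib
import tarfile
import zipfile

# Create some example files to archive (hypothetical names).
workdir = pathlib.Path("cool_files")
workdir.mkdir(exist_ok=True)
(workdir / "blue.txt").write_text("blue\n")
(workdir / "red.txt").write_text("red\n")

# Compress the folder's files into a .zip archive.
with zipfile.ZipFile("cool_archive.zip", "w") as zf:
    for path in workdir.iterdir():
        zf.write(path)

# List the archive's contents, then extract them.
with zipfile.ZipFile("cool_archive.zip") as zf:
    print(zf.namelist())
    zf.extractall("extracted_zip")

# The same idea works for .tar archives with the tarfile module.
with tarfile.open("cool_archive.tar.gz", "w:gz") as tf:
    tf.add(workdir)
with tarfile.open("cool_archive.tar.gz") as tf:
    tf.extractall("extracted_tar")
```

Both modules can also list an archive's contents without extracting anything, via ZipFile.namelist() and TarFile.getnames().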
Packages of software usually rely on other pieces of code in order to work. Let's say you're
installing a game on your Windows computer. The program might need to do some calculations
to make the physics of the game work properly and render the results in the form of graphics on
the screen. To perform these tasks, the game might have a dependency on a physics engine to do
the calculations and a rendering library to show the sweet graphics on the screen. In order for the
game to work, you'll have to have all that software available to the game. Counting on other
pieces of software to make an application work is called having dependencies since one bit of
code depends on another in order to work. In our example, the game depends on both the physics
engine and a rendering library to run. But wait, what do we mean when we refer to a library?
You can think of the library as a way to package a bunch of useful code that someone else wrote.
This code is bundled together into a single unit. Programs that want to use the functionality that
the code provides can tap into it if they need to. In Windows, these shared libraries are called
dynamic-link libraries, or DLL for short. You can find out more details about dynamic link
libraries in the next reading.
One super useful feature of a DLL is that the same DLL can be used by lots of different
programs. This means all that shared code doesn't need to be loaded into memory for each
application that wants to use it, so less memory overall is used. Windows applications typically
have many dependencies all located together in a single installation package. Along with
something called an MSI file that tells the Windows Installer how to put it all together.
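You can see the same "load once, share everywhere" idea in any language with a module system. As a loose analogy only (this is not how Windows DLLs are implemented), Python caches an imported module, so every part of a program that imports it gets the same single copy in memory:

```python
import importlib
import json  # stands in for a shared library

# A second import elsewhere in the program doesn't load a new copy;
# it returns the exact same module object from the cache.
json_again = importlib.import_module("json")
print(json_again is json)  # → True
```

That single cached copy is what saves memory when many consumers depend on the same code.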
This means that a given installation package will have all the resources and dependencies like
DLLs, right there in the package. The Windows Installer will also handle managing those
dependencies and make sure they are available to the program. In the old days, things weren't
always so great. Imagine this scenario. A video player you've been using to play movies on your
computer uses a graphics DLL to display films on your screen. A new game just came out that
you want to play, so you install that too.
The game comes along with a new version of that graphics library. So the game installer updates
the existing version with the new DLL. All of a sudden, your video player stops working. It turns
out the video player doesn't know how to use the new version of the DLL, which is a pretty big
bummer. On modern Windows Operating Systems though, DLL hell is a problem of the past. To
fix it, most shared libraries and resources in Windows are managed by something called side-by-
side assemblies or SxS.
Most of these shared libraries are stored in a folder at C:\Windows\WinSxS. If an application
needs to use a shared library to perform a task, that library will be specified in something called a
Manifest. This tells Windows to load the appropriate library from the SxS folder. The SxS
system also supports access to multiple versions of the same shared library automatically. So
when you install software, you don't pull the rug out from underneath the programs you've
already got. In addition to manifests, the SxS system, and installers bundling dependencies
together in their installation packages, you can use a Windows package manager to help install
and maintain the libraries and other dependencies that your installed software needs to use. We'll
talk about this in more detail in our lesson on Windows package managers. We'll give you a
preview using the Windows package management feature for PowerShell. Using a Windows
package management cmdlet called Find-Package, you can locate software, along with its
dependencies right from the command line.
By the way, a cmdlet is basically the name we give to Windows PowerShell commands that use
the verb-noun format. We've already used lots of cmdlets, such as Get-Help and Select-String.
There are hundreds of cmdlets built into Windows and you can even write your own. Okay, back
to my task at hand. Let's say you wanted to install the Sysinternals package, which is a set of
tools released by Microsoft that can help you troubleshoot all sorts of problems on your
Windows computers. You could download the Sysinternals package from the Microsoft Website
or you could use the package management feature. First we'll need to open up a PowerShell
terminal by typing in PowerShell from the start menu.
Then we can try to locate the sysinternals package by executing this command:
Find-Package sysinternals -IncludeDependencies.
An error. No match found. What's that all about?
This exception was generated because the default source of packages in PowerShell is the
PowerShell gallery, which doesn't contain the Sysinternals package.
Luckily, all we need to do is tell PowerShell about a place where it can find the Sysinternals
package. And that's a package repository called Chocolatey. We'll dive into more about
Chocolatey in the package manager video. But for now, just know it's a place where all kinds of
Windows software packages live. So, before we can install any packages, we need to add a
package source that tells our computer where it can find the packages we want to install. Since
we want to use Chocolatey to find our packages, we need to add it as a package source. We're
going to do that with the PowerShell command Register-PackageSource. Let's go ahead and type
Register-PackageSource -Name chocolatey -ProviderName Chocolatey -Location
http://chocolatey.org/api/v2.
We can verify both sources of software are now good to go with the Get-PackageSource
command. And then, try to locate our package and its dependencies again with Find-Package
sysinternals -IncludeDependencies. Sweet! Now that we know that's the package we want, we
can use a cmdlet called Install-Package to actually install Sysinternals and its corresponding
dependencies. We'll do that in a later lesson. Now it's time for a snack break. All this Chocolatey
talk made me hungry.
Supplemental reading for Windows Package Dependencies
DLL Files and Windows Package Dependencies
In this reading, you will learn about dynamic link library (DLL) files. This information includes
how Windows package dependencies can break and how Microsoft has remedied these DLL
dependency problems using the .NET framework and other methods. You will also learn about
the side-by-side assemblies and manifest files for Windows applications.
Dynamic link library (DLL)
Windows DLL files are vital to the core functions of the Windows operating system (OS). Some
Windows-compatible applications also use DLL files to function. DLLs are made up of
programming modules that contain reusable code. Multiple applications can use and reuse the
same DLL files. For example, the Comdlg32 DLL file is used by many applications to provide
Windows dialog box functions. The reusable feature helps Windows conserve disk space and use
RAM more efficiently, which improves the operating speed of the OS and applications. The
modular structure also makes updating a DLL file fast and simple, eliminating the need to update
the entire library. DLL updates are installed once for use by any number of applications.
A few common DLLs used by Windows include:
• .drv files - Device drivers manage the operation of physical devices such as printers.
• .ocx files - Active X controls provide controls like the program object for selecting a date
from a calendar.
• .cpl files - Control panel files manage each of the functions found in the Windows
Control Panel.
An application can use DLLs to load parts of the app as modules. This means that if the
application offers multiple functions, the app can selectively load only the modules that offer the
functionality requested by the user. For example, if a user does not access the Print function
within an application, then the printer driver DLL file does not need to be loaded into memory.
This system requires less RAM to hold the application in working memory, which improves
operating speeds.
DLL dependencies
A Windows package dependency is created when an application uses a DLL file. Although the
Windows DLL system supports the sharing of DLL files by multiple applications, the
applications’ dependencies can be broken under certain circumstances.
DLL dependencies can be broken when:
• Overwriting DLL dependencies - It is possible for an application to overwrite the DLL
dependency of another app, causing the other app to fail.
• Deleting DLL files - Some applications and malware may delete the DLLs needed by
other applications installed on a system.
• Applying upgrades or fixes to DLLs - Can cause a problem called “DLL hell” where an
application installs a new version of the shared DLL for a computer system. However,
other applications that are dependent on the shared DLL have not yet been updated to be
compatible with the new version of the DLL. This causes the other applications to fail
when the end user tries to launch them.
• Rolling-back to previous DLL versions - A user may try to reinstall an older
application that stopped working after a shared DLL file was upgraded by a newer app.
However, the reinstallation of the app that uses the old DLL version can overwrite the
new DLL file. This DLL version roll-back can cause the newer app with the shared DLL
dependency to fail the next time it tries to run.
Microsoft has remedied these problems through the use of:
• Windows File Protection - The Windows OS controls the updates and deletions of
system DLL files. Windows File Protection will allow only applications with valid digital
signatures to update and delete DLL files.
• Private DLLs - Removes the sharing option from DLLs by creating a private version of
the DLL and storing it in the application’s root folder. Changes to the shared version of
the DLL will not affect the application’s private copy.
• .NET Framework assembly versioning - Resolves the “DLL hell” problem by allowing
an application to add an updated version of a DLL file without removing the older
version of the DLL file. This prevents the malfunction of applications that have
dependencies on the older DLL file. The DLL versions can be found in the
"C:\Windows\assembly" path and are placed in the Global Assembly Cache (GAC). The
GAC contains the .NET “Strong Name Assembly” of each DLL file version. This
“Strong Name Assembly” includes the:
o name of the assembly - multiple DLL files can share the assembly name
o version number - differentiates the version of DLLs
o culture - country or region where the application is deployed, can be “neutral”
o public key token - a unique 16-character key assigned to an assembly when it is
built
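Those four parts are conventionally combined into an assembly "display name" string. Here's a small sketch of that format; the values below are illustrative, not from a real assembly:

```python
# Build a .NET-style assembly display name from its four parts.
# All values here are made-up examples.
parts = {
    "name": "ExampleLibrary",
    "version": "2.0.0.0",
    "culture": "neutral",
    "public_key_token": "0123456789abcdef",  # 16 hex characters
}

display_name = (
    f"{parts['name']}, Version={parts['version']}, "
    f"Culture={parts['culture']}, PublicKeyToken={parts['public_key_token']}"
)
print(display_name)
# → ExampleLibrary, Version=2.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef
```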
Side-by-side assemblies
DLLs and dependencies can also be located in side-by-side assemblies. A side-by-side assembly
is a public or private resource collection that is available to applications during run time. Side-
by-side assemblies contain XML files called manifests. The manifests contain data similar to the
configuration settings and other data that applications traditionally stored in the Windows
registry. Instead of registering this data in the Windows registry, the applications store shared
side-by-side assembly manifests in the WinSxS folder of the computer. Private manifests are
stored inside the application’s folder or they can be embedded in an application or assembly. The
metadata of a manifest may include:
• Names - Manages file naming.
• Resource collections - Can include one or more DLLs, COM servers, Windows classes,
interfaces, and/or type libraries.
• Classes - Included if versioning is used.
• Dependencies - Applications and assemblies can create dependencies to other side-by-
side assemblies.
As an IT Support professional, this concept should be considered when troubleshooting
application issues. If the application’s configuration settings are not found in the Windows
registry, they might be located in the manifest from the app’s side-by-side assembly.
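As an illustration, a simplified application manifest declaring a dependency on a shared side-by-side assembly might look like the following. The application identity is hypothetical; the dependent assembly shown is the commonly documented Windows Common Controls assembly:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="ExampleApp" version="1.0.0.0"/>
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32"
                        name="Microsoft.Windows.Common-Controls"
                        version="6.0.0.0"
                        processorArchitecture="*"
                        publicKeyToken="6595b64144ccf1df"/>
    </dependentAssembly>
  </dependency>
</assembly>
```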
Let's see what a package dependency would look like in Linux. We learned how to install a
standalone package in Linux using dpkg in the last lesson. Let's install one more package. I
downloaded the Google Chrome browser here on my desktop and I want to install it with sudo
dpkg -i google-chrome. Wait a minute, what's this error I'm getting? Dependency problems
prevent configuration of google-chrome-stable. This is saying it can't install Google Chrome
because it's dependent on another package that isn't currently installed on this machine. So before
we can install Chrome, we have to install this package, libappindicator1. While a standalone
package installer like dpkg may be quick to use, it doesn't install package dependencies for us. In
Linux, these dependencies can be other packages or they could be something like shared
libraries. Linux shared libraries, similar to Windows DLLs, are libraries of code that other programs
can use. So what do you do if you're stuck with a dependency error? You could install the
dependencies one by one. Sure. But in some cases, you might see more than just one
dependency. You might even see 10. This is especially true in Linux. It's not that fun to
continually install programs just so you can get one program to work. Luckily for us, that's
where package managers come in. Package managers include tools that make package
installation and removal easier, including installing package dependencies. We'll talk about
package managers in the next lesson but for now, it's enough to know that if you install a
standalone package, you won't automatically install its dependencies.
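To get a feel for what a package manager does with dependencies, here's a toy dependency resolver in Python. This is not how dpkg or apt is actually implemented; it just shows the core idea of installing a package's dependencies before the package itself. The package names echo the Chrome example above, and the libindicator7 link in the chain is hypothetical:

```python
# Toy dependency graph: package -> packages it depends on.
DEPENDS = {
    "google-chrome-stable": ["libappindicator1"],
    "libappindicator1": ["libindicator7"],  # hypothetical chain
    "libindicator7": [],
}

def install_order(package, graph, installed=None):
    """Return the order in which packages must be installed,
    dependencies first (a depth-first topological sort)."""
    if installed is None:
        installed = []
    for dep in graph.get(package, []):
        if dep not in installed:
            install_order(dep, graph, installed)
    if package not in installed:
        installed.append(package)
    return installed

print(install_order("google-chrome-stable", DEPENDS))
# → ['libindicator7', 'libappindicator1', 'google-chrome-stable']
```

A real package manager does this same walk over a much larger graph, while also downloading each package and checking version constraints.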
Supplemental Reading for Linux Package Dependencies
Linux Package Dependencies
In this reading, you will review how to install and manage Debian packages in Linux using the
dpkg command. This skill may be helpful to IT Support professionals that work with Linux
systems like Debian or Ubuntu.
The following is a list of terms used in this reading:
• Debian: One of many free Linux operating systems (OSes), used as the foundation for
other OSes, like Ubuntu.
• Linux packages: A compressed software archive file that contains the files needed for a
software application. These files can include binary executables, software libraries,
configuration files, package dependencies, command line utilities, and/or application(s)
with a graphical user interface (GUI). A Linux package can also be an OS update. Linux
OS installations normally come with thousands of packages. Common Linux package
types include:
o .deb - Debian packages
o .rpm - Redhat packages
o .tgz - TAR archive file
• Linux repository: Storage space on a remote server that hosts thousands of Linux
packages. Repositories must be added to a Linux system in order for the system to search
and download packages from the repository.
• Stand alone package: A package that does not require any dependencies. All files
required to install and run the package on a Linux system are contained inside a single
package.
• Package dependency: A package that other Linux packages depend upon to function
properly. Often, packages do not include the dependencies required to install the software
they contain. Instead, package manifests list the external dependencies needed by the
package.
• Package manager: A tool on Linux systems used for installing, managing, and removing
Linux packages. Package managers can also read package manifests to determine if any
dependencies are needed. The package manager then finds and downloads the
dependency packages before installing the packaged software. Several common Linux
Package Managers include:
o For Debian and Debian-based systems, like Ubuntu:
▪ dpkg - Debian Package Manager
▪ APT - Advanced Package Tool, uses dpkg commands
▪ aptitude - user-friendly package manager
o RedHat and RedHat-based systems, like CentOS:
▪ rpm - RedHat Package Manager
▪ yum - Yellowdog Updater Modified, comes with RedHat
▪ dnf - Dandified Yum
The dpkg command
The Linux dpkg command is used to build, install, manage, and remove packages in Debian or
Debian-based systems.
Syntax
The following are a few common dpkg command action parameters, with syntax and uses:
To install a package: sudo dpkg -i package_name.deb
To update a package saved locally: sudo dpkg -i package_name.deb (installing a newer .deb file upgrades the package)
To remove a package: sudo dpkg -r package_name
To purge a package, which removes the package and all files belonging to the package: sudo dpkg -P package_name
To get a list of packages installed: dpkg -l
To get a list of all files belonging to or associated with a package: dpkg -L package_name
To list the contents of a new package: dpkg -c package_name.deb
When an action parameter is added to the dpkg command, one of the following two commands
are run in the background:
• dpkg-deb: A back-end tool for manipulating .deb files. The dpkg-deb tool provides
information about .deb files, and can pack and unpack their contents.
• dpkg-query: A back-end tool for querying .deb files for information.
Additional Debian package managers
There are several alternate methods for managing Debian packages. Some have command-line
interfaces (CLI) while others have GUIs. The alternative options to dpkg include:
• APT (Advanced Package Tool) - A powerful package manager designed to be a front-
end for the dpkg command. APT installs and updates dependencies required for proper
.deb package installation.
• Synaptic Package Manager – A popular GTK (GIMP ToolKit) based graphical package
manager. Provides an array of package management features.
• Ubuntu Software Center – A GTK GUI developed by Ubuntu and integrated into the
Ubuntu OS.
• aptitude – A user-friendly front-end for APT, with a menu-driven console and a CLI.
• KPackage – A part of KDE (Kool Desktop Environment) used to install and load
packages that do not contain binary content. Non-binary content includes graphics and
scripted extensions.
Package Managers
Now that we know a bit about installing software and dependencies from individual executables
or package files, let's take a look at a different way to manage software installations using tools
called package managers. You've actually already seen a package manager in action. Remember
the apt, or Advanced Package Tool, we talked about in an earlier video? Well, the advanced package
tool is actually a package manager for the Ubuntu operating system. We'll talk about apt in a
little bit. But you might be curious about what options you have for Windows package
management. A package manager makes sure that the process of software installation, removal,
update, and dependency management is as easy and automatic as possible. Think about the
normal way you might install a new program on your Windows computer. You might search for
it in a search engine, go to the program's website, download the installer, then run it. If you
wanted to update the software, you might open up the program and use whatever mechanism it
provides for you to install the new version. Lots of programs give you a way to perform
automatic updates and Microsoft takes care of the ones it writes through Windows update. But
you might even need to go back to the website you downloaded the software from originally to
grab another installer for the new version. Finally, if you wanted to remove the software, you
might use the Windows Add/Remove Programs utility, or maybe run a custom uninstaller if it
provides you with one. Some installation technologies like the Windows installer can take care
of dependency management. But they don't do much to help you install software from a central
catalog of programs or perform automatic updates. This is where a package manager like
Chocolatey can come in handy. Chocolatey is a third party package manager for Windows. This
means it's not written by Microsoft. It lets you install Windows applications from the command
line. Chocolatey is built on some existing Windows technologies like PowerShell, and lets you
install any package or software that exists in the public Chocolatey repository. I've included links
to both in the next reading. You can add any software that might be missing to the public
repository. You can even create your own private repository if you need to package something
like an internal company application. Configuration management tools like SCCM and Puppet
even integrate with Chocolatey. That helps make managing deployments of software to the
Windows computers in your company, automatic and simple. We've talked about a few ways we
can install packages in earlier videos. Let's add Chocolatey to the mix, which supports several
methods of software installation itself. First, you can install the Chocolatey command line tool and
run it directly from your PowerShell CLI. Or you can use the package management feature that
was recently released for PowerShell. Just specify that the source of the package should be the
Chocolatey repository. Remember this from our talk about installing software? We used this
command to locate the Windows Sysinternals package after adding Chocolatey as a software
source. Just a refresher, the command was Find-Package sysinternals -IncludeDependencies.
That's all well and good. But how do we actually go about installing this package? Well, that's
where the Install-Package command-let comes into play. We can use this tool to install a piece of
software and its dependencies. Let's give installing that sysinternals package we found earlier a
shot. I'm just going to type Install-Package -Name sysinternals. Yep, I'm just going to confirm. And
just like that, we've got our package. We can verify it's in place with the Get-Package cmdlet:
Get-Package -Name sysinternals. You can also uninstall a package using Uninstall-Package
-Name sysinternals.
Supplemental Reading for Windows Package Managers
For more information about the NuGet package manager, check out the link here.
For more information about the Chocolatey package manager, check out the link here.
Okay. Now let's talk about the package manager used in Ubuntu called APT, or Advanced
Package Tool. We've actually already used APT in an earlier course, so hopefully, this won't
look new. The APT package manager is used to extend the functionality of dpkg. It makes
package installation a lot easier. It installs package dependencies for us, makes it easier for us to
find packages that we can install, cleans up packages we don't need, and more. Let's see how we
will install the open source graphical editor, Gimp, using APT. And if you want to follow along
on your own machine, I've included a link to the Gimp download in the next reading. So, sudo
apt install gimp. Let's take a look at what this command is doing. APT grabs the dependencies
that this package requires automatically and asks us if we want to install it. You can see this line
here, 0 upgraded, 18 newly installed, 0 to remove, and 16 not upgraded. This gives us a good
overview of what we're doing to the packages on our machine. Now, let's remove this package
with sudo apt remove gimp.
You can see that it removes dependencies for us that we're not using anymore because we don't
need Gimp. You also noticed that when installing this package, we didn't have to download the
gimp package. It was just on our system. How is that possible? Well, thanks to something known
as a package repository, we don't have to manually search for each and every software we want
online. We've already seen the chocolatey package repository in action. Repositories are servers
that act like a central storage location for packages. Lots of software developers and
organizations host their software on the internet, and give out a link to where that location is.
You can add that link to your own machine, so it references that package or list of packages.
You've already seen this with the Register-PackageSource cmdlet, where we added the
location of the Chocolatey repository. So on Linux, where do you add a package or repository
link? The repository source file in Ubuntu is /etc/apt/sources.list. Your computer doesn't
know where to check for software if you don't explicitly add the package or repository links to
this file. Let's just open this up real quick and take a peek.
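A typical entry in sources.list looks something like this; the mirror URL, release name, and component list will vary by system:

```
deb http://archive.ubuntu.com/ubuntu/ focal main restricted universe multiverse
```

The first field says the entries are binary packages (deb), the URL points at the repository server, and the remaining fields name the release and the repository components to search.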
There's some extra information in here that isn't important, but you can see that there are
links. If you navigate to those links, you'll see a directory that holds lots of packages.
Ubuntu already includes several repository sources in here to help you install the base operating
system packages, and other tools too. If you work in a Linux environment, there are also special
repositories called PPAs or personal package archives.
PPAs are hosted on Launchpad servers. Launchpad is a website owned by the organization
Canonical Limited. It allows open source software developers to develop, maintain, and
distribute software. You can add PPAs like you would a regular repository link, but be a little
careful when using a PPA instead of the original developer's repositories. PPA software isn't as
vetted as repositories you might find from reputable sources like Ubuntu. They can sometimes
contain defective, or even malicious software. One more thing to call out about repositories is
that the repository managers update their software pretty regularly. If you want to get the latest
package updates, you should update your package repositories with the apt update and then
apt upgrade commands. The apt update command updates the list of packages in your
repositories, so you get the latest software available. But it won't install or upgrade packages for
you. Instead, once you have an updated list of packages, you can use apt upgrade, and it will
upgrade any outdated packages for you automatically. Before installing new software, it's good to
run apt update to make sure you're getting the most up-to-date software in your repositories.
You'll also want to run apt upgrade to install any available updated packages on your machine.
You can use the apt --help command to learn more about the commands available with APT. We
won't cover them all, but you can list packages, search packages, get more information about
packages, and more. There are lots of different package managers you can use with Ubuntu. We
chose APT because it's a popular package manager, but you can read up on an alternative
package manager in Ubuntu in the next supplemental reading. Awesome. Now that you've
added the apt command to your toolkit, you're ready to maintain packages in Linux. This is a
skill you'll be using a whole lot in the IT support world. We'll talk about that more in the next
lesson.
Supplemental Reading for Linux PPAs
If you work in a Linux environment, there are also special repositories known as PPAs or
Personal Package Archives. PPAs are hosted on Launchpad servers. For more information
about PPAs, check out the link here.
Supplemental Reading on GIMP
For more information on how to install the open-source graphical editor GIMP click here.
What’s happening in the background?
We've talked a lot about the practical aspects of installing software, which has been abstracted
for us in the form of package managers. But as someone who might be working in IT, it's also
important for you to understand what happens underneath the hood, when installing or removing
software. In other words, what really happens with the underlying technology when you perform
this action. You might come across a situation where a package you install modifies a
configuration file that it's not supposed to, and then starts causing issues for you. So how does
software installation work underneath the hood? Let's take a look at how an EXE gets installed in
Windows. When you click on an installation executable, what happens next depends on how the
developer of the program has set their application up to be installed. If the EXE contains code
for a custom installation that doesn't use the Windows installer system, then the details of what
happens under the hood will be mostly unclear. This is because most Windows software is
distributed in closed source packages. Meaning you can't look at the source code to see what the
program is doing. In this case though, although you can't read the instructions the developer has
written, you can use certain tools to check out the actions the installer is taking. One way to do
this would be to use the Process Monitor program provided by the Microsoft Sysinternals
toolkit. This will show you any activity the installation executable is taking, like the files it
writes and any process activity it performs. You can read more about the Microsoft Sysinternals
toolkit in the next supplemental reading. So what about an MSI file, or an executable wrapping an
MSI? Again, the application itself will be closed source, so you won't be able to peek at the
source code to see what it does. But, installation packages that use the MSI format have a set of
rules and standards they need to conform to, so that the Windows installer system can understand
their instructions and perform the installation. There is more to MSI files than it might seem at
first. In fact, they aren't simple files at all. They're actually a combination of databases that
contain installation instructions in different tables along with all the files, objects, shortcuts,
resources and libraries the program will need all grouped together. The Windows installer uses
the information stored on the tables in the MSI database, to guide how the installation should be
performed. They'll know where files and application data should go, and any other things that
should happen to successfully install the program. The Windows installer will keep track of all
the actions it takes and create a separate set of instructions to undo them. This is how it lets users
uninstall the program. If you're curious about the details of what goes into an MSI file, or want to
create a Windows installer package yourself, check out the orca.exe tool that Microsoft provides.
It's a good way to satisfy your curiosity. Orca is part of the Windows SDK or software
development kit, but you don't need to be a programmer to use it. Orca can help you edit, or
create Windows installer packages. So, feel free to explore what it can do. We've provided a link
to the tool in the supplementary reading right after this video. Wow, there's a lot going on
underneath the hood in a Windows installation, and it's all kicked off by a couple of clicks. So
what about Linux? Glad you asked, that's up next.
Windows Settings Panel
Windows settings panel reference guide
The Windows settings panels allow users to view and change system settings in Windows. Each
setting group has a panel that allows changes to several system features. This guide explains the
function of each settings panel.
Download the Windows settings panel reference guide to learn about each settings panel.
Supplemental Reading for Windows Installers and Process Monitors
For more information about various ways you can create and edit Windows installer
packages, check out the following links: Process Monitor, Windows Installer Examples,
and Orca.exe.
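For reference, the Windows Installer system described above is driven from the command line by the msiexec tool. A few of its common invocations are sketched below; they run on Windows, not in this shell, so they're shown commented out, and the package name is a hypothetical placeholder.

```shell
# Common Windows Installer (msiexec) invocations, for reference only.
# "example.msi" is a hypothetical package name, not a real course file.
# msiexec /i example.msi /qn            # silent install of an MSI package
# msiexec /x example.msi /qn            # silent uninstall
# msiexec /i example.msi /L*v log.txt   # verbose log of each action taken
echo "msiexec is the command-line front end to the Windows Installer service"
```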
In Linux, software installations are a little bit more clear. We mentioned in an earlier lesson that
you can install software directly from source code. This method changes depending on the
software because different programming languages are compiled differently. We won't go in-
depth about software development, but let's say we had an archive with a simple package we
want to install. This is my Flappy app package. I've already extracted it, so you can see there's a
setup script. This is a script file that will run a bunch of tasks on the computer in order to set up
the package. And then flappy_app_code, this is the actual software code. The README is a
pretty standard file contained in source archives that has information about the archive. It not so
subtly asks you to read it before you do anything. The setup script is what we're concerned
with since it tells us how to install our package. A sample setup script can contain program
instructions like: compile flappy_app_code into machine instructions; copy the compiled flappy app
binary to the /bin directory; or create a folder in /home/<username> for
flappy app. This is a very simple overview of what happens when you install a simple
package. In the end, the software developers decide what their software needs to work and runs
tasks to get it working. It's up to the developer whether those tasks are creating files or updating
directories. If you knew the programming languages used, you could read these instructions
yourself. But that's a bit out of scope for this course. Anyways, that's how software installation
works on Linux in a nutshell.
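The steps a setup script performs can be sketched in a few lines of shell. This is a hedged, minimal stand-in, not the real course package: the paths, the "flappy_demo" name, and the one-line script that substitutes for a compiled binary are all hypothetical.

```shell
# Minimal sketch of a source-install setup script (hypothetical names/paths):
# "compile" the code, copy the binary into place, create the app's folders.
set -e
PREFIX="${TMPDIR:-/tmp}/flappy_demo"   # assumed install location for this sketch
mkdir -p "$PREFIX/bin" "$PREFIX/home"  # directories the app needs
# stand-in for compiling flappy_app_code and copying the binary to /bin:
printf '#!/bin/sh\necho "flappy app running"\n' > "$PREFIX/bin/flappy_app"
chmod +x "$PREFIX/bin/flappy_app"
"$PREFIX/bin/flappy_app"               # → flappy app running
```

A real setup script does the same kinds of tasks, just with a compiler invocation and system paths that usually require root privileges.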

Device Software Management


An important piece of software that we've talked about, but haven't really seen in action, is a
driver. Remember that a driver is used to help our hardware devices interact with our operating
system. In this lesson, we're going to talk about device drivers and how to manage them. First,
let's talk about how to manage the devices that our computer sees, and then we'll go over how we
install drivers for them. In Windows, Microsoft groups all of the devices and drivers on the
computer together in a single Microsoft management console called the Device Manager. You
can get to the Device Manager in a couple of different ways. You can open up the Run dialog
box and type in devmgmt.msc. Or you can right-click on This PC, select Manage, and click on
the Device Manager option in the left-hand navigation menu. I'm just going to open it up from
here.
Most devices you've got on your computer will be grouped together according to some broad
categories by Windows. So any displays you might be using with your computer would show up
under the Monitors section in the Device Manager.
Like so. This grouping usually happens automatically when you plug in a new device. It's part of
the plug and play system that Windows uses to automatically detect new hardware plugged into
your computer. It then recognizes and installs the appropriate software to manage it.
If you're interested, you can read more about the PnP system in the supplementary reading. We'll
give you an overview of how this works too, so you can get a feel for it. When you plug a new
device, like a mouse or keyboard, into your computer, the Windows operating system will go
through a few steps to try and get it working. Most vendors or computer hardware manufacturers
will assign a special string of characters to their devices called a hardware ID. When Windows
notices that a new device has been connected, the first thing it'll do is ask the device that's been
plugged in for its hardware ID.
Once Windows has the new device's hardware ID, the OS uses it to search for the right driver for
the device. It looks for it in a few places, starting with a local list of well-known drivers. Then it
goes on to Windows Update or the driver store if it needs to expand the search. Sometimes the
device will come with an installation disk, which contains custom driver software and you can
tell Windows to look there too. Finally, Windows will take the driver software it found and
install it so you can use your new device. Although this process mostly happens automatically
and behind the scenes, you can interact directly with the Windows drivers through the Device
Manager console we mentioned earlier. You can expand any of the categories in the Device
Manager to view the devices inside them, like so.
You can also use the all-powerful Windows right-click to open up a menu with options to work
with them. You can uninstall, disable, and update a device driver from this menu. You can also
tell Windows to look for hardware changes like a newly plugged in device. Finally, if you choose
Properties from the right-click menu, you can see some details about the device and its driver.
Like its manufacturer and the driver version being used. If you're interested in accessing drivers
through Windows CLI, check out the following reading for more info.
Supplemental Reading Windows Devices and Drivers
For an Introduction to Plug and Play click here.
As discussed in the previous lecture video, when Windows notices that a new device has
been connected, the first thing it will do is ask the device that's been plugged in for its
hardware ID. For more information on hardware identification, click here.
Once Windows has the Hardware ID of the new device, the OS uses that ID to search for
the right driver for the device. For more information on this, click here.
It looks in a few places for the driver, starting with a local list of well-known drivers, then
going onto Windows Update or the Driver Store. For more information click here.
Ubuntu has a slightly messier way of showing us device management. In Linux, everything's
considered a file, even hardware devices. When a device is connected to your computer, a device
file is created in the /dev directory. Let's take a look at this directory. There are lots of devices in
this directory, but not all of them are actually physical devices. For example, the /dev/null
device in here. We won't talk about all the device types in Linux because there are a lot of them.
But we'll go over the more common ones you'll see, character devices and block devices.
Character devices, like a keyboard or mouse, transmit data character by character. Block devices
like USB drives, hard drives, and CD-ROMs transfer blocks of data. A data block is just a unit of
data storage. Remember from an earlier lesson that the first bit you see in ls -l output is
the type of file. So far, we've seen a dash, which stands for a regular file, and a d, which stands for a
directory. But in this output, we can see we have a few other file types. Some of them have b for
block device and c for character device. If you'd like to learn more about the other device types,
you can check out the next supplemental reading. Let's look at some of the block devices we'll be
interacting with in this course. You'll see a few files that start with /dev/sda or /dev/sdb. SD devices
are mass storage devices like our hard drives, memory sticks, et cetera. If you see an a after sd, it
just means the device was detected by the computer first. So you might see something like
/dev/sda, /dev/sdb, /dev/sdc. Revisiting the /dev/null device, we can see it's considered a character
device because it's used to transfer data, character by character.
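You can check a device file's type directly from the shell. The first character of ls -l output is the type letter, and the test command has matching flags:

```shell
# The file-type letter in `ls -l` output shows the device type:
# 'c' = character device, 'b' = block device.
ls -l /dev/null                          # first column starts with 'c'
test -c /dev/null && echo "/dev/null is a character device"
```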
This is a pretty simple overview of device files. I left a lot of stuff out that you don't necessarily
need to know now. If you want to learn more about the inner workings of devices in Linux,
check out, you guessed it, the next supplemental reading. Let's talk about updating device drivers
for Linux. With Windows, we were able to just click update driver and in most cases that works.
In Linux, things are a little more complicated, and at the same time pretty easy. I'm not trying to
be confusing. You'll see what I mean in a moment. Device drivers aren't stored in the /dev
directory. Sometimes, they're part of the Linux kernel. Remember, that the kernel of our machine
handles the interaction with hardware. The kernel is a really monolithic piece of software that
has lots of functions including support for lots of hardware. These days, a lot of hardware
support is built into the kernel. So when you plug in a device, it automatically works. But if there
are devices that don't have support built into the kernel, they most likely have something called a
kernel module. Well, what's this kernel module thingy? For a lot of developers, touching
software like the Linux kernel is kind of intimidating. Instead, they can create kernel modules
which extend the kernel's functionality without them actually touching it. So, if you need to
install a kernel module for a specific type of device, you can install it the same way we've been
installing all software in Linux. Keep in mind that not all kernel modules are drivers. We won't
get into kernel modules, but if you'd like to read more, I've included a link to that as well in the
next reading. Since we just need to get started and get hands-on with the operating systems, this
should be more than enough. Let's keep moving.
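To peek at kernel modules without any extra tools, you can read the kernel's own list. This is a small sketch, assuming a typical Linux system where /proc is mounted; the lsmod command is just a friendlier front end over this same file.

```shell
# Modules loaded into the running kernel are listed in /proc/modules
# (lsmod presents this same information in a nicer format).
if [ -r /proc/modules ]; then
  echo "loaded kernel modules: $(wc -l < /proc/modules)"
else
  echo "/proc/modules is not available in this environment"
fi
```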
Supplemental reading for Linux Devices and Drivers
Linux Devices and Drivers
In this reading, you will learn how devices and drivers are managed in Linux. Previously, you
learned that in Linux, devices attached to the computer are recognized by the operating system as
device files. Devices are located in the /dev directory in Linux. A few examples of devices you
may find in the /dev directory include:
• /dev/sda - First SCSI drive
• /dev/sr0 - First optical disk drive
• /dev/usb - USB device
• /dev/usbhid - USB mouse
• /dev/usb/lp0 - USB printer
• /dev/null - discard
Some of the Linux device categories include:
• Block devices: Devices that can hold data, such as hard drives, USB drives, and
filesystems.
• Character devices: Devices that input or output data one character at a time, such as
keyboards, monitors, and printers.
• Pipe devices: Similar to character devices. However, pipe devices send output to a
process running on the Linux machine instead of a monitor or printer.
• Socket devices: Similar to pipe devices. However, socket devices help multiple
processes communicate with each other.
Installing a device in Linux
There are hundreds of versions of Linux available due to the fact that Linux is an open source
operating system. The methods for installing devices on Linux can vary from version to version.
The instructions in this section provide various options for installing a printer and its device
drivers on a Red Hat 9 Linux system running the GNOME user interface.
Device autodetect with udev
Udev is a device manager that automatically creates and removes device files in Linux when the
associated devices are connected and disconnected. Udev has a daemon running in Linux that
listens for kernel messages about devices connecting and disconnecting to the machine.
Installation through a user interface - GNOME
There are multiple user interfaces available for Linux. These instructions are specifically for the
GNOME user interface.
1. In the GNOME user interface, open the Settings menu.
2. On the left-side menu, select Printers.
3. Click the Unlock button in the top right corner to change the system settings. Note that
your user account must have superuser, sudo, or printadmin privileges to unlock the
system settings for printers.
4. A dialog box will open showing a list of available printers. If your network has a large
number of printers, you can search for the printer by IP address or host name.
5. Select the printer you want to install on the local system and click Add.
6. The printer listing will appear in the Settings window for the Printers.
7. In the top right corner of the printer listing, click the Printer Settings icon and select
Printer Details from the pop-up menu.
8. The details of the printer will open in a new window. You should have three options for
installing the printer driver:
a. Search for Drivers: The GNOME Control Center will automatically search for
the driver in driver repositories using PackageKit.
b. Select from Database: Manually select a driver from any databases installed on
the Linux system.
c. Install PPD File: Manually select from a list of PostScript printer description
(PPD) files, which may be used as printer drivers.
Installation through the command line
Red Hat Linux uses the Common Unix Printing System (CUPS) to manage printers from the
command line. CUPS servers broadcast to clients for automatic printer installation on Linux
machines. However, for network environments with multiple printers, it may be preferable to
manually install specific printers through the command line.
• From the command-line, enter $ lpadmin -p printername -m driverfilename.ppd
o lpadmin is the printer administration command.
o The -p printername option adds or modifies the named printer.
o The -m driverfilename.ppd option installs the PostScript printer description
(PPD) driver filename that you provide. The file should be stored in the
/usr/share/cups/model/ directory.
o Enter $ man lpadmin to open the manual for the lpadmin command to find
additional command line options.
How to check if a device is installed
There are a couple of methods for checking if a device is already installed on a Linux machine:
Through a user interface like GNOME
1. In the GNOME user interface, open the Settings menu.
2. Browse each device set on the left-side menu.
3. The attached devices of the selected device type will appear in the window pane on the
right.
Through the command line
The most common way to check if a device is installed is to use the “ls” (lowercase L and S)
command, which means “list”.
• $ ls /dev - Lists all devices in the /dev folder
• $ lspci - Lists devices installed on the PCI bus
• $ lsusb - Lists devices installed on the USB bus
• $ lsscsi - Lists SCSI devices, such as hard drives
• $ lpstat -p - Lists all printers and whether they are enabled
• $ dmesg - Lists devices recognized by the kernel
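Combining the first listing command above with the device-type letters from earlier, a quick sketch can count the block and character device files under /dev:

```shell
# Count block ('b') and character ('c') device files in /dev by the
# ls -l type letter; `|| true` keeps a zero count from aborting the script.
chars=$(ls -l /dev | grep -c '^c' || true)
blocks=$(ls -l /dev | grep -c '^b' || true)
echo "character devices: $chars, block devices: $blocks"
```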
You've made it to the last lesson in this module where we're going to cover the most important
software, the operating system. We've already looked at how to install and maintain applications
like a word processor, graphical editor, etc. Then we looked at how to install device driver
software too. Now we're going to look at the core operating system updates. Spoiler alert, they
work just the same way as every other software we've installed.
It's important to keep your operating system up to date for lots of different reasons. You want the
newest features that your operating system has, and you want the security updates that your
operating system needs. When your operating system manufacturer discovers a security hole in
the OS, they do their best to create a patch for this hole.
A security patch is software that's meant to fix up a security hole. When you have an operating
system update with security patches, it's vital that you install those patches right away. The longer
you wait, the more prone you are to being affected by a security hole. As an IT support specialist,
it's very common to routinely install operating system updates to keep your system up to date
and secure.
Windows usually does a great job of telling you when there are updates to install. The Windows
Update Client service runs in the background on your computer to download and install updates
and patches for your operating system. It does this by checking in with the Windows Update
servers at Microsoft every so often; you can learn more in the next reading. If it finds updates
that should be applied to your computer, it'll download them (if you've decided to allow it to; more
on that later). Once the download has completed, depending on your Windows Update settings,
the Windows Update Client will ask you if it's okay to install the updates or just go ahead and
install them automatically. This process usually requires a restart of your computer, which the
Client performs after requesting permission. In versions of Windows before Windows 10, you
can tell Windows to manage your updates in a few different ways. You could have the Windows
Update Client install updates and patches that Microsoft releases automatically, or you can let
Windows Update know that you want to decide whether or not you'd like to download and install
them. You can even turn off updating entirely, but that's probably not a good idea for the security
reasons we talked about. You can configure Windows Update by searching updates in the search
box and going to the Windows Update settings.
From there, you can tell the Windows Update Client to check for new updates, look at the history
of updates installed, or change the way that it'll download and apply patches by clicking into the
settings section.
From there, you can tell the Update Client how you want to manage your updates and even set a
time when you want them installed. Windows 10 does things differently: instead of downloading
a handful of independent updates that you can choose to apply or not apply to your computer,
updates are cumulative. This means that every month a package of updates and patches is
released that supersedes the previous month's updates.
The idea behind this is that computers will need to download less stuff in order to be up to date.
As an example of how this might be beneficial, think about a Windows machine that's been
turned off for a while. When it boots up again after a long period of inactivity, it'll need to
download all of the updates that it's missed and apply them. If it's been off for a really long time,
this could mean it'll need to download and apply hundreds of updates. With the cumulative
update model, a computer like that would only need to download the latest cumulative update,
then be good to go.
One downside to this is that in Windows 10, installing updates is no longer optional. You also
can't pick and choose the updates you want to apply, since they're all rolled into one monthly
release. Microsoft has announced that the update model in Windows 7 and 8 will also be moving
in this cumulative package direction. So, Windows 10 users won't be alone.
Supplemental Reading for Windows Update
Windows Update
The Windows operating system updates frequently. These updates often include important
security patches. It is important to keep your Windows systems up to date with the most current
changes. This reading covers the different types of Windows updates and how to install them.
The Windows OS includes the Windows Update Client service. This service runs in the
background on your computer to help you download and install updates and patches for the
operating system. It does this by checking in with the Windows Update servers at Microsoft and
looking for updates that should be applied to your computer. If your Windows system is
functioning properly, the Windows Update Client will alert you when there are updates to install.

Types of Windows updates


There are several types of updates that the Windows Update Client might find for your Windows
system.
• Critical updates address critical bugs that are not security related. These are widely
released fixes for a specific problem.
• Definition updates are widely released and frequent updates to a product's definition
database. Definition databases are used to detect specific types of objects on your system,
such as malicious code, phishing websites, or junk mail.
• Driver updates: Drivers are software that control the input and output of devices running
on your system. This software may be updated when new versions of the driver become
available for your devices or if you install a new device on your system.
• Feature packs add new product functionality to your system. This functionality is first
distributed as an update to a product currently running on your system. It is usually
included in the next full product release.
• Security updates are widely released patches for a security related vulnerability.
Security vulnerabilities are rated by severity as being critical, important, moderate, or
low.
a) Critical vulnerabilities pose an active threat. Patch should be installed immediately.
b) Important vulnerabilities pose a likely threat. Patch should be installed as soon as possible.
c) Moderate vulnerabilities pose a potential threat. Patch should be installed soon.
d) Low severity vulnerabilities are not an immediate threat, but a patch is recommended.
• Service packs collect all tested hotfixes, security updates, critical updates, and general
updates together and distribute them as a set. A service pack also may contain new fixes
or design changes requested by customers.
• General updates are widely released fixes for specific problems. They address
noncritical bugs that are not security related.
• Update rollups collect a set of tested hotfixes and updates that target a specific area,
such as a component or service. These fixes and updates are packaged together for easy
deployment.
• Security-only updates collect all the new security updates from a given month for
distribution through the Windows Server Update Services (see below). These updates are
called “Security Only Quality Update” when you download them and will be rated as
“Important.”
• New OS: A new version of the Windows operating system may also be deployed through
the Windows Update Client. For example, Windows 10 and 11 were both delivered as
updates to a previously installed OS.
Installing updates
The process for installing updates may be automatic, depending on which version of Windows
you're using.
Automatic updates
Beginning with Windows 10, the Windows OS ships with automatic updates turned on. With
automatic updates on, Windows Update Client will download and install available updates
without prompting you. For older versions of Windows, you must configure Windows Update to
update automatically.
Windows 10 and 11 no longer allow you to turn off automatic updates completely, but you can
pause updates for up to 35 days. Once the pause period ends, you are required to perform an
update before you can pause again.
Manual updates
You can manually prompt Windows to perform an update at any time by checking for updates
with the Windows Update tool. Manually updating does vary based on the version of Windows
used. For detailed instructions on how to do this, see the Windows Update: FAQ page.
To ensure top performance and security for your Windows system you should make sure it is
always updated to the most recent changes.
Key takeaways
The Windows operating system updates frequently, so it is important that you know how to keep
your Windows systems up to date with the most current changes.
• Windows operating systems include the Windows Update Client service to help you
download and install updates and patches for the operating system.
• There are several types of updates that the Windows Update Client might find for your
Windows system.
• The process for installing updates depends on which version of Windows you’re using.
• Regular updates ensure top performance and security for your Windows system.
In Linux, you've already learned how to update and upgrade software on your machines. When
using the apt update and apt upgrade commands, they may already install security updates for
you. But when you run apt upgrade, it doesn't upgrade the core operating system. In Windows,
our OS package is Windows 10. In Linux, it's the kernel along with other packages. The kernel
controls the core components of our operating system. Like our word processors, the kernel is
just another package. The kernel developers regularly include security patches, new features, and
fixes for bugs in their updates. If you want to get all these things, you should be running a new
kernel. To first view what kernel version you have, we're going to learn a new command called
uname. The uname command gives us system information. If you use the -r flag for
kernel release, you'll see what kernel version you have.
You can see that I have kernel version 4.1 on here. To update the kernel and other
packages, we use our nifty apt command with the option full-upgrade. Before running this
command, remember to update your application sources with apt update: sudo apt update.
Now, we can run sudo apt full-upgrade.
If there's a new version of the kernel available, it will install it for us. Once you reboot the
computer, you can start using it. You can verify the latest kernel is being used with the uname
-r command. We left out a few details about kernel installations and security updates, but
this is a good start to updating your system. If you're curious about learning the intricate details of
kernel and Linux updates, check out the supplemental reading. With that, we've covered all the
essentials to help you hit the ground running with software installation and maintenance. Great
work. You learned how to install standalone packages, use package managers, and work with
archives, device drivers, and core operating system updates. These skills will be super useful as
an IT support specialist. Next, we're testing you again on both Bash and Windows. When you've
finished, I'll see you in the next module.
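Collected from this lesson, the kernel-update workflow looks like the sketch below. The apt lines need root privileges and a network connection, so they're shown commented out here.

```shell
# Kernel-update workflow from the lesson (apt lines commented out: they
# require root and network access).
uname -r                     # show the currently running kernel release
# sudo apt update            # refresh the package lists first
# sudo apt full-upgrade      # upgrade all packages, including the kernel
# ...then reboot, and run `uname -r` again to confirm the new kernel is in use
```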
Supplemental Reading for Linux Update
Linux Update
Linux is a free, open-source operating system used on a wide variety of computing systems, such
as embedded devices, mobile devices including its use in the Android operating system, personal
computers, servers, mainframes, and supercomputers. The Linux kernel is the core interface
between a device’s hardware and the rest of its processes. The kernel controls all the major
functions of hardware running the Linux operating system. To keep the core operating system up
to date with current security patches, new features, and bug patches, you need to update the
Linux kernel. This reading covers how the Linux kernel functions and how to update Ubuntu, the
most common Linux distribution.
Linux kernel
The Linux kernel is the main component of a Linux operating system (OS). The kernel is
software located in the memory that tells the central processing unit (CPU) what to do. The
Linux kernel is like a personal assistant for the hardware that relays messages and requests from
users to the hardware.
The kernel has four main jobs:
1. Memory management tracks how much memory is being used by what and where it is
stored.
2. Process management determines which processes can use the central processing unit
(CPU), when, and for how long.
3. Device drivers act as an interpreter between the hardware and processes.
4. System calls and security receives requests for service from the processes.
To ensure that your Linux distribution is running the most current version of the operating system,
you will need to update it regularly.
Updating Ubuntu Linux distribution
A Linux distribution is an operating system (OS) that includes the Linux kernel and usually a
package management system. There are almost one thousand Linux distributions, and each
distribution has a slightly different way of updating.
The Ubuntu distribution is one of the most popular since it is easy to use. There are two ways to
update the Ubuntu distribution:
• Update Manager is a graphical user interface (GUI) that is nearly 100% automated.
When updates are available, it will open on your desktop and prompt you to complete the
updates. It checks for security updates daily and nonsecurity updates weekly. You can
also choose to check for updates manually.
• Apt is the Ubuntu package management system that uses command line tools to update a
Ubuntu distribution. Apt does not check for updates automatically; you must manually
run it to check for updates. You can use the following commands to check for updates
and upgrade:
1. apt-get update: To update with apt, open the terminal and use the command sudo apt-get
update. This command prompts you to enter your password, then it updates the list of
system packages.
2. apt-get upgrade: Once the package list is up to date, use the command sudo apt-get upgrade
to actually download and install all updated versions of the packages in the list.
Key Takeaways
Linux is a free open-source operating system used on a wide variety of computing systems.
• The kernel is a part of the operating system of Linux and runs communications between
the computer’s hardware and its processes.
• Ubuntu is the most popular distribution because it is easy to use and update with the
Update Manager or the command sudo apt-get upgrade.
• As improvements to the processes are released, Linux needs to be updated to ensure the
kernel communicates the right information to the hardware about the process.
Resources for more information
For more information on updating various distributions of Linux, visit this Linux Foundation
article.
For more complete command information for using apt in Ubuntu, visit Ubuntu’s guide here.
Week 4 Filesystems
Filesystem Types

Wow. We've covered a lot of material so far. In the last lesson, we went over how to install,
uninstall, and maintain software in the Windows and Linux OSs. These are tasks that you'll find
yourself doing over and over as an IT support specialist. In this lesson, we're going to cover
another very important function of an IT support specialist; working with disks. In our first
course, Technical Support Fundamentals, we learned about physical disks like hard disk drives
and SSDs. In this lesson, we're going to expand on that and talk about the tools needed to make a
disk usable on a computer. Ready? Let's get started.
You may remember that we introduced the concept of a filesystem in the Technical Support
Fundamentals course. Here's a refresher. A filesystem is used to keep track of files and file
storage on a disk. Without a filesystem, the operating system wouldn't know how to organize
files. So when you have a brand new disk or any type of storage device, like a USB drive, you
need to add a filesystem to it.
There are lots of file systems out there, but the two that we'll talk about in this course are
recommended as default filesystems for Windows and Linux. For Windows, we use the NTFS
filesystem, and for Linux, it's recommended to use ext4. Filesystems have different
compatibilities with different OSes. Most of the time, cross operating system support is minimal
at best. Let's say you have a USB drive that's using an NTFS filesystem. Both Windows and
Linux's Ubuntu can read and write to the USB drive. But if you have an ext4 USB drive, it'll only
work on Ubuntu and not on Windows, at least without the help of third party tools.
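To make the compatibility above concrete, here is a small lookup-table sketch. It's a simplified summary of just the three filesystems discussed here (the `SUPPORT` table and `can_use` helper are illustrative, not a real API), and third-party tools can extend support on any OS.

```python
# Simplified native read/write compatibility, per the discussion above.
# Not exhaustive: third-party tools can extend support on any OS.
SUPPORT = {
    "NTFS": {"Windows", "Ubuntu"},              # Ubuntu can read/write NTFS
    "ext4": {"Ubuntu"},                         # Windows needs extra tools
    "FAT32": {"Windows", "Ubuntu", "macOS"},    # works on all three
}

def can_use(filesystem, os_name):
    """Return True if os_name can read and write the filesystem natively."""
    return os_name in SUPPORT.get(filesystem, set())
```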
It's pretty likely that you'll encounter this situation in an IT support role. Let's say you have some
important files on that same USB drive that you want to copy over to your Windows, Linux, and
Mac OSes, what would you do then? This is a pretty common situation. You'd have to reformat
or wipe the USB drive and add a filesystem that's compatible with all three operating systems.
Luckily, there are filesystems like FAT32 that support reading and writing data to all three major
operating systems. FAT32 has some shortcomings, though. It doesn't support files larger than 4
gigabytes, and Windows won't let you format a FAT32 volume larger than 32 gigabytes. This might be enough
for a small USB drive, but it's not really great for anything else.
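As a rough sketch of the file-size limit just mentioned: FAT32 records file sizes in a 32-bit field, so the largest possible file is 2^32 − 1 bytes, just under 4 GiB. The helper below is an illustration, not a real API:

```python
# FAT32's file-size field is 32 bits, so the biggest file it can
# record is 2**32 - 1 bytes (just under 4 GiB).
FAT32_MAX_FILE_BYTES = 2**32 - 1   # 4,294,967,295

def fits_on_fat32(file_size_bytes):
    """Illustrative check: can a single file of this size live on FAT32?"""
    return file_size_bytes <= FAT32_MAX_FILE_BYTES

# A 5 GiB video won't fit, while a 3 GiB one will.
```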
You can learn more about FAT32 in the next supplemental reading. This still begs the question,
what if you wanted to be able to share files between multiple OSes and don't want to deal with
filesystem limitations? Don't worry, we've got you covered. In the next course on system
administration and IT infrastructure services, we'll discuss another filesystem type called
network filesystems that solves this exact problem. All right, now that you've got a quick
refresher on filesystems, let's spend the next few lessons discussing how you actually set them
up.
Supplemental Reading for FAT32 File System
For more information about the FAT32 File System, please check out the link here.
Before we start adding a filesystem to a disk, let's do a rundown of the components of the disk
that allow you to store and retrieve files. A storage disk can be divided into something called
partitions. A partition is just a piece of the disk that you can manage. When you create multiple
partitions, it gives you the illusion that you're physically dividing a disk into separate disks.
To add a filesystem to a disk, first you need to create a partition. Usually, we just have a single
partition for our OS, but it's not uncommon to have multiple partitions for different uses. Let's
say you want to have two partitions on a disk, one for a Windows OS and one for a Linux OS.
Instead of using two machines to use both operating systems, you can just use one machine and
switch between the two OSs on boot-up. You can also add different filesystems on different
partitions of the same disk. Partitions essentially act as their own separate sub-disks, but they all
use the same physical disk. One thing to call out is that, when you format a filesystem on a
partition, it becomes a volume. Volume and partition are sometimes mistakenly used
synonymously, but we want to make sure that you understand this distinction.
The other component of a disk is a partition table. A partition table tells the OS how the disk is
partitioned. The table will tell you which partitions you can boot from, how much space is
allocated to each partition, etc. There are two main partition table schemes that are used: MBR, or
Master Boot Record, and GPT, or GUID Partition Table.
These schemes decide how to structure the information on partitions. MBR is a traditional
partition table, and it's mostly used in the Windows OS. MBR only lets you have volume sizes of
2 terabytes or less. It also uses something called primary partitions. You can only have four
primary partitions on a disk. If you want to add more, you have to take a primary partition and
make it into something known as an extended partition. Inside the extended partition, you can
then make something called a logical partition. It's a little odd to get at first, but that's just how
the partition table was created.
MBR is an old standard, and it's slowly being phased out by the next partition table scheme we'll
talk about, GPT. GPT is becoming the new standard for disks. You can have a volume size
greater than 2 terabytes, and it only has one type of partition. You can make as many of them as
you want in a disk. In an earlier lesson, we learned about a new BIOS standard called UEFI that's
become the default BIOS for newer systems. To use UEFI booting, your disk has to use the
GUID Partition Table. Now that you know what you need to do to make a partition, let's partition
an actual disk. In the next few lessons, we're going to learn how to partition and format a USB
drive for each respective OS.
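As a quick aside, the 2 terabyte MBR ceiling mentioned above has a simple arithmetic explanation: MBR stores sector addresses as 32-bit numbers, and disks traditionally use 512-byte sectors, so the largest addressable volume works out to exactly 2 TiB:

```python
# MBR uses 32-bit sector addresses; classic disks have 512-byte sectors.
SECTOR_SIZE = 512
MBR_MAX_BYTES = 2**32 * SECTOR_SIZE   # largest volume MBR can address

# 2**32 sectors * 512 bytes/sector = 2**41 bytes = 2 TiB exactly.
```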
Now that we've got a little theory under our belts, how can we actually partition a disk and
format a file system in Windows? Although a quick web search will turn up all kinds of third
party disk partitioning programs other people have written, Windows actually ships with a great
native tool called the Disk Management utility. Like most things in Windows, there are a few
ways to get to Disk Management. We'll launch it by right-clicking This PC, selecting the
"Manage" option, then clicking the "Disk Management" console underneath the Storage grouping.
We should see a display of both the disks and disk partitions along with information about what
type of file system they're formatted with. There are all kinds of good things to know here too.
Like the free and total capacity of disks and partitions. One super-cool property of the disk
management console is that from here, you can also make modifications to the disk and
partitions on your computer. Messing with the partition where the Windows operating system is
installed probably isn't the best way to demonstrate the partitioning and formatting abilities of
the disk management console. So let's use a USB drive instead. Once the drive has been inserted
and the plug and play service does the work of installing the driver for it, you should see it show
up in the disk management as an additional disk. The USB drive is currently formatted using the
FAT32 file system. Let's go ahead and reformat the partition using NTFS instead. To do this, we
right click on the partition and choose format.
From this window, we can choose the volume label or name we'd like to give the disk. Let's just
stick with USB drive. You can also specify the file system which will change to NTFS. That's
pretty straightforward, but there are also some other options that might not be so clear. Like
what's that allocation unit size thing? Well, the allocation unit size is the block size that will be
used when you format the partition in NTFS. In other words, this is the size of the chunks that
the partition will be chopped into. Data that needs to be saved will spread out across those
chunks. This means that if you store lots of small files, you'll waste less space with small block
sizes. If you store large files, larger block sizes will mean you'll need to read fewer blocks to
assemble the file. We'll pick the default, which is fine in most cases. You'll also see the option to
perform a quick format is available. The difference between a quick format and a full format is
that in a full format, Windows will do a little extra work to scan the disk or USB drive in our
case, for errors or bad sectors. This extra work makes the formatting process take a little longer, so
we'll just stick with quick for now; we don't want anything to slow us down. The
last option on the format screen is whether or not to enable file or folder compression. The
decision to enable or disable compression comes with a trade-off. If you enable compression,
your files and folders will take up less space on the disk, but compressed files will need to be
expanded when you open them, which means the computer's processor will need to do some
extra work. We aren't particularly concerned with squeezing out every last bit of disk space, so
we'll leave this box unchecked. Finally, we can hit "OK" to proceed with the format. Windows
will warn us first that formatting the volume will erase any data that might be on it. Once we let
it know that it's okay, it'll start the formatting process. After a little bit of processing, we should
see the label on the partition turn to healthy. Using the GUI is pretty intuitive, but there's also a
command line way to accomplish the same task. This can come in handy if you need to automate
disk partitioning. To do disk manipulation from the CLI we'll dive into a tool called Diskpart.
Diskpart is a terminal based tool built for managing disks right from the command line. Let's
format our thumb drive again, but using Diskpart instead of the GUI. First off, we'll plug in our
thumb drive. Then, to launch Diskpart, all we need to do is open up a command prompt, in this
case cmd.exe, and type diskpart into it.
This will open up another terminal window where the prompt should read DISKPART. You can list
the current disks on the system by typing "list disk". Next, we identify the disk we want to
format. A good signal is the size of the disk, which will be much smaller for a USB drive. Then
we can select it with "select disk 1". Now we'll wipe the disk using the "clean" command,
which will remove any and all partition or volume formatting from the disk. With the
disk wiped, we now need to create a partition in it. This can be done with the create partition
primary command, which will create a blank partition for our file system.
Then let's select the partition with select partition one. That's the number of our freshly created
partition, and now we'll mark it as active by simply typing active. If you guessed that the next step
is to format the disk with the NTFS file system, you're right. We can do this by running a format
command at the Diskpart prompt: format fs=ntfs label="my thumb drive" quick. The fs parameter sets
the file system to NTFS, label gives the volume its name, and quick selects a quick format. This
command will format the thumb drive with NTFS in quick mode, which we talked about earlier
and we just gave it the name "My thumb drive". Congratulations, you've just formatted a USB
drive from the command line. If you want to learn more about the options and tasks you can
accomplish with Diskpart, check out the Diskpart link in the supplementary reading I've included
right after this video. And there you have it, that's how you format a disk with the NTFS file
system in the Windows operating system using both the command line and the GUI. If you want
a refresher, feel free to watch this lesson again before heading to the next one.
Supplemental Reading for Disk Partitioning and Formatting in Windows
Disk Partitioning and Formatting in Windows
Disk partitioning enables more efficient management of hard disk space by breaking or “slicing” up the
disk storage space into partitions. This allows each partition to be managed separately, reducing
inefficient use of space. DiskPart is a disk partitioning utility on the Windows operating system
which uses the command line to perform operations. This reading covers the component parts that make
up a drive, common DiskPart commands, and how cluster size affects your usable drive space in the
Windows OS.

DiskPart
The DiskPart command terminal helps you manage storage on your computer's drives. The DiskPart utility can
be used to manage partitions of hard disks including creating, deleting, merging, or expanding partitions
and volumes. It can also be used to assign a file formatting system to a partition or volume.

There are three main divisions of storage that you will find on a drive: cluster, volume, and partition.
• Cluster (allocation unit size) is the minimum amount of space a file can take up in a volume or
drive.
• Volume is a single accessible storage area with a single file system; this can span a single disk
or multiple disks.
• Partition is a logical division of a hard disk that creates unique spaces on a single drive.
Partitions are generally used to allow multiple operating systems on one drive.
To use DiskPart you will need to use specific commands to select and manage the parts of your drive you
need to access. For a list of common DiskPart terminal commands visit this helpful guide.

The commands let you work with partitions and volumes, but the base storage unit, called the cluster size,
is set when initializing the volume or partition.

Cluster Size
Cluster size is the smallest division of storage possible in a drive. Cluster size is important because a file will
take up the entire size of the cluster regardless of how much space it actually requires in the cluster.

For example, if the cluster size is 4kb (the default size for many formats and sizes) and the file you're trying
to store is 4.1kb, that file will take up 2 clusters. This means that the drive has effectively lost 3.9 kb of
space for use on a single file.
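The arithmetic in the example above can be sketched in a few lines, using the same 4 kb cluster and 4.1 kb file (the `wasted_kb` helper is just for illustration):

```python
import math

def wasted_kb(file_kb, cluster_kb):
    """Space lost to cluster rounding when storing a single file."""
    clusters_used = math.ceil(file_kb / cluster_kb)   # whole clusters only
    return clusters_used * cluster_kb - file_kb

# The 4.1 kb file needs 2 clusters of 4 kb, so 8 kb is allocated
# and 3.9 kb is effectively lost.
```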

When partitioning a disk, you should specify the cluster size based on your file sizes. If no cluster size is
specified when you format a partition, a default is selected based on the size of the partition. Using
defaults can result in loss of usable storage space.
It is important to remember when using DiskPart that the actions you take are permanent so be careful not
to erase data accidentally.

Key Takeaways
DiskPart is a tool that lets you manage your storage from a command line interface and is useful for a
multitude of actions including creating, deleting, merging, and repairing drives.

• The three main divisions of storage that you will find on a drive are cluster, volume, and partition.
• To use DiskPart you will need to use specific commands to select and manage the parts of your
drive you need to access.
• Cluster size is the smallest division of storage possible in a drive. Cluster size is important because
a file will take up the entire size of the cluster regardless of how much space it actually requires in
the cluster.

Now that you've formatted your new file system, there's one more step left. You have to mount
your file system to a drive. In IT, when we refer to mounting something like a file system or a
hard disk, it means that we're making something accessible to the computer. In this case, we
want to make our USB drive accessible so we mount the file system to a drive. Windows does
this for us automatically. You might have noticed this if you plug in a USB drive, it'll show up
on your list of drives and you can start using it right away. When you're done using the drive,
you'll just have to safely eject or essentially unmount the drive by right clicking and selecting
eject. We'll talk about why this is important in a later lesson.
In Linux, there are a few different partitioning command line tools we can use. One that supports
both MBR and GPT partitioning is the parted tool. Parted can be used in two modes. The first is
interactive, meaning we're launched into a separate program, like when we use the less
command. The second is command line, meaning you just run commands while still in your
shell. We're going to be using the interactive mode for most of this lesson. Before we do that let's
run a command to show what disks are connected to the computer using the command line mode.
We can do this by running the parted -l command. So, sudo parted -l. This lists out the disks that
are connected to our computer. We can see that the disk /dev/sda is 128 gigabytes. I've also
plugged in a USB drive, and you can see that /dev/sdb is around 8 gigabytes. Let's quickly go
through what this output says. Here we can see the partition table is listed as gpt. The number
field corresponds to the number of partitions on the disk. We can see that there are three
partitions. Since this disk is /dev/sda, the first partition will correspond to /dev/sda1, the
second will correspond to /dev/sda2, et cetera. The start field is where the partition starts on the
disk. For this first partition we can see that it starts at 1,049 kilobytes and ends at 538 megabytes.
The field after that shows us how large the partition size is. The next field tells us what file
system is on the partition. Then, we have the name and finally, we can see some flags that are
associated with this partition. You can see here that /dev/sdb doesn't currently have any
partitions, we'll fix that in a minute. Let's select our /dev/sdb disk and start partitioning it. We
want to be super careful that we select the correct disk when partitioning something so we don't
accidentally partition the wrong disk. We're going to use the interactive mode of parted by
running sudo parted /dev/sdb. Now we're in the parted tool. From here, we can run more
commands. If we want to get out of this tool and go back to the shell then we just use the quit
command. I'm going to run print just to see this disk one more time. It says we have an
unrecognized disk label. We'll need to set a disk label with the mklabel command. Since we want
to use the GPT partition table, let's use this command: mklabel gpt. Let's look at the status of our
disk again; to do that, we can use the print command. Here we can see the disk information for the
selected /dev/sdb disk. Now it says we have the partition table gpt. All right. Let's start making
modifications to the disk. We want to partition the /dev/sdb disk into two partitions. Inside the
parted tool, we're going to use the mkpart command. The mkpart command needs the following
information: what type of partition we want to make, what file system we want to format it with,
and where on the disk the partition starts and ends, like this: mkpart primary ext4 1MiB 5GiB.
The partition type is only meaningful for MBR partition tables. Remember, MBR uses primary,
extended, and logical partitions. Since we are formatting this using GPT, we're just going to use
primary as the partition type. The start point here is one mebibyte and the endpoint is five
gibibytes. So our partition is essentially five gibibytes. Remember from the earlier course, that
data sizes have long been referred to in two different ways, using the exact data measurement
and the estimated data measurement. Remember that one kibibyte is actually 1,024 bytes while
one kilobyte is 1,000 bytes. We haven't really had to care about this distinction before. Some
operating systems sometimes measure one kilobyte as 1,024 bytes which is confusing, but when
dealing with data storage we want to make sure we're using the precise measurements so we
don't waste precious storage space. Let's opt to use mebibyte and gibibyte in our partition. Next,
we're going to format the partition with a file system using mkfs. So I'm just going to run sudo
mkfs -t ext4 on the partition I want to format, /dev/sdb1. We also left the rest of the
disk unpartitioned because we're going to use it for something else later. With that, we've created
a partition and formatted a file system on a USB drive. Remember to always be careful when
using the parted tool. It's very powerful and if you modify the wrong disk on here it could cause
a pretty big mess. Even though we've partitioned our disk and formatted a file system on here,
we're not actually able to start reading and writing files to it just yet. There's one last step to get a
usable disk in Linux. We have to mount the file system to a directory so that we can access it
from the shell. Spoiler alert, you'll learn how to do that in the next video.
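Before moving on, the kibibyte/kilobyte distinction used in the partitioning step above is easy to quantify:

```python
# Binary (base-2) vs decimal (base-10) units, as discussed above.
KIB, MIB, GIB = 2**10, 2**20, 2**30    # kibibyte, mebibyte, gibibyte
KB, MB, GB = 10**3, 10**6, 10**9       # kilobyte, megabyte, gigabyte

# A 5 GiB partition holds noticeably more than a "5 GB" one:
difference = 5 * GIB - 5 * GB          # 368,709,120 bytes (~351 MiB)
```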
To begin interacting with the disk, we need to mount the file system to the directory. You might
be thinking, why can't we just cd into /dev/sdb? That's the disk device, isn't it? It is, but if we try
to cd into /dev/sdb like this, we'd get an error saying the device is not a directory, which is true.
To resolve this, we need to create a directory on our computer and then mount the file system of
our USB drive to this directory.
Let's pull up where our partition is with sudo parted -l. Okay, I can see that the partition we want
to access is /dev/sdb1. I've already created a directory under root called my_usb. So let's give this
a try. So sudo mount /dev/sdb1 /my_usb/. Now if we go to my_usb, we can start reading and
writing to the new file system. We actually don't need to explicitly mount a file system using the
mount command. Most operating systems actually do this for us automatically, when we plug in
a device like a USB drive.
File systems have to be mounted one way or the other, because we need to tell the OS how to
interact with the device. We can also unmount the file system in a similar way using the umount
command. Unmounting is the opposite of mounting a disk. So now let's unmount the file system.
I can either use sudo umount /my_usb, or sudo umount /dev/sdb1. Both will work to unmount a
file system. When you shut down your computer, disks that were mounted manually are
automatically unmounted. In some cases, like if we were using a USB drive, we just want to
unmount the file system for the USB drive without shutting down.
Always be sure to unmount a file system of a drive before physically disconnecting the drive. In
the case of the USB drive, we can run into some interesting file system errors if we don't do this.
We'll talk more about this in the upcoming lesson. Also, keep in mind that when we use the
mount command to mount a file system to a directory, once we shut off the computer, the mount
point disappears. We can permanently mount a disk, though, if we need it to automatically load
up when the computer boots.
To do this, we need to modify a file called /etc/fstab. If we open this up now, you'll see a list of
unique device IDs, their mount points, what type of file system they are, plus a little more
information. If we want to automatically mount file systems when the computer boots, just add
an entry similar to what's listed here. Let's go ahead and do that really quickly.
The first field that we need to add in /etc/fstab is the UUID, or Universally Unique ID, of our
USB drive. To get the UUIDs of our devices we can use this command: sudo blkid. This will
show us the UUIDs for block device IDs, aka storage device IDs, and that's it. We've covered a lot
of essential disk management tasks. So far we've partitioned a disk, added a file system, and
mounted it for use. If you're curious and want to learn more about the /etc/fstab file and its
options, check out the next supplemental reading. Otherwise, let's move on.
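For illustration, a permanent entry in /etc/fstab for the USB drive from this lesson might look like the sketch below. The UUID shown is made up; you would substitute the one reported by sudo blkid for your device:

```
# <file system>                             <mount point>  <type>  <options>  <dump>  <pass>
UUID=0c3f7a9e-1b2c-4d5e-8f90-a1b2c3d4e5f6  /my_usb        ext4    defaults   0       0
```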

Supplemental Reading: Mounting and Unmounting a Filesystem in Linux
Mounting and Unmounting a File System
In this reading, you will learn how to mount and unmount file systems in Linux using the fstab table. IT
Support professionals who work with Linux systems should know how to mount and unmount file systems
both manually and automatically. This skill is often used when configuring Linux servers and other Linux
systems to connect to network file systems.

File system table (fstab)


File System Table (fstab) is a Linux configuration table. It helps to simplify mounting and unmounting file
systems in Linux. Mounting means to connect a physical storage device (hard drives, CD/DVD drives,
network shares) to a location, also called a mount point, in the directory tree. In the past, IT Support
specialists for Linux systems had to manually mount hard drives and file systems using the mount
command. The fstab configuration file made this administrative task more efficient by offering the option
to automate the mounting of partitions or file systems during the boot process. Additionally, fstab allows
for customized rules for mounting individual file systems.

The fstab configuration table consists of six columns containing the following parameters:

• Column 1 - Device: The universally unique identifier (UUID) or the name of the device to be
mounted (sda1, sda2, … sda#).
• Column 2 - Mount point: Names the directory location for mounting the device.
• Column 3 - File system type: Linux file systems, such as ext2, ext3, ext4, JFS, JFS2, VFAT, NTFS,
ReiserFS, UDF, swap, and more.
• Column 4 - Options: List of mounting options in use, delimited by commas. See the next section
titled “Fstab options” below for more information.
• Column 5 - Backup operation or dump: This is an outdated method for making device or
partition backups and command dumps. It should not be used. In the past, this column contained
a binary code that signified:
o 0 = turns off backups
o 1 = turns on backups
• Column 6 - File system check (fsck) order or Pass: The order in which the mounted device
should be checked by the fsck utility:
o 0 = fsck should not run a check on the file system.
o 1 = mounted device is the root file system and should be checked by the fsck command
first.
o 2 = mounted device is a disk partition, which should be checked by fsck command after
the root file system.
Example of an fstab table:

<File System> <Mount Point> <Type> <Options> <Dump> <Pass>


/dev/sda1 / ext3 nouser 0 1
/dev/sda2 swap swap defaults 0 0
/dev/hda1 /mnt/shared nfs rw,noexec 0 2
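The Pass column drives the order in which fsck checks mounted devices at boot: 0 is skipped, the root file system (1) goes first, then everything marked 2. A small sketch of that ordering, using entries like the example table:

```python
# (mount point, pass value) pairs, as in the example fstab table.
entries = [
    ("/", 1),            # root file system: checked first
    ("swap", 0),         # 0 means fsck skips this entry
    ("/mnt/shared", 2),  # non-root partitions checked after root
]

# fsck skips pass 0 and checks the rest in ascending pass order.
check_order = [mnt for mnt, p in sorted(entries, key=lambda e: e[1]) if p > 0]
```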
Fstab options
In Column 4 of the fstab table, the available options include:

• sync or async - Sets reading and writing to the file system to occur synchronously or
asynchronously.
• auto - Automatically mounts the file system when booting.
• noauto - Prevents the file system from mounting automatically when booting.
• dev or nodev - Allows or prohibits the use of the device driver to mount the device.
• exec or noexec - Allows or prevents file system binaries from executing.
• ro - Mount file system as read-only.
• rw - Mount file system for read-write operations.
• user - Allows any user to mount the file system, but restricts which user can unmount the file
system.
• users - Any user can mount the file system plus any user can unmount file system.
• nouser - The root user is the only role that can mount the file system (default setting).
• defaults - Use default settings, which include rw, suid, dev, exec, auto, nouser, async.
For more options, consult the man page for the file system in use.

Editing the fstab table


As an IT Support professional, you may need to expand the hard drive space on a server. Imagine that you
have installed a new hard drive and the Linux server does not seem to recognize the drive. In the
background, Linux has detected the new hardware, but it does not know how to display information about
the drive. So, you will need to add an entry in the fstab table so that Linux will know how to mount it and
display its entry within the file system. The following steps will guide you through this process:

1. Partition the drive using the fdisk command, then format the partition with a Linux-compatible
file system, like ext4, using the mkfs command.
2. Find which block devices the Linux system has assigned to the new drive. The block device is a
storage device (hard drive, DVD drive, etc.) that is registered as a file in the /dev directory. The
device file provides an interface between the system and the attached device for read-write
processes. Use the lsblk command to find the list of block devices that are connected to the
system.
Example output from the lsblk command:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT


sda 8:0 0 512G 0 disk
┖ sda1 8:1 0 1G 0 part /boot
sdb 8:16 0 1T 0 disk
┖ sdb1 8:17 0 128G 0 part
The seven columns in the output from the lsblk command are as follows:

a. NAME - Device names of the blocks. In this example, the device names are the existing sda drive and
sda1 partition plus the new sdb hard drive and a newly formatted sdb1 partition.

b. MAJ:MIN - Major and minor code numbers for the device:

1. The major number is the driver type used for device communication. A few examples include:
• 1 = RAM
• 3 = IDE hard drive
• 8 = SCSI hard drive
• 9 = RAID metadisk
2. The minor number is an ID number used by the device driver for the major number type.
• The minor numbers for the first hard drive can range from 0 to 15.
a. The 0 minor number value for sda represents the physical drive.
b. The 1 minor number value for sda1 represents the first partition on the sda drive.
• The minor numbers for the second hard drive can range from 16 to 31.
a. The 16 minor number value for sdb represents the physical drive.
b. The 17 minor number value for sdb1 represents the first partition on the sdb
drive.
• Minor numbers for a third hard drive would range from 32 to 47, and so on.
c. RM - Indicates if the device is:

1. 0 = not removable
2. 1 = removable
d. SIZE - The amount of storage available on the device.

e. RO - Indicates file permissions:

1. 0 = read-write
2. 1 = read-only
f. TYPE - Lists the type of device, such as:

1. disk = hard drive


2. part = disk partition
g. MOUNTPOINT - The location where the device is mounted. A blank entry in this column means it is not
mounted.
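The minor-number layout described above (16 minors per sd* disk, with 0 for the whole drive and 1-15 for partitions) can be decoded in a couple of lines. The `decode_minor` helper is just an illustration of the scheme, not a real kernel API:

```python
def decode_minor(minor):
    """Map an sd* minor number to (drive index, partition number).

    Each disk owns a block of 16 minor numbers; within a block,
    0 means the whole drive and 1-15 are partitions.
    """
    drive_index, partition = divmod(minor, 16)
    return drive_index, partition

# minor 17 -> drive 1 (sdb), partition 1 (sdb1)
```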

3. Use an editor, like gedit, to open the fstab file. Example fstab file:

Device Mount Point File System Options Dump Pass

/dev/sda1 / ext3 nouser 0 1

4. To add the new file system partition:

1. In the first column, add the new file system device name. In this example, the device name would
be /dev/sdb1.
2. In the second column, indicate the mount point for the new partition. This should be a directory
that would be easy to find and identify for users. For the sake of simplicity, the mount point for this
example is /mnt/mystorage.
3. In the third column, enter the file system used on the new partition. In this example, the file system
used for the new partition is ext4.
4. In the fourth column, enter any options you would like to use. The most common option is
defaults.
5. In the fifth column, set the dump file to 0. Dump files are no longer configured in the fstab file, but
the column still exists.
6. In the sixth column, the pass value should be 2 because it is not the root file system and it is a best
practice to run a file system check on boot. Your fstab table should now include the new partition:
<File System> <Mount Point> <Type> <Options> <Dump> <Pass>

/dev/sda1 / ext3 nouser 0 1

/dev/sdb1 /mnt/mystorage ext4 defaults 0 2

7. Reboot the computer and check the mystorage directory for the new partition.

One term you might have heard in relation to disks and partitions, is swap space. Before we talk about
swap space, let's talk about the concept of virtual memory. Virtual memory is how our OS provides the
physical memory available in our computer (like RAM) to the applications that run on the computer. It does
this by creating a mapping of virtual to physical addresses. This makes life easier for the program, which
needs to access memory since it doesn't have to worry about what portions of memory other programs
might be using. It also doesn't have to keep track of where the data it's using is located in RAM. Virtual
memory also gives us the ability for our computer to use more memory than we physically have installed.
To do this, it dedicates an area of the hard drive to use as storage for blocks of data called pages. When
a particular page of data isn't being used by an application, it gets evicted. Which means it gets copied out
of memory onto the hard drive. This is because accessing data in RAM is fast, much faster than the hard
drive, but RAM space is at a premium. Because of this, the operating system wants to keep the most
commonly accessed data pages in RAM. It then puts stuff that hasn't been used in a while on the disk. This
way, if a program needs a page that's not accessed a lot, the operating system can still get to it. But it has
to read it from the comparatively slow hard drive and put it back into memory. Almost all operating
systems use some kind of virtual memory management scheme and paging mechanism. So how does it
work on Windows? The Windows OS uses a program called the Memory Manager to handle virtual
memory. Its job is to take care of that mapping of virtual to physical memory for our programs and to
manage paging. In Windows, pages saved to disk are stored in a special hidden file on the root partition of
a volume called pagefile.sys. Windows automatically creates page files, and it uses the Memory Manager
to copy pages of memory to and from disk as needed. The operating system does a pretty good job of
managing the page file automatically. Even so, windows provides a way to modify the size, number and
location of paging files through a control panel applet called System Properties. You can get to the system
properties applet by opening up the control panel.

Go to the System and Security section and click on System. Once in the System pane, you can open
up the advanced system settings from the left-hand menu. Pick the Advanced tab, then click on the Settings
button in the Performance section. One last time, click on the Advanced tab and you should see a section
called Virtual memory, which displays the paging file size. If you click the Change button, you can override
the defaults Windows provides, so you can set the size of the paging file and add paging files to other
drives on the computer. Microsoft has some guidelines for setting the paging file size that you can follow.
For example, on 64-bit Windows 7, the minimum paging file size should be set to 1x the amount of RAM in
the machine. Unless you have a specific reason to change it, it's generally fine to let Windows automatically
manage the paging file size itself.

Supplemental Reading for Windows Paging

Windows Paging Files

In this reading, you will learn about Windows paging files and their primary functions. You will also learn
how to set the appropriate Windows paging file size. As an IT Support specialist, you may want to add or
maintain page files to improve system performance. A paging file is an optional tool that uses hard drive
space to supplement a system’s RAM capacity. The paging file offloads data from RAM that has not been
used recently by the system. Paging files can also be used for system crash dumps or to extend the system
commit charge when the computer is in peak usage. However, paging files may not be necessary in
systems with a large amount of RAM.

Page file sizing

Determining the size needed for a paging file depends on each system’s unique needs and uses. Variables
that have an impact on page file sizes include:

• System crash dump requirements - A system crash dump is generated when a system crashes. A
page file can be allocated to accept the Memory.dmp. Crash dumps have several size options that
can be useful for various troubleshooting purposes. The page file needs to be large enough to
accept the size of the selected crash dump. If the page file is not large enough, the system will not
be able to generate the crash dump file. If the system is configured to manage page dumps, the
system will automatically size the page files based on the crash dump settings. There are multiple
crash dump options. Two common options include:

o Small memory dump: This setting will save the minimum amount of info needed to
troubleshoot a system crash. The paging file must have at least 2 MB of hard drive space
allocated to it on the boot volume of the Windows system. It should also be configured to
generate a new page file for each system crash to save a record of system problems. This
history is stored in the Small Dump Directory which is located in the
%SystemRoot%\Minidump file path.

▪ To configure a small memory dump file, run the following command using the
cmd utility:

wmic recoveros set DebugInfoType = 3

• Alternatively, this option can be configured in the registry:

Set the CrashDumpEnabled DWORD value to 3

• To set a folder as the Small Dump Directory, use the following command line:

wmic recoveros set MiniDumpDirectory = <folderpath>

• Alternatively, the directory option can be set in the registry:


Set the MinidumpDir Expandable String Value to <folderpath>

• Complete memory dump: This option records the contents of system memory when the computer
stops unexpectedly. This option isn't available on computers that have 2 or more GB of RAM. If you
select this option, you must have a paging file on the boot volume that is sufficient to hold all the
physical RAM plus 1 MB. The file is stored as specified in %SystemRoot%\Memory.dmp by default.
The extra megabyte is required for a complete memory dump file because Windows writes a
header in addition to dumping the memory contents. The header contains a crash dump signature
and specifies the values of some kernel variables. The header information doesn't require a full
megabyte of space, but Windows sizes your paging file in increments of megabytes.

o To configure a complete memory dump file, run the following command using the cmd
utility:

wmic recoveros set DebugInfoType = 1

• Alternatively, a complete memory dump file can be configured in the registry:

Set the CrashDumpEnabled DWORD value to 1

• To set a memory dump file, use the following command line:

wmic recoveros set DebugFilePath = <folderpath>

• Alternatively, the memory dump file can be set in the registry:

Set the DumpFile Expandable String Value to <folderpath>

• To indicate that the system should not overwrite kernel memory dumps or other complete
memory dumps, which may be valuable for troubleshooting system problems, use the command:

wmic recoveros set OverwriteExistingDebugFile = 0

• Alternatively, the overwrite setting can be turned off in the registry:

o Set the Overwrite DWORD value to 0

• Peak usage or expected peak usage of the system commit charge - The system commit limit is the
total of RAM plus the amount of disk space reserved for paging files. The system commit charge
must be equal to or less than the system commit limit. If a page file is not in place, then the system
commit limit is less than the system’s RAM amount. The purpose of these measurements is to
prevent the system from overpromising available memory. If this system commit limit is exceeded,
Windows or the applications in use may stop functioning properly. So, it is a best practice to assess
the amount of disk storage allocated to the page files periodically to ensure there is sufficient
space for what the system needs during peak usage. It is fine to reserve 128 GB or more for the
page files, if there is sufficient space on the hard drive to dedicate a reserve of this size. However, it
might be a waste of available storage space if the system only needs a small fraction of the
reserved disk space. If disk space is low, then consider adding more RAM, more hard drive storage,
or offload non-system files to network or cloud storage.
• Space needed to offload data from RAM - Page files can serve to store modified pages that are not
currently in use. This keeps the information easily accessible in case it is needed again by the
system, without overburdening RAM storage. The modified pages to be stored on the hard drive
are tracked by the \Memory\Modified Page List Bytes performance counter. If the page file is not large
enough, some of the pages added to the modified page list might not be written to the page
file. If this happens, the page file either needs to be expanded or additional page files should be
added to the system. To assess if the page file is too small, the following conditions must be true:

o \Memory\Available MBytes indicates more physical memory is needed.

o A significant amount of memory exists in the modified page list.

o \Paging File(*)\% Usage shows that the existing page files are almost full.

In Linux, the dedicated area of the hard drive used for virtual memory is known as swap space.
We can create swap space by using the disk partitioning tools that we just learned about. A good
guideline for determining how much swap space you need is to follow the recommended
partitioning scheme in the next supplementary reading. In our case, since we just have a USB
drive which doesn't need swap, we're going to partition the rest of it as swap to show you
how this works. In practice, you would create swap partitions on your main storage devices, like
hard drives and SSDs. Okay, let's make swap space. First, go back into the parted tool and select
/dev/sdb, where our USB drive is. We're going to partition it again, this time to make a swap partition,
and then we'll format it with the linux-swap file system. So: mkpart primary linux-swap 5GiB
100%. You'll notice that the end point of the partition says 100%, which
indicates that we should use the rest of the free space on the drive. We're not done yet. Swap isn't
actually a file system, so this command won't be enough. I know, I'm sorry, I just lied to you like
five seconds ago. If you think about it, it makes a lot of sense, since pages go into swap, not
file data. Anyway, to complete this process, we need to specify that we want to make it swap
space with the mkswap command. Let's quit out of parted and run this command on our new swap
partition, which is /dev/sdb2: sudo mkswap /dev/sdb2. Finally, there's one
more command to run to enable swap on the device, swapon: sudo swapon /dev/sdb2. If we
want to automatically mount swap space every time the computer boots up, we can just add a swap entry
to the /etc/fstab file like we did earlier.
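Once mkswap and swapon have run, you can confirm that the kernel actually sees the new swap space. A quick, read-only way to check (no root needed, works on any Linux machine) is:

```shell
# SwapTotal is how much swap the kernel has available; SwapFree is how much
# of it is currently unused. Both read 0 kB if no swap is enabled.
grep -i '^swap' /proc/meminfo

# swapon can also list the active swap areas and their sizes and priorities
# (it prints nothing if there are none).
swapon --show
```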

Supplemental Reading for Linux Swap

For more information about swap, please check out the link here.

Now that we've gotten a few practical things out of the way with disk partitioning and file system
creation, we can talk about concepts for a bit. Remember when we talked about how our OS
handles files? It manages the actual file data, file metadata, and file systems. We've
already covered file systems. In this video, we're going to cover the file data and file metadata.
When we talk about data, we're referring to the actual contents of the file, like a text document
that we saved to our hard drive. The file metadata includes everything else, like the owner of the
file, permissions, size of the file, its location on the hard drive, and so on. Remember that the
NTFS file system is the native file system format of Windows. So how exactly does NTFS store
and represent the files we're working with on our operating system? NTFS uses something called
the Master File Table, or MFT, to keep everything straight. Every file on a volume has at least
one entry in the MFT, including the MFT itself. Usually, there's a one-to-one correspondence
between files and MFT records, but if a file has a whole lot of attributes, there might be more
than one record to represent it. In this context, attributes are things like the name of a file, its
creation timestamp, whether or not the file is read-only, whether or not the file is compressed, the
location of the data that the file contains, and many other pieces of information. When you create
files on an NTFS file system, entries get added to the MFT. When files get deleted, their entries
in the MFT are marked as free so they can get reused. One important part of a file's entry in the
MFT is an identifier called the file record number. This is the index of the file's entry in the MFT.
A special type of file we should mention in Windows is called a shortcut. A shortcut is just
another file and another entry in the MFT, but it has a reference to some destination, so that
when you open it up, you get taken to that destination. You can create a shortcut by right-
clicking on the target file and selecting the Create Shortcut option.
There it is. Besides creating shortcuts as ways to access other files, NTFS provides two other
ways using hard and symbolic links. This might get a little weird, but stay with me. Symbolic
links are kind of like shortcuts, but at the file system level. When you create a symbolic link, you
create an entry in the MFT that points to the name of another entry, or another file. This might
seem like just another way to make a shortcut, but symbolic links have a key difference: the
operating system treats them like substitutes for the file they're linked to in almost every
meaningful way. This is the part that sounds strange, so let's demonstrate. Let's create a
directory on the desktop called Links.
Inside of it, we'll create a text file called file_1, and inside of that, let's add the word Hello! And
then let's make a shortcut that points to this file, called file_1 - Shortcut. Next, let's open up a
command prompt and navigate to this directory. Let's try to open up file_1 through its shortcut
with Notepad. What do you think will happen?
If you expected Notepad to display Hello!, then you'd be disappointed. Instead, Notepad
opened up the shortcut file, which has some text in there that isn't readable by us. Instead of a
shortcut, let's create a symbolic link. You can create symbolic links with the mklink program
from the command prompt. Let's make one called file_1_symlink with the following command:
mklink file_1_symlink file_1. Then let's open it up in Notepad and see what happens. This
is what we mean when we say the operating system treats the symbolic link just like the original
file. There's another type of link worth mentioning called a hard link. When you create a hard
link in NTFS, an entry is added to the MFT that points to the linked file record number, not the
name of the file. This means the file name of the target can change and the hard link will still
point to it. You can create hard links in a way that's similar to symbolic links, but with the /H
option: mklink /H file_1_hardlink file_1. Since a hard link points to the file record number
and not the file name, you can change the name of the original file and the link will still work.
Next, we'll have a look at how Linux organizes files and the way it treats hard links and symbolic
links. Onward and upward.
Supplemental Reading on NTFS File System
For more information about the NTFS file system, please check out the following links:
Master File Table, Creating Symbolic Links, and Hard Links and Junctions.
In Linux, metadata and files are organized into a structure called an inode. Inodes are similar to
the Windows NTFS MFT records. We store inodes in an inode table, and they help us manage the
files on our file system. The inode itself doesn't actually store file data or the file name, but it
does store everything else about a file. In the last lesson, we learned how to create file shortcuts,
symbolic links, and hardlinks in Windows. Well, in Linux we have the same concepts. Shortcuts in
Linux are referred to as softlinks, or symlinks. They work in a similar way to how symbolic links work
in Windows, in that they just point to another file.
Softlinks allow us to link to another file using a file name. They're great for creating shortcuts to
other files. The other type of link found in Linux is the hardlink. Similar to Windows, hardlinks
don't point to a file name. In Linux, they link to an inode, which is stored in an inode table on the file
system. Essentially, when you're creating a hardlink, you're pointing to a physical location on
disk, or more specifically, on the file system. If you deleted one hardlink to a file, all the other
hardlinks would still work. Let's actually see where hardlinks are referenced. If we run ls -l on
this file, important_file, you'll notice the third field in the details; this field indicates the
number of hardlinks a file has.
When the hardlink count of a file reaches zero, the file is completely removed from the
computer. To create a softlink, we can run the ln command with the -s flag for softlink: ln -s
important_file important_file_softlink. To create a hardlink, we can run the ln command without
the -s: ln important_file important_file_hardlink. Now, if we run ls -l
important_file again, we'll see that the hardlink count has increased by one. Hardlinks are great if you
need to have the same file stored in different places without taking up any additional
space on the volume.
This is because all the hardlinks point to the same space on the volume. You could use softlinks
to do the same thing, but what if you moved the original file, breaking the softlink, and forgot about all the
other places you used it? Those links would be broken too and might take some time to clean up.
You may not see a use for making your own softlinks or hardlinks right now, but they're used
all throughout your system, so you should be aware of how they work.
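Here is the whole softlink and hardlink sequence from this lesson as a runnable sketch. It works in a scratch directory, and important_file is just the example name used above:

```shell
# Work in a throwaway directory so nothing on the real system is touched.
cd "$(mktemp -d)"
echo "hello" > important_file

ln -s important_file important_file_softlink   # softlink: points to the name
ln important_file important_file_hardlink      # hardlink: points to the inode

ls -l important_file        # the third field (the link count) is now 2
cat important_file_softlink # reads through the link to the original data

# Remove the original name: the hardlink still reaches the data because the
# inode survives, but the softlink is now broken because the name it pointed
# to is gone.
rm important_file
cat important_file_hardlink
```

Notice that deleting important_file drops the hardlink count from 2 back to 1; the data is only freed when the count reaches zero, just as the lesson describes.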
Now that we've taken a good, hard look at files in different file systems, let's turn our attention to
how we can monitor the number and size of those files in Windows. You've seen how there are
loads of third-party programs out there to partition and format disks on Windows. Well, there are
also lots of applications you can download that can check and visualize disk usage on a Windows
machine. But you can use the Disk Management console we examined in an earlier lesson to get a
sense of your disk capacity usage. To check disk usage, you can open up the Computer
Management utility, then head to the Disk Management console. From there, right-click on the
partition you're interested in and select Properties.
This will bring up the General tab, where you can see the used and free space on the drive. In
addition to using this graphical user interface to check disk usage, Windows provides a
command line utility called Disk Usage as part of its Sysinternals tool offering. The du utility
can print out the usage of a given disk and tell you how many files it has. It can be useful for
creating scripts which might need text-based output instead of visual reports like the pie chart in
Disk Management. You can find a link to the du tool in the next supplemental reading. On the
same tab in the Disk Management console, you might notice a button that says Disk Cleanup. If
you press this button, Windows will launch a program called cleanmgr.exe, which will do a
little housekeeping on your hard drive to try and free up some space. This housekeeping includes
things like deleting temporary files, compressing old and rarely used files, cleaning up logs, and
emptying the Recycle Bin. Another task related to disk health is called defragmentation. The idea
behind disk defragmentation is to take all the files stored on a given disk and reorganize them
into neighboring locations. Having files ordered like this will make life easier for rotating hard
disk drives that use an actuator arm to write to and read from a spinning disk. The head of the
actuator arm will travel less to read the data it needs. I should call out that this is less of
a benefit for solid state drives, since there's no physical read/write head that needs to move
around a spinning disk. For these kinds of drives, the operating system can use a process called
TRIM to reclaim unused portions of the solid state disk. We won't go into the details of how TRIM
works, but it's good to know that it exists. I've included a link to more information on TRIM in the
reading right after this video. Defragmentation in Windows is handled as a scheduled task. Every
so often, the operating system will defragment the drive automatically, so you don't need to
worry about it. But you can manually defragment a drive in Windows if you want to. To kick off a
manual defragmentation, open up the Disk Defragmenter tool bundled with the OS: type disk
defragmenter.
When it launches, you'll be given a list of disks which can be defragmented, along with buttons to
analyze the potential gains from running a defrag, or defragmentation, and to run the defrag itself.

Supplemental Reading for Windows Disk Usage


For more information about disk usage in Windows, check out the following links: Disk
Usage, How to start Disk Cleanup by using the command line.
In the last lesson, we saw how to view the disk utilization on your computer in Windows. In
Linux, we do this using the du -h command. The du or disk usage command shows us the disk
usage of a specific directory. If you don't specify a directory, it'll default to your current one. The
-h flag gives you the data measurements in human readable form. You should use the du
command if you want to know how much data space is being used by files in a directory.
Another command you can use if you want to know how much free space you have on your
machine is the df command, or disk free. This shows you the free space available on your entire
machine. The -h flag gives you the data measurements in human readable form. You should use
the df command if you want to know how much free space you have on your entire system.
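A minimal sketch of both commands, using a scratch directory with a known amount of data in it so the numbers are predictable (the directory and file names are made up for the example):

```shell
# Make a directory holding exactly 1 MiB of data, then measure it.
demo=$(mktemp -d)
head -c 1048576 /dev/zero > "$demo/one_mebibyte"

du -h "$demo"   # disk usage of the directory, in human-readable units
df -h "$demo"   # free space on the filesystem that holds the directory
```

Note that du answers "how much space does this directory's contents use," while df answers "how much space is left on the whole filesystem," which is why the two numbers look nothing alike.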
You might have noticed that we didn't really touch on file system defragmentation for Linux.
Linux generally does a better job than Windows of avoiding fragmentation. We won't get
into this in depth, but you can learn more in the next supplemental reading. In common IT
scenarios, you might find yourself running low on disk space. It's up to you to investigate what
files and folders are taking up space and, if you need to, to remove those files. As always, make
sure to be super careful when removing files.
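One common way to run that investigation, sketched here, is to combine du with sort so the largest items float to the top (the path . is just the current directory; point it wherever space is disappearing):

```shell
# Summarize everything one level down, sort largest first, show the top ten.
# Using -h on both du and sort keeps the human-readable sizes sortable.
du -h --max-depth=1 . 2>/dev/null | sort -rh | head -n 10
```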
Supplemental reading for Linux Disk Usage
To learn more about why Linux doesn't need defragmentation, check out the article here.
In an earlier lesson, we talked about the dangers of unplugging a USB device without ejecting or
unmounting it from the computer. You might have seen error messages like this yourself, when
the system alerts that you must safely eject this flash drive. Why do we need to do this? When
we copy over files to a flash drive and we see that the file copied successfully, why can’t we just
unplug the drive without unmounting or hitting the eject button in the OS? Turns out, it may not
be finished copying over that data. It's not just yelling at us for fun. When we read or write
something to a drive, we actually put it into a buffer, or cache, first. A data buffer is a region of
RAM that's used to temporarily store data while it's being moved around. So when you copy
something from your OS to your USB drive, it first gets copied to a data buffer because RAM
operates faster than hard drives.
So if you don't properly unmount a file system and give your buffer enough time to finish
moving data, you run the risk of data corruption. Data corruption could happen for lots of
reasons, other than unsafely removing a disk drive.
Let's say you're working on your computer and the power to the building went out, causing your
computer to suddenly shut off. This kind of crash also causes data corruption. System failure or
software bugs can cause data corruption as well. The NTFS file system has some advanced
features built into it that can help minimize the danger of corruption, as well as, try to recover
when the file system does get damaged.
One of these features, through a process called journaling, logs changes made to a file's metadata
in a log file called the NTFS log. By logging these changes, NTFS creates a history of the
actions it's taken. This means it can look at the log to see what the current state of the file system
should be. If a crash or bug does cause corruption, the file system can initiate a recovery process
that will use the log to make sure the system is in a consistent state.
In addition to journaling, NTFS and Windows implement something called self-healing. As you
might guess from the name, the self-healing mechanism fixes minor problems and
corruptions on the disk automatically, in the background. It does this while Windows is running,
so you don't need to perform a reboot.
If you want to check the status of the self-healing process on your computer, you can open up an
administrative command prompt and use the fsutil tool, like this: fsutil repair query C:. Here,
we're querying the C: drive.
Finally, when things get really bad and there's some serious or catastrophic disk corruption, like
bad disk sectors, disk failures, and more, you can turn to the NTFS check disk utility, chkdsk. The
recovery features NTFS has built into it mean that you don't usually need to run chkdsk, but
it's available in emergencies. To run chkdsk manually, you can open up an administrator
command prompt and type chkdsk at the command line. By default, chkdsk will run in
read-only mode, so it'll give you a report on the health of the disk, but won't make any
modifications or repairs to it. You can tell chkdsk to fix any problems it finds with the /F
flag. You can also specify the drive you want to check, like this: chkdsk /F D:. Here, I'm checking
my thumb drive, which is on the D: drive.
A lot of times, you won't need to run check disk manually, though. If the operating system
detects that some data's been corrupted or that the disk has a bad sector, it'll set a bit in a
metadata file on the volume that indicates there's corruption. When the system boots, the check
disk utility will check this bit. If it's set, it'll execute and try to repair the corruption by
reconstructing the broken bits of the file system from the NTFS log. As you can see, the
Windows NTFS file system has some pretty robust measures and features in place to recover and
prevent corruption from breaking your partitions. Next, let's have a look at how you can perform
file system repairs in Linux.
To try and repair a file system manually in Linux you can also use the fsck or file system check
command. Just make sure the file system isn't mounted. I won't run this command, but this is
what it would look like. If you run fsck on a mounted partition, there's a high chance that it'll
damage the file system. File system repair isn't always a guaranteed fix, but it can help in most
cases. Just be nice to your hardware, and it will be nice to you, in most cases. Another thing to
call out is that in some versions of Linux, fsck actually runs when you boot
your computer, to check for any issues and attempt to auto-repair the file system. You can learn more about
how to enable this, and about some advanced features that you can use with fsck, in the next
supplementary reading. We've covered a lot of essential disk management and filesystem
concepts in this lesson. You learned how to partition a disk, how to format a file system, and
how to mount a file system.
We even talked about how you could repair a corrupt file system. In an IT support role, knowing
how to work with disks is essential. Your customers store their precious data on these disks. And
they don't want to lose those photos of their children, important presentations, their collection of
music, or whatever it may be. Knowing how to work with disks and the data on them is a vital
part of an IT role. Next, you guessed it, it's time for another pair of Windows and Bash
assessments. Take your time with them and feel free to go back and review any material from
this module beforehand if you need to.
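If you'd like to experiment with fsck without putting a real disk at risk, one safe approach is to run it against a small file-backed filesystem image instead of an actual partition. This sketch assumes the e2fsprogs tools (mkfs.ext2 and e2fsck) are installed:

```shell
img=$(mktemp)                 # an ordinary file, not a real disk
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mkfs.ext2 -F -q "$img"        # -F: proceed even though this isn't a block device

# -n answers "no" to every prompt, making this a read-only consistency check;
# an exit status of 0 means the filesystem is clean.
e2fsck -n "$img"
```

Because the image is just a file, nothing here needs root, and mistakes can't damage your actual partitions.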
Supplemental Reading for Linux Filesystem Repair
Linux File System Repair
In this reading, you will learn how to use the file system consistency check or fsck command to
repair data corruption in file systems on Linux machines. As an IT Support specialist, you will
most likely encounter instances of data corruption in onsite systems. It is critical for you to know
how to recover corrupted data, file systems, and hard drives.
A computer file system is software that provides structure for storing the operating system (OS)
and all other software installed on system hard drives. A hard drive must be formatted with a file
system before the operating system can be installed. Since Linux is an open source OS,
innovators have created nearly 100 file systems that support Linux OS installations. Several
common file systems that are used for Linux systems include ext, ext2, ext3, ext4, JFS, XFS,
ZFS, F2FS, and more.
Like all software, software-based computer file systems can experience corruption. File system
corruption can impede the computer’s ability to locate files stored on the hard drive, including
important OS files. File locations are stored as i-nodes (index nodes) in Linux. Every file in a
Linux system has its own i-node identifier. The i-node stores metadata about the storage block
and fragment location(s) where each file is stored. The i-node metadata also holds information
about the file type, size of the files, file permissions, links to the file, and more.
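You can look at a file's i-node metadata directly with the stat command. In this sketch, the file being inspected is a temporary one created just for the example:

```shell
f=$(mktemp)            # a scratch file to inspect
stat "$f"              # full metadata: i-node number, size, permissions,
                       # timestamps, link count, and more
stat -c '%i' "$f"      # just the i-node number
stat -c '%h' "$f"      # just the hard link count (1 for a fresh file)
```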
Symptoms of data corruption
Symptoms of data corruption can include:
• System suddenly shuts down
• Software program will not launch or it crashes when opening a corrupted file. May also
give an error message saying:
o “File format not recognized” or
o “(file name) is not recognized”
• Corrupted files and folders may no longer appear in the file system.
• The operating system (OS) may report bad sectors when failing to execute commands.
• Damaged platter-based hard drives can make clicking sounds or unusual vibrations.
Causes of data corruption
Data corruption on system hard drives and file systems can be caused by:
• Software errors -
o Software errors can be any software event that interferes with normal hard disk
read/write operations.
o Viruses and malware can be designed to intentionally cause corruption to data.
o Antivirus software can damage files if the software experiences problems while
scanning or repairing the files.
• Hardware malfunctions -
o Larger files are more likely to experience corruption than smaller files. Large files
occupy more disk space, making them statistically more likely to cross a bad
sector on the hard drive.
o Hard drives that contain platters are at risk of experiencing malfunctioning
read/write heads. Damaged heads can corrupt multiple files and directories in a
single read/write transaction. Hard drives with moving mechanical parts are more
likely to experience failures from moving parts that wear out over time.
• Electrical damage - Can happen when a power failure occurs while the system is writing
data to a hard drive.
Data corruption repair
The most critical first step, after data corruption has been identified or suspected, is to shut down
the affected hard drive(s). The reason for this step is to stop the cause of the corruption from
writing to the hard drives. The longer the corruption activity continues, the more difficult
recovering the data becomes.
Precautions should be taken before powering up a corrupted hard drive to run repair tools. It is
important to minimize any read/write operations on the disk other than those produced by data
recovery tools. One method to prevent further damage could be to have a corrupted Linux system
boot from an external device or network (PXE boot). An alternative method might be to attach
the corrupted hard drive as an external hard drive to a healthy system running Linux. A hard
drive adapter or drive docking station can be used to convert an internal drive into an external
device.
Before connecting a corrupted drive to a healthy system, the automount service must be
disabled. The fsck command will not repair corruption on a mounted file system. In fact,
mounting a corrupted file system can cause the healthy Linux system to crash. Although the
corrupted file system should not be mounted, the device file for the corrupted hard drive in the
/dev directory must be readable for the fsck command to access the drive.
The fsck command
Important Warning: The fsck command should NOT be used:
• on a hard drive that was a member of a RAID array.
• on a mounted file system (must be unmounted).
An important command line data recovery tool offered in the Linux operating system is the fsck
command. It should be run anytime a Linux system malfunctions. The fsck command can check
the file system and repair some, but not all, inconsistencies found. In some cases, fsck may
recommend deleting a corrupted file or directory. The default setting for the fsck command is to
prompt the user to approve or deny the repair of any problems found. The user running the fsck
command must have write permissions for the corrupted file or directory to be able to approve a
repair. If the user does not choose to repair inconsistencies found, the file system will remain in a
corrupted state.
The fsck command will check for inconsistencies and prompt the user to decide whether or not
fsck should repair the following problems:
• Block count is not correct
• Blocks and/or fragments that are:
o allocated to multiple files
o illegally allocated
• Block and/or fragment numbers listed in i-node metadata that are:
o overlapping
o out of range
o marked free in the disk map
o corrupted
• Map inconsistencies on the disk or in the i-node.
• Directory:
o contains a number of references to a file that does not equal the number of links
listed in that file’s i-node metadata.
o sizes are not multiples of 512
The following checks are not run on compressed file systems.
• Directory checks:
o Directories or files that cannot be located or read.
o The i-node map has an i-node number marked as being free in the directory entry.
o The i-node number in the metadata is out of range.
o The . (current directory) or .. (parent directory) link is missing or does not point to
itself.
• Fragments found in files larger than 32KB.
• Any fragments that are not listed as the last address of the file in an i-node metadata file.
How to use the fsck command
1. Enter fsck as a command line instruction. Syntax:
fsck [ -n ] [ -p ] [ -y ] [ -f ] [ FileSystem1Name FileSystem2Name ... ]
• The -n flag - Sends a “no” response to all fsck questions and does not allow fsck to write
to the drive.
• The -p flag - Prevents error messages for minor problems from displaying while
automatically fixing those minor errors. Outside of recovering from data corruption, it is
a best practice to run the fsck -p command regularly at startup as a preventative measure.
• The -y flag - Sends a “yes” response to all fsck questions to automatically attempt to
repair all inconsistencies found. This flag should be reserved for severely corrupt file
systems only.
• The -f flag - Runs a fast check that excludes file systems that were successfully
unmounted for shutdown before the system crashed.
• FileSystem#name - If you do not specify a file system, the fsck command checks all file
systems listed in /etc/fstab whose fsck pass number (the sixth field) is non-zero.
• To see more advanced flags, use the man fsck command.
a. To have the fsck command check all of the default file systems and prompt the user on how to
handle each inconsistency found, simply enter at a command line:
b. For ext, ext2, ext3, and ext4 file systems, the e2fsck command can be used:
c. To have the fsck command check specific file system(s) and automatically fix any
inconsistencies found, enter:
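The three invocations above were likely meant to look something like the following sketch. The device names are hypothetical, and the final lines use e2fsck on a plain file image so the demonstration cannot touch a real disk.

```shell
# (a) Check all default file systems, prompting before each repair:
#   fsck
# (b) Check an ext2/ext3/ext4 file system with the dedicated tool:
#   e2fsck /dev/sdb1              # hypothetical device name
# (c) Check specific file systems and auto-approve all repairs:
#   fsck -y /dev/sdb1 /dev/sdc1   # hypothetical device names
# Safe demonstration: e2fsck also accepts a regular file containing a
# file system image, so we can build a tiny ext2 image and check it.
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs/e2fsck often live in sbin
dd if=/dev/zero of=/tmp/fsck_demo.img bs=1M count=4 status=none
mkfs.ext2 -q -F /tmp/fsck_demo.img
e2fsck -p /tmp/fsck_demo.img          # -p: auto-fix minor problems
echo "exit value: $?"                 # 0 = file system is clean
```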
2. The fsck command outputs an exit value, or code, when the tool terminates. The code is the
sum of one or more of the following conditions:
• 0 = All scanned file systems have been restored to a functional state.
• 2 = fsck did not finish checks or repairs due to an interruption.
• 4 = File system has changed and the computer needs to be rebooted.
• 8 = fsck could not repair some or all file system damage.
How to run fsck on the next boot or reboot
In many Linux OS distributions, the fsck utility will automatically run at boot under certain
circumstances, including:
• When a file system has been labeled as “dirty”, meaning that data scheduled to be written
to the file system is different from what was actually written or not written to the disk.
This could occur if the system shut down during a write operation.
• When a file system has been mounted multiple times (can be set to a specific value)
without a file system check.
Configuring the fsck command to run automatically on boot and reboot differs depending on
which brand and version of Linux is installed on the system. As a root or sudo user, use vi
(the visual editor) to add the fsck command to the boot sequence.
1. In Debian and Ubuntu,
a. Edit the rcS file.
b. Add the following command to the rcS file:
2. In CentOS,
a. Create or edit a file named autofsck.
b. Add the following command to the autofsck file:
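Since the file contents were omitted above, here is a heavily hedged sketch of what such entries commonly look like. File paths and variable names vary between distribution releases, so treat these as illustrative assumptions and consult your distribution's documentation.

```shell
# Debian/Ubuntu (SysV-style boot): in /etc/default/rcS, enable
# automatic repair during the boot-time check:
#   FSCKFIX=yes
# CentOS/RHEL: create /etc/sysconfig/autofsck containing:
#   AUTOFSCK_DEF_CHECK=yes
# On many SysV-based distributions, a one-time check on the next boot
# can also be forced by creating a flag file:
#   sudo touch /forcefsck
```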
When I was really young, my dad took me to a library at Columbia University where they
installed these terminals to connect to Columbia's mainframe and I saw this stuff for the first
time and something clicked and a light bulb went off over my head. And I thought, this is how I
want to spend the rest of my life. I think it was kind of this concept that you can get this machine
to do whatever you wanted if you just knew how to phrase it, right? And this idea, that you had
such control and that these were such amazing devices that the computer was like this anything
box. You could shape to solve any kind of problem you can imagine, if you just knew how to do
it. And it was incredibly addictive. Even before I was actually able to lay my hands on a
computer, it was just addictive to know that power was out there and was almost an obsession for
me to try to get access and figure out how I was going to be able to use these things and work
with them and learn about them. My parents are both historians, and I think they thought I was
some kind of freak child that had been dropped in their family by accident. And so, they bought
the computer for me, thinking for sure in six months they'd be selling it used. And lucky for me,
they were very, very wrong.

Week 5 Process Management


Life of a Process
Welcome back, four modules down, two to go in this course, great work so far. In the last lesson,
we learned how to partition and set up disks with file systems to start storing files. We also dove
deeper into the details of file systems, and even learned tools for repairing corrupt file systems
and disks. In this lesson, we're going to talk about processes. Processes play an important part in
our computer user experience. After all, why use a computer if you can't use any programs? With
more and more processes running on our computer, we have to think about ways to better utilize
our hardware resources. Get ready, because we're going to get into the nitty gritty of processes.
We'll talk about how to read process output, and learn how to track our resources. Ready, set,
let's go!
In an earlier lesson, we learned that programs are the applications that we can run, like the Chrome
web browser. Processes are programs that are running. We can have lots of processes running for
the same program, like how we can have many Chrome windows open at once or lots of movies
playing using one program. When we launch a process, we're executing a program. And
remember, a program is just software. To carry out the instructions that our software contains,
we need to give it resources so that it can run. When processes are run, they take up hardware
resources like CPU and RAM. Luckily, today's computers are powerful enough to handle the
processes that we use in our day-to-day activities, like browsing the web, watching movies, etc.
But, sometimes this isn't enough, sometimes a process is taking more resources than it's
supposed to. Sometimes, processes are unresponsive and freeze up our system making our entire
computer unresponsive. Well, we're going to talk about why this happens, and how we can fix it
in the upcoming lessons. But before we can talk about managing processes, we have to
understand how they work. When you open up an application like a word processor, you're
launching a process. That process gets something called a process ID to uniquely identify it
from other processes. Our computer sees that the process needs hardware resources to run, so
our kernel makes decisions to figure out what resources to give it. Then, in the blink of an eye,
our computer starts up a word processor and ta-da, we're ready to start working. This happens for
every process you launch yourself, and for every process you don't even know is running.
Besides the visible processes that we start, like our music player or word processor, there are
also not so visible processes running. These are known as background processes, sometimes
referred to as daemon processes. Background processes are processes that run in the background.
We don't really see them, and we don't interact with them, but our system needs them to
function. They include processes like scheduling resources, logging, managing networks, and
more. When we take a look at all the processes running on our system, you'll see what I'm
talking about. In the next couple of lessons, we'll talk about how processes get created and
terminated. Then, we can start digging into the details of process management. Process
management is a vital skill in IT support. You'll often find yourself troubleshooting issues with
frozen applications, slow applications, and more.
The way that processes are created and stopped differs based on the operating system you use.
First, let's have a look at how Windows does things. When Windows boots up or starts, the first
non-kernel user-mode process that starts is the Session Manager Subsystem or smss.exe. The smss.exe
process is in charge of setting some stuff up for the OS to work. It then kicks off the log-in
process called winlogon.exe appropriately enough, along with the Client/Server Runtime
Subsystem called csrss.exe, which handles running the Windows GUI and command line
console. We'll talk about a process called init in the next lesson, which Linux uses as the first
process. You might be tempted to think of smss.exe as a Windows equivalent of init. Don't fall
into that trap though. When it comes to process creation mechanisms, they're all pretty different.
In Windows, each new process that's created needs a parent to tell the operating system that a
new process needs to be made. The child process inherits some things from its parent like
variables and settings, which we can collectively refer to as an environment. This gives the child
process a pretty good start in life, but after the initial creation step, the child is pretty much on its
own. Unlike in Linux, Windows processes can operate independently of their parents. Let's take
a look at how this works by creating our own. First, let's launch the PowerShell process to give
us a Windows command prompt.
From there, we can type in notepad.exe to create a new process for the notepad program. So far,
so good. The parent process is PowerShell, and the child is the notepad application. What
happens if we kill the parent process though by clicking on the X button? Notice that notepad
keeps on running happily even though its parent has been terminated. Those children are just in
their own world. Clicking the X is just one way to stop a process from running in Windows, but
as you might expect, there are other ways you can stop processes. You can use a command
prompt command by calling on the taskkill utility. Taskkill can find and halt a process in a few
ways. One of the more common ways is to use an identification number, known as the process ID or
PID, to tell taskkill which process you'd like stopped. One way to do this is to kill notepad again
by specifying the PID using taskkill /pid and then the PID number. Taskkill /pid, this is the
process ID of notepad. That's a success. This will send the termination signal to the process
identified by the PID, which happens to be notepad in our case. This is useful, but how do we get
that PID in the first place? Glad you asked. We'll talk about how to locate and view processes
and other more detailed process information in an upcoming lesson.
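The PID lookup and kill described above can be sketched as a hypothetical cmd.exe session. The PID shown is made up, and the output lines are abridged for illustration:

```
C:\> tasklist | findstr notepad.exe
notepad.exe    5932 Console    1    13,104 K
C:\> taskkill /pid 5932
(taskkill reports success and notepad closes)
```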
Supplemental Reading for Process Creation and Termination in Windows
For more information about taskkill, or ending one or more tasks or processes in Windows
CLI, check out the link here.
In Linux, processes have a parent-child relationship. This means that every process that you
launch comes from another process. Let's check out this command. The less command would be
the parent process to our grep process. If all processes come from another process, there must be
an initial process that started this all, right?
Yes, there is. When you start up your computer, the kernel creates a process called init, which has
a PID of 1. Init starts up other processes that we need to get our computer up and running. There
are more nuances to process creation than this, but I wanted to introduce the parent process
concept, since you'll see it when we start managing processes. But what happens
when we're done with our processes? When your processes complete their task, they'll generally
terminate automatically. Once a process terminates, it'll release all the resources it was using
back to the kernel, so that they can be used for another process. You can also manually terminate
a process, which we'll discuss how to do in an upcoming lesson.
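The parent-child relationship and normal termination can be sketched in a few lines of shell (PID values will differ on your system):

```shell
# Start a background child of the current shell.
sleep 300 &
child=$!                              # $! holds the PID of the last background job
ps -o pid,ppid,comm -p "$child"       # the PPID column is this shell's PID
kill "$child"                         # default signal is SIGTERM
wait "$child" 2>/dev/null || true     # reap it; its resources return to the kernel
```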

My name is Jessica Thera and I'm a systems engineer in the Site Reliability organization.
[MUSIC] So I'd been talking to one of my mentors, and I said, man, I'd really kill to have a job
this summer. I would love to work with computers. And she said, well you know I have this
opportunity, but we're not sure if you're quite ready for it because you're a little young and
inexperienced. And I pretty much begged her, she took the chance on me, and I stuck with this
from the time that I was 15 until I entered college. The first time I was challenged to problem
solve, was probably when we got our first computer and I broke it. I was sitting at the computer,
I had been inspired by a movie that I saw and decided that I wanted to be a young hacker. And so
I ran some command lines and I managed to blue-screen-of-death the computer. And so I
panicked trying to figure out what I could do to revert what I just did, and there was no saving it.
I am a first generation born in the US, my family is from Haiti. All of my life my parents and
everyone around me always asked me, what did you want to be when you grow up? I really
honestly didn't know what I wanted to be until I started playing around with computers and I
eventually figured out that I had a love for it. And I basically thought to myself, there has to be a
job that I can do with computers. When I decided that I wanted to pursue a profession in
technology and computing no one understood what I was talking about. Coming from an
immigrant family, everyone talks about being a doctor or a lawyer or a teacher. And if you're not
one of the three, you're not doing it right. But now they don't think that anymore so, they think
I'm a god.

Managing Processes
It might feel like we're starting to get into the weeds here. So let's take a step back and think
about what processes really are and what they represent in the context of an operating system.
You can think of processes as programs in motion. Consider all the code for your Internet
browser. It sits there on your hard drive quietly waiting for its time to shine. Once you start it up,
the operating system takes that resting code then turns it into a running, responding, working
application. In other words, it becomes a process. You interact with launch and halt processes all
the time on computers, although the OS usually takes care of all that behind the scenes. By
learning about processes, you're taking a peek behind the curtain at how operating systems really
work. This knowledge is both fascinating and powerful, especially when applied by a savvy IT
support specialist to solve problems. Keep all that in mind as we take a look at how you can pull
back the curtain even further. Next, we'll learn about the different ways you can investigate
which processes are running on a Windows computer and more methods of interacting with
them. On the Windows operating system, the task manager or taskmgr.exe is one method of
obtaining process information. You can open it with the Ctrl+Shift+Esc key combination or
by locating it using the start menu.
If you click on the processes tab, you should see a list of the processes that the current user is
running along with a few of the system level processes that the user can see. Information about
each process is broken out into columns in the task manager. The task manager tells you what
application or image the process is running along with the user who launched it and the CPU or
memory resources it's using. To kill a process, you can select any of the process rows and click
the end task button in the lower right corner. We can demonstrate this by launching another
notepad.exe process from the command line, then switching over to the task manager, selecting
the notepad.exe process and ending it. I already have Notepad open so I'm going to click on it,
click end task. In an earlier lesson, we talked about starting and ending Windows processes.
Remember that we used the taskkill command to stop a process by its identification number or
PID. So how do we get that PID number? While in task manager, you can click on the details
menu option and here, you can see a whole bunch of other information you can get the task
manager to display, including the PID. You can also see this information from both a command
prompt and PowerShell. From the command prompt, you can use a utility called tasklist to show
all the running processes.
From a PowerShell prompt, you can use a cmdlet called Get-Process to do the same. There
are lots of ways you can get process information from the Windows operating system. We've
included links to the documentation of both tasklist and Get-Process in the supplementary
reading in case you want to dive deeper into either of these tools.
Okay, now let's talk about how to view the processes running on our system in Linux. We'll be
using the ps command, so let's just go ahead and run that command with the -x flag, and see
what happens. This shows you a snapshot of the current processes you have running on your
system. The ps output can be overwhelming to look at at first, but don't worry, we'll walk
through how to read this output.
Let's start from left to right here. P-I-D, or PID, is the process ID; remember, processes get a
unique ID when they're launched. TTY, this is the terminal associated with the process, we won't
talk about this field but you can read more about it in the manpages linked right after this video.
STAT, this is the process status. If you see an R here, it means the process is running or it's
waiting to run. Another common status you'll see is T for stopped, meaning a process that's been
suspended.
Another one you might see is an S for interruptible sleep, meaning the task is waiting for an
event to complete before it resumes. You can read more about the other types of process statuses
in the manpages. TIME, this is the total CPU time that the process has taken up. And lastly,
command, this is the name of the command we're running. Okay, now we're going to enter hard
mode here. Run this command: ps -ef. The -e flag is used to get all processes, even the ones run
by other users. The -f flag is for full, which shows you full details about a process. Look at
that, we have more processes and even more process details. Let's break this down.
UID is the user ID of the person who launched the process. PID is the process ID, and PPID is
the parent process ID, the ID of the process that launched it, which we discussed in an earlier
lesson. C is the processor utilization of the process. STime is the start time of the process. TTY is the
terminal associated with the process. TIME is the total CPU time that the process has taken up.
And CMD or command is the name of the command that we're running. What if we wanted to
search through this output? It's super messy right now, can you think of a way we can see if a
process is running? That's right, with the grep command, I told you we were going to use it all
the time.
This will give us a list of processes that have the name Chrome in them. There's another way to
view process information. Remember, everything in Linux is a file, even processes. To view the
files that correspond to processes, we can look in the /proc directory. There are a lot of
directories here, one for every process that's running. If you look inside one of the subdirectories,
you'll see even more information about the process. Let's look at a sample process file for PID
1805.
This tells us even more information about a process's state than what we saw in ps. While the
/proc directory is interesting to look at, it's not very practical when we need to troubleshoot
issues with processes. For now, stick with the ps -ef command to look at process information. As
you can see, we can learn a lot about the processes running on our machine with just a few
keystrokes. In an upcoming lesson, we'll talk about how to use process information to our benefit
when figuring out which processes are taking up too many resources. For now, feel free to learn
a little more about the processes that you're running, I'll be waiting for you in the next video.
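One safe way to explore this yourself is through /proc/self, a symlink that always points at the /proc directory of whichever process reads it. A small sketch:

```shell
# A few fields from the status file of the current process:
grep -E '^(Name|State|PPid):' /proc/self/status
# Each per-process directory holds many other files worth exploring:
ls /proc/self | head -n 5
```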
Supplemental Reading for Reading Process Information in Linux
For more information about ps, or the command to read current processes in Linux, check
out the link here.
Imagine you're starting up a video game that's taking a while to render its graphics. You decide
that you don't want to play anymore, which leaves you with a few options. You can wait for it to
finish loading and then quit the game from the menu, or you can interrupt the process altogether,
telling it to quit at the system level. This is just one example of a time you might find yourself
wanting to close a process before it fully completes. To tell a process to quit at the system level,
we use something called a signal. A signal is a way to tell a process that something's just
happened. You can generate a signal with special characters on your keyboard and through other
processes and software. One of the most common signals you'll come across is called SIGINT,
which stands for signal interrupt. You can send this signal to a running process with the
CTRL+C key combination. Let's say you start up the DiskPart tool we looked at in our
discussion on partition formatting. I'm just going to open up command prompt and then launch
DiskPart.
If you decide you don't want to actually format any disks, you can hold down the CTRL key and
press C at the same time to send the SIGINT signal to the DiskPart process. You'll see that the
window that the DiskPart program was running in closes and the process terminates. There are a
few other Windows signals that processes can send and receive. But unlike in Linux, there isn't
an easy way for an end user to issue arbitrary signal commands. If you're interested in learning
more about Windows signals, check out the signal reference link in the supplementary reading.
In Linux, there are lots of signals that we can send to processes. These signals are labeled with
names starting with SIG. Remember the SIGINT signal we talked about before? You can use
SIGINT to interrupt a process, and the default action of this signal is to terminate the process
that's being interrupted. This is true for Linux too. You can send a SIGINT signal through the
keyboard combination Ctrl+C. Let's see this in action. I'm going to do the same thing as we did in
Windows and start a program like sudo parted. We can see that we're in the parted tool now.
Let's interrupt this tool and say we want it to abort the process with the Ctrl+C keyboard
combination. Now, we can see that the process closed and we're back in our shell. We were able
to interrupt our process midway and terminate it. Success. There are lots of signals used in
Linux, and we'll talk about the most common ones in the upcoming lessons.
Supplemental Reading for Windows Signal
For more information about signal handling in Windows, check out the link here.
We've also seen how to send a running process a signal through Ctrl+C, but there's another
process management tool we haven't talked about which lets you do things like restart or even
pause processes. This tool is called Process Explorer. Process Explorer is a utility Microsoft
created to let IT support specialists, systems administrators, and other users look at running
processes. Although it doesn't come built into the Windows operating system, you can download
it from the Microsoft website which I've linked to in the supplemental reading right after this
video. Once you've downloaded Process Explorer and started it up, you'll be presented with a
view of the currently active processes in the top window pane. You'll also see a list of the files a
selected process is using in the bottom window pane. This can be super handy if you need to
figure out which processes use a certain file, or if you want to get insight into exactly what the
process is doing, and how it works. You can search for a process easily in Process Explorer by
either pressing Ctrl+F, or clicking on the little binocular button. Let's go ahead and do a
search for the notepad process we opened up earlier. You should see
C:\Windows\System32\notepad.exe listed as one of the search results. If you see something that
says notepad.exe.mui, don't worry. MUI stands for multilingual user interface, and it contains a
package of features to support different languages. Anyways, once you've located the
notepad.exe process, notice how it's nested under the cmd.exe process in the UI.
This indicates that it's a child process of cmd.exe. If you right-click on the notepad.exe
process, you'll be given a list of different options that you can use to manage the process. Check
out the ones that say Kill Process, Kill Process Tree, Restart, and Suspend. Kill Process is what
you might expect. Say goodbye to notepad. Kill Process Tree does a little bit more. It'll kill the
process and all of its descendants. So, any child process started from it will be stopped. Kill
Process Tree takes no prisoners. Restart is another interesting option. You might be able to guess
what it does just by its name. It will stop and start the process again. Let's do that with the
notepad.exe process we started from cmd.exe. Interesting. After the restart, notepad.exe
doesn't appear as a child of cmd.exe anymore. What gives? Well, if you search for
notepad.exe again, we can see it's been restarted as a child of the procexp.exe process. This is the
process name for Process Explorer. This makes sense, since Process Explorer was the process in
charge of starting it again after we terminated it. But what about the Suspend option? Instead of
killing a process, you can use this option to suspend it and potentially continue it at a later time.
If we right-click and suspend the process, we'll see that in the CPU column of the Process
Explorer output, the word suspended appears.
While a process is suspended, it doesn't consume the resources it did when it was active. We can
kick it off again by right-clicking and selecting the Resume option. Process Explorer can do a lot,
and we'll take a look at some of the monitoring information it can give us in an upcoming lesson.
We won't get into the details of all its features though. So, if you're curious, you can check out
the documentation on Microsoft's website. We put a link to it for you in the supplementary
reading.
Supplemental Reading for Managing Processes in Windows
For more information about the Process Explorer in Windows, check out the link here.
Let's talk about how to use signals to manage processes in Linux. First up, terminating processes.
We can terminate a process using the kill command. It might sound a bit morbid, but that's just
how it is in the dog-eat-dog world of terminating processes. The kill command without any flags
sends a termination signal or SIGTERM. This will kill the process, but it'll give it some time to
clean up the resources it was using. If you don't give the process a chance to clean up some of the
files it was working with, it could cause file corruption. I'm going to keep a process window
open so you can see how our processes get affected as we run these commands. So, to terminate
a process, we'll use the kill command along with the PID of the process we want to terminate.
Let's just go ahead and kill this Firefox process.
And if we check the process window, we can see that the process is no longer running. The other
signal that you might see pop up every now and then is the SIGKILL signal. This will kill your
process with a lot of metaphorical fire. Using a SIGTERM is like telling your process, ''Hey
there process, I don't really need you to complete right now, so could you just stop what you're
doing?'' And using SIGKILL is basically telling your process, ''OK, it's time to die.'' The signal
does its very best to make sure your process absolutely gets terminated and will kill it without
giving it time to clean up. To send a SIGKILL signal, you can add a flag to the kill command
dash kill for SIGKILL. So, let's open up Firefox one more time. So, kill dash kill 10392, and now
you can see that Firefox has been killed. These are the two most common ways to terminate a
process. But it's important to call out that using kill dash kill is a last resort for terminating a
process. Since it doesn't do any cleanup, you could end up doing more harm to your files than
good. Let's say you had a process running that you didn't want to terminate but maybe you just
want to put it on pause. You can do this by sending the SIGTSTP signal for terminal stop, which
will put your process in a suspended state. To send this, you can use the kill command with the
flag dash TSTP. I'm going to run ps -x so you can see the status of the processes. We're just
going to put this process in a suspended state. So, kill dash TSTP. Now you can see the process
10754 is now in a suspended state. You can also send the SIGTSTP signal using the keyboard
combination, Control Z. To resume the execution of the process, you can use the SIGCONT for
continued signal. Let's go and look at the process table again. I'm going to go ahead and use that
command on this process.
Now, if I look at the process again, you'll see that the process status turned from a T to an S.
SIGTERM, SIGKILL, and SIGTSTP are some of the most common signals you'll see when you're
working with processes in Linux. Now that you have a grasp on these signals, let's use them to
help us utilize hardware resources better.
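The suspend-and-resume cycle described above can be reproduced with a throwaway process. A minimal sketch, assuming a GNU/Linux system with a procps-style ps:

```shell
sleep 300 &                    # a disposable process to experiment on
pid=$!
kill -TSTP "$pid"              # suspend it (terminal stop)
sleep 0.2                      # give the signal time to be delivered
ps -o stat= -p "$pid"          # state starts with T (stopped)
kill -CONT "$pid"              # resume it
sleep 0.2
ps -o stat= -p "$pid"          # state starts with S (interruptible sleep)
kill -KILL "$pid"              # clean up the demo process
wait "$pid" 2>/dev/null || true
```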
In mobile operating systems like iOS and Android, you won't be able to see a list of running
processes. Instead, you'll manage mobile apps that are running on the OS. When a mobile app is
running, there will be one or more processes associated with them, but those details will be
managed by the OS. Let's take a look at how you can manage your running mobile apps and
understand how they're using your mobile device's resources. As an IT support specialist, you
may help end users to troubleshoot slow mobile devices and manage their mobile apps. We'll
show you examples of what you might see, but you may have to refer to your device's
documentation if it doesn't look like these examples. First, let's check what apps are currently
running on a device by opening the app switcher in iOS. From the app switcher, I can see a list of
apps running on this iPhone. Now let's do the same thing in Android. Great, each of the apps that
I have launched is listed here. I can scroll through this list and switch to an app by tapping it.
Now I can use this calculator. The app that we're using is called the foreground app. All of these
other apps are in the background. What do you think is happening with the background apps
while I'm calculating how many bits are in this megabyte? The details can be a little complicated,
but the basic idea is this, as soon as it can, the OS will suspend background mobile apps. A
suspended app is paused, but not closed. The OS can occasionally wake a backgrounded app to
allow it to do some work, but it will try to keep apps suspended as much as it can. Let's go back to
the home screen. Now that I'm on the home screen, all of the apps are backgrounded and there
are no foreground apps. The calculator hasn't been closed. Each new app that you open will be
kept backgrounded and usually suspended. This helps the device use less battery power. And pro
tip, as an IT support specialist, it's pretty helpful to learn which apps on your mobile device use
the most battery power. If you have an app that the OS can't suspend because the app keeps
working in the background or it's frozen, then that can slow your device and use up battery. IT
support specialists often have to find these misbehaving apps and close or uninstall them. Let's
try closing some of the apps. From the iOS app switcher, we can swipe up on any of the
background apps, this will close the app. You can do the same thing in Android. In this version
of Android, we can also swipe over here and hit Clear all to close all of the apps at once. You
can troubleshoot a misbehaving app by closing apps one at a time and seeing if there is one app
in particular that slows the device down. Sometimes closing a misbehaving app will be all you
need to do to make your device run smoothly again. Start with the app that's currently being used
and see if that helps. The app switcher shows you the apps in order from most recently used to
least recently used. Work backwards through time, trying one app at a time. Remember that this
is not something that you should have to do very often to make your device work properly. With
current versions of iOS and Android, you shouldn't ever have to close an app for performance
reasons, unless the app is misbehaving. It can actually use up more battery to close and reopen an
app than it would if you had just left it running. If you discover that you have an app that's
routinely misbehaving, you can try resetting it completely by clearing its cache, like we saw in
an earlier video. If the device is still running sluggishly after closing all of the apps, the next
thing to try is to simply restart the device. And if restarting the device doesn't fix the
performance issues or it's only a temporary fix, then we need to dig deeper. Let's check the
battery use of the apps that we've installed. On the iPhone, I go to the Settings app > Battery. Here I can see how quickly the battery's been used since the last charge. I can
also see which apps are using the most battery. Let's look at the same settings in Android. Again,
I go to the Settings app, and from here, I'll choose Battery > More > Battery usage. From here, I
can see which apps are using the most battery. If I see an app that's using a lot of battery, then it
might not be working as it should, or maybe it's an app that uses a lot of battery to work. You'll
need to learn which apps the end user needs, to know whether or not the battery use is unusual.
Supplemental Readings for Mobile App Management
Check out the following links for more info:
• Switching apps in iOS
• How to force an app to close in iOS
• Find, open & close apps in Android
• Android processes and application Lifecycle
• iOS - Battery and performance for iOS
• Fix battery drain problems for Android

Process Utilization
You've been doing a great job and we're almost done with this module. Now that we spent all
this time learning about processes, like how to read them and how to manage them, when are we
ever going to use these newfound skills? Well, pretty much all the time. But in an IT support role, managing processes comes in handy the most when processes become a little unruly. Our systems usually have some pretty good ways of monitoring processes and telling us which processes might be having issues. In Windows, one of the most common ways to quickly take a peek at how the system resources are doing is by using the Resource Monitor tool. You can
find it in a couple of places, but we will launch it right from the start menu.
Once it opens, you'll see five tabs of information. One is an overview of all the resources on the
system. Each other tab is dedicated to displaying information about a particular resource on the
system. You'll also notice that Resource Monitor displays process information too along with
data about the resources that the process is consuming. You can get this performance information
in a slightly less detailed presentation from Process Explorer. Just find the process you are
interested in, right-click, and choose Properties.
From there, pick the Performance Graph tab. You can see quick visualizations of the current CPU usage, memory (indicated by private bytes), and disk activity (indicated by I/O). But how can we get this
information from the command line? I am glad you asked. There are several ways to get this
information from the command line but we will focus on a PowerShell centric one, our friend
Get-Process. We know that if we run Get-Process without any options or flags, we get process
information for each running process on the system. If you check out the column headings at the
start of the output, you'll see things like NPM(K). Values in this column represent the amount of non-paged memory the process is using, and the K stands for the unit, kilobytes. You can see
Microsoft's documentation for a full write up of each column in the next supplemental reading.
This is useful but it is a lot of information. It can be really helpful to filter down to just the data
you are interested in. Let's say you wanted to just display the top three processes using the most CPU; you could write this command:
Get-Process | Sort CPU -Descending | Select -First 3 -Property ID, ProcessName, CPU
And just like that, we get the top three CPU hogs on the system. This command might be a little hard to understand, so let's go through it step by step. First, we call the Get-Process cmdlet to
obtain all that process information from the operating system. Then, we use a pipe to connect the
output of that command to the sort command. You might remember pipes from some Linux
examples earlier. We sort the output of Get-Process by the CPU column descending to put the
biggest numbers first. Then, we pipe that information to the select command. Using select, we
pick the first three rows from the output of sort and pick only the property ID, name, and CPU
amount to display. Now that you've got some knowledge about both the command line and
graphical tools Windows provides for investigating resource usage, let's have a look at Linux
Resource Monitoring.
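As a side-by-side preview of the Linux tools coming next, a similar "top three CPU consumers" query can be written with ps. This is a rough sketch, assuming the GNU procps version of ps:

```shell
# List the three processes using the most CPU, highest first
# (a Linux analogue of the Get-Process pipeline shown above).
# --sort=-%cpu sorts the %cpu column descending; head keeps the
# header line plus the first three process rows.
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 4
```

The idea is the same as in PowerShell: produce all process records, sort by CPU, then select the first few rows and columns of interest.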
Supplemental Reading Resource Monitoring in Windows
For more information about system diagnostics processes in Windows, check out the link
here.
A useful command to find out what your system utilization looks like in real time is the top
command. Top shows us the top processes that are using the most resources on our machine. We
also get a quick snapshot of total tasks running or idle, CPU usage, memory usage, and more.
One of the most common places to check when using the top command are these fields here,
percentage CPU and percentage mem. This shows what CPU and memory usage a single task is taking up. To get out of the top command, just hit the Q key for quit. A common situation you
might encounter is when a user's computer is running a little slow. It could be for lots of reasons,
but one of the most common ones is the overuse of hardware resources. If you find that top shows you a certain task is taking up a lot of memory or CPU, you should investigate what the
process is doing. You might even terminate the process so that it gives back the resources it was
using. Another useful tool for resource utilization is the uptime command. This command shows
information about the current time, how long your system's been running, how many users are
logged on, and what the load average of your machine is. From here, we can see the current time
is 16:43 or 4:43, our system has been up for five hours and eight minutes, and we have one user
logged in. The part that we want to talk about here is the system load average. This shows the
average CPU load in 1, 5, and 15 minute intervals. Load averages are an interesting metric to
read. They become super useful when you need to see how your machine is doing over a certain
period of time. We won't go deeper into load averages here, but you should read about them in the next
supplemental reading. Another command that you can use to help manage processes is the lsof
command. Let's say you have a USB drive connected to your machine, you're working with some
of the files on the machine, then when you go to eject the USB drive, you get an error saying,
device or resource busy. You've already checked that none of the files on the USB drive are in
use or opened anywhere, or so you think. Using the lsof command, you don't have to wonder. It
lists open files and what processes are using them.
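Under the hood, tools like lsof discover this by reading each process's file-descriptor table under /proc. Here's a minimal sketch of that idea, holding a temporary file open ourselves so there is something to find:

```shell
# Create a file and keep it open with a background tail process.
tmpfile=$(mktemp)
tail -f "$tmpfile" & holder=$!
sleep 0.2
# Scan the /proc/<pid>/fd/* symlinks for our file, like lsof does.
for fd in /proc/[0-9]*/fd/*; do
  if [ "$(readlink "$fd" 2>/dev/null)" = "$tmpfile" ]; then
    pid=${fd#/proc/}
    echo "held open by PID ${pid%%/*}"
  fi
done
kill "$holder"
```

In practice you'd just run `lsof <file or mount point>` and read the PID column, but the sketch shows where that information comes from.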
This command is great for tracking down those pesky processes that are holding open files. One
last thing to call out about hardware utilization is that you can monitor it separately from
processes. If you just wanted to see how your CPU or memory is doing, you could use various
commands to check their output. This isn't immediately useful to see on a single machine, but
maybe in the future, if you manage a fleet of machines, you might want to think about
monitoring the hardware utilization for all of your machines at once. We won't discuss how to do
this, but you can read more about it in the supplemental reading. You've done some really great work in this module. You learned a lot about how to read process information and manage processes, something that will be vital to know when troubleshooting issues as an IT support specialist. The next assessments will test you on that new process management knowledge. Then, drum roll please, we'll be on to the last and final lesson of this course. We'll cover some of the essential tools that are used in the role of an IT support specialist.
Supplemental reading for Resource Monitoring in Linux
Resource Monitoring in Linux
Balancing resources keeps a computer system running smoothly. When processes are using too
many resources, operating problems may occur. To avoid problems from the overuse of
resources, you should monitor the usage of resources. Monitoring resources and adjusting the
balance is important to keep computers running at their best. This reading will cover how to
monitor resources in Linux using the load average metric and common commands.
Load in Linux
In Linux, a load is the set of processes that a central processing unit (CPU) is currently running
or waiting to run. A load for a system that is idle with no processes running or waiting to run is
classified as a 0. Every process running or waiting to run adds a value of 1 to the load. This
means if you have 3 applications running and 2 on the waitlist, the load is 5. The higher the load,
the more resources are being used, and the more the load should be monitored to keep the system
running smoothly.
Load average in Linux
The load as a measurement doesn’t provide much information as it constantly changes as
processes run. To account for this, an average is used to measure the load on the system. The
load average is calculated by finding the load over a given period of time. Linux uses three
decimal values to show the load over time instead of the percent other systems use. An easy way
to check the load average is to run the uptime command in the terminal. The following image
depicts the load values returned from the uptime command.

The command returns three load averages:


1. Average CPU load for last minute, which corresponds to 0.03. This is a very low value
and means an average of 3% of the CPU was used over the last minute.
2. Average CPU load for last 5 minutes corresponds to the second value of 0.03. Again,
this can be thought of as, on average, 3% of the CPU was being used over the past five
minutes.
3. Average CPU load for last 15 minutes corresponds to 0.01, meaning on average, 1% of
the CPU has been used over the last 15 minutes.
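The same three numbers that uptime prints come straight from the kernel, so you can also read them directly from /proc/loadavg; a quick sketch:

```shell
# /proc/loadavg holds the 1, 5, and 15 minute load averages, followed
# by running/total task counts and the most recently assigned PID.
cat /proc/loadavg
# Keep just the three averages:
cut -d ' ' -f 1-3 /proc/loadavg
```

Reading the file is handy in scripts, since it avoids parsing uptime's more verbose output.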
Top
Another way you can monitor the load average in Linux is to use the top (table of processes)
command in the terminal. The result of running the top command is an in-depth view of the
resources being used on your system.

The first line displayed is the same as the load average output given by the uptime command.
It lists what percent of the CPU is running processes or has processes waiting. The second line
shows the task output and describes the status of processes in the system. The five states in the
task output represent:
1. Total shows the sum of the processes from any state.
2. Running shows the number of processes currently handling requests, executing
normally, and having CPU access.
3. Sleeping shows the number of processes awaiting resources in their normal state.
4. Stopped shows the number of processes ending and releasing resources. The stopped
processes send a termination message to the parent process. The process created by the
kernel in Linux is known as the “Parent Process.” All the processes derived from the
parent process are termed as “Child Processes.”
5. Zombie shows the number of processes waiting for its parent process to release
resources. Zombie processes usually mean an application or service didn't exit gracefully.
Having a few zombie processes is not a problem.
The top command gives detailed insight on usage, allowing an IT professional to gauge the availability of
resources on a system.
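To capture this output from a script rather than the interactive view, top can run in batch mode; a small sketch, assuming the procps version of top:

```shell
# -b runs top non-interactively (batch mode); -n 1 does one iteration.
# The first few lines are the uptime/load-average summary and the
# task-state counts described above.
top -bn1 | head -n 5
```

Batch mode is useful for logging snapshots over time, for example from a cron job.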
Key Takeaways
Computers need to balance the resources used with the resources that are free. Ensuring that the
CPU is not overused means that a system will run with few issues.
• The load in Linux is calculated by adding 1 for each process that is running or waiting to
run.
• Monitoring the average load of Linux allows an IT professional to identify which
processes are running to determine what to end in order to balance the system. A
balanced system runs with fewer problems than one that is using too high of a percent of
resources.
• The load average uses three time lengths to determine the use of the CPU: one minute,
five minutes and fifteen minutes.
The top command can give detailed information about the resource usage of tasks that are
running or waiting to run.

Week 6 Operating Systems in Practice


Remote Access
You've made it all the way to the last module of this course. Great job. So far, you've learned
how to navigate both the Windows and Linux operating systems, how to set up and manage
users, along with how to manage software. You also learned how to work with disks and file
systems on top of working with processes and hardware resources. That's some seriously great
work. The skills you learn in this course are essential for building a solid technical foundation as
an IT Support Specialist. Now let's finish strong. In the next few lessons, we're going to wrap up
the course with some of the more practical aspects of operating systems that you'll use all the
time in IT support. Let's get started.
In this lesson, we're going to talk about an important part of computing that makes working in IT
support a little easier. Actually, it makes things a lot easier for just about anyone. Picture this,
you're on your way to an important meeting. You've been rehearsing for this presentation all
week and now you're ready to show the big wigs what you got. But wait, the slide deck, where is
it? It's not on your laptop. Where could it be? It turns out you forgot your only copy on your
desktop at home. It's too late now to turn around and get it, so you sit there dreading the
inevitable. But wait a minute, suddenly you remember that you have a remote connection setup
from your laptop to your desktop. You use this connection to log into your computer at home,
and just as if you are sitting at home, you're able to grab the file from your desktop and copy it to
your laptop. You then proceed to give one amazing presentation. Consider another scenario, you
bought a computer at a store and you're having a lot of issues with it. The store has a computer help
desk that can help you with issues, but it's after hours and the store's closed. You really need to
get your computer issue fixed, so what are your options? Fortunately, the store provides 24-7
tech support online. Now, instead of waiting until a physical store is open again, you can reach a
tech online and have them help you with your issue through a remote connection. Remote
connection makes working in an IT support role much easier since it allows us to manage
multiple machines from anywhere in the world. In this lesson, we're going to learn about remote
connection. SSH or secure shell is a protocol implemented by other programs to securely access
one computer from another. To use SSH, you need to have an SSH client installed on the
computer you're connecting from along with an SSH server on the computer you're trying to
connect to. Keep in mind that when we say SSH server, we don't mean another physical machine
that serves data. An SSH server is just software. On the remote machine, the SSH server is
running as a background process. It constantly checks if a client is trying to connect to it, then
will authenticate its requests. The most popular program to use SSH within Linux is the
OpenSSH program. We'll talk about how to use SSH from a Windows machine using the popular
Open Source program PuTTY. For now, let's just talk about what happens when you use SSH.
We're going to show you an example of SSH into a remote machine. So first things first, to login
to a remote machine, we have to have an account on that computer, we also need the host name
or IP address of that computer. Let's test this. So: ssh cindy@ followed by the IP address.
We get this message, the authenticity of host and then the IP address can't be established. This
message is just saying we've never connected to this machine before, so our SSH client can't really verify that the machine we want to connect to is the one we think it is. But we can verify this is the right machine ourselves. So let's just go ahead and type yes.
Now, this host gets saved to the computer as a known host, so we won't get this message again
when we try to login to it. Now that we're connected through SSH, any of the text commands
that we type are sent securely to the SSH server. From here, you can even launch an application
that'll let you see a GUI instead of working directly in the shell. You can read more about how
to do that in the supplemental reading. We can connect to SSH using passwords as you saw
earlier. This way of authenticating to a remote machine is pretty standard, but it's not super secure.
The alternative is using an SSH authentication key. SSH keys come in a set of two keys called
private and public keys. You can think of them as actual physical keys to a special safe. You can
use one key to lock the safe, but it won't unlock it. The other key can then only unlock the safe
but not lock it. That's basically how public and private keys work. You can lock something with
the public key, but you can only unlock it with a private key and vice versa. This ensures that
whatever is in the safe is available to only those with the public and private keys. You'll learn
about the technical details of public and private keys in our IT security course. Don't worry if
this doesn't make sense right now, it will. That's basically how SSH works. Not too scary, right?
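You can generate such a key pair yourself with the ssh-keygen tool that ships with OpenSSH. A quick sketch, where the file name and empty passphrase are just for demonstration:

```shell
# Remove any leftover demo keys so ssh-keygen doesn't prompt.
rm -f /tmp/demo_key /tmp/demo_key.pub
# Create an ed25519 key pair; -N "" sets an empty passphrase
# (fine for a demo, but use a real passphrase or an agent in practice).
ssh-keygen -t ed25519 -f /tmp/demo_key -N "" -q
ls -l /tmp/demo_key /tmp/demo_key.pub   # private key and public key
```

The public half (demo_key.pub) is what gets copied to the remote machine's authorized_keys file; the private half never leaves your computer.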
Another way that you can connect securely to a remote machine is through a VPN. A VPN is a
Virtual Private Network, it allows you to connect to a private network like your work network
over the internet. Think of it as a more sophisticated SSH with a lot more setup. It allows you to
access resources like shared file servers and network devices as if you were connected to your
work network. Spoiler alert, we'll also touch upon the technical details behind VPN in the IT
security course. We've covered a lot about remote connections and how they work. We'll talk
more about the popular remote connection programs for Windows and Linux and how to set
them up in the system administration course.
The ability to make remote connections is equally useful on Windows computers. PuTTY is a
free open-source software that you can use to make remote connections through several network
protocols including SSH. You can visit the PuTTY website to download the entire software
package with a Microsoft installer. Those are the MSI files we talked about earlier. Or you can
choose a specific executable which provides the functionality you're after, like putty.exe. The
PuTTY downloads page is linked in the next supplemental reading in case you want to check it
out. Once you've downloaded and installed PuTTY, you can use it by launching the GUI.
A window will appear showing you the basic options for your connection. Make a note of the
host name, port, and connection type options. By default, the port is set to 22 which is the default
port the SSH protocol uses and the connection type is set to SSH. All you need to do is type in
the host name or IP address of the computer you want to connect to, then click Open to start up a
new SSH session.
Now I've SSHed into a remote computer. Running PuTTY from the GUI isn't your only option.
You can also use it on the command line. Open a PowerShell prompt and type out the application name like this, then tell it you want to connect via SSH by adding the -ssh option. You can also provide the user and address in the form user@IP_address, specifying the port at the end. Altogether, the command would look something like this.
PuTTY also comes with a tool called Plink or PuTTY link which is built into the command line
after PuTTY is installed. You can use Plink to make remote SSH connections too. SSH can be
super useful especially if you want to connect from a computer running Windows to a Linux-
based operating system running remotely. Microsoft actually provides another way to connect to
other Windows computers called the Remote Desktop Protocol or RDP. There are also RDP
clients for Linux and macOS too, like RealVNC and Microsoft Remote Desktop on Mac. We'll add some
links to these clients in the supplemental reading. RDP provides users with a graphical user
interface to remote computers, provided the remote computer has enabled incoming RDP
connections. A client program called the Microsoft Terminal Services Client or mstsc.exe is used
to create RDP connections to remote computers. You can enable remote connections on your
computer by opening up the start menu, right clicking on "This PC", then selecting "Properties".
From there, select "Remote Settings", and then pick an option from the remote desktop portion
of the panel. There are some security implications that come with allowing people to remotely
connect to your computer. You should only let users who you trust do this. Typically, in an
industry setting, these kinds of settings are usually set by the system administrator for the
company's computers that connect to the network. Once you've allowed connections on the
remote computer and provided you're on the list of users allowed to access it, you can use the
Remote Desktop Protocol client mstsc.exe to connect to it from anywhere else on the network.
You can launch the RDP client in a few ways. You can type "mstsc" at the run box or look up
Remote Desktop connections in the Start Menu.
Once you've launched the client, it'll ask for the name or IP address of the computer you want to
connect to. The Windows RDP client can also be launched from the command line, where you
can specify more parameters like slash admin if you want to connect to the remote machine with
administrative credentials. We've linked to the RDP documentation in the supplementary reading
in case you want to learn more.
Supplemental reading for Remote Connections in Windows
Remote Connections in Windows
Connecting securely to remote machines is an important task for deploying services. Secure
Shell (SSH) was developed in the 1990s to address this issue. This reading will cover what SSH
is, the features it enables, and common SSH clients and their key features in Windows.
SSH
Secure Shell (SSH) is a network protocol that gives users a secure way to access a computer over
an unsecured network. SSH enables secure remote access to SSH-enabled network systems or
devices and automated processes. It also allows for secure remote access to transfer files, use
commands and manage network infrastructure.
OpenSSH
OpenSSH is the open-source version of the Secure Shell (SSH) tools used by administrators of
Linux and other non-Windows systems for cross-platform remote systems management. OpenSSH has
been added to Windows (as of autumn 2018) and is included in Windows Server and Windows
client.
Common SSH Clients
An SSH client is a program that establishes secure and authenticated SSH connections to SSH
servers. The following common SSH clients are Windows compatible:
PuTTY is a terminal emulator and the inspiration for all subsequent remote access systems.
• Features: This tool offers Telnet, SSH, Rlogin (A remote login tool for use with UNIX-
based machines on your network), and raw socket connections plus Secure File Transfer
Protocol (SFTP) and Secure Copy Protocol (SCP) for file transfers between two hosts.
• Protocols: SCP, SSH, Telnet, rlogin, and raw socket connection.
SecureCRT is a remote access system available for macOS, Linux, iOS, and Windows.
• Features: It offers terminal emulation and file transfer through an SSH tunnel. It enables
connections through many protocols and has usability features like tabbed sessions and
customizable menus.
• Protocols: SSH1, SSH2, Telnet, and Telnet/SSL
SmarTTY is a free SSH client with a multi-tabbed interface to allow multiple simultaneous
connections.
• Features: This tool includes SCP capabilities for file transfers. It also includes usability
features like auto-completion, file panel, and package management.
• Protocols: SSH and SCP
mRemoteNG is a remote desktop system with a tabbed interface for multiple simultaneous
connections.
• Features: The system enables connections with Remote Desktop Protocol (RDP), Telnet
(two-way text communication via virtual terminal connections), Rlogin, Virtual Network
Computing (VNC, a graphics-based desktop sharing system), and SSH.
• Protocols: RDP, VNC, SSH, Telnet, HTTP/HTTPS, rlogin, Raw Socket Connections,
Powershell remoting
MobaXterm is a remote access system built for Unix and Linux, and Windows.
• Features: Features include an embedded X server (a graphical interface akin to
windows), X11 forwarding (a way to run applications over a remote connection), and
easy display exportation to let X11 applications know which screen to run on.
• Protocols: SSH, X11, RDP, VNC
Key Takeaways
Secure Shell (SSH) is a way to securely connect two remote machines over an unsecured
network.
• You can use SSH to remotely control, transfer files from, and manage network resources
for SSH-enabled clients.
• OpenSSH is an open-source version for cross-platform management.
• There are many common Windows-compatible SSH clients with various features to fit any
need, including PuTTY, SecureCRT, SmarTTY, mRemoteNG, and MobaXterm.
Resources
• Download PuTTY
• Download SecureCRT
• Download SmarTTY
• Download mRemoteNG
• Download MobaXterm
Have you ever tried sending a file over to a coworker? What methods do you use? Do you attach
it to an email and send it, or do you copy the file to a USB drive and then transfer the file that
way? There are lots of ways to transfer files. In this lesson, we're going to talk about a method
that uses remote connection. SCP, or secure copy, is a command you can use in Linux to copy
files between computers on a network. It utilizes SSH to transfer the data. So just like you would
SSH into a machine you can send a file that way.
Let's see this in action. Let's say you want to copy over a file from our computer to another
computer. To do this, we can run the scp command with a few flags. So: scp Desktop/myfile, over to cindy@ followed by the host and destination path. In this command, we run the scp command with the path of the file we want to transfer, then the user account, hostname, and path of where we want to copy the file to. Now, it
prompts us for the login information of the computer we want to send the file to. After we enter
this, we can verify that the file successfully copied over. And there it is. The SCP command is a
super useful tool if you need to copy files to and from computers in a network. You can read
more about the command by checking out the Manpage.
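The general shape of the command is scp <source> <user>@<host>:<destination>. As a sketch you can run without a second machine, scp also accepts two local paths; the remote form is shown in a comment with a placeholder address:

```shell
# Make a small file to transfer.
echo "my presentation" > /tmp/myfile
# Local-to-local copy; over a network the same command would look like:
#   scp /tmp/myfile cindy@<hostname or IP>:/home/cindy/
scp -q /tmp/myfile /tmp/myfile_copy
cat /tmp/myfile_copy
```

When a remote host is given, scp prompts for that account's credentials (or uses your SSH key) and then transfers the file over the encrypted SSH channel.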
How can we share files and data over the network on a Windows computer? Well it just so
happens that the PuTTY program we talked about a couple lessons back supports the SCP
protocol. The PuTTY package comes with a tool called the PuTTy Secure Copy Client, or
pscp.exe. You can use it to copy files in a very similar way to the Linux SCP command. Let's
take a look. Pscp.exe, and I'm going to grab a file from my desktop. Then I'm going to copy it to my Linux workstation, and add the location of where I want to copy it to. Now, if we go back to my Linux workstation, we can see that it was copied. Using PuTTY or
SCP to transfer files can be a little time-consuming, especially if you need to transfer files to
multiple machines. As an alternative, Windows came up with a built in mechanism to share files
by using the concept of shared folders. Shared folders do pretty much what you'd expect from
their name. You tell Windows you want to share a folder with a person or group of people, then
drop some files into it. Anyone you've shared the folder with can then access those files. Sharing
folders in Windows is easy. Just right-click on the folder you want to share, then mouse over the Share with option, and then pick specific people from here.
From here, you can add the individual users or groups you want to share the folder with. There's
even an option to add everyone to the sharing permissions, which might be convenient, but isn't
super secure. Once you've shared the folder, you can access it from other computers. Start by
opening up This PC, Then going into the Computer tab. And from here, you can map the folder
directly to your computer with the map network drive option.
Finally, on another computer, you can visit it directly from the run box by typing in two backslashes, the computer name, and then a backslash and the folder name that you mapped it to (\\computername\foldername). You
might be interested to know that you can share folders from the command line too, using the net
share command. Net share lets you do the same thing as the GUI sharing workflow, and you'll
need to specify what kind of permissions you'd like to give which users. Let's say you wanted to
give everyone on your network full permissions to a folder called shareme. You could execute
this command from an elevated or administrator level PowerShell prompt.
Net share shareme= followed by the folder's path under Users, then /grant:everyone,full. Users will be able to access the shared folder by using the same methods we talked about before. The net share command can also be
used to list the currently shared folders on your computer by executing it without any arguments.
Just like this. If you'd like more information on net share and its abilities, check out the
documentation in the supplemental reading.
Supplemental Reading for Remote Connection File Transfer in Windows
For more information about managing shared resources in Windows, check out the link
here.

Virtualization
We've talked a little bit about virtual machines before. We've also been using virtual machines
throughout the quick lab assessments. In this lesson, you're going to learn how to install, manage
and remove a virtual instance. A virtual instance is just a single virtual machine. We're going to
be using the popular opensource virtualization software, Virtual Box, to manage virtual
instances. You'll find a link to download Virtual Box in the reading right after this lesson. I'm
currently using a Windows machine and in this lesson we're going to set up and virtualize an
Ubuntu instance. I've already installed VirtualBox on my machine. So let's go ahead and launch
this application.
We won't go through all the menu items from VirtualBox, but we will talk about some of the
main ones. First up, how do we install a virtual instance? I've already pre-downloaded an image
of Ubuntu from their website and saved it onto my desktop, but I have to install it somehow. Well,
to install this, I'm just going to click this new button here to create a new VM. I'm going to give
my VM a name and select the type and version of my OS. Just going to stick with the defaults.
Next, it asks how much RAM I want to dedicate to this VM. One gigabyte is more than enough
for me so I'm just going to keep this and then continue. Now it asks how much hard drive space I
want to dedicate to this VM. I'm just going to keep the default of 10 gigabytes and click Create.
We're going to keep the default values here and just click through to Create. Awesome. You
can read more about these options in the supplemental reading. Now in my menu here I can click
Start and it'll start the VM. It will prompt me to select a media to launch from, similar to booting
a USB drive with the OS image on it. So I'm just going to select the image I downloaded.
And from here, the installation starts up. That's pretty much it. Okay, what if we decide we want
to use more than one gigabyte for the OS? On a physical machine, we'd have to buy more RAM
and install it. But since we're using a VM, it's as easy as changing a setting. To modify hardware
resource allocation to a VM, all we need to do is right click on the VM then click settings. From
here, we'll be able to change how much RAM we want along with other settings. We won't
discuss the specifics of these settings, but you can see how simple it is to modify a VM instance.
Now, what if we decide we don't want to use this VM anymore? If this is a physical machine,
we'd have to worry about where to store or recycle the hardware. For virtual machines though, all
we need to do is right click and select remove.
From here, it'll ask if we want to remove all files including the VM install itself or just remove it
from the list of VMs. Let's go ahead and delete all files. And that's it in a nutshell. Super simple.
If you want to learn more about how to use VirtualBox or other virtualization software, don't
forget to check out the supplemental reading.
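As an aside, the whole create/configure/remove workflow can also be driven from VirtualBox's command-line tool, VBoxManage. This is just a sketch; the VM name, OS type, and sizes below are example values:

```
# Create and register a new VM (name and OS type are examples)
VBoxManage createvm --name "ubuntu-vm" --ostype Ubuntu_64 --register

# Dedicate 1 GB of RAM, like the memory setting in the GUI
VBoxManage modifyvm "ubuntu-vm" --memory 1024

# Create a 10 GB virtual hard disk for the VM
VBoxManage createmedium disk --filename ubuntu-vm.vdi --size 10240

# Start the VM, and later remove it along with all of its files
VBoxManage startvm "ubuntu-vm"
VBoxManage unregistervm "ubuntu-vm" --delete
```

This is handy when you need to script VM management instead of clicking through the GUI.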
Supplemental reading for Virtual Machines
Virtual Machines
Virtualization creates a simulated computer environment for running a complete operating
system (OS). The simulated computer environment is called a virtual machine (VM). On a VM,
you can run an OS as if it were running directly on your physical hardware. This reading
explains how virtual machines work and introduces some tools for creating a VM.

How VMs work


Virtual machine software creates a virtualized environment that behaves like a separate computer
system. The VM runs in a window on the operating system of your physical computer. The
operating system that runs on your physical computer is called the “host” OS. Any operating
systems running inside a VM are called “guests.” In the virtual environment, you can install your
guest OS, and it will function like it’s running on a physical machine. Whenever you want to use
the guest OS, open your VM software and run the guest OS in a window on your host desktop.
Using a virtual machine lets you experiment with different operating systems without having to
uninstall or replace your host OS. For example, you can try a Linux OS as a VM on your
Windows computer to see how the two OSs compare, or you can use a VM on your Linux
system to run a Mac software package.
Another advantage of VMs is that they are isolated from the rest of your system. Software
running inside a VM doesn’t affect the host OS or other VMs on your system. This isolation
makes VMs a safe place to test software even when there is a risk of negative effects on a
system.
A key advantage of VMs is significant reduction in hardware and electricity costs. You can run
many VMs on a single host by dividing available hardware resources among each virtualized
environment. Modern computer hardware offers a lot of computing power in a single device. But
a typical OS will require only a fraction of the computing resources available in a computer. This
means you won’t have to run those systems on several physical computers that are only partially
used.
VM software divides hardware resources among virtualized environments by designating a
portion of resources as virtual resources. When you create a VM you may be asked to specify the
amount of physical hard drive space you want to set apart for your VM to use. The VM software
will create a virtual hard drive for your VM of the specified size. VM software may have you
also specify the amount of RAM you want to allocate for your VM. After you create the VM,
you can usually adjust resource allocations. If you want more drive space or RAM for your VM,
you can adjust the settings in the VM software to allocate more of those resources.
VM software
Some common Virtual Machine software used to create VMs:
• VirtualBox runs on Windows, Linux, Macintosh, and Solaris hosts. VirtualBox supports
various guest operating systems, including Windows, Linux, Solaris, OpenBSD, and
macOS. VirtualBox is open-source software available for free on the VirtualBox
download page.
• Hyper-V is Microsoft's virtualization platform. It is available as an integrated feature on
the Windows operating system. Hyper-V supports Windows, Linux, and FreeBSD virtual
machines. It does not support macOS. See Microsoft’s Hyper-V for Windows
documentation for information on how to access Hyper-V on recent versions of
Windows.
• VMware desktop software runs on Windows, Linux, and macOS hosts. VMware
Workstation Player is the VMware software that lets users run multiple operating systems
on a single physical personal computer. It is freely available for non-commercial use on
the VMware Workstation Download page.
• Red Hat Virtualization (RHV) is a business-oriented platform developed for
virtualization in enterprise computing contexts. RHV supports a variety of guest systems.
Red Hat charges an annual subscription fee for product access, updates, patches, and
technical support. See Red Hat’s RHV Datasheet for information on how to implement
RHV on existing hardware infrastructures.
Key takeaways
Virtualization lets you create a simulated computer environment for running a complete
operating system.
• Virtual machine (VM) software creates a virtualized environment that behaves like a
separate computer system.
• Virtualization lets you experiment with different operating systems without having to
uninstall or replace your host OS and provides a safe place to test software.
• VM software divides hardware resources among virtualized environments by allocating
portions of available resources as virtual resources.
• A variety of Virtual Machine software are available for creating VMs.
More resources
For step-by-step instructions on how to create a virtual machine using VirtualBox, see the
VirtualBox manual.

Logging
Remember from the first course in our program, Technical Support Fundamentals, that we
introduced the concept of logs? A log is like your computer's diary, it records events that happen
on your system. What kind of events? Well, pretty much everything. Like when your system shuts
down, when it starts up, when a driver's loaded, when someone logs in. All of these things can be
written to a log. It's also written with a lot of detail. Logs tell you the exact time that an event
occurred, who caused the event, and more.
We'll be looking into some sample log snippets in the upcoming lessons to get a better sense of
how to read one. The act of creating log events is called logging. Your system does a pretty good
job of logging events right out of the box. In most systems, there is a service that runs in the
background and constantly writes events to logs. These systems are customizable so you can log
any specific field you want, but by default it logs all the essentials. By the end of this lesson,
you'll learn where all the important logs are kept on the Windows and Linux OSes. You'll also
learn how to read a log and utilize common troubleshooting practices when it comes to logs.
When you're working in IT support, you'll need to gather as much data as you can to
troubleshoot an issue. Logs tell us important things like errors that occurred, changes that were
made, etc. They are a reliable source of information.
Similar to how we can jot down our life events in a journal, events are also logged on our
machines. In Windows, the events logged by the operating system are stored in an application
called the Event Viewer. Whether you're trying to figure out why a computer game keeps
crashing, or troubleshooting login or access problems, or just satisfying your curiosity about
what's going on in your system, the Event Viewer is a great first stop.
Let's take a look at some of the information it collects, and how you can use the Event Viewer to
get answers you're looking for. You can launch the Event Viewer either from the start menu or
by typing eventvwr.msc into the Run box. The default view of the Event Viewer shows a
summary of potentially important recent events. In our case, this isn't super interesting, since
we're more concerned with any issues that occurred. Instead, let's take a look at the left-hand
pane, where we can see a few different event groupings.
The first group we see is called Custom Views. The Event Viewer records a lot of information
about the system. So it can sometimes be a little difficult to tease out the signal, like recent
events, from the noise or the stuff you don't care about. This is where the concept of custom
views comes in handy. With a custom view, you can create a filter that will look across all the
event logs the Event Viewer knows about and tease out just the information you're interested in.
Let's say we wanted to see only events of Error severity or higher that were logged in the last
hour. To do this, click on the Create Custom View option in the right-hand Actions pane. This will
bring up a tab called Filter. From there, click the error and critical checkboxes. We're going to
change the logged drop down menu to last hour.
In the Event logs menu, we're going to select just the Windows Logs, then click OK. Then
we're going to give our view a new name. Click OK once more. Once you're done, you'll see a
new view come up under Custom Views, where only the events that matched your filter are
displayed. The other two categories of logs you'll see in the left-hand navigation pane are
Windows Logs and Applications and Services Logs. The Windows Logs category contains event logs
that are generally applied to the whole operating system.
Let's say you're having an issue with a driver failing during startup. The log called System would
be a good place to start. If you want to see who's been accessing the computer, then you'd begin by
investigating the Security log. The other category is called Applications and Services Logs. This
category contains logs that track events from a single application, or operating system
component, instead of the system-wide events of the Windows Logs category. For example, if
you're having trouble with PowerShell and wanted to get more information about it, checking out
the PowerShell log under Applications and Services Logs would be a great first step. Regardless
of its category, each line in a given log in the Event Viewer represents an event. Each event
contains information grouped in columns about the event, like the log level. Information is the
lowest level and critical is the highest. You could also find the Date and Time the event
occurred.
Selecting an event will bring up more detailed information in the bottom pane of the Event
Viewer. This can help you dig into troubleshooting or even give you context for a bug report.
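As an aside, you can pull the same kind of filtered view from PowerShell with the Get-WinEvent cmdlet. This is a sketch roughly equivalent to the custom view we built; levels 1 and 2 correspond to Critical and Error:

```
# Query the System log for Critical (1) and Error (2) events
# that were logged in the last hour
Get-WinEvent -FilterHashtable @{
    LogName   = 'System'
    Level     = 1, 2
    StartTime = (Get-Date).AddHours(-1)
}
```

This only works on Windows, but it's useful when you want to script a check instead of clicking through the Event Viewer.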
The Event Viewer is a super helpful tool for IT support specialists. It can provide you with a lot
of really detailed information about the problems any software or hardware might be
experiencing on your system. There's a lot of information in there though. So don't forget about
its custom views and filtering capabilities. More importantly, don't hesitate to poke around the
tool and get used to finding things in its interface. You'll have fun and you'll learn a lot. Next
stop, the wild world of Linux logs.
Logs in Linux are stored in the /var/log directory. Remember that the /var directory stands for
variable, meaning, files that constantly change are kept in this directory, and it turns out that logs
are constantly changing. If we look at the /var/log directory with ls, it might seem a little
intimidating. Don't worry. Each of these log files store specific information that we can figure
out by their file names. Let's check out some of the common ones you'll look at,
/var/log/auth.log, authorization and security-related events are logged here. /var/log/kern.log,
kernel messages are logged here. /var/log/dmesg, system startup messages are logged here. If
you encounter an issue at, let's say, boot up, this is a good place to check for information. It
might get a little tiresome to open up each of these log files to find information about events.
Luckily, there are also log files that combine the information of other log files. The downside is
that these files are usually very large. If you have a pretty good idea of where a problem might
lie, you might want to opt for the smaller and more specific log file. The one log file that logs
pretty much everything on your system is the /var/log/syslog file. The only thing that syslog
doesn't log by default are auth events. When troubleshooting issues with user machines,
/var/log/syslog will usually contain the most comprehensive information about your system, so
that should be your first stop. Log files output a lot of events. By that logic, they take up a lot of
data that has to be stored on our machine somewhere. We generally just want to see the latest
events on our system, so we don't need to overload our disk with all this information. Luckily, our
systems also do a good job of cleaning out log files to make room for new ones. They use
something called log rotation to do this. In Linux, the utility that rotates logs is called logrotate. You
might want to investigate an event that happened a month ago, so you can change your log
rotation settings to make sure not to delete events that are that old. We won't discuss how to
work with log rotation, but you can read more about it in the supplemental reading. We've talked
about logging in the context of a single machine, but if you find yourself managing many
systems and want to be able to parse their logs in one central location, you can use something
called centralized logging. We won't talk about how to do this, but if you're interested in setting
up a centralized server, check out the next supplemental reading. Okay, enough talk about what
logs are. Let's actually look at some real ones. Whoa, this looks super intimidating. But don't
worry, we're not going to be reading all of this. In the next lesson, we'll teach you how to
troubleshoot using logs. But for now, let's just read one line in syslog and parse what it says.
The first field here is the time stamp when the event occurred. Pretty straightforward. But
depending on the log, you might see a time format you aren't familiar with like a long string of
numbers such as 1501538594. Time stamps found in a format like this are referred to as Unix or
epoch time. At first, you might be baffled by this. Why would you represent time in this way?
And just what exactly is the Unix epoch? Unix epoch time represents the
number of seconds since midnight UTC on January 1st, 1970, a sort of zero hour for Unix-based
computers to anchor their concept of time. This means that 1501538594 represents the date and
time Monday, July 31st, 2017 at 3:03:14 PM Pacific Time. Why midnight on January 1st, 1970? Is
that date the birthday of Unix? Or does it mark some other significant event? The actual answer
is much simpler. The original engineers of Unix at Bell Labs just picked it because it was
convenient. So, don't be caught off guard if you see a time stamp like this. The next field is the
host name of the machine the event occurred on. Next up is the service that the log event is
referring to. And last is the event that occurred. In the next lesson, we'll show you some common
troubleshooting tactics when using logs.
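If you run into an epoch timestamp, you don't have to do the math yourself. Assuming GNU date (the -d flag is GNU-specific), you can convert it like this:

```shell
# Ask date to interpret @<seconds-since-epoch>; -u prints UTC
date -u -d @1501538594

# A fixed output format makes the result easy to compare or script
date -u -d @1501538594 +%Y-%m-%dT%H:%M:%S
# 2017-07-31T22:03:14 (which is 3:03:14 PM Pacific that day)
```

Going the other way, `date +%s` prints the current time as an epoch timestamp.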
Supplemental Reading for Linux Logs
For more information about logrotate, the command used to manage large numbers of log
files in Linux, check out the link here.
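For a flavor of what log rotation configuration looks like, here's a sketch of a logrotate rule; the log file name and values are examples, and a real rule would live in a file under /etc/logrotate.d/:

```
# Example rule: rotate /var/log/myapp.log (a made-up log file)
/var/log/myapp.log {
    weekly          # rotate once a week
    rotate 8        # keep eight old logs (about two months of history)
    compress        # gzip rotated logs to save disk space
    missingok       # don't error if the log is absent
    notifempty      # skip rotation when the log is empty
}
```

Raising the `rotate` count is how you'd keep events from a month ago from being deleted.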
Yes, now you've made it to the really fun stuff. Let's use the information we learned about logs
to actually investigate system issues. Take the scenario, you're working in IT support role and
one of your users tells you that they leave their computer on all the time but they recently woke
up to find that the computer had shut down. What do you do? Maybe you stay up through the
night keeping a close eye on the computer not taking any breaks to use the restroom or even
blink. You wait and wait and wait until the computer shuts off, or in a sane and normal world,
you decide to just look through the system logs. Let's go with that option. So where do you
begin? At first, logs can be really messy and daunting to look at. We'll talk about the techniques
you can use to view logs, but rest assured, you'll never need to read a log line by line. The first
thing you want to do when looking at a log is to search for something specific. But what if you're
seeing issues within an application and you don't know where to start looking? Well luckily for
us, our systems log information in a pretty standard way. If an application is getting a lot of
errors, what do you think you can search for? That's right. The word error. What if you're seeing
an issue with a specific application? What else do you think you can search for? If you guess the
application name, you're right. You've already been able to filter out your logs to look for
specific things that you might be seeing. Let's see this in action.
Here, we can see the log results that have the word error in them. If you need to investigate
issues that happen around a certain time, you can actually do that by checking the timestamps
around that time. You may find the problem that's causing your issue this way or at least get a
little closer to figuring out what it is.
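Here's a minimal sketch of that kind of search using grep; the log file and its entries are made up for this demo, but in practice you'd point grep at a real file like /var/log/syslog:

```shell
# Create a small sample log (the entries are invented for the demo)
cat > /tmp/sample.log <<'EOF'
Aug  1 10:00:01 webserver sshd[42]: session opened for user cindy
Aug  1 10:00:02 webserver myapp[77]: ERROR: could not read config
Aug  1 10:00:03 webserver myapp[77]: error opening socket: timeout
EOF

# Case-insensitive search for "error" across the whole log
grep -i error /tmp/sample.log

# Or narrow the search to a specific application's messages
grep myapp /tmp/sample.log
```

The same idea extends to timestamps: grep for the hour or minute around when the problem happened.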
When you finally get to a juicy log portion that might help you uncover problems, you usually
want to start looking at the output from either the top or bottom. Let's say you're seeing lots of
errors. Each of these errors could be happening because of a root issue. If you resolve the root
issue, you'll fix the cascading errors. Take a look at this. The log is riddled with errors but if we
scroll up, we can see the one error that spun up all these others. If we fix that, then the other
issues will most likely be fixed. On the flip side, if you aren't seeing any indicators of a problem
in a log, you might want to work from the bottom until you come across a clue. Your system
could be functioning normally but when you scroll down to read the output, you see a log entry
that may be related to your problem. Another troubleshooting tactic you can use with logs is to
check them in real time. Let's say every time you launch a specific application, it does something
abrupt and shuts off. Sure, you can check the logs after the fact and keep track of the time, or you
can look at the logs in real time. To do this, we can use one of the commands we learned in a
very early lesson, tail. Let's take a look at what this means. We're going to tail -f the syslog file
and keep it in an open window.
Then we're going to turn off Bluetooth to show you the events it's logging. Now we can see
Bluetooth logging data in real time. Look at that, we've come full circle. I told you those
commands would come in handy. Using these simple log tactics will help you throughout your
career as an IT support specialist. You've certainly covered a lot so far. Now, you've picked up
how to troubleshoot using logs, too. Logs will be one of your best friends when you're faced with
a problem machine that leaves no obvious clues. Talk to the logs and listen to what that sweet,
sweet log voice is telling you. You'll discover the problem in no time.
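To recap the tail workflow in command form, here's a sketch against a made-up demo log; on a real system you'd follow /var/log/syslog instead:

```shell
# Build a small demo log to stand in for /var/log/syslog
printf 'event 1\nevent 2\nevent 3\n' > /tmp/demo.log

# Show only the most recent events
tail -n 2 /tmp/demo.log

# To watch new events arrive in real time, follow the file
# (press Ctrl+C to stop):
# tail -f /var/log/syslog
```

With -f, tail keeps the file open and prints each new line as the logging service writes it, which is what lets you watch events like the Bluetooth toggle as they happen.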

Operating System Deployment


We learned how to install an operating system for ourselves, but when you work in an IT support
role, you have to install operating systems for everyone else. Installing an operating system with
a single USB stick like we did in the very early lessons of this program, can get extremely time
consuming, especially when you need to do it for lots of machines. Fortunately, in the IT world
we use fantastic tools to make our jobs easier. Remember that imaging a machine means to
format a machine with an image of another machine. This includes everything, from the
operating system to the settings. In this lesson, we are going to briefly touch upon some of the
options you can use to image machines, and help deploy operating systems a little easier.
One tool we can use to image a computer is a disk cloning tool. It makes a copy of an entire disk
and allows you to back up a current machine or set up a new one. There are lots of disk cloning
tools out there to help you complete this task. The benefit of disk cloning over a standalone
installation media is that you can also install settings and folders that you might need. One of the
many disk cloning tools out there is the open source software Clonezilla. It can be used to
back up and restore a single machine or many machines simultaneously. A popular commercial
imaging tool is Symantec Ghost. To read more about other disk cloning software, check out the
supplemental reading right after this video. With disk imaging, lots of tools offer different ways
you can clone a disk. One option is disk-to-disk cloning where you connect an external hard
drive to the machine you want to clone. You can connect hard drives, like HDDs and SSDs,
to something known as an external hard drive dock. These devices are great IT tools that kind
of look like toasters. Once you connect your external hard drive, you can use any disk cloning
tool of your choice. We're going to show you a really quick example of how disk cloning works.
Let's use the Linux command-line tool dd to copy files. dd is a lightweight tool that's also used
to clone a drive. Again, you can use any tools you want to clone your disks, but right now we're
just going to use dd. Let's make a copy of the USB drive I have connected in my laptop then save
it as an image file. First, we want to make sure we have this drive unmounted.
Then, we want to run dd. You don't have to know how dd works to use this command. Actually,
you should check out the final supplemental reading to learn more about this tool. This just says,
I'm going to copy the contents of /dev/sdd, which is the USB drive, and save it to the desktop in
an image file. Once the image file is saved, if we open it up we should see the exact same
contents as the USB drive.
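If you'd like to try the same idea without risking a real drive, here's a sketch that clones a regular file standing in for the USB device; the file paths are made up for this demo:

```shell
# Stand-in for the USB drive: a regular file (in the video, the
# input was the real device, /dev/sdd)
printf 'hello from the usb drive' > /tmp/fake-disk

# Copy it block-by-block into an image file
dd if=/tmp/fake-disk of=/tmp/usb.img bs=1M

# Confirm the image matches the source byte for byte
cmp /tmp/fake-disk /tmp/usb.img && echo identical
```

Because Linux treats block devices as files, swapping /tmp/fake-disk for a device node like /dev/sdd is the only change needed to image a real drive.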
You can use dd for larger disks like hard drives and it'll function the same way. Pretty cool,
right? Another method you can use to image a machine is to request the images directly from the
network. Lots of operating system manufacturers today offer network initiated deployments.
This means no more messy standalone installation media. Instead, you can just download and
install an operating system through the network. If you want to use your own images and not the
built-in network boot options for your computers, there are other options for that too. We don't
discuss the specifics of them here but they require a bit of automation to get going. It doesn't
matter if it's a laptop, desktop, Windows OS, Linux OS, etc. If you're managing the operating
system deployment for a company, you want to keep some aspects of hardware standardization
in mind. Imagine if everyone at your company had a different laptop with different drivers that needed to be
installed. This can get tedious to maintain. It's usually a good idea to try and standardize what
type of hardware you use in a company to make your job of deploying operating systems a little
easier. Okay, you're so close to finishing this course. We've got a final pair of assessments for
you where you'll use logs to help you track down some issues.
Supplemental Reading for OS Deployment Methods
OS Deployment Methods
In this reading, you will learn about operating system (OS) deployment methods, including the
use of disk cloning. A cloned disk is an identical copy of a hard drive. Cloning is often used
when an Enterprise company purchases a large number of identical computers. The IT Support
Administrators for the company are responsible for installing and configuring the computers to
meet the needs of the company and its network. Disk cloning is used to save time on this type of
deployment. IT Administrators will select one of the new computers to install and configure
needed items, such as the OS, utilities, tools, network settings, software, drivers, firmware, and
more. Then they make a clone of this first hard drive. The cloned disk is used to copy the entire
disk image over to the remaining new computers so that the IT Admins do not need to repeat the
same installation and configuration steps on each new computer. They may keep a copy of the
original disk from this deployment to reimage the systems if a clean OS install is required (e.g.,
following a virus or malware infection, OS corruption, etc.).
Cloned disks have uses beyond deploying OSs. They can be used to test new software and
configurations in a lab environment before applying the updates to similar production systems.
Cloning can also be used for system migrations, data backups, disk archival, or to make a copy
of a hard drive for investigative or auditing purposes.
Tools for duplicating disks
Hard disk duplicator
Hard drive duplicators are machines that can make identical copies of hard drives. The original
drive is inserted into the duplicator machine along with one or more blank hard drives as targets.
Disk duplicators can have anywhere from a single target bay for limited disk cloning (example
use: law enforcement investigations) up to 100+ target bays for industrial use (example use:
computer manufacturing). If the target drives are not blank, the duplicator machine can wipe the
drives. The target drives usually need to share the same characteristics (e.g., interface, form
factor, transfer rate) of the original drive. The targets should also have the same or greater
storage capacity than the original.
The hard drive duplicator may have an LCD interface built-in to the machine and/or a
management software/HTML interface, the latter of which can be accessed over a networked or
directly-connected computer or server. The duplicator interface can be used to initiate and
manage disk cloning and/or disk wiping (reformatting). Most duplicators copy data sector-by-
sector. The time to transfer data from the original to the target drives depends on multiple
variables. The machine’s user manual should be consulted to calculate duplication time.
Disk cloning software
Hard drives can also be cloned using software. This method allows the original and target to be
different media from one another. For example, a hard drive can be cloned from an IDE drive to
an SSD drive, a CD-ROM/DVD, removable USB drive, cloud-based systems, or other storage
media, and vice versa. Software cloning supports full disk copies (including the OS, all settings,
software, and data) or copies of selected partitions of the drive (useful for data-only or OS-only
copies). Disk cloning software is often used by IT Administrators who need to deploy disk
images across a network to target workstations or to cloud-based systems. Cloud platforms
normally offer a virtual machine (VM) cloning tool as part of their services. VM cloning is the
most efficient method for cloning servers and workstations. VM cloning takes a few seconds to
deploy new systems.
A few examples of disk cloning software include:
• NinjaOne Backup - Cloud-based cloning, backup, and data recovery service designed
for managed service providers (MSPs) and remote workplaces.
• Acronis Cyber Protect Home Office - Desktop and mobile device cloning software that
works with Windows, Apple, and Android systems. Designed for end users. Supports
backup, recovery, data migration, and disk replication. Includes an anti-malware service
that can overcome ransomware attacks.
• Barracuda Intronis Backup - Cloud-based cloning and backup service on a SaaS
platform. Designed for MSPs who support small to mid-sized businesses. Can integrate
with professional services automation (PSA) and remote monitoring and management
(RMM) packages.
• ManageEngine OS Deployer - Software for replications, migrations, standardizing
system configurations, security, and more. Can create images of Windows, macOS, and
Linux operating systems with all drivers, system configurations, and user profiles. These
images can be saved to a locally stored library. The library is available to deploy OSs to
new, migrated, or recovered systems as needed.
• EaseUS Todo Backup - Free Windows-compatible software for differential,
incremental, and full backups, as well as disaster recovery. Supports copying from NAS,
RAID, and USB drives.
Methods for deploying disk clones
The sections above have described disk clone deployment through copied hard drives, image
libraries, network storage, and cloud-based deployments. There are some other options for
cloned disk deployments:
Flash drive distribution
OSs can be distributed on flash drives. IT professionals can format flash drives to be bootable
prior to copying a cloned disk image to the flash drive. Target systems should be set to boot from
removable media in the BIOS. After inserting a flash drive containing the OS into an individual
computer, restart the system and follow the prompts to install the OS. Microsoft offers this
method as an option for Windows installations. Linux systems can also be booted and installed
from flash drives.
The Linux dd command
The Linux/Unix dd command is a built-in utility for converting and copying files. On
Linux/Unix-based OSs, most items are treated as files, including block (storage) devices. This
characteristic makes it possible for the dd command to clone and wipe disks.
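As a sketch of both uses, the commands below clone and then wipe a regular file standing in for a disk; on a real system the targets would be device nodes like /dev/sdX, and the wipe is destructive:

```shell
# A regular file stands in for a disk here; on real hardware the
# targets would be device nodes like /dev/sdX (destructive!)
printf 'important data' > /tmp/disk

# Clone: copy the "disk" into an image file
dd if=/tmp/disk of=/tmp/disk.img bs=512

# Wipe: overwrite the "disk" with zero bytes
dd if=/dev/zero of=/tmp/disk bs=512 count=1
```

Note that dd offers no safety net: if the if= and of= arguments are swapped, the source gets destroyed, so it's worth triple-checking device names before running it against real disks.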
Key takeaways
Hard drives can be duplicated by:
• Hard disk duplicator machines
• Disk cloning software. Examples:
o NinjaOne Backup
o Acronis Cyber Protect Home Office
o Barracuda Intronis Backup
o ManageEngine OS Deployer
o EaseUS Todo Backup
Operating systems can be deployed through:
• Cloned hard drives
• Hard drive image libraries
• Network storage
• Cloud-based deployments
• Flash drive distributions
• In Linux, using the dd command
Resources for more information
For more information on disk cloning and OS deployment techniques, please visit:
• How to clone a hard drive on Windows - Step-by-step guide with screenshots on how to
clone a hard drive using the software Macrium Reflect Free.
• Best Hard Drive Duplicator/Cloner Docking Station for 2022 - Comparison guide to
popular hard drive duplicator machines.
• OS deployment methods with Configuration Manager - Microsoft’s guide to options for
deploying Windows in a network environment.
• dd(1) - Linux manual page - The manual for the Linux dd command, which describes
how to use the command.
When a mobile device is built, it's installed with an operating system at the factory. So a factory
reset returns the device back to the state it was in when the device, well, shipped from the factory.
You'll have to factory reset mobile devices a lot as an IT support specialist. You might do this
before reassigning a device to another end-user, before sending a device out for repair, or as a
last resort when troubleshooting a misbehaving device. Don't forget: a factory reset will remove
all data, apps, and customizations from the device. Make sure that anything you don't want to lose
has been backed up or synced to the Cloud. We'll cover mobile device synchronization and
backup in a future video. Another heads up, watch out for expansion storage. Additional storage
devices like SD cards or USB drives can contain personal or proprietary data. Doing a factory
reset while expansion storage is attached might erase data that you intend to keep. Just as bad,
the factory reset for lots of devices may leave the contents of expansion storage intact. You don't
want to re-purpose or decommission a device with personal or proprietary data still attached.
One final thing: on Android and iOS, you need the primary account credentials in order to
perform a factory reset. This is so stolen devices cannot be easily factory reset and resold. Over
time, mobile device manufacturers will release updates to the device's operating system. These
updates are usually delivered over the air, or OTA. An OTA update is one that's downloaded and
installed by the mobile device itself. There are times when you might need to use a computer to
install operating system updates. Some mobile devices, like fitness trackers and medical devices,
might not have mobile or Wi-Fi network interfaces to reach the internet.
Maybe the mobile device won't boot or doesn't have a good enough data connection to download
the update itself. In these cases you can re-flash or overwrite the OS of the device from a
computer. But be careful: pay close attention to the device's instructions for re-flashing. For some
devices re-flashing will preserve the end-user data on the device, but for others the end result
will be like a factory reset. The details will be different from one device to another. The basic
steps are: one, download the update to a computer; two, attach the mobile device to a computer
using a USB cable; and then three, run some software on the computer that will re-flash the
mobile device. In the supplemental reading, you'll find instructions on how to restore iOS and
Android devices from a computer. For other kinds of devices, refer to the device manufacturer's
documentation on how to perform a factory reset or re-flash using a computer.
Supplemental Readings for Mobile Device Resetting and Imaging
Check out the following links for more information:
• Android - Reset your device to factory settings
• iOS - Restore your device to factory settings
• Full OTA Images for Nexus and Pixel Devices
• If your iPhone, iPad, or iPod touch won't turn on or is frozen
Windows Troubleshooting Tools
In this reading, you will learn some basic steps for troubleshooting the Windows operating
system (OS). This article focuses primarily on troubleshooting tools available in the
desktop/laptop versions of Windows 10 and 11. However, many of these tools and solutions are
available in other versions of Windows. Additionally, there are multiple methods for
approaching and solving problems in Windows. This article is not an extensive resource of all
possible troubleshooting tools and solutions.
Troubleshooting tools for Windows
Some of the troubleshooting tools provided by Windows include:
• Windows Update: One of the most important repair tools for Windows problems.
Widespread and known Windows problems will often have a software resolution
provided by Microsoft or Original Equipment Manufacturers (OEMs). Windows Update
will find, download, and install the required and/or recommended software resolutions,
which include operating system patches and updates, security updates and fixes, .NET
framework updates, driver and firmware updates, etc.
• Updates from the hardware manufacturer(s): Some OEM updates are not accessible
through Windows Update. For these items, it is necessary to go to the OEM’s website for
updates, patches, drivers, and firmware for components such as computer hardware,
peripherals, and third-party applications.
• Optimize Drives with Disk Defragmenter: When files on a hard drive are saved,
deleted, or altered, fragmentation across storage blocks can occur. A file may become
spread across the drive in non-contiguous storage blocks. This issue results in
performance problems within the system as the hard drive spends additional seek time
finding the scattered file fragments and piecing them back together. The Windows Disk
Defragmenter can automatically relocate file fragments onto a contiguous series of
storage blocks in order to remedy these seek time delays.
• Disk Cleanup: Windows utility that simplifies removing temporary files including
downloaded program files, thumbnail files, system files, and temporary internet files.
Disk Cleanup also offers an option to compress the primary hard drive where the
Windows OS resides.
• CHKDSK command: A command-line utility for Windows that scans hard drives to
find and flag bad sectors. Flagged bad sectors will be removed from use and no data will
be stored on them. The tool will attempt to recover any data found on the bad sector.
• Disk Management tool: A Windows system utility for performing advanced storage
management tasks, including initializing a new drive, extending or shrinking a volume,
and changing a drive letter.
• Event Viewer: Software tool for monitoring events and errors produced by the system,
security, hardware, software, and more. The Event Viewer divides logs into four main
categories:
o Custom Views
o Windows Logs
o Applications and Services Logs
o Subscriptions
• Registry Editor (regedit): The Registry Editor should only be used by advanced system
administrators. It is possible to cause serious system and software problems if the wrong
edits are made to the Registry.
• System Configuration tool (msconfig): Software tool for changing system settings,
including the services and applications that can load on system startup.
• Safe Mode (Windows 10 and 11): There are multiple options for booting into safe
mode. A couple of these options include:
o System Configuration tool - Can be used to configure a clean boot in Safe Mode
to help isolate the source of a system problem.
o Startup Settings - Can be accessed through System > Recovery or through the
sign-in screen.
• System Troubleshoot tool (Windows 11): The Windows Troubleshoot menu can be
accessed from Start > Settings > System > Troubleshoot. The following options are
available on the Troubleshoot menu:
o Recommended troubleshooter preferences - Set preferences for Microsoft’s
recommendations for troubleshooting tools.
o Recommended troubleshooter history - Easy access to troubleshooting tools
used previously.
o Other troubleshooters - This menu includes tools for troubleshooting internet
connections, audio, printers, Windows Update, Bluetooth, camera, incoming
connections, keyboard, network adapter, power, program compatibility, search &
indexing, shared folders, video, Windows Store apps, privacy, and misc help.
Common problems in Windows
The following is a list of common problems encountered in Windows, along with common
troubleshooting first steps:
• Computer is running slowly: There are many issues that could make a computer run
slowly. Troubleshooting can involve multiple steps, many of which should be performed
on a regular schedule to proactively prevent problems from happening. The first step
should almost always be to reboot the computer. This step can fix a large percentage of
problems reported by end users. If rebooting does not resolve the problem, check that
there is sufficient processing power, disk space, and RAM to support the OS, hardware,
software, and intended use of the computer. For example, video editing may require a
relatively more powerful computer, a large amount of free hard drive space, and lots of
RAM. Check system event logs for errors. Research any error codes found using the
Microsoft knowledge base or an internet search to see if there is a known solution to the
problem. Run an antivirus and anti-malware scan. Use Windows Update and OEM
updates to ensure the system is up to date. Remove temporary and unneeded files and
software. Check the software and services that load at startup for potential problem
sources. Reboot the computer into Safe Mode to see if the computer performance
improves. Unplug peripherals and turn off network connections to eliminate these as
sources of the slow down. If the OS is Windows 11, use the System Troubleshoot tools
found at Start > Settings > System > Troubleshoot.
• Computer is frozen: Power off the computer. Wait 30 seconds to drain residual power
and clear any potentially corrupted data held by RAM. Boot up the computer again and
check system event logs. If the system does not boot, go to the BIOS settings and boot
into Safe Mode to gain access to the event logs. Research any error codes found. If the
root cause cannot be determined, run the same checks as listed above for “Computer is
running slowly”.
• Blue screen errors: If the blue screen provides an error code or QR code, record this
information in order to research the root cause of the issue and possible solutions. Power
off the computer, wait 30 seconds, then boot the computer again. If the system does not
boot, go into the BIOS settings to boot into Safe Mode. Obtain system event logs in the
Windows Event Viewer and research any error codes found there. If the root cause
cannot be determined through event codes within the logs, then run the same checks as
listed above for “Computer is running slowly”.
• Hardware problems: Check the hardware OEM’s website for updates to drivers,
firmware, and software management consoles. If this does not resolve the problem, check
the system Device Manager to see if the device has been disabled or is not recognized.
Additionally, check system event logs and research any error codes found. If the root
cause cannot be determined, then run the same checks as listed above for “Computer is
running slowly”.
• Software problems: Go to the software manufacturer’s website to check for software
patches or updates. If the problem continues after updating the software, check the
application event logs and research any error codes found. If the root cause cannot be
determined, then run the same checks as listed above for “Computer is running slowly”.
• Application is frozen: End application processes in Task Manager. Restart application.
If the problem persists, reboot the computer and try to run the application again. If the
issue is still not resolved, then follow the instructions listed above for software problems.
• A peripheral is not working: Check to ensure the peripheral is on and is receiving
sufficient power, especially if the item is battery powered. Check cables to ensure they
are attached securely. If the item is connected through USB, try a different USB port. If
the device connects through Bluetooth, check to ensure that Bluetooth is active on both
the computer and the peripheral. Reboot the computer to see if the system can reconnect
to the device. Inexpensive, high-use peripheral devices experience high failure rates,
especially keyboards and mice. Swap the peripheral for a working replacement to see if
the problem was the peripheral itself, or perhaps an error in how the computer is
detecting the peripheral. If the problem persists with the replacement peripheral, check
the system Device Manager to see if the device has been disabled or is not recognized.
Check the event logs for any errors. Visit the OEM’s website to look for updates to
drivers, firmware, and/or software management consoles, if available. Run a Windows
Update as well.
• Audio problems: Check audio volume. Run the Windows audio troubleshooter. Check
speaker cables, plugs, jacks, and/or headphones. Check sound settings. Update or repair
audio drivers and sound card firmware. Check to ensure the active and default audio
devices are the desired audio devices. Turn off audio enhancements. Stop and restart
audio services in Task Manager. Restart the computer. Research if specific audio
CODECs are needed for audio media. If audio is not working in a browser, ensure the
browser has permission to use the system audio and/or microphone.
Resources
• Windows Server performance troubleshooting documentation - Microsoft list of articles
on common Windows Server errors, troubleshooting, and solutions.
• How to scan and repair disks with Windows 10 Check Disk - Instructions for using the
CHKDSK command.
• Overview of Disk Management - Lists uses for the Windows Disk Management system
utility, along with links to step-by-step instructions for using the utility.
• How to use Event Viewer on Windows 10 - A walkthrough tour of Windows Event
Viewer with screenshots and detailed explanations of each part of the tool.
• Registry - Microsoft article about the Windows Registry.
• How to use System Configuration tool on Windows 10 - Tutorial for using the Windows
System Configuration tool.
Supplemental Reading for Windows Troubleshooting Example
Troubleshooting a problem in Windows
As an IT Support professional, you will likely run into problems caused by a full primary hard
drive, where the OS is installed. An affected computer may display an error message stating
there is insufficient space on the drive to save new files, apply an update, or install new software.
In some cases, the computer might not provide an informative error message at all. Instead, the
system may experience performance issues, hang, crash, or it might not even load the OS after
booting. Note that it is a best practice to routinely perform maintenance and clean-up of
computer hard drives to free storage space, improve system performance, and prevent the myriad
of issues that can arise when the primary hard drive is full.
Imagine that you are an IT Support Specialist for an organization. An employee reports that their
computer is running very slowly and keeps hanging. You know that Windows Update had been
scheduled to run overnight to update all of the organization’s systems with multiple patches,
updates, and fixes. Although it is possible for these changes to cause system problems, there is
only one employee reporting a problem. So, it is more likely that the system did not have
adequate storage space to install all of the updates on that employee’s computer system. You
suspect that the primary hard drive could be full. Your troubleshooting and repair steps might
include:
1. Check how much free storage space remains. A quick and easy troubleshooting step
for system performance issues is to check if the primary hard drive is full. In this
scenario, you discover that the employee’s hard drive has less than 5 GB of space left.
Microsoft recommends giving Windows 10 at least 20 GB of free space for normal OS
processes. You will need to find at least 15 GB of files to delete or move to another
storage location.
2. Delete temporary and unneeded files. There are a few methods for cleaning out junk
files from Windows. Two system maintenance tools for this purpose, found in several
versions of Windows, include:
a. Storage Sense: Use the Windows Storage Sense tool to delete unnecessary files
like temporary files, offline cloud files, downloads, and those stored in the
Recycle Bin. You can also configure Storage Sense to regularly and automatically
clean the hard drive for proactive maintenance.
b. Disk Cleanup: A simple alternative tool to Storage Sense. Disk Cleanup
performs most of the same operations as Storage Sense, plus it offers a drive
compression utility. Note: If you run Disk Cleanup on a drive, but the computer is
still reporting “Low Disk Space”, the Temp folder is most likely filling up with
Microsoft Store .appx files. In this case, you will need to clear the cache for
Microsoft Store.
3. Reset Windows Update. Since you know the employee’s computer went through a
Windows Update overnight and possibly did not complete this process fully, it may be
wise to perform a Windows Update reset. The reset tool can check whether a system
reboot is required to apply the updates, security settings were changed, update files are
missing or corrupted, service registrations are missing or corrupt, and more. This utility
can be found in the Windows system Settings menu, under Troubleshoot > Other
troubleshooters > Windows Update.
4. Move files off of the primary hard drive and onto (one or more of the following):
a. Internal or external storage device: Install an additional hard drive or add an
external storage device, like a USB drive or SD card, to hold user files.
b. Network storage: Network storage space is often available in network
environments in the form of Network Attached Storage (NAS) appliances or large
Enterprise Storage Area Networks (SANs). In these environments, end users
should have network drive space mapped to their workstations for file storage,
instead of saving files to their local hard drives.
c. Cloud storage (OneDrive, Google Drive, etc.): Providing cloud
storage space to end users is a lower cost alternative to network storage. However,
this option is less secure than onsite NAS or SAN storage.
In Windows System Storage, under Advanced storage settings, set the new drive storage as the
destination for “Where new content is saved.”
5. Set any cloud storage solutions to be online-only. This will prevent cloud files from
being downloaded as offline or cached copies to the hard drive.
6. Uninstall apps that are not needed (including Windows Store apps). This is an effective
way to free up large amounts of storage space.
7. Run antivirus and antimalware software. Some viruses and malware intentionally fill
up hard drives with garbage data.
8. Wipe the hard drive and reinstall the OS. If none of the suggestions listed above solve the
problem with slow system performance and hanging, consider wiping the hard drive and
reinstalling the OS. This is the best method for repairing failed system updates.
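This scenario uses Windows tools, but the first diagnostic step — measuring free space and finding what is consuming it — translates directly to the command line. A sketch on a Linux/Unix shell (the Windows equivalents would be the Settings storage view or PowerShell's Get-PSDrive) might look like:

```shell
# Report free space on the filesystem that holds the OS.
df -h /

# Summarize how much space common culprits are using, largest first.
du -sh /var/log /tmp 2>/dev/null | sort -rh

# List (but do not yet delete) temporary files older than 7 days.
find /tmp -type f -mtime +7 -print
```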
Resources
• Free up drive space in Windows - Microsoft article for Windows 10 and 11 that provides
step-by-step instructions for freeing storage space on a hard drive.
• Low Disk Space error due to a full Temp folder - Steps to clear the cache for Microsoft
Store and reset Windows Update for Windows 10 and 11.
• Manage drive space with Storage Sense - Instructions for configuring this Windows tool
to automatically remove temporary files, downloads, offline cloud files, and empty the
Recycle Bin.
• How to use Event Viewer on Windows 10 - A walkthrough tour of Windows Event
Viewer with screenshots and detailed explanations of each part of the tool.
• How do I reset Windows Update components? - Steps for troubleshooting problems with
Windows Update.
All right. So for this scenario, let's say that you stopped by to see a user at their desk and they
reported some kind of application issue. When you get there, they show you the problem.
They're trying to launch an application from a shortcut on their desktop and every time they
double-click on it, they get this generic error message that says, "There was an error launching
the application." Start troubleshooting that for me. What OS are you using and what's the name
of the application? Say it's Linux desktop and it's a custom application. We'll say that I built it in-
house. Let's call it Application X. Okay. Do you remember the last time this worked? Sure. We
can say it worked on Friday and today is Monday. Okay. Do you know if there are any other users
that are having this issue? So this is the first report of it, but it's still early on a Monday. Okay. Are
you aware if there is any type of updates that happened over the weekend? Not that I'm aware but
is there any way we can check? So we can check the Apt log. What is Apt? Apt is pretty much a
utility that we use to install applications and updates. Great. Cool. So where were we? Can
you walk me through where we can find the logs for Apt? I'm actually not sure where the logs
for the application would be at. Great, and how might you find it out if this was a real-world
scenario? We can probably check the man page or search online. Okay. Great, that's a good idea.
So, let's say that you're able to find out that the log is in slash var, slash log, slash apt, and the file
that we're looking for is history.log. Now that you've got that file, we've got a hundred different
entries in here. How do we find what we're looking for specifically just for this application? We
can use the grep command with the application name. Okay. Let's say we do that and we find
that there was an update done just over this past weekend. So it's actually possible that the update
could have caused this issue. There could be a missing dependency or the application could have
gotten corrupted. So we can also check permissions, too. So where do we start? So I want you to
run a few updates just to make sure everything's installed and there's no missing dependencies.
So first, I want you to run sudo apt-get update and then sudo apt-get upgrade. Okay. Great. So
let's say that those both run, they complete successfully, what do we do now? So let's try and
launch the application. So it still fails, same exact error. So I think now, we should probably
check the permissions. All right. How do we do that? So first, you want to get the location. So
we can probably get the location by right clicking on the shortcut and then seeing what the
command section says. Okay. So we do that and let's say it says application dash x as the
command. So now, you want to use the which command with the command name it provided. Okay.
We'll say it's located in slash usr, slash bin. So now, you want to navigate with cd to the directory
of that application. Okay. Say we're there. Okay. So now, you want to list out all the permissions.
So you want to do ls space hyphen L? Okay, so here's what I see. Dash RWX, R dash X, dash
dash dash, then it says root, space, root. Can you explain to me what this all means? R stands for
read, W stands for write, X stands for execute. The first set of three is associated with the
owner of the application, the second set of three is associated with the group of the application,
and then the last set of three is associated with others, or users. Root is associated with the owner
and then the second root is associated with the group. Okay. So after looking at all that, does
anything stand out as wrong here? Yes. So since the last set of three is associated with users
and others, I'm noticing that there's no read or execute permissions for this application. Okay.
How do we correct that? So we can use the change mode command, chmod, to update the
permissions. Okay. So now it's fixed for me, we tried the application, everything works. Any
follow up that we need to do? Yes. I would want to notify the owners of the application because
it's going to affect a lot of users and this will also help prevent recurring issues from
happening. Okay. Good. Thank you. In this scenario, we saw a lot of back and forth which is
very common for troubleshooting interviews. The initial description was very broad and the
candidate used several follow-up questions to better scope the problem. Since the error message
wasn't clear, there were several possible causes of the problem. Eliminating the most likely
culprits first allowed us to keep trying until we found the actual cause. We also showed that it's
okay if you don't know everything. The candidate didn't know where the log for apt is stored but
she explained how she would find that out if this were a real-life issue she were trying to
address. When you do that, it shows that you're resourceful and a good problem solver. It's
impossible to know everything but knowing where to find answers is a critical skill for an IT
support specialist. That's it for now. See you again at the end of our next course.
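The command-line steps the candidate walked through can be sketched as a self-contained shell session. The log file and application here are stand-ins (the real apt history lives at /var/log/apt/history.log, and "application-x" is hypothetical):

```shell
# Stand-in for /var/log/apt/history.log with a couple of entries.
printf 'Upgrade: libfoo (1.0, 1.1)\nUpgrade: application-x (2.3, 2.4)\n' > /tmp/history.log

# Find entries for just the application in question.
grep application-x /tmp/history.log

# Stand-in for the application binary with the broken permissions from the
# scenario: owner has rwx, group has r-x, and others have nothing (-rwxr-x---).
touch /tmp/application-x
chmod 750 /tmp/application-x
ls -l /tmp/application-x

# Grant read and execute to others so every user can run it (-rwxr-xr-x).
chmod o+rx /tmp/application-x
ls -l /tmp/application-x
```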
Wow, amazing work, you've covered a ton of information in this course. You learned how to
navigate the Windows and Linux OSs, how the intricate components of an OS work, and how to
troubleshoot common issues found within them. You also learned how to manage software from
basic device drivers to applications, set up disks and partitions, format a file system, read logs and
traverse them. Finally, you've picked up some skills on how to make your job easier using
remote connections and operating system imaging. You've accomplished a lot and you should be
incredibly proud. The skills you've learned in this course will follow you throughout your IT
career. After all, the OS is the component that you deal with the most. In the next course we are
going to kick it up a notch and show you how to work with operating systems at a larger scale.
You're going to use the skills you learned in this course that helped you manage one machine to
jump start your learning for the next course. That's where we'll teach you how to manage a whole
fleet of machines. So it's time for me to say so long. It's been a blast being your guide to
operating systems and sharing my passion for OSs with you. Your next course, System
Administration and IT Infrastructure Services, is taught by a friend and colleague of
mine, Devan Sri-Tharan. Tell him I said hi.
A four year degree prepares you in a certain set of ways. But experience on the job and the right
kind of mentors will actually propel you much further than university degrees. While I went to
college, I do not have a degree. Education and learning are about a student and a teacher. If you don't
have a four-year degree that's fine, but make sure that you have a mentor. And engage yourself in
educating yourself. It doesn't necessarily have to be a university. IT and IT support is not going
anywhere anytime soon. So, while it is a stable job, it is also a very intellectually fascinating job, and
it's also very flexible. You can go in lots of different places with it. So, it's really a field where
you can understand the fundamentals of how everything works, and use it as a jumping point into
anything else that you might be interested in. The advice I give people just coming into their
career no matter what career they're going to, is do what you love and love what you do. Find
that topic, that piece of technology, that thing that you first think about when you wake up in the
morning because you will want to spend every waking hour thinking about it, working on it,
improving your skill set and making the world a better place because of that thing. For me that
was security. I think about it when I wake up in the morning, I think about it before I go to sleep,
it is my very identity. Everybody has something like that in mind, you just have to find out what
that is.