Basic Commands
We dipped our toes in the Windows and Linux OS's in the first course of this program. Now, let's
jump right in and learn how to perform all the common navigational tasks of both operating
systems. For Windows, we're going to learn how to navigate the operating system using the GUI
and using the command line interpreter or CLI. For Linux, we're only going to focus on learning
the command line. The command line interpreter in Linux is called a shell, and the language that
we'll use to interact with the shell is called Bash. It's worth calling out that these two operating
systems are very similar to one another. So, even if you don't know how to use the Linux GUI, as
long as you know how to navigate the Windows GUI, you'll be able to apply those tools to the
Linux GUI. It's possible that you'll only be using the Windows GUI in the workplace. Even so, if
you learn how to use the Windows command line, this will set you apart from other IT support
specialists. You'll soon discover that using the command line in any operating system can
actually help you complete your work faster and more efficiently. We strongly encourage you to follow along and actually perform the tasks we do in this course yourself. If you can, pause a
video and do the exercises that we do or type out any of the commands we introduce. It will be
much easier for you to understand them in this way. We also recommend that you document all
the commands that we show you. Either write them down with an old-fashioned pen and paper
notebook, or type them out in a doc or text editor. Just write them on a stone if you have to, we
just want you to write them down somewhere. You probably won't remember all the commands
immediately when we first introduce them, but with a little practice, typing the
commands will become second nature to you. You can also use the official Windows CLI and
Bash documentation that we've provided for you in the supplemental reading, right after this
video for reference, if you need to. In this lesson, the content is broken down into two themes.
The first is basic operating system navigation, like navigating from one directory to another,
getting file information, and removing files and directories. The second theme is file and text
manipulation, like searching through your directories to find a specific file, copying and pasting,
chaining commands and more. Okay. Enough chit-chat. Let's get started.
Supplemental Reading for Windows CLI & Unix Bash
For more detailed information on the modern Windows CLI, PowerShell, see the official
PowerShell documentation, and the PowerShell 101 guide. For more on the older Windows
"Command Prompt" CLI (cmd.exe) please see the link here.
If you want to check out more information on Bash, then click the link here.
In operating systems, files and folders or directories are organized in a hierarchical directory tree.
You have a main directory that branches off and holds other directories and files. We call the
location of these files and directories, paths. Most paths in Windows look something like this: C:\Users\Cindy\Desktop. In Windows, file systems are assigned to drive letters, which look like C:, D:, or X:. Each drive letter is a file system. Remember that file systems are used to keep
track of files on our computer. Each file system has a root directory which is the parent for all
other directories in that file system. The root directory of C: would be written C:\, and the root
directory of X: would be written X:\. Subdirectories are separated by backslashes, unlike Linux,
which uses forward slashes. A path starts at the root directory of a drive and continues to the end
of the path. Let's open up This PC and navigate to our main directory. The main directory in a
Windows system is the drive that the file system is stored on. In this case, our file system is
stored on Local Disk C. From here, I'm going to go to Users, then my User folder cindy, and
finally to Desktop. If you look at the top here, you can see the path I'm in. Local disk, Users,
cindy, Desktop. That wasn't too hard, right? You can see here in our desktop directory that we
have a few folders and files. We have a Puppy's Pictures folder, a Hawaii folder, and a file called
My Super Cool File. There are also some files on here that you can't see. We call these hidden
files. They're hidden for a few reasons. One is that we don't want anyone to see or accidentally
modify these files. They could be critical system files or configs or even worse, embarrassing
pictures of you in grade school rocking a mullet. It's okay, you aren't the first person who liked
their hair to be business in the front and party in the back. Just for fun, let's see what kind of
hidden files we have in here. We'll go to the top and click View, then check the hidden items
checkbox. Now we can see all the hidden files on our system. Oh, interesting. There is a file
named secret_file. As much as I'd like to take a peek at it, whoever created it probably doesn't
want us to see what's inside so we're going to leave it alone. Let's go ahead and revert this option
so we don't accidentally change something else.
Okay, so what if we wanted to view information about a file? Well, to do this, we can actually
just right click and choose Properties. Let's try this for My Super Cool File. This pop up dialog
has a lot of information displayed here. Let's break it down. In the general tab, we can see the file
name, the type of file, what applications we use to open it, and the location path of the file which
is C:\Users\cindy\Desktop, then we have the size of the file, and the size on disk. This can be a
little confusing. The size of the file is actually the amount of data that it takes up, but size on disk
is a little different. It's not something you need to know right now but if you want to learn more
about it, you can check out the next supplemental reading. All right, let's move on. Next you
have timestamps of when the file was created, last modified, and last accessed. After that are the file attributes we can enable for the file. We have Read-Only and Hidden. You might guess that if
you check hidden, our file will be hidden and only visible if we enable show hidden items. There
are some advanced options too but we won't touch those for now. You'll also notice a few other
tabs here at the top. Security, Details, and Previous Versions. We'll talk more about the security
tab in a later lesson. The Details tab, basically, tells us the information we just discussed about a
file. The Previous Versions tab lets us restore an earlier version of a file so if you made a change
to a file and wanted to revert to that change, you could go back to that version. To sum up listing
the directories in the Windows GUI, we can see the list of files and folders by default here. You
can even change how you want to view them using icons or even a list. Then if you want to get
more information about a file, you can look at its properties. Next up, let's see how to view all
this information through the Windows CLI.
Supplemental Reading for 'Size' vs 'Size on Disk' in Windows
For more information on 'size on disk' vs 'folder size' in Windows, please check out the link
here.
It's important to know that there are a couple of command line interfaces or CLIs available in
Windows. The first one is called the Command Prompt, or cmd.exe. The second one is
PowerShell or powershell.exe. The command prompt has been around for a very long time. It's
very similar to the Command Prompt that was used in MS DOS. Since PowerShell supports most
of the same commands as Command Prompt and many, many more, we're going to use
PowerShell for the exercises in this module. I want to call out that many PowerShell commands
that we use are actually aliases for common commands in other shells. An alias is sort of like a
nickname for a command. The first command that we'll use is for listing files and directories.
Let's start by listing the directories in the root of our C: drive. The C: drive is where the
Windows operating system is installed. For many of you, it might be the only hard drive that you
have in your computer. To get to the PowerShell CLI, just search in your application's list
PowerShell. From here, we can go ahead and launch the PowerShell program. We're going to use
the ls or list directory command and give it the path of where we want to look. The path is not
actually part of the command but it is a command parameter. You can think of parameters as a
value that's associated with a command. Now you can see all the directories in the root of your
C: drive. You might just see a few or a whole bunch of directories. It all depends on what your
computer is used for. The C: drive root folder is what we call a parent directory and the contents
inside are considered child directories. As you continue to work with operating systems, you'll
encounter terms that may seem a bit out of place at first but they actually make a lot of sense.
Parents and children are common terms that stand for hierarchical relationships in OS's. If I have
a folder named dogs and a second folder nested within that folder called Corgi, dogs would be
the parent directory and Corgi would be the child directory. Let's look at a few of the common
child directories in this folder. Program Files and Program Files (x86): these directories contain most of the applications and other programs installed in Windows. Users: this contains the user profile directories, or home directories. Each user who logs into this Windows machine will get their own directory here. Windows: this is where the Windows operating system itself is installed. If
we open a PowerShell and run Get-Help ls, we'll see the text describing the parameters of the ls
command. This will give us a brief summary of the command's parameters. But if you want to
see more detailed help, try Get-Help ls -Full. Now you can see a description of each of the
parameters and some examples of how to use the command. What if we wanted to see all the
hidden files in this directory? Well, we can use another useful parameter for the ls command, -
Force.
The -Force parameter will show hidden and system files that aren't normally listed with just ls.
Now you can see some important files and directories, like $Recycle.Bin. This is where the Recycle Bin lives. When you move files to the Recycle Bin, they're moved to this directory instead of being deleted immediately. ProgramData: this directory contains lots of different
things. In general, it's used to hold data for programs that are installed in Program Files. All
right, now that you've seen how to take a look around the file system in Windows, let's see what
this process looks like in Linux.
In Linux, the main directory that all other directories stem from is called the root directory. The path to the
root directory is denoted by a slash or forward slash. An example of a path in Linux that starts
from the root directory is /home/cindy/Desktop, just like C:\Users\cindy\Desktop in Windows.
Let's go ahead and see what's under the root directory. We're going to be using the ls or list
directory contents command. We also want to give this command the path of the directory that we want to see. If we don't provide a path, it will just default to the current directory we're in.
So ls slash. All right, now we can see all the directories that are listed under the root directory.
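If you're following along on your own Linux machine, here's a minimal sketch of that listing; which directories you see varies between distributions, and the spot-check names below are just common examples:

```shell
# List everything under the root directory:
ls /
# Spot-check a few of the standard directories discussed in this lesson:
ls -ld /bin /etc /var
```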
There are a lot of directories here, and they're all used for different purposes. We won't go
through them all, but let's talk about a few of the important ones. Slash bin, this directory stores
our essential binaries or programs. The ls command that we just used is a program, and it's
located here in the slash bin folder. It's very similar to our Windows program files directory.
Slash etc, this folder stores some pretty important system configuration files. Slash home, this is
the personal directory for users. It holds user documents, pictures, and etc. It's also similar to our
Windows users directory. Slash proc, this directory contains information about currently running
processes. We'll talk more about processes in an upcoming lesson. Slash usr: despite the name, the usr directory doesn't actually contain our user files like our home directory does. It's meant for user-installed
software. Slash var, we store our system logs and basically any file that constantly changes in
here. The ls command has a couple of very useful flags that we can use too. Similar to Windows
command parameters, a flag is a way to specify additional options for a command. We can
usually specify a flag by using a hyphen then the flag option. This varies depending on the
program, though. Every command has different flag options. You can actually view what options
are available for a command by adding the dash, dash help flag. Let's see this in action. There's
an incoming wall of text, but don't panic. You don't have to memorize these options. This is
mainly used for reference. For now, let's just quickly go through the help menu.
At the top here it tells you what format to put the command in. And here it gives you a
description of what the command does. This huge chunk of text lists the options that we can use.
It tells us what command flags are available and what they do. The dash, dash help flag is super
useful, and even experienced OS users refer to it every so often. Another method that you can
use to get information about commands is the man command, short for manual. It's used to show us
manual pages, in Linux we call them man pages. To use this command, just run man, then the
command you want to look up.
So let's look up man ls. And here we get the same information as dash, dash help, but with a little
more detail. Okay, back to using the ls command.
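You can try both help methods yourself. The exact text depends on your version of ls, and man pages aren't installed on some minimal systems, so this sketch only peeks at the first few lines:

```shell
# Print just the first few lines of the built-in help text:
ls --help | head -n 5
# The man page shows the same options in more detail
# (the subshell keeps this from failing if man isn't installed):
(man ls 2>/dev/null || true) | head -n 5
```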
Right now, it's not quite friendly to read. So let's make our directory list more readable with the
dash l flag for long. This shows detailed information about files and folders in the format of a
long list. Now we can see additional information about our directory and the files and folders in
them. Similar to the Windows show properties, the ls command will show us the detailed file
information. Let's break down this output, starting from the left. The first column shows file permissions; side note, we're going to cover file permissions in an upcoming lesson. Okay, next
up is the number of links a file has. Again, we'll discuss this in more detail in a later lesson.
Next, we have the file owner, then the group the file belongs to. Groups are another way we can
specify access, we'll talk about this in another lesson too. So then we have the file size. The time
stamp of last modification, and finally, the file or directory name. The last flag that we'll discuss
for the ls command is the dash a or all option. This shows us all the files in the directory
including the hidden files.
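Here's a small sketch you can run to see plain ls, the long listing, and the all flag side by side; the directory and file names are made up for the demo:

```shell
# Create a throwaway directory with one visible and one hidden file:
mkdir -p /tmp/ls_demo
touch /tmp/ls_demo/visible.txt /tmp/ls_demo/.I_am_hidden
ls /tmp/ls_demo        # shows only visible.txt
ls -a /tmp/ls_demo     # also shows . .. and .I_am_hidden
ls -la /tmp/ls_demo    # long listing, hidden files included
```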
You'll notice that I appended two different flags together. This is the same thing as ls -l -a /. Both work the exact same way. The order of the flags determines the order in which they're applied; in our case, it doesn't matter whether we do a long list first or show all files first. Check out how some new files are visible when we use this flag. The dash a, or all, flag shows all files, including hidden ones. You can hide a file or directory by prepending a dot to its name, like the file shown here
.I_am_hidden.
We've covered a lot in this video. We've learned how to view detailed information about files
with the ls command. We also started using computer paths and we learned how to get help with
commands using the dash dash help flag and man pages. We even took a sneak peek at our Linux
file system. If I went through any of this a little too quickly, just rewatch the video. We'll meet
back up in the next one, where we'll start changing directories in the GUI. See you there.
Okay. Now that we know how directories are laid out, let's start moving from one directory to
the next. You probably change directories in your GUI a lot without even realizing it. Even if
that's not the case, we're going to go ahead and show you how to do it. Knowledge is power.
There, that was pretty simple, right? We can move freely between any directory in any path on
our systems. One thing to call out is that there are two different types of paths, absolute and
relative. An absolute path is one that starts from the main directory. A relative path is the path
from your current directory. These two distinctions aren't as important when we're working in a
GUI, but they're important when you work in a shell. So let's see what this looks like in the
Windows CLI.
When you first open PowerShell, you'll usually be in your home directory. Your
prompt shows you which directory you're currently in, but there's also a command that will tell
you where you are. PWD, or print working directory, tells you which directory you're currently in.
If we want to change the directory that we're in, we can use the CD or change directory
command. To use this command, we'll also need to specify the path that we want to change to.
Remember, this path can be absolute, which means it starts from this drive letter and spells out
the entire path. On the flip side, it can be relative, meaning, that we only use part of the path to
describe how to get to where we want to go relative to where we currently are. I'll show you
what I mean in a minute. So right now, we're in C:\Users\cindy. Let's say that instead, I want to
go to C:\Users\cindy\documents, what do you think the command would look like here? Here it
is, cd C:\Users\cindy\documents. And now we've changed to the documents directory. We use an
absolute path to get to this directory, but this can be a little cumbersome to type out. We know
that the documents directory is under the cindy folder, so can't we just go up one level to get to
that folder? We absolutely can. There's a shortcut to get to the level above your current directory,
CD dot dot. Let's run the PWD command one more time. Now, we can see that I'm in
C:\users\cindy, the parent directory of where I was before. The dot dot is considered a relative
path because it'll take you up one level relative to where you are. Let's go back to the documents
folder and try this again, except this time, let's go to the desktop folder using the new command
we learned. We know that the desktop and document directories are under the home directory, so
we could run CD dot dot then CD desktop, but there is actually an easier way to write this,
cd ..\Desktop. Let's check PWD one more time. PWD now shows that we're in the Desktop folder.
Sweet. Another cool shortcut for cd is cd ~. The tilde is a shortcut for the path of your home directory. Let's say I want to get to the Desktop directory in my home folder. I can do something like this: cd ~\Desktop. We've done quite a bit of typing so far. You might actually
be wondering, what would happen if we messed up while typing these directory names? How are
we supposed to memorize where everything is, and if it's spelled correctly? Fortunately, we don't
have to do that. Our shell has a built-in feature called tab completion. Tab completion lets us use
the tab key to auto-complete file names and directories. Let's use the tab completion to get to our
desktop from our home directory, if I type D and then tab, the first file or directory starting with
D will now complete. Now, if this isn't the file or directory that I was looking for, I can continue
to press tab, and the path will rotate through all the options that complete the name that I started
to type. So I'll see desktop, and then documents, and then downloads. Take note, that the dot in
front of the path, .\Desktop, just means the current directory. If I erase this and instead type DE,
then the only directory that matches is desktop. Tab completion is an awesome feature that you'll
be using more and more as you continue to work with commands.
Let's do the same thing in Bash. From our desktop we're going to navigate to the documents
folder. The commands we used earlier in PowerShell are exactly the same here in bash. Print
working directory or PWD again shows us the current path we're in. Yep, looks good. We're
currently in our desktop directory, which you can see from /home/cindy/Desktop. To navigate
around, we use the CD command just like with Windows. We can give it an absolute path like
this: cd /home/cindy/Documents, or we can give it a relative path like this: cd ../Documents. In
Bash, the tilde is used to reference our home directory. So, cd ~/Desktop will take us back to our
desktop, and guess what? We still have that useful tab completion feature in Bash. The
difference between Bash tab complete and Windows tab complete is that if we have multiple
options, it won't rotate through the options, but instead will show us all options at once like this.
We can already start connecting the bridge between Windows and Linux.
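The navigation above can be sketched as a few lines of Bash. This uses a throwaway directory under /tmp rather than a real home folder, so the paths are illustrative:

```shell
mkdir -p /tmp/cd_demo/Documents
cd /tmp/cd_demo/Documents   # absolute path
pwd                         # prints /tmp/cd_demo/Documents
cd ..                       # relative path: up one level
pwd                         # prints /tmp/cd_demo
```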
Now that we've covered listing and changing directories, let's learn how to add new directories.
We can do this in the GUI in a super simple way. Just right-click, new, then folder, and bam, we
have a new folder. Now, what if we wanted to do this in the CLI? In PowerShell, the command
to make a new directory is called mkdir or make directory. Let's make a new directory called
my_cool_folder and there it is. That was easy. What if we wanted to use spaces in our folder
name instead of underscores? What do you think would happen if I did this instead? Mkdir my
cool folder. That's an error. Mkdir is trying to interpret cool and folder as other parameters to the
mkdir command. It doesn't understand those words as valid parameters. Turns out that our shell
doesn't interpret spaces the way we do. So, we need to tell it explicitly that this folder name is
one single thing. We can do this in a variety of ways. We can surround the name with quotes
like mkdir 'my cool folder', or we can escape the space by using the backtick character: mkdir
my` cool` folder. Escaping characters is a pretty common concept when dealing with code. It
means that the next character after the back tick should be treated literally. In our example,
escaping the space tells the shell that the space after the back tick is part of our filename. While
the back tick is the escape character in PowerShell, other shells and programming languages may
use another character as an escape character. You'll see this in the next video.
In Bash, the command to make a new directory is the same as in Windows. Let's make a new
directory called my cool folder with the mkdir or make directory command. And now, we can
verify my cool folder is in our desktop. Instead of using backticks like in Windows to escape a
character, in Bash, you can use a backslash. Similar to Windows, you can also use quotes to
encompass an entire file name. How do you think you would make a directory called my cool
folder in Linux with spaces? mkdir my\ cool\ folder. There it is. Or, mkdir 'my cool folder'.
Works as well. If you guessed this, you're right. If you guessed wrong, that's okay. Just re-watch
this video so you can get a better grasp of how we came to this conclusion.
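Both Bash escaping styles can be tried together in one sketch; the folder names here are invented for the demo:

```shell
mkdir -p /tmp/mk_demo && cd /tmp/mk_demo
mkdir 'my cool folder'     # quotes keep the whole name as one argument
mkdir my\ second\ folder   # a backslash escapes each space individually
ls                         # both directories are listed
```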
Picking right up from the last video, let's say we want to make a couple of directories,
my_cool_folder2 and my_cool_folder3. We could just type mkdir my_cool_folder2, and then
type again mkdir my_cool_folder3, but instead we're going to use another cool PowerShell
feature called history. Each and every time you enter in a command, it gets saved into memory
and added to a special file. You can go through the previous commands you used with the history
command.
I'm now showing a list of commands that I entered earlier. This information alone isn't very
useful. Instead, there's a better use of the history that lets us quickly scroll through these
commands and use them again. We can scroll through these commands with the up or down keys
on our keyboard. I'm going to go up to my previous command, and I should see that I have mkdir
my_cool_folder. Instead of typing the whole thing to make a new folder, I'm just going to
append the number 2 to my command.
And boom, a new file is created without having to type everything over again. Cool, right? You
can even search through your previously used commands using the history shortcut Ctrl+R.
From here you can start typing bits and pieces of the command you want to look for, and it'll
show you matches. Let's search for the word folder.
I should see the mkdir commands I was using before. Pretty neat. If you're using an older version
of PowerShell, it may not have the Ctrl+R feature. If that's the case you can type the # symbol
followed by some part of your old command, and then use Tab completion to cycle through the
items in your history. The history feature, along with Tab completion and get-help, will be your
best friends while you work in PowerShell. Keep them close to you and get to know them super
well. Hmm, our shell is looking a little cluttered. It's kind of hard to see where I'm at, so let's
clean up our shell a little bit. We can do that with the clear command. This doesn't wipe your
history, it just clears the output on your screen. It looks a little better.
The exact same history command that's used in Windows is used in Linux. From here, we can
use our up and down keys and even search through our history with Ctrl-R. To clear your
terminal app, what do you think you'll do? That's right. The clear command.
We've already created a few files and directories, but we need a couple more. We don't want to create them from scratch, so let's make copies instead. In the Windows GUI, all you need to
do is right-click, copy, then paste. You can also use hotkeys if you want. A hotkey is a keyboard
shortcut that does some sort of task. In Windows, the hotkey for copy is Ctrl-C, and for paste, it's
Ctrl-V.
In PowerShell, the command used to copy something is cp. We also need to add a file that we want to copy and the path of where we want to copy it to. Let's copy mycoolfile.text to the
desktop. There you can see mycoolfile.text was added to our desktop. I have a few of these files I
want to move over, but I'm feeling a little lazy and don't want to run this command over and over
again. So, I'm going to use something called a wildcard to help me copy over multiple files at
once. A wildcard is a character that's used to help select files based on a certain pattern. Let's say
I want to get all the files that are JPEGs and copy them somewhere. In my documents directory, I have files called hotdog.jpg, cotton-candy.jpg, and pretzel.jpg. I need to
come up with a pattern to help me select all these files. What do they have in common besides
being named after delicious food? The .jpg extension. Literally, anything else can be in front of the .jpg file extension, and it won't matter. That's what the wildcard asterisk does. It's a pattern for anything. So I'm essentially saying, select all the files with the pattern anything.jpg. So, to copy over all the JPEGs in the folder, I can use cp *.jpg and the path I want to copy
them to. Let's just verify. There it is. Now, instead of copying files one by one, we can use a
single command to get all the files we want. For now, the only wildcard you'll be using is the
asterisk for all. Next up, let's say I want to copy over a directory. I'm going to try to copy a folder
called Bird Pictures to my desktop. Let's just go back into documents. That's Bird Pictures. Now
copy Bird Pictures to desktop. Now, this does exactly what we told it to do. It copies the
directory. However, this directory is empty. What it doesn't do, is copy over the contents of the
directory. To copy the contents of a directory, you need to use another command parameter, -Recurse. The -Recurse parameter lists the contents of the directory. Then if there are any sub-
directories in that listing, it'll recurse or repeat the directory listing process for each of those sub-
directories. We need to use the -Recurse parameter with copy to copy the contents of the
directory along with the directory itself. We're also going to use a new parameter, -Verbose. Copy doesn't output anything to the CLI by default unless there are errors. When we use cp with -Verbose, it will output one line for each file and directory being copied. Let's give it a try: cp Bird Pictures with -Recurse and -Verbose. This time we see a message for each item as it's copied, including the file inside the directory. Excellent. Now the directory and all its contents are copied to my desktop.
In Bash, the exact same command we used in Windows works for copying files. Let's take a look at
this directory. Let's copy my_very_cool_file.txt to my desktop. And there it is. We can also use
the same asterisk wildcard to select patterns. Since this is similar to our Windows copy
command, what do you think we can use to copy over the .png files in this directory? I have files
called Pizza.png, Soda.png, and Cake.png. So I can use cp *.png, then the desktop directory. Now
if I look at my desktop again, there they are. The same copy rules apply in bash. If we want to
copy over a directory, we have to recursively copy over the directory to get all the contents. The
flag for recursive copy is dash r. If I want to copy over my cat pictures folder to the desktop, I
can do something like this. And there it is.
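Putting the wildcard copy and the recursive copy together, a runnable sketch looks like this (the file and folder names are stand-ins for the ones used in the video):

```shell
# Set up some demo files and a destination folder:
mkdir -p /tmp/cp_demo/docs/cat_pictures /tmp/cp_demo/Desktop
touch /tmp/cp_demo/docs/Pizza.png /tmp/cp_demo/docs/Soda.png
touch /tmp/cp_demo/docs/cat_pictures/whiskers.jpg
# Wildcard copy: grab everything matching the pattern *.png:
cp /tmp/cp_demo/docs/*.png /tmp/cp_demo/Desktop/
# Recursive copy: -r brings the directory and its contents along:
cp -r /tmp/cp_demo/docs/cat_pictures /tmp/cp_demo/Desktop/
ls /tmp/cp_demo/Desktop
```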
We talked about making and copying files and directories so far. But what if we wanted to
rename something that we've created? Well, in the Windows GUI, if you want to rename a file, we
just right-click and rename.
In the command line, if we wanted to rename a file, we can use the mv, or Move-Item, command. It lets us rename a file, moving it without changing the directory that it's stored in. On my desktop here, I have blue document and I'm going to move, or rename, it to
yellow document. Now, you can see that I have a yellow document. As you might guess, the
move command also lets us move files from one directory to another. Let's move the yellow
document into My Documents. I can verify that. There it is, cool. You can even move multiple
files by using wildcards. And now you can see, the rest of my colored documents went into My
Documents.
The exact same command can be used in Linux. mv, or move, can rename and move files and
directories. Same thing applies here. I'm going to move my red_document and rename it to
blue_document. Now we can see it's been renamed to blue_document. Then, I'm going to move
the blue_document into the documents folder. There it is. Using wildcards, we can move
multiple files at once, just like Windows. Let's move all of the underscored document files here
to our desktop. Now if we check the desktop, there they are.
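The rename-then-move sequence above can be sketched like this, again with demo paths under /tmp standing in for the real desktop and documents folders:

```shell
mkdir -p /tmp/mv_demo/Documents && cd /tmp/mv_demo
touch red_document.txt
mv red_document.txt blue_document.txt   # rename in place
mv blue_document.txt Documents/         # move into another directory
ls Documents
```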
All righty, now that we've learned how to list, create, and move around files and directories, let's
start removing them. In the Windows GUI, if you wanted to remove a file or folder, just right-
click and delete. The file ends up in the recycle bin, which you can find on your desktop. If you
wanted to restore a file here, you could just right-click and Restore.
If you empty your Recycle Bin for any reason, you won't be able to retrieve those files. In PowerShell, the
command to remove files and directories is rm or remove. Take caution when using remove
because it doesn't use the recycle bin. Once the files and directories are removed, they're gone for
good. Let's remove a file called text1.txt in my home directory. There it is. I'm just
going to remove it.
And now it's gone. The remove command might seem like a dangerous weapon in the wrong
hands. Fortunately, there are safety measures in place that only give this ability to users that are
actually authorized to use it. We'll talk more about file permissions in a different lesson. But let's
take a quick look at what I mean. Let's remove a file called important_system_file. I get an error
message saying that I don't have permission to delete this file. In some cases like this one, it's
because it's been marked as a system file. In other cases, it might be because I don't have enough
permissions in the file system to remove the file. I do have the right permissions this time, but
since it is an important file, PowerShell wants to make sure that I meant to do this. If I repeat the
command with the -Force parameter, remove will go ahead and remove the file. Let's take a look.
-Force. And you can see the file's gone. If the file belongs to someone else, or if I'm not an
administrator, then I might not have the right permissions to remove the file. In that case I'll need
to access an administrative account to remove the file. Okay, let's try removing a directory with
remove next. Here we go. Here's another place where PowerShell is going to ask us if we really
meant to do this, since this is a directory that contains other files, and we did not use the -
Recurse parameter. We see a prompt asking us to confirm if we really want to remove the
directory and all its contents. We can say Yes or Yes to All to continue. We can also cancel this
command and run it again with the -Recurse parameter. That way, PowerShell knows that we
understand the consequences of what we're doing. So let's go ahead and cancel this and try again.
-Recurse. Yeah, now it's gone. And that's the remove command in a nutshell. Again, because of
the nature of this command, you'll want to be extra careful when removing files or directories.
To remove files from Linux, just like in Windows, we can use the rm or remove command. Let's
remove this text1 file. And just like that, it's gone. Similar to Windows, we get a message if we
try to remove something that we shouldn't be able to. Let's remove this self_destruct_button.
Awesome, everything is working as intended. Next let's try removing a directory. If you thought
to yourself that we need to also recursively remove this directory, you'd be right, excellent
deduction skills. So rm -r, let's remove the misc_folder directory. And if we check, the
misc_folder is now gone. Remember, when using the rm command, take extra precaution that you aren't
removing something important by accident.
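A minimal Bash sketch of both kinds of removal, using made-up file names:

```shell
# Set up a scratch area with a file and a directory to remove.
mkdir -p rm_demo/misc_folder
touch rm_demo/text1 rm_demo/misc_folder/notes

# rm deletes a file immediately -- there is no recycle bin on the command line.
rm rm_demo/text1

# Removing a directory and everything inside it requires the recursive flag.
rm -r rm_demo/misc_folder
```

Because there is no undo, a common habit is to run `ls` on the target first, or to use `rm -i`, which asks for confirmation before each deletion.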
I knew enough to be dangerous, and I think that's what got me into my systems administrator
role in Linux. When I got in that role I was working with people who were insanely brilliant.
They had Wikipedia pages written about them, about their contribution to Linux and all these
open source contributions.
They weren't using the operating system, they were engineering it, they were contributing code,
fixing hardware issues and fixing software issues. That type of environment really leveled up my
skills in terms of Linux, because I had to learn, I had to keep up somehow, so I would read their
bug reports and what they did. I guess I'd say, about after a year, I was really comfortable on the
command line. I was packaging my own tools and I was writing code and I was contributing to
open source projects. It was definitely an eye opener considering how much I thought I knew
about operating systems to what I know now. The feeling you get when you contribute code to
something that thousands, if not millions of people might use, you kind of don't believe that you
just did that. That's the feeling you get when you do something in the open source community.
I'm passionate about operating systems because there's a lot of stuff that you can do with them.
You can contribute code to an operating system like Ubuntu or Debian and you can make an
actual impact. I mean I can't go out and build a new CPU or something and have people use it
but I can contribute code, I can fix a bug. There's so much stuff you can do with the operating
system, it's unbelievable.
The most important thing about being a mother is leading by example. What do you do when you
have nothing? A year ago I found myself homeless with my daughters. The whole shelter
experience for the kids, I kept telling them that we were just on vacation and waiting for the
house to be ready. That's the worst thing I ever had to do. I grew up in the housing projects in
East Nashville, so nobody ever talked about career paths. I didn't know what to do or where to
go, but I kept saying, "They are watching how you handle this. You have a serious example to
set for these girls." Most people think that Goodwill is just a retail store, but it's so much more
than that. While I was living at the shelter, I decided to reach out to their career center and I
actually got a job as an office [inaudible] for Goodwill. A co-worker told me that Goodwill and
Google actually have a program to provide IT training, the program is called the IT Support
Professional Certificate. When I learned that I could get a scholarship through Goodwill, it was
life-changing. Chelsea is the person Goodwill was designed to support. That's why thanks to the
assistance of Google.org, we started the Goodwill Digital Career Accelerator. Using tools and
resources from Grow with Google, the Goodwill Digital Career Accelerator is focused on
connecting more than a million people with the skills they need to advance in digital careers. The
Google IT Support Professional Certificate was a great building block for this. I joined the 4:00
AM club, I would get up while the girls were asleep, do my schoolwork. While I was studying, I
learned that a Google representative was going to come and give a tech talk at Goodwill. I had to
go. Chelsea really stood out when I met her at the Goodwill event, she asked some really
interesting questions and the enthusiasm was tangible. So I asked her to send me her resume.
When I got my interview, I was so nervous, I wasn't sure if I was good enough. During the
interview process, Chelsea demonstrated not only the foundational technical knowledge that
she'd developed, but her initiative. That's exactly what we need for people who are working in
our data centers. So we brought her onboard. I absolutely love my job. When I first got the job,
my daughter, she was like, "Mom, you got this job, that means we'll have a house forever." In the
year that we've been working with Google.org, we've seen more than a quarter of a million
people build their digital skills. Almost 30,000 people have gone to work. A year ago, I wasn't
sure where my life was going. I thought everything was falling apart. I feel hopeful about the
future now. I want my daughters to know that they can achieve any goal that they can set for
themselves. My goal is to be a developer, and this is what I want to do. I have come this far, I
plan to get to the stars. My name is Chelsea Rucker, and I'm a Data Center Technician for
Google.
Now that we've learned the basics of file and directory navigation, let's learn how we can display
and edit files, search for text within files and more. In the Windows GUI, if we want to open a
file and view its contents, we can just double click on the file. Depending on the file type, it will
open in a default application. In Windows, text files default to open in an application called
Notepad. But we can change this if we want to. To change the default application that opens files,
just right click and click Properties. Under 'Open with', we can change the application to another
text editor, like Word Pad. Most of the files that we'll be dealing with throughout this course,
will be text and configuration files. So, let's just focus on those files instead of images, music
files, etc. Viewing the contents of a file in PowerShell is simply using the 'cat' command, which
stands for concatenate. Let's give it a try. This will dump the contents of the file into our shell.
This isn't the best solution for a large file, since it just keeps writing the content until the whole file is
displayed. If we want to view the contents of the file one page at a time, we can use the 'more'
command, like this.
The 'more' command will get the contents of the file but will pause once it fills the terminal
window. Now, we can advance the text at our own pace. When we run the 'more' command, we're
launched into a separate program from the shell. This means that we interact with the more
program with different keys. The Enter key advances the file by one line. You can use this if you
want to move slowly through the file. Space advances the file by one page. A page in this case
depends on the size of your terminal window. Basically, 'more' will output enough content to fill
the terminal window. The q key allows you to quit out of 'more' and go back to your shell. If we
want to leave the 'more' command and go back to our shell, we can just hit the q key. Here we
are. Now, what if we just wanted to view part of the file? Let's say we want to quickly see what
the first few lines of the text file are. We don't really want to open up the whole file. Instead, we
just want to get a glimpse of what the document is. This is called the head of the file. To do this,
we can go back to 'cat' and add the -Head parameter. This will show us the first 10 lines of the
file. Now, what if we wanted to view the last few lines or the tail of the file? I bet you can guess
what you are going to do. This will show us, by default, the last ten lines of the file. Again, these
two commands might not seem like they have any immediate use to you yet. We'll see their benefits
when we work with logs in an upcoming lesson. Now, let's take a look at how to do these same
tasks in Linux.
To read a simple file in Bash, we can also use the cat command to view a document. So let's look
at important_document. The cat command is similar to the Windows cat command in that it
doesn't do a great job of viewing large files. Instead, we use another command, less.
Less does a similar thing to what more does in Windows, but it has more functionality. Fun fact,
there's a Bash command called more, but it's been slowly dying out in favor of less. It's literally a
case of less is more. Similar to more, when we use less we're launched into an interactive shell.
Some of the most common keys you'll use to navigate this tool are the up and down keys, page
up and page down. g, this moves to the beginning of a file. You can see now we're at the
beginning. Capital G, this moves to the end of a text file. Now we're at the end. Slash and then a
word_search. This allows you to search for a word or phrase. If I type in slash then type the word
I want to search for, I can scan through the text file for words that match my search. Q, this
allows you to quit out of less and go back to your shell, similar to the q key in the Windows more
command. Do you see how less offers functionality like searching within a file?
Less is a great tool to use to view files of any size. You'll no doubt end up using this command
often as an IT support specialist. Similar to the Windows cat and head parameter, we can do the
same thing in Linux using a command called head. This will show you, by default, the first ten
lines of a file. Now what if you wanted to view the last few lines of a file? You can use a
command called tail. This will show you, by default, the last ten lines of a file.
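Here's a short Bash sketch of those viewing commands; the sample file is generated just for illustration:

```shell
# Build a 20-line sample file so head and tail have something to show.
seq 1 20 > view_demo.txt

# Dump the whole file to the terminal.
cat view_demo.txt

# First 10 lines by default.
head view_demo.txt

# Last 10 lines by default.
tail view_demo.txt

# -n overrides the default line count for either command.
head -n 3 view_demo.txt
```

For anything longer than a screenful, `less view_demo.txt` is the interactive option, but it can't be shown in a static example since it takes over the terminal.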
So far, we've discussed how to read and move files, but we haven't covered how to edit file
contents yet. Spoiler alert: you're about to learn. You can edit text-based files in Notepad, which
we used earlier to view a text file. Notepad is great for basic editing. But when making changes
to configuration files, scripts, or other complex text files, you might want something with more
features. There are lots of good editors out there for the Windows GUI. For this demonstration,
we'll use one called Notepad++. Notepad++, which you can access from the next supplemental
reading, is an excellent open source text editor, with support for lots of different file types.
Notepad++ can open multiple files and tabs. It also does syntax highlighting for known file
types, and has a whole bunch of advanced text editing features. Syntax highlighting is a feature
that a lot of text editors provide. It displays text in different colors and fonts to help you
categorize things differently. We've already installed Notepad++ on our machine. So, you can
check out their website and do the same. Now, you can edit any file using Notepad++ by right
clicking it and selecting edit with Notepad++.
What if you wanted to edit a file from the CLI? Unfortunately, there's no good default editor in
the Powershell terminal. But we can launch our Notepad++ text editor from the CLI and begin
modifying text that way. So: start notepad++, and then just a file name. As you can see, it opened
up Notepad++, and asked if I wanted to create this file. If you'd like to read about text editors
that you can specifically use in the CLI, check out the supplemental reading on an advanced text
editor called Vim.
Supplemental Reading for Notepad++
For more information about Notepad++, check out the link here.
In Linux, there are many popular text editors that we can use to modify files. We won't have
enough time to cover them all. So let's just focus on one editor that can be found on virtually any
distribution, Nano. Nano is an extremely lightweight but useful text editor. We've included it in
the supplementary readings after this video, so go check it out. To edit a file in Nano, just type
Nano then the file name. Once we do that, we'll be launched into the Nano program. From here,
we can start editing content as we normally would with any other text editor. At the bottom of
the screen, you'll notice a few options like caret G and caret K. The caret means to use Ctrl-G or
Ctrl-K. We won't talk about all these options, but a few that might be useful are Ctrl-G, which
opens up a help page, and Ctrl-X, which is used when you want to save your work or exit
Nano. Let's go ahead and edit this file, then save our changes.
It's asking me if I want to save the file or exit and discard my changes. I'm just going to hit Y
because I want to save them. Once I do that, I'll be exited from Nano. Let's verify we actually
changed that file. There it is. Nano is a super useful tool if you need a quick text editor in Linux.
But if you want to be a true OS power user, I recommend that you read the supplemental
material I've included to learn more about the text editors that are used in the industry, like Vim
or Emacs.
Supplemental Reading for GNU Documentation
For more information on Nano click here, for Vim click here and Emacs you can view here.
So far in this course, we have been using command aliases in PowerShell. PowerShell is a
complex, powerful, and super robust command language. We've been able to use
common aliases that are exactly the same as their Linux counterparts. But from here on out,
we'll need to deploy some advanced command line features, so we'll need to look at real
PowerShell commands. You've already seen an example of a real PowerShell command, Get-
Help, which is used to see more information about commands. There's another PowerShell
command that we can use to look at one of the aliases we've been using, ls, for listing a directory.
To see what the actual PowerShell command is that gets executed, we can use the PowerShell
command Get-Alias. Interesting: when we call ls, we are calling the PowerShell command Get-ChildItem,
which gets or lists the children, that is, the files and subdirectories, of the given item.
Let's actually run this Get-ChildItem command with the item C:\.
You'll see this is the same output as ls C:\.
Cool. PowerShell commands are very long and descriptive, which makes them easier to
understand. But it does mean a lot of extra typing, when you're working interactively at the CLI.
Aliases for common commands are a great way to work more quickly in PowerShell.
We've been using them up to this point to help us hit the ground running with the command line.
In Windows, you pretty much have three different ways you can execute commands. You can
use real PowerShell commands, or the relatable alias names. Another method that we've
mentioned, but haven't really talked about yet is cmd.exe commands. Cmd.exe commands are
commands from the old MS-DOS days of Windows. But they can still be run due to backwards
compatibility. Keep in mind that they aren't as powerful as PowerShell commands. An
example of a cmd.exe command is dir, which coincidentally points to the PowerShell command
Get-ChildItem, which is also where the ls alias gets pointed to. Remember the PowerShell
command Get-Help? Well, there's a command parameter that you can use to get help with
cmd.exe commands: /?. Keep the difference in mind: Get-Help is used for PowerShell
commands, like Get-Help ls, and /? is used for other commands, like dir /?. If I tried to use ls /?, it
will return nothing, because the PowerShell command that ls is an alias of doesn't know how to
handle the parameter /?, and vice versa. You're free to use whatever commands you feel
comfortable with. But in this course we're going to use common aliases, and PowerShell
commands.
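For comparison, Bash has its own alias mechanism, and its built-in type command plays a role loosely similar to Get-Alias: it tells you what a command name actually resolves to. A quick sketch (the ll alias is our own example, not a Bash default on every system):

```shell
# Define an alias and then ask Bash what it expands to.
alias ll='ls -l'
alias ll            # shows the definition of ll

# type reports whether a name is an alias, builtin, function, or file on disk.
type cd             # reports that cd is a shell builtin
type ls             # usually a file such as /bin/ls, or an alias if one is set
```

Just like in PowerShell, the alias is a convenience layer; the underlying command is what actually runs.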
You've probably had to search for words in a text document before. Whether it was to find and
replace words or for something else. Most text editors work the same way when it comes to
finding words in the document. All you need to do is Ctrl+F to search for the word. Pretty simple
right? But what if you wanted to see if a word existed in multiple files? There are a few ways we
can do this. Let's talk about the GUI options and then we'll turn to PowerShell and learn how to
search for words from the CLI. Windows has a service called the Windows Search Service. This
service indexes files on your computer by looking through them on a schedule. It then compiles a
list of names and properties of the file that it finds into a database. This is a time consuming and
resource-intensive process. So on many Windows Servers, the Search Service isn't installed or
is disabled. On Windows 8 and Windows 10 desktop computers, it's often enabled for files in
your home directory, but not for the entire hard drive. By default, the Windows Search Service
will let you find files based on their name, path, the last time they were modified, their size, or
other details, but by default you can't search for words inside the files. The Windows Search
Service can be configured to search file contents and their properties. This increases the amount
of time that it takes for the indexer to do its work. It's sort of like the computer is doing all of the
searches that you might want to do ahead of time and then you just have to look up the result.
Let's configure the service to index file contents and see what it looks like. The settings we're
looking for are in the Control Panel, but we can use the Start menu to find the settings we need
faster. Open the Start menu and then type indexing. You'll see the Indexing Options in your
results of the search, click on that. Now you want to change the settings for the user folder which
is where all the home directories are stored. Select Users and then click Advanced.
Now select the File Types tab, and select Index Properties and File Contents.
Click Okay.
Now close out of the indexing options. When you do this, the Windows Search Service will start
to rebuild the index based on your new settings. This could be super fast or could take a while. It
all depends on how many files you have and how large they are. On this system, I've already let
the re-indexing complete. Now I can use Windows Explorer in my home directory to find files
that have a specific word in them.
Let's search for the word cow. The results turn up the farm animals and ranch animals text files.
Awesome, we can see the word cow in this text file. If you don't want to use the Windows
Search Service, we can also use Notepad++, the editor that we installed in an earlier lesson.
From Notepad++, press Ctrl+Shift+F to open the Find in Files dialog.
From here, we can specify what you want to find and what files you want to search. You can
limit your search to a specific directory, to a specific set of file extensions and you can even
actually replace the word with another one from here. So let's search for the word cow again and
this time I'll search on my home directory. Find all, there we go. Now it returns farm animals and
ranch animals. If we can't or don't want to use a GUI, we can search for words within files from
the command line.
In PowerShell, we're going to use the SLS or Select-String command to find words or other
strings of characters in files. You can think of strings as a way for the computer to represent text.
The Select-String command lets you search for text that matches a pattern you provide. This
could be a word, part of a word, a phrase or more complicated patterns that are described using a
pattern matching language called Regular Expressions. Keep in mind that this is a really
powerful capability that we're just scratching the surface of. So here we're going to search for a
word in a file in my home directory. Let's search for the word cow again.
You'll see that Select-String found cow and it tells you the file and the line number where it
found it. Excellent, if you wanted to search through several files in a directory, you can use
pattern matching to select them. Remember the wildcard character asterisk for selecting all, we
can use that here as well. Now we can see that it found farm animals and ranch animals. Select-
String can do lots of other things too. We'll get a chance to see that in later lessons. Being able to
find a string in a file or a set of files is going to be a critical skill for you in this course and in
your IT support work. It's also an important tool that we're going to learn to combine with other
tools to do really powerful things from the CLI.
What if we wanted to search for something within a directory, like looking for just the
executables in that same directory? This is where the command parameter -Filter comes in. I'm
just going to ls my Program Files here with -Recurse and -Filter and look for exes. Well, that's
lots of exes. The -Filter parameter will filter the results for file names that match a pattern. The
asterisk means match anything, and .exe is the file extension for executable files in
Windows. So the only results we're going to get are the files that end in .exe. Cool.
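Bash has no -Filter parameter, but as an aside, the standard find command can produce a comparable recursive, pattern-filtered listing. A sketch using .txt files, since .exe is a Windows convention, and with a directory tree invented for the demo:

```shell
# Make a small tree with mixed file types.
mkdir -p filter_demo/sub
touch filter_demo/a.txt filter_demo/b.log filter_demo/sub/c.txt

# Recursively list only the files whose names match the *.txt pattern.
find filter_demo -name "*.txt"
```

The quotes around "*.txt" matter: they stop the shell from expanding the wildcard itself, so the pattern reaches find intact and is applied at every level of the tree.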
In Bash, we can search for words within files that match a certain pattern using the grep
command. What if you wanted to know if a certain file existed in a directory or if a word was in
a file? Similar to the PowerShell Select-String command, we can use the grep command in Bash.
Let's search for the word cow in farm animals. You'll see that grep found cow in the text file,
farm animals. You can also use grep to search through multiple files. Let's use the asterisk
wildcard character here. And you can see that it found cow in farm animals and ranch animals.
You'll be using grep a lot throughout this course and in later courses, so it's an important
command to remember.
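A minimal sketch of both grep searches, with the sample files invented for the demo:

```shell
# Two files mentioning different animals.
printf 'pig\ncow\nhen\n' > farm_animals.txt
printf 'cow\nhorse\n' > ranch_animals.txt

# Search a single file for a word; matching lines are printed.
grep cow farm_animals.txt

# Search several files at once with a wildcard; each match is
# prefixed with the name of the file it was found in.
grep cow *_animals.txt
```

Like Select-String, grep also accepts regular expressions as its pattern, which is where much of its power comes from.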
All right, we've learned a bunch of individual, very powerful tools. These are the most important
day-to-day commands that you'll need to work in PowerShell. Now, we're going to learn how to
combine these tools to make them even more powerful. Let's run the following command in our
desktop directory, then we'll break it down piece by piece. So I can cd into my desktop directory.
Okay: echo woof > dog.txt. We'll do an ls to check our desktop, and we'll now see a file called
dog.txt. Inside that file, we should see the word woof. Oh, there it is. What's happening here?
Let's take a closer look, echo woof. In PowerShell, the echo is actually an alias for Write-Output.
That gives us a clue to what's happening. We know the echo command prints out our keyboard
input to the screen. But how does this work? Every Windows process and every PowerShell
command can take input and can produce output. To do this, we use something known as I/O
streams or input output streams. Each process in Windows has three different streams: standard
in, standard out, and standard error. It's helpful to think of these streams like actual water streams
in a river. You provide input to a process by adding things to the standard in stream, which flows
into the process. When the process creates output, it adds data to the standard out stream, which
flows out of the process. At the CLI, the input that you provide through the keyboard goes to the
standard in stream of the process that you're interacting with. This happens whether that's
PowerShell, a text editor, or anything else. The process then communicates back to you by
putting data into the Standard out stream, which the CLI writes out on the screen that you're
looking at. Now, what if instead of seeing the output of the command on the screen, we wanted
to save it to a file? The greater than symbol is something we call a redirector operator that lets us
change where we want our standard output to go. Instead of sending standard out to the screen,
we can send a standard out to a file. If the file exists, it'll overwrite it for us. Otherwise, it'll make
a new file. If we don't want to overwrite an existing file, there's another redirector operator we
can use to append information, greater than, greater than. So let's see that in action, echo woof
>> dog.txt. Now, if I look at my dog.txt file again, we can see that woof was added again. But,
what if we wanted to send the output of one command to the input of another command? For
this, we're going to use the pipe operator. First, let's take a look at what's in this file. cat
words.txt. Look at that, it's a list of words. Now, what if we want to just list the words that
contain the string st? We can do what we've done before and just use select-string or SLS on the
file directly. This time, let's use the pipeline to pass the output of cat to the input of select-string.
So cat words.txt | select-string st. And now, we can see a list of words with the string st. To tie
things together, we can use output redirection to put our new list into a file. So now, greater than,
and then a new file called st_words.txt. Now, if I cat st_words.txt, yup, there it is. That's just a
very basic example of how you can take several simple tools and combine them together to do
complex tasks. Okay, now we're going to learn about the last I/O redirector, standard error.
Remember when we tried to remove a restricted system file earlier and we got an error that said
permission denied? Let's review that once more. This time, I'm going to remove another
protected file, rm secure_file. We see errors like we're supposed to. But what if we didn't want to
see these errors? Turns out, we can just redirect the output of error messages in a different output
stream called standard error. The redirection operator can be used to redirect any of the output
streams, but we have to tell which stream to redirect. So, let's type, rm secure_file 2> errors.txt.
If I look at errors.txt, I can see the error message that we just got. So, what does the two mean?
All of the output streams are numbered. One is for standard out, which is the output that you
normally see, and two is for standard error or the error messages. Heads up, PowerShell actually
has a few more streams that we aren't going to use in this lesson. But they can be redirected in
the same way. You can read more about them in the supplemental reading right after this video.
So when we use two greater than, we're telling PowerShell to redirect the standard error stream
to the file instead of standard out. What if we don't care about the error messages, but we don't
want to put them in a file? Using our newly learned redirector operators, we can actually filter
out these error messages. In PowerShell, we can do this by redirecting standard error to $null.
What's $null? Well, it's nothing. No, really. It's a special variable that contains the definition of
nothing. You can think of it as a black hole for the purposes of redirection. So let's redirect the
error messages this time to $null, rm secure_file 2> $null. Now, our output is filtered from error
messages. There's still much more to learn if you're interested. Try Get-Help about_redirection in
PowerShell to see more detail. It may take a little time to get the hang of using redirector
operators. Don't worry, that's totally normal. Once you do start to get used to them, you'll notice
your command line skills level up and your job becomes a little easier. Now, let's take a look at
output redirection in Linux.
Similar to Windows, we have three different I/O or input-output streams: standard out, standard
in and standard err. Remember the standard out example in the last lesson? Well, the same
concept applies in Linux.
We echo the text woof here, but instead of sending it to our screen by default, we're going to
redirect the output to a file using the standard out redirector operator. Let's verify and there it is.
This overwrites any file named dog.txt with the content woof. If we don't want to overwrite an
existing file, we can use the append operator, or greater than greater than. So, echo woof >>
dog.txt. We can verify that. There it is. One redirector operator that we talked about in the
Windows lesson, but didn't show an example of, was the standard in redirector operator. The
standard in redirector is denoted by a less than sign. Instead of getting input from the keyboard,
we can get input from files like this.
This command is exactly the same as cat file_input. The difference here is that we aren't using
our keyboard input anymore; we're using the file as standard in. Finally, similar to Windows, the last
redirector operator we'll talk about is standard err. Standard err displays error messages which
you can get by using the two greater than, redirector operator. Just like Windows, the two is used
to denote standard err. So, to redirect just the error messages of some output, you can use
something like this: ls /dir/fake_dir 2> error_output.txt. Now, if I view that new document,
we can see the error message in error_output.txt. Remember the dollar sign null variable
that we used in Windows to toss unwanted output into a metaphorical black hole? We have
something like that in Linux too. There's a special file in Linux called the /dev/null file. Let's say
we want to filter out the error messages in a file and just want to see standard out messages. We
could do something like this. Now, our output is filtered from error messages. Remember how
we talked about taking the output of one command and using it as the input of another command,
with the Windows pipeline? Well, the same thing exists in Linux. The pipe command allows us
to do this. Let's say we want to see which sub-directories in the slash etc directory contain the
word Bluetooth. We can do something like this. We're using the pipe redirector to take the output
of ls -la /etc and pipe (or send) it to the grep command. Now, without even looking through the
directory, we're able to quickly see if the Bluetooth directory is in here. There it is. You've gotten
a glimpse of the power of redirectors and as you dive deeper into the world of Linux, you'll be
using them on a regular basis. They're super valuable tools to have, and now they're part of your
toolkit.
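All of those redirectors can be sketched in one short Bash session. The file names are invented for the demo, and the || true after the failing ls is just there to keep the example from halting a script that runs with strict error checking:

```shell
# Standard out: > overwrites, >> appends.
echo woof > dog.txt
echo woof >> dog.txt        # dog.txt now holds two lines of woof

# Standard in: < feeds a file to a command in place of the keyboard.
wc -l < dog.txt             # counts the lines in dog.txt

# Standard err: stream 2 captures the error message from a bad path.
ls fake_dir 2> error_output.txt || true

# Or discard errors entirely by sending them to the /dev/null black hole.
ls fake_dir 2> /dev/null || true

# The pipe sends one command's standard out into the next command's standard in.
cat dog.txt | grep woof
```

Notice that `2>` only moves the error stream; any normal output from the same command would still reach the screen unless redirected separately.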
You've learned a lot of commands and tools to help lay a strong foundation for IT support work.
There are many other commands that you haven't seen yet. Don't worry, we'll get to them as they
come up. As you advance in your career, you might even discover that the tools and commands
you're using aren't powerful or efficient enough anymore. Maybe you'll want to search through
files using more complex patterns. To do that, you'll need to know about tools like regular
expressions. Regular expressions are used to help you do advanced pattern-based selection.
There's also so much more to PowerShell. There are excellent videos and articles that can guide
you from the first steps you've learned here to being a Windows CLI master. If this sounds
interesting to you, we really encourage you to check out the supplementary reading right after
this video. And no, we won't grade you on your knowledge of this material in these courses, but
it could be really useful to you in the IT support field. You've done some seriously awesome
work. We've covered a lot of information in this lesson. Maybe this was the first time you've
been exposed to Linux or Windows. If so, you've already passed a huge milestone in your
learning journey. It's super important that you're able to use the commands you learned here from memory. I hope you wrote them down in your notes while watching the videos in this course.
Next up, we'll be testing you on some of the new commands you learned in Bash and Windows
CLI. Make sure to re-watch the videos and practice the exercises, if you want a refresher before
you start. When you're ready, we'll see you in the next lesson.
Supplemental Reading for Windows PowerShell
For more information on getting started with Microsoft PowerShell, check out the link here
and also here.
I have to say, I think, that looking back, I've been really lucky. I had started off even before I was a teenager teaching myself about computers, without even access to them. And somehow, kind of at the end of that era, when I was an early teen, I managed to convince a bookstore owner that he needed to buy a computer, and that I would program it for him to automate his textbook business. So I had never actually done that before, right. I had never laid my hands on that kind of computer before. I had never written a program like that before. And the guy believed me and he did it. And he hired me. That was a part-time job that stuck with me,
actually, until I was in my mid-20s. So for like 11 years, I had this part-time job automating the
textbook business of a neighborhood bookstore. And the thing that's so amazing, and that I'm so
grateful for, is that this guy had this faith in me and put this trust in me. Then I'm even more
amazed that it actually kind of worked out, right. And I spent all those years doing it, and it
helped their business and all that. But it was a great experience. And it was a great way to kick
off my actual, early career as a professional programmer.
Permissions
File permissions are an important concept in computer security. We only want to give access to
certain files and directories to those who need it. While we think about how we want users to
access files and folders, we should also think about how the concept of permissions carries over
to other areas of your life. Maybe you've locked down your social media posts to only people you trust, or given a copy of your house key to a relative in case of an emergency. You'll learn more
about security principles in the last course of this program. For now, we're going to focus on one
small building block, file permissions. In Windows, files and directory permissions are assigned
using Access Control Lists or ACLs. Specifically, we're going to work with Discretionary
Access Control Lists or DACLs. Windows files and folders can also have System Access
Control Lists or SACLs assigned to them. SACLs are used to tell Windows that it should use an
event log to make a note of every time someone accesses a file or folder. This is a more
advanced topic which you can read up on in the next supplementary reading. You can think of a
DACL as a note about who can use a file and what they're allowed to do with it. Each file or
folder will have an owner and one or more DACLs. Let's take a look at an example. In Windows Explorer, I have opened up my home directory. If we right click on Desktop and select Properties, we can see the properties dialog for our Desktop directory. And if we go to the Security tab, we can
see the permissions window here. The top box contains a list of users and groups. And the
bottom box has a list of the permissions that each user group has been assigned. What do each of
these permissions do? It changes a bit depending on whether the permission is assigned to a file
or a directory. Don't worry, it'll all make sense soon. Let's do a rundown of these permissions.
Read, the Read permission lets you see that a file exists, and allows you to read its contents. It
also lets you read the files and directories in a directory. Read and Execute, the Read and
Execute permission lets you read files, and if the file is an executable, you can run the file. Read
and Execute includes Read, so if you select Read and Execute, Read will automatically be
selected. List folder contents, List folder contents is an alias for Read and Execute on a directory.
Checking one will check the other. It means that you can read and execute files in that directory.
Write, the Write permission lets you make changes to a file. It might be surprising to you, but
you can have write access to a file without having read permission to that file. The write
permission also lets you create subdirectories and write to files in the directory. Modify, the Modify permission is an umbrella permission that includes Read, Read and Execute, and Write. Full control, a
user or group with full control can do anything they want to the file. It includes all of the
permissions of Modify, and adds the ability to take ownership of a file and change its ACLs.
Now, when we click on my username, we can see the permissions for Cindy, which show that
I'm allowed all of these access permissions. If we want to see which ACLs are assigned to a file,
we can use a utility designed to view and change ACLs called ICACLs or Improved Change
ACLs.
Let's take a look at my Desktop first: icacls Desktop. Well, that looks useful. But what does it
mean? I can see the user accounts that have access to my desktop, and I can see that my account
is one of them. But what about the rest of this stuff? These letters represent each of the
permissions that we talked about before. Let's take a look at the Help for ICACLs, I bet that'll
explain things. So, icacls /?. All right. There's a description of what each one of these letters means. The F shows that I have full control of my Desktop folder. icacls calls this full access, and we saw this in the GUI earlier as full control. These are the same permission. What do these other letters mean? NTFS permissions can be inherited, as we saw from the icacls help. OI means Object Inherit, and CI means Container Inherit. If I create new files or
objects inside my Desktop folder, they'll inherit this DACL. If I create new directories or
containers in my desktop, they'll also inherit this DACL. If you'd like to understand more about
ACL inheritance and NTFS, check out the next supplemental reading.
Supplemental Reading for Windows ACL
For more information about access control lists (ACL) in Windows, check out the link here.
As we've now learned, there are files and folders that have different permissions set on them, so
that unwanted eyes can't view or modify them. There are 3 different permissions that you can
have in Linux; Read, this allows someone to read the contents of a file or folder. Write, this
allows someone to write information to a file or folder. And execute, this allows someone to
execute a program. Let's take a look at this with the ls command; we'll use the long listing flag, -l, so we can see the permissions on the file. Okay. The first thing we see in this column is -rwxrw-r--;
there are 10 bits here. The first one is the file type. In this example, dash means that the file we're
looking at is just a regular file. Sometimes you might see D which stands for a directory. The
next nine bits are our actual permissions, they're grouped in trios or sets of three. The first trio
refers to the permission of the owner of the file. The second trio refers to the permission of the
group that this file belongs to. The last trio refers to the permission of all other users. The R
stands for readable, W stands for writeable and X stands for executable. Like in binary, if a bit is
set then we say that it's enabled. So for our permissions, if a bit is a dash it's disabled. If it has
something other than a dash, it's enabled. Permissions in Linux are super flexible and powerful,
because they allow us to set specific permissions based on our role, such as an owner, a group, or everyone else. Let's take a look at this in detail. The first set of permissions, rwx, refers to the
permission of the user who owns that file. In this case, it's cindy, which we can see in the owner field of ls -l. So it says here that the owner of the file can read, write, and execute this file. The next
set of permissions are group permissions. We can see the group this file belongs to is the cool
group. They have read and write permissions but not execute permissions. And lastly, the
permissions for all other users and groups only allow them to read this file. And that's Linux
permissions in a nutshell, it might take some time to get used to reading permissions. Don't
worry, you'll eventually get the hang of it. As always, feel free to review this lesson again if you
need a refresher.
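If you want to recreate the exact permission string from this example, you can set those bits yourself on a scratch file. The name my_cool_file is just illustrative, and the chmod command used here is covered in detail a little later in this lesson:

```shell
touch my_cool_file
# Owner: read/write/execute; group: read/write; others: read
chmod u=rwx,g=rw,o=r my_cool_file
ls -l my_cool_file   # the first column reads -rwxrw-r--
```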
Now that we can read permissions, let's take it a step further and learn how to change
permissions in Windows. Let's say we want to give access to another person in my family to view
a folder with family pictures on the computer. How do I do that? On my Local Disk C, I have a
folder called Vacation Pictures that I want to share with another user on my machine, Devan. To
do that, I'm going to right click on this folder and go to Properties, then the Security tab.
Now I can see an option to Edit file permissions. I'm going to click on that. From here, I can see
that I can add a group or usernames to this ACL. I'm going to go ahead and click Add. From
here, it asked me to enter the username of the person I want to add on this ACL. I'm going to
enter devan and then click Check Names to verify that I typed it in right.
After it's been verified, I'm going to click OK. Once devan's added to the ACL, I can click on his
username, then check the allow boxes for the permissions I want to give him. Let's give Devan
modify access, so you can add pictures to this folder too.
That's it. We've kind of been glossing over this other checkbox here, Deny. You might have already guessed that Deny doesn't allow you to have a certain permission. But it's special because it
generally takes precedence over the allow permissions. Let's say Devan is in a group that has
access to this folder. If we explicitly check the deny box for Devan's username, even if the group
has access to the folder, Devan won't. Sorry, Devan. If you want to learn more about permission
precedence, you can check out the supplemental reading. To modify a permission in the CLI,
we're going to return to the icacls command. In the examples I'm going to show you, we'll be running icacls from PowerShell. The icacls command was designed for the Command Prompt
before PowerShell. And its parameters use special characters that confuse PowerShell. By
surrounding icacls parameters with single quotes, I'm telling PowerShell not to try and interpret
the parameter as code. If you run these commands in cmd.exe, you'll need to remove the single quotes for them to work. So let's look at this side by side with powershell.exe and cmd.exe. In PowerShell, the command would be icacls 'C:\Vacation Pictures' /grant, with single quotes, 'Everyone:(OI)(CI)(R)'. In Command Prompt, the command would be icacls, with double quotes, "C:\Vacation Pictures" /grant Everyone:(OI)(CI)(R). We're going to see what this command does in just a moment. For now, let's take a look at the difference in the quotes. In the
PowerShell example, we add single quotes to make PowerShell ignore the parentheses and
because there's a space in the path. In the cmd.exe example, we have to use double quotes for the path. And we don't need the single quotes anymore to hide the parentheses. Got it? Great.
Now, let's take a look at the permissions that we just gave to Devan with icacls. Cool. I see
there's a new DACL attached to the Vacation Pictures directory for Devan, that gives him modify access. We can see that any new files or folders that get created in Vacation Pictures will inherit it. So let's say we want anyone with permission to use this computer to be able to see
these pictures. We don't want them to add or remove photos though. What permissions do we
want to give them? That's right. We want to give them read permission to the Vacation Pictures folder. Let's use the special group Everyone to give read permissions to the directory. So icacls 'C:\Vacation Pictures' /grant 'Everyone:(OI)(CI)(R)'. Success. The Everyone group includes,
well, everyone: it includes local user accounts like Cindy and Devan, and guest users. A guest user is a special type of user that's allowed to use the computer without a password. Guest users are disabled by default, though you might enable them in very specific situations. Now anyone who can use
this computer can browse the photos that Devan and I have put together. Actually, maybe I didn't
really want everyone to look at my vacation photos. Maybe I just want the people that have
passwords on the computer to be able to see them. In that case, I want to use authenticated users
group. That group doesn't include guest users. So first, let's add a new DACL: icacls 'C:\Vacation Pictures' /grant 'Authenticated Users:(OI)(CI)(R)'. Success. Now, let's remove the permissions for
the Everyone group: icacls 'C:\Vacation Pictures' /remove Everyone. Success. Now, let's use icacls to verify that the permissions are set the way we intended: icacls 'C:\Vacation Pictures'.
Sweet. We can see the Authenticated Users were added and Everyone is removed. Next, let's
take a look at modifying permissions in Linux.
In Linux, we change permissions using the chmod, or change mode, command. First, pick which
permission set you want to change. The owner, which is denoted by u, the group the file belongs
to, which is denoted by a g, or other users, which is denoted by an o. To add or remove permissions, just use a plus or minus symbol after the set the permission affects. Let's take
a look at some examples.
So that's chmod u+x my_cool_file. This command is saying that we want to change the
permission of my_cool_file by giving executable or x access to the owner or u. You can do the
same thing if you wanted to remove a permission. So, chmod u-x my_cool_file.
Instead of a plus, we just use a minus. Pretty simple, right? If you wanted to add multiple permissions
to a file, you could just do something like this. This is saying we want to add read and execute
permissions for the owner of my_cool_file. And you can do the same for multiple permission
sets. You do chmod ugo+r my_cool_file.
Now, this says we want to add read permissions for our owner, the group the file belongs to, and
all other users and groups. This format of using rwx and ugo to denote permissions and users in
chmod is known as symbolic format. We can also change permissions numerically, which is
much faster and simpler, and lets us change all permissions at once.
The numerical equivalent of rwx is 4 for read or r, 2 for write or w, and 1 for execute or x. To set
permissions, we add these numbers for every permission set we want to affect. Let's take a look
at an example. The first number 7, is our owner's permission. The second number, 5, is our group
permissions, and the third number, 4, is the permission for all other users.
Wait a minute, where are we getting 5 and 7? Remember, you have to add the permissions
together. If you add 4, 2, and 1 together, you get rwx, which equals 7. So our owner permission
is able to read, write and execute this file. Can you guess what 5 would stand for? That's right: 4 plus 1, which is read and execute. So now, you can see how numeric format is quicker than symbolic format. Instead of running something like this, we can run chmod 754 my_cool_file to update
them all. Either way, you can change permissions using the symbolic or numerical format. Just
pick whichever is easiest for you.
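Here's a sketch of both formats side by side on a throwaway file; the final numeric command leaves the file at 754, matching the example above:

```shell
touch my_cool_file
chmod u+x my_cool_file     # symbolic: add execute for the owner
chmod u-x my_cool_file     # symbolic: remove it again
chmod u+rx my_cool_file    # add read and execute for the owner
chmod ugo+r my_cool_file   # add read for owner, group, and others
chmod 754 my_cool_file     # numeric: 7=rwx owner, 5=r-x group, 4=r-- others
ls -l my_cool_file         # the first column reads -rwxr-xr--
```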
You can also change the owner and the group of a file. The chown or change owner command allows
you to change the owner of a file. Let's go ahead and change the owner to Devan. Awesome.
And Devan is the owner of this file. And to change the group a file belongs to, you can use the chgrp or change group command. Awesome. Now, the best group ever is the group owner for
this file. It may take a while for you to get the hang of reading and changing permissions. You
can practice changing the permissions on a few files until you get it down. Permissions are an
essential building block of computer security, and you'll be using them throughout your work as an IT support specialist.
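Here's a quick sketch of both commands. chown usually needs root privileges, so it's left commented out; the chgrp call just reassigns the file to the current user's own primary group, which any user is allowed to do:

```shell
touch shared_file
# Changing the owner typically requires root:
#   sudo chown devan shared_file
# A user can change a file's group to any group they belong to:
chgrp "$(id -gn)" shared_file
ls -l shared_file
```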
You might have noticed that we were looking at permissions in the GUI before. There's a check
box in the permission list for special permissions. The permissions that we've been looking at
and setting so far are called simple permissions. Simple permissions are actually sets of special,
or specific permissions.
For example, when you set the Read permission on a file, you're actually setting multiple special
permissions. Let's take a look at the list of special permissions available. I'm going to click on the
advanced tab under my permissions setting.
When I click on a username, and then go to Advanced Permissions, I can see a list of all the
special permissions enabled on that file. When we select a basic permission like Read, we're
actually enabling the special permissions List folder / read data, Read attributes, Read extended attributes, Read permissions, and Synchronize, which are just fine-tuned permissions. You can
modify these permissions like you would any other basic permission. Feel free to read more
about the different types of special permissions in the supplemental reading I included after this
video.
In most cases, the simple permissions are going to be all that you need. But sometimes, you need
to create a file or folder that doesn't quite follow a simple pattern. Let's take a look at an example
in the CLI. To view special permissions on a file there, we will simply use the icacls
command as before.
Let's take a look at a more interesting example than my Desktop folder: icacls C:\Windows\Temp. This directory is used to hold temporary files for all users on the system. We
would like for everyone in the system to be able to create files and folders here. You might think
that we should use modify or full control for this, but we don't want users to be able to delete
each other's files.
Let's take a look at some of the DACLs assigned to this folder and figure out how to do this.
First, local administrators and the operating system's computer account have full permissions
over this folder, and all files and folders within it. We see a new descriptor, IO, which indicates
that this DACL is inherit only. That means that it will be inherited, but it is not applied to this
container C:\Windows\Temp. The users group includes all user accounts on the local machine.
We're going to let users WD, or create files/write data; AD, or create folders/append data; and S, for synchronize.
You can see in the next supplemental reading that these special permissions are included in the
Modify simple permission. Unlike the Modify simple permission, we are not granting users
the ability to delete files or folders. We do want users to be able to delete their own files and
folders, though, so how do we do that?
Next, we see CREATOR OWNER, a special user that represents the owner of
whichever file the DACL applies to. In this directory, and all subdirectories, whoever owns a file
or folder has full control of it. Nice, so I'm going to create a folder and file in C:\Windows\Temp
and see what DACLs are applied.
Let's use what we learned about output redirection to record the output of icacls in a file. So, icacls C:\Windows\Temp\example, then our output redirector, giving us icacls.txt. Okay, now let's look at the file we created to view the output of icacls.
Cool, I created the files, so I have full control of them. And all of the other DACLs that we saw
in C:\Windows\Temp have been inherited. You can see that using special permissions in
NTFS DACLs can be complicated, but it can also let you create really powerful sets of
permissions customized to your exact needs.
Supplemental Reading for Special Permissions in Windows
For more information about file and folder permissions in Windows, check out the link
here.
In Linux, we also have special permissions. What if I want a user to be able to do something that
requires root privileges, but I don't want to give them these privileges? What's the use case for
this? Glad you asked. There are certain commands that need to change files that are owned by
root. Normally, if you need to change a file owned by root, you'd have to use sudo. But we want
normal users to be able to change these files without giving them root access. Let's check
out an example. Let's say I want to change my password. I would use the passwd command like we've learned. Pretty simple, right? Now I just enter in my new password and my password is changed. We know that the passwd command secretly scrambles up our passwords, then adds them to the /etc/shadow file. Let's dive a little deeper into this file.
Oh, it says this file's owned by root. How are we able to write or scramble passwords in this file if it's owned by root? Well, thanks to a special permission bit known as setuid, we can enable files to be run with the permissions of the owner of the file. In this case, when you run the passwd command, it's being run as root. Let's verify this.
We see the permissions on this file look a little odd. There's an S here where the x should be. The
s stands for setuid. When the s is substituted where a regular bit would be, it allows us to run the
file with the permissions of the owner of the file. To enable the setuid bit, you can do it
symbolically or numerically.
The symbolic format uses an s while the numerical format uses a 4, which you prepend to the
rest of the permissions like this. Similar to setuid, you can run a file using group permissions
with setgid or set group ID. This allows you to run a file as a member of the file's group.
Under our group permissions, we can see that the setgid bit was enabled, meaning that when this
program is run, it's run as group tty. To enable the setgid bit, you can do something similar to
setuid. The only difference is the numerical format uses a two. So, I can do something like this or
something like this.
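Both bits can be sketched on a scratch file. The file here is empty, so we're only looking at how the bits display, not actually running anything as another user:

```shell
touch my_program
chmod 4755 my_program   # numeric setuid: prepend a 4
ls -l my_program        # owner execute shows as "s": -rwsr-xr-x
chmod u+s my_program    # the symbolic form sets the same bit
chmod 2755 my_program   # numeric setgid: prepend a 2 (this also resets setuid)
ls -l my_program        # group execute shows as "s": -rwxr-sr-x
```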
There's one last special permission bit we should cover and that's the sticky bit. This bit sticks a
file or folder down. It makes it so anyone can write to a file or folder, but they can't actually
delete anything. Only the owner or root can delete anything. Let's look at the permissions for the /tmp directory, which a lot of programs write temporary files to, and you'll see what I mean.
I added the -d flag to show information just for the directory and not the contents. But as you can see, there's a special permission bit at the end here, t. This means everyone can add and modify files in the /tmp directory, but only root or the owner can delete the files in it. You
can also enable the sticky bit using a numerical or symbolic format. The symbolic bit is a t and
the numerical bit is a one, which you prepend to the other permissions. So, sudo chmod +t my_folder, or sudo chmod 1755 my_folder.
Works. So let's verify.
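Here's the same idea on a throwaway directory; sudo isn't needed for a directory you own, and 1755 assumes you want rwxr-xr-x underneath the sticky bit:

```shell
mkdir -p my_folder
chmod 1755 my_folder   # numeric: prepend a 1 for the sticky bit
ls -ld my_folder       # the permissions end in "t": drwxr-xr-t
chmod +t my_folder     # the symbolic form sets the same bit
ls -ld /tmp            # the system /tmp is usually drwxrwxrwt
```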
That was a lot of information on special bits. You usually won't have to deal with these
permission bits in a practical day-to-day manner but it's important to know they exist in case you
ever want to allow users to either share folders or even run commands with escalated privileges.
User access, group access, passwords, and permissions are all core concepts in security. Right
now, you're only working with permissions and access on a single computer scale. Eventually,
you'll learn about access on multi-user levels across different networks and more in the next
course on system administration and IT infrastructure services. For now, congratulations, you've
just taken your first step toward building a foundation of computer security knowledge. In the
next module, we're going to switch gears and talk about our OS and how it manages software.
Next, we've got two assessments for you covering Windows and bash permissions. Once you've
finished, you're granted permission to take a break before we hit the ground running in the next
module.
Package Managers
Now that we know a bit about installing software and dependencies from individual executables
or package files, let's take a look at a different way to manage software installations using tools
called package managers. You've actually already seen a package manager in action. Remember
the apt or advanced package tool we talked about in an earlier video? Well, the advanced package
tool is actually a package manager for the Ubuntu operating system. We'll talk about apt in a
little bit. But you might be curious about what options you have for Windows package
management. A package manager makes sure that the process of software installation, removal,
update, and dependency management is as easy and automatic as possible. Think about the
normal way you might install a new program on your Windows computer. You might search for
it in a search engine, go to the program's website, download the installer, then run it. If you
wanted to update the software, you might open up the program and use whatever mechanism it
provides for you to install the new version. Lots of programs give you a way to perform
automatic updates and Microsoft takes care of the ones it writes through Windows Update. But
you might even need to go back to the website you downloaded the software from originally to
grab another installer for the new version. Finally, if you wanted to remove the software, you
might use the Windows Add/Remove Programs utility. Or maybe run a custom uninstaller if it
provides you with one. Some installation technologies like the Windows installer can take care
of dependency management. But they don't do much to help you install software from a central
catalog of programs or perform automatic updates. This is where a package manager like
Chocolatey can come in handy. Chocolatey is a third party package manager for Windows. This
means it's not written by Microsoft. It lets you install Windows applications from the command
line. Chocolatey is built on some existing Windows technologies like PowerShell, and lets you
install any package or software that exists in the public Chocolatey repository. I've included links
to both in the next reading. You can add any software that might be missing to the public
repository. You can even create your own private repository if you need to package something
like an internal company application. Configuration management tools like SCCM and Puppet even integrate with Chocolatey. That helps make managing deployments of software to the
Windows computers in your company, automatic and simple. We've talked about a few ways we
can install packages in earlier videos. Let's add Chocolatey to the mix, which supports several
methods of software installation itself. First, you can install the Chocolatey command line tool and
run it directly from your PowerShell CLI. Or you can use the package management feature that
was recently released for PowerShell. Just specify that the source of the package should be the
Chocolatey repository. Remember this from our talk about installing software? We used this command to locate the Windows Sysinternals package after adding Chocolatey as a software source. Just as a refresher, the command was Find-Package sysinternals -IncludeDependencies.
That's all well and good. But how do we actually go about installing this package? Well, that's
where the Install-Package cmdlet comes into play. We can use this tool to install a piece of software and its dependencies. Let's give installing that sysinternals package we found earlier a shot. I'm just going to run Install-Package -Name sysinternals. Yep, I'm just going to confirm. And just like that, we've got our package. We can verify it's in place with the Get-Package cmdlet: Get-Package -Name sysinternals. You can also uninstall a package using Uninstall-Package -Name sysinternals.
Supplemental Reading for Windows Package Managers
For more information about the NuGet package manager, check out the link here.
For more information about the Chocolatey package manager, check out the link here.
Okay. Now, to talk about the package manager used in Ubuntu called the APT or Advanced
Package Tool. We've actually already used APT in an earlier course, so hopefully, this won't
look new. The APT package manager is used to extend the functionality of dpkg, the base package tool. It makes
package installation a lot easier. It installs package dependencies for us, makes it easier for us to
find packages that we can install, cleans up packages we don't need, and more. Let's see how we
will install the open source graphical editor, Gimp, using APT. And if you want to follow along
on your own machine, I've included a link to the Gimp download in the next reading. So, sudo
apt install gimp. Let's take a look at what this command is doing. APT grabs the dependencies
that this package requires automatically and asks us if we want to install it. You can see this line
here, 0 upgraded, 18 newly installed, 0 to remove, and 16 not upgraded. This gives us a good
overview of what we're doing to the packages on our machine. Now, let's remove this package: sudo apt remove gimp.
You can see that it removes dependencies for us that we're not using anymore because we don't
need Gimp. You may also have noticed that when installing this package, we didn't have to go find and download the gimp package ourselves. It was just available to our system. How is that possible? Well, thanks to something known
as a package repository, we don't have to manually search online for each and every piece of software we want. We've already seen the Chocolatey package repository in action. Repositories are servers
that act like a central storage location for packages. Lots of software developers and
organizations host their software on the internet, and give out a link to where that location is.
You can add that link to your own machine, so it references that package or list of packages.
You've already seen this with the Register-PackageSource cmdlet, where we added the location of the Chocolatey repository. So on Linux, where do you add a package or repository link? The repository source file in Ubuntu is /etc/apt/sources.list. Your computer doesn't
know where to check for software if you don't explicitly add the package or repository links to
this file. Let's just open this up real quick and take a peek.
There's some extra information in here that isn't important. But you can see that there are repository links here. If you navigate to those links, you'll see a directory that holds lots of packages.
Ubuntu already includes several repository sources in here to help you install the base operating
system packages, and other tools too. If you work in a Linux environment, there are also special
repositories called PPAs or personal package archives.
PPAs are hosted on Launchpad servers. Launchpad is a website owned by the organization,
Canonical Limited. It allows open source software developers to develop, maintain, and
distribute software. You can add PPAs like you would a regular repository link, but be a little
careful when using a PPA instead of the original developer's repositories. PPA software isn't as
vetted as repositories you might find from reputable sources like Ubuntu. They can sometimes
contain defective, or even malicious software. One more thing to call out about repositories is
that the repository managers update their software pretty regularly. If you want to get the latest
package updates, you should refresh your package repositories with the apt update and then
apt upgrade commands. The apt update command updates the list of packages in your
repositories, so you get the latest software available, but it won't install or upgrade packages for
you. Instead, once you have an updated list of packages, you can use apt upgrade, and it will
upgrade any outdated packages for you automatically. Before installing new software, it's good to
run apt update to make sure you're getting the most up-to-date software from your repositories.
You'll also want to run apt upgrade to install any available updated packages on your machine.
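Put together, a routine maintenance session looks something like the console sketch below. These commands need root privileges and network access, so they're shown as a transcript rather than run here:

```
$ sudo apt update    # refresh the package lists from every repository in sources.list
$ sudo apt upgrade   # install newer versions of packages already on the system
```

Running update first matters: upgrade only installs what's in the package list, so an out-of-date list means out-of-date upgrades.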
You can use the apt --help command to learn more about the commands available with apt. We
won't cover them all, but you can list packages, search packages, get more information about
packages, and more. There are lots of different package managers you can use with Ubuntu. We
chose APT because it's a popular package manager, but you can read up on an alternative
package manager in Ubuntu, in the next supplemental reading. Awesome. Now that you've
added the apt command to your toolkit, you're ready to maintain packages in Linux. This is a
skill you'll be using a whole lot in the IT support world. We'll talk about that more in the next
lesson.
Supplemental Reading for Linux PPAs
If you work in a Linux environment, there are also special repositories known as PPAs or
Personal Package Archives. PPAs are hosted on Launchpad servers. For more information
about PPAs, check out the link here.
Supplemental Reading on GIMP
For more information on how to install the open-source graphical editor GIMP click here.
Wow. We've covered a lot of material so far. In the last lesson, we went over how to install,
uninstall, and maintain software in the Windows and Linux OSs. These are tasks that you'll find
yourself doing over and over as an IT support specialist. In this lesson, we're going to cover
another very important function of an IT support specialist: working with disks. In our first
course, Technical Support Fundamentals, we learned about physical disks like hard disk drives
and SSDs. In this lesson, we're going to expand on that and talk about the tools needed to make a
disk usable on a computer. Ready? Let's get started.
You may remember that we introduced the concept of a filesystem in the Technical Support
Fundamentals course. Here's a refresher. A filesystem is used to keep track of files and file
storage on a disk. Without a filesystem, the operating system wouldn't know how to organize
files. So when you have a brand new disk or any type of storage device, like a USB drive, you
need to add a filesystem to it.
There are lots of file systems out there, but the two that we'll talk about in this course are
recommended as default filesystems for Windows and Linux. For Windows, we use the NTFS
filesystem, and for Linux, it's recommended to use ext4. Filesystems have different
compatibilities with different OSes. Most of the time, cross operating system support is minimal
at best. Let's say you have a USB drive that's using an NTFS filesystem. Both Windows and
Ubuntu Linux can read and write to the USB drive. But if you have an ext4 USB drive, it'll only
work on Ubuntu and not on Windows, at least not without the help of third-party tools.
It's pretty likely that you'll encounter this situation in an IT support role. Let's say you have some
important files on that same USB drive that you want to copy over to your Windows, Linux, and
Mac OSes, what would you do then? This is a pretty common situation. You'd have to reformat
or wipe the USB drive and add a filesystem that's compatible with all three operating systems.
Luckily, there are filesystems like FAT32 that support reading and writing data to all three major
operating systems. FAT32 has some shortcomings though. It doesn't support files larger than 4
gigabytes, and Windows won't format a FAT32 volume larger than 32 gigabytes. This might be enough
for a small USB drive, but it's not really great for anything else.
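The 4-gigabyte file limit is easy to check with a little arithmetic. This sketch compares a hypothetical 5 GB file against FAT32's maximum file size, which is 4 GiB minus one byte because file sizes are stored in a 32-bit field:

```shell
# FAT32 stores file sizes in a 32-bit field, so the largest file is 2^32 - 1 bytes.
fat32_max=$((4 * 1024 * 1024 * 1024 - 1))   # 4294967295 bytes
file_size=$((5 * 1000 * 1000 * 1000))       # a hypothetical 5 GB video file

if [ "$file_size" -gt "$fat32_max" ]; then
  echo "too large for FAT32"
else
  echo "fits on FAT32"
fi
```

This is exactly why copying a long video to a FAT32 USB drive can fail even when the drive has plenty of free space.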
You can learn more about FAT32 in the next supplemental reading. This still raises the question:
what if you wanted to be able to share files between multiple OSes and don't want to deal with
filesystem limitations? Don't worry, we've got you covered. In the next course on system
administration and IT infrastructure services, we'll discuss another filesystem type called
network filesystems that solves this exact problem. All right, now that you've got a quick
refresher on filesystems, let's spend the next few lessons discussing how you actually set them
up.
Supplemental Reading for FAT32 File System
For more information about the FAT32 File System, please check out the link here.
Before we start adding a filesystem to a disk, let's do a rundown of the components of the disk
that allow you to store and retrieve files. A storage disk can be divided into something called
partitions. A partition is just a piece of the disk that you can manage. When you create multiple
partitions, it gives you the illusion that you're physically dividing a disk into separate disks.
To add a filesystem to a disk, first you need to create a partition. Usually, we just have a single
partition for our OS, but it's not uncommon to have multiple partitions for different uses. Let's
say you want to have two partitions on a disk, one for a Windows OS and one for a Linux OS.
Instead of using two machines to use both operating systems, you can just use one machine and
switch between the two OSs on boot-up. You can also add different filesystems on different
partitions of the same disk. Partitions essentially act as their own separate sub-disks, but they all
use the same physical disk. One thing to call out is that, when you format a filesystem on a
partition, it becomes a volume. Volume and partition are sometimes mistakenly used
synonymously, but we want to make sure that you understand this distinction.
The other component of a disk is a partition table. A partition table tells the OS how the disk is
partitioned. The table will tell you which partitions you can boot from, how much space is
allocated to each partition, etc. There are two main partition table schemes that are used: MBR, or
Master Boot Record, and GPT, or GUID Partition Table.
These schemes decide how to structure the information on partitions. MBR is a traditional
partition table, and it's mostly used in the Windows OS. MBR only lets you have volume sizes of
2 terabytes or less. It also uses something called primary partitions. You can only have four
primary partitions on a disk. If you want to add more, you have to take a primary partition and
make it into something known as an extended partition. Inside the extended partition, you can
then make something called a logical partition. It's a little odd to get at first, but that's just how
the partition table was created.
MBR is an old standard, and it's slowly being phased out by the next partition table scheme we'll
talk about, GPT. GPT is becoming the new standard for disks. You can have a volume size
greater than 2 terabytes, and it only has one type of partition. You can make as many of them as
you want on a disk. In an earlier lesson, we learned about a new BIOS standard called UEFI that's
become the default BIOS for newer systems. To use UEFI booting, your disk has to use the
GUID Partition Table. Now that you know what you need to do to make a partition, let's partition
an actual disk. In the next few lessons, we're going to learn how to partition and format a USB
drive for each respective OS.
Now that we've got a little theory under our belts, how can we actually partition a disk and
format a file system in Windows? Although a quick web search will turn up all kinds of third
party disk partitioning programs other people have written, Windows actually ships with a great
native tool called the Disk Management utility. Like most things in Windows, there are a few
ways to get to Disk Management. We'll launch it by right-clicking This PC, selecting the
"Manage" option then clicking the "Disk Management" console underneath the storage grouping.
We should see a display of both the disks and disk partitions along with information about what
type of file system they're formatted with. There are all kinds of good things to know here too.
Like the free and total capacity of disks and partitions. One super-cool property of the disk
management console is that from here, you can also make modifications to the disk and
partitions on your computer. Messing with the partition where the Windows operating system is
installed probably isn't the best way to demonstrate the partitioning and formatting abilities of
the disk management console. So let's use a USB drive instead. Once the drive has been inserted
and the plug and play service does the work of installing the driver for it, you should see it show
up in the disk management as an additional disk. The USB drive is currently formatted using the
FAT32 file system. Let's go ahead and reformat the partition using NTFS instead. To do this, we
right click on the partition and choose format.
From this window, we can choose the volume label or name we'd like to give the disk. Let's just
stick with USB drive. You can also specify the file system which will change to NTFS. That's
pretty straightforward, but there are also some other options that might not be so clear. Like
what's that allocation unit size thing? Well, the allocation unit size is the block size that will be
used when you format the partition in NTFS. In other words, this is the size of the chunks that
the partition will be chopped into. Data that needs to be saved will spread out across those
chunks. This means that if you store lots of small files, you'll waste less space with small block
sizes. If you store large files, larger block sizes will mean you'll need to read fewer blocks to
assemble the file. We'll pick the default, which is fine in most cases. You'll also see the option to
perform a quick format is available. The difference between a quick format and a full format is
that in a full format, Windows will do a little extra work to scan the disk or USB drive in our
case, for errors or bad sectors. This extra work will make the formatting process a little longer, so
we'll just stick to quick for now; we don't want anything to slow us down. The
last option on the format screen is whether or not to enable file or folder compression. The
decision to enable or disable compression comes with a trade-off. If you enable compression,
your files and folders will take up less space on the disk, but compressed files will need to be
expanded when you open them, which means the computer's processor will need to do some
extra work. We aren't particularly concerned with squeezing out every last bit of disk space, so
we'll leave this box unchecked. Finally, we can hit "okay" to proceed with the format. Windows
will warn us first that formatting the volume will erase any data that might be on it. Once we let
it know that it's okay it'll start the formatting process. After a little bit of processing, we should
see the label on the partition turn to healthy. Using the GUI is pretty intuitive, but there's also a
command line way to accomplish the same task. This can come in handy if you need to automate
disk partitioning. To do disk manipulation from the CLI we'll dive into a tool called Diskpart.
Diskpart is a terminal based tool built for managing disks right from the command line. Let's
format our thumb drive again, but using Diskpart instead of the GUI. First off, we'll plug in our
thumb drive. Then, to launch Diskpart, all we need to do is open up a command prompt, in this
case cmd.exe, and type diskpart into it.
This will open up another terminal window where the prompt should read DISKPART. You can list
the current disks on the system by typing list disk. Next, we identify the disk we want to
format. A good signal is the size of the disk, which will be much smaller for a USB drive. Then
we can select it with select disk 1. Now we'll wipe the disk using the clean
command, which will remove any and all partition or volume formatting from the disk. With the
disk wiped, we now need to create a partition on it. This can be done with the create partition
primary command, which will create a blank partition for our file system.
Then let's select the partition with select partition 1. That's the number of our freshly created
partition, and now we'll mark it as active by simply typing active. If you guessed that the next step
is to format the disk with the NTFS file system, you're right. We can do this by running this
command at the Diskpart prompt: format fs=ntfs label="my thumb drive" quick. That's fs for
the file system, NTFS; then the label, which I'm just calling "my thumb drive"; and then the formatting type, quick. This
command will format the thumb drive with NTFS in quick mode, which we talked about earlier
and we just gave it the name "My thumb drive". Congratulations, you've just formatted a USB
drive from the command line. If you want to learn more about the options and tasks you can
accomplish with Diskpart, check out the Diskpart link in the supplemental reading I've included
right after this video. And there you have it, that's how you format a disk with the NTFS file
system in the Windows operating system using both the command line and the GUI. If you want
a refresher, feel free to watch this lesson again before heading to the next one.
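For reference, the Diskpart steps from this lesson can be collected into a single script file and replayed with diskpart /s, which is handy if you need to automate disk preparation. This is a sketch, not something to run blindly: it assumes the USB drive really is disk 1, so always confirm with list disk first, because clean erases the selected disk.

```
rem format_usb.txt -- run from a command prompt as: diskpart /s format_usb.txt
select disk 1
clean
create partition primary
select partition 1
active
format fs=ntfs label="my thumb drive" quick
```

Each line is exactly what we typed interactively; the /s flag just feeds them to Diskpart in order.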
DiskPart
The DiskPart command-line utility helps you manage storage on your computer's drives. The DiskPart utility can
be used to manage partitions of hard disks, including creating, deleting, merging, or expanding partitions
and volumes. It can also be used to assign a file system format to a partition or volume.
There are three main divisions of storage that you will find on a drive: cluster, volume, and partition.
• Cluster (allocation unit size) is the minimum amount of space a file can take up in a volume or
drive.
• Volume is a single accessible storage area with a single file system; this can be across a single disk
or multiple.
• Partition is a logical division of a hard disk that can create unique spaces on a single drive.
Generally used for allowing multiple operating systems.
To use DiskPart you will need to use specific commands to select and manage the parts of your drive you
need to access. For a list of common DiskPart terminal commands visit this helpful guide.
The commands let you work with partitions and volumes, but the base storage unit, called the cluster size, is
set when the volume or partition is initialized.
Cluster Size
Cluster size is the smallest division of storage possible in a drive. Cluster size is important because a file will
take up the entire size of the cluster regardless of how much space it actually requires in the cluster.
For example, if the cluster size is 4kb (the default size for many formats and sizes) and the file you're trying
to store is 4.1kb, that file will take up 2 clusters. This means that the drive has effectively lost 3.9 kb of
space for use on a single file.
When partitioning a disk, you should specify the cluster size based on your file sizes. If no cluster size is
specified when you format a partition, a default is selected based on the size of the partition. Using
defaults can result in loss of usable storage space.
It is important to remember when using DiskPart that the actions you take are permanent so be careful not
to erase data accidentally.
Key Takeaways
DiskPart is a tool that lets you manage your storage from a command line interface and is useful for a
multitude of actions including creating, deleting, merging, and repairing drives.
• The three main divisions of storage that you will find on a drive are cluster, volume, and partition.
• To use DiskPart you will need to use specific commands to select and manage the parts of your
drive you need to access.
• Cluster size is the smallest division of storage possible in a drive. Cluster size is important because
a file will take up the entire size of the cluster regardless of how much space it actually requires in
the cluster.
Now that you've formatted your new file system, there's one more step left. You have to mount
your file system to a drive. In IT, when we refer to mounting something like a file system or a
hard disk, it means that we're making something accessible to the computer. In this case, we
want to make our USB drive accessible so we mount the file system to a drive. Windows does
this for us automatically. You might have noticed this if you plug in a USB drive, it'll show up
on your list of drives and you can start using it right away. When you're done using the drive,
you'll just have to safely eject or essentially unmount the drive by right clicking and selecting
eject. We'll talk about why this is important in a later lesson.
In Linux, there are a few different partitioning command line tools we can use. One that supports
both MBR and GPT partitioning is the parted tool. Parted can be used in two modes. The first is
interactive, meaning we're launched into a separate program, like when we use the less
command. The second is command line, meaning you just run commands while still in your
shell. We're going to be using the interactive mode for most of this lesson. Before we do that, let's
run a command to show which disks are connected to the computer, using the command line mode.
We can do this by running the parted -l command: sudo parted -l. This lists out the disks that
are connected to our computer. We can see that the disk /dev/sda is 128 gigabytes. I've also
plugged in a USB drive and you can see that /dev/sdb is around 8 gigabytes. Let's quickly go
through what this output says. Here we can see the partition table is listed as gpt. The number
field corresponds to the number of partitions on the disk. We can see that there are three
partitions. Since this disk is /dev/sda, the first partition will correspond to /dev/sda1, the
second will correspond to /dev/sda2, et cetera. The start field is where the partition starts on the
disk. For this first partition we can see that it starts at 1,049 kilobytes and ends at 538 megabytes.
The field after that shows us how large the partition size is. The next field tells us what file
system is on the partition. Then, we have the name and finally, we can see some flags that are
associated with this partition. You can see here that /dev/sdb doesn't currently have any
partitions, we'll fix that in a minute. Let's select our /dev/sdb disk and start partitioning it. We
want to be super careful that we select the correct disk when partitioning something so we don't
accidentally partition the wrong disk. We're going to use the interactive mode of parted by
running sudo parted /dev/sdb. Now we're in the parted tool. From here, we can run more
commands. If we want to get out of this tool and go back to the shell then we just use the quit
command. I'm going to run print just to see this disk one more time. It says we have an
unrecognized disk label. We'll need to set a disk label with the mklabel command. Since we want
to use the GPT partition table, let's use the command mklabel gpt. Let's look at the status of our
disk again; to do that, we can use the print command. Here we can see the disk information for the
selected /dev/sdb disk. Now it says we have the partition table gpt. All right. Let's start making
modifications to the disk. We want to partition the /dev/sdb disk into two partitions. Inside the
parted tool we're going to use the mkpart command. The mkpart command needs the
following information: what type of partition we want to make, what file system we want to format it
with, and where on the disk the partition starts and ends, like this.
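The command entered at the parted prompt looks like this transcript sketch, where primary is the partition type, ext4 the filesystem hint, and 1MiB and 5GiB the start and end points:

```
(parted) mkpart primary ext4 1MiB 5GiB
```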
The partition type is only meaningful for MBR partition tables. Remember, MBR uses primary,
extended, and logical partitions. Since we're formatting this disk using GPT, we're just going to use
primary as the partition type. The start point here is one mebibyte and the endpoint is five
gibibytes. So our partition is essentially five gibibytes. Remember from the earlier course, that
data sizes have long been referred to in two different ways, using the exact data measurement
and the estimated data measurement. Remember that one kibibyte is actually 1,024 bytes while
one kilobyte is 1,000 bytes. We haven't really had to care about this distinction before. Some
operating systems sometimes measure one kilobyte as 1,024 bytes which is confusing, but when
dealing with data storage we want to make sure we're using the precise measurements so we
don't waste precious storage space. Let's opt to use mebibytes and gibibytes for our partition. Next,
we're going to format the partition with a file system using mkfs. I'll run
sudo mkfs -t ext4 /dev/sdb1: the type is ext4, and the partition I want to format is sdb1. We also left the rest of the
disk unpartitioned because we're going to use it for something else later. With that, we've created
a partition and formatted a file system on a USB drive. Remember to always be careful when
using the parted tool. It's very powerful and if you modify the wrong disk on here it could cause
a pretty big mess. Even though we've partitioned our disk and formatted a file system on here,
we're not actually able to start reading and writing files to it just yet. There's one last step to get a
usable disk in Linux. We have to mount the file system to a directory so that we can access it
from the shell. Spoiler alert, you'll learn how to do that in the next video.
To begin interacting with the disk, we need to mount the file system to a directory. You might
be thinking, why can't we just cd into /dev/sdb? That's the disk device, isn't it? It is, but if we try
to cd into /dev/sdb like this, we'd get an error saying the device is not a directory, which is true.
To resolve this, we need to create a directory on our computer and then mount the file system of
our USB drive to this directory.
Let's pull up where our partition is with sudo parted -l. Okay, I can see that the partition we want
to access is /dev/sdb1. I've created a directory already under root called my_usb. So let's give this
a try. So sudo mount /dev/sdb1 /my_usb/. Now if we go to my_usb, we can start reading and
writing to the new file system. We actually don't need to explicitly mount a file system using the
mount command. Most operating systems actually do this for us automatically, when we plug in
a device like a USB drive.
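As a sketch, the mount step from this lesson looks like the console transcript below. It assumes the formatted partition is /dev/sdb1 and that the /my_usb directory already exists; both commands need root privileges, so they're shown rather than run here:

```
$ sudo mount /dev/sdb1 /my_usb   # attach the filesystem to the /my_usb directory
$ df -h /my_usb                  # confirm it's mounted and see its free space
```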
File systems have to be mounted one way or the other, because we need to tell the OS how to
interact with the device. We can also unmount the file system in a similar way using the umount
command. Unmounting is the opposite of mounting a disk. So now let's unmount the file system.
I can either use sudo umount /my_usb, or sudo umount /dev/sdb1. Both will work to unmount a
file system. When you shut down your computer, disks that were mounted manually are
automatically unmounted. In some cases, like if we were using a USB drive, we just want to
unmount the file system for the USB drive without shutting down.
Always be sure to unmount a file system of a drive before physically disconnecting the drive. In
the case of the USB drive, we can run into some interesting file system errors if we don't do this.
We'll talk more about this in the upcoming lesson. Also, keep in mind that when we use the
mount command to mount a file system to a directory, once we shut off the computer, that mount
point disappears. We can permanently mount a disk, though, if we need it to automatically load
up when the computer boots.
To do this, we need to modify a file called /etc/fstab. If we open this up now, you'll see a list of
unique device IDs, their mount points, what type of file system they are, plus a little more
information. If we want to automatically mount file systems when the computer boots, just add
an entry similar to what's listed here. Let's go ahead and do that really quickly.
The first field that we need to add to /etc/fstab is the UUID, or Universally Unique ID, of our
USB drive. To get the UUIDs of our devices, we can use the command sudo blkid. This will
show us the UUIDs for block device IDs, aka storage device IDs, and that's it. We've covered a lot
of essential disk management tasks. So far we've partitioned a disk, added a file system, and
mounted it for use. If you're curious and want to learn more about the /etc/fstab file and its
options, check out the next supplemental reading. Otherwise, let's move on.
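To show what you'd do with that blkid output, here's a sketch that pulls the UUID field out of one output line using plain shell string trimming. The line and UUID are entirely hypothetical; real UUIDs are generated when the filesystem is created:

```shell
# A made-up blkid output line for illustration:
line='/dev/sdb1: UUID="1b2c3d4e-5f60-4a7b-8c9d-0e1f2a3b4c5d" TYPE="ext4"'

# Strip everything up to UUID=" and everything from the closing quote onward.
uuid=${line#*UUID=\"}
uuid=${uuid%%\"*}
echo "$uuid"
```

That UUID value is what goes into the first column of the /etc/fstab entry.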
The fstab configuration table consists of six columns containing the following parameters:
• Column 1 - Device: The universally unique identifier (UUID) or the name of the device to be
mounted (sda1, sda2, … sda#).
• Column 2 - Mount point: Names the directory location for mounting the device.
• Column 3 - File system type: Linux file systems, such as ext2, ext3, ext4, JFS, JFS2, VFAT, NTFS,
ReiserFS, UDF, swap, and more.
• Column 4 - Options: List of mounting options in use, delimited by commas. See the next section
titled “Fstab options” below for more information.
• Column 5 - Backup operation or dump: This is an outdated method for making device or
partition backups and command dumps. It should not be used. In the past, this column contained
a binary code that signified:
o 0 = turns off backups
o 1 = turns on backups
• Column 6 - File system check (fsck) order or Pass: The order in which the mounted device
should be checked by the fsck utility:
o 0 = fsck should not run a check on the file system.
o 1 = mounted device is the root file system and should be checked by the fsck command
first.
o 2 = mounted device is a disk partition, which should be checked by fsck command after
the root file system.
Example of an fstab table:
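The original example table isn't reproduced here, but an illustrative entry (with a hypothetical UUID and mount point) maps onto the six columns like this:

```
# <device>                                  <mount point>   <type> <options> <dump> <pass>
UUID=1b2c3d4e-5f60-4a7b-8c9d-0e1f2a3b4c5d  /mnt/mystorage  ext4   defaults  0      2
```

The mount options that can appear in column 4 include the following: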
• sync or async - Sets reading and writing to the file system to occur synchronously or
asynchronously.
• auto - Automatically mounts the file system when booting.
• noauto - Prevents the file system from mounting automatically when booting.
• dev or nodev - Allows or prohibits the use of the device driver to mount the device.
• exec or noexec - Allows or prevents file system binaries from executing.
• ro - Mount file system as read-only.
• rw - Mount file system for read-write operations.
• user - Allows any user to mount the file system, but restricts which user can unmount the file
system.
• users - Any user can mount the file system plus any user can unmount file system.
• nouser - The root user is the only role that can mount the file system (default setting).
• defaults - Use default settings, which include rw, suid, dev, exec, auto, nouser, async.
For more options, consult the man page for the file system in use.
1. Prepare the drive using the fdisk command, selecting a Linux-compatible file system type, like ext4. If
needed, you can also create a partition on the drive with the fdisk command.
2. Find which block devices the Linux system has assigned to the new drive. The block device is a
storage device (hard drive, DVD drive, etc.) that is registered as a file in the /dev directory. The
device file provides an interface between the system and the attached device for read-write
processes. Use the lsblk command to find the list of block devices that are connected to the
system.
Example output from the lsblk command:
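The original example output isn't reproduced here, but a representative lsblk listing matching the description below would look roughly like this (sizes are hypothetical):

```
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  128G  0 disk
└─sda1   8:1    0  128G  0 part /
sdb      8:16   1    8G  0 disk
└─sdb1   8:17   1    8G  0 part
```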
a. NAME - Device names of the blocks. In this example, the device names are the existing sda drive and
sda1 partition plus the new sdb hard drive and a newly formatted sdb1 partition.
b. MAJ:MIN - The major and minor numbers of each device:
1. The major number is the driver type used for device communication. A few examples include:
• 1 = RAM
• 3 = IDE hard drive
• 8 = SCSI hard drive
• 9 = RAID metadisk
2. The minor number is an ID number used by the device driver for the major number type.
• The minor numbers for the first hard drive can range from 0 to 15.
a. The 0 minor number value for sda represents the physical drive.
b. The 1 minor number value for sda1 represents the first partition on the sda drive.
• The minor numbers for the second hard drive can range from 16 to 31.
a. The 16 minor number value for sdb represents the physical drive.
b. The 17 minor number value for sdb1 represents the first partition on the sdb
drive.
• Minor numbers for a third hard drive would range from 32 to 47, and so on.
c. RM - Indicates if the device is:
1. 0 = not removable
2. 1 = removable
d. SIZE - The amount of storage available on the device.
e. RO - Indicates if the device is:
1. 0 = read-write
2. 1 = read-only
f. TYPE - Lists the type of device, such as disk (a drive) or part (a partition).
1. In the first column, add the new file system device name. In this example, the device name would
be /dev/sdb1.
2. In the second column, indicate the mount point for the new partition. This should be a directory
that would be easy to find and identify for users. For the sake of simplicity, the mount point for this
example is /mnt/mystorage.
3. In the third column, enter the file system used on the new partition. In this example, the file system
used for the new partition is ext4.
4. In the fourth column, enter any options you would like to use. The most common option is
defaults.
5. In the fifth column, set the dump file to 0. Dump files are no longer configured in the fstab file, but
the column still exists.
6. In the sixth column, the pass value should be 2 because it is not the root file system and it is a best
practice to run a file system check on boot. Your fstab table should now include the new partition:
<File System> <Mount Point> <Type> <Options> <Dump> <Pass>
7. Reboot the computer and check the mystorage directory for the new partition.
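Putting steps 1 through 6 together, the new line added to /etc/fstab for this example would read roughly:

```
/dev/sdb1    /mnt/mystorage    ext4    defaults    0    2
```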
One term you might have heard in relation to disks and partitions, is swap space. Before we talk about
swap space, let's talk about the concept of virtual memory. Virtual memory is how our OS provides the
physical memory available in our computer (like RAM) to the applications that run on the computer. It does
this by creating a mapping of virtual to physical addresses. This makes life easier for the program, which
needs to access memory since it doesn't have to worry about what portions of memory other programs
might be using. It also doesn't have to keep track of where the data it's using is located in RAM. Virtual
memory also gives us the ability for our computer to use more memory than we physically have installed.
To do this, it dedicates an area of the hard drive to use as storage for blocks of data called pages. When
a particular page of data isn't being used by an application, it gets evicted, which means it gets copied out
of memory onto the hard drive. This is because accessing data in RAM is fast, much faster than the hard
drive, but RAM space is at a premium. Because of this, the operating system wants to keep the most
commonly accessed data pages in RAM. It then puts stuff that hasn't been used in a while on the disk. This
way, if a program needs a page that's not accessed a lot, the operating system can still get to it. But it has
to read it from the comparatively slow hard drive and put it back into memory. Almost all operating
systems use some kind of virtual memory management scheme and paging mechanism. So how does it
work on Windows? The Windows OS uses a component called the Memory Manager to handle virtual
memory. Its job is to take care of that mapping of virtual to physical memory for our programs and to
manage paging. In Windows, pages saved to disk are stored in a special hidden file on the root partition of
a volume called pagefile.sys. Windows automatically creates page files and uses the Memory
Manager to copy pages of memory to be read as needed. The operating system does a pretty good job of
managing the page file automatically. Even so, Windows provides a way to modify the size, number, and
location of paging files through a control panel applet called System Properties. You can get to the system
properties applet by opening up the control panel.
Going to the System and Security setting, and clicking on System. Once in the System pane, you can open
up the advanced system settings on the left-hand menu. Pick the Advanced tab, then click on the Settings
button in the Performance section. One last time, click on the Advanced tab and you should see a section
called virtual memory which displays the paging file size. If you click the change button, you can override
the defaults Windows provides, so you can set the size of the paging file, and add paging files to other
drives on the computer. Microsoft has some guidelines for setting the paging file size that you can follow.
For example, on 64-bit Windows 7, the minimum paging file size should be set to 1x the amount of RAM in
the machine. Unless you have a specific reason to change it, it's generally fine to let Windows automatically
manage the paging file size itself.
In this reading, you will learn about Windows paging files and their primary functions. You will also learn
how to set the appropriate Windows paging file size. As an IT Support specialist, you may want to add or
maintain page files to improve system performance. A paging file is an optional tool that uses hard drive
space to supplement a system’s RAM capacity. The paging file offloads data from RAM that has not been
used recently by the system. Paging files can also be used for system crash dumps or to extend the system
commit charge when the computer is in peak usage. However, paging files may not be necessary in
systems with a large amount of RAM.
Determining the size needed for a paging file depends on each system’s unique needs and uses. Variables
that have an impact on page file sizes include:
• System crash dump requirements - A system crash dump is generated when a system crashes. A
page file can be allocated to accept the Memory.dmp. Crash dumps have several size options that
can be useful for various troubleshooting purposes. The page file needs to be large enough to
accept the size of the selected crash dump. If the page file is not large enough, the system will not
be able to generate the crash dump file. If the system is configured to automatically manage page files, the
system will size the page files based on the crash dump settings. There are multiple
crash dump options. Two common options include:
o Small memory dump: This setting will save the minimum amount of info needed to
troubleshoot a system crash. The paging file must have at least 2 MB of hard drive space
allocated to it on the boot volume of the Windows system. It should also be configured to
generate a new dump file for each system crash to save a record of system problems. This
history is stored in the Small Dump Directory which is located in the
%SystemRoot%\Minidump file path.
▪ To configure a small memory dump file, run the following command using the
cmd utility:
• To set a folder as the Small Dump Directory, use the following command line:
o Complete memory dump: This option records the contents of system memory when the computer
stops unexpectedly. This option isn't available on computers that have 2 or more GB of RAM. If you
select this option, you must have a paging file on the boot volume that is sufficient to hold all the
physical RAM plus 1 MB. The file is stored as specified in %SystemRoot%\Memory.dmp by default.
The extra megabyte is required for a complete memory dump file because Windows writes a
header in addition to dumping the memory contents. The header contains a crash dump signature
and specifies the values of some kernel variables. The header information doesn't require a full
megabyte of space, but Windows sizes your paging file in increments of megabytes.
o To configure a complete memory dump file, run the following command using the cmd
utility:
• To indicate that the system should not overwrite kernel memory dumps or other complete
memory dumps, which may be valuable for troubleshooting system problems, use the command:
• Peak usage or expected peak usage of the system commit charge - The system commit limit is the
total of RAM plus the amount of disk space reserved for paging files. The system commit charge
must be equal to or less than the system commit limit. If a page file is not in place, then the system
commit limit is less than the system’s RAM amount. The purpose of these measurements is to
prevent the system from overpromising available memory. If this system commit limit is exceeded,
Windows or the applications in use may stop functioning properly. So, it is a best practice to assess
the amount of disk storage allocated to the page files periodically to ensure there is sufficient
space for what the system needs during peak usage. It is fine to reserve 128 GB or more for the
page files, if there is sufficient space on the hard drive to dedicate a reserve of this size. However, it
might be a waste of available storage space if the system only needs a small fraction of the
reserved disk space. If disk space is low, then consider adding more RAM, more hard drive storage,
or offload non-system files to network or cloud storage.
• Space needed to offload data from RAM - Page files can serve to store modified pages that are not
currently in use. This keeps the information easily accessible in case it is needed again by the
system, without overburdening RAM storage. The modified pages to be stored on the hard drive
are tracked by the \Memory\Modified Page List Bytes performance counter. If the page file is not large
enough, some of the pages added to the Modified Page List Bytes might not be written to the page
file. If this happens, the page file either needs to be expanded or additional page files should be
added to the system. To assess if the page file is too small, the following conditions must be true:
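The commit-charge concepts above have a rough Linux analog that is easy to inspect directly. A hedged peek at it (field meanings per the Linux proc(5) man page; the exact values vary per machine):

```shell
# CommitLimit is roughly the Linux analog of the Windows system commit
# limit (a RAM-derived allowance plus swap); Committed_AS is the current
# commit charge. Both are reported in kibibytes.
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
```

If Committed_AS climbs toward CommitLimit, the system is in the same kind of overcommit squeeze the reading above warns about for Windows.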
In Linux, the dedicated area of the hard drive used for virtual memory is known as swap space.
We can create swap space by using the new disk partitioning tools that we learned. A good
guideline to use to determine how much swap space you need is to follow the recommended
partitioning scheme in the next supplementary reading. In our case, since we just have a USB
drive which doesn't need swap, we're just going to partition the rest of it as swap to show you
how this works. In practice, you would create swap partitions for your main storage devices like
hard drives and SSDs. Okay. Let's make swap space. First, go back into the parted tool and select
/dev/sdb, where our USB is. We're going to partition it again this time to make a swap partition.
And then we'll format it with the linux-swap file system. So, mkpart primary linux-swap 5GiB
100%. You'll notice that the end point of the drive says 100%, which
indicates that we should use the rest of the free space on our drive. We're not done yet. Swap isn't
actually a file system, so this command won't be enough. I know I'm sorry, I just lied to you like
five seconds ago. If you think about it, it makes a lot of sense since pages go into swap and not
file data. Anyways, to complete this process, we need to specify that we want to make it swap
space with the mkswap command. Let's quit out of parted and run this command on a new swap
partition. So, sudo mkswap /dev/sdb2, since our new swap partition is on /dev/sdb2. Finally, there's one
more command to run to enable swap on the device, swapon. So, sudo swapon /dev/sdb2. If we
want to automatically mount swap space every time the computer boots up, just add a swap entry
to the /etc/fstab file like we did earlier.
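If you don't have a spare drive to practice on, one hedged way to try mkswap without touching a real partition is to aim it at an ordinary file; the /tmp path and the 8 MiB size below are just for illustration, and actually enabling the file with swapon would still require root:

```shell
# Practice run: write a swap signature onto an ordinary file instead of a
# partition. The /tmp path and the 8 MiB size are arbitrary.
dd if=/dev/zero of=/tmp/swapfile.img bs=1M count=8 status=none
chmod 600 /tmp/swapfile.img     # mkswap warns about looser permissions
mkswap /tmp/swapfile.img        # prints "Setting up swapspace version 1, ..."
# On a real system, root would then enable and persist it:
#   sudo swapon /tmp/swapfile.img
#   (plus a swap entry in /etc/fstab to survive reboots)
```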
For more information about swap, please check out the link here.
Now that we've gotten a few practical things out of the way with disk partitioning and file system
creation, we can talk about concepts for a bit. Remember when we talked about how our OS
handles files? It manages the actual file data, the file metadata, and the file systems. We've
already covered file systems. In this video, we're going to cover the file data and file metadata.
When we talk about data, we're referring to the actual contents of the file; like a text document
that we saved to our hard drives. The file metadata includes everything else, like the owner of the
file, permissions, size of the file, its location on the hard drive, and so on. Remember that the
NTFS file system is the native file system format of Windows. So how exactly does NTFS store
and represent the files we're working with on our operating system? NTFS uses something called
the Master File Table, or MFT, to keep everything straight. Every file on a volume has at least
one entry in the MFT, including the MFT itself. Usually, there's a one-to-one correspondence
between files and MFT records. But if a file has a whole lot of attributes, there might be more
than one record to represent it. In this context, attributes are things like the name of a file, its
creation time stamp, whether or not a file is read-only, whether or not the file is compressed, the
location of the data that the file contains, and many other pieces of information. When you create
files on an NTFS file system, entries get added to the MFT. When files get deleted, their entries
in the MFT are marked as Free so they can get reused. One important part of a file's entry in the
MFT is an identifier called the file record number. This is the index of the file's entry in the MFT.
A special type of file we should mention in Windows is called a shortcut. A shortcut is just
another file and another entry in the MFT. But it has a reference to some destination, so that
when you open it up, you can get taken to that destination. You can create a shortcut by right-
clicking on the target file and selecting the Create Shortcut option.
There it is. Besides creating shortcuts as ways to access other files, NTFS provides two other
ways using hard and symbolic links. This might get a little weird but stay with me. Symbolic
links are kind of like shortcuts but at the file system level. When you create a symbolic link, you
create an entry in the MFT that points to the name of another entry or another file. This might
seem like just another way to make a shortcut but symbolic links have a key difference. The
operating system treats them like substitutes for the file they're linked to in almost every
meaningful way. This is the part that sounds strange. So, let's demonstrate. Let's create a
directory on the desktop called Links.
Inside of it, we'll create a text file called file_1. And inside of that, let's add the word, Hello! And
then, let's make a shortcut that points to this file, called file_1 - Shortcut. Next, let's open up a
command prompt and navigate to this directory. Let's try to open up file_1 through its shortcut
with Notepad. What do you think will happen?
If you expected Notepad to display Hello!, then you'd be disappointed. Instead, Notepad
opened up the shortcut file, which has some text in there that isn't readable by us. Instead of a
shortcut, let's create a symbolic link. You can create symbolic links with the mklink program
from the command prompt. Let's make one called file_1_symlink with the following command
and then open it up in Notepad and see what happens. All right, let's open it up in Notepad. This
is what we mean when we say the operating system treats the symbolic link just like the original
file. There's another type of link worth mentioning called a hard link. When you create a hard
link in NTFS, an entry is added to the MFT that points to the linked file record number, not the
name of the file. This means the file name of the target can change and the hard link will still
point to it. You can create hard links in a way that's similar to symbolic links, but with the /H
option. So mklink /H file_1_hardlink file_1. Since a hard link points at the file record number
and not the file name, you can change the name of the original file and the link will still work.
Next, we'll have a look at how Linux organizes files and the way it treats hard links and symbolic
links. Onward and upward.
Supplemental Reading on NTFS File System
For more information about the NTFS file system, please check out the following links:
Master File Table, Creating Symbolic Links, and Hard Links and Junctions.
In Linux, metadata and files are organized into a structure called an inode. Inodes are similar to
the Windows NTFS MFT records. We store inodes in an inode table and they help us manage the
files on our file system. The inode itself doesn't actually store file data or the file name, but it
does store everything else about a file. In the last lesson, we learned how to create file shortcuts,
symbolic links, and hardlinks in Windows. Well in Linux we have the same concept. Shortcuts in
Linux are referred to as softlinks, or symlinks. They work similarly to how symbolic links work
in Windows, in that they just point to another file.
Softlinks allow us to link to another file using a file name. They're great for creating shortcuts to
other files. The other type of link found in Linux is the hardlink. Similar to Windows, hardlinks
don't point to a file name. In Linux, they link to an inode, which is stored in an inode table on the file
system. Essentially, when you create a hardlink, you're pointing to a physical location on
disk, or more specifically, on the file system. So if you deleted one file name of a hardlinked file, all other
hardlinks would still work. Let's actually see where hardlinks are referenced. If we run ls -l on
this file, important_file, you'll notice the third field in the details; this field indicates the
number of hardlinks a file has.
When the hardlink count of a file reaches zero, then the file is completely removed from the
computer. To create a softlink, we can run the ln command with the -s flag for softlink. So ln -s
important_file important_file_softlink. To create a hardlink, we can run the ln command without
the -s. So ln important_file important_file_hardlink. Now, if we check ls -l
important_file, we'll see that the hardlink count was increased by one. Hardlinks are great if you
need to have the same file stored in different places, but you don't want to take up any additional
space on the volume.
This is because all the hardlinks point to the same space on the volume. You could use softlinks
to do the same thing. But what if you moved one file, broke the softlink, and forgot about all the
other places that you used it? Those would be broken too and may take some time to clean up.
You may not see a use for making your own softlinks or hardlinks right now, but they are used
all throughout your system, so you should be aware of how they work.
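The softlink and hardlink behavior described above can be sketched end to end in a scratch directory (the file names mirror the important_file example from the video; stat -c %h is the GNU coreutils way to print a file's hardlink count):

```shell
demo=$(mktemp -d)               # scratch directory so nothing gets cluttered
cd "$demo"
echo 'Hello!' > important_file
ln -s important_file important_file_softlink   # softlink: points at the name
ln important_file important_file_hardlink      # hardlink: points at the inode
stat -c %h important_file       # prints 2: two names now share one inode
cat important_file_softlink     # follows the name, prints Hello!
rm important_file               # delete the original name...
cat important_file_hardlink     # ...the hardlink still works, prints Hello!
```

After the rm, the softlink is broken (it points at a name that no longer exists) while the hardlink keeps working, which is exactly the trade-off described above.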
Now that we've taken a good, hard look at files in different file systems, let's turn our attention to
how we can monitor the number and size of those files in Windows. You've seen how there are
loads of third-party programs out there to partition and format disks on Windows. Well, there are
also lots of applications you can download that can check and visualize disk usage on a Windows
machine. But you can use the Disk Management console we examined in an earlier lesson to get a
sense of your disk capacity usage. To check disk usage, you can open up the computer
management utility. Then head to the disk management console. From there, right click on the
partition you're interested in and select properties.
This will bring up the general tab where you can see the used and free space on the drive. In
addition to using this graphical user interface to check the disk usage, Windows provides a
command line utility called Disk Usage as part of its Sysinternals tool offering. The DU utility
can print out the usage of a given disk and tell you how many files it has. It can be useful for
creating scripts which might need text based output instead of visual reports like the pie chart in
disk management. You can find a link to the DU tool in the next supplemental reading. On the
same tab in the disk management console, you might notice a button that says disk cleanup. If
you press this button, Windows will launch a program called cleanmgr.exe which will do a
little housekeeping on your hard drive to try and free up some space. This housekeeping includes
things like deleting temporary files, compressing old and rarely used files, cleaning up logs and
emptying the recycle bin. Another task related to disk health is called defragmentation. The idea
behind disk defragmentation is to take all the files stored on a given disk and reorganize them
into neighboring locations. Having files ordered like this will make life easier for rotating hard
drive disks that use an actuator arm to write to and read from a spinning disk. The head of the
actuator arm will actually travel less to read the data it needs. I should call out that this is less of
a benefit for solid state drives since there's no physical read write head that needs to move
around a spinning disk. For these kinds of drives, the operating system can use a process called
TRIM to reclaim unused portions of the solid state disk. We won't go into the details of how TRIM
works, but it's good to know that it exists. I've included a link to more information on TRIM in the
reading right after this video. Defragmentation in Windows is handled as a scheduled task. Every
so often the operating system will defragment the drive automatically and you don't need to
worry about it but you can manually defragment a drive in Windows if you want to. To kick off a
manual defragmentation, open up the disk defragmenter tool bundled with the OS. Type disk
defragmenter.
When it launches, you'll be given a list of disks which can be defragmented along with buttons to
analyze the potential gains from running a defrag or defragmentation and to run the defrag itself.
My name is Jessica Thera and I'm a systems engineer in the Site Reliability organization.
[MUSIC] So I'd been talking to one of my mentors, and I said, man, I'd really kill to have a job
this summer. I would love to work with computers. And she said, well you know I have this
opportunity, but we're not sure if you're quite ready for it because you're a little young and
inexperienced. And I pretty much begged her, she took the chance on me, and I stuck with this
from the time that I was 15 until I entered college. The first time I was challenged to problem
solve, was probably when we got our first computer and I broke it. I was sitting at the computer,
I had been inspired by a movie that I saw and decided that I wanted to be a young hacker. And so
I ran some command lines and I managed to blue-screen-of-death the computer. And so I
panicked trying to figure out what I could do to revert what I just did, and there was no saving it.
I am a first generation born in the US, my family is from Haiti. All of my life my parents and
everyone around me always asked me, what did you want to be when you grow up? I really
honestly didn't know what I wanted to be until I started playing around with computers and I
eventually figured out that I had a love for it. And I basically thought to myself, there has to be a
job that I can do with computers. When I decided that I wanted to pursue a profession in
technology and computing no one understood what I was talking about. Coming from an
immigrant family, everyone talks about being a doctor or a lawyer or a teacher. And if you're not
one of the three, you're not doing it right. But now they don't think that anymore so, they think
I'm a god.
Managing Processes
It might feel like we're starting to get into the weeds here. So let's take a step back and think
about what processes really are and what they represent in the context of an operating system.
You can think of processes as programs in motion. Consider all the code for your Internet
browser. It sits there on your hard drive quietly waiting for its time to shine. Once you start it up,
the operating system takes that resting code then turns it into a running, responding, working
application. In other words, it becomes a process. You interact with, launch, and halt processes all
the time on computers, although the OS usually takes care of all that behind the scenes. By
learning about processes, you're taking a peek behind the curtain at how operating systems really
work. This knowledge is both fascinating and powerful, especially when applied by a savvy IT
support specialist to solve problems. Keep all that in mind as we take a look at how you can pull
back the curtain even further. Next, we'll learn about the different ways you can investigate
which processes are running on a Windows computer and more methods of interacting with
them. On the Windows operating system, the task manager, or taskmgr.exe, is one method of
obtaining process information. You can open it with the Ctrl+Shift+Esc key combination or
by locating it using the start menu.
If you click on the processes tab, you should see a list of the processes that the current user is
running along with a few of the system level processes that the user can see. Information about
each process is broken out into columns in the task manager. The task manager tells you what
application or image the process is running along with the user who launched it and the CPU or
memory resources it's using. To kill a process, you can select any of the process rows and click
the end task button in the lower right corner. We can demonstrate this by launching another
notepad.exe process from the command line, then switching over to the task manager, selecting
the notepad.exe process and ending it. I already have Notepad open so I'm going to click on it,
click end task. In an earlier lesson, we talked about starting and ending Windows processes.
Remember that we used the taskkill command to stop a process by its identification number or
PID. So how do we get that PID number? While in task manager, you can click on the details
menu option and here, you can see a whole bunch of other information you can get the task
manager to display, including the PID. You can also see this information from both a command
prompt and PowerShell. From the command prompt, you can use a utility called tasklist to show
all the running processes.
From a PowerShell prompt, you can use a cmdlet called Get-Process to do the same. There
are lots of ways you can get process information from the Windows operating system. We've
included links to the documentation of both TaskList and Get-Process in the supplementary
reading in case you want to dive deeper into either of these tools.
Okay, now let's talk about how to view the processes running on our system in Linux. We'll be
using the ps command, so let's just go ahead and run that command with the -x flag, and see
what happens. This shows you a snapshot of the current processes you have running on your
system. The ps output can be overwhelming to look at at first, but don't worry, we'll walk
through how to read this output.
Let's walk through the fields from left to right. PID is the process ID; remember, processes get a
unique ID when they're launched. TTY, this is the terminal associated with the process, we won't
talk about this field but you can read more about it in the manpages linked right after this video.
STAT this is the process status, if you see an R here it means the process is running or it's
waiting to run. Another common status you'll see is T for stopped, meaning a process that's been
suspended.
Another one you might see is an S for interruptible sleep, meaning the task is waiting for an
event to complete before it resumes. You can read more about the other types of process statuses
in the manpages. TIME, this is the total CPU time that the process has taken up. And lastly,
command, this is the name of the command we're running. Okay, now we're going to enter hard
mode here. Run this command: ps -ef. The -e flag is used to get all processes, even the ones run
by other users. The -f flag is for full format, which shows you full details about a process. Look at
that, we have more processes and even more process details. Let's break this down.
UID is the user ID of the person who launched the process. PID is the process ID, and PPID is
the parent process ID we discussed in an earlier lesson, identifying the process that launched this
one. C is the processor utilization of the process. STIME is the start time of the process. TTY is the
terminal associated with the process. TIME is the total CPU time that the process has taken up.
And CMD or command is the name of the command that we're running. What if we wanted to
search through this output? It's super messy right now, can you think of a way we can see if a
process is running? That's right, with the grep command, I told you we were going to use it all
the time.
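A hedged, self-contained sketch of this ps-plus-grep pattern, using a sleep process we start ourselves (bracketing the first letter of the pattern is a common trick to keep grep from matching its own entry in the process list):

```shell
sleep 300 &                          # start a process we can search for
spid=$!
row=$(ps -ef | grep '[s]leep 300')   # [s] keeps grep from matching itself
echo "$row"                          # columns: UID PID PPID C STIME TTY TIME CMD
kill "$spid"                         # clean up the background process
```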
Running grep on the ps output will give us a list of processes that have the name Chrome in them. There's another way to
view process information; remember, everything in Linux is a file, even processes. To view the
files that correspond to processes, we can look in the /proc directory. There's a directory here
for every process that's running. If you look inside one of the subdirectories,
it'll give you even more information about the process. Let's look at a sample process file for PID
1805.
This tells us even more about a process's state than what we saw in ps. While the
/proc directory is interesting to look at, it's not very practical when we need to troubleshoot
issues with processes. For now, stick with the ps -ef command to look at process information. As
you can see, we can learn a lot about the processes running on our machine with just a few
keystrokes. In an upcoming lesson, we'll talk about how to use process information to our benefit
when figuring out which processes are taking up too many resources. For now, feel free to learn
a little more about the processes that you're running, I'll be waiting for you in the next video.
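The PID 1805 in the video was just whatever happened to be running on that machine; you can poke at /proc yourself using /proc/self, which every process sees as its own entry:

```shell
# /proc/self resolves to the /proc entry of whichever process opens it,
# so each command below is inspecting its own process.
cat /proc/self/comm                  # the command name (prints: cat)
grep '^State' /proc/self/status      # e.g. "State:  R (running)"
ls /proc/self | head                 # some of the per-process files ps reads
```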
Supplemental Reading for Reading Process Information in Linux
For more information about ps, or the command to read current processes in Linux, check
out the link here.
Imagine you're starting up a video game that's taking a while to render its graphics. You decide
that you don't want to play anymore, which leaves you with a few options. You can wait for it to
finish loading and then quit the game from the menu, or you can interrupt the process altogether,
telling it to quit at the system level. This is just one example of a time you might find yourself
wanting to close a process before it fully completes. To tell a process to quit at the system level,
we use something called a signal. A signal is a way to tell a process that something's just
happened. You can generate a signal with special characters on your keyboard and through other
processes and software. One of the most common signals you'll come across is called SIGINT,
which stands for signal interrupt. You can send this signal to a running process with the
CTRL+C key combination. Let's say you start up the DiskPart tool we looked at in our
discussion on partition formatting. I'm just going to open up command prompt and then launch
DiskPart.
If you decide you don't want to actually format any disks, you can hold down the CTRL key and
press C at the same time to send the SIGINT signal to the DiskPart process. You'll see that the
window that the DiskPart program was running in closes and the process terminates. There are a
few other Windows signals that processes can send and receive. But unlike in Linux, there isn't
an easy way for an end user to issue arbitrary signal commands. If you're interested in learning
more about Windows signals, check out the signal reference link in the supplementary reading.
In Linux, there are lots of signals that we can send to processes. These signals are labeled with
names starting with SIG. Remember the SIGINT signal we talked about before? You can use SIGINT
to interrupt a process, and the default action of this signal is to terminate the process being
interrupted. This is true for Linux too. You can send a SIGINT signal through the keyboard
combination Ctrl+C. Let's see this in action. I'm going to do the same thing as we did in
Windows and start a program like sudo parted. We can see that we're in the parted tool now.
Let's interrupt this tool and say we want it to abort the process with the Ctrl+C keyboard
combination. Now, we can see that the process closed and we're back in our shell. We were able
to interrupt our process midway and terminate it. Success. There are lots of signals used in
Linux, and we'll talk about the most common ones in the upcoming lessons.
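One hedged way to watch SIGINT handling without an interactive terminal is to have a shell send the signal to itself; the trap handler below stands in for whatever cleanup a program like parted would do when you press Ctrl+C:

```shell
# Run the demo in a child shell; inside it, $$ is the child's own PID, so
# kill -INT delivers the same signal Ctrl+C would send from the keyboard.
out=$(sh -c '
  trap "echo caught SIGINT, exiting cleanly; exit 0" INT
  kill -INT $$        # signal ourselves
  echo not reached    # skipped: the trap fires and exits first
')
echo "$out"           # prints: caught SIGINT, exiting cleanly
```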
Supplemental Reading for Windows Signal
For more information about signal handling in Windows, check out the link here.
We've also seen how to send a running process a signal through Ctrl+C, but there's another
process management tool we haven't talked about which lets you do things like restart or even
pause processes. This tool is called Process Explorer. Process Explorer is a utility Microsoft
created to let IT support specialists, systems administrators, and other users look at running
processes. Although it doesn't come built into the Windows operating system, you can download
it from the Microsoft website which I've linked to in the supplemental reading right after this
video. Once you've downloaded Process Explorer and started it up, you'll be presented with a
view of the currently active processes in the top window pane. You'll also see a list of the files a
selected process is using in the bottom window pane. This can be super handy if you need to
figure out which processes use a certain file, or if you want to get insight into exactly what the
process is doing, and how it works. You can search for a process easily in Process Explorer by
either pressing Ctrl+F, or clicking on the little binocular button. Let's go ahead and do a
search for the notepad process we opened up earlier. You should see
C:\Windows\System32\notepad.exe listed as one of the search results. If you see something that
says notepad.exe.mui, don't worry. MUI stands for multilingual user interface, and it contains a
package of features to support different languages. Anyways, once you've located the
notepad.exe process, notice how it's nested under the cmd.exe process in the UI.
This indicates that it's a child process of cmd.exe. If you right-click on the notepad.exe
process, you'll be given a list of different options that you can use to manage the process. Check
out the ones that say Kill Process, Kill Process Tree, Restart, and Suspend. Kill Process is what
you might expect. Say goodbye to notepad. Kill Process Tree does a little bit more. It'll kill the
process and all of its descendants. So, any child process started from it will be stopped. Kill
Process Tree takes no prisoners. Restart is another interesting option. You might be able to guess
what it does just by its name. It will stop and start the process again. Let's do that with the
notepad.exe process we started from cmd.exe. Interesting. After the restart, notepad.exe
doesn't appear as a child of cmd.exe anymore. What gives? Well, if we search for
notepad.exe again, we can see it's been restarted as a child of the procexp.exe process. This is the
process name for Process Explorer. This makes sense, since Process Explorer was the process in
charge of starting it again after we terminated it. But what about the Suspend option? Instead of
killing a process, you can use this option to suspend it and potentially continue it at a later time.
If we right-click and suspend the process, we'll see that in the CPU column of the Process Explorer
output, the word Suspended appears.
While a process is suspended, it doesn't consume the resources it did when it was active. We can
kick it off again by right-clicking and selecting the Resume option. Process Explorer can do a lot,
and we'll take a look at some of the monitoring information it can give us in an upcoming lesson.
We won't get into the details of all its features though. So, if you're curious, you can check out
the documentation on Microsoft's website. We put a link to it for you in the supplemental
reading.
Supplemental Reading for Managing Processes in Windows
For more information about the Process Explorer in Windows, check out the link here.
Let's talk about how to use signals to manage processes in Linux. First up, terminating processes.
We can terminate a process using the kill command. It might sound a bit morbid, but that's just
how it is in the dog-eat-dog world of terminating processes. The kill command without any flags
sends a termination signal or SIGTERM. This will kill the process, but it'll give it some time to
clean up the resources it was using. If you don't give the process a chance to clean up some of the
files it was working with, it could cause file corruption. I'm going to keep a process window
open so you can see how our processes get affected as we run these commands. So, to terminate
a process, we'll use the kill command along with the PID of the process we want to terminate.
Let's just go ahead and kill this Firefox process.
And if we check the process window, we can see that the process is no longer running. The other
signal that you might see pop up every now and then is the SIGKILL signal. This will kill your
process with a lot of metaphorical fire. Using SIGTERM is like telling your process, "Hey
there process, I don't really need you to complete right now, so could you just stop what you're
doing?" And using SIGKILL is basically telling your process, "OK, it's time to die." The signal
does its very best to make sure your process absolutely gets terminated and will kill it without
giving it time to clean up. To send a SIGKILL signal, you can add the -KILL flag to the kill
command. So, let's open up Firefox one more time. We run kill -KILL 10392, and now
you can see that Firefox has been killed. These are the two most common ways to terminate a
process. But it's important to call out that using kill -KILL is a last resort for terminating a
process. Since it doesn't do any cleanup, you could end up doing more harm to your files than
good. Let's say you had a process running that you didn't want to terminate but maybe you just
want to put it on pause. You can do this by sending the SIGTSTP signal for terminal stop, which
will put your process in a suspended state. To send this, you can use the kill command with the
-TSTP flag. I'm going to run ps -x so you can see the status of the processes. We're just
going to put this process in a suspended state with kill -TSTP 10754. Now you can see that process
10754 is in a suspended state. You can also send the SIGTSTP signal using the keyboard
combination Ctrl-Z. To resume execution of the process, you can use the SIGCONT (continue)
signal. Let's go and look at the process table again. I'm going to go ahead and use that
command on this process.
Now, if I look at the process again, you'll see that the process status turned from a T to an S.
SIGTERM, SIGKILL, and SIGTSTP are some of the most common signals you'll see when you're
working with processes in Linux. Now that you have a grasp on these signals, let's use them to
help us utilize hardware resources better.
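To recap the signals from this lesson, here's a minimal sketch you can try in a terminal. The PIDs in the transcript (10392, 10754) belonged to that demo machine, so this sketch uses sleep as a stand-in for a real application like Firefox.

```shell
# Start a long-running process in the background; sleep stands in for Firefox.
sleep 300 &
pid=$!

# SIGTERM (the default signal): politely ask the process to terminate and clean up.
kill "$pid"

# Start another process, then suspend it with SIGTSTP (terminal stop).
sleep 300 &
pid=$!
kill -TSTP "$pid"
ps -o stat= -p "$pid"    # the state column shows T while the process is stopped

# Resume it with SIGCONT, then force-kill it with SIGKILL as a last resort.
kill -CONT "$pid"
kill -KILL "$pid"
```

Pressing Ctrl-Z in a terminal sends the same SIGTSTP to the foreground process, and kill -9 is a common shorthand for kill -KILL.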
In mobile operating systems like iOS and Android, you won't be able to see a list of running
processes. Instead, you'll manage mobile apps that are running on the OS. When a mobile app is
running, there will be one or more processes associated with them, but those details will be
managed by the OS. Let's take a look at how you can manage your running mobile apps and
understand how they're using your mobile device's resources. As an IT support specialist, you
may help end users to troubleshoot slow mobile devices and manage their mobile apps. We'll
show you examples of what you might see, but you may have to refer to your device's
documentation if it doesn't look like these examples. First, let's check what apps are currently
running on a device by opening the app switcher in iOS. From the app switcher, I can see a list of
apps running on this iPhone. Now let's do the same thing in Android. Great, each of the apps that
I have launched is listed here. I can scroll through this list and switch to an app by tapping it.
Now I can use this calculator. The app that we're using is called the foreground app. All of these
other apps are in the background. What do you think is happening with the background apps
while I'm calculating how many bits are in this megabyte? The details can be a little complicated,
but the basic idea is this, as soon as it can, the OS will suspend background mobile apps. A
suspended app is paused, but not closed. The OS can occasionally wake a backgrounded app to
allow it to do some work, but it will try to keep apps suspended as much as it can. Let's go back to
the home screen. Now that I'm on the home screen, all of the apps are backgrounded and there
are no foreground apps. The calculator hasn't been closed. Each new app that you open will be
kept backgrounded and usually suspended. This helps the device use less battery power. And pro
tip, as an IT support specialist, it's pretty helpful to learn which apps on your mobile device use
the most battery power. If you have an app that the OS can't suspend because the app keeps
working in the background or it's frozen, then that can slow your device and use up battery. IT
support specialists often have to find these misbehaving apps and close or uninstall them. Let's
try closing some of the apps. From the iOS app switcher, we can swipe up on any of the
background apps, this will close the app. You can do the same thing in Android. In this version
of Android, we can also swipe over here and hit Clear all to close all of the apps at once. You
can troubleshoot a misbehaving app by closing apps one at a time and seeing if there is one app
in particular that slows the device down. Sometimes closing a misbehaving app will be all you
need to do to make your device run smoothly again. Start with the app that's currently being used
and see if that helps. The app switcher shows you the apps in order from most recently used to
least recently used. Work backwards through time, trying one app at a time. Remember that this
is not something that you should have to do very often to make your device work properly. With
current versions of iOS and Android, you shouldn't ever have to close an app for performance
reasons, unless the app is misbehaving. It can actually use up more battery to close and reopen an
app than it would if you had just left it running. If you discover that you have an app that's
routinely misbehaving, you can try resetting it completely by clearing its cache, like we saw in
an earlier video. If the device is still running sluggishly after closing all of the apps, the next
thing to try is to simply restart the device. And if restarting the device doesn't fix the
performance issues or it's only a temporary fix, then we need to dig deeper. Let's check the
battery use of the apps that we've installed. On the iPhone, I go to the Settings app > Battery.
Here I can see how quickly the battery's been used since the last charge. I can
also see which apps are using the most battery. Let's look at the same settings in Android. Again,
I go to the Settings app, and from here, I'll choose Battery > More > Battery usage. From here, I
can see which apps are using the most battery. If I see an app that's using a lot of battery, then it
might not be working as it should, or maybe it's an app that uses a lot of battery to work. You'll
need to learn which apps the end user needs in order to know whether or not the battery use is unusual.
Supplemental Readings for Mobile App Management
Check out the following links for more info:
• Switching apps in iOS
• How to force an app to close in iOS
• Find, open & close apps in Android
• Android processes and application Lifecycle
• iOS - Battery and performance for iOS
• Fix battery drain problems for Android
Process Utilization
You've been doing a great job and we're almost done with this module. Now that we've spent all
this time learning about processes, like how to read them and how to manage them, when are we
ever going to use these newfound skills? Well, pretty much all the time. But in an IT support role,
managing processes comes in handy the most when processes become a little unruly. Our
systems usually have some pretty good ways of monitoring processes and telling us which
processes might be having issues. In Windows, one of the most common ways to quickly take
a peek at how the system resources are doing is by using the Resource Monitor tool. You can
find it in a couple of places, but we will launch it right from the start menu.
Once it opens, you'll see five tabs of information. One is an overview of all the resources on the
system. Each other tab is dedicated to displaying information about a particular resource on the
system. You'll also notice that Resource Monitor displays process information too along with
data about the resources that the process is consuming. You can get this performance information
in a slightly less detailed presentation from Process Explorer. Just find the process you are
interested in, right-click, and choose Properties.
From there, pick the performance graph tab. You can see quick visualizations of the current CPU
memory indicated by private bytes and disk activity indicated by I/O. But how can we get this
information from the command line? I am glad you asked. There are several ways to get this
information from the command line, but we will focus on a PowerShell-centric one, our friend
Get-Process. We know that if we run Get-Process without any options or flags, we get process
information for each running process on the system. If you check out the column headings at the
start of the output, you'll see things like NPM(K). Values in this column represent the amount of
non-paged memory the process is using, and the K stands for the unit, kilobytes. You can see
Microsoft's documentation for a full write up of each column in the next supplemental reading.
This is useful but it is a lot of information. It can be really helpful to filter down to just the data
you are interested in. Let's say you wanted to display just the top three processes using the most
CPU; you could write this command:
Get-Process | Sort CPU -Descending | Select -First 3 -Property ID, ProcessName, CPU
And
just like that, we get the top three CPU hogs on the system. This command might be a little hard
to understand, so let's go through it step by step. First, we call the Get-Process cmdlet to
obtain all that process information from the operating system. Then, we use a pipe to connect the
output of that command to the sort command. You might remember pipes from some Linux
examples earlier. We sort the output of Get-Process by the CPU column descending to put the
biggest numbers first. Then, we pipe that information to the select command. Using select, we
pick the first three rows from the output of sort and pick only the property ID, name, and CPU
amount to display. Now that you've got some knowledge about both the command line and
graphical tools Windows provides for investigating resource usage, let's have a look at Linux
Resource Monitoring.
Supplemental Reading Resource Monitoring in Windows
For more information about system diagnostics processes in Windows, check out the link
here.
A useful command to find out what your system utilization looks like in real time is the top
command. Top shows us the top processes that are using the most resources on our machine. We
also get a quick snapshot of total tasks running or idle, CPU usage, memory usage, and more.
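If you ever want a one-off snapshot of this information instead of the live interactive view, top also has a batch mode. This is a sketch using the procps-ng version of top found on most Linux distributions; the flags may differ on other systems.

```shell
# -b runs top in batch (non-interactive) mode, and -n 1 takes a single snapshot.
# head trims the output to the summary lines plus the first few processes.
top -b -n 1 | head -n 12

# Sort the snapshot by memory usage instead of CPU to spot memory hogs.
top -b -n 1 -o %MEM | head -n 12
```

Batch mode is also handy for saving a snapshot to a file (top -b -n 1 > snapshot.txt) when you want to attach evidence to a support ticket.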
Some of the most common places to check when using the top command are these fields here,
%CPU and %MEM. These show what percentage of CPU and memory a single task is
taking up. To get out of the top command, just hit the Q key to quit. A common situation you
might encounter is when a user's computer is running a little slow. It could be for lots of reasons,
but one of the most common ones is the overuse of hardware resources. If you find that top
shows you a certain task is taking up a lot of memory or CPU, you should investigate what the
process is doing. You might even terminate the process so that it gives back the resources it was
using. Another useful tool for resource utilization is the uptime command. This command shows
information about the current time, how long your system's been running, how many users are
logged on, and what the load average of your machine is. From here, we can see the current time
is 16:43, or 4:43 PM, our system has been up for five hours and eight minutes, and we have one user
logged in. The part that we want to talk about here is the system load average. This shows the
average CPU load in 1, 5, and 15 minute intervals. Load averages are an interesting metric to
read. They become super useful when you need to see how your machine is doing over a certain
period of time. We won't get into load averages here, but you should read about them in the next
supplemental reading. Another command that you can use to help manage processes is the lsof
command. Let's say you have a USB drive connected to your machine, you're working with some
of the files on the machine, then when you go to eject the USB drive, you get an error saying,
device or resource busy. You've already checked that none of the files on the USB drive are in
use or opened anywhere, or so you think. Using the lsof command, you don't have to wonder. It
lists open files and what processes are using them.
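As a sketch of how you might use it in the USB-drive scenario, here are a few typical invocations. The mount point /media/usb and the file name are hypothetical examples; substitute your own paths.

```shell
# List every process that has a file open under the mount point (hypothetical path).
lsof /media/usb

# Check a single file; the output includes the command name and PID,
# which you can then terminate so the drive can be ejected.
lsof /media/usb/report.txt

# Or filter the full listing for anything mentioning the drive.
lsof | grep /media/usb
```

The PID column in lsof's output is exactly what you'd pass to the kill command from earlier in this module.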
This command is great for tracking down those pesky processes that are holding open files. One
last thing to call out about hardware utilization is that you can monitor it separately from
processes. If you just wanted to see how your CPU or memory is doing, you could use various
commands to check their output. This isn't immediately useful to see on a single machine, but
maybe in the future, if you manage a fleet of machines, you might want to think about
monitoring the hardware utilization for all of your machines at once. We won't discuss how to do
this, but you can read more about it in the supplemental reading. You've done some really great
work in this module. You learned a lot about how to read process information and manage
processes, something that will be vital for you to know when troubleshooting issues as an IT
support specialist. The next assessments will test you on that new process management
knowledge. Then, drum roll please, we'll be on to the last and final lesson of this course. We'll
cover some of the essential tools that are used in the role of an IT support specialist.
Supplemental reading for Resource Monitoring in Linux
Resource Monitoring in Linux
Balancing resources keeps a computer system running smoothly. When processes are using too
many resources, operating problems may occur. To avoid problems from the overuse of
resources, you should monitor the usage of resources. Monitoring resources and adjusting the
balance is important to keep computers running at their best. This reading will cover how to
monitor resources in Linux using the load average metric and the top command.
Load in Linux
In Linux, a load is the set of processes that a central processing unit (CPU) is currently running
or waiting to run. A load for a system that is idle with no processes running or waiting to run is
classified as a 0. Every process running or waiting to run adds a value of 1 to the load. This
means if you have 3 applications running and 2 on the waitlist, the load is 5. The higher the load,
the more resources are being used, and the more the load should be monitored to keep the system
running smoothly.
Load average in Linux
The load as a measurement doesn’t provide much information as it constantly changes as
processes run. To account for this, an average is used to measure the load on the system. The
load average is calculated by finding the load over a given period of time. Linux uses three
decimal values to show the load over time instead of the percent other systems use. An easy way
to check the load average is to run the uptime command in the terminal. The following image
depicts the load values returned from the uptime command.
The first line displayed is the same as the load average output given by the uptime command.
It lists what percent of the CPU is running processes or has processes waiting. The second line
shows the task output and describes the status of processes in the system. The five states in the
task output represent:
1. Total shows the sum of the processes from any state.
2. Running shows the number of processes currently handling requests, executing
normally, and having CPU access.
3. Sleeping shows the number of processes awaiting resources in their normal state.
4. Stopped shows the number of processes ending and releasing resources. The stopped
processes send a termination message to the parent process. The process created by the
kernel in Linux is known as the “Parent Process.” All the processes derived from the
parent process are termed as “Child Processes.”
5. Zombie shows the number of processes waiting for their parent process to release
resources. Zombie processes usually mean an application or service didn't exit gracefully.
Having a few zombie processes is not a problem.
The top command gives detailed insight into resource usage, helping an IT professional gauge
the availability of resources on a system.
Key Takeaways
Computers need to balance the resources used with the resources that are free. Ensuring that the
CPU is not overused means that a system will run with few issues.
• The load in Linux is calculated by adding 1 for each process that is running or waiting to
run.
• Monitoring the average load of Linux allows an IT professional to identify which
processes are running to determine what to end in order to balance the system. A
balanced system runs with fewer problems than one that is using too high of a percent of
resources.
• The load average uses three time lengths to determine the use of the CPU: one minute,
five minutes and fifteen minutes.
The top command can give detailed information about the resource usage of tasks that are
running or waiting to run.
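The load averages described above can be read either with uptime or, on Linux, straight from the kernel. The numbers shown in the comments below are example values only.

```shell
# uptime prints the current time, how long the system has been up, the user
# count, and the 1-, 5-, and 15-minute load averages.
uptime
# e.g.  16:43:14 up  5:08,  1 user,  load average: 0.17, 0.45, 0.62

# On Linux, the same three averages come straight from the kernel.
cat /proc/loadavg
# e.g.  0.17 0.45 0.62 1/243 10754
```

If the 1-minute average is much higher than the 15-minute average, load is climbing right now; if it's lower, the spike is already passing.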
Virtualization
We've talked a little bit about virtual machines before. We've also been using virtual machines
throughout the quick lab assessments. In this lesson, you're going to learn how to install, manage
and remove a virtual instance. A virtual instance is just a single virtual machine. We're going to
be using the popular open-source virtualization software, VirtualBox, to manage virtual
instances. You'll find a link to download VirtualBox in the reading right after this lesson. I'm
currently using a Windows machine, and in this lesson we're going to set up and virtualize an
Ubuntu instance. I've already installed VirtualBox on my machine. So let's go ahead and launch
this application.
We won't go through all the menu items in VirtualBox, but we will talk about some of the
main ones. First step: how do we install a virtual instance? I've already pre-downloaded an image
of Ubuntu from their website and saved it onto my desktop, but I have to install it somehow. Well,
to install this, I'm just going to click this new button here to create a new VM. I'm going to give
my VM a name and select the type and version of my OS. Just going to stick with the defaults.
Next it asks how much RAM I want to dedicate to this VM. One gigabyte is more than enough
for me so I'm just going to keep this and then continue. Now it asks how much hard drive space I
want to dedicate to this VM. I'm just going to keep the default of 10 gigabytes and click Create.
We're going to keep the default values here and just skip through to the create. Awesome. You
can read more about these options in the supplemental reading. Now in my menu here I can click
Start and it'll start the VM. It will prompt me to select a media to launch from, similar to booting
a USB drive with the OS image on it. So I'm just going to select the image I downloaded.
And from here, the installation starts up. That's pretty much it. Okay, what if we decide we want
to use more than one gigabyte for the OS? On a physical machine, we'd have to buy more RAM
and install it. But since we're using a VM, it's as easy as changing a setting. To modify hardware
resource allocation to a VM, all we need to do is right click on the VM then click settings. From
here, we'll be able to change how much RAM we want along with other settings. We won't
discuss the specifics of these settings, but you can see how simple it is to modify a VM instance.
Now, what if we decide we don't want to use this VM anymore? If this is a physical machine,
we'd have to worry about where to store or recycle the hardware. For virtual machines though, all
we need to do is right click and select remove.
From here, it'll ask if we want to remove all files including the VM install itself or just remove it
from the list of VMs. Let's go ahead and delete all files. And that's it in a nutshell. Super simple.
If you want to learn more about how to use VirtualBox or other virtualization software, don't
forget to check out the supplemental reading.
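If you'd rather script these steps than click through the GUI, VirtualBox also ships with a command-line tool called VBoxManage. This is only a sketch: "ubuntu-vm" and the sizes are example values, and the exact flags can vary between VirtualBox versions.

```shell
# Create and register a new VM (the name and OS type are example values).
VBoxManage createvm --name ubuntu-vm --ostype Ubuntu_64 --register

# Dedicate 1 GB of RAM, mirroring the GUI setting above.
VBoxManage modifyvm ubuntu-vm --memory 1024

# Create a 10 GB virtual hard disk for the VM.
VBoxManage createmedium disk --filename ubuntu-vm.vdi --size 10240

# After attaching the disk and the installer ISO, start the VM.
VBoxManage startvm ubuntu-vm

# Reallocating hardware later is one command, not a hardware purchase.
VBoxManage modifyvm ubuntu-vm --memory 2048

# And removing the VM along with all of its files:
VBoxManage unregistervm ubuntu-vm --delete
```

Scripting VM creation like this becomes especially useful once you need to set up the same virtual instance on many machines.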
Supplemental reading for Virtual Machines
Virtual Machines
Virtualization creates a simulated computer environment for running a complete operating
system (OS). The simulated computer environment is called a virtual machine (VM). On a VM,
you can run an OS as if it were running directly on your physical hardware. This reading
explains how virtual machines work and introduces some tools for creating a VM.
Logging
Remember from the first course in our program, Technical Support Fundamentals, that we
introduced the concept of logs? A log is like your computer's diary, it records events that happen
on your system. What kind of events, well, pretty much everything. Like when your system shuts
down, when it starts up, when a driver's loaded, when someone logs in. All of these things can be
written to a log. It's also written with a lot of detail. Logs tell you the exact time that an event
occurred, who caused the event, and more.
We'll be looking into some sample log snippets in the upcoming lessons to get a better sense of
how to read one. The act of creating log events is called logging. Your system does a pretty good
job of logging events right out of the box. In most systems, there is a service that runs in the
background and constantly writes events to logs. These systems are customizable so you can log
any specific field you want, but by default it logs all the essentials. By the end of this lesson,
you'll learn where all the important logs are kept on the Windows and Linux OSes. You'll also
learn how to read a log and utilize common troubleshooting practices when it comes to logs.
When you're working in IT support, you'll need to gather as much data as you can to
troubleshoot an issue. Logs tell us important things like errors that occurred, changes that were
made, etc. They are a reliable source of information.
Similar to how we can jot down our life events in a journal, events are also logged on our
machines. In Windows, the events logged by the operating system are stored in an application
called the Event Viewer. Whether you're trying to figure out why a computer game keeps
crashing, or troubleshooting login or access problems, or just satisfying your curiosity about
what's going on in your system, the Event Viewer is a great first stop.
Let's take a look at some of the information it collects, and how you can use the Event Viewer to
get answers you're looking for. You can launch the Event Viewer either from the start menu or
by typing in eventvwr.msc from the run box. The default view of the Event Viewer shows a
summary of potentially important recent events. In our case, this isn't super interesting, since
we're more concerned with any issues that occurred. Instead, let's take a look at the left-hand
pane, where we can see a few different event groupings.
The first group we see is called Custom Views. The Event Viewer records a lot of information
about the system. So it can sometimes be a little difficult to tease out the signal, like recent
events, from the noise or the stuff you don't care about. This is where the concept of custom
views comes in handy. With a custom view, you can create a filter that will look across all the
event logs the Event Viewer knows about and tease out just the information you're interested in.
Let's say we wanted to see only events of Error severity or higher that were logged in the last
hour. To do this, click on the Create Custom View option in the right-hand Actions pane. This will
bring up a tab called Filter. From there, click the error and critical checkboxes. We're going to
change the logged drop down menu to last hour.
In the Event logs, we're going to select just the Windows Logs, then click OK. Then
we're going to give our view a new name. Click OK once more. Once you're done, you'll see a
new view come up under Custom Views, where only the events that matched your filter are
displayed. The other two categories of logs you'll see in the left-hand navigation pane are
Windows Logs and Applications and Services Logs. The Windows Logs category contains event logs
that are generally applied to the whole operating system.
Let's say you're having an issue with a driver failing during startup. The log called System would
be a good place to start. If you want to see who's been accessing the computer, then you'd begin by
investigating the Security log. The other category is called Applications and Services Logs. This
category contains logs that track events from a single application, or operating system
component, instead of the system wide events of the Windows logs category. For example, if
you're having trouble with PowerShell and wanted to get more information about it, checking out
the PowerShell log under Applications and Services Logs would be a great first step. Regardless
of its category, each line in a given log in the Event Viewer represents an event. Each event
contains information grouped in columns about the event, like the logging level: Information is the
lowest level and Critical is the highest. You can also find the Date and Time the event
occurred.
Selecting an event will bring up more detailed information in the bottom pane of the Event
Viewer. This can help you dig into troubleshooting or even give you context for a bug report.
The Event Viewer is a super helpful tool for IT support specialists. It can provide you with a lot
of really detailed information about the problems any software or hardware might be
experiencing on your system. There's a lot of information in there though. So don't forget about
its custom views and filtering capabilities. More importantly, don't hesitate to poke around the
tool and get used to finding things in its interface. You'll have fun and you'll learn a lot. Next
stop, the wild world of Linux logs.
Logs in Linux are stored in the /var/log directory. Remember that the /var directory stands for
variable, meaning files that constantly change are kept in this directory, and it turns out that logs
are constantly changing. If we look at the /var/log directory with an ls, it might seem a little
intimidating. Don't worry. Each of these log files store specific information that we can figure
out by their file names. Let's check out some of the common ones you'll look at,
/var/log/auth.log, authorization and security-related events are logged here. /var/log/kern.log,
kernel messages are logged here. /var/log/dmesg, system startup messages are logged here. If
you encounter an issue at, let's say, boot up, this is a good place to check for information. It
might get a little tiresome to open up each of these log files to find information about events.
Luckily, there are also log files that combine the information of other log files. The downside is
that these files are usually very large. If you have a pretty good idea of where a problem might
lie, you might want to opt for the smaller and more specific log file. The one log file that logs
pretty much everything on your system is the /var/log/syslog file. The only thing that syslog
doesn't log by default are auth events. When troubleshooting issues with user machines,
/var/log/syslog will usually contain the most comprehensive information about your system, so
that should be your first stop. Log files output a lot of events. By that logic, they take up a lot of
data that has to be stored on our machine somewhere. We generally just want to see the latest
events on our system, so we don't need to overload our disk with all this information. Luckily, our
systems also do a good job of cleaning out log files to make room for new ones. They use
something called log rotation to do this. In Linux, the utility that rotates logs is called logrotate. You
might want to investigate an event that happened a month ago, so you can change your log
rotation settings to make sure not to delete events that are that old. We won't discuss how to
work with log rotation, but you can read more about it in the supplemental reading. We've talked
about logging in the context of a single machine, but if you find yourself managing many
systems and want to be able to parse their logs in one central location, you can use something
called centralized logging. We won't talk about how to do this, but if you're interested in setting
up a centralized server, check out the next supplemental reading. Okay, enough talk about what
logs are. Let's actually look at some real ones. Whoa, this looks super intimidating. But don't
worry, we're not going to be reading all of this. In the next lesson, we'll teach you how to
troubleshoot using logs. But for now, let's just read one line in syslog and parse what it says.
The first field here is the time stamp when the event occurred. Pretty straightforward. But
depending on the log, you might see a time format you aren't familiar with like a long string of
numbers such as 1501538594. Time stamps found in a format like this are referred to as Unix or
epoch time. At first, you might be baffled by this. Why would you represent time in this way?
And just what exactly is the Unix epoch? Unix epoch time is the number of seconds since
midnight on January 1st, 1970, a sort of zero hour for Unix-based computers to anchor their
concept of time. This means that 1501538594 represents the date and time Monday, July 31st,
2017, 3:03:14 PM Pacific Time. Why midnight on January 1st, 1970? Is
that date the birthday of Unix? Or does it mark some other significant event? The actual answer
is much simpler. The original engineers of Unix at Bell Labs just picked it because it was
convenient. So, don't be caught off guard if you see a time stamp like this. The next field is the
host name of the machine the event occurred on. Next up is the service that the log event is
referring to. And last is the event that occurred. In the next lesson, we'll show you some common
troubleshooting tactics when using logs.
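You don't have to convert epoch timestamps in your head; the date command can do it for you. The first form below is GNU date, typical on Linux; the BSD/macOS equivalent is shown as a comment.

```shell
# Convert a Unix/epoch timestamp to a human-readable date (GNU date).
date -d @1501538594

# The same conversion in UTC, so the output doesn't depend on your time zone.
date -u -d @1501538594

# On BSD or macOS, the equivalent flag is -r:
# date -r 1501538594
```

Going the other way, date +%s prints the current time as an epoch timestamp, which is handy for comparing against log entries.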
Supplemental Reading for Linux Logs
For more information about logrotate, the command used to manage large numbers of log
files in Linux, check out the link here.
Yes, now you've made it to the really fun stuff. Let's use the information we learned about logs
to actually investigate system issues. Take this scenario: you're working in an IT support role and
one of your users tells you that they leave their computer on all the time but they recently woke
up to find that the computer had shut down. What do you do? Maybe you stay up through the
night keeping a close eye on the computer not taking any breaks to use the restroom or even
blink. You wait and wait and wait until the computer shuts off, or in a sane and normal world,
you decide to just look through the system logs. Let's go with that option. So where do you
begin? At first, logs can be really messy and daunting to look at. We'll talk about the techniques
you can use to view logs, but rest assured, you'll never need to read a log line by line. The first
thing you want to do when looking at a log is to search for something specific. But what if you're
seeing issues within an application and you don't know where to start looking? Well luckily for
us, our systems log information in a pretty standard way. If an application is getting a lot of
errors, what do you think you can search for? That's right. The word error. What if you're seeing
an issue with a specific application? What else do you think you can search for? If you guess the
application name, you're right. You've already been able to filter out your logs to look for
specific things that you might be seeing. Let's see this in action.
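A minimal sketch of that kind of filtering uses grep on the syslog file (the application name myapp below is just a placeholder):

```shell
# Show syslog lines that mention "error", ignoring case.
grep -i error /var/log/syslog

# Or filter for a specific application by name (hypothetical "myapp").
grep -i myapp /var/log/syslog
```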
Here, we can see the log results that have the word error in them. If you need to investigate
issues that happen around a certain time, you can actually do that by checking the timestamps
around that time. You may find the problem that's causing your issue this way or at least get a
little closer to figuring out what it is.
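Since each syslog line starts with a timestamp, one way to zero in on a window of time is to grep for the timestamp prefix. A sketch, assuming the incident happened around 15:03 on July 31st:

```shell
# Match every line logged during that one-minute window.
grep 'Jul 31 15:03' /var/log/syslog
```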
When you finally get to a juicy log portion that might help you uncover problems, you usually
want to start looking at the output from either the top or bottom. Let's say you're seeing lots of
errors. Each of these errors could be happening because of a root issue. If you resolve the root
issue, you'll fix the cascading errors. Take a look at this. The log is riddled with errors but if we
scroll up, we can see the one error that spun up all these others. If we fix that, then the other
issues will most likely be fixed. On the flip side, if you aren't seeing any indicators of a problem
in a log, you might want to work from the bottom until you come across a clue. Your system
could be functioning normally but when you scroll down to read the output, you see a log entry
that may be related to your problem. Another troubleshooting tactic you can use with logs is to
check them in real time. Let's say every time you launch a specific application, it abruptly
shuts off. Sure, you can check the logs after the fact and piece together the timing, or you
can look at the logs in real time. To do this, we can use one of the commands we learned in a
very early lesson, tail. Let's take a look at what this means. We're going to tail -f the syslog file
and keep it in an open window.
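That command looks like this (sudo may or may not be needed, depending on the file's permissions on your system):

```shell
# Follow /var/log/syslog and print new lines as they are written.
# Press Ctrl+C to stop watching.
sudo tail -f /var/log/syslog
```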
Then we're going to turn off Bluetooth to show you the events it's logging. Now we can see
Bluetooth logging data in real time. Look at that, we've come full circle. I told you those
commands would come in handy. Using these simple log tactics will help you throughout your
career as an IT support specialist. You've certainly covered a lot so far. Now, you've picked up
how to troubleshoot using logs, too. Logs will be one of your best friends when you're faced with
a problem machine that leaves no obvious clues. Talk to the logs and listen to what that sweet,
sweet voice is telling you. You'll discover the problem in no time.