System Commands (SC)

The document covers basic Linux commands for navigating and managing files and directories, including 'pwd' for printing the working directory, 'ls' for listing files, and 'cd' for changing directories. It also explains the output of commands like 'ps' for process status and 'uname' for system information, along with details about file types and permissions. Additionally, it provides examples of using these commands in a terminal environment.

Week 1

Lecture 1.2
pwd — “print working directory”
when you run pwd it prints the full absolute path of the current working
directory

thunder@thunder-VMware-Virtual-Platform:~$ pwd
/home/thunder

ls — “list”

when you run ls it prints the files and directories present in the current
working directory

thunder@thunder-VMware-Virtual-Platform:~$ ls
Desktop  Documents  Downloads  Music  Pictures  Public  snap  Templates  Videos

ps — “process status”
when you run ps it prints a snapshot of the currently running processes
for the current user/session

thunder@thunder-VMware-Virtual-Platform:~$ ps
PID TTY TIME CMD
9302 pts/0 [Link] bash
9556 pts/0 [Link] ps

PID — Process ID

TTY — Terminal type

Time — CPU time used

CMD — Command that started the process

⚠️ ps -e or ps -A — shows all processes on the system

ps -f — full format listing (includes PPID, UID, etc.)

thunder@thunder-VMware-Virtual-Platform:~$ ps -f
UID PID PPID C STIME TTY TIME CMD
thunder 9302 9294 0 11:23 pts/0 [Link] bash
thunder 10667 9302 0 12:03 pts/0 [Link] ps -f

UID — User ID (the user who started the process, here thunder)

PID — Process ID (unique ID for each process)

PPID — Parent Process ID

C — CPU utilization percentage (here usage is very low, so 0)

STIME — Start time (when each process started)

TTY — Terminal

TIME — CPU time used

ps aux — detailed list with memory and CPU usage

uname — “Unix name”


it is used to display system information about the Unix/Linux operating system

thunder@thunder-VMware-Virtual-Platform:~$ uname
Linux

⚠️ uname -a — all system information

thunder@thunder-VMware-Virtual-Platform:~$ uname -a
Linux thunder-VMware-Virtual-Platform 6.11.0-25-generic #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 [Link] UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

Linux — Kernel name

thunder-VMware-Virtual-Platform — Hostname

6.11.0-25-generic — Kernel release

clear — to clear the shell

exit or ctrl+D — close the shell

⚠️ user@machine:~$ ls -a —

user — username (thunder)

@machine — hostname (thunder-VMware-Virtual-Platform)

~ — current path (the home directory)

user@machine:~$ — command prompt (thunder@thunder-VMware-Virtual-Platform:~$)

ls — command

-a — option

user@machine:~$ man ls

man — command

ls — argument

ls -a —

The ls -a command lists all files and directories, including hidden ones
(those that start with a dot . )

thunder@thunder-VMware-Virtual-Platform:~$ ls -a
. .bashrc Documents .local .profile .sudo_as_admin_successful
.. .cache Downloads Music Public Templates
.bash_history .config .gnupg Pictures snap Videos
.bash_logout Desktop .lesshst .pki .ssh

Name — What it is

. — The current directory (here, your home directory)

.. — The parent directory (usually /home if you're in /home/thunder)

.bashrc — Configuration file for the Bash shell (runs every time a terminal is opened)

.bash_logout — Script that runs when you log out of a terminal session

.bash_history — Stores your command history

.cache , .config , .local , .gnupg — Config and cache directories for apps

.pki , .ssh — Used for encryption/security (e.g., SSH keys)

.profile — Used to set environment variables for your session

.sudo_as_admin_successful — A marker file created when you first successfully used sudo

Knowing these is useful when configuring environments or troubleshooting.

ls -l —
It gives detailed information about each file and directory in the current
location

thunder@thunder-VMware-Virtual-Platform:~$ ls -l
total 36
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Desktop
drwxr-xr-x 2 thunder thunder 4096 May 16 13:25 Documents
drwxr-xr-x 2 thunder thunder 4096 May 18 12:31 Downloads

drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Music
drwxr-xr-x 2 thunder thunder 4096 May 16 13:28 Pictures
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Public
drwx------ 7 thunder thunder 4096 May 18 12:31 snap
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Templates
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Videos

Column Meaning
drwxr-xr-x Permissions and file type
2 Number of hard links
thunder Owner (username)
thunder Group (group name)
4096 Size in bytes
May 16 00:54 Last modified date & time
Desktop File or directory name

man page sections —

File system Hierarchy Standard —

File system: traversing the tree

/ is the root of the file system

/ is also the delimiter for sub-directories

. is current directory

.. is parent directory

Path for traversal can be absolute or relative

cd — “Change Directory”
It is used to move between folders (directories) in the terminal.

cd foldername

this command changes your current directory to foldername .

⚠️ cd — Takes you to your home directory

cd .. — go one level up (to the parent directory)

cd / — go to the root directory

cd ~/Documents — go to the Documents folder in your home directory

cd ../Documents — Go to Documents folder in the parent directory

🧪

Step by step analysis:—

thunder@thunder-VMware-Virtual-Platform:~$ pwd
/home/thunder

You were inside your home directory

thunder@thunder-VMware-Virtual-Platform:~$ ls
Desktop  Documents  Downloads  Music  Pictures  Public  snap  Templates  Videos

Showed the visible directories in your home

thunder@thunder-VMware-Virtual-Platform:~$ ls -a
.              .bashrc  Documents  .local    .profile  .sudo_as_admin_successful
..             .cache   Downloads  Music     Public    Templates
.bash_history  .config  .gnupg     Pictures  snap      Videos
.bash_logout   Desktop  .lesshst   .pki      .ssh

Listed all files, including hidden ones (those starting
with . )

thunder@thunder-VMware-Virtual-Platform:~$ cd ..
thunder@thunder-VMware-Virtual-Platform:/home$ pwd
/home

You moved one level up, from /home/thunder to /home . The .. means "parent directory"

thunder@thunder-VMware-Virtual-Platform:~$ cd ..
thunder@thunder-VMware-Virtual-Platform:/$ pwd
/

Again we moved one level up; this is the root directory, the top of the Linux file system. Everything starts from here.

thunder@thunder-VMware-Virtual-Platform:~$ cd
thunder@thunder-VMware-Virtual-Platform:~$ pwd
/home/thunder

You went back to your home directory using just cd

thunder@thunder-VMware-Virtual-Platform:~$ cd /

here we are going to the root directory using cd /

thunder@thunder-VMware-Virtual-Platform:~$ cd /
thunder@thunder-VMware-Virtual-Platform:/$ ls

Listed all directories present in the root directory

thunder@thunder-VMware-Virtual-Platform:~$ cd /
thunder@thunder-VMware-Virtual-Platform:/$ ls
thunder@thunder-VMware-Virtual-Platform:/$ ls -a

Used ls -a to show hidden entries.

thunder@thunder-VMware-Virtual-Platform:~$ cd /
thunder@thunder-VMware-Virtual-Platform:/$ ls
thunder@thunder-VMware-Virtual-Platform:/$ ls -a
thunder@thunder-VMware-Virtual-Platform:/$ cd bin
thunder@thunder-VMware-Virtual-Platform:/bin$ ls -a

Used cd bin to go to the bin directory under the root

Used ls -a to show all the entries including hidden ones

/bin — Essential command binaries

/boot — static files of the boot loader

/dev — Device files

/etc — host specific system configuration

/lib — essential shared libraries and kernel modules

/media — mount points for removable devices

/mnt — mount points

/opt — add on application software packages

/run — data relevant to running processes

/sbin — essential system binaries

/srv — data for services

/tmp — temporary files

/usr — secondary hierarchy

/var — Variable data

thunder@thunder-VMware-Virtual-Platform:~$ cd /
thunder@thunder-VMware-Virtual-Platform:/$ ls
#here all the entries will show for the root directory
thunder@thunder-VMware-Virtual-Platform:/$ cd usr
thunder@thunder-VMware-Virtual-Platform:/usr$ ls
#here all the entries will show for usr directory

/var/crash — stores crash reports from applications or system processes

/var/log — here linux stores log files

/usr/bin — user commands

/usr/lib — libraries

/usr/local — local hierarchy

/usr/sbin — non-vital system binaries

/usr/share — architecture-independent data

/usr/include — header files included by c programs

/usr/src — source code

/var/cache — application cache data

/var/lib — variable state information

/var/local — variable data for /usr/local

/var/lock — lock files

/var/run — data relevant to running processes

/var/tmp — temporary files preserved between reboots

             Shareable     Unshareable
static       /usr, /opt    /etc, /boot
variable     /var/mail     /var/run, /var/lock

Lecture 1.3

🧪 date — date and time

cal — calendar of a month

free — memory statistics

groups — groups to which the user belongs

file — what type of file it is

thunder@thunder-VMware-Virtual-Platform:~$ date
Fri May 23 [Link] AM IST 2025

thunder@thunder-VMware-Virtual-Platform:~$ free
               total        used        free      shared  buff/cache   available
Mem:         8082144     2883508     2727116      398152     3175120     5198636
Swap:        4194300           0     4194300
groups — this shows the user groups that a particular user belongs to.

thunder@thunder-VMware-Virtual-Platform:~$ groups
thunder adm cdrom sudo dip plugdev users lpadmin

thunder — default personal group


sudo — can use sudo command
cdrom , plugdev — can access removable media
lpadmin — can manage printers

man date — opens the manual page for the date command

Option — Meaning

-R — output in RFC 5322 format

+%Y — year (e.g., 2025)

+%m — month (01–12)

+%d — day of month (01–31)

+%H — hour (00–23)

+%M — minute (00–59)

+%S — seconds (00–59)

-u — display or set time in UTC

thunder@thunder-VMware-Virtual-Platform:~$ date "+%Y-%m-%d"
2025-05-23

thunder@thunder-VMware-Virtual-Platform:~$ date -u
Fri May 23 [Link] AM UTC 2025

date -u — shows time in UTC

free -h — shows the system's memory usage in a human-readable format

free vs free -h — free is used when you need exact, precise numbers in bytes or kilobytes; free -h is easier to read.
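The format specifiers above can be combined into a single format string. A minimal runnable sketch (the printed values are, of course, whatever today's date is):

```shell
# Date in ISO format, built from the %Y, %m, %d specifiers above
date "+%Y-%m-%d"

# Time components: hour, minute, second
date "+%H:%M:%S"

# The same format string evaluated in UTC instead of local time
date -u "+%Y-%m-%d %H:%M:%S"
```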

Typical output of ls -l (lists directory contents in long format)

thunder@thunder-VMware-Virtual-Platform:~$ ls -l
total 36
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Desktop
drwxr-xr-x 2 thunder thunder 4096 May 16 13:25 Documents
drwxr-xr-x 2 thunder thunder 4096 May 18 12:31 Downloads
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Music
drwxr-xr-x 3 thunder thunder 4096 May 19 18:56 Pictures
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Public
drwx------ 7 thunder thunder 4096 May 18 12:31 snap
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Templates
drwxr-xr-x 2 thunder thunder 4096 May 16 00:54 Videos

File Types

- — regular file

d — directory

l — symbolic link

c — character file

b — block file

s — socket file

p — named pipe

inode — an index node ( inode ) is a data structure Linux/Unix file systems use to store metadata about a file. Every file has a unique inode number.

ls -i — displays the inode number of each file or directory in the current directory

thunder@thunder-VMware-Virtual-Platform:~$ ls -i
3932236 Desktop    3932237 Downloads  3932242 Pictures  3932171 snap  3932243 Videos
3932240 Documents  3932241 Music      3932239 Public    3932238 Templates

find -inum <number> — finds the file with the inode number <number>

thunder@thunder-VMware-Virtual-Platform:~$ find -inum 3932236
./Desktop
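The same round trip can be reproduced anywhere; the filename demo.txt is just an illustration ( stat -c %i prints a file's inode number):

```shell
cd "$(mktemp -d)"              # work in a scratch directory
touch demo.txt                 # create a file
inum=$(stat -c %i demo.txt)    # read its inode number
find . -inum "$inum"           # locate the file by inode; prints ./demo.txt
```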

permission string — rwx r-x r-x ( r - read, w - write, x - execute, - - no permission)

rwx — owner — read, write and execute

r-x — group — read, no write, execute

r-x — others — read, no write, execute

Numeric Equivalent — 755

7 ( rwx ) for owner — 4 (read)+2 (write)+1(execute)

5 ( r-x ) for groups — 4(read)+0+1(execute)

5 ( r-x ) for others — 4(read)+0+1(execute)
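The numeric form can be applied directly with chmod and checked with stat; a small sketch (the filename script.sh is hypothetical):

```shell
cd "$(mktemp -d)"      # work in a scratch directory
touch script.sh
chmod 755 script.sh    # rwxr-xr-x: 7 for owner, 5 for group, 5 for others
stat -c %a script.sh   # prints 755
chmod 644 script.sh    # rw-r--r--: 4+2 for owner, 4 for group, 4 for others
stat -c %a script.sh   # prints 644
```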

mkdir — make directory is used to create new folders

thunder@thunder-VMware-Virtual-Platform:~$ mkdir level1
thunder@thunder-VMware-Virtual-Platform:~$ ls
Desktop  Documents  Downloads  level1  Music  Pictures  Public  snap  Templates  Videos

mkdir newfolder — creates a single directory

mkdir -p a/b/c — creates nested directories (a, a/b, and a/b/c) in one go

mkdir folder1 folder2 folder3 — creates multiple directories at once
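Both forms can be verified in a scratch directory (the directory names below are illustrative):

```shell
cd "$(mktemp -d)"     # work in a scratch directory
mkdir -p a/b/c        # creates a, a/b and a/b/c in one go
mkdir dir1 dir2 dir3  # creates three sibling directories at once
ls                    # a  dir1  dir2  dir3
```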

thunder@thunder-VMware-Virtual-Platform:~$ mkdir projects
thunder@thunder-VMware-Virtual-Platform:~$ cd projects
thunder@thunder-VMware-Virtual-Platform:~/projects$ mkdir code notes data
thunder@thunder-VMware-Virtual-Platform:~/projects$ ls
code  data  notes

chmod — changes file or directory permissions; it controls who can read, write or execute a file.

u - user, g - group, o - others, a - all (user + group + others)
+ - add permission, - - remove permission, = - set exact permission

chmod u+x [Link]   # Add execute permission for user
chmod go-w [Link]  # Remove write permission for group and others
chmod a=r [Link]   # Set read-only for everyone

touch — creates a file, and also updates the timestamps of existing files

thunder@thunder-VMware-Virtual-Platform:~/projects/code$ touch file
thunder@thunder-VMware-Virtual-Platform:~/projects/code$ ls
file

thunder@thunder-VMware-Virtual-Platform:~/projects/code$ touch [Link]
thunder@thunder-VMware-Virtual-Platform:~/projects/code$ ls
[Link]  file

mkdir creates directories; touch creates empty files and updates timestamps.

cp — the copy command in Linux, used to copy files or directories.

cp [Link] [Link]

takes the contents of [Link] and copies it

makes an exact copy of that content and saves it as a new file named [Link]

so after running this command you’ll have two files —

[Link] (original file)

[Link] (new copy, identical context)

cp [Link] /home/thunder/Documents/

copies [Link] into the Documents folder

mv — is used to move or rename files and directories

mv [Link] /home/thunder/Documents/

this moves [Link] into the Documents folder

mv [Link] [Link]

this renames [Link] to [Link]

thunder@thunder-VMware-Virtual-Platform:~$ mkdir billo
thunder@thunder-VMware-Virtual-Platform:~$ cd billo
thunder@thunder-VMware-Virtual-Platform:~/billo$ touch ba[Link]
thunder@thunder-VMware-Virtual-Platform:~/billo$ ls
[Link]
thunder@thunder-VMware-Virtual-Platform:~/billo$ mv bagg[Link] ..
thunder@thunder-VMware-Virtual-Platform:~/billo$ ls
thunder@thunder-VMware-Virtual-Platform:~/billo$

rm — is used to remove files or directories.

thunder@thunder-VMware-Virtual-Platform:~$ ls
[Link]  Desktop  Downloads  Music  projects  snap  Videos
billo  Documents  level1  Pictures  Public  Templates
thunder@thunder-VMware-Virtual-Platform:~$ rm [Link]
thunder@thunder-VMware-Virtual-Platform:~$ ls
billo  Documents  level1  Pictures  Public  Templates
Desktop  Downloads  Music  projects  snap  Videos
thunder@thunder-VMware-Virtual-Platform:~$

here the file is deleted by running the rm command

rm -r foldername

this deletes the directory foldername and everything inside it

alias rm="rm -i"

normally rm deletes files immediately

with rm -i it asks for confirmation before deleting each file

alias — used to create shortcut or custom commands

alias shortname='long command here'

then when you type shortname the shell runs the full command you
aliased

alias ll='ls -l'

now typing ll will run ls -l to show a detailed list of files

ls -li — here two options are combined for ls

-l — shows a long listing format

-i — shows the inode number of each file

ls -lia — here three options are combined for ls

-l — shows long listing format

-i — shows inode number

-a — show all files including hidden files

whoami — shows the username of the current user

thunder@thunder-VMware-Virtual-Platform:~$ whoami
thunder

less — a file viewer used to read the contents of text files one screen at a time; especially useful for large files.

chmod change permission of a file

touch change modified timestamp of a file

cp create a copy of a file

mv rename or move a file

mkdir create a directory

rm remove a file

Lecture 1.4

🧪 Multiple uses of / are as good as one; the root folder / is its own parent

ls — list

Short and long forms of options

interpretation of directory as an argument

recursive listing

order of options on command line

what is /etc/profile? — a system-wide configuration file that sets up environment variables and startup scripts for all users when they log in using a login shell (like when using bash -l or logging in via a terminal)

thunder@thunder-VMware-Virtual-Platform:/etc$ ls -l profile
-rw-r--r-- 1 root root 582 Apr 22 2024 profile

this gives detailed information about the profile file located in the /etc directory

less — the less command in Linux is used to view large text files one screen at a time

less filename
#less /etc/profile

allows scrolling both forward and backward

doesn’t load the entire file into memory - great for large files

supports searching within the file

opens faster and is more interactive than more or cat

cat — concatenate and is commonly used to:

display the contents of a file

combines multiple files

create new files

cat [options] [file...]


#cat [Link]

display the entire contents of [Link]

cat [Link] [Link]

display the contents of both files, one after the other

cat > [Link]

creates a new file and you can start typing content then press ctrl + D

to save and exit

cat -n [Link] #number all output lines


cat -b [Link] #number non-blank output lines
cat -s [Link] #suppress repeated blank lines
cat -E [Link] #show $ at the end of each line
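The effect of these flags is easiest to see on a small file with blank lines (the filename demo.txt is illustrative):

```shell
cd "$(mktemp -d)"                      # work in a scratch directory
printf 'alpha\n\n\nbeta\n' > demo.txt  # 4 lines, two of them blank
cat -n demo.txt  # numbers every line, including the blanks
cat -b demo.txt  # numbers only the non-blank lines
cat -s demo.txt  # squeezes the run of blank lines down to one
cat -E demo.txt  # marks the end of each line with $
```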

head — is used to display the first few lines of a file (by default the
first 10 lines)

head filename

this shows the first 10 lines of filename

head -n 5 [Link] # shows the first 5 lines


head -c 20 [Link] # shows first 20 bytes

tail— is used to display the last part of a file typically the last 10
lines by default

tail filename

this will show last 10 lines of the file

tail -n 5 [Link]   # shows the last 5 lines
tail -c 20 [Link]  # shows the last 20 bytes
tail -f /var/log/syslog  # live-monitors the end of a file (-f follows the file as it grows)
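head and tail are easy to check against a numbered file (a sketch; nums.txt is illustrative):

```shell
cd "$(mktemp -d)"    # work in a scratch directory
seq 1 20 > nums.txt  # a 20-line file containing 1..20
head -n 3 nums.txt   # prints 1 2 3 (one per line)
tail -n 2 nums.txt   # prints 19 20
head -c 4 nums.txt   # first 4 bytes: "1", newline, "2", newline
```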

wc — "word count" is used for counting lines, words, characters or bytes, and the longest line length

wc filename

this displays the line, word, and byte counts along with the filename

thunder@thunder-VMware-Virtual-Platform:/etc$ wc profile
27 97 582 profile

27 lines, 97 words 582 bytes profile

wc -l # counts lines only


wc -w # counts words only
wc -c # counts bytes only
wc -m # count characters only
wc -L # show length of longest line
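The individual counters can be verified on a file whose contents are known exactly (demo.txt is illustrative):

```shell
cd "$(mktemp -d)"                     # work in a scratch directory
printf 'one two\nthree\n' > demo.txt  # 2 lines, 3 words, 14 bytes
wc -l demo.txt  # 2 lines
wc -w demo.txt  # 3 words
wc -c demo.txt  # 14 bytes ("one two\n" is 8, "three\n" is 6)
```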

which — is used to locate the executable file associated with a given command

which <command>

thunder@thunder-VMware-Virtual-Platform:~$ which ls
/usr/bin/ls

finding where a command is installed

verifying which version of a command you are using

debugging when a command is not working as expected

more — to view the content of text files one screen at a time similar
to less but with more limited navigation.

more filename

man — stands for "manual"; it displays manual pages for other commands and programs

man <command>

man ls

this will show the manual for the ls command

apropos — is used to search the man (manual) pages for a keyword or description. It helps you find relevant commands when you don't remember the exact command name.

info — is used to view detailed documentation for a command or program, often more extensive than man pages. It opens GNU Info pages, which are structured and include navigation features.

info <command>

info ls

this opens the info page for ls command

whatis — gets a one-line description of a command. It pulls this short summary from the manual (man) database.

whatis <command>

thunder@thunder-VMware-Virtual-Platform:~$ whatis ls
ls (1) - list directory contents

this tells you:

ls is a command found in section 1 of the manual.

its purpose is to list directory contents

help — displays information about shell built-in commands; these are commands that are part of the shell itself (like cd , echo , etc.)

help [builtin_command]

help cd

this shows help for the cd (change directory) built-in

when to use help:

when the command is built into the shell (not a standalone executable)

for quick reference about syntax and options

type — is used to display information about how a given command name will
be interpreted by the shell- whether it’s a builtin, an alias, a function or an
external executable

type [command]

thunder@thunder-VMware-Virtual-Platform:~$ type ls
ls is aliased to `ls --color=auto'

use cases —

to debug a script and see why a command behaves differently

find whether a command has been aliased

location of a command

🧪 Multiple arguments —

Second argument

Interpretation of last argument

Recursion assumed for mv and not cp

Multiple arguments — refers to commands like cp and mv that allow you to pass more than one argument

Second argument — means the second item you pass in the command

mv [Link] [Link] dir/

[Link] is the first argument

[Link] is the second argument

dir/ is the last argument

Interpretation of last argument — this refers to how Linux commands treat the last argument when multiple arguments are provided.

cp [Link] [Link] [Link] myfolder/

the last argument myfolder/ is interpreted as the destination directory

all previous ones are source files to be copied into myfolder/


⚠️ But if myfolder does not exist, the command will throw an error
— because it's expecting it to be a directory when multiple
sources are given.

cp does not copy folders/directories by default; you need to explicitly use the -r option

cp -r folder1/ folder2/

mv automatically moves folders and their contents:

mv folder1/ folder2/

So, recursion is assumed (or automatic) for mv , but for cp you must
specify it with -r .

so the list

Multiple arguments —
1. "Second argument"
2. "Interpretation of last argument"
3. "Recursion assumed for mv and not cp "

...is summarizing how Linux command-line tools handle multiple files/directories:

Pay attention to how each argument is treated.

The last one is usually the target (especially if it's a directory).

Some commands like mv handle directories automatically, but cp

needs special flags.
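All three points can be demonstrated in one scratch-directory session (the filenames are illustrative):

```shell
cd "$(mktemp -d)"        # work in a scratch directory
touch f1.txt f2.txt
mkdir dest
cp f1.txt f2.txt dest/   # the last argument is interpreted as the destination directory
mkdir src
touch src/inner.txt
cp -r src dest/          # directories need -r with cp
mv dest/src dest/moved   # mv handles directories without extra flags
ls dest                  # f1.txt  f2.txt  moved
```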

💡 Hard links
A hard link is an additional name for an existing file. It points directly to
the inode (the actual data on disk), not just the file name. All hard links to
a file are equal — they are indistinguishable from the original file
🧠key concepts:
Every file in Linux is a pointer to an inode, which stores the file's
actual data.

A hard link creates another name (or label) pointing to the same
inode.

Deleting one hard link does not delete the data as long as at least
one hard link exists

ln [Link] [Link]

this creates a hard link called [Link] that points to the same inode as
[Link]

echo "Hello, World!" > [Link]


ln [Link] [Link]

[Link] and [Link] both point to the same data on disk

Editing one file will reflect in the other

deleting [Link] does not delete the data since [Link] still exists

ls -li [Link] [Link]

same inode number

same file size

same modification time
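The whole hard-link lifecycle can be replayed in a scratch directory (filenames are illustrative; stat -c %i prints the inode number):

```shell
cd "$(mktemp -d)"                     # work in a scratch directory
echo "Hello, World!" > original.txt
ln original.txt hardlink.txt          # a second name for the same inode
stat -c %i original.txt hardlink.txt  # both lines show the same inode number
rm original.txt                       # the data survives: one link still exists
cat hardlink.txt                      # prints Hello, World!
```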

⚠️ Limitations of Hard Links

Can't link directories (to avoid cycles)

Can't span different filesystems (since inodes are filesystem-specific)

Feature — Hard Link

Points to — inode (actual file content)

Shared inode? — ✅ Yes

File removal deletes data? — ❌ No, not unless all links are deleted

Works across filesystems? — ❌ No

Can link directories? — ❌ No ( ln -d only in special cases for system use)

💡 Symbolic Links:—
A symbolic link is a special type of file that points to another file or
directory by name, not by inode.
🧠 Key Concepts
A symbolic link just contains a path to another file.

It’s a separate file that points to the target by name, not inode.

If the target is deleted, the symbolic link becomes broken (also called a dangling symlink).

ln -s [Link] [Link]

-s → tells ln to create a symbolic link

[Link] → the original file

[Link] → the symlink name (shortcut)

echo "Linux is awesome" > [Link]


ln -s [Link] [Link]
cat [Link]

output: — Linux is awesome

rm [Link]
cat [Link]

Error: No such file or directory — because the symlink is now broken.
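The same sequence, reproducible in a scratch directory (filenames are illustrative; readlink prints the path a symlink stores):

```shell
cd "$(mktemp -d)"                        # work in a scratch directory
echo "Linux is awesome" > target.txt
ln -s target.txt link.txt                # the symlink stores a path, not an inode
readlink link.txt                        # prints target.txt
cat link.txt                             # prints Linux is awesome
rm target.txt                            # now the symlink dangles
cat link.txt || echo "dangling symlink"  # cat fails: No such file or directory
```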

Feature — Symbolic Link — Hard Link

Points to — file name (path) — inode (actual file data)

Works across filesystems — ✅ Yes — ❌ No

Can link directories — ✅ Yes — ❌ No

Breaks if target deleted — ✅ Yes — ❌ No

Can be chained — ✅ Yes (can link to another link) — ❌ No

ln — is used to create links between files. It supports

Hard links (default)

Symbolic (soft) links (with -s option)

ln [OPTION] TARGET LINK_NAME

ln file1 file2

Creates another name ( file2 ) for the same inode as file1 .

Both files are now indistinguishable.

Deleting one won’t affect the other.

🧠 Hard links can’t span filesystems and can’t link directories.


ln -s file1 link1

link1 is a pointer to file1 .

If file1 is deleted, link1 becomes broken.

💡 File Sizes

ls -s — the ls -s command in Linux displays file sizes in blocks (usually 1 KB each) along with file names.

ls -s [options] [file...]

$ ls -s
4 Desktop 8 Downloads 4 [Link]

This means:

Desktop uses 4 blocks (4 KB),

Downloads uses 8 blocks,

[Link] uses 4 blocks.

ls -sh
4.0K Desktop 8.0K Downloads 4.0K [Link]

stat — the stat command in Linux is used to display detailed information about a file or directory, including: file size, inode number, permissions, ownership, etc.

$ stat [Link]
File: [Link]
Size: 1024 Blocks: 8 IO Block: 4096 regular file
Device: 802h/2050d Inode: 1835026 Links: 1
Access: 2025-05-24 [Link].000000000 +0000
Modify: 2025-05-23 [Link].000000000 +0000
Change: 2025-05-23 [Link].000000000 +0000
Birth: -

du — the du command in Linux stands for "disk usage". It is used to estimate the amount of disk space used by files and directories.

du [options] [file or directory]

du mydir

This will show how much space mydir and its subdirectories are
using in kilobytes (by default).

du -h  # human-readable sizes
du -s  # show only the total for each argument
du -a  # show sizes for all files too
du -c  # display a grand total

Block Size:— The block size in Linux (and other Unix-like systems)
plays a crucial role in how disk space is allocated and reported by
tools like du , ls , and stat .
A block is the smallest unit of data the filesystem uses to store files.
Common block sizes are 512 bytes, 1KB, 4KB, etc.
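The byte/block distinction is visible on a tiny file; a sketch (tiny.txt is illustrative, and the allocated sizes depend on the filesystem's block size, commonly 4 KB):

```shell
cd "$(mktemp -d)"                      # work in a scratch directory
printf 'hi\n' > tiny.txt               # 3 bytes of content
stat -c 'bytes=%s blocks=%b' tiny.txt  # apparent size vs allocated 512-byte blocks
ls -l tiny.txt                         # the size column shows 3 (bytes)
du -k tiny.txt                         # typically 4: one whole 4 KB filesystem block
```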

Term Meaning

Block size Smallest unit of storage on disk

Affects Disk usage, efficiency, reporting commands

du shows Space used (block-based)

ls -l shows Actual content size (byte-based)

💡 In memory file-system

/proc — the /proc directory in Linux is a virtual filesystem that provides a window into the running kernel. It doesn't contain real files but rather runtime system information and process-specific data, all generated on the fly by the kernel.

What is /proc ?

Full path: /proc

Type: virtual filesystem (procfs)

Purpose: gives access to kernel data structures, process info, system configuration, etc.

cat /proc/cpuinfo  # Show CPU details
cat /proc/meminfo  # Show memory usage
cat /proc/uptime   # Show how long the system has been running
ls /proc/$$        # Show info about your current shell's process

Entry Description

/proc/cpuinfo Information about your CPU (model, cores, etc.)

/proc/meminfo Memory usage stats

/proc/version Kernel version

/proc/uptime System uptime

/proc/[PID] Info about a specific process with that PID

/proc/self Symlink to the process accessing /proc

/proc/mounts Mounted filesystems

/proc/partitions Info about disk partitions

/proc/filesystems Supported filesystems

/proc/net/ Network-related info

/sys — the /sys directory in Linux is another virtual filesystem, just like /proc , but it is specifically used to expose information about devices, drivers, and the kernel itself. It is part of a system called sysfs.

What is /sys ?

Type: virtual filesystem (sysfs)

Purpose: provides a structured way to interact with kernel objects, such as hardware devices, drivers, and kernel subsystems.

Main Use Cases

Shows how the kernel sees the hardware.

Allows configuration of hardware-related settings (some writable).

Used by udev, device drivers, and hardware tools to discover and manage devices.

Path Description
/sys/class/ Contains high-level classes like net , block , power
/sys/block/ Info about block devices like sda , nvme0n1
/sys/bus/ Bus systems like pci , usb , and their devices
/sys/devices/ Tree of all physical and virtual devices
/sys/firmware/ Firmware-related interfaces (e.g., ACPI)
/sys/module/ Info about loaded kernel modules
/sys/kernel/ Kernel settings and information
/sys/power/ Power management (e.g., suspend, hibernate)

cat /sys/class/power_supply/BAT0/status  # Show battery status
cat /sys/class/net/eth0/address          # Show MAC address of eth0
ls /sys/block                            # List block devices
cat /sys/block/sda/size                  # Show size of sda in 512-byte blocks

Feature /proc /sys

Purpose Process & kernel info Hardware and driver interface

Filesystem procfs sysfs

Dynamic Yes (created on boot) Yes (reflects current system)

Writable? Some files (carefully) Some files (device config)

WEEK 2
Lecture 2.1 (Command line editors — Part 1)

🧪 Command line editors


Working With text files in the terminal

🧪

Features —

Scrolling, view modes, current position in file (allows you to move through the file, switch between views (e.g., line wrap, hex), and see your current cursor position)

Navigation (char, word, line, pattern) (enables movement by character, word, line, or search pattern for efficient editing)

Insert, Replace, Delete (lets you add, overwrite, or remove text at the cursor or selected area)

Cut-Copy-Paste (supports removing, duplicating, and moving text within or across files)

Search-Replace (allows finding specific text or patterns and replacing them interactively or globally)

Language-aware syntax highlighting (highlights code elements based on programming language syntax to improve readability)

Key-maps, init scripts, macros (customizes editor behavior, automates tasks, and binds keys to commands or sequences)

Plugins (extends editor functionality with additional features like autocomplete, linters, or file explorers)

ed (ed is one of the oldest line editors in Unix and Linux systems)

1. It was the standard editor in early Unix systems (developed in the 1970s)
2. It is extremely lightweight, usually just a few kilobytes
3. Still available on nearly all Unix-like systems by default, even in minimal rescue environments

Show the prompt — P (turns on a prompt (*) after each command, making ed easier to follow interactively)

Command Format — [addr[,addr]]cmd[params]

optional addresses (line numbers or patterns)

followed by the command ( d , s , etc.)

then any parameters if required

Example — 1,3d deletes lines 1 to 3
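Because ed reads its commands from standard input, the address-command format can also be driven non-interactively by piping a script into it. A small sketch (the file name demo.txt is made up for the demo, and -s just suppresses the byte counts):

```shell
# Build a three-line file
printf 'alpha\nbeta\ngamma\n' > demo.txt

# Feed ed a script: 1,2d deletes lines 1 to 2, w writes, q quits
printf '1,2d\nw\nq\n' | ed -s demo.txt

cat demo.txt    # only "gamma" remains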

Commands for location — 2 . $ % + - , ; /RE/

2 — refers to line 2

. — current line

$ — last line

% — shortcut for 1,$ (entire file).

+ / - — Next/previous line relative to current

, — same as 1,$ (entire file)

; — same as .,$ (from current line to end)

/RE/ — searches forward for the next line matching regular expression RE .

Commands for editing — f p a c d i j s m u

f — show or set the current file name

p — print the addressed line

a — append text after the addressed line

c — change the addressed lines (replace them)

d — delete the addressed line

i — insert text before the addressed line

j — join lines into one.

s — substitute text in the current line ( s/old/new/ )

m — move lines to another location

u — undo the last command (limited to one level)

execute a shell command — !command (runs the specified shell
command and displays the output;
example — !ls shows files in the current directory)

edit a file — e filename (closes the current buffer and loads filename into
the editor (unsaved changes are lost unless written))

read file contents into buffer — r filename (appends the contents of


filename after current line)

read command output into buffer — r !command (appends the output of


the shell command after the current line.
Example — r !date adds the current date/time)

write buffer to filename — w filename (saves the buffer to filename )

quit — q (exits the editor. Use Q to force quit without warning about
unsaved changes)

Creating a file —

thunder@thunder-VMware-Virtual-Platform:~$ echo "line- 1 hello wo


rld" > [Link]
thunder@thunder-VMware-Virtual-Platform:~$ echo "line-2 hello wo
rld" >> [Link]

thunder@thunder-VMware-Virtual-Platform:~$ echo "line-3 ed is pe
rhaps the oldest ediotr out there" >> [Link]
thunder@thunder-VMware-Virtual-Platform:~$ echo "line-4 end of fi
le" >> [Link]
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link]
line- 1 hello world
line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file

thunder@thunder-VMware-Virtual-Platform:~$ ed [Link]
107
q

this means 107 bytes were loaded into the buffer — that’s the total size
of [Link] — and we used q to quit the editor without making any
changes
for more information —

thunder@thunder-VMware-Virtual-Platform:~$ ls -l [Link]
-rw-rw-r-- 1 thunder thunder 107 Jun 15 17:06 [Link]

where

-rw-rw-r-- — permissions

1 — hard link count — 1

thunder — owner (the user who owns the files)

thunder — Group — the group that owns the file

107 — size

Jun 15 17:06 — last modified date & time

[Link] — filename
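The same fields can be read out one at a time with stat. A sketch assuming GNU coreutils; sample.txt is a throwaway file created just for the demo:

```shell
touch sample.txt    # empty demo file

# %A permissions, %h hard links, %U owner, %G group, %s size, %n name
stat -c 'perms=%A links=%h owner=%U group=%G size=%s name=%n' sample.txt
```

This is handy in scripts, where parsing ls -l output is fragile.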

thunder@thunder-VMware-Virtual-Platform:~$ ed [Link]
107
P
*1
line- 1 hello world
*$
line-4 end of file
*,p
line- 1 hello world
line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file
*2,3p
line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
*/hello/
line- 1 hello world
*/oldest/
line-3 ed is perhaps the oldest ediotr out there
*1
line- 1 hello world
*+
line-2 hello world
*-
line- 1 hello world
*3
line-3 ed is perhaps the oldest ediotr out there
*;p
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file
*%p
line- 1 hello world
line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file

*.
line-4 end of file
*!date
Sun Jun 15 [Link] PM IST 2025
!
*r !date
32
*,p
line- 1 hello world
line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file
Sun Jun 15 [Link] PM IST 2025
*w
139
*q

ed [Link] — opened file

P — now a * prompt is shown for each command.

1 → Line 1
$ → Last line
,p → Print all lines
2,3p → Print lines 2 to 3
/hello/ → Search for "hello"

+/− → Move forward/back one line


3 → Go to line 3
;p → Print from current line (3) to line 4
%p → Print the whole file
. → Print current line (line 4)

!date — runs the shell command date , showing the current date and time

r !date — reads the output of ‘date’ and appends it after the current
line

w — shows the file is now 139 bytes

,p —

line- 1 hello world


line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file
Sun Jun 15 [Link] PM IST 2025

Now the file looks something like this —

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link]


line- 1 hello world
line-2 hello world
line-3 ed is perhaps the oldest ediotr out there
line-4 end of file
Sun Jun 15 [Link] PM IST 2025

thunder@thunder-VMware-Virtual-Platform:~$ ed [Link]
139
P
*%s/\(.*\)/PREFIX\1/
*,p
PREFIXline- 1 hello world
PREFIXline-2 hello world
PREFIXline-3 ed is perhaps the oldest ediotr out there
PREFIXline-4 end of file
PREFIXSun Jun 15 [Link] PM IST 2025
*%s/\(.*\)/PREFIX \1/
*,p
PREFIX PREFIXline- 1 hello world
PREFIX PREFIXline-2 hello world
PREFIX PREFIXline-3 ed is perhaps the oldest ediotr out there
PREFIX PREFIXline-4 end of file
PREFIX PREFIXSun Jun 15 [Link] PM IST 2025

*3,5s/PREFIX/prefix/
*,p
PREFIX PREFIXline- 1 hello world
PREFIX PREFIXline-2 hello world
prefix PREFIXline-3 ed is perhaps the oldest ediotr out there
prefix PREFIXline-4 end of file
prefix PREFIXSun Jun 15 [Link] PM IST 2025

%s/\(.*\)/PREFIX\1/ — This prepends PREFIX to each line

\1 captures the entire line matched by \(.*\)

Result — PREFIXline- 1 hello world

%s/\(.*\)/PREFIX \1/ — but since PREFIX was already there, this added
another PREFIX in front:

PREFIX PREFIXline- 1 hello world

3,5s/PREFIX/prefix/ — Replaces PREFIX with prefix in line 3 to 5

line 3 → prefix PREFIX...


line 4 → prefix PREFIX...
line 5 → prefix PREFIX...

,p — shows the current file buffer

PREFIX PREFIXline- 1 hello world


PREFIX PREFIXline-2 hello world
prefix PREFIXline-3 ed is perhaps the oldest ediotr out there
prefix PREFIXline-4 end of file
prefix PREFIXSun Jun 15 [Link] PM IST 2025

ed / ex commands —

f — show name of f ile being edited

p — p rint the current line

a — a ppend at the current line

c — c hange the line

d — d elete the current line

i — i nsert line at the current position

j — j oin lines

s — s ubstitute using a regex pattern

m — m ove current line to position

u — u ndo latest change

Lecture 2.2 (Command line editors part 2)

🧪 What is pico —
pico is a simple, easy-to-use command-line text editor originally
developed as part of the Pine email client


1. Extremely user friendly for beginners
2. Common keybindings shown at the bottom
3. pico is not open source, which is why many Linux distributions now
use nano instead

common shortcuts

ctrl + o — Write out (save)

ctrl + x — Exit editor

ctrl + w — Search

ctrl + k — Cut a line

ctrl + u — Uncut (paste)

thunder@thunder-VMware-Virtual-Platform:~$ which pico


/usr/bin/pico
thunder@thunder-VMware-Virtual-Platform:~$ which nano
/usr/bin/nano
thunder@thunder-VMware-Virtual-Platform:~$ ls -l /usr/bin/pico
lrwxrwxrwx 1 root root 22 Oct 10 2024 /usr/bin/pico -> /etc/alternati
ves/pico
thunder@thunder-VMware-Virtual-Platform:~$ ls -l /bin/nano
-rwxr-xr-x 1 root root 279040 Oct 10 2024 /bin/nano
thunder@thunder-VMware-Virtual-Platform:~$ readlink -f /ur/bin/pic
o
thunder@thunder-VMware-Virtual-Platform:~$ readlink -f /usr/bin/pi
co
/usr/bin/nano

which pico — the command pico runs the executable (or link) located
at /usr/bin/pico

which nano — the command nano runs the executable (or link) located
at /usr/bin/nano

ls -l /usr/bin/pico — this means /usr/bin/pico is a symbolic link pointing to
/etc/alternatives/pico

The l at the beginning of the permissions ( lrwxrwxrwx ) confirms


it’s a symbolic (soft) link.

ls -l /bin/nano — this is an actual binary file — nano itself

It’s executable ( x permission), owned by root

readlink -f /usr/bin/pico — This follows the entire chain of symbolic links
and shows you the real destination

so even though you run pico , it actually ends up executing nano —
when you type pico , you are really running nano

readlink — print resolved symbolic links or canonical file names


readlink -f path — Resolves all symlinks to show the absolute, final target

of the path.
readlink -e path — like -f but only returns result if all path components

exist
readlink -m path — resolves path even if some component don’t exist

readlink -n path — Outputs the result without a trailing newline

readlink -q path — Suppresses most error messages

readlink -s path — silent mode, same as -q ; suppresses error messages
readlink -v path — Verbose output — prints what it’s doing

readlink -z path — Ends output with a null byte ( \0 ) instead of a newline

readlink --help — displays the help manual for readlink

readlink --version — shows the version information of readlink
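A quick sketch of the difference between plain readlink (immediate target only) and readlink -f (full resolution), using throwaway names in /tmp:

```shell
mkdir -p /tmp/rl-demo && cd /tmp/rl-demo
printf 'hello\n' > real.txt
ln -sf real.txt link1    # link1 -> real.txt
ln -sf link1 link2       # link2 -> link1 -> real.txt

readlink link2       # prints: link1 (one hop only)
readlink -f link2    # whole chain resolved to the real file
```

This mirrors the pico/nano case above: readlink /usr/bin/pico gives the next hop, while readlink -f walks the chain to the final binary.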

thunder@thunder-VMware-Virtual-Platform:~$ ls -l .bashrc
-rw-r--r-- 1 thunder thunder 3808 May 31 23:26 .bashrc

ls -l .bashrc — shows detailed information about the .bashrc file in your
home directory

-rw-r--r-- — where rw- means the owner can read and write, the first
r-- means the group can read, and the last r-- means others can read

what is .bashrc — it’s a startup script for interactive bash shells

contains personal shell configurations, like:

Aliases

Prompt settings

Environment variables

Nano shortcuts — [Link]


[Link]/dist/latest/[Link]
Some important keystrokes —

ctrl+s — Save current file

ctrl+o — Offer to write file (save as)

ctrl+r — Insert a file into current one

ctrl+x — Close buffer / exit from nano

Modes in vi —

command mode — Esc

🧠Purpose — Navigate, delete, copy, paste, or issue commands


this is the default mode in vi

Every keystroke is interpreted as a command

dd — Delete current line

yy — copy current line

p — paste below

u — undo

/word — search for the “word”

insert mode — i , o , a , I , O , A

🧠Purpose — Type and edit text like a normal text editor
Anything you type is inserted into the file

You are editing the content of the file here

to enter insert mode, press i to insert before the cursor, a to
insert after the cursor, or o to open a new line below

Press Esc to go back to command mode.

ex mode — :

💡Purpose : Execute extended commands


Entered from Command Mode by typing :

You’ll see a : appear at the bottom of the screen

:w — save the file

:q — Quit

:wq — save and quit

:set number — Show line numbers

:s/foo/bar/g — Replace “foo” with “bar”

Exiting vi —

:w — write out

:x — Write out and quit

:wq — write out and quit

:q — quit (if write out is over)

:q! — ignore changes and quit

0 — start of the current line

$ — end of the current line

w — beginning of next word

b — beginning of preceding word

:0 — first line in the file

1G — first line in the file

:n — nth line in the file

nG — nth line in the file

:$ — last line in the file

G — last line in the file

yy — copy current line to buffer

Nyy — copy next N lines, including current, into buffer

p — Paste buffer into text after current line

u — undo previous actions

:se nu — set line numbers

:se nonu — unset line numbers

/string — search forward for string

?string — search backward for string

n — move cursor to next occurrence of string

N — move to previous occurrence of string

For more commands in VI — [Link]

Lecture 2.3 (Command line editors part 03)

⛔ Emacs

Emacs is a powerful, extensible programming text editor used by
developers. It almost acts like a full computing environment

Features —

Modes — Specialized editing environments (example — Python
mode, Org mode, HTML mode)

Extensible — Written in Emacs Lisp; you can customize or create
new features

Built-in tools — includes terminal, file manager, calculator,
debugger, and more

Org mode — Advanced note-taking, to-do lists, agenda and
project management

Keyboard-centric — fully operable without a mouse —
everything is done via powerful keybindings

Programming IDE — Supports syntax highlighting, linting, Git,


compilation etc.

Emacs vs Other Editors —

Vim — Lightweight, very fast once mastered, modal editing

Emacs — More like an OS; deeply extensible, better for long-running
sessions

Nano — Beginner-friendly, basic editing, simple UI

Moving around —

ctrl + p — Move up one line

ctrl + b — Move left one char

ctrl + f — Move right one char

ctrl + n — Move down one line

ctrl + a — Go to beginning of current line

ctrl + e — Go to end of current line

ctrl + v — Move forward one screen

alt + < — Move to first line of the file

alt + b — Move left to previous word

alt + f — Move right to next word

alt + > — Move to last line of the file

alt + a — Move to beginning of current sentence

alt + e — Move to end of current sentence

alt + v — Move back one screen

Exiting emacs

ctrl + x ctrl + s — Save buffer to file

ctrl + z — Suspend emacs (keeps it running in the background)

ctrl + x ctrl + c — Exit emacs and stop it

Searching text

ctrl + s —Search forward

ctrl + r — Search backward

alt + x replace-string — replace a string

Copy Paste

alt + backspace — Cut the word before cursor

alt + d — Cut the word after cursor

ctrl + k — cut from cursor to end of line

alt + k — cut from cursor to end of the sentence

ctrl + y — Paste the content at the cursor

for more reference —


[Link]

Lecture 2.4 (Networking commands and SSH)

⛔ Accessing remote machines on command line —

Ways to gain remote access—

VPN access

A VPN (Virtual Private Network) creates a secure, encrypted
tunnel to a private network over the internet. It allows remote
users to access internal systems and services as if they were on
the local network.

SSH tunneling

SSH tunneling forwards network traffic securely through an
encrypted SSH connection. It’s often used to securely access
internal services or bypass firewalls

Remote desktop — x2go , rdp , pcoip

These protocols allow you to graphically access and control a
remote desktop as if you were sitting in front of it. x2go is
optimized for Linux, RDP is Microsoft’s protocol, and PCoIP is
used in VMware environments

Desktop over browser — Apache Guacamole

Apache Guacamole is a clientless remote desktop gateway that


lets you access your desktop through just a web browser. It
supports RDP , VNC and SSH without needing to install anything
on the client

Commercial, over internet: Teamviewer , AnyDesk , zoho assist …

Some Important ports —

21 ftp File transfer

22 ssh Secure shell

25 smtp Simple Mail Transfer Protocol

80 http Hypertext Transfer Protocol

443 https Secure Hypertext Transfer Protocol

631 cups Common Unix Printing System

3306 mysql MySQL database

What is a Port Number? — A port number is a 16-bit identifier used to
specify a particular service or application on a device in a network,
allowing multiple services (like web, email, FTP) to run on the same IP
address simultaneously

Port 21 is associated with FTP (File Transfer Protocol) which is used
to transfer files between a client and a server over a network

Port 22 is the default port used by SSH (Secure Shell), which


provides encrypted remote login and command execution over a
network, ensuring secure communication between client and server

Port 25 is the default port used by SMTP (Simple Mail Transfer


Protocol), which is responsible for sending emails between mail
servers across the internet.

Port 80 is the default port used by HTTP (Hypertext Transfer


Protocol), which handles unencrypted web traffic for loading
websites over the internet.

Port 443 is the default port used by HTTPS (Hypertext Transfer


Protocol Secure), which enables secure, encrypted communication
between a web browser and a server using SSL/TLS.

Port 631 is the default port used by CUPS (Common Unix Printing
System) to allow network printing and printer administration via
the Internet Printing Protocol (IPP).

Port 3306 is the default port used by MySQL database servers to


listen for and accept client connections, enabling database queries
and transactions over a network.

Firewall (a firewall is a security system (hardware or software) that


monitors and controls incoming and outgoing network traffic based on
predefined rules, acting as a barrier between trusted and untrusted
networks to protect systems from unauthorized access or attacks)

Ports open on my machine

Ports needed to be accessed on remote machine

Network routing over the port

Firewall controls at each hop

Protecting a server

SELinux

Security Enhanced Linux mode, available on Ubuntu too, apart from
server-grade flavors like CentOS, Fedora, RHEL, SuSE Linux etc.

Additional layer of access control on files and services

Role Based Access Control

Process sandboxing, least privilege access for subjects

Check using ls -lZ and ps -eZ

RBAC items: user (unconfined_u), role (object_r), type (user_home_t),


level (s0)

Modes: disabled, enforcing, permissive

Tools: semanage, restorecon

SELinux is recommended for all publicly visible servers

Network tools

ping To see if the remote machine is up

traceroute Diagnoses the hop timings to the remote machine

nslookup Ask for conversion of a name to IP address (and reverse)

dig DNS lookup utility (Domain Information Groper)

netstat Print network connections

[Link] for help with accessibility from public network

whois lookup Who owns which domain name

nmap (careful !) Network port scanner

wireshark (careful !) Network protocol analyzer

High Performance Computing

Look at [Link] for statistics

Accessing a remote HPC machine is usually over SSH

Long duration jobs are submitted to a job scheduler for execution

Raw data if large needs to be processed remotely before being


transferred to your machine (network changes? bandwidth?)

Comfort with command line is a must

thunder@thunder-VMware-Virtual-Platform:~$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 15
00
inet [Link] netmask [Link] broadcast 192.168.
85.255
inet6 fe80::20c:29ff:fe62:939b prefixlen 64 scopeid 0x20<link
>
ether [Link] txqueuelen 1000 (Ethernet)
RX packets 284280 bytes 335129717 (335.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 78604 bytes 24620823 (24.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536


inet [Link] netmask [Link]
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 2575 bytes 300420 (300.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0

TX packets 2575 bytes 300420 (300.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ifconfig — stands for interface configuration and is used to view
and configure network interfaces on a Linux system

ens33 — Ethernet Network Interface

inet [Link] — the IP address assigned to your machine on the


network

broadcast [Link] — The address used to send data to all hosts


in the subnet.

lo — Loopback Interface

Used for internal communication on your machine

inet [Link] —The local IP address for loopback (aka localhost)

inet6::1 — The IPv6 loopback address

thunder@thunder-VMware-Virtual-Platform:~$ nslookup [Link].a


[Link]
Server: [Link]
Address: [Link]#53

Non-authoritative answer:
[Link] canonical name = [Link].
Name: [Link]
Address: [Link]

nslookup [Link] — uses DNS (Domain Name System) to resolve the
domain name [Link] into its corresponding IP address

Address: [Link] — This is the final IP address of [Link] , which
your browser or any tool will use to connect to the IITM website

thunder@thunder-VMware-Virtual-Platform:~$ dig [Link]

; <<>> DiG 9.18.30-0ubuntu0.24.04.2-Ubuntu <<>> [Link]
m
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49597
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL:
1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;[Link]. IN A

;; ANSWER SECTION:
[Link]. 5 IN A [Link]

;; Query time: 29 msec


;; SERVER: [Link]#53([Link]) (UDP)
;; WHEN: Tue Jun 17 [Link] IST 2025
;; MSG SIZE rcvd: 59

dig [Link] — the dig command is used to query DNS servers to
resolve domain names to IP addresses — more detailed than nslookup

You asked your system to resolve [Link] . The DNS query


succeeded and returned [Link] as the IP address.

thunder@thunder-VMware-Virtual-Platform:~$ dig -x [Link]

; <<>> DiG 9.18.30-0ubuntu0.24.04.2-Ubuntu <<>> -x [Link]


0
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61875
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONA
L: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;[Link].[Link]. IN PTR

;; ANSWER SECTION:
[Link].[Link]. 5 IN PTR [Link].
[Link].[Link]. 5 IN PTR [Link]
t.

;; Query time: 30 msec


;; SERVER: [Link]#53([Link]) (UDP)
;; WHEN: Tue Jun 17 [Link] IST 2025
;; MSG SIZE rcvd: 125

WEEK 3
Lecture 3.1 (Combining commands and files) —

⛔ Executing multiple commands —

command1; command2; command3;

each command will be executed one after other

command1 && command2

command 2 will be executed only if command1 succeeds

command1 || command2

command2 will be executed only if command1 fails

thunder@thunder-VMware-Virtual-Platform:~$ ls ; date ; wc -l /etc/p


rofile ;
billo Desktop Downloads Music Pictures Public Templates tes
[Link]
clear Documents level1 mydir projects snap [Link] Video
s
Tue Jun 17 [Link] PM IST 2025
27 /etc/profile

first ls is executed, which lists the files and directories

then date is executed which shows the date and time

then wc -l /etc/profile is executed, which counts the number of lines in
/etc/profile

thunder@thunder-VMware-Virtual-Platform:~$ ls || date
billo Desktop Downloads Music Pictures Public Templates tes
[Link]
clear Documents level1 mydir projects snap [Link] Video
s

as the first command succeeded, the 2nd command doesn’t run

thunder@thunder-VMware-Virtual-Platform:~$ /blah || ls && date &&
wc -l /etc/profile
bash: /blah: No such file or directory
billo Desktop Downloads Music Pictures Public Templates tes
[Link]
clear Documents level1 mydir projects snap [Link] Video
s
Sun Jun 29 [Link] AM IST 2025
27 /etc/profile

as the first command ( /blah ) failed, ls after || ran; since ls
succeeded, date and wc also ran

thunder@thunder-VMware-Virtual-Platform:~$ ls /blah && date && w


c -l /etc/profile
ls: cannot access '/blah': No such file or directory

as the first command failed, the && chain stopped and the other
commands didn’t run

thunder@thunder-VMware-Virtual-Platform:~$ ls && date -Q && wc


-l /etc/profile
billo Desktop Downloads Music Pictures Public Templates tes
[Link]
clear Documents level1 mydir projects snap [Link] Video
s
date: invalid option -- 'Q'
Try 'date --help' for more information.

as the 1st command ran successfully, the 2nd command was
executed; but since it threw an error, the next command was not
executed
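What && and || actually test is the exit status of the previous command, available in the special parameter $? ; zero means success, non-zero means failure. A short sketch:

```shell
ls / > /dev/null
echo $?                  # 0 — ls succeeded

ls /blah 2> /dev/null
echo $?                  # non-zero — ls failed

false || echo "runs because the left side failed"
true  && echo "runs because the left side succeeded"
```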

File descriptors —

This image visually represents the concept of standard streams in a
Unix/Linux environment, particularly focusing on input and output
redirection in the command-line (shell). Here's what it means:

1. stdin (Standard Input) – File Descriptor 0

Represented by the keyboard on the left.

This is where the command takes its input from by default


(usually from the keyboard).

It’s identified internally by file descriptor 0.

2. stdout (Standard Output) – File Descriptor 1

Represented by the green path leading to the monitor.

This is where the normal output of the command goes (typically


displayed on the screen).

Identified by file descriptor 1.

3. stderr (Standard Error) – File Descriptor 2

Represented by the red path leading to the same monitor.

This is where error messages are sent, separately from standard


output.

Identified by file descriptor 2.

Why this separation matters?

Allows redirecting output and errors independently.

Example commands:

command > [Link] → Redirects stdout to a file.

command 2> [Link] → Redirects stderr to a file.

command > [Link] 2>&1 → Redirects both stdout and stderr to same file.
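The combined form is worth a quick demo, because order matters: 2>&1 must come after the > redirection, so that stderr is duplicated onto the already-redirected stdout. A sketch with made-up file names:

```shell
# Send both streams of one command into the same file
ls "$HOME" /blah > all.log 2>&1   # $HOME listing succeeds, /blah errors

grep -c '' all.log                # line count: listing plus error together
grep 'blah' all.log               # the stderr message was captured too
```

Writing `2>&1 > all.log` instead would redirect stderr to the terminal's stdout first, then move stdout — the error would still appear on screen.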

command > file1

This image explains how standard output redirection works in


Linux/Unix-like systems.

✅ What this diagram shows —


The user inputs a command.

The command takes input from the keyboard ( stdin , file descriptor
0).

The standard output ( stdout , fd 1) is redirected to a file named file1 .

The standard error ( stderr , fd 2) is not redirected, so error messages


still go to the screen.

📌 Important Note
"Warning: contents of file1 will be overwritten. New file1 will be created
if it does not exist."

this refers to what happens when you use the > redirection operator:

command > file1

If file1 already exists: it will be overwritten.

If file1 does not exist: it will be created.

Only stdout (normal output) goes to file1 .

Any errors still show up on the terminal.

ls > [Link]

ls lists directory contents.

The result will be saved in [Link] .

If there’s an error (like invalid directory), it will still appear on the


terminal.

To redirect both standard output (stdout) and standard error (stderr)


to the same file you can use the following syntax:

command > file1 2>&1

Example —

ls myfolder > [Link] 2>&1

If myfolder exists → its contents go to [Link] .

If it doesn't exist → the error message (like No such file or directory ) also
goes to [Link] .

ls -l > file1

📌 What this command did —

It ran ls -l in your current directory.

Instead of printing the output to the terminal (stdout), it redirected it


to a file named file1 .

Since file1 didn’t exist before, it was created.

If file1 already existed, it was overwritten with the new output.

As you can now see in the second ls -l result, file1 is there and its
size is 941 bytes, indicating the output was written correctly.

thunder@thunder-VMware-Virtual-Platform:~$ ls -l /blah > file1


ls: cannot access '/blah': No such file or directory

You tried to list the contents of a nonexistent directory: /blah .

This triggered the error message shown above.

But look carefully:

The error was still shown on your terminal.

The file file1 was created or overwritten, but it's empty ( 0 bytes ).

🧠Why —
> only redirects stdout

But the error ( /blah not found) is sent to stderr .

Since stderr wasn't redirected, it went to your screen, not to file1 .

so does it make my content in file1 get lost?

Yes, absolutely — when you use:
ls -l /blah > file1

or even:
any_command > file1

👉 It overwrites the existing contents of file1 .

even if the directory doesn’t exist, like /blah here, does it
make my content in file1 get lost?
Yes, exactly. Even if the directory /blah doesn't exist and the
command fails, your file file1 is still emptied (i.e., its previous
content is lost) if you use the > redirection operator.
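This truncation is easy to verify: the shell opens and empties file1 before the command even runs, so a failing command still wipes it. A sketch with a throwaway file1:

```shell
echo "precious content" > file1
ls /blah > file1 2> /dev/null    # ls fails, but file1 was truncated first
wc -c < file1                    # prints 0 — the old contents are gone
```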

cat — concatenate files and print on the standard output


🔹Display the contents of files
🔹Combine multiple files
🔹Redirect file contents into other files
cat [Link]

show the content of [Link] on the terminal

cat [Link] [Link]

Prints contents of both files one after the other

cat [Link] [Link] > [Link]

Combines both files into [Link] .

cat [Link] >> [Link]

Adds [Link] contents at the end of [Link] .
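A minimal run of the cat forms above, with made-up file names:

```shell
printf 'one\n' > a.txt
printf 'two\n' > b.txt

cat a.txt b.txt > both.txt    # combine the two files
cat a.txt >> both.txt         # append a.txt again at the end

cat both.txt
# one
# two
# one
```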

This diagram illustrates the append redirection operator >> in
Unix/Linux shell.
✅ Explanation of the Diagram:
🔹 Command Used:
command >> file1

🔍 What Happens:
stdin (0) : Takes input from the keyboard (not redirected here).

stdout (1) : The standard output of the command is appended to file1 .

stderr (2) : Standard error still goes to the screen, not the file.

If file1 does not exist, it is created.

If file1 already exists, new output is added to the end—nothing is


erased.

🧾 Text in the Image:

✅ Green Note:
Contents will be appended to file1.

⚠️ Red Note:
New file1 will be created if it does not exist.

This behavior is exactly opposite of > , which overwrites the file.


🧪 Example:
echo "Hello, again!" >> [Link]

Adds the line to [Link] without removing old content.

❗ stderr is not affected:


To redirect both output and error and append:

command >> file1 2>&1

Lecture 3.2 (Redirection) —


This diagram explains how to redirect only the error output ( stderr ) to a
file using:

command 2> file1

✅ Explanation of Each Part:


🔹 2>

This tells the shell:


➜ Redirect file descriptor 2 (which is stderr )
➜ To a file named file1 .

🔹 file1

Will receive only the error messages, not the regular output.

🧠 What Happens Internally:


Descriptor | Stream         | Destination
0          | stdin (input)  | Keyboard (unchanged)
1          | stdout         | Still goes to screen
2          | stderr         | Goes to file1

📌 Red Text Warning in the Image:


"Warning: contents of file1 will be overwritten. New file1 will be created
if it does not exist."

✅ This means:
If file1 exists → it will be cleared first (overwritten).

If file1 does not exist → it will be created.

🧪 Example:
ls /nonexistentfolder 2> [Link]

The error message like:

ls: cannot access '/nonexistentfolder': No such file or directory

goes to [Link]

Normal output (if any) would still show on your terminal.

Want to append errors instead of overwrite? Use:

command 2>> file1

thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME /billo


ls: cannot access '/billo': No such file or directory
/home/thunder:
billo Desktop Downloads level1 mydir projects snap [Link]
shrc Videos
clear Documents file1 Music Pictures Public Templates test.t
xt

thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME /blah 2> b
[Link]
/home/thunder:
[Link] clear Documents file1 Music Pictures Public Templat
es [Link]
billo Desktop Downloads level1 mydir projects snap [Link]
rc Videos

🔍 What it did:
ls $HOME→ lists the contents of your home directory /home/thunder

(works fine).

/blah → doesn't exist, so ls throws an error message to stderr .

2> [Link] → redirects the error message ( stderr ) to [Link] .

🧪Want to view what was captured?


thunder@thunder-VMware-Virtual-Platform:~$ cat [Link]
ls: cannot access '/blah': No such file or directory

this shows the error message that was captured after running ls

$HOME /blah 2> [Link]

This diagram demonstrates how to separately redirect standard output
(stdout) and standard error (stderr) into two different files using:

command > file1 2> file2

✅ Explanation of the Parts:


Stream | Descriptor | Description                  | Redirected To
stdin  | 0          | Input from keyboard          | (unchanged)
stdout | 1          | Normal output (green arrow)  | file1
stderr | 2          | Error messages (blue arrow)  | file2

🧾 In Simple Terms:
> → redirects stdout to file1

2> → redirects stderr to file2

So you cleanly separate success output and error messages.

⚠️ Warning in Red (in image):

"Contents of file1 and file2 will be overwritten."

✅ Yes, both file1 and file2 :

Will be created if they don’t exist.

Will be overwritten (emptied first) if they do exist.

🧪 Example:
ls /home/thunder /blah > [Link] 2> [Link]

If /home/thunder exists, its contents go to [Link] .

If /blah doesn't exist, the error goes to [Link] .

💡 Useful Variants:
Append instead of overwrite:

command >> file1 2>> file2

Redirect both to same file:

command > [Link] 2>&1

thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME /blah > out


[Link] 2> [Link]
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link]
/home/thunder:
[Link]
billo
clear
Desktop
Documents
Downloads
file1
level1
Music

mydir
[Link]
Pictures
projects
Public
snap
Templates
[Link]
[Link]
Videos
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link]
ls: cannot access '/blah': No such file or directory

Your Code — ls $HOME /blah > [Link] 2> [Link]

🧠 What It Did:
Part Purpose Result
$HOME A valid path Contents listed normally
/blah Invalid path Triggered an error
> Redirected stdout Wrote normal output to [Link]
2> Redirected stderr Wrote error message to [Link]

📂 The Output Files:


📄 [Link] (169 bytes)

cat [Link]

Shows the complete list of files and directories under /home/thunder

✔️ This is stdout (normal output)


⚠️ [Link] (53 bytes)

cat [Link]

Shows:

ls: cannot access '/blah': No such file or directory

✔️ This is stderr (error message)


🔁 Recap in Simple Words:
✅ stdout → goes to [Link]

❌ stderr → goes to [Link]

📎 The command keeps the output clean and organized, without mixing success and failure outputs.

🧪 Want to Combine Both in One File?


Use:

ls $HOME /blah > [Link] 2>&1

Or append instead:

ls $HOME /blah >> [Link] 2>> [Link]

This diagram explains how the input redirection operator < works in
Linux/Unix shells.
✅ Syntax:
command < file1

🔍 What This Means:


Instead of typing input from the keyboard ( stdin ), you tell the shell:

“Take the contents of file1 and feed it to command as input.”

📊 Stream Mapping in the Diagram:


Stream   Descriptor   Role                     Source/Destination
stdin    0            Input stream             From file1
stdout   1            Output (normal) stream   Goes to screen
stderr   2            Error stream             Goes to screen

🧪 Example Use Case:


Let's say file1 contains:

5
10
15

Run:

cat < file1

This is the same as:

cat file1

Because cat reads input from stdin , and we redirected stdin from file1 .

🧠 Why Use < Instead of Just Giving the File?
While most commands (like cat , sort , wc , etc.) can take a file name
directly, < file1 becomes very useful in:

Shell scripting

Input testing (for commands that only read from stdin, not files)

Command substitution and pipelines

Feeding input to interactive programs (e.g. ftp , python , etc.)
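For example, a small scripting sketch of `<` in action, feeding a file to a loop that reads stdin (the /tmp file name is illustrative):

```shell
# Sum the numbers in a file by redirecting it into the loop's stdin.
printf '5\n10\n15\n' > /tmp/nums.txt
total=0
while read -r n; do
  total=$((total + n))
done < /tmp/nums.txt
echo "$total"   # prints 30
```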

🚫 file1 is not modified


Unlike > or >> , which can change a file, the < operator:

Only reads from a file

Never alters it

In the lecture —

wc /etc/profile

The output:

27 97 581 /etc/profile

This shows:

27 lines

97 words

581 bytes

from /etc/profile

👇 Next:
wc [Link]

Output:

5 35 327 [Link]

That means:

5 lines

35 words

327 bytes

inside [Link]

🔍 The Important One:


wc < [Link]

This uses input redirection ( < ) to send [Link] to the wc command as stdin .

Output:

5 35 327

Notice:

It gives the same line/word/byte count, but does not print the
filename.

Because wc is not reading directly from a file, but from stdin .
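A quick sketch of the difference (the demo file under /tmp is illustrative):

```shell
# wc with a file argument prints the filename; wc reading stdin does not.
printf 'one two\nthree\n' > /tmp/wcdemo.txt
wc -l /tmp/wcdemo.txt     # filename shown:  2 /tmp/wcdemo.txt
wc -l < /tmp/wcdemo.txt   # no filename:     2
```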

✅ Summary of What You Learned


Command Meaning
wc [Link] Counts lines, words, bytes in file and shows filename
wc < [Link] Same counts, but reads via stdin (no filename shown)
cat [Link] Displays contents of the file
[Link] Was created earlier by redirecting stderr with 2>

This image explains how input/output redirection works in a Unix/Linux
shell (like Bash), specifically for the command:

command > file1 2>&1

🔍 Let's break it down:


1. Streams Explained
In Unix/Linux, every process has 3 standard data streams:

0 → stdin (Standard Input): Input to the command (usually from the keyboard).

1 → stdout (Standard Output): Normal output from the command.

2 → stderr (Standard Error): Error messages from the command.

2. What this command does

command > file1 2>&1

> file1 : Redirects stdout ( 1 ) to file1 . This means normal output is written to file1 .

2>&1 : Redirects stderr ( 2 ) to wherever stdout ( 1 ) is currently going — which is now file1 .

➡️ So both normal output and error messages will go to file1 .
📁 Final Effect
Input: Comes from keyboard (stdin)

Output (stdout): Written to file1

Errors (stderr): Also written to file1

🔴 Warning: The existing contents of file1 will be overwritten.

✅ Use Case Example


ls /nonexistent > [Link] 2>&1

This will try to list a directory that doesn't exist.

Both the error message and any output (if any) will go to [Link] .

An example using a command that produces both standard output and standard error —

command > file1 2>&1

🧪 Example Command:
(ls /etc/passwd /notfound) > [Link] 2>&1

🔍 Explanation:
ls /etc/passwd /notfound tries to list two files:

/etc/passwd — this exists ✅ (stdout)

/notfound — this does not exist ❌ (stderr)

> [Link] — redirects stdout to [Link]

2>&1 — redirects stderr to the same destination as stdout

📝 Contents of [Link] after the command:

ls: cannot access '/notfound': No such file or directory

You’ll see both:

✅ The successful output (listing /etc/passwd )

❌ The error message (missing /notfound )

All in one file: [Link] .


💡 Pro Tip:
If you only want to capture errors in a separate file, use:

ls /etc/passwd /notfound 2> [Link]

ls $HOME /blah > file1 2>&1

🧠 What it does:
ls $HOME /blah : Lists contents of two directories:

$HOME → /home/thunder ✅ (exists)

/blah ❌ (does not exist)

> file1 → Redirects stdout (normal output) to file1

2>&1 → Redirects stderr (errors) to wherever stdout is going (→ file1 )

The image you're looking at illustrates how pipes ( | ) work in the
Unix/Linux shell.
🔧 Syntax:
command1 | command2

This is called a pipeline.


📊 What’s Happening in the Diagram:
1. command1 runs first

It takes input from stdin (usually keyboard or a file).

Its stdout (1) — the normal output — is piped into the stdin (0) of
command2 .

Its stderr (2) — errors — are not passed to command2 . They go directly to the screen (or terminal).

2. command2 receives command1 's output

It reads from stdin (0) — which now comes from command1 's stdout.

It produces its own stdout (1) and stderr (2) — both go to your
screen/terminal.

🧠 Key Concept:
Only stdout (1) from command1 is piped to command2 .

stderr (2) from both commands goes to the terminal (unless redirected separately).

✅ Example:
ls /home | grep Documents

ls /home lists all items in /home

grep Documents filters the list for items containing "Documents"

Only ls 's stdout is piped to grep . Any error from ls will appear on
screen directly.

💡 Want to include stderr in the pipe too?


Use:

command1 2>&1 | command2

Now both stdout and stderr from command1 are piped to command2 .
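A sketch showing the effect (the missing directory name is illustrative):

```shell
# Without 2>&1 the error would bypass the pipe; with it, wc counts the
# error line that travelled through the pipe.
n=$(ls /no-such-dir-98765 2>&1 | wc -l)
echo "$n"   # at least 1 line: the error message
```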

This screenshot shows an example of output redirection ( > ) and piping


( | ) in action in a Linux terminal.

✅ 1. ls /usr/bin > file1

Lists all files in the /usr/bin directory.

The > symbol redirects the output to a file named file1 .

So instead of printing on the screen, the list of files is saved inside file1 .

✅ 2. wc -l file1

wc = word count

-l = line count

This command counts how many lines are in file1 .

📄 In this case:
2596 file1

It means there are 2596 lines in the file file1 — each line represents one
file or directory entry from /usr/bin .
✅ 3. less file1

Opens file1 using the less viewer so you can scroll and view its
contents page by page.

Useful when the file has many lines (like 2596 here).

✅ 4. ls /usr/bin | wc -l

This time, no redirection is used.

Instead, a pipe ( | ) connects the output of ls /usr/bin directly to wc -l .

So it counts how many lines are output from ls , without saving them
in a file.

📄 Output:
2596

Which is the same result as before.

💡 Summary:
Command What it Does
ls /usr/bin > file1 Saves list of files in /usr/bin to file1
wc -l file1 Counts how many files are listed (2596)
less file1 Opens the file in a scrollable viewer
ls /usr/bin | wc -l Counts files directly using a pipe (no file created)

This diagram explains how redirection and piping work together in the
shell command:

command1 | command2 > file1

🧠 Conceptual Breakdown
🔹 command1 | command2

This is a pipe:

The stdout (1) of command1 is connected to the stdin (0) of command2 .

In other words, the output of command1 becomes the input to command2 .

🔹 > file1

Redirects the stdout of command2 to the file file1 .

If file1 already exists, it will be overwritten ( ⚠️ warning shown in red).

🔹 stderr

The stderr (2) of both command1 and command2 are not redirected:

They go to the terminal (monitor) by default.

You see them on the screen if there are any errors.

📊 What the Image Shows


Component   Role
stdin       Input to command1 , usually from keyboard or a script
command1    Executes and sends output (stdout) to command2 via pipe
command2    Receives input via pipe, produces output
> file1     Redirects command2 's stdout into file1
stderr      Errors from command1 and command2 go to the terminal (not file1)

✅ Example:
ls /usr/bin | grep python > [Link]

What happens:

1. ls /usr/bin lists all files in /usr/bin .

2. grep python filters only entries that contain the word python .

3. The results are written into [Link] via > [Link] .

4. Any errors (like permission denied or invalid directories) will still be shown on the screen.

thunder@thunder-VMware-Virtual-Platform:~$ ls /usr/bin | wc -l > file1

thunder@thunder-VMware-Virtual-Platform:~$ more file1
1804

🧠 Step-by-Step Breakdown:
1. ls /usr/bin

Lists all files and directories inside /usr/bin (typically many commands
and utilities).

2. | wc -l

Pipes ( | ) the list to wc -l , which counts the number of lines — each line = one file/dir.
So you're counting how many items are in /usr/bin .

3. > file1

Redirects the final output (the line count) to a file named file1 .

Part          Description
ls /usr/bin   Lists files in /usr/bin
| wc -l       Counts the lines
> file1       Saves that count into file1
more file1    Displays the contents of file1 (which is 1804)

What is /dev/null ?
/dev/null is a special file in Unix/Linux systems — often called the "bit bucket" or "black hole" of the system.


🧠 What is /dev/null ?

It’s a special device file that discards all data written to it.

If you read from it, it gives you nothing (empty output).

It's used when you want to get rid of output (stdout or stderr).

Action                           Command Example            Effect
Discard standard output          command > /dev/null        Hides normal output
Discard standard error           command 2> /dev/null       Hides error messages
Discard both stdout and stderr   command > /dev/null 2>&1   Silent — no output at all
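A sketch of the usual scripting pattern: silence everything but still act on the exit status (the missing path name is illustrative):

```shell
# Output and errors are discarded, but the exit status still tells us
# whether the command succeeded.
if ls /no-such-path-424242 > /dev/null 2>&1; then
  state="exists"
else
  state="missing"
fi
echo "$state"   # missing
```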

This diagram explains the meaning of the shell command:

command > file1 2> /dev/null

Let’s break this down visually and conceptually using the image:

🔍 Understanding the Components


🟩 > file1

Redirects stdout (file descriptor 1 ) to the file file1 .

So all normal output from command is saved to file1 .

⚠️ Overwrites the contents of file1 if it already exists.

🟥 2> /dev/null

Redirects stderr (file descriptor 2 ) to /dev/null .

That means any error messages produced by command are
completely discarded — sent to the "black hole".

🔵 stdin remains unchanged

The command will still take input from the keyboard (or a script),
unless redirected.

🖥️ What the Diagram Shows


Descriptor Description Destination
stdin (0) Input from keyboard Passed to command
stdout (1) Normal output from command Redirected to file1
stderr (2) Error output from command Redirected to /dev/null

✅ Example in Practice
ls /usr/bin /notarealfile > [Link] 2> /dev/null

Lists contents of /usr/bin ✅


Tries to list /notarealfile ❌ (causes an error)
Output of /usr/bin goes to [Link]

Error about /notarealfile is discarded

🧠 Summary
Component Purpose
> Redirect stdout
2> Redirect stderr
/dev/null Discards whatever is sent to it
file1 Will contain only normal output

thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME /blah
ls: cannot access '/blah': No such file or directory
/home/thunder:
[Link] Desktop file1 Music Pictures snap [Link]
billo Documents file2 mydir projects Templates Videos
clear Downloads level1 [Link] Public [Link]
thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME /blah > file1
ls: cannot access '/blah': No such file or directory
thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME /blah > file1 2> /dev/null
thunder@thunder-VMware-Virtual-Platform:~$ cat file1

🧪 Step-by-step Explanation
🔹 Command 1:
ls $HOME /blah

Lists files in two directories:

✅ $HOME → /home/thunder (exists)

❌ /blah (does not exist)

Result:

Terminal shows:

The error message from /blah ( stderr )

The listing of /home/thunder ( stdout )

🔹 Command 2:
ls $HOME /blah > file1

This redirects only stdout to file1

But stderr (error about /blah ) still goes to the screen

Terminal shows:

ls: cannot access '/blah': No such file or directory

file1 contains:

The listing of /home/thunder only

🔹 Command 3:
ls $HOME /blah > file1 2> /dev/null

> file1 → Redirects stdout to file1

2> /dev/null → Sends stderr (error) to the "black hole"

Result:

Nothing is shown on screen ✅


file1 contains only:

/home/thunder:
[Link]
billo
...
Videos

Error message about /blah is silently discarded 💨


✅ Summary Table:
Command                               Where stdout goes   Where stderr goes         Shown on screen?
ls $HOME /blah                        Screen              Screen                    Yes
ls $HOME /blah > file1                file1               Screen                    Yes (error)
ls $HOME /blah > file1 2> /dev/null   file1               Discarded ( /dev/null )   No

What is tee ?
The tee command in Linux is a useful tool for both redirecting and
displaying output at the same time.
🧠 What is tee ?

tee takes the output from a command and:

1. Writes it to a file

2. Also displays it on the screen ( stdout )

So it's like a “T-junction” pipe: it splits the stream into two directions.
📌 Syntax:
command | tee filename

✅ Example:
ls /usr/bin | tee [Link]

Lists all files in /usr/bin

Output goes to:

The screen ✅
Saved in [Link] ✅
➕ With append:
ls /etc | tee -a [Link]

-a means append (add to the file instead of overwriting).

🔄 Multiple files:
ls | tee file1 file2

Duplicates the output to both file1 and file2 and the screen.

📊 Use Case:
Without tee :

command > [Link]

🔹 Output only goes to the file — you don’t see anything.


With tee :

command | tee [Link]

🔹 Output goes to both the file and the screen — you see and save it.
💡 Summary Table:
Feature > or >> tee

Writes to file ✅ ✅
Displays on screen ❌ ✅
Can append with -a ✅( >> ) ✅( tee -a )
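A runnable sketch (the /tmp file names are illustrative):

```shell
# One line of output goes to the screen AND to two files at once.
echo "hello from tee" | tee /tmp/tee1.txt /tmp/tee2.txt
```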

This diagram visually explains how the tee command works in the shell:
🔧 Command:
command1 | tee file1

🧠 What Happens Here:


1. command1 runs:

It takes input ( stdin ) — usually from the keyboard or script.

It sends:

stdout (1) → to the tee command

stderr (2) → directly to the screen

2. tee file1 receives stdout from command1 :

It writes this output to file1

And at the same time, it prints the output to the terminal


(screen)

📊 Diagram Breakdown:
Stream       Flow Path
stdin (0)    Input to command1
stdout (1)   command1 → tee → written to file1 and displayed on screen
stderr (2)   Goes directly to the terminal — not affected by tee

⚠️ file1 will be overwritten by default. Use tee -a file1 to append instead.
✅ Real Example:
ls /usr/bin | tee [Link]

Lists files in /usr/bin

Saves the list in [Link]

Also shows the same list on screen

🔄 Append instead of overwrite:


ls /etc | tee -a [Link]

🧠 Summary
What tee Does                     Behavior
Writes command output to file     ✅ (default overwrite)
Shows same output on screen       ✅
Can append instead of overwrite   ✅ with option -a
Captures only stdout              ✅ ( stderr is unaffected)

thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME | tee file1


thunder@thunder-VMware-Virtual-Platform:~$ cat file1
thunder@thunder-VMware-Virtual-Platform:~$ ls $HOME | tee file1 file2
thunder@thunder-VMware-Virtual-Platform:~$ cat file2

✅ Step-by-step Breakdown:
🔹 Command 1:

ls $HOME | tee file1

ls $HOME : Lists everything in your home directory.

| tee file1 : Sends the output both:

To the screen ✅
To the file file1 ✅
Then:

cat file1

Confirms that file1 contains the same output that was displayed on your
terminal.
🔹 Command 2:
ls $HOME | tee file1 file2

💡 Nice! This is a rarely used feature:


By giving multiple filenames to tee , you're telling it to duplicate the
output to all listed files.

Output went to:

file1 ✅
file2 ✅
And also displayed on screen ✅
Then:

cat file2

Shows that file2 also got the same content.


⚠️ Important Notes:
tee overwrites files by default.

Use -a if you want to append instead:

ls $HOME | tee -a file1 file2

tee file1 file2 (multiple file operands) is standard behavior — POSIX specifies that tee accepts multiple output files — though some very old or minimal implementations may limit how many files are supported.

🧠 Summary
Command              Effect
command | tee file   Save output to a file and also display it
tee file1 file2      Save same output to both files
tee -a file1         Append instead of overwrite

Lecture 3.3 (Software Management - Part 01)

⛔ Need for a Package Manager —
● Tools for installing, updating, removing, managing software
● Install new / updated software across network
● Package – File look up, both ways
● Database of packages on the system including versions
● Dependency checking
● Signature verification tools
● Tools for building packages

amd64 | x86_64

Refers to 64-bit architecture used in modern PCs and servers,


compatible with Intel and AMD processors.
Most commonly used in desktop and laptop systems today.

i386 | x86

Refers to 32-bit architecture, originally from Intel 386 processors.


It supports older hardware and has limitations on memory usage.

arm
Architecture used in smartphones, tablets, and Raspberry Pi.
Known for power efficiency, especially in mobile and embedded
systems.

ppc64el | OpenPOWER

Refers to 64-bit PowerPC architecture with little-endian byte order.


Used in IBM systems and some high-performance computing ( HPC )
environments.

all | noarch | src

all : Architecture-independent packages (e.g., text files).

noarch : No specific hardware dependency (common in Python/Java).

src : Source code packages, not yet compiled for any architecture.

Linux package management tools are based on the package format type: RPM or DEB. Here's a two-line explanation of each type and its tools:
RPM (Red Hat Package Manager)

rpm : Low-level tool to install, query, verify, update, and remove .rpm packages.

yum (Yellowdog Updater, Modified): High-level command-line tool for installing packages and resolving dependencies on Red Hat-based systems.

dnf (Dandified YUM): Modern replacement for yum , faster and more efficient, used in Fedora and newer Red Hat distributions.

DEB (Debian Package)

apt (Advanced Package Tool): High-level tool for handling .deb packages, resolving dependencies and managing repositories.

dpkg : Low-level tool to install and manage .deb packages without dependency resolution.

dpkg-deb : Utility to create, extract, or provide information about .deb packages.

synaptic : Graphical frontend for APT, allows managing packages with a GUI.

aptitude : Text-based frontend for APT with a rich interface and advanced features.

Package management in UBUNTU using apt

Inquiring package db —
1. apt-cache search keyword

This command searches for packages whose name or description


contains the specified keyword.
It is helpful when you're not sure of the exact package name.
It returns a list of matching packages with short descriptions.
2. apt-cache pkgnames

This lists all available package names in the APT package database,
one per line.
It doesn't show descriptions—just the names, which is useful for
scripting or searching manually.
Running it without arguments lists all packages; with an argument, it
filters by prefix.
3. apt-cache show -a package

Displays detailed information about a specific package, such as


version, dependencies, and description.
The -a option lists all versions of the package available in the
repositories.

It helps you understand what the package does and choose the right
version to install.

Package Priorities —

required — essential to proper functioning of the system


(Core packages necessary for the basic functioning of the system.
Removing them can make the system unusable or fail to boot.)

important — provides functionality that enables the system to run well


(Packages that ensure the system runs reliably and can be
maintained.
They are not absolutely critical but still essential for usability.)

standard — included in a standard system installation


(Commonly installed packages in a typical system installation.
Includes editors, mail readers, and system utilities.)

optional — can omit if you do not have enough storage


(Useful packages that enhance functionality but aren’t essential.
They are safe to skip if disk space is limited.)

extra — could conflict with packages with higher priority, has specialized requirements, install only if needed


(Packages with special dependencies or potential conflicts.

Installed only when explicitly requested, often for advanced or niche
needs.)

thunder@thunder-VMware-Virtual-Platform:~$ apt-cache show fortunes
Package: fortunes
Architecture: all
Version: 1:1.99.1-7.3build1
Priority: optional
Section: universe/games
Source: fortune-mod
Origin: Ubuntu

Here Priority: optional means this is not essential; it can be skipped if disk space or needs are limited.

Mono/CLI — software built using the .NET-compatible Mono runtime and Common Language Infrastructure.

thunder@thunder-VMware-Virtual-Platform:~$ apt-cache show fortunes
Package: fortunes

Architecture: all
Version: 1:1.99.1-7.3build1
Priority: optional
Section: universe/games
Source: fortune-mod
Origin: Ubuntu

Here the Section: universe/games means it is categorized under "games" in the "universe" repository.

This illustrates checksum algorithms used to verify file integrity by generating fixed-length hash values (also called digests ).

🔹 md5sum — 128-bit

Generates a 128-bit (32-character hexadecimal) hash.

Fast but insecure due to known vulnerabilities; not recommended for security-critical use.

🔹 SHA1 — 160-bit

Produces a 160-bit hash, stronger than MD5 but still considered broken for cryptographic purposes.

Sometimes used for checksums but no longer secure for authentication or signatures.

🔹 SHA256 — 256-bit

Generates a 256-bit hash (64 hexadecimal characters).

Part of the SHA-2 family; secure and recommended for file integrity
and digital signatures.
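A sketch of checksums in practice, using the -c (check) mode of GNU coreutils sha256sum (the /tmp file names are illustrative, not from the lecture):

```shell
# Record a file's hash, verify it, then show that any edit breaks it.
printf 'hello checksum\n' > /tmp/sum_demo.txt
sha256sum /tmp/sum_demo.txt > /tmp/sum_demo.sha256   # save hash + name
sha256sum -c /tmp/sum_demo.sha256                    # reports: ... OK
printf 'tampered\n' >> /tmp/sum_demo.txt             # change the file
sha256sum -c /tmp/sum_demo.sha256 || echo "verification failed"
```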

thunder@thunder-VMware-Virtual-Platform:~$ cat meow1.txt meow2.txt
this file is edited by hagu.
this file is edited by thunder
thunder@thunder-VMware-Virtual-Platform:~$ md5sum meow1.txt
29c23bed0ea90bac9018c50dfa322e86 meow1.txt
thunder@thunder-VMware-Virtual-Platform:~$ md5sum meow2.txt
3c1c140a2fa5b08388d0b2682fd493d2 meow2.txt
thunder@thunder-VMware-Virtual-Platform:~$ sha256sum meow1.txt
db2f8f1d6f1cc4e4ad74eb8e7a70ef98740048d392096b4910901d3e2005b3fb meow1.txt

3. Generated MD5 hashes:

md5sum [Link]
→ 29c23bed0ea90bac9018c50dfa322e86
md5sum [Link]
→ 3c1c140a2fa5b08388d0b2682fd493d2

The two hashes are different, proving that even a small difference in
content changes the checksum.
4. Generated SHA256 hash:

sha256sum [Link]
→ db2f8f1d6f1cc4e4ad74eb8e7a70ef98740048d392096b4910901d3e2005b3fb

This is a longer, 256-bit hash, providing much stronger integrity checking than MD5 .
🧠 Conclusion:
✅ Checksums verify file content integrity.

✅ Even a single character change results in a completely different checksum.

🔐 Use SHA256 or stronger for secure verification; avoid MD5 for security-sensitive tasks.

Lecture 3.4 (Software Management — Part 02)

⛔ Only sudoers can install/ upgrade/ remove packages
/etc/sudoers

In Linux systems, only users with sudo privileges (sudoers) are allowed
to install, upgrade, or remove packages. This protects the system from
unauthorized or accidental changes.

/etc/sudoers File

This is the configuration file that defines who can run commands as
root or other users using sudo .

It determines which users or groups have administrative privileges.

/etc/apt
Files : sources.list
Folder : sources.list.d

📁 /etc/apt/

This directory contains APT (Advanced Package Tool) configuration files used by Debian-based systems (like Ubuntu) for package management.
📄 sources.list (File)

This is the main file listing software repositories (URLs) where APT
looks for packages.

Each line specifies a source like:

deb [Link] focal main restricted

It defines where and what packages the system can install or update.

📂 sources.list.d/ (Folder)

This is a directory for additional repository list files.

Each file inside can define a separate third-party or PPA repository.

Example use:

/etc/apt/sources.list.d/[Link]

🛠️ These files together control what software sources APT uses, allowing you to add PPAs or remove repos cleanly. Always run sudo apt update after making changes here.

sudo apt autoremove

The command sudo apt autoremove is used on Debian-based Linux systems


to automatically remove packages that were installed as dependencies
but are no longer needed.
Key Points:

It frees up disk space by removing unused libraries and tools.

It is safe to use, but you should always review the list of packages
before confirming.

Typically used after uninstalling software that had many


dependencies.

Synchronize package overview files:

apt-get update

Upgrade all installed packages:

apt-get upgrade

Install a package:

apt-get install package

Reinstall a package:

apt-get reinstall package

Remove packages that were automatically installed to satisfy a dependency and are no longer needed:

apt-get autoremove

Clean local repository of retrieved package files:

apt-get clean

Remove a package:

apt-get remove package

Purge package files from the system:

apt-get purge package

/var/lib/dpkg
Files : arch, available, status
Folder : info

/var/lib/dpkg is where Debian keeps records of all package installation details. Damaging this folder can break your system's package management.
📄 Files inside:
1. arch

Stores the system's architecture info (e.g., amd64 , i386 ).

Helps dpkg determine which package formats are compatible.

2. available (may be deprecated)

Previously listed all available packages from repositories.

Not actively used now (APT tools handle this instead).

3. status

Most critical file — contains the list of all installed packages.

Stores package names, versions, states (installed, removed, etc.).

📂 Folder: info/

Contains .list files for each installed package (like [Link] ).

These files track which files belong to which packages.

Using dpkg —
📦 List all packages matching a pattern
dpkg -l pattern

Lists all packages with names matching the given pattern (supports wildcards like * ).

Example: dpkg -l '*python*' lists all installed Python-related packages.

📁 List installed files from a package


dpkg -L package

Shows all the files installed on the system by the specified package.

Useful to see where binaries, configs, or docs were placed.

📋 Report status of a package

dpkg -s package

Displays the current status of a package: installed, version,


maintainer, description, etc.

Useful for checking if something is correctly installed.

🔍 Search which installed package owns a file


dpkg -S /path/to/file

Finds the package that installed a specific file on your system.

Example: dpkg -S /bin/ls might return coreutils .

This image explains how to install a .deb package manually using dpkg ,
and provides an important caution:
📦 Installing a .deb file

dpkg -i package_version-revision_architecture.deb

Installs a Debian package file directly.

-i stands for install.

Example: dpkg -i chrome_117.0.0_amd64.deb

⚠️ Important Notes:
✅ Preferred method: Use apt or apt-get from trusted repositories (they handle dependencies automatically).

❌ Avoid uninstalling with dpkg directly — it doesn't resolve dependencies, which can break the system.

💡 If you do use dpkg -i and encounter missing dependencies, fix them using:
sudo apt-get install -f

Lecture 3.5 (Linux process management)

⛔ coproc is a built-in keyword used to run a command asynchronously (in
the background) while allowing communication with it through file
descriptors — essentially setting up a lightweight, built-in way to interact
with a subprocess.

coproc [NAME] command [redirections]

NAME is optional — if omitted, the co-process uses the default variable name COPROC .

command is the command to run in the background.

The command’s input and output are available via file descriptors.

coproc GREP_PROCESS (grep foo)

echo "foo bar" >&"${GREP_PROCESS[1]}"


exec {GREP_PROCESS[1]}>&- # Close input to signal EOF

read line <&"${GREP_PROCESS[0]}"


echo "Output: $line"

${GREP_PROCESS[1]} is the write end (input to the coproc).

${GREP_PROCESS[0]} is the read end (output from the coproc).

thunder@thunder-VMware-Virtual-Platform:~$ help coproc


coproc: coproc [NAME] command [redirections]
Create a coprocess named NAME.

Execute COMMAND asynchronously, with the standard output and standard
input of the command connected via a pipe to file descriptors assigned
to indices 0 and 1 of an array variable NAME in the executing shell.

The default NAME is "COPROC".

Exit Status:
The coproc command returns an exit status of 0.

ps --forest

displays the current process tree for your shell session

PID TTY TIME CMD


5042 pts/0 [Link] bash
5095 pts/0 [Link] \_ ps

5042 — this is your interactive bash shell process (the terminal you're using)

5095 — this is the ps command you just ran — it is a child of the bash process.

the \_ indicates it's a subprocess (child) of the line above it

pts/0 : Refers to the pseudo-terminal (your current terminal session).

TIME : Shows how much CPU time the process has used (very low since both are short-lived).

fg [%job_id]

The fg command in a Unix/Linux shell is used to bring a background job to the foreground.

If no job ID is given, it brings the most recent background job to the foreground.

%job_id refers to the job number (as listed by the jobs command).

sleep 60 &

this starts sleep 60 in the background

jobs

Might show —

[1]+ Running sleep 60 &

fg %1

This brings job 1 (sleep 60) to the foreground. Now the terminal waits for
it to complete or be interrupted.

jobs — lists background jobs

bg — resumes a stopped job in the background

Ctrl+Z — suspends the current foreground job

& — runs a job in the background

Summary —
1. Job Control Basics
Job control allows you to manage multiple processes (jobs) in a terminal.
Commands:

sleep N — Delays execution for N seconds. Useful for testing long-running jobs.

& — Runs a command in the background.

Ctrl+Z — Suspends the current foreground job.

fg — Brings a background/suspended job to the foreground.

bg — Resumes a suspended job in the background.

kill PID — Sends a signal to terminate a process with a given PID.

jobs — Lists background and suspended jobs in the current shell.
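A minimal non-interactive sketch of & , $! , and wait (fg, bg, and Ctrl+Z need an interactive shell, so they are not shown here):

```shell
sleep 1 &        # run the job in the background
pid=$!           # $! holds the PID of the most recent background job
wait "$pid"      # block until that job finishes
status=$?
echo "job $pid finished with status $status"
```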

🔹 2. coproc (Co-process)
Purpose:
Runs a command asynchronously while allowing input/output
communication via file descriptors.
Syntax:

coproc NAME { command; }

Key Points:

Doesn't block the prompt (like & ).

Allows interaction via file descriptors.

No man page → It's a shell keyword.

Use type coproc → shows it’s a shell built-in.

Use help coproc for usage.

🔹 3. Process Hierarchy
ps --forest

Shows a tree structure of running processes:

Bash is the parent shell.

Commands you run (like sleep , ps ) are child processes.

coproc or & also spawn background child processes under Bash.

🔹 4. Process Management
To kill a foreground process: Ctrl+C

To kill any process by ID: kill -9 PID

To get PID of current shell: echo $$

To list processes:

ps — Show current shell’s processes

ps -e — Show all processes

top — Dynamic real-time process list

htop — Interactive version of top

🔹 5. Special Shell Variables


$- :
Displays shell options currently enabled.
Common flags:

Flag Meaning
h hash all (remember command paths)
i interactive shell
m job control
B brace expansion
H history expansion ( ! )
s reading from standard input

$$ :
Gives the PID of the current shell.

$? :
Gives the exit code of the last command.
🔹 6. Exit Codes
Code Meaning
0 Success
1 General error / permission denied
2 Misuse of shell built-ins or syntax error
126 Command found but not executable
127 Command not found
130 Script terminated with Ctrl+C
137 Process killed by signal (e.g., kill -9 )

>255 Wrapped with modulo 256 (e.g., 300 → 44)

You can simulate this with:

bash -c 'exit 300'; echo $?

🔹 7. History & Command Recall


history — shows previous commands with line numbers.

!N — runs command at history line N .

!! — repeats the last command.

🔹 8. Brace Expansion
Allows generating sets of strings:
Examples:

echo {a..z}
echo {A..K}
echo {A,B}{1,2}

Outputs:

a b c ... z
A B C ... K
A1 A2 B1 B2

Used for loops, filename patterns, automation.


🔹 9. Command Grouping and Sequencing
You can run multiple commands on the same line using ; :

ls; date; wc -l /etc/profile

Each command runs in sequence, regardless of previous


success/failure.

Semicolon ; separates commands.

🔹 10. Interactive vs Non-Interactive Shells


Launch a non-interactive shell:

bash -c 'echo $-; ps'

You’ll see fewer flags (e.g., hBc ) since it's not interactive.
🔹 11. Combining Commands with Logical Operators
cmd1 && cmd2 : Run cmd2 only if cmd1 succeeds (exit code 0)

cmd1 || cmd2 : Run cmd2 only if cmd1 fails (non-zero exit code)
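A quick demonstration of both operators (the /tmp/demo path is just an example):

```shell
mkdir -p /tmp/demo && echo "directory ready"           # && side runs: mkdir succeeded
ls /no/such/path 2>/dev/null || echo "listing failed"  # || side runs: ls failed
false && echo "skipped" || echo "fallback"             # chains evaluate left to right
```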

🔹 12. Background Jobs in Tabs/Sessions


You can open a new terminal/tab, run a process (e.g., top ), and kill it
from the first terminal using:

kill -9 PID

🔹 13. Using top

Shows live CPU/memory usage.

Press q to quit.

Ctrl+C also works but is less graceful.

WEEK 4
Lecture 4.1 (Pattern Matching- 01)

⛔ POSIX standard (Portable Operating System Interface) is a family of
standards specified by the IEEE to maintain compatibility between Unix-
like operating systems. POSIX defines a consistent API (Application
Programming Interface), command-line shells, and utility interfaces

Regex
● regex is a pattern template to filter text
● BRE : POSIX Basic Regular Expression engine

● ERE : POSIX Extended Regular Expression engine

🔍 Regex (Regular Expressions)


Regex is a pattern template used to match, search, and filter text.

It is widely used in tools like grep , sed , awk , vi , perl , bash , etc.

🧱 POSIX Regular Expressions

POSIX defines two main types of regex engines:

| Type | Full Form | Used In | Syntax Characteristics |
| --- | --- | --- | --- |
| BRE | Basic Regular Expression | `grep`, `sed` (by default) | Fewer metacharacters. Some (like `+`, `?`, `{}`) need backslashes to work. |
| ERE | Extended Regular Expression | `egrep`, `awk`, `grep -E` | More metacharacters supported without backslashes. Cleaner and more expressive. |

🧩 Metacharacters Comparison

| Feature | BRE | ERE |
| --- | --- | --- |
| `.` Match any character | ✅ | ✅ |
| `^` Start of line | ✅ | ✅ |
| `$` End of line | ✅ | ✅ |
| `[...]` Character class | ✅ | ✅ |
| `*` Zero or more | ✅ | ✅ |
| `\+` One or more | ✅ (escaped) | ✅ (just `+`) |
| `\?` Zero or one | ✅ (escaped) | ✅ (just `?`) |
| `\{n,m\}` Repetition | ✅ (escaped) | ✅ (just `{n,m}`) |
| `\|` Alternation (OR) | ❌ | ✅ (just `\|`) |
| `()` Grouping | ❌ | ✅ |
🧪 Example Patterns

| Goal | BRE Pattern | ERE Pattern |
| --- | --- | --- |
| Match cat or dog | `cat\\|dog` (GNU grep only) | `cat\|dog` |
| Match 3 digits | `[0-9]\{3\}` | `[0-9]{3}` |
| Optional "ing" | `ing\{0,1\}` | `ing?` |
| One or more "a" | `a\+` | `a+` |
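The escaping difference can be checked directly with echo and grep (note that `\+` and `\?` in BRE are GNU grep extensions to strict POSIX BRE):

```shell
echo "aaa"   | grep 'a\+'          # BRE: + must be escaped to mean "one or more"
echo "aaa"   | grep -E 'a+'        # ERE: + works unescaped
echo "color" | grep 'colou\?r'     # BRE: escaped ? means "zero or one"
echo "color" | grep -E 'colou?r'   # ERE: bare ?
```

Each command prints its input line, since all four patterns match.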

🛠 Common POSIX Tools Using Regex:

| Tool | Type | Notes |
| --- | --- | --- |
| `grep` | BRE (default) | Use `grep -E` for ERE |
| `sed` | BRE | Use `sed -E` or `-r` for ERE |
| `awk` | ERE | Built-in support for extended regex |
| `vi`, `vim` | Mostly BRE | Some extended features supported |

Why learn regex?

Languages — Java, Perl, Python, Ruby, …

Tools — grep , sed , awk ,…

Applications — MySQL, PostgreSQL, …

❓ Why Learn Regex (Regular Expressions)?


Regex is a powerful pattern-matching language used to search,
validate, extract, transform, or replace text. It's a universal skill across
many programming languages, tools, and platforms.
💻 1. Used in Many Programming Languages

| Language | Regex Support |
| --- | --- |
| Java | `Pattern` and `Matcher` classes |
| Python | `re` module |
| Perl | Built-in regex syntax (`=~`, `s///`, etc.) |
| JavaScript | `RegExp` object |
| Ruby | Built-in support (`=~`, `match`, etc.) |
| C/C++ | `<regex>` in C++11, or libraries like PCRE |

Regex is often embedded into the language syntax for validation, parsing, and search tasks.
🧰 2. Vital in Unix/Linux Tools
| Tool | Usage |
| --- | --- |
| `grep` | Search for lines matching a pattern |
| `sed` | Stream editing with substitution (`s/old/new/`) |
| `awk` | Pattern-action processing |
| `find` | Name matching with wildcards or regex |
| `vi` / `vim` | Search & replace (`:%s/old/new/g`) |

Regex makes text processing faster and more powerful.


🗄️ 3. Databases and Search
| System | Usage |
| --- | --- |
| MySQL | `REGEXP` clause in `WHERE` |
| PostgreSQL | POSIX regex support via `~` and `~*` |
| MongoDB | `$regex` in queries |
| Elasticsearch | Regex-based queries for indexing and search |

Regex helps in querying flexible patterns, especially for user input, logs, or free text fields.
🌐 4. Web & Text Processing
Form validation (e.g., email, phone, password)

Log analysis (filtering specific patterns)

Scraping data from HTML or APIs

Syntax highlighting in editors

URL routing and rewriting (in frameworks and web servers)

🎯 5. Practical Examples
| Task | Regex Pattern |
| --- | --- |
| Match an email | `[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-z]{2,}` |
| Extract numbers | `\d+` |
| Find dates | `\b\d{4}-\d{2}-\d{2}\b` |
| Validate hex color | `^#?[0-9a-fA-F]{6}$` |
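The email pattern from the table can be tried with grep -E (the sample addresses below are made up):

```shell
pattern='[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-z]{2,}'

echo "user@example.com" | grep -E "$pattern"               # matches: line is printed
echo "not-an-email"     | grep -E "$pattern" || echo "no match"
```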

✅ Summary
| Benefit | Explanation |
| --- | --- |
| Portable skill | Works across languages and tools |
| Productive | Saves hours of manual searching or filtering |
| Powerful | Expresses complex text patterns concisely |
| In-demand | Valued in devops, data science, QA, web dev, etc. |

Usage

grep 'pattern' filename

command | grep 'pattern'

Default engine: BRE

Switch to use ERE:

egrep 'pattern' filename
grep -E 'pattern' filename

🔧 Usage of grep with Regular Expressions

✅ Basic Syntax:
grep 'pattern' filename

Searches for lines matching the pattern in the specified file.

By default, uses POSIX BRE (Basic Regular Expression) engine.

🧪 Common Usage Examples:


| Command | Meaning |
| --- | --- |
| `grep 'hello' [Link]` | Find lines containing "hello" |
| `grep '^a' [Link]` | Lines that start with `a` |
| `grep 'foo$' [Link]` | Lines that end with `foo` |
| `grep '[0-9]' [Link]` | Lines containing any digit |

🔁 With Pipelines:
command | grep 'pattern'

Filters output of a command.

Example:

ps aux | grep 'bash'

Find all running bash processes.


🧠 Default Regex Engine: BRE
Basic Regular Expression

Some metacharacters ( + , ? , {} ) must be escaped with \

Example:

grep 'a\+' [Link] # Match one or more 'a's (BRE)

🚀 Switch to Extended Regex (ERE):


| Command | Purpose |
| --- | --- |
| `egrep 'pattern' filename` | Uses ERE (deprecated in some systems) |
| `grep -E 'pattern' filename` | Recommended way to use ERE |

ERE allows more metacharacters without backslashes.
Examples:

grep -E 'cat|dog' [Link] # Match "cat" or "dog"


grep -E 'a{3,5}' [Link] # Match 3 to 5 'a's
grep -E 'ab+c' [Link] # Match 'a' followed by 1+ 'b's, then 'c'

📝 Summary Table:

| Task | BRE | ERE |
| --- | --- | --- |
| Match one or more a's | `a\+` | `a+` |
| Match a or b | `a\\|b` | `a\|b` |
| Grouping | `\(ab\)` | `(ab)` |

| Character | Meaning |
| --- | --- |
| `.` | Matches any single character except newline or null |
| `*` | Matches zero or more occurrences of the preceding character or expression |
| `[]` | Matches any one of the enclosed characters. Hyphen (`-`) defines a range, e.g., `[0-9]` |
| `^` | At start: anchors match to the beginning of the line. Inside `[ ]`: negates the character class, e.g., `[^a-z]` |
| `$` | Anchors match to the end of the line |
| `\` | Escapes special characters (especially needed in BRE) |
Quick example —

| Pattern | Meaning |
| -------- | ------------------------------------------ |
| `^Hi` | Lines that **start** with "Hi" |
| `end$` | Lines that **end** with "end" |
| `[a-z]` | Any lowercase letter |
| `[^0-9]` | Any **non-digit** character |
| `a*` | Zero or more `'a'` characters |
| `\.` | A literal period `.` (dot), not a wildcard |

Example

| Character | Meaning | Example Pattern | Matches |
| --- | --- | --- | --- |
| `\{n,m\}` | Match at least n and at most m times of the preceding character or group | `o\{2,3\}` | `oo`, `ooo` |
| `\(\)` | Group expressions for repetition or backreference (in `sed`, etc.) | `\(ab\)\{2\}` | `abab` |

#Repetition Range \{n,m\}
echo "hoop" | grep 'o\{2\}' # matches (oo)
echo "hooop" | grep 'o\{2,3\}' # matches (ooo)
echo "hoooooop" | grep 'o\{2,4\}' # matches first 4 o's
#must escape curly braces in BRE: \{ and \}
#Use grep, not egrep, for this

#Grouping \(\)
echo "hahaha" | grep 'ha\{2\}' # no match: \{2\} applies to 'a' only, so this means 'haa'
echo "hahaha" | grep 'h\(a\{2\}\)' # grouping a\{2\} still means 'haa' -- no match here either
echo "hahaha" | grep '\(ha\)\{3\}' # matches 'hahaha' (ha 3 times)
#Grouping allows applying *, \{n,m\} to entire patterns like ha.
#In BRE, grouping must be escaped using \( and \)

| Character | Meaning | Example Pattern | Matches |
| --- | --- | --- | --- |
| `{n,m}` | Match the preceding pattern at least n and at most m times | `a{2,3}` | `aa`, `aaa` |
| `()` | Group expressions | `(ab){2}` | `abab` |
| `+` | Match one or more of the previous character or group | `go+gle` | `gogle`, `google`, `gooogle` |
| `?` | Match zero or one of the previous character or group | `colou?r` | `color`, `colour` |
| `\|` | Logical OR between patterns | `cat\|dog` | `cat`, `dog` |

#you must use grep -E or egrep to use ERE features


echo "aaa" | grep -E 'a{2,3}' # matches (2 or 3 a's)
echo "abab" | grep -E '(ab){2}' # matches abab

echo "google" | grep -E 'go+gle' # matches google
echo "color" | grep -E 'colou?r' # matches color
echo "dog" | grep -E 'cat|dog' # matches dog

| Character Class | Matches | Example Pattern | Matches Example |
| --- | --- | --- | --- |
| `[:alnum:]` | Letters and digits (`A–Z`, `a–z`, `0–9`) | `[[:alnum:]]` | A, m, 7 |
| `[:alpha:]` | Letters (`A–Z`, `a–z`) | `[[:alpha:]]` | b, Z |
| `[:digit:]` | Digits (`0–9`) | `[[:digit:]]` | 3, 9 |
| `[:lower:]` | Lowercase letters (`a–z`) | `[[:lower:]]` | q, z |
| `[:upper:]` | Uppercase letters (`A–Z`) | `[[:upper:]]` | M, T |
| `[:print:]` | Printable characters (including space) | `[[:print:]]` | #, A, 9 |
| `[:graph:]` | Printable non-space characters | `[[:graph:]]` | @, G, 4 |
| `[:blank:]` | Space or tab only | `[[:blank:]]` | ` `, `\t` |
| `[:space:]` | Any whitespace (space, tab, newline, etc.) | `[[:space:]]` | ` `, `\t`, `\n` |
| `[:punct:]` | Punctuation symbols | `[[:punct:]]` | !, ., ? |
| `[:xdigit:]` | Hex digits (`0–9`, `A–F`, `a–f`) | `[[:xdigit:]]` | C, f, 2 |
| `[:cntrl:]` | Control characters (ASCII 0–31, 127) | `[[:cntrl:]]` | Not printable, like `\n`, `\t` |

# Match lines containing at least one digit


grep '[[:digit:]]' [Link]

# Match lines containing only alphabetic characters


grep '^[[:alpha:]]*$' [Link]

# Match tabs or spaces


grep '[[:blank:]]' [Link]

# Match valid hexadecimal characters


grep '[[:xdigit:]]' [Link]
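The same classes can be tested without a file by piping sample lines in:

```shell
printf 'abc\n123\n' | grep '[[:digit:]]'      # prints: 123  (contains a digit)
printf 'abc\n123\n' | grep '^[[:alpha:]]*$'   # prints: abc  (letters only)
printf '3F\nzz\n'   | grep '[[:xdigit:]]'     # prints: 3F   (z is not a hex digit)
```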

Backreferences
● \1 through \9
● \n matches whatever was matched by the nth earlier parenthesized subexpression
● A line with two occurrences of hello will be matched using:
\(hello\).*\1
🔁 Backreferences in BRE
🔹 What Are Backreferences?
\1 to \9 refer to the 1st to 9th capturing group (i.e., patterns inside `\(...\)` ).

They let you match repeated patterns within the same line.

🧪 Syntax:
Group: \(pattern\)

Backreference: \1 , \2 , ..., \9

🔄 Example:
\(hello\).*\1

This matches:

A line that contains hello

Followed anywhere later by another occurrence of the same word ( hello )

✅ Example Matches:
hello there hello
say hello, again hello

❌ Non-Matches:
hello world
HELLO hello # case-sensitive

🔣 More Examples
1. Match repeated word (like word word ):

\([[:alpha:]]\+\)[[:space:]]\+\1

Group 1: a word → \([[:alpha:]]\+\)

Then: spaces → [[:space:]]\+

Then: backreference to the same word → \1

✅ Matches:
hello hello
test test

⚠️ Notes:

Works in BRE (e.g. grep , sed ) with \(...\) and \1 .

In POSIX ERE, backreferences aren't supported ( grep -E accepts them only as a GNU extension). For portable backreferences use BRE grep , sed , or perl instead.

✅ Try It in Practice:
echo "hello hello" | grep '\(hello\).*\1'
echo "test test" | grep '\([a-z]\+\) \1'

The table below illustrates operator precedence in POSIX BRE (Basic Regular Expressions) — which determines how regex patterns are evaluated when multiple operators are present.

| Precedence | Operator | Description | Example |
| --- | --- | --- | --- |
| 1️⃣ Highest | `[..]`, `[==]`, `[::]` | Character collation, equivalence, and classes | `[[:digit:]]` → any digit |
| 2️⃣ | `\metachar` | Escaped special characters | `\.` matches a literal dot `.` |
| 3️⃣ | `[]` | Bracket expressions for character sets | `[aeiou]` matches any vowel |
| 4️⃣ | `\( \)` and `\n` | Grouping and backreferences | `\(ab\).*\1` matches repeated "ab" |
| 5️⃣ | `*`, `\{m,n\}` | Repetition operators | `a\{2,4\}` matches 2–4 `a`s |
| 6️⃣ | Concatenation | Implicit AND (patterns next to each other) | `ab` matches "a" followed by "b" |
| 7️⃣ Lowest | `^`, `$` | Line anchors | `^abc$` matches exact line "abc" |

# How precedence affects matching
grep '\([0-9]\{2\}\).*-\1' [Link]

\{2\} binds tighter to [0-9] than \( does to group

.* does not interfere with earlier grouping

\1 refers back to the group matched earlier

The table below illustrates operator precedence in POSIX ERE (Extended Regular Expressions) — which determines how regex patterns are evaluated when multiple operators are present.

| Precedence | Operator(s) | Description | Example |
| --- | --- | --- | --- |
| 1️⃣ Highest | `[..]`, `[==]`, `[::]` | Character collation, equivalence, and classes | `[[:digit:]]` → any digit |
| 2️⃣ | `\metachar` | Escaped special characters (e.g., `\.` for literal dot) | `\.` matches `.` |
| 3️⃣ | `[]` | Character classes / bracket expressions | `[a-z]` matches lowercase letters |
| 4️⃣ | `()` | Grouping (no backslashes needed in ERE) | `(ab)+` matches `ab`, `abab`, ... |
| 5️⃣ | `*`, `+`, `?`, `{n,m}` | Repetition operators | `a+` = one or more `a`s |
| 6️⃣ | (implicit) | Concatenation — characters next to each other must match in sequence | `abc` matches "abc" |
| 7️⃣ | `^`, `$` | Anchors for line beginning or end | `^Hi$` matches exactly "Hi" |
| 8️⃣ Lowest | `\|` | Alternation (logical OR) | `cat\|dog` matches cat or dog |

# How precedence affects matching


echo "catdog" | grep -E 'cat|dog'
# - matches cat or dog anywhere (because | is lowest precedence)

Key difference from BRE:

| Feature | BRE | ERE |
| --- | --- | --- |
| Grouping | `\( \)` | `()` |
| Repetition | `\+`, `\?`, `\{n,m\}` | `+`, `?`, `{n,m}` |
| Alternation | `\\|` | `\|` |
| Escaping | Needed more often | Less needed |
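These differences can be verified side by side (note that `\|` alternation in BRE is a GNU grep extension, not part of strict POSIX BRE):

```shell
echo "abab" | grep '\(ab\)\{2\}'   # BRE: escaped group and braces
echo "abab" | grep -E '(ab){2}'    # ERE: bare group and braces
echo "cat"  | grep 'cat\|dog'      # BRE alternation (GNU extension)
echo "cat"  | grep -E 'cat|dog'    # ERE alternation
```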

thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt
M029024 Mary Kanchick
ED229842 Raman Singh
PH220895 Charles M. Sagayam
EE228905 Anu K. Jain
ED229902 Anupama Sridhar
PH220841 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ grep Raman names.txt
ED229842 Raman Singh
thunder@thunder-VMware-Virtual-Platform:~$ grep 'Raman' names.txt
ED229842 Raman Singh
thunder@thunder-VMware-Virtual-Platform:~$ grep 'Anu' names.txt
EE228905 Anu K. Jain
ED229902 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ grep 'Sa' names.txt
PH220895 Charles M. Sagayam
PH220841 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ grep 'ai' names.txt
EE228905 Anu K. Jain
thunder@thunder-VMware-Virtual-Platform:~$ grep ri names.txt
ED229902 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'ai'
EE228905 Anu K. Jain

What is the difference between `grep ri names.txt` and `grep 'ri' names.txt`?

Without quotes: grep ri names.txt

The shell interprets ri as a plain word (no special characters).

grep searches for the string ri normally.

It works just fine if the pattern has no spaces, wildcards, or special characters.

⚠️ Risk:
If the pattern contains spaces or characters like | , $ , * , etc., the shell may interpret them before passing them to grep , which can lead to errors or unexpected results.

With quotes: grep 'ri' names.txt

Always safe and recommended, especially when:

The pattern includes spaces

You're using special characters (like . or * )

You're scripting or need predictable behavior

Quotes ensure the entire string is passed as-is to grep .
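A quick way to see why the quotes matter, using a throwaway file (/tmp/quote_demo.txt is just an example name):

```shell
printf 'well hi there\nhithere\n' > /tmp/quote_demo.txt

grep 'hi there' /tmp/quote_demo.txt   # quoted: one two-word pattern; prints "well hi there"
# grep hi there /tmp/quote_demo.txt   # unquoted: 'there' would be treated as a second FILE to search
```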

thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'S.n'
# S -- literal character  . -- any character  n -- literal character
# so it matches lines with S followed by any char then n
ED229842 Raman Singh
PH220841 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '.am'
# Any char ( . ) followed by a, followed by m
# any substring like Ram, am, pam matches this
ED229842 Raman Singh
PH220895 Charles M. Sagayam
ED229902 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '.am$'
# Any char + am, at the end of the line ($)
# Only one line ends in am.
PH220895 Charles M. Sagayam
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '\.'
# Matches a literal dot (.)
# Since . is a special regex metacharacter (match any char), we must escape it as \.
PH220895 Charles M. Sagayam
EE228905 Anu K. Jain
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '.\.'
# . → any character   \. → a literal dot
PH220895 Charles M. Sagayam
EE228905 Anu K. Jain

thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '^M'
# ^M → Match lines that start with "M"
M029024 Mary Kanchick
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '^E'
# ^E → Match lines that start with "E"
ED229842 Raman Singh
EE228905 Anu K. Jain
ED229902 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '^e'
# ^e → Match lines starting with lowercase e
# No output, because no lines start with lowercase e (grep is case sensitive by default)
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep -i '^e'
# -i → Ignore case   ^e → Match lines that start with either e or E
ED229842 Raman Singh
EE228905 Anu K. Jain
ED229902 Anupama Sridhar

thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'am'
# This just matches any line that contains the substring "am" anywhere.
ED229842 Raman Singh
PH220895 Charles M. Sagayam
ED229902 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'am\b'
# This uses the word boundary \b to match "am" at the end of a word.
PH220895 Charles M. Sagayam
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'am$'
# This matches "am" at the very end of the line.
PH220895 Charles M. Sagayam
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'an\b'
# Word ends with an.
ED229842 Raman Singh
PH220841 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'an$'
# Matches only if the line ends in an.
PH220841 Vel Sankaran

thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'M[ME]'
# M[ME] means: M followed by either M or E. So matches MM or ME
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'E[ED]'
# E followed by either E or D. So matches EE or ED
ED229842 Raman Singh
EE228905 Anu K. Jain
ED229902 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '[ME]E'
# Any line containing M or E followed by E
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep 'S.*[mn]'
# S → literal capital S   .* → any number of any characters (greedy)
# [mn] → ends in either m or n
ED229842 Raman Singh
PH220895 Charles M. Sagayam
PH220841 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '\bS.*[mn]'
# \b = word boundary before S. So it tries to match only if S is at the start of a word
ED229842 Raman Singh
PH220895 Charles M. Sagayam
PH220841 Vel Sankaran

thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '[aeiou]'
# Match any line that contains at least one vowel (a, e, i, o, or u)
M029024 Mary Kanchick
ED229842 Raman Singh
PH220895 Charles M. Sagayam
EE228905 Anu K. Jain
ED229902 Anupama Sridhar
PH220841 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat names.txt | grep '[aeiou][aeiou]'
# Match any line that contains two vowels next to each other
EE228905 Anu K. Jain

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link]
MM22B901 Mary Manickam
ED22B902 Raman Singh
ME22B903 Umair Ahmed
CS22B904 Charles M. Sagayam
EE22B905 Anu K. Jain
NA22B906 Anupama Sridhar
PH22B907 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep 'B90[1-4]'
# Match lines containing B90 followed by 1, 2, 3 or 4
MM22B901 Mary Manickam
ED22B902 Raman Singh
ME22B903 Umair Ahmed
CS22B904 Charles M. Sagayam
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep 'B90[123456789]'
# Match lines containing: B90 followed by 1–9 (excluding 0)
MM22B901 Mary Manickam
ED22B902 Raman Singh
ME22B903 Umair Ahmed
CS22B904 Charles M. Sagayam
EE22B905 Anu K. Jain
NA22B906 Anupama Sridhar
PH22B907 Vel Sankaran

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(ma\)'
# Match the string "ma"
ED22B902 Raman Singh
ME22B903 Umair Ahmed
NA22B906 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(ma\).*\1'
# Match ma followed later by the same string ma (backreference \1)
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(.a\).*\1'
# Group: any character + a (.a). Then match the same group again later
MM22B901 Mary Manickam
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(a.\).*\1'
# Group: a followed by any character. Match same group again
PH22B907 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(a.\)\{3\}'
# Match 3 consecutive groups of a. (a followed by any character)
# This is not a backreference, it's a quantifier for a grouped pattern
CS22B904 Charles M. Sagayam
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(a.\)\{2\}'
# Match exactly 2 consecutive a. patterns anywhere
ED22B902 Raman Singh
CS22B904 Charles M. Sagayam
NA22B906 Anupama Sridhar
PH22B907 Vel Sankaran
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '\(a.\)\{2,3\}'
# Match 2 or 3 occurrences of a.
ED22B902 Raman Singh
CS22B904 Charles M. Sagayam
NA22B906 Anupama Sridhar
PH22B907 Vel Sankaran

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | egrep '(ED|ME)'
# Match lines that contain either ED or ME. Both matching lines begin with the
# specified department codes.
ED22B902 Raman Singh
ME22B903 Umair Ahmed
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | egrep '(Anu|Raman)'
# Match lines containing either Anu or Raman
# egrep matches any part of a word unless you use word boundaries: \bAnu\b
ED22B902 Raman Singh
EE22B905 Anu K. Jain
NA22B906 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | egrep '(am|an)'
# Match any line that contains either the substring am or an
MM22B901 Mary Manickam
ED22B902 Raman Singh
CS22B904 Charles M. Sagayam
NA22B906 Anupama Sridhar
PH22B907 Vel Sankaran

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | egrep '(ma)+'
# Match one or more repetitions of the sequence ma
ED22B902 Raman Singh
ME22B903 Umair Ahmed
NA22B906 Anupama Sridhar
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | egrep '(ma)*'
# Match zero or more repetitions of ma (matches every line, since zero is allowed)
MM22B901 Mary Manickam
ED22B902 Raman Singh
ME22B903 Umair Ahmed
CS22B904 Charles M. Sagayam
EE22B905 Anu K. Jain
NA22B906 Anupama Sridhar
PH22B907 Vel Sankaran

Lecture 4.2 (Pattern Matching 02)

⛔ dpkg-query — used to query the Debian package database. It's helpful for checking installed packages: their versions, files, and status.

List all installed packages

dpkg-query -l # or dpkg -l

List files installed by a package

dpkg-query -L <package-name>

Check if a specific package is installed

dpkg-query -W <package-name>

Find out which package installed a particular file

dpkg-query -S <filename>

Get detailed information about a package

dpkg-query -s <package-name>

Custom output using format

dpkg-query -W -f='${binary:Package} ${Version}\n'

dpkg-query -W -f='${Section} ${binary:Package}\n'
#queries all installed packages and displays their section and name
#in a custom format.

thunder@thunder-VMware-Virtual-Platform:~$ dpkg-query -W -f='${Section} ${binary:Package}\n' | egrep '.{4}$'
#'.{4}$' matches the last 4 characters of a line, so any line at least
#4 characters long matches -- not just lines ending in a 4-character word.

thunder@thunder-VMware-Virtual-Platform:~$ dpkg-query -W -f='${Section} ${binary:Package}\n' | egrep 'k.*'
#Matches any line that has the letter k followed by anything (.*)
#This will match as soon as it sees k anywhere in the line, regardless
#of where.
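Since real dpkg-query output varies per machine, the same filtering can be sketched on simulated "Section Package" lines (the package data below is made up):

```shell
# simulated dpkg-query -W -f='${Section} ${binary:Package}\n' output
printf 'shells bash\nadmin sudo\neditors nano\n' |
  egrep '^shells'          # keep lines whose section field is "shells"
```

This prints only `shells bash`; anchoring with `^` ties the match to the section column at the start of each line.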

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:alpha:]]'
#[[:alpha:]] matches any alphabetical character (A–Z, a–z)
#So it matches lines that contain at least one letter
hello : alphabetical stuff : 5g
l : start lower end upper : H
L : start upper and lower : h
5g : alpha numeric stuff : 42
42 : solution to everything :
: start with control C end with dot : .
, : start with comma end with equals : =
: start with blank end with control char :
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:alnum:]]'
#[[:alnum:]] matches any alphanumeric character (A–Z, a–z, 0–9)
#It matches everything [[:alpha:]] does, plus numbers
hello : alphabetical stuff : 5g
l : start lower end upper : H
L : start upper and lower : h
5g : alpha numeric stuff : 42
42 : solution to everything :
: start with control C end with dot : .
, : start with comma end with equals : =
: start with blank end with control char :
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:punct:]]'
#Match any line containing at least one punctuation character
hello : alphabetical stuff : 5g
l : start lower end upper : H
L : start upper and lower : h
5g : alpha numeric stuff : 42
42 : solution to everything :
: start with control C end with dot : .
, : start with comma end with equals : =
: start with blank end with control char :
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '^[[:punct:]]'
#Match lines that start with a punctuation character
, : start with comma end with equals : =
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:punct:]]$'
#Match lines that end with a punctuation character
42 : solution to everything :
: start with control C end with dot : .
, : start with comma end with equals : =
: start with blank end with control char :

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '^[[:lower:]]'
#Match lines that start with a lowercase letter
hello : alphabetical stuff : 5g
l : start lower end upper : H
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:lower:]]$'
#Match lines that end with a lowercase letter
hello : alphabetical stuff : 5g
L : start upper and lower : h

| Pattern | Meaning |
| --- | --- |
| `'^[[:lower:]]'` | Line starts with lowercase letter |
| `'[[:lower:]]$'` | Line ends with lowercase letter |
| `'[[:upper:]]'` | Line contains at least one uppercase |
| `'^[[:digit:]]'` | Line starts with digit |

thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '^[[:blank:]]'
#matches the lines that start with a space or tab
: start with control C end with dot : .
: start with blank end with control char :
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:blank:]]$'
#match the lines that end with a space or tab
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '[[:space:]]$'
#match the lines that end with a whitespace character
thunder@thunder-VMware-Virtual-Platform:~$ cat [Link] | grep '^[[:space:]]'
#match the lines that start with a whitespace character
: start with control C end with dot : .
: start with blank end with control char :

Week 5
Slides 1 ($hell variables)

⛔ echo

Print strings to screen


echo hello, world

Print values of variables


echo $HOME

#print a simple string


echo hello, world
hello, world

#Print the value of a variable


echo $HOME
/home/thunder

Frequently used shell variables

● $USERNAME
● $HOME
● $HOSTNAME
● $PWD
● $PATH
printenv, env, set

| Variable | Meaning |
| --- | --- |
| `$USERNAME` | Your login/user name (often `$USER`) |
| `$HOME` | Your home directory path (`/home/user`) |
| `$HOSTNAME` | Name of your machine on the network |
| `$PWD` | Present Working Directory (current path) |
| `$PATH` | Colon-separated list of directories the shell searches for commands |

echo $USERNAME # thunder
echo $HOME # /home/thunder
echo $HOSTNAME # thunder-VMware-Virtual-Platform
echo $PWD # /home/thunder/somefolder
echo $PATH # /usr/local/bin:/usr/bin:/bin:...

| Command | Description |
| --- | --- |
| `printenv` | Prints only environment variables (used by programs) |
| `env` | Similar to `printenv`; shows variables for the current environment |
| `set` | Lists all variables: environment + shell + user-defined variables + functions |

printenv # Only environment variables
env # Same as above, often used for running commands with a custom environment
set # Environment vars + shell vars + user-defined + functions

Special shell variables

$0 : name of the shell


$$ : process ID of the shell
$? : return code of previously run program
$- : flags set in the bash shell

Variable Purpose Example Output


$0 Name of shell or script bash or [Link]

$$ Shell’s Process ID 23847

$? Exit status of last command 0 , 1 , 2 , ...

$- Shell flags (options) himBH

echo $0 #bash
echo $$ #5507
echo $? #0
echo $- #himBHs
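A quick sanity check of these variables from a script:

```shell
true;  echo "exit: $?"     # exit: 0  (success)
false; echo "exit: $?"     # exit: 1  (failure)
echo "shell PID: $$"       # varies per run
echo "flags: $-"           # e.g. hB in a non-interactive shell
```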

Process control
● Use of & to run a job in the background
● fg

● coproc
● jobs
● top
● kill

echo $$

Command Purpose
& Run in background
jobs List background jobs
fg Bring job to foreground
coproc Start co-process
top Monitor system processes
kill Terminate a process
echo $$ Show shell’s PID

Program exit codes

0 : success
1 : failure
2 : misuse
126 : command cannot be executed
127 : command not found
130 : processes killed using control+C
137 : processes killed using kill -9 <pid>

Flag Meaning
h hash: Hash commands to speed up lookup
B brace expansion is enabled (e.g., echo file{1,2} → file1 file2 )
i interactive shell (i.e., you’re typing commands directly)
m job control is enabled (background jobs, fg , bg , etc.)
H history substitution using ! (e.g., !! to repeat last cmd)
s Commands are being read from stdin (non-interactive shell)
c Commands are being read from a command string (via bash -c )

Lecture 5.1

⛔ What is echo ?

The echo command in Linux is used to display a line of text or the value of a variable on the terminal.

thunder@thunder-VMware-Virtual-Platform:~$ echo hello world


hello world

Print variable value

name="billo"
echo $name
#output
billo

Display special characters (Using escape sequences) —

echo -e "Line1\nLine2"
#output
Line1
Line2

-e enables interpretation of escape characters like \n , \t , etc.

Suppress newline

thunder@thunder-VMware-Virtual-Platform:~$ echo -n "Hello"


Hellothunder@thunder-VMware-Virtual-Platform:~$

Common Escape Sequences with -e

Sequence Meaning

\n Newline

\t Tab

\\ Backslash

\” Double quote
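All four sequences in one go (bash's builtin echo is assumed here, since -e is not portable to every /bin/sh):

```shell
echo -e "name\tvalue"        # tab between the words
echo -e "line1\nline2"       # two separate lines
echo -e "a \\\\ here"        # prints: a \ here
echo -e "she said \"hi\""    # prints: she said "hi"
```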

Why use echo ?

To print messages in shell scripts

To debug by printing variable values

To format output with tabs/newlines

To test values of environment variable

thunder@thunder-VMware-Virtual-Platform:~$ echo hello world


hello world

Even though you typed many spaces between hello and world, bash treats all consecutive spaces/tabs as one when passing arguments to a command.

thunder@thunder-VMware-Virtual-Platform:~$ echo "hello   world"
hello   world

if you want to preserve the spacing, quote the string

thunder@thunder-VMware-Virtual-Platform:~$ echo "hello world'


> some more input
> closing single quote'
> finishing th double quote"
hello world'
some more input
closing single quote'
finishing th double quote

You started with a double quote " but ended with a single quote ' .

Bash sees this as:

“you opened a double quote, but you haven’t closed it yet”

The > prompt appears because the shell is waiting for you to finish the
string (i.e., close the starting double quote).

It keeps reading input as part of that same string until you finally type a
matching " .

$USERNAME (or more commonly $USER )

Meaning — this holds the login name of the current user.

$USER is more universally supported across systems than $USERNAME

thunder@thunder-VMware-Virtual-Platform:~$ echo $USERNAME
thunder
thunder@thunder-VMware-Virtual-Platform:~$ echo my username is: $USER
my username is: thunder

$HOME

Meaning — This stores the path to your home directory — where your personal files and configurations are stored.
This is where you land when you log in, and where things like Documents , Downloads , .bashrc , etc. are located.

thunder@thunder-VMware-Virtual-Platform:~$ echo this is my home: $HOME
this is my home: /home/thunder
thunder@thunder-VMware-Virtual-Platform:~$ echo $HOME
/home/thunder

$HOSTNAME

Meaning — This shows the name of your machine on a network (as it's configured on the system).

Useful in networking and when using ssh or scripts running across multiple
machines.

thunder@thunder-VMware-Virtual-Platform:~$ echo $HOSTNAME
thunder-VMware-Virtual-Platform
thunder@thunder-VMware-Virtual-Platform:~$ echo welcome to the host $HOSTNAME
welcome to the host thunder-VMware-Virtual-Platform

$PWD

Meaning — This gives you the present working directory — the directory you are currently in when you run a command.

same as running the pwd command

thunder@thunder-VMware-Virtual-Platform:~$ pwd
/home/thunder
thunder@thunder-VMware-Virtual-Platform:~$ $PWD
bash: /home/thunder: Is a directory
thunder@thunder-VMware-Virtual-Platform:~$ echo my working space
$PWD is cozy
my working space /home/thunder is cozy

$PATH

Meaning — This is a colon-separated list of directories that the shell
searches when you type a command.

If you run python , the shell looks in these directories (in order) for an
executable named python .

You can even add custom directories to PATH if you want to run your own
scripts from anywhere.

thunder@thunder-VMware-Virtual-Platform:~$ echo $PATH


/home/thunder/.local/bin:/home/thunder/.local/bin:/usr/local/sbin:/usr/l
ocal/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/sna
p/bin:/snap/bin
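For example, to run your own scripts from anywhere, append their folder to PATH. A minimal sketch (the directory name `$HOME/myscripts` is just a made-up example):

```shell
# Append a personal scripts directory (hypothetical name) to PATH
mkdir -p "$HOME/myscripts"
export PATH="$PATH:$HOME/myscripts"
echo "$PATH" | tr ':' '\n' | tail -n 1   # last entry is now our directory
```

Putting the export line in ~/.bashrc makes the change apply to every new shell.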

printenv — Print Environment Variables

Purpose: Displays the values of environment variables only

Scope: Shows variables that are exported (available to child processes)

soumya@Soumyadip:~$ printenv USER
soumya

printenv won’t show shell-specific or local variables (only exported ones).

env - Show or Run with Modified Environment

Displays current environment variables (like printenv )

Can temporarily set environment variables for a single command without
permanently changing them.
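A minimal sketch of that second use (GREETING is a made-up variable name):

```shell
# Set GREETING only for this one command; the current shell is untouched
env GREETING=hello bash -c 'echo "$GREETING"'   # prints: hello
echo "after: '$GREETING'"                       # prints: after: '' (never set here)
```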

set - Show All Shell Variables

Shows all variables (environment + shell + functions)

Can also be used to set shell options and positional parameters.

Scope: Shows everything the shell knows about — not just exported
variables

This will produce a long list including:

Environment variables (like PATH , HOME )

Shell variables (local variables in Bash)

Shell functions

Special parameters

Command     Shows Env Vars   Shows Local Shell Vars   Can Temporarily Set Vars for a Command   Shows Functions
printenv    ✅ Yes           ❌ No                     ❌ No                                     ❌ No
env         ✅ Yes           ❌ No                     ✅ Yes                                    ❌ No
set         ✅ Yes           ✅ Yes                    ❌ No                                     ✅ Yes
printenv → to see exported environment variables only.

env → to view environment OR run a command with a temporary variable.

set → to see everything in the current shell (env vars + shell vars +
functions).

Special shell variables

$0 : name of the shell


$$ : process ID of the shell
$? : return code of previously run program
$- : flags set in the bash shell

$0 — Name of the Shell / Script

Meaning —

in an interactive shell, $0 contains the name of the shell program ( bash , zsh , etc.)

In a script $0 contains the name of the script itself

soumya@Soumyadip:~$ echo $0
-bash
soumya@Soumyadip:~$ echo "Script name: $0"
Script name: -bash

$$ — Process ID (PID) of the Shell

Meaning: The PID of the current shell process

soumya@Soumyadip:~$ echo $$
308

(Your number will be different)

$? — Exit status of last command

Meaning — shows whether the last command succeeded (0) or failed (non-
zero)

soumya@Soumyadip:~$ echo $?
0
soumya@Soumyadip:~$ cd billo
-bash: cd: billo: No such file or directory
soumya@Soumyadip:~$ echo $?
1

0 —> success and non-zero ( 1 ) —> error code

$- — Current Shell Flags

Meaning: Shows which options are currently set in the shell

soumya@Soumyadip:~$ echo $-
himBHs

Each letter represents a shell option (like h for hashing commands, i for
interactive shell)

Variable   Meaning                     Example Output
$0         Shell/program/script name   bash or ./[Link]
$$         PID of current shell        12345
$?         Exit code of last command   0 or 1 or 127
$-         Current shell options       himBH

Process Control — Process control commands help you manage programs that
are running on your system — whether they’re in the background, foreground
or even paused

Use of & to run a job in the background

Runs a command without locking your terminal so you can keep typing
other commands

if something takes a long time you don’t want to stare at the terminal
until it finishes

soumya@Soumyadip:~$ sleep 20 & ls vehicle-parking-app-v1/templates/
[1] 594
admin_dashboard.html [Link] [Link]
admin_lots.html edit_lot.html [Link]
soumya@Soumyadip:~$

here first you tell your shell to “run sleep 20 in the background” which gives
the output [1] 594 where

[1] — job number and 594 — Process ID (PID) assigned by linux

and the other command ls vehicle-parking-app-v1/templates/ immediately runs in the
foreground by listing all files in that folder.
Because you put sleep 20 in the background, the ls command didn’t have
to wait — it ran right away.
If you didn’t use & , the terminal would have been blocked for 20 seconds
before running ls .

fg — Bring job to the foreground

Brings a background job back into focus so you can interact with it

If you started a process in the background but now want to see its
output or stop it with ctrl+c

soumya@Soumyadip:~$ sleep 60 &


[1] 616
soumya@Soumyadip:~$ jobs
[1]+ Running sleep 60 &
soumya@Soumyadip:~$ fg %1
sleep 60
soumya@Soumyadip:~$

sleep 60 & — runs for a minute in background

jobs — shows the job number

fg %1 — brings it to the foreground

without a number, it picks the most recent job.

coproc — Runs a background process with communication

Starts a command in the background and connects it to a pipe so you
can send / receive data from it

for scripts that need to interact with a process without blocking.

soumya@Soumyadip:~$ coproc date


[1] 634

soumya@Soumyadip:~$ cat <&"${COPROC[0]}"
Thu Aug 14 [Link] UTC 2025
[1]+ Done coproc COPROC date
soumya@Soumyadip:~$

a. First line runs date in a background subprocess with a communication
channel.

b. Second line reads the output from that process.

coproc is useful when you want asynchronous two-way communication
with a process, without manually creating mkfifo pipes or using
background redirection.

jobs — See all background jobs

Lists processes started in the current shell that are running in the
background or stopped

lets you see what’s still running or paused so you can fg or kill them.

example

jobs
[1]+ Running sleep 60 &
[2]- Stopped nano [Link]

top — Live process monitor

Shows a real-time, updating list of all processes, their CPU/memory
usage, and system load.

Helps spot programs using too many resources so you can stop or
optimize them.

top

press q — quit

press k — kill a process by PID

press P — sort by CPU usage

top is a live “task manager” for Linux

kill — End a process

Sends a signal to a process — often used to terminate it

Ends a program that’s stuck or no longer needed
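A minimal sketch, using $! (the PID of the most recent background job) so we know which process to kill:

```shell
sleep 300 &             # a long-running background job
pid=$!                  # $! holds the PID of the last background job
kill "$pid"             # send SIGTERM, the default signal
wait "$pid" 2>/dev/null # wait for it so the shell records the exit status
echo $?                 # 143 = 128 + 15 (SIGTERM)
```

kill -9 sends SIGKILL instead, which the process cannot catch or ignore.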

& → Start in background

fg → Bring to foreground

coproc → Background with communication channel

jobs → List background jobs in current shell

top → Live process & resource monitor

kill → End process by PID

Program exit codes

0 —> Success

(meaning the program finished without any error)

ls /home
echo $? # prints 0

1 —> General failure


(The program ran but encountered an error not covered by a more specific
code.)

ls /nonexistent
echo $? # prints 1

Often used for general errors like wrong arguments, missing files etc.

2 — Misuse of shell builtins


(Command syntax error or incorrect use of shell built-in functions.)

cd -z   # invalid option to a shell builtin
echo $? # prints 2

#Reason: Shell uses 2 to indicate incorrect usage of its internal commands.

Shell uses 2 to indicate incorrect usage of its internal commands

126 — Command found but cannot be executed


(The command exists but you don’t have execute permission, or it’s not a
valid binary/script.)

touch [Link]
./[Link]
echo $? # 126 (no execute permission)
#Reason: Permission issue or incompatible file type.

127 — Command not found


(Shell tried to run a command that doesn’t exist in PATH .)

nosuchcommand
echo $? # 127
# Reason: Typo in command name or not installed.

137 — Process killed with kill -9 <pid> (pid-process ID)

(Exit due to SIGKILL (signal 9). 137 = 128 + 9 .)

sleep 1000 &


kill -9 <pid>
wait
echo $? # 137
#Reason: Forceful kill that the process cannot handle or ignore.

130 — Process terminated by ctrl+c

(Meaning: Exit due to SIGINT (signal 2). 130 = 128 + 2 .)

sleep 30
# press Ctrl+C
echo $? # 130

#Reason: Signal math — Bash exit codes for killed processes are 128 + signal_number.

Any exit code ≥ 128 usually means the process was killed by a signal

Flags set in bash —

h — Locate and hash commands

Meaning: When you run a command, Bash stores the full path in a hash
table so it doesn’t have to search PATH every time. (Speeds up command
execution for repeated calls)
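You can watch this cache with the hash builtin (a quick sketch):

```shell
hash -r            # clear the table first
ls >/dev/null      # first use: bash searches PATH and caches the full path
hash               # lists remembered commands with their hit counts
```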

B — Brace expansions enabled

Meaning: Lets you use {} to generate strings automatically.

soumya@Soumyadip:~$ echo file{1..3}.txt
file1.txt file2.txt file3.txt
soumya@Soumyadip:~$

i — Interactive mode

Meaning: The shell is running interactively (waiting for user commands)
rather than executing a script and exiting.

example —

Interactive: bash (no script) → waits for input.

Non-interactive: bash [Link] → just runs script.

m — Job control enabled

Meaning: You can run commands in background ( & ), bring them to
foreground ( fg ), or suspend them ( Ctrl+Z ).

sleep 100 &


jobs
fg %1

H — History substitution with !

Meaning: Allows you to use ! to recall and run past commands.

echo test
!ec # runs the last command starting with 'ec'

s — Commands read from stdin

Meaning: Shell is reading commands directly from standard input
(keyboard, pipe, or redirection).

bash # reads from keyboard
cat [Link] | bash # reads from stdin via pipe

c — Commands read from arguments

Meaning: Commands are provided directly via the -c option when starting
bash.
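A quick sketch; note the c flag showing up in the child shell's $- :

```shell
bash -c 'echo "run via -c"'    # the whole script is the argument to -c
bash -c 'echo $-'              # the child shell's flags include c (and not i)
```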

Flag   Meaning                    Why it's on
h      Hash commands              Bash caches paths to speed up execution ( ls , cat , etc.)
i      Interactive mode           You're typing commands in a live terminal session
m      Job control                You can run jobs in background ( & ), suspend ( Ctrl+Z ), and resume ( fg )
B      Brace expansion            Lets you use {} for ranges and patterns
H      ! history substitution     Enables history recall with !
s      Read commands from stdin   You're entering commands directly from keyboard (standard input)

Lecture 5.2 (Shell variables — Creation, Inspection, modification, lists…)

1. Variable name rules

Can mix alphanumeric characters and underscore ( _ )


Example: my_var , user1 , FILE_PATH are valid.

Cannot start with a number

❌ 1var=value → invalid
✅ var1=value → valid

2. The equals sign ( = )

No spaces around = when assigning:


✅ myvar=value

❌ myvar = value —> will cause an error

3. Value types

A value can be:

A number —> count=5

A string —> name="Soumyadip"

A command output (using backticks `command` or $(command) ):

today=$(date)

If the value has spaces, it must be quoted:

✅ myvar="value string"

❌ myvar=value string —> the shell would run string as a command, with myvar=value as a temporary variable for it

Example

soumya@Soumyadip:~$ name="soumyadip" age="19"


soumya@Soumyadip:~$ echo "hello $name and $age"
hello soumyadip and 19
soumya@Soumyadip:~$

Two ways to export a variable

Method 1: Export while creating


export myvar="value string"

Creates myvar and immediately exports it

Any program or script run from this shell can access it

Method 2: Create first, export later

myvar="value string"

export myvar

First creates myvar only in the current shell

export then marks it as available to child processes

💡 Why export?
without export the variable only exists in the current shell session:

myvar="Hello"
bash -c 'echo $myvar' #output: (blank)

with export , the variable is passed to the subshell:

export myvar="Hello"
bash -c 'echo $myvar' #output: Hello

Step Visible in current shell? Visible in child processes?


myvar=value ✅ Yes ❌ No
export myvar=value ✅ Yes ✅ Yes
myvar=value; export myvar ✅ Yes ✅ Yes

1. echo $myvar

Prints the value stored in the variable myvar

myvar="hello"
echo $myvar
#output -- hello

2. echo ${myvar}

Works the same as $myvar but safer when your variable is next to other characters.

${...} clearly marks the variable’s name boundary.

myvar="hello"
echo ${myvar}
#output -- hello

3. echo "${myvar}_something"

Combines the variable’s value with extra text without ambiguity.

Without {} , Bash might think _something is part of the variable name.

myvar="hello"
echo "${myvar}_something"
#output-- hello_something

1. Removing the variable completely — unset myvar

Deletes the variable from the shell environment entirely.

After unset , the variable no longer exists.

myvar="hello"
unset myvar
echo "$myvar"
#output -- (blank; the variable no longer exists)

2. Removing only the value — myvar=

Keeps the variable name but makes its value empty.

The variable still exists, just with an empty string.

myvar="hello"
myvar=
echo "$myvar"
#output -- (blank, but variable still exists)

[[ -v myvar ]]; echo $?

1. [[ -v myvar ]] — here -v checks whether the variable myvar is set (exists in the shell),
regardless of whether its value is empty.

2. echo $?

$? gives the exit status of the last command:

0 → success (true)

nonzero (usually 1 ) → failure (false)

unset myvar
[[ -v myvar ]]; echo $? # myvar is not set → 1
myvar= # set but empty
[[ -v myvar ]]; echo $? # myvar exists → 0
myvar="hello"
[[ -v myvar ]]; echo $? # myvar exists → 0

[[ -z ${myvar+x} ]]; echo $?

${myvar+x} → This is parameter expansion.

If myvar is set (even if empty), this expands to "x" (or whatever you put instead
of x ).

If myvar is unset, this expands to an empty string.

-z → Tests if the resulting string has zero length.

Case              ${myvar+x} expands to   -z test result   Exit code from echo $?
Unset variable    "" (empty)              True             0 (success → not set)
Set (empty)       "x"                     False            1 (failure → is set)
Set (non-empty)   "x"                     False            1 (failure → is set)

unset myvar
[[ -z ${myvar+x} ]]; echo $? # 0 → not set

myvar=""
[[ -z ${myvar+x} ]]; echo $? # 1 → set (even if empty)

myvar="hello"
[[ -z ${myvar+x} ]]; echo $? # 1 → set

#example
soumya@Soumyadip:~$ hello=""
soumya@Soumyadip:~$ [[ -z ${hello+x} ]]; echo $?
1
soumya@Soumyadip:~$ [[ -z ${billo+x} ]]; echo $?
0
soumya@Soumyadip:~$

${variable:-default_value} —

If variable is unset or null (empty string), use default_value .

Otherwise, use the value of variable

unset myvar
echo ${myvar:-"default"} # → default

myvar=""
echo ${myvar:-"default"} # → default

myvar="hello"
echo ${myvar:-"default"} # → hello

#example
soumya@Soumyadip:~$ billo=""
soumya@Soumyadip:~$ echo ${billo:-"hagu"}
hagu
soumya@Soumyadip:~$ echo ${billo:-"default"}
default

${myvar:="default"} — if myvar is unset (or empty in this case), Bash will set it to "default"
before returning it.

unset myvar
echo ${myvar:="hello"} # prints hello
echo $myvar # now it's actually set to hello

"${myvar:+default}" means:

If myvar is set and non-empty, substitute "default" .

If myvar is unset or empty, substitute nothing (empty string).

myvar="something"
echo ${myvar:+"default"} # prints default
echo $myvar # still "something"

unset myvar
echo ${myvar:+"default"} # prints nothing

${!prefix*} expands to all variable names starting with the given prefix.

With prefix H , for example, ${!H*} lists all variable names that begin with
H.

soumya@Soumyadip:~$ echo ${!s*}


salt snap_bin_path snap_xdg_path

HELLO="hi"
HOME="/home/user"
HOSTNAME="my-pc"
PATH="/usr/bin" # won't match because it doesn't start with H
echo ${!H*}
#output -- HELLO HOME HOSTNAME

echo ${#myvar} —

${#var} gives the length in characters of the value stored in var .

If var is unset, Bash treats it as an empty string, so the length is 0 .

If var is set but empty ( "" ), length is also 0 .

myvar="Hello"
echo ${#myvar} # 5
myvar=""
echo ${#myvar} # 0

#example
soumya@Soumyadip:~$ salt=""
soumya@Soumyadip:~$ echo ${salt:="namak"}
namak
soumya@Soumyadip:~$ echo $salt
namak
soumya@Soumyadip:~$ echo ${#salt}
5

echo ${myvar:5:4} —

myvar → the variable containing a string

5 → offset (skip the first 5 characters, counting from 0)

4 → slice length (take 4 characters starting from that offset)

myvar="ABCDEFGHIJ"
echo ${myvar:5:4}
#output -- FGHI

salt="namak"
soumya@Soumyadip:~$ echo ${salt:2:3}
mak

${myvar#pattern} → remove shortest match of pattern from the start of $myvar

${myvar##pattern} → remove longest match of pattern from the start of $myvar

myvar="abcabc123"

echo ${myvar#*b} # remove shortest match from start


echo ${myvar##*b} # remove longest match from start
#cabc123 # (*b) shortest match = 'ab', removed → 'cabc123'
#c123 # (*b) longest match = 'abcab', removed → 'c123'

salt="namak"
soumya@Soumyadip:~$ echo ${salt#*a}
mak
soumya@Soumyadip:~$ echo ${salt##*a}
k

echo ${myvar/pattern/string} # replace first match
echo ${myvar//pattern/string} # replace all matches

salt="namak namak"
echo ${salt/na/xx} # replaces first "na"
# xxmak namak
echo ${salt//na/xx} # replaces all "na"
# xxmak xxmak

echo ${myvar/#pattern/string} # replace if pattern matches at the START
echo ${myvar/%pattern/string} # replace if pattern matches at the END

salt="namak"

echo ${salt/#na/xx} # "na" at start → replace with "xx"


# xxmak
echo ${salt/#ma/xx} # "ma" not at start → no change
# namak
echo ${salt/%ak/xx} # "ak" at end → replace with "xx"
# namxx
echo ${salt/%ma/xx} # "ma" not at end → no change
# namak

These are the case modification parameter expansions in Bash:

Lowercasing:

${myvar,} # Change first character to lowercase


${myvar,,} # Change all characters to lowercase

Uppercasing

${myvar^} # Change first character to uppercase
${myvar^^} # Change all characters to uppercase

word="HeLLo"

echo ${word,} # heLLo


echo ${word,,} # hello
echo ${word^} # HeLLo
echo ${word^^} # HELLO

You can restrict variable value types in Bash using declare :

declare -i myvar → Only integers can be assigned (non-numeric input becomes 0 ).

declare -l myvar → Automatically converts all assigned text to lowercase.

declare -u myvar → Automatically converts all assigned text to uppercase.

declare -r myvar → Makes the variable read-only (cannot be changed after
assignment).

declare -l name
name="HELLO"
echo "$name" # hello

declare -u code

code="abc"
echo "$code" # ABC

declare -i num
num="42"
echo "$num" # 42
num="hi"
echo "$num" #0

You can remove restrictions from variables in Bash using declare +
instead of declare - :

declare +i myvar → Removes integer-only restriction.

declare +l myvar → Removes lowercase-only restriction.

declare +u myvar → Removes uppercase-only restriction.

declare +r myvar → Not possible — once a variable is read-only, you cannot remove
that restriction.

declare -l name
name="HELLO"
echo "$name" # hello

declare +l name

name="HELLO"
echo "$name" # HELLO (restriction removed)

Working with indexed arrays in Bash:

declare -a arr → Declare arr as an indexed array.

arr[0]="value" → Set value at index 0 .

echo ${arr[0]} → Show value at index 0 .

echo ${#arr[@]} → Show number of elements in the array.

echo ${!arr[@]} → Show all indices used.

echo ${arr[@]} → Show all values in the array.

unset 'arr[2]' → Delete the element at index 2 .

arr+=("value") → Append "value" to the end of the array.

declare -a fruits
fruits=("apple" "banana" "cherry")
echo ${fruits[1]} # banana
echo ${#fruits[@]} # 3
fruits+=("date")
echo ${fruits[@]} # apple banana cherry date

declare -A hash — -A specifies that hash will be an associative array (keys are
strings, not just numbers).

hash["a"]="value" — Sets the value "value" for the key "a" .

echo ${hash["a"]} — Prints the value associated with key "a" .

echo ${#hash[@]} — Shows the number of key-value pairs in the array.

echo ${!hash[@]} — Displays all keys present in the array.

echo ${hash[@]} — Displays all values stored in the array.

unset 'hash["a"]' — Removes the element with key "a" from the array.
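Putting those operations together (capitals is a hypothetical array name; associative arrays need Bash 4+):

```shell
declare -A capitals                # keys are strings
capitals["India"]="New Delhi"
capitals["Japan"]="Tokyo"
echo "${capitals["Japan"]}"        # Tokyo
echo "${#capitals[@]}"             # 2
echo "${!capitals[@]}"             # India Japan (order not guaranteed)
unset 'capitals["India"]'
echo "${#capitals[@]}"             # 1
```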

Syntax Meaning
declare -a arr Declare indexed array
arr[i]=value Assign value at index i
${arr[i]} Access element at index i
${#arr[@]} Number of elements in array
${arr[@]} All elements in array
${!arr[@]} All indices in array
unset 'arr[i]' Remove element at index i

Week 6
Lecture 6.1

🛠️ Common Unix/Linux Utilities

Utility      Purpose
find         Searches for files and directories based on criteria like name, size, or modification date. Can also execute actions on found items.
tar , gzip   tar bundles multiple files into a single archive; gzip compresses files to reduce size. Often used together to create .tar.gz files.
make         Automates tasks based on conditions defined in a Makefile — commonly used for compiling code, but can handle any conditional workflow.

find — Locate and Process files


The find command searches for files and directories in a directory hierarchy.

find [path] [expression]

Common Options:

. – current directory

-name "pattern" – match files by name

-type f – search for files ( -type d for directories)

-size +10M – files larger than 10MB

-mtime -7 – modified within the last 7 days

-exec command {} \; – execute a command on each result

find . -name "*.log" -type f -exec rm {} \;

This finds all log files and deletes them. {} is replaced by each file found.

tar , gzip — Archive and Compress files

tar — tape archive

tar [options] [archive-name] [file/directory]

Common Options:

c – create a new archive

v – verbose (shows progress)

f – specify archive file name

x – extract files

z – compress with gzip

tar -czvf [Link] my_folder/

c creates the archive

z compresses it with gzip

v shows progress

f names the archive [Link]
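A round-trip sketch showing create and extract together (the file and archive names here are made up for the demo):

```shell
cd "$(mktemp -d)"                  # work in a scratch directory
mkdir demo && echo "hello" > demo/note.txt
tar -czvf demo.tar.gz demo/        # c: create, z: gzip, v: verbose, f: archive name
rm -rf demo                        # remove the original
tar -xzvf demo.tar.gz              # x: extract it back
cat demo/note.txt                  # hello
```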

make — Automate Conditional Actions

make reads a file called Makefile to determine how to build or process files based
on rules
Structure of a Makefile:

target: dependencies
command

Example

build: main.c utils.c
	gcc -o app main.c utils.c

How It Works:

If main.c or utils.c changes, make build will recompile.

Targets can be anything: build , clean , install , etc.

You can automate tasks like testing, packaging, or deployment.

clean:
rm -f *.o app

#Run with
make clean

-name — Match filenames against a simple pattern (supports wildcards like * and ? )

find . -name "*.txt"

-type

Filter by file type:

f → regular file

d → directory

l → symbolic link

c → character device

b → block device

find /var/log -type d


#Lists all directories under /var/log.

-atime Search by last access time in days:

+n → more than n days ago

-n → less than n days ago

n → exactly n days ago

find . -atime -3
#finds files accessed within the last 3 days

-ctime — Search by last status change (permissions, ownership, content):

+n , -n , or n days — same as -atime

find /tmp -ctime +10


Finds files changed more than 10 days ago

Option   Meaning             Updates When…                                                                         Typical Use
-atime   Access Time         File was read (opened)                                                                Find unused files
-mtime   Modification Time   File content changes                                                                  Find old or updated files
-ctime   Change Time         File metadata changes (permissions, ownership, link count, etc.) OR content changes   Find files whose attributes changed

-regex (with -regextype ) — Match filenames using a regular expression for more
complex patterns

find . -regextype posix-egrep -regex ".*(jpg|png|gif)$"


#Finds all image files ending in .jpg, .png, or .gif

-exec — Run a command on each matching file. {} gets replaced by the
filename. \; → end of command (spaces and escaping are important).

find . -name "*.log" -type f -exec rm {} \;


Deletes all .log files found

-print — Explicitly output the matching filenames (often implied by default)

find /etc -name "*.conf" -print


Prints all .conf files under /etc.

Term / Option   What It Means   Common Usage Example

-name    Matches file or directory names using wildcards ( * for many characters, ? for a single character). Case-sensitive by default.   find . -name "*.txt" → Finds all .txt files.

-type    Filters results by file type: f = regular file, d = directory, l = symbolic link, c = character device, b = block device ...and more.   find /var/log -type d → Finds only directories in /var/log .

-atime   Searches by last access time in whole days. +n = more than n days ago, -n = less than n days ago, n = exactly n days ago.   find . -atime -7 → Accessed within last 7 days.

-ctime   Searches by last status change time (permissions, ownership, or content changes) in days. Uses same +n , -n , n syntax.   find /tmp -ctime +10 → Changed more than 10 days ago.

-regex   Matches full path names using regular expressions for complex pattern matching. Often paired with -regextype to choose regex style.   find . -regextype posix-egrep -regex ".*(jpg|png)$" → Finds .jpg or .png files.

-exec    Runs a command on each found item. {} acts as a placeholder for the filename, and \; ends the command.   find . -name "*.log" -exec rm {} \; → Deletes all .log files.

-print   Displays the full path of each matching file. (Often implied, but can be used for clarity or scripting.)   find /etc -name "*.conf" -print → Lists .conf files.

Tools for Packaging —

1. tar - Tape Archive

Purpose: Collects multiple files/directories into a single archive file.

Does not compress by itself — just bundles.

2. gzip - Compression

Purpose: Compresses a single file (often a .tar archive).

Reduces disk usage, especially useful for tiny files that compress well.

Applications

Use Case            Benefit
Backup              One archive file is easier to store and restore.
File Sharing        Uploading/downloading one compressed file is faster and simpler.
Reduce Disk Usage   Compression saves space, especially for text-heavy files.

Compression Tools

Tool                   Notes
compress / ncompress   Legacy Unix compression; rarely used today.
gzip                   Fast, widely supported; good compression ratio.
bzip2                  Slower than gzip, but better compression.
xz                     High compression ratio, slowest among common tools.
7z ( p7zip-full )      Very high compression; supports many formats; less native on Unix.
Tarballs — Package + Compress

A tarball is a .tar archive optionally compressed:

.tar.gz or .tgz —> tar + gzip

.tar.bz2 —> tar + bzip2

.tar.xz —> tar + xz

Performance Trade-offs

Compression Speed
Format Memory Usage
Ratio (Compression/Decompression)

gzip Moderate Fast Low

bzip2 High Slower Moderate

xz Very High Slowest High

7z Very High Variable High

Lecture 6.2

1. Do one thing well

Focus on a single, well-defined task.

Example: grep searches text; it doesn’t sort, format, or compress.

2. Process lines of text, not binary

Text is human-readable and easier to debug.

Tools like awk , sed , and cut thrive on line-based input.

3. Use Regular Expressions

Enables powerful pattern matching and text manipulation.

Example: grep '^[A-Z]' finds lines starting with uppercase letters.

Makes tools flexible and expressive.

4. Default to Standard I/O

Read from stdin , write to stdout unless told otherwise.

Allows chaining tools with pipes ( | ) and redirection.

Example: cat [Link] | grep 'error' | sort

5. Don’t Be Chatty

Output only what’s necessary — no extra messages or formatting.

Keeps output clean for further processing.

Example: wc -l [Link] returns just the line count.

6. Generate Same Output Format Accepted as Input

Promotes interoperability between tools.

Example: sort takes plain text and outputs plain text — easy to feed into uniq ,
head , etc.
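That consistency is what makes pipelines work; each stage reads and writes plain lines:

```shell
printf "banana\napple\nbanana\n" | sort | uniq
# apple
# banana
```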

7. Let Someone Else Do the Hard Part

Reuse existing tools or libraries instead of reinventing.

Example: make delegates compilation to gcc ; tar uses gzip for compression.

8. Detour to Build Specialized Tools

If a task is complex, build a separate tool for it.

Encourages clarity and maintainability.

Example: diff and patch are specialized tools for comparing and applying
changes.

Elements of a Program / Script

Element           Purpose                                                                      Quick Example
#! (shebang)      First line in a script telling the OS which interpreter should run it.      #!/bin/bash → use the Bash shell
# comments        Human-readable notes ignored by the interpreter; useful for documentation.   # This script backs up my files
Commands          The actual instructions or operations to perform.                            ls -l /home
Loops             Repeat a block of code for multiple items or conditions.                     for file in *.txt; do echo "$file"; done
Variables         Store and reuse values.                                                      name="Soumyadip"
Case statements   Branching logic to handle multiple specific cases cleanly.                   case $day in Mon) echo "Start of week";; esac
Functions         Group reusable code blocks under a single name.                              greet(){ echo "Hello $1"; }

Choosing an interpreter

Tool                  Sweet spot                                      Strengths                         Watch-outs
Shell (sh/bash/zsh)   Orchestrating commands, file ops, simple glue   Processes, pipelines, ubiquity    Quoting, error handling, portability
Awk                   Structured text/CSV logs, quick reports         Field-aware, concise, fast        Steeper readability for non-Awk users
Sed                   Stream edits, substitutions, line filters       One-liners, speed                 Hard for multi-step logic
Python                Automation, data work, APIs, cross-platform     Libraries, readability, testing   Startup time vs one-liners
Ruby                  Scripting with expressive syntax, tooling       Clean OO, gems                    Less common on minimal systems
Perl                  Text munging, legacy scripts                    Powerful regex/text ops           Maintainability for large codebases

Sourcing a Script
When you source a script, you’re not starting anything new — you’re inviting
that script to run inside your current shell session.

When to use

Loading environment variables and functions

Initializing shell options or aliases

Activating virtual environments

Running helper scripts that change your directory or PATH

# [Link]
export PROJECT_HOME="$HOME/myproj"
cd "$PROJECT_HOME"

source [Link]
echo $PROJECT_HOME # still set

pwd # still in the new directory

Executing a script ( ./scriptname or bash scriptname )


When you execute a script, you spin up a separate process — a child shell
that lives briefly and then disappears

# [Link]
export TEMP_VAR=123

./[Link]
echo $TEMP_VAR # nothing printed, variable died with child process

1. Use absolute or relative path

Absolute path:

/home/soumya/scripts/[Link]

Always works regardless of your current directory.

Relative path:

./[Link]

Works if the script is in your current directory (you need ./ because . isn’t
usually in $PATH for safety).

2. Keep the script in a folder listed in $PATH

$PATH is a colon-separated list of directories your shell searches for
commands:

echo $PATH
/usr/local/bin:/usr/bin:/bin:/home/soumya/bin

If your script is in one of these folders (and has execute permissions), you
can run it without specifying the path:

[Link]

3. Watch out for the sequence in $PATH

The shell searches $PATH from left to right, so the first matching name
wins.

If there’s another [Link] earlier in $PATH , it will run instead of yours.

You can check which script will be run:

which [Link]

To override, either:

Move your directory earlier in $PATH , or

Call your script with its full path.

A login shell is typically the first shell you get after logging into a system (via a
console, SSH, or display manager). It loads a set of startup files aimed at setting
up your full working environment.
Load order (first file found in each step stops that step’s search):

1. /etc/profile – System-wide defaults for all users. Often sets global environment
variables, system paths, umask, etc.

2. ~/.bash_profile – User-specific login config. This is where you set your PATH,
aliases, and environment tweaks for login sessions.

3. If ~/.bash_profile is missing, Bash tries:

3. If ~/.bash_profile is missing, Bash tries:

~/.bash_login

~/.profile

A non-login shell is usually any shell you start inside an existing session — for
example, opening a new terminal tab, or running bash inside a terminal.

Key Differences

Feature            echo                                        printf
Simplicity         ✅ Super easy to use                         ❌ Requires format strings
Formatting power   ❌ Limited                                   ✅ Precise formatting like in C
Portability        ⚠ Can behave differently in POSIX vs Bash   ✅ Consistent across shells
Newline            Auto (unless -n )                           Only if you explicitly \n

Common Format Specifiers

Specifier Meaning Example Output


%s String "Hello"

%d / %i Integer (decimal) 42

%f Floating-point number 3.141593

%.nf Floating-point with n decimal places 3.14 for %.2f

%x Integer in hexadecimal (lowercase) 2a

%X Integer in hexadecimal (uppercase) 2A

%o Integer in octal 52

%c Single character A
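A few of these specifiers in action (the values are arbitrary examples):

```shell
printf "%.2f\n" 3.14159   # two decimal places -> 3.14
printf "%05d\n" 42        # zero-padded to width 5 -> 00042
printf "%x\n" 42          # lowercase hexadecimal -> 2a
printf "%c\n" A           # single character -> A
```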

read var

read— prompts the shell to accept input from standard input (keyboard,
unless redirected).

var — the variable name where the input will be stored.

Whatever the user types (until Enter) becomes the value of $var .

echo "What's your name?"


read name
echo "Hello, $name!"

What Is Command Substitution?
It means: “Run this command, take its output, and use it as a value.”

1. Backtick style (older)

var=`command`

2. $() style (modern, preferred)

var=$(command)

echo "$(echo "Today is $(date)")"

user=$(whoami)
echo "Logged in as: $user"

How it works —

1. The shell assigns each value in list to variable var , one at a time.

2. For each value, the commands inside do ... done run once.

3. Default list separation is by spaces (because IFS — Internal Field Separator — defaults to space, tab, newline).

for fruit in apple mango banana


do
echo "I like $fruit"
done
#I like apple

#I like mango
#I like banana

How it works:

1. var is compared against each pattern in sequence.

2. When a match is found, the corresponding commands run.

3. ;; tells the shell “stop checking further patterns” and move on.

4. Patterns can use wildcards ( * , ? , [...] ) just like filename globbing.

file="[Link]"

case $file in
*.txt)
echo "Plain text file"
;;
*.pdf)
echo "PDF document"
;;
*.jpg|*.png)
echo "Image file"
;;

*)
echo "Unknown file type"
;;
esac
#output -- PDF document

echo "Choose an option: start / stop / restart"


read action

case $action in
start)
echo "Starting service..."
;;
stop)
echo "Stopping service..."
;;
restart)
echo "Restarting service..."
;;
*)
echo "Unknown option: $action"
;;
esac

Here the first example is the multi-line style and the next one is the compact style

How It Works

The condition is usually a command or a test expression.

If the command returns exit status 0 (success), the then block runs.

If it returns non-zero, the block is skipped.

#Check if a file exists


if [ -f "[Link]" ]; then
echo "File exists."
fi

#check if two numbers are equal


a=5
b=5
if [ "$a" -eq "$b" ]; then
echo "Numbers match."
fi

if [ "$user" = "root" ]; then


echo "Admin access"
else
echo "Regular user"
fi

1. test Expression — test -e file

Checks if file exists.

Returns 0 (true) if it does.

Equivalent to [ -e file ] .

2. Single Square Brackets [ ... ] — [ -e file ]

POSIX-compliant.

Used for file tests, string comparisons, etc.

3. Double Square Brackets [[ ... ]] — [[ $ver == 5.* ]]

Bash-specific.

Supports pattern matching ( == , != , =~ ).

4. Double Parentheses (( ... )) — (( $v ** 2 > 10 ))

Arithmetic evaluation.

Supports operators like + , - , * , > , < , etc.

Returns 0 if the expression is true.

5. Command as Condition wc -l file

Any command can act as a condition.

If it exits with status 0, it’s considered true.

6. Pipeline as Condition who | grep "joy" > /dev/null

Combines commands.

> /dev/null suppresses output.

Often used with grep , awk , etc., to search or filter.

Summary Table

Syntax Purpose Shell Compatibility


test File/string checks POSIX
[ ... ] File/string checks POSIX
[[ ... ]] Advanced string/patterns Bash, Zsh
(( ... )) Arithmetic evaluation Bash, Zsh
command Command success/failure All shells
pipeline Combined command logic All shells
! condition Negation All shells

How it works

1. Evaluate condition

If true (exit status 0 ), run the commands inside the loop.

If false (non-zero), exit the loop immediately.

2. After running the commands, go back to step 1.

3. Continue until condition becomes false.

#Count down from 5


count=5
while [ $count -gt 0 ]
do
echo "Count: $count"
((count--))
done

#Read file line by line


while IFS= read -r line
do
echo "Line: $line"
done < [Link]

How it works —

Evaluate the condition.

If false → run the commands, then re-check.

If true → exit the loop immediately.

This makes until great for “wait until X is ready/true” workflows.

#Wait for a file to exist
until [ -f "[Link]" ]
do
echo "Still waiting for [Link]..."
sleep 3
done
echo "File found!"

When to use until vs while :

while → do something while a condition is true.

until → do something until a condition becomes true.

Step-by-Step Flow

Define the function — give it a descriptive name and put your command
sequence inside.

Call the function — the shell jumps into its code, runs each command in order.

Return to caller — once complete, control goes back to where you left off.

#A greeting function
say_hello() {

echo "Hello, $1!"
}

say_hello "Soumyadip"

#combining with loops


print_lines() {
for line in "$@"; do
echo "$line"
done
}

print_lines "Unix" "Linux" "Scripting"

Week 7

1. Enable tracing inside your script

set -x # turn on debug mode


# your commands here
set +x # (optional) turn it off later

Every command (and its expanded arguments) will be printed to stderr before
execution.

2. Run the whole script in trace mode

bash -x ./[Link]

Same effect as set -x , but applied from the outside — no code changes needed.

1. Logical AND ( && )

[ $a -gt 3 ] && [ $a -gt 7 ]

Both tests must be true for the whole expression to succeed.

Second test only runs if the first one passes.

2. Logical OR ( || )

[ $a -lt 3 ] || [ $a -gt 7 ]

The overall result is true if either test passes.

Second test runs only if the first one fails.
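A quick runnable illustration of the short-circuit behaviour (with a=5 as an arbitrary value):

```shell
a=5
[ "$a" -gt 3 ] && echo "a is greater than 3"   # runs: first test passed
[ "$a" -lt 3 ] || echo "a is not less than 3"  # runs: first test failed
```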

Quick Comparison

Method Built‑in? Speed Portability Recommended?


let ✅ Yes Fast Bash/Ksh 👍 Fine
expr ❌ No Slower Very high 👌 Legacy use
$[ ... ] ✅ Yes Fast Older Bash 🚫 Deprecated
$(( ... )) ✅ Yes Fastest POSIX Bash/Ksh ⭐ Best choice
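The same calculation written with three of these methods (the deprecated $[ ... ] form is omitted):

```shell
x=7
echo $(( x + 3 ))    # POSIX arithmetic expansion: 10
let y=x+3; echo $y   # Bash/Ksh builtin: 10
expr "$x" + 3        # external legacy command: 10
```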

For the left one:
a = 2.5, b = 3.2, c = 4

scale = 5
($a+$b)^$c # (2.5+3.2)^4 -- 1055.6001

How it works

Conditions are checked top to bottom.

The first true condition executes its block, and the rest are skipped.

If none match, the else block runs.

if [ "$score" -ge 90 ]; then


echo "Grade: A"
elif [ "$score" -ge 80 ]; then
echo "Grade: B"
elif [ "$score" -ge 70 ]; then
echo "Grade: C"
else
echo "Grade: F"
fi

Components

case $var in : Begins the case block, testing the value of $var .

op1) : Pattern to match. If $var equals op1 , run commandset1 .

op2 | op3) : Multiple patterns using | (logical OR).

*) : Wildcard — acts as the default case if no patterns match.

esac : Ends the case block (literally “case” spelled backward).

read -p "Enter a fruit: " fruit

case $fruit in
apple)
echo "Red and crunchy.";;
banana | mango)
echo "Sweet and tropical.";;
orange | lemon | lime)
echo "Citrusy goodness.";;
*)
echo "Unknown fruit.";;
esac

Components

(( a = $begin; a < $finish; a++ )) : Arithmetic expression block

a = $begin : Initialization

a < $finish : Condition

a++ : Increment

do ... done : Loop body

1
2
3
...
9
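The loop these components describe can be sketched as follows (begin=1 and finish=10 are assumed from the output shown):

```shell
begin=1; finish=10
for (( a = begin; a < finish; a++ ))
do
  echo $a
done
```

Prints 1 through 9, one value per line.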

Components

Initialization: a=$begin1, b=$begin2

Condition: a < $finish → loop continues as long as a is less than finish

Update: a++ (increment), b-- (decrement)

Body: echo $a $b → prints both variables each iteration

1 10
2 9
3 8
4 7
5 6
6 5
7 4
8 3
9 2
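A sketch of that two-variable loop, with begin1=1, begin2=10, finish=10 assumed from the output:

```shell
begin1=1; begin2=10; finish=10
for (( a = begin1, b = begin2; a < finish; a++, b-- ))
do
  echo $a $b
done
```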

filename=tmp.$$ : Creates a temporary file using the current process ID ( $$ ) to ensure uniqueness.

for (( ... )) : C-style loop from $begin to $finish - 1 .

echo $a : Prints each value of a .

> $filename : Redirects all output from the loop to the file.

#the file tmp.<PID> will contain:


1
2
3
...
9
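Put together, the temp-file redirection looks like this (loop bounds assumed from the output shown):

```shell
filename=tmp.$$          # $$ expands to the shell's PID, so the name is unique
begin=1; finish=10
for (( a = begin; a < finish; a++ ))
do
  echo $a
done > "$filename"       # the whole loop's output lands in the file

cat "$filename"          # shows 1..9
rm "$filename"           # clean up
```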

while [ $i -lt $n ] : Loop runs while i is less than 10 .

echo $i : Prints the current value of i .

(( i++ )) : Increments i .

if [ $i -eq 5 ] : Checks if i has reached 5 .

break : Exits the loop immediately when the condition is true.

0
1
2
3
4
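A runnable sketch of the loop described above (the increment is written with plain arithmetic expansion):

```shell
i=0; n=10
while [ $i -lt $n ]
do
  echo $i
  i=$(( i + 1 ))
  if [ $i -eq 5 ]; then
    break            # leave the loop as soon as i reaches 5
  fi
done
```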

Outer loop: while [ $i -lt $n ]

Inner loop: while [ $j -le $i ]

break 2 : Exits both the inner and outer loops when j == 7

The outer loop prints i , and the inner loop prints numbers from 0 to i .

When j reaches 7 , break 2 exits both loops, halting the entire script

Outer loop: Iterates i from 0 to 8

Inner loop: Iterates j from 1 to i+1

continue : Skips printing when j is between 4 and 5 (i.e., 4 and 5 are skipped)

j=4 and j=5 are skipped in every loop due to the continue condition.

All other values are printed normally.
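A sketch with assumed loop bounds; the point is the continue guard skipping j=4 and j=5:

```shell
for (( i = 0; i < 2; i++ ))
do
  for (( j = 1; j <= 6; j++ ))
  do
    if [ $j -ge 4 ] && [ $j -le 5 ]; then
      continue       # skip printing 4 and 5
    fi
    echo $j
  done
done
```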

$1 : Refers to the first command-line argument

-n "$1" : Checks if $1 is non-empty

shift : Moves all positional parameters one place to the left

$2 becomes $1 , $3 becomes $2 , and so on

(( i++ )) : Increments the counter

What it does

shift moves all positional parameters one place to the left:

$2 becomes $1

$3 becomes $2

The old $1 is discarded

$# (argument count) decreases by 1
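A small function demonstrating shift walking through the argument list (the function name is illustrative):

```shell
print_args() {
  local i=1
  while [ -n "$1" ]
  do
    echo "Arg $i: $1"
    shift              # $2 becomes $1, and $# drops by one
    i=$(( i + 1 ))
  done
}

print_args red green blue
```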

What does exec really do?

Program replacement: When you run


exec ./my-executable --my-options --my-args

the shell process you were in is gone. It's replaced in memory by the new
program. No new process is spawned — the PID stays the same — and if the
program ends, there’s no shell to “return” to.

I/O redirection without program launch:


exec > [Link]

This doesn't replace the shell; it just changes where the shell's standard
output goes (here, to [Link] ) for the rest of that shell session or script.

What does it actually do?

Argument joining → It concatenates all arguments into a single string.

Re-parsing → That string is then fed back into the shell for execution.

Exit status → You get the return code of whatever that executed command
produced.
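A tiny demonstration of this join-then-re-parse behaviour using eval:

```shell
cmd="echo"
args="hello world"
eval "$cmd $args"    # joined into "echo hello world", re-parsed, executed
status=$?            # exit status of the evaluated command
echo "exit status: $status"
```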

step by step

getopts "[Link]" options

The string "[Link]" defines valid options.

a → flag, no argument.

b: → flag, requires an argument.

c: → flag, requires an argument.

On each loop, getopts grabs the next option from the command line and:

Stores the option letter in the variable options .

Stores any argument (for b or c ) in $OPTARG .

The case statement decides how to respond to each option.

If an invalid option is found, case prints the usage message.
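A runnable sketch of this loop; the option string "ab:c:" and the function name are assumptions matching the description above:

```shell
parse() {
  local OPTIND=1 options        # local OPTIND lets the function be called repeatedly
  while getopts "ab:c:" options; do
    case $options in
      a) echo "flag a set" ;;          # -a takes no argument
      b) echo "b = $OPTARG" ;;         # -b requires an argument
      c) echo "c = $OPTARG" ;;         # -c requires an argument
      *) echo "usage: parse [-a] [-b arg] [-c arg]" ;;
    esac
  done
}

parse -a -b hello
```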

1. echo select a middle one

Prints an instruction for the user.

2. select i in {1..10}

Generates a numbered menu of values from 1 to 10.

Bash automatically prompts with #? and waits for the user to type a choice
number (not the value itself).

The chosen value is stored in variable i .

3. do ... done

This loop runs each time the user makes a selection.

4. case $i in ... esac

Groups the chosen number into categories:

1, 2, 3 → “small one”

8, 9, 10 → “big one”

4, 5, 6, 7 → “the right one” and then break exits the loop.

5. break

Stops the select loop when a “middle” number is chosen.

6. echo selection completed with $i

Prints the final chosen value after the loop ends.
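The menu described above can be sketched as follows; feeding the choice on stdin makes it runnable non-interactively:

```shell
run_menu() {
  echo "select a middle one"
  select i in {1..10}
  do
    case $i in
      1|2|3)   echo "small one" ;;
      8|9|10)  echo "big one" ;;
      4|5|6|7) echo "the right one"; break ;;
    esac
  done
  echo "selection completed with $i"
}

run_menu <<< "5" 2>/dev/null   # "5" answers Bash's #? prompt; menu text goes to stderr
```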

Week 8
Automating Scripts

Cron is a daemon (background service) that runs commands or scripts


automatically at specified dates and times.
You set the schedule using a crontab (cron table), which is just a text file of timing
rules and commands.
Related tools

Tool Purpose
at Run a job once at a specific time.
crontab Edit, list, or remove your personal cron jobs.

anacron : Like cron, but ensures jobs run even if the system was off at the scheduled time (good for laptops).

logrotate : Not a scheduler, but often used with cron to rotate and clean up log files on schedule.

Cron jobs use five time fields before the command:

Field Value Meaning here

Minute 5 Runs at the 5th minute of the hour

Hour 2 Runs during the 02:00 hour

Day of Month * Any day of the month

Month * Any month

Day of Week 1-5 Monday through Friday (working days)

So: Every weekday at 02:05 AM.
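As a concrete crontab entry (the script path is hypothetical):

```
# min hour dom mon dow  command
5 2 * * 1-5 /home/user/backup.sh
```

Edit your crontab with `crontab -e` and list it with `crontab -l`.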

These contain scripts that start or stop system services.

In the classic SysV init system, /etc/init.d/ holds the actual service control
scripts (start, stop, restart).

With newer upstart or systemd , /etc/init/ or /lib/systemd/system/ may take over, but the
concept is similar.

SED — A Language for processing text streams
Introduction
● It is a programming language
● sed is an abbreviation for stream editor
● It is a part of POSIX
● sed precedes awk

When sed processes data, it works in cycles — one per input line — and
maintains two key buffers:

Pattern space : The “active” workspace. Each line of input is loaded here, processed by the script, then (unless told otherwise) printed.

Hold space : An auxiliary buffer where you can store data for later use. Content here survives between cycles until you explicitly swap or append to/from it.

The execution cycle:

Read → sed reads one line from the input stream into the pattern space.

Execute → It runs through your script from top to bottom, checking each
command’s address pattern.

If a pattern matches, the associated action is executed.

If no pattern matches, nothing happens to that line unless there’s a default action.

Output → Unless the -n option is used, the (possibly modified) pattern space
is printed.

Repeat → Load the next line, pattern space is overwritten, hold space remains
unchanged unless you modify it.

Inline command mode —
sed -e 's/hello/world/g' [Link]

Here, -e introduces the editing script directly on the command line. The
s/hello/world/g command replaces all occurrences of “hello” with “world” in each

line of [Link] .

Script file mode —


sed -f ./[Link] [Link]

The -f flag tells sed to read its editing commands from a file ( [Link] in this
case).

The four main components

Label ( :label ) : A named jump point for branching inside your script. Useful for loops or skipping commands. Example: :loop

Address / Pattern : Tells sed which lines the action applies to. Can be:
• Line numbers ( 5 , 10,20 )
• Regex patterns ( /ERROR/ )
• Ranges by address or pattern ( /start/,/end/ )
• Negated with ! to apply when the pattern doesn’t match
Examples: /foo/ , 5,10 , /^#/!

Action : The operation to perform — substitution ( s ), delete ( d ), print ( p ), insert ( i ), transform ( y ), flow control ( b , t ), etc. Example: s/foo/bar/

Options : Extra arguments or flags for the action. For s , these are flags like g , p , or a numeric replacement count. For y , they define from→to character sets. Examples: s/foo/bar/g , s/foo/bar/2

When sed reads a line into pattern space:

1. It checks the address pattern — if the current line matches, the action is
executed.

2. If there’s a label, branch/jump commands may change the flow.

3. Options fine‑tune how the action behaves.

4. The result is either printed (default) or skipped, depending on your flags.

#!/usr/bin/sed -f
# Label for looping
:again
/^#/d # Delete comment lines
/ERROR/{
s/ERROR/WARNING/g
t again # If substitution happened, jump back to :again
}

Grouping commands

{ cmd; cmd; }

How it works

The opening brace { must be on the same line as the address/pattern or


following it, usually with a space.

Each command inside is separated by a semicolon ; or placed on its own line.

The closing brace } ends the group.

This grouping is useful when you want multiple edits to apply to the same
match before the next line is read.

sed '/ERROR/ { s/ERROR/WARNING/; s/CRIT/CRITICAL/ }' logfile

Address /ERROR/ is checked for the current line.

If it matches:

First substitution changes ERROR → WARNING .

Second substitution changes CRIT → CRITICAL .

Then the cycle continues.
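A one-liner you can try (the log line is invented for illustration; GNU sed syntax):

```shell
echo "ERROR: CRIT failure" | sed '/ERROR/ { s/ERROR/WARNING/; s/CRIT/CRITICAL/ }'
# both substitutions apply to the matching line -> WARNING: CRITICAL failure
```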

Select by numbers
These operate purely on line positions:

5 → matches only line 5

$ → matches the last line

% → matches the entire file (GNU sed extension)

1~3 → matches every 3rd line starting from line 1 ( 1, 4, 7, ... )
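For instance, the GNU-only step address on a five-line stream:

```shell
printf 'a\nb\nc\nd\ne\n' | sed -n '1~2p'   # print line 1, then every 2nd line
```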

Selecting by text
Uses regular expressions wrapped in slashes:

/regexp/ → matches any line whose text matches the given pattern

Range Addresses
Target continuous spans of lines:

/regexp1/,/regexp2/ → from first match of regexp1 to first match of regexp2 (inclusive)

/regexp/, +4 → a match plus the next 4 lines

5,15 → from line 5 to line 15

5,/regexp/ → from line 5 through the line that matches regexp

/regexp/,~2 → from the match through the next line whose line number is a multiple of 2 (GNU sed extension)
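A small demonstration of a regex range on numbered lines:

```shell
printf '1\n2\n3\n4\n5\n6\n' | sed -n '/2/,/4/p'   # from first /2/ match to first /4/ match
```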

Week 9
AWK — A language for processing fields and
records

At its heart, awk follows this execution model:


pattern { action }

Pattern → decides when a block runs (can be regex, range, expression, or omitted)

Action → the commands to run for matching input lines

If no pattern is given, the action runs for all lines.

If no action is given, the default action is to print matching lines.

Input as Records

awk sees your input as a stream of records.

By default:

Record separator ( RS ) = newline ( \n ), so each line is a record.

You can change RS to split on other delimiters (e.g., blank lines, a comma, even a regex).

Records as Fields

Each record is automatically split into fields.

By default:

Field separator ( FS ) = space(s) or tabs.

Fields are accessed with $1 , $2 , …, $NF (last field), where NF is the number of fields in
the current record.

No manual splitting needed — it’s built in.

Pattern-Action Cycle

For each record:

1. The record is split into fields.

2. awk checks if the pattern for a block matches:


pattern { action }

3. If matched → run the action on that record.

4. Repeat for all pattern–action blocks in the program.

Special Pattern Blocks

BEGIN { ... } → runs before any records are processed (good for headers, setup).

END { ... } → runs after all records are processed (good for summaries, totals).
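All three block types in one pipeline (the numbers are arbitrary sample input):

```shell
printf '3\n5\n7\n' | awk '
  BEGIN { print "processing..." }   # runs once, before input
        { sum += $1 }               # runs for every record
  END   { print "sum:", sum }       # runs once, after input
'
```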

cat /etc/passwd | awk -F":" '{ print $1 }'

-F":" → sets the field separator ( FS ) to a colon, which is the delimiter in /etc/passwd .

{ print $1 } → prints the first field (the username).

cat /etc/passwd | ... feeds the file into awk via a pipe — though you could skip cat and just do: awk -F":" '{ print $1 }' /etc/passwd

1. Pattern half

Think of patterns as awk ’s gatekeepers. They decide whether the procedure block should
run for a given line.

BEGIN → special block that runs once before reading input (good for setup like FS=":" ).

END → runs once after all input is processed (great for summaries/totals).

General expression → any condition that evaluates to true/false, e.g. $3 > 1000 .

Regex → matches text patterns, e.g. /bash$/ for login shells ending in bash .

Relational expression → comparisons like $1 == "root" .

Pattern-matching expression → e.g. $0 ~ /pattern/ or $2 !~ /nologin/ .

💡 If there’s no pattern, awk just runs the procedure for every record.

2. Procedure Half
Once a pattern allows a record through, the procedure defines the action.

Variable assignment → count = count + 1

Array assignment → names[$1] = $3

Input/output commands → print , printf , getline , file writes.

Built-in functions → length($1) , toupper($2) , split() .

User-defined functions → reusable logic blocks.

Control loops → for , while , if/else , break , continue .

BEGIN { ... } : Runs once, before reading any input. Can appear anywhere in the script; executes before the first record. Use for setup: set FS , initialize counters, print headers.

END { ... } : Runs once, after all input is processed (at the very end). Use for summaries: print totals and results.

pattern { ... } : Runs for every record where the pattern is true. Patterns can be logical ( && , || , ! ), regex ( /bash$/ ), or ranges ( NR==2,NR==5 ). Use for filtered processing.

{ ... } (no pattern) : Runs for every record (an implicit “always true”); the default action for each line.

Flow in Plain Terms

1. BEGIN block(s) fire once — perfect for setting up environment.

2. For each line:

Evaluate each pattern in your script.

If pattern matches, run its associated commands.

If a bare { ... } exists, it runs every time.

3. END block(s) execute once at the end.

1. Assignment Operators

= → assign a value: userCount = 0

+= , -= , *= , /= , %= → update in place: sum += $3

^= / **= → power-update: x **= 2 squares x .

2. Logical Operators

&& (AND) — both must be true.

|| (OR) — either can be true.

Negation uses ! , e.g., $7 !~ /nologin/ .

3. Algebraic Operators

+ , - , * , / , % — basic arithmetic

^ or ** — exponentiation.

4. Relational Operators
> , < , >= , <= , == , !=

Associative → indexes (keys) don’t have to be numbers — they can be strings like "root" or
"bash" .

Sparse storage → only stores elements you actually assign; no fixed size, no wasted
memory for unused indices.

Dynamic typing → keys and values can be assigned on the fly without declarations.

Operation Example What it does

Assign arr["root"] = 0 Stores 0 at key "root" .

Iterate for (u in arr) print u, arr[u] Loops over all keys (unordered).

Delete delete arr["root"] Removes that key/value pair.

Count count[$7]++ Tally how many times a value (e.g., shell path) appears.
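The tally idiom from the table, end to end (sample words are arbitrary; sort makes the unordered for-in output predictable):

```shell
printf 'apple\nbanana\napple\n' \
  | awk '{ count[$1]++ } END { for (k in count) print k, count[k] }' \
  | sort
```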

For-in loop : for (a in array) { print a } — Iterate over keys in an associative array (unordered). Useful for summarizing tallies at the end.

If statement : if (a > b) { print a } — Run code only when a condition is true — often used in filtering inside the procedure block.

Counter loop : for (i=1; i<n; i++) { print i } — Classic index-based iteration for numeric ranges.

While loop : while (a < n) { print a; a++ } — Repeat while a condition remains true — good for streaming data or manual increments.

Do-while loop : do { print a; a++ } while (a < n) — Like while , but ensures the block runs at least once before the condition check.

mylib — your function library

function name(args) { ... } → defines a reusable block of code.

Arguments are local variables within that function.

Can call built‑in functions (like rand() ) or use $n fields from the current record.

Lives in a separate file so you can import it anywhere.

[Link] — the script that uses the library

BEGIN sets up variables.

Main block calls your custom functions just like built‑ins.

Output is the combo of reusable logic ( mylib ) + script flow ( [Link] ).
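A minimal sketch of the library pattern, using temporary files and a hypothetical double() function; multiple -f flags load the library before the main script:

```shell
run_demo() {
  lib=$(mktemp); main=$(mktemp)
  echo 'function double(x) { return 2 * x }' > "$lib"    # the "mylib" part
  echo '{ print double($1) }'              > "$main"     # the main script
  printf '3\n4\n' | awk -f "$lib" -f "$main"             # prints 6 then 8
  rm -f "$lib" "$main"
}

run_demo
```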

Control letter Meaning Example output
%c ASCII character printf "%c", 65 → A

%d / %i Integer (decimal) 42

%e Scientific notation 3.141593e+00

%f Floating point 3.141593

%g Shortest of %e / %f Depends on value


%o Octal 52 for decimal 42

%s String "hello"

%x / %X Hexadecimal (lower/upper) 2a / 2A

printf "format", a, b, c
#"format" → a string containing format specifiers (placeholders)
#a, b, c → values to insert in order into those placeholders

width → minimum field width: %10s (pads left with spaces to width 10)

precision → decimal places for floats: %.2f → 3.14

flag → left‑align within the given width: %-10s

bash + awk
● Including awk inside shell script
● heredoc feature
● Use with other shell scripts on command line
using pipe

AWK

Level 1: Absolute basics

> Basic Syntax = awk 'pattern {action}' input_file


-> $0 = whole line ; $1 = 1st field ; $2 = 2nd field ; ...

Level 2: Patterns and Simple Conditions

> Regex = /regex/


-> awk '/^[0-9]/ {print $0}' [Link] = lines starting with any digit.
-> awk '$1==1 {print $0}' [Link] = lines having 1st field as 1 (starting with 1)
-> awk '$2 ~ /^a/ {print $0}' [Link] = lines where 2nd field starts with an 'a'.

> Built-in Vars:-

NR: Record number

NF: Number of fields in a line.

FS: Field Separator (default = space).

awk '{ print NR ": " $0 }' [Link] = Print Line nos.

Level 3: Actions with Math and Strings - Doing Calcs

> awk '{print $1 + $2}' [Link] = Print sum of fields for each line.
-> awk '{print "Total row sum: " $1 + $2}' [Link] = Concatenate.
-> awk '$1!~/^-/ {print $1+$2}' [Link] = takes lines whose 1st field is +ve.

> Creating custom var

awk '{sum += $1+$2; print "Running total= " sum}' [Link] = Running total, last print will be
total sum.

Level 4: BEGIN and END Blocks

> BEGIN { actions }: Runs once before processing lines (setup)


-> END { actions }: Runs once after all lines (summary/cleanup)
-> Regular patterns/actions in-b/w.
-> Syntax: awk 'BEGIN { ... } pattern { ... } END { ... }' file

> awk 'BEGIN { sum=0 } { sum += $1 + $2 } END { print "Grand Total: " sum }' [Link] =
Prints the complete total at the end

Level 5: Control Flow - If Statements and Loops

> If-Else Syntax: if (condition) { ... } else { ... }


-> Conditions: ==, !=, >, <, >=, <=

> awk '{ if ($1 % 2 == 0) print $1 " even"; else print $1 " odd" }' [Link] = Prints num at 1st
field and even/odd.

> Loops: While, For, Do-While

For: for (i=1 (init); i<=NF (stop condition); i++ (increment/decrement)) {print $i} (action) = loops through fields

eg: awk '{ for (i=NF; i>=1; i--) printf "%s ", $i; printf "\n" }' [Link] = prints fields backwards (uses C-style printf, cuz a simple {print $i} will print fields one by one on separate lines).

Level 6: Arrays

> Arrays hold multiple vals, indexed by nos or strs.


-> Syntax: array[index] = value

Access: array[index]

Loop: for (key in array) { ... }

awk '{ count[$1]++ } END { for (fruit in count) print fruit ": " count[fruit] }' [Link] = how
many times each fruit appears at the beginning (1st field).

awk '{ for (i=1; i<=NF; i++) count[$i]++ } END { for (fruit in count) print fruit ": " count[fruit] }'
[Link] = count of each fruit.

Level 7: Functions

> Syntax: function name(params) { ... return value }

awk 'function add(a,b) { return a+b } { print add($1, $2) }' [Link] = prints sum of 1st and 2nd
field for each line.

awk 'function smartCalc(a, b) { if (b > a) { return a - b } else { return a + b } } { print


smartCalc($1, $2) }' [Link]

> Built-in Funcs:

length(string) = Length ; toupper(str) = Uppercase ; split(str, arr, sep) = Split into array

echo "a,b,c" | awk '{ split($0, arr, ","); print arr[3] }' = split into array, separated by commas, print 3rd entry (indexing starts from 1).

Level 8: Advanced Topics

> FS = ',' for csv. Or, output separator OFS='\t'.

awk 'BEGIN { FS="," } NR > 1 { print $1 " is " $2 " years old" }' [Link] = Print 'name' is 'age' years old for every line, starting from 2nd line.

> Full Script execution

#!/usr/bin/awk -f
BEGIN { print "Starting fruit count" }
{ for (i=1;i<=NF;i++) counts[$i]++ }
END {
for (f in counts) {

if (counts[f] > 1) {print f " appears " counts[f] " times."}
}
print "Done!"
}
= prints 'fruits' and how many times they appear for all such fruits appearing >1 times.

save as [Link] > awk -f [Link] [Link]

> Piping in bash: ls -l | awk '{ print $9 }' = prints filenames

Week 10
Lecture 10.1 (Version Control)

Save —> a new version of code

“Make” —> compile only those parts of code that have changed

version control —> trace back to a working version of code


versions —> #users * # files * # version
SVN — Centrally hosted, managed version control system

GIT — Distributed version control system

Knowing your hardware

⛔ clinfo

clinfo is a simple command-line application that enumerates all possible


(known) properties of the OpenCL platform and devices available on the
system.
You get detailed specs:

Max compute units (like cores in OpenCL terms)

Max work-group size (important for parallel kernel design)

Clock frequency

Memory sizes

Global memory

Local memory

Constant memory

Supported extensions (e.g., cl_khr_fp64 for double precision)

This is super useful if you want to:

Compare CPU vs GPU performance potential

Check whether your device supports certain features (like images, double precision, OpenCL 3.0, etc.)

Developers use clinfo before running OpenCL programs to:

Choose the right device

Tune kernels based on memory size/work-group size

Confirm if an extension is supported

clinfo is like a “system profiler for OpenCL” — it helps you see what devices and drivers you can target with OpenCL code.

Options

-a , --all-props — tries to fetch every possible property

-A , --always-all-props — same as -a but also shows errors

--human — produces readable output

-l, --list — list of platforms/devices by name

clinfo -l → Quick check of what devices exist

clinfo --json > [Link] → Save OpenCL info to JSON for scripting/logging

⛔ coreutils

coreutils (short for GNU Core Utilities) is a package that contains the basic command-line tools needed on almost every Linux/Unix system — ex: ls , cp , mkdir etc

⛔ dmidecode

dmidecode is a command-line utility that retrieves hardware information by reading the DMI (Desktop Management Interface) / SMBIOS tables provided by the system’s BIOS or UEFI firmware.

BIOS info → vendor, version, release date

System info → manufacturer, product name, serial number, UUID

Motherboard info → vendor, product, version, serial

Processor info → family, model, speed, cores

Memory info → size, type (DDR3/DDR4), speed, slots used/empty

Chassis info → type, vendor, serial

Cache info → L1/L2/L3 sizes

⛔ fdisk

fdisk is a command-line partitioning tool used to view, create, delete, and manage partitions on hard disks.

It’s a low-level disk management utility which works directly with disk partition tables (MBR, GPT)

⛔ hardinfo

hardinfo is a system profiler and benchmark tool for Linux. It gathers detailed information about your hardware and operating system, and can also run some simple benchmarks.

⛔ hdparm

hdparm is a command-line utility to get and set low-level parameters of hard drives and SSDs (mostly SATA/IDE, but some options also work with NVMe).
It can:

Query disk information

Test raw disk performance (read speed)

Enable/disable disk features (like write caching, power management, etc.)

⚠️ Since it can change firmware-level settings, misuse may cause data loss or hardware issues. Always be cautious.

⛔ hwinfo

hwinfo is a comprehensive hardware information tool for Linux. It probes the system and reports details about all hardware components — CPU, memory, storage, network, USB, sensors, BIOS, etc.

⛔ memtester

What it is: A userspace tool to stress test RAM for errors.

Use case: Detect bad memory (like memtest86+, but runs inside a
running OS).

⛔ net-tools

What it is: An old networking toolkit (before iproute2 ).

Includes commands:

ifconfig → show/set network interfaces

netstat → network connections, routing tables

route → show/set routing table

arp → ARP table

⛔ pciutils

What it is: Tools to list and query PCI devices.

Main command: lspci

⛔ procps

What it is: Package containing process and system monitoring tools.

Includes commands:

ps → list processes

top → interactive process monitor

uptime → system uptime & load average

free → memory usage

kill → send signals to processes

watch → repeat commands every X seconds

Critical for system administration.

⛔ sysstat

What it is: Performance monitoring toolkit.

Includes commands:

iostat → CPU & disk I/O stats

mpstat → per-CPU usage

pidstat → per-process resource usage

sar → collect & report system activity logs over time

Great for debugging performance bottlenecks.

⛔ upower

What it is: Power management daemon/tool (mostly for laptops).

Main command: upower

⛔ util-linux

What it is: A collection of essential low-level system tools.

Includes commands (just a few examples):

lsblk → list block devices

blkid → show UUIDs and labels of partitions

mount / umount → mount/unmount filesystems

fdisk / cfdisk → partition disks

kill , more , setsid , agetty , dmesg … and MANY more

Basically, the backbone of Linux system administration.

Managing Storage

⛔ LVM (Logical Volume Manager)

It is a storage management system in Linux/Unix that allows flexible disk space management.

Instead of working directly with physical disks and partitions, LVM adds a layer of abstraction, letting you create, resize, and manage storage volumes more easily.

PV (Physical Volume)
Actual storage devices (e.g., /dev/sda1 , /dev/nvme0n1p2 ) initialized for
use by LVM.

VG (Volume Group)
A pool of storage created by combining one or more PVs. Think of it
as a "storage container".

LV (Logical Volume)
Virtual partitions created from a VG. These act like "logical disks"
that you can format and mount (e.g., /home , /var ).

# 1. Initialize physical volume
pvcreate /dev/sdb1

# 2. Create a volume group
vgcreate my_vg /dev/sdb1

# 3. Create a logical volume (carve 10 GB out of the VG)
lvcreate -L 10G -n my_lv my_vg

# 4. Format the logical volume
mkfs.ext4 /dev/my_vg/my_lv

# 5. Mount it
mount /dev/my_vg/my_lv /mnt

LVM volumes are dynamic, whereas traditional partitions are static

Used for servers, databases, and VMs where storage needs change

Makes disk management simpler in large environments
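The "dynamic" part is what makes LVM attractive: a volume can be grown later without repartitioning. A sketch, reusing the my_vg/my_lv names from the example above (needs root and an existing LVM setup, so it exits early when absent):

```shell
#!/bin/sh
# Grow a logical volume online (skips gracefully without LVM / my_vg)
command -v lvextend >/dev/null 2>&1 || { echo "LVM tools not installed"; exit 0; }
vgs my_vg >/dev/null 2>&1 || { echo "volume group my_vg not present"; exit 0; }

# Grow my_lv by 5 GB; -r also resizes the filesystem inside (via resize2fs)
lvextend -r -L +5G /dev/my_vg/my_lv

# Equivalent two-step form:
#   lvextend -L +5G /dev/my_vg/my_lv
#   resize2fs /dev/my_vg/my_lv
```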

RAID (Redundant Array of Independent Disks)

Technique of distributing data across multiple disks
to achieve redundancy, speed, or increased capacity.

Improves storage reliability and/or performance.

The data is split (striped), mirrored, or parity-protected

Redundancy — Protect against disk failures

Speed — Faster read/write via parallel disk access

Capacity — Combines multiple disks into one logical unit

Hardware RAID

Dedicated RAID controller card

Software RAID

Implemented by the OS (Linux mdadm)
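A software-RAID sketch with mdadm: `/dev/sdb`, `/dev/sdc`, and the opt-in variable `CONFIRM_RAID_SETUP` are placeholders of mine, and `--create` wipes the listed disks, so the guard keeps the script a no-op unless you explicitly opt in.

```shell
#!/bin/sh
# Sketch: build a 2-disk RAID 1 mirror with mdadm.
# DANGER: --create destroys whatever is on the listed disks.
[ "${CONFIRM_RAID_SETUP:-no}" = "yes" ] || { echo "set CONFIRM_RAID_SETUP=yes to run"; exit 0; }

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0          # format the new array like any block device
cat /proc/mdstat            # kernel's view of array status
mdadm --detail /dev/md0     # detailed array info
```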

RAID 0 – Striping

Splits data across multiple disks.

✅ High performance (read/write).


❌ No redundancy (if one disk fails → all data lost).
RAID 1 – Mirroring

Duplicates data across disks.

✅ High redundancy (if one disk fails, data is safe).


❌ Storage efficiency = 50% (two 1TB disks → only 1TB usable).
RAID 5 – Striping with Distributed Parity

Needs ≥ 3 disks.

Data + parity info spread across disks.

✅ Balance of performance, redundancy, and efficiency.


❌ Slower writes (parity calculation), one disk failure tolerated.
RAID 6 – Striping with Double Parity

Needs ≥ 4 disks.

Like RAID 5 but with two parity blocks.

✅ Survives failure of two disks.


❌ Slower writes than RAID 5.
RAID 10 (1+0) – Mirroring + Striping

Combines RAID 1 (mirroring) + RAID 0 (striping).

Needs ≥ 4 disks.

✅ High performance + redundancy.


❌ Only 50% usable storage.
RAID Mode | Min Drives | Description | Comments
RAID 0    | 2 | Striping | Speed up
RAID 1    | 2 | Mirroring | Read is n times faster; n-1 drive failures tolerated
RAID 5    | 3 | Block-level striping with distributed parity | 1 drive failure tolerated; read is n times faster; write is n-1 times faster
RAID 6    | 4 | Block-level striping with dual distributed parity | 2 drive failures tolerated; read is n times faster; write is n-2 times faster
Prompt Strings

Context for prompt strings
● bash, dash, zsh, ksh, csh
● python
● octave
● gnuplot
● sage

Bash Prompts
● PS1 : primary prompt string : $
● PS2 : secondary prompt for multi-line input : >
● PS3 : prompt string in select loops : #?
● PS4 : prompt string for execution trace : +
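These can be tried directly in an interactive bash session; a sketch (the escapes \u, \h, and \w expand to user, host, and current working directory):

```shell
# Ubuntu-style primary prompt: user@host:cwd$
PS1='\u@\h:\w\$ '

# Secondary prompt shown when a command continues onto the next line
PS2='... '

# Prefix for 'set -x' trace output; ${LINENO} expands at trace time
PS4='+ line ${LINENO}: '
```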

Python command line

The prompts ps1 and ps2 are defined in the sys module

Change sys.ps1 and sys.ps2 if needed

Override the __str__ method to get a dynamic prompt
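A minimal sketch of that idea: sys.ps1 may be any object, and the interactive interpreter calls str() on it before each prompt, so a class with a custom __str__ yields a prompt that changes over time (the class name here is my own):

```python
import sys

class CountingPrompt:
    """Prompt that shows how many prompts have been displayed."""
    def __init__(self):
        self.count = 0

    def __str__(self):
        # Called by the interactive interpreter before every prompt
        self.count += 1
        return f"[{self.count}] >>> "

# Takes effect only in an interactive session:
# sys.ps1 = CountingPrompt()
# sys.ps2 = "... "
```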

