
Log Parsing Cheat Sheet

GREP
GREP allows you to search for patterns in files. Use ZGREP for GZIP files.
$ grep <pattern> file.log
-n: Number the lines that match
-i: Case-insensitive search
-v: Invert the match
-E: Extended regex
-c: Count the number of matches
-l: List filenames that match the pattern
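A minimal sketch of the flags above, using a hypothetical sample log created inline for illustration:

```shell
# Hypothetical sample log, created here only for this example
printf 'INFO start\nERROR disk full\ninfo retry\nERROR timeout\n' > sample.log

# -i matches any case, -c counts matching lines
grep -ic 'error' sample.log    # prints 2

# -n prefixes each matching line with its line number
grep -n 'ERROR' sample.log

# -v keeps only lines that do NOT match
grep -vc 'ERROR' sample.log    # prints 2
```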
NGREP
NGREP is used for analyzing network packets.
$ ngrep -I file.pcap
-d: Specify the network interface
-i: Case-insensitive search
-x: Print in alternate hexdump format
-t: Print the timestamp
-I: Read a pcap file
CUT
The CUT command is used to parse fields from delimited logs.
$ cut -d ":" -f 2 file.log
-d: Set the field delimiter
-f: Select the field numbers
-c: Select character positions
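A short sketch of -d/-f versus -c, on a hypothetical passwd-style file created just for the example:

```shell
# Hypothetical colon-delimited file for illustration
printf 'root:x:0:0\ndaemon:x:1:1\n' > users.txt

# -d sets the delimiter, -f picks the field: print the first column
cut -d ':' -f 1 users.txt    # prints root, then daemon

# -c selects by character position instead of fields
cut -c 1-4 users.txt         # first four characters of each line
```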
SED
SED (Stream Editor) is used to replace strings in a file.
$ sed 's/regex/replace/g' file.log
s: Search
g: Replace globally
d: Delete
w: Append to file
-e: Execute command
-n: Suppress output
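A minimal sketch of the s, g, d, and -n behaviors, on a hypothetical sample file:

```shell
# Hypothetical sample file for illustration
printf 'foo bar foo\nbaz qux\n' > demo.txt

# g replaces every occurrence on a line, not just the first
sed 's/foo/FOO/g' demo.txt       # first line becomes: FOO bar FOO

# d deletes matching lines
sed '/qux/d' demo.txt            # prints only: foo bar foo

# -n suppresses default output; the p flag prints only substituted lines
sed -n 's/bar/BAR/p' demo.txt    # prints: foo BAR foo
```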
SORT
SORT is used to sort a file.
$ sort foo.txt
-o: Output to a file
-r: Reverse order
-n: Numerical sort
-k: Sort by column
-c: Check if already ordered
-u: Sort and remove duplicates
-f: Ignore case
-h: Human-readable numeric sort
UNIQ
UNIQ is used to extract unique occurrences. The input must be sorted, because UNIQ only collapses adjacent duplicates.
$ uniq foo.txt
-c: Count the number of duplicates
-d: Print only duplicated lines
-i: Case insensitive
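A short sketch of the usual sort-then-uniq pipeline, on hypothetical sample data:

```shell
# Hypothetical unsorted input for illustration
printf 'b\na\nb\nc\na\n' > items.txt

# uniq only collapses adjacent duplicates, so sort first
sort items.txt | uniq       # prints: a, b, c

# -c prefixes each value with its count; -d keeps only duplicated values
sort items.txt | uniq -c
sort items.txt | uniq -d    # prints: a, b
```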

DIFF
DIFF is used to display differences between files by comparing them line by line.
$ diff foo.log bar.log
How to read the output:
a: Add
c: Change
d: Delete
#: Line numbers
<: Line from file 1
>: Line from file 2
AWK
AWK is a programming language used to manipulate data.
$ awk '{print $2}' foo.log
Print the first column with separator ":":
$ awk -F: '{print $1}' /etc/passwd
Print the lines of the second file that do not appear in the first:
$ awk 'FNR==NR {a[$0]++; next} !($0 in a)' f1.txt f2.txt
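A minimal sketch of both idioms above, using hypothetical sample files created inline:

```shell
# Hypothetical colon-delimited file, similar in shape to /etc/passwd
printf 'alice:x:1000\nbob:x:1001\n' > pw.txt

# -F sets the field separator; $1 is the first field
awk -F: '{print $1}' pw.txt    # prints: alice, then bob

# Two-file idiom: while reading f1 (FNR==NR), remember each line and skip;
# while reading f2, print only lines not remembered from f1
printf 'a\nb\n' > f1.txt
printf 'b\nc\n' > f2.txt
awk 'FNR==NR {a[$0]++; next} !($0 in a)' f1.txt f2.txt    # prints: c
```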

@FrØgger_
Thomas Roccia
HEAD
HEAD is used to display the first 10 lines of a file by default.
$ head file.log
-n: Number of lines to display
-c: Number of bytes to display
TAIL
TAIL is used to display the last 10 lines of a file by default.
$ tail file.log
-n: Number of lines to display
-f: Wait for additional data to be appended
-F: Same as -f, but keeps following even if the file is rotated
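A short sketch of HEAD and TAIL slicing the same hypothetical file:

```shell
# 20 numbered lines to slice from
seq 1 20 > nums.txt

head -n 3 nums.txt    # prints 1, 2, 3
tail -n 3 nums.txt    # prints 18, 19, 20

# head -c cuts by bytes rather than lines
head -c 4 nums.txt
```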
LESS
LESS is used to view the content of a file, and is faster than MORE. Use ZLESS for compressed files.
$ less file.log
space: Display the next page
/: Search
n: Next match
g: Go to the beginning of the file
G: Go to the end of the file
+F: Follow the file, like tail -f
COMM
COMM is used to select or reject lines common to two sorted files. It outputs three columns:
Column 1: lines only in file 1
Column 2: lines only in file 2
Column 3: lines in both files
$ comm foo.log bar.log
-1, -2, -3: Suppress the corresponding column
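A minimal sketch of suppressing columns, on two hypothetical sorted files:

```shell
# comm expects both inputs to be sorted
printf 'a\nb\nc\n' > f1.txt
printf 'b\nc\nd\n' > f2.txt

comm f1.txt f2.txt        # full three-column output
comm -12 f1.txt f2.txt    # suppress columns 1 and 2: lines in both (b, c)
comm -13 f1.txt f2.txt    # suppress columns 1 and 3: lines only in file 2 (d)
```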
CSVCUT
CSVCUT (from csvkit) is used to parse CSV files.
$ csvcut -c 3 data.csv
-n: Print the column names
-c: Extract the specified columns
-C: Extract all columns except the specified ones
-x: Delete empty rows
JQ
JQ is used to parse JSON files.
$ jq . foo.json
jq . f.json: Pretty-print the file
jq '.[]' f.json: Output the elements of an array
jq '.[0].<keyname>' f.json: Extract a key from the first array element
TR
TR is used to replace characters in a stream.
$ tr ";" "," < foo.txt
-d: Delete a character
-s: Squeeze repeated characters into a single one
Convert every character from lower case to upper case:
$ tr "[:lower:]" "[:upper:]" < foo.txt
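A short sketch of the replace, squeeze, and delete modes on hypothetical input:

```shell
# Replace one character with another
printf 'hello;world\n' | tr ';' ','       # prints: hello,world

# -s squeezes runs of a character into one
printf 'aaabcc\n' | tr -s 'a'             # prints: abcc

# -d deletes every character in the set
printf 'abc123\n' | tr -d '[:digit:]'     # prints: abc
```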
CCZE
CCZE is used to colorize logs.
$ ccze < foo.log
-h: Output in HTML
-C: Convert Unix timestamps
-l: List available plugins
-p: Load the specified plugin
