
DISK ENCRYPTION

cryptsetup -- plain/luks
(linux unified key setup)
plain --> derives the encryption key directly from our password and encrypts the data

sudo cryptsetup --verify-passphrase open --type plain /dev/vde mysecuredisk


verify-passphrase: asks for the password twice as a safety measure
open: tells the cryptsetup utility to open the device for reading and writing encrypted data to the disk
--type plain: uses the plain encryption scheme
mysecuredisk: name of the mapped device (appears under /dev/mapper)
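
once opened, the mapped device can be used like any other block device. a minimal sketch, assuming an ext4 filesystem and /mnt as the mount point (both just example choices; close is the counterpart of open):

sudo mkfs.ext4 /dev/mapper/mysecuredisk
sudo mount /dev/mapper/mysecuredisk /mnt
sudo umount /mnt
sudo cryptsetup close mysecuredisk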

using luks encryption


1) Format the disk/partition for use with luks encryption
cryptsetup luksFormat block_device
2) open the disk for use
cryptsetup open block_device mapper_name
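
for example, assuming /dev/vde is the disk and mysecuredisk is the mapper name (the same example names as above); luksFormat will ask for confirmation and a passphrase:

sudo cryptsetup luksFormat /dev/vde
sudo cryptsetup open /dev/vde mysecuredisk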

Changing luksKey (password)


cryptsetup luksChangeKey block_device
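
for example (cryptsetup prompts for an existing passphrase and then for the new one); /dev/vde is again just an example device:

sudo cryptsetup luksChangeKey /dev/vde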

USER AND GROUP DISK QUOTAS


package for quota management tools: quota

choose the filesystem to enforce quotas on by setting mount options in /etc/fstab


/dev/sdb1 /mnt xfs defaults,usrquota,grpquota 0 0
usrquota: enables tracking and enforcement of per-user quotas
grpquota: enables tracking and enforcement of per-group quotas

on an xfs filesystem, specifying these mount options and mounting the disk
is enough to get quotas working, since xfs tracks quotas internally
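
for example, once /dev/sdb1 is mounted with those options, quota accounting can be checked with the xfs_quota tool from xfsprogs (a companion tool not covered in these notes; the mount point matches the fstab line above):

sudo mount /dev/sdb1 /mnt
sudo xfs_quota -x -c 'report -h' /mnt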

on other filesystems such as ext4, additional steps are required


quotacheck --create-files --user --group blk_dev
creates two files at the root of the filesystem: aquota.user and aquota.group
quotaon mount_point
starts enforcing the limits
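
a minimal end-to-end sketch for an ext4 filesystem mounted at /mnt (the mount point is only an example):

sudo quotacheck --create-files --user --group /mnt
sudo quotaon /mnt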

to edit quotas for a user on a filesystem


edquota --user username
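
edquota opens the user's quota entry in a text editor; the layout looks roughly like the following (values are illustrative, blocks are 1K units, and 0 means no limit):

Disk quotas for user alice (uid 1000):
  Filesystem    blocks     soft     hard   inodes   soft   hard
  /dev/sdb1       1024   500000   550000       10      0      0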

RAID
level 0/striped array
group of disks combined to form a single storage area
1G + 1G + 1G = 3G

level 1/mirrored array


when a file is written to one disk, it is written to all
disks in the array (cloned/mirrored)

level 5
min 3 disks
we can lose 1 disk and our data will still be safe, because level 5
keeps parity distributed across the disks; in total, one disk's worth of capacity is used to store parity

level 6
min 4 disks
we can lose 2 disks and our data will still be safe, because level 6 keeps two independent sets of parity

level 10/1+0
combination of a level 1 (mirrored) array and a level 0 (striped) array

mdadm --create raid_dev_file --level=raid_level --raid-devices=3 blk_devices
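
for example, creating a level 5 array called /dev/md0 out of three disks (device names are only examples):

sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd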


mdadm --stop raid_dev_file
mdadm --zero-superblock blk_devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 blk_devices
mdadm --manage raid_dev --add blk_dev
mdadm --manage raid_dev --remove blk_dev
/proc/mdstat: shows the current status of the raid arrays
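
a sketch of replacing a disk in an existing array (device names are examples; marking the disk as failed with --fail before removing it is a standard mdadm step not listed above):

sudo mdadm --manage /dev/md0 --fail /dev/vdc
sudo mdadm --manage /dev/md0 --remove /dev/vdc
sudo mdadm --manage /dev/md0 --add /dev/vde
cat /proc/mdstat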
