
Billing and organisations

1. By default, what is the maximum number of Linked Accounts per Paying Account under
Consolidated Billing?
Answer: 20
2. Relate resource groups via tags

S3

1. S3
 Default max bucket count is 100
 Bucket name restrictions
1. Globally unique names.
2. DNS naming conventions 
3. 3 to 63 characters.
4. No uppercase characters or underscores.
5. Must start with a lowercase letter or number.
6. A series of one or more labels, separated by a single period.
7. Alphanumeric characters, periods, and hyphens; must end in a lowercase letter or a number.
8. Bucket names must not be formatted as an IP address (for example,
192.168.5.4).
9. When you use virtual hosted–style buckets with Secure Sockets Layer (SSL),
the SSL wildcard certificate only matches buckets that don't contain periods
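The naming rules above can be captured in a small validator; this is an illustrative sketch only, not the authoritative AWS validation logic:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a candidate S3 bucket name against the DNS-style rules above.

    Illustrative only -- AWS performs the authoritative validation.
    """
    # Rule: 3 to 63 characters overall
    if not 3 <= len(name) <= 63:
        return False
    # Rule: must not be formatted as an IP address, e.g. 192.168.5.4
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # Rule: one or more labels separated by single periods; each label uses
    # lowercase alphanumerics and hyphens, and must start and end with a
    # lowercase letter or digit (no uppercase, no underscores).
    label = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
    return re.fullmatch(rf"{label}(?:\.{label})*", name) is not None
```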
 Unstructured data – flat files – 0 bytes to 5 TB per object – unlimited total storage. The largest object that can be
uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, use multipart upload.
 S3 Path style
i. rest protocol (path style) s3.aws-region.amazonaws.com/bucket
ii. Virtual-hosted style URL  http://bucket.s3-aws-region.amazonaws.com
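The two URL styles are just different placements of the bucket name; a sketch with made-up bucket and region names (note that some legacy endpoints use `s3-region` with a dash instead of `s3.region`):

```python
def path_style_url(bucket: str, region: str) -> str:
    # REST / path-style: bucket name appears in the path
    return f"https://s3.{region}.amazonaws.com/{bucket}"

def virtual_hosted_url(bucket: str, region: str) -> str:
    # Virtual-hosted style: bucket name appears in the hostname, which is
    # why SSL wildcard certs break on bucket names containing periods
    return f"https://{bucket}.s3.{region}.amazonaws.com"
```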
 S3 tiered storage  all classes have 11 9s (99.999999999%) durability
i. Standard  built for 99.99% availability (SLA 99.9%)  stored redundantly across multiple
devices and facilities  designed to sustain the loss of 2 facilities concurrently
ii. S3 IA  built for 99.9% availability (SLA 99%)  for data accessed less frequently that still
needs rapid access  lower storage fee but a retrieval fee is charged
iii. S3 One Zone-IA  built for 99.5% availability (SLA 99%)  stored in a single availability zone
 infrequently accessed data
iv. S3 Intelligent-Tiering  built for 99.9% availability  monitors usage frequency and
moves objects between tiers according to the best cost for the usage pattern
v. Glacier  normal and deep archive
1. E.g. for federal regulation
2. Deep Archive is the lowest cost  12-hour retrieval time (old info)
3. Standard Glacier retrieval is faster, depending on the option chosen:
a. Expedited retrievals typically return data in 1-5 minutes,
b. Standard retrievals typically complete between 3-5 hours,
c. Bulk retrievals  lowest-cost retrieval option  returning
large amounts of data 5-12 hours.
d. The Amazon S3 Glacier Deep Archive storage class provides
two retrieval options ranging from 12-48 hours.
4. Individual glacier archives are limited to a maximum size of 40
terabytes. There is no minimum limit to the amount of data that
can be stored in Amazon S3 Glacier and individual archives can
be from 1 byte to 40 terabytes.
5. All data will be encrypted on the server side. Amazon S3 Glacier
handles key management and key protection. (AES-256).
Customers wishing to manage their own keys can encrypt data
prior to uploading it.
6. You can create up to 1,000 vaults per account per region. Vaults
are groups of archives
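The retrieval windows above fit in a small lookup table for quick revision (ranges are approximate, taken from the notes above):

```python
# Approximate Glacier retrieval windows from the notes above,
# expressed as (min_minutes, max_minutes) per retrieval option.
RETRIEVAL_WINDOWS = {
    "expedited": (1, 5),                 # 1-5 minutes
    "standard": (3 * 60, 5 * 60),        # 3-5 hours
    "bulk": (5 * 60, 12 * 60),           # 5-12 hours
    "deep_archive": (12 * 60, 48 * 60),  # 12-48 hours
}

def retrieval_window_hours(option: str) -> tuple[float, float]:
    """Return the (min, max) retrieval window in hours for an option."""
    lo, hi = RETRIEVAL_WINDOWS[option]
    return lo / 60, hi / 60
```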
 Minimum billable object size for S3  128 KB. The minimum object size is 0 bytes; however,
you will be billed for 128 KB. Objects smaller than 128 KB can still be stored, but are billed as
if they were 128 KB.
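The 128 KB minimum billable size reduces to a one-line `max`; a sketch for estimating billed storage:

```python
MIN_BILLABLE_BYTES = 128 * 1024  # 128 KB minimum billable object size

def billed_bytes(object_size: int) -> int:
    """Objects smaller than 128 KB are stored but billed as 128 KB."""
    return max(object_size, MIN_BILLABLE_BYTES)
```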

2. Cross region replication


 Requires versioning to be enabled
 Can replicate based on tags/prefixes
 Only files added after replication is enabled are replicated  it is not retrospective
 Permissions are also replicated (do not manually edit permissions on replicated copy though)

3. Snowball
 Petabyte-scale transport solution
 50 TB or 80 TB drives; Snowball Edge is 100 TB
 256-bit encryption, TPM (industry-standard Trusted Platform Module)

4. Storage gateway
 Virtual or physical device
 Virtual appliance supports VMware ESXi and Microsoft Hyper-V
 File gateway  accessed via NFS or SMB  stored in s3  frequent used data cached
 Volume gateway (iSCSI)
i. Caches volumes  stored in s3  frequently used data (hot data) is cached
onsite up to 32 tb
ii. Stored volumes  EBS snapshots asynchronously backed up to S3  up to 16tb
 Tape Gateway  VTL  stores virtual tapes  stored in Glacier
i. Can archive virtual tapes stored in deep archive

https://aws.amazon.com/storagegateway/faqs/  Read this

5. Placement groups
 The name of your placement group must be unique within your AWS Account
6. EC2 101 theory course
 curl http://169.254.169.254/latest/meta-data/
 curl http://169.254.169.254/latest/user-data/
i. EBS  ST  frequently used high throughput  SC  less frequently used
ii. Data Lifecycle Manager (Amazon DLM)  Automate EBS snashopt quick to implement
iii. 5 Elastic Ips per region by default
iv. SLA guarantees a Monthly Uptime Percentage of at least 99.99%.
v. On demand  vCPUs limit per region
vi. Reserved  20 default limit
vii. Spot instances  20 default limit per region
viii. Dedicated hosts  dedicated physical host  server bound SW licenses
ix. Convertible  convert between instance types  Convertible Reserved Instances are
associated with a specific Region
x. Since the additional disk does not contain the operating system, you can detach it in the
EC2 Console while the instance is running. However, any data on that drive would
become inaccessible, and possibly cause problems for the EC2 instance
xi. You are limited to running On-Demand Instances per your vCPU-based On-
Demand Instance limit, purchasing 20 Reserved Instances, By default, there is an
account limit of 20 Spot Instances per Region (new accounts might have lower
spot limits). 
xii. Spot fleets and EC2 fleet considerations is about maintaining capacity and price/cost limit
-> fleets scale to maintain capacity and up to a cost ceiling
7. RDS
 6 different databases on AWS  SQL Server, Oracle, MySQL, PostgreSQL, Aurora,
MariaDB
 Read replica
i. not avail for sql server
ii. can create in another region
iii. backups need to be activated
 RDS key features
i. By default, customers are allowed to have up to a total of 40 Amazon RDS DB
instances. Of those 40, up to 10 can be Oracle or SQL Server DB instances under
the "License Included" model. All 40 can be used for Amazon Aurora, MySQL,
MariaDB, PostgreSQL and Oracle under the "BYOL" model.
ii. Note that RDS for SQL Server has a limit of up to 100 databases per DB instance.
RDS for PostgreSQL, MariaDB, MySQL, Aurora: no limit
imposed by software. RDS for Oracle: 1 database per instance; no limit on
the number of schemas per database imposed by software
iii. DB storage size limits on RDS
1. SQL Server  16 TiB
2. MySQL, PostgreSQL, MariaDB  64 TiB
3. Oracle  64 TiB
iv. You can purchase up to 40 reserved DB instances. If you wish to run more than
40 DB instances, have to apply
v. Amazon RDS reserved instances are purchased for a Region rather than for a
specific Availability Zone. As RIs are not specific to an Availability Zone, they are
not capacity reservations. 
 With Aurora MySQL you can configure cross-region Aurora Replicas using logical
replication to up to five secondary AWS regions. Aurora PostgreSQL currently does
not support cross-region replicas. Aurora Replica physical replication can only
replicate to one secondary region.
 Default is 1 day if via API or 7 via console, and 0 disables auto backups
i. Backups stored if not chosen to be deleted upon deleting RDS  stored
according to backup config
 When creating an RDS instance, you can select the Availability Zone into which you
deploy it.
 In RDS, changes to the backup window take effect immediately
 If you are using Amazon RDS Provisioned IOPS storage with a Microsoft SQL Server
database engine, what is the maximum size RDS volume you can have by default
i. 16TB
 There is minimal downtime when you are scaling up on a Multi-AZ environment because
the standby database gets upgraded first, then a failover will occur to the newly sized
database. A Single-AZ instance will be unavailable during the scale operation.
 Database performance   Enhanced Monitoring, which provides access to over 50 CPU,
memory, file system, and disk I/O metrics. You can enable these features on a per-
instance basis and you can choose the granularity (all the way down to 1 second).
 You can purchase up to 40 reserved DB instances. If you wish to run more than 40 DB
instances need to apply to AWS
Retaining backups or creating a final snapshot on deletion is optional

8. DynamoDB
 There is no charge for the transfer of data into DynamoDB, providing you stay within a
single region  provisioned capacity and data storage charged for
 Streams not activated by default  hold data for 24 hrs (records and shards based
storage)
 ACID transactions – DynamoDB transactions provide developers atomicity, consistency,
isolation, and durability (ACID)
9. AWS directory service
 Simple AD  500-user or 5,000-user limit  no trust relationships to outside ADs
 AWS Managed Microsoft AD  can build trust relationships  larger user bases  fully managed 
shared responsibility: GPOs, access/auth, and scaling out are the client's responsibility
 AD connector  link onsite AD to cloud resources  establish trust  e.g. EC2
resources
 Cloud directory  manages objects (100’s of millions) via tree structure  dev tool
10. Redshift
 Default  1-day retention period
 Maximum of 35 days
 Redshift always tries to maintain 3 copies of data
i. Original
ii. Replica on compute nodes
iii. Backup in s3  can replicate to another s3 region for backup
 Charged for backups and for transfers within the VPC  can't back up outside the region
 Single node has a 160 GB limit
 When loading data into an empty table, Redshift automatically samples the data and selects the
most appropriate compression technique
 Only available in 1 AZ
11. Aurora
 Up to 244 GiB of memory and 32 vCPUs
 Lose Up to 2 copies of data does not impact write
 Lose Up to 3 copies of data does not impact read
 Self-healing
 Replicas  up to 15 Aurora, 5 MySQL, and 1 PostgreSQL
 Backups do not impact performance and are automated.
 Snapshots do not impact performance
 Serverless option
i. Autoscaling  everything (space, mem, CPUs) based on needs  start and shuts
automatically
12. HPC
 Enhanced Networking  lower CPU consumption  ENA (up to 100 Gbps) and VF (up to 10 Gbps)
 EFA  for ML and HPC, with OS bypass  can bypass the OS, allowing applications such as HPC or
machine learning to talk directly to the network interface  lower latency  only
supported on Linux

13. Cloudwatch
 Metrics  dashboard
 Alarms
 Log groups  log streams  custom metric  alarm
 Enhanced monitoring is an RDS feature  additional memory and CPU detail
 Detailed monitoring (EC2)  reports more often but still the same metrics  need scripts/agent
for more

14. Spot Fleet  launches spot and on-demand instances  according to spot price  different launch
pools:
 Capacity optimised
 Lowest price
 Diversified – From all pools
 InstancePoolsToUseCount  defines how many of the lowest-priced pools to use
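The idea behind the allocation strategies can be sketched with a toy pool picker (pool names and prices are made up; this is not the real fleet algorithm):

```python
def pick_pools(pools: dict[str, float], strategy: str, count: int = 1) -> list[str]:
    """Toy illustration of Spot Fleet allocation strategies.

    pools maps a hypothetical pool name to its current spot price.
    count stands in for InstancePoolsToUseCount under lowest-price.
    """
    if strategy == "lowest-price":
        # Spread launches across the N cheapest pools
        return sorted(pools, key=pools.get)[:count]
    if strategy == "diversified":
        # Spread launches across all pools
        return list(pools)
    raise ValueError(f"unknown strategy: {strategy}")
```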

Support plans
15. Aurora
 The Amazon Aurora architecture involves separation of storage and compute. Aurora
includes some high availability features that apply to the data in your DB cluster. The
data remains safe even if some or all of the DB instances in the cluster become
unavailable.
 Designed to transparently handle loss of 2 copies without affecting write and 3 copies
without affecting read availability
i. 6 copies of data
ii. 2 copies in each of a minimum of 3 AZs
 Failover in milliseconds; MySQL failover takes seconds
 Starts at 10gig and scales in 10gig increments to 64TB
 The cluster endpoint always represents the current primary instance in the cluster
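The fault-tolerance numbers above follow from a 6-copy quorum: writes need 4 of 6 copies available, reads need 3 of 6. A toy check, assuming those quorum sizes:

```python
TOTAL_COPIES = 6   # 2 copies in each of 3 AZs
WRITE_QUORUM = 4   # writes need 4/6 copies available
READ_QUORUM = 3    # reads need 3/6 copies available

def can_write(copies_lost: int) -> bool:
    """Writes survive the loss of up to 2 copies."""
    return TOTAL_COPIES - copies_lost >= WRITE_QUORUM

def can_read(copies_lost: int) -> bool:
    """Reads survive the loss of up to 3 copies."""
    return TOTAL_COPIES - copies_lost >= READ_QUORUM
```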

16. Route 53
 With Route 53, there is a default limit of 50 domain names. However, this limit can be
increased by contacting AWS support
 Alias Records can also point to AWS Resources that are hosted in other accounts by
manually entering the ARN

17. VPC
 Amazon Virtual Private Cloud allows customers to create a new default VPC directly
from the console or by using the CLI.
 Can have regional peering
 Osaka has 1 AZ  the only region like this
 By default, limit of 5 VPCs per AWS Region
 AWS evaluates all rules for all the security groups associated with an instance before
deciding whether to allow traffic in or out. The most permissive rule is applied—so
remember that your instance is only as secure as your weakest rule.
 NACL rules one at a time. Each NACL rule has a number, and AWS starts with the lowest
numbered rule. If traffic matches a rule, the rule is applied and no further rules are
evaluated. If traffic doesn’t match a rule, AWS moves on to evaluate the next
consecutive rule
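The ordered, first-match NACL behavior can be sketched as a loop over numbered rules (the rule shape here is simplified and hypothetical; real NACL rules also match protocol and CIDR):

```python
def evaluate_nacl(rules: list[tuple[int, int, str]], port: int) -> str:
    """Evaluate NACL rules in ascending rule-number order; first match wins.

    rules: (rule_number, port, action) tuples -- a simplified stand-in
    for real NACL entries.
    """
    for _, rule_port, action in sorted(rules):  # lowest rule number first
        if rule_port == port:
            return action  # first matching rule applies; stop evaluating
    return "deny"  # implicit deny if no rule matches

# Rule 100 allows port 22 and is evaluated before rule 200's deny.
rules = [(200, 22, "deny"), (100, 22, "allow"), (300, 80, "allow")]
```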
 Default NACL and SG allow all outbound access  the default NACL allows all inbound as well
 Custom SGs and NACLs allow no access, except that a custom SG still allows all outgoing access
18. SQS

 maxReceiveCount  maximum number of failed receives before a message is moved to the dead-letter queue


 Visibility timeout  max 12hrs
 Delay period  delay before a message becomes visible in the queue
 ReceiveMessageWaitTimeSeconds (on the queue) or WaitTimeSeconds (on the
message)  if > 0 then long polling is enabled  standard queue in-flight limit of 120,000 messages; FIFO
queue max of 20,000
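The maxReceiveCount redrive behavior can be simulated without SQS itself; a toy model where a consumer keeps failing until a given receive succeeds:

```python
def process_with_redrive(attempts_needed: int, max_receive_count: int) -> str:
    """Toy model of the SQS redrive policy.

    The consumer succeeds on receive number `attempts_needed`; once the
    receive count exceeds max_receive_count, the message goes to the DLQ.
    """
    receive_count = 0
    while True:
        receive_count += 1
        if receive_count > max_receive_count:
            return "moved-to-dlq"   # too many failed receives
        if receive_count >= attempts_needed:
            return "processed"      # consumer finally handled it
```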

19. Kinesis
 Streams
i. Devices stream info to Kinesis  persistence  stores data for 24 hours by default, up to
7 days
ii. Total stream capacity = per-shard capacity multiplied by the number of shards
iii. Research  understand what a kinesis txn vs record is
 Firehose
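The multiply-by-shards point can be made concrete: in classic provisioned mode, each shard supports roughly 1 MB/s (or 1,000 records/s) ingest and 2 MB/s egress, so total throughput scales linearly with shard count:

```python
# Per-shard limits for Kinesis Data Streams (classic provisioned mode)
WRITE_MB_PER_SHARD = 1  # 1 MB/s (or 1,000 records/s) ingest per shard
READ_MB_PER_SHARD = 2   # 2 MB/s egress per shard

def stream_capacity(shards: int) -> dict[str, int]:
    """Total stream throughput = per-shard capacity x number of shards."""
    return {
        "write_mb_s": shards * WRITE_MB_PER_SHARD,
        "read_mb_s": shards * READ_MB_PER_SHARD,
    }
```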
20. Cognito
 Research – Understand how UserPool interacts with Facebook/Google
21. Lambda
 https://aws.amazon.com/blogs/architecture/understanding-the-different-ways-to-invoke-lambda-functions/  direct vs asynchronous triggers
 https://aws.amazon.com/architecture/well-architected/
22. AWS Budgets  alerts when costs are exceeded or close to the budgeted amount
23. CloudWatch agent can monitor more metrics  memory and disk usage
24. EFS is POSIX compliant
25. Secrets Manager  automatic key rotation in RDS
 Integrates with Lambda to rotate keys across other services
 Generate random secrets, e.g. for CloudFormation, and store passwords in Secrets
Manager, or use SDKs in app code  can share secrets across accounts
26. Parameter Store  10,000 parameter limit  tiered structure  secret password text can be seen in
the Parameter Store console  not shareable across accounts
Exam comparison

https://medium.com/@roymiddleton81/how-i-passed-my-aws-certified-solutions-architect-
associate-exam-saa-c02-this-2020-5c8d4ab3747c

https://info.acloud.guru/resources/should-i-take-the-saa-c01-or-saa-c02
27. Todo
1. Redo global accelerator module
2. Redo Direct connect module and youtube video
3.
