SRV308

Deep Dive on Amazon Aurora


Debanjan Saha
deban@amazon.com
GM, Amazon Aurora & Glue

© 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.
What is Amazon Aurora?
Database reimagined for the cloud

 Speed and availability of high-end commercial databases

 Simplicity and cost-effectiveness of open source databases

 Drop-in compatibility with MySQL and PostgreSQL

 Simple pay as you go pricing

Delivered as a managed service

Re-imagining the relational database

1 Scale-out, distributed architecture

2 Service-oriented architecture leveraging AWS services

3 Automate administrative tasks – fully managed service

Scale-out, distributed architecture

 Purpose-built log-structured distributed storage system designed for databases

 Storage volume is striped across hundreds of storage nodes distributed over 3 different availability zones

 Six copies of data, two copies in each availability zone, to protect against AZ+1 failures

 Plan to apply same principles to other layers of the stack

[Diagram: master and replica instances (SQL, transactions, caching) in Availability Zones 1–3, all writing to a shared storage volume backed by storage nodes with SSDs]
Leveraging AWS services

Lambda Invoke Lambda events from stored procedures/triggers

S3 Load data from S3, store snapshots and backups in S3

IAM Use IAM roles to manage database access control

CloudWatch Upload systems metrics and audit logs to CloudWatch

Automate administrative tasks

You: schema design, query construction, query optimization

AWS: automatic fail-over, backup & recovery, isolation & security, industry compliance, push-button scaling, automated patching, advanced monitoring, routine maintenance

Aurora takes care of your time-consuming database management tasks, freeing you to focus on your applications and business
Aurora customer adoption

Fastest growing
service in AWS history

Aurora is used by
¾ of the top 100
AWS customers

Who is moving to Aurora and why?

Customers using open source engines
 Higher performance – up to 5x
 Better availability and durability
 Reduced cost – up to 60%
 Easy migration; no application change

Customers using commercial engines
 One tenth of the cost; no licenses
 Integration with cloud ecosystem
 Comparable performance and availability
 Migration tooling and services
AURORA PERFORMANCE

5x faster than MySQL; 3x faster than PostgreSQL

Aurora MySQL performance

[Charts: write performance (axis up to 250,000) and read performance (axis up to 700,000), SysBench results on R4.16XL (64 cores / 488 GB RAM), Aurora MySQL 5.6 vs. MySQL 5.6]

Aurora read/write throughput compared to MySQL 5.6, based on industry-standard benchmarks.
Aurora PostgreSQL performance
While running pgbench at load, throughput is
3x more consistent than PostgreSQL
[Chart: pgbench throughput (tps) over time, 150 GiB dataset, 1024 clients, minutes 10–60, axis up to 45,000 tps; PostgreSQL (Single AZ) vs. Amazon Aurora (Three AZs)]
How did we achieve this?

DO LESS WORK BE MORE EFFICIENT

Do fewer IOs Process asynchronously

Minimize network packets Reduce latency path

Cache prior results Use lock-free data structures

Offload the database engine Batch operations together

DATABASES ARE ALL ABOUT I/O

NETWORK-ATTACHED STORAGE IS ALL ABOUT PACKETS/SECOND

HIGH-THROUGHPUT PROCESSING IS ALL ABOUT CONTEXT SWITCHES

Aurora I/O profile
MYSQL WITH REPLICA vs. AMAZON AURORA

[Diagram – MySQL with replica: primary and standby instances in two AZs, each writing through Amazon EBS to an EBS mirror, with backups to Amazon S3; write types include log, binlog, data, double-write, and FRM files. Amazon Aurora: primary and replica instances across three AZs issue asynchronous, distributed log writes acknowledged on a 4/6 quorum, with backups to Amazon S3.]

MySQL I/O profile for 30 min SysBench run: 780K transactions; 7,388K I/Os per million txns (excludes mirroring, standby); average 7.4 I/Os per transaction

Aurora I/O profile for 30 min SysBench run: 27,378K transactions – 35X MORE; 0.95 I/Os per transaction (6X amplification) – 7.7X LESS
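The slide's ratios can be sanity-checked from its raw numbers with a quick arithmetic sketch (the figures are exactly the ones quoted above):

```python
# Figures quoted on the slide for the 30-minute SysBench run.
mysql_txns = 780_000
mysql_ios_per_million_txns = 7_388_000
aurora_txns = 27_378_000
aurora_ios_per_txn = 0.95

mysql_ios_per_txn = mysql_ios_per_million_txns / 1_000_000  # ~7.4 I/Os per txn
throughput_gain = aurora_txns / mysql_txns                  # ~35x more transactions
io_reduction = mysql_ios_per_txn / aurora_ios_per_txn       # ~7.7x fewer I/Os per txn
```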
IO traffic in Aurora (storage node)

STORAGE NODE IO FLOW

① Receive record and add to in-memory queue
② Persist record and ACK
③ Organize records and identify gaps in log
④ Gossip with peers to fill in holes
⑤ Coalesce log records into new data block versions
⑥ Periodically stage log and new block versions to S3
⑦ Periodically garbage collect old versions
⑧ Periodically validate CRC codes on blocks

OBSERVATIONS
All steps are asynchronous
Only steps ① and ② are in the foreground latency path
Input queue is 46X less than MySQL (unamplified, per node)
Favor latency-sensitive operations
Use disk space to buffer against spikes in activity

[Diagram: incoming log records from the primary instance flow through the queue into sort/group, coalesce, and update-queue stages; peer-to-peer gossip with other storage nodes fills holes; hot log, data blocks, point-in-time snapshots, GC, and scrub stages feed periodic backup to S3]
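Steps ③ and ④ – find holes in the received log and fill them from peers – can be sketched in a few lines. This is illustrative only: real storage nodes exchange LSN ranges, and `find_gaps`/`gossip_fill` are hypothetical names.

```python
def find_gaps(received_lsns):
    """Step 3: organize records and identify missing LSNs in the log (sketch)."""
    if not received_lsns:
        return []
    lo, hi = min(received_lsns), max(received_lsns)
    return sorted(set(range(lo, hi + 1)) - set(received_lsns))

def gossip_fill(local, peers):
    """Step 4: ask peers for any records missing locally (toy model of the
    peer-to-peer gossip; real nodes exchange ranges, not single LSNs)."""
    local = set(local)
    for gap in find_gaps(local):
        for peer in peers:
            if gap in peer:
                local.add(gap)     # hole filled from the first peer that has it
                break
    return sorted(local)
```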
Aurora lock management
[Diagram: in the MySQL lock manager, scan, delete, and insert operations on a lock chain proceed one at a time; in the Aurora lock manager, multiple scanners, deleters, and inserters work on lock chains concurrently]

 Same locking semantics as MySQL
 Multiple scanners in individual lock chains
 Concurrent access to lock chains
 Lock-free deadlock detection

Needed to support many concurrent sessions and high update throughput
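The deadlock-detection half of the slide can be illustrated with a wait-for graph: a deadlock is a cycle of transactions each blocked on the next. This sketch uses a plain DFS; Aurora's actual detector is lock-free, which this toy version does not attempt to reproduce.

```python
def has_deadlock(waits_for):
    """Detect a cycle in a wait-for graph (txn -> txns it is blocked on).
    A cycle means a deadlock: each transaction waits on the next forever."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in waits_for}

    def dfs(t):
        color[t] = GRAY
        for u in waits_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:
                return True                      # back edge => cycle => deadlock
            if color.get(u, WHITE) == WHITE and u in waits_for and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in waits_for)
```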
Asynchronous key prefetch

 Works for queries using the Batched Key Access (BKA) join algorithm + Multi-Range Read (MRR) optimization

 Performs a secondary-to-primary index lookup during JOIN evaluation

 Uses a background thread to asynchronously load pages into memory

[Chart: latency improvement factor (up to 14.57x) vs. the Batched Key Access (BKA) join algorithm, cold buffer; decision support benchmark on R3.8xlarge; AKP used in queries 2, 5, 8, 9, 11, 13, 17, 18, 19, 20, 21 of the 22-query set]
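The idea – overlap index-page fetches with join evaluation by handing them to background threads – can be mimicked with a thread pool. This is a toy model; `fetch_page` is a hypothetical stand-in for the primary-index lookup being hidden.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_page(key):
    """Stand-in for a primary-index page read (the expensive step AKP hides)."""
    return {"key": key, "row": f"row-{key}"}

def bka_join_with_akp(batch_keys):
    """Sketch of asynchronous key prefetch: while the join consumes one key's
    result, background threads are already loading pages for the next keys."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch_page, k) for k in batch_keys]  # prefetch all
        return [f.result()["row"] for f in futures]                 # consume in order
```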
Batched scans

Standard MySQL processes rows one at a time, resulting in high overhead due to:
 Repeated function calls
 Locking and latching
 Cursor store and restore
 InnoDB-to-MySQL format conversion

Amazon Aurora scans tuples from the InnoDB buffer pool in batches for:
 Table full scans
 Index full scans
 Index range scans

[Chart: latency improvement factor (up to 1.78x) across queries 1–22; decision support benchmark on R3.8xlarge]
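The benefit comes from amortizing per-row overhead across a batch. A minimal sketch, assuming a simple row list and predicate (the batch size and bookkeeping are illustrative, not Aurora's internals):

```python
def scan_batched(rows, predicate, batch_size=64):
    """Sketch of a batched scan: pull `batch_size` tuples from the buffer pool
    per call instead of one, amortizing latching and format-conversion cost."""
    out, calls = [], 0
    for i in range(0, len(rows), batch_size):
        calls += 1                      # one latch/convert round per batch
        out.extend(r for r in rows[i:i + batch_size] if predicate(r))
    return out, calls
```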
MONITORING DATABASE PERFORMANCE

Performance Insights

Dashboard showing load on DB
• Easy
• Powerful

Identifies source of bottlenecks
• Top SQL

Adjustable time frame
• Hour, day, week, month
• Up to 35 days of data

[Dashboard screenshot: DB load over time against a Max CPU reference line]
Performance Insights

What if you could find out what your databases were doing, or had been doing?

Performance Insights is the answer:
 Lets you see your overall instance load
 Drill down by SQL statement, by time, or by calling host
WHAT ABOUT AVAILABILITY?

“Performance only matters if your database is up”

6-way replicated storage
Survives catastrophic failures
Six copies across three availability zones
4 out of 6 write quorum; 3 out of 6 read quorum
Peer-to-peer replication for repairs
Volume striped across hundreds of storage nodes

[Diagrams: failure scenarios across AZ 1–3 – in one scenario only read availability is preserved; in the other, both read and write availability are preserved]
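The quorum rules above can be checked exhaustively for six copies (a verification sketch, not Aurora code): every 3-copy read quorum intersects every 4-copy write quorum, and an AZ+1 failure – a full AZ plus one more node – still leaves a read quorum.

```python
from itertools import combinations

WRITE_QUORUM, READ_QUORUM = 4, 3

# Model the six copies as (az, copy) pairs: two copies in each of three AZs.
nodes = [(az, c) for az in range(3) for c in range(2)]

def quorums_overlap():
    """Every read quorum shares at least one node with every write quorum,
    so a read always sees the latest acknowledged write (3 + 4 > 6)."""
    return all(set(w) & set(r)
               for w in combinations(nodes, WRITE_QUORUM)
               for r in combinations(nodes, READ_QUORUM))

def survives_az_plus_one():
    """Losing one full AZ (2 copies) plus one more node leaves 3 copies:
    still a read quorum, so data stays readable (writes need repair first)."""
    for lost_az in range(3):
        for extra in [n for n in nodes if n[0] != lost_az]:
            alive = [n for n in nodes if n[0] != lost_az and n != extra]
            if len(alive) < READ_QUORUM:
                return False
    return True
```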
Up to 15 promotable read replicas

[Diagram: master and up to 15 read replicas behind a reader end-point, all sharing one distributed storage volume]

► Up to 15 promotable read replicas across multiple availability zones
► Redo-log-based replication leads to low replica lag – typically < 10 ms
► Reader end-point with load balancing and auto-scaling *NEW*
Instant crash recovery

TRADITIONAL DATABASE
 Have to replay logs since the last checkpoint
 Typically 5 minutes between checkpoints
 Single-threaded in MySQL; requires a large number of disk accesses
 Crash at T0 requires a re-application of the SQL in the redo log since the last checkpoint

AMAZON AURORA
 Underlying storage replays redo records on demand as part of a disk read
 Parallel, distributed, asynchronous
 No replay at startup
 Crash at T0 results in redo logs being applied to each segment on demand, in parallel, asynchronously
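The on-demand replay idea can be sketched as: each page remembers the LSN it reflects, and a read first applies any newer redo records for that page. This is a toy model; `read_page` and the record layout are illustrative, not Aurora's storage format.

```python
def read_page(pages, redo_log, page_id):
    """Sketch of Aurora-style recovery: redo records for a page are applied
    on demand when the page is read, not replayed up front at startup."""
    pending = [r for r in redo_log
               if r["page"] == page_id and r["lsn"] > pages[page_id]["lsn"]]
    for r in sorted(pending, key=lambda r: r["lsn"]):   # apply in LSN order
        pages[page_id] = {"lsn": r["lsn"], "value": r["value"]}
    return pages[page_id]["value"]
```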
Database fail-over time

Distribution of fail-over times:

0–5 s – 30% of fail-overs
5–10 s – 40% of fail-overs
10–20 s – 25% of fail-overs
20–30 s – 5% of fail-overs

[Histograms: share of fail-overs completing at each second, 1–35 s, for each bucket]
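Using bucket midpoints, the distribution above implies a rough expected fail-over time – back-of-envelope arithmetic on the slide's percentages, not a published figure:

```python
# (bucket midpoint in seconds, share of fail-overs) from the slide.
buckets = [(2.5, 0.30), (7.5, 0.40), (15.0, 0.25), (25.0, 0.05)]

# Midpoint estimate of the mean fail-over time: ~8.75 seconds.
expected = sum(mid * share for mid, share in buckets)
```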
RECENT INNOVATIONS

Availability is about more than HW failures
Aurora solutions for availability disruptions

1. Patching your database software
    Zero Downtime Patching

2. Large-scale copying and reorganizations
    Fast Cloning

3. DBA errors requiring database restores
    Backtrack

4. Disasters
    Global replication
Zero downtime patching

Before ZDP: user sessions terminate during patching – networking and application state are lost when the old DB engine is replaced by the new one on the storage service.

With ZDP: user sessions remain active through patching – networking and application state are preserved and handed from the old DB engine to the new one, with the storage service unaffected.
Fast database cloning
Clone database without copying data
 Creation of a clone is nearly instantaneous
 Data copy happens only on write – when original and cloned volume data differ

Example use cases
 Clone a production DB to run tests
 Reorganize a database
 Save a point-in-time snapshot for analysis without impacting the production system

[Diagram: a production database and its clones (benchmarks, dev/test applications, production applications) sharing one storage volume]
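Copy-on-write cloning can be modeled with a page table that is copied by reference: the clone is created almost instantly, and a page diverges only when one side writes it. A toy model, not the storage-layer implementation:

```python
class Volume:
    """Sketch of copy-on-write cloning: the clone shares every page with its
    parent until one side writes, at which point only that page diverges."""
    def __init__(self, pages):
        self.pages = pages                # page_id -> data

    def clone(self):
        # Copies only the page table (references), never the page data itself.
        return Volume(dict(self.pages))

    def write(self, page_id, data):
        self.pages[page_id] = data        # this page now differs from the parent

    def read(self, page_id):
        return self.pages[page_id]
```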
Database backtrack

[Timeline: writes at t0…t4; rewinding to t3 makes t4 invisible, then rewinding to t1 makes t2 and t3 invisible as well]

Backtrack brings the database to a point in time without requiring a restore from backups
• Backtrack from an unintentional DML or DDL operation
• Backtrack is not destructive; you can backtrack multiple times to find the right point in time
How does backtrack work?
[Diagram: segments 1–3, each with periodic segment snapshots and preserved log records, combined at the recovery point]

We keep a periodic snapshot of each segment; we also preserve the redo logs
For backtrack, we identify the appropriate segment snapshots
We apply log streams to segment snapshots in parallel and asynchronously
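For one segment, backtrack amounts to: choose the latest snapshot at or before the target time, then apply the preserved redo records up to that time. A sketch with hypothetical record shapes:

```python
def backtrack(snapshots, redo_log, target_time):
    """Sketch of backtrack for one segment: pick the latest snapshot at or
    before the target time, then apply redo records up to that time."""
    base = max((s for s in snapshots if s["time"] <= target_time),
               key=lambda s: s["time"])
    state = dict(base["state"])
    for rec in sorted(redo_log, key=lambda r: r["time"]):
        if base["time"] < rec["time"] <= target_time:
            state[rec["key"]] = rec["value"]   # replay only the needed window
    return state
```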
Online DDL: MySQL vs. Aurora
MySQL
 Full table copy in the background
 Rebuilds all indexes – can take hours or days
 DDL operation impacts DML throughput
 Table lock applied to apply DML changes

Aurora
 Uses schema versioning to decode the block; each schema change records (table name, operation, column name, timestamp), e.g. Table 1 add-col column-abc t1; Table 2 add-col column-qpr t2; Table 3 add-col column-xyz t3
 Modify-on-write primitive to upgrade to the latest schema
 Currently supports adding a NULLable column at the end of the table
 Add column anywhere, and with a default, coming soon
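The schema-versioning idea can be sketched as rows tagged with the schema version they were written under: reads decode with that version and surface missing new columns as NULL, while any write re-encodes at the latest version (modify-on-write). Names and layout here are illustrative, not Aurora's block format.

```python
SCHEMAS = {
    1: ["id", "name"],
    2: ["id", "name", "email"],   # add a NULLable column at the end of the table
}

def read_row(stored):
    """Decode a stored row using the schema version it was written with."""
    version, values = stored
    row = dict(zip(SCHEMAS[version], values))
    for col in SCHEMAS[max(SCHEMAS)]:
        row.setdefault(col, None)         # columns added later read as NULL
    return row

def write_row(row):
    """Modify-on-write: any write re-encodes the row at the latest schema."""
    latest = max(SCHEMAS)
    return (latest, [row.get(c) for c in SCHEMAS[latest]])
```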
Online DDL performance

On r3.large:
              Aurora      MySQL 5.6     MySQL 5.7
10GB table    0.27 sec    3,960 sec     1,600 sec
50GB table    0.25 sec    23,400 sec    5,040 sec
100GB table   0.26 sec    53,460 sec    9,720 sec

On r3.8xlarge:
              Aurora      MySQL 5.6     MySQL 5.7
10GB table    0.06 sec    900 sec       1,080 sec
50GB table    0.08 sec    4,680 sec     5,040 sec
100GB table   0.15 sec    14,400 sec    9,720 sec
Global replication - logical
Faster disaster recovery and enhanced data locality

 Promote a read-replica to a master for faster recovery in the event of disaster

 Bring data close to your customers' applications in different regions

 Promote to a master for easy migration
Global replication – physical
[Diagram – Region 1 (primary Aurora cluster): primary instance and Aurora replicas across three AZs, with a replication server shipping the redo log asynchronously to a replication agent in Region 2 (read replica, optional replicas); write types: redo log, FRM files]

Consistently fast, low-lag, high-performance replication for global relational databases
• Global-scale replication in seconds or less
• Dedicated replication infrastructure ensures unconstrained performance
• Local reads, faster recovery, tighter DR objectives, and seamless cross-region migration
AURORA SERVERLESS

Aurora Serverless use cases

• Infrequently used applications (e.g., a low-volume blog site)

• Applications with intermittent or unpredictable load (e.g., a news site)

• Development or test databases not needed on nights or weekends

• Consolidated fleets of multi-tenant SaaS applications
Aurora Serverless
• Starts up on demand, shuts down when not in use
• Scales up & down automatically
• No application impact when scaling
• Pay per second, 1 minute minimum

[Diagram: application → database end-point → request router → scalable DB capacity drawn from a warm pool of instances, over database storage]
Instance provisioning and scaling
• First request triggers instance provisioning, usually in 1–3 seconds
• Instance auto-scales up and down as the workload changes, usually in 1–3 seconds
• Instances hibernate after a user-defined period of inactivity
• Scaling operations are transparent to the application – user sessions are not terminated
• Database storage is persisted until explicitly deleted by the user

[Diagram: request router switching the application from the current instance to a new instance drawn from the warm pool, over database storage]
Scaling to and from zero
• If desired, instances are removed after a user-defined period of inactivity

• While the database is stopped, you only pay for storage

• A database request triggers instance provisioning (usually under 5 seconds)

• Scaling from zero is more expensive, as there is no running server from which to copy the buffer cache

[Diagram: application → proxy endpoint → request router → instance provisioned from the warm pool, over database storage]
AURORA MULTI-MASTER

Distributed Lock Manager

SHARED DISK CLUSTER: nodes M1–M3 (SQL, transactions, caching, logging) over shared storage, coordinated by a global resource manager exchanging locking-protocol messages.

Pros
 All data available to all nodes
 Easy to build applications
 Similar cache coherency as in multi-processors

Cons
 Heavyweight cache-coherency traffic, on a per-lock basis
 Networking can be expensive
 Negative scaling with hot blocks
Consensus with two phase or Paxos commit
SHARED NOTHING: each node (SQL, transactions, caching, logging, local storage) owns a range partition of the data (ranges #1–#5); cross-partition transactions commit via two-phase or Paxos commit.

Pros
 Query broken up and sent to data nodes
 Less coherence traffic – just for commits
 Can scale to many nodes

Cons
 Heavyweight commit and membership-change protocols
 Range partitioning can result in hot partitions, not just hot blocks; re-partitioning is expensive
 Cross-partition operations are expensive; better at small requests
Conflict resolution using distributed ledgers
• There are many "oases" of consistency in Aurora
• The database nodes know transaction orders from that node
• The storage nodes know transaction orders applied at that node
• Conflicts arise only when data is changed at both multiple database nodes AND multiple storage nodes
• Much less coordination required

[Diagram: two masters writing ordered transactions BT1–BT4 (page P1) and OT1–OT4 (page P2), each winning the six-copy quorum on its own page]
Hierarchical conflict resolution
• Both masters are writing to two pages, P1 and P2
• The BLUE master wins the quorum at page P1; the ORANGE master wins the quorum at P2
• Both masters recognize the conflict and have two choices: (1) roll back the transactions, or (2) escalate to the regional resolver
• The regional arbitrator decides who wins the tie-breaker

[Diagram: transactions BT1 and OT1 contending on pages P1 and P2 across the six-copy quorum, with a regional resolver above]
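The per-page quorum contest can be sketched as vote counting across the six copies: a master owns a page only with a 4/6 write quorum, and a split decision across pages is exactly the conflict that must be rolled back or escalated. A toy model; the function names are illustrative.

```python
def quorum_winner(votes, write_quorum=4):
    """Each of the six copies reports which master's write it accepted first;
    a master owns the page only with a 4/6 write quorum (sketch)."""
    for txn in set(votes):
        if votes.count(txn) >= write_quorum:
            return txn
    return None            # no quorum: escalate to the regional resolver

def detect_conflict(page_votes):
    """A multi-master conflict: different masters win the quorum on different
    pages touched by the same pair of transactions."""
    winners = {page: quorum_winner(v) for page, v in page_votes.items()}
    return winners, len(set(winners.values())) > 1
```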
Multi-region Multi-Master
[Diagram: head nodes in Region 1 and Region 2, each with a local and a remote partition on a multi-AZ storage volume]

 Writes accepted locally
 Optimistic concurrency control – no distributed lock manager, no chatty lock-management protocol
 Conflicts handled hierarchically – at head nodes, at storage nodes, and at AZ- and region-level arbitrators
 Near-linear performance scaling when there are no or low levels of conflict
AURORA PARALLEL QUERY

Parallel query processing

Aurora storage has thousands of CPUs
 Presents an opportunity to push down and parallelize query processing using the storage fleet
 Moving processing close to the data reduces network traffic and latency

However, there are significant challenges
 Data stored in a storage node is not range partitioned – requires full scans
 Data may be in flight
 Read views may not allow viewing the most recent data
 Not all functions can be pushed down to the storage nodes

[Diagram: database node pushing predicates down to the storage nodes and aggregating their results]
Processing at head node

 The query optimizer produces a PQ plan and creates a PQ context based on leaf-page discovery

 The PQ request is sent to the storage nodes along with the PQ context

 Each storage node produces:
   partial results – streams of processed stable rows
   a raw stream of unprocessed rows with pending undos

 The head node aggregates these data streams to produce the final results

[Diagram: application → optimizer (PQ plan, PQ context) → executor and aggregator on the head node, exchanging partial-result and in-flight data streams with InnoDB and the network storage driver, over the storage nodes]
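The push-down-and-aggregate flow can be sketched with threads standing in for storage nodes: each "node" filters its own pages, and the "head node" merges the partial streams. Illustrative only – real nodes also handle read views and in-flight data, which this toy model omits.

```python
from concurrent.futures import ThreadPoolExecutor

def storage_node_scan(pages, predicate):
    """Runs at each storage node: scan local pages with the pushed-down
    filter and return only the matching rows (partial results)."""
    return [row for page in pages for row in page if predicate(row)]

def parallel_query(nodes, predicate):
    """Head node: fan the scan out across the storage fleet in parallel,
    then aggregate the partial result streams into the final answer."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        parts = pool.map(lambda p: storage_node_scan(p, predicate), nodes)
        return sorted(row for part in parts for row in part)
```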
Processing at storage node
Each storage node runs up to 16 PQ processes, each associated with a parallel query

The PQ process receives the PQ context:
 List of pages to scan
 Read view and projections
 Expression-evaluation byte code

The PQ process makes two passes through the page list:
 Pass 1: filter evaluation on InnoDB-formatted raw data
 Pass 2: expression evaluation on MySQL-formatted data

[Diagram: the storage node process fanning out to up to 16 PQ processes working over page lists, streaming to/from the head node]
Example: parallel hash join
The head node sends a hash-join context to each storage node, consisting of:
 List of pages – page IDs and LSNs
 Join-expression byte code
 Read view – what rows to include
 Projections – what columns to select

The PQ process performs the following steps:
 Scans through the list of pages
 Builds partial bloom filters; collects stats
 Sends two streams to the head node:
  • rows with matching hashes (filtered and projected)
  • rows with pending undo processing (unprocessed)

The head node performs the join after merging the data streams from the different storage nodes
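The storage-side bloom filtering can be sketched with a small bit-array filter: build-side join keys are hashed in, and only probe rows that might match are streamed back – false positives are possible, false negatives are not. A toy filter; the parameters k and m are arbitrary, not Aurora's.

```python
def bloom_add(bits, key, k=3, m=64):
    """Set k hash-derived bits for the key in an m-bit filter."""
    for i in range(k):
        bits |= 1 << (hash((i, key)) % m)
    return bits

def bloom_maybe(bits, key, k=3, m=64):
    """True if the key *might* be in the set (never a false negative)."""
    return all(bits >> (hash((i, key)) % m) & 1 for i in range(k))

def storage_side_filter(build_keys, probe_rows):
    """Sketch of the storage-node step: hash the build side's join keys into
    a bloom filter, then stream back only probe rows that might match."""
    bits = 0
    for key in build_keys:
        bits = bloom_add(bits, key)
    return [row for row in probe_rows if bloom_maybe(bits, row[0])]
```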
Parallel query performance
[Chart, log scale: per-query speed-up factors with parallel query on the decision support benchmark, ranging from about 1.15x to 290.66x]
Enjoy trying new features?
Sign up for previews: aws.amazon.com/rds/aurora/
AURORA SERVERLESS – scales DB capacity to match app needs
AURORA MULTI-MASTER – scales out writes across multiple AZs
AURORA PARALLEL QUERY – improves performance of large queries
PERFORMANCE INSIGHTS – assess DB load & tune performance
Thank you!

Please complete the session
survey in the summit mobile app.
