
Oracle Exadata
Best Practices Workshop

March 2019

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


Safe Harbor Statement
The preceding is intended to outline our general product direction. It is intended for
information purposes only, and may not be incorporated into any contract. It is not a
commitment to deliver any material, code, or functionality, and should not be relied
upon in making purchasing decisions. The development, release, and timing of any
features or functionality described for Oracle’s products remains at the sole discretion of
Oracle.



Agenda
•  Introduction – Hands-On Workshop Overview

•  Architecture: Exadata Bare Metal & Exadata OVM

•  MOS Documentation

•  Deployment Prerequisites and OEDA/OECA

•  Installation Process



Introduction

Data – Your Most Valuable Asset



From High Availability to Continuous Availability
High Availability
•  Minimizes downtime
•  In-flight work is lost
•  Rolling maintenance at the database
•  Predictable runtime performance
•  Errors may be visible
•  Designed for a single failure
•  Basic HA building blocks

Continuous Availability
•  No downtime for users
•  In-flight work is preserved
•  Maintenance is hidden
•  Predictable performance
•  Errors visible only if unrecoverable
•  Designed for multiple failures
•  Builds on top of HA
How do we define Continuous Availability?
Customers have different definitions
Continuous Availability is not Absolute Availability.
Probable outages and maintenance events at the database level are masked from the
application, which continues to operate with no errors and within the specified response
time objectives while these events are handled.

Key points:
1.  Planned maintenance and likely unplanned outages are hidden from applications
2.  There is neither data loss nor data inconsistency
3.  Majority of work (% varies by customer) completes within recovery time SLA
4.  May appear as a slightly delayed execution


What kinds of outages?
•  Planned maintenance – patches, repairs, upgrades, changes
•  Unplanned outages
•  Unpredictable response and throughput
•  Site disasters
•  Data corruption
•  Human errors

Which outage classes does your business need to handle?


Oracle Real Application Clusters
Continuous Availability – Real Application Service Levels

"Always Running"
•  Scales to 4000 PDBs and 8000 services
•  Online rolling maintenance – drains and balances gradually
•  Database recovers in very low seconds
•  Application Continuity recovers in-flight work
•  Data is always consistent
•  RAC or RAC One Node
Consolidation
Benefits and Challenges

Why consolidate your databases?
•  It's cheaper – typically plenty of CPU, memory, and I/O bandwidth
•  But customers have challenging demands
   – OLTP applications require low-latency transactions
   – Analytics generate I/O-intensive scans
   – "That noisy neighbor is screwing up my performance!"
•  And management has plenty of demands too
   – Customers must pay for performance!
   – "Consolidated environments are less work, right?!"
(Consolidation approaches: virtual machines, many databases in one server, Database 12c Multitenant, Oracle Cloud)
Consolidation
Most Common Model Today
•  The most common consolidation model combines
   – Multitenant databases – the most powerful and flexible resource management tools
   – Multiple CDBs per server – typically to support multiple Oracle versions
   – Small RAC clusters with shared Exadata storage
•  Oracle Autonomous Database uses this model!


Mapping of Resource Management Tools to Resources

Inter-database (regulates resources between CDBs):
•  CPU – Instance Caging, configured on the CDB
•  Flash and disk I/O – Inter-Database IORM, configured on the storage cells

Inter-PDB (regulates resources between PDBs):
•  CPU – CDB Resource Plan, configured on the CDB
•  Memory – Memory Resource Management, configured on the PDB


Resource Management Quick Recap

•  Resources to be managed – CPU, flash space, flash/disk I/Os, and memory


•  Tools Available
– Instance Caging
– Inter-Database IORM
– CDB Resource Plan
– Memory Resource Management
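The resource-to-tool mapping above can be captured in a small lookup — a minimal Python sketch (the helper and its names are illustrative, not an Oracle API):

```python
# Map each managed resource and scope to the Exadata resource management
# tool described above. The comments note where each tool is configured.

TOOLS = {
    ("cpu", "inter-database"): "Instance Caging",                 # configured on the CDB
    ("cpu", "inter-pdb"): "CDB Resource Plan",                    # configured on the CDB
    ("flash/disk i/o", "inter-database"): "Inter-Database IORM",  # configured on the storage cells
    ("memory", "inter-pdb"): "Memory Resource Management",        # configured on the PDB
}

def pick_tool(resource: str, scope: str) -> str:
    """Return the resource management tool for a resource at a given scope."""
    try:
        return TOOLS[(resource.lower(), scope.lower())]
    except KeyError:
        raise ValueError(f"no tool covers {resource!r} at scope {scope!r}")

print(pick_tool("CPU", "Inter-Database"))   # Instance Caging
print(pick_tool("Memory", "inter-PDB"))     # Memory Resource Management
```

The two CPU entries make the key distinction visible: Instance Caging limits whole CDBs against each other, while a CDB Resource Plan divides one CDB's share among its PDBs.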



Security

How Do Hackers Attack the Database?
•  Exploit the database
•  Exploit the application
•  Attack application users
•  Attack administrators
•  Bypass the database
•  Target data copies (test, dev, partners)


Oracle Database Maximum Security Architecture
•  DB Security Assessment Tool
•  Data Redaction (e.g. ###-##-5100)
•  Database Firewall
•  Audit Vault
•  Data Masking for test, dev, and partner copies (e.g. 010-11-5100 masked to 022-22-5001)
•  Transparent Data Encryption
•  Database Vault
•  Privilege Analysis
•  Key Vault


Exadata Design – Practice Case Study
Current state: Oracle Database 10g, 11g, and 12c spread across RHEL, HP-UX, Solaris, and AIX on Dell and HP servers, Cisco networking, and Hitachi storage – low consolidation, operational silos.
Exadata target: Oracle Database 12c on a single Oracle infrastructure stack (OS, servers, network, storage) – high consolidation density, unified stack.

Information Provided by the Customer


•  Oracle Database versions: 11g R2, 12c R1, 12c R2
•  Platforms: Windows, AIX, HP-UX, RHEL
•  Number of databases: 50 RAC, 28 standalone (20 will be converted to RAC; the rest will remain as is)



Exadata Design – Practice Case Study

•  Total resources consumed
   ✓ 320 CPUs = CPU_COUNT (no resource management): 200 CPUs for Oracle RAC, 120 for standalone databases; the 20 databases to be moved to RAC consume 60 CPUs in total
   ✓ 2.5 TB memory: 1.5 TB for Oracle RAC databases and 1 TB for standalone databases; the 20 databases to be moved to RAC consume 500 GB in total
   ✓ 320 TB currently in use: ASM (external redundancy) for 20 RAC and 28 standalone databases, Veritas Cluster File System for the remaining RAC databases; the customer requires that referential integrity be maintained
   ✓ The current Oracle RAC architecture is 2 nodes per environment, physically involving 1 interface for client access, 1 interface for the interconnect, and VIP interfaces
   ✓ The client expects to consolidate all environments into no more than 2 clusters, which could be physical or virtual
   ✓ The customer wants to know whether it is possible to activate only the number of cores they need during the installation process and scale up when needed
   ✓ The customer requires a mix of versions in the same cluster for the consolidation process (bi-modal architecture) to decouple and upgrade environments under a CI/CD philosophy, and needs the documentation of supported versions between GI and the RDBMS



Exadata Design – Practice Case Study
•  Total resources consumed (continued)
   ✓ The client requires load balancing with the new storage design; they currently use the same ASM disk groups / cluster file system to store an RMAN incremental-merge backupset with a 2-week retention window, so the percentage and distribution of ASM disk groups across the cells must be planned to meet this requirement

What are the customer's expectations?
– High-level design: number of clusters, definition of ASM disk groups, software versions, etc.
– Installation prerequisites: SCANs, DNS, IPs, number of nodes, etc.



Architecture:
Exadata Bare Metal &
Exadata OVM

Database Platform Leadership Since 2008

Generation  Released   Processor
V1          Sep 2008   Xeon E5430 (Harpertown)
V2          Sep 2009   Xeon E5540 (Nehalem)
X2          Sep 2010   Xeon X5670 (Westmere)
X3          Sep 2012   Xeon E5-2690 (Sandy Bridge)
X4          Nov 2013   Xeon E5-2697 v2 (Ivy Bridge)
X5          Dec 2014   Xeon E5-2699 v3 (Haswell)
X6          Apr 2016   Xeon E5-2699 v4 (Broadwell)
X7          Oct 2017   Xeon 8160 (Skylake)

Full-rack growth across the generations (values per generation, then V1 → X7 growth):
•  Disk capacity: 168, 336, 504, 504, 672, 1344, 1344 TB → 1.68 PB (10×)
•  Flash capacity: 0, 5.3, 5.3, 22.4, 44.8, 89.6, 179.2 → 358 TB (64×)
•  Database cores: 64, 64, 96, 128, 192, 288, 352 → 384 cores (6×)
•  Memory: 256, 576, 1152, 2048, 4096, 6144, 12288 GB → 12 TB (48×)
•  Network bandwidth: 8, 24, 184, 400, 400, 400, 400 → 800 Gb/s (100×)
•  Scan throughput: 14, 50, 75, 100, 100, 263, 301 → 350 GB/s (25×)
•  IOPS: 0.05, 1, 1.5, 1.5, 2.66, 4.14, 5.6 → 5.97 M (120×)


Database Release and Support Timelines
(Timeline chart, 2009–2027, showing Premier Support, waived Extended Support fee, and paid Extended Support periods for 11.2, 12.1, 12.2 (12.2.0.1), 18c (12.2.0.2), 19c (12.2.0.3), and 20c.)
*Oracle Database 19c is the long-term support release
Always check MOS Note 742060.1 for the latest schedule


Exadata 18.1.0.0.0 and 19.1.0.0.0 Highlights
•  Over 40 unique software features and enhancements in a year
–  Better analytics, better transaction processing, better consolidation,
more secure, faster and more robust upgrades, and easier to manage
•  Complete investment protection
–  All new software features work on all supported Exadata hardware
generations
•  Full storage offload functionality for Database 19.1
–  Oracle Database 11.2, 12.1, 12.2 and 18.1 coexist alongside 19.1 on the
same system
•  Updated Oracle Linux kernel and Oracle VM improve
robustness and scalability
–  Oracle Linux 7.5 with UEK4, Oracle Virtual Machine 3.4.4



Oracle Linux 7: Seamless Upgrades from Oracle Linux 6
•  Strategy for Linux Upgrades
–  Upgrade the kernel faster to use the new features in Linux
–  Upgrade the distribution slowly to keep compatibility with as many applications
as possible
•  Oracle Database 19c works only on Linux 7 and won’t work on Linux 6
•  Bare Metal and guests (domU)
–  Oracle Linux 7.5 and UEK4 (4.1.12-124 series)
•  dom0
–  Oracle Linux 6.9 and Xen 3.4.4 errata on dom0 (no change here)
–  dom0 Linux kernel updated to the latest UEK4 (4.1.12-124 series, but built against the Oracle Linux 6 kernel)
•  No reimage required, just rolling upgrade to Oracle Linux 7



Oracle Database Release Minimum Required Version
•  Oracle Database and Grid Infrastructure 19c works only on Linux 7 and won’t work on Linux 6
•  Oracle Database and Grid Infrastructure 18c needs 18.3 or newer
•  Oracle Database and Grid Infrastructure 12c Release 2 (12.2.0.1.0) needs 12.2.0.1.180717 or newer
•  Oracle Database and Grid Infrastructure 12c Release 1 (12.1.0.2.0) needs 12.1.0.2.180831 or newer
•  Oracle Database 11g Release 2 (11.2.0.4.0) needs 11.2.0.4.180717 or newer
–  Requires Grid Infrastructure release 12.1.0.2.180831 or newer

•  Oracle Database 11g Release 2 (11.2.0.3.0) needs 11.2.0.3.28
–  Requires Grid Infrastructure release 12.1.0.2.180831 or newer



Exadata Technical Foundations
•  InfiniBand
•  Offload to Storage
•  Storage Indexes
•  PCI Flash / Flash Caching
•  Hybrid Columnar Compression (HCC), up to 10:1
•  I/O Resource Management
•  In-Memory Database Technology


Exadata Database Machine X7-2
State-of-the-Art Hardware
•  Scale-out database servers
   – 2-socket Xeon processors, 48 cores per server
   – 384 GB – 1.5 TB DRAM
•  Fastest internal fabric
   – 40 Gb/s InfiniBand internal network
   – 25/10/1 GigE external network
•  Scale-out intelligent storage
   – High-Capacity Storage Server: 120 TB disk capacity (10 TB helium disks), 25.6 TB PCI NVMe flash, 20 cores for offload to storage
   – Extreme Flash Storage Server: 51.2 TB PCI NVMe flash, 20 cores for offload to storage
Exadata Database Machine X7-2
Smart, Database-Aware Software
•  Compute (database server) software
   – Oracle Linux 6/7
   – Oracle Database Enterprise Edition
   – Oracle VM (optional)
   – Oracle Database options
•  Storage server software
   – Smart Scan (offload to storage)
   – Flash Caching
   – Hybrid Columnar Compression
   – I/O Resource Management
Exadata Database Machine X7-8
•  Scale-out database servers – large SMP processor model
   – 8-socket x86 processors, 192 cores, 3–6 TB DRAM
   – For big data warehouses, massive database consolidation, and in-memory databases
•  Fastest internal fabric – 40 Gb/s InfiniBand, 25/10/1 GigE external connectivity
•  Scale-out intelligent storage – same networking, storage, and software as X7-2
   – High-Capacity Storage Server: 120 TB disk capacity (10 TB helium disks), 25.6 TB PCI NVMe flash, 20 cores for SQL offload
   – Extreme Flash Storage Server: 51.2 TB PCI NVMe flash, 20 cores for SQL offload
External Connectivity for X7-2 DB Workloads – QR or Larger ("X-Option PN7111181")
(Port diagram: 2x SFP28 10G/25G ports (BCM57414, eth3/eth4); 2x QSFP+ QDR InfiniBand HCA (ib0/ib1); 4x RJ45 10G ports (add-in Intel X710 / X557-AT NIC, eth5–eth8); ILOM on 1G NET MGT and host admin on 1G NET 0 (BCM57417 LOM). Client network uses RJ45 (NET 3/4) or SFP28 (NET 1/2), but not all 4 ports at once; client or backup traffic on 10/25G SFP28 or 1/10G RJ45.)


External Connectivity for X7-2 DB Workloads – ER
(Port diagram: 2x QSFP+ QDR InfiniBand HCA (ib0/ib1) – not expandable, CPU1 is removed; 2x SFP28 10G/25G ports (BCM57414, eth3/eth4); ILOM on 1G NET MGT and host admin on 1G NET 0 (BCM57417 LOM). Client network uses RJ45 (NET 3/4) or SFP28 (NET 1/2), but not all 4 ports at once; client or backup traffic on 10/25G SFP28 or 1/10G RJ45.)


Configure Servers to Match Your Workload
Elastic Hardware Configurations
•  Add servers as needed*
•  Add racks to continue scaling*
•  Expand older racks with new servers and multi-rack old and new together
(Configurations: X7-2 Eighth Rack, Quarter Rack, Full Rack, X7-8, Elastic Configuration)

Capacity-on-Demand Software Licensing
•  Enable compute cores as needed, subject to minimums
•  License Oracle software for enabled cores only
* 14 cores minimum per DB server (max 48 cores); 8 cores minimum per Eighth Rack DB server (max 24 cores)


Exadata Bare Metal
SOLUTION
•  OS: Oracle Enterprise Linux
•  Grid Infrastructure distributed across all nodes – one cluster for a set of physical nodes
•  Oracle Real Application Clusters
•  Supported mix of versions
(Diagram: GOLD, SILVER, and GENERIC pools hosting PROD, WEB, FIN, HCM, and CONTENT databases and their PDBs (Web, Ship, ERP, CMS, MKTNG), each exposing application services – e.g. salescart_svc, shipping_svc, erp_svc, hcm_emp_svc, publish_svc – with matching *_svc_pc services.)


Major Differences Between Physical and OVM
•  Details expanded throughout remaining slides

Topic – How OVM differs from Physical
•  Hardware support – 2-socket only
•  Cluster config – system has one or more VM clusters, each with its own GI/RAC/DB install
•  Exadata storage config – separate griddisks/DATA/RECO for each VM cluster; by default no DBFS disk group
•  Dbnode disk config – VM filesystem sizes are small; GI/DB on separate filesystems
•  Software updates – dbnodes require separate dom0 (Linux + firmware) and domU (Linux) patchmgr updates
•  Exachk – run once for dom0/cells/IB switches, and once for each VM cluster
•  Enterprise Manager – EM + Exadata plug-in + Virtualization Infrastructure plug-in


Exadata Virtual Machines
High-Performance Virtualized Database Platform
•  VMs provide CPU, memory, OS, and sysadmin isolation for consolidated workloads
   – Hosting, cloud, cross-department consolidation, test/dev, non-database or third-party applications
•  Exadata VMs deliver near raw hardware performance
   – I/Os go directly to high-speed InfiniBand, bypassing the hypervisor
•  Combine with Exadata network and I/O prioritization to achieve unique full-stack isolation
•  Trusted Partitions allow licensing by virtual machine
(No additional cost; supported on X2-2 through X7-2, DB 11.2 and higher.)


Software Architecture Comparison
Database Server: Bare Metal / Physical versus OVM
•  Bare Metal / Physical database server: Oracle GI/DB homes running on Exadata (Linux, firmware)
•  OVM database server: dom0 runs Exadata (Linux, Xen, firmware); each domU (domU-1, domU-2, domU-3, ...) runs its own Oracle GI/DB homes on Exadata (Linux)
•  No change to storage grid, networking, or other components


Exadata VM Usage
•  Primarily focused on consolidation and isolation
•  Can only run certified Oracle Linux versions
– Windows, RedHat, and other guest operating systems are not supported
•  Can virtualize other lightweight products
– E.g. Lightweight apps, management tools, ETL tools, security tools, etc.
•  Not recommended for heavyweight applications
– E.g. E-business Suite or SAP application tier
– Instead use Exalogic, Supercluster, or Private Cloud Appliance



Exadata OVM Requirements
•  Hardware
– 2-socket database servers supported (X2-2 and later)
•  Software
– Recommend latest Exadata 18.1 software
•  Supplied software (update with patchmgr - see MOS 888828.1)
–  domU and dom0 run same UEK kernel as physical (e.g. 4.1.12-94.8.4.el6uek (ueknano) for 18.1.7.0.0)
–  domU runs same Oracle Linux (OL) as physical (e.g. OL 6.9 for 18.1.7.0.0)
–  dom0 runs Oracle VM Server (OVS) 3.x (e.g. OVS 3.4.4 for 18.1.7.0.0)
– Grid Infrastructure / Database
•  Recommend 18, 12.2.0.1, or 12.1.0.2 with latest quarterly update
•  Supported 18, 12.2.0.1, 12.1.0.2, or 11.2.0.4



Exadata Security Isolation Recommendations
•  Each VM RAC cluster has its own Exadata grid disks and ASM Disk Groups
– Setting Up Oracle ASM-Scoped Security on Oracle Exadata Storage Servers
•  802.1Q VLAN Tagging for Client and Management Ethernet Networks
– Dbnodes configured w/ OEDA during deployment (requires pre-deployment switch config)
– Or configure manually post-deployment
•  Client network – MOS 2018550.1; management network – MOS 2090345.1

•  InfiniBand Partitioning with PKEYs for Exadata Private Network


– OS and InfiniBand switches configured w/ OEDA during deployment
•  Storage Server administration isolation through ExaCLI
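As a sketch of the ASM-scoped security step above, the following renders the per-cell CellCLI commands involved. The cluster name, key value, and grid disk names are hypothetical placeholders, and the exact procedure (including generating the key with CREATE KEY and the cellkey.ora file on the database nodes) should be taken from the Exadata security documentation for your release:

```python
# Illustrative only: build the CellCLI command strings that bind a set of
# grid disks to a single ASM cluster. The key value would come from running
# CREATE KEY on a storage cell; all names here are placeholders.

def asm_scoped_security_cmds(asm_name, key, grid_disks):
    """Commands to run on each cell so only `asm_name` can use `grid_disks`."""
    disk_list = ", ".join(grid_disks)
    return [
        f"ASSIGN KEY FOR '{asm_name}'='{key}'",
        f"ALTER GRIDDISK {disk_list} availableTo='{asm_name}'",
    ]

cmds = asm_scoped_security_cmds(
    "clu1", "f3d15c0c9e6e4a0a",
    ["DATAC1_CD_00_cel01", "RECOC1_CD_00_cel01"])
for c in cmds:
    print(c)
```

This is the mechanism that keeps one VM cluster's ASM instance from mounting another cluster's grid disks, complementing the VLAN and PKEY isolation above.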



Exadata OVM Sizing Recommendations
•  Use Reference Architecture Sizing Tool to determine CPUs, memory, disk
space needed by each database
– Sizing evaluation should be done prior to deployment since OEDA will deploy your
desired VM configuration in an automated and simple manner.
– Changes can be made post-deployment, but require many more steps
– Sizing approach does not really change except for accommodating DOM0, and
additional system resources per VM
– Sizing tool currently does not size virtual systems
– Consider dom0 memory and CPU usage in sizing



Memory Sizing Recommendations
•  Cannot over-provision physical memory
–  Sum of all VM + dom0 memory used cannot exceed physical memory
•  dom0 memory sizing
–  Change dom0 memory size only for database servers with memory expansion. See MOS xxxxxxx.1.
•  VM memory sizing
–  Minimum 16 GB per VM to support starter database, plus OS, Java, GI/ASM, etc.
•  Increase VM memory for larger database or additional databases (DB processes, PGA, SGA)
–  Maximum 720 GB for single VM (X6-2/X7-2 with memory expansion)
–  VM memory can not be changed online (no ballooning), memory resize requires reboot
–  OEDA VM template defaults (Adjustable at config time)
•  Small – 16 GB; Medium – 32 GB; Large – 64 GB
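The no-over-provisioning rule above reduces to simple arithmetic; a minimal sketch (sizes in GB, and the dom0 figure used in the example is an assumed placeholder, not an Oracle-published value):

```python
# Exadata OVM does not over-provision memory: the sum of all VM memory plus
# dom0 memory must fit within the node's physical memory.

def memory_plan_ok(vm_sizes_gb, physical_gb, dom0_gb):
    """True when sum(VM memory) + dom0 memory <= physical memory."""
    return sum(vm_sizes_gb) + dom0_gb <= physical_gb

# X7-2 node with 384 GB: three Large-template (64 GB) VMs plus an assumed
# 8 GB dom0 fit; adding a fourth 256 GB VM would not.
print(memory_plan_ok([64, 64, 64], 384, 8))        # True
print(memory_plan_ok([64, 64, 64, 256], 384, 8))   # False
```

Because VM memory cannot be resized online, it is worth running this check against the planned final layout, not just the initial one.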



CPU Sizing Recommendations
•  CPU over-provisioning is possible
–  But workload performance conflicts can arise if all VMs become fully active
–  Dom0 allocated 2 cores (4 vCPUs)
•  Minimum per VM is 1 core (2 vCPUs)
–  1 vCPU == 1 hyper-thread; 1 core == 2 hyper-threads == 2 vCPUs
•  Maximum per VM is number of cores minus 2 for dom0
–  Example X7-2 maximum is 46 cores per VM (48 total minus 2 for dom0)
•  Number of vCPUs assigned to a VM can be changed online
•  OEDA VM template defaults (Adjustable at config time)
–  Small – 2 cores (4 vCPUs); Medium – 4 cores (8 vCPUs); Large – 8 cores (16 vCPUs)
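The core/vCPU arithmetic above can be sketched directly (the helper names are illustrative):

```python
# 1 vCPU = 1 hyper-thread; 1 core = 2 hyper-threads = 2 vCPUs.
# dom0 keeps 2 cores, so the largest single VM gets node cores minus 2.
# Unlike memory, CPU may be over-provisioned across VMs, at the risk of
# contention if all VMs become fully active.

DOM0_CORES = 2

def vcpus(cores: int) -> int:
    return 2 * cores

def max_vm_cores(node_cores: int) -> int:
    """Largest single VM: all cores minus the 2 reserved for dom0."""
    return node_cores - DOM0_CORES

print(max_vm_cores(48))          # 46 cores on an X7-2 (48-core) node
print(vcpus(max_vm_cores(48)))   # 92 vCPUs
```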



Local Disk Sizing Recommendations
•  Total local disk space for VMs is 1.6TB (X4 and later), 3.7TB with disk expansion kit
•  Disk space used per VM at deployment depends on VM size selected in OEDA
–  Small 190 GB; Medium 210 GB; Large 230 GB (selectable but not adjustable in OEDA)
•  70 GB system (root sys1/sys2, swap)
•  100 GB software homes (50 GB GIhome, 50 GB DBhome)
•  User /u01 – Small template 20 GB; Medium template 40 GB; Large template 60 GB
–  Actual allocated space for domU disk images initially much lower due to sparseness and shareable reflinks,
but will grow with domU use as shared space diverges and becomes less sparse
•  Over-provisioning disk may cause unpredictable out-of-space errors inside VMs if dom0 space is exhausted
•  Restoring VM backup will reduce (may eliminate) space savings (i.e. relying on over-provisioning may prevent full VM restore)
•  Long lived / prod VMs should budget for full space allocation (assume no benefit from sparseness and shareable reflinks)
•  Short lived test/dev VMs can assume 100 GB allocation

•  DomU local space can be extended after initial deployment by adding local disk images
–  Additionally, domU space can be extended with shared storage (e.g. ACFS, DBFS, external NFS) for user / app files
–  Avoid shared storage for Oracle/Linux binaries/config files. Access/network issues may cause system crash or hang.
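The per-template deployment footprints above decompose into the listed pieces; a small sketch of that breakdown:

```python
# Per-VM disk footprint at deployment: 70 GB system (root sys1/sys2, swap)
# + 100 GB software homes (50 GB GI home, 50 GB DB home) + a
# template-dependent /u01, giving the 190/210/230 GB template sizes.

SYSTEM_GB, HOMES_GB = 70, 100
U01_GB = {"small": 20, "medium": 40, "large": 60}

def vm_footprint_gb(template: str) -> int:
    return SYSTEM_GB + HOMES_GB + U01_GB[template.lower()]

for t in ("small", "medium", "large"):
    print(t, vm_footprint_gb(t))   # small 190, medium 210, large 230
```

As the slide notes, long-lived production VMs should budget these full values per VM, since the initial savings from sparseness and shareable reflinks erode over time.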



Database Server Disk Expansion
•  2 socket database servers have 8 disk bays, only 4 are populated out of the
factory
•  Virtual Machines need more storage on the database servers
•  X7-2, X6-2, and X5-2 database servers now support 8 x 600 GB HDDs
– Only two supported configurations 4 drives or 8 drives
– Servers will ship with only 4 drives out of the factory, customers can add 4 more hard
drives in the field
•  Minimum software version – Exadata Storage Software 12.1.2.3.0



Exadata Storage Recommendation
•  DATA/RECO size for initial VM clusters should consider future VM additions
– Using all space initially will require shrinking existing DATA/RECO before adding new
•  Spread DATA/RECO for each VM cluster across all disks on all cells
– By default no DBFS disk group
•  Enable ASM-Scoped Security to limit grid disk access
VM Cluster Cluster nodes Grid disks (DATA/RECO for all clusters on all disks in all cells)
clu1 db01vm01 DATAC1_CD_{00..11}_cel01 RECOC1_CD_{00..11}_cel01
db02vm01 DATAC1_CD_{00..11}_cel02 RECOC1_CD_{00..11}_cel02
DATAC1_CD_{00..11}_cel03 RECOC1_CD_{00..11}_cel03
clu2 db01vm02 DATAC2_CD_{00..11}_cel01 RECOC2_CD_{00..11}_cel01
db02vm02 DATAC2_CD_{00..11}_cel02 RECOC2_CD_{00..11}_cel02
DATAC2_CD_{00..11}_cel03 RECOC2_CD_{00..11}_cel03
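The naming pattern in the table above is regular enough to generate; a sketch that enumerates one cluster's grid disks across all cells (the 12-disk-per-cell default matches high-capacity cells):

```python
# Per VM cluster, create DATA and RECO grid disks on every cell disk
# (CD_00..CD_11) of every cell, so each cluster's disk groups are spread
# across all disks on all cells, as recommended above.

def grid_disks(cluster_no, cells, cell_disks=12):
    names = []
    for cell in cells:
        for prefix in ("DATA", "RECO"):
            names += [f"{prefix}C{cluster_no}_CD_{cd:02d}_{cell}"
                      for cd in range(cell_disks)]
    return names

disks = grid_disks(1, ["cel01", "cel02", "cel03"])
print(len(disks))   # 72 grid disks (2 prefixes x 12 disks x 3 cells)
print(disks[0])     # DATAC1_CD_00_cel01
```

Spreading every cluster across all cells is what preserves full scan bandwidth for each cluster, rather than partitioning cells between clusters.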



Deployment Specifications and Limits
Hardware: X2-2 / X3-2 / X4-2 / X5-2 / X6-2 / X7-2

VMs
•  Max VMs per database server: 8

Memory
•  Physical per node (default/max): 72/144 GB, 256/512 GB, 256/512 GB, 256/768 GB, 256 GB/1.5 TB, 384 GB/1.5 TB
•  Min per domU: 16 GB, plus additional DBs or app memory
•  Max per domU: 96 GB / 464 GB / 720 GB (model-dependent)
•  OEDA template defaults: Small 16 GB; Medium 32 GB; Large 64 GB (adjustable at config time)

CPU*
•  Cores per node: 12, 16, 24, 36, 44, 48
•  Min per VM: 1 core (2 vCPUs)
•  Max per VM: cores minus 2 (dom0 assigned 2 cores / 4 vCPUs)
•  OEDA template defaults: Small 2 cores; Medium 4 cores; Large 8 cores (adjustable at config time)

Disk
•  Total usable disk per node for all domUs: 700 GB, then 1.6 TB (3.7 TB with DB Storage Expansion Kit)
•  Used disk per domU at deployment (OEDA templates): Small 190 GB; Medium 210 GB; Large 230 GB (not adjustable at config time)
•  Actual allocated space for domU disk images is initially much lower due to sparseness and shareable reflinks, but grows with domU use as shared space diverges and becomes less sparse – budget for these values when sizing.

*1 core = 1 OCPU = 2 hyper-threads = 2 vCPUs

Exadata Deployment Use Case Examples
• Configure Exadata as one physical RAC cluster
• Configure Exadata with logical RAC clusters
• Configure Exadata OVM clusters (no isolation requirements)
• Configure Exadata OVM clusters (VLAN)
• Configure Exadata with two OVM clusters (VLAN + PKEY-based isolation)


Use Case 1. Configure Exadata Half Rack as one Physical RAC Cluster
(Diagram: DB Servers 1–4, all physical, form a single physical cluster; the shared storage servers host the DATA, RECO, and DBFS disk groups.)


Use Case 2. Configure Exadata Half Rack as two Logical RAC Clusters
(Diagram: DB Servers 1–2 form Logical Cluster 1 using DATA1, RECO1, and DBFS1 on its storage servers; DB Servers 3–4 form Logical Cluster 2 using DATA2, RECO2, and DBFS2 on its storage servers.)


Use Case 3. Configure Exadata Half Rack with two OVM Clusters (No isolation requirements)
(Diagram: each DB server is virtualized – dom0 plus domU1 and domU2. The domU1 guests across all four servers form OVM Cluster 1 and the domU2 guests form OVM Cluster 2; the shared storage servers host DATA1, RECO1, DBFS1 (optional) and DATA2, RECO2, DBFS2 (optional).)


Use Case 4. Configure Exadata Half Rack with two OVM Clusters with VLAN for Client Isolation
(Diagram: as in Use Case 3, but each OVM cluster's client network is placed on its own VLAN; the shared storage servers host DATA1, RECO1, DBFS1 (optional) and DATA2, RECO2, DBFS2 (optional).)


Use Case 5. Configure Exadata Half Rack with two OVM Clusters (VLAN + PKEY Isolation)
(Diagram: as in Use Case 4, each OVM cluster has its own client VLAN, and InfiniBand PKEYs partition the private network so each cluster reaches only its own grid disks – DATA1, RECO1, DBFS1 for Cluster 1 and DATA2, RECO2, DBFS2 for Cluster 2.)


Capacity-On-Demand (CoD) Licensing on Exadata
Disable Cores during Installation – Enable Cores as you Grow

Definition: Capacity-on-Demand (CoD) refers to an Exadata database server that is installed with a subset of its cores turned off so that the database software license cost can be reduced. Additional cores are subsequently enabled and licensed as needed. Once turned on, a core may not be turned off.



Rules & Restrictions
•  CoD applies to X7-2, X6-2, X6-8, X5-2, X5-8, X4-2, X4-8 Exadata systems
–  Minimum Active Cores per Compute Server
Configuration X6-2, X7-2 X6-8 X5-2 X5-8 X4-2 X4-8
¼, ½, Full, Elastic Rack 14 56 14 56 12 48
Eighth Rack 8 n/a 8 n/a n/a n/a

•  Customer must supply the correct # of software licenses


•  Customer must adopt an approved method to audit the core count
–  Platinum Service, OCM (Oracle Configuration Manager), EM (Enterprise Manager)

•  See CoD Licensing Guide on exadata.us.oracle.com for complete details
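The minimums table above lends itself to a simple validation helper; a sketch (the helper is illustrative, and configurations marked n/a simply have no entry here):

```python
# Validate a requested CoD active-core count per compute server against the
# documented minimums. Keys are (model, is_eighth_rack); n/a combinations
# are absent and will raise a KeyError.

MIN_ACTIVE_CORES = {
    ("X7-2", False): 14, ("X6-2", False): 14, ("X5-2", False): 14, ("X4-2", False): 12,
    ("X6-8", False): 56, ("X5-8", False): 56, ("X4-8", False): 48,
    ("X7-2", True): 8, ("X6-2", True): 8, ("X5-2", True): 8,
}

def cod_request_ok(model, eighth_rack, active_cores):
    """True when the requested active cores meet the minimum for this config."""
    return active_cores >= MIN_ACTIVE_CORES[(model, eighth_rack)]

print(cod_request_ok("X7-2", False, 14))  # True: at the quarter/half/full rack minimum
print(cod_request_ok("X4-2", False, 10))  # False: below the 12-core minimum
```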




High-Level Overview of CoD
•  Step 1: As part of closing a new Exadata purchase, Oracle Sales introduces customer to
CoD (if necessary), and they agree on how many cores will be active/licensed upon
installation of Exadata. The customer must provide/procure the correct number of
database and option licenses per this agreement. Nothing about CoD is indicated in the
order document.
•  Step 2: Prior to installation, customer uses OEDA (Oracle Exadata Deployment
Assistant) tool to input installation details, including CoD. The tool displays a message
urging the user to verify CoD with his/her business counterpart (The OEDA user may
not know).
•  Step 3: The installation process activates cores per OEDA. Customer has 90 days to
ensure the use of Platinum Services, EM or OCM for monitoring active cores.
•  Step 4: As workload grows, customer may activate more cores and provide software
licenses accordingly.
How to Initiate CoD / How to Disable Cores
(Flow diagram: Customer: "I'd like a Half-Rack Exadata, but I don't have budget for all the DB software." Oracle rep: "No problem. We can turn off some of the cores and you can license and turn them on when needed."* The Exadata and software orders are submitted; the Install Coordinator instructs the customer to input configuration data into OEDA (Oracle Exadata Deployment Assistant), including the # of active cores per db server; the OEDA install script (OneCommand) is then run by the Exadata install engineer (ACS, partner, or customer).)
* Adjust the DB (or other) software license order for CoD (i.e. make sure the correct number of licenses is ordered); there is nothing to order for CoD itself. Sign up the customer for Platinum Service if not already installed, or install OCM in connected mode, or EM (connected or disconnected).


OEDA Configuration Utility

After an Exadata order is placed, the


customer receives an email from an Oracle
Install Coordinator with a link to download
the OEDA utility from OTN.
They use OEDA to record information that
will be used to install Exadata later on.

This is the initial screen in OEDA.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 57


OEDA Configuration Utility

After entering network


information, OEDA asks for
information related to the
compute nodes (db servers).
This is where Cores are
disabled for Capacity-On-
Demand.

Customer checks this box to


enable CoD

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 58


OEDA Configuration Utility

If customers do NOT check the


Capacity-on-Demand box, this
message is displayed to make
sure that’s the correct choice,
since this individual may not
have been involved in the
business discussions.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 59


OEDA Configuration Utility

If Capacity-on-Demand is
chosen, the customer uses
this slider to pick # of active
cores per db server. By
default, all db servers will
get this number.

This message is displayed to


remind customers they need
to be connected to support
to use Capacity-on-Demand.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 60


OEDA Configuration Utility

Customer can modify the


core count node-by-node, if
desired. For compute nodes
within the same cluster,
Oracle recommends the
same number of active cores.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 61


How to Verify Number of Cores Enabled
Installation Summary Report Shows Active Cores

After installation completes,


the number of active cores
per compute server is
captured in this Installation
Summary report. The
deployment engineer can
verify the correct # of cores
are active before turning
over the system to the
customer.

Note: This example indicates
that all cores are active and
is not intended to match the
prior example of 18 active
cores per compute node (9
per socket).

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 62


Enabling Additional Cores after Installation
…some time later
Customer: “I need to enable 10 more cores.”
Oracle Rep: “You will need to order 5 additional processor licenses for the DB software and any DB options.”

The customer submits a normal software order (no mention of CoD), then follows the instructions in the Exadata documentation (Owner’s Guide for 11.2 or Maintenance Guide for 12.x) to increase the active cores.
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 63


How to Verify Additional Cores Enabled
DBMCLI or Exachk Output Shows Active Cores

1.  Run DBMCLI on each database server to display the active core count (see slide notes)
2.  Or run the exachk utility and search o_cpuinfo_<hostname>.out in the
    outputfiles directory (see slide notes)

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 64
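The DBMCLI check above can be scripted. A minimal sketch, assuming the server reports cores in an `active/total` form — the exact attribute name and output format vary by release, so the DBMCLI call itself is shown only as a comment:

```shell
# On a db node one would run something like (hypothetical invocation):
#   dbmcli -e "list dbserver attributes coreCount"
# Stand-in for one line of that output, in the assumed "active/total" form:
sample="  24/48"

# Strip whitespace and keep the active-core figure before the slash
active=$(printf '%s' "$sample" | tr -d ' ' | cut -d/ -f1)
echo "active cores: $active"
```

The parsing step is what you would wrap in a loop over all compute nodes when auditing a whole rack.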


Using CoD with Elastic Configurations

•  When adding elastic compute servers to any


Exadata configuration, CoD may be applied
•  Follow Instructions in the Extending and
Multi-Rack Cabling Guide

For 11.2 documentation, see the Owner’s Guide, Chapter 8

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 65


OCM and EM for Monitoring CoD
•  OCM Standalone Mode
– Must run in “connected mode”
•  Enterprise Manager
– Can run in “disconnected mode”
– Use the same guidance as EM for Trusted Partitions
•  See support note Doc ID 2008418.1

•  No additional license needed

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 66


EXADATA
New Technologies Adoption and Security Enhancements

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 67


DBaaS Service with Exadata
•  Service Definition – Define service tiers to simplify your offerings (Bronze, Silver, Gold)
•  Technical Service Description – Establish the technical footprint of each service tier
   (Oracle RAC, Oracle Data Guard, Oracle GoldenGate; sizes Small, Medium, Large, X-Large)
•  Service Provisioning Model – Determine the individual services to be provisioned
   (PDB, Database, Schema)
•  Oracle SECI Architecture – Define Secure ECI (SECI) architecture per service needs
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 68
Memory Optimized Access for IoT Workloads
Example: an IoT client writes a temperature reading (<6:05AM, 55°>)
•  New streaming ingest:
   – Declare table MEMOPTIMIZE FOR WRITE
   – Clients perform low-latency writes into an in-memory ingest buffer
   – Buffered writes are drained in the background, appended to the table in large batches
   – Very high throughput inserts since the server issues deferred writes in large batches
•  Performance:
   – 2x faster throughput than conventional inserts
[Diagram: temperature readings accumulate in the In-Memory Ingest Buffer and are
periodically drained and appended to the Temp Readings table by background drainers]
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 69


Microservice Architecture
•  Architecture Pattern, not just putting code into Docker
containers
•  Each microservice can run in a container Shipping
Service
–  Private database and data model
for each microservice
•  Microservices communicate using asynchronous
messaging via some event queuing system
–  Decoupled for maximum resiliency and scalability
–  Services do not talk to each other directly – only via event
queuing service
–  Insulated from slowdown or failure of other microservices

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 70


Oracle Database simplifies deployment of Microservices
[Diagram: a microservices application in which the ORDERS, CUSTOMERS, RETURNS,
WAREHOUSE and ANALYTICS services communicate via REST calls/REST APIs, events
and AQ messages, with cross-PDB queries and a sharded database for ultra high
availability]
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 71


PDB Sharding for Microservices
Scalability, fault isolation and geo-distribution
•  Want a centralized database (CDB) with ultra-high availability and scalability –
   Exadata is great for this
•  19c also supports PDB Sharding
   – Each PDB can be sharded individually across multiple CDBs
•  Provides fault isolation and geo-distribution for microservices
   – Loss of an entire CDB makes only part of a PDB unavailable
•  Also allows each microservice to scale its PDB individually
   – More efficient use of resources compared to scaling a monolithic application (CDB)
[Diagram: Product Catalog, Check Out and Recommendations PDBs sharded across
Shard-1, Shard-2 and Shard-3 CDBs]
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 72


What is Docker?
•  A software container platform
•  Originated from Linux / Linux Containers
–  Also available on Windows and Mac OS X
•  Part of the Linux Open Container Initiative (OCI)
–  Part of Open Source Linux
•  Editions:
–  Commercial Edition (EE) – Sold by Docker Corp
–  Community Edition (CE) – Part of Open Source Linux

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


What is Docker?
Concepts
•  Containers are non-persistent
–  Once a container is deleted, all files inside that container are gone
•  Images are immutable
–  Changes to an image require building a new image
–  Data to be persisted has to be stored in volumes
•  Containers are spun up from images

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


Docker Key Components

•  Registry
•  Images
•  Containers
•  Docker daemon/
engine

Source: https://docs.docker.com/engine/docker-overview/

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


Why is Docker cool?
•  Abstracts virtual environment (container) creation
into simple steps:
–  docker create …
–  docker run …
–  docker rm …
•  Runs directly on the Linux kernel (cgroups)
–  Avoids the hypervisor overhead

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


Docker Use Cases for Exadata
•  Host Oracle Applications
–  Segregated environment for Oracle Apps like Oracle R, Audit Vault
–  Cloud tooling agents can also be installed in a container, making updates
simpler
•  Containerize agents and ISV apps
–  Customers deploy various agents that get affected by DB node upgrades
–  Agents and ISV apps that are not compatible with the default OL version
•  Support database releases for Test and Dev
–  Customers can deploy new database releases such as Database 18.1 for test
and dev
–  Customers can spin up database containers for rapid provisioning of test/dev

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 77


Exadata Setup for Docker
•  Docker available on Exadata since Exadata System
Software Release 18.1
– Recommend Exadata 19.1 with OL7
•  Service is turned off by default
•  Can be easily turned on: systemctl enable docker

[root@adczardb02 docker]# systemctl enable docker


Created symlink from /etc/systemd/system/multi-
user.target.wants/docker.service to /usr/lib/systemd/
system/docker.service.
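The enable-and-verify flow can be sketched as below. The actual systemctl calls are shown as comments (they require an Exadata db node with the Docker service present); a stub stands in for the status check so the logic can be followed:

```shell
# On the db node (not executed here):
#   systemctl enable docker
#   systemctl start docker
#   systemctl is-active docker     # expected to print "active"

# Stub standing in for: status=$(systemctl is-active docker)
status="active"
if [ "$status" = "active" ]; then
  echo "docker running"
else
  echo "docker not running"
fi
```

Note that `systemctl enable` only registers the service for boot; a `systemctl start` is still needed to run it immediately.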

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


Exadata Setup for Docker
•  Next install git on Exadata database server: yum install git

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 79


Setting up Oracle Database in a Docker container on Exadata
•  Get the latest Oracle projects from github
•  Use the install script in the Database projects to setup a
Database image
•  The script downloads the Linux image from docker.io and
installs the database home in the image
•  Image will work with DBCA and sets up the Listener as well
•  ORCL database processes are now running on the host

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 80
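The steps above can be sketched as a command sequence. The repo path, script name and flags are taken from Oracle's public docker-images GitHub project and may change; the version and container name are illustrative:

```shell
# Build flow (shown as comments; needs network access and the DB install zip
# staged per the repo's README):
#   git clone https://github.com/oracle/docker-images.git
#   cd docker-images/OracleDatabase/SingleInstance/dockerfiles
#   ./buildContainerImage.sh -v 19.3.0 -e      # -e selects Enterprise Edition

# Derive the image tag such a build would produce, and the run command to start it
version="19.3.0"
edition="ee"
image="oracle/database:${version}-${edition}"
echo "docker run -d --name orcl -p 1521:1521 $image"
```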


Setting up Oracle Database in a Docker container on Exadata
•  Download the latest Oracle docker image from github:

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 81


Setting up Oracle Database in a Docker container on Exadata
•  Download the latest Oracle Projects from github:

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 82


Setting up Oracle Database in a Docker container on Exadata
•  Build the Docker image next

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 83


Data-At-Rest Security Integration for Exadata
Integrations and Optimizations

•  Oracle Advanced Security Transparent Data


Encryption (TDE) to protect database columns and
tablespaces
–  Performance boost from leveraging Smart Scans and CPU-
based cryptographic acceleration

•  Oracle ASM Cluster File System (ACFS) encryption to


protect log and configuration files on Exadata
•  Oracle Key Vault to centrally manage Oracle Wallets/
Master Keys and TDE/ACFS master keys on Exadata

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 84


Hardened Security: Advanced Intrusion Detection
•  Enables the Advanced Intrusion Detection Environment (AIDE)
–  Critical files checked every night via sha256, modified time, and more
–  Critical directories monitored for items added/removed

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 85


Protecting the System from Runaway SQL Statements

•  Execution plans that exceed Database Resource Manager limits are
   automatically quarantined
•  Quarantined plans are prevented from executing
•  New QUARANTINED column in V$SQL
[Diagram: a statement whose DBRM resource limit is exceeded has its plan
placed in quarantine]
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


MOS Documentation

Copyright © 2018, Oracle and/or its affiliates. All rights reserved.


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 87
Important My Oracle Support Documentation
•  Doc ID 1342281.1
•  Doc ID 1306791.2
•  Doc ID 888828.1
•  Doc ID 757552.1
•  Doc ID 1084360.1
•  Doc ID 2151671.1
•  Doc ID 1274318.1
•  Doc ID 2181944.1
•  Doc ID 1526868.1
•  Doc ID 1291766.1
•  Doc ID 1577323.1

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 88


Doc ID 1342281.1 How Do I Find the Exadata
Documentation, Such as Owner and User Guide?
•  Where is the Exadata documentation located? I cannot find it via MOS or
OTN.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 89


Doc ID 1306791.2 Information Center: Oracle Exadata
Database Machine
•  This is the main entry point for all the technical information relative to your Exadata product.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 90


Doc ID 888828.1: Exadata Database Machine and Exadata
Storage Server Supported Versions
•  This document lists the software patches and releases for Oracle Exadata
Database Machine. This document includes versions for both the database
servers and storage servers of Oracle Exadata Database Machine with
database servers running Intel x86-64 processors.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 91


Doc ID 757552.1: Oracle Exadata Best Practices
•  This document is a placeholder for articles related to best practices for the deployment of Oracle
Database Machine and Exadata Storage Server.
Oracle Exadata Database Machine Best Practices
Setup/Configuration
Verify Initialization Parameters and Diskgroup Attributes
Exadata Database Machine Cross Node Consistency Checks
High Availability
Performance
Migration
Operations
Solaris Specific
Diagnosability
Manageability
Security
Backup and Recovery
Automatic Storage Management
Zero Data Loss Recovery Appliance

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 92


Doc ID 1084360.1: Bare Metal Restore Procedure for
Compute Nodes on an Exadata Environment
•  This document describes the procedure to rebuild a compute node that
was determined to have been irretrievably damaged and replace it with a
new, unconfigured compute node ("Bare Metal") that must be re-imaged
to the proper specifications. At the end of the procedure, the Bare Metal
compute node will be synchronized with the surviving members of the
cluster with respect to Exadata and Oracle stack components.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 93


Doc ID 2151671.1: Reimaging Exadata Cell Node Guidance
•  This document provides steps on re-imaging an Exadata cell node when all
internal USB "thumb" drives have been replaced. This applies to Exadata
cell nodes with a single or dual internal USB "thumb" drives (aka USB
sticks). It can also assist in instances where the cell node needs to be re-
imaged even though a USB drive has not been replaced.

•  This document is for cell nodes only

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 94


Doc ID 1274318.1: Oracle Exadata Database Machine
Setup/Configuration Best Practices
•  The goal of this document is to present the best practices for the
deployment of Sun Oracle Database Machine V2/X2-2/X2-8/X3-2/X3-8/
X4-2/X4-8/X5-2 in the area of Setup and Configuration.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 95


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 96
Doc ID 2181944.1 How to Configure and Execute the
ExadataStigFix Script for Exadata STIG Environments
•  There are many industry security analysis tools to test and check your IT
infrastructure. One such guideline is the Security Technical Implementation
Guides (STIG). STIGs are the configuration standards for Department of
Defense (DoD) Information Assurance (IA) and IA-enabled devices/systems.
The STIGs contain technical guidance to "lock down" information systems/
software that might otherwise be vulnerable to a malicious computer
attack.
•  The script will make configuration changes to an Exadata system (db
nodes) in several categories, such as password complexity, auditing, umask
settings, root access, etc.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 97


Doc ID 1526868.1 Oracle Exadata Database Machine DoD
STIG and SCAP Guidelines
•  This document describes the implications of Department of Defense (DoD)
DISA Security Technical Implementation Guides (STIGs) on Oracle Exadata
Database Machine environments.

•  Because the DoD C&A STIG process requires vulnerability assessment and
remediation, Oracle will make commercially reasonable efforts to work
with the customer through the Oracle Support service request process to
meet the DoD C&A STIG remediation requirement or to enable customers
to make the necessary changes to the Oracle Exadata Database Machine in
order to do so, provided that the customer is officially supported by Oracle
Exadata Database Machine product development organization.
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 98
Doc ID 1291766.1 How to change OS user password for Cell Node,
Database Node , ILOM, KVM , Infiniband Switch , GigaBit Ethernet Switch
and PDU on Exadata Database Machine

•  This note explains how to change the user password for cell node,
database node, ILOM, KVM, InfiniBand Switch and Cisco 4948 Ethernet
Switch

•  For more information on default user accounts on Exadata Database


Machine servers and components, please refer to the
Exadata Security Guide in the documentation.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 99


Doc ID 1577323.1 How to setup a PXE Boot Server to Re-
Image an Exadata Compute Node
•  The most common Bare Metal Restore methods for Exadata compute nodes use either an ISO
image attached via ILOM or a USB drive attached directly to the compute node.
•  However an alternate method is to use PXE Boot which needs to be enabled in the BIOS Boot Order of the compute
node needing to be re-imaged.

PXE will require either FTP, HTTP or NFS.
•  Prerequisites

PXE Server - Accessible on the same public network as the Exadata Environment.
In my testing/examples the PXE server is accessible on the Management Network (eth0 of compute node).

PXE Server - install the following packages on the server:
  a) tftp-server
  b) dhcp
  c) syslinux
  d) NFS

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 100
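The prerequisite packages above can be installed in one step on a yum-based PXE host. A sketch — `nfs-utils` is an assumption for the package that provides NFS (the note just says "NFS"):

```shell
# Hypothetical one-shot install on the PXE server (commented; needs root + yum):
#   yum install -y tftp-server dhcp syslinux nfs-utils
pkgs="tftp-server dhcp syslinux nfs-utils"   # nfs-utils name is an assumption
echo "would install: $pkgs"
```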
Deployment
Pre requisites and
OEDA-OECA

Copyright © 2018, Oracle and/or its affiliates. All rights reserved.


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 101
Deployment Overview
•  OEDA is the only tool that should be used to create VMs on Exadata
1.  Create configuration with OEDA Configuration Tool
2.  Prepare customer environment for OEDA deployment
–  Configure DNS, configure switches for VLANs (if necessary)
3.  Prepare Exadata system for OEDA deployment
–  switch_to_ovm.sh; reclaimdisks.sh; applyElasticConfig.sh
4.  Deploy system with OEDA Deployment Tool
Note: OS VLAN config can be done by OEDA or post deployment (MOS 2018550.1)

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 102
The Original Deployment Tools …

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 103
OEDA Configuration Tool
Advanced Network Configuration

•  Ethernet Network 802.1Q VLAN


Tagging
– For OVM define VM-specific VLAN IDs in
Cluster configuration pages later on
– Ethernet switches (customer and Cisco)
must have VLAN tag configuration done
before OEDA deployment
•  InfiniBand Network Partitioning with
PKEYS

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 104
OEDA Configuration Tool
Identify Nodes

•  Screen to decide OVM or Physical


– All OVM
– All Physical
– Some OVM, some physical

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 105
OEDA Configuration Tool
Define Clusters

•  Decide
– Number of VM clusters to create
– Dbnodes and Cells that will make
up those VM clusters
•  Recommend using all cells
•  What is a “VM cluster?”
– 1 or more user domains on
different database servers running
Oracle GI/RAC, each accessing the
same shared Exadata storage
managed by ASM.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 106
OEDA Configuration Tool
Cluster Configuration

•  Each VM cluster has its own
configuration page
– VM size (memory, CPU)
– Exadata software version
– Networking config
– OS users and groups
– GI/DB version and location
– Starter database config
– ASM disk group config

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 107
OEDA Configuration Tool
Cluster Configuration

•  Virtual Guest size


– Define CPU and Memory
configured during deployment
– Adjust by changing defaults
– 1 vCPU == 1 hyper-thread
– 1 core == 2 hyper-threads == 2
vCPUs
– /u01 “local disk” size is fixed
•  Small 20GB; Medium 40GB; Large 60GB
•  GI/DB homes are separate fs (not /u01)
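The core-to-vCPU mapping above is simple doubling; a quick sanity check of the arithmetic:

```shell
# 1 core == 2 hyper-threads == 2 vCPUs on an OVM guest
cores=8
vcpus=$((cores * 2))
echo "$cores cores -> $vcpus vCPUs"
```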

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 108
OEDA Configuration Tool
Cluster Configuration

•  Grid infrastructure installed in
each VM (grid disks “owned” by
a VM cluster)
–  Cluster 1 - DATAC1 / RECOC1 across all cells
–  Cluster 2 - DATAC2 / RECOC2 across all cells
–  Consider future clusters when sizing
–  DBFS not configured
–  ASM-Scoped Security permits a cluster to
access only its own grid disks. Available
with Advanced button.

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 109
OEDA Configuration Tool
Cluster Advanced Network Configuration

•  Ethernet VLAN ID and IP details
– To separate Ethernet traffic across VMs,
use distinct VLAN ID and IP info for each
cluster
•  InfiniBand PKEY and IP details
–  Typically just use OEDA defaults
–  Compute Cluster network for dbnode-to-
dbnode RAC traffic. Separates IB traffic by using
distinct Cluster PKEY and IP subnet for each
cluster.
–  Storage network for dbnode-to-cell or cell-to-
cell traffic - same PKEY/subnet for all clusters
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 110
OEDA Configuration Tool
Review and Edit

•  This page lists all network details for


each VM guest in all VM clusters

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 111
OEDA Configuration Tool
Installation Template

•  Verify proper settings for all VM clusters in the Installation Template so the
environment can be properly configured before deployment (DNS, switches,
VLANs, etc.).

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 112
OEDA Configuration Tool
Network Requirements
Component            Domain                      Network           Example hostname
Database servers     dom0                        Mgmt eth0         dm01dbadm01
                     (one per database server)   Mgmt ILOM         dm01dbadm01-ilom
                     domU                        Mgmt eth0         dm01dbadm01vm01
                     (one or more per            Client bondeth0   dm01client01vm01
                     database server)            Client VIP        dm01client01vm01-vip
                                                 Client SCAN       dm01vm01-scan
                                                 Private ib        dm01dbadm01vm01-priv1
Storage servers      (same as physical)          Mgmt eth0         dm01celadm01
                                                 Mgmt ILOM         dm01celadm01-ilom
                                                 Private ib        dm01celadm01-priv1
Switches             (same as physical)          Mgmt eth0         dm01sw-*

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 113
OECA for Exadata Overview
•  New database workloads require different compute to storage ratios
•  Elastic configurations allow users to incrementally expand compute and/or
   storage capacity as per their business needs
•  Oracle Exadata Configuration Assistant (OECA) for Exadata facilitates scoping
   and analyzing elastic configurations and reporting environmental specifications
[Examples: X5-2 In-Memory – 16x database servers + 5x HC storage servers;
X5-2 EF OLTP – 8x database servers + 8x EF storage servers]

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 114
Introducing OECA – Simplifying the Elastic Configuration Planning Process
•  OECA is an “Excel” Based Visual Basic
application that will allow the user to
rapidly plan Exadata Elastic Configuration
Scenarios
•  Release Notes
– Requires Microsoft Excel
– Must enable Macros in Excel
– Windows Only Support for Now
– These training slides reflect a recent version of
the tool but may appear differently than the
latest version of the tool

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 115
Creating a Workload Optimized Configuration
•  The following demonstration supports a planned DB In-Memory Machine
Exadata Database In-Memory Deployment
•  OECA for Exadata will produce a “visual”
representation of the intended configuration
•  The tool will also produce the Performance and
Environmental Characteristics for this
configuration
– Similar to what is done in the Exadata Datasheet for Full,
Half, Quarter, Eighth Racks etc.

16 Database Servers
+
5 High Capacity Storage Servers

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 116
Creating a Workload Optimized Configuration
•  Step 1 – Launch Application DB In-Memory Machine
– The Following Screen Appears


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 117
Creating a Workload Optimized Configuration
•  Step 2 – Select X5-2 in the “Selection” Dropdown
Screen callouts:
– Click X5-2 or DB Machine and SER
– Click X4-8 for 8 Socket
– Reset wipes the current config
– Click 3 IB Switches to add a Spine (min 2 leafs required)
– Calculations will appear once a legal configuration is selected


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 118
Creating a Workload Optimized Configuration
•  Step 3 – Increment the X5-2 DB to 16 DB In-Memory Machine

DB Servers appear in the proper Uxx


Rack location and the proper order

Note appears telling the user what


to add to build a legal configuration


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 119
Creating a Workload Optimized Configuration
•  Step 4 – Increment X5-2L HC to 5 DB In-Memory Machine
Memory, Storage and Number of
Cores are calculated here

Legal configuration achieved and


calculations appear

HC and/or EF Servers appear in the proper Uxx


Rack location and the proper order

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 120
Creating a Workload Optimized Configuration
•  Step 5 – Note the Environmental Report DB In-Memory Machine

Rack Level Environment Report


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 121
Creating a Workload Optimized Configuration
•  Step 6 – Scroll to See the Performance Report DB In-Memory Machine

Rack Level Performance Report


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 122
OECA for Exadata Summary
•  Internal tool to facilitate scoping and analyzing elastic configurations and
   their environmental specs
•  Supports X5-2 DB Machine and Storage Server elastic configurations
•  Supports X4-8 elastic configurations
•  Allows quick reporting and analysis of Exadata performance, cores, memory,
   power, cooling, etc. for a given configuration
•  Visually demonstrates how the elastic configuration will appear in the end
   system
[Examples: X5-2 In-Memory – 16x database servers + 5x HC storage servers;
X5-2 EF OLTP – 8x database servers + 8x EF storage servers]

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 123
The Evolution …

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 124
•  Download and Unzip the OEDA Tool

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 125
•  Access via web browser

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 127
Installation Process

Copyright © 2018, Oracle and/or its affiliates. All rights reserved.


Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 137
Deployment Overview
1.  Create configuration with OEDA Configuration Tool
2.  Prepare customer environment for OEDA deployment
–  Configure DNS, configure switches for VLANs (if necessary)
3.  Prepare Exadata system for OEDA deployment
–  switch_to_ovm.sh; reclaimdisks.sh; applyElasticConfig.sh
4.  Deploy system with OEDA Deployment Tool
Note: OS VLAN config can be done by OEDA or post deployment (MOS 2018550.1)

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 138
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 139
Preparation for Installation
Security starts early

•  Get educated
– Review Product Documentation Oracle® Exadata Database Machine Security Guide
– Review MOS note 1068804.1: Guidelines for enhancing the security for an Oracle
Database Machine deployment
– Review MOS 1405320.1: Responses to common Exadata security findings
– Subscribe to security alerts - http://is.gd/orasec
•  Collect security-related requirements from all stakeholders
•  Determine whether role-separated installation is required
•  Plan network layout

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 140
Installation and Deployment
Implement the available features and security plan

•  Exadata includes many security features by default


•  Implement the recommended security step during deployment
– AKA “Resecure Machine” step
•  Start secure, only open what is necessary
– “Doing security” later almost never happens (or works)
•  Configure ASM audits to use syslog (audit_syslog_level)
•  Configure ASM & DB init.ora: audit_sys_operations=true

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 141
Default Security Features
Implement the available features and security plan

•  short package install list
•  only necessary services enabled
•  sshd secure default settings
•  https management interface
•  password aging
•  maximum failed login attempts
•  auditd monitoring enabled
•  cellwall: iptables firewall
•  CPUs included in patch bundles, releases synchronized
•  system hardening
•  boot loader password protection

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 142
Oracle Exadata Deployment

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 143
Documents ID

•  Exadata Database Machine and Exadata Storage Server Supported


Versions (Doc ID 888828.1)
•  Exadata 19.1.0.0.0 release and patch (27200347) (Doc ID 2334423.1)
•  PXE VM
•  Patch 27161995 - Storage cell ISO image (19.1.0.0.0.181016.2)
•  Patch 27161992 - Database server ISO image (19.1.0.0.0.181016.2)

Copyright © 2019, Oracle and/or its affiliates. All rights reserved. |


Provisioning Exadata Database Machine (2 Clusters) Demo
[Diagram: InfiniBand switches connect two environments – Exadata Virtual
(cluster 2: 2 compute nodes + 3 Exadata storage cells) and Exadata Bare Metal
(cluster 1: 4 compute nodes + 3 Exadata storage cells)]
Copyright © 2019, Oracle and/or its affiliates. All rights reserved. | 145
Architecture Recommendation for Re-Image

[Diagram] PXE VM connected to Port 1 of the Cisco switch

Doc ID for PXE
•  Setting up a PXE server with Exadata image in a local notebook (Doc ID 2224935.1)
•  How to setup a PXE Boot Server to Re-Image an Exadata Compute Node (Doc ID 1577323.1)
•  Bare Metal Restore Procedure for Compute Nodes on an Exadata Environment (Doc ID 1084360.1)

PXE Environment
•  PXE server IP: 10.81.32.254

Download the PXE Guest VM

PXE Internal Services
SERVICES

service dhcpd restart
service nfs restart
showmount -e
tail -100f /var/log/messages
/etc/exports

EXPORTFS

/u01/exadata 192.168.0.0/22(rw,no_acl,sync,no_root_squash)
/u01/exadata 10.81.0.0/16(rw,no_acl,sync,no_root_squash)

PXE DHCP /etc/dhcp/dhcpd.conf
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
ddns-update-style none;
allow booting;
allow bootp;
default-lease-time 43200;
max-lease-time 43200;

subnet 192.168.0.0 netmask 255.255.252.0 {


range dynamic-bootp 192.168.2.124 192.168.2.249;
filename "pxelinux.0";
option routers 192.168.1.250;
option subnet-mask 255.255.252.0;
option broadcast-address 192.168.3.255;
option ip-forwarding off;
next-server 192.168.1.250;
}

subnet 10.81.32.0 netmask 255.255.254.0 {


range dynamic-bootp 10.81.32.245 10.81.32.250;
filename "pxelinux.0";
option routers 10.81.32.1;
option subnet-mask 255.255.254.0;
option broadcast-address 10.81.33.255;
option ip-forwarding off;
next-server 192.168.1.246;
}
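A quick sanity check of the second stanza's addressing: for 10.81.32.0 with netmask 255.255.254.0 (a /23), the broadcast address is 10.81.33.255. This can be verified with plain shell arithmetic (a standalone sketch, not part of the deployment):

```shell
# Compute the broadcast address for 10.81.32.0/23 using shell integer math
ip_addr="10.81.32.0"
prefix=23
# split the dotted quad into octets (POSIX parameter expansion, no bashisms)
a=${ip_addr%%.*}; rest=${ip_addr#*.}
b=${rest%%.*};    rest=${rest#*.}
c=${rest%%.*};    d=${rest#*.}
# pack into a 32-bit integer, then set all host bits to 1
ip=$(( (a << 24) | (b << 16) | (c << 8) | d ))
bcast=$(( ip | ((1 << (32 - prefix)) - 1) ))
broadcast=$(printf '%d.%d.%d.%d' \
  $(( (bcast >> 24) & 255 )) $(( (bcast >> 16) & 255 )) \
  $(( (bcast >> 8) & 255 ))  $((  bcast        & 255 )))
echo "$broadcast"   # 10.81.33.255
```

The same arithmetic confirms the first stanza: 192.168.0.0/22 broadcasts to 192.168.3.255, matching the config.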

PXE Network
[root@exa-nfsreimage-server ~]# ifconfig
eth6 Link encap:Ethernet HWaddr 08:00:27:AE:71:3F
inet addr:192.168.1.250 Bcast:192.168.3.255 Mask:255.255.252.0
inet6 addr: fe80::a00:27ff:feae:713f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:20432332 errors:0 dropped:0 overruns:0 frame:0
TX packets:25052876 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:11651176490 (10.8 GiB) TX bytes:36880548240 (34.3 GiB)

eth6:1 Link encap:Ethernet HWaddr 08:00:27:AE:71:3F


inet addr:10.81.32.254 Bcast:10.81.33.255 Mask:255.255.254.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback


inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:151 errors:0 dropped:0 overruns:0 frame:0
TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:10888 (10.6 KiB) TX bytes:10888 (10.6 KiB)

[root@exa-nfsreimage-server ~]# netstat -rn


Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.81.32.1 0.0.0.0 UG 0 0 0 eth6
10.81.32.0 0.0.0.0 255.255.254.0 U 0 0 0 eth6
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth6
192.168.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth6
[root@exa-nfsreimage-server ~]#

Reimage Compute Node ILOM
set /SP/services/kvms/host_storage_device/ mode=disabled
set /SP/services/kvms/host_storage_device/remote/ server_URI=nfs://10.81.32.254:/u01/exadata/compute_19.1.0.0.0_LINUX.X64_181016.2-1.x86_64.iso
set /SP/services/kvms/host_storage_device/ mode=remote
set /HOST boot_device=cdrom
show /SP/services/kvms/host_storage_device/ status
reset /SYS
start /SP/console

After the reimage completes:
set /SP/services/kvms/host_storage_device/ mode=disabled

Reimage Cell Server ILOM
set /SP/services/kvms/host_storage_device/ mode=disabled
set /SP/services/kvms/host_storage_device/remote/ server_URI=nfs://10.81.32.254:/u01/exadata/cell_19.1.0.0.0_LINUX.X64_181016.2-1.x86_64.iso
(for 18.1.5.0.0, point server_URI at nfs://10.81.32.254:/u01/exadata/cell_18.1.5.0.0_LINUX.X64_180506-1.x86_64.iso instead)
set /SP/services/kvms/host_storage_device/ mode=remote
set /HOST boot_device=cdrom
show /SP/services/kvms/host_storage_device/ status
reset /SYS
start /SP/console

After the reimage completes:
set /SP/services/kvms/host_storage_device/ mode=disabled
Post Configurations Pre-Deploy
VIRTUALIZED

Run the following command on both nodes to switch the factory image to OVM:
/opt/oracle.SupportTools/switch_to_ovm.sh

After the system is up, check and reclaim disks on both nodes:
/opt/oracle.SupportTools/reclaimdisks.sh -check
/opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim

BARE METAL

Check and reclaim disks on both nodes:
/opt/oracle.SupportTools/reclaimdisks.sh -check
/opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim

Upload Software to Compute Node 1
# mkdir /u01/oeda
# mount 10.81.32.59:/export/share /mnt
# cd /mnt/EXADATA
# cp p29280030_191200EXARU_Linux-x86-64.zip /u01/oeda
# cd /u01/oeda
# unzip -q p29280030_191200EXARU_Linux-x86-64.zip
# cd -
# cp V981623-01.zip V981627-01.zip exachk_29180779.zip p29212937_185000GIRU_Linux-x86-64.zip p29212938_185000DBRU_Linux-x86-64.zip /u01/oeda/linux-x64/WorkDir/
# mkdir /u01/oeda/OSCLAD_MX-ed05-Es
# cp OSCLAD_MX-ed05-Es_VM_Corrected.zip /u01/oeda/OSCLAD_MX-ed05-Es
# cd /u01/oeda/OSCLAD_MX-ed05-Es
# unzip OSCLAD_MX-ed05-Es_VM_Corrected.zip
# cd /u01/oeda/linux-x64/
# ./install.sh -cf ../OSCLAD_MX-ed05-Es/OSCLAD_MX-ed05-ed05-cluster01.xml -l

Start Oracle Exadata Deployment Assistant (OEDA)

Post-Deployment Configuration
Address site-specific requirements

•  Change all passwords for all default accounts (MOS 1291766.1)
•  Perform validation for local policies or rules
– See MOS 1405320.1 for commonly identified audit findings
•  Exadata Security – especially for consolidation environments

Post-Deployment Configuration
Cell Lockdown & ExaCli

•  *New* in 12.1.2.2.0
•  Cells can have remote access disabled – no direct SSH access to OS
•  Must enable temporarily for maintenance (upgrades)
•  New cell attributes: remoteAccessPerm, remoteAccessTemp
•  Use exacli/exadcli from DB nodes for cell commands
•  *Upcoming!* Exadata All-Inclusive Secure Erase with a single command
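The lockdown workflow above could be scripted from a DB node roughly as follows (a sketch: the cell names, the celladmin user, and the TRUE value for remoteAccessTemp are assumptions, and the commands are only printed for review rather than executed):

```shell
# Build (but only print) the exacli calls that would temporarily re-enable
# remote access on each cell before maintenance. Review before running.
cells="cel01 cel02 cel03"
for cell in $cells; do
  cmd="exacli -c $cell -l celladmin -e \"ALTER CELL remoteAccessTemp=TRUE\""
  echo "$cmd"
done
```

For fleet-wide operations, exadcli plays the same role as exacli but fans a single command out to a list of cells.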

Exadata Security (ASM, Griddisks)
Consolidation: sharing without peeking

•  ASM and Database Scoped Security
•  Restrict griddisks to certain clusters and/or certain database(s)
•  Especially effective to manage multiple administrators
•  See whitepapers
– Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles - http://is.gd/exaconsolidation
– Best Practices for Database Consolidation On Exadata Database Machine - http://is.gd/orclconswp
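As a sketch of how ASM-scoped security might be rolled out across cells (cell names and the ASM client name '+asm1' are placeholders; griddisk security also requires the cellkey.ora key setup described in the whitepapers, which is omitted here). The commands are printed for review, not executed:

```shell
# Print the per-cell cellcli commands that would restrict DATA griddisks
# to a single ASM cluster (placeholder names; run via dcli/exacli after review).
for cell in cel01 cel02 cel03; do
  echo "cellcli -e \"ALTER GRIDDISK ALL PREFIX=DATA availableTo='+asm1'\"  # run on $cell"
done
```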

