
Oracle Database Appliance X5-2


Training

Tammy Bednar
Director of Product Management
Oracle Database Appliance
March, 2015



Copyright © 2015, Oracle and/or its affiliates. All rights reserved.
Oracle Database Appliance X5-2 Hardware Front View
With Storage Expansion Shelf

1.  Server Node 1


2.  Server Node 0
3.  Storage Shelf
4.  Optional Storage Expansion Shelf



Oracle Database Appliance X5-2
Hardware Specifications
•  2 x 1RU x86 Servers. Each Server Contains:
–  2 x 18-core 2.3 GHz Intel Xeon Processors E5-2699 v3
–  256 GB Memory (8 x 32 GB), expandable up to 768 GB
–  Mirrored 600 GB local storage
–  Redundant InfiniBand Interconnect
–  Optional 10GBase-T or 10GbE SFP+ Public Network
•  1 x 4RU Storage Shelf – Direct-Attached:
– 800 GB raw SSD storage for redo logs
– 1.6 TB raw SSD storage for database cache, tablespaces, temporary files
–  64 TB raw HDD storage for data, archive logs, backups



ODA X5-2 Storage Expansion Shelf
Zero-Admin/Online Storage Expansion

Double Available Storage Capacity


•  Additional 64 TB HDD, 128 TB total for DATA
•  Additional 800 GB SSD, 1.6 TB total for REDO
•  Additional 1.6 TB SSD, 3.2 TB total for FLASH

Zero Administration
•  Automatically integrates when plugged in
•  Data automatically distributes to new shelf

Online Expand Storage


•  Hot-plug storage expansion shelf
•  No database downtime



Storage – Built-in Redundancy
•  Each Server Node
   – 2x HBA, in case of HBA failure
   – Multipath software transparently manages both paths for the database
•  Storage Shelf
   – 2x IO Modules (Controllers)
      •  Each connects to all 24 disks (slots 0–23) to protect against IO module failure
   – Redundant HDDs and SSDs
      •  ASM stripes data across disks to protect against disk failure

(Diagram: Node 0 and Node 1 each connect through two HBAs to both IO modules (P0/P1), which attach to disk slots 0–23.)



Oracle Database Appliance X5-2
•  4RU unit with manually connected/assembled server and storage
   – 1x2 1RU Servers, 1x1 2RU Storage Shelf, Optional 2RU storage expansion shelf
•  Two Sun Server X5-2 nodes, JBOD, Optional expansion JBOD
•  Intel Xeon 18-core processor E5-2699 v3
•  2x 256 GB Memory *
•  16x 8 TB 7.2K rpm HDDs (128 TB), 4x 400 GB ME SSDs, 4x 200 GB HE SSDs **
•  2x InfiniBand/Fiber, 4x 10GBase-T Copper
•  ILOM, Serial Port

Oracle Database Appliance
Hardware Components

Oracle Database Appliance Hardware Servers
•  Servers
– Two servers and one storage shelf per system
– Two Sun Fire X86 64-bit X5-2 servers
– Rack Size 2x1 RU

Oracle Database Appliance Hardware Processors
•  Two 18-core Intel® Xeon® processors E5-2699 v3 per server
•  Cache per Processor
– Level 1: 32 KB instruction and 32 KB data
– Level 2: 256 KB unified
– Level 3: 45 MB shared inclusive

Oracle Database Appliance Hardware Memory
•  256 GB per server
– 8x32GB DIMMs
•  Optional Memory Expansion to 512 GB (16 x 32 GB) or 768 GB (24 x 32 GB)
per Server
•  Both servers should have the same amount of memory

Oracle Database Appliance Hardware Standard IO
• Four onboard auto-sensing 100/1000/10000 Base-T Ethernet ports per
server
• Four PCIe 3.0 slots per server:
– PCIe internal slot: dual-port internal SAS HBA
– PCIe slot 3: dual-port external SAS HBA
– PCIe slot 2: dual-port external SAS HBA
– PCIe slot 1: dual-port InfiniBand HCA
• Optional 10GbE SFP+ external networking connectivity requires
replacement of the InfiniBand HCA

Oracle Database Appliance Hardware Storage
•  Capacity per storage shelf:
– Sixteen 3.5-inch 8 TB 7.2K rpm HDDs
– 128 TB raw, 64 TB (double-mirrored) or 42.7 TB (triple-mirrored) usable capacity
– Four 2.5-inch (3.5-inch bracket) 400 GB ME SSDs for frequently accessed data
– Four 2.5-inch (3.5-inch bracket) 200 GB HE SSDs for database redo logs
•  Optional storage expansion with an additional storage shelf doubles the capacity
•  Two 2.5-inch 600 GB 10K rpm HDDs (mirrored) per server for OS
•  External NFS storage support
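The raw-to-usable figures above follow directly from the mirror count. A quick sketch of the arithmetic (the function name is illustrative, not an oakcli utility):

```python
def usable_capacity(raw_tb: float, mirrors: int) -> float:
    """Usable TB of an ASM disk group: raw capacity divided by number of copies."""
    return raw_tb / mirrors

raw = 16 * 8.0  # sixteen 8 TB HDDs = 128 TB raw per shelf
print(round(usable_capacity(raw, 2), 1))  # double-mirrored -> 64.0
print(round(usable_capacity(raw, 3), 1))  # triple-mirrored -> 42.7
```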

Oracle Database Appliance Hardware Power
•  Two hot-swappable, redundant power supplies per server
– Power Supply Output Rated Maximum: 600W at 100–127 VAC / 200–240 VAC
– AC power: Maximum AC input current at 100 VAC and 600W output: 7.2A
•  Two hot-swappable, redundant power supplies per storage shelf
– Power Supply Output Rated Maximum: 580W at 100–127 VAC / 200–240 VAC
– AC power: Maximum AC input current at 100 VAC and 580W output: 8A
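The rated input current is the output power divided by efficiency and input voltage. A sketch of the relationship; the ~0.83 efficiency value here is an assumption back-derived from the slide's 7.2 A figure, not a published specification:

```python
def max_input_current(output_w: float, input_v: float, efficiency: float) -> float:
    """Worst-case AC input current for a supply delivering output_w."""
    return output_w / (efficiency * input_v)

# 600 W server supply at 100 VAC; 0.83 efficiency is assumed, chosen to
# reproduce the rated 7.2 A figure above.
print(round(max_input_current(600, 100, 0.83), 1))  # -> 7.2
```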

Oracle Database Appliance Front View

Oracle Database Appliance X5-2 Back View

Cable the Oracle Database Appliance X5-2

Purpose – Cabling Interconnect Start - Node 0 End – Node 1


1. Connect green InfiniBand cable Connect into green port (PORT2) in PCIe slot 1 Connect into green port (PORT2) in PCIe slot 1
2. Connect yellow InfiniBand cable Connect into yellow port (PORT1) in PCIe slot 1 Connect into yellow port (PORT1) in PCIe slot 1
Purpose – Cabling Storage Start - Compute Node End - Storage Shelf
3. Connect dark blue SAS cable Connect into dark blue port (SAS0) in PCIe slot 2 in node 0 Connect into dark blue port in top IO Module (port 0)
4. Connect light blue SAS cable Connect into light blue port (SAS1) in PCIe slot 3 in node 0 Connect into light blue port in bottom IO Module (port 0)
5. Connect dark red SAS cable Connect into dark red port (SAS1) in PCIe slot 2 in node 1 Connect into dark red port in top IO Module (port 1)
6. Connect light red SAS cable Connect into light red port (SAS0) in PCIe slot 3 in node 1 Connect into light red port in bottom IO Module (port 1)



Cable the ODA X5-2 w/ Storage Expansion Shelf

Purpose – Cabling Storage Expansion Start - Compute Node End - Storage Expansion Shelf
7. Connect dark blue SAS cable Connect into dark blue port (SAS0) in PCIe slot 2 in node 1 Connect into dark blue port in top IO Module (port 0)
8. Connect light blue SAS cable Connect into light blue port (SAS1) in PCIe slot 3 in node 1 Connect into light blue port in bottom IO Module (port 0)
9. Connect dark red SAS cable Connect into dark red port (SAS1) in PCIe slot 2 in node 0 Connect into dark red port in top IO Module (port 1)
10. Connect light red SAS cable Connect into light red port (SAS0) in PCIe slot 3 in node 0 Connect into light red port in bottom IO Module (port 1)



Oracle Database Appliance Hardware System Management
•  Interfaces
   – Dedicated 10/100M Base-T Ethernet network management port
      •  In-band, out-of-band and side-band network management access
   – RJ-45 serial management port
•  Service Processor
   – Oracle Integrated Lights Out Manager (Oracle ILOM) provides:
      •  Remote Keyboard, Video, Mouse redirection
      •  Full remote management through command-line, IPMI, and browser interfaces
      •  Remote media capability (DVD, CD, ISO image, floppy)
      •  Advanced power management and monitoring
      •  Active Directory, LDAP, RADIUS support
•  Monitoring
   – Comprehensive fault detection and notification
   – In-band, out-of-band and side-band SNMP monitoring (v1, v2c, v3)
   – Syslog and SMTP alerts, WS-MAN
   – Automatically create a service request for key hardware faults with Oracle Auto Service Request (ASR)

Oracle Database Appliance Hardware Monitoring Tools
•  Oracle Auto Service Request (ASR)
– Automated remote monitoring, phone home
– Configured at the time of deployment or later
•  Oracle Integrated Lights-Out Management (ILOM)
– Complete system remote control, remote console
•  Oracle Appliance Manager (oakcli)
– Manage hardware and software, hardware diagnostics, monitoring through oakcli
•  Oracle Enterprise Manager
– ODA Plug-in
– ILOM Plug-in
Oracle Database Appliance Hardware Troubleshooting Tools
•  Oracle Appliance Manager
– # oakcli validate –d
– # oakcli orachk
– # oakcli show <component>
– ODA Diagnostics (oakcli manage diagcollect, stordiag, etc.)
•  Oracle ILOM
•  Oracle Enterprise Manager Cloud Control
– Oracle Database Appliance Plug-in





ODA X5-2 Network Cabling
•  Base configuration
   – IB Interconnect: Interconnect Port IB 1 and IB 2 (ibbond0)
   – 10GBase-T (copper) Public Networking: External 10GBase-T net 0 and net 1 (bond0), net 2 and net 3 (bond1)
•  Optional 10GbE SFP+ (Fiber) Public Networking
   – Replace InfiniBand with Fiber PCIe cards
   – Interconnect with on-board copper ports
Optional ODA X5-2 Cabling to Connect to Fiber Public Networks
•  Replace InfiniBand with 10GbE SFP+ (Fiber) PCIe cards
•  Interconnect with on-board Net 0 & Net 1 ports
   – External 10GbE SFP+ net 0 and net 1 (bond0); External 10GBase-T net 2 and net 3 (bond1); Interconnect Port net 0 and net 1 (icbond0)



X5-2 Network Interface Details – IB

•  CX3 Mellanox IB cards
   – Interfaces at Dom0: ib0, ib1; bond device at Dom0: ibbond0; bridge in Dom0: -
   – Interfaces in ODA_Base domain: ib0, ib1, ibbond0 (SR-IOV) – Private
   – IP addresses: Server or Dom0 192.168.16.24 (Node 0), 192.168.16.25 (Node 1); ODA_Base 192.168.17.24 (Node 0), 192.168.17.25 (Node 1)
•  Onboard quad-port 10GbE
   – eth0, eth1 → bond0 → bridge net1 → eth0 (Public) in ODA_Base
   – eth2, eth3 → bond1 → bridge net2 → eth1 (Public) in ODA_Base



X5-2 Network Interface Details – SFP+ Option

•  Dual-port 10GbE SFP+ (Niantic)
   – eth2, eth3 → bond0 → bridge net1 → eth1 (Public) in ODA_Base
•  Onboard quad-port 10GbE
   – eth0, eth1 → icbond0 → bridge priv1 → eth0 (Private) in ODA_Base
      •  IP addresses: Server or Dom0 192.168.16.24 (Node 0), 192.168.16.25 (Node 1); ODA_Base 192.168.17.24 (Node 0), 192.168.17.25 (Node 1)
   – eth4, eth5 → bond1 → bridge net2 → eth2 (Public) in ODA_Base



DNS or No DNS?

•  ODA can be deployed with or without a DNS


•  If you do not have a DNS, there is a known bug:
– ODA (Oracle Database Appliance): Deployment without DNS fails at Step 16 (Doc ID
2048836.1)



Oracle Database Appliance
Software Overview



Oracle Database Appliance
Highly Reliable Software
Database 12c, 11gR2
RAC, RAC One Node, EE
Grid Infrastructure

Appliance Manager

Oracle VM (optional)

Oracle Enterprise Linux



What is the Appliance Manager?
Simplifies Deployment, Management, and Support of Oracle Database Appliance
•  Configurator
   – GUI to gather ODA configuration and deploy the system
   – System information, network information, database information
   – Option to use online at time of deployment or offline beforehand
•  Command Line
   – OAKCLI provides simple commands to streamline ODA administration
   – Database creation, patching, management, support
•  Background Processes
   – Continual monitoring and management to ensure best-practices compliance and optimal performance
   – Servers, storage, database, VMs



Appliance Manager 12.1.2.4 +
Support for Oracle Database Appliance X5-2
•  Full support for Oracle Database Release 12c (12.1.0.2)
– Multitenant, In-Memory Fault Tolerance (unique to Engineered Systems), …
– Grid Infrastructure: 12.1.0.2
– Support for database versions: 12.1.0.2, 11.2.0.4, 11.2.0.3
– Choice of CDB and non-CDB for new 12.1.0.2 databases
•  ASM Cluster File System integration
– Database and VM Snapshots
•  All the ODA platform advantages available for 12c and 11g databases
– Sizing template enhancements for workload types and flash



Oracle Database Appliance
Provisioning



Two Deployment Options
•  Bare Metal (Factory Image) – Optimized for Database
   – Each node runs: Oracle Database, Grid Infrastructure (Clusterware, ASM, ACFS), Oracle Linux, Appliance Manager
•  Virtualized Platform (Re-image) – Optimized for Database and Applications
   – Each node runs DOM 0 (VM Storage Repository, Guest Domains) and ODA_Base (Oracle Database, Grid Infrastructure – Clusterware, ASM, ACFS – and Appliance Manager)



Operating System
•  Oracle Linux 5.11 with UEK2 kernel
– Installs only the RPMs required for GI and Database
– Creates RAID devices and partitions
•  Creates the appropriate boot, root, swap, and /u01 file systems
•  Ensures all OS best practices are configured correctly
•  Oracle runs OS security scans to ensure there are no vulnerabilities



Provisioning Oracle Database Appliance with WebLogic Server

•  Simple to deploy
   – Automated deployment of WebLogic Server with a wizard similar to the Appliance Manager configurator
   – Just hours to deploy a highly available database infrastructure and middleware platform for applications
•  Simple to maintain
•  Choice of two distributions
   – WebLogic 11g (10.3.6) or WebLogic 12c (12.1.2, 12.1.3)



WebLogic Deployment on Oracle Database Appliance
Applications Solution Platform
•  Deploy VMs to run WebLogic Server and a Software Load Balancer
•  HA Virtual IP provides a single entry point for application clients, with isolation
•  Oracle VM templates

(Diagram: on each node, DOM0 and ODA_Base – Appliance Manager, Grid Infrastructure, Database – host guest domains running WebLogic, App Domains, and a Software Load Balancer behind an HA VIP.)



Provisioning
Summary

•  Bare metal or virtualized deployment options

•  No need to install or configure the OS
•  No knowledge required to install Oracle Clusterware, RAC, and the Database
•  Sizing templates for any workload, incorporating Oracle best practices
•  Completely tested and validated by Oracle



Oracle Database Appliance
Storage and VM Management



Storage Management
•  Discovers disks
•  Creates partitions
•  Sets up multipath
•  Creates ASM disk groups and lays out data for best performance
– DATA for database data
– RECO for archive logs and backup data
   •  External: 80% Data/20% Archive, Internal: 40% Data/60% Archive
– REDO for redo logs
– FLASH for frequently accessed data (database cache, tablespaces, temporary files)
•  Sets up the ACFS file system and directory structure
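The DATA/RECO split depends on where backups go. A sketch of the arithmetic using this slide's percentages (the function name is illustrative, not part of oakcli):

```python
def hdd_split(raw_tb: float, backup_location: str) -> dict:
    """DATA/RECO split of HDD capacity by backup location (slide percentages)."""
    pct = {"external": (0.80, 0.20), "internal": (0.40, 0.60)}[backup_location]
    return {"DATA": raw_tb * pct[0], "RECO": raw_tb * pct[1]}

# 64 TB usable (double-mirrored single shelf), backups going off-appliance
print(hdd_split(64.0, "external"))  # {'DATA': 51.2, 'RECO': 12.8}
```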

Storage Management
Monitoring and Recovery
•  Tracks the layout, configuration, and status of storage
•  Interacts with ASM to perform corrective actions
•  Monitors the disks for hard and soft failures
•  Proactively offlines disks that:
– Have failed
– Are predicted to fail
– Are performing poorly
•  These actions ensure the highest levels of availability and performance at all times



ASM Cluster File System (ACFS) Integration
•  New Oracle 12c and 11g databases are automatically created in ACFS with
the Appliance Manager 12.1.2.X release
•  Benefits of using ACFS
– High-performance and highly available cluster file system
– Leverages underlying Automatic Storage Management (ASM)
– Fully integrated with Grid Infrastructure
– Database aware
– Supports advanced storage features like file system snapshots and replication
– Very high (near-native) disk IO performance for database workloads
– Optimal for both database and non-database workloads
•  Existing databases remain in ASM until migrated to ACFS
New Changes in ODA 12.1
• The default database created on ODA 12.1.2.0.0 will be ACFS based.
• All databases of version 11.2.0.4.x will be ACFS based
– as will 11.2.0.3 on ODA X5-2
• Any database created with a version < 11.2.0.4.x will be ASM based
(except on ODA X5-2).
• Non-CDB databases will be created on an ACFS snapshot file system
• No ACFS snapshot support for CDB databases.
– PDBs will support the snapshot clone feature.
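The ASM-vs-ACFS rules above can be codified in a short sketch. This is illustrative only, an assumption about how the rules combine, not oakcli's actual decision logic:

```python
def db_storage_type(db_version: str, platform: str) -> str:
    """Apply the slide's rules: 11.2.0.4+ databases are ACFS based;
    older versions are ASM based, except on ODA X5-2 (always ACFS)."""
    if platform == "X5-2":
        return "ACFS"
    version = tuple(int(x) for x in db_version.split(".")[:4])
    return "ACFS" if version >= (11, 2, 0, 4) else "ASM"

print(db_storage_type("12.1.0.2", "X4-2"))  # ACFS
print(db_storage_type("11.2.0.3", "X4-2"))  # ASM
print(db_storage_type("11.2.0.3", "X5-2"))  # ACFS
```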

Advantages of ACFS-based databases
• Support for the snapshot database feature

• Enables quick creation of Test/Dev databases.

Storage Architecture
Oracle Database Appliance X5-2
Disks             Disk Group  Volumes       Used For
HDD Outer Rings   +DATA       data          Database data files
HDD Outer Rings   +DATA       repo1..repoN  Shared Repository for VMs, VDisk
HDD Inner Rings   +RECO       reco          Database archive logs, RMAN backups (Fast Recovery Area)
HDD Inner Rings   +RECO       repo1..repoN  Shared Repository for VMs, VDisk
HDD Inner Rings   +RECO       cloudfs       Clustered file system – files that need to be accessed by either server node
SSD               +REDO       redo          Database redo logs
SSD               +FLASH      flash         Frequently accessed data

(Diagram: ACFS file systems – data, reco, redo, flash, repo1..N – sit on ASM disk groups +DATA, +RECO, +REDO, +FLASH, backed by HDDs, cache SSDs, and log SSDs.)



Storage Management
•  Discovers disks
•  Creates partitions
•  Sets up multipath
•  Creates ASM disk groups and lays out data for best performance
– DATA for database data
– RECO for archive logs and backup data
– REDO for redo logs
– FLASH for frequently accessed data (database cache, tablespaces, temporary files)
•  Sets up the ACFS file system and directory structure



Storage Management
Monitoring and Recovery
•  Tracks the layout, configuration, and status of storage
•  Monitors the disks for hard and soft failures
•  Proactively offlines disks that:
– Have failed
– Are predicted to fail
– Are performing poorly
•  Performs corrective actions if possible
•  These actions ensure the highest levels of availability and performance at all times



Disk Validations: OAKCLI Framework

•  New state detail “ValidationFail” introduced for disks to denote a HARD validation failure.

•  Disk resource validation is performed at resource initialization time (deployment or disk add).

•  The OAKCLI interface allows disk validations to be enabled or disabled dynamically at run time.

•  OAKCLI commands are available to display validation failure (HARD/SOFT) information.

•  Supported for all three ODA platforms (V1, X3-2, X4-2)



Why are Disk Validations needed?

•  When ODA was introduced, it supported only one type of disk (vendor/model).

•  As different types of disks are supported on ODA, the storage framework needs to ensure that incompatible
disks are not mixed and matched.

•  Disk validation checks help proactively detect issues and enforce additional quality control on top
of factory quality controls, ensuring ODA systems always have tested and verified supported disks.



Implementation of Disk Validations
•  The ODA Storage Framework manages the disk information used for validations in an XML configuration
file.

•  The file is: /opt/oracle/oak/conf/validation_props.xml

•  The file contains disk model attributes (Vendor, Model, Firmware, etc.) for the various disks that are
supported.

•  The file is updated with every new ODA release.



XML Configuration File
•  Excerpt from the /opt/oracle/oak/conf/validation_props.xml file.
<?xml version="1.0"?>
<OakResourceValidatorConf>
<PDType>
<HITACHI>
<Model type="string" dimension="vector" autocorrect="false" exitonfailure="true" persistent="true">
<Description>List of Model number of vendor disk </Description>
<Value>H106060SDSUN600G</Value>
<Value>HUS1560SCSUN600G</Value>
<Value>H109090SESUN900G</Value>
</Model>
<Size type="integer" dimension="vector" autocorrect="false" exitonfailure="true" persistent="true">
<Description>List of disk size supported by oak </Description>
<Value>600</Value>
<Value>900</Value>
</Size>
<DiskType type="string" dimension="vector" autocorrect="false" exitonfailure="true" persistent="true">
<Description>Type of disk media</Description>
<Value>HDD</Value>
</DiskType>
<KnownFwVersion type="string" dimension="vector" autocorrect="true" exitonfailure="false" persistent="true">
<Description>List of all firmware version known for all supported disk from vendor</Description>
<Value>A2B0</Value>
<Value>A5C6</Value>
<Value>A5C8</Value>
<Value>A6C0</Value>
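The kind of check oakd presumably performs against this file can be sketched in a few lines: compare a disk's reported attributes to the supported values and treat mismatches on exitonfailure="true" properties as hard failures. The XML structure is taken from the excerpt above; everything else (function name, message text) is an assumption for illustration:

```python
import xml.etree.ElementTree as ET

# Trimmed-down config in the shape of the excerpt above (not the full file).
CONF = """<?xml version="1.0"?>
<OakResourceValidatorConf>
  <PDType>
    <HITACHI>
      <Model type="string" exitonfailure="true">
        <Value>H106060SDSUN600G</Value>
        <Value>HUS1560SCSUN600G</Value>
      </Model>
      <Size type="integer" exitonfailure="true">
        <Value>600</Value>
        <Value>900</Value>
      </Size>
    </HITACHI>
  </PDType>
</OakResourceValidatorConf>"""

def validate_disk(conf_xml: str, vendor: str, model: str, size: int) -> list:
    """Return hard-failure messages; an empty list means the disk passes."""
    root = ET.fromstring(conf_xml)
    vendor_node = root.find(f"PDType/{vendor}")
    failures = []
    for prop, value in (("Model", model), ("Size", str(size))):
        node = vendor_node.find(prop)
        allowed = [v.text for v in node.findall("Value")]
        if value not in allowed and node.get("exitonfailure") == "true":
            # A hard failure would set the disk state detail to ValidationFail
            failures.append(f"{prop} [{value}] not supported")
    return failures

print(validate_disk(CONF, "HITACHI", "HUS1560SCSUN600G", 600))  # []
print(validate_disk(CONF, "HITACHI", "ST32000SSSUN2.0T", 600))
```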



Disk Validation Failures
•  The exitonfailure attribute in the /opt/oracle/oak/conf/validation_props.xml file indicates whether a check is a soft failure or a hard
failure.

•  Hard Failures
<Size type="integer" dimension="vector" autocorrect="false"
exitonfailure="true" persistent="true">

If the exitonfailure attribute is set to TRUE and the validation fails, the state of the disk is set to “ValidationFail”. Examples: Model#,
Media Type, Size

•  Soft Failures
<KnownFwVersion type="string" dimension="vector"
autocorrect="true" exitonfailure="false" persistent="true">

If the exitonfailure attribute is set to FALSE, processing continues even if the validation fails.
Example: Disk firmware versions.





Disk Validations: OAKCLI Framework
# oakcli show disk

NAME       PATH       TYPE  STATE   STATE_DETAILS
e0_pd_00   /dev/sda   HDD   ONLINE  Good
e0_pd_01   /dev/sdb   HDD   ONLINE  Good
e0_pd_02   /dev/sdaa  HDD   ONLINE  ValidationFail

-- Show soft validation failures

# oakcli show validation storage failures

VALIDATION FAILURE: Disk e0_pd_20 in slot 20 Firmware Version [9440] not supported by ODA
VALIDATION FAILURE: Disk e0_pd_21 in slot 21 Firmware Version [9440] not supported by ODA
VALIDATION FAILURE: Disk e0_pd_22 in slot 22 Firmware Version [9440] not supported by ODA

-- Show hardware errors (ODA 2.8+)

# oakcli show storage -errors

VALIDATION ERROR: Disk pd_00 in slot 0 Model [ST32000SSSUN2.0T] not supported by ODA

-- Dynamically disable / enable Disk Validation

# oakcli disable validation storage
# oakcli show validation storage
Disabled
# oakcli enable validation storage
# oakcli show validation storage
Enabled



Diagnostics
If a disk has a state of ‘ValidationFail’, the ‘oakcli show validation storage failures’ command can be run to
determine possible reasons for the failure.

Validation failures are also logged in the OAKD log file with more specific details about the disk model, the property that
failed, the property value, etc.

If the disk state detail is set to ValidationFail due to a wrong model string, size, or media type, it will be corrected once
the right disk is inserted into the slot.

Disk validation can be disabled using the OAKCLI interface. If that does not work, the
/opt/oracle/oak/conf/validation_props.xml file can be manually edited to set the ‘exitonfailure’ property to ‘false’.

NOTE:
Manually editing the validation_props.xml file on production systems is strongly discouraged.



Storage optimizations for ODA
ASM enhancements

•  Fixed Partnership
– The ASM partnership algorithm is optimized for ODA to provide
   •  Faster rebalance

•  ASM content type

– Each disk contains both DATA and RECO partitions
– Creates an asymmetrical partnership scheme for DATA and RECO
– Multiple disk failures do not make both DATA and RECO unavailable



Fixed Partnering for DATA & RECO

(Diagram: disk slots 0–23 in Storage Shelf #0 and Storage Shelf #1, illustrating fixed partner groups across the two shelves.)



New Storage Checks
•  In ODA 12.1.2, you can see the ‘wear-leveling’ information for SSDs using the ‘oakcli stordiag’ command.

# oakcli stordiag e0_pd_23

Node Name : rwsoda309c1n1


Test : Diagnostic Test Description

1 : OAK Check
NAME PATH TYPE STATE STATE_DETAILS
e0_pd_23 /dev/sdav SSD ONLINE Good

2 : ASM Check
ASM Disk Status : group_number state mode_s mount_s header_s
/dev/mapper/SSD_E0_S23_805864928p1 : 3 NORMAL ONLINE CACHED MEMBER

3 : Smartctl Health Check


SMART Health Status: OK

4 : Check SSD Media used endurance indicator

SSD Media used endurance indicator: 0%
...
...

•  The higher the percentage, the more worn the SSD is.



New Storage Checks
•  There were cases where the ‘asmappl.config’ and ‘multipath.conf’ files were not in sync across both nodes,
which led to problems, especially during disk replacements. The ‘oakcli stordiag’ command now checks the consistency
of these two files across both nodes.

# oakcli stordiag e0_pd_01

Node Name : rwsoda309c1n1


Test : Diagnostic Test Description

1 : OAK Check
NAME PATH TYPE STATE STATE_DETAILS
e0_pd_01 /dev/sdb HDD ONLINE Good
...
...
...

8 : asmappl.config and multipath.conf consistency check


/opt/oracle/extapi/asmappl.config file is in sync between nodes
/etc/multipath.conf file is in sync between nodes



New Storage Checks
•  There have been a few cases of performance issues caused by the ODA disk write cache being ‘enabled’. The
ODA disk write cache is supposed to be ‘disabled’ from the factory.
•  However, in spite of all the checks performed at the factory, some ODA units may ship with the disk write cache enabled.
•  To proactively check for this issue, the following checks are now in place:

- Check and disable the ODA disk write cache during ISO imaging.
- Check and disable the ODA disk write cache during OAK upgrade to 2.10.
- The OAKCLI interface provides a mechanism to dynamically enable/disable the disk write cache:

# oakcli diskwritecache status

# oakcli diskwritecache disable

# oakcli diskwritecache enable

•  Disabling the ODA disk write cache requires a complete shutdown of the ODA stack and a reboot of both nodes.
•  Feature available with ODA software 2.10+ on ODA X3-2 onwards.



Determining Usable Storage
•  Database Backup Location
– External: 86% Data/14% Archive
– Internal: 43% Data/57% Archive
•  Disk Group Redundancy
– HIGH – 3 copies
– NORMAL – 2 copies
•  If you want to change these settings, you must redeploy

ODA 12.1.2.0.0 Deployment
•  Non-CDB Directory Structure

ASM Disk Group             ACFS File System
+DATA (datafiles)          /u02/app/oracle/oradata/datastore/.ACFS/snaps/<db unique name>
+RECO (backup/archivelog)  /u01/app/oracle/fast_recovery_area/<db unique name>
+REDO (online redo logs)   /u01/app/oracle/oradata/datastore/<db unique name>

•  For each non-CDB database, 3 separate mount points are used to store database-related files, e.g.:
– /u02/app/oracle/oradata/datastore
– /u01/app/oracle/oradata/datastore
– /u01/app/oracle/fast_recovery_area/datastore
– The data files are stored under /u02/app/oracle/oradata/datastore/.ACFS/snaps/<db unique name>


[oracle@odademo0 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot 30G 11G 18G 38% /
/dev/mapper/VolGroupSys-LogVolU01 97G 21G 72G 23% /u01
/dev/mapper/VolGroupSys-LogVolOpt 59G 8.5G 47G 16% /opt
/dev/sda1 99M 26M 68M 28% /boot
tmpfs 127G 1.2G 125G 1% /dev/shm
/dev/asm/flashdata-279 558G 1.3G 557G 1% /u02/app/oracle/oradata/flashdata
/dev/asm/acfsvol-64 50G 178M 50G 1% /cloudfs
/dev/asm/datafsvol-64 5.0G 87M 5.0G 2% /odadatafs
/dev/asm/datastore-415 62G 6.3G 56G 11% /u01/app/oracle/oradata/datastore
/dev/asm/datastore-64 1.2T 3.8G 1.2T 1% /u01/app/oracle/fast_recovery_area/datastore
/dev/asm/datastore-51 7.3T 1.9G 7.3T 1% /u02/app/oracle/oradata/datastore



ODA 12.1.2.0.0 Deployment
•  Non-CDB Directory Structure

# tree -L 6 /u02/app/oracle/oradata/datastore/.ACFS/snaps/ramdb
/u02/app/oracle/oradata/datastore/.ACFS/snaps/ramdb
|-- RAMDB
|   `-- datafile
|       |-- o1_mf_sysaux_9zzwlf7b_.dbf
|       |-- o1_mf_system_9zzwl64x_.dbf
|       |-- o1_mf_temp_b007of96_.tmp
|       |-- o1_mf_undotbs1_9zzwlnbw_.dbf
|       |-- o1_mf_undotbs2_9zzwlqgd_.dbf
|       `-- o1_mf_users_9zzwlwos_.dbf
`-- ramdb
    |-- orapwramdb
    |-- pfileramdb.ora
    `-- spfileramdb.ora

# tree -L 6 /u01/app/oracle/oradata/datastore/ramdb
/u01/app/oracle/oradata/datastore/ramdb
`-- RAMDB
    |-- controlfile
    |   `-- o1_mf_9zzz98h6_.ctl
    `-- onlinelog
        |-- o1_mf_1_b0093ycw_.log
        |-- o1_mf_2_b0093ym9_.log
        |-- o1_mf_4_b0093ysv_.log
        |-- o1_mf_5_b0094122_.log
        `-- o1_mf_7_b00943f0_.log

# tree -L 5 /u01/app/oracle/fast_recovery_area/datastore/ramdb
/u01/app/oracle/fast_recovery_area/datastore/ramdb
`-- RAMDB
    `-- archivelog
        |-- 2014_08_28
        |   |-- o1_mf_1_12_b0089oky_.arc
        |   |-- o1_mf_1_13_b008b4kx_.arc
        |   |-- o1_mf_1_14_b008yvw9_.arc
        |   `-- o1_mf_1_15_b008z4vl_.arc
        `-- 2014_08_29

12.1.2.3 Monitoring
[root@odademo0 ~]# oakcli show fs
Type Total Space Free Space Total DG Space Free DG Space Diskgroup Mount Point
ext3 29757M 8027M - - /
ext3 98M 63M - - /boot
ext3 59515M 38304M - - /opt
ext3 99193M 56943M - - /u01
acfs 7642112M 7637797M 52428736M 29482576M DATA /u02/app/oracle/oradata/datastore
acfs 571392M 568249M 1526208M 382932M FLASH /u02/app/oracle/oradata/flashdata
acfs 51200M 51022M 8618304M 4688436M RECO /cloudfs
acfs 5120M 5033M 8618304M 4688436M RECO /odadatafs
acfs 1253376M 1247093M 8618304M 4688436M RECO /u01/app/oracle/fast_recovery_area/datastore
acfs 63488M 43701M 763120M 480316M REDO /u01/app/oracle/oradata/datastore
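The DG columns in this output are raw ASM disk-group figures, while Total/Free Space describe the ACFS file system itself. As a sketch, the free-space percentage of an ACFS row can be computed from the values above (integer shell arithmetic assumed):

```shell
# Sketch: percent free for one 'oakcli show fs' row (DATA datastore).
# Values are copied from the sample output above.
total_mb=7642112
free_mb=7637797
pct_free=$(( free_mb * 100 / total_mb ))
echo "${pct_free}% free"
```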


12.1.2.4 Monitoring
[root@odademo0 ~]# oakcli show dbstorage

All the DBs with DB TYPE as non-CDB share the same volumes

DB_NAMES                    DB_TYPE  Filesystem                                    Size   Used    Available  AutoExtend Size  DiskGroup
--------------------------  -------  --------------------------------------------  -----  ------  ---------  ---------------  ---------
mydb1, mydb2, mydb3, mydb4  non-CDB  /u01/app/oracle/oradata/datastore             62G    19.32G  42.68G     6G               REDO
                                     /u02/app/oracle/oradata/datastore             7463G  4.21G   7458.79G   746G             DATA
                                     /u02/app/oracle/oradata/flashdata             558G   3.07G   554.93G    55G              FLASH
                                     /u01/app/oracle/fast_recovery_area/datastore  1224G  6.13G   1217.87G   122G             RECO

Inside Notes on ACFS
NON-CDB
•  One ACFS file system is created per disk group (DATA, RECO and REDO); all non-CDBs
share these file systems
•  Half of the available storage in each disk group is allocated to that file system

DATA DG -- /u02/app/oracle/oradata/datastore
RECO DG -- /u01/app/oracle/fast_recovery_area/datastore
REDO DG -- /u01/app/oracle/oradata/datastore

•  The directory structure follows the OMF format
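A minimal sketch of the 50% allocation rule, using the DATA disk-group total from the earlier `oakcli show fs` output (the exact rounding ODA applies is an assumption):

```shell
# Sketch: initial ACFS size = half the disk group (integer MB arithmetic assumed).
dg_total_mb=52428736                 # DATA disk-group total from 'oakcli show fs'
acfs_size_mb=$(( dg_total_mb / 2 ))
echo "${acfs_size_mb} MB"
```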



Inside Notes on ACFS
CDB databases
•  A set of ACFS file systems is created for each CDB and its PDBs: one on DATA, one on
RECO and one on REDO
•  Sizing of the file systems depends on the sizing template you choose

•  DATA DG is /u02/app/oracle/oradata/dat<db_name>
RECO DG is /u01/app/oracle/fast_recovery_area/rco<db_name>
REDO DG is /u01/app/oracle/oradata/rdo<db_name>
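The per-CDB naming convention can be sketched as below; "mycdb" is a hypothetical database name, and the redo location under /u01 is an assumption made to match the non-CDB redo layout shown earlier.

```shell
# Sketch: derive the per-CDB ACFS mount points from a database name.
db_name=mycdb                        # hypothetical name
data_fs="/u02/app/oracle/oradata/dat${db_name}"
reco_fs="/u01/app/oracle/fast_recovery_area/rco${db_name}"
redo_fs="/u01/app/oracle/oradata/rdo${db_name}"   # /u01 assumed, matching the non-CDB redo location
printf '%s\n' "$data_fs" "$reco_fs" "$redo_fs"
```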



Database Snapshots
Rapid and Efficient Database Copies

•  Rapid and efficient provisioning of database environments for development and
testing of applications
•  Complete OAKCLI integration
– Very fast way to create database copies
•  Snapshot databases only use space for the data that changes
– Only metadata is created initially
– Blocks are written when data is changed
•  Create a large number of snapshots for a given database
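As an illustration, snapshot databases are created through the OAKCLI `create snapshotdb` verb; the database names here are hypothetical, and the command is only echoed so the sketch stays self-contained.

```shell
# Sketch only: on an ODA node this would create a snapshot database "snapdb"
# from the ACFS-based source database "ramdb" (both names hypothetical).
src_db=ramdb
snap_db=snapdb
cmd="oakcli create snapshotdb ${snap_db} -from ${src_db}"
echo "$cmd"
```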



Database Snapshots
Pre-Requisites

•  Source database requirements
– 11.2.0.4 or greater
– Located on ACFS
– ARCHIVELOG mode enabled
– Must be a primary database and open
– All data files are online and no data file is missing
– If data files are encrypted, the wallet password is required
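A hedged sketch of how these requirements might be checked from SQL*Plus; the block only assembles the queries (against the standard V$DATABASE and V$DATAFILE views) so it runs anywhere, and on a real node you would feed them to sqlplus.

```shell
# Sketch: queries that verify the prerequisites above (run via sqlplus on a real node).
check_sql=$(cat <<'SQL'
SELECT log_mode, database_role, open_mode FROM v$database;
SELECT COUNT(*) FROM v$datafile WHERE status NOT IN ('ONLINE','SYSTEM');
SQL
)
printf '%s\n' "$check_sql"
```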



Production Database Snapshots
Environment

[Diagram: production databases on an Exadata server, an ODA server, or a 3rd-party
server are copied to a Master DB on a Test & Dev ODA; snapshots of the Master DB
back the Test Databases]

•  Copy database from production
•  Secure with data masking and redaction
•  Snapshots use less than 5% of the original database



Management
Summary
•  No storage management required with zero-admin storage
•  Highly optimized design provides native disk performance to the database for bare
metal and virtualized platform deployments
•  ACFS integration provides additional functionality without impacting
performance
•  Integrated OAKCLI commands to manage the virtualized platform
– Oracle VM expertise is not required
•  VLAN support provides workload security isolation across common
networks
•  Rapid and efficient database and VM snapshots for test/dev environments
Oracle Database Appliance
Monitoring and Diagnostics



Hardware Health Check
•  Continuously monitors the health of the hardware components in the server nodes
•  Provides an OAKCLI command line interface to display the status
•  Supports both bare metal and virtualized platforms

Server: health details, power state, ambient temperature, locator indicator,
power consumption, open problems count/detail
Processor: health details
Memory: health details, correctable ECC errors
Cooling Units: health details, %, RPM
Network: health details, LinkStatus, Die temp.
Power Supply: health details, InputPower, OutputPower, InletTemp, ExhaustTemp
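The per-component status maps onto OAKCLI `show` verbs; a sketch listing them, echoed rather than executed so the block is self-contained (verb names as documented for oakcli):

```shell
# Sketch: OAKCLI verbs for the hardware components listed above.
components="server processor memory cooling network power"
for c in $components; do
    echo "oakcli show $c"
done
```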



Easier enterprise-grade management

•  Enterprise Manager Plug-in
– Manage ODA along with other data center assets using Enterprise Manager
– Monitor and manage multiple database appliances
– Aggregated analytics across multiple database appliances
•  Wizard simplifies Enterprise Manager deployments
– Easy to implement Enterprise Manager on a highly available appliance



Diagnostics
•  ASR (Auto Service Request)
– Monitors and creates an automatic service request with Oracle Support on any
hardware component failure
•  System check for all hardware and software components
– oakcli orachk
– Validates hardware and software components and quickly identifies any anomaly or
violation of best practice compliance
•  Diagnostics collection
– oakcli diagcollect
– Gathers all relevant logs from hardware and software components and produces a
single bundle for support to triage the issue
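As an illustration, both diagnostics entry points are run directly from OAKCLI; the commands are echoed here so the sketch stays self-contained (no flags beyond the bare verbs are assumed).

```shell
# Sketch: the two diagnostics entry points named above, echoed rather than executed.
diag_cmds="oakcli orachk
oakcli diagcollect"
printf '%s\n' "$diag_cmds"
```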



Oracle Database Appliance
Complete. Simple. Reliable. Affordable.

•  Simple to deploy, manage and maintain
•  Best-in-class availability
•  Best-in-class performance
•  Built-in scalability
•  Capacity-on-demand licensing
•  Solution-in-a-Box



