
Exadata X10M Platform Overview TOI

Note: This information is accurate as of March 2023. It applies to Exadata solutions only. For
future non-Exadata solutions, please refer to documentation for that solution.
Product updates MAY make this information inaccurate or out of date.
Please always refer to the latest product documentation and product notes.

Confidential – © 2023 Oracle Confidential - Restricted


E5-2L Naming

E5-2L is the internal name for the platform and is not recommended for widespread use with customers.

The recommended customer-facing names are:

• Exadata X10M DATABASE SERVER
• Exadata X10M STORAGE SERVER

Even though the system is AMD, we are keeping the X10M naming structure for the Exadata solution as a whole.

E5-2L will NOT ship as a standalone server. It will only ship to on-prem customers as part of a solution (Exadata/ODA etc.).

E5-2L Server
What has changed compared to X9 used in Exadata X9-2 (everything, so it’s 4 slides)

Chassis.
Single 2U chassis with multiple different DBP and PCIe slot configurations.
Chassis is longer and sits further forward in the rack. Front ears are offset compared to X9.

[Figure: comparison of E5-2L and X9-2L chassis ear/rack position]

E5-2L Server
What has changed compared to X9 used in Exadata X9-2 (continued)

No standalone system configuration. Only available as part of a solution.


Move from Intel to AMD for both Database and Storage cell.
No 1U configuration. All E5-2L servers are 2U.
Three different configurations of E5-2L 2U.
1. 4 DBP with 9 PCIe slots in the rear (Database and Extreme Flash)
2. 4 DBP with 4 full height slots in the rear (cloud only)
3. 12 DBP with 9 PCIe slots in the rear (HDD Storage cell)

These are defined at factory build time and cannot be converted in the field.
New fans and different thermals: 400W TDP CPUs, 60mm fans.
New ROT card and dedicated ROT slot next to PSU.
No Persistent memory at all.

X10M Database On-prem
X10M Extreme Flash On-prem system

X10M Storage Cell (not Extreme Flash)

X10M ExaCS

E5-2L Server

What has changed compared to X9 continued

New A271 power supply. This power supply is 1400W and HIGH LINE ONLY.

** IT DOES NOT WORK ON 110V **. May affect lab/bench setups.

The power supply has a fold-up handle and a pink 1400W sticker on the fan cover.
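The high-line requirement is easy to sanity-check with rough arithmetic. The efficiency figure and circuit limit below are illustrative assumptions for a sketch, not A271 specifications:

    # Back-of-envelope input-current estimate for a 1400 W PSU.
    PSU_OUTPUT_W = 1400
    ASSUMED_EFFICIENCY = 0.92   # illustrative assumption, not an A271 spec

    for volts in (110, 208, 240):
        amps = PSU_OUTPUT_W / ASSUMED_EFFICIENCY / volts
        print(f"{volts} V input: ~{amps:.1f} A")

    # ~13.8 A at 110 V is above the ~12 A continuous limit of a typical
    # 15 A low-line branch circuit; high-line inputs (200-240 V) draw
    # roughly half that.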

There is NO support for Windows/Solaris. Customers can't "bring their own OS".

E5-2L Server

What has changed compared to X9 continued

New CPU attachment mechanism. Completely different to all Intel platforms, but similar to E4-2C.
Good to be familiar with this prior to any CPU replacement, as it can be a bit fiddly. 12 in-lb torque
driver required. See later in the slide deck.
New CPU socket compared to E4-2C, called SP5.
DDR5 memory. 12 DIMMs per CPU, 12 channels per CPU. No 2DPC.
New fault classes for DIMMs (DDR5 DIMMs contain PMICs and run off 12V)
PCIe Gen 5 capable PCIe slots (currently only CX7 is Gen 5)
New HBA (Silverthorne). JBOD only, no RAID HBA. No battery/supercap.
Support for Full height PCIe cards for Cloud (SmartNIC etc) coming later
No PCH (comes from move to AMD). PCH bootloader functions are offloaded to CPU0. This creates a
new set of fault classes (bootloader) that are new compared to Intel.

E5-2L Server

What has changed compared to X9 continued

Flyover cable. PCIe slots 6 and 7 can be configured in two different ways. This is used with the Full Height CEM riser. All On-prem configurations use the flyover cable connection to slot 7.

If a CEM riser is installed, the flyover cable can be moved to the Slot 6 port, which enables x16 lanes to the top slot in the CEM riser (a sysfs check is sketched below).
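One way to confirm which flyover configuration a running Linux system ended up with is to read the negotiated link width per physical slot from sysfs. This is a generic sketch, not an Exadata tool; the ".0" function suffix is an assumption, and the reported width is whatever the installed card negotiated:

    # Map physical PCIe slot numbers to PCI addresses via
    # /sys/bus/pci/slots, then report each slot's negotiated width.
    from pathlib import Path

    for slot in sorted(Path("/sys/bus/pci/slots").iterdir()):
        addr_file = slot / "address"
        if not addr_file.exists():
            continue
        addr = addr_file.read_text().strip()          # e.g. "0000:41:00"
        dev = Path(f"/sys/bus/pci/devices/{addr}.0")  # assume function 0
        if not dev.exists():
            print(f"slot {slot.name}: empty")
            continue
        width = (dev / "current_link_width").read_text().strip()
        print(f"slot {slot.name}: {addr}.0 at x{width}")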

M.2 devices are NVMe ONLY. No RAID in BIOS. They must be hot-removed in software before being physically pulled (see the sketch below).
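A minimal sketch of the generic Linux PCIe hot-remove sequence, assuming the M.2 device's PCI address is known. The address below is a placeholder, and the real Exadata service procedure should be followed rather than this generic example:

    # Detach an NVMe device from the kernel before physically pulling it.
    # Requires root. The BDF below is a placeholder, not the real
    # E5-2L M.2 address.
    from pathlib import Path

    M2_BDF = "0000:01:00.0"  # placeholder address

    remove = Path(f"/sys/bus/pci/devices/{M2_BDF}/remove")
    remove.write_text("1")   # kernel detaches the device from the PCI tree
    print(f"{M2_BDF} detached; safe to pull once removal completes")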

E5-2L Server

What has changed compared to X9 continued

TWO MB FRUs. The CEM riser system uses a different MB FRU, as there are changes to the plastics for the CEM riser. The CEM riser system does not support M.2. The MB ASM number is updated in SFT to reflect which MB FRU is required.

The CEM riser has x16 lanes to the top PCIe slot but only a x1 lane to the bottom slot (intended for SmartNIC).

No NVMe drive support on 12DBP. SAS only. No hardware RAID.

E5-2L Server

What hasn't changed

Not much, really. The change from Intel to AMD makes a large difference pretty much everywhere.

ILOM largely unchanged from a user perspective. ILOM SOC *has* changed to AST2600 from Pilot 4.
Diag Shell and UEFIdiag still awesome.
Rack Rail Kit is the same
It’s still Oracle Silver
Still uses i210 NIC for NET0 host connectivity
Does support single CPU.
Err, that’s it.

E5-2L 4DBP 9 Slot configuration

System supports 4 x NVMe SFF drives in the front. These drives are directly connected to CPU0. They
are PCIe Gen 4 x2 ONLY.
System has 9 PCIe slots and 1 ROT card slot. Flyover cable connected to slot 7 connector. This makes
Slot 6 x8 and Slot 7 x8.

E5-2L 4DBP CEM Riser Full height configuration (ExaCS only)

System supports 4 x NVMe SFF drives in the front. These drives are directly connected to CPU0. They
are PCIe Gen 4 x2 ONLY.
System has two full height CEM risers. The CEM riser is used with a different rear chassis frame, allowing use of Full Height PCIe cards (Ortano SmartNIC).
The CEM riser configuration HAS A DIFFERENT MB FRU, as there are plastics on the MB tray that are not present on the standard MB.
CEM riser also has power supply fingers that provide additional 12V power to the riser PCIe cards.
No M.2 support on CEM Riser config

E5-2L 12DBP 9 Slot configuration

System supports 12 x HDD. Ships with 22TB HDDs.

HBA is new. It's a Silverthorne HBA with NO RAID and no battery backup. Straight JBOD (think Erie, but 16 port).
System has 9 PCIe slots and 1 ROT card slot. Flyover cable links lanes to Slot 7, which means that Slot
6 is x8 electrical despite being x16 mechanical.

E5-2L Block Diagrams

CPU installation and Removal

• For those who have used E4-2C, the CPU installation and removal is similar. The CPU comes on a carrier (orange plastic)
that slides into a rail frame. The frame is then closed down onto the socket and screwed down. Then the heatsink is
installed.
• Different torque settings compared to Intel platforms: 12 in-lb (metric conversion sketched below).
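For technicians working with metric torque drivers, a quick conversion sketch using the standard in-lb to N·m factor:

    # Convert the 12 in-lb CPU/heatsink torque spec to newton-metres.
    INLB_TO_NM = 0.1129848   # standard conversion factor

    torque_inlb = 12
    print(f"{torque_inlb} in-lb = {torque_inlb * INLB_TO_NM:.2f} N.m")  # ~1.36 N.m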

E5-2L DIMM order

Each CPU has 12 DIMM slots, and 12 memory channels. No 2DPC.

DIMM silkscreen has some gaps (D1-D4). Check the service diagram.
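On a running Linux system, one way to cross-check DIMM population against the service diagram is to parse dmidecode output. A rough sketch, assuming dmidecode is installed and run as root; locator naming is firmware dependent, so always match against the service diagram:

    # List populated DIMM slots by parsing `dmidecode -t memory`.
    import subprocess

    out = subprocess.run(["dmidecode", "-t", "memory"],
                         capture_output=True, text=True, check=True).stdout

    size = None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Size:"):
            size = line.split(":", 1)[1].strip()
        elif line.startswith("Locator:") and size:
            # "Size:" precedes "Locator:" within each Memory Device block
            if size != "No Module Installed":
                print(f"{line.split(':', 1)[1].strip()}: {size}")
            size = None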

Memory population and the change to DDR5

DDR5 has PMICs onboard the DIMM. This introduces a whole class of DIMM faults that were not seen on previous generations of products.

Aura 10 AIC
Aura 10 is very similar to Aura 9. Bifurcation is required and happens automatically on E5-2L if the Exadata profile bit is set.
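A generic way to eyeball whether a multi-controller NVMe AIC has bifurcated is to list the NVMe controllers with their PCI addresses and negotiated widths: a bifurcated x16 card typically appears as several controllers, each running at x4. A sketch assuming Linux sysfs, not an Aura-specific tool:

    # List NVMe controllers with PCI address and negotiated link width.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        addr_file = ctrl / "address"
        if not addr_file.exists():
            continue
        addr = addr_file.read_text().strip()   # PCI BDF, e.g. 0000:41:00.0
        width_file = Path(f"/sys/bus/pci/devices/{addr}/current_link_width")
        width = width_file.read_text().strip() if width_file.exists() else "?"
        print(f"{ctrl.name}: {addr} at x{width}")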

Other Product Notes and Items

New class of bootloader faults: ABL faults. Some faults show up only on the serial console if they occur early in the bootloader.

Questions

